Audio Of The Week: Living With Robots

This Sam Harris conversation with Kate Darling of the MIT Media Lab is fascinating. I listened a few days ago and I am still thinking about it.

#robots and drones

Comments (Archived):

  1. PhilipSugar

    It is interesting. There are also wearable robots that augment humans rather than replace them: http://www.wearablerobotics… and http://www.springactive.com (full disclosure: I am a founder of and an investor in both). There are many ethical questions, as she points out. When we go to West Point and Aberdeen Proving Ground, in addition to physicians, physical therapists, and physiologists, there is also a Colonel in charge of ethics. What are the ethics of these things? If you knew you could run faster, would you willingly amputate a good leg? Should you be allowed to? Tough question. When working in warehouses like FedEx or Amazon, what are the ramifications? These are questions we discuss, which is why we started the conference.

    1. LE

      That looks great. Have there been, or will there be, any attempts by the association to get funding for companies in this sector that don’t have any or need more investment? Or is there just the usual list of investors who get this and put money into it?

      1. PhilipSugar

        We have invited investors to the conference, but we really don’t get too many. I think investors have tired of going to conferences; for instance, Brad Feld told me he really just doesn’t go to them anymore.

  2. Tom Labus

    Virginia passed the first robot legislation this week, for straight-to-your-house delivery: http://www.recode.net/2017/

  3. Vendita Auto

    Victimized, more callous, uncanny valley? The next generation will not be running on the same software as Kate Darling. Being at the MIT Media Lab and being interviewed by Sam Harris is not a warranty of intellectual relevance; I find this on the same level as I would a spiritualist.

    1. fredwilson

      not a warranty but an indication for sure

      1. Twain Twain

        Horizon Ventures invested $7.5 million in Soul Machines, which makes BabyX. When you see it, it’ll be clear how a cute baby face can get humans to share EVERYTHING with the bots. It’s designed to tap into our natural instincts to help, protect and educate babies.* http://www.nzherald.co.nz/b

        1. Vendita Auto

          Reminded of a recent tweet: a sufficiently advanced AI would choose to fail the Turing Test.

          1. Twain Twain

            The Turing test is a parlor game. Turing’s model for AI is itself too narrow a definition of intelligence. There is some indication that he wanted to factor emotions into the logic, but he was also concerned with being uncovered as a homosexual, which in those days was illegal.

            From the Stanford site (https://plato.stanford.edu/…): “His last two years were particularly full of Shavian drama and Wildean irony. In one letter (to his friend Norman Routledge; the letter is now in the Turing Archive at King’s College, Cambridge) he wrote:

            Turing believes machines think
            Turing lies with men
            Therefore machines do not think”

            It speaks to the question of what belief, thinking, truth and lies are, and who’s defining these things.

          2. Vendita Auto

            The “reminded of a recent tweet” comment was meant to be simply illustrative, not definitive; I rather like the ability to say so much in 11 words. “Who’s defining these things?” An AI playing the parlor game.

      2. Twain Twain

        The differences between AI’s decision-making, language, biases and understanding and humans’ are key.

        Google Research, by the way, are stuck, which is why their team has been showing up at the SF AI Meetup to ask for help from the community. I gave them a little steer: better sentiment classifiers might help them overcome the syntactic and semantic problems in their xNNs and embedding networks.

        Of course, I already knew they’d run into these problems, because they’re building on top of 2000+ years of Mathematical Logic, from the Greeks through Descartes to today’s somewhat autistic, narrowly focused maths PhDs in AI. They don’t even know their Plato like I do.

        And Silicon Valley’s AI folks and the wider developer community have no consciousness that the systems they’re creating and the personal data they’re collecting are reinforcing the very biases that prevent them from properly modeling human decision-making and language.

        As I shared in the comments of another blog, in front of 1,500 people at a major data science conference last July I had to correct a Professor of AI when he said the AI brain “simulates human intelligence and emulates human evolution.” I had to remind him of Science 101: AI has been built from mathematical functions of the neocortex first, whereas human intelligence has evolved differently.

        The only way to break free from the “chains of logic” of 2000+ years, and its contingent biased data sets (which are not only unrepresentative of humans but serious bottlenecks for training the AI to understand us), would be a mathematician who was also a multi-disciplinarian, multi-lingual, and had lived and worked in diverse industries. That mathematician would also need to be a product inventor with the drive to challenge 2000+ years of maths and data. Maybe because their parent died following a coma, and this was a wake-up call to the fact that Scientific Rationalism has reduced all of us to binaries: 0 (dead) and 1 (alive), vote up / vote down, swipe left / swipe right, when consciousness and what it means to be, say, think and do as humans is a lot more meaningful than that. And if 2000+ years of maths and tools are amiss, they need to be changed and made better for humankind.

      3. Twain Twain

        Kate is right: standards get set really early on. Engineers aren’t trained to think about AI and robots with a wider socio-cultural, moral-ethical and economic mindset, even at the design stage. Then their systems ship and add to the deluge of data biases.

      4. Twain Twain

        Google et al. set up the Partnership on AI to discuss the issues of AI’s impact. They’re unaware that the way they all collect and process data creates the very undemocratic biases the industry is supposed to stand against. Also, it’s assumed Google has all the data and all the algos and it’s “game over” for startups and innovation. This is not so, as Google Research’s presentation showed: they have the wrong types of data and the wrong types of algos. I had to be in SF and in situ to see those slides.

      5. Vendita Auto

        True, “but an indication for sure”; therein lies the rub.

  4. LE

    One of the points raised was how people tend to think of some robots as human, and what that means. The underpinnings of this are very basic to human and animal nature. From Cialdini’s book comes the concept of “red feathers” (I’m not sure that’s what it’s called exactly, but I remember the point and it is very relevant). From someone I just found talking about it:

    “In a tree near his tool shed, a family of robins had nested. We slowly and quietly worked our way to just beneath the tree, and my grandfather told me to carefully raise the feather end of the stick up to the nest. Nearby, a male red-breasted robin stood guard. When he saw the red feathers, he immediately attacked them, chirping wildly and flapping his wings in distress. I was dumbfounded. Since then, I’ve seen experiments demonstrating that a male robin will attack a simple bunch of red feathers but ignore a detailed replica of an actual male robin that does not have red feathers… This is an example of what scientists call ‘fixed-action patterns’ in animals. A fixed-action pattern is a precise and predictable sequence of behavior. It’s instinctive; there’s no thought involved, just automatic response. The sequence is set in motion by a very specific ‘trigger.’ For the robin, the red feathers are the trigger, and they set off a sequence of attack.”

    By the way, if you have a cat you’ve probably seen similar behavior with a fake mouse, then with just a tail, and even with just a feather, no bird at all: just flop the feather around. But wave another object that is not a feather and you don’t get the same reaction. I have our cat trained to do all sorts of things by the sound of the treat bag (so you can modify and add to these built-ins).

    The point is that in someone’s brain certain things will trigger certain responses. There is no way around it, even if they know they are being manipulated.

    I remember very clearly the first time I had a GPS system with voice navigation. I remember thinking that the woman would be annoyed if I diverted from the path being instructed, because she kept repeating herself. Even though I knew it was a recorded voice, I still had that reaction to it, and it took some time to get over that thought process, even when I knew it was fake to begin with. Because I am trained to feel that way by interactions with humans, or my parents, whatever.

    Another loose example of this is how you can get scared in a movie (maybe not the same thing, but similar human-nature-wise) even when you know the movie is fake. Even when you know the main protagonist isn’t going to be killed (or crippled) in the first 5 minutes of the film, your emotions are telling you something different.

    So people’s reactions to robots are actually close to 100% predictable. You can use this concept quite effectively (as I often do) in other ways, to manipulate both people and animals if you need to. It all comes down to basic human nature: science mixed with a bit of art.

  5. jason wright

    Try to imagine that your television is, and always has been, a sentient robot.

  6. William Mougayar

    Something to think about for sure. It had to be a Canadian!

  7. creative group

    CONTRIBUTORS: OFF TOPIC ALERT: Question… Do you use a password app to store your passwords? Does anyone use LastPass? Is it prudent to trust one company with all that sensitive information?

    1. LE

      Well, let me say that revealing how one does security is not a secure way to do things; the less info, the better. So I won’t tell you exactly what I do for this. What I can say is that I would never use a product like LastPass, even though I know there are people who use it and love it, and some of those people are in the security business themselves. But as you know, every time someone tells you they’ve got it all figured out and that a bad event will never happen, it often does. Just the way it is. Look at the Cloudflare error the other day (which I think was overblown, as I said, but my point is that it was caused by human error from otherwise extremely capable people).

      That said, you are not me, and it’s very possible that using something like LastPass makes more sense for you than some other approach you are not able to manage. So it’s a matter of managing risk for each individual person. Personally, I would put the convenience part at the bottom of the pile.

      So to answer your question directly: no, I would not trust them. They can tell you all the things they do, and maybe they do those things and maybe there is no risk, but you will never be able to verify it.

      1. creative group

        LE: “That said you are not me and it’s very possible that using something like lastpass for you might make more sense than some other way that you are not able to do.”

        We are second adopters of most technologies; we allow the bugs to be ironed out. We use more than 16 characters for important passwords, and we don’t duplicate them or use dictionary words. So we wouldn’t use LastPass or any site to hold the sensitive passwords that control financial info. That would be foolish and ridiculous. But it is always a great idea to read opinions from people in the know. Thanks.
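As an aside, the practice described in this thread (16+ character passwords that avoid dictionary words) can be sketched with Python's standard `secrets` module. This is a minimal illustration, not anything the commenters recommend; the character set, default length, and function name are arbitrary choices for the example:

```python
import secrets
import string

def generate_password(length=20):
    """Generate a random password from letters, digits, and punctuation.

    The secrets module (rather than random) provides cryptographically
    strong randomness, which is what credentials require.
    """
    if length < 16:
        raise ValueError("use at least 16 characters for important passwords")
    alphabet = string.ascii_letters + string.digits + string.punctuation
    return "".join(secrets.choice(alphabet) for _ in range(length))

pw = generate_password()  # a fresh 20-character password each call
```

Each password is independent and drawn from a 94-character alphabet, so a 20-character value has far more entropy than any dictionary-derived phrase of similar length.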

  8. cavepainting

    Let us be clear, though, that general AI is a very, very long way from simulating a human brain, let alone the totality of the human mind and its emotions. More importantly, human consciousness (how we see, perceive and feel reality) is something we know very little about, let alone know how to embed in a machine. Sam Harris is a materialist who believes that an objective world exists independent of any observer; the counter view to materialism is that consciousness creates reality.

  9. jason wright

    So when a robot becomes indistinguishable from us humans, does it become human, or do we become robots?

  10. Steven Kane

    Big fan of Sam Harris but haven’t heard this one. Thank you.