Video Of The Week: Talking With Machines

A few weeks ago, USV and our friends at Box Group hosted a panel discussion in the USV Event Space on artificial intelligence. We called it Talking With Machines (bios of the panelists are on that page).

Here’s the video of the discussion.

Talking with Machines from Union Square Ventures on Vimeo.

If you’d prefer to listen to it and not watch it, you can do that on SoundCloud:

#machine learning

Comments (Archived):

  1. jason wright

    Nicely framed flag. The sound is a bit iffy left and right.

    1. Jonathan Libov

      “Iffy” is probably too generous. This was something of an experiment for us, so neither audio nor video is super (though it's crazy convenient that we did it all from the iPhone). The flag was unintentional but cracks me up 🙂

  2. Richard

    “Welcome To The Machine”

    Welcome my son, welcome to the machine.
    Where have you been? It’s alright, we know where you’ve been.

  3. Scott

    Thanks for sharing, Fred. What a great discussion. For those who want to understand AI and how to think about it for the enterprise, Narrative Science put out a great book, ‘Practical Artificial Intelligence for Dummies’: https://www.narrativescienc…. The book discusses in more depth some points that are touched upon in the video: weak vs. strong AI, and narrow vs. broad AI.

  4. Dave W Baldwin

    Interesting discussion. Thanks.

  5. Twain Twain

    Wish I’d been there! A few notable points in the discussion:

    10:25 — exporting American rules into AI, which may not translate elsewhere. Social and cultural cues are different around the world, even when it comes to N parties agreeing on and confirming times and meetings… and the different interpretations of what “tomorrow” means; humans understand it differently from the machines.

    Twain opinion: This is an example of how the probability approach of current AI can’t deal with what is a Quantum Relativity variable: time. It’s also an example of how the English language rules that have been framed by Stanford+Harvard+Princeton+MIT+CarnegieMellon+W3C+UStechcos cannot be universally and generally applied to other languages => this is why translation algorithms are so amiss. Other languages simply don’t follow English grammar rules. So not only is Natural Language NOT the same as Maths as a language… between languages (English, Spanish, Chinese, Hindi, Arabic, Russian, etc.) the grammar rules differ, so the Maths that maps Natural Language as probabilistic clusters CANNOT be universally and coherently applied.

    10:50 to 12:20 — the IBM Watson guy breaking down the evolution of Linguistics in AI into 3 specific levels: 1950s English Grammar & Syntax; Semantics & Ontologies; and Pragmatics (also known as Common Sense), which is what some leading AI researchers are currently working on. He points out there is no clear scientific methodology for this yet.

    Twain opinion: IBM Watson, Google, Facebook, Baidu, Stanford, W3C et al. are not even aware yet that pretty much all the syntax, semantics and pragmatics will NEED to be rewritten if Machine Intelligence is to be universally rather than narrowly intelligent, and if it’s to be used by the whole world rather than an elite of AI researchers and developers who understand computing syntax and symbolics.

    53:00 — Dennis of x.ai says, “So me falling in love with my wife… At no point do I want your (the AI’s) help. Your predictions. Recommendations. Suggestions. I’m gonna figure that out myself.”

    Twain opinion: Umm… AI algorithms run recommendations across Amazon, Google search, the Facebook feed, Twitter, LinkedIn… financial portfolios… AND dating apps like Match.com and Tinder. They also run across IoT and the energy sector, and risk-manage everything from distribution, cryptography and weather reporting to stock control.

    He also made a comment about how his Mom doesn’t understand the syntax in Twitter (@, #, etc.), which is why designing UX with syntax as interaction is so alienating and confusing for users who may not have the same reference points as developers and who aren’t tech-savvy (the mass rather than the first adopters).

    This is interesting given Fred’s post on the return of the command line interface:

    * http://avc.com/2015/09/the-…

    It’s an interesting generational and cultural data point. Dennis is referring to his Mom, who may be a Digital Migrant, whereas Fred is talking about the next gen of Millennial Digital Natives for whom Comp Sci may be compulsory, so they’d have no issues understanding @ and # as readily as they do the words “to” and “tag”.

    1. Stephen Voris

      So, they’re leaving out morphology and phonetics/phonology? Granted, those are hairier problems due to how languages have been mostly-anarchically formed (accents and spelling differences in English alone are pitfalls for the unwary)… but these are rather important foundations to leave out, and contain their own insights to the higher levels.

      1. Twain Twain

        The Baidu team isn’t leaving out morphology and phonetics/phonology. The Chinese language has arguably the most complex phonology:

        * https://medium.com/s-c-a-l-…

        There’s a race between Baidu, Google, MS, Apple, IBM Watson et al. to find the “Holy Grail Unified Algorithm for AI”. This would be applicable across text, sounds, images, audiovisuals, robotics, etc.

        * http://www.wired.com/2013/0…
        * http://www.popularmechanics…

        It would be the ULTIMATE INTEGRATION OF ART & SCIENCE. It’s like hoping all the geniuses and Nobel Prize winners in Art, Literature, Maths, Physics, Chemistry, Economics, etc. who’ve ever lived were alive and working together to solve the AI problem.

        @wmoug:disqus – This is why the Information Revolution is far from done. In computing terms, if data is fuel, we may have evolved from transporting it around on horses and country lanes to the cars of the Information Superhighway. However, we haven’t solved the artistic origami and science to make paper airplanes yet — much less air gliders, airships, airplanes, Concordes and spaceships that can travel to the Moon and Mars and colonize them.

        Blockchain is equivalent to the traffic lights on the Information Superhighway in this analogy.

        If Financial Capital throws in the towel because of Carlota’s theories, then it shouldn’t later regret that Production Capital invested in and invented the future instead.

  6. Twain Twain

    The issue of whether Machine Language (maths) is as understandable as, and interoperable with, Natural Language is the crux of this discussion and one of the hard problems in AI, arguably the hardest.

    Interestingly, there was this question in my G+ feed yesterday: “What word do you use when working with a (0) bit? Zero or nought?”

    One of the answers was: “Null means no value present, and zero means the value present is zero. They are not the same thing.”

    My answer was: “Null is the code convention across multiple programming languages. Meanwhile, in everyday language we use ‘nada, zip, blank, not a scratch, origin, nix…’ etc.”

    Right there is an example of how the language of developers and of machines is different from the language of everyday users. It really makes no difference whether the word is inputted as text, sound, images, video, etc. It still all boils down to whether Machine Language can ever be directly equivalent to, and as understandable as, Human Language (Natural Language), and whether we can build AI that everyone from 2 to 102, from anywhere in the world, can understand and use, and that would understand them reciprocally in a beneficial relationship.
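    To make that null-vs-zero distinction concrete, here is a minimal Python sketch (the `describe` helper and the readings are hypothetical, purely for illustration):

    ```python
    # A minimal sketch of the null-vs-zero distinction described above.
    # In Python, None plays the role of "null" (no value present),
    # while 0 is a value that happens to be zero.

    def describe(reading):
        """Say whether a reading is absent, zero, or some other value."""
        if reading is None:
            return "no value present (null)"
        if reading == 0:
            return "value present, and it is zero"
        return f"value present: {reading}"

    print(describe(None))  # -> no value present (null)
    print(describe(0))     # -> value present, and it is zero

    # A common pitfall: truthiness conflates the two. Both None and 0
    # are falsy, so `if not reading:` cannot tell them apart.
    for reading in (None, 0):
        assert not reading  # both look "empty" to a truthiness test
    ```

    The everyday-language point stands: a machine distinguishes `None` from `0` strictly by convention, while humans reach for “nada, zip, blank, nought” interchangeably.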

    1. Twain Twain

      (2.) On 08 October 2015, Facebook changed from “like” to emotion buttons. This was 3 years AFTER I’d proposed them to RCS Mediagroup in Italy in Q4 2012, and they then used my ideas. Those ideas are less than 0.0000000000000000001% of my systems invention. So I am at least 3 years ahead of Facebook in emotion buttons.

  7. Twain Twain

    Also interesting is that two recent moves by Google and Facebook in Machine Intelligence have validated my ideas in the space: ideas I’ve had … since childhood, and can now execute on as an adult with a lot more tools at my disposal.

    (1.) On 16 Oct 2012 at a Campus event, I asked Amit Singhal, SVP of Google Search, “Will Google’s Star Trek engine have a heart?” to which he replied, “That’s very deep for a first question. Most journalists only ask that question at the very end — after we’ve all had a few drinks. No, we’re focusing on facts and figures. We can only build the machine for those.”

    Now, on 05 Nov 2015, in Robert Hof’s MIT Technology Review article: “We’re at the Commander Data stage,” staff research engineer Pete Warden said in a reference to the emotionless android in the television show Star Trek: The Next Generation. “But we’re trying to get a bit more Counselor Troi into the system”—the starship Enterprise’s empathetic counselor.

    * http://www.technologyreview…

    Back in 2012, Amit Singhal had gone on a roadshow to share Google’s vision of a Star Trek computer that could answer our questions — as competition between Google and the Jeopardy-beating IBM Watson machine increased:

    * http://www.slate.com/articl…

    YES, IT COULD BE THAT THAT ENCOUNTER WITH ME MADE AMIT & GOOGLE CHANGE THEIR COURSE A BIT. LOL.

  8. Kevin Murphy

    Here is an example that I think is relevant. Sharp has released a walking, talking cellphone that is a personal assistant. It walks around, talks to you and even projects an image from its forehead – of maps, or things, or selfies that it has taken of you on command.

    I find that examples like this tend to do more to enlighten me on the use of AI in work and personal situations than reading papers about it. It’s worth a look. Here is some text from the article, written by Reuters, and below that is a link so you can find this and more:

    “With smartphones becoming ever more elaborate, one might think that there’s little left to surprise users. But Japan’s Sharp Corporation have brought us RoBoHoN, a robot smartphone that employs facial recognition and a two-inch touchscreen embedded in its back for making calls, sending texts, and surfing the web. RoBoHoN can also project images onto any surface from short distances with its in-built forehead projector and even call a cab for its user.”

    http://petersmvis.blogspot….

  9. James Ferguson @kWIQly

    After 2 mins someone says “wanna get started” – after 2 mins I lost patience and stopped! Maybe the rest added value – I, for one, will never know.

    1. Twain Twain

      Fast fwd to the sections I highlight below, James.

  10. Twain Twain

    We can certainly continue with Narrow AI: train the AI to beat a specific game (chess, Jeopardy, Atari 2600, Go) and then generalize that over as if it’s exactly equivalent to human intelligence and to how our minds learn, use language, filter data, connect data, optimize information and risk-manage by probability. We can also continue to do directed ingestion of “Big Data” and get the machines to learn along the same narrow functional commands. After my maths degree, I did that with financial AI: pulling in data from the stock exchanges and modeling for patterns of stock behaviors (a minimal sketch of that kind of narrow pattern-modeling follows below).

    OR…

    We apply Quantum Human Intelligence to the problem in much the same way we invented flight and airplanes, launched spaceships that landed on the Moon and came back safely, and dispatched Rovers to Mars.

    As Dennis of x.ai says, “It (AI) is … HARD.” LOL, that’s the understatement of the century. Not even the top AI researchers at Google, Stanford, IBM Watson et al. are aware (yet) of how hard it will be to solve the Natural Language understanding problem universally. Basically, we’d need to rewrite every single Mathematical algorithm and every record of human experience ever. THAT’S HOW HARD.

    LOL … I know it, though …
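    Here is that sketch: a minimal, hypothetical Python illustration of narrow, probabilistic pattern-modeling on price data. Synthetic random-walk prices stand in for a real exchange feed; nothing here is from an actual trading model.

    ```python
    # A hypothetical sketch of narrow financial pattern-modeling:
    # regress tomorrow's return on the previous two days' returns.
    # Synthetic random-walk prices stand in for real exchange data.
    import numpy as np

    rng = np.random.default_rng(42)
    prices = 100 * np.exp(np.cumsum(rng.normal(0.0, 0.01, size=500)))
    log_returns = np.diff(np.log(prices))

    # Features: returns at lags 1 and 2; target: the next day's return.
    X = np.column_stack([log_returns[1:-1], log_returns[:-2]])
    y = log_returns[2:]

    # Ordinary least squares: the "narrow", probabilistic pattern-matcher.
    design = np.column_stack([np.ones(len(X)), X])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    pred = design @ coef

    # In-sample "skill": how often the fitted model calls the direction.
    hit_rate = np.mean(np.sign(pred) == np.sign(y))
    print(f"directional hit rate: {hit_rate:.2%}")  # ~50% on a random walk
    ```

    The point of the sketch is the narrowness: the model only ever sees the patterns it was told to look for, and on a true random walk its directional “skill” hovers around chance.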

    1. Twain Twain

      Here’s how the invention of a unified algorithm for AI will affect multiple areas. So … that’s why the Information Revolution is far from done. My mind and code hands have been busy, LOL.

  11. sigmaalgebra

    Artificial intelligence? Machine learning?

    I can claim to have done good research in both, peer-reviewed, published. But mostly I can’t really say what either field actually is. From all I’ve seen, both fields, if we want to ascribe such status, follow: “There is a lot that is good and new; however, the good is not new and the new, not good.”

    The good parts are essentially always some good, occasionally original, work in applied math, statistics, or fairly traditional engineering. The uses of the words intelligence and learning are apparently nothing like human, or even kitty cat, puppy dog, elephant, dolphin, or horse, intelligence or learning.

    It’s becoming clear that much of the computer science community wants to borrow words that have been describing things about humans in an effort to imply, without careful argument, that their work is somehow close to reproducing, emulating, and duplicating what humans do, and this is just deceptive hype. This borrowing is decades old, has never been real, and isn’t now. This word usage is like saying that, because in some circumstances a bicycle can move as fast as a human, bicycle wheels are artificial legs.

    Let’s see: Is a car “machine running”? Nope. And if a car were only as good at moving as a human running, then we’d be in big trouble and would have to go back to horses. Generally, for some computer application to be no better than humans is a bummer; instead, good applications, for their intended use and in their assumed context (two really big qualifications and design approaches), are much, much better than humans: cars move faster than humans; airplanes fly higher, faster, farther, and with larger loads than birds; submarines mostly are better at swimming than fish, whales, etc.; some now quite old statistics is much better at detecting a crooked roulette wheel than unaided humans; commonly telescopes see much, much better than humans; a computer with some ordinary differential equations, Runge-Kutta applied math, and corresponding software is much, much better at tracking and predicting the motion of astronomical objects than humans; some fast Fourier transform software can be much better than humans at getting audio signals out of audio noise (a small illustrative sketch follows below); and on and on.

    Net, in information technology, the good stuff has been, and remains, applied math, statistics, and engineering.

    Computer science? They have some clever sorting and searching algorithms; know how to write software for compilers, operating systems, databases, hierarchical file systems, graphical user interfaces, and lexical scanning and parsing according to syntax given in, say, Backus-Naur form; and have worked out some good approaches to authentication (e.g., public key cryptosystems, RSA, and Kerberos), security (e.g., access control lists), multiprogramming with semaphores, locks, critical sections, automatic deadlock detection and resolution to maintain transactional integrity, and more. Good, useful, important, powerful, valuable? Yes. Brilliant? Occasionally. Intelligent? No. Learning anything like humans do it? Apparently not.
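    On that fast-Fourier-transform point, a tiny illustrative Python sketch (synthetic data; the sample rate, tone frequency, and noise level are all assumed) shows how the spectrum reveals a signal the raw time series hides:

    ```python
    # A tiny sketch of the FFT point above: a weak 50 Hz tone buried in
    # noise is hard to see in the time domain but is an obvious spike in
    # the magnitude spectrum. All parameters are assumed for illustration.
    import numpy as np

    fs = 1000.0                                   # sample rate, Hz
    t = np.arange(0, 1, 1 / fs)                   # one second of samples
    rng = np.random.default_rng(1)
    tone = 0.5 * np.sin(2 * np.pi * 50 * t)       # weak 50 Hz signal
    noisy = tone + rng.normal(0.0, 1.0, t.size)   # noise power 8x the tone

    spectrum = np.abs(np.fft.rfft(noisy))
    freqs = np.fft.rfftfreq(t.size, 1 / fs)
    print(f"strongest bin: {freqs[np.argmax(spectrum)]:.0f} Hz")  # ~50 Hz
    ```

    Which is the commenter’s thesis in miniature: nothing “intelligent” happened, just good applied math doing something unaided humans cannot.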

    1. Twain Twain

      In the ‘Startup Physics’ post, we covered how we shouldn’t over-ascribe abilities to Physics, Maths and Computing as tools capable of modeling human intelligence or behavior. Notably, @samedaydr:disqus posted a great graphic with the words “All about the base.”

      You know what Plato said? “All learning has an emotional base.”

      Do any of those Machine Learning algorithms from Google, IBM Watson, Facebook, etc. have an emotional base? NOPE. How do I know this?

      (1.) On 16 Oct 2012, I personally asked Amit Singhal, SVP of Google Search, whether “Google’s Star Trek machine will have a heart,” to which his answer was “No.”

      (2.) In 2014, I was at IBM Watson’s first-ever developer day and asked its CTO if IBM Watson is “akin to an autistic human savant,” as its inventor said in a CNN interview.

      (3.) In March 2015, I spoke with the FB Data Scientist who did a piece of research, ‘Love in the time of Facebook.’ I asked him whether the number of times a “like” is clicked really is an expression of love because, often, we just click “like” to indicate we’ve read something. So how can their algorithm filter those nuanced differences, or does it all go into the same algorithm as direct equivalents?

      I’ve spoken with the AI Natural Language teams at Google, Yahoo, Stanford, etc. and can’t find a single Machine Learning algorithm WITH AN EMOTIONAL BASE in any of their research papers.

      Yes, so I’d like to see that on Rich’s graphic, LOL.

  12. William Mougayar

    Coincidentally, I’m speaking at a private event in Toronto next week (on blockchains), and Dennis Mortensen of x.ai, who is in the video, is also speaking at it.

    1. Twain Twain

      FAB. You can tell him that his point about the human understanding of the word “tomorrow” being different from the machine understanding of the same word is an example of how the probability of the machines can’t deal with something which is a Quantum Relativity variable: time, especially when it isn’t a discrete event (10:00 am, date) or a duration (noon to one, date), AND especially if there is a time difference between the two parties.

      This is an example of Probability’s limitations in machine intelligence, by the way. (See the sketch below.)
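      As a concrete illustration of that “tomorrow” ambiguity, here is a minimal Python sketch (the timestamp and the two time zones are assumed, purely for illustration):

      ```python
      # A minimal sketch of the "tomorrow" ambiguity: the same instant falls
      # on different calendar days for two parties, so "tomorrow" names
      # different dates. Timestamp and zones are assumed for illustration.
      from datetime import datetime, timedelta
      from zoneinfo import ZoneInfo  # Python 3.9+

      instant = datetime(2015, 11, 21, 23, 30, tzinfo=ZoneInfo("UTC"))

      for tz in ("America/New_York", "Asia/Tokyo"):
          local = instant.astimezone(ZoneInfo(tz))
          tomorrow = (local + timedelta(days=1)).date()
          print(f"{tz:>16}: today is {local.date()}, 'tomorrow' is {tomorrow}")

      # At 23:30 UTC it is still Nov 21 in New York but already Nov 22 in
      # Tokyo: the two parties' "tomorrow" differ by a full calendar day.
      ```

      No probability distribution over dates resolves this; the scheduler has to know each party’s zone and intent before “tomorrow” even has a referent.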

      1. William Mougayar

        OK!

        1. Twain Twain

          I’ve shared this video from Google on ‘The Science of Talking to Computers’ before:

          * https://www.youtube.com/wat…

          You can ask Dennis for his views on it.

  13. Michael K. Clifford

    We are deep into Adaptive Learning… this tech will transform education globally…
