AI: Why Now?

UK-based VC David Kelnar wrote an excellent primer on Artificial Intelligence that is a relatively quick read and helps explain the technology and its advancement over the sixty years since the term was coined in the mid-1950s.

I like this chart, which explains the relationship between AI, machine learning, and deep learning.

But my favorite part of David’s post is his explanation of why AI has taken off in the past five years, as this chart shows:

Like most non-linear curves, this one is driven not by one thing but by a number of things happening simultaneously. David cites four:

  1. Better algorithms. Research is constantly coming up with better ways to train models and machines.
  2. Better GPUs. The same chips that make graphics come alive on your screen are used to train models, and these chips are improving rapidly.
  3. More data. The Internet and humanity’s use of it has produced a massive data set to train machines with.
  4. Cloud services. Companies, such as our portfolio company Clarifai, are now offering cloud-based services that let developers access artificial intelligence “as a service” instead of having to roll their own (see the sketch just after this list).
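
To make that fourth point concrete, here is a minimal sketch of what consuming AI “as a service” looks like: post an input to a hosted prediction API and read back scored concepts. The endpoint, model name, and response shape below are hypothetical placeholders, not Clarifai’s actual API; the provider’s documentation defines the real interface.

```python
# Call a hypothetical hosted image-recognition API and print concept tags.
import requests

API_KEY = "your-api-key"  # issued by the AI-as-a-service provider
ENDPOINT = "https://api.example-ai.com/v1/predict"  # hypothetical endpoint

payload = {
    "model": "general-image-recognition",            # hypothetical model name
    "inputs": [{"url": "https://example.com/dog.jpg"}],
}
resp = requests.post(ENDPOINT, json=payload,
                     headers={"Authorization": f"Bearer {API_KEY}"})
resp.raise_for_status()

for concept in resp.json().get("concepts", []):  # hypothetical response shape
    print(concept["name"], concept["confidence"])
```

The point is the division of labor: the provider trains and hosts the model; the developer just sends data and gets predictions back over HTTP.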

I feel like we are well into the “AI wave” of technology right now (following, in order, the web, social, and mobile waves), and this is a wave that seemingly benefits the largest tech companies, like Google, Facebook, Amazon, Microsoft, IBM, Uber, and Tesla, which have large datasets and large user bases to deploy this technology with.

But startups can and will play a role in this wave, in niches where the big companies won’t play, in the enterprise, and in building tech that will help deliver AI as a service. David included this chart that shows the massive increase in startup funding for AI in the last four years:

I would like to thank David for writing such a clear and easy to understand primer on AI. I found it helpful and I am sure many of you will too.

#machine learning

Comments (Archived):

  1. William Mougayar

    Wow, more investment than in blockchain ventures! It seems to me that most use cases are centered around predicting the future (e.g. security, shopping), better understanding the present (e.g. bots, healthcare), or making automation smarter (e.g. robots, autonomy). Is AI limited by what humans direct it to do? What are the use cases that are more disruptive and paradigm shifting?

    1. Vitomir Jevremovic

      Maybe we don’t need use cases that are more disruptive, but small additional power-ups that AI can provide. For example, better architectural designs, measuring the quality of written texts, corrections in product design… basically striving for perfection. A bit scary, though.

      1. William Mougayar

        Yup, it seems that most of the AI applications are providing a step-wise improvement, not a disruptive one. So the technology is creeping gradually into a variety of areas.

    2. pointsnfigures

      Easier to grasp, shorter learning curves, easier to implement.

      1. LE

        Easier to grasp. Exactly. So most important, you can give examples that practically anyone can understand. Here is one of them: https://news.fastcompany.co…

        Hours after President-elect Donald Trump tweeted a tough comment about defense giant Lockheed Martin on Monday morning, its stock had fallen off a cliff, erasing $4 billion off the company’s market value. The same pattern was evident last Tuesday when he attacked Boeing’s Air Force One contract in a tweet; its shares tumbled (though they recovered by the end of the afternoon). Trump’s ability to move shares is attracting plenty of attention from the quants on Wall Street, who are creating algorithms to take monetary advantage of his tweets, reports Politico.
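
        A toy sketch of the kind of algorithm those quants are reportedly building, with everything here invented for illustration (the watchlist, the word list, and the rule itself): scan a tweet for a watched company plus negative language and emit a signal. Real systems would involve NLP models, entity resolution, and latency-sensitive execution.

        ```python
        # Toy tweet-to-trade signal: flag a watched ticker for sale when a
        # tweet mentions the company alongside negative language.
        NEGATIVE_WORDS = {"cancel", "out of control", "ridiculous", "disaster"}
        WATCHLIST = {"lockheed": "LMT", "boeing": "BA"}  # hypothetical watchlist

        def tweet_signal(tweet: str) -> dict:
            text = tweet.lower()
            signals = {}
            for company, ticker in WATCHLIST.items():
                if company in text and any(w in text for w in NEGATIVE_WORDS):
                    signals[ticker] = "SELL"
            return signals

        print(tweet_signal("Boeing is building a brand new 747 Air Force One "
                           "for future presidents, but costs are out of control"))
        # -> {'BA': 'SELL'}
        ```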

      2. William Mougayar

        Easier to understand and get into than the blockchain, for sure. The trick is in what comes out!! 🙂

        1. Twain Twain

          Easier to grasp because the examples of AI given by Kelnar have their basis in applied maths and economics that have been around since the C17th and C18th, so that’s several centuries of communications simplification which Blockchain hasn’t had. Still, some of the maths can be complex to grasp. The part that’s much harder than Blockchain is the Natural Language Understanding and General AI. That’s something which falls into what Turing (and subsequent AI thought leaders) didn’t and couldn’t provide frameworks for: the (seemingly) “uncomputables”. Except … of course … they are computable — just not by Google et al because they haven’t been able to invent the tools or the frameworks needed.

    3. Twain Twain

      Natural Language Understanding and General Intelligence AI would be paradigm shifting. However, NONE of Google, Facebook, Amazon, Microsoft, IBM, Uber, Tesla etc. have the knowhow to get there. And the knowhow and pattern recognition of investors is in the low-hanging fruits of those existing frameworks and ecosystems — NOT in the paradigm-shifting frontiers.

      Kelnar’s piece is comprehensive and it does trace the developments to the current state of Deep Learning. Nevertheless, it’s important to contextualize John McCarthy of Stanford’s thinking about AI and how that’s led to AI folks designing and engineering in the mathematical-physics abilities of our neocortex (applying the logic, probability and mechanics knowledge bases we’ve built up since Descartes’ time), but not the liberal arts and cultural abilities and the limbic+reptilian parts of our natural intelligence. Limbic and reptilian parts of the brain are where emotions and language originally formed during human evolution. Moreover, it’s not the case that existing mathematical language can model those liberal arts and limbic functions.

      So that’s why Google et al are STUCK. Human-level intelligence would need more than 1.7 Einsteins etc. It would need Da Vincis, Shakespeares, Steve Jobs and more.

    4. ShanaC

      healthcare isn’t just a present thing. It’s also a future thing

  2. Vitomir Jevremovic

    AI as a service sounds fantastic, but there is a concern. Data sets and algorithms are owned by the AI company; the logic is theirs, the output mechanics are theirs. Creating a new business on top of someone else’s is a risky relationship, especially if it is the core of your “future” product. What is to be done then?

    1. Jonathan Coffey

      If you believe the insights from your data are truly valuable and a source of competitive advantage, you’re going to be incentivized to build AI competency in-house. If you don’t know what answers you’re looking for from the data you have or can collect, then you probably aren’t going to find significant long-term value from AI and will have low willingness to pay. Many cases (e.g. image recognition) are only high value in certain instances (e.g. facial recognition is a high-value prop for government/security applications, but in retail is only a small incremental improvement over existing tag-based metadata). Winners will end up being narrow in focus, not broad.

    2. ShanaC

      Not if the company doesn’t own the data 🙂 Totally possible.

  3. jason wright

    …and Foursquare?

  4. Al Mazzone

    From a user point of view, this is a remarkable time. And with more voice interfaces surfacing seemingly every week, it looks like the commercial side will take off and become a real driver. I recently purchased an Echo with the hope that the Sonos integration happens soon. But surprisingly, I started using the Echo more than the Sonos unit in that room. Now I have 3 Dots scattered about and will probably get another Echo and several more Dots.

  5. pointsnfigures

    Yup, gonna be big. High Frequency Trading firms have set up and organized around AI.

  6. Tom Labus

    The NY Times magazine cover story today, “use speech to control devices”: http://www.nytimes.com/2016…

  7. William Mougayar

    Question: Will the cloud be able to support all the new computing power required for this increased AI usage, or will we need more local/edge computing, e.g. what the Fog Consortium is working on?

    1. LE

      Good question. I say business opportunity with fog computing almost for sure. From the OpenFog Consortium:

      “Our work is centered around creating a framework for efficient & reliable networks and intelligent endpoints combined with identifiable, secure, and privacy-friendly information flows between clouds, endpoints, and services based on open standard technologies. OpenFog members represent the leading thinkers and innovators in fog computing, who have come together to help build the foundational elements for this emerging area.”

      http://www.webopedia.com/TE… https://www.openfogconsorti…

      1. Lawrence Brass

        The cloud, then the fog.. cute, but silly imo. Those are networks, you know. Why do people invent new names for things that already have a name? What would smog computing be then? :) Anyway, I agree with you on the potential of edge (node) computing.

    2. Twain Twain

      AWS Cloud and AI: http://www.cio.com.au/artic…

      There’s limited niche space now that the big techcos own the monetizable data, have open-sourced their algo frameworks to let developers train their data (which simultaneously lets the big techcos have those data sets) AND they operate the AI Cloud services. Albert Wenger posted his thoughts on the big techcos’ voice AI platforms here: http://continuations.com/po…

      1. cavepainting

        You are correct that it is ridiculous for people to compare AI with the totality of the human mind.

        a) First, we do not know where the mind is located. Is it local? If so, is it in the brain, or somewhere else? Or is it non-local? If so, where exactly is it?

        b) The human mind is experiential, not a series of transistor gates that generate an outcome. We are filled with experiences of what we perceive through the five senses, but traditional science is yet to understand what generates this subjective, conscious experience (qualia).

        c) The rational, logical part of the mind is what AI is trying to emulate, and there is value in that for a whole range of use cases, but it is important to not overstate what we are trying to do.

        1. Twain Twain

          Exactly. Descartes’ formalization of the Scientific Method and quantifying of observations did provide us with tools that sparked and spurred the Enlightenment and its attendant Industrial Revolution. However, it also led to the separation of what had previously been modeled as our integrated experience. Compare Descartes with Da Vinci. Descartes’ version of human reasoning (without art and heart) means we’ve lost 1/2 the picture. Meanwhile, Da Vinci and the I Ching’s methods show us the whole picture. The other thing that’s ridiculous about some AI researchers saying that the sum of our intelligence is mathematical ability to quantify and reduce probabilistic risks is that the neuroscience and cognitive science don’t even support this!

      2. ShanaC

        As much as I love Admiral Ackbar, sort of, but no. It is like saying working with US Steel to get better steel to build a building is a trap.

        1. Twain Twain

          Albert wrote: “Google and Amazon will have a perfect copy of every enduser interaction. So anything that takes off they can wrap into the core based on what they have learned.” Who learns how to make better steel? Google and Amazon.

    3. ShanaC

      Yes. Even more interesting, a lot of this stuff has less precise computing requirements in the chips, making them cheaper to manufacture over time. (See the sketch below.)
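
      A quick illustration of that reduced-precision point, as a sketch (numbers invented): cast 32-bit network weights down to 16-bit and measure the rounding error. Neural nets often tolerate this loss, which is what lets chip designers trade precision for cost and throughput.

      ```python
      # Compare float32 weights against their float16 (half-precision) copy.
      import numpy as np

      weights = np.random.randn(1000).astype(np.float32)
      low_precision = weights.astype(np.float16)   # half the bits per weight
      error = np.abs(weights - low_precision.astype(np.float32)).max()
      print(f"max rounding error at float16: {error:.5f}")  # small vs. weight scale
      ```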

  8. LE

    “his explanation of why AI has taken off in the past five years, as this chart shows”

    5. Investor money chasing the next big thing. Obvious on its face, but he did say: “As venture capitalists, we look for emerging trends that will create value for consumers and companies. We believe AI is an evolution in computing as, or more, important than the shifts to mobile or cloud computing.”

  9. Peter Van Dijck

    Just one thing, with apologies: I automatically (maybe unfairly) deduct points for anyone who talks about AI instead of machine learning. We’re in an ML boom, not an AI boom. Your company should figure out how ML will affect it, not how AI will affect it. Etc.

  10. LE

    This article is great, you are right. And very investor- and general-public-friendly. Almost too much, actually. Interesting that one thing AI hasn’t done is the litmus (Turing?) test of spam reduction. It appears to be the perfect problem for AI or DL to tackle, yet even Gmail still doesn’t know with enough certainty when I am getting spam vs. real mail (I run a few mail servers but also use Gmail for some things). In fact it’s remarkably bad, not even taking into account obvious cues which it should. (It also would be nice to use it to prevent DDOS attacks.) It has a large data set and all of the things that should allow it to block with incredible accuracy. Yet it constantly gets it wrong. Otoh, my brain knows spam when it sees it quite easily. And in fact some things that most other brains would classify as spam my brain has actually made money on (really from people spamming… lots of money).

    So my question is this. How many areas is AI operating in where you don’t need to be 100% accurate, where you can be, for example, arbitrarily 99.8% accurate (or less) and still ‘win the prize’ and provide great value? Driving a car isn’t one of them, even if you say ‘it’s better than a human’. Investing is probably one of them. Medical diagnosis is probably fine at that % correct (it’s better than what we have now and still provides value with little drawback, and always with human oversight, which is not time-specific). And so on.
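
    As a minimal sketch of the standard ML approach to the spam example above (scikit-learn, with a toy corpus invented here): train a Naive Bayes classifier on labeled messages and score new mail. Real filters train on vastly more data and, as LE notes, still miss edge cases.

    ```python
    # Train a bag-of-words Naive Bayes spam classifier on a toy corpus.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    train_messages = [
        "win a free prize now", "cheap pills limited offer",       # spam
        "meeting moved to 3pm", "draft of the contract attached",  # ham
    ]
    labels = ["spam", "spam", "ham", "ham"]

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(train_messages)   # word-count features
    model = MultinomialNB().fit(X, labels)

    test = vectorizer.transform(["free offer, claim your prize now"])
    print(model.predict(test)[0])     # -> 'spam'
    print(model.predict_proba(test))  # class probabilities, never 100% certain
    ```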

  11. jason wright

    In a nutshell, what is AI? I hear the term bandied about all the time and everywhere in tech land, but what exactly is it?

    1. Girish Mehta

      How about in the manner it’s described in the primer by David Kelnar?

      1. jason wright

        I don’t know him. Perhaps someone here can help. I value this community and the discussion, leading to greater understanding and less ignorance…on my part.

    2. Twain Twain

      Artificial Intelligence is machine simulation of the mathematical functions of the neocortex (logical reasoning) part of our brain. “Intelligence,” as defined and modeled by AI researchers, is “information processing that reduces probabilistic risks” (aka the standard errors in statistics). Personally, I HATE this definition of intelligence.

      1. Dustin Ezell

        It’s probably a fallacy to say AI is a simulation of the brain. But there are as many definitions of AI as there are researchers in the field.

        1. jason wright

          …as many brains, and each with a unique orchestration, as there are definitions of AI, which kind of makes sense.

        2. Twain Twain

          AI as simulation or emulation of the brain via risk reduction is pretty much the standard definition in the community. Not that I agree with it, but it’s what 60-odd years of AI research has been built upon. There’s currently a quest for the “Master Algorithm” — the one black box that fits all the other black boxes.

          There are all sorts of issues with existing definitions of intelligence for a very simple reason: WE DON’T YET HAVE THE NEUROSCIENCE, BIOCHEMISTRY OR THE PHILOSOPHICAL FRAMEWORKS TO KNOW EXACTLY HOW OUR MINDS WORK. So AI researchers are limited to simulation and emulation of the brain with the tools and ontologies that are available from Mathematical Physics, psychometrics from Cognitive Sciences and Psychology … and none of those existing tools are adequate for Natural Language Understanding (NLU).

          1. ShanaC

            but, we are getting closer 🙂

          2. Twain Twain

            We are not getting closer to Natural Language Understanding: https://www.technologyrevie…

            We are getting closer to better machine translation, which is a different function from understanding, and even then it still has issues: https://thewire.in/85833/ma…

            You should also watch Jaron Lanier of Microsoft explain how, behind the curtain of machine translation, is the work of thousands of translators who haven’t been paid. They’re not making the better steel: https://m.youtube.com/watch

        3. Twain Twain

          I did stand up at Turi’s Data Science conference in SF and ask Professor Domingos, in front of 1,500 AI & data scientists, whether AI emulates the brain and simulates evolution — since the starting bases of how the AI brain and our natural human brains evolved ARE THE OPPOSITE AND INVERSE OF EACH OTHER!!!

          That means AI’s thinking is different from ours, and it is why it can’t do Natural Language Understanding. But, apparently, I’m the only one who spots this. He ended up admitting in front of 1,500 people that AI doesn’t emulate the brain and simulate evolution when I pointed out that the AI brain has the maths of the neocortex as its basis, whilst we evolved from reptilian emotions to limbic languages to the mathematical neocortex.

          @wmoug:disqus — This is an example of the bottlenecks. When successions of Professors of AI insist that AI is like-for-like with our brain’s evolution and intelligence, and they indoctrinate those methods into the latest intake of PhDs, AI researchers, data scientists and devs… it adds to the gridlock of why the paradigm-shifting technologies can’t be invented or shipped to market accordingly.

  12. Frank W. Miller

    I mean this as a serious suggestion. I don’t know what your training is, but have you considered entering a CS program yourself? Whatever level you’re at. I would focus your attention on CS, not IT or some media-oriented program, but a serious, real CS program. Columbia is just down the street with some really kickass professors, Schulzrinne for example. Given what you do, I think it would benefit you tremendously to get that level of depth.

  13. Pete Griffiths

    If Norvig is correct, it is massive data, i.e. huge training sets, that has been critical: https://www.youtube.com/wat

    1. Twain Twain

      Norvig is only 1/2 right. The quantities of data that the Googles etc. have accumulated provide some advantages to their ML. However, any system that offers better QUALITY of data wins.

      1. Pete Griffiths

        Absolutely, quality is critical. Data engineering is an essential precursor to analysis.

        1. Twain Twain

          Right, it’s Maths 101: garbage in, garbage out. People, including Google’s top people, know it. They keep being stuck, though, because the lack of diversity has produced an echo chamber, looping the same-old-same-old methods which are about quanta and how fast that quanta of data can be processed — not about the qualia of the data.

          1. Pete Griffiths

            Data engineering when your source is unstructured data is extremely difficult.

          2. Twain Twain

            Not so difficult. All current ML methods referenced in Kelnar’s article are equivalent to putting birds (unstructured data) back into order on the telegraph pole (structured binary, linear, probabilistic order) AFTER they’ve flown.

            There is a way to shift the paradigm of current ML methods, but it’s not obvious. The inventor would have to invent whilst standing on Da Vinci’s and Einstein’s shoulders rather than on the shoulders of Descartes’ logic and Bayes’ probability. Pretty much all of Maths — since we invented it as a language to model us and the world — would have to be ROCKED & CHANGED TO THE CORE. Lol. It would be as game-changing as the inventions of logic, x and algebra, probability, mathematical bases (binary, logarithmic etc.) etc. And, at the same time, the systems invention would re-frame economics, computing, neuroscience, psychology, linguistics and more. @wmoug:disqus @lawrencebrass:disqus

        2. ShanaC

          Even better than engineering: getting a quality source. That’s the hardest part.

      1. Pete Griffiths

        Exactly. This has very peculiar implications for what we consider to constitute understanding. It is becoming possible to have incredibly effective models which have the form:

        x = 2.997914a + 3409.349867b / 43957.59487c etc. etc.

        The point is that these kinds of coefficients are meaningless to us, but they can result in extremely effective fitting of a phenomenon. So if we have a super effective model that fits the data, which has always been the aim of science, but the result has lots of terms and non-intuitive coefficients, do we understand the model?
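
        Pete’s point can be made concrete with a sketch (data and degree invented for illustration): fit a high-degree polynomial to noisy observations. The fit can be very good while the individual coefficients mean nothing to anyone.

        ```python
        # Fit an opaque-but-effective model: a degree-9 polynomial on noisy data.
        import numpy as np

        rng = np.random.default_rng(0)
        x = np.linspace(0, 10, 200)
        y = np.sin(x) + 0.1 * rng.standard_normal(200)   # the "phenomenon"

        coeffs = np.polyfit(x, y, deg=9)   # effective fit, meaningless terms
        print(coeffs)                      # what does any single term MEAN?
        print("max fit error:", np.abs(np.polyval(coeffs, x) - y).max())
        ```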

        1. Twain Twain

          Right, what do any of those coefficients, intercepts, quants etc. actually MEAN? So here’s an example of two sentence parsers at work: Google SyntaxNet and Netbase. Notice how language understanding is modeled as a probability, an F-score: “So the Netbase parser is about 2 percentage points better than Google SyntaxNet in precision but 1 point lower in recall. Overall, Netbase is slightly better than Google in the precision-recall combined measures of F-score.”

          When humans communicate, we don’t go around thinking, “Pete’s parser is about 2 percentage points better than John’s, but 1 point lower in recall.” We also don’t say things like: “Right now, I love my mother 88% and an hour later I love her 33%, or two standard deviations from the norm.” See? Parameterizing human language by probabilistic and statistical means doesn’t make sense.
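
          For reference, the F-score quoted there is just the harmonic mean of precision and recall, so a 2-point precision gain against a 1-point recall loss nets out to a small combined difference. A sketch with hypothetical numbers (not the actual SyntaxNet/Netbase figures):

          ```python
          # F1 score: harmonic mean of precision and recall.
          def f1(precision: float, recall: float) -> float:
              return 2 * precision * recall / (precision + recall)

          google = f1(precision=0.92, recall=0.89)   # hypothetical figures
          netbase = f1(precision=0.94, recall=0.88)  # hypothetical figures
          print(f"Google F1: {google:.4f}  Netbase F1: {netbase:.4f}")
          # Netbase edges out Google, consistent with the quote above.
          ```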

          1. Pete Griffiths

            Completely agree. You may be interested in a Quora question I posed a while ago on this very topic: https://www.quora.com/What-

          2. Twain Twain

            Thanks, Pete. Whilst grappling with how to tool the machines to think more like us, to understand our language and to be baked with our values, I traced the path to where Determinism and “causality” became mathematically defined (maths is capable of quantifying but not necessarily qualifying, imo; I studied maths at university), and how it’s affecting our modern technologies and the tools we have to support decision-making.

            So when I say that the Enlightenment provided useful systematic methods but that Descartes et al separated out and removed tools of art+heart and biochemistry that are intrinsic and integrated with our human reasoning, intuitions and language understanding, I do mean we’ll have to re-frame paradigms and re-engineer a lot of models we’ve had since the Enlightenment. Especially the paradigms that relate to us, our reasoning, our language and our values. Not the paradigms relating to Classical Methods as they apply to inorganic materials and manufactured organic materials that have no inherent consciousness (e.g. Newton’s laws on metal and plastic balls).

            Now, at a subatomic level, in 2014 the physicist Max Tegmark of MIT proposed “Perceptronium, the most general substance that can feel subjectively self-aware” and he includes it in a General Theory of Integrated Information. However, he’s not provided a proof or experimental method to enable us to establish the existence of such a particle. Perhaps another LHC would need to be built to do so: https://medium.com/the-phys… OR someone will design and code an Internet system to do it.

            Five years before Tegmark published his paper, I’d written some Quantum notation with perceptions baked in and I’d read more about consciousness because my Dad had passed away following a coma. In coroner’s court, I’d shown the neurosurgeons that their fMRI scans and behavioral tests had failed to detect my Dad’s consciousness. So my perspectives on data, causality, the chasm between Art + Science caused by Descartes et al after Da Vinci’s death, conscious reasoning, language and how to change paradigms at a core level are very, very different from the existing “in the box” binary logic approaches.

          3. Twain Twain

            Where we went amiss was that during the Enlightenment, we’d found the tools to measure and model SOME metaphysical phenomena (relating to forces of motion of metal balls, random behaviors of dice which gave us probability, and gas expansions which gave us Boyle’s law and Gauss), and a wrong leap was made to use those same tools to measure and model humans, our minds, our language, our values and our socio-economics. As with all science, we have to invent the right instrument to measure the things we want — or as Galileo said: “Measure what is measurable. MAKE MEASURABLE what is not.” Just as we can’t measure our weight with a ruler, we can’t measure human intelligence and experience with Descartes’ binary logic and Bayes’ probability trees. We have to invent better scientific instruments.

  14. baba12

    I am flummoxed by the use of the words Artificial Intelligence. I find that there is zero cognitive reasoning/thinking capability in any so-called AI machines such as IBM’s Watson. I would agree with those who have commented about machine learning.

    When a baby is born, it takes about 12-16 months before it can start to walk, wobbly at best. It takes another few years before the baby is able to start to associate words with objects, and another few years before words can be joined to form very simple sentences, minus grammar. All this while, cognitive thinking as a function is still being developed in the brain. The ability to make decisions, for a child or an adult, is based more on data absorbed over time and the brain going through a decision tree really fast, coming up with answers based on the various inputs and connecting them to stored memory. Very little cognitive thinking is involved in most activities on a daily basis.

    So in terms of machines, we are getting to a place where the machine’s ability to process data and make decisions has become faster, i.e. Watson being able to answer questions as it has been fed lots of data about various things and the logic to connect the various pieces of information together. Machine learning is happening at a faster rate than ever before as there is more readily available data, be it email, pictures, audio, or video, from the vast quantities of information being shared between humans through tools like Facebook or Google search etc.

    I don’t know the right term, but when someone searches for a word/string on Google, the engine isn’t just taking that input into account to give results but also applying context based on when, where and how those words/strings are being asked, and over time it is building large amounts of decision trees that it then traverses at faster and faster speeds as computing power increases. If and when we get to a stage where a machine can do the thinking, that will be the day machines decide that we humans are no longer needed. I think this is the age of machine learning and not artificial intelligence.

    1. Twain Twain

      Leading AI researchers from Google etc. don’t begin to know how intelligence is natured+nurtured into us. AI NEEDS female talent in the design and coding of AI because, without us, no one has a hope in hell of cracking the Natural Language and General Intelligence AI problems. (@wmoug:disqus, @lawrencebrass:disqus)

      1. ShanaC

        Yet children learn words without parents telling them what they mean ALL THE TIME. That’s the sticky part that would push towards Jeff Dean.

        1. Twain Twain

          The slide is about conditioning and context. Children learn via the integration of multiple signals — not only when their parents label it for them (“That’s a cat that sat on a mat.”)

          Note Yann Le Cun’s comments: “Robots will have feeling. Emotions are related to prediction. Fear is related to predicting something scary. Robots will be able to predict, hence they will have simulated emotion.” That indicates Facebook thinks emotions play a role in machine learning and prediction. That’s why they implemented the reaction emojis in Newsfeed. Contrast that with Jeff Dean’s comments on how kids learn: via facts (“It’s a car”) but no emotional context. So Facebook has a better grasp of the “sticky part” than Google.

  15. Henry Thornton

    Apple, Microsoft, Intel, Oracle, Google, Facebook and tens of others began as startups in their respective markets and became giants. The AI market is driven by a subset of these companies, with Google and Microsoft in the vanguard, respectively covering the consumer and enterprise markets. These companies have the data, the infrastructure, hundreds of talented researchers and thousands of engineers, lots of ready cash to trial multiple experiments, and established distribution channels to customers. Plus, these companies have made strategic decisions to address the AI market. Academic papers are freely published on arXiv, and machine learning algorithms and frameworks are open source. The opportunities for startups in AI are not as wide or as big as investors imagine. The AI market is becoming real, but unlike previous technology cycles it is not being driven by startups and VCs.

    1. Adam Sher

      Maybe the lack of data is the advantage. When you have enough information, you can create learning by back-testing. Traders and quants do this. Correct back-testing can look like prediction and learning but isn’t.

      1. Twain Twain

        Plato: “All learning has an emotional base.” Yann Le Cun, Director of Facebook AI Research (FAIR): “Robots will have feeling. Emotions are related to prediction. Fear is related to predicting something scary. Robots will be able to predict, hence they will have simulated emotion.” Notice that FB added emojis to their Newsfeed so they can back-test with that as a feature variable in their NN models.

  16. someone

    Several errors, including the claim that Microsoft has equalled human language understanding performance. They matched word-error rate on a particular domain task, but that is a far cry from human-level understanding. And neural nets have been used in AI for decades. The biggest development is that the training corpora have become massive. (A sketch of what word-error rate actually measures follows.)
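
    To make that distinction concrete: word-error rate is word-level edit distance divided by reference length, a transcription metric that says nothing about whether the system understood the sentence. A minimal sketch of the standard computation:

    ```python
    # Word-error rate via dynamic-programming edit distance over words.
    def wer(reference: str, hypothesis: str) -> float:
        ref, hyp = reference.split(), hypothesis.split()
        d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
        for i in range(len(ref) + 1):
            d[i][0] = i
        for j in range(len(hyp) + 1):
            d[0][j] = j
        for i in range(1, len(ref) + 1):
            for j in range(1, len(hyp) + 1):
                cost = 0 if ref[i - 1] == hyp[j - 1] else 1
                d[i][j] = min(d[i - 1][j] + 1,         # deletion
                              d[i][j - 1] + 1,         # insertion
                              d[i - 1][j - 1] + cost)  # substitution
        return d[len(ref)][len(hyp)] / len(ref)

    # One substituted word out of six: low error, zero "understanding".
    print(wer("the cat sat on the mat", "the cat sat on a mat"))  # ~0.167
    ```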

  17. jason wright

    Is PageRank a form of ML?

    1. Twain Twain

      Yes, because ML is basically applied probability, statistics and linear algebra. PageRank is a form of ML: https://en.m.wikipedia.org/
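
      Whether PageRank counts as ML is debatable (its core is an eigenvector computation rather than a trained model), but the linear algebra behind it is easy to show with a sketch: power iteration on a toy four-page link graph, invented here for illustration.

      ```python
      # PageRank by power iteration on a tiny link graph.
      import numpy as np

      # links[i][j] = 1 means page i links to page j (toy 4-page web)
      links = np.array([[0, 1, 1, 0],
                        [0, 0, 1, 0],
                        [1, 0, 0, 1],
                        [0, 0, 1, 0]], dtype=float)

      transition = links / links.sum(axis=1, keepdims=True)  # row-stochastic
      damping, n = 0.85, links.shape[0]

      rank = np.full(n, 1.0 / n)
      for _ in range(50):  # iterate until the rank vector converges
          rank = (1 - damping) / n + damping * rank @ transition
      print(rank)  # page 2, the most linked-to, ranks highest
      ```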

  18. Twain Twain

    This is where we are after 60+ years of AI. The sphere is where we need to get to. To get there, we’d pretty much have to re-imagine and re-engineer every model created since the Enlightenment, sparked by Descartes’ work of 1637. There’d have to be a “Renaissance of Da Vinci for the Digital Age” and, indeed, we’d have to go back further to the I Ching of 800 BC, and back even further to when humans started documenting our experiences in cave paintings 40,000+ years ago.

    To say that Google, FB, IBM Watson et al have all the data and all the tools to get the machines to be more “human-like” and to think like us and to understand language and values like we do … is a FALLACY. Such things does Da Vinci show me how to see and solve for.

    @wmoug:disqus @cavepainting:disqus @pointsnfigures:disqus @le_on_avc:disqus @lawrencebrass:disqus @jasonpwright:disqus @myscrawl:disqus @jhabdas:disqus — Clearly, I’m focused on building the sphere, LOL.

    1. Michael Elling

      We need a system of incentives and disincentives, something the settlement-free IP stack lacks. We also need to transcend the winner-takes-all mentality imbued in nearly every socio-economic institution mankind has developed. We need to enter into an age of equilibrism if we are to foster and survive the power of AI.

    2. Pete Griffiths

      I’ve seen you make comments along these lines before. I don’t pretend to find where you are coming from completely clear, but it’s interesting for sure.

    3. Twain Twain

      Zuckerberg on what he learnt from building his Jarvis AI, with my comments: https://www.facebook.com/no…

      @wmoug:disqus @cavepainting:disqus @pointsnfigures:disqus @le_on_avc:disqus @lawrencebrass:disqus @jasonpwright:disqus @myscrawl:disqus @jhabdas:disqus @ShanaC:disqus — Fundamental breakthroughs needed, indeed. LOL.

  19. David Kelnar

    Hi all, I’m the author of the AI primer that Fred describes above. Fred, thanks very much for your kind words and for featuring the research – I’m glad it was helpful. If anyone has any questions, I’d be pleased to answer them. If you’re interested in AI, my colleague Dominic and I have just finished mapping 226 AI companies in the UK and meeting with over 40 of them. You can explore the first map of UK AI, and six key trends we found that are shaping the market, here: https://medium.com/@dkelnar

  20. Twain Twain

    Machine Learning is the CDO of AI. AI is the parent, ML is its complex child derivative.