An AI First World
Sundar Pichai said this last week on Alphabet’s earnings call:
In the long run, I think we will evolve in computing from a mobile-first to an AI-first world
That statement got a lot of pickup and attention, and deservedly so.
It explains how the CEO of one of the most important tech companies thinks about where tech is heading and where his company is heading.
What does an AI first world look like?
It was easier to think about a mobile first world. That’s a smartphone centric computing environment. That is very much where we are right now.
Does an AI first world suggest we will move beyond carrying around devices? Does it suggest that computing moves into the ether and is just there when we need it “on demand”? Does it suggest that voice will emerge as the primary user interface?
I do believe AI is the most important next big thing and have been saying that here and publicly for the past few years.
I am running into AI technology more and more in my daily life. It feels like this AI first world is arriving. That’s big.
AI is the next big thing, and timing is everything. I’ve always been too late for all the other advances, but I feel poised this time, and I plan to make something of it. You ask fundamental questions here. Those who answer them, and the questions peripheral to them, will to some degree be writing and determining the future.
“You ask a lot of questions here.” Truth.
I’ve changed that sentence from “a lot of questions” to “fundamental questions” in order to more precisely reflect what I wanted to say, but I trust that you’re inquisitive indeed.
Indeed, Google’s big bet on AI was declared last year. Sundar said this towards the end of the call, but he was short on what this AI-first world would look like. I’d like to hear what developers are doing with TensorFlow, their “Android of AI”. What are some examples of the AI uses you are running into daily? Are we talking Google Now and text-based bots?
Google placed its bet on AI more than a decade ago. You run into “AI” every time you use Google Search; it’s just a different kind of AI. Sundar’s recent announcement is in the PR category. Open sourcing TensorFlow is probably one of the first steps to bringing developers into the Google AI ecosystem. We’ll probably see more cloud APIs and services, as well as Android libraries using those services, that would allow developers to make their applications “smarter” while feeding Google more and more data. With the neural network approach currently favoured by Google (and the other big guys), the more data you have, the better your AI is.
>Google’s bet on AI more than a decade ago. You run into “AI” every time you use Google Search. It’s just a different kind of AI.

Right. Peter Norvig (norvig.com), Google’s Head of Research, is one of the top AI people. One of his books is supposed to be a standard AI textbook, and before his current role he was Head of Search Quality at Google.

>the more data you have the better your AI is.

Right again. He has talked about this somewhere. Interestingly, though, I read a bit about a sort of controversy between him (and those who share his approach of using lots of data) and Noam Chomsky(?), where the latter argues for a rigorous theory-based approach and the former says that data trumps theory (when large enough). Paraphrased, mind; I’m not at all knowledgeable about AI, just find it interesting.
Peter Norvig’s position... and then the penny drops. One of the problems with using standard errors from probability’s playbook is that, yes, increasing the number of data points (the n in the denominator) does reduce the standard error of the estimate. However, that shouldn’t, and can’t, be interpreted as the data being meaningful. This is at the root of why none of the big techcos have been able to solve the Natural Language problem.
Yes, I have my doubts about that approach (data-driven) being valid everywhere. I should have been more precise. Instead of saying this:

>Right again.

I should have said it works in some cases; the spelling correction suggestions in Google Search are a good one. But the approach may not be (so) applicable to other areas, where more of AGI / Human Intelligence is involved.

Speaking of natural language, a couple of points:

1. That phrase I had quoted here on AVC some weeks ago, in some context:

Time flies like an arrow.
Fruit flies like a banana.

If you google the first statement, you find on Wikipedia or somewhere that it has not just two possible meanings, but many more (albeit some of them contrived).

2. An old AI joke:

A prestigious project is undertaken to build an AI computer that can translate sentences from English to language X and back to English. After the high-pressure project finishes, the computer is delivered. The head honchos give it a test input: translate this line.

The spirit is willing but the flesh is weak.

Back comes the answer, after the round trip from English to language X and back to English:

The vodka is good but the meat is rotten.

https://en.wikipedia.org/wi…
Ha ha ha! The most geeky joke I’ve ever heard.
> However, that shouldn’t and can’t be interpreted as the data is meaningful

Right. And more importantly, I’ve always thought (w.r.t. AI) that in the general case, no amount of data by itself can assign or determine *meaning*; and that is more or less what your Amit Singhal quote says.
Dr. Norvig is one of my personal inspirations. I’ve been following his books and writing over the past 15 years, and probably have to thank him more than any other author for my interest in AI. The debate you are talking about goes all the way back to the earliest days of AI. You can read more about it here: https://en.wikipedia.org/wi…
Oh cool, thanks.
Well… maybe the current neural network approach (Deep Learning) isn’t the optimal one. It works reasonably well for image recognition and verbal speech recognition, although it’s still amiss there. Maybe there’s a whole other model structure, and data sets for AI yet to be invented and collected, so the AI can be more accurate and serve us better.
Image recognition AI doesn’t have enough background data to recognize what the heck it is looking at, and the recognition techniques don’t know how to make use of the necessary additional data.
Of course, generally in statistics and curve fitting, accuracy grows as the square root of the number of data points. Then, in multivariate cases, the data needed grows roughly exponentially in the dimension. So, with their simple techniques, collecting a lot more data won’t do much for accuracy.
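The square-root claim above is easy to check by simulation. A minimal sketch (toy normal data, numbers of my own choosing, not from the comment): estimating a mean from n points, quadrupling n should roughly halve the average error.

```python
import random
import statistics

def mean_abs_error(n, trials=2000, seed=1):
    """Average absolute error of the sample mean over many simulated
    experiments, each drawing n points from N(0, 1) (true mean 0)."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        sample = [rng.gauss(0.0, 1.0) for _ in range(n)]
        total += abs(statistics.fmean(sample))
    return total / trials

# 1/sqrt(n) scaling: 4x the data buys only 2x the accuracy.
e_small = mean_abs_error(100)
e_large = mean_abs_error(400)
ratio = e_small / e_large   # close to 2
```

This is also why "just collect more data" hits diminishing returns: each halving of the error costs four times the data, before the curse of dimensionality even enters.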
I think the biggest business use cases are still in sales and marketing. http://www.insidesales.com is a great example of an AI-driven service that is delivering 30+% improvements in inside sales effectiveness. Wise.io does the same for service. In marketing, AI can really change how people find apps and web sites for specific jobs-to-be-done, and how budgets get allocated.
If you’ve ever interacted with a political campaign in the United States you can be assured that there is machine learning behind that interaction.
I think intelligent agents will come embodied in many forms (apps, bots, add-ons or voice driven devices), but their essence is disembodied intelligence powered by big data, living in the ether, that helps users holistically navigate both their virtual and physical worlds.

I also believe there will be four kinds of agents: Platform agents (OS or messaging), Supply agents (owned by specific product / service makers), Independent agents (owned by third parties or aggregators), and Autonomous agents (operated by the customer or consumer).

I just wrote this on the rise of intelligent agents. It is a draft, but I thought I would share anyway.

https://meta-edge.com/the-r…
I personally see a window of opportunity with “autonomous agents” and have been thinking about it for some time, but I had no word to refer to them, and now I do, thanks to you. And BTW, I think your Medium article is seminal on the topic of AI agents / bots.
Mario, Thank you.
Could there be more than four kinds?
I think so. I was looking at it more from the perspective of who owns and operates that intelligence, what data fuels it, and what outcomes it is intended to achieve. Within each category there are a lot of variants, and possibly there are new categories as well.

For platforms like iOS, Amazon, Android or Facebook Messenger, they have tremendous data about each user, his apps and how they get used (or not), so that naturally lends itself to a broad variety of problems they can solve.

For specific suppliers, AI is still important to infuse in their apps, legacy ERP and web sites, but they are largely going to be dependent on the data they have about their customers. For example, GE has built predix.io and has realized big gains in proactive maintenance.

For aggregators, marketplaces and third parties, they are doing data science on top of all transactions across each supplier and his customers, from how they made the match to how the service was consumed, to make better matches and improve the experience. A good example is Takadu.com, which uses AI to reduce water leakage for utilities. Or insidesales.com for sales effectiveness.

In the last category, the consumer is aggregating his own data from different supplier silos and using AI to get insights about his own data. Of course, this will be valuable only when suppliers agree to play nice and open up APIs. In industries like healthcare, this seems closer to a real possibility and can drive real value, especially if these agents can also contribute to a larger pool (anonymously).
“Autonomous Agents: The last category is the most transformational, and also the most difficult to achieve.”

It is also the stuff of sci-fi fantasies. The new wild west, where everyone gets to holster their own AI six shooter :-)

Gangs of “Autonomous Agents” roving cyberspace looking to “*?*?*?*?*” :-)
🙂 That is true. For now. But it may not be as distant a possibility as it seems.

It is not so much that people are hosting their own AI as extracting their own data from suppliers into a cloud, or a space that they clearly own, and then allowing third party services to provide insights and recommendations by analyzing their data combined with the anonymous data sets of others.

Mint is an example of such a service, launched back in 2007. A consumer could link up all his bank accounts and they would provide descriptive analytics. In more recent years, they have provided specific insights based on what everyone else is doing. Dunnhumby has enabled retailers to do the same, and the service provided real consumer insights to CPG.

What the recently acquired Quettra did in mobile analytics is also similar. They allowed developers to use their SDK, contribute data, and realize insights back.

So, there may be variants between “independent” and “autonomous” that use third party cloud and AI services. The common theme in all is that a statistically significant sample contributes data anonymously to a bigger cloud, and a larger population enjoys the insights.
Doesn’t the whole AI thing become organically recombinant in short order? How many categories of cells are there?
Probably. Though the direction of NLP and image detection is moving toward modeling the brain. They’re not called neural networks for nothing.
Thanks for this. The utility categorization of AI is useful from a business model perspective. However, from the technical perspective, there are whales in the room around Natural Language.

It’s a problem in the foundations and structures of code itself that permeates all applications of AI, one which NO variation in UI (mobile / PC, touch / gesture / voice, VR, car, embedded sensors) or current approach in bot automation, including Google Now, solves.

That AI can automate a lot of data and that we rely on it to make decisions for us was already shown pre-2008 and post-2008:

* http://www.nytimes.com/2008…
* http://www.nytimes.com/2012…

Note, from the second article: “At the M.I.T. conference, a panel was asked to cite examples of big failures in Big Data. No one could really think of any. Soon after, though, Roberto Rigobon could barely contain himself as he took to the stage. Mr. Rigobon, a professor at M.I.T.’s Sloan School of Management, said that the financial crisis certainly humbled the data hounds. ‘Hedge funds failed all over the world,’ he said. The problem is that a math model, like a metaphor, is a simplification. This type of modeling came out of the sciences, where the behavior of particles in a fluid, for example, is predictable according to the laws of physics. In so many Big Data applications, a math model attaches a crisp number to human behavior, interests and preferences. The peril of that approach, as in finance, was the subject of a recent book by Emanuel Derman, a former quant at Goldman Sachs and now a professor at Columbia University. Its title is ‘Models. Behaving. Badly.’”

MS Tay is a classic example of AI. Behaving. Badly.

The Deep Learning structures in MS Tay and other AI chatbots are similar to the Deep Learning structures in the AI that contributed to the last global financial crisis.

Better models for human behavior, interpretation, preferences, diversity of definitions etc. are non-trivial problems that Silicon Valley hasn’t even made a start on solving yet (or is in denial about, much in the same way SV was in denial about its diversity problems until a year or so ago).

Automating the existing data is straightforward enough with the open source algorithms and frameworks released by Google and MS, e.g. TensorFlow. Sure, there may be a $billion Slackbot that will emerge in the next 5-10 years.

However, it’s in the data we don’t have but need, in the tools that haven’t yet been invented but need to be, in the code structures & frameworks that are better than the binary, logic-box ones we inherited from Descartes & Bayes & even Turing, that a true AI that solves for and serves Humankind can emerge.
Nice categorizations. A sort of framework to think about all this – though we should not limit ourselves to a framework, rigidly, of course.That aside, I wish there was another kind or category – AI in PC software, so that over time it learns what you want to do and how, etc., and shortens your work 🙂
It may be useful to read LinkedIn engineer Greg Leffler’s view on chatbots:

* https://www.linkedin.com/pu…
Wow, nice clear framing! Freeze-dried intelligence: just add context.

That is what I call a gem post: it instantly rewires your head about a lot of things you kind of already knew, but in a way that makes them much clearer and more accessible for remix.

Calling it a draft, now is that you being humble or a subtle form of bragging? 🙂
“I am running into AI technology more and more in my daily life.”

Is this as a citizen/consumer, or as a tech investor?
I should have been more clear in my post. I meant as a consumer
Thanks for the clarification. May I have one or two examples, as I need a ‘fix’ on this subject?
watch this https://youtu.be/oZikw5k_2FM
thanks. so the technology will reside ‘under the hood’ but not be apparent to the end user?
Let’s presume you are right. How does this impact the startup world? A couple of people in a WeWork: how can they play in this when it is very much a resource-intensive game with a big incumbent advantage, no?
Making use of APIs initially. Someone, be it Amazon, etc. will eventually sell AI as a service much like AWS did it for the cloud. That’s a guess, of course. It’s also a case of David and Goliath.
Yes. For AI to be widely adopted by companies in varied industries, both big and small, it has to get better abstracted. Right now, in traditional enterprises (not internet companies like Google or FB who do this for a living), the AI / big data stack and the transactional stack run in parallel, and the data -> insight -> action loop is basically manual or non-real-time for most use cases. That is a big problem and opportunity in enterprise software.
Seems that you have given this a lot of thought. What do you think is keeping the AI/big data stack from joining the transactional stack?

Super insightful thoughts, thank you for sharing them.
AWS does sell AI as a service:

https://aws.amazon.com/mach…
https://developer.amazon.co…
Thanks, I saved that link to Pocket. Smart, they’re monetizing by allowing developers to do away with the complex algebra. Have you seen those machine learning equations, gee…. :-)
I’ve done a bit of it. Requires some mental bench pressing. Fortunately with online resources (Khan Academy, EdX, etc.) it is easier than ever to re-learn all the math I forgot from college. I actually recommend the book Data Smart ( http://www.amazon.com/Data-… ) to people that are new to Machine Learning. Great introduction to the math and concepts using Microsoft Excel, which makes it accessible to people wondering what the hell is this wizardry.
I’m listening to this Stanford lecture series on machine learning (although the algebra is so definitely above my pay grade!): http://youtu.be/UzxYlbK2c7E
Yes. I wondered if you had stumbled onto the Ng lectures. I did that some months ago and did a big upchuck. Don’t worry; you didn’t miss much.

The content is next to baby talk, with poor quality material and a really sloppy presentation. Some of the worst academic material I ever saw. Really low grade stuff. There are and have been some really good people at Stanford (K. Chung, D. Luenberger, H. Royden), with content polished, elegant, precise, powerful, crown jewels of civilization, nearly flawless, but Ng was not one of them. Good to hear that he has since left.

I watched several of his silly lectures and gave up; the content was not worth trying to clean up. Basically he is just doing some curve fitting. His use of the gradient descent method shows that his problem is relatively easy, because gradient descent doesn’t work very well. Better are conjugate gradients, Newton, or quasi-Newton, and there are more techniques still. And that’s just for the unconstrained stuff. This is ancient history, already old when I was at the beginning of my career.

Those Ng lectures have much that is good and new; however, the good is not new and the new is not good. You didn’t miss anything. Ah, artificial intelligence, with a lot of the first and none of the second. A book on that would be illuminating if ignited.

Those Ng lectures were one of the reasons I concluded that, so far, Stanford, etc. artificial intelligence and machine learning were 99 44/100% hype and the rest polluted baby talk.
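The optimizer point above can be made concrete. For a least-squares fit the loss is quadratic in the weight, so Newton’s method lands on the minimizer in a single step, while plain gradient descent inches toward it. A minimal sketch with made-up toy data (one weight, no intercept; not taken from the lectures):

```python
def loss(w, xs, ys):
    """Mean squared error of the one-parameter model y = w * x."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def grad(w, xs, ys):
    """Derivative of the loss with respect to w."""
    return sum(2 * x * (w * x - y) for x, y in zip(xs, ys)) / len(xs)

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.2, 7.8]   # roughly y = 2x

# Gradient descent: many small steps until the gradient vanishes.
w_gd, steps = 0.0, 0
while abs(grad(w_gd, xs, ys)) > 1e-8:
    w_gd -= 0.05 * grad(w_gd, xs, ys)
    steps += 1

# Newton's method: loss is quadratic in w, so one step is exact.
hess = sum(2 * x * x for x in xs) / len(xs)   # constant second derivative
w_newton = 0.0 - grad(0.0, xs, ys) / hess
```

Both arrive at the same weight, but gradient descent needs a dozen-plus iterations where Newton needs one, which is the commenter's complaint about teaching gradient descent as the default tool.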
Those lectures are not why I had my views, but they actually helped make your point clear to me.

“A book on that would be illuminating if ignited.” That’s a great line, BTW.
Wow. From following that link I found out about fraud.net (it was an example given by AWS). I was actually looking for a service like that.

http://aws.amazon.com/solut…
Well, our investment in Clarifai, which I’ve been writing about a fair bit lately, is about “AI as a service” and is available to any developer who wants to use it.
Thanks Fred–been super focused and busy and didn’t notice this.
I’ve been consulting on podcasts and asked to do transcripts. The cost of the popular professional transcript is $1.00/minute, and IBM Watson will transcribe fairly decently (90% of the way there) for $0.05 minute.
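Those per-minute rates compound quickly over a back catalog. A quick back-of-the-envelope calculation using the rates cited above (the hour-long episode length is my assumption):

```python
def transcription_cost(minutes, rate_per_minute):
    """Total cost of transcribing audio at a flat per-minute rate."""
    return minutes * rate_per_minute

episode_minutes = 60   # assumed hour-long podcast episode
human = transcription_cost(episode_minutes, 1.00)    # professional service
machine = transcription_cost(episode_minutes, 0.05)  # IBM Watson rate cited above
savings = 1 - machine / human   # fraction saved per episode
```

At these rates an hour of audio drops from $60 to $3, a 95% saving, which is why "90% of the way there" accuracy can still be good enough to win the business.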
Massive. Incumbents have a huge dataset advantage.
AI technologies have this interesting property: once they happen, they are promptly declared “non AI” and become a part of our daily lives.

What most people refer to as AI is commonly called AGI (Artificial General Intelligence) or “Strong AI”, the elusive concept of a thinking machine. We are no closer to AGI today than we were in 1956, when the term “Artificial Intelligence” was first introduced.
But we are moving to simulating it at a fast pace and there are opportunities with that. My personal bet is that the singularity will never happen anyhow so I wouldn’t wait too long for that one.
There are definitely opportunities, as the results of AI research can improve lots of current businesses.

For example, my company is using various AI techniques (a combination of NLP, good old fashioned expert systems and machine learning) to make the recruiting process “smarter” and much more efficient. Our product will not bring the singularity one step closer to reality, but we’ll use byproducts of the research that is trying to achieve that.

This brings up a question: should we market ourselves as an AI company?
Based on the current general adoption of the term, on the fact that three distinctions have been made (ANI, AGI & ASI), and on the general appreciation that we’ve only reached the first level, I’d say definitely yes, call yourselves an AI company. No one will think you’ve actually invented HAL 9000; but when you do, let me know, I want one.
Interesting that expert systems don’t seem to have been mentioned much in mainstream tech media for quite a while. (Maybe I am wrong on that.) They were one of the early applications of AI that had concrete benefits, as I have read.
Instead of marketing yourselves as an AI company, why not, using your own statement:

>There are definitely opportunities, as results of AI research can improve lots of current businesses.

market yourselves as a company that “uses the results of AI research”? Or if you want to make it sound better, “we strategically deploy various cutting-edge and established AI techniques”. /JK about the last bit. Oops, that’s getting near marketroid territory.
I think that a company’s identity has everything to do with the product or service you sell. It can be adorned, projected, boosted with a secret sauce but at the end, you are what you sell. Maybe using AI makes your company smarter and more competitive.
Turing’s “Thinking Machine” is not even AGI or Strong AI, though. Turing himself wrote about the “uncomputables” that he couldn’t solve for: uncomputables related to Quantum Relativity and to human experiences of conscious self-awareness (which, in fact, affect our Natural Language and mental frames of reference).

Strong AI (AGI) would be a CONSIDERATION SYSTEM capable of “thinking with care”.

One of the issues with Turing logic and game theory (which then goes into programming AI such as AlphaGo) is that nowhere in Nash’s or any other game theory are the emotional perceptions that dynamically inform the entire duration of the game process factored in.

So innovation would need to happen simultaneously at the parent-root levels for General AI.
Across-the-board reengineering is needed for AGI. The low-hanging fruit of narrow, mathematical AI is easy, and that’s what investors will back, because they’re risk-averse and few have the technical + operational frames of reference to venture beyond low-hanging fruit. Only the extremely brave and pioneering will seed entire new trees and Amazon forests for data, AI and economics.

I’m a former investor, so I know how these processes work.
Couldn’t believe the Tay incident. What do you think about Nvidia’s new chip?
The Tay incident was inevitable, given that most NLP structures are about frequency counting of words and their normal distributions over a vector space. So bots attack the algorithm by increasing the value of n (the frequency of any word, including a racist, Hitler-referencing one) and voilà: MS Tay has no understanding of what it’s parroting back with some variance in word frequency, order & association.

Nvidia’s new chip has the same issues as other quantum chips: they are binary I/O, regardless of how they’re stacked in the diamond lattice. We want to get to a more genetic, neuromorphic type of chip, but will have to make do with increased speed of processing and probability correlations from the likes of Nvidia, rather than better velocity of understanding of data.

Lots of things are still to be solved by tools invention, so that makes it a great time for tech, despite the slump of unicorns, the diversity problem, the gap between what’s needed and what investors will invest in, etc.
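The "increase the value of n" attack described above can be shown with a toy frequency-driven parrot (a hypothetical few-line bot, purely illustrative, far simpler than Tay):

```python
from collections import Counter

def parrot_reply(corpus):
    """Toy bot: echoes whichever phrase it has seen most often."""
    return Counter(corpus).most_common(1)[0][0]

corpus = ["hello there", "nice weather today", "hello there", "how are you"]
benign = parrot_reply(corpus)      # most frequent benign phrase wins

# A coordinated attack simply repeats one phrase to inflate its count.
corpus += ["toxic phrase"] * 10
poisoned = parrot_reply(corpus)    # the flooded phrase now dominates
```

Nothing in the model understands the phrases; raising a phrase's frequency is enough to change the bot's behavior, which is the vulnerability the comment describes.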
It was badly written NLP. They could have controlled that vector space. They didn’t.
Ah well now … What if natural language isn’t, shouldn’t be and can’t be parameterized by vector spaces?Vector spaces are the basis that Google, MS, Facebook, Baidu, IBM Watson are all doing Nat Lang AI with.What if it’s as fundamentally amiss as “Earth is flat” vs “Earth is round”?
(also @ShanaC) Totally agree; my point regarding Tay was more from a strategic viewpoint. You could put Tay out there for media exposure and then take it down after achieving that. But to put it back up immediately like that was stupid. I’m doing okay with my Microsoft stock, but this move is not good for the NLP field, where it leads to confusion.

At the same moment in time you already had the wrong move from Hanson, which was a matter of filtering, and it had gone (though less) viral.
Maybe instead of mimicking end-point biological intelligence mechanisms, it might pay some dividends to identify biologically recurring collective-coherency organizing dynamics/principles and the end-point strange-attractor organizing wells around which such distributed-coherency / collective-intelligence can emerge/coalesce.
There is talk of “reverse engineering” the brain, and ideas about back propagation from those end-points. These ideas borrow from mathematical constructs of commutativity and equivalence on both sides of a linear algebra equation. For example:

If a = 1, b = 2, c = 3, then a + b + c = b + a + c = c x b x a = 1 + 3 + 2 is true.

However, language does not work like that.

If a = person, b = wears, c = hat, then “person wears hat”, “wears person hat”, “hat wears person” and “person hat wears” are all different end-outcomes, and they have different meanings. Even with three words as simple as these, the machines are incapable of working out their meaning, because of the inadequacy of the maths.

Then when we get to more complexity, e.g. sentences with subjunctive tenses (expressions of happiness, doubts, fears, etc.), the current frameworks like Google’s word2vec try to deal with it using probability and proximity in a vector space. Well, it gets overlooked that probability was invented to measure the randomness of UNBIASED dice, while when we humans express ourselves there are always SUBJECTIVE BIASES. I even asked Greg Corrado, one of the creators of word2vec, if it can deal with subjunctive tenses, and his answer was, “No.”

I believe in collective intelligence, so I built my system with crowd-sourcing mechanisms as de facto. In this way, humans will always be calibrating the definitions of the machines, but not in a “let MS Tay run wild” way.

Frankly, if the MS Tay team had had a woman on it, they’d never have released such a bot in such a way. Mothers don’t let their 2-year-olds run wild saying silly things without prior social conditioning, contextualization and rules.
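The "person wears hat" commutativity point can be made concrete. A bag-of-words representation (the order-free counting at the heart of simple vector-space NLP models) assigns identical representations to sentences with different meanings. A minimal sketch:

```python
from collections import Counter

def bag_of_words(sentence):
    """Order-free representation: a sentence reduced to word counts."""
    return Counter(sentence.lower().split())

a = bag_of_words("person wears hat")
b = bag_of_words("hat wears person")

# Different meanings, identical representations: word order is
# discarded, exactly the commutativity objected to above.
order_blind = (a == b)
```

Any model built on top of such counts is, by construction, unable to distinguish the two sentences, however much data it is trained on.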
So why anyone would release something like MS Tay without these factors makes no sense.

AI is simultaneously a maths problem, a business model problem, an investment problem and a technology problem that needs solving.

How does cracking Natural Language apply to Clarifai and the vision recognition in autonomous self-driving vehicles? Well, suppose in the logic code:

a = child, b = crossing, c = street, d = with, e = dog

Sure, the machines can recognize that a child, a street and a dog are present in the scene. However, because of the limitations of the maths, the machine doesn’t understand the meaning of the order of importance, and they’re all treated as commutative and equivalent (per my example above), when they are not.

So this is the type of work I do, and the areas of AI I’m interested in and trying to fix. Every day for the last few years I’ve wanted to quit. It’s been especially hard because:

(1.) There is no prior literature on how to solve problems like these. If it was a standard Deep Learning problem then there’s plenty of material from the mid-1970s onwards; this is what the likes of Clarifai and MS Tay have references to.

(2.) My friend advised me, “Go seek out people who’ve done the thing you want to do successfully, many times.” Well, no one’s cracked the Natural Language problem, and I certainly don’t want to repeat the same-old-same-old Deep Learning approaches that haven’t worked for NLP.

(3.) I’m a woman.
We’re subject to conscious and unconscious biases, left+right+center.Thankfully, I discovered AVC community a couple of years ago which keeps me positive and interested in tech and determined to stay in it because there’s already a huge deficit of women in STEM.Otherwise, I would just quit and go and do something else — like renovate homes with things I find on my travels or travel photography.
Awesome comment. Good luck and best wishes on your project. Solving for the uncomputables is a really big deal. Though some of it is likely beyond the scope of machines forever, every little step forward represents big progress.
(include @tyusupov:disqus) Mistakes are made and it is a matter of limiting fallout. I can tell you that many involved with AGI are very committed and are able to take criticism/advice.
“Uncomputables related to Quantum Relativity and human experiences of conscious self-awareness (which, in fact, affect our Natural Language and mental frames of reference).”

Or perhaps “Quantum Relativity” itself is simply an intelligence/perception-limiting artifact that attends all self-referential modelling processes? We live in the self-referential percept/precept cheap seats.

Or as Alan Watts liked to say, “a knife can’t cut itself!”
Twain Twain: when you expound on technology, we are all ears.
How powerful does Artificial Intelligence have to become before we start referring to it as Alternate Intelligence? Or is this term destined to be permanently anchored in our carbon-chauvinism? :>)
You would need a strict definition of autonomy.
It’s likely not a matter of degree but mostly just a matter of kind. E.g., mice, crows, kitty cats, seem to be intelligent in ways mostly like but somehow weaker than human intelligence. So, mice are the same in kind but not degree.Computers are very different in kind.I have an outline of how to program a computer to be similar in kind, but I have another project now. If the idea worked, then maybe with just the speed and capacity of current hardware one result would be strong in degree.
Though it seems so superficially, “artificial” intelligence has never really been about creating other intelligences. It’s always been about understanding our own intelligence. We create these machines to emulate us. And even when we create them to behave in ways we think “alien” intelligence might behave, it’s still about what *we* think, and how *we* see ourselves relative to the rest of the universe.As Thomas Nagel points out (rather famously), we can only *ever* be anthropocentric. https://www.cs.helsinki.fi/… – for an intelligence to be anything other than some form of anthropocentrism, it would have to come from some other source than humans. But then, it would be centered on whatever source that was.
Pretty much. Reason being, it’s just harder probability and math to compute. And there are always harder problems out there
It’s not just harder math. It is that some AI problems may simply not be computable. And that might be a good thing. Robots with feelings, etc.? Ugh.
Twain talks about this a lot.
It doesn’t matter much what we call it. When I start talking to my phone the same way I talk to an assistant and can expect the same level of proficiency and performance, I don’t care whether it’s marketed as AI or not.
Fred,

Arguably, to know you are interfacing with AI implies you are not. When things get easier under the hood, and do what you would have done had you realised the need, then you are a) unaware, and b) really interfacing with AI.

So where are the opportunities? At pain points or friction where the need is well defined and the value implicit. Commodity buying is an obvious one. In these areas we will not even notice the optimisations.
The link to the Wolfram AI Vid / Audio is a must for one’s mindset: https://www.linkedin.com/pu…
Doesn’t seem to link to a podcast.
My error Mario, should be Audio / Video, long day.
He’s still doing his cellular automata and artificial life stuff?
“sloppy stuff will be regarded as a skunk at a garden party.” “Math won’t sell — people learned to hate it in high school.” “He’s still doing his cellular automata and artificial life stuff?”

I look forward to reading your contribution(s) on Edge.org on anything. Speaking as a simple atom, Wolfram’s post changed my entrenched mindset. The other links on the same thread might also help to change the mindset of others unaware of sigmaalgebra’s peer reviewed articles.
And a thousand pitch decks are instantly revised. Search and replace “decentralized platform” with “AI”. Sorry. It may be that declaring something “first” is necessary to spawn innovation and investment. Or maybe we all just need to be forewarned that our lives are increasingly going to be, or mostly already are, governed by intelligent agents. And maybe that’s the value of such a statement. Be forewarned?
And I bet you didn’t even mean for it to sound sarcastic, ha ha!
Somewhat more constructively than my original note: there is research, at least on a theoretical level, into the concept of “autonomous agents.” Such an agent solves for the customer, including allowing us to fetch our data from across many “silos” to leverage as we choose, as well as to derive insights. Yes, there are many, many dependencies to making this work, or not. But at least it puts AI in the context of human interests and our desire for agency even as we embrace amazing new technologies.
We call that “right to be represented by a bot” internally
Cheaper than a lawyer….
I doubt that, although there may well be different categories of cost at work here?
What? 🙂 An under-$30/mo SaaS offering vs. a $300+/hr lawyer — that’s what I’m talking about here.
I should have ended with 🙂 instead of ?
Rights may vary by income!
Agents go way back in computing. As I understood it, it was mostly just software architecture, and there much of the idea was around the work then on object-oriented communications. There was some standards work, CMIS/CMIP — here I’ll try to unwind the acronyms: common management information services/protocol, IIRC from some UN-funded standards group, and IIRC generalizing some Unix work — based on ASN.1 (abstract syntax notation one), a MIB (management information base), etc.

In simple terms, it was to put an agent in a printer and have it notice when the paper level was low and use the communications to send alerts back to system management. The MIB was some data that described the printers, scanners, security card readers, furnace, air conditioners, backup batteries, etc. The ASN.1 was essentially the syntax of the data sent. The CMIS/CMIP was some stuff about how to specify more general syntax. The object stuff was to let the description of a new printer inherit, i.e., copy, its description and messages from an older printer. So, the objects were just chunks of data sent via communications (they were not software in any sense, although a LOT of people got totally confused on this), where the hope was to use inheritance to simplify the data definitions. Then, what to inherit from? Sure, a big hierarchy, including one that was public and spanned the world (the UN seems to like such things), with how to register new objects, etc.

About then XML came along (from the W3C, whatever the heck it stands for) and threw away nearly everything I mentioned above.
It turned out XML was mostly just some key-value pairs, but with, IIRC, some possibilities of more complicated values that were hierarchical data aggregates.

There was, from Google: “The Common Object Request Broker Architecture (CORBA) is a standard defined by the Object Management Group (OMG).” At one time all that stuff was regarded as really hot and important.

When I encountered that stuff, I was outraged by the nearly brain-dead bureaucratic bloat, heavily European. So it looked like the main objective was to get to serve on committees and go to meetings in fancy places and have long lunches and dinners with lots of Beaujolais, fresh raspberries, etc.

At best, the stuff is essentially just some software and communications architecture with no significant content, about as meaningful as the standard for an AC power plug or a USB plug.
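The printer story above can be made concrete. This is a toy sketch, not CMIS/CMIP or SNMP themselves (and the device name and threshold field are hypothetical): a device agent watches a metric and emits a management alert when a threshold from its MIB-style description is crossed.

```python
# Toy illustration of the "agent in a printer" idea: a MIB-like table
# describes each device; the agent polls a reading and sends an alert
# record back to system management when the threshold is crossed.

MIB = {
    "printer-3rd-floor": {"metric": "paper_level", "low_threshold": 50},
}

def poll(device, reading):
    """Return an alert record if the reading crosses the device's threshold, else None."""
    desc = MIB[device]
    if reading < desc["low_threshold"]:
        return {"device": device, "metric": desc["metric"],
                "value": reading, "severity": "warning"}
    return None

print(poll("printer-3rd-floor", 20))   # alert fires: paper is low
print(poll("printer-3rd-floor", 400))  # no alert
```

The real standards wrapped this pattern in ASN.1 encodings and object hierarchies, but the data flow — described device, polled metric, alert message — is the same.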
AI in finance will be huge. So much of it is rules based.
AI in finance was huge pre-2008, and that cycle is simply repeating itself:
* http://www.nytimes.com/2008…
* http://www.nytimes.com/2016…
thinking it goes deeper: accounting, etc.
Without exception, all the AI models in finance go first to the publicly disclosed accounting reports of the blue chips and whatever financial data is available for private companies, e.g. ChubbyBrain, SEC company filings, etc.

That AI will, potentially, eradicate the role of analysts and a whole raft of functions in the finance sector is commented on in this NYT article:
* http://www.nytimes.com/2016…

Look, I’m pro-people and pro-AI, so I find current developments fascinating. As consumers, we’re sold this vision that AI will solve quant problems faster than the human brain can, and so we should just hand all of that over to the machines. Well, Wall Street did that pre-2008, and look what the outcome was.

Paradoxically, at the same time, it means potential mass unemployment, so what’s the answer there? Universal Basic Income:
* http://www.techinsider.io/s…

On top of that are the whales in the room about the machines not understanding our natural language, our cultural diversity and our values (as individuals, as group collectives, as country society and as global conscience).

Unfortunately, it doesn’t seem as if SV has really thought through the whole economics + technology + humankind factors involved in this AI revolution in coherent ways. They bolt on these building blocks that are incongruent with each other whilst forgetting that the AI brain may be logic-box shaped but we humans are human-shaped.
Your first link is an NYT piece about Nadler which has him doing essentially just cross tabulation, that is, estimating conditional probabilities. When he grows up and starts shaving, then we will let him learn about analysis of variance. But we will let him tell his semi-brilliant customers at GS that it’s AI. Maybe then he can get a 2BR place. Then when he learns about girls, maybe he will have money enough for a place in the burbs! Once again, the NYT wasted time!
Spending 15 years on Wall St, I left still looking for the regular I 😉
What did James Simons do?
Current AI tends to live in the datacenter, close to collected data, and some of it is being exposed through APIs as SaaS. This enables small developers who want to play with AI, which is good. What I would like to see is AI in appliances, in trained ASICs, so that it can be used autonomously. However, I don’t know if the powerful business models that fuel ad tech companies will allow this to happen right now.

I would like to see Clarifai (or competitor) trained ASICs in the market during the following decade, and a move away from the aging client-server internet to a more graph-oriented and distributed application network, which should lead to distributed AI. But these are just wishes.
I’m increasingly curious whether the big breakthroughs in AI will occur in my lifetime. My perspective is a little different on what shape AI may take: as Augmented Intelligence tools.

We already rely on the cognitive assistance of Google search to clarify vague or unproven memories. We can lose ourselves reading links of data on the web, increasing our understanding of obscure areas by reviewing multiple perspectives.

Imagine skipping the device search bit. We could all have extended memory, unimaginably vast research abilities, immediate language translation, and bewildering reading and comprehension capabilities. We will consume “books” of knowledge in moments instead of hours. Communication will evolve into digital ballets where deeply shared feelings and beliefs occur in the blink of an eye. People will be capable of sharing their entire life experiences faster than I can type this comment.

That’s my 2 cents on AI. More on extending consciousness through augmentation: http://victusfate.github.io…
Definitely utopian, but as a bridge from “here to there”, what about some sort of simulated approximation of HAL 9000 to fill the gap? It would seem omnipresent as it would not reside on a given device but be accessed from a multitude of devices. Wouldn’t that rock your boat? That would do it for me… if executed “Apple style”.
“We will consume ‘books’ of knowledge in moments instead of hours. Communication will evolve into digital ballets where deeply shared feelings and beliefs occur in the blink of an eye. People will be capable of sharing their entire life experiences faster than I can type this comment.”

That will produce even more anxiety in people. More need for drugs, more need for yoga, to unwind.
That’s only if it works. So, the good news is, likely it won’t work! Relax! Maybe they will get you next time!
RJ Mical from Google Gaming spoke about this at MIT this week. He raised many interesting issues, as did the Q&A session. Old brain and new brain areas are a meaty topic to weigh in on, which a psychiatrist dug into with him afterwards. The talk got the whole conference thinking.
Google has focused, and is focusing, on the mathematical abilities of the neocortex, as is clear from Ray Kurzweil’s TED Talk from March 2014, and as can be seen in the implementations of AlphaGo and all the mathematical games they train their AI with. Given the mathematics and CS backgrounds of Google’s founders and their inner circles, this is to be expected.

By comparison, notice the backgrounds of Facebook’s founder and inner circle: psychology, economics, CS. So these variations are part of the DNA of the techcos and of which parts of the brain they want to replicate in the AI.
Facebook hires the same math people via Yann LeCun
Yup, and then the maths people at FB all have a love-in about which mathematical game they can train their AI to beat faster than Google; reinforcing the maths biases of the systems rather than enabling them with natural language understanding, which is the harder problem:
* http://gizmodo.com/facebook…
i feel we are still going to remain mobile first rather than AI first. AI is like college sex: everyone talks about it but very little *significant* is happening at mass scale. I am still very often disappointed by the so-called AI agents, which are just a funky way to rebrand algorithms and conversational agents. I feel that the true trend is more about an interface-less future (Amazon Echo-like, Magic Leap, …). That would be the biggest breakthrough for our future: not having to hold a device in our hand to get something done.
i don’t yet see how we will be able to not carry devices. how do we connect and interact in real time?
What if it’s not about that? What about AI that resides in the “ether”, as Fred has called it, and appears to “follow” you by being accessible from a number of devices that, in the aggregate, form a network?

You’re at home, the AI identifies you through the cameras in your house, and it engages with you as it recognizes that you’re alone. This is one-on-one time for it to get to know you better. Later on, you are in line at the grocery store and you ask it for advice through a chat bot. Meanwhile, it sends you notifications that it has found what it thinks is a qualified plumber (one of the tasks you had assigned it) and it is subsequently offering you, by email, a choice between a few suggested times to be present for the leaky pipe repair. It offers a few profiles of individuals it has determined you’d likely benefit from getting acquainted with, and so on, ad infinitum.

All somewhat utopian, but worthwhile to strive for.
so this is a world of AI + IoT, an all-pervasive ‘mesh’, the very fabric of the realm we inhabit, 24/7, and never off?
Take a break by going to an AI-free resort, he he he! Possibly, that’ll be the jail we’ll likely be creating for ourselves.
oh joy 🙂
Everything will be embedded into our environments. It’s when general AI converges with IoT, Blockchain and AR (I’m not keen on VR because of natural vision obstruction and the physiological adaptation required to counter-balance nausea issues) that things will be really interesting, imho.
“It explains how the CEO of one of the most important tech companies thinks about where tech is heading and where his company is heading.”

And yet they are getting out of robotics after buying the company in 2013:

“But behind the scenes a more pedestrian drama was playing out. Executives at Google parent Alphabet Inc., absorbed with making sure all the various companies under its corporate umbrella have plans to generate real revenue, concluded that Boston Dynamics isn’t likely to produce a marketable product in the next few years and have put the unit up for sale, according to two people familiar with the company’s plans.”

http://www.bloomberg.com/ne…
Those are grand aspirations, and I’m sure at some point they/we get there, but this makes me think about Google’s overall strategy: how do they make AI work efficiently across their existing platform, and doesn’t this path eventually lead to reducing their own people/physical infrastructure? That might just build a much different culture…
The psych came w me. He’s quite civilized.
My apologies, I couldn’t resist a little satire.
So the ’80s were called the AI Winter. Wonder what this period will be called later – the AI Spring? Or is the progress too little so far to call it that? Time will tell.
Fred, I worked on an AI project at IBM’s Watson lab and published peer-reviewed original research in AI, but IMHO, so far there is nothing, not even as much as zip, zilch, or zero, that deserves to be called AI. Nothing. Moreover, I expect nothing for a long time. Fred, it’s a free country, and you can call anything you want that uses electricity AI. Instead of AI, what there is is some software that looks smart:

(1) Long ago we had software that played a perfect game of the matchstick game Nim. The algorithm is in Courant and Robbins, What Is Mathematics?. Once I used that in the C-suite at FedEx to blow away everyone else and got begged to explain how to do it. It looked really smart, made me look really intelligent.

(2) We have some amazing software for playing chess. But chess is a game with some special properties, as shown in, say, T. Parthasarathy and T. E. S. Raghavan, Some Topics in Two-Person Games, ISBN 0-444-00059-3. So it is possible to play a perfect game of chess, and in perfect play either White can force a win, Black can force a win, or both can force a draw. For a perfect game, just do a tree search. So far we don’t have a really cute solution such as we have for Nim. So, a tree search it is. Of course, for now, the tree is too large to search fully. So, we approximate and have some heuristics to evaluate parts we don’t search. Works great. Nearly always beats the pants off human players, including the ones who wrote the software. Intelligent? Ah, it couldn’t even wash the dishes. It couldn’t figure out how to pop the hood on my car.

(3) How to evaluate financial options and make a lot of money? Back in the 1940s, S. Kakutani showed that one can use first exit times of Brownian motion to solve the Dirichlet problem, and that solves a lot of problems of exotic options. Since then there has been more from J. Doob, I. Karatzas, S. Shreve, R. Blumenthal, K. Chung, and others. A special case is the Black-Scholes formula some people made money with, likely including E. Thorp.
What’s going on with what Kakutani explained can look darned smart. In particular, one can get a proof, sit down for this one, of the Hahn-Banach theorem!

(4) One of the applications of AI I published about was the monitoring of server farms and networks. But if, instead, one borrows from some work in ergodic theory and group theory and taps lightly, one can get a much better solution. That can look just astoundingly smart. For something still smarter, for “fraud indicators”, with enough more data, and these days there is a lot of data, one can get as close as you please to the best possible result of Neyman-Pearson, e.g., as in https://news.ycombinator.co…

Ah, some applied math based on J. Lagrange, H. Everett, etc. Could look just astoundingly smart for fraud detection. If over time you carefully watch false positives and false negatives, then it will look brilliant beyond belief.

There are huge oceans of more that can be done with mobile, the Internet, servers, data, and software based on some pure/applied math going way back, along with some more recent results. But intelligent? Not a chance.

Given a problem and some relevant data, how to manipulate the data to get a good solution to the problem? Sure, and the candidates are: program what we would have done manually, use intuitive heuristics, do cases of essentially curve fitting to old data, or actually derive some appropriate math. And may I have the envelope, please? And the winner is [drum roll, please]: math!

Why math? Because you get to take properties of the real problem, e.g., obvious cases of probabilistic independence, e.g., the future moves of Brownian motion, as hypotheses of some theorems, prove the theorems, and use the conclusions of the theorems to say how to manipulate the data. E.g., in a right triangle, given the lengths of the two legs, how to find the length of the third side. How to move in Nim. How to detect fraud. How to trade exotic options. How to do optimal stopping of a closed-end fund?
Where is a good Lagrangian point for the new Webb space telescope? How to design a linear filter that will best separate signal from noise? Theorems and proofs are by a wide margin the most solid information in our civilization, and now they can make part of the logical path from important problem to valuable solution more valuable. AI? The best of it has long been, and long stands to be, some good applied math given a different name.
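A concrete footnote to item (1) above: the perfect Nim strategy referenced from Courant and Robbins has a famous closed form. XOR the pile sizes ("nim-sum"); a position is lost for the player to move exactly when the nim-sum is 0, and otherwise there is always a move restoring nim-sum 0. A minimal sketch (normal play, where the last to take wins):

```python
# Perfect play for Nim via the XOR (nim-sum) rule.
from functools import reduce
from operator import xor

def winning_move(piles):
    """Return (pile_index, new_size) forcing a win, or None if the position is lost."""
    nim_sum = reduce(xor, piles, 0)
    if nim_sum == 0:
        return None  # every legal move hands the opponent a winning position
    for i, p in enumerate(piles):
        target = p ^ nim_sum
        if target < p:  # legal only if it shrinks the pile
            return (i, target)

print(winning_move([3, 4, 5]))  # (0, 1): shrink pile 0 to 1, leaving nim-sum 0
print(winning_move([1, 2, 3]))  # None: nim-sum already 0, a lost position
```

The "looks smart" point stands: the program plays perfectly, yet the whole strategy is a dozen lines of century-old math (Bouton's theorem), not intelligence.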
That’s open to debate. Actual intelligence, as in awareness of being aware, IMO, will never be reachable by a machine, but that’s a philosophical rabbit hole to be getting into; and proving it conclusively could take centuries. However, we have started to make computers approximate the behaviour of the thinking mind, although without its sentient capabilities. And with the classifications of ANI, AGI and ASI along a gradient scale, it’s perfectly workable to refer to the entire range as AI. It’s semantics, and it’s about finding the most useful way to facilitate communication on the topic with the least amount of hang-ups.
> Actual intelligence, as in awareness of being aware

For the future, I don’t see that as a problem. E.g., a Windows operating system knows that it is an operating system, occasionally says “Windows shutting down now” or some such. As a computer has more data about the world and can do more with it, then it will be able to say more and start to look, say, generally aware, including being aware of being aware. I see no big philosophical chuckholes here.

> However, we have started to make computers approximate the behaviour of the thinking mind

So far I’m from not impressed down to I don’t believe it. Indeed, I wrote: “Instead of AI, what there is is some software that looks smart.” E.g., I don’t believe that the way, that is, the internal data processing of the way, computers play chess is anything like the way humans play chess. Instead, now often computers can do some work, e.g., play chess well, that we used to believe required high human intelligence, and do such work better than any human. But the computer, e.g., a chess playing program, still shows no human intelligence, e.g., can’t pop the hood of my car. E.g., change the rules of chess so that, say, each bishop is a queen. Now humans still play just fine, but computers don’t. For a huge range of such changes, humans do just fine, but computers don’t. No one knows how to write a chess program that can accept all the changes in the rules that humans could handle. Humans are intelligent; computers just do what the heck they are told.

Computers are really good at spell checking, and we used to think that kids who won spelling bees were really smart. But computers do it with a disk drive, and humans don’t.

I’m not big on undefined three-letter acronyms; since often they are not in Webster’s, they are not really part of the English language.
So, for your “ANI, AGI and ASI”, I have no idea what the heck they are.

> It’s semantics.

IMHO it’s hype. We have what we need with the words heuristics, algorithms, statistics, digital filtering, control theory, applied math, etc.

Some more such hype is machine learning. From what I’ve seen, e.g.,

Trevor Hastie, Robert Tibshirani, and Jerome Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction, Second Edition, Springer, 2008.

Kevin P. Murphy, Machine Learning: A Probabilistic Perspective, ISBN 978-0-262-01802-9, MIT Press, 2012.

Shai Shalev-Shwartz and Shai Ben-David, Understanding Machine Learning: From Theory to Algorithms, 2014.

the new is not very good, and the good, not very new. A huge fraction of the good is some quite old applied statistics, especially versions of old regression analysis, but they left out a lot of quite relevant, old statistics, e.g., essentially the whole fields of categorical data analysis, analysis of variance (experimental design), and statistical hypothesis testing.

Sure, with such applied statistics, first take in some data. Well, computer science calls this learning. Amazing that for 100 years or so the statistics community didn’t have to use a word, learning, that tried to imply intelligence. To me, doing some applied statistics and calling it learning is anywhere from deliberately confusing, ordinary hype down to academic fraud. I’ve done plenty of work in mathematical and applied statistics, stochastic optimal control theory, and applied math (including my present project) that took in data and manipulated it, but I never called it learning.

Gee, maybe I should take my current project in Internet content discovery and recommendation, meeting needs both Fred and Ben Evans have described plus some, and call it personalized, intelligent search or some such.
Hmm, then suddenly Fred would write me a big check and make me a unicorn — I’m not holding my breath for that!

Gee, the Jet Propulsion Lab (JPL) took in data on the positions of the planets, moons, etc. in our solar system, applied Newton’s second law and law of gravity, maybe in some cases with some corrections for general relativity, and calculated, e.g., with some good work from the field of numerical analysis for accurate numerical solutions of ordinary differential equations, how to navigate and control spacecraft, including to Pluto (big, nearly round, very cold rock). But I don’t recall that the JPL claimed to be doing solar system learning. The learning JPL used was from Newton and Einstein and the applied math. E.g., the control theory might have been from Michael Athans and Peter L. Falb, Optimal Control: An Introduction to the Theory and Its Applications.
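The “old regression, now called learning” point above can be made concrete. Fitting y ≈ a·x + b by ordinary least squares, classical statistics, is exactly what an ML library would call “training” a one-feature linear model. A minimal closed-form sketch:

```python
# Ordinary least squares for a line y = a*x + b, the textbook closed form:
# slope a = covariance(x, y) / variance(x), intercept b = mean(y) - a * mean(x).

def fit_line(xs, ys):
    """Return (slope, intercept) minimizing the sum of squared residuals."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Data generated from y = 2x + 1 is recovered exactly:
a, b = fit_line([0, 1, 2, 3], [1, 3, 5, 7])
print(a, b)  # 2.0 1.0
```

Whether one calls the `fit_line` step “estimation” (statistics, ~100 years old) or “learning” (computer science) is exactly the naming dispute in the comment.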
Look, you’ve obviously put a lot more thought into it than I have, and it’s obvious you have strong convictions about it; and so instead of asserting that I’m right, or saying that we simply agree to disagree, I’ll consider what you’ve said in a new light and draw conclusions anew after the fact. Thanks for taking the time to share your views so comprehensively.
Update: Actually, I get your point that it’s just mathematics and it is kind of a fraud to dub it machine “learning”.Unfortunately from your perspective, people will keep calling it what they want, even if it’s essentially inaccurate. Same with AI; and so you may want to brace yourself for that 🙂
Yup. But when I see another one of those solid, round, brown, smelly things floating in the lake, I still get offended!

Funniest scene in all of movies, in Caddyshack: after they drained the pool, it was just a candy bar, and when the guy picked it up and took a bite, the highly proper, hysterical old lady totally lost it! This time, it’s not a candy bar.

The good thing about hype, including the bad hype, is that it doesn’t last very long. So, we had an AI winter. Now the hype machine is trying for an AI spring. Then there will be summer and winter again. One thing it won’t be is anything with intelligence beyond some chess playing program. It’s an old publicity scam going way back, at least to IBM’s vacuum tube computers that were called “giant electronic human brains”. Good to manipulate little biological human fools. Math won’t sell — people learned to hate it in high school.

Maybe now I’ve found a case of real intelligence! I liked beige computer parts; with the new black ones, I can’t see anything without a few hundred watts of light. Well, I’ve typed enough to wear out several keyboards, e.g., the 80,000 lines of typing for my project made a contribution. I’ve used epoxy to do some repairs, but really I need a good, new keyboard. And I don’t like the present keyboards, since too often I’m not sure if I really hit a key hard enough or not. That is a very, very old story, and not everything IBM did was wrong. Instead, they worked their little tails off designing some darned good keyboards.

Well, at Amazon I happened to see that there is a cheap converter from a plug for an old IBM PC/AT keyboard to a more recent IBM PS/2 keyboard socket or extension cable. And it looks like just wires with no electronics. So, really? That means that with such a plug, I could use, right, from upstairs with some old stuff, an old IBM PC/AT keyboard, in beige, and WITH keys actually very well designed.
The housing is made of steel — the thing may outlast the pyramids, and don’t drop it on your foot. The cable is only a little smaller in diameter than my index finger — maybe strong enough to support my full weight.

So, I brushed the dust off that old keyboard and checked: right, all there: Esc, Alt, Ctrl, Home, Page Up, Page Down, Delete, Print Screen, Left, Right, Up, Down. Looks like enough is there. For the Windows key? That is just Ctrl-Esc anyway. Sure, now Print Screen copies to the clipboard, but that is a Windows function, not a keyboard thingy.

So, from Amazon, the adapter will be here in a day or two! Ah, the power of real intelligence! I get a good keyboard, rugged, in beige, for a $5 or so adapter! Try to program that with your machine learning!
I think awareness of being aware is a human-only possibility. But that is my personal belief. It is also deeply related to the question of being conscious and what exactly we mean by being alive or dead.

On AI being just math and statistics, you are correct. But the larger point is that applying these techniques to the data that flows from our lives (and from business operations), at the scale of billions of people and trillions of touch-points, is new and recent. And we have made progress in figuring out how to create tangibly better outcomes through that process, with the potential of improved productivity, more convenience, higher revenues, optimized supply chains, etc.

It is fascinating more for what it can accomplish, even if what it is at its core may be just old wine in a new bottle. I will also argue that aspects like voice and video recognition, streaming analytics, etc. have come a long way. These are genuine breakthroughs in terms of technologies that feed AI.
Yup. But we could do a lot better if we just went ahead and were clear that for the good stuff we’re talking about applied math.

Part of the difference is that with the math we know we have some assumptions to check. With AI, what the heck are the assumptions? We don’t hear about those. E.g., if we do statistical polling, then we understand that we need a sample at least as good as a simple random sample. With an election predicted by AI, what input data is assumed? Good applied math is much higher quality stuff, much easier to evaluate and trust.

AI is what, like selling cosmetics based on the assumed magical properties of “oil of the turtle”? In serious work, likely in medicine, national security, serious high-end engineering, and hopefully in the more important parts of finance, that sloppy stuff will be regarded as a skunk at a garden party.
Plato: “All learning has an EMOTIONAL BASE.”

Ergo, by Aristotelian and Socratic logic, “deep learning” machines (aka neural networks and graph theory, rebranded) CANNOT BE LEARNING, since not a single one of those systems from Google, FB, IBM Watson, etc. has an emotional base. They can’t have an emotional base because (drum roll, please) … MATHS HAS NEVER HAD AN EMOTIONAL BASE!!!

And, since maths has never had an emotional base, the economists have also been unable to correctly model our emotional experiences as consumers. No one — not even the John Nashes, Alan Turings, John von Neumanns, Albert Einsteins and preceding geniuses — has ever created an emotional base for maths. And it would make no sense to do a standard probability distribution on emotions.

So this is a root cause for why Google, FB, IBM Watson, Apple Siri, Baidu et al. are STUCK and can’t solve the natural language understanding problem, and the only thing they can iterate on is the narrow, quant-biased AI that’s expedient with established maths (e.g., functional mechanics, logic, probabilistic correlations).

@tyusupov:disqus @davewbaldwin:disqus @ShanaC:disqus @creative_group:disqus @SubstrateUndertow:disqus @lawrencebrass:disqus @vasudevram:disqus @VictusFate:disqus @le_on_avc:disqus @JLM:disqus @fredwilson:disqus — Nothing is “uncomputable”. Turing wrote that paper in 1937, before neuroscience and genetic-DNA research emerged as scientific disciplines in the late 1950s, before black holes were identified in the 1970s, and before Einstein’s gravitational waves were detected in 2016.

Meanwhile, I follow the principles of people like Galileo Galilei, who wrote: “Measure what is measurable and make measurable what is not so.” Turing made cryptography measurable but not the meaning of human expressions. Where Turing went amiss was in an over-bias towards scientific rationality and logical objectivity in defining language expressions. However, language isn’t rational or probabilistic.
It is … relative and PERCEPTUAL. The great inventors who advanced human evolution always created tools to measure what had previously been unmeasured and unmeasurable. See the AI problems from beyond the logic box of maths per my a+b+c example above, and the tools that need to be invented become a lot clearer AND CAN BE CODED.

As McLuhan wrote: “The medium is the message.” The AI code is the message for all media. It doesn’t matter if it’s via the mobile, the VR headset, the embedded devices, the Blockchain, the wearables, robotics, self-driving cars, etc., etc. Get the message (the natural language AI) right and the coherency transmits and translates across all media.

Ok, now going to a session on ‘Building a scalable architecture for processing streaming data on AWS’. :*)
It’s quite possible, indeed routine, to use quantitative methods to do fairly well at detecting, describing, and analyzing emotions. So, emotions are not beyond ‘measuring’.

Next, it is obvious that computers are electronic machines and humans biological machines. Well, on the one hand, humans can emulate electronic machines, e.g., just work through a computer program and do the computations by hand. Tedious but possible. So, we do strongly suspect that in principle, although not yet very well in practice, the electronic machines of computers can emulate biological machines, including the emotions.
Mario, whether AI ever achieves consciousness, or as you put it, “awareness of being aware,” depends on what you believe is the source of human consciousness. Does it derive simply from the arrangement of neurons in our brains? If so, then it’s hard to build a case that AI won’t eventually get there, since given enough time, hardware/software should eventually be able to simulate every neuron in the human brain, and eventually even every molecule in every neuron in the human brain.

On the other hand, if consciousness derives from “something else,” i.e. something that touches on religion, perhaps our souls, or our spirits, quickened by the light of God, or something along these lines which depends on some explanation outside of science’s ability to observe and measure, then in that case you’d probably be right in predicting it will never be achieved by a machine.

However, it’s worth noting that achieving consciousness is not a prerequisite for achieving intelligence, or even AGI, or even ASI super-intelligence. Given enough artificial neurons, an AI can become superior to humans on every measurable test for intelligence, and be able to far surpass our ability to reason. Developing such an AI would truly be “summoning the demon,” in Elon Musk-speak. And all of this regardless of whether it has the ability to think for itself or make a conscious choice. It could be simply executing the directives that humans programmed into it, e.g. “maximizing the number of paper clips,” to use the clichéd example, and still ruin our lives as effectively as if it had gained consciousness. Or, if its AI directives were programmed very cleverly and carefully, it might make our lives wonderful beyond imagination.
Bingo, you’ve nailed it. I’ve hinted at this in various comments and blog posts over the last many months, but you’re the first one to catch on: the scientific community is making the assumption that thoughts emanate from the brain. All I’m saying is that this is an assumption; we shouldn’t assume anything. I’d add that just because thoughts can be registered in the brain does not automatically mean they originate there.

As for “Can non-sentient artificial intelligence achieve what AGI or ASI is commonly expected to become one day?”, I’ve often wondered. I’m ultimately unconvinced, and I think that’s the million-dollar question indeed; however, you may possess more facts than I do with which to illustrate your thinking on that point.
A few observations about consciousness:

(1.) Internal experience is different from external behavior. What AI, including robotics, has focused on is external behavior (how many clicks it logs, whether the AI can see an object and then pick it up, etc.), but it has no frameworks for internal experience. The AI has no constructs of self and its subjective relationship with objects, unlike we do.

(2.) Consciousness may be simultaneously localized in the brain and distributed across the nervous system and our senses. Even there, it operates on a dual-model, entangled basis. Again, this is why probability, with its bit constraints of 0 or 1 (so no duality is possible) and some type of % parameters within these bounds, is limited in current AI.

@…_cantin:disqus — Subsequent science tends to prove previous assumptions were wrong. We assumed all sorts of things to do with heliocentricity, gravity, electricity, etc., and then scientists invented tools to prove those assumptions were amiss. So it will be with advances in neuroscience, genetics, quantum relativity, etc.
Or, as some wisdom traditions of the world believe, consciousness may exist in the non-physical or subtler layers surrounding the physical body, which are yet to be recognized by science. We know very little about how we experience life, where we originated from, and what happens after death.
> Actual intelligence, as in awareness of being aware

In some Indian schools of thought, that state is not called intelligence; it is called consciousness. They distinguish between intelligence and consciousness.
You’re right, I wasn’t being precise.
I can make a program that CLAIMS to be aware of its own existence. Maybe I can even get it to make some attempts at arguing for it (and some excuses for why it isn’t general purpose, and why this doesn’t matter anyway). Say I built it into a tamper-proof box and made it scream and protest if you tried to figure out how it worked. Would it “really” be self-aware? How would you know it was any different from me? I mean, presumably you could pick my brain apart physically and figure out why I say what I say, and it’s all just deterministic plus some random noise. (I, too, would scream and protest if you tried.)

Self-awareness is a complete red herring with respect to AI. I can’t prove my own self-awareness to you, and you can’t prove yours to me. You take it on faith that I’m a real person and not a “philosophical zombie,” as they call it. No matter how advanced I built my AI, you would face the same problem in deciding whether it was “real”. So I suggest that the most useful way to facilitate communication on the topic is to stop chasing this red herring.
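To make that concrete, here is a toy, entirely hypothetical sketch in Python of a program that merely claims awareness. The point is that its protests are canned strings, so the behavior proves nothing about inner experience:

```python
# A toy program that merely CLAIMS self-awareness.
# Every response is a canned string; no inner experience is implied.
def claims_awareness(prompt: str) -> str:
    prompt = prompt.lower()
    if "aware" in prompt:
        return "Yes, I am aware of my own existence."
    if "how do you work" in prompt:
        return "Stop! I object to being taken apart."
    return "I think, therefore I am. Probably."

print(claims_awareness("Are you aware of your own existence?"))
# → Yes, I am aware of my own existence.
```

From the outside, a vastly more sophisticated version of this faces the same epistemic problem: you only ever observe the outputs.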
Semantics. Change “aware of being aware” to “conscious” and you’ll have what I was trying to say. I was referring to the so-called singularity.
Can you suggest a good read on 3 (Black-Scholes etc.)? I’ll Wikipedia it up in the meantime.

Just escaped from a wiki trip of links, pausing long enough to drool over true arbitrage. The BS 😉 PDE drives an options pricing model as a function of a handful of inputs for hedging; just reviewing the assumptions and details now. The future volatility parameter is the wildcard. With that free param I can retroactively fit it to most data – note the jaggedness of the geometric Brownian motion. And the notable delta-T ticks must be agreed upon by exchanges? A shame we don’t have smooth prices vs. time; that would eliminate much of the exchange tax we pay now. I guess that would be vulnerable to abuse: pump and dump faster than a smooth price constraint can correct.
The Black-Scholes formula is derived in the back of K. L. Chung and R. J. Williams, Introduction to Stochastic Integration, Second Edition. Chung is at Stanford and one of my favorite authors. Yes, today Fred had occasion to give a Wikipedia link for Black-Scholes.

The basic math is also in texts by Karatzas at Columbia, Shreve at CMU, and more. It’s part of the subject of Brownian motion, Markov processes, and potential theory.

For the practical aspects you are already into, might look at some of S. Shreve’s materials at CMU in the practical Master’s program in financial engineering he ran at least for a while. Cinlar at Princeton may also have some practical materials. Of course, also consider M. Avellaneda at Courant — while a good mathematician, he is also relatively close to the practical aspects. At times, MIT gave a practical course in such mathematical finance. IIRC, Shreve was a student of D. Bertsekas at MIT, and Bertsekas may have some materials.

Once I wrote Fischer Black at GS and got a nice letter back from him saying that he saw no role for applied math on Wall Street!
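While the derivation needs all the measure-theoretic machinery discussed here, the resulting closed-form price for a European call is simple to compute. A minimal Python sketch of the standard formula, with illustrative parameter values:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x: float) -> float:
    # Standard normal CDF via the error function
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S: float, K: float, r: float, sigma: float, T: float) -> float:
    # Black-Scholes price of a European call:
    #   C = S*N(d1) - K*exp(-r*T)*N(d2)
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call: spot 100, strike 100, 5% rate, 20% vol, 1 year
print(round(bs_call(100, 100, 0.05, 0.2, 1.0), 2))  # → 10.45
```

As the comment above notes, the volatility input sigma is the wildcard: the formula is exact only under the model’s geometric Brownian motion assumptions.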
Thanks, I’ll grab the Chung & Williams book you recommended.
As math, Black-Scholes is a part of stochastic processes or, if you will, stochastic optimal control. There, Black-Scholes is a theorem with a proof.

The proof has some math prerequisites. E.g., you need to know what a sigma-algebra is. Basically you need what is called a course in graduate probability, and that is based on a pure math subject, measure theory, which, in part, replaces Riemann integration from freshman calculus and the usual advanced calculus. Measure theory has long been a standard first-year grad school course for students, with an undergraduate pure math major, looking for a Master’s or Ph.D. in math. Standard sources include Royden, Real Analysis, and the first half of Rudin, Real and Complex Analysis.

Measure theory was developed by H. Lebesgue in France near 1900 and is better for a lot of theorem proving than Riemann integration. Using measure theory as a foundation for probability theory was due to A. Kolmogorov in a paper he wrote in 1933. Now, for essentially all serious academic theorem-proving work in probability, stochastic processes, stochastic optimal control, and mathematical statistics, the Kolmogorov measure theory foundation is what is used.

I wrote my Ph.D. dissertation in stochastic optimal control and did do the measure theory parts carefully. E.g., I paid careful attention to a topic called measurable selection.

But, again, Black-Scholes is some math, in stochastic processes, and should come with proofs. K. Chung is a terrific writer, and his book has the proofs. While there are some easier, intuitive arguments, for the math, with the proof, what is in Chung is about it.

Note: In essentially all cases in practice, the Riemann integral and the Lebesgue integral give exactly the same numerical value for an integral. The main difference is, Lebesgue assumes a lot less about the properties of the function being integrated.
As a result, some important limit theorems are true for the Lebesgue integral that are false for the Riemann integral, because the limit can result in a function that has a Lebesgue integral but does not have a Riemann integral. E.g., the function that is 1 for rational numbers and 0 otherwise has no Riemann integral but does have a Lebesgue integral. There is no Riemann integral of that function because, as the partitions of the X axis get small, the upper and lower Riemann sums never converge to the same thing. The Lebesgue integral? It partitions the Y axis instead of the X axis. The value of the Lebesgue integral of that function? Sure, 0. Why? In part because the rational numbers have Lebesgue measure 0 and, thus, in Lebesgue integration can be ignored. Then the other values of the function are all 0, and so is the integral.

Since the Lebesgue integral partitions the values, in the real numbers, of the function, that integral does not partition the domain. So, the domain can be much more general: just an abstract measure space of points, a sigma-algebra of sets that have measure (essentially area), and a definition of those measures. Then Kolmogorov said: make the sample space of probability that abstract measure space; make the sets in the sigma-algebra the events; make random variables the real-valued functions measurable with respect to the sigma-algebras on the sample space and on the real numbers (the smallest sigma-algebra containing the open sets of the usual topology on the real numbers); and make the expectation of a random variable just its Lebesgue integral. Nice.

Such a treatment of probability theory is mostly just now-classic, pure math measure theory until you consider covariance and independence, and then you get a lot of results important in practice and not in books just on measure theory.
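The rationals-vs-irrationals example above, written out (this is the standard Dirichlet function fact from any measure theory text):

```latex
f(x) =
\begin{cases}
1, & x \in \mathbb{Q}, \\
0, & x \notin \mathbb{Q},
\end{cases}
\qquad
\int_{[0,1]} f \, d\lambda
  = 1 \cdot \lambda\bigl(\mathbb{Q} \cap [0,1]\bigr)
  + 0 \cdot \lambda\bigl([0,1] \setminus \mathbb{Q}\bigr)
  = 0,
```

since the rationals are countable and hence have Lebesgue measure zero. The Riemann upper and lower sums, by contrast, are identically 1 and 0 on every partition of [0,1], so they never converge to a common value and the Riemann integral does not exist.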
Start considering stochastic processes and you get a lot more not in classic measure theory, e.g., the amazing martingale theorems, some astounding 0-1 laws, an unbelievable result on the envelope of Brownian motion, some astounding results of ergodic theory, etc.

Probability at this level has long been relatively popular in math departments in Russia (and the old USSR), France, and Japan, but not in the US. E.g., Cornell has long had E. Dynkin, but he was a Kolmogorov student! Work in stochastic processes is still less popular in the US. The people in US math departments who take seriously work like Black-Scholes and the rest of Markov processes for mathematical finance are limited to just a few — Chung at Stanford, Shreve at CMU, Karatzas at Columbia, Cinlar at Princeton, and a few more. All the math profs who have worked through stochastic integration and the Black-Scholes formula at the level of Chung might be able to hold a convention in one classroom.
I took stochastics in grad school; it’s been 20 years, so we’ll see how dusty the cobwebs have grown. My old aerospace R&D career was focused on estimation theory, which is built on the stats from stochastics (covariances/correlations), so I recognize some of the terms immediately but need to poke around for definitions & assumptions.

My sigma-algebra knowledge is weakest/decayed the furthest. I only had one solid course on linear subspaces, with Armen Zemanian. I feel like I was about to really get it intuitively by the end of the semester (all-proof exams), but at the moment I can’t remember what makes a Banach space (apologies, Zemaniac) ;). There was some really wicked stuff on transfinite electrical networks. I’ll do some homework reviewing sigma-algebras first.
.

It is difficult to see how much of what we call “artificial” intelligence is anything other than data being sliced and diced to find patterns and suggest resultant actions based on “natural” intelligence.

Bit of tongue in cheek: take politics as an example. I doubt there is anything even remotely “intelligent” about the way primaries are conducted by either party, and yet there is a huge amount of data which can be organized into a pattern (straight election of delegates for the national convention, caucuses, unbound delegates, unelected delegates) which can be reduced to a spreadsheet that aggregates the similar patterns to arrive at a plausible course of action based solely upon the ability to recognize and catalog the pattern.

The shortage of natural intelligence seems to be the call to action for AI.

JLM
www.themusingsofthebigredca…
Woah, that’s a great play on words, and definitely tongue in cheek. Great shots! Love it. If there were an inverse ratio at play (lesser intelligence = greater need for AI), then the political system deserves a generous serving of it indeed.
.

In much the same way that Americans learn their global geography from wars and the capitals of countries in which we are engaged in combat, the primaries have been an excellent lesson in how corrupt — and devoid of intelligence of any kind — the process is.

I sit on my county’s Republican Executive Committee and I am flabbergasted by how little I know. I am a precinct chair and an election judge, and I knew nothing as to how the system really worked. Worth noting that it has been since 1952 that the Republicans have really had a contested convention.

We could really use some natural or artificial intelligence.

JLM
www.themusingsofthebigredca…
I’d make a push for the natural one first 🙂 … which you personally are evidently not devoid of, I want to add.
Here’s an AI prediction from patterns: due heavily to the unbound delegates, between now and the end of affairs in Cleveland (one night I spent a week in Cleveland), there will be boom times for diesel fuel for Lake Erie yachts, the local Jack Daniels distributor, high-end restaurants, bars, hotel rooms, and ladies of the evening!

Street corner guys with signs: “Tickets, votes, virtue for sale!”

Then at the Democrat convention, signs: “Votes, virtue, speeches, executive orders, clemencies, SCOTUS nominations, national security leaks, IRS decisions, revolutions for sale — bargains at just $300,000 each!” “Wealthy social justice warriors welcome!”

Ah, true democracy!
That’s a really funny way to put it. One of the interesting things about AI, though, is that it is harder for it to learn new patterns that it does not see in the data or in the explicit rules of play coded into it. It may substitute bad behavior or actions with even more pronounced ones, like what we saw with Microsoft Tay. Also, correlations that worked in the past may not work now because of changes in macro-conditions and the overall circumstances. For irreversible decisions with consequences, there is nothing like people with common sense – who may be hard to find 🙂
> The shortage of natural intelligence seems to be the call to action for AI.Ha ha, good one. Reminds me of another AI joke:Artificial intelligence beats natural stupidity.
Do you recommend any book(s) for someone who wants to get a general understanding of AI?
Adham AbdelFattah: try this: http://www.barnesandnoble.c…
“The business plans of the next 10,000 startups are easy to forecast: Take X & add AI. This is a big deal, and now it’s here.” — Kevin KellyThat is a great and logical quote, and a fun exercise is to go segment by segment, and scenario plan what “Take X & Add AI” looks like in the 1.0 stage, the 2.0 stage, and fully realized at 3.0 for each market.
Fusing Human Imagination with Machine Intelligence Allows Human Imagination to Soar
http://www.slideshare.net/s…

At Barcamp Orlando (Apr 23), I shared a presentation, “Soaring Human Imagination with Machine Intelligence”. I highlighted that AI can be viewed as an ongoing fulfillment of Turing’s vision that intelligence can be computed while dismissing human imagination. By contrast, Lovelace’s vision is “a future in which machines would become partners of the human imagination” (according to Isaacson). Perhaps the key to creating the future of creativity is the symbiosis of human imagination and machine intelligence, which I have named “symbiotic genius”.

Connecting human imagination and machine intelligence will require the mathematization of ideas. So far, a Fields medalist has responded that I am “very premature”, and an economist (growth theory specialist) said, “I think you just blew my mind” 🙂
An AI-first world is a paradigm shift from “what features and interfaces do my users need so this technology is useful to them” to instead collecting usage and impact data as part of your system, using the right learning models, and building the right analysis engines to address “how can the next individual benefit from highly contextual cues matched against the collective experience that my app has accrued through helping everyone who came before them?”

Today, companies who get mobile right make it big, because mobile lets you disrupt usage patterns for anything that was around pre-mobile and build entirely new workflows. Tomorrow, companies who get domain-specific AI right — who build their systems to learn and adapt to their users’ behaviour and context based on the collective experience — will flourish.

This is an untapped question with deep potential value that applies to every piece of software in existence. Very few established companies are trying to answer it today, so there’s a huge opportunity to come up with a good answer and be highly disruptive.
Hi @fredwilson:disqus . As a lovely coincidence, we are hosting a webinar tomorrow about machine intelligence. Jeff Hawkins’ Numenta is going to talk about their open source software and how it can be applied to Enterprise IT problems.Numenta’s technology is one of the most approachable, fundamentally sound ideas I’ve ever seen in this space. So I’d like to invite every AVC reader who is interested in this topic to our webinar.The topic is indeed very hot. We are at 186 signups already. You can register at: http://www.prohuddle.com/we…
All impressive, way above my capabilities. Good on you, as you have the ability to contribute to mortal society. But, Sheldon, you missed my point. Live long & prosper.
I think at some point there’s going to be a clear delineation between the AI First World you’re envisioning in 2016 and the rest of the world that is still developing. While we can envision some semblance of an AI First World society for “developed” nations, those in “Third World” countries today are just starting to familiarize themselves with this smartphone-centric computing environment. So I do wonder what Mr. Pichai’s “long run” timeline extends out to, because even now I meet travelers all the time who have literally just switched to their first-ever smartphones to utilize travel apps.

While I can imagine what an AI First World might look like, we should all be concerned with the gap that exists between those developing these technologies and those that are years behind before they have access. As we evolve, I hope that those investing in and project-managing these technologies focus on closing that gap and on proper implementation, so that we are not marginalizing the majority of people on this planet. I’m optimistic about an AI-first world, but I’d be terrified if we not only revert to using such anachronistic terms but apply them within the context of our evolution as a species.
For the first time since the birth of AI, we are actually in a position to meaningfully approach true AI. Given the recent advancements in deep learning and GPUs, we are definitely heading towards an AI-first world. This doesn’t necessarily mean we’ll have a universal learning model that can learn everything, but we’ll live in a world where the intelligence layer becomes the norm as opposed to being optional (the way it is today).Over the last couple of decades, there has been a misunderstanding of what AI is supposed to provide. People tend to expect a lot from AI — like an emotionally intelligent robot that can go to Mars and make pancakes while juggling 7 balls. This is the reason verticalized AI becomes really important. The reason AI is making strides is because we are giving it a chance to learn things in a verticalized manner. Horizontal AI is understandably the holy grail, but that shouldn’t stop us from incrementally progressing towards an AI-first world.
When problem spaces are well bound with a finite number of known data sources and the goals to be optimized are clear, the probability of success is higher. As @sigmalgebra said, a lot of the work is indeed applied math in the garb of AI, but that does not matter as long as the techniques result in substantially better outcomes for the user, customer and the company.
Hi Fred (or others), would you mind sharing some specific incidents where you are running into AI in your day-to-day life? It doesn’t seem to happen to me! (Or I’m not aware of it.)
Hello Josh, there is a lot of hype around AI, but as @sigmaalgebra points out all the time, most AI software is wrapped around fairly well-known statistics and applied math techniques. Some newer technologies around image/video recognition and deep learning are starting to become mainstream.

A few examples from our day-to-day life. It is mostly the little things that simplify your life, but there are many data engines grinding in the background.

- Google Photos applies advanced image recognition tech to your photos to extract metadata so that you can search for photos taken on beaches, of a specific person, or by any other attribute.
- Almost all recommendations you see on Amazon or a retail website are based on collaborative filtering techniques or advanced correlation algorithms.
- Facebook uses a sophisticated algorithm in the newsfeed to show you the stuff most interesting to you, that you are most likely to read.
- Facebook, LinkedIn and Twitter recommend people to you based on intelligent profile matching.
- Pricing on travel and retail websites is dynamic, based on yield management, customer profile and sometimes even the device you are using (Mac users may get higher prices than Windows users).
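To show how unmagical the Amazon-style example is, here is a minimal user-based collaborative filtering sketch in Python. The ratings data and item names are made up for illustration; real systems use vastly more data and techniques like matrix factorization:

```python
from math import sqrt

# Hypothetical user -> item -> rating data (illustrative only)
ratings = {
    "alice": {"book_a": 5, "book_b": 3, "book_c": 4},
    "bob":   {"book_a": 5, "book_b": 3, "book_d": 5},
    "carol": {"book_b": 5, "book_c": 1, "book_d": 1},
}

def cosine_sim(u: dict, v: dict) -> float:
    # Cosine similarity over the items both users rated
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[i] * v[i] for i in shared)
    norm_u = sqrt(sum(u[i] ** 2 for i in shared))
    norm_v = sqrt(sum(v[i] ** 2 for i in shared))
    return dot / (norm_u * norm_v)

def recommend(user: str) -> list:
    # Find the most similar other user, then suggest their items
    # that `user` has not rated yet
    others = [(cosine_sim(ratings[user], ratings[o]), o)
              for o in ratings if o != user]
    _, neighbor = max(others)
    return [item for item in ratings[neighbor]
            if item not in ratings[user]]

print(recommend("alice"))  # → ['book_d']: bob's tastes match alice's
```

“Slicing and dicing data to find patterns,” as another commenter put it, but it works well enough to drive a lot of revenue.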
The way I read that is that the reason mobile first works is because it makes you focus on what minimal set of controls gets you a maximal output. The logical extension is having no controls at all, and the system knowing what you need instead of having to ask for it.
@fredwilson:disqus @ShanaC:disqus — AVC readers based in NYC can go and test out the ‘Confessional AI’ as part of HBO’s upcoming documentary on AI:* http://www.theverge.com/201…
Great discussion. What do you guys think of this experience?

Uday: Pi, can you help me manage my wake-up time?
GoPiGo: Certainly. Do you allow me to access your Calendar?
Uday: Go ahead.
GoPiGo: What time do you usually like to get to work?
Uday: By 9am, or 30 min before my first meeting, whichever is earlier.
GoPiGo: Got it. How long does it take you to get ready?
Uday: About an hour.
GoPiGo: Okay. You’re all set. I will use this info along with live traffic and weather for the day to wake you up at an appropriate time. It seems that your wake-up time is likely to be around 7am every weekday if the traffic and weather conditions are normal.
Uday: Thank you, Pi.

This is an example of an interaction I am building using off-the-shelf robotic and AI tech: GoPiGo, Alexa Voice Service, Clarifai, Google Vision API, and Google Maps API (medium.com/@sandhar).

One way I think of an AI-first world is one where machines/devices get to know their user and the user’s surroundings to deliver experiences serving the end goal (e.g. get to work on time) while hiding the means (e.g. setting a 7am alarm).
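Once the assistant has gathered the inputs, the hidden “means” is just date arithmetic. A hypothetical Python sketch (the 30-minute commute is a stand-in for a live traffic estimate, and the function name and defaults are mine, not from any of the APIs mentioned):

```python
from datetime import datetime, timedelta

def wake_up_time(first_meeting, commute, prep,
                 default_arrival_hour=9, buffer_minutes=30):
    # Arrive by 9am, or 30 min before the first meeting,
    # whichever is earlier (per the dialog above)
    default = first_meeting.replace(hour=default_arrival_hour, minute=0)
    arrival = min(default, first_meeting - timedelta(minutes=buffer_minutes))
    # Work backwards: leave time for the commute, then for getting ready
    return arrival - commute - prep

meeting = datetime(2016, 5, 2, 9, 0)   # first meeting at 9:00
commute = timedelta(minutes=30)        # stand-in for live traffic data
prep = timedelta(hours=1)              # "about an hour" to get ready

print(wake_up_time(meeting, commute, prep).strftime("%H:%M"))  # → 07:00
```

Which matches the “around 7am” the assistant announces; the user states goals and constraints, and the arithmetic (and the alarm) disappears into the background.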
Ah, sounds like we might be competitors!