The AI Nexus Lab
In Matt Turck’s recent blog post about the state of NYC’s tech sector, he wrote:
The New York data and AI community, in particular, keeps getting stronger. Facebook’s AI department is anchored in New York by Yann LeCun, one of the fathers of deep learning. IBM Watson’s global headquarters is in NYC. When Slack decided to ramp up its effort in data, it hired NYC-based Noah Weiss, former VP of Product at Foursquare, to head its Search Learning and Intelligence Group. NYU has a strong Center for Data Science (also started by LeCun). Ron Brachman, the new director of the Technion-Cornell Institute, is an internationally recognized authority on artificial intelligence. Columbia has a Data Science Institute. NYC has many data startups, prominent data scientists and great communities (such as our very own Data Driven NYC!).
And now NYC has our very own AI accelerator program based at NYU’s Tandon Engineering School Accelerator, called The AI Nexus Lab.
The four-month program will immerse early stage AI companies from around the world in NYU’s AI resources, computing resources at the Data Future Lab, two full-time technical staff members, and a student fellow for each company. Unlike a traditional accelerator, they are recruiting only five companies, with the goal of market entry and sustainability for all five. They won’t have a Demo Day; instead, the program will end with a day-long AI conference celebrating AI entrepreneurs, researchers, innovators and funders, during which they will announce the five companies. Companies will get a net $75,000 for joining the program.
If you have an early stage AI company and want to join this program, you can apply here.
I like these types of university + private sector partnerships when they are well funded and supported, like this one.
Yes, universities have labs and other resources they can use. The University of Illinois in Champaign has some companies building incredible AI stuff.
Thanks Fred! We’re really excited to be partnering with the ffVC team to put together this new model of supporting early stage Artificial Intelligence companies. We’re hoping this, along with other programs like it, will make NYC a hub for Data and AI activity, and that more companies like Clarifai, Geometric Intelligence and others will feed the ecosystem with AI jobs for university students from across the globe.
Curious: what are some other hubs around the world that are working on this field in similar ways? For example, the University of Toronto has a program focused on ML and AI in Toronto: http://www.creativedestruct…
Google acquired DeepMind for $500m, Microsoft acquired Swiftkey for $250m, Twitter acquired Magic Pony for $150m… Yet there’s no London-based AI accelerator as such. There may be one or two soon, though.
What other cities have a concentration of AI/ML too?
An obvious city is Tel Aviv, and an under-the-radar one is Stockholm.
Haven’t heard about this until now. I’m seeing David Teten speak this evening at a YJP event; I’m sure he’ll mention it since you’re partnering with ffVC. As Matt mentioned in his blog, there’s a lot of quant talent in NY in the hedge fund and bank community, as well as in high frequency trading. No reason NYC shouldn’t be a hub for Data and AI.
This is a big deal. I’ve been working with more and more large companies this year, and it is clear that they have the data advantage: the reach, the history, the resources to own the platforms. The great equalizer is innovation in how to use it and harness it, both within and without the enterprise. Still a fledgling discipline.
PBC, presented by Techstars alum Tak Lo, just started an AI accelerator called Zeroth.Ai in Hong Kong.
Thanks Bill. We’re focused on backing early stage Asian AI/ML startups.
Contributors: Question! If a funding source extends their funding and support to an AI startup, would they actually benefit financially when there is no real exit for decades or more, unless consolidated?
Vendita Auto:you do understand the meaning of question?
Yes, I just messed up; the comment was for elsewhere, but I had scrolled down looking at one thing and thinking another. My apologies.
It is no different than any equity investment. There is no benefit until there is a liquidation event. If you are unwilling to lose the money, do not invest !
A question re AI and the leading players: My thought on analysis is that Google Deep Learning is moving to better understand the AI 2 AI approach to data and the new (not quite understood) results. Surely this means the leading financial/insurance groups that keep data analysis in-house will be at a great disadvantage to those investment arms who enlist Google Deep Learning to better understand the AI 2 AI approach to data? It seems to me that there are very few instances where the algorithms are created by AI 2 AI and new approaches to data/pathways are found. In short, the cards are stacked against those that do not have the deep pockets to enroll with Google Deep Learning/supercomputers, or am I losing the plot?
Judging by the replies I must be
AlphaGo, AI 2 AI: My interest was sparked by the AlphaGo game and these comments: “I think this will be the theme of our future interactions with AIs. We simply can’t imagine in advance how they will see and interact with the world. There will be many surprises.” “I think an important point was brought up by the Google engineer in the beginning of the game: Humans usually consider moves that put them ahead by a greater margin and base their strategies on that, while computers don’t have that bias.” “Actually this AI kind of confirms these myths, since its basis is not in mathematics but in neural networks. While it could be argued to be just math, so is the brain, but that’s besides the point. The point is, even the programmers have no idea what the AI is thinking.” https://news.ycombinator.co… Please think about the advantages/advances rather than personal ego when replying. The implications for financial data are huge. No, I have not lost the plot; the less I know, the more I see in this instance.
Ingrid Daubechies recently claimed, IIRC at the science site of J. Simons, that it was known that sigmoid functions could approximate arbitrary functions. Okay, then if you have some data and want a function to fit it, say, return 1 for a winning board position, a safe driving situation, or a good investment and a 0 otherwise, then get a lot of data and fit. For the sigmoid functions, use elements in a neural network. Not a big mystery or surprise. And no doubt more can be said, mathematically.
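A minimal sketch of that idea, assuming only numpy: fit a one-hidden-layer network of sigmoid units to samples of a target function by plain gradient descent. The network width, learning rate, step count, and target function are arbitrary choices made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Target: samples of a smooth 1-D function to approximate.
x = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(x)

# One hidden layer of sigmoid units: f(x) = sigmoid(x W1 + b1) W2 + b2
H = 20
W1 = rng.normal(scale=1.0, size=(1, H))
b1 = np.zeros(H)
W2 = rng.normal(scale=0.1, size=(H, 1))
b2 = np.zeros(1)

lr = 0.5
for step in range(5000):
    h = sigmoid(x @ W1 + b1)           # hidden activations
    pred = h @ W2 + b2                 # network output
    err = pred - y                     # residual
    # Backpropagate the mean squared error through both layers.
    gW2 = h.T @ err / len(x)
    gb2 = err.mean(axis=0)
    dh = (err @ W2.T) * h * (1 - h)    # chain rule through the sigmoid
    gW1 = x.T @ dh / len(x)
    gb1 = dh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2
    W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((sigmoid(x @ W1 + b1) @ W2 + b2 - y) ** 2))
print(mse)
```

As the comment says, no big mystery: it is curve fitting, with sigmoids as the basis elements.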
Hi, might I ask you the same question I asked Rob Larson? My thanks.
Maybe the question you are asking is: “In your opinion, even at these early stages, if you were advising a leading financial group, would you advocate using/allocating Google DL/Watson for data analysis as opposed to in-house? It seems to me that Google DL/Watson are rare commodities and early stage relationships would be valuable assets.” Well, IIRC somewhere in this thread I asked for a reason to consider using “Google DL”. Generally the burden of proof is on the proposer: If Google and/or Watson have some really good stuff, then I would want them to make that clear. The board game Go and the TV game Jeopardy, or even chess, might be convincing if I were playing such games. Otherwise, their tools appear to be for problems I’m not interested in. For applying computer smarts to finance, I’d try to see what J. Simons did. I’d listen to J. Simons. I’d take what Simons said seriously. For more, I’d want more details on what parts of finance are to be attacked, and how. And I’d take points, a lot of points, off for hype. And I’d expect the level of the work to be high enough so that what I typed in at http://avc.com/2016/07/acti… is seen as, yes, of course true, but otherwise baby talk. E.g., likely the main point of just those elementary derivations would be beyond the AI stuff being considered. For an early filter, I’d expect the people bringing expensive tools to have a solid definition of a random variable X. Then the set of all real valued random variables X where the expectation E[X] exists and E[X^2] is finite forms a Hilbert space. There, of course, the amazing part is completeness. I’d expect the people to know, cold, how the completeness proof goes. To me, if we are going to make the big bucks analyzing financial data, then that little, old Hilbert space result is just openers, like a good cook being able to use a chef’s knife. Google and Watson?
At each of those two, I’d be surprised if one could find enough people who understood that to fill a Toyota. For such things, one is necessarily looking for what is quite exceptional. Well, out there in the hype for the masses is not a good place to look for anything very exceptional. It’s something like looking for an eagle in a flock of birds; eagles don’t flock! Finally, also essential, one needs some solid methodology. Hype doesn’t qualify. Neither do ad or infomercial techniques, passionate hand waving, or celebrity testimonials. There is such methodology, although I’ve seen only meager evidence of it or its importance in anything in the AI hype. To say that there is such methodology can be a little deceptive because one might suspect some variety. Nope: There’s almost no variety at all. Part of it is math, that is, the part where one makes assumptions and proves resulting theorems. Part of it is science, where one has mathematical theories and tests them, severely. Part of it is what one can draw from solid engineering. And that’s about it. If the people you are listening to don’t know such things, then they aren’t the eagles you are looking for.
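For reference, the Hilbert space fact mentioned above, stated minimally (the completeness claim is the classical Riesz–Fischer theorem):

```latex
% Real random variables on a probability space (\Omega, \mathcal{F}, P)
% with finite second moment:
L^2(\Omega, \mathcal{F}, P) = \{\, X \mid E[X^2] < \infty \,\},
\qquad \langle X, Y \rangle = E[XY],
\qquad \|X\|_2 = \sqrt{E[X^2]} .
% Completeness: every Cauchy sequence (X_n) in the norm \|\cdot\|_2 has a
% limit X in L^2 (Riesz--Fischer), which is what makes L^2 a Hilbert space
% rather than merely an inner product space.
```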
Respectfully, I am not J. Simons, although it is gratifying to note someone you would listen to. As brilliant as you might be, your comments come over to me as a blinkered diatribe. Thank you for trying to answer my comment/s.
My comments follow easily from a good pure/applied math Ph.D. One can’t expect to understand that material or its larger lessons first hand without years of appropriate study. Even when a few centuries of terrific work have cut an amazing path 200 miles long through the Rockies, following that path on foot still won’t be easy. And, uh, it’s not a spectator sport. Simons is not nearly the only one I respect. I can add J. von Neumann, A. Kolmogorov, J. Doob, E. Dynkin, J. Neveu, E. Cinlar, R. Blumenthal, G. Nemhauser, D. Bertsekas, D. Luenberger, W. Rudin, and more. If you want to get interested in mathematical finance, then study from some people who actually know some math, and the relevant math, and are interested in applications to finance. So, look to, say, M. Avellaneda and others at Courant, D. Bertsekas at MIT, I. Karatzas at Columbia, E. Cinlar at Princeton, S. Shreve at CMU, etc. J. Neveu may be part of such a program in Paris. It’s not like these universities are big secrets. Maybe read some of, say, A. Shiryaev. If you can, for advice, borrow 15 minutes of time from Simons — he is definitely interested in math education and, thus, likely also education for the math of finance. AI hype can push me into diatribe mode. I should just f’get about AI, not pay any attention to it at all. E.g., I didn’t pay any attention to the DNC either, and won’t pay any attention to Tesla cars or solar power either — IMHO they are all from hopeless down to destructive. Here I’m writing for the AVC audience. Sure, for startups and investing in them, terrific if one can think of another SnapChat.
Not all fathers of teenage daughters will be happy with all the selfies sent, but there are worse circumstances possible for their daughters, and maybe from a relatively elementary exercise such as SnapChat, their daughters need to learn to be prudent and to protect themselves. What I’ve written here may suggest, IMHO correctly, to some of the AVC readers, beyond another SnapChat: (1) there’s a lot of low quality stuff with a lot of hype out there, and (2) for some important problems there can be an especially powerful, valuable, rock solid foundation in applied math and/or science that the software basically just implements, e.g., does the specified arithmetic. The US DoD has lots of amazing examples of (2), with astoundingly high batting averages. E.g., http://iliketowastemytime.c… Bottom line results? Gulf War I where, IIRC, the US took on a 7 million man army, totally blew them away, and had more casualties from recreational sports than enemy action. A lot of good applied math in there! Why? At the beginning of the movie about J. Nash, a remark is “math won WWII”. There’s a point to that. At the end of the war, Ike saw the situation and was so impressed that supposedly he said, “Never again will US academics be permitted to operate independent of the US military.” Then, to make sure the money would not stop, J. Conant, V. Bush, etc. set up several funding sources: NSF, ONR, etc. Soon a lot of the best math and science texts were written with NSF, ONR, etc. support. Soon, about 60% of the budgets of the top US research universities were funded by the US Federal Government for purposes of US national security, usually in the form of research grants where ballpark 60% was taken for university overhead. E.g., in my ugrad school, the physics department had a grant from the USAF that read “To further the technology of the infrared.” The department actually did that. And there was a lot, grand understatement, of exploitation of the infrared in Gulf War I.
The USAF and the US got their money’s worth, and I got a better physics education! This stuff about math and science is serious, historically serious. So, try to filter out the low quality stuff and look for, at times insist on, some solid stuff. That is, there can be better alternatives, especially for the future, than just another SnapChat or hype. Or: the US FDA raises the quality of US foods and drugs, and I’m saying that there are ways to raise the quality of some parts of the technology behind information technology startups. So, we don’t have to accept the rotten food or the hyped, nonsense technology behind some startups.
Love you now; you made me smile and you’ve written another novel. Guessing what you would have thought about Steve Jobs’ blue boxes back in the day. Some people get wet and some feel the rain. Feel the rain my friend, feel the rain.
No doubt more will be said mathematically whilst AI learns the long game.
I believe by “AI 2 AI” you are referring to an AI learning by interacting with other AI instances, like Google’s AlphaGo. The framework to keep in mind is that an AI needs data (LOTS of data) in order to learn by pattern matching good outcomes from bad outcomes. Sometimes you can get real-world data for that (e.g. Tesla autopilot analyzing all of the real-time driving data from all of the Tesla drivers on the road each day). Often you don’t have access to massive amounts of real-world data to analyze. In those cases you have to be clever. One way to do that is to create a simulation of the real world, and then let your AI interact with the simulated world millions of times, thus creating your own data set (e.g. Google’s simulated driving experiences, which they run their AI driver through in virtual space). The AI then learns the ropes of your simulated world, and if your simulation does a good enough job representing the real world, then what it learned will be useful in the real world as well. That’s what Google did with AlphaGo. By having one AI play games with another AI, they effectively created a simulation of a Go gameboard and turned the AI loose to learn from the simulation. Games like this are a special case, because the relevant world (the gameboard & rules) is so simple and constrained that the simulated world is a *perfect* imitation of the real world. Therefore, in the case of board games etc., this approach ends up being superior to using real-world data.
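A minimal sketch of that simulation idea, with tic-tac-toe standing in for Go and random players standing in for real learners (everything here is invented for illustration): two agents play against each other, and every (position, final outcome) pair becomes a training example, so the simulated world itself is the data source.

```python
import random

# Winning lines on a 3x3 board indexed 0..8.
WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),
        (0, 3, 6), (1, 4, 7), (2, 5, 8),
        (0, 4, 8), (2, 4, 6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in WINS:
        if board[a] != "." and board[a] == board[b] == board[c]:
            return board[a]
    return None

def self_play_game(rng):
    """Play one game of random-vs-random; return (positions seen, outcome)."""
    board = ["."] * 9
    history = []
    player = "X"
    while winner(board) is None and "." in board:
        move = rng.choice([i for i, s in enumerate(board) if s == "."])
        board[move] = player
        history.append("".join(board))
        player = "O" if player == "X" else "X"
    return history, winner(board) or "draw"

# "AI 2 AI": label every intermediate position with the game's final outcome.
rng = random.Random(0)
dataset = []
for _ in range(1000):
    states, outcome = self_play_game(rng)
    dataset.extend((s, outcome) for s in states)

print(len(dataset), dataset[0])
```

A real system would replace the random players with a model being trained on `dataset`, then iterate; the point of the sketch is only how self-play manufactures training data without any real-world source.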
“AI learning by interacting with other AI instances”: yes, creating in-house AI 2 AI algorithms. “Simple and constrained”: yes, but (not my world) is the Watson & Google research approach the same? “AI needs data (LOTS of data)”: finance/insurance has lots of historic real-world data. In your opinion, even at these early stages, if you were advising a leading financial group, would you advocate using/allocating Google DL/Watson for data analysis as opposed to in-house? It seems to me that Google DL/Watson are rare commodities and early stage relationships would be valuable assets?
The only scenario in which I can imagine recommending that a leading financial group partner with Watson/Google would be if they have a specific one-off engineering problem that they think AI can help with, similar to how Google used AI to lower the electric bills in their server rooms. For anything transformative, such as new product development, (1) the company should own the technology, not just rent it, and (2) there is a good chance the new AI-enabled product will be disruptive, i.e. fit best in a new business model, and therefore would probably be better done in a separate company (subsidiary) rather than inside the main finance co. Therefore my advice to them would be to closely watch all the startups that will be springing up over the next few years using AI to attack fintech; then, when they see one that looks like a good fit, is solving a need their own customers have, and is working, acquire that startup and let it operate as an independent/separate subsidiary. Support the founders’ efforts to grow the company, disrupt the industry and create value for their customers, even if it cannibalizes the business of the parent company.
Thanks Rob Larson, I agree in part. I totally understand that “AlphaGo was built by humans, for a purpose selected by humans, out of algorithms designed by humans. It is not a more advanced species. It’s not even a general intelligence.” I agree that the best fintechs can/will be assimilated (funded by Ellison NetSuite). However, AlphaGo-style AI 2 AI created algos and analyses would possibly offer insights into new approaches for financial/insurance groups. Guess one can just see data churn or a new planet :) Thanks again.
What do you mean by “AI 2 AI”? What in the heck is any promise from a Google “Deep Learning”? To be more clear, what the heck is “AI”? To be more clear: from IBM’s Watson lab, at one time I published peer-reviewed papers in AI, and from that experience my view of AI was junk. Is that what you mean by AI, junk? Are you sure you don’t mean junk? The deep in deep learning is how deep the learning is, or just how many layers some neural network or other data structure or algorithm has? It used to be in AI that it was accepted that for progress the AI had to have deep knowledge, e.g., to diagnose causes of car problems, have to have the deep knowledge that the engine connects to the clutch connects to the transmission connects to the drive shaft connects to the differential pinion gear connects to the differential center section and its spider gears connects to the axle half shafts, which are supported by the wheel bearings and turn the brake disks, etc. E.g., if the engine is rotating but the drive wheels are not, then look at the clutch, transmission, ..., etc. But my guess is that the current buzzword deep learning is just something about the software and not much like the old deep knowledge. But in all of this, what is not very deep is the suspicion of a propensity for hype. Get your heavy clothes in line, folks: medium term climate forecast, another AI winter is on the way. It goes: AI winter, spring hope, summer hype, fall where hopes fail, and another AI winter! Uh, folks, what’s the big attraction for this hype stuff? Here’s a novel, maybe radical, idea: How about taking on some important problems and getting some solutions? Gee, who’d ever think of that? Seditious? I’ve written lots of software that, in some vague sense, learns, but I would be totally ashamed of myself to call any of it anything like AI, deep learning, deep knowledge, or machine learning.
E.g., receive data on a second order stationary stochastic process and learn about the power spectrum that, then, can be used in, say, Wiener filtering. Receive data from a system, apply Kalman filtering to learn about the system and, thus, predict and control the process. More generally, stochastic optimal control, where one learns about the system and uses its state to control for minimum cost, etc. In a problem of how to allocate limited marketing resources, learn about the results of different allocations, keep learning, and then find the allocation that makes the most money. We used to call that optimization, mathematical programming, and maybe 0-1 integer linear programming, right, a case of NP-complete. There’s a really easy example, good for a middle school programming project. It’s the old game Animals. So, it’s interactive and is smart about animals, in a sense, all animals. But to be smart, it learns. So, the game starts off knowing about, say, a dog. A user thinks of an animal, and the game asks, “Is your animal a dog?”. If no, say, the animal is a fish, then the game says, “Please help me learn. What is false for a dog but true for a fish?”. The answer might be “Lives under water.” So, right, the game builds a binary tree. By the end of the school year it can know about nearly all the animals the kids can know about, even the kids who rush to the library to get more background on animals. Ah, software that learns! We’ll have the Terminator and the T-1000 just any day now? Uh, how about some real work, on some real problems, to get some valuable solutions? Ah, seditious, has to have been caused by something my mother did when I was still nursing? And, what’s with all this stuff with new, anthropomorphic labels for old topics in applied math? And, this hype, what’s with all this hype? Who is the bigger fool here, any customers who believe this hype or the sellers shoveling that stuff?
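The Animals game described above really is a middle-school-sized exercise. A non-interactive sketch (the class and function names are my own; a real version would read the answers from a player instead of a dict):

```python
# The game's "knowledge" is a binary tree: internal nodes hold yes/no
# questions, leaves hold animal names.

class Node:
    def __init__(self, text, yes=None, no=None):
        self.text = text          # a question (internal node) or an animal (leaf)
        self.yes = yes
        self.no = no

    def is_leaf(self):
        return self.yes is None and self.no is None

def guess(node, answers):
    """Walk the tree using answers: a dict of question -> bool; return the leaf animal."""
    while not node.is_leaf():
        node = node.yes if answers.get(node.text, False) else node.no
    return node.text

def learn(node, wrong_guess, new_animal, question):
    """Splice in a question that is True for new_animal, False for wrong_guess."""
    if node.is_leaf():
        if node.text == wrong_guess:
            node.yes = Node(new_animal)
            node.no = Node(wrong_guess)
            node.text = question
        return
    learn(node.yes, wrong_guess, new_animal, question)
    learn(node.no, wrong_guess, new_animal, question)

root = Node("dog")    # the game starts off knowing about, say, a dog
learn(root, "dog", "fish", "Lives under water?")

print(guess(root, {"Lives under water?": True}))    # fish
print(guess(root, {"Lives under water?": False}))   # dog
```

Each wrong guess grows the tree by one question, which is the whole of the "learning."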
>It used to be in AI that it was accepted that for progress the AI had to have deep knowledge, e.g., to diagnose causes of car problems, have to have the deep knowledge that the engine connects to the clutch connects to the transmission connects to the drive shaft connects to the differential pinion gear connects to the differential center section and its spider gears connects to the axle half shafts, which are supported by the wheel bearings and turn the brake disks, etc. E.g., if the engine is rotating but the drive wheels are not, then look at the clutch, transmission, ..., etc.

I am not an expert on AI (far from it), but I have read parts of a few classic introductory books & papers about it, and similarly some about expert systems. Based on that, I think what you describe above can possibly be called deep knowledge (but not only knowledge; some rules and logic are also needed, like an inference engine such as the Prolog language has), but I haven’t come across the term “deep knowledge” used for what you describe (though I don’t say it is necessarily wrong usage). I think what you describe above can be implemented by expert systems. They seem to have fallen somewhat out of fashion nowadays (there are fashions/fads in tech, just as in other fields), or maybe people are not writing about them in the tech news, but a couple of decades or so ago they were used for applications in various business domains, and with some success. Software configurators are another example of software that uses data, and information about the connections and dependencies between data, and some sort of rule or inference engine, to do things like:
- get inputs from you on what functionality you want from, say, a PC or a server,
- apply the rules to the data,
- and suggest a few optimal configurations for the PC or server, meaning with suggested CPU, RAM, peripherals etc.
Selectica was a company which made such configurators, which were used by Cisco and various other customers.
I think I read that Selectica was fairly successful commercially.
Update: I googled a bit and found that Selectica is now called Determine. See links below.
https://en.wikipedia.org/wi…
https://en.wikipedia.org/wi…
https://www.determine.com/a…
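The configurator idea above is easy to sketch as a tiny rule engine. Everything here (the rules, part names, and `configure` API) is invented for illustration; it is not Selectica’s actual product:

```python
# Each rule pairs a condition on the stated requirements with the
# components it contributes to the suggested configuration.
RULES = [
    (lambda req: req.get("use") == "database", ["64GB-RAM", "RAID-controller"]),
    (lambda req: req.get("use") == "web",      ["16GB-RAM"]),
    (lambda req: req.get("redundant", False),  ["dual-PSU"]),
]

def configure(requirements):
    """Apply every matching rule and return the suggested component list."""
    config = ["base-chassis"]
    for condition, parts in RULES:
        if condition(requirements):
            config.extend(parts)
    return config

print(configure({"use": "database", "redundant": True}))
# → ['base-chassis', '64GB-RAM', 'RAID-controller', 'dual-PSU']
```

A real configurator adds dependency checking and conflict resolution between rules (closer to a full inference engine), but the data-plus-rules shape is the same.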
Deep knowledge was recognized, from 100,000 feet up, as a fundamental need. I doubt that it was ever successfully programmed in any significant way. I’m thoroughly convinced that programming with rules would be a terribly clumsy way to program such deep knowledge. I can’t say that rules are impossible, because the rule-based programming languages I saw — e.g., IBM’s KnowledgeTool, which the group I was in invented, developed, used, and shipped as an IBM Program Product — were a terribly clumsy way to program anything I ever saw. E.g., we did some work with GM Research; I was our contact; and we gave a paper on our work at the AAAI IAAI conference at Stanford, but, gotta tell you, the programming by GM was basically just procedural, from flow charts, etc. For the rules, they just f’got about those. More generally, the good work at the AAAI IAAI conference was all just good, traditional engineering. Our group at IBM was for real time monitoring and management for server farms and networks and grew out of, right, the C. Forgy RETE algorithm and the Lisp-based rule-based language OPS5, at least once used for system configuration. For system configuration, right, that was a problem, e.g., for DEC, and it appeared that a good solution was programmed in OPS5. IIRC, the IBM System 38 group tried to do rule-based programming for system configuration and struggled. They had lots of rules. What they didn’t have was a working system for configuration. A guy in our group got called in to put the fire out. He, too, struggled with rules. Lots of rules. My remark was: “It’s not a programming language problem. It’s just that the guy who did the configuration system for DEC saw how to solve the problem while earlier efforts didn’t. That he used OPS5 was not crucial. If he could have done the work in 6 weeks in OPS5, then he could have done it in 8 weeks in PL/I, 12 weeks in Fortran, and 20 weeks in assembler. The key is seeing how to solve the problem.
The programming language is nearly irrelevant.” Soon the guy had a nice solution, in PL/I. PL/I is a darned nice language. Since part of what we were trying to do was real-time system monitoring, I looked at our approaches with rules, etc., ran to the little boys’ room, opened a stall, grabbed the bowl with both hands, and did a huge upchuck or some such. So, instead, I did and published some actual research, “radical, provocative,” that totally knocked the socks off anything in AI. Over and over I’ve solved problems, found good solutions, for which the AI techniques, then or now, would be useless. How? Traditional work in applied math and/or engineering. AI is about hype. There is next to nothing both good and new there. The good is not new, and the new, not good. It’s all wrapped in hype. And, fundamentally, this essentially must be the case — we’ve spent thousands of years working out and refining pure and applied math; theoretical, mathematical, and applied physics and chemistry; and engineering. It’s rationalism: proceeding with deductions from assumptions, lots of testing of scientific hypotheses, lots of engineering experience. Believe me, AI has yet to add hardly a single, tiny drop of anything good and new to that background. And beyond that drop, if it exists, is hype.
Interesting story about your work.Also see my answer to cavepainting.
Hi, products like Selectica largely use deterministic rules to drive interaction: customer wants X, Y, Z, then recommend configuration A with components a1, a2, a3. However, newer e-commerce systems like Amazon and others use a wide range of inference techniques to auto-deduce key insights, so that the system can recommend what other customers like you have bought, what you are more likely to buy based on your past browsing patterns, etc. As Sigma Algebra says, none of this is really that new. Just that with more availability of data on the cloud, it is possible to apply old learning techniques from applied math and realize more insights that can directly affect outcomes. The real story is not AI but the availability of large scale behavioral data, including web and mobile browsing, audio and video. That does not sound as sexy as AI and deep learning, but that is the truth.
I don’t think I am really in disagreement with you or Sigma. It is more a matter of terms and what they are defined to mean (deep knowledge, etc.). The bigger point, I think, is that real AI, as in what is sometimes called AGI (Artificial General Intelligence), is something that only humans have as of now, and it will likely take a long time for software, if ever, to attain it. (Also see some of Twain’s points in posts on this blog.) I had given an example some months back here on Fred’s blog, in another post. I think it was like this: if a person is parachuted into a foreign country (where the language, culture, geography, shops, etc. are unknown to them), they, if reasonably smart and adaptive, would be able to manage, or at least stand a good chance of managing, initially with difficulty and with more ease as time went by, e.g. use sign language initially, make friends with locals, pick up the local language over time, barter labor for food and shelter, etc. (There are some documented real life stories of such incidents, and of course all stories of explorers in previous centuries have something of that: the managing to survive part, not necessarily the labor or barter.) Something like Robinson Crusoe or the Swiss Family Robinson (great children’s stories). Adaptability to new/unfamiliar situations and environments, using all of man’s faculties, is what I mean. A super-powerful chess computer, brilliantly programmed, no matter if it beats Garry Kasparov, Viswanathan Anand or whoever is the current human world chess champion, will fail miserably at bargaining with vendors at the local produce market, to take just one example, or at deciding which coat to buy for a friend’s celebration, to take another. (Okay, maybe not that last one, or at least it may do an okay job, because of stuff like Amazon’s software and data.) But the point stands. Just take any other example of a second task sufficiently different from the first. A human may also fail but IMO stands a better chance of making at least a reasonable attempt. A program will simply fall over. And yes, I did read somewhere that a human is not the champion any more.
Hi, I agree. In the wisdom traditions of the world, the human mind is said to consist of four major parts: the intellect, ego, memory and intelligence. And of course there is the experience of seeing, touching, feeling, etc. that happens within us. The intellect is the CPU; it makes decisions through logic. The ego is the identities that we associate ourselves with: the stronger these identities, the more biased is the function of the intellect. The memory is the data of all that we have experienced in this lifetime and beyond. Intelligence is awareness and a deeper understanding of life beyond what can be grasped by the intellect. It is crazy to even suggest that algorithms can do the totality of what the human mind can do. Yes, they can perform the role of the intellect combined with memory (data), but they are limited by the fact that there is no inner experience, and no deeper intelligence, of a living human being. It does not mean AI is not transformational. It can unlock tremendous value for consumers and businesses, but let us keep perspective on what it is and what it is not.
Thank you for the novel, good luck with the publishing deal
Your hunch is more or less correct: deep learning is a buzzword rebranding of what are essentially neural networks, where “deep” describes the graph or a set thereof. There are plenty of people doing real work on real problems, and many more doing real work on their favourite toy problems. As best I can tell, the new buzzwords seem to come about in part because it is extraordinarily difficult to communicate your work to the layperson, say, in marketing, and in part because computer science is hell-bent on reinventing and/or sexifying statistics and applied mathematics.
Well, if there are 15+ years or so between each AI spring of hope and summer of hype, then maybe one can get the media to go along and find a significantly large audience that is not bothered by the old winters. And, for someone who mentions the previous winter, the response can be “this time it’s different”. In the history of computing, some of the hype for the early IBM vacuum tube computers went that they were “gigantic electronic human brains”. Apparently not many people called BS on that nonsense. But some efforts at hype that fall flat stay down for a long time and look like they will never get up again. I thought that the way to convince customers, marketing people, and the newsies was actually to deliver good stuff. E.g., for a business customer, increase the top and/or bottom lines. For many end users, say 1+ billion, give each of them something they like a lot, will keep wanting more of for a long time, and will keep coming back for. I.e., get devoted users. For more, maybe give the users something that is significantly social and viral, e.g., as in Fred’s old “large network of engaged users”. I know; I know: Sell the sizzle, not the steak. This is especially true if the steak is just old hamburger!
These types of collaboration — a specific developing subject, an established educational institution, money, crawl/walk/run, industry participation — are the intellectual equivalent of the Manhattan Project. They are how and why the US will maintain its tech advantage and leadership. They are not just “nice to have” — they are essential. I have a sense that BIG DATA is the solution to a lot of real world problems which are otherwise extremely contentious — gun regulation, immigration, terrorism, surveillance of communications, crime. The connection to AI is not as direct, as it is more data than anything else, but data can be a predictive element which can allow many of these problems to be identified before they actually occur. There is an excellent social and capitalistic bent to these efforts when one simultaneously focuses on the creation of jobs (taxpayers) and wealth. Like a lot of things, it will only take one or two big wins to justify the entire effort.
JLM
www.themusingsofthebigredca…
AI is dead, long live Big Data: http://www.irishtimes.com/b…
I see big data as a foundation. Applications built on top of the big data foundation are tailored to address specific use cases. They will use a variety of algorithms / techniques including AI.As has often been discussed in this forum, most of what we call AI is not new and has been around for a long time. Just that its applicability is more real now with big data.
What I like most about T-800s is their optimism.
Silicon Valley companies are locating and relocating to the Valley of the Sun (Phoenix, Arizona): http://technical.ly/2016/07…
Steve Case, cofounder and former CEO of AOL and chairman of Revolution, LLC, has added Phoenix, Arizona to The Rise of the Rest Tour.
http://www.riseofrest.com/
http://tech.co/phoenix-salt…
This is pretty groovy, look forward to seeing what companies come through the program, and their early products!
CONTRIBUTORS:
• SoundCloud, a Germany-based music streaming site, is considering strategic options that could include a sale of the business, according to Bloomberg. SoundCloud recently raised $70 million from the investment arm of Twitter. Other shareholders include Doughty Hanson, Eniac Ventures, GGV Capital, Kleiner Perkins, IVP and Union Square Ventures.
SOURCE: http://www.bloomberg.com/ne…
I can’t imagine a buyer with more synergies and strategic interest than Twitter.