Biggest Technology Change In The Next Five Years
The Upfront Summit is happening here in LA over the next two days. The folks at Upfront asked a number of well-known VCs (me included) a bunch of questions. One of them was “What will be the biggest technology change over the next five years?”. Here are the answers in a short two-minute video.
“If you think you know it, you’re wrong.”

Spot on. This video could have been from 2000, amirite?
Or 1991. https://www.youtube.com/wat…
LOL – at the end of 1991 I was working in a research lab investigating early VR systems and constructing autonomous mobile robots that undertook real-time navigation using hardware neural nets and other applied AI techniques, and listening to A Tribe Called Quest. Nice flashback.
Not disagreeing with its verity, and Vinod’s brilliant career speaks for itself. However, that point is probably well understood by most/all of the others who spoke in this video, and it’s not new either (Mark Twain’s famous line about “It’s what you know for sure that just ain’t so…” comes to mind).

Investing money, time and energy requires a point of view about the future. Otherwise you simply hold onto your money – cash is the ultimate optionality if you have no point of view on the future. But that’s not why venture exists. Like they say – a ship is safe in the harbour, but…

I think it was a soundbite to round off the video, but he also probably has some great predictions re that question…
In all honesty, what Khosla did is kind of a schmucking reply to a question that he could have easily just rolled with. Shows that he is a bit of a politician.
Also Vinod Khosla has a lot of investments in various new or promising tech areas – which shows that he does have predictions.
And he calls himself a “venture assistant”, not a VC. That bit is cool.
http://jugad2.blogspot.de/2…
That’s a cop-out.

Khosla is one of the pioneers of thematic investing, making buckets of $$$ for KP in the late ’90s by investing in companies that increased internet usage.

@…:disqus is 100% right – his investments are his predictions.
Hey thanks! Wish I could be all the time :)

But yes, that’s why I said that. He is putting his money (and that of others) where his mouth is.
See my comments below about Khosla investing $8 million in Metamind.
“If you think you know it, you’re wrong.”

Maybe being right/wrong about any particular big technological-change building block is missing the contemporary integrative point?

– Network-empowered/synchronized talent
– Massive global data-sensor arrays
– Ubiquitous bot-driven neural-net messaging streams
– VR/AR biological extension tools
– AI-orchestrated everything

Aren’t these just the new pivotal figures playing out on the presently hidden ground (platform) of society/culture as it transforms into a higher-level organismic-structural dynamic, with all the known attributable expectations and organizing dynamics that attend to this well-studied organic behavioural class?

“The medium is the message,” and the message of empowerment embedded in the “network effect as medium” is its synchronizing potential to elevate human organizational prowess into the realm of self-organizing, adaptive, integrative, living-system dynamics.

We are like fish swimming in the empowerment of a ubiquitous network-effect synchronization dynamic that offers us actionable, reusable, generalized morphogenic tools, but we choose to myopically focus only on the power of isolated, low-hanging-fruit network-effect phenomena.

I get that all the bottom-up hard work still needs to be done a few steps at a time, but some attempt at collective top-down framing surely won’t hurt?
That’s heavy. A lot to contemplate at the end of a long day.
But isn’t that a VC’s job? To know these things? Or at least take a stab at knowing?I think Khosla is messing with us.
Maybe what he’s really saying is “that’s for me to know, and you to find out, na nah na nah na!”
Well, there are some more obvious trends than others.
Ha! Great editing. Love how they let everyone pontificate with an answer, then end with Vinod saying anyone who thinks they know is wrong.
I couldn’t understand a word of that “song.”
Vinod is the first truly brilliant valley technologist I had the pleasure of getting to know very early in my career.
Vinod and Khosla Ventures invested $8 million in Metamind, which has an all-star Deep Learning team (https://www.metamind.io/about), and the investors include Marc Benioff. And yet … see Metamind’s wrong classifications in the slides:

(1.) a simple Twitter stream to do text and sentiment analysis on; and
(2.) a photo of my knitting.

The problem with Deep Learning systems is that they are only as good as the quantity and quality of their data corpus and classifiers. If a data input doesn’t exist (this creates probabilistic skews) and the language classifier is amiss, then the algorithm does poorly at mapping over and abstracting what an item should be.
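A minimal sketch of that last point (my own toy example, not Metamind’s actual system, with hand-made feature vectors): a classifier can only ever answer with labels present in its training corpus, so an input from a class missing from the data is silently forced into one of the known buckets.

```python
# Toy nearest-centroid classifier: it can only answer with labels it was
# trained on, so an input from an unseen class gets silently mislabelled.
import math

training_data = {
    # label -> toy 3-dim feature vectors (hypothetical, hand-made)
    "cat": [(0.9, 0.1, 0.0), (0.8, 0.2, 0.1)],
    "dog": [(0.1, 0.9, 0.1), (0.2, 0.8, 0.0)],
}

def centroid(vectors):
    n = len(vectors)
    return tuple(sum(v[i] for v in vectors) / n for i in range(3))

centroids = {label: centroid(vs) for label, vs in training_data.items()}

def classify(x):
    def dist(a, b):
        return math.sqrt(sum((ai - bi) ** 2 for ai, bi in zip(a, b)))
    return min(centroids, key=lambda label: dist(x, centroids[label]))

# A "knitting photo" feature vector looks nothing like cat or dog,
# but the classifier must still pick one of the two known labels.
knitting = (0.0, 0.1, 0.9)
print(classify(knitting))  # forced to answer "cat" or "dog", never "knitting"
```

The same failure shows up whether the model is a toy like this or a deep net: the label set and the training distribution bound what it can say.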
Metamind’s episodic memory module, which they presented at the 2016 Deep Learning Summit, has a foundational flaw inherited from Word2Vec and Sentence2Vec. I know about the foundational flaws because, in Jan 2015, I asked the Google researcher who was on the team that created them about those flaws, and they haven’t been fixed in Metamind’s module.
He goes back a long way, right? To before the founding of Sun. He must have been through and seen a lot. He says that in one of his videos – something like: he has made more mistakes than most people in the valley, and hence has the ability / “right” to advise startups 😉 Good point.
More and more I’m thinking that experience breeds perspective, success breeds patience, and failure a healthy dose of humility.
A corollary, then: humility is often the first taste, patience comes over time, and perspective takes the longest to acquire.
Funny, it’s all tech.

Sensors will be everywhere for certain, but the real change is that we then have proximity data, which changes the human experience on a hyper-local level. That’s the game changer for consumers, culture and commerce.

I think everyone is right and all wrong.
> I think everyone is right and all wrong.

Ha, reminds me of a jingle heard long ago:

What’s the way to nowhere?
Straight down the crooked lane and round the square.
Like this, thanks for sharing. All compelling answers, including AI. That said, curious to know if you primarily had robots in mind when you chose AI as your theme, as the video suggested?
What struck me was – great marketing by Upfront.
Naw: A result will be that a lot of entrepreneurs new to venture capital will send Upfront pitches assuming that Upfront is really interested in what’s new in technology over the next five years. So, the Upfront input queue will have a lot of technical content they will have to filter through, and they won’t like that.

They won’t filter through that content, will ignore it, and, thus, will piss off a lot of entrepreneurs who will eventually conclude that Upfront is interested in traction, not new technology, and, not knowing it, believes in a Markov assumption: the future of a startup and its past technology are conditionally independent given current traction and its rate of growth in a large market. So, Upfront will look only at traction and just ignore anything at all technical. Indeed, they wouldn’t bother or be able to evaluate anything technical.

But Upfront is pretending to be interested in the future of technology, so will have some entrepreneurs take them seriously and respond.

Evaluating new technology? It’s done well every day by the reviewers of peer-reviewed journals of original research, NSF, NIH, and DARPA problem sponsors, STEM-field Ph.D. committees at leading research universities, some high-end company internal R&D labs, etc. For Upfront to do such work? LOL! E.g., has anyone at Upfront recently read a peer-reviewed paper of original research? Reviewed one for a journal? Written one? Supervised a grad student writing one? Reviewed a grant proposal for, say, NSF? Submitted a grant proposal to NSF? Won a grant from NSF? Served in a tenure-track position at a leading research university? Hmm ….

I’ll make it simpler: How many people at Upfront have written as many as 1,000 lines of code in a compiled language in the past five years?
They hired a friend/client of mine as Head of Marketing. 🙂
Ah. Naiiiiice Donnaaa! 🙂
Fred, follow up on your prediction: When do you think AI will be able to pass the Turing test?
Fred may be more interested in AI passing the very positive ROI test!
Watson gets spun out and finds its way.
Watson goes bust!
IBM Watson has issues:* http://www.businessinsider….
FRED: If your schedule permits, go soak up the Super Bowl experience and parties in San Francisco. The networking will be unhinged. 🙂
Cop-out Khosla, they’ll call him!!
Khosla wins the prize (hah, but is a poor sport). Reason is he can, after the fact, show that the majority of predictions made were wrong, thereby making his prediction (about not making predictions) right.
“Starting with no” always looks smarter than saying “yes” (or giving an answer). And I agree with you: he answered this in a way that he can’t be wrong. Cowardly, but brilliant!
“…the biggest technology change…” – as measured by a VC’s carry?

I’m with Vinod.
FRED: The video wins at pressing play. A Tribe Called Quest on the intro. New York City values all day long! (Built to be Independent at birth, can’t brainwash us)
All those people out there addicted to stuff (a lot of people) will be hooked on VR, the ultimate escape from reality. There’s habit-forming, then there’s dangerous addiction – VR has the potential for dangerous MASS addiction. Would love to own a piece of that action!!
I’m addicted to my eyes and ears too. But this VR brings whole new levels of media-ecology-based mind control. If virtual reality becomes our new reality, then what is really left for us realists 🙂
There’s always a walk in the park? (sans device) 🙂
I predict tiny cameras everywhere on earth so you can see somewhere else and be transported without being there. On-demand everything, on the streets of a city or inside a conference. We will be teleported with great ease.
…back to the Starship Enterprise.
William Mougayar: Star Trek-style teleportation isn’t as crazy as it sounds, though it would depend on intricate quantum information systems, as developed by physicist Alex Kuzmich.
http://discovermagazine.com…
Periscope – it’s already here.
Fixed ones, I mean… and you can control them remotely yourself.
Don’t ever overlook the need of people to collaborate. Last night, I watched a scope in Hawaii. It was a swim with sea turtles using a GoPro.
The following video, from the Funding Your Venture series at the Kauffman Founders School. Certified Platinum.
Yes to AI and sensors. Last week two big nuggets dropped in AI:

(1.) At the Deep Learning Summit, Stanford’s Professor of Computer Science & Linguistics said this: “Pattern recognition, where we map object-to-object in vectors, doesn’t work once we get above speech recognition, because we don’t know what the target is for knowledge representation in language. Higher-level language has a DIFFERENT NATURE to lower-level pattern recognition.”

Now, pattern recognition by logic, mechanics, probabilistic and vector-space methods is what’s used in Deep Learning computer vision such as Clarifai and speech recognition such as Cortana. Of course, they’re great in their own right, but what the “DIFFERENT NATURE” part of his comments points towards is that the mathematical tools and philosophical frameworks we’ve had so far to deal with Natural Language understanding and meaning extraction are inadequate, and we’ll need to invent other tools of a different form factor to logic, mechanics, probability and vector spaces.

The comment is particularly interesting since Google and Stanford have been applying logic, mechanics, probability and vector spaces to the Natural Language problem (Google-Stanford Word2Vec — https://www.youtube.com/wat…). AND in Nov 2015 Amit Singhal, Google’s SVP of Search, said: “Meaning is something that has eluded computer science.”

(2.) Google DeepMind learnt and beat the game of Go, which is more strategic and has a far larger space of possible positions than a game like chess. However, Dr Simon Stringer, director of the Oxford Centre for Theoretical Neuroscience and Artificial Intelligence, said that AlphaGo and other deep learning systems are good at specific tasks – be that spotting objects or animals in photos or mastering a game.
But these systems work very differently from the human brain and shouldn’t be viewed as representing progress towards developing a general, human-like intelligence – which he believes requires an approach guided by biology.

“If you want to solve consciousness you’re not going to solve it using the sorts of algorithms they’re using,” he said. “We all want to get to the moon. They’ve (Google) managed to get somewhere up this stepladder, ahead of us, but we’re only going to get there by building a rocket in the long term.”

Combining this with the “different nature” point from Stanford’s professor, what emerges is that there needs to be a tools paradigm shift in AI as significant as when Newton discovered gravity, Einstein wrote the Theory of Quantum Relativity and the Wright Brothers built machines capable of flying.
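To make the “map object-to-object in vectors” point concrete, here is a minimal sketch with toy vectors I made up myself (NOT real Word2Vec embeddings): similarity in a vector space is pure geometry, the cosine of the angle between vectors, with nothing in the numbers that represents what any word means.

```python
# Toy word vectors (hand-made, not real Word2Vec output): vector-space
# "similarity" is purely geometric -- the cosine of the angle between vectors.
import math

vectors = {
    "king":   [0.90, 0.80, 0.10],
    "queen":  [0.85, 0.75, 0.20],
    "banana": [0.10, 0.20, 0.90],
}

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# "king" and "queen" come out geometrically close, "king" and "banana"
# far apart -- but nothing here encodes what the words MEAN.
print(cosine(vectors["king"], vectors["queen"]))   # close to 1
print(cosine(vectors["king"], vectors["banana"]))  # much smaller
```

That gap between geometric closeness and meaning is exactly the “different nature” being argued about above.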
Of course … the Stanford professor’s comments confirmed parts of my working assumptions. I’d figured out that Cartesian rationality, Bayesian probability and Hilbert spaces, within which we assume words move as vectors in Brownian motion, are what’s weighing down, holding back and limiting what AI can and should be able to do for us. They’re “gravity”, and for Natural Language what we need is “space fuel data” and Quantum notation.

Anyway, my instinct was that there was a chasm in AI, and when Albert Wenger directed me towards Stanford’s commentary on Turing’s 1936–1937 papers, there is this:

“He (Robin Gandy) wrote of Turing having discussed a problem in understanding the reduction process, in the form of… ‘the Turing Paradox’; it is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, 1 second, tends to one as N tends to infinity; i.e. that continual observation will prevent motion. Alan and I tackled one or two theoretical physicists with this, and they rather pooh-poohed it by saying that continual observation is not possible. But there is nothing in the standard books (e.g., Dirac’s) to this effect, so that at least the paradox shows up an inadequacy of Quantum Theory as usually presented.”

Subsequently, Sir Roger Penrose’s work argued that the reduction process must involve something uncomputable (quantum consciousness itself, per his Orch-OR theory).
“Turing was aiming at the opposite idea, of finding a theory of the reduction process that would be predictive and computable, and so plug the gap in his hypothesis that the action of the brain is computable.”

Turing himself had not been able to produce a framework for a “universal machine” with general intelligence similar to ours, which includes Natural Language and not just Mathematical Language. The flaw and limitations in the AI models wrt Natural Language lie with our historical assumption that everything should be bound by probability and its Bell curve. The AI models have also overlooked that Natural Language and Mathematical Language (machine language) are of a … DIFFERENT NATURE.

Therefore, someone … will … need … to … invent … what Turing himself was unable to — in part because he didn’t have today’s Neuroscience and Quantum Relativity knowhow at his disposal.
Yes, and two little slides I’d created before I heard Stanford’s Professor of CS and Linguistics last week.
@wmoug:disqus — My system is on the border of 3 and 4.
FYI: for all the good input you have, your slide (designs) always make me cringe 🙂
Yes, I know :*). They look TMI. I know the cool thing is just to put like 3 words or 1 image, because people’s attention spans are supposed to be limited and we’re only supposed to sell them 1 key idea per slide. That would mean I’d have to do 10,000 slides. Haha.
Holographic metaphor integration?
It goes deeper than making the UI pretty and intuitive. A lot of what happened during the move from paper => Web => mobile apps was that the functional processes stayed stagnant whilst the form factors changed. For AI that can “get us to the moon” and can understand Nat. Language, the functional processes also need to change.
> bound by probability and its Bell curve.

No one serious in probability believes that the bell curve is more than just a small part of the theory. Its main role is just the central limit theorem, and there the big result is the one from Lindeberg and Feller. It’s tough and delicate even to state, and the proof is a long slog through a lot of math analysis — it’s in my notes from the course I took that covered it.

There are other classic limit theorems that are of more interest, especially the martingale convergence theorem, e.g., which makes proving the strong law of large numbers easy. The ergodic theorem is weaker but also amazing.
It’s not about people who are mathematicians and “serious in probability”. It’s about business people appropriating the basics of it to do things like audience segmentation, ratings and employee reviews, which then generate a whole bunch of “wrong” data that gets fed into the machines and which Data Scientists then need to scrub. Check it out …
Yes. There are many ways to make a total mess, and the use of the bell curve, as in your example, is one of the best.

That use of the bell curve has essentially nothing to do with the central limit theorem or anything in probability. Instead that usage got accepted as some quasi-religion about 100 years ago, got accepted as the truth in educational statistics, much of psychometrics, in your example, employee ranking, etc., all with no rational justification at all. None. Zip, zilch, and zero. In two words, it ain’t true. In one word, it’s false. In another one word, garbage.

At least the following statement is true: in practice, the data from essentially any probability distribution can be scaled to result in Gaussian-distributed data. So, it’s still common in educational statistics to do that. But, if what someone wants is, say, the 10th percentile, just use the original data and f’get about scaling to a Gaussian.

Gee, any bounded monotone increasing differentiable function (and more is true, and I’m not going to take time to look up and get the fine details exact here now) is essentially a cumulative distribution for a random variable — simple from a beautiful, classic, fundamental extension result of C. Carathéodory in H. Royden, Real Analysis. Of course, likely also in nearly all the related literature: Loève, Neveu, Breiman, Chung, Feller volume 2, Halmos, Rudin, Kolmogorov, etc.

Gaussian has some important roles — the central limit theorem is one. Its role in sufficient statistics (Halmos, Savage, Dynkin) is just gorgeous and astounding. It has to pop up in stochastic processes with stationary, independent increments (I’m being sloppy again), e.g., Brownian motion. There are lots of examples of things converging to a Gaussian, IIRC, under some circumstances, chi-squared. And it plays a role in infinite divisibility.
I could go on and on. But there’s nothing that says that test scores or employee performance have to be Gaussian, not even approximately. The use of Gaussian as in your example for employee performance is wacko, quasi-religion, totally without justification, just flatly wrong, maybe in some cases destructive. That people supposedly in research would do anything like that, be so darned irrational, is shocking.
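The “just use the original data” point is easy to demonstrate (a minimal sketch with made-up scores; no Gaussian assumption anywhere, just the nearest-rank order statistic):

```python
# The 10th percentile of a sample needs no Gaussian scaling at all:
# sort the raw data and read off the appropriate order statistic.
scores = [55, 91, 60, 78, 78, 62, 99, 70, 85, 73]  # made-up test scores

def percentile(data, p):
    """Nearest-rank percentile: smallest value with at least p% of data <= it."""
    s = sorted(data)
    k = max(1, -(-p * len(s) // 100))  # ceil(p * n / 100), at least 1
    return s[int(k) - 1]

print(percentile(scores, 10))  # -> 55
print(percentile(scores, 50))  # -> 73
```

There are several percentile conventions (interpolating ones differ slightly); nearest-rank is the simplest and makes the point: the answer comes straight from the data’s own distribution, whatever shape it has.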
Stochastic optimization and deep learning should merge nicely.
Except, unfortunately, it doesn’t. Ian Goodfellow, a Google researcher, gave a presentation on stochastic optimization (including gradient descent) with DL, and it looks really UGLY. People are looking towards hybrid models, but what happens is they end up looking like gorgons because of mathematical incongruences. Like round pegs into square holes, or bolting a tetrahedron onto a cube. It’s not a smooth fit.
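For context, the optimizer under discussion is simple to sketch (my own toy example, not Goodfellow’s material): stochastic gradient descent fits a parameter one noisy sample at a time, and each per-sample gradient is only a noisy estimate of the true gradient, which is why the trajectories look so jagged.

```python
# Minimal sketch of stochastic gradient descent: fit y = w*x on noisy
# synthetic data, updating w from one sample's gradient at a time.
import random

random.seed(1)
# Synthetic data (my choice): y = 3x plus a little Gaussian noise.
data = [(x / 10, 3.0 * (x / 10) + random.gauss(0, 0.1)) for x in range(1, 51)]

w = 0.0     # parameter to learn; the true slope is 3.0
lr = 0.01   # learning rate
for epoch in range(20):
    random.shuffle(data)
    for x, y in data:
        grad = 2 * (w * x - y) * x   # derivative of the one-sample squared error
        w -= lr * grad               # noisy step toward the minimum

print(round(w, 1))  # close to 3.0
```

On this convex toy problem SGD behaves well; the “UGLY” part arises on non-convex deep-net losses, where the same noisy updates interact with saddle points, plateaus and ill-conditioning.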
At the hybrid level there can be a lot of incongruences – hence why some researchers are now looking at leveraging geometric algebra as a richer, more unified maths framework for representation. While there are some frameworks for GPU acceleration of geometric algebras, it’s still very early (akin to so-called deep learning circa 2008) in terms of research momentum and demonstration, but interesting stuff. @Twain – thanks for some great links and info. Also a big fan of Andrej Karpathy’s work.
> Hilbert spaces

As in the lecture you linked to, they had some lists they wanted to regard as n-tuples and, thus, call vectors. But I’m not sure that vector addition or scalar multiplication were meaningful. Okay, take the standard definition of scalar multiplication and addition for the n-tuples and get a vector space. Next they wanted to take inner products. So, have an inner product space. With the inner product, do the usual and define a norm. Then have a normed linear space and a metric space. But is the norm complete? If not, then don’t have a Hilbert space. In their case I suspect their norm is not complete. Completeness means? Each Cauchy sequence converges. For Hilbert space, this is all as elementary as it gets.
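The completeness point has a classic concrete example, sketched below with exact rational arithmetic (my own illustration): inside the rationals, the Newton iterates for √2 get arbitrarily close to each other (Cauchy), yet their limit is not rational, so the rationals are not complete.

```python
# Completeness illustrated by its classic failure: the rationals.
# Newton iterates for x^2 = 2 form a Cauchy sequence of exact rationals,
# but no rational number is their limit (sqrt 2 is irrational).
from fractions import Fraction

x = Fraction(1)
iterates = [x]
for _ in range(6):
    x = (x + 2 / x) / 2   # Newton's method, computed exactly in Q
    iterates.append(x)

# Consecutive terms get closer and closer together (Cauchy behaviour) ...
gaps = [abs(iterates[i + 1] - iterates[i]) for i in range(len(iterates) - 1)]
print(all(gaps[i + 1] < gaps[i] for i in range(len(gaps) - 1)))  # True

# ... yet no iterate (and no rational at all) ever squares to exactly 2.
print(any(t * t == 2 for t in iterates))  # False
```

Completing the rationals in exactly this sense is one way to build the reals; a Hilbert space demands the analogous property for its norm.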
Exactly this: “not sure vector addition or scalar multiplication were meaningful”. Word2Vec and Sentence2Vec can’t get us to meaning. There’s a missing transformer in Hamiltonian operators. Gah, I wrote a one-line equation to resolve for that transformer in 2009. For the last few years, I kept waiting for the lightbulbs to spark in the well-funded labs of Stanford or Google or another AI powerhouse and for them to give us the right tools. They didn’t. So I found myself putting head+hands to the problems. “Necessity being the mother of invention” and all that.
Does their vector addition have meaning? Here I didn’t mean meaning in any sense of intelligence or natural language, but just in the simplest sense of writing out some math axioms. E.g., for the usual uses of vectors in physics, especially mechanics in freshman physics, and in mechanical engineering, multiplying a vector by a scalar or adding two vectors has a lot of meaning in the context of the physics or engineering.

But take one of the vectors mentioned in the talk, say that word go is component 12,448 so that that component of that vector is 1, and multiply that vector by 2 so that component 12,448 is now 2, now what the heck does that mean in their work in natural language understanding? Likely nothing. Really, then, in no reasonable sense are they working in a vector space, and with much less sense in a Hilbert space, where we have completeness, which is just silly for their data. Guess: they are just throwing out math concepts they don’t understand.

What they have so far is just a list or, if you will, an n-tuple. About the best that can be said is that so far they are not obviously, seriously wrong. Fine. Let them call their data an example of a table in a relational database. Still fine. An error? No, not yet. Progress? Nope, not yet.

In such a problem, here is a way to get a Hilbert space that might have some utility: consider the set of all real-valued random variables X. Then, in the usual way common in ugrad math, that set is a vector space over the real numbers. Consider, then, the subset of those random variables where the expectation E[X^2] is finite, with inner product (X,Y) = E[XY]. Then it turns out, amazing, astounding, that we now have an example of a Hilbert space. The key step is showing completeness, and that is commonly called the Riesz–Fischer theorem, which can also be stated in forms that look quite different but really are equivalent.
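That inner product (X, Y) = E[XY] can be poked at numerically (a minimal Monte Carlo sketch; the two distributions are arbitrary choices of mine): any genuine inner product must satisfy the Cauchy–Schwarz inequality, and the empirical version holds here exactly.

```python
# Sketch of the inner product (X, Y) = E[XY] on square-integrable random
# variables, approximated by Monte Carlo averages. Any inner product must
# satisfy Cauchy-Schwarz: |(X, Y)| <= ||X|| * ||Y||, with ||X|| = sqrt((X, X)).
import math
import random

random.seed(42)
N = 100_000

# Two arbitrary square-integrable random variables (my choice):
# X ~ Uniform(-1, 1), and Y = X^2 plus Gaussian noise.
xs = [random.uniform(-1, 1) for _ in range(N)]
ys = [x * x + random.gauss(0, 0.5) for x in xs]

def inner(a, b):
    return sum(u * v for u, v in zip(a, b)) / N   # estimates E[XY]

norm_x = math.sqrt(inner(xs, xs))
norm_y = math.sqrt(inner(ys, ys))
print(abs(inner(xs, ys)) <= norm_x * norm_y)  # True: Cauchy-Schwarz holds
```

The hard part the comment names, completeness (Riesz–Fischer), is of course not something a simulation can show; this only illustrates that E[XY] behaves like the inner products from freshman linear algebra.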
Uh, to appreciate the Riesz–Fischer theorem, take a few minutes and prove it.

Some of the quotes you gave above are right on target: the guys talking about those vectors have no clue about how to get to natural language understanding, less clue than they would have trying to prove the Riesz–Fischer theorem. I’d bet there is no one at Google who hasn’t seen the Riesz–Fischer theorem who could prove it on their own. No one, at least in less than, say, a month. Ah, be generous, three months. IMHO, finding the needed, new ideas for natural language understanding is harder.

The analogy in your quotes, that they are trying to go to the moon and so far have a stepladder, is good. Stepladders can get them up a few feet but, otherwise, have nothing at all to do with the key ideas for how to get to the moon. Zip, zilch, and zero. We’re talking hype. Or, they dance ’round and ’round and suppose, while the secret sits in the middle, and knows.

Some good brain training for them, for understanding just how hard it is to find good, new ideas, would be to spend 15 hours a day for the next month trying to prove the Riesz–Fischer theorem. They still won’t have a proof, but they will start to understand some of the challenge of good research, that is, results that are “new, correct, and significant”.

I’ll give them some warm and rock-solid advice: except for the simplest and most trivial parts, e.g., AVL trees, just totally forget computing and computer science. Totally forget everything recent in artificial intelligence, neural networks, Breiman’s random forests, maximum likelihood estimation. It’s no more relevant than how to bake blueberry muffins. Actually, thinking about how humans understand how to bake blueberry muffins would be a good exercise.

Then, for the research, get a good math background, to understand how to conceive of and build a body of math with definitions, theorems, and proofs. That’s at least a good ugrad math major.
Mostly people don’t get that ability with less than a good Ph.D. For Hilbert space, study it as an example of some gorgeous math but otherwise likely f’get about it. It is gorgeous: it’s credited to von Neumann.

Well rested, in a quiet room, get out a clean sheet of paper, a soft pencil, a big eraser, put feet up, pop open a cold can of diet soda or have a good quart of strong iced tea, and start to think. It’s all easy except the last three words.

Uh, for some likely useful brain training, take a pass through Kelley, General Topology, where you prove nearly all the exercises. While those exercises all need careful math proofs, finding the proofs needs a lot of good intuition. So, you have to build up some intuition. One of the lessons is how to do that, especially at what intuitive level, 100,000 feet up, only 1 foot up, whatever, to work.

Next, start to make conjectures and then test them, mostly with just fast, first-cut intuitive arguments. This way you can rule in some stuff that is likely true and some stuff that is likely false. Then keep going. Before getting too deep, back up and write out rock-solid, beautifully precise and careful proofs of some of the little results you have so far. For each proof, put a date and title at the top of the first page, make the work just gorgeous, since you need to trust it standing on it, put a staple in the UL corner, put it on a stack where spilling the iced tea won’t ruin it, and continue on.

Here is another view:

“Perhaps I could best describe my experience of doing mathematics in terms of entering a dark mansion. You go into the first room and it’s dark, completely dark. You stumble around, bumping into the furniture. Gradually, you learn where each piece of furniture is. And finally, after six months or so, you find the light switch and turn it on. Suddenly it’s all illuminated and you can see exactly where you were. Then you go into the next dark room …”

That is from A. Wiles, at Princeton.
Right, he nearly single-handedly knocked off Fermat’s Last Theorem. But his work was likely less original than will be needed for the math for natural language understanding. We were talking about five years. Okay, let’s hear from them in five years.
Einstein said, “Everything should be as simple as it can be, but not simpler,” whilst Da Vinci said, “Simplicity is the ultimate sophistication.”

Meanwhile… @samedaydr:disqus (rightly) teased me about my slides. I tend to put lots of related information on a slide instead of following “slide norms”, which is 1 point per slide in a linear flow of reasoning. That’s because the slides are designed to do what Da Vinci advocated: “Realize that everything connects to everything else.”

In fact, the answer to everything in the Universe is simple and can be shown in a single image. Likewise, the system for users is simple. 99.9% of people are not going to get the things PhDs in Maths, AI, Physics etc. talk about. However, this particular work of Da Vinci and Yin+Yang is universal. So, if aiming for universal general AI (@girishmehta:disqus) …. start from appropriate atoms.
In anything at all solid in research, the intuitive stuff and the analogies basically do not count. What does count is the solid stuff. But the intuitive is important as a tool, or as I wrote:

“While those exercises all need careful math proofs, finding the proofs needs a lot of good intuition. So, you have to build up some intuition. One of the lessons is how to do that, especially at what intuitive level, 100,000 feet up, only 1 foot up, whatever, to work.”

So, you need to pick, say, a level at which to work and there find some good leads to something solid, where we really know what the heck we are doing and what we are getting, and for that the support of theorems and proofs is about the best to have.

I need to make really clear that the code and, indeed, apparently the computer science work so far, do not have much to do with the challenge of getting real natural language understanding from a computer; instead, what is crucial is the work well before writing the code. Or: the computer just does what we tell it to do, and for this problem we don’t know what to tell it to do.

I have some ideas on how to proceed, but I also have a startup to do. So, working on natural language understanding will have to wait.
Who are you, or should I just call you the Batman of AVC’s comment page? Great comment(s).
Instead of “Dark Knight”, Taleb might call me this: a black swan.

And funnily enough … AI needs the Yin part (the dark matter of data) of Yin+Yang (female+male), codified by the Chinese for the coherency of the Universe and its information way back in 600 BC in the I Ching. Ah, and we also invented symbolic languages, simultaneous equations and the compass. All of which, when integrated well, are very very powerful tools for solving some of the hardest problems in AI that have confounded Google and Stanford’s brainiacs.

And, unlike them, I have no dogma about Descartes, Bayes or even Turing’s methods — as much as I respect these giants of thinking. I belong to the schools of Da Vinci, Einstein, Schrödinger, Ada Lovelace, Daniel Kahneman and others wrt the fallacies of probability for dealing with information. So how to invent a new mathematical system that’s congruent and consistent with the other natural sciences and the Liberal Arts too — in the way Da Vinci might design for intelligence if he were around today in the Digital Age …

@…:disqus @pointsnfigures:disqus @JLM:disqus @samedaydr:disqus @domainregistry:disqus @girishmehta:disqus @jameshrh:disqus @sigmaalgebra:disqus — my prediction for the next 5 years would be that an integrated male+female team seriously disrupts not only AI but also how we model economics, data structures, neuroscience, evolutionary theory and more. In this way, code engineering XY into systems so the machines consider and behave more like humans and better support our knowledge and decision-making.
Einstein never wrote a theory of quantum relativity.Otherwise the quotes are nearly all right on target and for just the reasons given.
Ah, ok for precision … he wrote a ‘Theory of Relativity’ which has quantum effects for how we measure and model the universe and information.
Einstein didn’t even like quantum mechanics.

For the connection between relativity and quantum mechanics, I’m not an expert, but maybe you have in mind the controversy that maybe the boundary of a black hole destroys what quantum mechanics regards as information.
I just wrote this for an article to be published soon:

“Meanwhile, in 1926, when the German physicist and mathematician Max Born proposed that Quantum Mechanics should be understood as a probability without causal explanation, Einstein replied in a letter to Born: ‘I, at any rate, am convinced that He [God] does not throw dice.’

Now, if no dice are thrown, then that means no probability is at play in the Universe or in information.”
In 2009, I read this piece of analysis by an Oxford physics professor and it seriously set the bees in my bonnet off:

“At first sight, all types of information look very different from one another. For example, contrast thermodynamics – how chaotic a system is – with the information in your genome. You’d say: what on earth is the relationship between these two types of information? One looks much more orderly, the living system, while the other is disorder. But it’s actually one and the same information… you actually need very little to define the concept of information in the first place. When you strip out all the unnecessary baggage, at the core is the concept of probability. You need randomness, some uncertainty that something will happen, to let you describe what you want to describe. Once you have a probability that something might happen, then you can define information. And it’s the same information in physics, in thermodynamics, in economics.”

Except, of course, he’s WRONG.

It’s not the same information in physics, in thermodynamics and in economics. The first two are concerned with the natural sciences and atomic matter that have been measured and modeled with rational logic, mechanics and probability. The last is concerned with the social sciences and subjective, organic matter that are more than logical, mechanistic and probabilistic.

That means the information and the instruments of information measurement necessitate different approaches.
Is the rest of the class on YouTube? (That was a superb CS lecture.)
Stanford posts a lot of their Deep Learning & CS on YouTube.

I particularly like Andrej Karpathy’s work. He’s a total star because, unlike a lot of AI researchers, he’s got a great sense of wit and mischief:

* https://www.youtube.com/wat…
At the Deep Learning Summit, Andrej Karpathy presented an example of an AI bot he’s trained to do coding.

That’s right … it’s the beginnings of Skynet writing its own code.

I was sitting right at the front and couldn’t stop LOL’ing because the comment annotations and functional representations the coding bot produced were so off-the-mark but also so wry and witty.
From what I heard, a large chunk of Socher’s premises were proven wrong – ironically, by someone else in Chris Manning’s research group…

And we make meaning. That’s why
Is there a link to the rebuttal of Socher’s work? Thanks.

Manning is on his advisory board alongside Bengio.
http://arxiv.org/abs/1506.0…

I probably have met Sam (U of C Hillel). Socher’s designs require excess tree structures, which explains some of the oddities you are seeing. In actual hard coding for industry purposes, it turns out Sam’s work usually works better/as fast.
Thanks for sharing!
AI? Sorry, Fred: No, not really. There the good is not new and the new is not good, but with some exceptions which, however, have narrow utility. E.g., the Netflix prize was won not by computer scientists, new computer science, or AI but by a team from the statistics group at Bell Labs — statistics is an old field, and remove the old statistics from AI and we have next to nothing new and significant left.

But, with the flowing oceans of new data, old techniques of working with data will have new applications, and with hype we can call that AI. But for nearly all the applications the utility will be nearly as narrow as traditionally. Really, the data is new and the applications can be, but the techniques nearly all are not new.

The Internet will get, on average, significantly faster.

The new, really small computers will help grow the Internet of things, IoT. For that, we will want to modify some existing software and maybe get a little new software.

Blockchain applications may grow. If there are flaws in the current ideas for blockchains, then maybe they will get fixed.

There stands to be more in digitally produced movies.
The progress in the needed computer hardware has been so good that the movie people will have to take notice and make the exploitations.

In materials, uses of carbon fiber stand to grow.

Retailing via on-line ordering and same-day delivery stands to grow.

The fraction of all technology startups in Silicon Valley, Boston, New York, and DC stands to fall significantly, but in total the rate of technology startups will increase.

The US Federal Government subsidies for all-electric cars will stop, and the cars themselves will be seen as a fad that is quickly dying.

The interest in reducing CO2 will fall to irrelevance along with everything related to it.

Fusion energy will make significant progress.

The US will take energy independence seriously and, thus, have us do more with nukes, drilling, and, as necessary, fracking.

There will be some increased interest in the parts of applied math sometimes called the mathematical sciences: mathematical analysis, optimization, probability, stochastic processes, statistics, optimal control, etc.

The automation of financial trading will significantly increase.

There will be some efforts, maybe not in the US, to redesign roads to make self-driving cars more realistic.

Technical education will make significant progress via some much more serious on-line learning materials and some serious and respected certification means.

Due to the threats to privacy, encryption of e-mail will become significantly more common.

China will move strongly to be self-sufficient in digital and computer technology, e.g., from semiconductors to applications software.

Due partly to Trump’s strong interests in manufacturing in the US, the interest in automation, optimization, and control in manufacturing will increase.

Virtual reality will find serious customers in the product design and systems engineering communities.

Essentially all video will be delivered for viewing via the Internet. TV schedules will die.
Essentially all one-on-one news interviews will be via video conferencing over the Internet.

Possibilities of big surprises: Particle physics. Dark matter. Dark energy. NP-complete. Perfect algorithms for games of perfect information. Computer understanding of natural language and associated high-quality original thought.
The 2000s were driven by broadband, which afforded a whole host of changes and rapid evolution in user experience, ubiquitously and rapidly. Wifi was built on this BB access, and that’s what gave the smartphone ubiquity (thank Jobs for resurrecting “equal access”). 4G followed rapidly as the app ecosystem drove demand. Video consumption in “snippets” has followed, driving bingeing.

So I ask you: what “new” access/transport solution will pave the way for new, ubiquitous and rapidly scaling tech booms/ecosystems? Methinks VR, AR, drones, smart this and that (IoT) are all symptomatic of tech trying to chase markets that can’t exist due to the lack of networks with the necessary coverage, latency, bandwidth, security, and other KPIs.
I’ve always thought the Yelp Monocle, where you hold your phone up and it shows restaurants/bars/etc along with pertinent information was cool. I’d imagine it’s buried in the app as it currently is due to the clunky nature of walking around looking through your phone’s screen. Something a bit more streamlined and under the radar than Google Glass would be awesome. Maybe some sort of mini-projector that can be retrofitted onto normal glasses/sunglasses?
Can you imagine a couple having an argument and Alexa getting caught in the middle of it? AI abuse of the worst kind!

I do believe glass-like tech, gesture and voice rec will all improve the mobile experience, but that’s superficial stuff. Not ground-breaking the way broadband and the app ecosystem were in terms of enabling entirely new forms of communications and disruption.
believably secure/accessible personal-bot-network assemblers/editors
enabling all manner of personally-orchestrated AI-sensors/data/actuators

This time around the revolution will not be televised
you will instead be subsumed as a transmission-thread
into the collective neural-net messaging-stream

we are the Borg
we and only we will decide
how to assimilate ourselves
Only with a scalable economic system of settlements that conveys price signals, shares network-effect value and offers incentives and disincentives to actors at the core and edge.

The IP stack consists of many insecure and disconnected islands. As Fred has said, winner takes all. Only a few realize the pitfalls because we’ve come so far so quickly and ascribe it to a settlement-free technology stack.
That all sounds right? But you forget you are talking to an interloping peanut-gallery big-picture dreamer here.

My “superficial working characterizations” are just my layman’s personal attempt to simplify/summarize/retain what I learn here at AVC. I’m not very technically deep here. I’m just top-down brainstorming/framing :-)

I leave all the realistic heavy lifting to the right-smart crowd of actual Tech/VC players here at AVC.
5 year Technology prediction: AVC.com will still be strong, and we’ll be predicting another 5 years after that.
Well, now that depends. For example, 5 years ago Fred (from a spot check and memory) appeared much more involved in replying to comments. (This was my intuitive feeling prior to spot-checking.) I think that was a large part of the sauce that made AVC successful, in addition to the posts.

Here is an example, ironic in a way: http://avc.com/2011/02/wow/

Another example: http://avc.com/2011/05/fina…

Don’t have stats to back this up, just my gut feeling.
Fred did mention in a previous blog that he was stepping back from commenting.
I remember the post Rich is referring to. Maybe more than one post for different stages of the transition. It was a big change… Fred went from reading every comment and frequently responding to not reading every comment and minimally responding. He also made William and Shana mods. It was more fun back then and we got to see more of Fred’s personality. It was also one of the ways I felt welcomed. But I can understand it’s hard to keep up. I’d rather have the sustainability of AVC that probably comes with his not spending as much of his time tending the comments.
“It was more fun back then and we got to see more of Fred’s personality. It was also one of the ways I felt welcomed.”

Agree it’s unfortunate that he is not able to do this more. I think it was definitely one of the keys to the blog’s success.
And Fred, being Fred, will be doing a post-mortem at the 5 year mark on the accuracy of his prediction regarding AI. (one of the things I like about him).On the subject of AI – Naval Ravikant had some interesting comments in a podcast last week on “General AI” vs. “Specific/Emergent AI”. He does not believe the fundamental theoretical breakthroughs are in place where General AI is concerned.
If his prediction about AI is right, at the 5 year mark he may have a robot assistant doing that post for him :)Robot VC company interns, anyone? Or is the market too small?No moon shots or “go big or go home” possible there, maybe.
Yes, I was thinking along those lines too, and recently said as much to a couple (of non-techies) who asked my opinion on how AI may affect us. Mentioned first that I am even less than a newb in the AI field (but am a programmer), then went on to say something like what Naval said, also mentioning how Artificial General Intelligence (which is what some people are calling it now) is still far away – per what I have read.

And the example I gave was that current AI can do human-like things in very narrow fields where it has been trained (by both software algorithms and data models). But if you put it in a new and very unfamiliar situation – some area that neither its algos nor its data have trained it for – it is likely to fumble, or fail outright, whereas a smart human would have a good chance (but not guaranteed) to adapt, learn needed new stuff somewhat on the fly, and come out successful from that situation – e.g. being para-dropped into a new country, without knowing the language, with very few tools or money, etc. Or a Robinson Crusoe / Swiss Family Robinson type of situation.

Maybe people should start qualifying it as AGI and ASI (G = General, S = Specific), where Specific means AI in narrow domains where the software can be trained and then deployed.
In some senses, humans also count as narrow – getting paradropped to a random point on Earth is, I’d estimate, fatal in at least three-quarters of cases simply due to local environmental conditions (read: oceans, deserts, mountaintops, and so on). We’re less narrow than computers, so far, but our intelligence is just a bit biased towards survival, in a band of ‘likely’ or ‘conceivable’ situations.Not that I’m disagreeing with you on your assessment of AGI vs ASI, though.
>fatal in at least three-quarters of casesYes, I agree, quite possible – not to mention other causes like hitting the side of a mountain, landing in a tree, and others.But just to clarify, being paradropped was just one example, and maybe not the best one. I was thinking more of situations where a person can survive by their wits (and some physical actions) than of winning against huge physical odds. There are many such real life cases, where people have done that.>We’re less narrow than computers, so far, but our intelligence is just a bit biased towards survival, in a band of ‘likely’ or ‘conceivable’ situations.Agreed
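The narrow-domain point in this exchange lends itself to a tiny sketch (my own toy illustration, not from any commenter; the function and data are invented): a model fit only on a narrow slice of the world can look competent inside that slice and fail badly the moment it is asked to extrapolate.

```python
# Toy demo: "specific AI" in miniature. A least-squares line is fit on a
# narrow training range of a non-linear world (y = x**2 for x in 0..3).

def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

xs = [0, 1, 2, 3]
ys = [x ** 2 for x in xs]              # the "world" is quadratic
slope, intercept = fit_line(xs, ys)    # -> (3.0, -1.0)

def predict(x):
    return slope * x + intercept

print(predict(2), 2 ** 2)    # in the training range: 5.0 vs 4 (close)
print(predict(10), 10 ** 2)  # far outside it: 29.0 vs 100 (way off)
```

Inside the range it was trained on, the line looks reasonable; at x = 10 it is off by more than 3x. Scaled up, that is roughly what happens when a narrowly trained system meets a situation its algorithms and data never covered.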
Link to podcast, please, thanks.
At about the 06:40 mark:

http://fourhourworkweek.com…
Cheers, mas.

On the general AI thing … see my comments citing Simon Stringer. The article’s here:

* http://www.techrepublic.com…

We’re trying to move towards general AI not simply by training it to beat games but by trying to get it to understand, do common sense and make abstractions like 2-year-olds can:

* https://www.technologyrevie…
Amen to that.
Disclaimer: Don’t know hardly much 🙂 about AI.

But if a lot of AI is being powered by stats / big data, I just had this thought: what happens if/when there is a big outlier in the data – an outlier that was not accounted for in the stats / machine learning models that power the AI? Could it result in machines going haywire?

Or can machine learning be trained with outliers as well?
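A quick toy answer (my own sketch, not from the comment; the numbers are invented): a single extreme outlier can drag a model’s learned statistics far off target, which is one concrete way things “go haywire,” and robust statistics (e.g. the median instead of the mean) are one standard way of coping with outliers.

```python
# Toy demo: one extreme outlier wrecks the mean but barely moves the
# median, a classic robust statistic.
import statistics

clean = [9.8, 10.1, 10.0, 9.9, 10.2]   # well-behaved sensor readings
dirty = clean + [1000.0]               # plus one black-swan glitch

print(statistics.mean(clean))    # ~10.0
print(statistics.mean(dirty))    # 175.0 -- the mean is dragged way off
print(statistics.median(dirty))  # ~10.05 -- the median barely notices
```

Real systems apply the same idea at larger scale: outlier detection and clipping before training, loss functions that discount extreme points (e.g. Huber loss), or validating models against held-out data that includes rare events.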
If we start seeing tons of drones flying around for consumer product deliveries in the next five years then I’ll feel like I’m really living in the future. Have to think theft, and many other issues would come of that though. VR and AI will be big I agree. Always interesting hearing what VC’s think of the future since they are pitched concepts on a daily basis by entrepreneurs who are trying to change the world by projecting what they think the world will look like 5-10 years down the road. Nobody can predict the future, but it’s interesting to get people’s takes and how they feel about those potential impacts and evolutions. Thanks for sharing this Fred.
Wait…who stole my robot?!? Very funny!
.And the correct answer is ……………………………………………….. ALL OF THE ABOVE.BTW, wasn’t that quite a great conglomeration of old white guy talent? I am glad that at least there was one tan guy but then where were the women?I may have to vote for Hillary.JLMwww.themusingsofthebigredca…
Well … it means echo chamber reinforcements of AI being modeled and built according to male functional brain biases rather than also reflecting female intelligence.Ergo … not surprising male investors investing in male AI engineers haven’t been able to solve Natural Language problems in AI because language needs to ALSO CARRY FEMALE CODE.Amazing what Neuroscience can show us.
.OK, I agree with you but how do I get a free barbecue out of this?JLMwww.themusingsofthebigredca…
Haha, well … vote for Hillary. The CTO of the US is female (Megan Smith), by the way.

Problems in AI will be solved because there are star women in AI working closely with male colleagues, like:

* Fei-Fei Li, Director of Stanford’s AI Lab — https://www.youtube.com/wat…

* Rosalind Picard and Rana el Kaliouby of Affectiva — http://fortune.com/2015/09/…

* Marian Bartlett, co-founder of Emotient, recently acquired by Apple — https://www.youtube.com/wat…

* Kieran Snyder, co-founder of Textio — http://recode.net/2015/09/0…
I thought overall women’s brains are more similar to men’s brains than different, on average.
Upfront could have interviewed Gotham Gal. She’s nominated for Techcrunch’s “Angel investor of the year”:* http://techcrunch.com/event…We the public can’t vote, though. Winner is selected by 100 Valley insiders.
you should. she could use your vote
.That coin flipper?I won’t lie to you — six for six, I want her on MY team.JLMwww.themusingsofthebigredca…
That was the first thing I noticed. Not a single woman in the entire video!
YES. Fred, it looks like Upfront made the decisions on who to interview for this video. But if you do want to post a video that’s obviously lacking in diversity, maybe at least acknowledge it up front? Given the heat of ongoing debate about women’s experiences in Silicon Valley, it’s pretty dispiriting to see videos like this without caveat…
I understand your concern about diversity — look at my photo PLUS I’m a working mom and over 40 so I’ve got it well-covered. I think a statement from Fred about the lack of diversity in the video would have felt contrived and taken away from the message being communicated.The cause for diversity will be better served if it doesn’t become another form of “political correctness.” Trust me, I’ve had my fair share of being a token and I know that words and posturing mean little.
You’re changing the game and the ratio for women in tech, Donna.

Have you read the WEF reports on how AI will impact women’s jobs the most?

* http://www.huffingtonpost.c…

Economists are projecting that AI will hit women’s jobs the hardest:

* http://www.wired.co.uk/news…

I’m as against PC as you are. I like knowing I earned my promotions into the CEO-Chairman’s Office of UBS investment bank purely on the quality of my work.

When I go to technical conferences and see the types of AI systems that are being built and that, unconsciously, disadvantage women because they’re AI that reinforce male thinking and don’t also include female thinking, I feel an even greater drive to “stick with it and see it through.”
Firstly, welcome to the AVC community and, secondly, thanks for commenting and adding to the female voices here. I’m a woman, by the way. @fredwilson:disqus rightly called AI.

Now, at the World Economic Forum recently they examined how AI would impact the employment opportunities of women, and Susan Wojcicki, YouTube’s CEO, has blogged about her concerns:

* http://www.huffingtonpost.c…

Economists are projecting that AI will hit women’s jobs the hardest:

* http://www.wired.co.uk/news…

Fred’s colleague Albert has discussed a guaranteed basic income model, and it’s worth asking him if there should be particular consideration for women — given that AI will affect their jobs more.

An alternative and tandem strategy to pursue, of course, is to get more girls into STEM so that they participate in programming the AI and the robots.

Literally, I thank my parents every day for getting me into STEM from a really early age (toddler) and not gender-restricting me with “Dolls are for girls, computers are for boys,” which is what seems to have happened in the US, according to the articles I read.

My advice to all other women is: get yourself to a coding class asap. Even if you don’t end up coding, it will tool you to collaborate with engineers and to build systems that are male+female rather than male-biased.
Thanks for the welcome! All super interesting and I take @donnawhite:disqus’s point about political correctness.That being said, I was trying to find this amazing list of female/BAME VCs I read recently (lost on Twitter somewhere) – because I just can’t believe there weren’t any available to interview who’d earned a place at the table…On the AI point, totally agree and this is somewhat new to me @twaintwain:disqus. I’m a literature graduate so coding was off my radar until becoming a product manager. Now I’m taking a Ruby on Rails traineeship myself for that very reason: becoming conversant in technology, rather than being translated to all the time. Hoping to encourage other women around me to follow suit, if it goes well.
Terrific. It will be HARD, but as Professor Moore, Dean of Carnegie Mellon’s Computer Science school, pointed out at the 2016 World Economic Forum in Davos: AI NEEDS men and women to teach and program it.

If you decide to get into AI, you’ll often be sitting in audiences similar to this photo. That’s a database meetup on H2O.ai I was at last night in SF.

You’ll also wonder why panels on Conscious AI are only male — as if human consciousness and how we develop intelligence & language were completely devoid of the X code and training of our mothers.

It’s all so “???!!!” you’ll develop a fantastic wit about being a female engineer.

Stick with it, because the world needs the language knowhow of men and women to get the machines to understand (and not kill) us.

@JLM:disqus @ShanaC:disqus — The funniest thing is that because I don’t look like a coder, people assume I’m a designer who accidentally wandered into an AI event. Or that I’m there for the pizza and not the learning, haha.
TC — when will the machines wake up?* http://techcrunch.com/2016/…
Share this with your female friends as they consider jumping over to coding. You’re already over 80% of the way to 100%.
Hi there.

Thinking about it – let’s pretend for a second I was in the video.

While my answer would have been based on personal experience, that would have probably been accidental as much as anything else (not really related to my gender).

I’m not totally sure how many women VCs would say “for social-level changes, the biggest tech change would be ____” in a much different way than men. Most of the experiences that drive that blank tend to be human-driven before they are gender-driven.
Hi ShanaC.

Of course – in the same way that I guess we all draw on our experience in a holistic way when it comes to making business decisions or predictions.

And yet we still see that more diverse boards are more profitable (cf. http://www.ft.com/cms/s/2/2…). So, my question is: in principle, as long as interviewees have equal expertise, would greater diversity not lead to more accurate tech predictions?
Diversity is a holistic item. When is it diverse enough, what is diverse enough, what makes for diverse?

Presence on a board right now usually means similarities on many levels, not just gender or race. Age, the income of one’s parents, educational attainment, likelihood of a severe disability, how many siblings one had, overall religiosity or lack of religiosity, political beliefs, etc., I would bet run rather similar among Fortune 500 board members. Being female may be a confounding factor to all sorts of glass ceilings of not being similar in all the many possible ways one could be to board members and executives. It could be that “female” is a smokescreen for me, or someone else, or female/feminine is being used as a label to mix in a total set of other factors that are actually non-inclusive, more so than actually being female.

I’m not sold that a board would suddenly become more diverse by adding more women and people who are not white who all have similar education, family income in their youth, current income, political beliefs, and jobs. It would look more diverse, though. It would, if anything, mean a minor boost in profit because of slight essentialist differences. If the board looked alike and had radical differences in background, current status, and current beliefs, it might be more diverse and generate more profit (and be even less likely to exist).

It could be that sticking a friend/acquaintance of mine who is male and was raising his siblings at 18 (instead of his parents) while escaping the projects on a board would cause the same effects as sticking a woman on a board, even though he is white, because he has inequality of access as well and would provide a radically (as in socialist radically) different set of perspectives to a board (possibly more than some women who have otherwise similar education/career paths as men on boards). For some, his views are in many ways more feminized than mine because of how politics are constructed. I’m actually female.
But my views in some cases are perceived as more male, more standard. Parts of my background would be as well (though a large chunk also would not be, which again is not related to my gender at all). In this case I’m “the guy,” and while that may seem odd at first glance, adoption of the standard norm is the adoption of the “not other,” which is what being male is essentially about in the context of boards (to borrow from thinkers smarter than me).

So, as I said, I’m not sure it is “female,” or the labeling of something as female, or other, that causes the problem. And furthermore, without diversity targets and a better definition of the goal by defining diversity in the ways we need*, we tend to spin in circles and cause jealousy/political-anger problems, as well as make for “protected classes” without clear cases of how it long-term gets us to the goal and why we are getting there. Which I think we are a long time from.
The bigger issue is the composition of the group that’s represented. The video is a spot on representation of a particular segment of the VC community.
Welcome to AVC community and thanks for speaking up!
I will give it to them, though, for not having a token woman or person of color. That’s even worse. The guys in the video are top VCs (or at least the more visible ones) within their respective firms, which are all prominent firms. That’s already a narrow group. If they were going for a broader representation of the VC community then they would have needed to be more “inclusive.”

I appreciate the solution hinted at in your response. Change the fact. It’s not about the video.

To Mark Suster’s credit — he has been a strong advocate for diversity, and I’ve come across the results interacting with Upfront team members and portfolio.
I love that the stock video for AI is humanoid robots.
Witnessing strong non-tech industries continue to be assisted and unhinged by tech (e.g., real estate brokerage). I agree with you that AI will lead that charge.

Also Marc’s sentiment – as the next billion come online, there will certainly be a few smart, driven, and resilient founders with good ideas sprinkled in.
All the things they talk about will come true. It’s the way in which they come true that will be interesting. AI and predictive algorithms are going to revolutionize a lot of things. Wait until the high frequency traders take a step back and realize that instead of pulling down a nice chunk of change working for someone else, they could start a predictive algo company in medicine, or some B2B process and own a significant chunk of it; and long term create more wealth for their family by doing that than making code go faster for someone else.
IBM Watson’s creator joined Bridgewater Capital, though:* http://bits.blogs.nytimes.c…
cool! I really think it’s key to get the HFT algo writers into the entrepreneurial community. They can see how their skill translates into building massive blow out firms where the short run money is hard, but the long term reward could be really big.
Is that the same as Bridgewater Associates (BA), or related? The BA founder, Ray Dalio, has a free ebook with a lot of his views on things, including a lot about business and running a company. I’ve read part of it and it was very interesting. Apparently he/they were one of the most successful investors over a long period, something like that (might have some details wrong). And the ebook is very different in content from what one might think would be written / practiced by such a person/company.
Yes, that’s them.
video predicting the future uses a song from 1993…
ahh @jessbachman:disqus already got it…
Here in Africa, I am just hoping for broadband. Enjoy all that awesome stuff USA….not jealous at all 😉
An invitation:

http://neuroscience.berkele…

Deep learning and other approaches running on top of a Turing machine might not be able to achieve what you are looking for… these are not the droids you are looking for?
You called this, Fred: “Alphabet Inc.’s Google is putting the leader of its artificial-intelligence research in charge of its online search engine, showing how important AI technology is to the future of the company’s main profit engine.” http://www.wsj.com/articles…
Worth reading the new Google Head of Search John Giannandrea’s views: the “holy grail,” he says, is language understanding. He was a key architect of Google’s Knowledge Graph:

* http://fortune.com/2015/10/…

@fredwilson:disqus — No techco will get to the “Holy Grail” without female language knowhow, without frameworking what even Turing couldn’t, and without solving the Einstein-Schrödinger equations.

It’s AI all the way for the next 5 years indeed!
Big data and software reaches maturity, AI starts the new S-curve, retail & commerce consolidates and millennials end up paying more taxes.
AI? Didn’t see that coming. Unless I’ve been missing more of your posts than I realized.
Everyone looks a little older than the last time I watched them 😉
I imagine VC is a job that ages a person.Hey, Mark. Hope all is well.
I’m over 40 now as well, feeling the acceleration of years passing. Wasn’t I just 30 ;)?
Keith Rabois’s response is the scariest in terms of thinking about the transition point when young people start realizing that starting your own company is no longer “easy.” I don’t think we’ve reached that point yet. It’s encouraging to see so many of my peers in business school trying to start their own companies upon graduation (definitely wasn’t the case years ago), but at the same time it’s scary to think about the single-digit success rate of startups coupled with reduced access to cheap capital.
While my earlier post and attempt to demonstrate a key point works its way through Disqus filtering, I’ll have to revert to making my point in English (a language spoken by a subset of humanity and incomprehensible to machines).

The celebrated Web is a collection of documents that’s changed the world for the better. Moving forward, its many untapped dimensions (e.g., enabling the creation and sharing of data across data silos) will continue to drive technology change in the future.

Smart Agents (the new hot topic) can only function with structured data constructed using hyperlinks that expose relationship-type semantics in a form comprehensible to both humans and machines, i.e., we will have a new and less ambiguous language than English that will drive everything.

Links: https://medium.com/@kidehen… — Medium post that demonstrates the key point (assuming I am allowed to make references using hyperlinks).
The cost to decode a genome will drop to $1, creating massive storage and processing problems (since genetic code is way more efficient storage than a harddrive, and can do multiple things in a cell and have multiple things done to it, making it extremely complex to understand from a processing point of view)
Blog contributors:

With all innovation, historical firsts and change (which is inevitable), there has been opposition. Innovators rarely highlight the opposition’s errors and its being on the wrong side of history. The very large elephant in the room: if you review the last one hundred and fifty years, you will be surprised that the same viewpoints have opposed historical firsts.

This will be no different today as it will be tomorrow.

The dumbing down by cutting education nationally. The Borg.

https://youtu.be/AyenRCJ_4Ww
Really struck by the lack of women in the video – perhaps seeing them in the video will be the biggest change in five years?
I liked Vinod Khosla’s view. All the VCs who presented which technologies will be big in the future are basically touting things they are either invested in or planning to invest in. It isn’t like any of the VCs know which of their investments is going to pay off; they just bet on a lot of things, and the probabilities of investing suggest that a few of them will be winners.

What is interesting is that none of the VCs predicted anything that would have an impact on improving the lives of the masses that suffer due to the inequities that exist in the world. I guess their libertarian views don’t allow for technology playing a role in “social engineering.”

Also, no women were interviewed, nor any minorities who may be involved in activities of expressing what the future may hold. It keeps the masses thinking the white man is the only one who knows… Maybe Mr. (Fred) Wilson would speak to those organizers to include others in asking these questions…
…..For six months, and then Android Wear for the other. Seriously, as a watch lover I never thought I’d wear a “smart watch” – turns out I was wrong 😉