Thanks – cued up. I’m a big fan of his and first met him when I worked for Digital F/X and he was on the board. He also does some really interesting investments in the healthcare/wellness space, which is a new consulting interest of mine. Have a great Sunday Fred!
super cool interview. Google are going to WIN for a long time to come. Long-term plan, eager to make huge moonshot bets. Overarching strategy is actually very cohesive. So comfortable in each other’s company. Basically just 2 geeks who find themselves, despite themselves, running the world.

+ any non-shaven, messy-haired, Croc-wearing billionaire is a friend of mine.
The full transcript for those who are interested – http://www.khoslaventures.c…

Loved a few points –

+ 4 year outlook vs 20 year outlooks for CEOs
+ Large companies and the idea of having multiple adjacent focuses that are interrelated
+ The whole chat around self driving cars and how it’ll transform the economics of the industry by cutting 97% of the costs
+ The chat around solving the unemployment issues (I’m sure Albert will like that bit)
Thanks Rohan. That transcript just saved me 35 mins.
For self driving cars, I don’t buy the 97% cost reduction. Think about how self driving cars would work in NYC, Beijing. Funny thing about transportation, we all tend to need it at the same time*. *This is why companies like Sidecar and Uber can charge 2-4x their base fares at will.
Glad! 🙂
I think it’s great they are making big bets. As far as government is concerned, they should start listening to John Taylor’s Economics One class when it goes virtual this year. http://economicsone.com/201…
Larry: “For every page of regulation you add, you need to remove a page”… if only!
When he said that I thought, in the US we have the Bill of Rights. That might be enough.
So you’re thinking reboot? Start with a clean slate?
I think we should. In the last year I have looked pretty seriously at industries like farming. They are so screwed up with regulation and subsidy that the only way would be to start from scratch. For example, if you grow organic food in Indiana, you cannot sell it in Illinois unless you go through a distributor (or at a farmer’s market, which has other operational costs). Meanwhile, the factory farmers receive all kinds of subsidies courtesy of the Farm Bill, which artificially reduces the cost of their products.

Raw milk is illegal in this country, yet in the rest of the world it’s legal. The USDA and FDA are actively seeking out raw milk producers and taking away their animals and land. But if we allowed production and clearly labeled it, we would create entire artisan industries (which would create jobs, etc.).

I think the biggest mistake is that people believe Republicans are for big corps and Democrats are for the little guy. If anything, Obama has been ONLY for the big guy. Big banking, big insurance, big farm, etc. Big govt programs and big regulation only help big guys. Google is now a big guy. They actually will have an interest in perpetuating big government to kill potential competitors.
I can’t argue with you as regards farming. Big business farms and politicians make it look like a program to help out the small family farm, which are increasingly few in the USA.

Odd how many people have a picture of farming in their mind that looks like Old MacDonald. Ain’t true.
what’s amazing is how they are trying to get control of the land. Big corps and big govt are creating economic incentives to help themselves. Small independent farmers are confronted with high costs, few ways to get to a mass market (regs) and sell their land to the big corps-who then get subsidies to farm it and are big enough to process and distribute without running afoul of the FDA or USDA. Polyface Farms in Virginia is flying in the face of it. http://www.polyfacefarms.com/ And, they open source their processes so anyone can do it.
I read somewhere that the # of small farms is increasing dramatically (no time to look now). This ties, I believe, to the huge increase in awareness, willingness to pay, and availability of alternative distribution in the major urban areas.
Not sure they are. I know in row crops they are dwindling. Most farmers rent the land and work it for a big corp. (That may be the most economically efficient outcome – but let’s get rid of subsidies and see.) In meat processing, it depends on the vertical. Chicken and pork are not like beef. But the USDA and FDA strangle smaller producers via regs.

You are very prescient in talking about major urban areas. Impossible to have a functioning business without being close to a dense area with enough customers to support you.
You also quite bizarrely wash your eggs and consequently must refrigerate them. Cockamamy. Pun intended.
We need that.
Where can we find the regulations that govern that process?
Only in Larry Page’s head.
Thanks for sharing Fred. Great breakfast entertainment and food for thought.
“The complexity of government increases over time.” That’s an understatement. So, does it have to be that way?
I don’t think so, but complexity is a moat that protects a lot of vested interests. Large insurance companies hire lots of new lawyers just to do compliance, but that raises the capital requirements for a new insurance company. There is also a huge mismatch between the policy analysis skill available in the public versus private sector. A lot of the influence lobbyists wield involves their superior command of complexity in various fields compared to the understaffed and under-resourced internal government research orgs.
Best line: Larry Page @ minute 11:20, “so i think the actual amount of knowledge that you get out of your computer vs. the amount of time you spend is still pretty bad; our job is to solve that and most of the things we are doing make sense in that context.”
Yep, think about it: Google knows just how long, and how wrong, the stopping point of many searches is.
they come across well. i like them. i still don’t trust google.

that there was no question about virtual currencies, or even a reference to virtual currencies in the discussion about economies and employment etc., seemed anomalous.

is that Sergey’s favourite top? he seems to wear it often. who makes it i wonder?
Google X seems like a great initiative, reminiscent of HP Labs & Microsoft Research, both legendary in their impacts during their respective heydays.

I wish Google were more forthcoming with their real intent behind each product, service or initiative. Everything starts by being very important & “world changing”, until the plug is pulled. Of course they build lots of goodwill with users because they provide so many free services, but they also take some trust off the table when they discontinue services or let them linger.

Steve Jobs would probably tell them now, “You’re still doing too much.” But overall, Google gives back more than it takes, and that’s a good thing.
What does ‘gives back more than it takes’ really mean William?
Understood re: business, of course. Give & take comes in many forms. Among them, Google gives us lots of free services (Search, Now, Hangouts, Calendar, Drive, Gmail, etc.). On the take side, they take our data mostly, and our time / loyalty commitments.
So you are saying they are a media advertising model? Honestly, I’ve never heard anyone call a media model where they sell our data “give and take.”
It is interesting that William and many folks, including USV, believe in the value created by large networks – value that gets created by the many millions of users who contribute to the network without getting paid for it.

USV would not invest in a service if they don’t see a large network forming, and Google would not be considered a good corporate citizen, as William points out, without one. But it all comes at a tremendous cost. It has a net effect of centralizing wealth, and overall economic growth is limited.

So as much as I like Google and Twitter and can relate to the USV investment model from a wealth creation standpoint, I don’t think it benefits society in a healthy way in the long run.

I wonder: if the Twitters, YouTubes, and Facebooks of the world were to incorporate crypto currencies, giving back to the user small amounts of value that can be exchanged for goods and services in the digital networked economies, then possibly there is a way to also claim that label of “giving back more than they take in.” I wonder if the USVs of the world would invest in such constructs.

But for now I don’t think there is a give and take, really, as much as “we throw you a bone and in return you get to stay afloat…”
I agree the idea of give and take is the wrong terminology. I agree that business is about making money. But –

When I see companies (like Google and Facebook) looking at alternative electric supply with an eye towards the environment, and recycling of hardware – this is good stuff.

When I see companies like the Gap especially moving to a higher minimum wage rate voluntarily and funding third party initiatives to monitor and make transparent labor conditions where their goods are produced – this is good stuff.

I would consider all of the above good corporate citizens to some degree.
I didn’t say they sell our data. they use it implicitly or in aggregate.
my mistake, but in reality they are selling it. fine with me. advertising pays for the web, and basically, before the web and now, it is simply tolerated.
Changing the topic 🙂 – Did you notice the new changes to the Disqus Dashboard?
nope
Hmmm. Maybe the roll-out is staggered.
I’ll take a look, and thanks for the nudge my friend.

Honestly, I think little of Disqus as an entree into much cross-network or community any longer. That dream, and I think that need, faded a bit over time as my own personal networks created their own dynamics.

I am reliant on Disqus as I’m reliant on a very few communities. They are key there, and the smartest plumbing in the world of conversations.

The idea that we have to work at all hard to discover information or communities, or that most participate in more than a couple, is not reality to most today. Or so I think.
I agree. They are on the incoming end of conversations, not at the initiation tip.
Not what I thought about them in 2009: Comments, Conversations and Community http://awe.sm/r70ZK
Sorry, I have to conclude that artificial intelligence (AI) and machine learning (ML), e.g., as in the video, are, as those fields stand, hardly more than just junk. About the best that can be said for both is that they are not really ‘technologies’ or ‘techniques’ but just goals. Back when I was working on AI at Watson at Yorktown Heights, my complaints about AI were answered with: the field is a ‘goal’. Okay, a goal. But I see nothing in AI or ML that looks like a solid approach to anything significant in ‘intelligence’ or ‘learning’. So, the naming is essentially just hype.

I can believe in ‘problem solving’: Humans have done amazing cases of problem solving for millennia. E.g., once in the Middle Ages, there was an effort to move an obelisk in Rome. There was a big turnout, lots of plans, wood, rope, workers, etc. and, finally, success. But we should recall how the thing got there in the first place: Some of Caligula’s slaves had cut the thing as one piece out of solid rock near the headwaters of the Nile, moved it to Rome, and put it in place. So, they did some amazing ‘problem solving’.

More was done by the Wright Brothers: Langley had just fallen into the Potomac River, but the brothers ‘cheated’: They developed the first really useful wind tunnel and a good wing shape (they didn’t yet understand Reynolds number so got the scaling wrong) and were able to do accurate enough calculations of lift, drag, and thrust. And they had a good enough solution to the problem of three-axis control. So, as they packed up on the way to Kitty Hawk, they had on paper and in their workshop some really good evidence that they would be successful. They did; Langley didn’t. The brothers did some good enough basic engineering; they had a specific goal and knew what the heck they were doing.

I believe in problem solving based on engineering, where we know what the heck we are trying to do and have some solid ideas for getting it done.
So far the new parts of AI and ML don’t qualify. When our group gave a paper on AI at an AAAI/IAAI conference at Stanford, my conclusion was that the good work, and there was some, was not AI at all but just good engineering.

Such ‘engineering’ is the broad ‘paradigm’, but that point says very little about the ‘big question’ in the video clip, e.g., doesn’t say what great progress in ‘technology’ and problems solved will come in the next 20 years. Instead, under the paradigm of engineering, we take the problems and the solutions mostly just one at a time. Each time someone does some good, useful, valuable engineering, fantastic, but that does not enable any general extrapolation. Sorry ’bout that. Instead, successes are one at a time, carefully hand made.

I suspect that Page and Brin have been badly influenced by some parts of Stanford that want to push AI and ML and neglect the wonderful background Stanford has in pure and applied math, science, engineering, and technology.

If AI and ML are only goals, okay, but then they are still too narrow for the good problem solving to be done. And for good solutions, still the best approaches are, and may I have the envelope, please: pure and applied math, science, and engineering. Sorry ’bout that.

AI and ML themselves and their ‘techniques’ so far, e.g., neural networks and maximum likelihood estimation, bring much that is new and good, although the new is not good and the good, not new. The contributions that are unique to AI and ML fill much needed gaps in progress and would be illuminating if ignited.

The idea that computers are ‘electronic brains’ is not at all new and was marketing hype going way back. Yes, at times computers can do some amazing things that, without a computer, would need ‘intelligence’, but that does not mean that such a computer was in any meaningful sense ‘intelligent’.
Indeed, so far there is not a single glimmer of hope that humans know how to implement anything on a computer that is anything like natural intelligence, even that of my kitty cat Pollux – bright little guy.

“Self driving cars”? Nope: So far such cars are only for streets where nearly everything has been previously mapped down to a few millimeters. Move the stop lights, and you have to map again before the ‘self-driving’ car can ‘drive’ – the ‘self driving’ car can’t really find the stop lights. The ‘self driving cars’ are about as ‘intelligent’, autonomous, and flexible as a train on rails.

Yes, in part, I’m bitter about the hype because it is harmful, i.e., it pollutes the well, meaning that anything good has to fight the bad reputation from the pollution.

Finally, I don’t like the idea of the many places things could have been different. Instead, pick a problem, work to find a good solution, typically seen as good just on paper, and, in cases where you have such a solution, then implement and sell the solution. You should have a ‘must have’ solution to a really big problem where such a solution is clearly very much wanted (e.g., a safe, effective, cheap one-pill cure for any cancer) and can make a lot of money.
what’s your story @sigmaalgebra:disqus ?
As from Lebesgue, Kolmogorov, etc., given a set A, a ‘sigma algebra’ is a collection of subsets of A that includes A and is closed under complements and countable unions. There is much more in Halmos, ‘Measure Theory’.
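For a finite ground set, closure under countable unions reduces to closure under finite unions, so the definition can actually be checked mechanically. A minimal sketch (the function name and the toy examples are mine, purely for illustration, not from Halmos):

```python
from itertools import combinations

def is_sigma_algebra(A, sigma):
    """Check the sigma-algebra axioms on a finite ground set A.

    For finite A, closure under countable unions reduces to closure
    under pairwise (hence finite) unions.
    """
    A = frozenset(A)
    sigma = {frozenset(s) for s in sigma}
    if A not in sigma:                      # must contain the whole set
        return False
    for E in sigma:                         # closed under complements
        if A - E not in sigma:
            return False
    for E in sigma:                         # closed under (finite) unions
        for F in sigma:
            if E | F not in sigma:
                return False
    return True

# The trivial sigma-algebra and the full power set both qualify;
# {∅, {1}, A} fails because the complement {2, 3} is missing.
A = {1, 2, 3}
trivial = [set(), A]
powerset = [set(c) for r in range(4) for c in combinations(A, r)]
print(is_sigma_algebra(A, trivial))            # True
print(is_sigma_algebra(A, powerset))           # True
print(is_sigma_algebra(A, [set(), {1}, A]))    # False
```

The same two closure checks, plus countability, are exactly what Halmos’s definition demands in the infinite case.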
The philosopher Plato said, “All learning has an emotional base,” whilst the neuroscientist says, “Emotions amplify our memories.”

It’s known that Google is building the Star Trek computer, which is the basis of Google Now:

* http://www.slate.com/articl…

In Oct 2012, I asked Amit Singhal, Google’s SVP Engineering and Head of its Core Ranking team (aka Search), whether their Star Trek computer could and would be able to understand our emotions. He replied, “That’s a very deep question that journalists usually only ask an hour or so into Q&A. No, we focus on objective facts and figures.”

In March 2014, Ray Kurzweil of Google gave a TED talk in which he said, “The frontal cortex (where we think, ‘That’s ironic. He’s funny. She’s pretty’) is not qualitatively different from the neocortex (where our numerical reasoning and motor-neuron controls happen). It’s a quantitative expansion of our neocortex. That additional quantity of thinking was the enabling factor for us to take a qualitative leap and invent language and art and science and technology.”

* https://www.ted.com/talks/r…

Here are a couple of my thoughts on this:

(1.) If the Google machine — and its IBM Watson counterpart, which was called a “human autistic savant” by its technical lead — can’t understand emotions, then by Plato’s logic… how can it be a Machine LEARNING system?

(2.) Quality is not the same as or equivalent to quantity. Let’s do a simple mathematical logic proof to verify this. Suppose at the start of the week we have 1 dog and 0 cats and then…

* Every day we get 1 more dog and 0 more cats.
* Every year each dog produces 100 more dogs.

Now, is this statement true or false:

* At the end of year 1 we have 36,865 cats.

The answer is false. This is because the increase in quantity of dogs does not produce a qualitative change from the dogs into cats. To me, quality is a completely different factor variable from quantity. That’s an important distinction because it affects how we define and build Intelligent Systems.
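The toy count above can be run directly. A sketch (the ordering assumption — daily additions for 365 days, then one 100-fold reproduction applied to every dog present — is mine, since the comment leaves it open; the conclusion does not depend on it):

```python
dogs, cats = 1, 0        # start of the week: 1 dog, 0 cats

for _day in range(365):  # every day: 1 more dog, 0 more cats
    dogs += 1
    cats += 0

dogs += dogs * 100       # year end: each dog produces 100 more dogs

print(dogs, cats)
# However large the dog count grows, cats is still 0: no quantity of
# dogs produces a qualitative change into cats.
```

Under these assumptions the dog count lands at 366 × 101 = 36,966, but the cat count is zero on any ordering, which is the point of the “proof.”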
The idea that intelligence is purely mathematical and quantitative should be rejected by all intelligent people.
I found the Slate article at http://www.slate.com/articl… but I can’t do much with a news story, especially in a literary magazine, about such a technical subject. A picture from the TV series ‘Star Trek’ doesn’t help.

For such a project, Google will have to write code, and they can do that, if they know what code to write – what the heck the code is supposed to do. E.g., for such code, they need a ‘specification’ or a ‘design document’, or functional design, component design, module design, etc. They can skip such design documents only if they are able to write them! If they can’t write them, then they are stuck-o right there.

Certainly it is possible, under some significant circumstances, to use mathematics to analyze and predict emotions, at times with considerable accuracy.
Have you seen this from Sentiment Symposium in March 2014? It reflects Stanford’s current approach to NLP with emotions:

* http://vimeo.com/90178708

Re. Google needs a ‘specification’ or a ‘design document’, or functional design, component design, module design, etc. – what if they can’t do that, or what if someone else already beat them to it?
Okay, I watched it. Yes, for your concern about the connections between math and emotions, he did do some quantitative analysis of emotions. So, if only from this example, such is possible. Broadly, what he did is okay. Remarks:

(1) Just what he did with natural language processing was not made clear. Instead, somehow he was able to take the texts from his ‘corpus’ and for each ‘encode’ or ‘tag’ it with some emotions.

(2) He said that the emotion when you wake up is not “random”, or said some such. Here he has a naive view of random variables. Instead, the emotional state on waking up is certainly ‘random’ in the sense that we can call it the value of a random variable. By saying that the emotion was not ‘random’, what he meant to say was that the emotion and the random variable are not probabilistically independent of everything else. So, he confused ‘random’ and ‘independent’.

(3) He has a big graph with lots of colored arcs. I’m surprised that he hasn’t noticed that such a ‘graphic’ does not show much. Instead, for his data there is a matrix – for some positive integer n, n emotions, and emotions i, j = 1, 2, …, n, he has a matrix P = (p<sub>ij</sub>) where p<sub>ij</sub> is the probability of being in state j next given that we are now in state i. Such a matrix is the core of the subject of Markov chains. See E. Cinlar, ‘Introduction to Stochastic Processes’ for much more. Uh, we’re talking stochastic processes, that is, definitely things that are ‘random’! The matrix is more useful than the graph if only because matrix algebra can be applied to the matrix and does yield some powerful results. Again, there is more in Cinlar.

(4) He said that “emotions are continuous, not discrete”. No. In the terminology of Markov processes, he is saying that his state space of emotions is discrete but his process in time is continuous. That is, he has a continuous-time, finite (discrete) state space Markov process. Cinlar says much more.

(5) He mentioned ‘clusters’.
From that and his transition probabilities, etc., he appears to have taken some introductory courses in probability and statistics for the social sciences or some such and is using those.

(6) What he has so far is neither very surprising nor very useful. So, so far he is short on valuable applications. E.g., his reaction to the work so far appears to be to look at the clusters as a summary, qualitative view of the work. Instead, one thing he might do is start to look at what can drive a person from one emotion to another, in some circumstances find the cost in some sense (e.g., money, time) for such a transition, and then find the shortest paths for moving from a given emotion to a desired one. Here the work would still be routine and elementary but might be more useful.

(7) For no doubt most of his applications, really he needs to assume that the graph (or transition matrix) he has applies outside of the context where he collected the data. In some cases, this assumption will be shaky.

But, yes, broadly he has shown that it is possible to do quantitative analysis of emotions. But, really, there never was much doubt. Finally, what he is doing has essentially nothing to do with my work!

> what if they can’t do that or what if someone else already beat them to it?

What I described, that is, being able to write the design documents, is where the main challenge is. The solution is not routine software or what is pursued in computer science. So, really, I doubt that much is being done that is at all new or powerful. Basically the ‘Silicon Valley culture’ is, to borrow from the first Indiana Jones movie, “digging in the wrong place”. And with the Silicon Valley focus on routine software and academic computer science, they won’t figure out where the, or a, right place is. In simple terms, Silicon Valley and its ‘culture’ in technical things is very much an echo chamber.
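To make the earlier point about the transition matrix vs. the colored-arc graphic concrete, here is a minimal sketch of such a chain (the three emotion states and every probability are invented for illustration; the talk’s actual numbers are not in the thread):

```python
states = ["calm", "stressed", "happy"]   # hypothetical emotion states
P = [                                     # P[i][j] = prob(next = j | now = i)
    [0.7, 0.2, 0.1],                      # from calm
    [0.3, 0.5, 0.2],                      # from stressed
    [0.4, 0.1, 0.5],                      # from happy
]

for row in P:                             # rows of a stochastic matrix sum to 1
    assert abs(sum(row) - 1.0) < 1e-12

def step(dist, P):
    """One step of the chain: new_j = sum_i dist_i * p_ij."""
    n = len(P)
    return [sum(dist[i] * P[i][j] for i in range(n)) for j in range(n)]

# Matrix algebra the graphic cannot give you: iterating a start
# distribution against P approaches the stationary distribution
# of this (ergodic) chain, regardless of the starting state.
dist = [1.0, 0.0, 0.0]                    # start in "calm"
for _ in range(100):
    dist = step(dist, P)
print(dict(zip(states, (round(d, 3) for d in dist))))
```

The same iteration written with the matrix powers P, P², P³, … is what the ‘big graph with lots of colored arcs’ cannot express, and it is exactly the machinery Cinlar develops.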
Or, as is nearly standard in corporations, no one wants to pursue a project that is much different in approach from what the CEO has or would have in mind.

So, at Google, it’s Page and Brin, and they want routine coding, the classic computer science algorithms, programming languages, operating systems, some of what else is done in computer science, how to handle their big data, how to run their huge server farms, some statistics at times, some efforts at natural language processing, refinements on PageRank, some attention to user interface at their Web site, and artificial intelligence and machine learning.

In particular, when they recruit, they are strongly ‘self-perpetuating’, that is, they believe that they know what all the good stuff is and then with astounding severity work to ensure that each candidate is really good at what they already believe is most important and, indeed, where they already have some expertise – echo chamber, self-perpetuating, re-plowing the same furrow over and over, be like Page and Brin.

Google has been seduced by their financial success and their respect for Stanford’s computer science department. But there are other departments at Stanford that provide a better foundation for the future. I can say this in part because there is zero chance that Page or Brin would react!

It’s a very old story: In effect corporations, especially financially successful ones, are set up and managed to pursue the success, the bird already in the hand, they have. For this, unless pushed hard otherwise, they want to do more of the same that got them to the success they have. Then, a subordinate is to add effort to the goals and broader concepts of the supervisor. Then the supervisor is betting their career on the success of the work they pass out to their subordinates.
Then such a supervisor doesn’t want their subordinates doing work the supervisor doesn’t understand. So, this situation extends from the first, lowest-level manager all the way to the top and the CEO, who doesn’t want to have to tell the Board that he sponsored a project that failed and that he didn’t understand. So, basically the only projects that can be pursued are those the CEO understands at least broadly. So, at Google, the projects are ones that Page and Brin understand in at least some broad sense, and those projects are based on routine software and what is in the Stanford computer science department.

Net, business is just awful at working effectively with things that are new. The research community is much, much better at it. Very thankfully for US national security, since early in WWII, the US DoD has understood quite effectively the importance of research academics for US national security. DoD? Yes. Business? No.

So, right: Typically a new idea has little competition from established businesses. And if some new work has some crucial prerequisites too advanced or rare for the startup community, then that work can go forward with little or no chance of meaningful competition. In effect, Page and Brin, for all their billions, don’t want to risk wasting even 10 cents on something that is outside of their broad understanding of routine coding and Stanford computer science. However, for Google’s work, there are departments at Stanford much more relevant for the future than computer science.
Yes, I saw the Markov straight away.

So… I’ve designed and coded a system which is not routine coding and not dependent on Probability frameworks (Bayes, Markov, Boltzmann-Hinton, all the usual correlation/automata/“Big Data” suspects). This Kanjoya-Stanford approach to NLP and Stanford’s Sentiment Treebank approach:

* http://gigaom.com/2013/10/0…

is, for me, “digging in the wrong place with the wrong tools for human subjectivity.”

John von Neumann said: “When we talk mathematics, we may be discussing a secondary language built on the primary language of the nervous system.” Let’s suppose there is a germ of truth in von Neumann’s statement. If so, Probability as a mathematical language is subordinate to another language, another tool. What is that primary language?

Why did Leonardo da Vinci say, “All our knowledge has its origins in our perceptions”? Did he literally preface Symbolic Systems and Visual Recognition (both of which are dependent on Probabilistic algorithms), or was he also talking about subjective interpretation? If so, what would a tool to measure subjective interpretation do? How would it work, and how could it be constructed to act in precedence before Probability? How does that affect ranking and correlations of data points?

These are the things I thought of before I designed and coded my non-routine system.
Okay, I read:

Derrick Harris, “Stanford researchers to open-source model they say has nailed sentiment analysis”, Gigaom, Oct. 3, 2013 – 3:00 PM PDT.

Maybe there is some utility there, but as research I’m not impressed. Basically their analysis of the natural language input is nothing like ‘understanding’ of natural language. So, they have some tricky ways to manipulate sentences to estimate ‘sentiment’, but understanding language needs to understand the full range of what is in the libraries, and ‘sentiment’ is just one task out of millions. Moreover, their techniques, ‘deep learning’ or whatever deceptive name they use, have no hope of generalizing to language understanding.

Their work is about as close to natural language understanding as the first Wright brothers airplane was to, say, a red-tailed hawk: Both fly, but by very different means, and the hawk is really ‘alive’ in many additional, complicated ways. Their work looks like routine, trivial busy work to let Google keep writing paychecks for the researcher and to get publicity for Google. Real natural language understanding is clearly a very different thing, and real progress would be welcome but seems far off.
The data and lexical libraries (Harvard Enquirer, LIWC, Princeton WordNet and others), I realized very early on, are limited and limiting.

Moreover, for scientific integrity, and to be consistent with the idea of us as a “data-driven society”, we need to ask the hard questions: Are the word definitions of a handful of academics REPRESENTATIVE of 1.8+ billion English speakers in the world? What gave these academics the mandate to define the meaning of words? Were they democratically elected by the 1.8+ billion English speakers?

In Probability sampling, we avoid biases and skews by surveying a representative population. A handful of academics should not be deciding the definitions, meanings, and sentiment value of words for 1.8+ billion English speakers.

@fredwilson:disqus – Here’s an example of a network effects and crowdsourcing area that needs disrupting, for business and democratic reasons.
Good to see a few non-Type A personalities make it to the top.
A great interview to see how these guys think.
Sadly, the comments around healthcare were to be expected.

SB: “Generally, health is just so heavily regulated. It’s just a painful business to be in. It’s just not necessarily how I want to spend my time. Even though we do have some health projects, and we’ll be doing that to a certain extent. But I think the regulatory burden in the U.S. is so high that I think it would dissuade a lot of entrepreneurs.”

That’s a pretty big headwind in terms of signaling – no?
A great interview. It gives an insight into how online products become what they are. Why people make certain decisions about their firm and product is what will, in the coming years, influence the way we interact with technology.
As an ESG (Environmental, Social, Governance) analyst, it’s encouraging to hear what is said at 6:14. Thanks for posting a great interview.
Honest question: doesn’t Larry Page seem to be a bit removed from reality? His idea of addressing unemployment only sounds reasonable on the surface. Wouldn’t splitting a full-time job into two part-time jobs mean half the income for the employees? Can people make a living off of that? What about the additional overhead of HR, communication, health insurance, etc? His talk last year about setting aside part of the world for technologists to experiment outside of law and regulation also seems just off. Then there was his comment about how people only care about privacy around health because of insurance (or maybe people don’t want everybody to know they used to have a drug problem, STD, or other very personal health issue). A natural explanation could simply be that he’s fundamentally a scientist and not well-versed in other areas, but if that’s the case, he should be a bit more hesitant to speak on those matters. Personally, it makes me doubt the Google leadership a bit when I hear these things.
Comments (Archived):
Thanks–cued up.I’m a big fan of his and first met him when I worked for Digital F/X and he was on the board. He also does some really interesting investments in the healthcare/wellness space which is a new consulting interest of mine.Have a great Sunday Fred!
super cool interview. Google are going to WIN for a long time to come. Long-term plan, eager to make huge moonshot bets. Overarching strategy is actually very cohesive. So comfortable in each others company. Basically just 2 geeks who find themselves, despite themselves, running the world+ any non-shaven, messy-haired, Croc wearing billionaire is a friend of mine.
The full transcript for those who are interested – http://www.khoslaventures.c…Loved a few points – + 4 year outlook vs 20 year outlooks for CEOs+ Large companies and the idea of having multiple adjacent focuses that are interrelated+ The whole chat around self driving cars and how it’ll transform the economics of the industry by cutting 97% of the costs+ The chat around solving the unemployment issues (I’m sure Albert will like that bit)
Thanks Rohan. That transcript just saved me 35 mins.
For self-driving cars, I don't buy the 97% cost reduction. Think about how self-driving cars would work in NYC or Beijing. Funny thing about transportation: we all tend to need it at the same time*.

*This is why companies like Sidecar and Uber can charge 2-4x their base fares at will.
Glad! 🙂
I think it's great they are making big bets. As far as government is concerned, they should start listening to John Taylor's Economics One class when it goes virtual this year. http://economicsone.com/201…
Larry: "For every page of regulation you add, you need to remove a page"… if only!
When he said that I thought, in the US we have the Bill of Rights. That might be enough.
So you’re thinking reboot? Start with a clean slate?
I think we should. In the last year I have looked pretty seriously at industries like farming. They are so screwed up with regulation and subsidy that the only way would be to start from scratch. For example, if you grow organic food in Indiana, you cannot sell it in Illinois unless you go through a distributor (or at a farmer's market, which has other operational costs). Meanwhile, the factory farmers receive all kinds of subsidies courtesy of the Farm Bill, which artificially reduces the cost of their products.

Raw milk is illegal in this country, yet in the rest of the world it's legal. The USDA and FDA are actively seeking out raw milk producers and taking away their animals and land. But if we allowed production and clearly labeled it, we would create entire artisan industries (which would create jobs etc.).

I think the biggest mistake is that people believe Republicans are for big corps and Democrats are for the little guy. If anything, Obama has been ONLY for the big guy: big banking, big insurance, big farm, etc. Big govt programs and big regulation only help big guys. Google is now a big guy. They actually will have an interest in perpetuating big government to kill potential competitors.
I can't argue with you as regards farming. Big business farms and politicians make it look like a program to help out small family farms, which are increasingly few in the USA. Odd how many people have a picture of farming in their mind that looks like Old MacDonald. Ain't true.
What's amazing is how they are trying to get control of the land. Big corps and big govt are creating economic incentives to help themselves. Small independent farmers are confronted with high costs and few ways to get to a mass market (regs), and they sell their land to the big corps, who then get subsidies to farm it and are big enough to process and distribute without running afoul of the FDA or USDA. Polyface Farms in Virginia is flying in the face of it: http://www.polyfacefarms.com/ And they open-source their processes so anyone can do it.
I read somewhere that the # of small farms is increasing dramatically (no time to look now). This ties, I believe, to the huge increase in awareness, willingness to pay, and availability of alternative distribution in the major urban areas.
Not sure they are. I know in row crops they are dwindling. Most farmers rent the land and work it for a big corp. (That may be the most economically efficient outcome, but let's get rid of subsidies and see.) In meat processing, it depends on the vertical. Chicken and pork are not like beef. But the USDA and FDA strangle smaller producers via regs.

You are very prescient in talking about major urban areas. It's impossible to have a functioning business without being close to a dense area with enough customers to support you.
You also quite bizarrely wash your eggs and consequently must refrigerate them. Cockamamy. Pun intended.
We need that.
Where can we find the regulation that govern that process ?
Only in Larry Page’s head.
Thanks for sharing Fred. Great breakfast entertainment and food for thought.
"The complexity of government increases over time." That's an understatement. So, does it have to be that way?
I don't think so, but complexity is a moat that protects a lot of vested interests. Large insurance companies hire lots of new lawyers just to do compliance, but it increases capital requirements for a new insurance company. There is also a huge mismatch between the policy analysis skill available in the public versus private sector. A lot of the influence lobbyists wield involves their superior command of complexity in various fields compared to the understaffed and under-resourced internal government research orgs.
Best line: Larry Page @ minute 11:20, "So I think the actual amount of knowledge that you get out of your computer vs. the amount of time you spend is still pretty bad; our job is to solve that, and most of the things we are doing make sense in that context."
Yep, think about it: Google knows just how long many searches take, and how wrong the stopping point of many searches is.
They come across well. I like them. I still don't trust Google.

That there was no question about virtual currencies, or even a reference to virtual currencies in the discussion about economies and employment etc., seemed anomalous.

Is that Sergey's favourite top? He seems to wear it often. Who makes it, I wonder?
According to Google reverse image search: https://www.google.com/sear…
thanks. can’t see the top there.
Google X seems like a great initiative, reminiscent of HP Labs & Microsoft Research, both legendary in their impact during their respective heydays.

I wish Google were more forthcoming with their real intent behind each product, service, or initiative. Everything starts by being very important and "world changing", until the plug is pulled.

Of course they build lots of goodwill with users because they provide so many free services, but they also take some trust off the table when they discontinue services or let them linger.

Steve Jobs would probably tell them now, "You're still doing too much." But overall, Google gives back more than it takes, and that's a good thing.
What does ‘gives back more than it takes’ really mean William?
Understood re: business, of course. Give & take comes in many forms. Among them, Google gives us lots of free services (Search, Now, Hangouts, Calendar, Drive, Gmail, etc.). On the take side, they take our data mostly, and our time / loyalty commitments.
So you are saying they are a media advertising model? Never heard anyone honestly call a media model where they sell our data "give and take".
It is interesting that William and many folks, including USV, believe in the value created by large networks: value that gets created by the many millions of users who contribute to the network without getting paid for it.

USV would not invest in a service if they don't see a large network forming, and Google would not be considered a good corporate citizen, as William points out, but it all comes at a tremendous cost. It has a net effect of centralizing wealth, and overall economic growth is limited.

So as much as I like Google and Twitter and can relate to the USV investment model from a wealth creation standpoint, I don't think it benefits society in a healthy way in the long run.

I wonder: if the Twitters, YouTubes, and Facebooks of the world were to incorporate cryptocurrencies, giving back to the user small amounts of value that can be exchanged for goods and services in the digital networked economies, then possibly there is a way to also claim that label of "giving back more than they take in."

I wonder if the USVs of the world would invest in such constructs.

But for now I don't think there is a give and take really, as much as "we throw you a bone and in return you get to stay afloat"…
I agree the idea of give and take is the wrong terminology. I agree that business is about making money. But:

When I see companies (like Google and Facebook) looking at alternative electric supply with an eye towards the environment, and recycling hardware, this is good stuff.

When I see companies like the Gap moving to a higher minimum wage voluntarily and funding third-party initiatives to monitor and make transparent the labor conditions where their goods are produced, this is good stuff.

I would consider all of the above good corporate citizens to some degree.
I didn't say they sell our data. They use it implicitly or in aggregate.
My mistake, but in reality they are selling it. Fine with me. Advertising pays for the web; basically, before the web and now, it is simply tolerated.
Changing the topic 🙂 – Did you notice the new changes to the Disqus Dashboard.
nope
Hmmm. Maybe the roll-out is staggered.
I'll take a look, and thanks for the nudge my friend.

Honestly, I think little of Disqus as an entree into much cross-network or community any longer. That dream, and I think that need, faded a bit over time as my own personal networks created their own dynamics.

I am reliant on Disqus as I'm reliant on a very few communities. They are key there, and the smartest plumbing in the world of conversations.

The idea that we have to work at all hard to discover information or communities, or that most participate in more than a couple, is not reality to most today. Or so I think.
I agree. They are on the incoming end of conversations, not at the initiation tip.
Not what I thought about them in 2009: "Comments, Conversations and Community" http://awe.sm/r70ZK
Sorry, I have to conclude that artificial intelligence (AI) and machine learning (ML), e.g., as in the video, are, as those fields stand, hardly more than just junk. About the best that can be said for both is that they are not really 'technologies' or 'techniques' but just goals. Back when I was working on AI at Watson at Yorktown Heights, my complaints about AI were answered with: the field is a 'goal'.

Okay, a goal. But I see nothing in AI or ML that looks like a solid approach to anything significant in 'intelligence' or 'learning'. So the naming is essentially just hype.

I can believe in 'problem solving': humans have done amazing cases of problem solving for millennia. E.g., once in the Middle Ages, there was an effort to move an obelisk in Rome. There was a big turnout, lots of plans, wood, rope, workers, etc., and, finally, success. But we should recall how the thing got there in the first place: some of Caligula's slaves had cut the thing as one piece out of solid rock near the headwaters of the Nile, moved it to Rome, and put it in place. So they did some amazing 'problem solving'.

More was done by the Wright brothers: Langley had just fallen into the Potomac River, but the brothers 'cheated'. They developed the first really useful wind tunnel and a good wing shape (they didn't yet understand Reynolds number, so got the scaling wrong) and were able to do accurate enough calculations of lift, drag, and thrust. And they had a good enough solution to the problem of three-axis control. So, as they packed up on the way to Kitty Hawk, they had on paper and in their workshop some really good evidence that they would be successful. They did; Langley didn't. The brothers did some good enough basic engineering; they had a specific goal and knew what the heck they were doing.

I believe in problem solving based on engineering, where we know what the heck we are trying to do and have some solid ideas for getting it done.
So far the new parts of AI and ML don't qualify. When our group gave a paper on AI at an AAAI/IAAI conference at Stanford, my conclusion was that the good work, and there was some, was not AI at all but just good engineering.

Such 'engineering' is the broad 'paradigm', but that point says very little about the 'big question' in the video clip, e.g., it doesn't say what great progress in 'technology' and problems solved will come in the next 20 years. Instead, under the paradigm of engineering, we take the problems and the solutions mostly just one at a time. Each time someone does some good, useful, valuable engineering, fantastic, but that does not enable any general extrapolation. Sorry 'bout that. Instead, successes are one at a time, carefully handmade.

I suspect that Page and Brin have been badly influenced by some parts of Stanford that want to push AI and ML and neglect the wonderful background Stanford has in pure and applied math, science, engineering, and technology.

If AI and ML are only goals, okay, but then they are still too narrow for the good problem solving to be done. And for good solutions, still the best approaches are, and may I have the envelope, please: pure and applied math, science, and engineering. Sorry 'bout that.

AI and ML themselves and their 'techniques' so far, e.g., neural networks and maximum likelihood estimation, bring much that is new and good, although the new is not good and the good, not new. The contributions that are unique to AI and ML fill much-needed gaps in progress and would be illuminating if ignited.

The idea that computers are 'electronic brains' is not at all new and was marketing hype going way back. Yes, at times computers can do some amazing things that, without a computer, would need 'intelligence', but that does not mean that such a computer was in any meaningful sense 'intelligent'.
Indeed, so far there is not a single glimmer of hope that humans know how to implement anything on a computer that is anything like natural intelligence, even that of my kitty cat Pollux (bright little guy).

"Self-driving cars"? Nope: so far such cars are only for streets where nearly everything has been previously mapped down to a few millimeters. Move the stop lights, and you have to map again before the 'self-driving' car can 'drive'; the 'self-driving' car can't really find the stop lights. The 'self-driving cars' are about as 'intelligent', autonomous, and flexible as a train on rails.

Yes, in part, I'm bitter about the hype because it is harmful, i.e., it pollutes the well, meaning that anything good has to fight the bad reputation from the pollution.

Finally, I don't like the idea of the many places where things could have been different. Instead, pick a problem, work to find a good solution, typically seen as good just on paper, and, in cases where you have such a solution, then implement and sell the solution. You should have a 'must have' solution to a really big problem where such a solution is clearly very much wanted (e.g., a safe, effective, cheap one-pill cure for any cancer) and can make a lot of money.
what’s your story @sigmaalgebra:disqus ?
As from Lebesgue, Kolmogorov, etc., given a set A, a ‘sigma algebra’ is a collection of subsets of A that includes A and is closed under complements and countable unions. There is much more in Halmos, ‘Measure Theory’.
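For readers who want the definition above spelled out, the standard formulation is:

```latex
% Definition: a sigma-algebra on a set A
\Sigma \subseteq 2^{A} \text{ is a } \sigma\text{-algebra on } A \text{ iff}
\begin{align*}
 &(1)\quad A \in \Sigma, \\
 &(2)\quad E \in \Sigma \implies A \setminus E \in \Sigma, \\
 &(3)\quad E_1, E_2, \ldots \in \Sigma \implies \textstyle\bigcup_{n=1}^{\infty} E_n \in \Sigma.
\end{align*}
```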
The philosopher Plato said, "All learning has an emotional base," whilst the neuroscientists say, "Emotions amplify our memories."

It's known that Google is building the Star Trek computer, which is the basis of Google Now:
* http://www.slate.com/articl…

In Oct 2012, I asked Amit Singhal, Google's SVP Engineering and Head of its Core Ranking team (aka Search), whether their Star Trek computer could and would be able to understand our emotions. He replied, "That's a very deep question that journalists usually only ask an hour or so into Q&A. No, we focus on objective facts and figures."

In March 2014, Ray Kurzweil of Google gave a TED talk in which he said, "The frontal cortex (where we think, 'That's ironic. He's funny. She's pretty') is not qualitatively different from the neocortex (where our numerical reasoning and motor-neuron controls happen). It's a quantitative expansion of our neocortex. That additional quantity of thinking was the enabling factor for us to take a qualitative leap and invent language and art and science and technology."
* https://www.ted.com/talks/r…

Here are a couple of my thoughts on this:

(1.) If the Google machine (and its IBM Watson counterpart, which was called a "human autistic savant" by its technical lead) can't understand emotions, then by Plato's logic, how can it be a Machine LEARNING system?

(2.) Quality is not the same as or equivalent to quantity. Let's do a simple mathematical logic proof to verify this. Suppose at the start of the week we have 1 dog and 0 cats, and then:
* Every day we get 1 more dog and 0 more cats.
* Every year each dog produces 100 more dogs.

Now, is this statement true or false:
* At the end of year 1 we have 36,865 cats.

The answer is false. This is because the increase in the quantity of dogs does not produce a qualitative change from the dogs into cats. To me, quality is a completely different factor variable from quantity. That's an important distinction because it affects how we define and build Intelligent Systems.
The idea that intelligence is purely mathematical and quantitative should be rejected by all intelligent people.
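The dog/cat arithmetic a couple of comments up can be checked mechanically. A minimal sketch, assuming the commenter counted 365 daily dogs by year end (which is how the 36,865 figure falls out), showing that the cat count never changes:

```python
# The comment's thought experiment: the quantity of dogs grows,
# but no qualitative change from dogs into cats ever occurs.
dogs, cats = 1, 0

# "Every day we get 1 more dog and 0 more cats": 364 more days
# after the starting day gives 365 dogs by year end.
for _day in range(364):
    dogs += 1

# "Every year each dog produces 100 more dogs."
dogs += dogs * 100

print(dogs, cats)  # 36865 0
```

So the 36,865 figure describes dogs, never cats: quantity grew, quality did not change.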
I found the Slate article at http://www.slate.com/articl… but I can't do much with a news story, especially in a literary magazine, about such a technical subject. A picture from the TV series 'Star Trek' doesn't help.

For such a project, Google will have to write code, and they can do that, if they know what code to write, i.e., what the heck the code is supposed to do. E.g., for such code, they need a 'specification' or a 'design document', or a functional design, component design, module design, etc. They can skip such design documents only if they are able to write them! If they can't write them, then they are stuck-o right there.

Certainly it is possible, under some significant circumstances, to use mathematics to analyze and predict emotions, at times with considerable accuracy.
Have you seen this from Sentiment Symposium in March 2014? It reflects Stanford's current approach to NLP with emotions:
* http://vimeo.com/90178708

Re: Google needing a 'specification' or a 'design document', or functional design, component design, module design, etc.: what if they can't do that, or what if someone else already beat them to it?
Okay, I watched it. Yes, for your concern about the connections between math and emotions, he did do some quantitative analysis of emotions. So, if only from this example, such is possible. Broadly, what he did is okay.

Remarks:

(1) Just what he did with natural language processing was not made clear. Instead, somehow he was able to take the texts from his 'corpus' and 'encode' or 'tag' each with some emotions.

(2) He said that the emotion when you wake up is not "random", or some such. Here he has a naive view of random variables. Instead, the emotional state on waking up is certainly 'random' in the sense that we can call it the value of a random variable. By saying that the emotion was not 'random', what he meant to say was that the emotion and the random variable are not probabilistically independent of everything else. So, he confused 'random' and 'independent'.

(3) He has a big graph with lots of colored arcs. I'm surprised that he hasn't noticed that such a 'graphic' does not show much. Instead, for his data there is a matrix: for some positive integer n, n emotions, and emotions i, j = 1, 2, …, n, he has a matrix P = (p_ij) where p_ij is the probability of being in state j next given that we are now in state i.

Such a matrix is the core of the subject of Markov chains. See E. Cinlar, 'Introduction to Stochastic Processes', for much more. Uh, we're talking stochastic processes, that is, definitely things that are 'random'!

The matrix is more useful than the graph, if only because matrix algebra can be applied to the matrix and does yield some powerful results. Again, there is more in Cinlar.

(4) He said that "emotions are continuous, not discrete". No. In the terminology of Markov processes, he is saying that his state space of emotions is discrete but his process in time is continuous. That is, he has a continuous-time, finite (discrete) state space Markov process. Cinlar says much more.

(5) He mentioned 'clusters'.
From that and his transition probabilities, etc., he appears to have taken some introductory courses in probability and statistics for the social sciences or some such and is using those.

(6) What he has so far is neither very surprising nor very useful. So, so far he is short on valuable applications. E.g., his reaction to the work so far appears to be to look at the clusters as a summary, qualitative view of the work. Instead, one thing he might do is start to look at what can drive a person from one emotion to another, in some circumstances find the cost in some sense (e.g., money, time) for such a transition, and then find the shortest paths for moving from a given emotion to a desired one. Here the work would still be routine and elementary but might be more useful.

(7) For no doubt most of his applications, really he needs to assume that the graph (or transition matrix) he has applies outside of the context where he collected the data. In some cases, this assumption will be shaky.

But, yes, broadly he has shown that it is possible to do quantitative analysis of emotions. But, really, there never was much doubt. Finally, what he is doing has essentially nothing to do with my work!

> what if they can't do that or what if someone else already beat them to it?

What I described, that is, being able to write the design documents, is where the main challenge is. The solution is not routine software or what is pursued in computer science. So, really, I doubt that much is being done that is at all new or powerful. Basically the 'Silicon Valley culture' is, to borrow from the first Indiana Jones movie, "digging in the wrong place". And with the Silicon Valley focus on routine software and academic computer science, they won't figure out where the, or a, right place is.

In simple terms, Silicon Valley and its 'culture' in technical things is very much an echo chamber.
Or, as is nearly standard in corporations, no one wants to pursue a project that is much different in approach from what the CEO has or would have in mind.

So, at Google, it's Page and Brin, and they want routine coding, the classic computer science algorithms, programming languages, operating systems, some of what else is done in computer science, how to handle their big data, how to run their huge server farms, some statistics at times, some efforts at natural language processing, refinements on PageRank, some attention to the user interface at their Web site, and artificial intelligence and machine learning.

In particular, when they recruit, they are strongly 'self-perpetuating', that is, they believe that they know what all the good stuff is and then with astounding severity work to ensure that each candidate is really good at what they already believe is most important and, indeed, where they already have some expertise: echo chamber, self-perpetuating, re-plowing the same furrow over and over, be like Page and Brin.

Google has been seduced by their financial success and their respect for Stanford's computer science department. But there are other departments at Stanford that provide a better foundation for the future. I can say this in part because there is zero chance that Page or Brin would react!

It's a very old story: in effect, corporations, especially financially successful ones, are set up and managed to pursue the success, the bird already in the hand, that they have. For this, unless pushed hard otherwise, they want to do more of the same that got them to the success they have. Then, a subordinate is to add effort to the goals and broader concepts of the supervisor. Then the supervisor is betting their career on the success of the work they pass out to their subordinates.
Then such a supervisor doesn't want their subordinates doing work the supervisor doesn't understand. So, this situation extends from the first, lowest-level manager all the way to the top and the CEO, who doesn't want to have to tell the Board that he sponsored a project that failed and that he didn't understand. So, basically the only projects that can be pursued are those the CEO understands at least broadly. So, at Google, the projects are ones that Page and Brin understand in at least some broad sense, and those projects are based on routine software and what is in the Stanford computer science department.

Net, business is just awful at working effectively with things that are new. The research community is much, much better at it. Very thankfully for US national security, since early in WWII, the US DoD has understood quite effectively the importance of research academics for US national security. DoD? Yes. Business? No.

So, right: typically a new idea has little competition from established businesses. And if some new work has some crucial prerequisites too advanced or rare for the startup community, then that work can go forward with little or no chance of meaningful competition. In effect, Page and Brin, for all their billions, don't want to risk wasting even 10 cents on something that is outside of their broad understanding of routine coding and Stanford computer science. However, for Google's work, there are departments at Stanford much more relevant to the future than computer science.
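The transition-matrix point made above (P = (p_ij), the core of Markov chains) can be made concrete. A minimal sketch in Python, with a hypothetical 3-emotion state space and made-up probabilities; none of these numbers come from the talk itself:

```python
import numpy as np

# Hypothetical 3-emotion state space; values are purely illustrative.
emotions = ["calm", "stressed", "happy"]

# Transition matrix P = (p_ij): p_ij is the probability of being in
# emotion j next, given that the current emotion is i. Rows sum to 1.
P = np.array([
    [0.7, 0.2, 0.1],
    [0.4, 0.5, 0.1],
    [0.3, 0.1, 0.6],
])

# n-step transition probabilities are just matrix powers: P^n.
P10 = np.linalg.matrix_power(P, 10)

# The stationary distribution pi solves pi P = pi, i.e. pi is a left
# eigenvector of P for eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi = pi / pi.sum()

print(dict(zip(emotions, np.round(pi, 3))))
```

This is the kind of "powerful result from matrix algebra" referenced above: for a regular chain, every row of P^n converges to the stationary distribution pi, the long-run fraction of time spent in each emotion.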
Yes, I saw the Markov straight away.

So… I've designed and coded a system which is not routine coding and not dependent on probability frameworks (Bayes, Markov, Boltzmann-Hinton, all the usual correlation/automata/"Big Data" suspects).

This Kanjoya-Stanford approach to NLP and Stanford's Sentiment Treebank approach:
* http://gigaom.com/2013/10/0…
is, for me, "digging in the wrong place with the wrong tools for human subjectivity."

John von Neumann said: "When we talk mathematics, we may be discussing a secondary language built on the primary language of the nervous system." Let's suppose there is a germ of truth in von Neumann's statement. If so, probability as a mathematical language is subordinate to another language, another tool. What is that primary language?

Why did Leonardo da Vinci say, "All our knowledge has its origins in our perceptions"? Did he literally preface symbolic systems and visual recognition (both of which are dependent on probabilistic algorithms), or was he also talking about subjective interpretation? If so, what would a tool to measure subjective interpretation do? How would it work, and how could it be constructed to act in precedence before probability? How does that affect the ranking and correlation of data points?

These are the things I thought of before I designed and coded my non-routine system.
Okay, I read: Derrick Harris, "Stanford researchers to open-source model they say has nailed sentiment analysis", Gigaom, Oct. 3, 2013.

Maybe there is some utility there, but as research I'm not impressed. Basically their analysis of the natural language input is nothing like 'understanding' of natural language. So, they have some tricky ways to manipulate sentences to estimate 'sentiment', but understanding language requires understanding the full range of what is in the libraries, and 'sentiment' is just one task out of millions. Moreover, their techniques, 'deep learning' or whatever deceptive name they use, have no hope of generalizing to language understanding.

Their work is about as close to natural language understanding as the first Wright brothers' airplane was to, say, a red-tailed hawk: both fly, but by very different means, and the hawk is really 'alive' in many additional, complicated ways.

Their work looks like routine, trivial busywork to let Google keep writing paychecks for the researcher and to get publicity for Google. Real natural language understanding is clearly a very different thing, and real progress would be welcome but seems far off.
The data and lexical libraries (the Harvard General Inquirer, LIWC, Princeton WordNet, and others), I realized very early on, are limited and limiting.

Moreover, for scientific integrity and to be consistent with the idea of us as a "data-driven society", we need to ask the hard questions: Are the word definitions of a handful of academics REPRESENTATIVE of 1.8+ billion English speakers in the world? What gave these academics the mandate to define the meaning of words? Were they democratically elected by the 1.8+ billion English speakers?

In probability sampling, we avoid biases and skews by surveying a representative population. A handful of academics should not be deciding the definitions, meanings, and sentiment values of words for 1.8+ billion English speakers.

@fredwilson:disqus, here's an example of a network effects and crowdsourcing area that needs disrupting, for business and democratic reasons.
Good to see a few non-Type-A personalities make it to the top.
A great interview to see how these guys think.
Sadly, the comments around healthcare were to be expected.

SB: "Generally, health is just so heavily regulated. It's just a painful business to be in. It's just not necessarily how I want to spend my time. Even though we do have some health projects, and we'll be doing that to a certain extent. But I think the regulatory burden in the U.S. is so high that I think it would dissuade a lot of entrepreneurs."

That's a pretty big headwind in terms of signaling, no?
A great interview. It gives an insight into how online products become what they are. Why people make certain decisions about their firm and product is what will, in the coming years, influence the way we interact with technology.
As an ESG (Environmental, Social, Governance) analyst, it's encouraging to hear what is said at 6:14. Thanks for posting a great interview.
Honest question: doesn’t Larry Page seem to be a bit removed from reality? His idea of addressing unemployment only sounds reasonable on the surface. Wouldn’t splitting a full-time job into two part-time jobs mean half the income for the employees? Can people make a living off of that? What about the additional overhead of HR, communication, health insurance, etc? His talk last year about setting aside part of the world for technologists to experiment outside of law and regulation also seems just off. Then there was his comment about how people only care about privacy around health because of insurance (or maybe people don’t want everybody to know they used to have a drug problem, STD, or other very personal health issue). A natural explanation could simply be that he’s fundamentally a scientist and not well-versed in other areas, but if that’s the case, he should be a bit more hesitant to speak on those matters. Personally, it makes me doubt the Google leadership a bit when I hear these things.
http://technbiz.blogspot.co…