Where Is The Value In The Tech Stack?
Yesterday’s discussion was fantastic. Days like that are where the USV community really shines.
David argued that data is the new oil, not software. There’s a lot of validity in that comment.
Kirk pointed out that ISPs are the pipelines of the digital revolution and I liked that.
And TR followed Kirk with this observation:
Oil in all its various forms (software)
Infrastructure/Drilling Equipment (hardware)
What they were all discussing is the tech stack and where the value is.
A simple version of the tech stack of the information revolution would look something like this:
At USV, we have invested mostly in the top three layers, and most actively in layers four and five (Applications and Data), but we are increasingly drawn to Access and Infrastructure (software).
But, as I tried to point out in the Dentist Office Software Story, software alone is a commodity. You need data to provide defensibility and differentiation. And so most of USV’s investments have been in companies that combine software and data to provide a solution to the market that we believe is defensible, usually via network effects, which are a data-driven phenomenon.
So why would we move down the stack to Infrastructure (software) and Access? Well, there are data-driven network effects in those layers too, if you know where to look for them. And, increasingly, we are finding them there, and finding them at prices that make a lot more sense to us too.
So, in summary, I agree with David that software alone is not the new oil. But neither is the entire tech stack. That’s why I moved away from the phrase “tech is the new oil” that I used in the comments the day before. I also don’t think data alone is the new oil.
The new oil is going to be found in various places in the tech stack where software and data come together to produce a service that has high operating leverage at scale and is defensible by the network effects that the data provides. That’s a mouthful. “Software is the new oil” sounds a lot better. But the mouthful is more accurate.
have you ever been approached to write an article for a mainstream publication (e.g. NYT)?
I agree with all that Fred, and would just add one extra element: control.

For software, control is the difference between open source and closed source; for data it’s the difference between open data and proprietary.

To be able to leverage the economics of zero-cost replication you must be able to control (via IP rights or physical access) who gets a copy and under what terms. If you don’t have control then you need to look at less scalable models — which is the underlying problem with open source companies (though it can be made to work).

For the same reason, the big social networks keep their most valuable data private.

Control basically transforms a potentially abundant resource into one with artificial scarcity.
I agree. Most, maybe all, of the big social nets are media-based models, so one would think that open/closed to them is the balance between keeping people coming and comfortable, and extracting targeted patterns from the data to sell to their advertisers.
Yep – one way of gaining control is constantly providing and proving value to both sides of the equation – data givers and data takers. If people voluntarily give their data for a service, and that data can then be arranged into another product for customers, then I think you are in business. All this without violating privacy laws 🙂
The value in the tech stack is more similar to old-school media than oil, because of the lack of uniformity in the end users (people vs. machines).

During the Convergence mania of the ’90s, the eternal question was content versus distribution. The answer was both.

Content = Hit (application + data, basically)
Distribution = infrastructure + HW + access

A hit show like Seinfeld eventually ‘gets over’ the need for distribution, as does a hit channel like ESPN. The content calls the shots when the content is a massive hit. Distribution calls the shots for everybody else.

Same for tech.
This is a great, succinct explainer for consumer tech, but what about enterprise tech? For example, deriving value from a key dashboard powered by on-premises data does not require an ISP for distribution.

I’m using the word “attention” to capture both “content” and “insight”.
What if we don’t want to give up abundance to enrich software monopolists! Well, you’re right, no one has really figured out how to do general-purpose open source on anything other than altruism.

Yeah, there’s bitcoin, but computing on it is too costly to be used for general-purpose computing, even though there are Turing-complete clones that are theoretically general-purpose computers.

I’m hoping that the Urbit project succeeds in its goal of creating a viable personal server in the cloud, with unrestricted user app data migration between providers. This is not the Libertarian utopia of bitcoin, but more of a digital republic. I’d say there’s a very low chance that Urbit will replace the current web tech stack. But it’s fun to read about, and say what you will about Curtis Yarvin, the man can write to entertain. http://urbit.org/docs/theor…
Control takes many forms in the informational stack. It can be in the lower layers and out to the edge (1913, 2-way communications), it can be in the form of content control from the core (or at the distribution) (1934, audio and video content), or it can be done at the addressing and signalling layers (1980s for wired and 2000s for wireless). It can be done at the BSS/OSS layers (Google AdSense). Finally it can occur at the application layers (Facebook).

Regulators need to look at the impact of control at all layers and boundary points to determine if there are monopoly bottlenecks. More often than not, if we open up access at layers 1-2, then it will be very hard for any sustainable monopoly to develop in higher layers. Addressing (i.e. number portability, or email and @ portability and ownership in the IP world) requires research.

If mandated interconnection at any of these points is necessary, then we need a framework of (or understanding of) settlements to ensure market-driven network effects and universal service.
Not totally. Just because data is open doesn’t mean you have the ability to do anything with it.

Look at (some) genomic data. It isn’t like firms with proprietary databases do better research that much more often than those working off just the open databases.
Tech expertise and capital requirements increase dramatically as you move down the stack.

2006-2012: Age of Social Media | Soft Tech | Design First | Virality | Consumer Plays
2015+: Age of Ubiquity | Deep Tech | Code First | Bus Dev | Enterprise Plays???
I’m not sure it’s an age of “code first”…at least I really hope it’s not. My hope is that it’s an age of “customer first”…if it’s not, then we are in for troubled times until it is……but of course, I have no problem with code coming before design 😉
“At USV, we have invested mostly in the top three layers, and most actively in layers four and five (Applications and Data), but we are increasingly drawn to Access and Infrastructure (software).”

Struggling to understand that paragraph
numbered bottom-up : 5) Data 4) Applications 3)… 😉
Fred’s stack is odd for me… data / infra / access / HW / application makes more sense for me, as that is the drill down from the UX. USV has mostly worked the top and the bottom, which – hubris alert – is where I think they should stay.
There is often more hardware at the bottom of the stack (often a commodity), e.g. temperature sensors, cash registers, nuts and bolts – the data is a proxy for a reality.
That’s because the “stack” is 3D, not 2D. There is a geographic & directional component (x axis) with nebulous vertical boundaries between PAN, LAN, MAN, WAN, etc… Then there is the traditional 7-layer model as the y axis. Yes, invariably we get there; even if you just start with 2 or 3 layers. All public service provisioning networks must have the 7 layers at the end of the day. Finally there is, for want of a better word, an application continuum due to convergence (z axis; which previously defined the 4 disparate information networks). I see it as a sundae cone with the rich scoops of ice cream being the infinite (application) opportunities.

Within each layer there are sublayers that look and act similarly. Within devices there are software/hardware boundaries, much like the cloud as a whole is a giant tradeoff of software and hardware depending on a variety of supply/demand issues. It resembles a giant biological organism. Our digital networks reflect the analog networks of old, which in turn reflect natural networks.

Data, as I said elsewhere, can be created at the core or edge, top or bottom. It can be stored and processed anywhere depending on the particular context. I use this framework to figure out how 4K VoD, 2-way HD collaboration, mobility, and IoT will scale across an array of supply and demand assumptions.
Here’s an illustration
Look what the tech wars are fighting over, and you’ll find your oil analogy 🙂
and the data they create.
I’d say Connections are the Oil. The Connections (between people, documents, images, software…) are what’s new, that’s what we didn’t have before and that’s what affords innovation and societal development.
and of course, that’s why Facebook is so powerful, they control the ability to Connect
In financial markets, speculators are the oil.
In oil markets speculators are the speculators ;) Sorry, the wild-cat in me couldn’t resist
Software is a commodity until it is populated with a community or user base; that’s what makes it defensible in the business sense. The key thing is to leverage the data in the correct way to serve the community, which in turn drives growth and creates value.
“Software is the new oil sounds a lot better”

Reminded me of one of the other famous VC quotes: “Why Software Is Eating The World”
Awesome discussion and learning from yesterday and today… +1 on the post.

Software + Design + Data leads to network effects. Increasingly I see design driving software and data. Networks get built around those constructs. This is becoming more clear as the software stack evolves.
If you believe in this premise then I *highly* recommend reading the book “B4B”. It is, in short, about the move that enterprise SW developers have to make so that they can utilize data to deliver differentiated business outcomes for their users – particularly in the SaaS market. Don’t worry, I don’t make a nickel off the book… just found it incredibly insightful re: the value chain that data can create. Here’s a link: http://goo.gl/S6rWXX
Uber is the motor oil of the new transportation economy.
you are back.
…and GrubHub the olive oil?
Have referenced the very interesting dinner where I was fortunate to sit with Cmdr. Chris Hadfield. I asked a question that led to a discussion of energy sources. “Nothing else will get adopted until it can replace a 5 gallon gas can. Oil is incredibly flexible, for something so affordable.”

What is the tech version of a 5 gallon can of gas?
transistors => Moore’s law!
Look for what is replacing its dominant use cases.

Look at innovation in transportation: autonomous driving, ride sharing, even things as small today as hyper-local bike delivery, etc. I can’t quantify the dent today or what it is tomorrow – but there must be some impact.

Look at innovation in energy: solar, wind, nuclear, even fusion (yes, fusion) based energy innovation.

These are two – but they are presenting the beginnings of a new energy and transportation stack, driven in part by innovations in software (not entirely).

Not sure if you can link cheap oil to any of these, but we should all be rooting for cheap oil in perpetuity for a whole host of socio-economic and political reasons.
Fracking too! Traditional oil exploration is also being impacted by better tech making it more efficient. Cheap oil right now is not only a function of demand, but a strong dollar as well.
When I think of disruption, I think of the replacement of those predatory corporations and organizations like OPEC. While fracking might be an oil alternative, those in charge are the same disease-ridden profiteers that would literally scorch the planet to get their shekels…
At this point because of the high startup costs in fossil fuels, only a corporate entity can disrupt OPEC. I’d like to see us get rid of all energy subsidies and let the markets decide. Ending subsidies would make all energy disruption cheaper too!
I think Saudi policy is now “sell it while we can” – this is a form of breaking ranks in the face of climate change.

In effect the dumping of oil was to break the fracking industry (who invested and lost a lot of money) and to show that they can.

Both of the above argue that the volume available exceeds demand (because climate change must cap demand somewhere). The supplier skill (cartel or not) is to position output at a level that discourages investment, but makes returns fast before they are forbidden.
What’s their marginal cost of production? They will sell until MC = MR. Once MR goes under MC, they will start to ease back and shut down. I don’t think the Saudis or any oil-producing nations pay any attention to climate change, pro or con.
“I don’t think the Saudis or any oil producing nations pay any attention to climate change-pro or con.”

Agree. Not any more than people manufacturing drugs in Mexico or Colombia (or anyone in their distribution chain) care about the almost immediate (and not hypothetical) impact of their actions.
A bit more complex. Free-market economics (marginal cost = marginal revenue) does not hold when 85% of government revenue is based upon one commodity.

There is a 6X delta between the price of oil that covers the marginal cost of production for Saudi Arabia and the price that covers the national budget of Saudi Arabia. Public estimates of the marginal cost of production for Saudi Arabia range around the $15 a barrel mark. But the price required to balance the national budget is ~$90-$105 a barrel.

Right now Saudi Arabia is dipping into their cash reserves and also raising debt – they have issued $4 billion of bonds within the last year (they have also withdrawn billions of dollars from global asset management firms). Their cash reserves are expected to last till 2019 at current oil prices. They will run out much faster if prices drop to their marginal cost.

I don’t expect they will want to allow prices to drop to marginal cost – because that marginal cost is so low that it would destroy the national budget. But, yes – it can go lower than the range of the past few months.
But at current prices Saudi runs out of cash in 5 years, according to CNN…
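The reserve-drawdown arithmetic in this subthread can be sketched as a toy model. Every number below is an assumption chosen for illustration (roughly matching the figures quoted above), not sourced data, and `years_of_reserves` is a hypothetical helper, not anyone's actual analysis:

```python
# Toy model of the reserve-drawdown argument: if oil sells below the
# budget-breakeven price, the per-barrel gap is covered from cash reserves.
BUDGET_BREAKEVEN = 97.5   # $/barrel, assumed midpoint of the ~$90-$105 range
MARGINAL_COST = 15.0      # $/barrel, assumed production cost
RESERVES = 650e9          # $, assumed cash reserves
EXPORTS = 2.5e9           # barrels/year, assumed export volume

def years_of_reserves(price_per_barrel: float) -> float:
    """Years until reserves are exhausted at a given oil price."""
    deficit_per_barrel = BUDGET_BREAKEVEN - price_per_barrel
    if deficit_per_barrel <= 0:
        return float("inf")  # budget balances; no drawdown
    return RESERVES / (deficit_per_barrel * EXPORTS)
```

Under these made-up figures, a price near $45 a barrel exhausts reserves in roughly five years, consistent with the CNN estimate mentioned above, while selling at the ~$15 marginal cost would drain them in about three; which is the comment's point about why prices won't be allowed to fall that far.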
“In effect dumping of oil was to break the fracking industry (who invested and lost a lot of money) and to show that they can.”

Yep. And it shows the amount of risk and luck that is required in business, and how impossible it often is to predict how future events will impact investments.
There is working fusion in Vancouver.
Something that eliminates 150 miles of driving – e.g. a site visit to a branch of some enterprise. Could be better communications (video conferencing), better training (delivered tutorial apps), better use of agricultural land / water (analytics), or any energy-efficiency breakthrough delivered by use of technology.
Portable energy source? There are biggie advantages to liquid fuels, especially gasoline.
Well, one of the three of us was a Top Gun and made three trips to space. I’m going with him unless you want to spend 5000 words trying to convince me otherwise… 😉
Tech version of that gas can? Smartphone comes to mind. Maybe a laptop for relatively comparable portability.

Of course, different people have different ideas of what “affordable” means there, and flexibility’s still in heavy development (how long did it take to standardize types of fuel oil?)… but for insight on demand, one way or another a computer is going to show up.
Battery. Of course there are many variants competing to be the best. And the car will make it portable and foster co-generation. Thereby making energy more of a 2-way model, like communications.
Good post! If we go with the “oil analogy”, where is the Standard Oil of tech?
just GGL ? 😉
I think it’s a shame no one talked about the value of the tech stack being built well. Building software is a craft, and not slapping together a pile of crap full of tech debt and problems has a significant share of the core business value. If you don’t have all that, executing on business strategy becomes a whole lot harder and more expensive.
I agree with you mostly; technical fitness is undervalued. The problem is that a quality-first approach requires a significant initial investment and enough *time*. At the beginning most startups don’t have these. Some startups get funded with a fairly good working prototype, to later develop further and fix things. So at this stage you have the money and start hiring to build version 2, but something you can’t buy is *time*, so there is the risk that you may not scale quickly enough and, as you say, things get harder.

The good thing is that, if you have your well-built stack and things go well, you will be able to explode gracefully.

Quality never goes out of style.
Thanks for the reply Lawrence 🙂 I think we completely agree. I don’t think building lean means building poorly. I actually think that building lean and building well is an important part of our craft. It’s critical that product, UX and engineering make all the right decisions from cradle to grave. Sometimes the smartest choice is building fast, knowing you’re going to learn fast and jettison it all to dump your technical debt and move to version 2. I think it’s probably inevitable for more than a few products (especially along the curve of complex database needs). I do think it’s important to think about these things and make them strategic and tactical choices as much as possible instead of blindly hoping for the graceful recovery.
Somehow people believe that as we move from the analog to digital world, we can either reinvent everything or forget everything. Networks and principles that drive programming have been around for all eternity. They are found in the universe, in nature and in our biology. In yesterday’s post I pointed out that it’s all about how energy is used to take something from idea to product. All the processes that make that happen have been fairly constant over time.
Totally agree, but that’s because we’re seasoned vets. It’s easy to be generalized and correct on this side of the fence. I think it’s vitally important not to forget the myriad of very important baby steps we make every day to get to a winning solution with a lean and profitable answer.

One of the important aspects of Fred’s commentary about the similarities of tech to the oil business is that we should keep market maturity in mind.

In the early days of the oil rush, any fool could run out into a field and stick a pipe in the ground and start making money if he or she was anything shy of being a complete idiot. Then there were no more easy spots, because the medium-sized companies bought all the single plots. The cycle continued as the medium-sized oil opportunities were all staked out. Once that happened and there weren’t “easy” (air quotes) enough opportunities left, the big oil companies were born. Then the big oil companies bought up all the medium-sized wells, thus making it progressively harder and harder for a single person or a medium-sized group of people to find and exploit a reasonably profitable oil well.

What happens is we get individuals and groups working harder and harder to get small and medium-sized oil wells for the sole purpose of selling them to the big oil company for a profit and moving on. Thus, sophistication sets in and hardens the market.

The same thing is happening in our industry. Back in the day, anyone this side of stupid could put up a web site and make money. The metaphor follows along to today, as many teams are building properties for the sole purpose of selling to medium or large organizations so they can win the payoff and move on. The hardening of the market due to sophistication is happening all around us.
Aside from the fact that our craft is hard enough due to the inherent technical challenges and competitive landscape, all the details it takes to make and launch great products are significant.

I reiterate: the myriad of little things we do every day that accumulate into winning answers is neither easy nor intuitive. So when you say incredibly insightful things like “we can either reinvent everything or forget everything,” underscoring the benefits of the thought process (much easier than having to find oil :), you might do your audience a favor when you say things like “Somehow people believe…” to remember that we are seasoned vets and we have a lot to teach 🙂
The only difference with the oil comparison, and I argue this against traditional utilities as well, is that it is principally a 1-way stream from production to consumption. Communication networks are 2-way. They have different properties. While network effects and externalities hold within given technologies and apps (silos), the key difference is: what is the settlement structure BETWEEN networks that fosters macro-network effects?

A bill-and-keep model (net neutrality, IP settlement-free peering, etc…) results in a silo-ed monopoly model, particularly if the networks are vertically integrated to begin with. But look at Google’s or Facebook’s vertical creep supported by Wall Street. New services and technologies have a much harder time getting an entry. A “balanced” settlement model, on the other hand, in which terminating settlements (based on real, marginal costs) are introduced, affords new service introduction by little guys who can be more flexible.

Two things need to occur, particularly as most communication networks run into govt-initiated barriers like rights of way (both wired and wireless). First, the govt has to mandate interconnection at layer 1 and/or 2 all the way out to the edge if necessary. They did this unwittingly with Part 15 (aka wifi) and see how positive that has been. Second, the govt has to guide the brokering of settlements, recognizing that most value is captured at the core or top of the stack. The network with 1m subs will obviously have greater network effect than the one with 10k subs. But the latter can sometimes move faster.
What settlement model can be beneficial to both?

A body of economic research needs to develop around settlements in which the larger network, or the one introducing a new technology, should pay a disproportionately higher terminating settlement to provide incentives to other actors, but in the process, by virtue of size or being first to market, will receive a larger absolute benefit due to overall network effect ACROSS the network boundaries. I personally don’t know what these differences should be, but imagine if someone who invests in a beacon is incentivized to continuously or readily update the address and performance of the endpoint when/if necessary by receiving some compensation from the hundreds/thousands/millions of apps that use the signal from that endpoint for some reason.

We thought such settlement systems would be too expensive. Not entirely sure, but if based on the blockchain, transaction costs should approach zero.

Anyway, after 25 years studying these TMT/ICT businesses and having a digital/computer background since the late 1970s (which gave me the benefit of knowing what it was like in the old, wired, analog POTS days), this is the only way I see us getting to a future of 4K VoD, 2-way HD collaboration, mobility-first, and IoT, ubiquitously, rapidly and inexpensively. Vertically-integrated models are simply not sustainable in a world of constantly changing supply and demand where capex and opex obsolete rapidly.
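The asymmetric-settlement idea above can be made concrete with a toy sketch. The size-premium rule and the `base_rate` are inventions of mine for illustration; the comment does not specify what the actual differences should be:

```python
def terminating_settlement(units_terminated: float,
                           payer_subs: int,
                           receiver_subs: int,
                           base_rate: float = 0.001) -> float:
    """Toy 'balanced settlement': the payer owes a per-unit terminating fee,
    scaled up when the payer is the larger network, so that small terminating
    networks are disproportionately compensated for carrying traffic."""
    size_premium = max(payer_subs / receiver_subs, 1.0)
    return units_terminated * base_rate * size_premium

# The 1m-sub network terminating traffic on a 10k-sub network pays a 100x
# premium; the same volume in the other direction pays only the base rate.
big_pays = terminating_settlement(1_000_000, 1_000_000, 10_000)
small_pays = terminating_settlement(1_000_000, 10_000, 1_000_000)
```

The design choice mirrors the comment: the big network pays more per unit, but still captures the larger absolute benefit from the combined network effect across the boundary.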
Great comments 🙂 Inevitably, like the oil business, the answer will come out of the crowd-sourcing which will pile up the bodies of those that tried and failed….
Building lean and building poorly are different things – you can build poorly lean, and poorly heavy. The larger problem is too much money in the system chasing mostly silly ideas, which means talent to build things right (whether lean or heavy) is expensive / can get spread out.
I’m just an interloper here, not a developer or VC, so I’m kind of talking out of my ass on this.

Still, as a layperson, when you mention “tech stack being built well” I assume you mean built well from the bottom up through to the application layer.

But as an end user of network-effect services, as a digital citizen, I get the feeling that a “big-picture” version of “built well” is not even under discussion. By a “big-picture” version of “built well”, I mean all the parts are built to optimize for a democratized platform of recombinant/modular software/data that prioritizes everyone’s access to social and economic agility moving forward.

That landscape seems very barren: no incentives, no advocates, no collective long-term vision around the fact that we are now building out an organic social nervous system whose silo-ed everything will seriously constrain our ability to bring mutually-adaptive living-systems dynamics into social and economic play in a meaningfully distributive way.

Sure, that is pie in the sky now, but for how long???
So, for perspective, I’m a veteran product guy. It’s my job to see the product and business vision and own it. By own it, I mean it’s up to me that whatever is being done is delivering on the business proposition so that everyone prospers and is happy.

The topic as you’ve outlined it, as the Big Lebowski would say, has a lot of ins and outs. In order: yes, I specifically mean built well from the low-level systems to the application layer, and on to the front end, whether it be web, console, or whatever. My experience is as a guy who grew up in an engineering household and has been programming, project managing and product managing in some capacity or another since I was 14. To me this is a lifestyle. Where some people hire bricklayers, I am a mason and I bring together rock star teams to build awesome products.

And yes, the part about being well built is not often discussed, which is why it’s often the reason projects and budgets blow up in people’s faces. But I am the advocate, and whether there is someone like me pushing teams to produce something great varies from project to project. It may or may not happen, and very often amazing engineers pick up the slack for other folks on the team who may not understand the finer points.

As far as your description goes, that sounds too eloquent. It’s more like a teeming throng of people who stake claims around a river with gold in it, and how well those people cooperate to build a collective society will dictate whether or not a city springs up and prospers. It’s about all the little daily decisions that need to be made correctly.

We humans are still trying to get it right, but technology is helping us get there 🙂
Ugggg. Tech debt is an uggg.
It’s fascinating that we’re making such a difference between “data” and “software”. They’re two sides of the same coin (value!). The distinction between the two only comes from the fact that in computers we distinguish the two elements (software belongs to the processor, while data belongs to memory/disk)… but when we look at us, humans, data and software are actually combined in a single entity: neurons.

When you plan to invest in a company, you don’t just look at its assets (the data), but also at its processes (the software). When you assess an employee’s capabilities, you check their technical knowledge, their address book, their reputation (all of this is data), but you also check their reasoning, their ability to find solutions to problems… etc., and this is their software.

The value is always in the combination of both!
Interesting – algorithms will output data as much as take it in.
I belonged for many years to a trade group that provided education to its most junior members. The value prop for mid level members was mainly benefitting from the network effects derived from those early years of education and socializing. Naturally, software companies try to come in and take advantage of the networks. I see chunks of good strategy executed by different software firms and like to think that I could knit them together in an aligned fashion. Work in progress I guess.
Yes, technology is the next oil. Strong network effects create monopoly markets when the need is crucial to users. And where the product is software, the cost of scaling is also low. The rest will depend on strategy, assets, and network. Probably every business will need technology, and there is a huge gap of knowledge.
Digital information is the oil, no question. Zeros and ones. The tech stack is the oil industry that has cropped up around the oil: to drill for it, pipe it, refine it into its various forms, distribute it, and serve it to customers via various interfaces. Sneakers got it right. https://www.youtube.com/wat…
Forgive me for giving a concrete example where access, hardware and software begin to blur in a new era, and give new challenges.

The pain:
- We change the clocks once a year (easy)
- Lighting controls are adjusted accordingly (obvious and visible)
- Heating (remote and invisible) – what if we fail?

In effect this means for comfort we add a heating hour (before or after occupancy). So say 10% waste of around 30% of the energy used by mankind (heating for buildings) for some subset of buildings. Net effect in the European Union: about 1% of carbon emissions from around 50% of public buildings. (Don’t have sufficient domestic homes data figures – but probably worse.) See diagram below – red is non-compliant (about 1500 European public buildings).

>>>> Where is the value in the tech stack?

Software can be an alternative to physical access. The report below can be run for a few cents – it replaces inspections by experts at 1000s of sites, where solving the problem does not justify the time or travel cost. Using pattern recognition we diagnose thousands of buildings using smart meter data (a small infrastructure IoT hurdle).

The network effect: for an energy manager to write the required code for one control system or one client just does not make sense – but the utility company benefits by offering this (reduced churn, lower acquisition costs and cross-sale models) and differentiating its offering.

So this is a productivity, cost-saving, software substitution for physical access that only works through network effects – this is why it hasn’t been done before. Sorry, it’s no elevator-pitch explanation 🙂 If interested I can post the compliant/non-compliant diagram.
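A minimal sketch of what such a pattern-recognition check on hourly smart-meter data might look like. The detection rule here (the largest hour-over-hour consumption ramp marks the heating start) and the sample profiles are simplifications invented for illustration, not the commenter's actual method:

```python
def heating_start_hour(hourly_kwh: list[float]) -> int:
    """Estimate when heating switches on: the hour with the largest
    consumption jump relative to the previous hour."""
    ramps = [hourly_kwh[h] - hourly_kwh[h - 1] for h in range(1, len(hourly_kwh))]
    return 1 + ramps.index(max(ramps))

def clock_change_compliant(day_before: list[float], day_after: list[float]) -> bool:
    """Compliant if the heating start time moved with the clock change."""
    return heating_start_hour(day_before) != heating_start_hour(day_after)

# Hypothetical building: 1 kWh baseline load, heating adds 2 kWh from 05:00.
before = [1.0] * 5 + [3.0] * 19
after_ok = [1.0] * 6 + [3.0] * 18   # controller shifted start to 06:00
after_bad = before                   # controller never adjusted: wasted hour
```

Run per building against meter data from the days around the clock change, a check like this flags the non-compliant (red) buildings without any site visit, which is the software-for-physical-access substitution described above.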
Thanks for calling attention to yesterday’s comments barkeep, an oil well of value 😉
Let’s look at the stack using perspectives known to the AVC community. In short: unbundling, networks beating hierarchy, everyone a node on a network, software is eating the world. Where is the opportunity there?

Certainly access. Figuring out how to disrupt the telcos would be a huge business. Certainly hardware. Figuring out how to network hardware and make it interface with humans as nodes on a network. Certainly data. But as Fred points out, data is a commodity, so you have to figure out how to build a moat and differentiate. Applications: yes, but the toughest, most competitive part of the stack. As we have seen with companies like Uber, existing companies are not afraid to enter adjacent spaces.

As Fred also points out, I think this next wave is going to happen in highly specialized places where having some local knowledge will help.
>> highly specialized places where having some local knowledge

It is notable that the more specialised the space, the rarer local knowledge becomes.

Essentially APIs make understanding a service very consumable (IF you know why you need or might value the service). One role that is becoming much more valuable is the multi-disciplinarian who acts as a knowledge-transfer agent. They often answer the question “Ahh, to solve that, what you need is a… or you need to speak to…”

This is an old story and challenge of match-making – for example, when universities had to climb down from ivory towers to seek industrial partners to win research grants.
What they were all discussing is the tech stack and where the value is.

Sorry: the best possible answer to this question/issue alone is not very useful or valuable. Why? Because the ROI VCs need is so high that the best possible answer to this question cannot hope to provide that ROI. Or, it appears that the goal is to look for a pattern, that is, "where the value is", and then invest there. Sorry, the best possible "where the value is" alone is not nearly good enough. For the ROI VCs need, there's no "value" anywhere in "the tech stack" alone.

Indeed, as we know well, on average the information technology VC ROI is poor:

http://www.avc.com/a_vc/201…
http://www.kauffman.org/new…

Instead, the ROI VCs need is exceptionally high and, thus, the projects also need to be exceptional.

Biggie point: in looking for the needed high ROI, "the tech stack" alone says next to nothing; instead, you have to look for exceptional projects. Then, in looking for such exceptional projects, you can't hope to get much information from looking at the vast majority of projects that are much less good than VC ROI needs.

In a military analogy: leading up to WWII, you couldn't look at cloth-covered, wooden-framed, air-cooled radial-engine biplanes to see what was crucial about building the British Spitfire. Instead, you needed good work with water-cooled engines, superchargers, a thin single wing, construction from aluminum and special steel, etc. The Spitfire was exceptional. We got biplanes because in the wind tunnel of the Wright brothers biplanes were the way to go, but the Wright brothers didn't understand the Reynolds number for scaling in fluid flow.

You couldn't look at artillery shells going way back to see how to build one with the US proximity fuse that was crucial when Patton ran to Bastogne. The proximity fuse was exceptional. You couldn't look at old radio vacuum tubes and transmitters to see how to build the British cavity magnetron – a key part of England beating Germany in the Battle of Britain.
The magnetron was exceptional.

Later, you couldn't tweak turbojet engines to see how to build the engines of the SR-71. The Russians tried that with their MiG-25, which did have stainless-steel skin to take the heat of air friction at Mach 3 and engines that were powerful enough, but their engines overheated in just a few minutes and the MiG-25 had nowhere near the range of the SR-71. Instead, the Pratt & Whitney J58 engine was exceptional.

How to build radar that is tough to jam or even to detect? Sure, use spread spectrum with shift-register-sequence encryption. That was exceptional. How to build an airplane that is invisible to radar? Sure, be very careful about the shape of the airplane, coat the surface with radar-absorbing material, etc. That was exceptional.

Ah, once in grad school on a part-time job, I did something exceptional: the US Navy wanted an evaluation of the survivability of the US SSBN fleet under a special, controversial scenario of global nuclear war limited to sea. They wanted their results in two weeks. The group I was in had at least two independent projects attempting a solution. I saw a way to do it: wrote out the math, passed a review from a well-known prof with a relevant background, and wrote and ran the software. Later my work was sold to a leading US intelligence agency. I could tell you which one, "but then I'd have to …"! An exceptional little project, exceptional enough that at the end of the two weeks, the other project had no results and was just ended.

Among the exceptional projects (the Rolls-Royce engine for the Spitfire, the cavity magnetron, the proximity fuse, the J58 engine, stealth technology, and spread-spectrum, shift-register-sequence radar) there's not enough in common for knowledge of some of those projects to say much about the others.
Instead, each of those projects had to be conceived of, evaluated, and pursued one at a time.

Now, to find the needed exceptional projects, even at best, knowing which part of "the tech stack" they sit in won't mean very much. Instead, you have to look at the projects one at a time. Network effects, proprietary sources of valuable data, proprietary algorithms, etc. can be quite helpful.

So, yes, sure, from 100,000 feet up, do look at "tech": Moore's law, processors with multiple cores and low power from 14 nm or so line widths, disk drives with giant-magnetoresistive heads, solid-state disks, optical fibers and dense wavelength-division multiplexing, TCP/IP, authentication from public-key encryption and Kerberos ideas, HTML5, mobile devices with, e.g., GPS, etc. But all that's just from 100,000 feet up and, now, just commodities and not exceptional.

So, how to find what's exceptional? Sure, wait until a project (1) is providing a defensible solution that is a must-have, not just a nice-to-have, for a problem in a huge market, (2) has traction significantly high and growing rapidly, and (3) has founders who desperately need some cash. If you want to know about exceptional projects before (1)-(3), then you have to look in detail, one project at a time. Sorry 'bout that.

Also it would be good to be lucky!
Brilliant argument. In effect:

- commodity/follow-only VC = small-stake gambler against stacked odds
- startup = massive gamble with some inside knowledge – maybe odds in favour
- successful VC requires spotting startups with inside knowledge that is correct and not commonly acknowledged
- proxy for the above (must fail long term) – invest in serial entrepreneurs
Yup — once again I typed in the long version!

"Successful VC requires spotting startups with inside knowledge that is correct and not commonly acknowledged" – or not "commonly" understood by entrepreneurs.

Yes, there are now several projects in nuclear fission/fusion with "inside knowledge" and equity funding, but the equity funding is from the personal checkbooks of, say, Gates, Allen, Thiel, Bezos, maybe Page, Brin, Zuck, etc. For small projects, sure, there can be equity funding from family, friends, fools, crowdfunding, angels, etc.

The value of your "inside knowledge" has long been recognized, e.g., intellectual property, trade secrets, patents. In spite of this long, broad recognition of the value of "inside knowledge", as far as I can tell, no LP-funded VC in the country would ever evaluate a "startup with inside knowledge". Instead, apparently, such VCs stick with the criteria (1)-(3) I mentioned. There the VCs are, in effect, accepting a Markov assumption: the "inside knowledge" and the success of the project are conditionally independent given the size of the market and the current size and growth rate of the traction. So, to estimate the ROI of the project, just look at the market and the traction and ignore everything prior, e.g., "inside knowledge". Or for the simple view, just read the old Mother Goose story "The Little Red Hen".

Or: "this rocket is headed out to mine the asteroids and just got off the ground and is accelerating past Mach 2 — INVEST!" Then, at 100,000 feet up, BOOM! Sorry 'bout that! Not enough attention to the rubber seals! Actually, that was clear enough to anyone who looked objectively while the rocket was still on the ground.
Lesson: even with the rocket at Mach 2, you still need the careful evaluation from early on, well before the launch.

About the closest the LP-funded VCs come to looking at knowledge seems to be what Fred is doing here, looking at where in the tech stack the good opportunities are, and IMHO that approach is too far from the needed exceptional projects to be useful.

For me, I'm just one guy, with ten fingers, typing, and I want to be successful. Any such success will be significantly exceptional, even if I get only $10 million instead of $800 billion, ah, just round it off, $1 trillion (I'm thinking I'm going for a must-have a few times a week for, on average, each of 3+ billion people). So I want my work to be exceptional and to know that early on. So I want some "inside knowledge" as good as the cavity magnetron, etc., and I wanted to know, NOW, actually, ASAP when I started the project, that that knowledge is solid.

It was for just such exceptional "inside knowledge" that I sacrificed, a lot, to get my Ph.D. and, there, pushed hard to make sure what I learned would be useful, and that I would be able to do original work on that foundation; so, naturally enough, now I'm using the background I worked so hard to get and intended to use.

Sure, for a time I was a college prof, as part of trying to help my wife, but I do not now, nor have I ever had, any academic career ambitions at all. Never. I regarded being a college prof as a waste, something like a device inserted in my arm that slowly dripped away all my blood and life.

But all the work I'm doing that is crucial is exceptional; likely and apparently I'm the only person in the world who is doing, or who understands, such work. Am I fooling myself with fantasy? Likely not: I've done such technical work, e.g., original, solid, powerful, and the first in the world, often enough in the past.
E.g., I did some work in zero-day anomaly detection for high-end server farms and networks, basically some novel derivations in advanced applied probability, that remains the cat's meow for that field, and published the work. But it's become quite clear that no one at all interested in that real problem is able to read the math in my paper. Indeed, when I went to publish, several chaired professors of computer science at some of the best research universities, and editors-in-chief of some of the best journals of computer science, wrote me: "Neither I nor anyone on my board of editors is qualified to review the mathematics in your paper." That is an exact quote from one such person, and close for some more.

There's an issue: exceptional work is an opportunity but also a challenge, both from the fact that few others can understand the work. Net, for the ROI VCs need, one of the most promising approaches is exceptional technical work, but such work is difficult to evaluate. The work is, in a word I've mentioned a few times now, exceptional.

Or: many VCs have academic backgrounds in the liberal arts, business backgrounds in business development, and no significant technical backgrounds. So any technical work that they could evaluate would be so simple it would be routine and not exceptional. So they have to stick to my criteria (1)-(3). So LP-backed, VC-funded projects don't look like commercial analogies of, say, the British cavity magnetron – that is, the VCs won't evaluate the magnetron itself, only its operation in practice.

Since I was taught the lesson to pursue and exploit what is exceptional in technology and saw the advantages of that lesson, it took me a while to accept that, really, VC is determined to ignore that lesson and stay with just the criteria (1)-(3) instead.
DARPA, NSF, and NIH are super big on the lesson; VC hates it.

Fortunately my project has been able to go forward on just my checkbook, and now there's a good chance of good revenue, enough for me soon to refuse any equity funding, ever.

As I understand the VC system, once my project is live and getting publicity and traction, plenty of VCs will call me. In that case I will very likely be able to tell them that way back when (and I have the time and date) I did contact them, or at least their firm, and learned that they were not interested.

Simple — we're just talking the story "The Little Red Hen". Super simple.
Lots of military examples. Have you read Sex, Bombs & Burgers by Canadian journalist Peter Nowak? Here’s my take on it: http://bit.ly/1NBc1UP
The US military has been by far the best source of really good examples of support and exploitation of the STEM fields. By far. E.g., my ugrad physics department had a USAF contract to "contribute to the technology of the infrared". Wasting money? NO WAY! Instead, fantastic ROI from missiles, night vision, and much more. My Master's and Ph.D. studies? I never paid even a dime in tuition. Why? Research grants to the university from DC; even if from NSF, Congress knew that the main purpose was US national security. ROI? Fantastic.

From your "take", Nowak seems okay.

The battles with AT&T, MCI, etc. were a land of milk and honey for lawyers. Bummer. Once it was legal to connect a modem to a phone, we quickly got modems with data rates close to the Shannon limit, BBSs, etc.

IIRC, ARPA intended TCP/IP for battlefield communications. So it was much more robust and needed much less management than IBM's packet system SNA. Then somehow ARPA wanted actually to build an internet (with BBN?) and let the FFRDCs connect, which they did. Then soon NSF provided funding for more, and IBM ran it. And we got ISPs.

As the data rates went from T1 lines to T3 lines, etc., with more irony, the Bell Labs work on Ga-Al-As heterojunctions for solid-state lasers driving optical fibers (Corning), along with an amplifier (IBM) that worked directly on the optics inside the fibers and didn't need to convert to/from digital, saved the day. So, with irony, the Bell Labs research on optical fibers finally essentially killed off nearly all the asset value of nearly all the Bell System's copper.
E.g., at one point there was an estimate that the entire US long-distance voice network bandwidth was only, IIRC, 28 Gbps. Now, with the Bell Labs work, that is totally trivial: less than one wavelength on one fiber.

Apparently some of the funding of the backbone of the Internet was handled by peering agreements, e.g., at MAE-East in Virginia, and by ISP revenue. IIRC, much of the early good software for TCP/IP was done as part of BSD (the Berkeley Software Distribution), by the Berkeley computer services group with some DoE funding. With TCP/IP, the old ISO seven-layer protocol stack is not a very good description.

Of course, what originally built Silicon Valley was the US DoD and NASA, along with Stanford's Dean Terman. Now, with incongruity and even high irony, in the support and exploitation of the STEM fields, the US DoD and NSF make the Silicon Valley information technology revolution look like some naughty pre-teen boys out back trying to make fireworks out of match heads.

Absolutely crucial to making the Internet work were the Bell Labs transistor of the late 1940s, the growth in transistor technology from consumer electronics and computing, the growth of integrated circuits instead of discrete transistors, the microprocessor and the growth of PCs, and then the growth of basically PC technology for all of digital computing: the PCs, the routers, and the servers.

It takes a lot of fast transistors to show a movie from YouTube on a PC. Last night I watched part of King Solomon's Mines on YouTube, for free, and that takes a lot of really, really cheap, really fast transistors.
“Once it was legal to connect a modem to a phone”

We’ve lost sight of mandated interconnection all the way to the edge (that edge now being our smartphones). Computers 2/3.

“Bell Labs research on optical fibers finally essentially killed off nearly all asset value of nearly all the Bell System’s copper”

Networks are like living organisms: portions are constantly dying and either growing or giving off new life.

“Apparently some of the funding of the backbone of the Internet was handled by peering agreements, e.g., at MAE East in Virginia and ISP revenue.”

The volume of data relative to voice was a pimple on the elephant’s butt. Voice (WAN transport) costs were dropping in excess of 50% annually. Nobody thought data settlements really mattered (or they would cost more than they were worth). Also, these were “trusted” agents. Ha!!!!

In the end, the internet (the 4-layer TCP/IP stack) was a digital/packet arbitrage of an inefficiently priced, overly expensive, and inflexible analog, circuit-switched voice world. Any growth for the latter has resulted from a huge Metcalfian suck, or pull-through, of ever-increasing access speed for information and apps.

We lost our competitive way when we undid the 1984 equal-access, vertical separation of the Ma Bell system in the early 2000s (thanks to the farcical 1996 TA) and conveniently forgot about mandated interconnection at the edge for wireless networks sometime in the late 1990s.

I still maintain that Steve Jobs’ greatest legacy will be that he resurrected equal access in the smartphone with wifi offload. After AT&T let the trojan horse in, they couldn’t stop the app ecosystem revolution and the big bucks for USV. And Troy is definitely still in the throes of chaos, and it’s only chapter 1.
At the risk of being rude, how old are you?
I’m going to quibble on a couple things here:

1. Your list is a vast oversimplification… even mine is, but it’s a lot more accurate to say something like: Hardware -> Infrastructure (software) -> Access -> Applications (software) -> Data -> Applications (software) -> Access -> Data. My point is that each of these things appears up and down the stack, and more than once… that’s important to note because it’s all one big… ahem… web.

2. I continue to believe “knowledge” is the word you are looking for here when you put it all together… that’s where the value is (it’s where it’s always been, really). All of these systems/platforms/companies are about building knowledge around something, and the better they do that, the more value they have (and control).
Now I need to quibble: knowledge can be common. Value is found specifically where the knowledge is sparse – maybe “insight”.
it can be common, and that’s exactly what the open-source movement is about (and I would argue the basic-income angle is also about)…but that doesn’t mean that other bits of knowledge can’t be or aren’t insanely valuable…
OK – agreed – but when value is built on common knowledge, then the common knowledge is not a differentiator. So I would argue the value is in what makes solution X unique. It might be a secret sauce (IP), an owned source (a patent), a more efficient production method, or, most commonly, an ability to reconfigure and keep delivering alongside rapid change.
In my book, common knowledge ups the table stakes (which is a good thing for everyone). I think the value is in the unique knowledge that is derived from application across the whole stack.

More companies than Google have search knowledge… but Google is the best at refining and applying that knowledge (hence they are the most valuable there).

More companies than Amazon have product and sales knowledge… but Amazon is the best at refining and applying that knowledge (in fact they are so good that they initially started their book database by licensing a data set from R.R. Bowker, then augmented it so much that eventually they sold access to that augmented data back to R.R. Bowker).

More companies than Apple have design and hardware knowledge… but Apple is the best at refining and applying that knowledge to provide customers with a comfortable ‘wow’ experience (hence they are the most valuable there).

etc. etc. etc. 😀
I think your formulation in the comments yesterday is more accurate: tech is the new oil.

Like tech, not all oil companies are profitable, and it has never been that way. Outsiders see Exxon and Texas/Saudi Arabia and notice only the succe$$, just like tech outsiders view Apple and Silicon Valley. But failed companies are everywhere if you look closer.

Drilling for oil is not unlike a tech startup: you study the lay of the land, try to be smart about it, sink millions of dollars into your chosen location, but only find out 1-2 years later whether you can expect to recoup your money or even get rich off it. Both require significant capital and expertise, with large rewards flowing to those able to create smart, defensible positions.

The web 2.0 network effects that USV has been so successful with are sort of analogous to hydraulic fracturing over the last decade: the largest companies (Exxon, etc.) largely missed the trend until it was late, allowing new startups to create enduring, valuable businesses. Businesses that aren’t positioned well or spent their money foolishly die by the wayside. Eventually that trend plays itself out, and the startup ecosystem finds the next source of enduring profitability…

The analogy isn’t perfect, but it works, sort of.
At the large and established companies I have worked for, there is often a need for translation from one stack to another. It’s my perspective that there would be faster adoption of new technologies inside some of the big traditional companies if there were a better understanding of how both work together to deliver impact to end users, in our case the consumer or HCP. It isn’t as simple as replacing the legacy with something new. If there were more people who played that role at both established companies and startups, you would see even faster adoption and more innovation.
Don’t most VCs generally prefer to invest in the top two — Data and Applications/Software — because of capital efficiency and higher margins?
I would extend a couple of points of the analogy.

One, the ultimate function of the entire oil stack is “energy delivery” — so there’s your “data value” analog. Two, as an investor this analogy is valuable in viewing the landscape for the long term.

Some love oil infrastructure that offers consistent, reliable returns that are ‘oil price independent.’ They will love the data analogs. And this was what I was thinking yesterday but did not post — it helps me better understand Dell’s logic.

There is more drama in software — oil wildcatting — and more stability in pipelines and pumps, probably for many generations.

Mixing metaphors: a few 49ers struck gold, but the stores sold picks and shovels day in, day out.
“Various places in the tech stack where software and data come together to produce a service that has high operating leverage at scale and is defensible by the network effects that the data provides.” You have almost written up an updated investment thesis.

Goodbye users, hello data?
“high operating leverage at scale”I am not sure what that means though?
See my post with diagram below. Or an example – take breast cancer scans:

1) Looking at X-rays is expensive skilled-diagnosis time
2) If data is available, pattern recognition can substitute for diagnostic skill
3) Developing 2) is expensive and only works if it serves a very large market of 1)

If someone does 3), has access to 2), and can secure a market serving 1), there is a) a big barrier to entry for competitors (a moat) and b) options for cross-sales via the relationship.

The leverage does not exist before scale, because the market entrant MUST grow to secure share before a competitor does – but once held, the market is near unassailable (unless someone else can take the data) – so vertical integration (infrastructure/hardware), or significant channel partnerships, adds layers to the moat.
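To put toy numbers on "high operating leverage at scale" (all figures here are invented purely for illustration, not from the comment): the pattern-recognition system is expensive to build once, but each additional automated read costs almost nothing, so the margin swings from deeply negative at small volume to very high at scale.

```python
# Hypothetical economics of an automated-diagnosis service: a large one-time
# fixed cost, a tiny marginal cost per read, a fixed price charged per read.

def operating_margin(n_reads, fixed_cost, marginal_cost, price):
    """Operating margin = (revenue - total cost) / revenue."""
    revenue = n_reads * price
    cost = fixed_cost + n_reads * marginal_cost
    return (revenue - cost) / revenue

# Say $5M to build the system, $0.10 marginal cost per read, $20 charged.
for n in (100_000, 1_000_000, 10_000_000):
    print(f"{n:>10,} reads -> margin {operating_margin(n, 5e6, 0.10, 20.0):7.1%}")
```

With these made-up numbers the margin is roughly -150% at 100,000 reads, 75% at 1 million, and 97% at 10 million: the leverage only appears once scale is reached, which is exactly the point about racing to secure share before a competitor does.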
don’t even start with me about this. I can super rant.
There are some services that should be free and universal (like education, moral guidance, aspects of healthcare) to further society. The sooner open-source solutions start breaking down some of these barriers, the better – however, in some cases it might take a market solution to open the possibilities (e.g. pharma). Then we can have the old patents/IP debate – (you wanna rant? I'm with you!)
The slow one now
Will later be fast
As the present now
Will later be past

– Bob Dylan (from the 1964 song “The Times They Are A-Changin’”)

Bob Dylan was right. And 1964 was also the year IBM started commercializing its first mainframe, when this whole software computing revolution really started.
The need for mainframes was driven by war and communications (Bell Labs invented the transistor for switches). On the one hand we willingly waged war in the first half of the 20th century (and continued to plow R&D into defense in the second half), while on the other hand we constrained the growth of communications for the first 80 years of the last century. How things might have been different but for policy failures (ignorance) in 1913 and 1934.

What if competitive communication networks had more rapidly advanced the field-effect transistor of the 1920s for more positive (and valuable) use? The reason it didn’t occur until the 1950s is that if digital switching (layer 3) applied to getting around wired and wireless rights-of-way constraints (layer 1) had happened sooner, we could have justified and scaled competitive networks sooner. For those who say it was a materials-and-science issue, I’m not so sure. Remember that radio led the Wall Street mania in the 1920s, and the resulting depression and war sapped research budgets. Both constrained spending on transistor research, which could have materially improved people’s lives and had a vast impact on the growth of our country well before the 1960s-80s.

Unfortunately, inefficient policy is still constraining us, by about 30% per annum. The latter is based on the discrepancy between core and edge transport costs.
This is all part of the silicon revolution. They are successive pieces to its evolution.
That’s the “materials and science” version. I am positing a different past and potential future based on self-imposed institutional barriers/constraints. The speed at which the ball bearing was developed after the modern steel furnace was invented in the 1850s led in turn to the first durable bicycles in the 1880s. Much like the smartphone, the bicycle heralded personal transportation; only they didn’t have the wifi/4G networks of today, just a lot of dirt, rutted roads.

The concept of the transistor originated in the 1910s and was patented in the 1920s. Did it really need 40 years to be commercialized? A communications infrastructure that promoted competition (switching among many networks) would have driven demand for electrical/computerized switches faster and sooner. And computer resources went to building the bombs. The latter didn’t need to worry about size, but in urban markets, switch size was an issue.
Using the above analogy, the current internet is like the rutted roads and we’re all riding bicycles. Video (1 and 2-way) is the car that will finally get us to buildout the networks properly, if not out of necessity. Software isn’t eating the world; video is.
Or try: attention is the new oil.

Data is somewhat akin to any material from which a fossil fuel can be derived. It could be incredibly useful without much refinement, like say market data, or a key piece of insider information (call it a rich oil reservoir). It could be incredibly useless in raw form, like a firehose of user log events (call it a shale sand). Data can be high entropy or low, in jargon-speak.

Software is an imperfect analogy, but it creates value from that data, like drilling equipment. It can be open source or proprietary, but its value comes from making incomprehensible or inaccessible data comprehensible to someone who derives value from it.

Infrastructure (software) and hardware enable the above two layers. Access (ISPs) provides the distribution pipeline. If distribution can be monopolized, it will extract rent.

Attention is the finite resource akin to refined oil. It’s valuable to attract concentrated, sustained attention, as business managers happily pay for key insights that drive better decision-making. It’s also valuable to attract high volumes of distributed, broken attention, as social media audiences pay through eyeballs.

Data is only valuable if it can be turned into something that draws attention. Software is only valuable if it can convert data into attention. Access is only valuable if there is attention to draw or distribute. Attention is the common denominator.
I think the stack is a little deeper. The top isn’t ‘data’; it’s ‘information.’ You typically don’t operate on raw data. The value is extracted from data by analysis that extracts information.
Billions of sensors will inundate us with information. Some of it will be processed and consumed at the edge in real-time. A lot of it will be stored and processed at the core. What distinguishes information from knowledge? How is it used and repurposed? Cloud/fog boundaries are not static. But there are guidelines in the form of horizontal layers and geodensity (and application) determined vertical boundary points (PAN/LAN/MAN/WAN). While everything will scale horizontally and afford vertically complete solutions, the value is principally captured at the core and top of the stack.
I think billions of sensors will inundate us with data.
Apps turn the data immediately into information. In turn that impacts knowledge. If I watch a ticker stream all day is that data, information or knowledge? Depends on who is watching. Simply putting out sensors to generate data is meaningless unless it’s immediately turned into information and then knowledge. Also, different apps will take different information away from those sensors. That’s my point elsewhere about compensating the end-points with a terminating settlement.
I am sure there are contexts in which data may be readily consumed as information. But the fact that a skilled user can in some contexts do that transformation instantly does not mean the data is information per se. I agree that with the declining cost of intelligence we can push a good deal of what is actually analytics out to the edge of the cloud but it nonetheless takes analysis. I totally agree that putting out sensors to generate data is meaningless unless it’s turned into information. But that’s really my point. Data isn’t the top of the stack. Actionable intelligence derived from analysis of data is top of the stack. If data were top of the stack we wouldn’t need data science. 🙂
The stack is indeed deeper and data is pervasive. See my 3D informational stack illustration in the answer to JamesHRH below. It is a model about data flows more than anything. Elsewhere I state that most of the value gravitates to the core or the top due to how the data is collected, analyzed, and repurposed: the basic end result of the network effect. OTT video provider Netflix has the advantage over any edge-access ISP, since it has a complete view of edge demand from the core in all markets and therefore can better determine where to cache a particular video ex ante (closer to the core or farther out at the edge) to reduce transport costs and latency. Netflix also has a better view of all the supply layers and boundaries as well, since its client can provide feedback on UX. When we’re talking 2-way HD collaboration across widely or narrowly dispersed groups or individuals, the challenges (costs) are even greater.

So the government should mandate interconnection all the way out to the edge; just like Carterfone, which paved the way for faxes and modems and, along with dial-1 equal access and Computers 2 and 3, paved the way for the digital voice and data booms of the past 30 years. Interconnection would open up smartphones, edge caching devices (set-top boxes), and sensors to completely new business models (and protocol stacks) to handle the performance loads and layer/boundary tradeoffs that 4K VoD, 2-way HD collaboration, mobile-first, and IoT will bring. Demand will be both infinite and infinitely varied.

Our present “internet” is vastly underfunded and structurally incapable (at every layer and boundary point) of handling all of this data to/from even 1% of our global population. If more people subscribed to this view, 2016 might indeed resemble 2000, when the internet slammed into the narrow-band brick wall with no real sustainable revenue model in sight.
Fortunately we still have 4 billion smartphones and billions of sensors to grow into to cover up this market inefficiency (fool people), according to a16z. So the unicorns are not turning into ponies anytime soon.

As an example, if I put on a 4K aquarium video 3 hours a day streaming at 28 Mbps on YouTube, my cat will consume about 1 TB of capacity in a month under the present model. In my model (interconnection out to the edge and market-priced, balanced settlements which compensate those end-points and facilities), the core provider (YouTube) would sense a pattern not just in my house but in every cat house and cache that content ex ante at the edge, thereby dramatically reducing capacity consumption. But this cannot happen with current ISP interconnection barriers at the WAN/MAN divide (let alone in the LAN/PAN, as it should).

Add mobility (the PAN becomes the WAN, or cloud turns into mist as devices cross boundaries) to infinitely varied demand and constantly obsolescing supply, and it is easy to see why neither fully distributed nor fully centralized models for data flows will work. Rather we need centralized hierarchical networking (CHN). “Data” is not just what is being produced and consumed (1-way and 2-way), but also how it is being handled and for what purpose, thereby begetting more data. If data is the new oil, then if we get this right we have indeed created the perpetual-motion machine. But that motion machine will consume energy at the end of the day. So, as I said in the prior day’s comments, this is really just part of a cycle of networks that goes back thousands of years. Same frameworks and principles, different forms.
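The roughly-1-TB-per-month figure above checks out as back-of-envelope arithmetic (taking 1 TB = 10^12 bytes):

```python
# 28 Mbps stream, 3 hours a day, 30 days a month.
mbps = 28
seconds_per_month = 3 * 3600 * 30                       # 324,000 seconds
total_bytes = mbps * 1_000_000 * seconds_per_month / 8  # bits -> bytes
terabytes = total_bytes / 1e12
print(f"{terabytes:.2f} TB")  # about 1.13 TB, roughly the 1 TB claimed
```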
If a software company like Dropbox, for example, is a commodity, how is it still growing its user base in spite of competition from Google, Amazon, etc.? It must offer a superior UX; otherwise everyone would go with the lowest-cost provider.
data and control. old testament, new testament, rome, catholic church, education, behaviour, etc. nothing new
In terms of the analogy, I would argue that oil refineries are perhaps the most valuable part of oil production. Likewise, data by itself is useless until refined into something meaningful/tangible/etc.

Sounds like the argument is that software is the "refinery," so to speak, but in my opinion it is ultimately the data scientists who are the refinery. They convert the data into meaningful insight and therefore bring the most value to the data production process, where software would simply be the distribution method for the refined product.
So venture investing = drilling for "network effects." Not sure if one is down there until the data comes spewing out of the top of the stack.
The balance between closed source until you reach scale and then open source appears to be a successful model. Each model appears to stand on its own merit based on the offering. Why does any particular source need to be christened as king by the kingmakers, who have a vested interest in the success of whatever they are putting their stakes on?
It sounded like yesterday was a super interesting day that I am sorry I missed.
"You need data to provide defensibility and differentiation"

I’m not persuaded this is true. There are MANY software companies whose defensibility comes not from the fact that they have access to data but rather because their software (and increasingly their associated ecosystem) locks customers into their products. You could argue that Oracle’s DBMS products satisfy the ‘need for data’ but I think that’s a stretch.
When I hear infrastructure, I think of hardware. Maybe that’s because I’m old school and I remember building servers in the ’90s. With the cloud and automation tools like Ansible, Chef, Puppet, Amazon’s OpsWorks, CloudFormation, etc., software builds your infrastructure. Is that what you mean by Infrastructure (software) above?
I love the mouthful version!! It is very difficult to explain to people when you have discovered new oil. It’s not anywhere people would expect. It is only through months of thinking and questioning your assumptions that you realize your tech stack of software and data has operating leverage at scale. One sure sign is that you create a new product category. Your business model is a critical component too; there must be very low friction for adoption. I could go on and on, but I’ll stop. I’ll send you an email, Fred.
Really great discussion and varying points of view. I think access is an interesting part of the tech stack; some companies operate across the stack and hit hard on controlling access to defend their position (e.g. Facebook blocking content unless you agree to their terms). You can have great software and data, but without a network/platform you have no audience or scale. Really enjoyed reading the posts the last few days, great stuff!
Oftentimes people assume that hardware is collecting some sort of data (like a drilling rig), but often hardware can be the end result of software processing (3D printing, for instance). So keep in mind that hardware exists at both ends of the chain in many cases. I’m not sure which end of the chain would be more valuable.

I think ‘smart hardware’ – hardware that is defensible via good design and manufacturing/quality know-how, and that is powered by machine intelligence – is going to be where the gold is. Not so much distinct layers of the value chain, but a cross-functional integration of several layers is where value can be packed and sold at premium prices. Think medical robots as an example.
Pardon the rookie question but why is a company with high operating leverage at scale interesting? Seems counterintuitive to the network effects hypothesis I’ve seen here quite a bit. Thanks in advance.
Agreed. You can’t just start with “something is like oil”. Oil is like oil. Tech is like tech. Now excuse me while I pop into the Genius Bar for an oil change.
Charlie – I think this is implicit in Fred’s question. Where is the value? It always resolves to a customer benefit; it’s just that the value lies all along the stack if the stack is consumed by different entities. I consume AWS services and ISP services, but my clients consume analytics (I hide the stack). This means a discussion about the stack is really about distribution up and down the stack, i.e. where the value layer is widest next (regardless of channel path).
You are already losing sight of where the value really is. When you don’t step into the discussion through the needs or wants of customers, you miss the point.

As I put it, from 100,000 feet up, sure, when looking for good projects, do look at tech. Why? Because tech has so suddenly given so much functionality for information and automation for so little money, with costs still falling rapidly, that we have to guess there are unexploited opportunities. IMHO, that’s correct, but it’s only true from 100,000 feet up; for actual success, you still need “the needs or wants of customers”.

Or, commonly, “the needs or wants of customers” have been around a long time, and what’s new now, the new opportunity, is to exploit tech to satisfy some of those “needs or wants”. E.g., the cartoon character Dick Tracy had a wrist radio. Well, now something like a watch with a wireless connection to an iPhone should do the trick! That is, if a wrist radio is one of the “needs or wants”!
“Wars killing millions of innocents” is a product of the geopolitical reality in that corner of the world, mixed with a heavy dose of religion.Not the fault of oil.The US is now the world’s largest oil producer, but you don’t see mass killings on the plains of Texas.
“this is not a top comment on the topic. someone downvote it”Not yours to decide!
avc is an investor’s view of the world, by definition top down. Builders are by definition bottom up. You look at the same world from different viewpoints. I read avc because that view is useful to me. My point of view is, I think, less interesting to an investor. Such is life.
Well, needs and wants is a mature way of looking at company relevancy. Unfortunately, when many start-ups don’t even know how they’re going to monetize their biz, N & W becomes, particularly short-term, somewhat irrelevant. It’s a fundamental reason why I think many start-ups fail. The mentality is if we can achieve scale we’ll figure the rest out later, including monetization.
The value in networks typically gravitates to the core and top. That’s why we need to better understand the role of settlements which serve as price signals and (dis)incentives clearing supply AND demand both north-south (app to infrastructure, or software to hardware) and east-west (between networks, apps, agents, actors, users, etc…). We have no market driven settlements (other than trading in our privacy) to serve as price signals and drive ubiquitous upgrades and ensure low cost universal access.
Why should we downvote? There are smart things in it. And yes, people might kill over data one day.
Oil/energy was a powerful force because it enabled the industrial revolution to magnify human productivity more than tenfold via machines, and it was an economic powerhouse because, although for a time supply appeared infinite, there was always a scarcity in available supply at any given moment. To fuel our machines and technology we will always need some energy source, but the reverse is not true.

Software/data/network bandwidth have similar multiplier effects, like oil, but are fundamentally different because you can always create more: there is only a scarcity of imagination, not of supply.

What that means ultimately is that with oil there was a small oligarchy of drillers/producers and resource owners who could generate enormous power and profits worth fighting over. No one has, or could maintain, such control in the tech space. There is always room for another new monopolist (mega company) if you find a novel way to address some human need and relative scarcity; that was never true with oil, which is why people had to start wars for access to a share of it. Hence, tech will be used to fight more sophisticated wars, but we won’t fight wars over tech.

And you are absolutely correct. It is too easy to put the focus on the tech that enables solving problems, rather than on the problems that need to be solved. Most financial analysts and techies do the former, and inevitably put blinders on themselves regarding the reason why the tech succeeds. When breakthrough technology fails, it is almost always because people saw the tech as an end in itself, rather than focusing on the application and how it solves a job to be done.
The killing is an exported product! It is no less real. US troops are sent abroad to die.
I am hoping @jlm might find time to give a well-needed history lesson on how commodity supplies are critical to keep war moving
Lots of rich, raw natural resources to exploit => Heavy interest from foreign investors
+ Weak institutions, government + Poor education => Kleptocracy, Extremism
=> Coups, Revolutions => Wars
+ Foreign intervention, proxy wars => More war, instability, extremism
=> Failed states

Not only true in the Middle East, but also in Africa.
you might if there are massive crop failures in the US
??? Sorry but I can’t follow your logic.
That’s what I’m not following. US energy policy -> A -> B -> C -> conflict in the middle east.
What he is saying goes something like this. Israel gets a boatload of money not just because of the Israel lobby but because it’s in the Middle East, where there is oil, so it’s strategically important to US interests. The US spends money, fights wars, supports regimes, etc. in various places in order to keep the oil flowing.
Charlie – I am with you on abhorrence of many of the side effects that market structures can cause (oil is a good example). However, I don’t think we can resolve the minutiae of ethics every time we discuss oil. If we accept that oil is analogous to “ridiculously cheap labour, in the hands of very few, geographically clustered, with horrible long-term side effects”, we must also accept that that is how the pyramids got built. Are the pyramids valuable? Interesting ethics question. But we cannot explore it every time we talk about the Middle East and, say, tourism.
Exactly. Sometimes very frustrating reading AVC, but always interesting. The reason is that, to me, the devil is in the details of how and why something works or doesn’t work with a product or a service, or even a cold email that you send to try to sell something to someone. Details matter. Details always matter. Nuance and execution.

I remember a story Henry Kravis told in a book. He hired a guy to run a hotel chain that he had purchased (private equity). The guy shows up at a meeting with (among other things) some new signs they will be using, shows them to Henry, and (I guess) asks what he thinks. Henry tells his underling to replace the guy. Fire him. Doesn’t matter how true the story even is. The point is that Henry Kravis (quite imperial, a two-shirt-a-day man) doesn’t get into things like that. The mere thought of someone even asking him was enough to make him feel he needed to find a replacement. He is a big-picture finance guy. That is his view of the world and it works for him in a big way.

Contrast that with, say, Steve Jobs, who was legendary for being the opposite (and it doesn’t even matter if that is true either, actually). Same with Sam Walton. And Ray Kroc. And I am not! On a commercial condo board that I am on, I wrote the sign policy, and it was my idea to even have a sign policy. Of course I got exactly what I wanted, as the rest of the board had no idea or ability to dispute anything in the 3-page detailed document I had laid out to ensure the correct conformity according to my particular vision. (One of the reasons I am on the board: so things are done my way.)
Kravis and family were once seated at a table next to mine at a very upscale Batali restaurant. The staff were fawning over him like he was the friggin’ Pope. It reminded me of another restaurant experience at a UES Italian place, a known mob joint, where the staff similarly fawned over this guy and his girlfriend, both of whom could easily have been cast in Goodfellas. Different sets of power brokers, one built from the top down, the other from the bottom up.
I believe that’s our foreign policy, not our energy policy. (energy policy is primarily focused on energy use & production in the US)Though yes, energy (along with historical alliances, moral beliefs, governmental ideology, trade interests, etc. etc.) is one of the things that helps inform our foreign interests.
Wow how suffocating. I can’t even stand any extra attention that I get at Starbucks (and I don’t even tip).
Network effect. Value grows geometrically and is captured at the core. Costs grow linearly at the edge, but users at the edge get just enough value (network externality) to offset that cost. Cost at the core is not just silicon and software; it includes product development, marketing, customer service, retention and upsell. Those who win (Facebook) have managed to do it better than others. Typically, in the network layers there are tradeoffs in transport and switching between layers 1-3, in the control layers there are tradeoffs between layers 3-6, and in the app layers there are tradeoffs between layers 5-7, based on traffic densities, performance requirements, type of application, security, monetization, etc.

Today’s “internet” is a bunch of isolated silos at the core and edge (partially and fully vertically integrated) that hold the majority of the value. There is no efficient settlement mechanism clearing supply and demand to bridge these silos and drive rapid adoption of 4K VoD, 2-way HD collaboration, mobile first (not only), and IoT. All of these are technologically doable today, but it’s all those tradeoffs above that cannot be resolved efficiently without north-south and east-west settlements.

As Herzog said, Every Man for Himself and God Against All.
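One common formalization of the "value grows geometrically, costs grow linearly" claim above is Metcalfe's law, where network value scales with the number of user pairs. This toy sketch is my own illustration (the unit constants are arbitrary assumptions, not measured figures); it shows why the surplus captured at the core only appears past a break-even scale:

```python
# Toy model: Metcalfe-style network value (grows with user pairs, ~n^2)
# versus linear edge cost. Constants are arbitrary, for illustration only.
VALUE_PER_PAIR = 0.001   # assumed value of each potential user-to-user connection
COST_PER_USER = 1.0      # assumed linear cost borne at the edge per user

def core_surplus(n: int) -> float:
    """Total network value minus total edge cost for n users."""
    value = VALUE_PER_PAIR * n * (n - 1) / 2   # n*(n-1)/2 distinct user pairs
    cost = COST_PER_USER * n
    return value - cost

for n in (100, 1_000, 10_000):
    print(n, core_surplus(n))
```

Below break-even the network loses money on every user; past it, the quadratic term dominates and the surplus concentrates wherever the pairwise interactions are intermediated, which is the comment's point about value gravitating to the core.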
All true, and well said. Though I would lay that at the feet of geopolitics and economics, rather than blame “oil”.

Emerging markets with unstable governments see little foreign investment in labor-intensive industries, leaving natural resources to receive the heavy share of investment and development. This is compounded by the tendency of these governments to claim ownership of below-ground mineral rights rather than rely on taxing their citizens for funding (thus reducing scrutiny of their spending). The result is the scenario you described.

For example, I spent a month in DR Congo trying to buy a bank 10 years ago. An interesting, stressful experience, given their then decade-long conflict which was (and is) still smoldering (the bloodiest conflict since WW2, with 5MM dead and counting). DR Congo is the best (worst) example of a “failed state” the world has. They don’t have that much oil, but they have over half of the world’s cobalt. And lots of tantalum, tin, and tungsten (the 3 T’s). And lots of diamonds.

People like to talk about “oil wars” rather than “cobalt wars”, I guess because it sounds more sinister. But to the people living in the conflict areas, there is no difference.
Wow, it’s really interesting that you’ve spent time in the Congo. I just finished reading a book about it (http://www.amazon.com/Danci…), which is how my brain wound up there. Would love to hear about it sometime.

Substitute ethnic hatred for religion, cobalt for oil, and Rwanda, Zimbabwe, Angola for the US, Russia, Iran, and I can’t help but note parallels between the Congo and Syria. Different pieces, but a similar result. Very unfortunate.

Really interesting note on the lack of labor-intensive industries. Wondering whether that ensures a downward cycle that prevents enough investment in education to escape and get to strong institutions, etc.
That looks like a great book – added it to my list. Congo was fascinating – give me a shout if you’re ever in Boston – email in image below. You make good points about Congo vs. Syria (or even Afghanistan, for that matter).

Re: foreign direct investment: I don’t think a downward cycle is the issue so much as the upward cycle that never gets going. Three things you need to get economic development going: 1. education, 2. a strong justice system / rule of law (ideally with strong property rights), 3. capital. Africa actually does fairly well with education. Capital isn’t hard to get – it comes knocking whenever #2 is strong and the government is stable. #2 is the problem. And it’s a hard nut to crack.
Isn’t a key feature of American energy policy to project power to keep oil supplies flowing at all costs? There are few other components of foreign policy that America is willing to go to war over, so although you may be technically correct, it is largely semantic.
I’d say that’s squarely in the domain of foreign policy.

Also, the narrative that America goes to war to keep oil flowing is false. Saddam didn’t invade Kuwait to turn off the oil spigots; he invaded so he could sell their oil too and get even more wealth. We would have had plenty of oil either way. Economic self-interest ensures that we will be able to buy as much oil as we are willing to pay for. (Further, OPEC’s reputed ability to set oil prices is way overblown.) It is our alliance with Israel, not oil, that is the biggest source of our conflict with other countries in the Middle East.

Honestly, even if all of OPEC (30% of global oil production) somehow tried to prevent us from getting oil, they wouldn’t be able to. The US consumes 19MM bpd, most of which we produce, so net imports = 5MM bpd, compared to a global market of 93MM bpd. I.e., we can buy our oil elsewhere if necessary.

Energy policy = how much of our energy should come from nuclear vs. oil, coal, gas, wind, etc., and what regulations we should have governing each of these types of energy production, distribution, and consumption.