Usage Based Pricing For Code Execution
I’ve been thinking about Amazon’s Lambda service which I mentioned in the Hacking Echo post last week. Lambda is not new, but last week was the first time I saw it in action. I need to see things in action to understand them.
Business model innovation is interesting to me, maybe more than technology innovation. Because new business models open up new use cases and new markets. And new markets create a lot of new value. And I am in the new value business.
Think about this value proposition:
AWS Lambda lets you run code without provisioning or managing servers. You pay only for the compute time you consume – there is no charge when your code is not running. With Lambda, you can run code for virtually any type of application or backend service – all with zero administration. Just upload your code and Lambda takes care of everything required to run and scale your code with high availability. You can set up your code to automatically trigger from other AWS services or call it directly from any web or mobile app.
“You pay only for the compute time you consume, there is no charge when your code is not running”
So we have gone from “you have to buy a server, put it in a rack, connect it to the internet, and manage it” to “you can run your code on a server in the cloud” to “you can run your code on a shared server in the cloud” to “you can pay for code execution as you use it”. And we have done that in something like ten years, maybe less. That’s a crazy reduction in price and complexity.
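To make "pay only when your code runs" concrete, here is a minimal sketch of what a function looks like on Lambda's Python runtime; the handler name and the fields in the event are illustrative assumptions, not anything Lambda requires.

```python
# Minimal sketch of a Lambda function (Python runtime). There is no server
# to provision; you upload the function and are billed only while it runs.

def handler(event, context):
    # 'event' is whatever payload triggered the invocation: an API call,
    # an S3 notification, a direct invoke, and so on.
    name = event.get("name", "world")  # illustrative field, not required by Lambda
    return {
        "message": f"Hello, {name}",
        "request_id": context.aws_request_id,  # metadata Lambda passes in
    }
```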
But we also have put a ton of code in an open repository that anyone can access and copy from.
So, like we did in the Hacking Echo situation, you can now browse GitHub, find some code, use it as is or modify it as needed, and then put it up on Lambda and pay only when you execute it.
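A rough sketch of that workflow with boto3, assuming you have zipped up some code (say, adapted from a GitHub repo) and already have an IAM execution role; the function name, zip file, role ARN, and runtime below are placeholders.

```python
# Hedged sketch: deploy a zipped-up function and invoke it on demand,
# paying only for that invocation. Names and ARNs are placeholders.

import json
import boto3

lam = boto3.client("lambda")

# Upload the code once.
with open("function.zip", "rb") as f:
    lam.create_function(
        FunctionName="hello-from-github",                    # placeholder name
        Runtime="python3.12",                                # pick a runtime matching the code
        Role="arn:aws:iam::123456789012:role/lambda-exec",   # placeholder role ARN
        Handler="handler.handler",                           # module.function inside the zip
        Code={"ZipFile": f.read()},
    )

# Invoke it whenever you want; you are billed per call, not per hour.
resp = lam.invoke(
    FunctionName="hello-from-github",
    Payload=json.dumps({"name": "AVC"}).encode(),
)
print(resp["Payload"].read())
```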
I think this is going to make for a lot more hacking, experimentation, and trying new things.
And that is going to result in new use cases and new markets. It may already have.
Comments (Archived):
This is the promise IMO of 21.co
+1. This is native to Bitcoin more so than anything else I’ve seen.
“you can pay for code execution as you use it”

That’s also how smart contracts run on a blockchain. You run the contract and pay for it. It might take a few seconds, and that’s it.
Same for cars. You had to buy one, then you could lease it for a few years and return it with no obligation. Then with Zipcar, you just pay for it by the hour. Then Uber came along, and you pay by trip.
Is there a granularity below trip for people transport? I can’t think of one.
Sharing the Uber fare. Maybe in the future, by weight. 🙂
Per seat?
it’s called UberPool
the future is here, it’s just not evenly distributed… yet…
then you could lease it for a few years and return it with no obligation

Sure, but you lose the ability to get rid of that car if you don’t like it prior to the end of the lease period. On the upside, a lease has a guaranteed residual value, so you don’t have to wonder what you will get for the car at the end of the lease. As a general rule, if you have cash, my feeling is leasing is a bad idea. [1]

[1] If the car is in an accident, though, having a lease ends up being to your benefit, of course.
Of course, I was using this as an example. But, re: “Sure, but you lose the ability to get rid of that car if you don’t like it prior to the end of the lease period”: any car dealer will happily break your lease to get you into a new lease, because they keep you longer as a customer.
great analogy!
“You pay only for the compute time you consume, there is no charge when your code is not running”

You mean, just like going way back to the days of time sharing and, before that, to the days of service bureaus?
Except no paperwork/contract to get going, just a credit card and you’re off to the races.
And acceptance of their terms & conditions, which is a contract that your company lawyer should vet.
Example 1 — Saving FedEx

At one point, I saved FedEx from going out of business: From my home in MD, I got a terminal, a phone number for dial-up access, and an account at a time-sharing service running an IBM system with VM/CMS. I wrote the software in PL/I. The software scheduled the fleet in a way that pleased the founder, COB, CEO F. Smith, erased doubts on the BoD, enabled quite a lot of funding, and, net, saved the company from going out of business (it was a close call).

I don’t remember signing anything! I just gave a billing address at FedEx, and they sent bills.

Example 2: Reading a Tape for My Wife

When my wife was in grad school at Johns Hopkins, her prof got a standard reel of computer tape, 1/2″ wide, 2400 feet long, from a former student of his in Chicago. The tape was supposed to have some good data from some survey work done by the guy in Chicago.

So, my wife and her prof tried getting the tape read on the new IBM computer in the computer center at Johns Hopkins. The computer center people said that they tried everything, even “variable” format, and couldn’t read it.

Ah, at the time I had a good career in progress around DC in computing and applied math for mostly US national security, and, as a small part of that career, sure, I had long since understood a lot of details about tapes and how to read them!

So, I talked with the Hopkins computer center people and concluded that they likely didn’t have any idea what the heck they were doing and that likely the tape was okay. So, after dinner, I got on the phone, called a service bureau I’d known about but had never done business with, and asked if they had a PL/I compiler on their machine. They did. So, I grabbed my wife and the tape, and we drove to their site in Virginia, just over Key Bridge. Right away they gave me a user number, and I sat down and typed in some PL/I that would read darned near anything! Just do PL/I RECORD reads, one tape physical record at a time, into a PL/I string of type CHARACTER VARYING with maximum length 32,768 bytes or some such.

Print a few such physical records and their lengths and figure out what the tape format was.

Well, the format was dirt simple: FIXED BLOCKED with LOGICAL RECORD LENGTH 80 bytes. That’s about as simple as it gets, still!

So, I modified the program to print the data, the 80 characters per line, printed it, left at dawn, got us a nice breakfast at an IHOP, went home, and got some sleep. When the bill arrived, I hoped my wife’s prof would pay it, and he did. Else I would have paid it — maybe $50. I had a good career going!

Just a handshake deal. No contract. No credit card. Somehow they trusted someone who walked in ready to type in some software and run it, start late at night, and stay until dawn. The marginal cost to the service bureau? Less than zip, zilch, and zero. And they did get paid. It all worked fine.

The situation is a lot better now?

The Main Tension

Now you can plug together one heck of a server from parts costing less than $1000. For anyone who knows how to do system management of such a box, either the cloud guys aren’t making much money or the DIY approach is a lot cheaper.

The tension has long been between (A) buy hardware versus (B) share it, and it still is.
The main issue is just system management.

To share, you end up paying some astounding total for the system management, system administration, marketing, insurance stuff, legal stuff, management overhead stuff, five-nines stuff, security stuff, and profit.

If you DIY, you can save a huge bundle but have to do your own system management.

Maybe the Main Opportunity

Then for the opportunity: Have Microsoft and Linux distro vendors come out with some well written documentation (I know, nearly impossible for the hacker culture) on how to DIY.

The Long Term, Huge Point

Long term, there’s no hope for centralized versus DIY. The huge point is that hardware is too cheap and centralized is too darned clumsy bureaucratically, too labor intensive.

Fun War Story

There was a tape that did give some problems. The tape had a lot of research sonar data on it and was recorded on a US Navy submarine during some sea trial experiments.

Well, there were several experiments.

Nicely enough, at the beginning of each experiment, the sailors wanted to note where the beginning of that data was. So, they got one of those gun-like plastic thing-ys for making labels, with a dial for the alphabet and a trigger, that squeezed characters into some soft plastic tape with adhesive on one side.

Then before each experiment, the sailors took such a label, reached into the relatively open tape drive they had, and put the label on the tape.

Then when the tape came back to us, we tried to read it, right, on a high-end, likely the most expensive ever, IBM computer — national security money!

So, sure, the tape loads and gets to the little reflective thing-y for logical beginning of tape, and the software starts to read. Then, suddenly in the computer room, “BAM!” as the sticky, soft plastic label hits the beautifully engineered and built tape heads, gets jammed, and causes the tape to break!

Well, that was one tape my software was not able to read!
I think Lambda is absolutely fascinating, but I can tell you as an engineer it scares me. Why? Because you have almost no insight into how the system performs. Nor how to fix it when it breaks.

Debugging, testing, and performance tuning all become much harder, especially if you get to any kind of scale. These tasks all basically coalesce into one: “call Amazon and hope”. Not a plan I’d want to bet my business on.

But for prototyping or small services where you don’t care about performance, Lambda seems like an awesome experience.
“you have almost no insight into how the system performs”

Don’t they have SLAs on performance levels?
Not that I could find, searching briefly. But this isn’t comforting: https://forums.aws.amazon.c…

Also, SLAs are not very useful when the issue is code that you wrote, or interactions between code you wrote and an opaque system.
I think an SLA is a bit of theater in some respects. If you are operating a small service and paying a small amount of money, getting credit for that small amount does nothing for you; the hit you take in your business is much greater. And honestly, even if you are Netflix and paying millions, having unhappy customers has a much greater cost than whatever you would get as credit. The point is, if an SLA is breached in some way, nobody is turning over the keys to the ranch to you. It’s like having a wedding and getting credit for the steak served, which ruined your guests’ experience.
We’ve tried Lambda and abandoned it for 3 reasons. The first is the one you state: no transparency or control. The second is that it does not work well with custom npm modules, which is a big issue for us. And the third is that it runs an old version of Node.I think it’s interesting because it does not require that you keep a server running just to hold code on it. But then again there are services that allow you to spawn servers at $0.04 per hour, which is what we choose to use when experimenting/prototyping.
All of which can be fixed in 2.0.

Beware of rejecting what looks like a toy because of a 1.0 feature set… 😉
The latter two issues can be fixed, but the first is intrinsic.
Not necessarily. You could find ways to create transparency and control. This is what people used to say about virtualization, or relying on the electrical grid. 🙂
Fair enough. Once standards and real levels of service arrive, I can see how giving up vast amounts of transparency makes sense. I am not sure software can be as standardized as electric current, but I may be wrong.
This isn’t like an investment where you can see the value as it iterates; it’s about operating solutions reliably. We cannot do that on Lambda so we dropped it for the stated reasons.

Since when did dropping something become synonymous with ignoring it completely?
Hmm. Not sure I was claiming “dropping something” was “synonymous with ignoring it completely”. But regardless, I think we agree that lambda wasn’t the solution you needed, and isn’t a magic bullet.That said, I think that just as electricity generation started out creaky and became more and more reliable, we’ll see a similar progression with Functions As A Service like Lambda.
Sorry Dan – my comment should have been in reply to Aaron, not you. In the same way a wind-powered mill would have rejected an intermittent electricity supply, we’re rejecting (for now) Lambda.
Of course, but we can’t use it now and now is what matters for our purposes.
I also think it’s fascinating, and I’ve used it for a React app I built at (shameless plug) http://rateapolitician.us

Aren’t all non-self-hosted solutions essentially “call the provider and pray”? Even if your server is self-hosted, unless you are an expert HW and SW engineer, you are reliant on some nameless person/entity somewhere in the community for your business.

When I was developing my app (I used Lambda, API Gateway, DynamoDB, etc.) the stack was a bit of a bear to get going, but now that it’s up, I pretty much never have to worry about the backend stuff. That is a great feeling and I would definitely strongly consider that stack for any new projects I’m working on.
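For readers curious what wiring Lambda behind API Gateway with DynamoDB looks like at the code level, here is a hedged, minimal sketch of a proxy-style handler; the table name and attributes are hypothetical and not taken from the app above.

```python
# Rough sketch of a Lambda handler behind an API Gateway proxy integration,
# reading from DynamoDB. Table and attribute names are hypothetical.

import json
import boto3

table = boto3.resource("dynamodb").Table("politicians")  # hypothetical table


def handler(event, context):
    # API Gateway (proxy integration) passes query string parameters here.
    params = event.get("queryStringParameters") or {}
    item = table.get_item(Key={"id": params.get("id", "unknown")}).get("Item")

    # The response must follow the proxy-integration shape:
    # statusCode, headers, and a string body.
    return {
        "statusCode": 200 if item else 404,
        "headers": {"Content-Type": "application/json"},
        # default=str handles the Decimal values DynamoDB returns for numbers
        "body": json.dumps(item or {"error": "not found"}, default=str),
    }
```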
I agree that all outsourced solutions force you to give up some elements of control. But the lack of insight that lambda forces upon you is different in kind, not just degree.
Re debugging, testing, and performance, that’s what AWS hopes we use CloudWatch and QuickSight for. QuickSight is their version of Tableau.

It’s all about automation of code, and Google does it too:
* http://venturebeat.com/2016…
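For what it’s worth, on the Lambda side the debugging story mostly comes down to reading what your function logged. Here is a hedged sketch of pulling recent log lines for a function out of CloudWatch Logs with boto3; the function name is a placeholder.

```python
# Sketch: pull recent log output for a Lambda function from CloudWatch Logs.
# Lambda writes each function's stdout/stderr to /aws/lambda/<function-name>.
# The function name below is a placeholder.

import boto3

logs = boto3.client("logs")

events = logs.filter_log_events(
    logGroupName="/aws/lambda/hello-from-github",
    limit=50,                  # most recent events only
    filterPattern="ERROR",     # narrow to error lines, if any
)

for e in events["events"]:
    print(e["timestamp"], e["message"].rstrip())
```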
Those are all monitoring/analytics solutions, right? What do you do other than call Amazon if these tools indicate a problem?
I was wondering while reading the post what the use case would be, beyond testing. Is there a production use case for Lambda? Did you have one in mind when you tried it?
The use cases I have seen (I have not used it myself, other than reading docs and a bit of hello world stuff) are all asynchronous services. Seems awesome for fire-and-forget tasks like making thumbnails of images.

Also, worth checking out: https://github.com/serverle…
+1 to @mooreds’ comments.

Essentially any feature/component of a product/service you are building that does not have real-time performance expectations can be done via Lambda. Think of use cases like “importing large amounts of data” or “running reports that take a long time to complete”. Also, anything that fits into the side-effect programming pattern is a prime candidate for Lambda, i.e. “you just created a new user account and now do steps 2-5 as a follow-through”, etc.
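As a rough illustration of that kind of asynchronous, fire-and-forget work, here is a sketch of an S3-triggered handler; the bucket names are placeholders and the resize step is a stand-in stub, not a working thumbnailer.

```python
# Sketch of an asynchronous, fire-and-forget Lambda: S3 sends the function an
# event whenever an object is uploaded, and the function does follow-up work.
# Bucket names are placeholders; resize_to_thumbnail() is a stand-in stub.

import boto3

s3 = boto3.client("s3")


def resize_to_thumbnail(image_bytes: bytes) -> bytes:
    # Stand-in for real image processing (e.g. Pillow packaged into the zip).
    return image_bytes


def handler(event, context):
    for record in event["Records"]:              # S3 can batch notifications
        bucket = record["s3"]["bucket"]["name"]
        key = record["s3"]["object"]["key"]

        obj = s3.get_object(Bucket=bucket, Key=key)
        thumbnail = resize_to_thumbnail(obj["Body"].read())

        s3.put_object(
            Bucket="my-thumbnails-bucket",       # placeholder output bucket
            Key=f"thumbs/{key}",
            Body=thumbnail,
        )
```

The same shape works for the “steps 2-5 after account creation” pattern: the trigger changes (an SNS topic, a DynamoDB stream), but the handler still just consumes event records.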
Nor how to fix it when it breaks.

Agree. Honestly, the more you get away from racking your own server in your own facility (or even in colo), the more you have this situation.

We operate equipment:

a) at the office
b) in colo
c) at Rackspace
d) some other places (such as Media Temple)

With a & b we are in total control as far as uptime, when upgrades of software are made, outages (and fixing them), and so on. At c & d we are at the mercy of when others see fit to deploy fixes, upgrades, or even replace hardware.

The upside of c & d, of course, is that many things become someone else’s problem to deal with, so if there is an outage you can sit back and just wait for it to get fixed instead of having to stress out fixing it yourself. So there are pros and cons. As a general rule, though, I absolutely agree with your point; I like to have as much knowledge and ability to fix something as reasonably possible.
As someone who is currently deploying to Heroku, where I get no control but plenty of other people’s help (upgrading servers, patching Ruby, etc.), I totally agree that there is a link between control and responsibility. The issue is: how far can we push responsibility to another entity and maintain the level of service we need?
All true, and it will probably stay in the realm of prototyping and non-realtime tasks, but if more cloud providers offer this type of service (and Moore’s law keeps chipping away at the performance issue), I can see this becoming a common implementation, much like CDN/static hosting services are for files.
Also see webtask.io, which is like the Heroku for serverless. Good/fun explanation here: https://tomasz.janczuk.org/…
I bet Uber is working on that.
A flashback to the ’70s, where you pay for computer time?
Microkernels and microservices. They may take some time to mature but appear to be the future.
So we have gone from “you have to buy a server, put it in a rack, connect it to the internet, and manage it” to “you can run your code on a server in the cloud” to “you can run your code on a shared server in the cloud” to “you can pay for code execution as you use it”. And we have done that in something like ten years, maybe less. That’s a crazy reduction in price and complexity.

This is true; however, to me the impediment to doing something on the Internet or starting a business is more about knowledge than the cost of doing it. While it was expensive back in the ’80s, ’90s, and early ’00s, it hasn’t been expensive to do anything for years. A high school kid can easily buy a VPS for $5 to $10 per month to play and test an idea. Actually, they don’t even need to do that; they can do things on their home computer, which they already have, and fly under the radar of the cable company/FiOS TOS. It wouldn’t be an issue until they had an actual idea that is working, and at that point mom and dad would step in and give them money. In order to do something you have to have a) motivation, b) ability to learn, c) time, and d) the idea. Simply having free hardware isn’t even close to being enough. All the knowledge is out there; it is nowhere near what it was like back in 1996 or earlier, when info was scarce, you had to RTFM, and you couldn’t google for answers.
My Borg associates think the same
I’m with some of the others here – while this at first blush feels revolutionary, I don’t know that there are real advantages. I can spin up an instance at Linode within a few minutes, and even if I leave it up for the entire month idle it’s $20. For experimentation, staging, etc. I don’t see a lot of advantage in removing that minimal cost.

For production, where one needs bursts of compute power for short times but for the vast majority of time your service is idle or consuming much less power? Yes, those cases are potentially very interesting here.
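To put rough numbers on that trade-off, here is a back-of-the-envelope sketch. The Lambda prices are approximate published rates (check current pricing, and the free tier is ignored), and the workload figures are invented for illustration.

```python
# Back-of-the-envelope sketch comparing an always-on $20/month instance with
# Lambda's pay-per-use model. Prices below are approximate and the workload
# numbers are made up; treat this as a sketch, not a pricing guide.

ALWAYS_ON_MONTHLY = 20.00            # e.g. a small always-on VPS

PRICE_PER_MILLION_REQUESTS = 0.20    # approximate Lambda request price
PRICE_PER_GB_SECOND = 0.0000167      # approximate Lambda compute price


def lambda_monthly_cost(requests_per_month, avg_duration_s, memory_gb):
    request_cost = requests_per_month / 1_000_000 * PRICE_PER_MILLION_REQUESTS
    compute_cost = requests_per_month * avg_duration_s * memory_gb * PRICE_PER_GB_SECOND
    return request_cost + compute_cost


# A mostly idle service: 100k requests/month, 200 ms each, 128 MB.
idle_ish = lambda_monthly_cost(100_000, 0.2, 0.125)
print(f"idle-ish service on Lambda: ${idle_ish:.2f}/month vs ${ALWAYS_ON_MONTHLY}/month always-on")

# A busier service: 50M requests/month, 200 ms each, 128 MB.
busy = lambda_monthly_cost(50_000_000, 0.2, 0.125)
print(f"busier service on Lambda:   ${busy:.2f}/month vs ${ALWAYS_ON_MONTHLY}/month always-on")
```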
It may not be as impressive but I’m looking forward to using Amazon’s newly released Elastic File System!
This is so amazing… when we scaled our auto dealer software company in the late 90s and into the 2000s we got caught in a vicious cycle of bandwidth and server upgrades – we never seemed to get ahead. What an exciting time to innovate.
Not to take anything away from all the innovators out there but… as your application/service scales you will have to manage the provider — whether that provider sells you servers, storage, networking gear, hypervisors, containers, etc. and you rack and stack yourself, or provides what is essentially a PaaS offering you only pay for when its resources are utilized, or anything in between or beyond. As your application/service consumes more and more resources (we’re all after growth, right?), you have to know that growth rate and the application’s/service’s performance characteristics, and you have to constantly be on the lookout for innovative approaches to infrastructure/platforms that can do some portion or all of it more efficiently.

As these applications and services grow, all those upgrades will have to occur. The good people at Amazon, Heroku, Rackspace, you, somebody, will be upgrading the underlying compute, networking, storage, etc. Making it seamless to your users and to the business, that’s one additional area to create real value in the form of user experience, from the provider to you and from you to your users. Get to know the people managing your systems, demand a rich set of features including SLAs and reporting capabilities, leverage those reports, drive communication, and essentially manage it the same as if it was in your own data center. Ultimately, the business and your personal success depend on it.
The biggest bottleneck for business model innovation is the cost of data traffic/broadband when growing fast. Server cost itself is usually affordable, but it’s the data traffic that gets very expensive if you want to have full control of your own content. If you start to play around with others’ data from YouTube, Amazon, etc., you have no control of your assets (data/platform). So entrepreneurs usually don’t put everything up in the cloud of a potential competitor. If you test a new business model with, for instance, video as content, the costs grow fast. So broadband suppliers, don’t be evil: reduce data traffic costs. Someone should challenge today’s business model for peering/data traffic costs, to get entrepreneurs on board without being afraid of growing too fast (cost challenges).
The law of unintended consequences has gotten us to this point. Settlement-free peering (also known as all-you-can-eat or unmetered) has resulted in the high pricing you describe. Silos everywhere. Instead, usage-based settlement models result in more efficient allocation of costs as well as providing the necessary price signals to clear supply and demand. Incentives and disincentives on both sides that do not exist today.
How do you support/rationalize settlement-free peering and usage-based pricing of software? The ills of the internet ecosystem have less to do with tech and policy and more to do with the economics of efficient supply/demand clearing. See my response to Kent.
Lambda seems to be very similar to what Ethereum hopes to accomplish. Usage-based pricing is common in most blockchains. Makes sense for Amazon to move in that direction.
It’s fascinating to watch the pace at which “serverless” has captured the attention of many who typically don’t report on AWS services (like yourself, Fred). We’re seeing a confluence (and amplification) of trends that’s leading to a shift in computing: cloud, containers, and micro-services. This shift is both enabling and reinforcing a state of mind that allows one to write software and simply deploy it while someone else worries about the rest. It’s hard to think of a “no-ops” world — but for a developer and consumer, we’re getting there.

In 2011, we founded Iron.io on the premise that the world would be both “serverless” and multi-cloud. We wrote about it in 2012 (http://readwrite.com/2012/1…), but were a bit early and the other supporting themes hadn’t emerged/evolved far enough yet… and it’s still early, as seen by many comments here and elsewhere.

Suffice it to say, we’re excited and passionate about what’s to come! Thanks for covering.