I'm Having A Meltdown

So the chips we use in our personal computers and cloud computers have some newly discovered security holes. One is called Meltdown. The other is called Spectre.

My first reaction upon hearing the news yesterday was “so what do we do about this?”

The answer is you can’t do much on your own.

For Meltdown, we need the operating system and hardware manufacturers to issue patches and firmware upgrades. I am sure they are furiously working on them.

The Verge has a good piece on what we can and should be doing about this.

Here’s the key part of that post:

  • Update to the latest version of Chrome (on January 23rd) or Firefox 57 if you use either browser
  • Check Windows Update and ensure KB4056892 is installed for Windows 10
  • Check your PC OEM website for support information and firmware updates and apply any immediately

I expect Apple will be issuing an update shortly for their OS.

Apparently Microsoft, Google, and Amazon’s cloud services are already patched for Meltdown.

As for Spectre, apparently there are no fixes for it as of now, but it is also a lot harder to implement hacks using that one.

Finally, my partner Albert has some optimism about all of this. We should expect some good to come of this mess.


Comments (Archived):

  1. NS

    Perception and reality play a big role in security. Hardware is assumed to be more secure. We’ve heard phrases like “hardware root of trust”, trusted platform module, etc., which assume that the ultimate bedrock of security is hardware – which is now shaken to its “foundation”. Also goes to show the lack of diversity in core h/w components, which is a huge risk. If security is an arms race, hardware just can’t evolve fast enough. Sometimes buggy software might be easier and faster to fix.

  2. Twain Twain

    Yes, it’s very serious. Still, instead of terms like “Meltdown” and “Spectre” there should be a British term like “Computer’s having kittens!” (problems). https://uploads.disquscdn.c

  3. PhilipSugar

    It has slowed down those cloud services. We monitor.

    1. fredwilson

      speed vs security

      1. Matt Zagaja

        They should offer a YOLO option for those of us who do not need security. I appreciate the effort but if someone manages to steal my list of scraped Massachusetts state snow alerts the consequences are far from dire.

      2. Vasudev Ram

        “Move fast and break things” – Zark Muckerberg (for the sake of appearing cool or hip)

        https://xkcd.com/1428/
        https://www.brainyquote.com…

        And later they go like this (duh – what everyone else knew long back – you’re actually slower that way, coz you got to fix the breaks anyway):

        http://mashable.com/2014/04

        1. Michael Elling

          What fake news?

          1. Vasudev Ram

            Don’t know what you mean.

          2. Michael Elling

            Law of unintended consequences. Little thought to strategy and long-term (unintended) consequences. Same holds for the chip debacle. These were known outcomes decades ago. But who cared. Fail fast. Make a buck. Capitalism at its best.

          3. Vasudev Ram

            Get your last para now, but still don’t know how it relates to fake news – maybe because I haven’t been following that stuff much.

          4. Vasudev Ram

            Got it now …

      3. PhilipSugar

        Not debating whether they should have slowed down; you have to. Just pointing out something.

        Your first, second, and third requirements are security. Then the next three are uptime. Then actually performing how you designed, without bugs.

        I’d say speed is around priority 10, but it still is in the top 10. But life is a tradeoff.

      4. Gustavo Melo

        I can’t read this type of argument at the moment without thinking about the final episode of Black Mirror new season, where the “entrepreneur” is telling the story. “Move fast and break things!”

    2. LE

      The cloud services will end up giving you more horsepower for the same price to make up any degradation in service. I am pretty certain that will happen. After all the pricing is arbitrary to begin with.

  4. Elia Freedman

    It’s believed that Apple’s High Sierra OS has already been fixed but Apple hasn’t officially announced that yet.

    1. rimalovski

      According to MacRumors it has “already been partially addressed by Apple in the recent macOS 10.13.2 update” https://www.macrumors.com/2

    2. jason wright

      that’s good news. windows is a risk not worth taking imo.

  5. sigmaalgebra

    Sorry. But I anticipated some such.

    For the attempts at the really high end BIOS stuff, serve breakfast in bed, etc., no way to get the bugs and security holes out of that stuff for years.

    For the shopping I’m doing now for the parts for my first server, I’m going with all old stuff! Yes, I know it’s old and it’s cheaper, but gotta remember, it’s a lot BETTER! In particular, fewer security holes.

    As of last week and this week, I HAVE been having trouble: during my planning, all the parts were readily available at good prices at Amazon, etc. Now the candidate motherboards I like, AM3+ socket and support for ECC main memory, are getting tough to find. All the other parts are still readily available at good prices, even the ECC memory.

  6. Anne Libby

    I get your posts by email now — when I saw your headline, I worried that email was already getting you down. Happy New Year.

  7. LE

    Intel stock is down but I am thinking there is a theory that this could be good for them. The small drop in the stock price confirms that actually.

  8. DJL

    This is pointing to flaws in the supply chain. If we think about all the components that make up a device, all it takes is one vulnerability. It is well known that Chinese manufacturers (if possible) will install malware on components that are installed in US tech. So US companies have to essentially check each component and then the system as a whole. (Which is not really feasible today.)

    This is very much related to the “internet of things” (IoT) problem. Thousands of vulnerable devices released into the wild every day. There are some promising blockchain startups working on this (Filament). Whether they make an impact is another thing completely.

  9. Frank W. Miller

    This reminds me of something I say to my students in my OS classes. I lament to them how, when I was coming up through the program, these designs were still fresh and being worked out. Remember BSD? This was a book that I pored over when I was 22: https://en.wikipedia.org/wi…

    We used to talk about the wonderful design elements, like fork/exec, the device model, objects, etc. Now we teach the kids how to attack these designs: buffer overflow attacks and spoofing and such. The reason is, of course, that we’re trying to teach them how not to write code that is vulnerable to these types of attacks. I can’t help thinking, though, how naive we were at that time, not thinking about how the designs should be done to avoid being attacked.

    While Albert is on the right track (I too believe that the fundamental machine design, from the logic gate up, needs to be redone with security as the basic focus), he should now go back and start reviewing OS literature regarding sandboxing and virtualization and capabilities, etc. There have been mountains of work in this area already. However, much of it was never adopted because of the Linux/Windows/MacOS hegemony. These days, they all run basically the same kernel design on basically the same hardware. These fundamental designs have not changed in decades, primarily because of the commercial inertia associated with the designs and the mountains of application code that uses them. While the need to do a fundamental reset on these things is apparent, I don’t know that now is the time they will be addressed.

    1. sigmaalgebra

      Buffer overflows are super tough to stop in big software projects in C following the overall attitudes in K&R. IIRC, Microsoft was STILL fixing buffer overflow problems as of just a few years ago. Memory leaks? Also super tough to stop in big projects in C.

      It’s just outrageous that programming-101 errors, buffer overflows, array bounds errors, memory leaks, should still be with us. Much of the blame belongs to your favorite C. Then there’s much worse: the C ‘cast’ sickness, where they do type conversions but really are only interested in their silly religion of strong typing and ignore the details of the conversions. Good languages well before C, e.g., IBM’s F-level PL/I, were very careful about the details of type conversions.

      1. Frank W. Miller

        You’re correct, my favorite C is all about speed. I tell my students, it’s like a Ferrari. Super quick, fast, and gives you ultimate control of the basic elements. However, it’s really easy to drop it into gear, pop the clutch, and end up in a wall… 😉

        1. pointsnfigures

          Great analogy. Asking a stupid question because I am not a CS, if you redesigned from the ground up what would it mean for open architecture vs closed and what would it mean for privacy?

          1. Frank W. Miller

            Certainly a thing called capabilities and a thing called whitelists (made famous by PCMatic) could be hardware level constructs, but that question deserves a dissertation level response… 😉

          2. obarthelemy

            I’m not sure privacy is in the picture at all. Right now we’re mostly giving it away willingly in exchange for free stuff; the cases where our privacy is invaded via security breaches are comparatively rare. Of course better security would help with those, but that’s a side-show: privacy is mostly a business issue, not a technical one.

            As to whether Closed or Open is safer, that debate has been raging for decades. First, it’s a bit of a secondary issue: you first need success, so the main question is not whether you can be closed and secure or open and secure, but closed and successful or open and successful. The market strongly favours closed, which moves quicker and lets you have proprietary stuff. Second, Open = many eyes but often messy governance; Closed = obscurity but usually more decisive governance. At its best, Closed doesn’t have to be worse than Open; for example, Windows (closed) is safer than Android/Linux (open). But with Closed you never know how serious a consideration security was. For example, Equifax obviously didn’t care. Because of that motive issue, I think only Open can reliably be considered safe, especially since unsafe stuff never leads to meaningful penalties.

      2. Lawrence Brass

        Careless and childish people shouldn’t be allowed to use C. As it reflects and exposes the processor internals without compromises, there is no abstraction layer between your algorithms (and errors) and their optimal implementations. Every new improvement in microprocessor technology is first made available and accessible in a C/C++ compiler.

        C is a professional programming language, intended for professional programmers and systems programming. Almost every second-order abstraction built upon the logic model that the processor provides is still expressed mostly in C/C++: new languages, virtual machines, libraries, frameworks, blockchains.

        Using C successfully requires from the programmer an attitude and a commitment to correctness which very few people are willing to assume or able to understand. The learning curve to mastery is steep and requires a lot of time.

        C language criticism, misplaced in my opinion, trumped its evolution. I thought C would evolve to represent the new CPU paradigms, mainly the ones introduced by multicore CPUs. It didn’t happen at the language level itself, but through compiler directives and careful memory layouts you can still extract some of the benefits and promises of multicore.

        It is remarkable that the world we live in is dominated largely by the lasting influence of what the people at Bell Labs imagined and built in the 70s. Evolution will lay out the path ahead, but I think that bashing the atoms is a waste of time. Go and Rust are interesting, so-called alternatives (which they are not), but watching the mainstream is an important factor.

        1. Frank W. Miller

          Agree that C is for professionals. This is why I advocate for putting C and its app sister C++ in the hands of children as soon as possible. 1) They can handle it and 2) it will make them real professional programmers much earlier.

          1. Lawrence Brass

            Soccer players do crash their Ferraris on the highways here from time to time. Every time it happens I think what a bad example it is for the kids that follow them.

            Teach them the correct attitude towards programming.

            “Good morning boys and girls”
            “Gooood mooorning sir”
            “Today we will do an exercise. You have to make a program that increases your granny’s pacemaker heart rate by 10%,” or something like that. :)

            Robotics is great because there are observable consequences from code execution.

            There is a playful and conversational attitude in some companies today that I agree can benefit the creative processes. But at some point serious work has to be done, with the right attitude. Kids are taught in these playful environments because they are kids, but I think that simulating some harsh conditions or problems for them would be good, so they can develop a sense of urgency and responsibility for when it is needed.

            I would love to teach kids in the future; I practice with my grandchildren. What age are the boys and girls you teach? I guess they are in their teens?

        2. Vasudev Ram

          Shooting yourself in the foot in various programming languages: http://www.toodarkpark.org/…

          Go and Rust are not in that list. Maybe someone can add them …

          1. Lawrence Brass

            So funny! I am still reading it and laughing. Thanks.

            Cloud servers with hardened BASIC interpreters are the future. :[

        3. obarthelemy

          You’re essentially saying C should never be used: there are no “careless and childish people” vs “conscientious and mature ones”. First, it’s a continuum. Same as ethics, skin color, calmness, … people are more or less conscientious, not 100% with or without.

          Second, those two characteristics can ebb and flow for endogenous reasons: must wrap that up and pick up the kid, got a cold, just had a fight with the boss, baby cried all night…

          Third, they can change for exogenous reasons. If the boss/big boss/client says we must ship at the end of the month, or gain 20% performance, or add features within the same budget… code quality will suffer. You need not only “conscientious and mature” coders, but teams, corps, clients, users… and that, all the time. Show me just one of those, especially in the Real World, not academia. Also, over time, personnel gets swapped out and in.

          What we need is for vendors to be accountable for their mistakes, which is the first thing EULAs rule out, and isn’t very well covered by the law anyway. Right now, even gross negligence gets a pass; it generates profits and never jail, fines rarely. I find putting the blame on individuals when the issue is systemic in very bad taste.

          Edit: several, all over. Finished now!

          Edit: I lied, two more things:

          1. C is just an extreme because it allows particularly low-level access and requires a lot of explicit resource management and control, but bad/unsafe code can be written in about any language. Not all will let you jailbreak down to the OS/hardware, but you mostly don’t need that to do a lot of evil. Examples of Meltdown are written in Javascript.

          2. Even that mythical “conscientious and mature” coder would need to be able to step back and look at the bigger picture and what’s happening not only at the interfaces but in the rest of the code flow. He won’t, because a) it’s not his job and b) we all know what happens to whistle-blowers.

          1. Lawrence Brass

            Never say never. “Careless and childish” vs “caring and mature” fits better, I think. There is a bit of exaggeration there to make my point. I live and work in the same imperfect world that you describe, with its ups and downs and shit. We are not perfect; they are not perfect.

            If you want to use C, just be extremely careful or it will bite you. Not good for casual programming like testing algorithms or exploring data. Very good for infrastructure, microservices, glue code, performance optimization. But it comes with a discipline attached; that is my point.

            Using languages that have good “C bindings” is a good strategy to get the best of both worlds.

          2. Michael Elling

            Hence why it is not just a technology problem, but an economic problem where incentives and disincentives have to be built down and out to the chip/logic layer/boundary.

        4. sigmaalgebra

          I’ve written some C code off and on. I always wrote it only at the level of K&R. It’s picky work, maybe like watchmaking. Then, for tens of thousands of lines of code, it would be like using the techniques and tools of watchmaking to dig the Panama Canal.

          I would guess that to get very far, you would need at least a good string package with a LOT of functions, and some good debugging aids, e.g., for staying within the allocated length of the strings. And you would have to get rid of the idea that the end of the string is denoted by a null character. A good string package for Unicode could be a lot of work!

          Then, sure, need a lot of time-date routines. Then would need to replace malloc and free, also with some really good debugging aids. SHOULD have some garbage collection with usage counters, etc. And would need a good array package, also with a lot of debugging aids, e.g., runtime checking of obeying array bounds. Would end up with a LOT of header files and would want some tools to help manage those.

          Really, for debugging, an interpreter might be called for. An interpreter could trace what variables or storage changed when and where. Would want some help avoiding stack overflows, etc. Need much more on data type conversion. Would want a decimal arithmetic package. Likely would want some aids that would parse the source and report on it, e.g., what variables get used where.

          Would want some decent scope-of-names functionality. As it is, it’s communism, nearly everything for everyone, and this strongly contradicts modularity and the principle of divide and conquer. Some scope of names with some solid, useful properties for static code analysis would also be good. Might want some code, for use at least during debugging, to add some discipline for the use of pointers. Maybe a package for just-in-time compiling would be good, also some on reflection, where at runtime the code can understand itself.

          For the intended high performance, would want some tools to help with that, e.g., profiling, hot spot detection, tracking what the replacements for malloc and free do. Would want some analysis of locality of reference. Would want some good sort packages, lookup routines, priority queues, and collection classes.

          Of course, there is no threading in K&R, and, really, threading is too light in weight. So, should upgrade to more of a task, say, as in PL/I, that owns memory, so that when the task goes away so does its memory. Then with threading would need some good thread synchronization functionality, with, during development, some debugging aids and monitoring. Dijkstra semaphores anyone? Exceptional condition handling would need a LOT of work. SHOULD be able to branch back into the stack of dynamic descendancy with the right things done with memory and exceptional condition handling. Definitely would want lots of help with getting routines reentrant.

          “As it reflects and exposes the processor internals without compromises”: I never got that far, e.g., never used C to look at the registers, page-segment tables, get info on the caches, the OS dispatching queue and functionality, the security rings, the I/O port usage, etc. Yes, for an abstraction layer, basically would have to invent and implement one.

          All this work with too many function calls and too much pointer chasing promises to be SLOW. Really, the C compiler is not doing enough. If the functionality, e.g., strings, arrays, tasks, etc., were in the language, then a compiler could generate much faster code. That’s why I don’t regard C as for high performance.

          1. Lawrence Brass

            Very thorough list of requirements. Most of the packages you want do exist, but they are scattered around. Very good tools too: debuggers, static analyzers. Libraries for everything, with a few showing their age or abandoned. POSIX for all your threading and semaphores. What you definitely don’t have is structured exception handling; the implementations are hacks or extensions that work with the respective OS. Reflection neither.

            I would like to see new versions of the C language, but as C++ got all the attention and resources, I think it won’t happen. C++ is not strictly a superset of C; however, as both languages share the same compiler infrastructure, we still have fresh tools. The ecosystem is vast. Definitely for systems programming and not for applications.

    2. LE

      “I too believe that the fundamental machine design from the logic gate up, needs to be redone with security as the basic focus”

      a) They think it’s already secure. See my ‘white men we have it all figured out’ comment elsewhere. I’m not joking. It’s like when security is stepped up in the physical world (because of some intel that says something will happen) where you would assume it already is or should be. Or when a doctor is dealing with an AIDS patient they take more care. Why? Should the normal procedures be enough? How do you know someone doesn’t have a disease that you need to protect against?

      b) This will introduce an entire new series of problems, not to mention the cost of everyone dealing with any changes.

      In any case, people are going to still be the weak link. That’s the reason I don’t have my mom do online banking. Let her drive to the bank branch and make deposits and get a smaller amount of interest. Likewise many people who are keeping their crypto assets think that whatever they are doing is safe (whether online, cold and so on).

      This will blow over. It’s like everything else the media goes after because of the story value that dies in the next news cycle.

    3. Lawrence Brass

      Reminds me of the “row hammer” vulnerability in RAM modules. I don’t know how it was handled, or if it was handled at all. It was a hardware flaw.

      Are these exploits hardware or microcode flaws? The level of sophistication of the analysis techniques used to find this type of flaw should be equalled in the chip design stages or OS mitigation layers. It continues to be a catch-up game.

    4. Michael Elling

      Not just technology solutions; we need to be thinking “inter-network” settlement systems (ie economics) that provide incentives and disincentives down to the chip/logic level. But people don’t think that way and that far forward. But everything now at the core and edge points to the necessity of this happening before things get out of control.

      1. Frank W. Miller

        DARPA thinks that way. The Federal Reserve thinks that way. The three-letter agencies think that way. 😉 If somebody does the work, and it’s good, i.e., it solves a security/privacy/whatever problem, somebody else will use it and off you go. The VCs would jump on that bandwagon fast. As Fred alluded to, the main problem is speed, so hardware solutions will be required.

        1. Michael Elling

          We are 170 years into “digital” network effects and first thought about universal service and democratizing what were obvious monopolies (Western Union, AT&T) over 100 years ago. We still haven’t learned. Nor have we learned that network effects really govern everything around us and have been since the dawn of mankind; something Graeber hasn’t factored into his assessment of risk and debt. That’s why I say the edge is simply a mirror of the core, much as the atom is a mirror of the universe. We are constantly looking for technology solutions. But the answer lies as much in economics, or “equilibrism”: http://bit.ly/2iLAHlG

          As for DARPA, they came very late to the game of understanding how to counter the social media weapons our own agencies spawned and our adversaries have so cleverly used against us. As for the Fed, just look at their inability to control kicking risk down the road and where that got us in 2008. Now what’s gonna happen when crypto implodes? There is no “riskless” world, and protocols are by their nature developed to reduce risk and enable interworking between unknown actors. But in all networks and internetworks, value is always captured at the core and top while costs are borne at the bottom and edge.

          Lastly, there are no fully distributed or fully centralized networks in nature. Rather there are centralized-decentralized hierarchies constantly moving towards balance (risk free) but never getting there. We should mimic those models.

  10. LE

    This is yet another thing I hate about the security industrial complex. This has been around forever (I am hearing since 1995) but now researchers have poked around enough to find it. And the race is on to exploit and/or patch something which wasn’t a problem prior to today in any measurable way. And I don’t even think that is an AFAIK situation. And if it was, it only impacted a small number of people. There is always a trade-off in life, and obviously in computer security. So now everybody knows this can be done, and there will be money spent and slowness to protect the small number of people that might have been impacted and/or might be impacted if it had not been disclosed.

    This is almost like medical tests. If you look hard enough you will find something. And that will potentially mean drugs and operations which bring their own risks. You know, we can just spend our entire life worrying about computer security and shit that might happen vs. things that are happening.

    1. Gustavo Melo

      Alternate reality: researchers don’t find or publish it now. Eastern European hackers eventually find it, build some malware powered by this thing, and manage to steal/destroy massive amounts of data before some security company figures out exactly what is happening. Now instead of a bunch of money and time being spent on prevention, 10x as much is lost in damages and spent in damage control. Mind you, this is the *tame* alternate reality that doesn’t factor in much scarier stuff like state actors.

  11. creative group

    CONTRIBUTORS:

    This is a hardware issue that can possibly be fixed using software patches. Would it have been a better path to fix or issue a patch first, then say why the patch was issued, as updates usually do? Now the hackers who were not concentrating on this flaw will now focus on this flaw just to test it. Gheez.

    Does every secret, flaw or weakness need to be publicized? Unless seeking outside help, why telegraph until after the fix, or to let the public know the fix is available?

    Captain Obvious!

    #UNEQUIVOCALLY #UNAPOLOGETICALLY #INDEPENDENT

  12. LE

    My other thought was this is yet another example of white men and their ‘we’ve got this all figured out’ meme. Like docker, containers, VPS, cloud services etc. All thought to be entirely safe from any cross exploit etc.

  13. jason wright

    well, i just had a chat with James. he’s up for it.

  14. obarthelemy

    Alas, most of our phones’ ARM SoCs are also vulnerable to Meltdown, especially the high-end SoCs, since Meltdown is linked to out-of-order/speculative execution. Anything with a Cortex-A7x (Snapdragon 8xx, 65x, 66x), even most Cortex-A5x cores ( https://www.cnx-software.co… ).

    Given Android’s update situation, this ain’t gonna be pretty. The only current core that’s safe is the low-end Cortex-A53… time to buy that $150 Xiaomi Redmi 5 Plus ;-p

    As always, Apple is mum on the issue.

    Edit: Sorry, the Redmi 5+ is a 625… You need the non-Plus Redmi 5, which has an SD450 = 8xA53.

  15. jason wright

    midlife crisis moment? take an aspirin, buy a yacht, and sell your Twitter stock.

  16. kevando

    2018… American society wakes up

    Apple is the first casualty.
    duckduckgo up 10,000%
    Blockchain apps thrive on Microsoft
    home drone gets weaponized
    Wall is built… around Colorado
    Stories like Bart’s People steal eyes from “News”

    Bonus prediction: Taco Bell releases synthetic meat that’s incredible

    https://azure.microsoft.com…
    https://www.youtube.com/wat

  17. LE

    Well, as expected, it’s not ‘cancer’…

    “SANTA CLARA, Calif., Jan. 4, 2018 — Intel has developed and is rapidly issuing updates for all types of Intel-based computer systems — including personal computers and servers — that render those systems immune from both exploits (referred to as “Spectre” and “Meltdown”) reported by Google Project Zero.”

    That is, if you believe the white men who say ‘we’ve got this figured out, move along, nothing to see here’, or the media, which of course will continue to make hay with this.

    https://newsroom.intel.com/

  18. Gustavo Melo

    Waiting to read the news about how someone used this to steal millions in bitcoin!

  19. todd

    Commercial software and hardware are developed more insanely now than previously because of increased collaboration, Atlassian, and thus peer pressure. Technical folks can understand that getting the job done correctly is invaluable, much more important than getting that next ‘sprint’. Once it’s really, truly done correctly, it’s worth all the delays. These are the forces working against proper security work. Thankfully the open source and free software movement brings level-headed judgement to this field.

  20. george

    An unfortunate meltdown for Intel, and it really hurts their brand. Hacking is inevitable, but this is leaving the keys under the welcome mat.

  21. James Sandberg

    Just goes to show how important it is to take some ownership of your own security, offline backups etc. We can’t solely rely on the software developers for our security when our work is worth so much to us!