The Machine Will See You Now

Siddhartha Mukherjee has a good long read in The New Yorker about machine learning and medical diagnosis this week.

In it, he explores whether machines are going to replace radiologists, dermatologists, etc., or help them do their jobs better.

He concludes with this observation:

The word “diagnosis,” he reminded me, comes from the Greek for “knowing apart.” Machine-learning algorithms will only become better at such knowing apart—at partitioning, at distinguishing moles from melanomas. But knowing, in all its dimensions, transcends those task-focussed algorithms. In the realm of medicine, perhaps the ultimate rewards come from knowing together. 

We are very excited about the possibilities of using machine learning to help diagnose medical conditions early when they can be treated successfully. We have made a number of investments in this area and I expect we will make many more.

I believe that this is the future of medicine and the sooner we get to it the better off everyone, including the practitioners, will be.

#hacking healthcare

Comments (Archived):

  1. awaldstein

    I read this as well, Fred. I agree: the less we worry about protecting the status quo and the more we focus on making it better for the end user – you and I and everyone’s health – the better we are.

    1. Vijay Vasandani

      Obviously this is true, but the concern is for the many people caught out mid-transition. I’m not arguing against the ultimate adoption of these technologies, but we definitely have to approach it with caution.

      1. awaldstein

        what does approach with caution mean?

        1. Vijay Vasandani

          True, but when things are not as clear as they should be following training and testing, you err on the side of caution: you take the option with the least potential for damage, after doing as much training and testing as possible. Your initial comment glorifying “the less we worry about protecting the status quo” is one I agree with, up to a point. A complete disregard for the status quo is akin to playing with all your chips in the pot, which sets you up for some spectacular successes and some spectacular failures as well. Edit: in this scenario, “erring with caution” would be not going to the extreme of disregarding the status quo entirely. Specifically, there’s a threshold past which decreasing the worry about the status quo no longer brings net benefits in terms of expected value.

          1. awaldstein

            Dunno – of course everything is done with thought, at best. I think if you start with the patient as the rationale, things fall into line. Remember when X-rays were analog and you had to go to a special place to have them done? Now they are smaller, mostly inside doctors’ complexes, and the diagnosis happens in minutes, not days. Sure, it impacted the labor force. But the upside obviously comes first, and the rest rolls downhill.

          1. awaldstein

            Of course. The role of the expert, the role of the human interpreter, the requirement to communicate at a better level with the patient in the face of more obscure, unknown indicators becomes more important. I ghostwrote a book on the medical ethics of neonatal care a decade ago and still think a lot about these issues. And honestly, you get closer to these issues if you personally, or in family, have ever had really early and really ambiguous data be the driver for choices that actually did save someone’s life. The combination of crazy tech, medical innovation and just great doctors is beyond a gift.

          2. ShanaC

            also false negatives

        2. Richard

          Google sensitivity and specificity.

          1. awaldstein

            Interpretation is where the truth is applied, not in definitions. That is the whole point.

          2. Richard

          No, the whole point is that healthcare has always been based on data; in some cases the data is more exact than others. Take back pain (one of the most common reasons for visiting a doctor): there is noise within the X-rays and MRIs, and noise within the treatments (NSAIDs, surgery, PRP, cortisone, acupuncture). That’s noise squared.
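
Sensitivity and specificity, the two quantities Richard points to, are easy to make concrete. A minimal sketch in Python, using hypothetical screening counts (not figures from the article):

```python
def sensitivity_specificity(tp, fp, fn, tn):
    """Sensitivity: share of diseased cases the test catches.
    Specificity: share of healthy cases it correctly clears."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical mole-screening results: 90 melanomas caught, 10 missed,
# 50 benign moles flagged anyway, 950 benign moles correctly cleared.
sens, spec = sensitivity_specificity(tp=90, fp=50, fn=10, tn=950)
print(sens, spec)  # 0.9 0.95
```

A test can score well on both measures and still mislead when the condition is rare, which is the false-positive concern raised elsewhere in this thread.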

    2. Matt A. Myers

      Reminded me of the second-to-last blog post I made, a quote I posted: ”With the increased velocity of modern changes we do not know what the world will be a hundred years hence. We cannot anticipate the future currents of thought and feeling. But years may go their way, yet the great principles of satya and ahimsa, truth and non-violence, are there to guide us. They are the silent stars keeping holy vigil above a tired and turbulent world. Like Gandhi we may be firm in our conviction that the sun shines above the drifting clouds. We live in an age which is aware of its own defeat and moral coarsening, an age in which old certainties are breaking down, the familiar patterns are tilting and cracking. There is increasing intolerance and embitterment. The creative flame that kindled the great human society is languishing. The human mind in all its baffling strangeness and variety produces contrary types, a Buddha or a Gandhi, a Nero or a Hitler. He who wrongs no one fears no one. He has nothing to hide and so is fearless. He looks everyone in the face. His step is firm, his body upright, and his words are direct and straight. Plato said long ago: ‘There always are in the world a few inspired men whose acquaintance is beyond price.'”— S. Radhakrishnan

  2. Vendita Auto

    The advent of D-Wave systems will make a huge difference. Think Dirac’s quantum mechanics calculations in nanoseconds. What a great time to be alive.

    1. Twain Twain


  3. pointsnfigures

    True innovation takes about 30 years to be fully implemented and realize its full economic potential. Generation after generation, that’s the macroeconomic heuristic. Software and machines might be a bit faster, but I don’t believe it will be decades faster.

    1. Twain Twain

      So true. It’s taken Deep Learning 30 years from academia to something like Clarifai. https://uploads.disquscdn.c

    2. cavepainting

      For all medical problems that AI attempts to solve, there is a simple question to be asked: are the patient outcomes (or the individual practitioner decisions leading up to the outcome) better with the use of the software, and if so, under what circumstances? Knowing this definitively takes many years of careful piloting, testing, and objective measurement. The nature of medicine and the human body is holistic. You have to connect dots across many things, and no medical discipline operates in complete isolation. Machines and software can augment by providing decision support, but let us not underestimate the human involved and his/her special ability to understand the why. There are likely to be multiple phases of adoption, each crossing over to the next stage based on proof: a) offline decision support, b) real-time decision support (for example, in-surgery or during patient observation), and c) auto-decisioning and execution with minimal human interpretation, which would be the toughest of all.

  4. markbarrington

    Dr AI: It’s benign.
    Patient: How do you know?
    Dr AI: I can’t explain how.
    Patient: I need another Dr.

    1. aminTorres

      If the AI can tell it’s benign, why would it not be able to explain it? It will take a while for things to be patient-to-machine the way you describe, and it will probably never be 100% patient-to-machine. Unless, of course, we get to the point that machines are statistically so much more effective than humans that actually going to a human doctor would constitute a risk.

      1. markbarrington

        From the article – “That’s the bizarre thing about neural networks,” Thrun said. “You cannot tell what they are picking up. They are like black boxes whose inner workings are mysterious.”

      2. Vasudev Ram

        IIRC there were expert systems in the ’80s or earlier that could not only make diagnoses or predictions but could also explain the chain of “reasoning” by which they arrived at their results. Prolog, Lisp, forward chaining, backward chaining, inference engines, all that good stuff. If they just store their internal state and state changes in some sort of data structure (as they work out a solution), it should be possible to generate the explanation from it.
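
That idea, a rule engine that records each rule it fires so the chain of reasoning can be replayed, fits in a few lines of Python. The rules below are invented purely for illustration, not taken from any real diagnostic system:

```python
# Minimal forward-chaining sketch: rules fire from known facts, and every
# derived fact records the rule that produced it, so the "reasoning" can
# be replayed as an explanation afterwards. (Illustrative rules only.)
RULES = [
    ("lesion_asymmetric and lesion_growing", "suspicious"),
    ("suspicious and biopsy_positive", "melanoma"),
]

def infer(initial_facts):
    facts = set(initial_facts)
    trace = []                      # the explanation, built as we go
    changed = True
    while changed:                  # keep passing over the rules until
        changed = False             # no new fact can be derived
        for condition, conclusion in RULES:
            needed = condition.split(" and ")
            if conclusion not in facts and all(f in facts for f in needed):
                facts.add(conclusion)
                trace.append(f"{' + '.join(needed)} => {conclusion}")
                changed = True
    return facts, trace

facts, trace = infer(["lesion_asymmetric", "lesion_growing", "biopsy_positive"])
for step in trace:
    print(step)
# lesion_asymmetric + lesion_growing => suspicious
# suspicious + biopsy_positive => melanoma
```

The point of the sketch is the `trace` list: unlike a neural network’s weights, it reads back as exactly the kind of explanation the comment describes.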

      3. Pete Griffiths

        Because the model comprises many factors with coefficients that make no ‘sense.’

        1. Twain Twain

          They’re not that dissimilar from the models in financial derivatives. See the Global Financial Crisis of 2008 for what happened there!

          1. Pete Griffiths

            Black Swan – Taleb. But I think that was slightly different, wasn’t it? The problem there was that they assumed a normal distribution for risk without actually having reason to do so.

          2. Twain Twain

            Yesterday, Taleb tweeted this bit of brilliance: “You know that behavioral economics fails to get ergodicity in their experiments,” which “PUTS A KIBOSH / SPANNER” into AI researchers’ belief that their hammers for Natural Language Processing work. Not that Taleb is aware his musings in economics are read by me and then cross-pollinated over to AI to help me validate my theories – lol.

            Definitions of ergodicity: 1: of or relating to a process in which every sequence or sizable sample is equally representative of the whole (as in regard to a statistical parameter); 2: involving or relating to the probability that any state will recur; especially: having zero probability that any state will never recur. https://uploads.disquscdn.c…

            Google and Stanford have this theory about word vectors, sentence vectors and “thought vectors.” They think that the meaning of words, sentences and thoughts can be modeled in a similar way to the Brownian motion of molecules – via probability distributions of word clusters and similarity angles in a vector space, or via Reinforcement Learning (which is related to Behavioral Economics and Game Mechanics). They’ve made 2Vec de rigueur in all their NLP frameworks on TensorFlow. Alas for them … they’re contributing to the spread of bias. https://uploads.disquscdn.c…

            AH and … THERE IS NO NORMAL DISTRIBUTION OF RISK!!! Up to 50% of risk hasn’t been measured, or measurable, because if the machines can’t discern “fake news” and understand language generally, then how are they supposed to risk-manage whatever Trump’s or any company’s announcements are?! How are they supposed to read through thousands of years of medical research to understand it, gain insights and be able to support human doctors in coherent and comprehensive diagnosis?

            I proofed Black-Scholes equations for the world’s leading derivatives trade journal when I was 22. So, by the time the Global Financial Crisis of 2008 happened, I had reasonable insight into what happened because of failures in AI’s stop-loss functions and how the wrong types of coefficients were spreading.

    2. Twain Twain

      The AI increasingly does probabilistic correlations and statistical inferences over many layers of neural networks, faster (with more powerful and efficient processors). However, the adage “correlation is not causation” still applies. This is why HUMAN INTERPRETATION and SENSE-MAKING are still needed. Moreover, prob + stats (of and in themselves) bias the systems with guesstimates that are not founded upon human experience, intelligence or knowhow. https://uploads.disquscdn.c

    3. Pete Griffiths

      LOL. Deeply true.

  5. Mac

    Let’s all take a deep breath and say, ahhhhh haaaaa.

  6. jason wright

    …moles from melanomas? Good one.

  7. William Mougayar

    I was reminded of something you said last week at the fireside chat with Howard: that “healthcare is moving from biology/chemistry to data science.” There is a company that’s doing this already. I’m waitlisted for a visit to their New York center. Also, read this related article by Leila Janah, who visited them: “The AI Will See You Now.”

    1. Richard

      This sounds good in theory, but if you read the article, they basically did a blood workup. That said, it makes sense if you are indifferent to the $1,500 a year (and I myself would go). But they didn’t mention whether they charge you for the blood work, and there is definitely blood work that one cannot do in 15 minutes. If they don’t mention that, or the article above, during your first visit, we have salesmen along with doctors treating you. As to the quote “healthcare is moving from biology/chemistry to data science”: data science (survival analysis) has always been at the roots of medicine.

      1. ShanaC

        You ok, or do you just want to check them out?

  8. JoeK

    A question that has nagged me ever since I started reading this blog – Is Twain Twain human, or a machine that Fred is experimenting on us with?

    1. Twain Twain

      William, Jim and ShanaC have seen me IRL. Google Campus, 2015: * https://uploads.disquscdn.c…

      Also, last month, I passed Sir Roger Penrose’s chess test for human intuition and consciousness. Machines can’t (yet) solve this chess problem. I got to a draw for White in 8 seconds and a win in 12 seconds (8 seconds plus 4 seconds to re-orientate my perspective on the board).*…

      I also do things like crash AI that’s been built up using language corpora accumulated at Princeton since 1985 – AI that’s “passed the Turing test.” Apparently, the people who tested Eugene Goostman asked 30+ questions before discovering he was a bot. I crashed him with my 1st question. Then, with my 2nd question, he clearly revealed himself to be a bot and not a human boy. https://uploads.disquscdn.c…

      Whilst leading AI researchers at MIT, Stanford, Carnegie Mellon, Google, FB, IBM Watson, et al. are ONLY NOW discovering their AI is “biased” and “brittle,” and that both the Chomsky (semantics) and Norvig (statistics) models can’t get us to NLU …

      VentureBeat, Aug 2016: “Even with deep learning, a machine can’t yet converse naturally with a human, and it may never even beat a three-year-old child in NLU. Human language is infinitely more complex than a Go or chess game. And the chatbot ecosystem will always lose potential traction and consumer adoption if the conversation isn’t natural. Google’s Parsey McParseface and the normal factor graphs (or LFG) technique parse to grammar, which does not help with meaning. The most-quoted scientist alive today, Chomsky, said that ‘statistical models have been proven incapable of learning language.’ But the Chomskyan models have failed in NLU, since they, too, parse to grammar. Other models like LFG and Combinatory Categorial Grammar (or CCG) suffer from the same problems. Machines will be able to handle conversational speech and text only when they match every word to the correct meaning, based on the meanings of the other words in the sentence – just like a three-year-old does. To solve NLU, computer scientists Yann LeCun and Andrew Ng agree that we need to develop new paradigms.”*…

      * https://www.technologyrevie…
      * https://www.technologyrevie…

      … I was aware these biases and limitations in NLU existed many years ago. So I invented better, alternative tools to get the machines towards NLU. I’ve been waiting for the penny to drop in the rest of the AI field: that the data biases and approaches like Chomsky, Norvig, Deep Learning, Reinforcement Learning, Evolutionary GANs and more can’t get the machines to NLU.

      1. JoeK

        I would expect a machine to declare that it is human because it passed a Turing test – it does not seem like the kind of thing a human would say. If you can post a picture of yourself eating beef stroganoff, then maybe I’ll believe it …

      2. LE

        Hah! Well, one thing AI can’t do that a human (like me) can is figure out your real name, which I did last month even though all I had to work with was your comments, your handle and your icon. The photo above looks better than the ones I turned up (and Google image search is no help…)

        1. Twain Twain

          I didn’t post photos of myself online because, well, women get judged on our looks – which detracts from our expertise. My manager at UBS would enter the meeting room and tell the other executives: “Everything I’m presenting was written by Twain. This is her strategy paper. I’m just presenting it.” They thought I was either the PA there to pour the coffee, an intern on work experience, or someone’s trophy wife who’d accidentally wandered into the boardroom. https://uploads.disquscdn.c… Anyway, in tech, I started to dress like a teenage boy (sneakers, jeans, hoodies). So it’s sort of witty that I’ve invented a system for de-biasing data! Lol.

      3. Tom Labus

        You’re more than safe here.

        1. Twain Twain

          There are lots of very smart people on AVC who appreciate context and perspectives. Other parts of the Internet are less evolved.

    2. ShanaC

      She’s real, and shorter than me. William is about 2 inches taller (though I might be underestimating). Fred is about 6'–6'2" if I remember correctly. (I’m 5'6".)

  9. Richard

    What I don’t understand is why USV has any competitive advantage in investing in the winners in this space. To the VC with an AI hammer, every healthcare issue is a nail. https://uploads.disquscdn.c

    1. Twain Twain

      It’s healthy for VCs and industry players to continue to invest in AI. Another “AI winter” would be terrible for innovation and experimentation. We simply have to be sensible and mindful in AI experiments, though. We have to know what they can and cannot do so we can continuously improve on AI. At the same time, we have to put human interests (language, culture, values) and primacy over the machines at the core of everything we build. They do not think or see the world as we do.…*

  10. Matt A. Myers

    Something I haven’t shared too much publicly is that I have been dealing with chronic pain for the past 3 years. Unfortunately again – ultrasounds, X-rays, MRIs, etc. – none of them showed a physical problem that would indicate pain. I resorted to experimenting to try to find some relief. Some of the injury/damage I have is from barefoot running – those toe shoes. I was diagnosed with compound stress fractures – from impact on the concrete/hard surface of the sidewalk without the cushion that normal running shoes have – none of which showed up on scans.

    My experiment – which wasn’t covered by our healthcare system in Canada, is expensive, and has few doctors who do it – was stem cell injections, the stem cells derived from my own body fat; years ago they discovered there are stem cells in our fat – a nice personal supply we all have if we have at least 5% body fat, whereas athletes with less than 5% have to hammer [literally] into the femur bone to get the stem cells.

    Luckily the injections I’ve had in the right sesamoid area and big toe have helped a lot. They did, however, unmask pain in the metatarsals; the pain from the sesamoid was so much that it covered over the other pains. It’s been a tricky balance deciding how much of the money I have left to spend on healing myself further, and how much to spend on my life’s work. This path I am on is turning into a bit of an odyssey. It’s been an interesting journey that’s felt like it’s slowly destroyed my life these past few years. I’m coming out at the other end now, though, even though I likely have $20K+ more worth of experimental injections to try to see if they can bring me further relief.

    I’ve learned so much about understanding pain and self-care – managing one’s health – that my desire to give people practical tools to help them has grown even stronger. I feel I have a unique story that will help at least a few people, and I feel everyone else’s story is just as valuable. I still have an eventual goal of building a healthy city, a city with policy that facilitates health and a high quality of life – and I can still see what is aligning to allow that to eventually happen. It is faith and courage now that I must find, now that things have turned around for the better, so that I have a chance at succeeding with my vision.

    The point of sharing all this: don’t forget that the imaging resolution we currently have isn’t necessarily going to catch the cause of people’s reported symptoms. On top of working towards improving AI, we also need to advocate for good policy relating to how those results are used – which will become an issue in the future.

    1. Richard

      Chronic pain is a huge challenge for data science. Is your email around? I’d like to know more.

    2. ShanaC

      Be very careful with those injections; a percentage of the population has developed cancer from them.

      1. Matt A. Myers

        Risk-benefit analysis, I have no real choice. Thank you though.

    3. cavepainting

      Have you tried Ayurveda or alternative medicine? Sometimes, holistic approaches that work with, rather than against, the natural order work better.

      1. Matt A. Myers

        Indeed, I have done a fair amount – though not specifically Ayurvedic medicine. I have physical damage that won’t heal on its own; the energy flows in my body if I don’t allow myself to agitate certain areas too much, otherwise my Wood meridian energy gets locked up. Thank you for making the suggestion.

  11. Pete Griffiths

    There is a still deeper point to be noted wrt healthcare. The point is not simply diagnosis, but cure, and the history of all medicinal treatment until very recently consisted of 2 paradigms:

    1) Manual trade, akin to plumbing and carpentry. All surgery falls under this category.

    2) Poisoning, e.g. antibiotics and chemo. Poisons have evolved, but they all work on the principle of doing more damage to the cause of the condition than to healthy tissue.

    Very recently we are seeing the emergence of a completely new paradigm:

    3) Medicine as debugging. The human machine may be seen as a wet computer with poor modularity. Nonetheless, it is proving possible to identify coding errors and to remedy them. A spectacular contemporary example is the eACT therapy developed by Kite Pharma (…). This emerging paradigm is data-science heavy. The challenge is identifying targets (bugs), and this is non-trivial given the body’s spaghetti-code wetware. Nonetheless, I am confident that this is a glimpse into what will prove to be a huge new paradigm for medicine.

    (Full disclosure: I have known one of the inventors and senior scientists of Kite for decades.)

  12. ShanaC

    I’m trying to think about all of what I want to say in response to this, since the issue is intensely personal to me. Already I’ve written one response (with this article and the arXiv preprint) to a Facebook article club that is a mixture of doctors, researchers and previvor patients/cancer patients. This topic has been on my mind since at least 6-ish months ago, as I’ve been settling in with the reality that I am a previvor for breast cancer, with the ridiculous amount of screening that comes with it, and that I am not a viable candidate for a preventative bilateral mastectomy, unless my father’s family history suddenly becomes very shocking, even with genetic testing. Parts of this topic, however, have been on my mind emotionally for over 2 years now, since I was told how statistically unlikely it was that a genetic test would find anything for me.

    I choose to be an experimental subject in research. Part of it is because I’m terrified – I don’t want to die young. However, that is a much smaller part of the drive to be an experimental subject by this point in my life: I see it as ethically obligatory in the face of the pain I felt when I was told, even as a research candidate, how statistically unlikely it was that a genetic test would find information for me. I don’t like the idea of many people facing that kind of pain, so if I have to be a human lab rat to move science, technology, and medicine forward so others don’t feel that pain, my internal ethical calculator says the world is doing a bit better by my actions. With that thought in mind:

    1) If I had access to a study, particularly one involving the commercialization/FDA approval of an AI around mammograms, breast tomographs (aka 3D mammograms), ultrasounds, or MRIs, I’d do it in a heartbeat. I was a strong believer in AI in diagnostics previous to Sebastian Thrun’s work anyway, and I’m already in a conversation with my doctor about this actually happening (even if, last we spoke, she might not have believed me).

    2) From the article: “Esteva and Kuprel found eighteen repositories of skin-lesion images that had been classified by dermatologists. This rogues’ gallery contained nearly a hundred and thirty thousand images—of acne, rashes, insect bites, allergic reactions, and cancers—that dermatologists had categorized into nearly two thousand diseases. Notably, there was a set of two thousand lesions that had also been biopsied and examined by pathologists, and thereby diagnosed with near-certainty.” Normally, this is the sort of material that could/should be in your medical record. I have to say that I am about to get a bit emotional with what I am about to say.

    I) It totally burns me inside that we don’t have a single-payer system, because if we did, we could figure out how to have a single RECORD system. Sebastian Thrun and his students took random sets they found online and trained a system. I’m very curious what the performance would be like if the dataset was based on a high-quality record system that had tracked a large number of people from birth to death, and also had access to information like well-structured (for a computer and a person) pathology notes. Furthermore, there are algorithms that are somewhat better (not perfect, better) with the why question (see, for example, the algorithm sets that put together Ayasdi – which, as an early proof of concept, actually discovered a set of genes for who will survive breast cancer, doi: 10.1073/pnas.1102826108). In both cases, the dataset and its quality matter. It bothers me that our health care discussions don’t discuss healthcare innovation and the economics of health care companies around this sort of tragedy-of-the-commons issue.

    II) There might be a twofold reason we don’t discuss/have lots of these datasets available. One is HL7. … I’m not sure v3 is fully finished, which means v2, aka something finished in 1989 (before my brother, who is in graduate school, was born), is the baseline standard for EHR interoperability. It uses a format designed for concision because, in 1989, memory was an issue, even though drives were taller than my height as a toddler. The other is ICD-10 and the need to game billing in clinical patient files. … And when you have consultants advising on how to encode a bill (and how that interrelates to a patient file), that also means patient health records are compromised. We need to confront this issue sooner rather than later if we want the positive aspects of AI (let alone the neutral or the negative, or any AI).

    3) A correction: IBM Watson failed at MD Anderson because of how its AI worked, plus patient records are really badly formatted to be useful right now (see this post: http://www.healthnewsreview… – Sloan Kettering made it work, see this …). These technologies are going to open a whole kettle of other political worms beyond radiologists losing jobs, on the basis of them failing in the US because of symptoms of how badly the US healthcare system works to deliver and track healthcare and outcomes for everyone (especially preventative care, so people don’t get cancer and need fewer mammograms). IT craziness questions are suddenly going to be life-and-death questions. Although AIs are constantly learning, they also can’t really be in beta when they are used en masse in a hospital or a clinic. I’m not totally sure how the FDA is going to (or not going to) regulate and/or approve an AI, since I think technically the AI would fall under a medical device. I think they (and the NIH) are woefully underprepared for all the questions interrelated to AI software in medicine and research, especially when it comes to personalized medicine, AI, and complex clinical trials to prove efficacy (especially around rare diseases).

    4) While Dr. Siddhartha Mukherjee does open that why-question box, I think he is underselling the why question. We need to be careful with black-box AIs and how questions/answers, because, as the article mentions about the yellow circles, you can train an AI in a reality that isn’t there. When it comes time for the AI to confront reality, it crumbles in unexpected ways. This is especially true of cases that are understudied but aren’t necessarily edge cases (say, the underparticipation of black women in breast cancer drug trials, despite the fact that black women who get breast cancer die of it more often; we therefore have less data on black women in general and on drug performance compared to their actual cancer rate). Confronting the why question early and often means building better AIs.

    5) Shawn, my fiancé, mentioned when he first saw the article on arXiv (…) that beyond the neural network being naive, it is also overly simple – it could have more layers, or be recurrent or convolutional (just different types of more complicated neural networks). He mentioned that this lack might be holding back its specificity and sensitivity. Clearly Thrun could get way better performance if he wanted.

    6) I don’t think doctors are going away so fast. I see my radiologists (they rotate in the clinic I am in, and I really only know them by the form letters they send me) as partially going away (the radiologists also help with live guiding during a biopsy – and that’s not going to be a purely AI thing yet). I don’t see my oncological surgeon disappearing so fast, because beyond diagnostic opinion, there are ethical and emotional considerations that she is there for me about. I can’t see an AI arguing with me about both the scientific results of a study as well as how I feel about said results. My doctor does. <3

    7) I think AI in medicine and research is going to have much broader effects than what the article is discussing, but I am also not sure where to start that conversation.

    1. ShanaC

      I apologize to people who read this. As I said, I’m very emotionally attached to the subject, because often I am the literal subject.

      1. Vendita Auto

        D-Wave: the more you use it, the smarter it gets; the overall data sets can but grow = earlier analysis, probabilities editing. Health & wisdom be yours, ShanaC.

    2. Twain Twain

      On specificity and sensitivity, Albert Wenger has a post on this as it relates to the Probability of false positives/negatives:

      1. ShanaC

        He was briefly talking about it the week before … which gets back to the question of AI and personalized medicine. Clearly this test isn’t right for him by itself, and probably not in the context of his medical files, but we really need larger, better-formatted medical files from wide swaths of people to make that assessment. Plus, how would the test’s specificity and sensitivity change if the result’s probabilities are also dependent not on binary healthy/not-healthy but on more nuanced data in those files that it currently does not see?
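
The false-positive worry behind this exchange is largely a base-rate effect: even a test with high sensitivity and specificity produces mostly false alarms when the condition is rare. A hedged sketch via Bayes’ rule, with made-up numbers (not figures from Albert’s post):

```python
def positive_predictive_value(sensitivity, specificity, prevalence):
    """P(disease | positive test), via Bayes' rule."""
    true_pos = sensitivity * prevalence              # diseased AND flagged
    false_pos = (1 - specificity) * (1 - prevalence)  # healthy but flagged
    return true_pos / (true_pos + false_pos)

# Hypothetical: a 90%-sensitive, 95%-specific screen for a condition
# affecting 1% of the people screened.
ppv = positive_predictive_value(0.90, 0.95, 0.01)
print(round(ppv, 3))  # 0.154 – most positives are false alarms
```

The same test run on a higher-risk group (higher prevalence) gives a much better positive predictive value, which is one concrete reason personalized context around a test result matters.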

        1. Twain Twain

          The nuanced data is in medical notes, language understanding and the power of human intuition in data interpretation and abstraction. The medical notes and language-understanding parts raise all sorts of questions about the gaps between what the patient self-reported, what the doctor(s) understood and wrote down, and how every hospital has a different content format and semantic tree for filing records.

      2. ShanaC

        And yes, I realize I wrote a huge overload of material.

    3. cavepainting

      The human body is complex because everything is inter-related. A doctor uses gut, experience and insight to make a decision or diagnosis in the face of ambiguity or contradictory data. The WHY for a decision is a big deal and not limited to data-based correlations that can be outsourced to machines. Human intelligence, empathy, gut and the ability to connect the dots beyond what something looks like really cannot be replaced.

      1. Eric P. Rhodes

        Exactly. Data is just one part of the prognosis/diagnosis process that a medical professional will employ. It can be an important part of that process, but data without context is just a thin assumption.

  13. Caleb Kemere

    It’s a very interesting article. The melanoma example made very clear the biggest challenge for DNN-style machine learning in medicine: data. HIPAA, or at least the way it is implemented in most practices, appears to make aggregating and sharing data super complex. So the big EMRs have a barrier which makes it difficult to act like Google has with images or audio. Still, one wonders why Epic has not started a massive data-scientist hiring campaign?

    Related second-hand anecdote: for the whole of an un-named medical school and teaching hospital system in Houston at which my wife works, which has 1M+ patient encounters a year, there are apparently a total of 2 people who have the ability to run database searches against the Epic EMR.

  14. Marc Rowley


  15. Twain Twain

    Thanks, Charlie. I learnt as a child that people think pretty little girls aren’t supposed to be nerdy whilst also being sporty and great at languages. People can set pretty low expectations for pretty little girls – which is wrong and ridiculous. Thankfully, my grandparents spotted my brain when I was 2, and they made sure my parents encouraged my education. The potential of any human mind to learn and improve is GINORMOUS.