Deep Medicine — Eric Topol (2019)


Created 20 Sep, 2025

Deep Medicine

This page is a curated collection of highlights and key insights from “Deep Medicine” by Eric Topol, organized by chapter. The selections aim to capture the essence of each section, with occasional personal notes to enrich the context.


Contents

  1. Introduction to Deep Medicine
  2. Shallow Medicine
  3. Medical Diagnosis
  4. The Skinny on Deep Learning
  5. Deep Liabilities
  6. Doctors and Patterns
  7. Clinicians Without Patterns
  8. Mental Health
  9. AI and Health Systems
  10. Deep Discovery
  11. Deep Diet
  12. The Virtual Medical Assistant
  13. Deep Empathy

1. Introduction to Deep Medicine

↑ Contents

  • I hope to convince you that deep medicine is both possible and highly desirable. Combining the power of humans and machines— intelligence both human and artificial— would take medicine to an unprecedented level. There are plenty of obstacles, as we’ll see. The path won’t be easy, and the end is a long way off. But with the right guard rails, medicine can get there. The increased efficiency and workflow could either be used to squeeze clinicians more, or the gift of time could be turned back to patients— to use the future to bring back the past. The latter objective will require human activism, especially among clinicians, to stand up for the best interest of patients. Like the teenage students of Parkland rallying against gun violence, medical professionals need to be prepared to fight against some powerful vested interests, to not blow this opportunity to stand up for the primacy of patient care, as has been the case all too often in the past. The rise of machines has to be accompanied by heightened humaneness— with more time together, compassion, and tenderness— to make the “care” in healthcare real. To restore and promote care. Period.


2. Shallow Medicine

↑ Contents

  • I hope that I’ve been able to convince you that the shallow medicine we practice today is resulting in extraordinary waste, suboptimal outcomes, and unnecessary harm. Shallow medicine is unintelligent medicine. This recognition is especially apropos in the information era, a time when we have the ability to generate and process seemingly unlimited data about and for any individual. To go deep. To go long and thick with our health data. That body of data— Big Data per individual— has the potential to promote the accuracy of diagnosis and treatment.

  • We’re not yet using it because it’s far more than any human, any doctor can deal with. That’s why we need to change the way we make medical diagnoses, the fundamental decision process of clinicians.


3. Medical Diagnosis

↑ Contents

  • But Bayes’s theorem relies on priors, and, because we, as inexperienced medical students, had visited so many books but so few patients, we didn’t have much to go on. The method would leave aged physicians, who had seen thousands of patients, in far better stead.
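
Since this highlight hinges on priors, here is a minimal sketch of Bayes's theorem applied to a diagnostic test (my own illustration with hypothetical numbers, not the book's): the same positive result means very different things depending on the prior probability the clinician brings to the encounter.

```python
def posterior_prob(prior, sensitivity, specificity):
    """P(disease | positive test) via Bayes's theorem."""
    p_pos_given_disease = sensitivity
    p_pos_given_healthy = 1 - specificity
    numerator = p_pos_given_disease * prior
    denominator = numerator + p_pos_given_healthy * (1 - prior)
    return numerator / denominator

# Same hypothetical test (90% sensitive, 95% specific), three different priors:
for prior in (0.01, 0.10, 0.50):
    print(f"prior {prior:.0%} -> posterior {posterior_prob(prior, 0.90, 0.95):.0%}")
# A 1% prior yields roughly a 15% posterior; a 50% prior yields roughly 95%.
```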

  • New tech will narrow the value gap between experienced and inexperienced doctors: with algorithmic support, younger doctors can expect to perform close to the level of their experienced colleagues.

  • These data point to the serious problems with how physicians diagnose. System 1— what I call fast medicine— is malfunctioning, and so many other of our habitual ways of making an accurate diagnosis can be improved. We could promote System 2 diagnostic reasoning. Kahneman has argued that “the way to block errors that originate in System 1 is simple in principle: recognize the signs that you are in a cognitive minefield, slow down and ask for reinforcement from System 2.”

  • But to date, albeit with limited study, the idea that we can supplement System 1 with System 2 hasn’t held up: when doctors have gone into analytic mode and consciously slowed down, diagnostic accuracy has not demonstrably improved.

  • A major factor is that the use of System 1 or System 2 thinking is not the only relevant variable; other issues come into play as well. One is a lack of emphasis on diagnostic skills in medical education.

  • Failure to diagnose or delay in diagnosis are the most important reasons for malpractice litigation in the United States, which in 2017 accounted for 31 percent of lawsuits. 21 When the affected physicians were asked what they would have done differently, the most common response was to have had better chart documentation, which again reflects the speed with which encounters and record keeping typically occur.

  • Either shallow or fast medicine, by itself, is a significant problem.

  • He also uploaded a picture of her inflamed hands. Within hours, multiple rheumatologists confirmed the diagnosis. Human Dx intends to recruit at least 100,000 doctors by 2022 and increase the use of natural-language-processing algorithms to direct the key data to the appropriate specialists, combining AI tools with doctor crowdsourcing.

  • In this chapter, for example, I’ve talked a lot about human biases. But those same biases, as part of human culture, can become embedded into AI tools. Since progress in AI for medicine is way behind other fields, like self-driving cars, facial recognition, and games, we can learn from experience in those arenas to avoid similar mistakes. In the next two chapters, I’ll build up and then take down the field. You’ll be able to gain insights into how challenging it will be for AI to transform medicine, along with its eventual inevitability. But both doctors and patients will be better off to know what’s behind the curtain than to blindly accept a new era of algorithmic medicine. You’ll be fully armed when you visit with Dr. Algorithm.


4. The Skinny on Deep Learning

↑ Contents

  • Machine learning tends to work best if you give it enough data and the rawest data you can. Because if you have enough of it, then it should be able to filter out the noise by itself.

  • Filtering the patient sample to only outpatients— because it seemed at first to the people involved to be the best cohort to analyze— nearly killed the project, as did the human assumption that the valuable input would be in the T wave rather than the whole ECG signal.
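
As a minimal sketch of the "whole signal, not the hand-picked feature" point (my illustration, assuming PyTorch; nothing here comes from the project described), a small 1D convolutional network can ingest the raw ECG trace directly:

```python
import torch
import torch.nn as nn

class RawECGClassifier(nn.Module):
    """Toy 1D CNN that reads the entire ECG trace rather than engineered features."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, padding=3), nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),          # summarize the whole signal
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, x):                      # x: (batch, 1, n_samples)
        return self.head(self.features(x).squeeze(-1))

model = RawECGClassifier()
fake_ecg = torch.randn(8, 1, 5000)             # 8 simulated 10-second leads at 500 Hz
print(model(fake_ecg).shape)                   # torch.Size([8, 2])
```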

  • I want to flag this parallel— self-driving cars and the practice of medicine with AI— as one of the most important comparisons in the book. While Level 4 for cars may be achievable under ideal environmental and traffic conditions, it is unlikely that medicine will ever get beyond Level 3 machine autonomy. Certain tasks might be achieved by AI, like accurately diagnosing a skin lesion or ear infection via an algorithm. But, for medicine as a whole, we will never tolerate lack of oversight by human doctors and clinicians across all conditions, all the time. Level 2— partial automation, like cruise control and lane keeping for drivers— will be of great assistance for both doctors and patients in the future. Having humans serve as backup for algorithmic diagnosis and recommendations for treatment represents conditional automation, and over time this Level 3 autonomy for some people with certain conditions will be achievable.


5. Deep Liabilities

↑ Contents

  • When I asked Fei-Fei Li in 2018 whether anything had changed or improved, she said, “Not at all.”

  • Clearly this book is from around 2018, and a lot has changed since then; as of 2025, AI is far more reliable than it was when the book was written.

  • BLACK BOXES

We already accept black boxes in medicine. For example, electroconvulsive therapy is highly effective for severe depression, but we have no idea how it works. Likewise, there are many drugs that seem to work even though no one can explain how. As patients we willingly accept this human type of black box, so long as we feel better or have good outcomes. Should we do the same for AI algorithms?

  • Any time a machine renders a decision in medicine, that decision should ideally be clearly defined and explainable. Moreover, extensive simulations are required to probe vulnerabilities of algorithms for hacking or dysfunction. Transparency about the extent of and results from simulation testing will be important, too, for acceptance by the medical community.
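
One widely used way to make a model's behavior at least partly inspectable is permutation importance: shuffle each input in turn and watch how much held-out accuracy degrades. This is my illustration (assuming scikit-learn), one technique among many, not something the book prescribes.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature in turn and measure the drop in held-out accuracy.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
top = result.importances_mean.argsort()[::-1][:5]
for i in top:
    print(f"{X.columns[i]:<25} importance {result.importances_mean[i]:.3f}")
```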

  • It’s a bit funny that AI experts systematically propose using AI to fix all of its liabilities, not unlike the surgeons who say, “When in doubt, cut it out.”

  • BIAS AND INEQUITIES

  • THE BLURRING OF TRUTH

  • PRIVACY AND HACKING

  • Perhaps more encouraging is a concept known as differential privacy, which uses a family of machine learning algorithms called Private Aggregation of Teacher Ensembles to preserve the identity of each individual by not ingesting the specific medical history. 57 Yet this use of limited data could bias models to certain subgroups, highlighting the interactions of privacy and bias.
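
The core of PATE can be sketched in a few lines (my simplified illustration, omitting the privacy accounting of the real method): many "teacher" models, each trained on a disjoint slice of the private records, vote on a label, and noise added to the vote counts keeps any single patient's data from determining the released answer.

```python
import numpy as np

rng = np.random.default_rng(0)

def noisy_aggregate(teacher_votes, n_classes, epsilon=1.0):
    """PATE-style label release: add Laplace noise to the per-class vote counts.

    teacher_votes: one class prediction per teacher, each teacher having been
    trained on a disjoint slice of the private data. Larger epsilon = less noise.
    """
    counts = np.bincount(teacher_votes, minlength=n_classes).astype(float)
    counts += rng.laplace(scale=1.0 / epsilon, size=n_classes)
    return int(np.argmax(counts))

# 50 hypothetical teachers vote on the label of one unlabeled record (classes 0/1/2).
votes = rng.choice([0, 1, 2], size=50, p=[0.1, 0.7, 0.2])
print(noisy_aggregate(votes, n_classes=3, epsilon=0.5))  # usually 1; noise occasionally flips it
```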

  • ETHICS AND PUBLIC POLICY

  • JOBS

  • EXISTENTIAL THREAT


6. Doctors and Patterns

↑ Contents

  • This meeting between a patient and a radiologist was an anomaly, but it may be an important indicator of the future.

  • Geoffrey Hinton proclaimed, “I think that if you work as a radiologist, you are like Wile E. Coyote in the cartoon. You’re already over the edge of the cliff, but you haven’t yet looked down. There’s no ground underneath. People should stop training radiologists now. It’s just completely obvious that in five years deep learning is going to do better than radiologists.”

  • Digital pathology has helped improve the workflow efficiency and accuracy of pathology slide diagnosis. In particular, the digital technique of whole slide imaging (WSI) enables a physician to view an entire tissue sample on a slide, eliminating the need to have a microscope camera attachment.
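
A digitized whole slide is far too large to feed to a model in one piece, so a common workflow, sketched below with made-up sizes and a trivial stand-in scorer (my illustration, not from the book), is to tile the image, score each tile, and flag the most suspicious regions for the pathologist.

```python
import numpy as np

def iter_tiles(slide, tile=512):
    """Yield (row, col, patch) tiles covering a slide image array of shape (H, W, 3)."""
    h, w, _ = slide.shape
    for r in range(0, h - tile + 1, tile):
        for c in range(0, w - tile + 1, tile):
            yield r, c, slide[r:r + tile, c:c + tile]

def score_tile(patch):
    """Stand-in for a trained tumor classifier; here just normalized mean intensity."""
    return float(patch.mean()) / 255.0

# A fake 4096 x 4096 'whole slide'; real WSIs are orders of magnitude larger.
slide = np.random.randint(0, 256, size=(4096, 4096, 3), dtype=np.uint8)
scores = [(r, c, score_tile(p)) for r, c, p in iter_tiles(slide)]
top_row, top_col, top_score = max(scores, key=lambda t: t[2])
print(f"{len(scores)} tiles scored; highest score {top_score:.3f} at ({top_row}, {top_col})")
```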

  • Establishing direct patient contact to review the results could be transformative both for pathologists and for patients and their physicians.

  • The remarkable AI parallels for radiology and pathology led Saurabh Jha and me to write an essay in JAMA about the “information specialists.” 59 Recognizing that many tasks for both specialties will be handled by AI and the fundamental likeness of these specialists, we proposed a unified discipline. This could be considered a natural fusion that could be achieved by a joint training program and accreditation that emphasizes AI, deep learning, data science, and Bayesian logic rather than pattern recognition. The board-certified information specialist would become an invaluable player on the healthcare team.


7. Clinicians Without Patterns

↑ Contents

  • This modern e-ritual has contributed to the peak incidence of burnout and depression seen among physicians.

  • Doctors generally seem not to like spending much time on the computer.

  • Artificial intelligence already forms the backbone of tools known as clinical decision support systems (CDSS).

  • That’s where deep learning about an individual’s comprehensive, seamlessly updated information could play an important role in telling doctors what they want to know. Instead of CDSS, I’d call it AIMS, for augmented individualized medical support.
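
To make the CDSS idea concrete, here is a deliberately simple, hypothetical rule-based medication check of the kind such systems run against a patient's chart (my illustration; real systems are far richer, and the AIMS vision above would add continuously updated, individualized models on top).

```python
from dataclasses import dataclass

@dataclass
class Patient:
    egfr: float          # kidney function, mL/min/1.73 m^2
    potassium: float     # mmol/L
    active_meds: set

def check_new_order(patient: Patient, new_drug: str) -> list:
    """Return advisory messages for a proposed prescription (illustrative rules only)."""
    alerts = []
    if new_drug == "spironolactone":
        if patient.potassium > 5.0:
            alerts.append("Potassium already elevated; risk of hyperkalemia.")
        if "lisinopril" in patient.active_meds:
            alerts.append("Combined with an ACE inhibitor; monitor potassium closely.")
    if new_drug == "metformin" and patient.egfr < 30:
        alerts.append("eGFR below 30; metformin generally not recommended.")
    return alerts

p = Patient(egfr=55, potassium=5.3, active_meds={"lisinopril"})
print(check_new_order(p, "spironolactone"))
```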

  • EYE DOCTORS

  • HEART DOCTORS

  • interventional

An interventional doctor is a medical specialist who uses image-guided, minimally invasive procedures to diagnose and treat diseases, often as an alternative to traditional surgery.

  • CANCER DOCTORS

  • SURGEONS

  • AI could ultimately reduce the need for nurses, both in hospitals and in outpatient clinics and medical offices. Using AI algorithms to process data from the remote monitoring of patients at home will mean that there is a dramatically reduced role for hospitals to simply observe patients, either to collect data or to see whether symptoms get worse or reappear. That, in itself, has the potential for a major reduction in the hospital workforce. Increasing reliance on telemedicine rather than physical visits will have a similar effect.


8. Mental Health

↑ Contents

  • the majority of attendees polled said they’d be happy to, or even prefer to, share their secrets with a machine rather than a doctor.

  • In the face of a global mental health crisis with increasing suicides and an enormous burden of depression and untreated psychiatric illness, AI could help provide a remedy.


9. AI and Health Systems

↑ Contents

  • The critical assessment of deployment of AI in health systems deserves mention; it will require user research, well-designed systems, and thoughtful delivery of decisions based on models that include risk and benefit.

  • Mercy Hospital’s Virtual Care Center in St. Louis gives a glimpse of the future. 46 There are nurses and doctors; they’re talking to patients, looking at monitors with graphs of all the data from each patient and responding to alarms. But there are no beds. This is the first virtual hospital in the United States, opened in 2015 at a cost of $300 million to build. The patients may be in intensive care units or in their own bedroom, under simple, careful observation or intense scrutiny, but they’re all monitored remotely. Even if a patient isn’t having any symptoms, the AI surveillance algorithms can pick up a warning and alert the clinician. Their use of high-tech algorithms to remotely detect possible sepsis or heart decompensation, in real time, before such conditions are diagnosed, is alluring. Although being observed from a distance may sound cold, in practice it hasn’t been; a concept of engendering “touchless warmth” has taken hold. Nurses at the Virtual Care Center have regular, individualized interactions with many patients over extended periods, and patients say about the nurses that they feel like they “have fifty grandparents now.”
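
The surveillance idea can be pictured as an early-warning score recomputed over the streamed vitals; the sketch below is my own, loosely in the spirit of published scores such as NEWS but with simplified, illustrative cutoffs, and is not Mercy's actual algorithm.

```python
def warning_score(heart_rate, resp_rate, systolic_bp, temp_c, spo2):
    """Crude early-warning score: each deranged vital sign adds points (illustrative cutoffs)."""
    score = 0
    score += 2 if heart_rate > 110 or heart_rate < 50 else 0
    score += 2 if resp_rate > 24 or resp_rate < 10 else 0
    score += 2 if systolic_bp < 90 else 0
    score += 1 if temp_c > 38.5 or temp_c < 35.5 else 0
    score += 2 if spo2 < 92 else 0
    return score

# One remotely monitored patient's readings over a few hours.
readings = [
    dict(heart_rate=88,  resp_rate=16, systolic_bp=122, temp_c=37.0, spo2=97),
    dict(heart_rate=104, resp_rate=22, systolic_bp=104, temp_c=38.6, spo2=95),
    dict(heart_rate=118, resp_rate=26, systolic_bp=86,  temp_c=38.8, spo2=91),
]
for hour, r in enumerate(readings):
    s = warning_score(**r)
    print(f"hour {hour}: score {s}" + ("  -> alert the virtual-care team" if s >= 4 else ""))
```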

  • Going global is the best way to achieve medical AI’s greatest potential— a planetary health knowledge resource, representing the ultimate learning health system.


10. Deep Discovery

↑ Contents

  • In this case, it wasn’t an image or document retrieval but an odor. It turns out that the fly’s olfactory system uses three nontraditional computational strategies, whereby learning from the tagging of one odor facilitates recognition of a similar odor. Who would have guessed that nearest-neighbor computing searches would have common threads with the fly’s smell algorithm?
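
The common thread can be sketched directly: like the fly's olfactory circuit, project the input into a much higher-dimensional space with a sparse random matrix, then keep only the strongest few activations as a "tag"; similar inputs tend to share tags, which is what a locality-sensitive nearest-neighbor search needs. This is my simplified illustration of that idea, not the researchers' code.

```python
import numpy as np

rng = np.random.default_rng(1)

def fly_hash(x, projection, k=16):
    """Fly-inspired locality-sensitive hash: expand, then winner-take-all sparsify."""
    activations = projection @ x                      # project up (here 50 -> 2000 dims)
    tag = np.zeros_like(activations, dtype=bool)
    tag[np.argsort(activations)[-k:]] = True          # keep only the top-k 'neurons'
    return tag

d_in, d_out = 50, 2000
projection = (rng.random((d_out, d_in)) < 0.1).astype(float)   # sparse binary projection

odor_a = rng.random(d_in)
odor_a_noisy = odor_a + 0.05 * rng.random(d_in)       # a similar odor
odor_b = rng.random(d_in)                             # an unrelated odor

tag_a, tag_a2, tag_b = (fly_hash(v, projection) for v in (odor_a, odor_a_noisy, odor_b))
print("similar pair overlap:   ", (tag_a & tag_a2).sum())  # large overlap expected
print("dissimilar pair overlap:", (tag_a & tag_b).sum())   # near-chance overlap
```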

  • influenced and unraveled by AI. Christof Koch, who heads up the Allen Institute, provided the endeavor world-historical contextualization: “While the 20th century was the century of physics— think the atomic bomb, the laser and the transistor— this will be the century of the brain. In particular, it will be the century of the human brain— the most complex piece of highly excitable matter in the known universe.”

  • Both the Perceptron, invented by Frank Rosenblatt, and its heir, the artificial neural network, developed by David Rumelhart, Geoffrey Hinton, and colleagues, were inspired by how biological neurons and networks of them, like the human brain, work. The architecture and functionality of many recent deep learning systems have been inspired by neuroscience.

  • Another important difference between computers and humans is that machines don’t generally know how to update their memories and overwrite information that isn’t useful. The approach our brains take is called Hebbian learning, following Donald Hebb’s maxim that “cells that fire together wire together.” 58 The principle explains the fact that if we use knowledge frequently, it doesn’t get erased. It works thanks to the phenomenon of synaptic neuroplasticity: a brain’s circuit of repetitive, synchronized firing makes that behavior stronger and harder to overwrite.
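
Hebb's maxim has a compact mathematical form. The sketch below uses Oja's stabilized variant of the Hebbian update (my choice, so the weights don't grow without bound): correlated pre- and postsynaptic activity strengthens the connection, and the learned weights end up pointing along the direction in which the inputs fire together.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two correlated 'presynaptic' inputs that tend to fire together.
cov = np.array([[1.0, 0.9],
                [0.9, 1.0]])
X = rng.multivariate_normal(mean=[0, 0], cov=cov, size=5000)

w = 0.1 * rng.normal(size=2)          # synaptic weights
eta = 0.01                            # learning rate
for x in X:
    y = w @ x                         # postsynaptic activity
    w += eta * y * (x - y * w)        # Oja's rule: Hebbian term y*x plus a decay that bounds |w|

print(np.round(w, 2))                 # ends up near +/-[0.71, 0.71], the direction of co-firing
```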

  • I don’t believe we will ever progress to “ghost” scientists, replaced by AI agents, but off-loading many of the tasks to machines, facilitating scientists doing science, will, in itself, catalyze the field. It’s the same theme as with doctors, acknowledging we can develop the software that writes software— which in turn empowers both humans and machines to a higher order of productivity, a powerful synergy for advancing biomedicine.


11. Deep Diet

↑ Contents

  • This indeed is the biggest problem facing nutrition guidelines— the idea that there is simply one diet that all human beings should follow. The idea is both biologically and physiologically implausible, contradicting our uniqueness, the remarkable heterogeneity and individuality of our metabolism, microbiome, environment, to name a few dimensions.

  • We now know, from the seminal work of researchers at Israel’s Weizmann Institute of Science, that each individual reacts differently to the same foods and to precisely the same amount of a food consumed.

  • The field of nutrigenomics was supposed to reveal how our unique DNA interacts with specific foods. To date, however, there is very little to show for the idea that genomic variations can guide us to an individualized diet— the data are somewhere between nonexistent and very slim.

  • for people with diabetes taking insulin, carbohydrate counting is the main way to calculate dosing.
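
The arithmetic behind carbohydrate counting is straightforward; the parameters below (insulin-to-carb ratio and correction factor) are hypothetical and purely illustrative, since in reality they are individualized, which is exactly the chapter's point.

```python
def mealtime_bolus(carbs_g, glucose_mgdl, target_mgdl=120,
                   carb_ratio=10.0, correction_factor=50.0):
    """Illustrative carb-counting arithmetic; the ratios here are hypothetical placeholders.

    carb_ratio: grams of carbohydrate covered by 1 unit of insulin.
    correction_factor: mg/dL of glucose lowered by 1 unit of insulin.
    """
    meal_dose = carbs_g / carb_ratio
    correction = max(glucose_mgdl - target_mgdl, 0) / correction_factor
    return round(meal_dose + correction, 1)

# 60 g of carbohydrate with a pre-meal glucose of 180 mg/dL -> 6.0 + 1.2 = 7.2 units.
print(mealtime_bolus(carbs_g=60, glucose_mgdl=180))
```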

  • Importantly, the study didn’t just highlight that there were highly variable individual responses to the same food— it was able to explain it. The food constituents weren’t the driver for glucose response. The bacterial species in the gut microbiome proved to be the key determinant of each person’s glucose response to eating.
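
As I understand it, the Weizmann group trained gradient-boosted trees over meal, clinical, and microbiome features to predict each person's glucose response; the sketch below is my stand-in with synthetic data (assuming scikit-learn), meant only to show the shape of such a model, including a meal-by-microbiome interaction.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 800

# Synthetic stand-ins: meal composition, a clinical trait, and microbiome abundances.
carbs = rng.uniform(10, 120, n)
fiber = rng.uniform(0, 15, n)
bmi = rng.uniform(19, 38, n)
microbiome = rng.dirichlet(np.ones(20), size=n)    # 20 fake taxa, relative abundances

# The synthetic 'glucose response' depends on a carbs-by-taxon interaction, mimicking
# the finding that the microbiome modulates the response to the very same meal.
response = 30 + 0.8 * carbs * (0.5 + 2.0 * microbiome[:, 3]) - 1.5 * fiber + rng.normal(0, 8, n)

X = np.column_stack([carbs, fiber, bmi, microbiome])
model = GradientBoostingRegressor(random_state=0)
print(cross_val_score(model, X, response, cv=5, scoring="r2").mean())  # the interaction is learnable
```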

  • The extensive body of work done by Segal and Elinav is summarized in their book The Personalized Diet. Cumulatively, they’ve studied more than 2,000 people and summed up their revelations about nutrition science as “we realized we had stumbled across a shocking realization: Everything was personal.” 30 To quote a key conclusion in their book: “Because our data set was so large and our analysis so comprehensive, these results have an enormous impact— they show more conclusively than has ever been shown before that a generic, universal approach to nutrition simply cannot work.” That’s the kind of bold statement you wouldn’t find in a peer-reviewed journal article, but the kind of strong assertion you might find in a book.


12. The Virtual Medical Assistant

↑ Contents

  • We cannot actualize the full potential of deep medicine unless we have something like a virtual medical assistant helping us out.

  • Getting all of a person’s data amalgamated is the critical first step. It needs to be regarded as a living resource that has to be nurtured, fed with all the new and relevant data, be they from a sensor, a life stress event, a change in career path, results of a gut microbiome test, birth of a child, and on and on. All that data has to be constantly assembled and analyzed, seamlessly, without being obtrusive to the individual. That means there should not be any manual logging on and off or active effort required.
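
One way to picture that "living resource" is a single chronological stream into which every new observation is merged as it arrives, whether a sensor reading, a lab value, or a life event; the event types and values below are made up for illustration.

```python
import heapq
from datetime import datetime, timedelta

def merge_streams(*streams):
    """Merge already-time-sorted event streams into one chronological health record."""
    yield from heapq.merge(*streams, key=lambda event: event[0])

t0 = datetime(2025, 9, 20, 8, 0)
wearable = [(t0 + timedelta(minutes=10 * i), "heart_rate", 60 + i) for i in range(4)]
labs     = [(t0 + timedelta(minutes=25), "potassium_mmol_L", 4.1)]
life     = [(t0 + timedelta(minutes=5),  "event", "started new job")]

for timestamp, kind, value in merge_streams(wearable, labs, life):
    print(timestamp.time(), kind, value)
```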

  • owning your medical data should be a civil right.

  • Much of the virtual medical assistant’s ultimate success will be predicated on changing human behavior because so much of the burden of disease is related to poor lifestyle. As Mitesh Patel and colleagues asserted, “The final common pathway for the application of nearly every advance in medicine is human behavior.”

  • We’ve learned so much about behavioral science in recent years, but we still know relatively little about making people’s lifestyle healthier.

  • Perhaps some combination of AI nudges, individualized data, and incentives will ultimately surmount this formidable challenge.


13. Deep Empathy

↑ Contents

  • In 1895, William Osler wrote, “A case cannot be satisfactorily examined in less than half an hour. A sick man likes to have plenty of time spent over him, and he gets no satisfaction in a hurried ten or twelve minute examination.” 7 That’s true 120 years later. And it will always be true.

  • David Meltzer, an internist at the University of Chicago, has studied the relationship of time with doctors to key related factors like continuity of care, where the doctor who sees you at the clinic also sees you if you need care in a hospital. He reports that spending more time with patients reduced hospitalizations by 20 percent, saving millions of dollars as well as helping to avoid the risks of nosocomial infections and other hospital mishaps. That magnitude of benefit has subsequently been replicated by Kaiser Permanente and Vanderbilt University.

  • Empathy is crucial to our ability to witness others who are suffering. 23 Ironically, as doctors, we are trained to avoid the s-word because it isn’t actionable. The American Medical Association Manual of Style says that we should “avoid describing persons as victims or with other emotional terms that suggest helplessness (afflicted with, suffering from, stricken with, maimed).”

  • there is hope— indeed, anatomical and empirical evidence— that empathy and soft skills can be fostered and that we could take on intensive initiatives to promote empathy in all clinicians. The healers, after all, need healing, too. It shouldn’t take the escalating incidence of depression and suicide to move on this potential.

  • Rather than listening, doctors interrupt. Indeed, it only takes an average of eighteen seconds from the start of an encounter before doctors interrupt their patients. Eighteen seconds. 31 This desire to cut to the chase instead of giving the patient a chance to tell her narrative certainly matches up with the extreme time pressure that doctors and clinicians are facing.

  • It was the father of modern medicine, William Osler, who said, “Just listen to your patient; he is telling you the diagnosis.” Likewise, my friend Jerome Groopman wrote a whole book— How Doctors Think— on the adverse impact of not listening, not giving patients a voice.

  • David Epstein and Malcolm Gladwell wrote an editorial to accompany the paper, which they called “The Temin Effect,” after the Nobel laureate Howard Temin, who not only discovered reverse transcriptase but also read deeply about philosophy and literature. 44 Their conclusion: “Taking would-be physicians out of the hospital and into a museum— taking them out of their own world and into a different one— made them better physicians.”

  • Nearly a century ago Peabody wrote about this: “The significance of the intimate personal relationship between physician and patient cannot be too strongly emphasized, for in an extraordinarily large number of cases both diagnosis and treatment are directly dependent on it.”

  • if there were a system available all of the time that had all the information one needed to memorize to apply for medical school, “maybe there’s an argument to be made that you don’t have to memorize it.”

  • Knowledge, about medicine and individual patients, can and will be outsourced to machine algorithms. What will define and differentiate doctors from their machine apprentices is being human, developing the relationship, witnessing and alleviating suffering.

  • Machine medicine need not be our future. We can choose a technological solution to the profound human disconnection that exists today in healthcare; a more humane medicine, enabled by machine support, can be the path forward.

  • The triad of deep phenotyping— knowing more about the person’s layers of medical data than was ever previously attainable or even conceived— deep learning and deep empathy can be a major remedy to the economic crisis in healthcare by promoting bespoke prevention and therapies, superseding many decades of promiscuous and wasteful use of medical resources. But to me, those are the secondary gains of deep medicine. It’s our chance, perhaps the ultimate one, to bring back real medicine: Presence. Empathy. Trust. Caring. Being Human.

  • If you’ve ever experienced deep pain, you know how lonely and isolating it is, how no one can really know what you are feeling, the anguish, the sense of utter despair. You can be comforted by a loved one, a friend or relative, and that certainly helps. But it’s hard to beat the boost from a doctor or clinician you trust and who can bolster your confidence that it will pass, that he or she will be with you no matter what. That you’ll be okay. That’s the human caring we desperately seek when we’re sick. That’s what AI can help restore. We may never have another shot like this one. Let’s take it.