Every year, millions of women are screened for breast cancer via mammogram. More than 100,000 of them will be told that their worst fear has come true and will face confusing and frightening treatments, scans, and procedures, only to learn that the dreaded diagnosis was wrong. This phenomenon is known as a false positive, and it is one of the many diagnostic errors that AI may soon help eradicate.
Artificial intelligence, or AI, is poised to radically change healthcare as we know it. Over the past decade, healthcare as an institution has finally begun to inhabit the 21st century: the days of paper charts, 48-hour shifts, and individual medical bravado are fading. Doctors are conforming more closely to clinical guidelines, and patient outcomes are becoming standardized, which makes medicine ripe for the introduction of AI. Who would want a fallible doctor, someone sleep-deprived, stressed, and behind the cutting edge of medicine, when they could rely on the cool logic of AI for a diagnosis? That is the proposed future of healthcare: a singularity looming to take away doctors’ autonomy.
This AI overhaul of healthcare is not without its detractors. This March, Stanford’s School of Medicine weighed in with a perspective piece in the New England Journal of Medicine. Lead author Danton Char, M.D., cautioned that “Ethical guidelines can be created to catch up with the age of machine learning and artificial intelligence that is already upon us, [but] remaining ignorant about the construction of machine-learning systems or allowing them to be constructed as black boxes could lead to ethically problematic outcomes.”
Some of the ethical concerns about AI stem from its very nature: training data can contain bias, which is then reflected in the clinical recommendations doctors receive. Algorithms can also be designed to skew results, for example by prioritizing cost minimization over best practices. Another major concern is that for AI to work effectively, it must have access to as much patient data as possible. This introduces a third party into the confidential relationship between doctor and patient, which could lead to privacy breaches, or even to misdiagnoses, since patients may lie if they know their information is semi-public. Navigating these ethical quandaries will take time and legislation, but for now the pace of AI development is far outstripping the pace at which detractors’ multiplying concerns can be addressed.
Three types of AI are currently being used or developed for healthcare. The first, algorithmic, is also the simplest: the AI suggests evidence-based approaches programmed by researchers and clinicians, based on clinical trials or updated best practices. More complex healthcare AIs use machine learning, which relies on neural networks: computer systems loosely modeled on the human brain that apply probabilistic analysis at multiple levels to simulate, and expand upon, the way the human mind processes information. The most complex healthcare AIs use deep learning, in which software learns to recognize patterns across distinct layers; each layer operates both independently and in concert with the others to yield recommendations for physicians.
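To make the idea of layers concrete, here is a minimal sketch of a two-layer neural network in Python. Everything in it is illustrative: the patient features, layer sizes, and random weights are stand-ins for parameters that a real clinical system would learn from large volumes of labeled data.

```python
import numpy as np

# Illustrative only: a tiny two-layer feed-forward network.
# The feature vector and weights below are invented for this
# sketch; a real system would learn its weights from data.

def sigmoid(x):
    """Squash a value into (0, 1) so the output reads as a probability."""
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Input: a vector of four hypothetical patient features (e.g., lab values).
features = rng.normal(size=4)

# Layer 1: combines the raw features into eight intermediate patterns.
w1 = rng.normal(size=(4, 8))
hidden = np.tanh(features @ w1)

# Layer 2: combines those patterns into a single recommendation score.
w2 = rng.normal(size=(8, 1))
score = sigmoid(hidden @ w2)

print(f"Probability the model flags this case for review: {score[0]:.2f}")
```

Deep learning simply stacks many more such layers, so that early layers detect simple patterns and later layers combine them into increasingly abstract ones.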
Human fallibility is the target of many current applications of AI, which strive to reduce medical errors. According to a September 2015 report by the Institute of Medicine at the National Academies of Sciences, Engineering, and Medicine, “diagnostic errors contribute to approximately 10 percent of patient deaths” and account for 6 to 17 percent of hospital complications. Very few of these errors, however, can be directly linked to poor physician performance. Instead, most can be traced to inefficient integration of health information technologies; gaps in communication among clinicians, patients, and their families; or, worst of all, healthcare systems not adequately designed for efficient, cutting-edge diagnosis.
One successful instance of AI in diagnosis comes from Stanford University, where researchers used deep learning to train an algorithm to diagnose melanoma. The training set consisted of 129,450 clinical images of skin lesions representing 2,032 different diseases. Normally, skin cancers are diagnosed by inspecting the skin with a handheld microscope called a dermatoscope; only if the dermatologist finds the lesion suspicious or inconclusive is a biopsy ordered to confirm the diagnosis. Twenty-one board-certified dermatologists viewed 370 images alongside Stanford’s AI and were asked, for each image, whether they would biopsy the lesion, treat it, or reassure the patient. The algorithm recommended the best course of action at the same rate as the dermatologists. This is no mean feat, considering that dermatologists must train for 16 years after high school to reach the baseline acuity required to responsibly diagnose patients, let alone the expertise of a board-certified attending dermatologist.
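For readers curious what such training looks like in practice, below is a sketch of a standard transfer-learning recipe in Python with TensorFlow/Keras. It is not the Stanford team’s actual pipeline: the dataset path, the three class labels, and the training settings are placeholders invented for illustration, although the published study did build on Google’s pretrained Inception v3 image classifier.

```python
# A minimal transfer-learning sketch, not a reproduction of the study.
# "lesion_images/train" and the three classes are hypothetical.
import tensorflow as tf

IMG_SIZE = (299, 299)  # the input resolution Inception v3 expects

# Hypothetical directory of labeled lesion photos, one folder per class
# (e.g., biopsy/, treat/, reassure/).
train_ds = tf.keras.utils.image_dataset_from_directory(
    "lesion_images/train", image_size=IMG_SIZE, batch_size=32)

# Start from Inception v3 pretrained on ImageNet, without its classifier head.
base = tf.keras.applications.InceptionV3(
    include_top=False, weights="imagenet", pooling="avg")
base.trainable = False  # freeze the general-purpose visual features

# New head: map the pooled features onto the three clinical choices.
model = tf.keras.Sequential([
    tf.keras.layers.Rescaling(1.0 / 127.5, offset=-1),  # scale pixels to [-1, 1]
    base,
    tf.keras.layers.Dense(3, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
model.fit(train_ds, epochs=5)
```

Freezing the pretrained backbone and training only the small new head is one common choice; a real project might later unfreeze and fine-tune the backbone as well once the head has converged.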
Though this case may seem a classic example of the approaching singularity, we are nowhere near a future in which doctors are rendered obsolete. In fact, AI’s tentative use in healthcare has so far served to enhance human intuition, not replace it. In short, while AI promises a brave new world in which medicine is free from the idiosyncrasies of doctors, it also poses many ethical questions. AI in healthcare can help eliminate errors and improve patient outcomes, but the bond between doctor and patient must remain essentially human, lest we forfeit the humanity of one of the oldest and most compassionate professions.