Will We Be Visiting RoboDocs in the Future?

(Vladyslav Otsiatsia/iStock)
Jonathan Zhou
8/30/2015

While diagnostic software is still in its infancy, it is making steady progress that, if continued apace, could allow machines to overtake humans in our lifetime. 

Since the second half of the 20th century, computers have progressively replaced humans in one profession after another: automated systems made telephone operators extinct, ATMs have decimated the ranks of bank tellers, and now self-driving technology is threatening to put cabbies out of business.

In medicine, a field elevated far above the menial tasks usually considered prime candidates for automation, software is already being used to help physicians make diagnoses. Now, a new crop of companies is trying to build intelligent machines that can outdo doctors in this regard, and maybe even one day replace them outright.

The medical community has scarcely shown any anxiety at the possibility that its rarefied profession could be replaced by machines, and perhaps with good reason: computer scientists have a track record of making overly optimistic promises.

In the 1950s, artificial general intelligence, which would match humans at any task, was supposed to be just around the corner. Now, it’s still talked about as just 30 years away. 

But progress in machine intelligence has a way of creeping up. In 1962, IBM created a program that could beat a master checker player. Philosopher Hubert Dreyfus predicted, however, that no chess program would ever defeat a gifted 10-year-old. In 1967, Dreyfus accepted a challenge from MIT computer scientists to square off against their chess program—the philosopher lost. 

As late as 1979, cognitive scientist Douglas Hofstadter predicted in his book “Gödel, Escher, Bach” that chess programs would never be able to beat the best human players, out of the belief that “profoundly insightful chess-playing draws intrinsically on central facets of the human condition.” Two decades later, IBM’s Deep Blue defeated the reigning world champion, Garry Kasparov.

Garry Kasparov, left, is seen before his next move against Deep Blue, IBM's chess playing computer Sunday, May 4, 1997, in New York. (AP Photo/Adam Nadel)

Today, while diagnostic software is still in its infancy, it is making steady progress that, if continued apace, could allow machines to overtake humans in our lifetime.

A Double Check 

Tools that help doctors make diagnoses already exist and are used by a growing number of physicians.

For example, Isabel Symptom Checker generates a list of possible ailments based on symptoms the doctor inputs. The idea is to widen the list of potential diagnoses—rather than narrow it—to present options the doctor might otherwise overlook, with the goal of preventing misdiagnoses. 

The tool was launched in 1999 by Jason Maude, an investment banker who turned to medicine after his daughter Isabel almost died from a misdiagnosis. True to its namesake, the company originally focused on pediatrics, but its growing database now covers over 6,000 diseases, and can be accessed by around 30,000 doctors. 

“It’s a really sophisticated and direct search engine,” said Isabel CEO Don Bauman. “We have built out a disease database that allows us to receive natural language inputs, search through the database and provide a list of diagnoses that match those inputs.” 
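
To make that concrete, here is a minimal sketch, in Python, of how such a matcher might rank candidate diagnoses from free-text symptom input. The disease entries and the overlap scoring are invented for illustration; Isabel’s actual database and engine are proprietary and far more sophisticated.

```python
# Toy symptom-to-diagnosis matcher. The disease "database" below is
# invented for illustration only.
DISEASES = {
    "influenza": {"fever", "cough", "fatigue", "body aches"},
    "meningitis": {"fever", "stiff neck", "headache", "light sensitivity"},
    "appendicitis": {"abdominal pain", "nausea", "fever"},
}

def rank_diagnoses(symptom_text):
    """Rank diseases by the share of their known symptoms the input matches."""
    entered = {s.strip().lower() for s in symptom_text.split(",")}
    scored = []
    for disease, known in DISEASES.items():
        overlap = len(entered & known)
        if overlap:
            scored.append((overlap / len(known), disease))
    # Best matches first; weaker matches still appear, widening the
    # list of possibilities rather than narrowing it
    return [d for _, d in sorted(scored, reverse=True)]

print(rank_diagnoses("fever, stiff neck, headache"))
# ['meningitis', 'appendicitis', 'influenza']
```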

The medical market also offers image-based support tools. VisualDx helps physicians diagnose rare diseases by matching up visual symptoms in a picture database. The doctor inputs a list of symptoms and the app yields an array of pictures, each with a case history attached. The tool is used by more than 1,500 health care institutions around the world. 

Isabel and VisualDx were created to assist doctors, but over the past few years a new generation of health care tech companies has appeared that goes one step further, into the world of “deep learning.”

Butterfly Network, a 3-year-old startup that raised $100 million in November, is developing an imaging device for MRIs and ultrasounds that’s small enough to fit in your hand. Beyond shrinking the hardware, Butterfly wants to build deep learning software into the device so that it can process the images it captures and produce diagnoses for birth defects such as a cleft lip, or even Down syndrome.

Deep Learning

Deep learning refers to an innovation in computer processing that moves machine learning closer to the goal of true artificial intelligence (AI).

Deep learning programs use artificial neural networks that ape the way information is transmitted in the human brain. The problem-solving code runs on many different tracks simultaneously, emulating the interconnectedness of the way knowledge is organized in the brain. The programs can “learn” to recognize patterns by training on data.
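
As a rough illustration of that training loop, here is a minimal sketch, assuming only NumPy, that adjusts the connection weights of a tiny two-layer network until it reproduces a pattern (the XOR function, which no single-layer program can capture). Real deep learning systems stack many more layers and train on millions of examples, but the principle is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Four training examples of the XOR pattern: the output is 1 when
# exactly one of the two inputs is 1
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 "neurons" between the 2 inputs and 1 output
W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(20_000):
    # Forward pass: signals flow through the network
    hidden = sigmoid(X @ W1 + b1)
    output = sigmoid(hidden @ W2 + b2)
    # Backward pass: nudge every weight to shrink the prediction error
    err_out = (output - y) * output * (1 - output)
    err_hid = (err_out @ W2.T) * hidden * (1 - hidden)
    W2 -= 0.5 * hidden.T @ err_out
    b2 -= 0.5 * err_out.sum(axis=0)
    W1 -= 0.5 * X.T @ err_hid
    b1 -= 0.5 * err_hid.sum(axis=0)

print(output.round(2).ravel())  # converges toward [0, 1, 1, 0]
```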

For example, in 2012, when Google wanted its experimental computer to learn how to identify pictures of cats, it directed the program to train itself by scanning 10 million YouTube thumbnails. 

By 2014, the program was sophisticated enough to generate accurate photo captions. One image was labeled by a human as “A group of men playing Frisbee in the park” and by the computer as “A group of young people playing a game of Frisbee.”

The same methods have been applied to detecting diseases whose diagnosis relies heavily on images.

In a 2013 competition, 14 computer programs were tasked with detecting leading indicators of breast tumors. The winning program, designed by the Swiss-based Dalle Molle Institute, was able to diagnose on par with the consensus of expert pathologists.
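
The workhorse for this kind of image-based detection is a convolutional neural network that scores small patches of a scan or slide. The sketch below, assuming PyTorch, shows the general shape of such a classifier; the layer sizes and patch dimensions are placeholders for illustration, not the Dalle Molle team’s actual model.

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Scores small image patches as normal tissue vs. tumor indicator."""
    def __init__(self):
        super().__init__()
        # Two rounds of convolution + pooling extract visual features
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=5), nn.ReLU(), nn.MaxPool2d(2),
        )
        # A final linear layer turns the features into two class scores
        self.classifier = nn.Linear(32 * 13 * 13, 2)  # for 64x64 inputs

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = PatchClassifier()
patch = torch.randn(1, 3, 64, 64)   # stand-in for one 64x64 RGB patch
print(model(patch).softmax(dim=1))  # probabilities for [normal, tumor]
```

Untrained, its output is noise; in practice such a network would first be trained on thousands of expert-labeled patches.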

The dazzling potential heralded by deep learning has led to a burst of investment in the technology.

In early August, IBM spent $700 million to acquire Merge Healthcare, which sits on a trove of 30 billion X-rays, MRIs, and other medical data, all of which can be used to train IBM’s famous AI program, Watson.

Researchers test deep learning algorithms with practice exams for medical licensing. Dan Lambert, CEO of the medical exam prep company BoardVitals, said that in his experience the algorithms can perform close to human level in very narrow fields, like just dermatology or just radiology. But they often lack basic medical common sense, like knowing what a Band-Aid is, and they can’t understand visual data that isn’t pre-processed, like spotting a discoloration on a patient’s arm and asking them if they’ve been hurt.

“There’s still a lot of work to be done. We’re getting closer … it’s much better than just three years ago,” Lambert said.

Lambert thinks that at the current rate of progress, algorithms will be able to replace the diagnostic duties of a general practitioner in just 10 years. 

“I may be looking at too short of a sample, but I think in the last couple of years, we’ve been getting exponentially better,” he said. 

The Analog World  

For diagnostic algorithms to revolutionize medicine, they will not only have to work, but also win hearts and minds in the medical community. Doctors and patients may not be ready to accept being hooked up to a diagnostic computer like a car at the mechanic’s.

“There’s always a disconnect between what the techies are thinking and what the real world practices,” said Dr. Hardeep Singh, program chief at the Michael E. DeBakey VA Medical Center, and an expert in patient safety.

Even if algorithms could substitute for the diagnostic functions of doctors, much of the data collection itself, such as talking to patients and gathering relevant symptoms and personal details, would still have to be performed by humans.

Singh pointed out that for the technology to work, it would have to be applied across the entire health care system, including its workflows and processes. “People often forget the non-technological part,” he said.

The Transition 

Technology companies play up the humanitarian potential of diagnostic algorithms, but have been reticent about other potential impacts on the medical community. If machines can take over diagnoses, arguably the most difficult part of medicine, then might many doctors be rendered superfluous? 

Vinod Khosla, whose eponymous venture capital firm invests in a number of tech companies that apply AI to health care, spelled out that conclusion in late 2012. 

Earlier that year, Khosla wrote a TechCrunch op-ed titled “Do We Need Doctors Or Algorithms?” arguing that soon “we won’t need the average doctor” and that only the top 20 percent of doctors would be needed, at least for another decade or two, to help improve the diagnostic software.

Khosla’s remarks provoked a fierce wave of backlash from doctors, many of whom argued that machines would never be allowed to make diagnostic decisions independently, whether because of liability issues or because they lack the human touch.

Even those sympathetic to diagnostic algorithms said that Khosla’s estimate of the fraction of doctors that would be replaced was too high.

“Khosla’s words exemplify the type of incendiary arrogance that makes people in medicine hate to work with people in tech, particularly those of the hype-loving Silicon Valley variety,” wrote Jae Won Joh, in the top-voted answer in a Quora thread asking what medical professionals think of Khosla’s prediction.  

Khosla has since toned down his rhetoric, but still stands by the initial conjecture.

“It is inevitable that … the majority of physicians’ diagnostic, prescription, and monitoring … will be replaced by smart hardware,” Khosla wrote in a blog post last year. “This is not to say 80 percent of physicians will be replaced, but rather 80 percent of what they currently do might be replaced, leading to new possibilities and functions for the physicians.”