Venture Cash Is Pouring Into AI That Can Diagnose Diseases. Doctors Aren’t Sure They Can Trust It.



Medical imaging AI, which can help diagnose health problems doctors don't always see, is only getting more sophisticated—and more lucrative. Just last month, Tel Aviv-based Aidoc raised $65 million for its AI-powered medical imaging platform, and other local companies are attracting investors at a rapid clip.


The software can find, and in some cases diagnose, polyps, tumors or anomalies that may otherwise go undetected by the human eye – a feat that has the potential to save lives. Beyond its most promising attributes, AI-driven technology could also dramatically decrease wait times at hospitals and doctors' offices by automating some of the most tedious work, allowing doctors to see and treat more patients. But critics of the unregulated technology say results can be inconsistent.

Brendan Burke, an emerging tech analyst at PitchBook, estimates investment in the space has skyrocketed, from $1.6 billion in 2019 to $2.6 billion in 2020.

"[Venture capitalists] have certainly seen enough adoption to justify substantial investments," Burke said. "But there's still uneven adoption overall and a degree of skepticism from health care providers."

The money is pouring in because most illnesses are diagnosed with a terrifying cocktail of subjectivity and luck, and a hard-to-find tumor could rear its ugly head when it's too late, forcing clinicians to scramble to use invasive (sometimes dangerous) procedures to course correct.

Paul Grand, founder and CEO of MedTech Innovator, a medical technology startup accelerator, said interest is gaining traction because investors see the potential for a breakthrough technology — even if it isn't fully proven yet.

"They're not looking for little incremental improvements when they make investments as VCs, they're looking for game-changing, industry needle-moving investments," he said.

Irvine-based Docbot, a gastrointestinal AI startup that has raised $6.5 million according to PitchBook, developed Ultivision AI to find polyps that could turn cancerous. Most diagnoses depend on a doctor's ability to spot them through a camera inserted into the GI tract. Founded by gastroenterologist William Karnes, Docbot uses AI to point out faded or small polyps through the camera lens.

"By doing this, you'll catch more polyps, and thus the colonoscopy will have a higher performance rate in hopefully catching more polyps, so a patient would have less risk of getting colon cancer afterwards," said Docbot CEO Andrew Ritter.

After feeding 50,000 colonoscopy videos through a machine learning algorithm, Docbot put Ultivision AI up next to a panel of physicians to detect polyps in a slew of videos. The AI found 61% more polyps than the panel.

Now, the AI has been trained on more than 10 million images.

Another player is Woodland Hills-based Eyenuk, which 10 months ago received FDA approval for a medical imaging AI device that can diagnose diabetic retinopathy. The device has been trained on more than two million images and is in use at 15 different institutions in the US.

Eyenuk's device became useful during the coronavirus pandemic. Nose-to-nose contact is often unavoidable for ophthalmologists who need to conduct eye exams, but the device can operate autonomously, taking photos of a patient's eyes and diagnosing the problem in a span of minutes.

"[Doctors] want AI to prescreen people's eyes in the community," Frank Cheng, president of Eyenuk, said. "...if there is a need for evaluation and treatment, they then jump in to more efficiently treat the patient."

Eyenuk Inc.'s AI-based diabetic retinopathy screening software was tested in a study on cost-effective mass retinal screening.

Doctors Remain Skeptical

Despite the sweeping promises of medical imaging AI, doctors remain largely distrustful of the tech. A survey from the American College of Radiology found that only 30% of doctors use medical imaging AI, and a study presented to the FDA found that 95% of clinicians think AI is inconsistent or doesn't work at all.

"Sometimes these machine learning models are so sophisticated, it's really hard to tell how a program actually came to its decision," said Ritika Chaturvedi, a precision medicine expert at the USC Schaeffer Center. "How is that physician to know whether to evaluate their own judgment or use the AI's recommendation?"

With most medical imaging AI, a doctor or a startup collects a set of reference images or videos of whatever it is they want to target—rashes on the skin, tumors in the body, or x-rays of bone fractures—and feeds them through a machine learning algorithm that uses those images to learn what to look for. The algorithm marks different patterns it finds in the images, such as shape or color, to build a framework for what it should detect. When the algorithm is calibrated to the level of accuracy the team desires—sometimes 80%, sometimes 60%—the team applies it to an unknown image to see whether it catches the target.
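The workflow described above can be sketched in miniature. The code below is purely illustrative—a toy stand-in for the real systems, not Docbot's or Eyenuk's actual software: tiny pixel grids stand in for reference images, a single brightness feature stands in for learned patterns, and a threshold is tuned on labeled examples until it hits a chosen accuracy target before being applied to an unseen case. All function names and data are hypothetical.

```python
# Illustrative sketch of the train-calibrate-apply loop described above.
# A real system would use a deep neural network on thousands of images;
# here a brightness threshold on toy 2x2 "images" plays that role.

def brightness_score(image):
    """Toy feature: mean pixel intensity stands in for learned patterns."""
    flat = [px for row in image for px in row]
    return sum(flat) / len(flat)

def calibrate_threshold(labeled_images, target_accuracy=0.8):
    """Scan thresholds; return the first that meets the accuracy target."""
    best = None
    for t in range(256):
        correct = sum(
            (brightness_score(img) >= t) == has_polyp
            for img, has_polyp in labeled_images
        )
        acc = correct / len(labeled_images)
        if best is None or acc > best[1]:
            best = (t, acc)
        if acc >= target_accuracy:
            return t, acc
    return best  # fall back to the best threshold found

# Labeled reference set: (tiny "image", polyp present?)
reference = [
    ([[200, 210], [190, 205]], True),   # bright region -> lesion
    ([[220, 215], [230, 225]], True),
    ([[40, 50], [45, 55]], False),      # dark region -> healthy tissue
    ([[30, 35], [25, 20]], False),
]

threshold, accuracy = calibrate_threshold(reference, target_accuracy=1.0)

# Apply the calibrated model to an "unknown" image, as the article describes.
unknown = [[210, 220], [215, 205]]
flagged = brightness_score(unknown) >= threshold
print(f"threshold={threshold}, accuracy={accuracy:.0%}, flagged={flagged}")
```

The accuracy figure here comes only from the reference set the team assembled, which is exactly the standardization gap critics point to: nothing dictates how large or representative that set must be.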

But the lack of standardization in medical imaging AI makes it difficult for clinicians to know if they can trust the technology. There are no standards on how many reference images need to be used to train the AI (though the more, the better). There is also no rule that dictates a machine learning algorithm is satisfactory at 80% accuracy, or 60% accuracy. Nor are protocols in place for when a doctor disagrees with an AI's assessment.

"Because this field is so new, people are just now starting to grapple with the ethics," Chaturvedi said.

When a specific AI product is approved by the Food and Drug Administration, it doesn't undergo re-approval when its developers add images or videos to the machine-learning model, even though that can change how the AI performs. Training datasets are also often not available for the public to review whether the data is representative of the population.

"The adage is in computer science, garbage in, garbage out," Chaturvedi said. "So if your training data set is highly biased, then your outputs are going to be highly biased."

Grand says there's an adoption phase with every new technology, and medical imaging AI will one day reach a point where it could be considered negligent for doctors not to use it.

"It could be five years, 10 years, but that's the phase we're going to be in where doctors go, 'Okay, AI is a new tool for me to be a better doctor,'" Grand said.

Indeed, there may soon come a time when doctors embrace medical imaging AI, when residents are trained to use the technology in hospitals and clinics, and when medical organizations consider AI as much of a diagnostic staple as a stethoscope or an MRI. But in order for that to happen, experts say, the data needs to be unequivocally clear that AI is beneficial, and regulations need to be put in place to encourage broad adoption.

"You've diagnosed the cancer," Chaturvedi said. "But if you can't treat it, then what's the point?"

Lead art by Ian Hurley.