There has been much hype about how artificial intelligence (AI) can revolutionize health care. Has it been able to do so? According to The Economist‘s Technology Quarterly on AI and its limits, the answer is: not as much as we had hoped. First, the hype:
PwC…predicts that artificial intelligence will add $16trn to the global economy by 2030. The total of all activity…in the world’s second largest economy was just $13trn in 2018…[however] clever computers capable of doing the jobs of radiologists, lorry drivers or warehouse workers might cause a wave of unemployment…In 2016 Geoffrey Hinton, a computer scientist…remarked that “it’s quite obvious that we should stop training radiologists” on the grounds that computers will soon be able to do everything they do, only cheaper and faster.
The reality has not lived up to this. First, far from radiologists being displaced, we are facing a shortage of them. Second, AI has not performed as well as hoped. There is the well-known failure of the Watson AI in its partnership with MD Anderson. Consider another example:
In 2018 researchers at Mount Sinai, a hospital network in New York, found that an AI system trained to spot pneumonia on chest x-rays became markedly less competent when used in hospitals other than those it had been trained in. The researchers discovered that the machine had been able to work out which hospital a scan had come from [based on a small metal token placed in the corner of the scans].
Since one hospital in its training set had a baseline rate of pneumonia far higher than the others, that information by itself was enough to boost the system’s accuracy substantially. The researchers dubbed that clever wheeze “cheating”, on the grounds that it failed when the system was presented with data from hospitals it did not know.
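The mechanics of this kind of “cheating” (shortcut learning) are easy to reproduce. Below is a minimal, hypothetical sketch: it does not use real scan data or the Mount Sinai model, just simulated labels. A “model” that predicts pneumonia purely from a hospital marker (standing in for the metal token) looks accurate in-distribution, because one training hospital has a much higher baseline rate, then collapses to chance on a hospital it has never seen.

```python
import random

random.seed(0)

def make_scans(hospital, base_rate, n):
    # Each simulated "scan" carries a hospital marker (like the metal
    # token) and a true label; the image content itself plays no role.
    return [{"hospital": hospital,
             "pneumonia": random.random() < base_rate}
            for _ in range(n)]

# Two training hospitals with very different baseline pneumonia rates.
train = make_scans("A", 0.9, 1000) + make_scans("B", 0.1, 1000)

def shortcut_model(scan):
    # The "cheat": predict from the hospital marker alone.
    return scan["hospital"] == "A"

def accuracy(model, scans):
    return sum(model(s) == s["pneumonia"] for s in scans) / len(scans)

print(accuracy(shortcut_model, train))         # high (~0.9) in-distribution
new_hospital = make_scans("C", 0.5, 1000)
print(accuracy(shortcut_model, new_hospital))  # near chance (~0.5)
```

The point of the sketch is that nothing in the training data penalizes the shortcut: exploiting the hospital marker genuinely is the easiest way to score well on the training hospitals, which is exactly why the failure only shows up on data from hospitals the system does not know.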
Consider also the much hyped case of retinal scans. A 2018 paper used the DeepMind AI and found that given a retina scan, the AI:
…could make correct referral decisions 94% of the time, matching human experts. A more recent paper described a system that predicted the onset of age-related macular degeneration, a progressive disease that causes blindness, up to six months in advance.
But Dr. [Pearse] Keane cautions that in practice moving from a lab demonstration to a real system takes time.
What are the issues? First, the data must be in a standardized, usable format. Second, there are regulatory challenges around patient privacy: without access to shared patient scan data, the AI cannot be trained or improve over time. Third, AI can’t explain “why” it chose a specific outcome, which means that physicians may have trouble identifying cases where the AI could be wrong. Finally, AI is a long way from matching humans’ emotional intelligence and ability to communicate with patients.
A bigger issue is getting data in the first place. For instance, AI could be used to detect COVID-19 cases using smartphones. A recent poll, however, found that “half of Americans would refuse to install a location-tracking contact-tracing app on their phones.”
In short, while AI has a number of promising applications in the future, the value to date has not met the hype.