Tyler Cowen has one of his “Conversations” with Atul Gawande. The interview is interesting throughout. Below is an excerpt from their discussion on artificial intelligence.
TYLER COWEN: …How far are we from having an AI that is capable of actually doing diagnosis to people? That is, they might speak into a Skype connection, something like Watson would hear what they say, and they would then diagnose the person well enough that this would be a usable form of healthcare? Is that far, close?
ATUL GAWANDE: Massively far. I think it’s one of the hardest things. You want me to tell you why?
COWEN: Tell us why, yes.
GAWANDE: OK, the diagnosis process. People imagine that it works like this: people come to you with a crisply defined problem. “I have symptom one, two, three. I have data to add to it, and now give me the answer.”
The reality is, first of all, people come to you often unable to explain what their problem is. “I have pain.” “Where?” “Hmmm. Well, it’s sort of here.” And they’ll point with a hand. “Well, do you mean there under your rib cage, or you mean in your chest, or . . .”
So you have this probing process that is part of it and how they tell the story. Then there’s also how their story had evolved over time, and they often have to put it in their words. It’s more of a narrative than it is a straight set of data. That’s problem one.
IBM Watson put their AI on this problem, and it would never be the problem I would have put them on. The second part of it is that it changes over time, and you’re adding data along the way. You’re integrating it with a little bit about your view of the understanding of the person and their likelihood to even say that something is a major symptom or not.
There is no question that you can augment the human capability. But the idea that you pull out your phone and it gives you the diagnosis is still far off. One of the hardest problems in reducing error in medicine is the fact that we still have a high rate of error, and the sources of that error have to do with the human being rather than the calculation.
COWEN: But say you only get 15 minutes with your doctor, which is pretty common, and as you know, those conversations don’t always run so well. People are intimidated, they forget the right questions to ask. You could have three hours talking to something like Watson. Maybe 80 percent of the dialogue is nonsense, but at the end, you apply machine learning.
And keep in mind, the alternative now is that people use Google, which is in a sense the world’s number one doctor. So AI only needs to be better than Google, which is already a form of AI. In that sense, isn’t it just around the corner that it would be a marginal improvement on what we have today?
GAWANDE: Yeah, one is the replacement question: can I simply have something that will make the diagnosis? There are lots of reasons why that’s difficult. But to augment the human capability, absolutely. There already are programs. One example is called Isabel, where the clinician, having elicited all of this information, can simply put the observations into a list. It will allow them to recognize, “OK, fine. You think that what they have is diagnosis one, but here are eight others in rank order of consideration compared to the one that you think it is.”
There have been plenty of studies, and it’s been around for more than a decade without the need for AI. It is just crunching some basic data, and that alone can add real value. I think the puzzle of it is that you need the capability to integrate information coming from the person, interpret it, and get it into these kinds of systems. And in many cases, people may be able to do some of that over time for themselves.
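The kind of tool Gawande describes can be sketched in a few lines. This is only a toy illustration of ranked differential diagnosis, not how Isabel actually works: the `KNOWLEDGE_BASE` dictionary, its diagnoses, and the Jaccard-overlap scoring are all invented here for demonstration, whereas a real system draws on a curated clinical database and far more sophisticated matching.

```python
# Toy sketch of a ranked-differential tool: the clinician enters a list
# of observations, and the system returns candidate diagnoses ranked by
# how well each one's associated findings match those observations.
# The knowledge base below is hypothetical, for illustration only.

KNOWLEDGE_BASE = {
    "appendicitis": {"abdominal pain", "fever", "nausea", "loss of appetite"},
    "gastritis": {"abdominal pain", "nausea", "bloating"},
    "pneumonia": {"fever", "cough", "chest pain", "shortness of breath"},
    "myocardial infarction": {"chest pain", "shortness of breath", "nausea"},
}

def rank_differential(observations):
    """Rank diagnoses by Jaccard overlap between the entered
    observations and each diagnosis's associated findings."""
    obs = set(observations)
    scored = []
    for diagnosis, findings in KNOWLEDGE_BASE.items():
        overlap = len(obs & findings)
        score = overlap / len(obs | findings) if overlap else 0.0
        scored.append((diagnosis, round(score, 3)))
    # Highest-scoring candidates first
    return sorted(scored, key=lambda pair: pair[1], reverse=True)

ranked = rank_differential(["abdominal pain", "nausea", "fever"])
for diagnosis, score in ranked:
    print(f"{score:.3f}  {diagnosis}")
```

The point of the sketch is Gawande's: the ranking step is straightforward data-crunching, while the hard part, turning a patient's narrative into the clean observation list the function takes as input, happens before the code ever runs.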