AI doctors outperform their fleshy counterparts, but still we don’t trust them

Despite a 90% success rate in diagnosing an issue (and the promise of a discount), AI doctors in America are facing discrimination. Shame.

Research from two professors at Boston University indicates that patients are averse to health care provided by medical AI, even when it outperforms human doctors.

The reason seems to be that patients believe their medical needs are unique and cannot be adequately addressed by algorithms. To realise the advantages and reduced costs that medical artificial intelligence promises, care providers will have to find ways to overcome this prevailing belief.

According to Chiara Longoni and Carey K. Morewedge of the Questrom School of Business in Boston, medical artificial intelligence can “perform with expert-level accuracy [and] deliver cost-effective care at scale”. For example, algorithms identify eye diseases just as well as specialised physicians, and smartphone apps can now detect skin cancer with expert accuracy. Some speculate that 90% of hospitals will adopt AI measures that can replace up to 80% of what doctors currently do. For that to become a reality, however, the healthcare system will have to dispel patients’ distrust of AI.

Longoni and Morewedge published a paper in the Journal of Consumer Research that explored patients’ receptivity to medical AI. The research showed a strong reluctance across a variety of procedures, ranging from skin cancer screening to pacemaker implant surgery. When the healthcare was to be carried out by AI instead of a human, patients were far less likely to use the service and expected to pay less for it, even if it meant the risk of inaccurate diagnosis or surgical complication was greater.

This resistance, as mentioned, was found by Longoni and Morewedge to stem from a belief that AI cannot consider an individual’s idiosyncratic characteristics and circumstances—people view themselves as unique, and this thinking extends to their health. Medical AI is seen as inflexible and standardised, a way of treating an “average” patient, but inadequate when it comes to the individual in question. In all the tests carried out in their study, participants were willing to forgo better health care to have a human care provider.

The more the participants viewed themselves as unique, the more pronounced their resistance was. In one study, 243 participants from an online panel were asked to indicate their preference between two providers for a skin cancer screening, both of which were 90% accurate in their diagnoses. The degree to which the participants perceived themselves as unique predicted a greater preference for a human over an (equally accurate) AI provider, but it had no effect on their preference between two human providers.

They also found, however, that people are more comfortable utilising medical AI if a physician remains in charge of the ultimate decision. 

“In one study discussed in our paper, participants reported that they would be as likely to use a procedure in which an algorithm analysed scans of their body for skin cancer and made recommendations to a doctor who made the final call as they would be to utilise care provided from start to finish by a doctor,” they noted in a press release.

To harness the full potential of medical AI services, health care providers will need physicians to confirm the recommendations made by an AI provider, easing patients’ scepticism of having an algorithm, rather than a person, make decisions about their care. Physicians are going nowhere for the time being.
