
Study: Patients are less likely to follow advice from AI doctors that know their names - The Next Web


Engineers forever strive to make our interactions with AI more human-like, but a new study suggests a personal touch isn’t always welcome.

Researchers from Penn State and the University of California, Santa Barbara found that people are less likely to follow the advice of an AI doctor that knows their name and medical history.

Their two-phase study randomly assigned participants to chatbots that identified themselves as an AI, a human, or a human assisted by AI.

The first part of the study was framed as a visit to a new doctor on an e-health platform. 


The 295 participants were first asked to fill out a health form. They then read the following description of the doctor they were about to meet:

Human doctor: Dr. Alex received a medical degree from the University of Pittsburgh School of Medicine in 2005, and he is board certified in pulmonary (lung) medicine. His area of focus includes cough, obstructive lung disease, and respiratory problems. Dr. Alex says, “I strive to provide accurate diagnosis and treatment for the patients.”

AI doctor: AI Dr. Alex is a deep learning-based AI algorithm for detection of influenza, lung disease, and respiratory problems. The algorithm was developed by several research groups at the University of Pittsburgh School of Medicine with a massive real-world dataset. In practice, AI Dr. Alex has achieved high accuracy in diagnosis and treatment.

AI-assisted human doctor: Dr. Alex is a board-certified pulmonary specialist who received a medical degree from the University of Pittsburgh School of Medicine in 2005. The AI medical system assisting Dr. Alex is based on deep learning algorithms for the detection of influenza, lung disease, and respiratory problems.

The doctor then entered the chat and the interaction began.

Each chatbot was programmed to ask eight questions about COVID-19 symptoms and behaviours, then offer a diagnosis and recommendations based on the CDC Coronavirus Self-Checker.
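
For illustration only (the study's actual chatbot code isn't published), a scripted interview like this can be driven by a short loop. The questions and the triage rule below are hypothetical stand-ins, not the CDC Self-Checker's real decision logic:

# Hypothetical sketch of a scripted symptom-checker chatbot.
# The questions and the triage rule are illustrative stand-ins;
# the study's bots followed the CDC Coronavirus Self-Checker,
# whose actual logic is not reproduced here.

SYMPTOM_QUESTIONS = [
    "Do you have a fever?",
    "Do you have a cough?",
    # ... six more scripted questions about symptoms and behaviours
]

def run_interview():
    answers = {}
    for question in SYMPTOM_QUESTIONS:
        reply = input(f"Dr. Alex: {question} (yes/no) ").strip().lower()
        answers[question] = reply == "yes"
    return answers

def recommend(answers):
    # Toy triage rule: flag for testing if any symptom is reported.
    if any(answers.values()):
        return "Your answers suggest you should seek a COVID-19 test."
    return "No symptoms reported; continue monitoring your health."

if __name__ == "__main__":
    print(recommend(run_interview()))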

Around 10 days later, the participants were invited to a second session. Each of them was matched with a chatbot with the same identity as in the first part of the study. But this time, some were assigned to a bot that referred to details from their previous interaction, while others were allocated a bot that made no reference to their personal information.
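
A minimal way to picture that manipulation: the individuating bots prepend details recalled from the first session, while the control bots open with a generic greeting. The name and wording below are invented for illustration; only the presence or absence of recalled personal details reflects the study's design:

# Hypothetical sketch of the two greeting conditions in session two.
# Wording and the example name are invented; only the presence or
# absence of recalled personal details mirrors the study's manipulation.

def greeting(name, prior_symptoms, individuate):
    if individuate:
        # Individuation condition: refer back to the first session.
        return (f"Welcome back, {name}. Last time you mentioned "
                f"{', '.join(prior_symptoms)}. How are you feeling today?")
    # Control condition: no reference to personal information.
    return "Welcome. How are you feeling today?"

print(greeting("Jamie", ["a dry cough"], individuate=True))
print(greeting("Jamie", ["a dry cough"], individuate=False))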

After the chat, the participants were given a questionnaire to evaluate the doctor and their interaction. They were then told that all the doctors were bots, regardless of their professed identity.

Diagnosing AI

The study found that patients were less likely to heed the advice of AI doctors that referred to their personal information, and more likely to consider those chatbots intrusive. The reverse pattern held for chatbots presented as human: patients responded better when those bots recalled their details.

Per the study paper:

In line with the uncanny valley theory of mind, it could be that individuation is viewed as being unique to human-human interaction. Individuation from AI is probably viewed as a pretense, i.e., a disingenuous attempt at caring and closeness. On the other hand, when a human doctor does not individuate and repeatedly asks patients’ name, medical history, and behavior, individuals tend to perceive greater intrusiveness which leads to less patient compliance.

The findings about human doctors, however, come with a caveat: 78% of participants in this group thought they’d interacted with an AI doctor. The researchers suspect this was due to the chatbots’ mechanical responses and the lack of a human presence on the interface, such as a profile photo.

Ultimately, the team hopes the research leads to improvements in how medical chatbots are designed. It could also offer pointers on how human doctors should interact with patients online.

You can read the study paper here.

