There’s a term for when you take medical advice from a search engine instead of a medical provider: Dr. Google. It often delivers worst-case-scenario diagnoses, spouts bad advice, and is the bane of real doctors who have to debate its merits with their patients. But there's a challenger to Dr. Google on the horizon, which might work with physicians instead of against them: Dr. AI.
AI chatbots have been getting all sorts of new jobs lately, and now they could assist physicians, though Americans are largely hesitant to trust healthcare AI, according to a Pew Research Center survey. Sixty percent of those surveyed said they would be uncomfortable with healthcare providers relying on artificial intelligence to diagnose diseases and recommend treatments.
Only 38% said it would lead to better health outcomes, 33% said it would lead to worse outcomes, and 27% said it wouldn't make much difference. Here, the move-fast-and-break-things tech motto comes up against the first-do-no-harm ethos of medicine, with 75% of people expressing concern that healthcare providers will implement AI before fully understanding the risks.
One of the primary concerns about AI's involvement in healthcare, voiced by 57% of respondents, is that it would erode the doctor-patient relationship. This relationship is especially important when it comes to mental healthcare. Most of those surveyed (79%) said they would not want to use an AI chatbot if they were seeking mental health support. (While there are many options for obtaining therapy online, AI is not designed to provide such services.)
Additionally, people expressed distrust regarding privacy, with 37%
Read more on pcmag.com