AI may not care whether humans live or die, but tools like ChatGPT will still affect life-and-death decisions — once they become a standard tool in the hands of doctors. Some are already experimenting with ChatGPT to see if it can diagnose patients and choose treatments. Whether this is good or bad hinges on how doctors use it.
GPT-4, the latest model powering ChatGPT, can get a perfect score on medical licensing exams. When it gets something wrong, there's often a legitimate medical dispute over the answer. It's even good at tasks we thought took human compassion, such as finding the right words to deliver bad news to patients.
These systems are developing image processing capacity as well. At this point you still need a real doctor to palpate a lump or assess a torn ligament, but AI could read an MRI or CT scan and offer a medical judgment. Ideally AI wouldn't replace hands-on medical work but enhance it — and yet we're nowhere near understanding when and where it would be practical or ethical to follow its recommendations.
And it's inevitable that people will use it to guide their own healthcare decisions, just as they've been leaning on "Dr. Google" for years. Despite having more information at our fingertips, public health experts this week blamed an abundance of misinformation for our relatively short life expectancy, a problem GPT-4 could either ease or worsen.
Andrew Beam, a professor of biomedical informatics at Harvard, has been amazed by GPT-4's feats, but told me he can get it to give vastly different answers by subtly changing how he phrases his prompts. For example, it won't necessarily ace a medical exam unless you instruct it to, say by telling it to act as if it's the smartest person in the world.