For years, many have feared that artificial intelligence (AI) will take over national security mechanisms, leading to human slavery, domination of human society, and perhaps the annihilation of humans.
One way of killing humans is medical misdiagnosis, so it seems reasonable to examine the performance of ChatGPT, the AI chatbot that is taking the world by storm.
This is timely in light of ChatGPT's recent remarkable performance in passing the US medical licensing exam.
Computer-aided diagnosis has been attempted many times over the years, particularly for diagnosing appendicitis. But the emergence of AI that draws on the entire internet for answers, rather than being confined to fixed databases, opens new avenues of potential for augmenting medical diagnosis.
More recently, several articles have discussed the performance of ChatGPT in making medical diagnoses.
An American emergency medicine physician recently gave an account of how he asked ChatGPT to give the possible diagnoses of a young woman with lower abdominal pain.
The machine gave numerous credible diagnoses, such as appendicitis and ovarian cyst problems, but it missed ectopic pregnancy.
This was correctly identified by the physician as a serious omission, and I agree. On my watch, ChatGPT would not have passed its medical final examinations with that rather deadly performance.
I'm pleased to say that when I asked ChatGPT the same question about a young woman with lower abdominal pain, it confidently included ectopic pregnancy in the differential diagnosis.
This reminds us of an important thing about AI: it is capable of learning.
Presumably, someone has told ChatGPT of its error and it has learned from this new data – not unlike a medical student.