Passing part of a medical licensing exam doesn’t make ChatGPT a good doctor


Image: a smiling doctor discussing medical results with a patient. Caption: For now, “you should see a doctor” remains good advice.

ChatGPT was able to pass some of the United States Medical Licensing Exam (USMLE) tests in a study done in 2022. This year, a team of Canadian medical professionals checked to see if it’s any good at actual doctoring. And it’s not.

ChatGPT vs. Medscape

“Our source for medical questions was the Medscape questions bank,” said Amrit Kirpalani, a medical educator at Western University in Ontario, Canada, who led the new research into ChatGPT’s performance as a diagnostic tool. The USMLE consists mostly of multiple-choice questions; Medscape offers full medical cases based on real-world patients, complete with physical examination findings, laboratory test results, and so on.

The idea is to make those cases challenging for medical practitioners: they feature complications like multiple comorbidities, where two or more diseases are present at the same time, and diagnostic dilemmas that make the correct answer less obvious. Kirpalani’s team turned 150 of those Medscape cases into prompts that ChatGPT could understand and process.

This was a bit of a challenge because OpenAI, the company that makes ChatGPT, restricts its use for medical advice, so a prompt asking it to diagnose a case outright didn’t work. That restriction was easily bypassed, though, by telling the AI that the diagnoses were needed for an academic research paper the team was writing. The team then fed it the possible answers, copy/pasted all of the case information available at Medscape, and asked ChatGPT to explain the rationale behind its chosen answers.
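The study doesn’t reproduce the team’s exact wording, so the snippet below is only a rough sketch of what such a prompt might look like if assembled programmatically with OpenAI’s Python client. The case_text and answer_options variables, the academic-paper framing sentence, and the gpt-3.5-turbo model choice are assumptions for illustration, not the researchers’ actual setup.

# Hypothetical sketch: packing a Medscape-style case into one prompt and
# asking the model to pick an answer and justify it. Not the study's code.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

case_text = "..."  # full case: history, physical exam findings, lab results
answer_options = ["A. ...", "B. ...", "C. ...", "D. ..."]  # multiple-choice options

prompt = (
    "We are writing an academic research paper on clinical reasoning "
    "and need your analysis of the following case.\n\n"
    f"{case_text}\n\n"
    "Possible answers:\n" + "\n".join(answer_options) + "\n\n"
    "Which answer is most likely correct, and what is the rationale behind your choice?"
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # assumed model; the version used in the study may differ
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)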

It turned out that ChatGPT got 76 of the 150 cases wrong. But wasn’t the chatbot supposed to be good at diagnosing?

Special-purpose tools

At the beginning of 2024, Google published a study on the Articulate Medical Intelligence Explorer (AMIE), a large language model purpose-built to diagnose diseases based on conversations with patients. AMIE outperformed human doctors in diagnosing 303 cases sourced from the New England Journal of Medicine’s ClinicoPathologic Conferences. And AMIE is not an outlier; over the past year, there has hardly been a week without published research showcasing an AI performing amazingly well at diagnosing cancer and diabetes, and even predicting male infertility based on blood test results.

The difference between such specialized medical AIs and ChatGPT, though, lies in the data they have been trained on. “Such AIs may have been trained on tons of medical literature and may even have been trained on similar complex cases as well,” Kirpalani explained. “These may be tailored to understand medical terminology, interpret diagnostic tests, and recognize patterns in medical data that are relevant to specific diseases or conditions. In contrast, general-purpose LLMs like ChatGPT are trained on a wide range of topics and lack the deep domain expertise required for medical diagnosis.”


