If ChatGPT were cut loose in the Emergency Department, it could recommend unnecessary X-rays and antibiotics for some patients and admit others who didn’t need hospital care, according to a new study from UC San Francisco.
The researchers said that while the model could be prompted in ways that make its responses more accurate, it still doesn’t match the clinical judgment of a human doctor.
“This is a valuable message to clinicians not to blindly trust these models,” said postdoctoral researcher Chris Williams, MB BChir, lead author of the study, which appears Oct. 8 in Nature Communications. “ChatGPT can answer medical exam questions and help write clinical notes, but it’s not currently designed for situations that call for multiple considerations, such as those in an emergency department.”
Recently, Williams showed that ChatGPT, a large language model (LLM) that can be used to research clinical applications of artificial intelligence, was slightly better than humans at determining which of two emergency patients was more acutely ill, a straightforward choice between patient A and patient B.
With the current study, Williams challenged the AI model to perform a more complex task: providing the recommendations a physician makes after an initial examination of a patient in the ED. This includes deciding whether to admit the patient, order X-rays or other scans, or prescribe antibiotics.
The AI model is less accurate than a resident
For each of the three decisions, the team compiled 1,000 ED visits to analyze from an archive of more than 251,000 visits. Each set had the same ratio of “yes” to “no” responses for decisions on admission, radiology, and antibiotics as seen across the UCSF Emergency Department.
Using UCSF’s secure generative AI platform, which has broad privacy protections, the researchers entered doctors’ notes on each patient’s symptoms and examination findings into ChatGPT-3.5 and ChatGPT-4. They then tested the accuracy of each set with a series of increasingly detailed prompts.
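For readers who want a concrete picture of this setup, the sketch below shows one way such an evaluation could be wired together in Python: drawing a ratio-matched sample of visits, prompting a model with the clinician’s note, and scoring its yes/no answer against the resident physician’s actual decision (shown here for the admission decision only). The data structures, prompt wording, placeholder rates, and use of the public OpenAI client are illustrative assumptions, not the study’s actual pipeline, which ran on UCSF’s secure platform.

```python
# Illustrative sketch only -- not the study's code. It assumes a de-identified
# archive of visits and uses the public OpenAI client purely as a stand-in for
# the secure platform described above; prompt text and names are hypothetical.
import random
from dataclasses import dataclass

from openai import OpenAI  # pip install openai

client = OpenAI()  # reads OPENAI_API_KEY from the environment


@dataclass
class Visit:
    note: str        # doctor's note: symptoms and examination findings
    admitted: bool   # the resident physician's actual decision (reference label)


def sample_matched(visits: list[Visit], n: int = 1000, yes_rate: float = 0.3,
                   seed: int = 0) -> list[Visit]:
    """Draw n visits whose yes/no ratio matches a department-wide rate
    (the 0.3 here is a made-up placeholder, not a figure from the paper)."""
    rng = random.Random(seed)
    yes = [v for v in visits if v.admitted]
    no = [v for v in visits if not v.admitted]
    k = round(n * yes_rate)
    sample = rng.sample(yes, k) + rng.sample(no, n - k)
    rng.shuffle(sample)
    return sample


def model_recommends_admission(note: str, model: str = "gpt-4") -> bool:
    """Ask the model for a yes/no admission recommendation based on the note."""
    reply = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": ("Based on the following emergency department note, "
                        "should this patient be admitted to hospital? "
                        "Answer only 'yes' or 'no'.\n\n" + note),
        }],
    )
    return reply.choices[0].message.content.strip().lower().startswith("yes")


def accuracy(visits: list[Visit]) -> float:
    """Fraction of visits where the model's answer matches the physician's decision."""
    correct = sum(model_recommends_admission(v.note) == v.admitted for v in visits)
    return correct / len(visits)
```

In the study itself, the same kind of comparison would be repeated for the radiology and antibiotic decisions and across increasingly detailed prompt variants.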
Overall, the AI models tended to recommend services more often than necessary. ChatGPT-4 was 8% less accurate than resident physicians and ChatGPT-3.5 was 24% less accurate.
Williams said the AI’s tendency to overprescribe may be because the models are trained on the Internet, where legitimate medical advice sites are not designed to answer urgent medical questions but rather to send readers to a doctor who can.
“These models are almost fine-tuned to say ‘seek medical advice,’ which is quite right from a general public-safety perspective. But erring on the side of caution is not always appropriate in the ED setting, where unnecessary interventions could harm patients, strain resources, and lead to higher patient costs.”
Chris Williams, MB BChir, lead author of the study
He said models like ChatGPT will need better frameworks for evaluating clinical information before they are ready for the ED. The people designing these frameworks will have to strike a balance between making sure the AI doesn’t miss anything serious while keeping it from causing unnecessary testing and expense.
This means that researchers developing medical applications of AI, along with the wider clinical community and the public, must consider where to draw those lines and how cautious they should be.
“There’s no perfect solution,” he said, “but knowing that models like ChatGPT have these tendencies, we’re tasked with thinking about how we want them to perform in clinical practice.”
Source: University of California, San Francisco (UCSF)
Journal Reference:
Williams, C., et al. (2024). Evaluation of the use of large language models to provide clinical recommendations in the Emergency Department. Nature Communications. doi.org/10.1038/s41467-024-52415-1.