Study evaluates large language model-generated emergency medicine handoff notes, finding high utility and safety approaching that of physician-written notes
Study: Developing and Evaluating Large Language Model-Generated Emergency Medicine Handoff Notes. Image credit: Kamon_wongnon / Shutterstock.com
In a recent study published in JAMA Network Open, researchers developed and evaluated the accuracy, safety, and utility of large language model (LLM)-generated emergency medicine (EM) handoff notes, with the aim of reducing physician documentation burden without compromising patient safety.
The critical role of handoffs in health care
Handoffs are critical points of communication in health care and a known source of medical errors. As a result, organizations such as The Joint Commission and the Accreditation Council for Graduate Medical Education (ACGME) have advocated standardized handoff procedures to improve safety.
EM-to-inpatient (IP) handoffs pose unique challenges, including medical complexity, time constraints, and diagnostic uncertainty, yet they remain poorly standardized and inconsistently implemented. Electronic health record (EHR)-based tools have attempted to overcome these limitations but remain largely unexplored in emergency settings.
LLMs have emerged as potential solutions for streamlining clinical documentation. However, concerns about factual inconsistencies require further research to ensure safety and reliability in critical workflows.
About the study
The present study was conducted in an 840-bed urban tertiary care academic hospital in New York City. EHR data from 1,600 EM patient encounters resulting in acute hospital admissions between April and September 2023 were analyzed. Only encounters after April 2023 were included, owing to the implementation of an updated EM-to-IP handoff system at that time.
Retrospective data were used under a waiver of informed consent, given the minimal risk to patients. Handoff notes were generated using a combination of LLM outputs and rule-based heuristics while adhering to standard reference guidelines.
The handoff note template closely mirrored the structure of the existing manually written notes, incorporating rule-based elements, such as laboratory results and vital signs, and LLM-generated elements, such as the history of present illness and differential diagnoses. Informatics experts and EM physicians curated the data used to refine the LLM to improve output quality, while excluding race-based characteristics to avoid bias.
Two models, the Robustly Optimized BERT Approach (RoBERTa) and Large Language Model Meta AI (Llama-2), were used for salient content selection and abstractive summarization, respectively. Data processing included heuristic prioritization and saliency modeling to address the models' potential limitations.
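The study's exact selection heuristics are not described in detail here. As a minimal illustration of what keyword-based saliency ranking can look like (the function name, sentences, and priority terms below are all hypothetical, standing in for the paper's learned saliency model):

```python
def select_salient(sentences, priority_terms, top_k=3):
    """Rank candidate note sentences by overlap with priority clinical
    terms and keep the top_k most salient ones (an illustrative stand-in
    for a learned saliency model)."""
    def score(sentence):
        tokens = set(sentence.lower().split())
        return sum(term in tokens for term in priority_terms)
    # sorted() is stable, so ties preserve the original note order
    return sorted(sentences, key=score, reverse=True)[:top_k]

notes = [
    "Patient reports chest pain radiating to the left arm.",
    "Family is at the bedside.",
    "Troponin elevated; ECG shows ST depression.",
]
print(select_salient(notes, {"chest", "pain", "troponin", "ecg"}, top_k=2))
```

In a real pipeline the selected spans would then be passed to the abstractive summarizer; this sketch only shows the ranking step.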
The researchers evaluated automated metrics, such as Recall-Oriented Understudy for Gisting Evaluation (ROUGE) and the Bidirectional Encoder Representations from Transformers Score (BERTScore), alongside a novel evaluation framework focused on patient safety. A clinical review of 50 handoff notes assessed their completeness, readability, and safety to ensure rigorous validation.
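ROUGE-2 compares a candidate summary against a reference by counting overlapping bigrams. A minimal sketch of the recall-oriented variant (the study's exact ROUGE configuration is not specified in this summary, and the example sentences are invented):

```python
from collections import Counter

def bigrams(text):
    """Multiset of adjacent word pairs in a whitespace-tokenized text."""
    tokens = text.lower().split()
    return Counter(zip(tokens, tokens[1:]))

def rouge2_recall(reference, candidate):
    """Fraction of the reference's bigrams that also appear in the candidate."""
    ref, cand = bigrams(reference), bigrams(candidate)
    overlap = sum(min(count, cand[gram]) for gram, count in ref.items())
    total = sum(ref.values())
    return overlap / total if total else 0.0

ref = "the patient was admitted for chest pain"
cand = "the patient was admitted with chest pain"
print(rouge2_recall(ref, cand))  # 4 of 6 reference bigrams match
```

BERTScore works differently, comparing contextual token embeddings rather than surface n-grams, which is why the two metrics can diverge.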
Study findings
Among the 1,600 patient cases included in the analysis, the mean age was 59.8 years with a standard deviation of 18.9 years, and 52% of patients were female. Automated evaluation metrics revealed that LLM-generated summaries outperformed those written by physicians in many aspects.
ROUGE-2 scores were significantly higher for LLM-generated summaries than for physician-written summaries, at 0.322 versus 0.088, respectively. Similarly, BERTScore values were higher at 0.859 compared to 0.796, and the Source Chunking Approach for Large-scale inconsistency Evaluation (SCALE) likewise favored the LLM at 0.691 versus 0.456. These results indicate that LLM-generated summaries showed greater lexical similarity and higher fidelity to the source notes, and provided more detailed content, than their human-written counterparts.
In clinical evaluations, the quality of LLM-generated summaries was comparable to, though slightly inferior to, physician-written summaries on several dimensions. On a one-to-five Likert scale, LLM-generated summaries scored lower on usefulness, completeness, curation, readability, correctness, and patient safety. Despite these differences, the automated summaries were generally considered acceptable for clinical use, and none of the identified issues was judged life-threatening.
When assessing worst-case scenarios, clinicians identified potential level-two safety risks, including incompleteness and faulty logic in 8.7% and 7.3% of LLM-generated summaries, respectively, whereas physician-written summaries carried none of these risks. Hallucinations were rare in LLM-generated summaries: the five identified cases all received safety scores between four and five, indicating mild to negligible risk. Overall, LLM-generated notes had a higher inaccuracy rate at 9.6%, compared with 2% for physician-written notes, although these inaccuracies rarely carried significant safety implications.
Interrater reliability was assessed using intraclass correlation coefficients (ICCs). The ICCs showed good agreement among the three expert raters for completeness, curation, correctness, and usefulness at 0.79, 0.70, 0.76, and 0.74, respectively, while readability achieved only fair reliability with an ICC of 0.59.
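The specific ICC form used is not stated in this summary. As an illustration, the common two-way random-effects, absolute-agreement, single-rater coefficient ICC(2,1) can be computed from a ratings matrix (notes as rows, raters as columns) via the standard Shrout-Fleiss mean-square decomposition:

```python
import numpy as np

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single rater.
    `ratings` is an n-targets x k-raters matrix."""
    Y = np.asarray(ratings, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between targets
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between raters
    ss_err = ((Y - grand) ** 2).sum() - ss_rows - ss_cols  # residual
    msr = ss_rows / (n - 1)
    msc = ss_cols / (k - 1)
    mse = ss_err / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Toy example: two raters scoring three notes in perfect agreement
print(icc2_1([[4, 4], [5, 5], [3, 3]]))  # 1.0
```

With mild disagreement between raters the coefficient drops below 1, which is the behavior the reported 0.59-0.79 range reflects.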
Conclusions
The current study successfully generated EM-to-IP handoff notes using a refined LLM and rule-based approach within a user-developed template.
Traditional automated metrics suggested superior LLM performance. However, manual clinical assessment revealed that although most LLM-generated notes achieved promising quality scores between four and five, they were generally inferior to physician-written notes. Detected errors, including incompleteness and faulty logic, occasionally posed moderate safety risks, although fewer than 10% of notes were affected, compared with physician-written notes, which were largely free of such issues.
Journal Reference:
- Hartman, V., Zhang, X., Poddar, R., et al. (2024). Developing and Evaluating Large Language Model-Generated Emergency Medicine Handoff Notes. JAMA Network Open. doi:10.1001/jamanetworkopen.2024.48723