In a recent study published in the journal PLOS Digital Health, researchers assessed and compared the clinical knowledge and diagnostic reasoning capabilities of large language models (LLMs) with those of human ophthalmology experts.
Study: Large language models approximate expert-level clinical knowledge and reasoning in ophthalmology: A cross-sectional study. Image credit: ozrimoz / Shutterstock
Background
Generative Pre-trained Transformers (GPTs), such as GPT-3.5 and GPT-4, are advanced language models trained on massive Internet-based datasets. They power ChatGPT, a conversational artificial intelligence (AI) noted for its success in medical applications. Although earlier models struggled in specialized medical tests, GPT-4 shows significant advances. Concerns remain about data “contamination” and the clinical relevance of test scores. Further research is needed to validate the clinical applicability and safety of language models in real medical settings and to address existing limitations in their expert knowledge and reasoning abilities.
About the study
The questions for the Part 2 examination of the Fellowship of the Royal College of Ophthalmologists (FRCOphth) were extracted from a specialist manual that is not widely available online, minimizing the likelihood of these questions appearing in LLM training data. A total of 360 multiple-choice questions spanning six chapters were extracted, and a pool of 90 questions was isolated for a mock examination used to compare the performance of LLMs and doctors. Two researchers aligned these questions with the categories established by the Royal College of Ophthalmologists and classified each question according to Bloom’s taxonomy of cognitive processes. Questions with non-text elements that were unsuitable for LLM input were excluded.
Exam questions were entered into ChatGPT (GPT-3.5 and GPT-4) to collect responses, repeating the prompt up to three times per question where necessary. Once other models, such as Bard and HuggingChat, became available, the same tests were conducted with them. Correct answers, as defined by the textbook, were recorded for comparison.
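The retry procedure described above can be pictured with a minimal sketch. The `ask_model` function below is a hypothetical stand-in for pasting a question into a chatbot interface and reading back the reply; the study itself used the models' web interfaces rather than any programmatic access.

```python
# Illustrative sketch of the response-collection loop; `ask_model` is a
# hypothetical placeholder, not an API used in the study.
import re

def ask_model(question: str) -> str:
    """Hypothetical stand-in for submitting a question to a chatbot."""
    return "The answer is B."

def extract_choice(reply: str) -> str | None:
    """Pull a single option letter (A-E) out of a free-text reply, if any."""
    match = re.search(r"\b([A-E])\b", reply)
    return match.group(1) if match else None

def collect_answer(question: str, max_attempts: int = 3) -> str | None:
    """Repeat the prompt up to three times until a definite option is given."""
    for _ in range(max_attempts):
        choice = extract_choice(ask_model(question))
        if choice is not None:
            return choice
    return None  # recorded as no definite answer
```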
Five expert ophthalmologists, three trainee ophthalmologists, and two unspecialized junior doctors independently completed the mock examination to provide a practical benchmark for the models. Their answers were then compared with those of the LLMs. After testing, the expert ophthalmologists rated the LLMs’ responses for accuracy and relevance on a Likert scale, without knowing which model provided which response.
The study was statistically powered to detect significant performance differences between LLMs and human doctors, testing the null hypothesis that both would perform similarly. Various statistical tests, including chi-squared and paired t-tests, were applied to compare performance and to assess the consistency and reliability of LLM responses relative to human responses.
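As a rough illustration of the kind of comparison involved, the sketch below runs a chi-squared test on a 2x2 table of correct and incorrect answer counts for two respondents. The counts are placeholders for illustration only, not the study’s data, and the study’s exact analysis pipeline may differ.

```python
# Minimal sketch: chi-squared test on correct/incorrect tallies.
# All counts below are made-up placeholders, not results from the paper.
from scipy.stats import chi2_contingency

gpt4_correct, gpt4_incorrect = 60, 27      # placeholder tallies out of 87 questions
human_correct, human_incorrect = 58, 29    # placeholder tallies for one doctor

table = [[gpt4_correct, gpt4_incorrect],
         [human_correct, human_incorrect]]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi2 = {chi2:.2f}, p = {p_value:.3f}")  # p > 0.05 would fail to reject the null
```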
Study results
Of the 360 questions contained in the FRCOphth Part 2 exam manual, 347 were selected for use, including 87 from the mock examination chapter. The excluded questions were mostly those containing images or tables, which were unsuitable for input into LLM interfaces.
Performance comparisons revealed that GPT-4 significantly outperformed GPT-3.5, with a correct response rate of 61.7% versus 48.41%. This improvement was consistent across question types and subjects, as categorized by the Royal College of Ophthalmologists. Detailed results and statistical analyses further confirmed the strong performance of GPT-4, making it competitive with other LLMs and with human doctors, especially junior doctors and trainees.
Exam features and detailed performance data. Topic and question type distributions are presented along with the scores achieved by LLMs (GPT-3.5, GPT-4, LLaMA, and PaLM 2), expert ophthalmologists (E1-E5), trainee ophthalmologists (T1-T3), and unspecialized junior doctors (J1-J2). Median scores do not necessarily sum to the overall median score, as fractional scores are impossible.
In the specially adapted 87-question mock examination, GPT-4 not only led among the LLMs but also scored comparably to the expert ophthalmologists and significantly better than the trainees and junior doctors. Performance across the participant groups showed that while expert ophthalmologists maintained the highest accuracy, trainees approached their level, far outperforming the junior doctors without ophthalmology specialization.
Statistical tests also highlighted that agreement between the responses given by different LLMs and human participants was generally low to moderate, indicating variation in reasoning and application of knowledge between groups. This was particularly evident when differences in knowledge between the models and the human doctors were compared.
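One common way to quantify pairwise answer agreement of this kind is Cohen’s kappa, sketched below for an LLM and a doctor; the exact agreement statistic used in the paper may differ, and the answer lists here are placeholders.

```python
# Sketch of pairwise agreement via Cohen's kappa (illustrative data only).
from sklearn.metrics import cohen_kappa_score

llm_answers    = ["A", "C", "B", "D", "A", "E", "B", "C"]   # placeholder choices
doctor_answers = ["A", "B", "B", "D", "C", "E", "B", "A"]   # placeholder choices

kappa = cohen_kappa_score(llm_answers, doctor_answers)
print(f"Cohen's kappa = {kappa:.2f}")  # roughly 0.2-0.6 reads as low-to-moderate agreement
```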
A detailed comparison of the mock questions against the real examination standards showed that the mock setup closely mirrored the FRCOphth Part 2 Written Examination in difficulty and structure, as agreed upon by the ophthalmologists involved. This alignment ensured that the assessment of LLM and human responses was based on a realistic and clinically relevant framework.
In addition, qualitative feedback from the ophthalmologists confirmed a strong preference for GPT-4 over GPT-3.5, in line with the quantitative performance data. The higher accuracy and relevance scores for GPT-4 highlight its potential utility in clinical settings, particularly in ophthalmology.
Finally, an analysis of the cases where all LLMs failed to provide the correct answer showed no consistent patterns related to the complexity or subject matter of the questions.