News

The impressive diagnostic skills of GPT-4 were demonstrated

By healthtost | April 21, 2024

In a recent study published in the journal PLOS Digital Health, researchers assessed and compared the clinical knowledge and diagnostic reasoning capabilities of large language models (LLMs) with those of human ophthalmology experts.

Study: Large language models approximate expert-level clinical knowledge and reasoning in ophthalmology: A cross-sectional study. Image credit: ozrimoz / Shutterstock

Background

Generative Pre-trained Transformers (GPTs), GPT-3.5 and GPT-4, are advanced language models trained on massive Internet-based datasets. They power ChatGPT, a conversational artificial intelligence (AI) noted for its success in medical applications. While earlier models struggled on specialized medical examinations, GPT-4 shows significant advances. Concerns remain about data “contamination” and the clinical relevance of test scores. Further research is needed to validate the clinical applicability and safety of language models in real medical settings and to address existing limitations in their expert knowledge and reasoning abilities.

About the study

Questions for the Part 2 examination of the Fellowship of the Royal College of Ophthalmologists (FRCOphth) were extracted from a specialist textbook that is not widely available online, minimizing the likelihood of these questions appearing in LLM training data. A total of 360 multiple-choice questions spanning six chapters were extracted, and a pool of 90 questions was isolated for a mock examination used to compare the performance of LLMs and doctors. Two researchers aligned these questions with the categories established by the Royal College of Ophthalmologists and classified each question according to Bloom’s levels of cognitive processes. Questions with non-text elements that were unsuitable for LLM input were excluded.

Exam questions were entered into versions of ChatGPT (GPT-3.5 and GPT-4) to collect responses, repeating the process up to three times per question where necessary. Once other models such as Bard and HuggingChat became available, similar tests were conducted. Correct answers, as defined by the textbook, were noted for comparison.
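
The study does not publish its querying code, but the procedure described above (presenting each multiple-choice question to a chat model and retrying up to three times when no usable answer is returned) might look roughly like the following Python sketch. The client, model name, prompt wording, and answer-parsing helper are illustrative assumptions, not the authors’ actual implementation.

```python
import re
from openai import OpenAI  # assumed tooling; the study does not specify how queries were submitted

client = OpenAI()  # assumes an API key is configured in the environment


def ask_mcq(question: str, options: dict[str, str],
            model: str = "gpt-4", max_attempts: int = 3) -> str | None:
    """Ask one multiple-choice question, retrying up to three times
    if no single option letter can be parsed from the reply."""
    prompt = question + "\n" + "\n".join(f"{k}. {v}" for k, v in options.items())
    for _ in range(max_attempts):
        reply = client.chat.completions.create(
            model=model,
            messages=[
                {"role": "system", "content": "Answer with the single letter of the best option."},
                {"role": "user", "content": prompt},
            ],
        )
        text = reply.choices[0].message.content or ""
        match = re.search(r"\b([A-E])\b", text)
        if match:
            return match.group(1)  # usable single-letter answer found
    return None  # no parsable answer after the allowed attempts
```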

Five expert ophthalmologists, three trainee ophthalmologists and two unspecialised junior doctors independently completed the mock examination to provide a human benchmark for the models. Their answers were then compared with those of the LLMs. After testing, the ophthalmologists rated the LLMs’ responses for accuracy and relevance on a Likert scale, without knowing which model provided which response.

The study was statistically powered to detect significant performance differences between LLMs and human doctors, testing the null hypothesis that both would perform similarly. Various statistical tests, including chi-squared and paired t-tests, were applied to compare performance and to assess the consistency and reliability of LLM responses relative to human responses; a minimal sketch of one such comparison follows.
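
As a concrete illustration, a chi-squared test on counts of correct versus incorrect answers for two models could be run as below. The counts are rough back-calculations from the reported percentages on 347 questions, shown purely for illustration rather than taken from the study’s published tables.

```python
from scipy.stats import chi2_contingency

# 2x2 contingency table: rows are models, columns are (correct, incorrect) counts.
# Approximate illustrative counts on 347 questions (~61.7% and ~48.4% correct);
# not the study's actual published data.
table = [
    [214, 133],  # GPT-4
    [168, 179],  # GPT-3.5
]

chi2, p_value, dof, expected = chi2_contingency(table)
print(f"chi-squared = {chi2:.2f}, dof = {dof}, p = {p_value:.4g}")
```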

Study results

Of the 360 questions contained in the FRCOphth Part 2 exam textbook, 347 were selected for use, including 87 from the mock exam chapter. The excluded questions were mostly those with pictures or tables, which were unsuitable for input into LLM interfaces.

Performance comparisons revealed that GPT-4 significantly outperformed GPT-3.5, with a correct response rate of 61.7% versus 48.41%. This advantage was consistent across the question types and subject areas defined by the Royal College of Ophthalmologists. Detailed results and statistical analyses further confirmed the strong performance of GPT-4, making it competitive with other LLMs and with human doctors, especially junior doctors and trainees.

Exam features and detailed performance data. Topic and question type distributions are presented along with the scores achieved by the LLMs (GPT-3.5, GPT-4, LLaMA and PaLM 2), expert ophthalmologists (E1-E5), trainee ophthalmologists (T1-T3) and unspecialised junior doctors (J1-J2). Median scores do not necessarily add up to the overall median score, as fractional scores are impossible.

In the specially adapted 87-question mock exam, GPT-4 not only led among the LLMs but also scored comparably to expert ophthalmologists and significantly better than junior doctors and trainees. Performance across the participant groups showed that while expert ophthalmologists maintained the highest accuracy, trainees approached these levels, far outperforming the unspecialised junior doctors.

Statistical tests also highlighted that agreement between responses given by different LLMs and human participants was generally low to moderate, indicating variation in reasoning and application of knowledge between groups. This was particularly evident when the differences in knowledge between the models and human doctors were compared.
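
The summary does not name the agreement statistic used, but pairwise agreement between two sets of answers to the same questions is commonly quantified with Cohen’s kappa; a minimal sketch, using made-up answer letters rather than study data:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical option letters chosen by two responders on the same ten questions
# (placeholders for illustration, not data from the study).
llm_answers    = ["A", "C", "B", "D", "A", "B", "C", "A", "D", "B"]
doctor_answers = ["A", "C", "D", "D", "A", "B", "C", "B", "D", "B"]

kappa = cohen_kappa_score(llm_answers, doctor_answers)
print(f"Cohen's kappa = {kappa:.2f}")  # ~0 means chance-level agreement, 1 means perfect agreement
```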

A detailed examination of the mock questions against the actual exam standards showed that the mock setup closely mirrored the actual FRCOphth Part 2 Written Exam in difficulty and structure, as agreed upon by the ophthalmologists involved. This alignment ensured that the assessment of LLMs and human responses was based on a realistic and clinically relevant framework.

In addition, qualitative feedback from the ophthalmologists confirmed a strong preference for GPT-4 over GPT-3.5, consistent with the quantitative performance data. The higher accuracy and relevance ratings for GPT-4 highlight its potential utility in clinical settings, particularly in ophthalmology.

Finally, an analysis of the cases where all LLMs failed to provide the correct answer showed no consistent patterns related to the complexity or subject matter of the questions.
