Researchers at Osaka Metropolitan University have developed a practical way to detect and correct common labeling errors in large radiographic collections. By automatically verifying body segment, projection, and rotation labels, their method improves the reliability of deep learning models used for routine clinical tasks and research projects.
Deep learning models using chest X-rays have made remarkable progress in recent years, evolving to achieve tasks that are challenging for humans, such as estimating cardiac and respiratory function.
However, AI models are only as good as the images fed into them. Although X-ray images taken in hospitals are tagged with information such as the body part and imaging method before they are fed into a deep learning model, this is mostly done manually, which means errors, missing data, and inconsistencies occur, especially in busy hospitals.
This is further complicated by image orientation. An X-ray may be taken from front to back or back to front, and the stored image may also be rotated sideways or upside down, adding further inconsistency to the dataset.
In large imaging collections, these minor errors quickly add up to hundreds or thousands of mislabeled images.
A research team at Osaka Metropolitan University School of Medicine, including graduate student Yasuhito Mitsuyama and professor Daiju Ueda, aimed to catch mislabeled data by automatically detecting errors before they affect the input to deep learning models.
The team developed two models: Xp-Bodypart-Checker, which classifies X-rays by body part; and CXp-Projection-Rotation-Checker, which detects the projection and rotation of chest X-rays.
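The article does not detail the models' architecture, but a checker of this kind can be framed as ordinary image classification. Below is a minimal sketch of a four-way rotation checker, assuming PyTorch and a ResNet-18 backbone (an illustrative choice, not necessarily what the authors used); body part and projection checkers would follow the same pattern with different label sets.

```python
# Hypothetical sketch of a rotation checker for radiographs.
# The backbone (ResNet-18) and all names here are illustrative
# assumptions, not the architecture from the paper.
import torch
import torch.nn as nn
from torchvision import models

ROTATIONS = [0, 90, 180, 270]  # classes: degrees of in-plane rotation

def build_rotation_checker(num_classes: int = len(ROTATIONS)) -> nn.Module:
    # Start from a generic ImageNet-style CNN; radiographs are grayscale,
    # so replace the first convolution to accept a single channel.
    net = models.resnet18(weights=None)
    net.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

if __name__ == "__main__":
    model = build_rotation_checker()
    # One synthetic grayscale "radiograph" batch: (batch, channel, H, W).
    batch = torch.randn(4, 1, 224, 224)
    logits = model(batch)             # shape: (4, 4)
    predicted = logits.argmax(dim=1)  # index into ROTATIONS
    print([ROTATIONS[i] for i in predicted.tolist()])
```

Swapping in a single-channel first convolution lets the network accept grayscale radiographs directly instead of three-channel photographs, which is a common adaptation when applying standard vision backbones to medical images.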
Xp-Bodypart-Checker achieved 98.5% accuracy, while CXp-Projection-Rotation-Checker achieved 98.5% accuracy for projection and 99.3% for rotation. The researchers expect that integrating both into a single model will make the approach even more useful in clinical settings.
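In a workflow like the one described, such a checker would run over a collection before training and flag images whose stored tags disagree with the model's prediction, so that only those cases need human review. The sketch below is a hypothetical quality-control pass along those lines; `flag_suspect_labels` and its inputs are illustrative names, not part of the published pipeline.

```python
# Hypothetical quality-control pass: compare stored metadata against
# model predictions and flag disagreements for human review before
# training. `checker` is any trained classifier like the sketch above.
import torch

@torch.no_grad()
def flag_suspect_labels(checker, images, stored_rotations,
                        rotations=(0, 90, 180, 270)):
    """Return indices of images whose stored rotation tag
    disagrees with the checker's prediction."""
    checker.eval()
    logits = checker(images)
    predicted = [rotations[i] for i in logits.argmax(dim=1).tolist()]
    return [i for i, (pred, stored) in enumerate(zip(predicted, stored_rotations))
            if pred != stored]

# Example usage: `images` is an (N, 1, 224, 224) tensor and
# `stored_rotations` a list of degrees pulled from each file's metadata.
# suspects = flag_suspect_labels(model, images, stored_rotations)
# print(f"{len(suspects)} images flagged for manual review")
```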
Although the results were excellent, the team hopes to further refine the method for clinical use.
“We plan to retrain the model on radiographs that were flagged as mislabeled despite being correctly labeled, as well as those that were not flagged but were actually mislabeled, to achieve even greater accuracy.”
Yasuhito Mitsuyama, Osaka Metropolitan University
The study was published in European Radiology.
Journal Reference:
Mitsuyama, Y., et al. (2025). Deep learning models for radiograph body segment classification and chest radiograph projection/orientation classification: a multi-institutional study. European Radiology. DOI: 10.1007/s00330-025-12053-7. https://link.springer.com/article/10.1007/s00330-025-12053-7.
