AI is increasingly embedded in medical devices, but it doesn't always get things right, and sometimes its mistakes look convincing. The journal club article by Granstedt et al. (2025), "Hallucinations in medical devices", defines hallucinations as plausible errors made by AI systems in devices, which can be either impactful (e.g., a fake lesion that changes a diagnosis) or benign (e.g., a tiny added feature that doesn't affect care). Unlike traditional imaging artifacts that clinicians are trained to spot, AI hallucinations can be subtle, polished and hard to detect, even for experts.
The authors explore hallucinations in image reconstruction, synthetic data generation, and large language/multimodal models used in healthcare. They argue that hallucinations cannot be completely eliminated, because they are a built-in limitation of current neural network methods, and that reducing them often comes at a performance cost.
Clinical Safety Challenge: Hallucinations create new safety and governance challenges, especially when multiple AI systems are chained together in a clinical workflow.
Article key message: AI devices don't need to be perfect to be useful, but regulators, developers and clinicians must actively measure the plausibility and impact of errors, design better evaluation studies, and recognise hallucinations as a distinct, patient-relevant risk in AI-enabled care.
Journal Club Article: Granstedt, J., Kc, P., Deshpande, R., Garcia, V., & Badano, A. (2025). Hallucinations in medical devices. Artificial Intelligence in the Life Sciences, 100145.