Publication:
Development and validation of an AI algorithm to generate realistic and meaningful counterfactuals for retinal imaging based on diffusion models.

datacite.rights: open.access
dc.contributor.author: Ilanchezian, Indu
dc.contributor.author: Boreiko, Valentyn
dc.contributor.author: Kühlewein, Laura
dc.contributor.author: Huang, Ziwei
dc.contributor.author: Seçkin Ayhan, Murat
dc.contributor.author: Hein, Matthias
dc.contributor.author: Koch, Lisa
dc.contributor.author: Berens, Philipp
dc.date.accessioned: 2025-07-07T13:44:18Z
dc.date.available: 2025-07-07T13:44:18Z
dc.date.issued: 2025-05
dc.description.abstract: Counterfactual reasoning is often used by humans in clinical settings. For imaging-based specialties such as ophthalmology, it would be beneficial to have an AI model that can create counterfactual images, illustrating answers to questions like "If the subject had had diabetic retinopathy, how would the fundus image have looked?". Such an AI model could aid in the training of clinicians or in patient education through visuals that answer counterfactual queries. We used large-scale retinal image datasets containing color fundus photography (CFP) and optical coherence tomography (OCT) images to train ordinary and adversarially robust classifiers that distinguish healthy and disease categories. In addition, we trained an unconditional diffusion model to generate diverse retinal images, including ones with disease lesions. During sampling, we then combined the diffusion model with classifier guidance to achieve realistic and meaningful counterfactual images while maintaining the subject's retinal image structure. We found that our method generated counterfactuals by introducing or removing the necessary disease-related features. We conducted an expert study to validate that the generated counterfactuals are realistic and clinically meaningful. Generated color fundus images were indistinguishable from real images and were shown to contain clinically meaningful lesions. Generated OCT images appeared realistic, but could be identified by experts with higher-than-chance probability. This shows that combining diffusion models with classifier guidance can achieve realistic and meaningful counterfactuals even for high-resolution medical images such as CFP images. Such images could be used for patient education or training of medical professionals.
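
The abstract describes steering an unconditional diffusion model with a disease classifier at sampling time. For readers unfamiliar with classifier guidance, the following is a minimal, generic sketch of one guided reverse-diffusion step; it is not the authors' implementation. The `diffusion` and `classifier` objects, the `p_mean_variance` method, and the `guided_reverse_step` function are hypothetical stand-ins for a trained diffusion model and a retinal disease classifier.

import torch

def guided_reverse_step(diffusion, classifier, x_t, t, target_class, guidance_scale=1.0):
    """One reverse-diffusion step with classifier guidance (generic sketch,
    not the authors' code). `diffusion` and `classifier` stand in for a
    trained unconditional diffusion model and a retinal disease classifier."""
    # Unconditional reverse step: Gaussian mean and variance for x_{t-1}.
    # `p_mean_variance` is a hypothetical method of the diffusion wrapper.
    mean, variance = diffusion.p_mean_variance(x_t, t)

    # Gradient of the classifier's log-probability for the target class
    # (e.g. "diabetic retinopathy") with respect to the noisy image x_t.
    # In practice the classifier may be time-conditioned or applied to a
    # denoised estimate of x_0 rather than to x_t directly.
    with torch.enable_grad():
        x_in = x_t.detach().requires_grad_(True)
        log_probs = torch.log_softmax(classifier(x_in), dim=-1)
        selected = log_probs[torch.arange(x_t.shape[0]), target_class].sum()
        grad = torch.autograd.grad(selected, x_in)[0]

    # Classifier-guided mean shift: push the sample toward the target class
    # while the diffusion prior keeps it on the manifold of realistic images.
    guided_mean = mean + guidance_scale * variance * grad

    # Sample x_{t-1}; no noise is added at the final step (t == 0).
    noise = torch.randn_like(x_t)
    nonzero = (t != 0).float().view(-1, *([1] * (x_t.dim() - 1)))
    return guided_mean + nonzero * variance.sqrt() * noise

A common recipe for image-specific counterfactuals of this kind (under the assumption that the authors follow the usual approach) is to noise the real CFP or OCT image to an intermediate timestep and then run guided denoising from there, which is what preserves the subject's retinal structure; the guidance scale then trades off realism against counterfactual strength.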
dc.description.sponsorship: University Clinic for Diabetes, Endocrinology, Clinical Nutrition and Metabolism (UDEM)
dc.identifier.doi: 10.48620/89115
dc.identifier.pmid: 40373008
dc.identifier.publisherDOI: 10.1371/journal.pdig.0000853
dc.identifier.uri: https://boris-portal.unibe.ch/handle/20.500.12422/211166
dc.language.iso: en
dc.publisher: Public Library of Science
dc.relation.ispartof: PLOS Digital Health
dc.relation.issn: 2767-3170
dc.subject.ddc: 600 - Technology::610 - Medicine & health
dc.title: Development and validation of an AI algorithm to generate realistic and meaningful counterfactuals for retinal imaging based on diffusion models.
dc.type: article
dspace.entity.type: Publication
dspace.file.type: text
oaire.citation.issue: 5
oaire.citation.startPage: e0000853
oaire.citation.volume: 4
oairecerif.author.affiliation: University Clinic for Diabetes, Endocrinology, Clinical Nutrition and Metabolism (UDEM)
unibe.contributor.role: author
unibe.description.ispublished: pub
unibe.refereed: true
unibe.subtype.article: journal

Files

Original bundle
Name: pdig.0000853.pdf
Size: 20.29 MB
Format: Adobe Portable Document Format
File Type: text
License: https://creativecommons.org/licenses/by/4.0
Content: published