Publication: Application of ChatGPT as a content generation tool in continuing medical education: acne as a test topic.
cris.virtual.author-orcid | 0000-0001-8161-6138 | |
cris.virtualsource.author-orcid | f7d9d367-6080-4fe7-9ee6-d72fcc445aed | |
cris.virtualsource.author-orcid | f008391c-fcfc-43fe-9155-a480a9df287e | |
datacite.rights | open.access | |
dc.contributor.author | Naldi, Luigi | |
dc.contributor.author | Bettoli, Vincenzo | |
dc.contributor.author | Santoro, Eugenio | |
dc.contributor.author | Valetto, Maria Rosa | |
dc.contributor.author | Bolzon, Anna | |
dc.contributor.author | Cassalia, Fortunato | |
dc.contributor.author | Cazzaniga, Simone | |
dc.contributor.author | Cima, Sergio | |
dc.contributor.author | Danese, Andrea | |
dc.contributor.author | Emendi, Silvia | |
dc.contributor.author | Ponzano, Monica | |
dc.contributor.author | Scarpa, Nicoletta | |
dc.contributor.author | Dri, Pietro | |
dc.date.accessioned | 2025-03-06T11:46:52Z | |
dc.date.available | 2025-03-06T11:46:52Z | |
dc.date.issued | 2024-11-28 | |
dc.description.abstract | The large language model (LLM) ChatGPT can answer open-ended and complex questions, but its accuracy in providing reliable medical information requires careful assessment. As part of the AICHECK (Artificial Intelligence for CME Health E-learning Contents and Knowledge) Study, aimed at evaluating the potential of ChatGPT in continuing medical education (CME), we compared ChatGPT-generated educational content with the recommendations of the National Institute for Health and Care Excellence (NICE) guidelines on acne vulgaris. ChatGPT version 4 was exposed to a 23-item questionnaire developed by an experienced dermatologist. A panel of five dermatologists rated the answers positively in terms of "quality" (87.8%), "readability" (94.8%), "accuracy" (75.7%), "thoroughness" (85.2%), and "consistency" with guidelines (76.8%). The references provided by ChatGPT obtained positive ratings for "pertinence" (94.6%), "relevance" (91.2%), and "update" (62.3%). Internal reproducibility was adequate both for answers (93.5%) and references (67.4%). Answers related to issues of uncertainty and/or controversy in the scientific community scored the lowest. This study underscores the need to develop rigorous evaluation criteria for AI-generated medical content and for expert oversight to ensure accuracy and guideline adherence. | |
dc.description.sponsorship | Clinic of Dermatology | |
dc.identifier.doi | 10.48620/85797 | |
dc.identifier.pmid | 39969058 | |
dc.identifier.publisherDOI | 10.4081/dr.2024.10138 | |
dc.identifier.uri | https://boris-portal.unibe.ch/handle/20.500.12422/205833 | |
dc.language.iso | en | |
dc.publisher | PAGEpress | |
dc.relation.ispartof | Dermatology Reports | |
dc.relation.issn | 2036-7392 | |
dc.subject.ddc | 600 - Technology::610 - Medicine & health | |
dc.title | Application of ChatGPT as a content generation tool in continuing medical education: acne as a test topic. | |
dc.type | article | |
dspace.entity.type | Publication | |
dspace.file.type | text | |
oairecerif.author.affiliation | Clinic of Dermatology | |
unibe.contributor.role | author | |
unibe.contributor.role | author | |
unibe.contributor.role | author | |
unibe.contributor.role | author | |
unibe.contributor.role | author | |
unibe.contributor.role | author | |
unibe.contributor.role | author | |
unibe.contributor.role | author | |
unibe.contributor.role | author | |
unibe.contributor.role | author | |
unibe.contributor.role | author | |
unibe.contributor.role | author | |
unibe.contributor.role | author | |
unibe.description.ispublished | inpress | |
unibe.refereed | true | |
unibe.subtype.article | journal |
Files
Original bundle
- Name: DERMA+10138.pdf
- Size: 2.46 MB
- Format: Adobe Portable Document Format
- File Type: text
- License: https://creativecommons.org/licenses/by-nc/4.0
- Content: published