
BORIS Portal

Bern Open Repository and Information System

Application of ChatGPT as a content generation tool in continuing medical education: acne as a test topic.

BORIS DOI
10.48620/85797
Date of Publication
November 28, 2024
Publication Type
Article
Division/Institute

Clinic of Dermatology...

Author
Naldi, Luigi
Bettoli, Vincenzo
Santoro, Eugenio
Valetto, Maria Rosa
Bolzon, Anna
Cassalia, Fortunato
Cazzaniga, Simone
Cima, Sergio
Danese, Andrea
Emendi, Silvia
Ponzano, Monica
Scarpa, Nicoletta
Dri, Pietro
Subject(s)

600 - Technology::610...

Series
Dermatology Reports
ISSN or ISBN (if monograph)
2036-7392
Publisher
PAGEpress
Language
English
Publisher DOI
10.4081/dr.2024.10138
PubMed ID
39969058
Description
The large language model (LLM) ChatGPT can answer open-ended and complex questions, but its accuracy in providing reliable medical information requires careful assessment. As part of the AICHECK (Artificial Intelligence for CME Health E-learning Contents and Knowledge) Study, aimed at evaluating the potential of ChatGPT in continuing medical education (CME), we compared ChatGPT-generated educational content with the recommendations of the National Institute for Health and Care Excellence (NICE) guidelines on acne vulgaris. ChatGPT version 4 was exposed to a 23-item questionnaire developed by an experienced dermatologist. A panel of five dermatologists rated the answers positively in terms of "quality" (87.8%), "readability" (94.8%), "accuracy" (75.7%), "thoroughness" (85.2%), and "consistency" with guidelines (76.8%). The references provided by ChatGPT obtained positive ratings for "pertinence" (94.6%), "relevance" (91.2%), and "update" (62.3%). Internal reproducibility was adequate both for answers (93.5%) and references (67.4%). Answers related to issues of uncertainty and/or controversy in the scientific community scored the lowest. This study underscores the need to develop rigorous evaluation criteria for AI-generated medical content and for expert oversight to ensure accuracy and guideline adherence.
Handle
https://boris-portal.unibe.ch/handle/20.500.12422/205833
File(s)
File
DERMA+10138.pdf
File Type
text
Format
Adobe PDF
Size
2.46 MB
License
Attribution-NonCommercial (CC BY-NC 4.0)
Publisher/Copyright statement
published
Content
Open