
BORIS Portal

Bern Open Repository and Information System


ChatGPT Generated Otorhinolaryngology Multiple-Choice Questions: Quality, Psychometric Properties, and Suitability for Assessments.

BORIS DOI
10.48620/76001
Publisher DOI
10.1002/oto2.70018
PubMed ID
39328276
Description
Objective
To explore Chat Generative Pretrained Transformer's (ChatGPT's) capability to create multiple-choice questions about otorhinolaryngology (ORL).
Study Design
Experimental question generation and exam simulation.
Setting
Tertiary academic center.
Methods
ChatGPT 3.5 was prompted: "Can you please create a challenging 20-question multiple-choice questionnaire about clinical cases in otolaryngology, offering five answer options?" The generated questionnaire was sent to medical students, residents, and consultants. The questions were evaluated against established quality criteria. Answers were anonymized, and the resulting data were analyzed for difficulty and internal consistency.
Results
ChatGPT 3.5 generated 20 exam questions, of which 1 was considered off-topic, 3 had a false answer, and 3 had multiple correct answers. The subspecialty distribution was as follows: 5 questions on otology, 5 on rhinology, and 10 on head and neck. Focus and relevance were good, while vignette and distractor quality was low. The level of difficulty was suitable for undergraduate medical students (n = 24) but too easy for residents (n = 30) and consultants (n = 10) in ORL. Cronbach's α was highest (.69) with 15 selected questions based on the students' results.
Conclusion
ChatGPT 3.5 is able to generate grammatically correct, simple ORL multiple-choice questions at a medical student level. However, the overall quality of the questions was average, requiring thorough review and revision by a medical expert to ensure suitability for future exams.
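The internal consistency reported in the Results (Cronbach's α = .69 over 15 selected items) follows the standard formula α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch of that computation, with purely illustrative response data rather than the study's actual answers:

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for binary-scored MCQ data.

    scores: list of rows, one per examinee; each row lists item
    scores (1 = correct, 0 = wrong). Uses sample variance (ddof=1).
    """
    k = len(scores[0])          # number of items
    def var(xs):                # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    item_vars = [var([row[i] for row in scores]) for i in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical exam: 4 examinees, 3 items.
responses = [
    [1, 1, 1],
    [1, 1, 0],
    [0, 0, 1],
    [0, 0, 0],
]
print(round(cronbach_alpha(responses), 3))  # → 0.6
```

Dropping weak items and recomputing α on the remaining set mirrors the paper's selection of 15 of the 20 generated questions to maximize consistency.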
Date of Publication
2024
Publication Type
Article
Keyword(s)
ChatGPT • artificial intelligence • exam • large language model • multiple choice question • otolaryngology
Language(s)
English
Contributor(s)
Lotto, Cecilia
Sheppard, Sean C.
Clinic of Ear, Nose and Throat Disorders (ENT)
Anschuetz, Wilma
Institut für Medizinische Lehre, Assessment und Evaluation, Praktisches Assessment (CS)
Institute for Medical Education, Assessment and Evaluation Unit (AAE)
Stricker, Daniel
Institute for Medical Education, Assessment and Evaluation Unit (AAE)
Institut für Medizinische Lehre, Assessment und Evaluation, Praktisches Assessment (CS)
Molinari, Giulia
Huwendiek, Sören
Institute for Medical Education, Assessment and Evaluation Unit (AAE)
Anschuetz, Lukas
Clinic of Ear, Nose and Throat Disorders, Head and Neck Surgery
Additional Credits
Clinic of Ear, Nose and Throat Disorders (ENT)
Institut für Medizinische Lehre, Assessment und Evaluation, Praktisches Assessment (CS)
Institute for Medical Education, Assessment and Evaluation Unit (AAE)
Clinic of Ear, Nose and Throat Disorders, Head and Neck Surgery
Series
OTO open
Publisher
Wiley
ISSN
2473-974X
Access(Rights)
open.access