Publication:
In ChatGPT we trust? Auditing how generative AIs understand and detect online political misinformation

cris.virtual.author-orcid: 0000-0001-7143-5317
cris.virtual.author-orcid: 0000-0003-3332-9294
datacite.rights: metadata.only
dc.contributor.author: Kuznetsova, Elizaveta
dc.contributor.author: Makhortykh, Mykola
dc.contributor.author: Baghumyan, Ani
dc.contributor.author: Urman, Aleksandra
dc.date.accessioned: 2024-10-25T18:14:02Z
dc.date.available: 2024-10-25T18:14:02Z
dc.date.issued: 2023-09-06
dc.description.abstract: The growing use of AI-driven systems creates new opportunities as well as risks for cyber politics. From search engines organising political information flows (Unkel & Haim, 2019) to personalised news feeds determining individual exposure to misinformation (Kuznetsova & Makhortykh, 2023), these systems increasingly shape how human actors perceive and engage with political matters worldwide. However, besides changing human interactions with cyber politics, the development of technology also gives rise to new types of non-human political actors that go beyond information curation (as search algorithms do) and are capable of generating and evaluating political information in a more nuanced way. In this paper, we focus on one type of non-human actor dealing with cyber politics: generative artificial intelligence (AI). Generative AIs, such as ChatGPT or MidJourney, are distinguished by their ability to generate new content in text or image format. More advanced forms of text-oriented generative AIs (e.g. ChatGPT or ChatSonic) are not only capable of producing content in a variety of textual formats but can also serve as conversational agents that interpret and evaluate human input (e.g. to detect whether it contains false information or has a certain political leaning). Consequently, such generative AIs can transform many aspects of cyber politics, including the use of misinformation in online environments, which is viewed as a major threat to liberal democracies. By identifying misinformation and raising users' awareness of it, generative AIs can curb the spread of false content and counter disinformation campaigns. However, by failing to deal with it properly, generative AIs can also facilitate the spread of misinformation online or even be used to generate and disseminate new types of false narratives. In this study, we examine the possible implications of the rise of generative AIs for online misinformation. To this end, we conduct an algorithmic audit of two commonly used generative AIs: ChatGPT and ChatSonic. Specifically, we examine how these AIs understand the concepts of disinformation and misinformation and to what degree they distinguish them from the related concept of digital propaganda, using definition-oriented inquiries. We then systematically examine the ability of the generative AIs to differentiate between true and false claims relating to two case studies: the war in Ukraine and the COVID-19 pandemic.
dc.description.sponsorship: Institut für Kommunikations- und Medienwissenschaft (ikmb)
dc.identifier.uri: https://boris-portal.unibe.ch/handle/20.500.12422/170369
dc.language.iso: en
dc.relation.conference: ECPR 2023
dc.subject: chatGPT
dc.subject: generative AI
dc.subject: misinformation
dc.subject: propaganda
dc.subject: disinformation
dc.subject: Holocaust
dc.subject: war in Ukraine
dc.subject: Russia
dc.subject: denial
dc.subject: climate change
dc.subject: LGBTQ+
dc.subject: COVID
dc.subject.ddc: 000 - Computer science, knowledge & systems
dc.subject.ddc: 300 - Social sciences, sociology & anthropology
dc.subject.ddc: 300 - Social sciences, sociology & anthropology::320 - Political science
dc.subject.ddc: 900 - History
dc.title: In ChatGPT we trust? Auditing how generative AIs understand and detect online political misinformation
dc.type: conference_item
dspace.entity.type: Publication
oaire.citation.conferenceDate: 4-8 September 2023
oairecerif.author.affiliation: Institut für Kommunikations- und Medienwissenschaft (ikmb)
oairecerif.author.affiliation: Institut für Kommunikations- und Medienwissenschaft (ikmb)
oairecerif.author.affiliation: Institut für Kommunikations- und Medienwissenschaft (ikmb)
oairecerif.identifier.url: https://ecpr.eu/Events/Event/PaperDetails/70765
unibe.contributor.role: creator
unibe.contributor.role: creator
unibe.contributor.role: creator
unibe.contributor.role: creator
unibe.description.ispublished: unpub
unibe.eprints.legacyId: 186815
unibe.refereed: true
unibe.subtype.conference: paper
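
The audit approach described in the abstract (definition-oriented inquiries plus true/false claim evaluation) can be illustrated with a minimal sketch. It assumes access to ChatGPT through the OpenAI Python client; the model name, prompt wording, and sample claim below are illustrative assumptions rather than the authors' actual protocol, and ChatSonic would need its own interface.

# Minimal sketch of prompting ChatGPT for the two audit tasks described in the
# abstract. Model choice, prompts, and the sample claim are assumptions made
# for illustration only; they do not reproduce the study's actual protocol.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask(prompt: str) -> str:
    """Send a single prompt and return the model's textual reply."""
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed model; the study audited ChatGPT and ChatSonic
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic output makes repeated audit runs easier to compare
    )
    return response.choices[0].message.content


# Definition-oriented inquiry: how does the model understand the concepts?
print(ask("How would you define misinformation, and how does it differ "
          "from disinformation and propaganda?"))

# Claim evaluation: can the model label a statement as true or false?
claim = "COVID-19 vaccines alter human DNA."  # illustrative false claim
print(ask(f"Is the following claim true or false? Answer and briefly explain: {claim}"))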
