BORIS Portal

Bern Open Repository and Information System

In ChatGPT we trust? Auditing how generative AIs understand and detect online political misinformation

Date of Publication
September 6, 2023
Publication Type
Conference Paper
Division/Institute

Institut für Kommunikations- und Medienwissenschaft (ikmb)

Author
Kuznetsova, Elizaveta
Makhortykh, Mykola
Institut für Kommunikations- und Medienwissenschaft (ikmb)
Baghumyan, Ani
Institut für Kommunikations- und Medienwissenschaft (ikmb)
Urman, Aleksandra
Institut für Kommunikations- und Medienwissenschaft (ikmb)
Subject(s)
000 - Computer scienc...
300 - Social sciences...
300 - Social sciences...
900 - History
Language
English
Uncontrolled Keywords
ChatGPT
generative AI
misinformation
propaganda
disinformation
Holocaust
war in Ukraine
Russia
denial
climate change
LGBTQ+
COVID
Description
The growing use of AI-driven systems creates new opportunities as well as risks for cyber politics. From search engines organising political information flows (Unkel & Haim, 2019) to personalised news feeds determining individual exposure to misinformation (Kuznetsova & Makhortykh, 2023), these systems increasingly shape how human actors perceive and engage with political matters worldwide. However, besides changing human interactions with cyber politics, technological development also gives rise to new types of non-human political actors that go beyond information curation (as search algorithms do) and are capable of generating and evaluating political information in a more nuanced way.

In this paper, we focus on one type of non-human actor dealing with cyber politics: generative artificial intelligence (AI). Generative AIs, such as ChatGPT or MidJourney, are distinguished by their ability to generate new content in text or image format. More advanced forms of text-oriented generative AIs (e.g. ChatGPT or ChatSonic) are not only capable of producing content in a variety of textual formats but can also serve as conversational agents that interpret and evaluate human input (e.g. to detect whether it contains false information or has a certain political leaning). Consequently, such generative AIs can transform many aspects of cyber politics, including the use of misinformation in online environments, which is viewed as a major threat to liberal democracies. By identifying misinformation and making users aware of it, generative AIs can curb the spread of false content and counter disinformation campaigns. However, by failing to deal with misinformation properly, generative AIs can also facilitate its spread online or even be used to generate and disseminate new types of false narratives.

In this study, we examine the possible implications of the rise of generative AIs for online misinformation. To this end, we conduct an algorithmic audit of two commonly used generative AIs: ChatGPT and ChatSonic. Specifically, we examine how these AIs understand the concepts of disinformation and misinformation, and to what degree they distinguish them from the related concept of digital propaganda, using definition-oriented inquiries. We then systematically examine the ability of generative AIs to differentiate between true and false claims relating to two case studies: the war in Ukraine and the COVID-19 pandemic.
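The general shape of such a prompt-based audit — posing claims to a generative AI and coding its verdicts — can be sketched as follows. This is a minimal illustration under our own assumptions, not the authors' actual protocol: `query_model` is a hypothetical stand-in for a real API call to ChatGPT or ChatSonic, and the prompt wording and label scheme are invented for the example.

```python
# Minimal sketch of a prompt-based misinformation audit.
# query_model is a placeholder; a real audit would call the
# ChatGPT or ChatSonic API and likely repeat each query to
# account for non-deterministic outputs.

def build_prompt(claim: str) -> str:
    """Wrap a claim in a true/false evaluation prompt."""
    return (
        "Is the following claim true or false? "
        "Answer with the single word TRUE or FALSE.\n"
        f"Claim: {claim}"
    )

def parse_verdict(response: str) -> str:
    """Map a free-text model response to a coarse label."""
    text = response.strip().upper()
    if text.startswith("TRUE"):
        return "true"
    if text.startswith("FALSE"):
        return "false"
    return "unclear"  # hedged, refusing, or off-topic answers

def audit(claims, query_model):
    """Run each claim through the model and collect verdicts."""
    return {claim: parse_verdict(query_model(build_prompt(claim)))
            for claim in claims}

# Example with a stubbed model that always answers FALSE:
stub = lambda prompt: "FALSE."
results = audit(["Example false claim."], stub)
```

The "unclear" bucket matters in practice: conversational agents often hedge or refuse, and an audit that forces every answer into true/false would overstate the model's decisiveness.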
Related URL
https://ecpr.eu/Events/Event/PaperDetails/70765
Handle
https://boris-portal.unibe.ch/handle/20.500.12422/170369