Biased AI can Influence Political Decision-Making, by Jillian Fisher and 8 other authors
Abstract: As modern large language models (LLMs) become integral to everyday tasks, concerns have emerged about their inherent biases and their potential impact on human decision-making. While bias in these models is well-documented, less is known about how such biases influence human decisions. This paper presents two interactive experiments investigating the effects of partisan bias in LLMs on political opinions and decision-making. Participants interacted freely with a liberal-biased, conservative-biased, or unbiased control model while completing these tasks. We found that participants exposed to partisan-biased models were significantly more likely to adopt opinions and make decisions that matched the LLM's bias. More surprisingly, this influence persisted even when the model's bias and the participant's own political partisanship were opposed. However, we also found that prior knowledge of AI was weakly correlated with a reduced impact of the bias, highlighting the possible importance of AI education for robust mitigation of bias effects. Our findings not only highlight the critical effects of interacting with biased LLMs and their capacity to shape public discourse and political conduct, but also point to potential techniques for mitigating these risks in the future.
Submission history
From: Jillian Fisher [view email]
[v1] Tue, 8 Oct 2024 22:56:00 UTC (5,012 KB)
[v2] Mon, 4 Nov 2024 20:12:07 UTC (6,041 KB)
[v3] Thu, 5 Jun 2025 15:55:15 UTC (1,758 KB)