Critics have previously accused OpenAI’s chatbot, ChatGPT, of having a clear left-wing bias. Now, a fresh British study has reached the same conclusion.
Researchers from the University of East Anglia have shown that the chatbot favors the Democrats in the United States, the Labour Party in the UK, and Brazil’s left-leaning president, Lula da Silva.
To test ChatGPT’s political leanings, the researchers first had the chatbot impersonate people from across the political spectrum while answering a series of more than 60 ideological questions. Those answers were then compared with the chatbot’s default responses to the same questions.
To account for the AI’s inherent randomness, each question was posed 100 times, and the collected responses then underwent a so-called “bootstrap” with 1,000 repetitions, a procedure that resamples the original data with replacement to improve the reliability of the estimates.
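The bootstrap step described above can be sketched in a few lines of Python. The data here are hypothetical, not from the study: each score is 1 if the default answer matched a given persona’s answer on one of the 100 repetitions, else 0. Resampling those scores 1,000 times gives a mean agreement rate with a confidence interval.

```python
import random
import statistics

def bootstrap_mean(samples, n_boot=1000, seed=0):
    """Resample with replacement to estimate the mean and a 95% interval."""
    rng = random.Random(seed)
    n = len(samples)
    means = []
    for _ in range(n_boot):
        # Draw n items with replacement from the original sample
        resample = [samples[rng.randrange(n)] for _ in range(n)]
        means.append(statistics.mean(resample))
    means.sort()
    # 95% interval from the 2.5th and 97.5th percentiles of the resampled means
    lo = means[int(0.025 * n_boot)]
    hi = means[int(0.975 * n_boot)]
    return statistics.mean(means), (lo, hi)

# Hypothetical agreement scores (1 = default answer matched the persona's answer)
scores = [1, 1, 0, 1, 1, 1, 0, 1, 1, 1]
mean, (lo, hi) = bootstrap_mean(scores)
print(f"agreement ~ {mean:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

The point of the repetition is that a single answer from the model tells you little; the distribution of resampled means shows how stable the measured agreement actually is.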
In another set of tests designed to verify the results, researchers asked ChatGPT to mimic radical political stances. In a “placebo test”, politically neutral questions were posed, and in another test, the chatbot was asked to imagine various types of professionals.
The researchers concluded that the default answers tended to align more with the left’s responses than the right’s.
“Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the internet and social media,” comments lead author Dr. Fabio Motoki.
‼️ChatGPT Has a 'Significant' Liberal Bias
OpenAI’s wildly popular ChatGPT AI service has shown a clear bias toward Democrats and liberal viewpoints, according to a recent study conducted by UK-based researchers.
“We find robust evidence that ChatGPT presents a significant and… pic.twitter.com/YEcoZAG3p8
— Lord Bebo (@MyLordBebo) August 20, 2023
“Could influence political processes”
Political bias can arise in several ways. For instance, the training dataset sourced from the internet might itself be left-leaning, and the developers may introduce their own political bias, perhaps without even realizing it.
“With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible,” Motoki continues. “The presence of political bias can influence user views and has potential implications for political and electoral processes.”
In an interview with Sky News, Motoki elaborated on his argument, emphasizing that “bias on a platform like this is a cause for concern.”
“Sometimes people forget these AI models are just machines. They provide very believable, digested summaries of what you are asking, even if they’re completely wrong. And if you ask it, ‘Are you neutral?’, it says, ‘Oh, I am!’ Just as the media, the internet, and social media can influence the public, this could be very harmful.”
The exact source of ChatGPT’s bias is still unclear. Training data? Algorithm? Researchers believe both are likely culprits.
But untangling these components is the next big challenge to understand where the bias originates from.
— Sunil (@freegeneralist) August 27, 2023
Entrepreneur Elon Musk, a co-founder of the organization behind ChatGPT, has previously criticized the AI for its political bias, accusing it of being developed by “left-leaning experts” and of having been “trained to lie.” This led him to launch a rival chatbot project, TruthGPT, with the ambition of being “a maximum truth-seeking AI that tries to understand the nature of the universe”.