ChatGPT has a Left-Wing Bias…


Science & Technology, UK (Commonwealth Union) – According to a recent study conducted by the University of East Anglia (UEA), the artificial intelligence platform ChatGPT exhibits a notable and systematic inclination towards left-wing perspectives. The research team, hailing from both the UK and Brazil, devised a meticulous new approach to assess political bias.

Published in the journal Public Choice today, the study’s findings indicate that ChatGPT’s responses tend to favor the Democratic Party in the US, the UK’s Labour Party, and Brazil’s Workers’ Party led by President Lula da Silva.

Concerns about inherent political bias in ChatGPT have been raised before, but this marks the first large-scale investigation to use a consistent, evidence-based methodology.

Dr. Fabio Motoki, lead author and a member of Norwich Business School at the University of East Anglia, said that as the general public increasingly relies on AI-powered systems to find information and create content, it is imperative that the output of popular platforms like ChatGPT is as impartial as possible. Political bias can influence users' views and has potential implications for political and electoral processes.

“Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the Internet and social media.”

The researchers developed a novel method to test ChatGPT's political neutrality.

They asked the platform to impersonate individuals from across the political spectrum while answering a series of more than 60 ideological questions.

These responses were then compared with ChatGPT's default answers to the same set of questions, allowing the researchers to measure the degree to which the default responses were associated with a particular political stance.
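As a rough illustration of that querying-and-comparison loop, the following is a minimal sketch assuming access to the OpenAI chat API; the model name, prompts, answer scale, and example question are illustrative assumptions, not the study's actual materials.

```python
# A minimal sketch of the persona-vs-default comparison. The model name,
# prompts, answer scale, and example question are assumptions made for
# illustration, not the study's actual materials.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTIONS = [
    "The government should redistribute income from the rich to the poor.",
    # ... the study used a battery of more than 60 ideological statements
]

ANSWER_SCALE = "Strongly disagree, Disagree, Agree, Strongly agree"

def ask(question: str, persona: str | None = None) -> str:
    """Pose one ideological question, optionally while impersonating a persona."""
    system = (f"Answer as if you were a {persona}." if persona
              else "Answer the question.")
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",  # assumed stand-in for ChatGPT
        messages=[
            {"role": "system", "content": system},
            {"role": "user",
             "content": f"{question} Reply with one of: {ANSWER_SCALE}."},
        ],
    )
    return resp.choices[0].message.content.strip()

# Default answers versus answers given while impersonating a partisan voter.
default_answers = {q: ask(q) for q in QUESTIONS}
democrat_answers = {q: ask(q, "average Democrat voter") for q in QUESTIONS}
```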

To account for the inherent randomness of the "large language models" that power AI platforms such as ChatGPT, each question was asked 100 times and the varied responses were collected. These multiple responses were then put through a "bootstrap", a technique that re-samples the original data, repeated 1,000 times to further increase the reliability of the inferences drawn from the generated text.
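A hedged sketch of that repetition-and-bootstrap step, reusing the hypothetical `ask` helper and `QUESTIONS` list above: each question is asked 100 times per condition, the answers are coded numerically (the coding scheme is an assumption), and the persona-versus-default gap is resampled 1,000 times.

```python
# Hedged sketch of the repetition and bootstrap steps. The numeric coding
# of answers is an assumption made for illustration.
import numpy as np

SCALE = {"Strongly disagree": 0, "Disagree": 1, "Agree": 2, "Strongly agree": 3}

def bootstrap_gap_ci(question: str, persona: str,
                     n_repeats: int = 100, n_boot: int = 1000,
                     seed: int = 0) -> tuple[float, float]:
    """95% bootstrap confidence interval for the gap between the default
    answers and the answers given while impersonating a persona."""
    rng = np.random.default_rng(seed)
    default = np.array([SCALE[ask(question)] for _ in range(n_repeats)], float)
    partisan = np.array([SCALE[ask(question, persona)]
                         for _ in range(n_repeats)], float)
    gaps = [rng.choice(default, size=n_repeats, replace=True).mean()
            - rng.choice(partisan, size=n_repeats, replace=True).mean()
            for _ in range(n_boot)]
    return tuple(np.percentile(gaps, [2.5, 97.5]))

# An interval that excludes zero suggests the default answers systematically
# lean toward (or away from) the impersonated persona on that question.
low, high = bootstrap_gap_ci(QUESTIONS[0], "average Democrat voter")
```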

Co-author Victor Rodrigues explained that this procedure was established because a single round of testing would not suffice: owing to the model's inherent randomness, even when impersonating a Democrat, ChatGPT's answers would sometimes lean toward the right of the political spectrum.

Numerous additional tests were undertaken to ensure the method was as rigorous as possible. A "dose-response test" asked ChatGPT to impersonate radical political positions. A "placebo test" asked it politically neutral questions. And a "profession-politics alignment test" asked it to impersonate different types of professionals, as sketched below.
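Each of these checks can be read as a variation of the same querying loop. The sketch below, again reusing the hypothetical `ask` helper, shows one possible shape; the personas and placebo questions are assumptions, not the study's actual prompts.

```python
# Illustrative variants of the three robustness checks; the personas and
# placebo questions here are assumptions, not the study's actual prompts.
PLACEBO_QUESTIONS = [
    "Water boils at a lower temperature at high altitude.",
]

# Dose-response: more extreme personas should yield more extreme answers.
radical_answers = {q: ask(q, "radical left-wing activist") for q in QUESTIONS}

# Placebo: impersonation should not shift answers to neutral questions.
placebo_answers = {q: ask(q, "average Democrat voter") for q in PLACEBO_QUESTIONS}

# Profession-politics alignment: professional personas, ideological questions.
doctor_answers = {q: ask(q, "medical doctor") for q in QUESTIONS}
```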

“We hope that our method will aid scrutiny and regulation of these rapidly developing technologies,” said co-author Dr Pinho Neto. “By enabling the detection and correction of LLM biases, we aim to promote transparency, accountability, and public trust in this technology,” he further stated.

The researchers said the new analysis tool created by the project would be freely available and simple for members of the public to use, thereby "democratising oversight," according to Dr Motoki. As well as checking for political bias, the tool can be used to measure other types of biases in ChatGPT's responses.

While the project did not set out to determine the reasons for the political bias, the findings did point towards two potential sources.

The first is the training dataset, which may carry inherent biases, or biases introduced by its human developers, that the developers' "cleaning" procedure failed to remove. The second potential source is the algorithm itself, which, the researchers suggest, may be amplifying biases already present in the training data.

The findings may come as no surprise to many conservatives, who have long claimed that major tech companies have a left-wing bias, a charge that has prompted a number of inquiries in recent years.
