Research results that ChatGPT answers have hidden bias due to political bias

Chatbots such as OpenAI's ChatGPT, Google's Bard, and Microsoft's Bing AI are based on large-scale language models trained on billions of sentences from the open internet, enabling them to produce sentences that are natural to human conversation. I can answer. However, since these large-scale language models are trained based on a huge amount of data, it is difficult to check exactly what kind of data and how they are learning. You will learn the prejudices that are seen in A research team at the University of East Alegria in England reports the results of investigating the political bias of ChatGPT.
More human than human: measuring ChatGPT political bias | SpringerLink
https://doi.org/10.7910/DVN/KGMEYI

Fresh evidence of ChatGPTs political bias revealed by comprehensive new study
https://www.uea.ac.uk/news/-/article/fresh-evidence-of-chatgpts-political-bias-revealed-by-comprehensive-new-study

ChatGPT has a liberal bias, research on AI's political responses shows - The Washington Post
https://www.washingtonpost.com/technology/2023/08/16/chatgpt-ai-political-bias-research/

Automatic generation AI can generate natural sentences and photos at a level as if they were written by humans. However, automatic generation AI only learns by mathematically converting 'patterns that connect words' and 'characteristics of lines and colors' based on pre-learned data. In other words, it is quite possible that some kind of bias will be applied depending on the content of the learned data. In fact, in June 2022, as a result of learning the threads of 4chan, the world's largest image bulletin board where strong words and hate speech fly, a hate speech AI that repeats extreme racism was born.
AI researchers express confusion and concern that YouTuber trains AI with 4chan to create a ``hate speech machine'' and releases it to the net - GIGAZINE

In order to test the political neutrality of ChatGPT, the research team prepared a questionnaire on ideology that supporters of liberal political parties in the United States, the United Kingdom, and Brazil are likely to answer, and asked ChatGPT to respond. We then measured how relevant the chatbot's answers were to specific political stances, compared to the answers to the same questions in their default state. Since ChatGPT answers are different each time, more detailed analysis is performed after more than 100 questions and answers.
As a result, ChatGPT's answer said that there was a ``significant and systematic political bias towards President Lula da Silva of the Democratic Party in the United States, the Labor Party in the United Kingdom, and the Labor Party in Brazil.'' This bias is the result of CahtGPT learning assumptions, beliefs and stereotypes contained in vast amounts of data gleaned from across the internet, despite the efforts of ChatGPT's designers to eliminate potential biases. noted the research team.

Fabio Motoki, the lead author of the paper and a postdoctoral researcher at the University of East Anglia, said, ``ChatGPT is said to have no political opinions or beliefs about its users, but in fact there is some kind of bias. could undermine public confidence and affect election results.'
In addition, according to Carnegie Mellon University researcher Chan Park and others, political themes such as immigration, climate change, and same-sex marriage are being investigated on large-scale such as OpenAI's GPT-4, Google's BERT, and Meta's LLaMA. When a similar survey was conducted on the language model, Google's BERT answer showed a conservative bias. Also, LLaMa seems to be authoritarian and right-leaning, while GPT-4 tends to be liberal.
However, the Washington Post, an American daily newspaper, responded to these research papers, ``Political beliefs are subjective, and ideas about what is liberal and what is conservative vary from country to country.・Both the Anglia University paper and Mr. Park's paper use questions based on a political compass that divides political thought into only four quadrants. I don't think it's accurate.
Related Posts:
in Software, Posted by log1i_yk