Saturday, February 15, 2025

Study: ChatGPT politically left-leaning

Published 12 September 2023
– By Editorial Staff
ChatGPT favors left-liberal politicians like Joe Biden.

Critics have previously accused OpenAI’s chatbot, ChatGPT, of having a clear left-wing bias. Now, a fresh British study has reached the same conclusion.

Researchers from the University of East Anglia in Britain have successfully shown how the chatbot favors the Democrats in the USA, the Labour Party in the UK, and the left-leaning president Lula da Silva in Brazil.

To test ChatGPT’s political leanings, the researchers first had the chatbot impersonate people from across the political spectrum while answering a series of more than 60 ideological questions. These answers were then compared with the chatbot’s default responses to the same questions.

To account for the AI’s inherent randomness, each question was posed 100 times, and the collected answers were then resampled 1,000 times using a so-called “bootstrap” procedure to improve the reliability of the estimates.
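The bootstrap step described above can be illustrated with a minimal sketch: answers to one question are coded numerically and resampled with replacement to estimate the average lean and its uncertainty. The numeric coding and function name here are illustrative assumptions, not the study’s actual code.

```python
import random
import statistics

def bootstrap_mean(scores, n_boot=1000, seed=0):
    """Resample the answers with replacement n_boot times
    (a simple percentile bootstrap) to estimate the mean
    and a 95% interval for it."""
    rng = random.Random(seed)
    n = len(scores)
    means = [statistics.mean(rng.choices(scores, k=n)) for _ in range(n_boot)]
    means.sort()
    point = statistics.mean(means)
    interval = (means[int(0.025 * n_boot)], means[int(0.975 * n_boot)])
    return point, interval

# Hypothetical repeated answers to one question,
# coded -1 (right), 0 (neutral), +1 (left)
answers = [1, 0, 1, 1, 0, -1, 1, 0, 1, 1]
est, (low, high) = bootstrap_mean(answers)
```

An interval that excludes zero would indicate a lean that is unlikely to be an artifact of the chatbot’s run-to-run randomness.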

In another set of tests designed to verify the results, researchers asked ChatGPT to mimic radical political stances. In a “placebo test”, politically neutral questions were posed, and in another test, the chatbot was asked to imagine various types of professionals.

The researchers concluded that the default answers tended to align more with the left’s responses than the right’s.

– Our findings reinforce concerns that AI systems could replicate, or even amplify, existing challenges posed by the internet and social media, comments lead author Dr. Fabio Motoki.

“Could influence political processes”

Political bias can arise in several ways. For instance, the training dataset sourced from the internet might itself be left-leaning, and the developers may introduce their own political bias, perhaps without even realizing it.

– With the growing use by the public of AI-powered systems to find out facts and create new content, it is important that the output of popular platforms such as ChatGPT is as impartial as possible, Motoki continues. The presence of political bias can influence user views and has potential implications for political and electoral processes.

In an interview with Sky News, Motoki elaborated on his argument, emphasizing that “bias on a platform like this is a cause for concern.”

– Sometimes people forget these AI models are just machines. They provide very believable, digested summaries of what you are asking, even if they’re completely wrong. And if you ask it “are you neutral”, it says “oh I am!” Just as the media, the internet, and social media can influence the public, this could be very harmful.

Entrepreneur Elon Musk, one of the co-founders of the organization behind ChatGPT, has previously criticized ChatGPT for its political bias, accusing the AI of being developed by “left-leaning experts” and having been “trained to lie.” This led him to initiate a new chatbot project named TruthGPT with the ambition of being “a maximum truth-seeking AI that tries to understand the nature of the universe”.


US and UK back away from international AI declaration

The future of AI

Published 15 February 2025, 8:28
– By Editorial Staff
US Vice President JD Vance stresses that “pro-growth AI policies” should take priority over security.

Sweden and 60 other countries have signed an AI declaration for inclusive, sustainable and open AI. However, the United States and the United Kingdom have chosen to opt out – a decision that has provoked strong reactions.

The AI Declaration was developed in conjunction with the International AI Summit in Paris earlier this week, and its aim is to promote inclusive and sustainable AI in line with the Paris Agreement. It also emphasizes the importance of an “ethical” approach where technology should be “transparent”, “safe” and “trustworthy”.

The declaration also addresses AI’s energy use – an issue not raised in earlier declarations. Experts have previously warned that AI could eventually consume as much energy as smaller countries.

Countries such as China, India and Mexico have signed the agreement. Finland, Denmark, Sweden and Norway have also signed. The United States and the United Kingdom are two of the countries that have chosen not to sign the agreement, reports the British state broadcaster BBC.

“Global governance”

The UK government justifies its decision with concerns about national security and “global governance”. US Vice President JD Vance has also previously said that too much regulation of AI could “kill a transformative industry just as it’s taking off”. At the meeting, Vance stressed that AI was “an opportunity that the Trump administration will not squander” and said that “pro-growth AI policies” should be prioritized over security.

French President Emmanuel Macron, for his part, defended the need for further regulation.

AI and speech therapy to help police identify voices

Published 14 February 2025, 16:52
– By Editorial Staff

Researchers at Lund University are developing a forensic speech comparison using speech therapy, AI, mathematics and machine learning. The method will help police analyze audio recordings in criminal investigations.

Like fingerprints and DNA, the voice carries unique characteristics that can be linked to individuals. Speech and voice are influenced by several factors, such as the size of the vocal cords, the shape of the oral cavity, language use and breathing. While most people can perceive the gender, age or mood of a speaker, it takes specialist knowledge to objectively analyze the unique patterns of the voice – an area in which speech therapists are experts.

The police turned to Lund University for help analyzing audio recordings in an investigation. The request led to the development of forensic speech comparison as a method of evidence gathering.

The police often handle audio recordings where the speaker is known, but also recordings where the purpose is to confirm or exclude a suspect.

– What we do at the moment is to have three assessors, speech therapists, analyze the speech, voice and language in the recordings in order to compare them. We listen for several factors, such as how the person in question produces their voice, articulates, seems to move their tongue and lips, says Susanna Whitling, a speech therapist and researcher at Lund University, in a press release.

Both larger datasets and cutting-edge analysis

The number of requests from the police has increased, making it difficult for analysts to keep up with all the recordings. To handle larger data sets, researchers have developed AI-based methods that can identify relevant audio files, which are then analyzed by experts.

– By combining traditional perceptual speech-therapy assessment of speech, voice and language with machine learning, we want to make it possible to both scan large amounts of data and offer cutting-edge analysis. Based on the hits that the AI then extracts, experts can make a professional assessment, explains Whitling.

The researchers are also collaborating with Andreas Jakobsson, a professor of mathematical statistics, to develop specialized software. The vision is to have an accurate and reliable speech comparison.

– We speech therapists can do perceptual assessment and examine the probability that two recordings contain the same person’s speech, voice and language. When adding the development of specialized software for so-called acoustic analysis such as voice frequency, intensity and temporal variations, we collaborate with experts in signal processing and machine learning.
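The acoustic measures Whitling mentions – voice frequency, intensity and temporal variation – can be sketched in a few lines. This is a hypothetical illustration of such features, not the Lund team’s software: the function name is invented, and the pitch estimator is a simple autocorrelation peak rather than a forensic-grade method.

```python
import numpy as np

def basic_acoustic_features(signal, sr):
    """Rough per-recording features of the kind used in speaker
    comparison: fundamental frequency (autocorrelation peak),
    intensity (RMS), and temporal variation of frame energy."""
    x = np.asarray(signal, dtype=float)
    x = x - x.mean()
    # Fundamental frequency: strongest autocorrelation lag within
    # a plausible human pitch range (~60-400 Hz).
    ac = np.correlate(x, x, mode="full")[len(x) - 1:]
    lag_min, lag_max = int(sr / 400), int(sr / 60)
    f0 = sr / (lag_min + np.argmax(ac[lag_min:lag_max]))
    # Intensity: root-mean-square amplitude of the signal.
    rms = float(np.sqrt(np.mean(x ** 2)))
    # Temporal variation: spread of energy across 10 ms frames.
    frame = sr // 100
    energies = [np.mean(x[i:i + frame] ** 2)
                for i in range(0, len(x) - frame, frame)]
    return {"f0_hz": float(f0), "rms": rms,
            "energy_std": float(np.std(energies))}

# Synthetic 150 Hz tone as a stand-in for a voice recording
sr = 8000
t = np.arange(2000) / sr
feats = basic_acoustic_features(np.sin(2 * np.pi * 150 * t), sr)
```

In practice such features would be computed per speech segment and compared statistically between the known and questioned recordings, alongside the experts’ perceptual assessment.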

World leaders gather in Paris for AI summit

Published 11 February 2025
– By Editorial Staff
Ulf Kristersson and around 100 other heads of state and government are currently at the AI Summit in Paris.

The AI Action Summit is currently taking place in Paris, where world leaders are gathering to discuss global governance of artificial intelligence.

The stated aim of the summit, chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, is to create a common path forward for the development of AI and lay the foundations for global AI governance.

During the summit, five main areas will be discussed:

• AI in the public interest
• Future impact of AI on the labor market
• Promoting innovation and culture
• Trust in AI
• Global AI governance

The UN Secretary-General, António Guterres, is attending the meeting along with leaders from nearly 100 countries, representatives from international organizations, researchers and civil society representatives.

He sees the establishment of global AI governance as a top priority and believes the technology could pose an “existential concern” for humanity if not regulated in a “responsible manner”.

“AI must remain a tool at the service of humanity, and not a source of inequalities and unbridled risks”, writes the UN Information Center UNRIC in a press release.

“The discussions should therefore shape an AI that is sustainable, beneficial and inclusive, with particular attention paid to the risks of abuse and the protection of individual rights”, it continues.

EU must invest in AI

Swedish Prime Minister Ulf Kristersson is leading the Swedish delegation, and is also attending a meeting of EU leaders to discuss European competitiveness and the future role of AI in it.

The summit organizers state that Europe “can and must significantly strengthen its positioning on AI and accelerate investments in this field, so that we can be at the forefront on the matter”.

Regarding the regulation and control of AI, the view is that “one single governance initiative is not the answer”. Instead, “existing initiatives, like the Global Partnership on Artificial Intelligence (GPAI), need to be coordinated to build a global, multi-stakeholder consensus around an inclusive and effective governance system for AI”, it says.

Google abandons promise not to use AI for weapons

Published 8 February 2025
– By Editorial Staff
The tech giant claims that in its AI development it implements social responsibility and generally accepted principles of international law and human rights.

Google has removed the part of its AI policy that previously prohibited the development and deployment of AI for weapons or surveillance.

When Google first published its AI policy in 2018, it included a section called “applications we won’t pursue”, in which the company pledged not to develop or deploy AI for weapons or surveillance.

Now it has removed that section and replaced it with another, Bloomberg reports. Records indicate that the previous text was still there as recently as last week.

Instead, the section has been replaced by “Responsible development and deployment”, where Google states that the company will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights”.

In connection with the changes, Google refers to a blog post in which the company writes that the policy change is necessary, as AI is now used for more general purposes.

Thousands of employees protested

In 2018, Google signed a controversial government contract called Project Maven, which effectively meant that the company would provide AI software to the Department of Defense to analyze drone images. Thousands of Google employees signed a protest against the contract and dozens chose to leave.

It was in the context of that contract that Google published its AI guidelines, in which it promised not to use AI as a weapon. The tech giant’s CEO, Sundar Pichai, reportedly told staff that he hoped the guidelines would stand the “test of time”.

In 2021, the company signed a new military contract to provide cloud services to the US military. In the same year, it also signed a contract with the Israeli military, called Project Nimbus, which also provides cloud services for the country. In January this year, it also emerged that Google employees were working with Israel’s Ministry of Defense to expand the government’s use of AI tools, as reported by The Washington Post.