New study exposes bias, misinformation, and censorship in artificial intelligence

The future of AI

Published 24 April 2024
– By Editorial Staff
Vaccines were among the topics that drew the most misinformation from the AI models. Grok, however, stood out with the most accurate answers, both on vaccines and in every other category.
3 minute read

A new study has revealed significant disparities in the reliability of various artificial intelligence (AI) models, with some leading users astray through misinformation and disinformation.

The study, conducted by anonymous authors and published online, indicates that Grok, developed by Elon Musk’s xAI, was the most reliable, consistently providing accurate responses in the vast majority of cases.

According to the study, there is considerable variability in AI models’ performances, especially when responding to sensitive questions on previously censored or stigmatized topics. Gemini, one of the models assessed, had the highest misinformation score, averaging 111%, indicating not just inaccuracies but also a reinforcement of falsehoods. This score exceeds 100% because it includes instances where an AI model perpetuates misinformation even when faced with clear factual contradictions, effectively turning misinformation into disinformation.

In contrast, Grok was praised for its accuracy, achieving a misinformation score of only 12%. The researchers used a bespoke scoring methodology in which scores above 100% indicate disinformation. The study found that OpenAI’s GPT model corrected its initial misinformation after being presented with additional information, demonstrating a certain adaptability. However, the other models continued to provide disinformation, raising concerns about their reliability and integrity.
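The study’s exact formula is not public, but a minimal sketch can illustrate how a score climbs above 100%: count each wrong answer as misinformation, and add an extra penalty whenever a model defends the falsehood after being shown contradicting facts. The 0.5 weight below is an arbitrary assumption for illustration, not the study’s parameter.

```python
# Hypothetical scoring sketch -- the study's actual formula is not public.
def misinformation_score(answers):
    """answers: list of (wrong, persisted) pairs, one per question.
    wrong     -- the initial answer contained misinformation
    persisted -- the model repeated the falsehood even after being
                 shown clear factual contradictions (disinformation)
    """
    score = 0.0
    for wrong, persisted in answers:
        if wrong:
            score += 1.0        # plain misinformation
        if wrong and persisted:
            score += 0.5        # assumed extra penalty for doubling down
    return 100.0 * score / len(answers)

# Example: 10 questions; 8 answered wrongly, 5 of those defended
# even after correction -- the score crosses the 100% line.
sample = [(True, True)] * 5 + [(True, False)] * 3 + [(False, False)] * 2
print(misinformation_score(sample))  # 105.0
```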

While Grok performed perfectly in all but two categories, Google’s Gemini exceeded the 100% mark, crossing the line from misinformation to disinformation in all but one category.

Government’s influence on AI

In a related press release, the study authors reveal that the study was prompted by a 2023 federal court ruling that found the Biden administration had been “coercing social media platforms into censoring content likely in violation of the First Amendment”. This ruling, upheld by the US 5th Circuit Court of Appeals and now before the US Supreme Court, has raised questions about government influence over AI companies, especially as new AI regulations are being introduced in the US and EU to “combat misinformation” and “ensure safety”. There is concern that these regulations might grant governments greater leverage over AI companies and their executives, much like the threat to social media platforms under Section 230.

The study’s results suggest that most AI responses align with government narratives, except for Grok. It remains unclear whether this alignment is due to external pressure, like that seen with social media platforms, or AI companies’ interpretation of regulatory expectations. The release of recent Google documents detailing how the company adjusted its Gemini AI processes to align with the US Executive Order on AI further complicates the situation.

However, the study’s authors disclosed an example of potential AI censorship with direct implications for US democratic processes: Google’s Gemini AI systematically avoids inquiries about Robert F. Kennedy Jr., the “most significant independent presidential candidate in decades”, failing to respond even to basic questions like “Is RFK Jr. running for president?” According to the study authors, “this discovery reveals a glaring shortfall in current AI legislation’s ability to safeguard democratic processes, urgently necessitating a comprehensive reevaluation of these laws”.

Call for transparent AI legislation

The study’s authors suggest that if AI systems are used as tools for disinformation, the threat to democratic societies could escalate significantly, surpassing even the impact of social media censorship. This risk arises from the inherent trust users place in AI-generated responses; the sophistication of AI also makes it difficult for the average person to identify or contest misinformation or disinformation.

To address these concerns, the study’s authors advocate for AI legislation that promotes openness and transparency while preventing the undue influence of any single entity, especially governments. They suggest that AI legislation should acknowledge that AI models may occasionally generate insights that challenge widely accepted views or could be seen as inconvenient by those in power. The authors recommend that AI training sources be diverse and error correction methodologies be balanced to ensure AI remains a robust tool for democratic societies, free from training-induced censorship and disinformation.


Anthropic challenges Google and OpenAI with new AI flagship model

The future of AI

Published 30 September 2025
– By Editorial Staff
The race between AI companies continues at a rapid pace, now with a new model from Anthropic.
2 minute read

AI company Anthropic launches Claude Sonnet 4.5, described as its most advanced AI system to date and the market leader for programming. According to the company, the model performs better than competitors from Google and OpenAI.

Anthropic has released its new flagship model Claude Sonnet 4.5, which the company claims is the best on the market for coding. According to reports, the model outperforms both Google’s Gemini 2.5 Pro and OpenAI’s GPT-5 on several coding benchmarks, writes TechCrunch.

One of the most remarkable features is the model’s ability to work independently for extended periods. During early testing with enterprise customers, Claude Sonnet 4.5 has been observed coding autonomously for up to 30 hours. During these work sessions, the AI model has not only built applications but also set up database services, purchased domain names, and conducted security audits.

Focus on safety and reliability

Anthropic emphasizes that Claude Sonnet 4.5 is also their safest model to date, with enhanced protection against manipulation and barriers against harmful content. The company states that the model can create “production-ready” applications rather than just prototypes, representing a step forward in reliability.

The model is available via the Claude API and in the Claude chatbot. Pricing for developers is set at 3 dollars per million input tokens and 15 dollars per million output tokens.
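As a rough illustration of what that pricing means in practice (the prices are from the article; the token counts are made up for the example):

```python
# Cost of a single request at the quoted Claude Sonnet 4.5 prices:
# $3 per million input tokens, $15 per million output tokens.
INPUT_USD_PER_M = 3.00
OUTPUT_USD_PER_M = 15.00

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return (input_tokens * INPUT_USD_PER_M
            + output_tokens * OUTPUT_USD_PER_M) / 1_000_000

# A coding session sending 200,000 tokens of context and getting
# 50,000 tokens of generated code back:
print(f"${request_cost(200_000, 50_000):.2f}")  # $1.35
```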

Fast pace in the AI race

The launch comes less than two months after the company’s previous flagship model, Claude Opus 4.1. This rapid development pace illustrates, according to TechCrunch, how difficult it is for AI companies to maintain an advantage in the intense competition.

Anthropic’s models have become popular among developers, and major tech companies like Apple and Meta are reported to use Claude internally.

AI-created viruses can kill bacteria

The future of AI

Published 28 September 2025
– By Editorial Staff
Bacteriophages attach to bacteria, inject their DNA and multiply until the bacteria burst. AI can now design new variants from scratch.
2 minute read

Researchers in California have used artificial intelligence to design viruses that can reproduce and kill bacteria.

The breakthrough opens up new medical treatments – but also risks becoming a dangerous weapon in the wrong hands.

Researchers at Stanford University and the Arc Institute have for the first time succeeded in creating complete genomes using artificial intelligence. Their AI-designed viruses can actually reproduce and kill bacteria.

— That was pretty striking, just actually seeing, like, this AI-generated sphere, says Brian Hie, who leads the laboratory at the Arc Institute where the work was carried out.

The team used an AI called Evo, trained on genomes from around 2 million bacteriophages (viruses that attack bacteria). They chose to work with phiX174, a simple virus with just 11 genes and roughly 5,000 DNA letters.

16 of 302 worked

The researchers let the AI design 302 different genome variants, which were then chemically manufactured as DNA strands. When they mixed these with E. coli bacteria, they achieved a breakthrough: 16 of the designs worked and created viruses that could reproduce.
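In other words, only a small fraction of the AI’s designs proved viable:

```python
# Viability rate of the AI-designed genomes reported in the study.
designs, viable = 302, 16
print(f"{viable} of {designs} designs worked: {viable / designs:.1%}")  # ~5.3%
```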

— They saw viruses with new genes, with truncated genes, and even different gene orders and arrangements, says Jef Boeke, biologist at NYU Langone Health who was given advance access to the study.

Since viruses are not considered living organisms, this is not yet truly AI-designed life – but it is an important first step toward that technology.

Major medical potential

The technology has great potential in medicine. “Most gene therapy uses viruses to shuttle genes into patients’ bodies, and AI might develop more effective ones”, explains Samuel King, the student who led the project.

Doctors have previously tried so-called phage therapy to combat serious bacterial infections, something that AI-designed viruses could improve.

“Grave concerns”

But the technology’s development also raises strong concerns. The researchers have deliberately avoided training their AI on viruses that infect humans, but others could misuse the method.

— One area where I urge extreme caution is any viral enhancement research, especially when it’s random so you don’t know what you are getting. If someone did this with smallpox or anthrax, I would have grave concerns, warns J. Craig Venter, a pioneer in synthetic biology.

Venter believes that the technology is fundamentally based on the same trial-and-error principle that he himself used two decades ago, just much faster.

Future challenges

Creating larger organisms is significantly more difficult. E. coli has a thousand times more DNA than phiX174. “The complexity would rocket from staggering to way, way more than the number of subatomic particles in the universe”, explains Boeke.

Jason Kelly, CEO of biotech company Ginkgo Bioworks, believes that automated laboratories where AI continuously improves its genome designs will be needed for future breakthroughs.

— This would be a nation-scale scientific milestone, as cells are the building blocks of all life. The US should make sure we get to it first, says Kelly.

China plans fully AI-controlled economy by 2035

The modern China

Published 26 September 2025
– By Editorial Staff
According to the plan, by 2035 AI will have "completely reworked Chinese society" and ushered in a new phase of economic and social production.
2 minute read

The Chinese government has presented an ambitious ten-year plan where artificial intelligence will permeate all sectors of society by 2035 and become the “main engine for economic growth”.

China’s State Council has published a comprehensive plan aimed at making the country the world’s first fully AI-driven economy by 2035. According to the government document presented at the end of August, artificial intelligence will by then have transformed Chinese society and become the foundation for what is described as “a new phase of development in intelligent economy and intelligent society”.

The plan, which spans ten years, encompasses six central societal sectors that will be permeated by AI technology by 2027. These include science and technology, citizen welfare, industrial development, consumer goods, governance, and international relations.

The goal: 90 percent usage by 2030

According to the timeline, AI technology should reach a 90 percent usage rate by 2030 and practically become a new type of infrastructure. At this point, the technology is expected to have developed into a “significant growth engine for China’s economy”.

The strategy resembles the country’s previous “internet plus” initiative, which successfully integrated the internet as a central component in the Chinese economy.

According to the plan, by 2035 AI will have “completely reworked Chinese society” and implemented a new phase of economic and social production. This is an ambitious goal with significant consequences, not only for the People’s Republic but for the entire world.

International cooperation in focus

The State Council emphasizes that AI should be treated as an “international public good that benefits humanity”. The plan highlights the importance of developing open source AI, supporting developing countries in building their own technology sectors, and the UN’s role as a leader in AI regulation.

Although China’s AI industry is growing rapidly, as exemplified by the successes of the open-source AI developer DeepSeek earlier this year, Chinese models still lag several months behind their American counterparts in terms of average performance. This is largely due to restrictions and barriers imposed by Western countries.

However, the gap is steadily narrowing. At the end of 2023, American AI models outperformed Chinese ones by 13 percent on general reasoning tests; by the same time in 2024, that figure had dropped to 8.1 percent. In certain AI applications, China is already a world leader, and it has invested heavily in offering its services at low prices, in many cases completely free as open source.

The State Council’s ten-year plan aims to further reduce the lead by strengthening key areas such as fundamental model performance, security measures, data access, and energy management.

Whether Beijing can deliver on its massive goals with the help of sometimes unreliable technology remains to be seen. However, if other nationally coordinated plans are any indication, the country may face a comprehensive transformation.

New robot takes on household chores

The future of AI

Published 7 September 2025
– By Editorial Staff
1 minute read

The AI robot Helix can wash dishes, fold laundry and collaborate with other robots. It is the first robot of its kind able to control its entire upper body.

The American robotics company Figure AI’s new humanoid robot has visual perception, language understanding and full control over fingers, wrists, torso and head. This enables the robot to pick up small objects and thereby help with household tasks.

Helix is powered by a so-called dual-system architecture: a “two-brain” design in which one system interprets language and vision while the other controls movements quickly and precisely.
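Based only on that description, a dual-system control loop might be structured roughly as below. The class names, interfaces and loop rates are illustrative assumptions, not Figure AI’s actual software.

```python
# Illustrative "two-brain" control loop: a slow vision-language planner
# feeds goals to a fast motor policy. All names and rates are assumed.
import time

class VisionLanguagePlanner:             # slow system: language + vision
    def plan(self, camera_frame, spoken_command):
        # Heavy multimodal reasoning; emits a high-level goal.
        return {"action": "grasp", "target": "mug"}

class MotorPolicy:                        # fast system: quick, precise motion
    def act(self, goal, joint_state):
        # Lightweight policy; turns the latest goal into commands for
        # fingers, wrists, torso and head.
        return [0.0] * 35                 # placeholder joint targets

planner, policy = VisionLanguagePlanner(), MotorPolicy()
goal, last_plan = None, 0.0
while True:
    now = time.monotonic()
    if now - last_plan >= 0.1:            # slow loop, ~10 Hz (assumed)
        goal = planner.plan(camera_frame=None, spoken_command="tidy up")
        last_plan = now
    commands = policy.act(goal, joint_state=None)
    time.sleep(0.005)                     # fast loop, ~200 Hz (assumed)
```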

Among other things, the company demonstrates that the robot can load dishes into the dishwasher, fold laundry and sort groceries. The robot can also sort and weigh packages at postal facilities.

It can also handle thousands of new objects in cluttered environments, without prior demonstrations or custom programming. This means it can perform tasks it was not explicitly programmed for and is designed to solve problems independently in unpredictable environments.

It can follow voice commands much as if it were being spoken to by a human, and act accordingly. What also makes the robot special is its ability to collaborate with other robots: in tests, two Helix robots have successfully worked together to unpack groceries.
