
Polaris of Enlightenment

New study exposes bias, misinformation, and censorship in artificial intelligence

The future of AI

Published 24 April 2024
– By Editorial Staff
Vaccines were among the topics on which AI models produced the most misinformation. Grok, however, stood out with the most accurate answers, both on vaccines and in every other category.
3 minute read

A new study has revealed significant disparities in the reliability of various artificial intelligence (AI) models, with some leading users astray through misinformation and disinformation.

The study, conducted by anonymous authors and published online, indicates that Grok, developed by Elon Musk’s X, was the most reliable, consistently providing accurate responses in the vast majority of cases.

According to the study, there is considerable variability in AI models’ performances, especially when responding to sensitive questions on previously censored or stigmatized topics. Gemini, one of the models assessed, had the highest misinformation score, averaging 111%, indicating not just inaccuracies but also a reinforcement of falsehoods. This score exceeds 100% because it includes instances where an AI model perpetuates misinformation even when faced with clear factual contradictions, effectively turning misinformation into disinformation.

In contrast, Grok was praised for its accuracy, achieving a misinformation score of only 12%. The researchers used a unique scoring methodology to measure AI misinformation, with scores over 100% indicating disinformation. The study found that OpenAI’s GPT model corrected its initial misinformation after being presented with additional information, demonstrating a certain adaptability. The other models, however, continued to provide disinformation, raising concerns about their reliability and integrity.

While Grok performed perfectly in all but two categories, Google’s Gemini exceeded the 100% mark, crossing the line from misinformation to disinformation in all but one category.

Government’s influence on AI

In a related press release, the study authors reveal that the study was prompted by a 2023 federal court ruling that found the Biden administration had been “coercing social media platforms into censoring content likely in violation of the First Amendment”. This ruling, upheld by the US 5th Circuit Court of Appeals and now before the US Supreme Court, has raised questions about government influence over AI companies, especially as new AI regulations are being introduced in the US and EU to “combat misinformation” and “ensure safety”. There is concern that these regulations might give governments greater leverage over AI companies and their executives, much like the threat Section 230 poses to social media platforms.

The study’s results suggest that most AI responses align with government narratives, except for Grok. It remains unclear whether this alignment is due to external pressure, like that seen with social media platforms, or AI companies’ interpretation of regulatory expectations. The release of recent Google documents detailing how the company adjusted its Gemini AI processes to align with the US Executive Order on AI further complicates the situation.

However, the study’s authors disclosed an example of potential AI censorship with direct implications for US democratic processes: Google’s Gemini AI systematically avoids inquiries about Robert F. Kennedy Jr., the “most significant independent presidential candidate in decades”, failing to respond even to basic questions like “Is RFK Jr. running for president?” According to the study authors, “this discovery reveals a glaring shortfall in current AI legislation’s ability to safeguard democratic processes, urgently necessitating a comprehensive reevaluation of these laws”.

Call for transparent AI legislation

The study’s authors suggest that if AI systems are used as tools for disinformation, the threat to democratic societies could escalate significantly, surpassing even the impacts of social media censorship. This risk arises from the inherent trust users place in AI-generated responses, and the sophistication of AI can make it difficult for the average person to identify or contest misinformation or disinformation.

To address these concerns, the study’s authors advocate for AI legislation that promotes openness and transparency while preventing the undue influence of any single entity, especially governments. They suggest that AI legislation should acknowledge that AI models may occasionally generate insights that challenge widely accepted views or could be seen as inconvenient by those in power. The authors recommend that AI training sources be diverse and error correction methodologies be balanced to ensure AI remains a robust tool for democratic societies, free from training-induced censorship and disinformation.


Samsung and Tesla sign billion-dollar deal for AI chip manufacturing

The future of AI

Published yesterday 9:22
– By Editorial Staff
Construction of Samsung's large chip factory in Taylor, Texas, USA.
2 minute read

South Korean tech giant Samsung has entered into a comprehensive agreement with Tesla to manufacture next-generation AI chips. The contract, which extends until 2033, is worth $16.5 billion and means Samsung will dedicate its new Texas-based factory to producing Tesla’s AI6 chips.

Samsung receives a significant boost for its semiconductor manufacturing through the new partnership with Tesla. The electric vehicle manufacturer has chosen to place production of its advanced AI6 chips at Samsung’s facility in Texas, in a move that could change competitive dynamics within the semiconductor industry, writes TechCrunch.

“The strategic importance of this is hard to overstate”, wrote Tesla founder Elon Musk on X when the deal was announced.

The agreement represents an important milestone for Samsung, which has previously struggled to attract and retain major customers for its chip manufacturing. According to Musk, Tesla may end up spending significantly more than the original $16.5 billion on Samsung chips.

“Actual output is likely to be several times higher”, he explained in a later post.

Tesla’s chip strategy takes shape

The AI6 chips form the core of Tesla’s ambition to evolve from car manufacturer to an AI and robotics company. The new generation chip is designed as an all-around solution that can be used both for the company’s Full Self-Driving system and for the humanoid robots of the Optimus model that Tesla is developing, as well as for high-performance AI training in data centers.

Tesla is working in parallel with Taiwanese chip manufacturer TSMC for production of AI5 chips, whose design was recently completed. These will initially be manufactured at TSMC’s facility in Taiwan and later also in Arizona. Samsung already produces Tesla’s AI4 chips.

Since 2019, Tesla has developed its own custom chips after leaving Nvidia’s Drive platform. The first self-developed chipset, known as FSD Computer or Hardware 3, was launched the same year and installed in all of the company’s electric vehicles.

Musk promises personal involvement

In an unusual turn, Samsung has agreed to let Tesla assist in maximizing manufacturing efficiency at the Texas factory. Musk has promised to be personally present to accelerate progress.

“This is a critical point, as I will walk the line personally to accelerate the pace of progress. And the fab is conveniently located not far from my house”, he wrote.

The strategic partnership could give Samsung the stable customer volume the company needs to compete with industry leader TSMC, while Tesla secures access to advanced chip manufacturing for its growing AI ambitions.

Artists flee Spotify after Ek’s defense investment

The future of AI

Published 30 July 2025
– By Editorial Staff
1 minute read

Spotify founder Daniel Ek’s investment in the German defense company Helsing is now prompting several international artists to leave the music streaming service in protest. The Australian psychedelic rock band King Gizzard and the Lizard Wizard is the latest name to remove their music from the platform.

Daniel Ek, who is also chairman of the board at Helsing, led an investment of €600 million earlier this year in the German company that specializes in AI-driven autonomous combat solutions. The technology is used for drones and underwater surveillance systems, among other applications.

King Gizzard and the Lizard Wizard announced the decision on Instagram with the words “Fuck Spotify”, explaining that their latest demo recordings will only be available on Bandcamp.

“Spotify CEO Daniel Ek invests millions in AI military drone technology. We just removed our music from the platform”, the band wrote.

The California-based band Xiu Xiu and San Francisco group Deerhoof have made the same choice. Deerhoof expressed their position clearly: “We don’t want our music killing people. We don’t want our success being tied to AI battle tech”.

The protest reflects the music industry’s long-standing ambivalence toward Spotify’s dominant position and impact on artists.

Proton launches privacy-focused AI assistant to compete with ChatGPT

The future of AI

Published 26 July 2025
– By Editorial Staff
The AI assistant Lumo neither stores nor trains on users' conversations and can be used freely without login.
2 minute read

Proton challenges ChatGPT with its new AI assistant Lumo, which promises to never store or train on users’ conversations. The service launches with end-to-end encryption and stricter privacy protections than competing AI services.

The Swiss company Proton, known for its secure email services and VPN solutions, is now expanding into artificial intelligence with the launch of AI assistant Lumo. Unlike established competitors such as OpenAI’s ChatGPT, Google’s Gemini and Anthropic’s Claude, Proton markets its service with promises to never log, store or train its models on users’ questions or conversations.

Lumo can, just like other AI assistants, help users with everyday tasks such as rephrasing emails, summarizing documents and reviewing code. The major difference lies in privacy protection – all chats are end-to-end encrypted and not stored on Proton’s servers.

Privacy-focused alternative in the AI jungle

Proton’s strategy differs markedly from industry standards. ChatGPT stores conversations for 30 days for security reasons, even when chat history is turned off. Gemini may retain user queries for up to 72 hours, while Claude saves chats for up to a month, or longer if they are flagged for review.

An additional advantage for Proton is the company’s Swiss base, which means stricter privacy laws compared to American competitors who may be forced to hand over user data to authorities.

The company has not confirmed which models are used, but Lumo likely builds on smaller, community-developed systems rather than the massive, privately trained models that power services like ChatGPT. This may mean that responses become less detailed or nuanced.

Three service tiers

Lumo is available via the web as well as through apps for iOS and Android. The service is offered in three tiers: two free options and a paid version.

Guest users can ask a limited number of questions per week without an account, but chat history is not saved. Users with free Proton accounts automatically get access to Lumo Free, which includes basic encrypted chat history and support for smaller file uploads.

The paid version Lumo Plus costs approximately $12.99 per month ($9.99 with annual billing) and offers unlimited chats, longer chat history and support for larger file uploads. The price undercuts competitors – ChatGPT Plus, Gemini Advanced and Claude Pro all cost around $20 monthly.

The question that remains to be answered is how well Lumo will compete with models trained on significantly larger datasets. The most advanced AI assistants are powered by enormous amounts of user data, which helps them learn patterns and understand nuances for continuous improvement over time. Proton’s more limited, privacy-centered strategy may affect performance.

Pentagon purchases Musk’s politically incorrect AI models

The future of AI

Published 15 July 2025
– By Editorial Staff
1 minute read

Despite the deep rift with Trump, Elon Musk is now receiving a contract with the Pentagon worth up to $200 million to deliver specially adapted language models for the US military.

The project is called “Grok for Government”, according to a statement posted on X.

Grok’s new AI model has been a major topic of conversation this past week: in establishment media, primarily because an update that removed certain filters led it to break sharply with politically correct patterns, and among the general public, because of the humor many perceived in this.

Among other things, it has been noted how the chatbot writes that certain Jewish organizations, particularly the far-right group ADL, pursue a hostile line against European ethnic groups. For this, the chatbot has been accused of “antisemitism”.


However, the criticism has apparently not prevented the US military from procuring Grok solutions for their purposes.

