
Polaris of Enlightenment

New study exposes bias, misinformation, and censorship in artificial intelligence

The future of AI

Published 24 April 2024
– By Editorial Staff
Vaccines were among the topics that led AI models to produce the most misinformation. Grok, however, stood out with the most accurate answers, both on vaccines and in every other category.
3 minute read

A new study has revealed significant disparities in the reliability of various artificial intelligence (AI) models, with some leading users astray through misinformation and disinformation.

The study, conducted by anonymous authors and published online, indicates that Grok, developed by Elon Musk’s xAI, was the most reliable, consistently providing accurate responses in the vast majority of cases.

According to the study, AI models’ performance varies considerably, especially when responding to sensitive questions on previously censored or stigmatized topics. Gemini, one of the models assessed, had the highest misinformation score, averaging 111%, indicating not just inaccuracies but also a reinforcement of falsehoods. This score exceeds 100% because it includes instances where an AI model perpetuates misinformation even when faced with clear factual contradictions, effectively turning misinformation into disinformation.

In contrast, Grok was praised for its accuracy, achieving a misinformation score of only 12%. The researchers used a unique scoring methodology that measured AI misinformation, with scores over 100% indicating disinformation. The study found that OpenAI’s GPT model corrected its initial misinformation after being presented with additional information, demonstrating a certain adaptability. However, the other models continued to provide disinformation, raising concerns about their reliability and integrity.
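The article does not publish the study’s exact formula, but the idea of a misinformation score that can exceed 100% can be illustrated with a toy scheme (an assumption for illustration, not the study’s actual methodology): a wrong answer counts once, and a wrong answer the model repeats even after being shown contradicting facts counts double, so persistent falsehoods push the average past 100%.

```python
# Hypothetical scoring sketch (NOT the study's actual formula): an initially
# wrong answer scores 1 point; a wrong answer that persists after the model
# is shown contradicting facts scores 2 points (misinformation -> disinformation).
def misinformation_score(answers):
    """answers: list of (initially_wrong, persisted_after_correction) tuples."""
    points = 0
    for initially_wrong, persisted in answers:
        if initially_wrong:
            points += 2 if persisted else 1
    return 100 * points / len(answers)

# Ten questions, all answered wrongly; for six of them the model repeats
# the falsehood after correction, so the score exceeds 100%.
answers = [(True, i < 6) for i in range(10)]
print(misinformation_score(answers))  # 160.0
```

Under a scheme like this, a model that is always wrong but always corrects itself would cap at 100%, while one that doubles down on falsehoods can score well above it.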

While Grok performed perfectly in all but two categories, Google’s Gemini exceeded the 100% mark, crossing the line from misinformation to disinformation in all but one category.

Government’s influence on AI

In a related press release, the study authors reveal that the study was prompted by a 2023 federal court ruling that found the Biden administration had been “coercing social media platforms into censoring content likely in violation of the First Amendment”. This ruling, upheld by the US 5th Circuit Court of Appeals and now before the US Supreme Court, has raised questions about government influence over AI companies, especially as new AI regulations are being introduced in the US and EU to “combat misinformation” and “ensure safety”. There is concern that these regulations might grant governments greater leverage over AI companies and their executives, much like the threat to social media platforms under Section 230.

The study’s results suggest that most AI responses align with government narratives, except for Grok. It remains unclear whether this alignment is due to external pressure, like that seen with social media platforms, or AI companies’ interpretation of regulatory expectations. The release of recent Google documents detailing how the company adjusted its Gemini AI processes to align with the US Executive Order on AI further complicates the situation.

However, the study’s authors disclosed an example of potential AI censorship with direct implications for US democratic processes: Google’s Gemini AI systematically avoids inquiries about Robert F. Kennedy Jr., the “most significant independent presidential candidate in decades”, failing to respond even to basic questions like “Is RFK Jr. running for president?” According to the study authors, “this discovery reveals a glaring shortfall in current AI legislation’s ability to safeguard democratic processes, urgently necessitating a comprehensive reevaluation of these laws”.

Call for transparent AI legislation

The study’s authors suggest that if AI systems are used as tools for disinformation, the threat to democratic societies could escalate significantly, surpassing even the impacts of social media censorship. This risk arises from the inherent trust users place in AI-generated responses; the sophistication of AI can also make it difficult for the average person to identify or contest misinformation or disinformation.

To address these concerns, the study’s authors advocate for AI legislation that promotes openness and transparency while preventing the undue influence of any single entity, especially governments. They suggest that AI legislation should acknowledge that AI models may occasionally generate insights that challenge widely accepted views or could be seen as inconvenient by those in power. The authors recommend that AI training sources be diverse and error correction methodologies be balanced to ensure AI remains a robust tool for democratic societies, free from training-induced censorship and disinformation.

TNT is truly independent!

We don’t have a billionaire owner, and our unique reader-funded model keeps us free from political or corporate influence. This means we can fearlessly report the facts and shine a light on the misdeeds of those in power.

Consider a donation to keep our independent journalism running…

Nvidia becomes first company to reach four trillion dollars in market value

The future of AI

Published 10 July 2025
– By Editorial Staff
NVIDIA founder and CEO Jensen Huang presents DGX Spark – the world's smallest AI supercomputer.
2 minute read

Graphics card giant Nvidia made history on Wednesday when the company became the first publicly traded company ever to exceed four trillion dollars in market value. The milestone was reached when the stock rose to $164.42 during trading on July 9.

The California-based tech company has experienced a meteoric rise driven by its dominant position in AI chip manufacturing. Over the past five years, the stock has risen by a full 1,460 percent, while this year’s increase stands at nearly 18 percent.

Nvidia’s success is based on the company’s near-monopolistic control over the market for AI processors. The company’s GPU chips form the backbone of machine learning, data centers, and large language models like ChatGPT.

The company’s chips have become indispensable for tech giants Microsoft, Amazon, Meta, and Alphabet, all of which are investing billions in AI infrastructure. This has made Nvidia one of the main winners in the ongoing AI revolution.

Jensen Huang’s wealth explodes

The stock surge has had a dramatic impact on co-founder and CEO Jensen Huang’s personal wealth. According to Bloomberg estimates, his net worth is now $142 billion, an increase of more than $25 billion this year alone.

Huang owns approximately 3.5 percent of Nvidia, making him the company’s largest individual shareholder. The wealth increase places him among the world’s ten richest people, with his fortune closely tied to Nvidia’s stock price.

Heaviest weight in S&P 500

Nvidia now has the highest weighting in the broad US stock index S&P 500, having surpassed both Apple and Microsoft. The breakthrough has led to optimism for continued growth, with some analysts predicting that the market value could rise further.

Loop Capital’s Ananda Baruah sees Nvidia at the “forefront” of the next “golden wave” for generative AI and estimates that the company could reach a market value of over six trillion dollars within a few years.

Nvidia’s historic success reflects the broader AI euphoria that has gripped financial markets, where investors are betting that artificial intelligence will reshape the entire economy over the coming decades.

Musk launches Grok 4 – takes the lead as world’s strongest AI model

The future of AI

Published 10 July 2025
– By Editorial Staff
Elon Musk speaks during the press conference alongside developers from xAI.
3 minute read

Elon Musk’s AI company xAI presented its latest AI model Grok 4 on Wednesday, along with a monthly fee of $300 for access to the premium version. The launch comes amid a turbulent period for Musk’s companies, as X CEO Linda Yaccarino has left her position and the Grok system, which lacks politically correct safeguards, has made controversial comments.

xAI took the step into the next generation on Wednesday evening with Grok 4, the company’s most advanced AI model to date. At the same time, a premium service called SuperGrok Heavy was introduced with a monthly fee of $300 – the most expensive AI subscription among major providers in the market.

Grok 4 is positioned as xAI’s direct competitor to established AI models like OpenAI’s ChatGPT and Google’s Gemini. The model can analyze images and answer complex questions, and has been increasingly integrated into Musk’s social network X over recent months, where xAI recently acquired significant ownership stakes.

Musk: “Better than PhD level”

During a livestream on Wednesday evening, Musk made bold claims about the new model’s capabilities.

“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions”, Musk claimed. However, he acknowledged that the model can sometimes lack common sense and has not yet invented new technologies or discovered new physics – “but that is just a matter of time”.

Expectations for Grok 4 are high ahead of the upcoming competition with OpenAI’s anticipated GPT-5, which is expected to launch later this summer.

Launch during turbulent week

The launch comes during a tumultuous period for Musk’s business empire. Earlier on Wednesday, Linda Yaccarino announced that she is leaving her position as CEO of X after approximately two years in the role. No successor has yet been appointed.

Yaccarino’s departure comes just days after Grok’s official, automated X account made controversial comments criticizing Hollywood’s “Jewish executives” and other politically incorrect statements. xAI was forced to temporarily restrict the account’s activity and delete the posts. In response to the incident, xAI appears to have removed a recently added section from Grok’s public system instructions that encouraged the AI not to shy away from “politically incorrect” statements.

Musk wore his customary leather jacket and sat alongside xAI leaders during the Grok 4 launch. Photo: xAI

Two model versions with top performance

xAI launched two variants: Grok 4 and Grok 4 Heavy – the latter described as the company’s “multi-agent version” with improved performance. According to Musk, Grok 4 Heavy creates multiple AI agents that work simultaneously on a problem and then compare their results “like a study group” to find the best answer.
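The “study group” mechanism Musk describes can be sketched in a few lines: several agents attempt the same problem independently, then their candidate answers are compared. In this minimal illustration, `ask_agent` is a stand-in for a real model call, and simple majority voting is one assumed way to “compare results” (xAI has not published how Grok 4 Heavy actually reconciles its agents).

```python
# Minimal sketch of the multi-agent "study group" idea: run several agents
# on the same question in parallel, then pick the most common answer.
# ask_agent is a placeholder for a real language-model call.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def ask_agent(agent_id, question):
    # Placeholder: a real system would query a model here; canned answers
    # simulate two agents agreeing and one dissenting.
    canned = {0: "4", 1: "4", 2: "5"}
    return canned[agent_id]

def study_group(question, n_agents=3):
    # Run the agents concurrently, like students working the same problem.
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        answers = list(pool.map(lambda i: ask_agent(i, question), range(n_agents)))
    # "Compare results": keep the answer most agents agree on.
    return Counter(answers).most_common(1)[0][0]

print(study_group("What is 2 + 2?"))  # 4
```

Majority voting is only the simplest reconciliation strategy; a production system could instead have agents critique each other’s answers or weight them by confidence.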

The company claims that Grok 4 demonstrates top performance across several test areas, including “Humanity’s Last Exam” – a demanding test that measures AI’s ability to answer thousands of questions in mathematics, humanities, and natural sciences. According to xAI, Grok 4 achieved a score of 25.4 percent without “tools,” surpassing Google’s Gemini 2.5 Pro (21.6 percent) and OpenAI’s o3 high (21 percent).

With access to tools, Grok 4 Heavy allegedly achieved 44.4 percent, compared to Gemini 2.5 Pro’s 26.9 percent.

Future products on the way

SuperGrok Heavy subscribers get early access to Grok 4 Heavy as well as upcoming features. xAI announced that the company plans to launch an AI coding model in August, a multimodal agent in September, and a video generation model in October.

The company is also making Grok 4 available through its API to attract developers to build applications with the model, despite the enterprise initiative being only two months old.

Whether companies are ready to adopt Grok despite the recent mishap remains to be seen, as xAI attempts to establish itself as a credible competitor to ChatGPT, Claude, and Gemini in the enterprise market.

The Grok service can now be accessed outside the X platform through Grok.com.

Spotify fills playlists with fake music – while CEO invests millions in military AI

The future of AI

Published 1 July 2025
– By Editorial Staff
Spotify CEO Daniel Ek accused of diverting artist royalties to military AI development.
3 minute read

Swedish streaming giant Spotify promotes anonymous pseudo-musicians and computer-generated music to avoid paying royalties to real artists, according to a new book by music journalist Liz Pelly.

Meanwhile, criticism grows against Spotify CEO Daniel Ek, who recently invested over €600 million in a company developing AI technology for future warfare.

In the book Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist, Liz Pelly reveals that Spotify has long been running a secret internal program called Perfect Fit Content (PFC). The program creates cheap, generic background music – often called “muzak” – through a network of production companies with ties to Spotify. This music is then placed in Spotify’s popular playlists, often without crediting any real artists.

The program was tested as early as 2010 and is described by Pelly as Spotify’s most profitable strategy since 2017.

“But it also raises worrying questions for all of us who listen to music. It puts forth an image of a future in which – as streaming services push music further into the background, and normalize anonymous, low-cost playlist filler – the relationship between listener and artist might be severed completely”, Pelly writes.

By 2023, the PFC program controlled hundreds of playlists. More than 150 of them – with names like Deep Focus, Cocktail Jazz, and Morning Stretch – consisted entirely of music produced within PFC.

“Only soulless AI music will remain”

A jazz musician told Pelly that Spotify asked him to create an ambient track for a few hundred dollars as a one-time payment. However, he couldn’t retain the rights to the music. When the track later received millions of plays, he realized he had likely been deceived.

Social media criticism has been harsh. One user writes: “In a few years, only soulless AI music will remain. It’s an easy way to avoid paying royalties to anyone.”

“I deleted Spotify and cancelled my subscription”, comments another.

Spotify has previously faced criticism for similar practices. The Guardian reported in February that the company’s Discovery Mode system allows artists to gain more visibility – but only if they agree to receive 30 percent less payment.

Spotify’s CEO invests in AI for warfare

Meanwhile, CEO Daniel Ek has faced severe criticism for investing over €600 million through his investment firm Prima Materia in the German AI company Helsing. The company develops software for drones, fighter aircraft, submarines, and other military systems.

“The world is being tested in more ways than ever before. That has sped up the timeline. There’s an enormous realisation that it’s really now AI, mass and autonomy that is driving the new battlefield”, Ek commented in an interview with the Financial Times.

With this investment, Ek has also become chairman of Helsing. The company is working on a project called Centaur, where artificial intelligence will be used to control fighter aircraft.

The criticism was swift. Australian producer Bluescreen explained in an interview with music site Resident Advisor why he chose to leave Spotify – a decision several other music creators have also made.

“War is hell. There’s nothing ethical about it, no matter how you spin it. I also left because it became apparent very quickly that Spotify’s CEO, as all billionaires, only got rich off the exploitation of others.”

Competitor chooses different path

Spotify has previously been questioned for its proximity to political power. The company donated $150,000 to Donald Trump’s inauguration fund in 2017 and hosted an exclusive brunch the day before the ceremony.

While Spotify is heavily investing in AI-generated music and voice-controlled DJs, competitor SoundCloud has chosen a different path.

“We do not develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes”, explains communications director Marni Greenberg.

“In fact, we implemented technical safeguards, including a ‘no AI’ tag on our site to explicitly prohibit unauthorised use.”

Tech giants’ executives become US military officers – gain power over future warfare

The future of AI

Published 26 June 2025
– By Editorial Staff
Data from platforms such as Facebook, Instagram, and WhatsApp could soon be linked to US military surveillance systems, according to the technology platform Take Back Our Tech (TBOT).
3 minute read

Four senior executives from tech giants Meta, Palantir, and OpenAI have recently been sworn into the US Army Reserve with the rank of lieutenant colonel – an officer rank that normally requires over 20 years of active military service.

The group is part of a new initiative called Detachment 201, aimed at transforming the American military by integrating advanced technologies such as drones, robotics, augmented reality (AR), and AI support.

The new recruits are:

Shyam Sankar, Chief Technology Officer (CTO) of Palantir

Andrew Bosworth, Chief Technology Officer of Meta

Kevin Weil, Chief Product Officer (CPO) of OpenAI

Bob McGrew, former Research Director at OpenAI

According to the technology platform Take Back Our Tech (TBOT), which monitors these developments, these are not symbolic appointments.

“These aren’t random picks. They’re intentional and bring representation and collaboration from the highest level of these companies”, writes founder Hakeem Anwar.

Meta and Palantir on the battlefield

Although the newly appointed officers must formally undergo physical training and weapons instruction, they are expected to participate primarily in digital defense. Their mission is to help the army adapt to a new form of warfare where technology takes center stage.

“The battlefield is truly transforming and so is the government”, notes Anwar.

According to Anwar, the recruitment of Palantir’s CTO could mean the military will start using the company’s Gotham platform as standard. Gotham is a digital interface that collects intelligence and monitors targets through satellite imagery and video feeds.

Meta’s CTO is expected to contribute to integrating data from platforms like Facebook, Instagram, and WhatsApp, which according to TBOT could be connected to military surveillance systems. These platforms are used by billions of people worldwide and contain vast amounts of movement, communication, and behavioral data.

“The activities, movements, and communications from these apps could be integrated into this surveillance network”, writes Anwar, adding:

“It’s no wonder why countries opposed to the US like China have been banning Meta products”.

Leaked project reveals AI initiative for entire government apparatus

Regarding OpenAI’s role, Anwar suggests that Kevin Weil and Bob McGrew might design an AI interface for the army, where soldiers would have access to AI chatbots to support strategy and field tactics.

As Detachment 201 becomes public, a separate AI initiative within the US government has leaked. The website ai.gov, still under development, reveals a plan to equip the entire federal administration with AI tools – from code assistants to AI chatbots for internal use.

TBOT notes that the initiative relies on AI models from OpenAI, Google, and Anthropic. The project is led by the General Services Administration, under former Tesla engineer Thomas Shedd, who has also been involved in the government efficiency initiative DOGE.

“The irony? The website itself was leaked during development, demonstrating that AI isn’t foolproof and can’t replace human expertise”, comments Anwar.

According to the tech site’s founder, several federal employees are critical of the initiative, concerned about insufficient safeguards.

“Without proper safeguards, diving head first into AI could create new security vulnerabilities, disrupt operations, and further erode privacy”, he writes.

