
What we know about the newly launched Grok 3

The future of AI

Published 20 February 2025
– By Editorial Staff
3 minute read

Elon Musk’s AI company xAI has launched the third-generation language model Grok 3, which the company says outperforms competitors such as ChatGPT and Google’s Gemini. During a live presentation, Musk claimed that the new model is “maximally truth-seeking” and ten times more capable than its predecessor.

Grok 3, trained on 100,000 Nvidia H100 GPUs at xAI’s Colossus Supercluster in Memphis, USA, is described as a milestone in artificial intelligence. According to xAI, the model has a unique ability to combine logical reasoning with extensive data processing, which was demonstrated during the presentation by creating a game mixing Tetris and Bejeweled and by planning a complex space journey from Earth to Mars. Musk emphasized that Grok 3 is designed to “favor truth over political correctness” – a direct criticism of competitors he considers too censored.

Technical capacity and competitiveness

According to data from xAI, Grok 3 has outperformed GPT-4o and Google’s Gemini in academic tests, including doctoral-level physics and biology. The model comes in two versions: the full-scale Grok 3 and the lighter Grok 3 mini, which prioritizes speed over accuracy. It also introduces the DeepSearch feature, an AI-powered search engine that compiles information from across the internet into coherent answers.

Early tests by experts such as Andrej Karpathy, former head of AI at Tesla, confirm that Grok 3 is at the forefront of logical reasoning, though he notes that the gap to competitors such as OpenAI’s o1-pro is marginal. Still, the pace of development is impressive: xAI built its supercomputer in eight months, compared with an industry standard of four years, according to Nvidia CEO Jensen Huang.

Availability and reviews

Grok 3 is being released first to paying users of X (formerly Twitter) through the Premium+ subscription. A more expensive tier, SuperGrok, provides access to advanced features such as unlimited image generation. However, Musk warned during the launch that the first version is a “beta” that may contain bugs, and asked users for patience.

Criticism of the launch has been harsh. Researchers and tech experts question xAI’s benchmark results, which they say are difficult to verify independently. Others point to risks of training AI on data from X, where misinformation and spam posts are common.

Some experts, such as AI researcher Findecanor, also criticize the name “Grok” – a term from science fiction describing deep understanding – calling it misleading for a model that, in their view, lacks genuine insight. In addition, Musk’s own previous controversial statements about the potential dangers of AI have fueled skepticism toward his platform.

Vision for the future

Despite the criticism, xAI is betting big. The company plans to release Grok 2 as open source once Grok 3 is stabilized, which would allow community contributions to the technology. A voice feature and integrations for businesses via API are also in the works.

Meanwhile, a power struggle is underway in the AI industry. Musk recently tried to buy OpenAI for $97 billion, an offer rejected by CEO Sam Altman, who described it as an attempt to “destabilize” the competitor. With Grok 3, xAI is positioning itself as a key player in the global AI race – but the question is whether its promises can be fulfilled without increasing polarization around the ethics and trustworthiness of the technology.

Nvidia becomes first company to reach four trillion dollars in market value

The future of AI

Published 10 July 2025, 19:46
– By Editorial Staff
NVIDIA founder and CEO Jensen Huang presents DGX Spark – the world's smallest AI supercomputer.
2 minute read

Graphics card giant Nvidia made history on Wednesday, becoming the first publicly traded company ever to exceed four trillion dollars in market value. The milestone was reached when the stock rose to $164.42 during trading on July 9.

The California-based tech company has experienced a meteoric rise driven by its dominant position in AI chip manufacturing. Over the past five years, the stock has risen by a full 1,460 percent, while this year’s increase stands at nearly 18 percent.

Nvidia’s success is based on the company’s near-monopolistic control over the market for AI processors. The company’s GPU chips form the backbone of machine learning, data centers, and large language models like ChatGPT.

The company’s chips have become indispensable for tech giants Microsoft, Amazon, Meta, and Alphabet, all of which are investing billions in AI infrastructure. This has made Nvidia one of the main winners in the ongoing AI revolution.

Jensen Huang’s wealth explodes

The stock surge has had a dramatic impact on co-founder and CEO Jensen Huang’s personal wealth. According to Bloomberg estimates, his net worth is now $142 billion, an increase of more than $25 billion this year alone.

Huang owns approximately 3.5 percent of Nvidia, making him the company’s largest individual shareholder. The wealth increase places him among the world’s ten richest people, with his fortune closely tied to Nvidia’s stock price.

Heaviest weight in S&P 500

Nvidia now has the highest weighting in the broad US stock index S&P 500, having surpassed both Apple and Microsoft. The breakthrough has led to optimism for continued growth, with some analysts predicting that the market value could rise further.

Loop Capital’s Ananda Baruah sees Nvidia at the “forefront” of the next “golden wave” for generative AI and estimates that the company could reach a market value of over six trillion dollars within a few years.

Nvidia’s historic success reflects the broader AI euphoria that has gripped financial markets, where investors are betting that artificial intelligence will reshape the entire economy over the coming decades.

Musk launches Grok 4 – takes the lead as world’s strongest AI model

The future of AI

Published 10 July 2025, 10:55
– By Editorial Staff
Elon Musk speaks during the press conference alongside developers from xAI.
3 minute read

Elon Musk’s AI company xAI presented its latest AI model, Grok 4, on Wednesday, along with a $300 monthly fee for access to the premium version. The launch comes amid a turbulent period for Musk’s companies: X CEO Linda Yaccarino has left her position, and the Grok chatbot has made controversial comments after its guardrails against “politically incorrect” output were loosened.

xAI took the step into the next generation on Wednesday evening with Grok 4, the company’s most advanced AI model to date. At the same time, a premium service called SuperGrok Heavy was introduced with a monthly fee of $300 – the most expensive AI subscription among major providers in the market.

Grok 4 is positioned as xAI’s direct competitor to established AI models like OpenAI’s ChatGPT and Google’s Gemini. The model can analyze images and answer complex questions, and in recent months it has been increasingly integrated into Musk’s social network X, in which xAI recently acquired a significant ownership stake.

Musk: “Better than PhD level”

During a livestream on Wednesday evening, Musk made bold claims about the new model’s capabilities.

“With respect to academic questions, Grok 4 is better than PhD level in every subject, no exceptions”, Musk claimed. However, he acknowledged that the model can sometimes lack common sense and has not yet invented new technologies or discovered new physics – “but that is just a matter of time”.

Expectations for Grok 4 are high ahead of the upcoming competition with OpenAI’s anticipated GPT-5, which is expected to launch later this summer.

Launch during turbulent week

The launch comes during a tumultuous period for Musk’s business empire. Earlier on Wednesday, Linda Yaccarino announced that she is leaving her position as CEO of X after approximately two years in the role. No successor has yet been appointed.

Yaccarino’s departure comes just days after Grok’s official, automated X account made controversial comments criticizing Hollywood’s “Jewish executives” and other politically incorrect statements. xAI was forced to temporarily restrict the account’s activity and delete the posts. In response to the incident, xAI appears to have removed a recently added section from Grok’s public system instructions that encouraged the AI not to shy away from “politically incorrect” statements.

Musk wore his customary leather jacket and sat alongside xAI leaders during the Grok 4 launch. Photo: xAI

Two model versions with top performance

xAI launched two variants: Grok 4 and Grok 4 Heavy – the latter described as the company’s “multi-agent version” with improved performance. According to Musk, Grok 4 Heavy creates multiple AI agents that work simultaneously on a problem and then compare their results “like a study group” to find the best answer.
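
xAI has not published how this works under the hood, so the pattern can only be illustrated in general terms. The sketch below is a minimal, hypothetical Python example of the “study group” idea – several agents answer the same question in parallel and a simple majority vote picks the result. The ask_agent() function is a stand-in for a real model call and is not part of any xAI API.

```python
# Minimal sketch of a "study group" multi-agent pattern, loosely inspired by
# xAI's description of Grok 4 Heavy. This is not xAI's implementation:
# ask_agent() is a hypothetical stand-in for a call to any language model.
from collections import Counter
from concurrent.futures import ThreadPoolExecutor

def ask_agent(agent_id: int, question: str) -> str:
    """Hypothetical stand-in for one model call; returns a canned answer here."""
    canned = ["42", "42", "41"]
    return canned[agent_id % len(canned)]

def study_group(question: str, n_agents: int = 3) -> str:
    """Run several agents in parallel, then keep the most common answer."""
    with ThreadPoolExecutor(max_workers=n_agents) as pool:
        answers = list(pool.map(lambda i: ask_agent(i, question), range(n_agents)))
    # "Compare results like a study group": here reduced to a simple majority vote.
    winner, _ = Counter(answers).most_common(1)[0]
    return winner

if __name__ == "__main__":
    print(study_group("What is 6 * 7?"))  # prints "42"
```

In practice the comparison step could just as well be a separate judge model rather than a vote; the point is only that several independent attempts are reconciled into one answer.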

The company claims that Grok 4 demonstrates top performance across several test areas, including “Humanity’s Last Exam” – a demanding test that measures AI’s ability to answer thousands of questions in mathematics, humanities, and natural sciences. According to xAI, Grok 4 achieved a score of 25.4 percent without “tools,” surpassing Google’s Gemini 2.5 Pro (21.6 percent) and OpenAI’s o3 high (21 percent).

With access to tools, Grok 4 Heavy allegedly achieved 44.4 percent, compared to Gemini 2.5 Pro’s 26.9 percent.

Future products on the way

SuperGrok Heavy subscribers get early access to Grok 4 Heavy as well as upcoming features. xAI announced that the company plans to launch an AI coding model in August, a multimodal agent in September, and a video generation model in October.

The company is also making Grok 4 available through its API to attract developers to build applications with the model, despite the enterprise initiative being only two months old.
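
The article does not detail the API itself, so the snippet below is only a hedged sketch of what a call could look like from Python if the endpoint follows the OpenAI-compatible convention common among LLM providers. The base URL, model identifier, and environment variable are assumptions and should be checked against xAI’s official documentation.

```python
# Hedged sketch of calling Grok 4 over an assumed OpenAI-compatible API.
# The base URL, model name, and XAI_API_KEY variable are assumptions --
# consult xAI's official documentation before relying on them.
import os
from openai import OpenAI  # pip install openai

client = OpenAI(
    base_url="https://api.x.ai/v1",        # assumed xAI endpoint
    api_key=os.environ["XAI_API_KEY"],     # hypothetical environment variable
)

response = client.chat.completions.create(
    model="grok-4",                        # assumed model identifier
    messages=[
        {"role": "user", "content": "Summarize the Grok 4 launch in one sentence."}
    ],
)
print(response.choices[0].message.content)
```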

Whether companies are ready to adopt Grok despite the recent mishap remains to be seen, as xAI attempts to establish itself as a credible competitor to ChatGPT, Claude, and Gemini in the enterprise market.

The Grok service can now be accessed outside the X platform through Grok.com.

Spotify fills playlists with fake music – while CEO invests millions in military AI

The future of AI

Published 1 July 2025
– By Editorial Staff
Spotify CEO Daniel Ek accused of diverting artist royalties to military AI development.
3 minute read

Swedish streaming giant Spotify promotes anonymous pseudo-musicians and computer-generated music to avoid paying royalties to real artists, according to a new book by music journalist Liz Pelly.

Meanwhile, criticism grows against Spotify CEO Daniel Ek, who recently invested over €600 million in a company developing AI technology for future warfare.

In the book Mood Machine: The Rise of Spotify and the Costs of the Perfect Playlist, Liz Pelly reveals that Spotify has long been running a secret internal program called Perfect Fit Content (PFC). The program creates cheap, generic background music – often called “muzak” – through a network of production companies with ties to Spotify. This music is then placed in Spotify’s popular playlists, often without crediting any real artists.

The program was tested as early as 2010 and is described by Pelly as Spotify’s most profitable strategy since 2017.

“But it also raises worrying questions for all of us who listen to music. It puts forth an image of a future in which – as streaming services push music further into the background, and normalize anonymous, low-cost playlist filler – the relationship between listener and artist might be severed completely”, Pelly writes.

By 2023, the PFC program controlled hundreds of playlists. More than 150 of them – with names like Deep Focus, Cocktail Jazz, and Morning Stretch – consisted entirely of music produced within PFC.

“Only soulless AI music will remain”

A jazz musician told Pelly that Spotify asked him to create an ambient track for a few hundred dollars as a one-time payment. However, he couldn’t retain the rights to the music. When the track later received millions of plays, he realized he had likely been deceived.

Social media criticism has been harsh. One user writes: “In a few years, only soulless AI music will remain. It’s an easy way to avoid paying royalties to anyone.”

“I deleted Spotify and cancelled my subscription”, comments another.

Spotify has previously faced criticism for similar practices. The Guardian reported in February that the company’s Discovery Mode system allows artists to gain more visibility – but only if they agree to receive 30 percent less payment.

Spotify’s CEO invests in AI for warfare

Meanwhile, CEO Daniel Ek has faced severe criticism for investing over €600 million through his investment firm Prima Materia in the German AI company Helsing. The company develops software for drones, fighter aircraft, submarines, and other military systems.

– The world is being tested in more ways than ever before. That has sped up the timeline. There’s an enormous realisation that it’s really now AI, mass and autonomy that is driving the new battlefield, Ek commented in an interview with the Financial Times.

With this investment, Ek has also become chairman of Helsing. The company is working on a project called Centaur, where artificial intelligence will be used to control fighter aircraft.

The criticism was swift. Australian producer Bluescreen explained in an interview with music site Resident Advisor why he chose to leave Spotify – a decision several other music creators have also made.

– War is hell. There’s nothing ethical about it, no matter how you spin it. I also left because it became apparent very quickly that Spotify’s CEO, as all billionaires, only got rich off the exploitation of others.

Competitor chooses different path

Spotify has previously been questioned for its proximity to political power. The company donated $150,000 to Donald Trump’s inauguration fund in 2017 and hosted an exclusive brunch the day before the ceremony.

While Spotify is heavily investing in AI-generated music and voice-controlled DJs, competitor SoundCloud has chosen a different path.

– We do not develop AI tools or allow third parties to scrape or use SoundCloud content from our platform for AI training purposes, explains communications director Marni Greenberg.

– In fact, we implemented technical safeguards, including a ‘no AI’ tag on our site to explicitly prohibit unauthorised use.

Tech giants’ executives become US military officers – gain power over future warfare

The future of AI

Published 26 June 2025
– By Editorial Staff
Data from platforms such as Facebook, Instagram, and WhatsApp could soon be linked to military surveillance systems, according to the tech platform Take Back Our Tech (TBOT).
3 minute read

Four senior executives from tech giants Meta, Palantir, and OpenAI have recently been sworn into the US Army Reserve with the rank of lieutenant colonel – an officer rank that normally requires over 20 years of active military service.

The group is part of a new initiative called Detachment 201, aimed at transforming the American military by integrating advanced technologies such as drones, robotics, augmented reality (AR), and AI support.

The new recruits are:

Shyam Sankar, Chief Technology Officer (CTO) of Palantir

Andrew Bosworth, Chief Technology Officer of Meta

Kevin Weil, Chief Product Officer (CPO) of OpenAI

Bob McGrew, former Research Director at OpenAI

According to the technology platform Take Back Our Tech (TBOT), which monitors these developments, these are not symbolic appointments.

“These aren’t random picks. They’re intentional and bring representation and collaboration from the highest level of these companies”, writes founder Hakeem Anwar.

Meta and Palantir on the battlefield

Although the newly appointed officers must formally undergo physical training and weapons instruction, they are expected to participate primarily in digital defense. Their mission is to help the army adapt to a new form of warfare where technology takes center stage.

“The battlefield is truly transforming and so is the government”, notes Anwar.

According to Anwar, the recruitment of Palantir’s CTO could mean the military will start using the company’s Gotham platform as standard. Gotham is a digital interface that collects intelligence and monitors targets through satellite imagery and video feeds.

Meta’s CTO is expected to contribute to integrating data from platforms like Facebook, Instagram, and WhatsApp, which according to TBOT could be connected to military surveillance systems. These platforms are used by billions of people worldwide and contain vast amounts of movement, communication, and behavioral data.

“The activities, movements, and communications from these apps could be integrated into this surveillance network”, writes Anwar, adding:

“It’s no wonder why countries opposed to the US like China have been banning Meta products”.

Leaked project reveals AI initiative for entire government apparatus

Regarding OpenAI’s role, Anwar suggests that Kevin Weil and Bob McGrew might design an AI interface for the army, where soldiers would have access to AI chatbots to support strategy and field tactics.

Around the time Detachment 201 was made public, a separate AI initiative within the US government leaked. The website ai.gov, still under development, reveals a plan to equip the entire federal administration with AI tools – from code assistants to AI chatbots for internal use.

TBOT notes that the initiative relies on AI models from OpenAI, Google, and Anthropic. The project is led by the General Services Administration under former Tesla engineer Thomas Shedd, who has also been involved with the Department of Government Efficiency (DOGE).

“The irony? The website itself was leaked during development, demonstrating that AI isn’t foolproof and can’t replace human expertise”, comments Anwar.

According to the tech site’s founder, several federal employees are critical of the initiative, concerned about insufficient safeguards.

“Without proper safeguards, diving head first into AI could create new security vulnerabilities, disrupt operations, and further erode privacy”, he writes.
