
OpenAI launches GPT-5 – Here are the new features in the latest ChatGPT model

The future of AI

Published 8 August 2025
– By Editorial Staff
"GPT-5 is the first time that it really feels like talking to an expert in any topic, like a PhD-level expert", claims CEO Sam Altman during the company's presentation of the new model.

OpenAI released its new flagship model GPT-5 on Thursday, which is now available free of charge to all users of the ChatGPT chatbot service. The American AI giant claims that the new model is “the best in the world” and takes a significant step toward developing artificial intelligence that can perform better than humans in most economically valuable work tasks.

GPT-5 differs from previous versions by combining fast responses with advanced problem-solving capabilities. While previous AI chatbots could primarily provide smart answers to questions, GPT-5 can perform complex tasks for users – such as creating software applications, navigating calendars, or compiling research reports, writes TechCrunch.

— Having something like GPT-5 would be pretty much unimaginable at any previous time in history, said OpenAI CEO Sam Altman during a press conference.

Better than competitors

According to OpenAI, GPT-5 performs somewhat better than competing AI models from companies like Anthropic, Google DeepMind, and Elon Musk’s xAI on several important tests. In programming, the model achieves 74.9 percent on SWE-bench Verified, a benchmark of real-world coding tasks – marginally ahead of Anthropic’s latest model Claude Opus 4.1, which reached 74.5 percent.

A particularly important improvement is that GPT-5 “hallucinates” – that is, makes up incorrect information – significantly less than previous models. When tested on health-related questions, the model gives incorrect answers only 1.6 percent of the time, compared to over 12 percent for OpenAI’s previous models.

This is particularly relevant since millions of people use AI chatbots for health advice, even though such tools are no substitute for professional medical care.

New features and pricing models

The company has also simplified the user experience. Instead of requiring users to choose the right settings, GPT-5 has an automatic router that decides how best to respond – either quickly, or by “thinking through” the answer more thoroughly.
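OpenAI has not published how this router makes its decision, so the following is only a conceptual sketch of the general pattern – dispatching each request to a fast or a deliberate model based on an estimated difficulty score. All names and heuristics here are hypothetical:

```python
# Hypothetical sketch of an automatic model router (illustrative only;
# OpenAI has not disclosed GPT-5's actual routing logic).

def estimate_difficulty(prompt: str) -> float:
    """Toy heuristic: long or reasoning-heavy prompts score higher."""
    signals = ["prove", "debug", "step by step", "calculate"]
    score = min(len(prompt) / 2000, 0.5)          # length contributes up to 0.5
    score += 0.5 * any(s in prompt.lower() for s in signals)
    return score

def route(prompt: str) -> str:
    """Send easy prompts to a fast model, hard ones to a reasoning model."""
    return "fast-model" if estimate_difficulty(prompt) < 0.5 else "reasoning-model"

print(route("What's the capital of France?"))                    # fast-model
print(route("Prove step by step that sqrt(2) is irrational."))   # reasoning-model
```

The real system presumably weighs far richer signals (conversation context, tool use, explicit user requests), but the fast-versus-deliberate dispatch is the idea the company describes.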

ChatGPT also gets four new personalities that users can choose between: Cynic, Robot, Listener, and Nerd. These customize how the model responds without users needing to specify it in each request.

For developers, GPT-5 is launched in three sizes via OpenAI’s programming interface, with the base model priced at €1.15 per million input tokens and €9.20 per million output tokens.
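At those rates, the cost of a request is a simple linear function of token counts (a token is roughly three-quarters of an English word). A quick worked example using the base-model prices quoted above:

```python
# Worked cost example at the quoted base-model rates (euros per token).
INPUT_PRICE = 1.15 / 1_000_000
OUTPUT_PRICE = 9.20 / 1_000_000

def request_cost(input_tokens: int, output_tokens: int) -> float:
    return input_tokens * INPUT_PRICE + output_tokens * OUTPUT_PRICE

# Summarizing a 50,000-token document into a 2,000-token answer:
print(f"{request_cost(50_000, 2_000):.4f} EUR")   # 0.0759 EUR
```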

The launch comes after an intense week for OpenAI, which also released an open AI model that developers can download for free. ChatGPT has grown to become one of the world’s most popular consumer products with over 700 million users every week – nearly 10 percent of the world’s population.


Researcher: We risk losing ourselves to AI gods

The future of AI

Published 10 October 2025, 13:24
– By Editorial Staff
"We sell our ability to be courageous and buy security from the machines", argues researcher Carl Öhman.

The upcoming book “Gods of Data” examines AI from a religion-critical perspective. Carl Öhman, a researcher at the Department of Government at Uppsala University in Sweden, argues that our relationship with today’s AI systems resembles humanity’s relationship with gods – and investigates what would happen if power were handed over to AI entirely.

As AI has developed, a tool initially used as a more personal version of Google has also taken on the role of household advisor. People increasingly turn to AI with personal questions – about health, psychology, and even relationship problems.

Öhman argues that AI has begun to resemble gods – a “kind of personified amalgamation of society’s collective knowledge and authority” – and in a research project he studies what humanity would lose by letting itself be governed entirely by technology, even a flawless one.

In a thought experiment, he describes an everyday couple who have started arguing over differing values and turn to an AI relationship counselor.

— They ask: ‘Hi, should we continue being together?’ The AI has access to all their data – their DNA, childhood photos, everything they’ve ever written and searched for – and has been trained on millions of similar couples. It says: ‘With 98 percent probability this will end in catastrophe. You should break up today. In fact, I’ve already found replacement partners for you who are much better matches’, he says in the Research Podcast.

Buying security

Öhman argues that even if there is no rational reason for the couple not to obey the AI and break up, something feels lost. In this particular case, the couple would lose faith in themselves and in their relationship.

— Love is always a risk. All interpersonal relationships carry the risk of being betrayed, of being hurt, of something going wrong. We can absolutely use technology to minimize that risk, perhaps even eliminate it entirely. The point is that something is then lost. We sell our ability to be courageous and buy security from the machines, he says.

World daddy in AI form

The research project also examines other relationships where AI is taking an ever larger role, such as parenthood. Today there are a number of AI apps designed to help adults manage their relationships with their children – for example by generating personalized responses or trying to head off conflicts before they arise.

— Just like in the example of the young couple in love, something is lost here. In this particular chapter I use Sigmund Freud and his idea that belief in God is a kind of refusal to be an adult – that there is some kind of world daddy who ultimately always has the right answers. And here it becomes much the same: there is a world daddy in the form of AI who becomes the real parent in your relationship with your children. And you increasingly identify as a kind of child of the AI parent who has the final answers, he says.

Handing over power over ourselves

Öhman argues that it might feel nice to avoid getting your heart broken, or to prevent conflicts with your children, but that one must be aware that there is a price when AI is given that power. He notes that when people talk about AI taking over, it is usually imagined as something violent, where “the machines come and take our lives from us”.

— But the point of my book, and of this project, is that it is we who hand over power over our lives, our courage, our faith, and ultimately ourselves, he says.

Professor: We’re trading source criticism for speedy AI responses

The future of AI

Published 9 October 2025, 10:13
– By Editorial Staff
AI has become a natural companion in our daily lives – but what happens if we stop thinking for ourselves and take the chatbot's answers as truth?

Professor Olof Sundin warns that generative AI undermines our fundamental ability to evaluate information.

When sources disappear and answers are built on probability calculations, we risk losing our capacity for source criticism.

— What we see is a paradigm shift in how we traditionally search, evaluate and understand information, states Sundin, professor of library and information science at Lund University in southern Sweden.

When we Google, we get links to sources that we can, if we choose, examine and assess for credibility. In language models like ChatGPT, users get a ready-made answer, but the sources are often invisible – and frequently absent altogether.

— The answer is based on probability calculations of the words you’re interested in, not on verifiable facts. These language models guess which words are likely to come next, explains Olof Sundin.

Without sources, transparency disappears and the responsibility for evaluating the information presented falls entirely on the user.

— It’s very difficult to evaluate knowledge without sources if you don’t know the subject, since it’s a source-critical task, he explains.

“More dependent on the systems”

Some AI systems have tried to meet the criticism through RAG (Retrieval-Augmented Generation), where the language model summarizes information from actual sources – but research shows a concerning pattern.

— Studies from, for example, the Pew Research Center show that users are less inclined to follow links than before. Fewer clicks on original sources, like blogs, newspapers and Wikipedia, threaten the digital knowledge ecosystem, argues Sundin.

— It has probably always been the case that we often search for answers and not sources. But when we get only answers and no sources, we become worse at source criticism and more dependent on the systems.

Research also shows that people themselves underestimate how much trust they actually have in AI answers.

— People often say they only trust AI when it comes to simple questions. But research shows that in everyday life they actually trust AI more than they think, the professor notes.

Vulnerable to influence

How language models are trained and moderated can make them vulnerable to influence, and Sundin urges all users to consider who decides how language models are actually trained, on which texts and for what purpose.

Generative AI also tends to produce incorrect answers that look “serious” and correct, which can damage trust in knowledge across society.

— When trust is eroded, there’s a risk that people start distrusting everything, and then they can reason that they might as well believe whatever they want, continues Olof Sundin.

The professor sees a great danger to two necessary prerequisites for being able to exercise democratic rights – critical thinking about sources and the ability to evaluate different voices.

— When the flow of knowledge and information becomes less transparent – when we no longer understand why we encounter what we encounter online – we risk losing that ability. This is an issue we must take seriously – before we let our ‘digital friends’ take over completely, he concludes.

Language models

AI services like ChatGPT are built on language models (such as GPT-4) that are trained on enormous amounts of text. The model predicts which word is likely to come next in a sentence, based on patterns in language usage.

It doesn't "know" what is actually true – it "guesses" what is correct based on probability calculations.
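To make that “guessing” concrete, here is a deliberately tiny sketch in Python. It is not how ChatGPT works internally – real systems use neural networks over tokens rather than word-count tables – but it shows the same predict-the-next-word principle the box describes:

```python
import random

# Toy next-word model: count which word follows which in a tiny corpus,
# then "guess" continuations by sampling from those frequencies.
corpus = "the cat sat on the mat the cat ate the fish".split()

counts: dict[str, dict[str, int]] = {}
for prev, nxt in zip(corpus, corpus[1:]):
    counts.setdefault(prev, {})
    counts[prev][nxt] = counts[prev].get(nxt, 0) + 1

def next_word(prev: str) -> str:
    """Sample the next word in proportion to how often it followed `prev`."""
    options = counts[prev]
    return random.choices(list(options), weights=list(options.values()))[0]

# After "the", this model says "cat" half the time, "mat" or "fish" otherwise.
# It has no notion of truth, only of what tends to come next.
print(next_word("the"))
```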

RAG (Retrieval-Augmented Generation)

RAG combines AI-generated responses with information retrieved from real sources, such as the top three links in a Google search.

The method provides better transparency than AI services that respond entirely without source references, but studies show that users nevertheless click less and less on the links to original sources.
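In outline, RAG is simply retrieve-then-generate. The sketch below shows the shape of such a pipeline; `search` and `llm` are hypothetical stand-ins for a real search engine and language model, not any specific product’s API:

```python
# Minimal retrieve-then-generate (RAG) sketch with placeholder components.

def search(query: str, k: int = 3) -> list[dict]:
    """Stand-in for a search engine: returns the top-k sources with URLs."""
    return [{"url": f"https://example.org/doc{i}", "text": "..."} for i in range(k)]

def llm(prompt: str) -> str:
    """Stand-in for a language model call."""
    return "Answer grounded in the numbered sources above. [1][2][3]"

def rag_answer(question: str) -> str:
    sources = search(question)
    context = "\n".join(f"[{i + 1}] {s['url']}: {s['text']}"
                        for i, s in enumerate(sources))
    answer = llm(f"Sources:\n{context}\n\nQuestion: {question}\nAnswer with citations:")
    # Returning the links is what preserves transparency; it is exactly
    # this step that users increasingly skip by not clicking through.
    links = ", ".join(s["url"] for s in sources)
    return f"{answer}\n(Sources: {links})"

print(rag_answer("What is source criticism?"))
```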

Anthropic challenges Google and OpenAI with new AI flagship model

The future of AI

Published 30 September 2025
– By Editorial Staff
AI companies' race continues at a rapid pace, now with a new model from Anthropic.

AI company Anthropic launches Claude Sonnet 4.5, described as the company’s most advanced AI system to date and market-leading for programming. According to the company, the model performs better than competitors from Google and OpenAI.

Anthropic has released its new flagship model Claude Sonnet 4.5, which the company claims is the best on the market for coding. According to reports, the model outperforms both Google’s Gemini 2.5 Pro and OpenAI’s GPT-5 on several coding benchmarks, writes TechCrunch.

One of the most remarkable features is the model’s ability to work independently for extended periods. During early testing with enterprise customers, Claude Sonnet 4.5 has been observed coding autonomously for up to 30 hours. During these work sessions, the AI model has not only built applications but also set up database services, purchased domain names, and conducted security audits.

Focus on safety and reliability

Anthropic emphasizes that Claude Sonnet 4.5 is also their safest model to date, with enhanced protection against manipulation and barriers against harmful content. The company states that the model can create “production-ready” applications rather than just prototypes, representing a step forward in reliability.

The model is available via the Claude API and in the Claude chatbot. Pricing for developers is set at 3 dollars per million input tokens and 15 dollars per million output tokens.

Fast pace in the AI race

The launch comes less than two months after the company’s previous flagship model, Claude Opus 4.1. This rapid development pace illustrates, according to TechCrunch, how difficult it is for AI companies to maintain an advantage in the intense competition.

Anthropic’s models have become popular among developers, and major tech companies like Apple and Meta are reported to use Claude internally.

AI-created viruses can kill bacteria

The future of AI

Published 28 September 2025
– By Editorial Staff
Bacteriophages attach to bacteria, inject their DNA and multiply until the bacteria burst. AI can now design new variants from scratch.

Researchers in California have used artificial intelligence to design viruses that can reproduce and kill bacteria.

The breakthrough opens up new medical treatments – but also risks becoming a dangerous weapon in the wrong hands.

Researchers at Stanford University and the Arc Institute have for the first time succeeded in creating complete genomes using artificial intelligence. Their AI-designed viruses can actually reproduce and kill bacteria.

— That was pretty striking, just actually seeing, like, this AI-generated sphere, says Brian Hie, who leads the laboratory at the Arc Institute where the work was carried out.

The team used an AI called Evo, trained on genomes from around 2 million bacteriophages (viruses that attack bacteria). They chose to work with phiX174, a simple virus with just 11 genes and roughly 5,000 DNA letters.

16 of 302 worked

The researchers let the AI design 302 different genome variants, which were then chemically manufactured as DNA strands. When they mixed these with E. coli bacteria, they achieved a breakthrough: 16 of the designs worked and created viruses that could reproduce.

— They saw viruses with new genes, with truncated genes, and even different gene orders and arrangements, says Jef Boeke, biologist at NYU Langone Health who was given advance access to the study.

Since viruses are not considered living organisms, this is not yet truly AI-designed life – but it is an important first step toward that technology.

Major medical potential

The technology has great potential in medicine. “Most gene therapy uses viruses to shuttle genes into patients’ bodies, and AI might develop more effective ones”, explains Samuel King, the student who led the project.

Doctors have previously tried so-called phage therapy to combat serious bacterial infections, something that AI-designed viruses could improve.

“Grave concerns”

But the technology’s development also raises strong concerns. The researchers have deliberately avoided training their AI on viruses that infect humans, but others could misuse the method.

— One area where I urge extreme caution is any viral enhancement research, especially when it’s random so you don’t know what you are getting. If someone did this with smallpox or anthrax, I would have grave concerns, warns J. Craig Venter, a pioneer in synthetic biology.

Venter believes that the technology is fundamentally based on the same trial-and-error principle that he himself used two decades ago, just much faster.

Future challenges

Creating larger organisms is significantly more difficult – E. coli has a thousand times more DNA than phiX174. “The complexity would rocket from staggering to way way more than the number of subatomic particles in the universe”, explains Boeke.

Jason Kelly, CEO of biotech company Ginkgo Bioworks, believes that automated laboratories where AI continuously improves its genome designs will be needed for future breakthroughs.

— This would be a nation-scale scientific milestone, as cells are the building blocks of all life. The US should make sure we get to it first, says Kelly.
