Here’s the Chinese AI model that’s shaking up the tech world

Published 31 January 2025
– By Editorial Staff
While OpenAI's GPT-4 was said to have cost over $100 million to train, DeepSeek claims their model only required $6 million.

The Chinese AI model DeepSeek has very quickly become one of the most downloaded apps and attracted a lot of attention in the tech industry. Its unexpected success has not only surprised investors but also affected the market capitalization of several major tech companies.

DeepSeek is an AI-powered chatbot and language model that works in a similar way to OpenAI’s ChatGPT. It is used to generate text, answer questions, and help users with tasks such as coding and mathematical calculations. The model has been compared to OpenAI’s o1 and is considered an advanced reasoning model, allowing it to process complex questions and provide multi-stage answers.

One of the most notable aspects of the chatbot is its low development cost. While OpenAI’s GPT-4 was said to have cost over $100 million to train, DeepSeek claims that their model only required $6 million. This has been made possible by a combination of advanced memory management and the efficient use of less powerful chips, making the model both cheaper and more resource efficient.

DeepSeek is not just a technological innovation, however, but also part of China’s larger strategy to reduce its dependence on Western technology. The company has managed to build a powerful AI model despite an export ban on high-end semiconductor chips from the US.

Major drop for Nvidia

The chatbot’s rapid impact has had immediate effects on the tech market. Among other things, the model has shown that advanced AI solutions can be developed without access to the most powerful chips, which has created uncertainty in the market about the need for expensive semiconductors.

The biggest stock market impact has been seen in Nvidia, the world’s leading chip maker, whose market value fell by around $600 billion – the biggest one-day loss for a single company in US stock market history. The market has reacted sharply to the realization that cheaper AI models can compete with the most advanced alternatives.

At the same time, DeepSeek, like other AI models, has also raised questions about security and data privacy. Some experts argue that the model, like other AI platforms, may be subject to government censorship and restrictions and that DeepSeek is programmed to provide “helpful and harmless” answers.

Trump: “A wake-up call”

DeepSeek’s success has led to widespread reactions from both businesses and governments. Donald Trump called the launch a “wake-up call” for US companies and said it is now crucial for the US to “compete to win” in AI development. Australia’s science minister, Ed Husic, expressed concerns about DeepSeek’s handling of data and security.

In China, the language model is being hailed as proof of the country’s technological progress and independence from Western technology. President Xi Jinping has previously emphasized AI as a national priority, and its success is seen as a step towards establishing China as a global leader in artificial intelligence.

The AI model has also faced technological challenges. On the day of its major breakthrough, the platform was, according to the company, hit by “large-scale malicious attacks” that led to temporary outages and restrictions on new registrations. Even at the time of writing, DeepSeek is suffering from server issues and disruptions.

US and UK back away from international AI declaration

The future of AI

Published 15 February 2025
– By Editorial Staff
US Vice President JD Vance stresses that “pro-growth AI policies” should take priority over security.

Sweden and 60 other countries have signed an AI declaration for inclusive, sustainable and open AI. However, the United States and the United Kingdom have chosen to opt out – a decision that has provoked strong reactions.

The AI Declaration was developed in conjunction with the International AI Summit in Paris earlier this week, and its aim is to promote inclusive and sustainable AI in line with the Paris Agreement. It also emphasizes the importance of an “ethical” approach where technology should be “transparent”, “safe” and “trustworthy”.

The declaration also addresses AI’s energy use, an issue not raised in earlier declarations of this kind. Experts have previously warned that AI could in the future consume as much energy as entire small countries.

Countries such as China, India and Mexico have signed the agreement. Finland, Denmark, Sweden and Norway have also signed. The United States and the United Kingdom are two of the countries that have chosen not to sign the agreement, reports the British state broadcaster BBC.

“Global governance”

The UK government justifies its decision with concerns about national security and “global governance”. US Vice President JD Vance has also previously said that too much regulation of AI could “kill a transformative industry just as it’s taking off”. At the meeting, Vance stressed that AI was “an opportunity that the Trump administration will not squander” and said that “pro-growth AI policies” should be prioritized over security.

French President Emmanuel Macron, for his part, defended the need for further regulation.

AI and speech therapy to help police identify voices

Published 14 February 2025
– By Editorial Staff

Researchers at Lund University are developing forensic speech comparison, a method that combines speech therapy, AI, mathematics and machine learning. The method will help police analyze audio recordings in criminal investigations.

Like fingerprints and DNA, the voice carries unique characteristics that can be linked to individuals. Speech and voice are influenced by several factors, such as the size of the vocal cords, the shape of the oral cavity, language use and breathing. While most people can perceive the gender, age or mood of a speaker, it takes specialist knowledge to objectively analyze the unique patterns of the voice – an area in which speech therapists are experts.

The police turned to Lund University for help analyzing audio recordings in an investigation. The request led to the development of forensic speech comparison as a method of evidence gathering.

The police often handle audio recordings where the speaker is known, but also recordings where the purpose is to confirm or exclude a suspect.

– What we do at the moment is to have three assessors, speech therapists, analyze the speech, voice and language in the recordings in order to compare them. We listen for several factors, such as how the person in question produces their voice, articulates, seems to move their tongue and lips, says Susanna Whitling, a speech therapist and researcher at Lund University, in a press release.

Both larger datasets and cutting-edge analysis

The number of requests from the police has increased, making it difficult for analysts to keep up with all the recordings. To handle larger data sets, researchers have developed AI-based methods that can identify relevant audio files, which are then analyzed by experts.

– By combining traditional speech therapy perceptual assessment of speech, voice and language with machine learning, we want to make it possible to both scan large amounts of data and offer cutting-edge analysis. Based on the hits that the AI then extracts, experts can make a professional assessment, explains Whitling.

The researchers are also collaborating with Andreas Jakobsson, a professor of mathematical statistics, to develop specialized software. The vision is an accurate and reliable method for speech comparison.

– We speech therapists can do perceptual assessment and examine the probability that two recordings contain the same person’s speech, voice and language. When adding the development of specialized software for so-called acoustic analysis such as voice frequency, intensity and temporal variations, we collaborate with experts in signal processing and machine learning.
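Two of the acoustic measures mentioned above – voice frequency (pitch) and intensity – can be illustrated with a minimal, self-contained sketch. This is an assumption-laden toy example for illustration only, not the Lund team’s actual software: it estimates fundamental frequency by autocorrelation and intensity as root-mean-square amplitude, run here on a synthetic tone rather than real speech.

```python
import math

def rms_intensity(samples):
    """Root-mean-square amplitude: a simple proxy for vocal intensity."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

def estimate_f0(samples, sample_rate, fmin=60.0, fmax=400.0):
    """Estimate fundamental (voice) frequency by autocorrelation:
    find the lag with the strongest self-similarity within the
    typical range of human speaking pitch (fmin-fmax Hz)."""
    lo = int(sample_rate / fmax)   # shortest candidate period in samples
    hi = int(sample_rate / fmin)   # longest candidate period in samples
    best_lag, best_score = lo, float("-inf")
    for lag in range(lo, hi + 1):
        score = sum(samples[i] * samples[i + lag]
                    for i in range(len(samples) - lag))
        if score > best_score:
            best_lag, best_score = lag, score
    return sample_rate / best_lag

# Synthetic "voice": a pure 120 Hz tone sampled at 8 kHz for 0.5 s.
sr = 8000
signal = [math.sin(2 * math.pi * 120 * t / sr) for t in range(sr // 2)]

print(estimate_f0(signal, sr))      # close to 120 Hz, the tone's pitch
print(rms_intensity(signal))        # about 0.71 for a unit-amplitude sine
```

Real forensic tools work on noisy, variable speech and track how these measures change over time (the “temporal variations” in the quote), which is where the statistical and machine-learning expertise comes in.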

World leaders gather in Paris for AI summit

Published 11 February 2025
– By Editorial Staff
Ulf Kristersson and around 100 other heads of state and government are currently at the AI Summit in Paris.

The AI Action Summit is currently taking place in Paris, where world leaders are gathering to discuss global governance of artificial intelligence.

The stated aim of the summit, chaired by French President Emmanuel Macron and Indian Prime Minister Narendra Modi, is to create a common path forward for the development of AI and lay the foundations for global AI governance.

During the summit, five main areas will be discussed:

• AI in the public interest
• Future impact of AI on the labor market
• Promoting innovation and culture
• Trust in AI
• Global AI governance

The UN Secretary-General, António Guterres, is attending the meeting along with leaders from nearly 100 countries, representatives from international organizations, researchers and civil society representatives.

He sees the establishment of global AI governance as a top priority and believes the technology could pose an “existential concern” for humanity if not regulated in a “responsible manner”.

“AI must remain a tool at the service of humanity, and not a source of inequalities and unbridled risks”, writes the UN Information Center UNRIC in a press release.

“The discussions should therefore shape an AI that is sustainable, beneficial and inclusive, with particular attention paid to the risks of abuse and the protection of individual rights”, it continues.

EU must invest in AI

Swedish Prime Minister Ulf Kristersson is leading the Swedish delegation, and is also attending a meeting of EU leaders to discuss European competitiveness and the future role of AI in it.

The summit organizers state that Europe “can and must significantly strengthen its positioning on AI and accelerate investments in this field, so that we can be at the forefront on the matter”.

Regarding the regulation and control of AI, the view is that “one single governance initiative is not the answer”. Instead, “existing initiatives, like the Global Partnership on Artificial Intelligence (GPAI), need to be coordinated to build a global, multi-stakeholder consensus around an inclusive and effective governance system for AI”, it says.

Google abandons promise not to use AI for weapons

Published 8 February 2025
– By Editorial Staff
The tech giant claims that in its AI development it implements social responsibility and generally accepted principles of international law and human rights.

Google has removed the part of its AI policy that previously prohibited the development and deployment of AI for weapons or surveillance.

When Google first published its AI policy in 2018, it included a section called “applications we won’t pursue”, in which the company pledged not to develop or deploy AI for weapons or surveillance.

Now it has removed that section and replaced it with another, Bloomberg reports. Records indicate that the previous text was still there as recently as last week.

Instead, the section has been replaced by “Responsible development and deployment”, where Google states that the company will implement “appropriate human oversight, due diligence, and feedback mechanisms to align with user goals, social responsibility, and widely accepted principles of international law and human rights”.

In connection with the changes, Google refers to a blog post in which the company writes that the policy change is necessary, as AI is now used for more general purposes.

Thousands of employees protested

In 2018, Google signed a controversial government contract called Project Maven, which effectively meant that the company would provide AI software to the Department of Defense to analyze drone images. Thousands of Google employees signed a protest against the contract and dozens chose to leave.

It was in the context of that contract that Google published its AI guidelines, in which it promised not to use AI as a weapon. The tech giant’s CEO, Sundar Pichai, reportedly told staff that he hoped the guidelines would stand the “test of time”.

In 2021, the company signed a new military contract to provide cloud services to the US military. In the same year, it also signed a contract with the Israeli military, called Project Nimbus, which also provides cloud services for the country. In January this year, it also emerged that Google employees were working with Israel’s Ministry of Defense to expand the government’s use of AI tools, as reported by The Washington Post.