Professor Olof Sundin warns that generative AI undermines our fundamental ability to evaluate information.
When sources disappear and answers are based on probability calculations, we risk losing our capacity for source criticism.
— What we see is a paradigm shift in how we search for, evaluate and understand information, states Sundin, professor of library and information science at Lund University in southern Sweden.
When we Google, we get links to sources whose credibility we can examine and assess if we choose to. In language models like ChatGPT, users get a ready-made answer, but the sources are often invisible and frequently absent altogether.
— The answer is based on probability calculations of the words you’re interested in, not on verifiable facts. These language models guess which words are likely to come next, explains Olof Sundin.
Without sources, transparency disappears and the responsibility for evaluating the information presented falls entirely on the user.
— It’s very difficult to evaluate knowledge without sources if you don’t already know the subject, since evaluation is a source-critical task, he explains.
“More dependent on the systems”
Some AI systems have tried to meet the criticism through RAG (Retrieval Augmented Generation), where the language model summarizes information from actual sources, but research shows a concerning pattern.
— Studies from, for example, the Pew Research Center show that users are less inclined to follow links than before. Fewer clicks on original sources, such as blogs, newspapers and Wikipedia, threaten the digital knowledge ecosystem, argues Sundin.
— It has probably always been the case that we often search for answers and not sources. But when we get only answers and no sources, we become worse at source criticism and more dependent on the systems.
Research also shows that people themselves underestimate how much trust they actually have in AI answers.
— People often say they only trust AI when it comes to simple questions. But research shows that in everyday life they actually trust AI more than they think, the professor notes.
Vulnerable to influence
How language models are trained and moderated can make them vulnerable to influence, and Sundin urges users to consider who decides how the models are trained, on which texts, and for what purpose.
Generative AI also tends to give incorrect answers that look “serious” and correct, which can erode trust in knowledge across society.
— When trust is eroded, there’s a risk that people start distrusting everything, and then they can reason that they might as well believe whatever they want, continues Olof Sundin.
The professor sees a serious threat to two prerequisites for exercising democratic rights: critical evaluation of sources and the ability to weigh different voices against each other.
— When the flow of knowledge and information becomes less transparent, when we no longer understand why we encounter what we encounter online, we risk losing that ability. This is an issue we must take seriously, before we let our ‘digital friends’ take over completely, he concludes.