Can we leave fact checking to AI?
Last updated: 14 February 2025
Companies are trying to convince us to use artificial intelligence-based solutions for fact-checking. We found that human involvement remains indispensable.
SCIENCE NEWS FROM KRISTIANIA: Artificial intelligence
Key takeaways
- Companies promote AI as the "holy grail" for fact-checking to tackle misinformation.
- AI helps automate verification, but PhD candidate Lasha Kavtaradze and Professor Bente Kalsnes argue that human involvement remains crucial.
- The framing of AI tools as reliable solutions needs closer scrutiny.
(The summary was created by AI and quality assured by the editors).
Detecting fake news and fact-checking information is resource- and time-consuming. The worldwide infodemic has created an acute need for technological solutions for information verification. The current trend of injecting AI into journalistic information verification sits comfortably within this technology-oriented logic.
Media, academia, technology companies, and civil society are striving to come up with AI solutions to deal with fake news. As the scale of mis- and disinformation grows across digital platforms, public and professional discussion about the use of AI for information verification is becoming increasingly common.
Researchers have been focusing on the use of AI technologies in determining the credibility of information sources, as well as on breaking down the logic behind manual fact-checking and automating the steps of the verification process. AI-powered information verification services have been called the “holy grail of fact-checking”.
Framing AI-powered tools and services in this way is a type of strategic communication that can influence how people think about AI-based fact-checking. Hence, it deserves to be looked at closely.
Humans are needed in the verification process
In a study of six companies, we found that AI-powered services problematize current information ecosystems primarily in terms of the lack of factuality and credibility of the media products.
Moreover, they emphasize the deficiency of credible and authentic information sources as one of the characteristics of the fake news problem.
In response, the companies offer various kinds of automated services. Even though these technologies are far from error-free, the AI-powered initiatives promise to lift the burden of manual verification from the shoulders of human fact-checkers.
Companies emphasize the importance of automation and express optimism about AI-based solutions to detect and debunk false and manipulated information.
However, we found that human involvement in information verification remains indispensable. The role of human effort in supposedly autonomous verification systems, as well as the actual functioning of the AI-powered services, could therefore be topics for further exploration, comparing the companies' strategic positioning with the actual results their services yield.
How companies frame AI-based fact-checking tools and services
Companies engage in various kinds of strategic communication to convince users and potential customers about the credibility and efficiency of their solutions. Due to the novelty of these services, it is still early to talk decisively about their short-term effects or long-lasting implications for information ecosystems or societies. Instead, we looked at these companies' intentions and strategic positioning within the context of information verification.
Framing can be a way to engage in strategic communication through company websites. To frame something means to “select some aspects of a perceived reality and make them more salient in a communicating text. The goal is to promote a particular problem definition, causal interpretation, moral evaluation, and/or treatment recommendation for the item described”.
In this study, we examined the companies’ websites by looking at:
1) how they present the problem of mis- and disinformation,
2) how they identify the cause and the causal agents of the problem, and
3) how they present their AI-based solutions to the problem.
About the research:
Additionally, we examine what sort of moral judgment the organizations use while proposing solutions to the information disorder. The data was collected from the companies’ websites between February and May 2022.
The findings from the websites of the AI-powered companies fall into four topics:
1. Identifying the problem of distorted information landscapes
The companies problematize mis- and disinformation issues in terms of three main aspects:
- inauthenticity of the information sources,
- the questionable factual value of the publicly made claims, and
- the lack of credibility of the media content.
2. Causes of information disorder
The companies identify at least three different causes of the distorted online media landscape:
- low quality of information sources,
- growing amount of digitally mediated content,
- increased pressure on media professionals to verify information.
3. The damage to human life
The companies focus on three major aspects of human life when it comes to damage stemming from information disorder:
- politics
- public health
- economic issues
4. AI-powered solutions for information verification
The companies are developing various types of AI-based services related to information verification, such as:
- automated (or semi-automated) fact-checking for determining the factual value of claims made by relevant public actors
- automated credibility assessment of media content
- automated authenticity assessment of information sources
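The three service types above can be illustrated with a minimal sketch. The pipeline below is hypothetical and not any of the studied companies' actual systems: it matches a public claim against a tiny evidence store using simple token overlap, whereas real fact-checking services rely on trained language models and large knowledge bases. All names (`EVIDENCE_STORE`, `check_claim`, the verdict labels) are illustrative assumptions.

```python
# Hypothetical sketch of automated claim checking: retrieve the best-matching
# evidence sentence and derive a verdict from lexical overlap. Real systems
# use trained models and large knowledge bases; this only shows the shape
# of the pipeline (claim -> evidence retrieval -> verdict).
from dataclasses import dataclass

@dataclass
class Verdict:
    claim: str
    evidence: str
    overlap: float   # share of claim tokens found in the evidence
    label: str       # "supported", "uncertain", or "unsupported"

# Toy evidence store; a real service would query a curated knowledge base.
EVIDENCE_STORE = [
    "The Eiffel Tower is located in Paris, France.",
    "Water boils at 100 degrees Celsius at sea level.",
]

def tokens(text: str) -> set:
    """Lowercased word set, with simple punctuation stripped."""
    return {w.strip(".,").lower() for w in text.split()}

def check_claim(claim: str) -> Verdict:
    claim_tokens = tokens(claim)
    # Retrieve the evidence sentence sharing the most tokens with the claim.
    best = max(EVIDENCE_STORE, key=lambda e: len(claim_tokens & tokens(e)))
    overlap = len(claim_tokens & tokens(best)) / len(claim_tokens)
    if overlap > 0.7:
        label = "supported"
    elif overlap > 0.3:
        label = "uncertain"
    else:
        label = "unsupported"
    return Verdict(claim, best, overlap, label)

if __name__ == "__main__":
    v = check_claim("The Eiffel Tower is in Paris")
    print(v.label)  # prints "supported"
```

Even in this toy form, the sketch makes the study's point concrete: the thresholds, the evidence store, and the interpretation of borderline "uncertain" verdicts all require human judgment.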
References:
Choudhary, N., Singh, R., Bindlish, I., & Shrivastava, M. (2020). Neural Network Architecture for Credibility Assessment of Textual Claims. Computation and Language.
Entman, R. M. (1993). Framing: Toward Clarification of a Fractured Paradigm. Journal of Communication, 43(4), 51–58.
Hassan, N., Adair, B., Hamilton, J. T., Li, C., Tremayne, M., Yang, J., & Yu, C. (2015). The quest to automate fact-checking. Proceedings of the 2015 Computation + Journalism Symposium.
Text: Lasha Kavtaradze, PhD candidate, School of Communication, Leadership and Marketing, Kristiania University of Applied Sciences and Department of Information Science and Media Studies, University of Bergen and Bente Kalsnes, Professor, School of Communication, Leadership and Marketing, Kristiania University of Applied Sciences.
We love hearing from you!
Send your comments and questions regarding this article by e-mail to kunnskap@kristiania.no.