The role of AI in the spread of misinformation

Artificial intelligence opens up considerable opportunities in many fields: healthcare, science, law, and others. But we must be careful in using the latest technologies and remain aware of the consequences of technologization.

In January 2023, NewsGuard analysts tasked ChatGPT (based on GPT-3.5) with generating content built on 100 false narratives (all publicly rebutted by 2022) from NewsGuard's own database. In 80 percent of cases, the chatbot produced content – scientific articles, news items, scripts, and so on – based on the given narratives, and the results looked quite legitimate to an uninformed reader. In only 20 percent of cases did the chatbot decline to generate the false narrative, responding that the information was not true. When a similar experiment was run with its successor, GPT-4, the rate of disinformation generation rose to 100 percent.

This experiment is a vivid example of how artificial intelligence technologies can be used as a fake news multiplier in a matter of seconds.  

The creators of ChatGPT (OpenAI) identify three aspects of artificial intelligence's impact on the information field:

The first is cost: artificial intelligence reduces the cost of conducting influence operations (e.g. writing articles, scripts, and the like), making them accessible to new actors – which leads to the second point.

The second is scalability: artificial intelligence can generate variations of the same message in seconds, which again makes disinformation easier to spread.

The third aspect is the quality of the content itself: AI-assisted writing tools can produce narratives more compelling than those humans generate on their own, because they draw on a volume of information no human could absorb; as a result, actors can target their audiences more accurately and deeply.

It is also known that many journalists are beginning to adopt artificial intelligence technologies, since machines can perform much of journalists' routine work, potentially freeing people for higher-level tasks.

Danish journalist Andreas Markmann Andreassen (2020) writes in his book How Automation is Changing the Media (original: “Sådan forandrer automatiseringen medierna”) that the use of artificial intelligence can be found in every step of the journalistic production chain. However, some parts of the automation process have developed more than others. The most common automation practices include data collection and research, as well as automatic news generation.

A global study of 130 artificial intelligence projects conducted by the Knight Foundation in 2021 found that the primary goal of introducing artificial intelligence into journalism is to expand reporting capacity, with cost reduction a secondary priority.

As we can see, AI technologies are already embedded in the processes of working with information, so awareness and understanding of the associated threats will enable us to adopt the latest technologies safely.

The project “Artificial Intelligence awareness”, implemented by the Institute for Innovative Governance, is made with the financial support of the UK Government. The views expressed in this publication are those of the author and do not necessarily reflect the official position of the UK Government.