04.08.2023
How dangerous can ChatGPT be?

ChatGPT was launched in November 2022 and immediately became widely used: in just five days, the number of users passed one million. Drafting texts, writing code and scripts in a matter of minutes… what risks can such a text assistant pose?

Any new technology should be approached consciously, so what are the main risks of ChatGPT?

The information space

Artificial intelligence is being actively adopted by journalists and content creators. One example is the Swedish company United Robots, which supplies text-generation solutions to several news agencies. It uses artificial intelligence and natural language generation (NLG) to automatically produce publishable news stories from large data sets covering sports, real-estate sales, traffic, weather, the stock market, local business registrations and more. Its robots analyse large open data sets and identify unusual events or patterns in them; that information can then be turned into an article or sent to the newsroom as an alert to follow up (a simplified sketch of this kind of pipeline is shown below).

It is important to understand, however, that "artificial intelligence" is a broad term for a variety of algorithms, robots and systems that are built on and driven by data. Some algorithms are self-learning and improve their accuracy as more information arrives, but in most cases they are static code that performs the same function every time. We therefore should not expect artificial intelligence to be "conscious" and to detect fake narratives in the content it creates. This is why the introduction of artificial intelligence into the information space poses a threat of rapid and, crucially, convincing multiplication of fake news and narratives.
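To make the mechanics more concrete, here is a minimal sketch of how such data-to-text robots work in principle: a fixed template is filled from structured records, and a crude statistical check flags unusual values for the newsroom. This is not United Robots' actual system; all field names, figures and the threshold below are invented for illustration.

```python
# Simplified sketch of data-driven news generation: fill a fixed template
# from structured records and flag unusual values as a newsroom alert.
# All field names, figures and the threshold are hypothetical.

from statistics import mean, stdev

# Hypothetical weekly real-estate records for one district
sales = [
    {"district": "Northside", "week": 30, "homes_sold": 12, "avg_price": 310_000},
    {"district": "Northside", "week": 31, "homes_sold": 11, "avg_price": 305_000},
    {"district": "Northside", "week": 32, "homes_sold": 27, "avg_price": 298_000},
]

def generate_story(record: dict) -> str:
    """Turn one data record into a short, publishable sentence."""
    return (
        f"In {record['district']}, {record['homes_sold']} homes were sold in "
        f"week {record['week']}, at an average price of ${record['avg_price']:,}."
    )

def flag_unusual(records: list[dict], key: str, z_threshold: float = 1.0) -> list[dict]:
    """Flag records whose value deviates strongly from the mean:
    a crude stand-in for 'unusual event' detection."""
    values = [r[key] for r in records]
    mu, sigma = mean(values), stdev(values)
    return [r for r in records if sigma and abs(r[key] - mu) / sigma > z_threshold]

for record in sales:
    print(generate_story(record))

for alert in flag_unusual(sales, "homes_sold"):
    print(f"Newsroom alert: unusual sales volume in week {alert['week']}")
```

The sketch also shows why such systems cannot vet their own output: the template prints whatever the data says, whether the underlying records are accurate or not.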

Use of personal data

Artificial intelligence learns from large amounts of data, including what you type during a conversation with a chatbot. All information provided is stored and processed for further training of the language model. How this data will be used, and who has access to it, remains an open question.
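One cautious habit that follows from this is to strip obvious personal identifiers before a prompt ever leaves your machine. The sketch below is a minimal illustration using a few regular expressions; the patterns are assumptions chosen for the example, tied to no particular service, and will miss many kinds of personal data.

```python
# Minimal sketch: replace obvious personal identifiers with placeholders
# before sending a prompt to any external language-model service.
# The patterns are illustrative only and are not a complete solution.

import re

REDACTIONS = [
    (re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"), "[EMAIL]"),                 # e-mail addresses
    (re.compile(r"\+?\d[\d\s\-()]{7,}\d"), "[PHONE]"),                   # phone-like numbers
    (re.compile(r"\b\d{4}[ -]?\d{4}[ -]?\d{4}[ -]?\d{4}\b"), "[CARD]"),  # 16-digit card numbers
]

def redact(text: str) -> str:
    """Replace e-mails, phone numbers and card-like numbers with placeholders."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

prompt = "Please summarise the complaint from jane.doe@example.com, phone +44 20 7946 0958."
print(redact(prompt))
# -> Please summarise the complaint from [EMAIL], phone [PHONE].
```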

Ethical issues

ChatGPT paraphrases the text given to it in seconds, which creates a risk of plagiarism and raises the broader question of ethics and critical thinking in society. The emergence of artificial intelligence gives us an opportunity to ask how socio-technical change can help create a better and more just world. Drawing on a deep engagement with social structure and inequality, we must develop theoretical and empirical insights that give society a rational vision of the sociotechnical future, and of how it might take root, at the level of individuals, institutions and societies. How people deploy and interpret AI systems varies depending on the institutional and organizational context in which they are used.

The project "Artificial Intelligence awareness" is implemented by the Institute for Innovative Governance with the financial support of the UK Government. The views expressed in this publication are those of the author and do not necessarily reflect the official position of the UK Government.