Unintended Consequences of Large Language Models and Their Impact on Society
DOI:
https://doi.org/10.14279/eceasst.v84.2676

Keywords:
LLM, ChatGPT, Ethics, AI Act, harm, ‘do no harm’-principle

Abstract
In our paper, we explore unintended consequences of LLMs from the perspective that they can lead to illegitimate harm if AI providers do not take them into account, and we examine what requirements follow from this. In the first part of the paper, we explain why unintended consequences arise and what harm can result from them, and then use the ‘do no harm’ principle to illustrate why AI providers are, in principle, obliged to do everything possible to avoid illegitimate harm. The subsequent section details the development of an AI Restriction Framework, which aims to make potential illegitimate harm more visible and thereby serve as a basis for action by both AI providers and users. The overarching objective of our research is to establish a foundation for a shared understanding of the potential harms that may arise from LLMs, providing a focal point for a more informed societal discourse on their utilization.
License
Copyright (c) 2025 Ursula Coester, Dominik Adler, Christian Böttger, Norbert Pohlmann

This work is licensed under a Creative Commons Attribution 4.0 International License.
