Unintended Consequences of Large Language Models and Their Impact on Society

Authors

  • Ursula Coester, Westphalian University of Applied Sciences
  • Dominik Adler, Westphalian University of Applied Sciences
  • Christian Böttger, Westphalian University of Applied Sciences
  • Norbert Pohlmann, Westphalian University of Applied Sciences

DOI:

https://doi.org/10.14279/eceasst.v84.2676

Keywords:

LLM, ChatGPT, Ethics, AI Act, harm, ‘do no harm’-principle

Abstract

In our paper, we explore the unintended consequences of LLMs from the perspective that they can lead to illegitimate harm if AI providers fail to take them into account, and we examine what requirements for action follow from this. In the first part of the paper, we explain the causes of unintended consequences and the harm that can result from them, and then use the ‘do no harm’ principle to illustrate why AI providers are theoretically obliged to do everything possible to avoid illegitimate harm. The subsequent section details the development of an AI Restriction Framework, which aims to make potential illegitimate harm more visible and thereby serve as a basis for action by both AI providers and users. The overarching objective of our research is to establish a foundation for a shared understanding of the potential harms that may arise from LLMs, providing a focal point for a more informed societal discourse on their utilization.

Published

2025-11-14

How to Cite

[1] U. Coester, D. Adler, C. Böttger, and N. Pohlmann, “Unintended Consequences of Large Language Models and Their Impact on Society”, ECEASST, vol. 84, Nov. 2025.