How Does AI Remember?: The Ethics of Learning about Genocide from Artificial Intelligence

Abstract

The Bosnian War, and especially the Srebrenica genocide, was an episode of violence that left a lasting wound in history. The initial outbreak, the causes of the war, the religious tensions, the genocidal intent, and the response of Western countries are all issues that have been studied ever since. As AI becomes more prevalent, we are bound to use it more frequently to learn about history. But what are the ethics of learning about such events through AI applications? By posing questions about the subject to ChatGPT, we aim to understand whether it can provide accurate information and how it handles controversial topics. Among other questions, we asked why the war happened, whether Western responses were adequate, and whether we can speak of a genocide. The program answered all of them, some better than others. The responses ranged from specific to more general and “diplomatic.” For example, regarding the genocide, the program spoke about Srebrenica with certainty, since it was a genocidal act condemned by international courts. By contrast, when asked about the Western response to the war, it spoke vaguely about the war having attracted a great deal of international attention. ChatGPT does not have an intrinsic system of ethics, only what it has been programmed with; nevertheless, through its answers it projects moral values onto its readers, something that cannot be avoided with events such as these.

Presenters

Ermioni Vlachidou
Student, Master of Arts in Contemporary History, Aristotle University of Thessaloniki, Thessaloniki, Greece

Details

Presentation Type

Paper Presentation in a Themed Session

Theme

2025 Special Focus—Minds and Machines: Artificial Intelligence, Algorithms, Ethics, and Order in Global Society

KEYWORDS

GENOCIDE, ARTIFICIAL INTELLIGENCE, BOSNIAN WAR, CHATGPT, ETHICS