Abstract
Generative AI (genAI) image models are becoming increasingly popular and are being embedded within common design platforms such as Adobe and Canva. AI has previously been likened to a mirror (Vallor, 2024) and a “disclosing agent for assumptions about humanness” (Suchman, 2019). Because genAI models take large quantities of existing images, created within everyday social life online, and distil the most probable response to a prompt, their outputs can be considered a social imaginary. This agglomerative property has also been shown to exacerbate and propagate biases from within the training data sets into the images produced. Building on Benjamin’s (2019) assessment of DALL-E’s prejudiced production of race, this paper seeks to understand how fat bodies are presented by nine different, free-to-use genAI image models in response to a series of text prompts. Using critical visual analysis of the produced images, the first question posed is: what is communicated about fatness? Through auditing, the common features of fatness were found to include high numbers of topless, similarly featured white men. A deeper analysis then highlights the more insidious messaging embedded in portrayals of fatness through lighting, facial expression, and background. Secondly, the paper reflects on the complexity of co-creating images with genAI: when the outputs can feel reductive, how can we see ourselves, and do we want to? By highlighting the challenging and exploitative genAI ecosystem and the social function images play, the paper concludes by unpacking the tension within the social dialogue created by genAI images.
Presenters
Aisha Sobey
Research Associate, Leverhulme Centre for the Future of Intelligence, University of Cambridge, Cambridgeshire, United Kingdom
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
Keywords
Generative AI Image Models, Social Imaginary, Anti-Fat Bias, Communication, Inclusion