A Bug in the Code: Rethinking the Mechanics of Predictive Processing

Abstract

The sciences of the body have historically employed various mechanistic metaphors to describe the brain. In Descartes’ time, the brain was likened to a hydraulic machine; in the 18th century, it was thought to function as a clockwork mechanism; since the 1950s, cybernetic networks have dominated research and modeling paradigms. Despite the failure of mechanistic models to capture the intricacies of the mind’s contingencies, such metaphors persist. In this paper, I examine how predictive processing (PP)—currently the leading theory of mind in the neurosciences—is a useful but non-exhaustive tool for explaining human behavior. PP defines the mind as a “prediction machine” that has no direct access to reality but is always in the process of hypothesizing about the causes of stimuli based on previous experience. An individual can then either update their internal world model or change the world to align with their model. On the one hand, PP’s emphasis on knowledge construction, perpetual change, and the role of affect in cognition can assist cultural studies in challenging the hubristic belief that a human being can ever access an assumed “objective reality.” On the other hand, PP falls short of explaining the ontological difference between humans and machines, as well as why people find some models more affectively salient than others. Moreover, as with many neuroreductive models, PP fails to connect its findings to larger social and political issues. Finally, I suggest that cross-fertilization between cultural studies and the sciences of the mind is crucial to addressing such theoretical gaps.

Presenters

Jovana Isevski
PhD Student, University of California, Riverside, California, United States

Details

Presentation Type

Paper Presentation in a Themed Session

Theme

Histories of Technology

KEYWORDS

PREDICTIVE PROCESSING, NEUROSCIENCE, THE MIND, MECHANISTIC METAPHORS, REDUCTIONISM