Abstract
This paper examines the code used in machine learning to train computers about People with Disabilities (PWDs). I highlight the hidden experiences of the most marginalized groups affected by the end products of algorithms and code, perspectives that are easily overlooked amid the hype surrounding machine intelligence, a hype that should lead us to question the robustness and sophistication of new technologies. The preliminary insight of my argument starts with the “dis” prefix in the concept of “disability,” which attracts a negative interpretation within traditional language. If offered an opportunity to present, I will show how issues of oppression, evident in the learned behavior of artificially intelligent agents, are progressing: historical and social ableist discourses and narratives are imitated by AI, which regurgitates and amplifies oppression against these groups. Reasserting the potential of disability is therefore incredibly difficult in contemporary times, when dominant modes of cultural and discursive reproduction continue to portray and constitute PWDs as objects of ‘pity,’ charity, and professional intervention, and as leeches on systems of welfare, health, and social care. To challenge this imitated systemic oppression, I propose a concept called “Disability Semiotics,” geared towards the reclamation, redefinition, and reassertion of ‘disability’ while simultaneously offering language and words that take us beyond ‘disability’ as ‘lacking.’
Presenters
Ralisa Dawkins, PhD Student, Department of Science, Technology, and Society, Virginia Polytechnic Institute and State University, Virginia, United States
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
Keywords
Codes, Algorithms, Disability, Language Models, ML