Abstract
The rapid development and deployment of artificial intelligence (AI) technologies have raised critical questions about their impact on diversity, equity, inclusion, and belonging (DEIB). This paper examines the extent to which algorithms, as central components of AI systems, reflect and propagate biases inherent in the data they process and in the frameworks within which they are designed. Using a mixed-methods approach that combines qualitative analysis of case studies with quantitative examination of algorithmic outputs, the research identifies patterns of bias and their implications for marginalized communities. The findings suggest that algorithmic decision-making often mirrors societal inequalities and reinforces existing disparities unless these are intentionally mitigated through ethical oversight and inclusive practices. This work underscores the importance of incorporating DEIB principles into the development and governance of AI to foster systems that are fair, transparent, and representative of all user groups. The paper concludes by recommending strategies to address and minimize bias, promoting an AI landscape that enhances equitable and inclusive technological outcomes.
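As a purely illustrative aside, one common quantitative check for the kind of disparity in algorithmic outputs the abstract describes is the demographic parity difference: the gap in favorable-outcome rates between demographic groups. The sketch below is hypothetical; the data, group labels, and function names are invented for illustration and are not drawn from the paper's methodology.

```python
def positive_rate(outcomes):
    """Fraction of favorable (1) decisions in a list of 0/1 outcomes."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favorable-outcome rates between two groups.

    A value near 0 suggests parity on this metric; a large value
    flags a potential disparity worth investigating further.
    """
    return abs(positive_rate(group_a) - positive_rate(group_b))

# Toy decision records for two hypothetical demographic groups
# (1 = favorable outcome, 0 = unfavorable).
group_a = [1, 1, 1, 0, 1, 0, 1, 1]   # 6/8 = 0.75 favorable rate
group_b = [1, 0, 0, 0, 1, 0, 0, 1]   # 3/8 = 0.375 favorable rate

gap = demographic_parity_difference(group_a, group_b)
print(round(gap, 3))  # prints 0.375
```

Demographic parity is only one of several competing fairness criteria (others condition on ground-truth labels, e.g. equalized odds), which is one reason the abstract's call for ethical oversight cannot be reduced to a single metric.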
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
KEYWORDS
DIVERSITY, EQUITY, INCLUSION, BELONGING, ARTIFICIAL INTELLIGENCE, ALGORITHMIC BIAS, ETHICS, TECHNOLOGICAL IMPACT, MARGINALIZED COMMUNITIES, FAIRNESS, TRANSPARENCY, GOVERNANCE, INCLUSIVE PRACTICES, DATA ANALYSIS, CASE STUDIES