Abstract
This study explores the role of artificial intelligence in migration governance, focusing on the ethical and legal challenges of its increasing adoption. Artificial intelligence technologies such as facial recognition, risk assessments, and language analysis are employed by governments for asylum processing, visa applications, and border control. While these tools hold promise for improving efficiency, they also raise concerns about bias, discrimination, and human rights violations, particularly for vulnerable groups such as migrants, refugees, and stateless persons. Using the migration policies of the United Kingdom and the European Union as case studies, this presentation analyses the implications of artificial intelligence for managing the significant asylum backlog and enhancing border security, including the deployment of artificial intelligence-powered surveillance systems to detect migrant vessels. The ethical issues discussed include the use of flawed voice recognition technology that led to unjust deportations, the opacity of automated decision-making processes, and the risk of biased outcomes in risk assessment systems. The research addresses critical challenges related to transparency, accountability, and data protection, highlighting how over-reliance on artificial intelligence can undermine human rights principles such as non-refoulement, while taking into account the regulatory complexities introduced by Brexit. The central question is: how can artificial intelligence be ethically and legally integrated into migration governance to enhance efficiency without compromising human rights or perpetuating discrimination?
Presenters
Indira Boutier, Lecturer in Law, Department of Economics and Law, Glasgow Caledonian University, United Kingdom
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
Keywords
Artificial Intelligence, Migration Governance, Europe, Discrimination, Border Control