Abstract
The dynamic landscape of A.I. systems demands legal, ethical, and pedagogical frameworks capable of addressing the interplay among innovation, risk, harms, and responsibility. This presentation examines U.S. and E.U. liability models for internet service providers (ISPs) as instructive analogs for A.I. governance. Specifically, it compares Section 230 of the U.S. Communications Decency Act with the E.U.’s E-Commerce Directive (2000/31/EC), implemented in Italy through Legislative Decree No. 70/2003, to explore how these frameworks balance innovation and accountability. Section 230’s broad immunity provisions have catalyzed e-commerce and technological growth by shielding platforms from liability for third-party content. By contrast, the E.U.’s conditional immunity model requires providers to act upon knowledge of illegal content, emphasizing privacy and harm mitigation. While both frameworks have facilitated innovation, their limitations become apparent when late-1990s understandings of the “internet” and “ISPs” are compared with the dynamics of today’s interconnected systems. Both frameworks reveal a failure to adequately conceptualize the relationship among innovation, risk, and harms in interconnected systems. By reconsidering these concepts in light of emerging technological agency, this study offers insights into how pedagogy and legal protections can evolve to meet the challenges posed by A.I. and machine-learning systems in critical areas such as innovation, global security, and governance.
Presenters
Michael Thate, Research Scholar for Responsible Tech, Innovation, and Policy, School of Engineering and Applied Sciences, Keller Center for Innovation, Faith and Work Initiative, Princeton University, United States
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
Keywords
ISPs, Innovation, Risk, Legal Protections, Ethical Philosophy, A.I. Governance