Contemporary Considerations



Judge Profiling A.I. Tools: Regulation for Their Use

Paper Presentation in a Themed Session
Francesco Maria Damosso  

In discussions surrounding the application of artificial intelligence in the legal field, the concept of "predictive justice" frequently arises. This refers to computational systems based on machine learning and deep learning that analyze vast amounts of data to anticipate the outcomes and content of future judicial decisions. Among these tools, judge profiling elicits particular interest, as it aims to predict not just how the law will be applied in general but specifically how an individual judge will decide a case. Research into judge profiling in the United States reveals that such systems analyze not only legal texts and judicial precedents but also biographical and professional data about the judge in question. While predictive tools like judge profiling promise enhanced certainty and equality, they also introduce significant challenges. Implementing such systems could alter the very phenomenon under observation, changing the judge's behavior under increased scrutiny and, in turn, undermining judicial independence and the effectiveness of rights protection. Is France's decision to criminalize judge-profiling practices in 2019 the only viable approach? The balance between the benefits of increased predictability and the risks to judicial independence is a critical concern that warrants careful consideration, also in light of the European Regulation on Artificial Intelligence (the "AI Act").

Artificial Intelligence or Artificial Idiocy: A Call for Critical Thinking

Paper Presentation in a Themed Session
Isidoro Talavera  

Plato regarded knowing that you are not knowledgeable in certain areas as a marker of wisdom and understanding. Socrates revealed that people who thought they were intelligent, who thought they knew what certain concepts meant, had never actually thought about them and, as such, did not really know anything about them. AI has a similar problem, primarily because machines are taught by humans. But this engages us in circular reasoning, where intelligence examines intelligence (or reasoning undermines reasoning). Specifically, the problem is that what makes us human has to do with human intelligence, yet human intelligence has shaped artificial intelligence (akin to a dog chasing its own tail). In practice this means that our educators and their students will continue to be expected to behave like machines, reducing thinking, reasoning, teaching, and learning to the mechanical sums of physical processes of "stochastic parrots," similar to the "artificial idiocy" of the way robots, automata, androids, and other forms of artificial intelligence "think." To know that one is not knowledgeable in certain areas requires critical thinking. Accordingly, critical thinking, which asks how (an analysis) and why (an evaluation), is an aspect of human intelligence that artificial intelligence misses. The question to ask, then, is whether an emphasis on artificial intelligence promotes or impedes the development of critical thinking, or at least whether such an emphasis has a negative effect on that development.

Artificial Intelligent Syntax: Semiotics and Their Impact on ‘Vulnerable’ Communities

Paper Presentation in a Themed Session
Ralisa Dawkins  

The paper examines codes used in machine learning to train computers about People with Disabilities (PWDs). I highlight the hidden experience of the most marginalized group affected by the end-product of algorithms and codes. These perspectives are easily overlooked amid the hype surrounding machine intelligence, which leads us to accept the robustness and sophistication of new technologies without question. My argument begins with the "dis" prefix in the concept of "disability," which attracts a negative interpretation within traditional language. Issues of oppression, evident in the learned behavior intended for artificially intelligent agents, are growing: the historical and social ableist discourse and narrative are being imitated by AI, which regurgitates and amplifies oppression against these groups. Reasserting the potential of disability is therefore incredibly difficult at a time when dominant modes of cultural and discursive reproduction continue to portray and constitute PWDs as objects of 'pity,' charity, and professional intervention, and as leeches on systems of welfare, health, and social care. To challenge this imitated systemic oppression, the author proposes a concept called "Disability Semiotics," geared toward the reclamation, redefinition, and reassertion of 'disability' while simultaneously offering language and words that take us beyond 'disability' as 'lacking.'

Pirates of the Pre-Conscious, Sirens to our Desires: A Psychoanalytic Understanding of Relating to a Non-organic-device-other

Paper Presentation in a Themed Session
Cecilia Taiana  

In this paper, I elaborate on two main ideas. 1. AI and Psychoanalysis: A Convergence of Ideas. The first part of this presentation briefly traces the historical links between AI and psychoanalysis, highlighting their shared ancestry in nineteenth-century thinkers such as Hermann von Helmholtz and Gustav Fechner, whose ideas about perception, unconscious inference, and the mind as an energy-regulating system laid the groundwork for both fields. This section examines the influence of Helmholtz's theories of unconscious inference and energy conservation on Freud's psychoanalytic model, and how these concepts resonate with contemporary neuroscientific frameworks such as Karl Friston's free energy principle and the predictive coding models in AI associated with Geoffrey Hinton. 2. Two Types of Attention: AI and Psychoanalysis. I argue that AI's approach to attention differs fundamentally from Freud's concept of evenly suspended attention. In this section, I highlight the dangers of relating to a non-organic other in the context of an emerging 'intention economy' that [consciously] conceals its intentions from the user. I examine AI's potential to capture and seduce human attention within this "intention economy," which seeks to predict and exploit human intentions for commercial gain. This evolution of the digital landscape raises ethical concerns about the commodification of human experience and the potential for AI to "steer" users toward predetermined outcomes. The constant flooding of the secondary process by the universal other is an intrusion/seduction that interferes with, and modifies, the use of consciousness as a sense organ attributing psychic quality to what one is feeling or thinking.
