Abstract
In recent years, teachers and administrators have faced increasing pressure to integrate AI into high school classrooms. Proponents of AI tools often assert that these tools can deliver a more efficient yet consistently ethical system of care for students. In this paper, we pry open that version of care, examining the assumptions it carries about the purpose of schooling, efficiency in student progression, and relational trust between teachers and students. To investigate the effects of teachers using generative AI tools in their practice, we ran a pilot study in which teaching assistants (TAs) in a hybrid dual-enrollment course co-wrote feedback on student writing submissions with large language models. The TAs assigned each submission a grade and then, rather than writing feedback from scratch, edited feedback produced by ChatGPT. We followed the TAs throughout the study, asking each week for their reflections on this design, their students, and the development of their practice. Their reflections shed light on the politics of generative AI tools: automated feedback mechanisms mandate a restructuring of the classroom to make space for technological mediation, a restructuring the TAs had to come to terms with. They appreciated the tool to different extents, but all felt that the nature of efficiency, trust, and care had changed with its introduction. We contextualize these findings within modern movements to scale learning and their neoliberal enframings.
Presenters
Annie Sadler, Assistant Director of Project Evaluation and Research, Digital Education, Stanford University, California, United States
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
Keywords
Automated Feedback, Large Language Models, Relational Trust, Purpose of Schooling