Abstract
This paper describes a writing assignment for an advanced composition course in which students were asked to analyze an essay generated by ChatGPT. The course, for junior- and senior-level college students, focuses on writing within the various academic disciplines and on understanding the formal and stylistic conventions of different disciplines’ genres. This was the first essay in an assignment sequence, and it purposefully engaged ChatGPT to allow for class conversations about the tool before students began writing and researching other topics in the course. The assignment asked students to analyze an essay created by ChatGPT and compare it to human-generated essays written about the same topic. Students found that the ChatGPT-generated texts were easier to read and understand, but mainly because they were more simplistic and superficial. Students described the human-generated texts as longer and more complex, claiming that they dealt with more intricate reasoning and ambiguities. Students also reported that the human-generated texts provided more high-interest examples and source material to support their authors’ claims. This assignment not only began a conversation in the course about the strengths and weaknesses of machine-generated texts, but it also allowed the instructor to understand students’ perceptions of machine- versus human-generated writing. It allowed students to better understand and articulate what machine-generated writing can and cannot do, and it helped them see why complexity in writing can strengthen an argument. These understandings were discussed later when students created their own argumentative research essays.
Details
Presentation Type
Paper Presentation in a Themed Session
KEYWORDS
ChatGPT, Generative AI, Large Language Models, Teaching Writing, University Teaching