Abstract
Generative AI (GenAI) technologies are increasingly used by higher education students, particularly for research and assignments. This practice has sparked ongoing debate about whether it constitutes academic dishonesty. While some argue that integrating GenAI enhances productivity, critics question its impact on critical thinking and research originality. Although various automated tools have been developed to detect AI-generated text, they often struggle with accuracy, yielding false positives. Given the rise of institutional policies addressing academic integrity, there is an urgent need for more reliable methods of identifying AI-generated content. This study, extending previous experiments, examines the accuracy of human intuition in detecting AI-generated text. In a two-phase study, we analyzed 738 research proposals and trained experts to identify AI-generated content using patterns from our database. In the second phase, we examined factors influencing students’ compliance with GenAI guidelines. Our results indicate that humans can detect AI-generated texts with an accuracy of 70% to 90%, depending largely on the expert’s experience, confidence, and the level of AI involvement. Additional findings suggest that compliance with GenAI guidelines is shaped by deterrent measures, moral alignment, beliefs about the ethics of GenAI use, and awareness of institutional policies. Emotional state, moral obligation, and perceived likelihood of detection also significantly influence compliance behavior. We discuss the theoretical and practical implications of these findings. While we do not oppose the use of GenAI in principle, we stress that its application must strictly adhere to institutional guidelines to uphold academic integrity.
Presenters
Abdullahi Yusuf, Lecturer, Department of Science Education, Sokoto State University, Sokoto, Nigeria
Details
Presentation Type
Paper Presentation in a Themed Session
Theme
Keywords
GenAI, Detection, Intuition, Compliance, Machine Learning