Rise of ChatGPT in education forces teachers to redefine cheating
The rise of artificial intelligence (AI) tools such as ChatGPT has dramatically reshaped education, forcing high school and university educators to reconsider how they teach, assess, and define academic integrity. Teachers say student use of AI has become so widespread that assigning essays or projects outside the classroom is almost equivalent to inviting students to cheat.
“The cheating is off the charts. It’s the worst I’ve seen in my entire career,” one educator told The Associated Press (AP) in a recent article.
According to him, teachers now assume that “anything you send home” may be completed by AI. The challenge has shifted from whether students use AI to how schools should adapt to its prevalence and decide what counts as cheating in this new landscape.
Educators across the country are adjusting in different ways. The teacher told the reporters he now requires his students to do most writing in class, where he monitors their laptop screens with software that allows him to restrict or lock down access to websites. Rather than banning AI altogether, he integrates it into lessons, encouraging students to treat it as a study aid instead of a shortcut.
The trend extends across institutions: another teacher told the AP that she emphasizes in-class writing and uses verbal assessments to ensure students can explain their understanding of texts.
The temptation for students is clear: With a simple prompt, AI can instantly generate essay topics, provide supporting quotes, or even draft introductions and outlines.
The article highlights a common book-review assignment, noting that many students now immediately turn to ChatGPT for “brainstorming.” Within seconds, the tool generates essay ideas complete with supporting examples and quotes, before offering further assistance: “Would you like help writing any part of the essay? I can help you draft an introduction or outline a paragraph!”
Policies often vary not only across schools but even within the same institution: some teachers allow limited tools such as Grammarly for grammar checks, while others ban them because the same tools also offer sentence rewrites.
Initially, many schools outright prohibited AI use after ChatGPT’s launch in late 2022, but attitudes have since evolved. “AI literacy” has become a buzzword, as educators aim to balance the benefits of AI with its risks.
Over the summer, universities convened task forces to address the issue. The University of California, Berkeley issued guidance instructing faculty to include explicit statements on their syllabi about AI use. The recommendations included three options: requiring AI, banning it, or allowing limited use. The guidance warned that without clear expectations, students are more likely to misuse the technology.
Carnegie Mellon University, meanwhile, has reported a surge in academic responsibility violations linked to AI, but educators note that many students do not realize they are breaking rules. Faculty have been told a blanket ban is “not a viable policy” unless professors also revise how they teach and assess. This has prompted shifts away from traditional take-home assignments. Some instructors are returning to in-class, handwritten exams, while others have adopted flipped classrooms, where students complete homework during class under supervision.
As AI becomes embedded in everyday life, educators agree that old teaching methods are no longer sufficient. The challenge ahead lies in drawing new boundaries of academic honesty while leveraging AI as a legitimate tool for learning rather than a vehicle for academic misconduct.
By Nazrin Sadigova