I Caught A Student Using ChatGPT. How Should A School Respond?

This week, for the first time, I caught a student trying to submit an assignment written by ChatGPT. Even though it is a relatively new tool, schools are already sounding the alarm: New York City schools, for instance, just banned ChatGPT from their devices and networks. I am disappointed in the student who submitted this work, and schools are responding by stepping up surveillance and censorship on student devices. But I believe AI should instead prompt us to reevaluate why students reach for these tools in the first place.

ChatGPT takes a shot at answering my students’ assignment.

It is understandable that large districts quickly go for the most direct response: blocking students from using AI by whatever means necessary and turning to tools like GPTZero to detect whether something was written by AI.

In reality, though, ChatGPT brings to light the foundational problem with our educational system. Rather than being concerned about learning, students are concerned about their grades. Assignments aren’t a way to increase your understanding of a topic—they are a means to an end (good grades, access to good universities, etc.). And just as with other technologies—whether it’s programming answers into your TI-89 calculator, using Google Translate on a French assignment, or using Yahoo Answers in times past—schools are addressing the symptom (cheating) rather than the cause. And as a result of this singular focus on preventing cheating, we are throwing away the useful potential of AI.

As education writer Alfie Kohn noted in 2011, grade-oriented environments incentivize cheating:

For example, a grade-oriented environment is associated with increased levels of cheating (Anderman and Murdock, 2007), grades (whether or not accompanied by comments) promote a fear of failure even in high-achieving students (Pulfrey et al., 2011), and the elimination of grades (in favor of a pass/fail system) produces substantial benefits with no apparent disadvantages in medical school (White and Fantone, 2010).

In other words, in a system where students are seeking a reward for their work, or where grades create a fear of failure, it makes sense that they will try to take shortcuts. 

AI will make cheating increasingly difficult to recognize. So what if we took away incentives for cheating? What if grades were not the end goal of schooling? What if, instead, learning was the end goal? No matter how much teachers lecture students about the value of learning, if the incentives remain the same, cheating will continue to exist. As long as we have grades as a reward for learning, students won’t view learning as a goal but as a means to an end. 

A failure to get rid of grades in all schools will further increase the divide between the rich and the poor. Indeed, my prediction is that schools serving predominantly poorer students will take an increasingly militant approach to AI, using AI to evaluate student work on the administrative end (indeed, this is how some plagiarism checkers work), but banning it from student use. Schools serving poor students and students of color are more likely to use surveillance technologies, and this will be no different for AI tools.  

ChatGPT offers some advice on writing this article.

On the other hand, many schools serving the wealthiest students in the country are moving away from traditional A-F letter grades—and my guess is, too, that students at these schools will feel less pressure to misuse tools like ChatGPT as a result (and that teachers will take a much more generous approach to AI-generated texts). 

Schools can either continue to address cheating with ever more surveillance, or they can start to build a culture of trust, one where failure is okay. A culture of trust necessarily requires the elimination of grades. I fear, though, that schools will take the former approach, at the expense of students and teachers alike.
