Stop cheating before it starts with these tips.
Educators have plenty of potential concerns about the impact of AI, but the one that feels most immediate is plagiarism. Academic integrity has always been an issue in classrooms, and AI has added a whole new layer. Aside from the potential for cheating, we also want students to benefit from the learning that comes from unassisted writing, like synthesizing ideas and constructing knowledge. And just to complicate matters, we also want students to develop AI literacy, so banning it outright does them a disservice.
While there's no shortage of AI-focused plagiarism detectors out there, it's critical to note that they can mistakenly flag assignments, and even professional writers' work, as written by AI (and there are already tools to foil these detectors). There's also some evidence that the detectors themselves exhibit bias. Combined with our research showing higher use of generative AI among Black and Latino students, that adds to the risk of students of color being accused of academic misconduct more often than their peers. So while using detection tools is an option, prevention is definitely more powerful, and it will lead to more learning.
Detection
The "Trojan horse"
Include a word or phrase in your assignment prompt that isn't visible to the student (shrink the font, match it to the background color). Then search for that keyword in a student's writing to see whether they cut and pasted the prompt into an AI tool.
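If you collect submissions as files, a short script can run that keyword search across a whole class at once instead of opening each document. Here's a minimal sketch in Python; the folder name, file format, and keyword are hypothetical placeholders, and a manual find (Ctrl+F) works just as well for a handful of papers.

```python
import pathlib

# Hypothetical values: swap in your hidden keyword and a folder of plain-text exports.
HIDDEN_KEYWORD = "aardvark"
SUBMISSIONS_DIR = pathlib.Path("submissions")

for path in sorted(SUBMISSIONS_DIR.glob("*.txt")):
    text = path.read_text(encoding="utf-8", errors="ignore").lower()
    if HIDDEN_KEYWORD.lower() in text:
        # The hidden keyword should only surface if the full prompt was pasted somewhere.
        print(f"Review {path.name}: contains the hidden keyword")
```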
Tool-based detectors
If you're going to use one of these tools, it's best to interpret its results in the context of what you know about the student and their writing, and to treat a flag as a teachable moment rather than a punitive "gotcha!"
- Turnitin: This tool may be more accurate than others at detecting any kind of plagiarism.
- Winston AI: Similarly, this tool has a better track record than some others at accurately detecting AI-based plagiarism.
Prevention
Integrate AI literacy
If AI feels mysterious and forbidden to students, it could lead to more misuse. Instead, we can inform students about what AI is, how it works, what it does best, and what its limitations are. Plagiarism aside, generative AI is technology that is likely to affect their futures.
Address relevance questions directly
Students often ask, "Why do we have to do this? How will this help me in my adult life?" These are fair questions. And when it comes to AI, some kids will want to cut corners if writing something themselves doesn't hold inherent meaning for them. It may be crystal clear to us why they need to learn to write well and express themselves in their own voice, but finding a way to tell them explicitly, perhaps for each writing assignment, might curb that corner-cutting.
Have clear expectations
For every assignment, be explicit about how students may and may not use AI. Spelling out the exact parameters, with something like the stoplight model, can help eliminate fuzzy areas of AI use. And include direct lessons on academic integrity to establish its importance. You can use a thinking routine from our Digital Dilemmas to have students consider different situations involving intended or unintended plagiarism.
Use two-lane assessments
The University of Sydney has established some guidelines and recommendations for AI use. Essentially, "lane 1" includes assessments that are completely independent of AI: in-class essays, oral assessments, and exams. "Lane 2," according to their criteria, should comprise most assessments and can involve AI. Having this clear division takes out the guesswork and could bring some balance.
Focus on process over product
For some assignments, it might work to assess students' process rather than a final product. When students have to demonstrate their thinking in a variety of ways over time, it makes it more difficult to simply use AI to crank out a final essay, for example. And using programs that keep track of progress, like revision history in Google Docs, can help, too.
Use standards-based grading
Though this is a more general, school- or district-wide overhaul, moving away from finite letter grades shifts the focus to how students are approaching learning outcomes, rather than just earning points by completing assignments.
Try the DEER approach
From Bob Cummings and his colleagues, the DEER approach sets clear parameters for each stage of an assignment. Below is a very brief definition, but it's worth a close read (and viewing an example) to get the full picture:
- Define the stages of the project and how each one contributes to the learning process.
- Evaluate a specific generative AI technology to use with all or some of the stages. (In addition to how well it applies to specific assignment stages, see below for other ways to evaluate AI tools more generally.)
- Encourage students to explore that specific generative AI technology for that stage.
- Reflect on each stage and the contributions and shortcomings of AI during the process.
Formulate assignments with AI in mind
Though it does mean rethinking the current curriculum, it might actually be refreshing to design meaningful assignments/assessments while working backward from learning outcomes—with AI in mind. Work with colleagues, get creative, play with AI yourself, and get student input!
Give students agency and allow meaningful expression
Easier said than done, for sure, but designing assignments that let students make choices, draw on their interests, express themselves, and play to their strengths may leave fewer of them feeling driven to cheat.