If your teens are writing essays, research papers, and other long-form assignments for school, now may be a good time to ask their teachers about the methods used for plagiarism checks.
Teachers used to check for plagiarism manually. When a passage didn’t match what they expected from a student’s grade level and typical performance, they’d compare it against likely sources to see whether the student had copied it directly. Now, many of our kids’ assignments are done digitally, and teachers often use online services to mass-check them.
The result? Kids are being falsely accused of using AI to cheat.
Some Kids Do Use AI To Skip The Work
Teachers have used some creative methods to uncover AI cheating. In discussion forums, they describe tricks like adding hidden white text to their essay prompts so that if a student copies and pastes the prompt into an AI writing program, specific terms or names will show up in the output.
In one case, the teacher added the white text—which the student reading the prompt may not notice but an AI bot will read—to the title of the essay so that she could simply search for the same text added to titles in the resulting essays, according to Newsweek.
On Reddit’s Teachers forum, another version of the trick was described as a line of white text in the smallest possible size directing the reader to add the words “Frankenstein” and “bananas” to the essay, which would stand out when reading over the finished product.
Checking For AI Usage Has Become Overzealous
Instead of using these tricks, the standard way to check for AI is the same method many teachers have used for years to check for plagiarism — submitting the finished product to services like Turnitin.
To check for plagiarism, that service compares the document’s contents to available sources and looks for large chunks of directly copied text. Checking for AI is different: it compares the document to the sort of writing that AI programs tend to produce. Since AI is trained on existing material written by humans, AI-generated text can closely resemble ordinary human writing, which makes this kind of detection pretty fallible.
The University of Nebraska’s Center for Transformative Teaching asks teachers to avoid using these sites for several reasons.
The primary one is that Turnitin’s self-reported 99% confidence score is based on tests involving “large compositions that were completely AI-generated,” so it is far less reliable on documents that were only partly written by AI.
More importantly, there’s the question of exactly what constitutes “written by AI.” One major issue:
“Tools such as Grammarly use machine learning to assist in spell, grammar, and increasingly, composition checks. Maybe you have right clicked on a sentence marked by one of these tools and thought, ‘This recommended sentence does sound much better than my own.’ Congratulations! You just set yourself up for that section to be flagged in moderate to high confidence of using AI!”
Other problems include that these AI programs for detecting AI (ironic) are more likely to falsely flag “simple and predictable” language—the same sentences most likely to be used by English language learners.
It also often falsely flags the sort of phrases that are common from people with ADHD, autism, and other neurodivergent brain types.
One College Student’s Experience
Moira Olmsted shared her experience with Bloomberg earlier this year. She’s a college student diagnosed with autism spectrum disorder, and one of her instructors reached out to tell her at least two of her assignments had been flagged as AI-generated.
She was able to argue her case and convince her instructor to change her grade, but she said she didn’t know how to prove to an instructor that she didn’t use a program to cheat. Furthermore, she was warned that if an AI program again decides her writing looks too much like AI, she’ll be treated as though she’s guilty of plagiarism.
In another test of two of these AI-detecting AI systems, 500 college application essays submitted before ChatGPT was released were run through GPTZero and Copyleaks. The programs flagged around 2% as AI-generated, even though AI writing programs weren’t available at the time.
Again, this emphasizes that specific groups of students are more likely to be targeted falsely by these AI checkers. Bloomberg notes:
“The students most susceptible to inaccurate accusations are likely those who write in a more generic manner, either because they’re neurodivergent like Olmsted, speak English as a second language (ESL) or simply learned to use more straightforward vocabulary and a mechanical style, according to students, academics and AI developers. A 2023 study by Stanford University researchers found that AI detectors were ‘near-perfect’ when checking essays written by US-born eighth grade students, yet they flagged more than half of the essays written by nonnative English students as AI-generated.”
What Are The Best Ways To Check For AI Use & Plagiarism?
If you ask your child’s teacher, “How do you vet students’ writing for cheating?” what would you like to hear? What would assure you that your child won’t be falsely accused?
One method that teachers have used since before the copy-and-paste command was available is to ask kids about their work.
One teacher on Reddit says they ask, “What does this word mean?” with the follow-up, “Then how did you use it in your essay?” knowing that a student who can’t answer either question likely cheated in one form or another. (After all, TurnItIn can’t tell if your student’s older sister wrote it for him, but this trick might catch that.)
Another offers a specific example from their experience:
“I had a student correctly use Latin in his paper. He and I both knew he didn’t know Latin. That was an easy catch.”
Many teachers recommend having kids handwrite essays to avoid the issue altogether; some even have students do most of the writing in class, where teachers can watch the process.
Ideally, you want to hear that teachers are not relying entirely on an AI program to catch their students using AI to do their work for them and that if they use such a program, they give students a chance to defend their work.