Looking for fraud: AI makes teachers distrust their students

12 Jun 2025

Copying part of your thesis or essay has never been easier, thanks to ChatGPT. Examination boards have their hands full dealing with it. How do they handle suspicions of fraud? ‘Students appreciate that the rules are clear.’

Adriejan van Veen is a member of the examination board for history. He sees no alternative to essays and theses in his program.

‘The plagiarism cases we as an examination board encounter are, I suspect, just the tip of the iceberg. We have no visibility into how students are using AI tools. I sometimes ask; when I do get an answer, it turns out some students use AI in every step of the research process: from brainstorming to content analysis, from literature search to correcting language and style.

Adriejan van Veen. Photo:

‘Our faculty guidelines prohibit presenting AI output as one’s own work. Red flags include flawless texts that remain on a general level but contain few concrete examples. They’re often full of clichés, with terms like pivotal and vital. In history, we often work with primary sources, such as from antiquity. The absence of quotes from these is also an important signal.

‘When an observant teacher suspects excessive use of AI, the examination board looks into it. That happened six times last academic year. We then investigate by studying the text and talking to the teacher and the student. Our experience is that students will come forward with an honest explanation.

‘It could be, for example, that a student had once written a draft themselves but had it corrected and improved by ChatGPT. Then we ask for the chat logs: show us how you worked with AI to create this text. Can an examination board require a student to do that? That’s hard to say because there is no policy for it.’

‘As a historian, you simply have to learn how to write an essay’

‘I think we need to move toward a situation where students must be able to demonstrate, at a teacher’s request, how they interacted with AI. We need to organize this as a program, because the university is light-years behind. There are faculty-level guidelines, and they are fundamentally good. The question is how to enforce them.

‘Take-home exams are a thing of the past because of AI. Fortunately, regular exams can still be held in secure environments without internet. But the longer essays and theses remain a problem. You sometimes hear that we should test students differently because of AI—for example, through oral exams or interim assignments. But as a historian, you simply have to learn how to write an essay or thesis. That doesn’t go hand in hand with fully embracing AI. The consequence is that, as teachers, we have to very meticulously check students’ work.

‘At the same time, awareness of professional ethics is extremely important. What makes a good scholar, and how does AI fit into that? I give information sessions on this to first-year students. It would be good if the university as a whole also took a more active public stance. It’s very quiet in that area.’

Barbara Müller is a member of the examination board for communication science. She sees the number of fraud reports skyrocketing.

‘AI is fundamentally prohibited in our program. Teachers who want to make an exception must discuss it with the program director. The director then assesses whether it truly adds value. Students appreciate that the rules are clear. Many students will probably still use AI, but because we emphasize that it’s forbidden, at least they start thinking about it. After all, you’re at university to learn how to write and think critically. Of course, it’s sometimes frustrating to write a piece of text, but that’s part of the learning process.

‘We need to cultivate tolerance for imperfection’

‘Generative AI makes us focus even more on the final product. As if it has to be perfect. I give lectures, for example, where students work on an influence campaign. They ask: can we have the poster made by AI? I explain that it’s not about how pretty the poster is, but about the idea behind their campaign. As teachers, we have an important role to play in this: we need to cultivate tolerance for imperfection. If a sentence doesn’t flow well, we now already find that a big issue.’

Barbara Müller. Photo:

‘The problem of AI-related fraud since the launch of ChatGPT is significant. Especially with work from first- and second-year students, there are many reports: this academic year alone, between thirty and forty. Previously, there were maybe ten per year.

‘You can tell that teachers are becoming suspicious. That damages the relationship with their students. They report far more often because they notice strange things in texts, even though it doesn’t always lead to a conclusion of fraud. We as an examination board are allocated extra hours purely due to the number of reports.’

Herman Geuvers is chair of the examination board for Computing Science. Despite the concerns AI raises, he also sees opportunities.

‘A key skill our students learn is programming. During the introductory course, students complete interim assignments. If they submit all of them, they receive a bonus point on the exam. That acts as an incentive to practice.

‘Of course, they can have that programming work done by AI. By ChatGPT, for example, though Copilot is more popular with us. We see it often. You might think: let them go ahead, because in the end they’re tested in a three-hour exam without AI. Then they have to show what they can do.

‘AI is also progress: knowledge becomes more freely accessible’

‘But we do take it seriously. If you remove all restrictions, you end up in a situation where we put a lot of effort into grading assignments that were made by Copilot or ChatGPT. That’s a ridiculous investment, of course, and not the idea of education. Moreover, if students don’t practice themselves, they won’t pass the course.

Herman Geuvers. Photo: RU

‘Teachers often see quite quickly if something was made by AI. If you ask ChatGPT to complete a first-year assignment, it comes up with constructions that students haven’t even learned yet. A clever student who gets called in will make sure they master those constructions anyway. Then they come up with a somewhat coherent story and escape punishment.

‘Many students also program for fun or as a side job. Then they’re constantly interacting with Copilot. It’s not like they copy everything, but they do get ideas from it. Or they use it for constructions they don’t know off the top of their head. That’s the modern way of programming. Is that good or bad? Those students don’t think about it that way. In their future jobs, they’ll often work like that too.

‘Often, the suspicions are quite clear. For example, when students refer to articles or sources that turn out not to exist at all. Sometimes students don’t even know that tools like Scribbr and ChatPDF are powered by AI. We really need to start educating them about that: what constitutes AI use?

‘There have also been cases where a teacher said: “This text makes no sense, this must be AI.” But a sanction only follows if we can prove the fraud. If the student doesn’t confess, we have to be absolutely certain of our case.’

‘We’re currently working on changing the way we assess. Less emphasis on the final product, more on the process leading up to it. That’s a good development. As a teacher, you can then better guide students through the learning process and address much of their uncertainty. That’s why you become a teacher: to help students grow.

‘Most students come to university for that reason. Of course, a few just want the diploma. But most want to be assessed fairly and properly, and really learn something.’

‘I certainly don’t think the students we graduate are worse programmers than before. If students have never programmed themselves, they won’t be able to tell whether the output from an AI model is correct, or what might be wrong. That’s why it’s essential that students have foundational knowledge. In our case, that means being able to program.’

‘People used to do mental arithmetic; nowadays we use calculators. Still, it’s useful to know something about numerical relationships, so that, if you make a mistake entering a formula, you can recognize: something’s off here. It’s the same with programming.

‘Some say the comparison between AI and calculators doesn’t hold. I don’t agree. Things change under the influence of this kind of technology. It’s also a form of progress. Students will be able to work more efficiently, and knowledge becomes more freely accessible. But students must still be able to evaluate the outcomes with their own critical judgment: not everything generated by AI is automatically true.’

Vox Magazine

Independent magazine of Radboud University
