How AI is turning education upside down: ‘Perhaps it will even lead to the return of oral exams’
-
Illustratie: Debby Peeters
Many students are extensively using chatbots like ChatGPT in their studies. But what does that mean for their critical thinking skills and the way we assess learning?
For sociology student Sven Braster, ChatGPT is a game changer. ‘I’m severely dyslexic, and I use the tool as a language and spelling checker. For example, I ask how I can improve a text I’ve written myself.’
New Vox
This article is from the new edition of Vox, which is entirely dedicated to AI. In this magazine, you’ll find everything about the impact of artificial intelligence on education, research, and student life. Did you know, for example, that ChatGPT has some pretty interesting ideas for a student-style day in Nijmegen? But not everyone is a fan: three students share why they want nothing to do with AI tools. They’re doing their best — as much as possible — to keep AI out of their daily lives.
Braster, who is also chair of the University Student Council, doesn’t use AI solely to improve his writing. ‘Recently, I had an exam in philosophy of science, which included content on the philosopher Karl Popper. While studying, I asked ChatGPT to explain his perspective on the social sciences.’ Braster then works with the output. ‘I summarise it in my own words and then ask ChatGPT if my summary is accurate.’
For Braster, studying with the help of an AI chatbot is a way to engage more deeply with the material. But he doubts all students use ChatGPT like this. ‘I think many students mostly use it to offload work. Which makes sense—it saves time. But it also takes away opportunities to gain scientific insight and develop skills, like drawing connections or building arguments. If you let ChatGPT answer your questions or write your essay, you’re not learning how to do it yourself.’
No Consensus
Researchers at the Radboud Teaching and Learning Centre (TLC) have been studying this topic for a few years now. What impact does generative AI (or GenAI, for short) have on students’ academic skills and critical thinking?
The short answer: there’s no scientific consensus yet. These tools simply haven’t been around long enough. Preliminary research shows mixed results, says Jeroen de Jong, theme lead for educational innovation at TLC.
‘Right now, we often value the final product more than the process of getting there.’
Some studies point to a negative effect of GenAI use on students’ critical thinking. But the opposite is also observed, as noted by Professor of Philosophy of Behavioural Sciences Jan Bransen during a recent work visit to Sweden. ‘Students at the University of Gothenburg used GenAI a lot, but surprisingly, they used it mainly to strengthen their own critical thinking. For example, they asked a chatbot to provide multiple versions of the same argument so they could evaluate which one was stronger.’
Assessment
AI is forcing instructors to rethink how they assess students, says Bransen, who is also the academic lead of the TLC. ‘In the past, a take-home essay could show whether a student was able to develop arguments and think in an organised way. Now, those essays could just as easily be written by ChatGPT.’
De Jong suggests AI might even bring about the return of oral exams. ‘I used to be against oral thesis defences, ultimately, it’s about the text. But now I do see the importance of testing knowledge in this more traditional way.’
It’s clear that text-based assignments are under scrutiny. De Jong: ‘I think we’re heading toward a situation where students have to be more explicit about how they used AI. Or think of a course in academic skills where you learn the theory of academic writing at home, but do the writing itself in class. With a teacher nearby to help in real-time.’
‘We’ll be retired soon, students won’t.’
Bransen: ‘Right now, we often prioritise the final product over the learning process. But learning is really about formulating a good research question, finding reliable sources, and wrestling with words to express your own arguments clearly.’
More Personal Education
To better track and influence students’ learning curves, Bransen advocates for more personalised education.’Instead of one teacher for 400 students, we need more contact-based teaching. You only really see if someone is developing judgment and agency in one-on-one conversation.’
That, according to Bransen, also means students should be allowed to experiment more with GenAI. ‘As a university, we tend to view things from the perspective of the teacher or researcher, but what do students themselves see as responsible AI use? We’ll be retired soon, they still have a long career ahead of them.’
If students understand why it’s important to be able to write a coherent text, most will probably want to learn how to do that, the professor believes. ‘If they realise their own development is at the heart of it, I can even imagine frequent use of ChatGPT in studies, so long as they engage with it critically.’
Student Braster agrees. Even though he uses ChatGPT extensively, it’s not his only study method. He’s not worried about being misled by incorrect answers from the tool. ‘For the philosophy of science exam, of course I also attended lectures and tutorials, took notes, and read the literature. I ended up getting a 7.5 on the exam.’
Responsible AI use in the classroom
It’s not just universities that are grappling with ChatGPT. Secondary schools, too, are considering how to effectively manage the use of AI.
With long-term funding of over 80 million euros, the National Education Lab AI (NOLAI) is researching the application of artificial intelligence in primary and secondary education. Often, this involves developing and testing useful AI applications that can save teachers time or offer more personalised support in a child’s learning process (see page 31, ed.). Generative AI is also on the radar of NOLAI researchers. The central question: how can secondary education maintain control over both learning and teaching while using GenAI?
Researcher Evi Topali spent the past year exploring the challenges and opportunities of generative AI, as well as how frequently it is used by students and teachers. What did she find? Teachers are using GenAI to master new pedagogical theories or to make lessons more interactive. In the classroom, however, they still use GenAI sparingly, mostly because it’s difficult to monitor how students are using it. Her research shows that students have already discovered ChatGPT. They use the tool to prepare for tests—by having the chatbot ask them questions about the material—and to practice writing in a new language. They also frequently ask ChatGPT for suggestions to improve their writing.
But exactly how they do this hasn’t been thoroughly investigated, says Topali. ‘What prompts do they use? Do they copy the answers directly, or do they still think critically about them?’ So should it be banned? That’s not a good idea, argue Topali and her colleague Karmijn Steekelenburg, co-creation manager at NOLAI. It’s better to teach young people to work with this technology in a safe and critical way. ‘Both teachers and students need to learn new skills for this,’ says Steekelenburg.
‘It’s important for teachers to gain more control over GenAI’
Topali emphasizes the importance of guiding children in the responsible use of GenAI. ‘Right now, they use it on their own to do homework, sometimes copying AI-generated answers. But those answers are not always accurate and can be outright wrong. It would be beneficial if they learned the advantages and risks of GenAI.’ Additionally, it’s important for teachers to gain more control over GenAI. That’s why NOLAI researchers are developing educational tools to support teachers in using GenAI in a safe and controlled manner.
One such tool allows teachers to observe how students interact with a chatbot ‘behind the scenes.’ Another tool even allows teachers to influence AI responses. ‘GenAI systems often produce hallucinations, responses that are incorrect or completely fabricated,’ Topali explains. ‘We’re experimenting with that. A teacher can feed the AI incorrect answers, forcing the student to be critical of the output.’