‘When it comes to AI, we are a bit like children with a chainsaw’
Pim Haselager. Photo: Bert Beelen
When AI professor Pim Haselager wrote his first code in the early 1980s, ChatGPT and DeepSeek were still a long way off. Vox spoke to him about Elon Musk, the need for a European AI, AI-generated Sinterklaas poems – and what makes humans tick.
When we spoke to you about artificial intelligence back in 2017, you said that we should be more afraid of how humans treat each other than of an artificial superintelligence. Are you still more afraid of humans than of AI?
Haselager: ‘Yes, mainly because I don’t see us creating genuine understanding, sentience, and general intelligence any time soon. Without those three ingredients, these machines are very powerful, but they are powerful in the sense that a chainsaw is powerful. You can do a lot of damage with a chainsaw, but the chainsaw cannot decide for itself what it wants. It doesn’t realise it’s there. In a way, when it comes to AI, we are a bit like children with a chainsaw. But I’m not afraid of a superintelligence gaining power, like in The Matrix or Terminator. In fact, I think it’s dangerous to worry too much about that.’
What’s dangerous about it?
‘It takes attention away from the real problem, which is: how do we stop big companies, especially American companies, and their CEOs – Elon Musk, Mark Zuckerberg, Jeff Bezos, you name them – from abusing the power that they get through AI?’
‘The European way could be a type of AI that respects human rights’
How do we regulate these companies and their CEOs?
‘Well, one of the few good things about the current situation we’re in – and it’s a terrible situation – is that Europe has regulations. We have the AI Act, the GDPR, the Digital Markets Act – we have a whole digital constitution. The US is trying to fight these regulations, and they’re trying to get rid of them because they actually work. The regulations are far from optimal, but the basic principles are good. And this could actually be an opportunity for Europe to create a third way of AI.’
New Vox
This article is from the new edition of Vox, which is entirely dedicated to AI. In this magazine, you’ll find everything about the impact of artificial intelligence on education, research, and student life. Did you know, for example, that ChatGPT has some pretty interesting ideas for a typical student day in Nijmegen? But not everyone is a fan: three students share why they want nothing to do with AI tools. They’re doing their best to keep AI out of their daily lives.
What could a third way of AI, this European AI, look like?
‘The European way could be a type of AI that respects human rights. There is the US with its “move fast, break things” approach, and China, where AI is about political control over citizens. But an AI that respects human rights could be interesting for countries like Australia and India, or for the South American continent. Of course, historically Europe has a terrible track record when it comes to human rights, but looking forward, there may be hope.
‘DeepSeek, for example, the Chinese large language model, is very effective – but countries like Australia have recently decided not to use it for governmental affairs. Because DeepSeek might be open source, but it’s not politically open. A kind of AI that respects human rights is a business model, and there are customers for that.
‘And then there is the second good thing about the current situation: what the United States is doing politically, economically, scientifically, and I could go on, is so disastrous that everyone seems to understand that Europe has to become more independent in terms of AI.’
You could say that we are obviously realising this now, but have we realised it too late?
‘The situation looks terrible. And it is. But there is reason to be – well, optimistic is perhaps too strong a word, but we’re not without a fighting chance. And we don’t have a choice. The attack is too blunt and too stupid.
‘At the same time, this discussion hasn’t just started. Many people, including myself, have been talking about regulation and the need for it for years. The discussion is only now picking up speed. And the general public has become aware of the need.’
If we want to have a fighting chance, don’t we need strong, independent research – and strong universities? Now, we have just had massive budget cuts at universities in the Netherlands…
‘Quite frankly, I see this as sabotage of European independence. And I’m not surprised that the biggest party in the Netherlands responsible for this is pro-Putin. It’s the same with their attack on the media and on judges. They are undermining a sound democracy and the role that knowledge could play in it.
In a Nutshell
Want to learn more about Pim Haselager’s views on the past, present, and future of AI? Then don’t miss the two special episodes of In a Nutshell, Vox’s science podcast. You can listen to part 1 here and part 2 here.
‘Unfortunately, I think too many people still see universities as ivory towers, as some kind of left-wing hobby. But universities play a vital role in helping students translate their scientific knowledge into products that improve society rather than degrade it – including the creation of a healthy AI in Europe.’
You have an interesting research background yourself. You studied psychology and philosophy. How did you get from there to artificial intelligence?
‘When I enrolled in university in 1978, there was no such thing as artificial intelligence to study. It was like a different planet. My main question in those days was: “Who am I?” Not as an individual – I had that question too, of course – but for me, it was more about human beings as a species. The Vietnam War and the oil crisis had just ended; it was the time of punk, acid rain, and nuclear weapons, and the Cold War was at its peak. So, I was very interested in what makes Homo sapiens tick.
‘I started programming on the Commodore 64’
‘I liked the big questions that philosophy asked, but I didn’t like the answers, because they were too conceptual. And I loved psychology because it was empirical, but it runs the risk of becoming an experimental factory that forgets the real questions. So, I studied both: philosophy at the University of Amsterdam and psychology at the Free University. I literally biked happily back and forth between the disciplines.
‘I realised that the mechanisms of the brain are very important in trying to understand not only our abilities but also our shortcomings. And computer modelling, which was not a big thing at the time, was a different way of trying to understand ourselves. You try to see the differences between what your computer model does and what a real person or animal does. And you understand what you don’t understand. So, I saw them as complementary. Philosophy, psychology, and computer science were basically one project.’
A lot has happened since then…
‘Yes. I started programming on the Commodore 64, which may not mean much to you, but it was a major thing because it was the first personal computer that you could afford. It used a cassette tape for storage. I wrote my first programs, and then I started to get interested in neural networks [basically computer models that simulate the interaction between neurons in the brain, Ed.], which were just coming up back then, and you could still do the calculations by hand. Now, you obviously can no longer do that. Today, we see that artificial intelligence works, and then we ask ourselves: “Why does it work?”’
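For readers who never did those hand calculations: an early artificial neuron boils down to a weighted sum passed through a threshold. The sketch below is our illustration in modern Python – not Haselager’s actual code, which would have been written on the Commodore 64 itself – of a single neuron small enough to check with pen and paper.

```python
# A single artificial neuron of the kind once computed by hand:
# multiply each input by a weight, add a bias, and pass the total
# through a simple step threshold.

def neuron(inputs, weights, bias):
    """Weighted sum of the inputs followed by a step activation."""
    total = bias + sum(x * w for x, w in zip(inputs, weights))
    return 1 if total > 0 else 0

# Hand-checkable: 1*0.5 + 1*(-0.4) + (-0.1) = 0.0, which is not > 0.
print(neuron([1, 1], [0.5, -0.4], bias=-0.1))  # prints 0
```

With two inputs and three parameters, every step can be verified on paper – exactly the scale at which such networks were studied then, and a scale that today’s models, with their billions of parameters, have left far behind.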
Do you think that’s a problem?
‘I think it’s a reason to be afraid of AI. This lack of transparency is a much more profound and a much more urgent issue than so-called superintelligence. Big companies talk about superintelligence because they want to divert attention from their own responsibility for the products they release.
‘There’s a wonderful quote by Pedro Domingos that goes: “People worry that computers will get too smart and take over the world, but the real problem is that they’re too stupid, and they’ve already taken over the world.”’
‘We’re delegating a lot of decisions about ourselves – our finances, our dating, our healthcare, the legal system, policing – to systems that don’t understand what they’re talking about. A medical decision support system doesn’t know what it feels like to be ill. It just correlates data points.’
Do you use programmes like ChatGPT yourself?
‘Hardly. I sometimes use it to keep up with the latest developments, but I don’t let AI write my emails or summarise books or articles. I’ll read them myself, thank you. And all the fake pictures people are making with AI – I mean, how many funny cat pictures do we need? When is it enough? The amount of energy it costs – it’s just ridiculous.
‘I don’t think we should outright forbid students to use AI. If we want our students to become responsible professionals, then, of course, they need to learn to work with generative AI. But my students also tell me they use AI to generate ideas – and they’re not generating anything. AI doesn’t help you generate ideas; you trade your unique perspective for the equivalent of a Big Mac, the average American database.
‘There’s no one like you. Your perspective, your background, your understanding, your knowledge – they’re unique. And we want to help you express that unique perspective, because it might be really valuable. But of course, that’s hard. The moment you exchange it for ideas or code generated by AI, it’s no longer you. So don’t call it “generating ideas”. You’re not generating them, you’re giving them away. And you’ll never find out what you really think. Isn’t that sad?
‘AI is a lot dumber than we tend to believe’
‘My mother was a very bad poet. And for Sinterklaas, she used to write two-line poems that were just deliciously wrong – but it was her. She could have used ChatGPT to write twenty pages of a perfectly fine poem, but it wouldn’t have been her. So, what would have been the point of those poems?’
Then obviously the question is: where do we go from here?
‘I find that AI poses a very deep philosophical question: why do we do what we do? Why do we think or play chess when there’s a machine that can do it better? Why do we create art? Where do we see the value of life? I think these questions are coming back with a vengeance, and they are becoming increasingly urgent these days.’
It’s clear that AI in general and ChatGPT in particular have already changed society. Is it comparable to any other form of technology that has come before?
‘I think AI is more profound, more penetrating, than any kind of technology that has come before – except maybe language, if you’re willing to think of language as a technology. Language has changed us as a species in incredible ways. And this is something comparable. It really is on that level.’
Why?
‘I think AI will transform the way we live as social beings. The social environment is really important for us as individuals: it shapes who we are developmentally, but you are also co-created by your environment every moment of your life. And AI is going to radically change that; it’s going to have transformative effects on that social environment, and on us, that we cannot foresee.
‘I think the hype surrounding AI is misleading us, insofar as AI is a lot dumber than we currently tend to believe. That’s hype. But at the same time, we hardly understand the potential consequences of the technology we already have. It’s less advanced than it claims to be, but its effects are more profound than we realise. That’s what I mean by the “children with a chainsaw” analogy.’
Following your analogy: is there still a chance that an adult will come into the room and take away the chainsaw?
‘I think the EU Digital Constitution is a serious attempt. That is the adult in the room. The problem is, of course, how effective is an adult? You can come into the room and take away the chainsaw. But if you have chainsaws everywhere… You’re not always going to be in the room. You can write wonderful regulations, just as you can write wonderful ethical principles. But people are lazy and they dig their own graves. What I said in the 2017 interview is still true in 2025: we are the problem.’