New professor Iris van Rooij asks the critical questions of Artificial Intelligence
As the new professor of Computational Cognitive Science, Iris van Rooij is not primarily concerned with the hot, popular topics of Artificial Intelligence, such as robots and facial recognition software. She does in-depth research on the formulation of theories and is well aware of the problems with those popular topics. ‘I think it is our responsibility as scientists to speak up when we see the negative consequences of certain technologies.’
The beginning of Iris van Rooij’s (48) professorship almost passed unnoticed this summer. She didn’t announce it herself, didn’t share it with the world. That’s curious for someone as active on social media as she is. Online, she likes to share the work of others and speaks up about diversity in her field and at the university. But when it came to her new position, she waited for the university’s official announcement. ‘And maybe it was a bit of modesty’, says Van Rooij.
She has been working in the Artificial Intelligence (AI) department as an assistant professor since 2007, and in 2015 she became an associate professor. Since the first of July, Van Rooij can call herself professor of Computational Cognitive Science. Isn’t that an important step in her career? One worth sharing? That’s one question Van Rooij wasn’t prepared for. She takes a long time to think before answering. Yes, the title brings new responsibilities and new possibilities, but she has taken on many of them already. ‘It doesn’t feel like my work is changing drastically.’ She laughs. ‘I thought you were going to ask me what computational cognitive science actually is.’
When she studied Psychology at Radboud University in the 90s, she asked herself where the female professors were. ‘Because of that, I decided: I want to be the first female professor here.’ Thankfully, she says, others achieved that goal sooner. But there are still too few. That is something she realizes mainly from female colleagues’ reactions to her professorship. ‘So, to come back to your question: it’s not a personal milestone, but a milestone for the larger discourse.’
So now the question after all: what does a computational cognitive scientist do?
‘Cognitive science is the interdisciplinary study of human thought. That covers functions such as motor skills, perception, decision making and language. Computational cognitive science is based on the idea that you can explain cognition computationally, meaning that you can make a sort of model of it. When they hear the word computation, many people think of a laptop. That’s not what we mean. We mean any physical system that can perform computations.’
In practice, Van Rooij’s job is primarily about thinking about how scientists think about human thought. That sounds quite meta, and it is, according to Van Rooij. ‘We ask ourselves what it means to make models of cognition. What makes a model good, and what makes it explanatory or not?’
That is perhaps not the first thing that comes to mind when you think of AI. Cognitive engineering more often makes it into the spotlight. Scientists in that field try to build artificial systems that are intelligent, for example at facial recognition. That is not what Van Rooij does. ‘A lot of AI is very much focused on “really cool models” and “look at what they can do”’, she says. ‘My work is the opposite. What can’t they do? Where do they get stuck?’
What do you want to focus on most as a professor?
‘Strengthening the connection between AI and Psychology. We are in the unusual situation that the two are grouped together within the Faculty of Social Sciences here. At other universities, AI is often part of Computer Science. I did a master’s in Experimental Psychology myself and became disillusioned with the experimental psychological approach quite early on. There was no deep theory, just an accumulation of effects. Take, for example, the Stroop effect, which shows that we find it difficult to name the colour of the ink when a word such as “green” is written in red.’
Trying to formulate a theory from a collection of effects is a bit like trying to write a novel by generating random sentences, Van Rooij thinks. ‘You could keep generating for eternity. And even if you were on the right track, you might not recognize it. AI can help with the development of theories, because by building computational systems we can come to understand how, for example, visual perception works.’
You said on Twitter that it’s the job of science to explain phenomena like rainbows or thunderstorms, not to predict the weather. Why?
‘If we want to know how language and communication work, it’s not about whether and how someone can predict the next words I’m going to say. Or your next words. Take the recent flooding. I read that one climate change expert received complaints because he hadn’t predicted it. He said: our models are based on the past, and this is new terrain; we can’t predict its future. But we can explain it.’
Do you think there has been too much emphasis on prediction in the past? And why is that?
‘Because it’s easy. Take AI and machine learning (machines that can “learn” from data, ed.). People are trying to build systems that can predict the likelihood of criminal activity, for example, or systems that select people for job interviews.’
That can have a negative societal impact. If your historical data is biased, says Van Rooij, and discriminates against, for example, people of colour and women, you will end up with systems that discriminate against those same people and don’t invite them for job interviews. But science puts too much emphasis not only on prediction but also on success, she thinks. ‘There is a lot of emphasis on problem solving instead of problem creating. In my field, scientists often say: “Look, my system works 95 per cent of the time.” But what about the 5 per cent where the system fails? Those are probably minorities. We need more problem creators, if you ask me.’
You are generally putting a lot of emphasis on the issues of inequality and diversity in science – in your work and on social media.
‘I think it is our responsibility as scientists to speak up when we see the negative consequences of certain technologies and how they are abused.’
Do some people in your field experience this as nagging? They are quite satisfied with their model, and you’re the one asking difficult questions.
‘Yes, that is sometimes the case. I’m the one who asks the critical questions. But I have come to learn that there is a growing appreciation for this. It might hurt, but it’s necessary. Some people get frustrated by that. They say: yes, but we still want progress, right? But what is that progress, then? Move fast and break a lot of things along the way?’
‘And sometimes I think we shouldn’t strive towards certain technologies, like facial recognition software, at all. Last year, a paper was published describing an algorithm, created with machine learning, that could judge trustworthiness from a face just like humans do. But we know that people do this in a biased way. There is quite a dark history behind it: white people perceive white faces as more trustworthy than Black faces. The entire idea that trustworthiness can be read from someone’s face is asking for trouble.’
You are not only calling science to order. You are also vocal about diversity and social safety at the university. Do you think people will listen more now that you are a professor?
‘That remains to be seen. I’m going to continue speaking up as a professor. I couldn’t do it any other way. But I’m still finding out what the new possibilities will be.’
Is the university doing enough with regard to diversity?
‘You can always do more. We have a very active gender and diversity committee at the Donders Institute. But I don’t know how representative that is.’
The number of female professors at the university is, for example, still quite low. In Nijmegen, it is 30 per cent.
‘We shouldn’t think of diversity only in terms of: we need more women. That’s somewhere we still have a lot of work to do. The university is very white, and, certainly in research, heteronormative. Targets like “we need more women” give us the false sense that we’re doing something about diversity. Meanwhile, experts are saying: it’s about inclusion, about creating a culture that everyone feels comfortable being a part of and in which everyone can flourish. I’m not saying it’s bad to have targets, but if you only focus on them without establishing a broader vision, then targets are just an empty promise.’
Iris van Rooij (1973) received her master’s in Experimental Psychology from Radboud University and her PhD in Cognitive Psychology from the University of Victoria, Canada. She has worked as an assistant professor in the department of Human Technology Interaction at Eindhoven University of Technology and as an (associate) professor in the AI department in Nijmegen.
Additionally, Van Rooij is a principal investigator at the Donders Institute for Brain, Cognition and Behaviour. In 2020, she received the Distinguished Lorentz Fellowship and Prize from NIAS-KNAW and the Lorentz Center.