
Nicolas Sabouret: Artificial Intelligence – what does this mean?

1 June 2023

In this interview, we had the pleasure of speaking to Nicolas Sabouret, expert in artificial intelligence, Professor at Université Paris-Saclay and Director of the Graduate School of Computer Science.

As a specialist in artificial intelligence and human-machine interaction, he is particularly interested in affective computing and has been contributing to the debate on his discipline for years through his research, the theses he has supervised and the publication of several books.

In this interview, we will discuss a range of topics, including the definition of AI and its fields of application, the technological challenges linked to its development, and the performance of algorithms and their relation to intelligence. We will also explore the skills needed to work in this ever-evolving field, as well as AI’s possible uses for teaching and learning.

We hope that this discussion will prove enriching for everyone interested in AI and its current and future potential.

How would you define artificial intelligence and what are the main sectors it is used in today?

Artificial intelligence is the science of making machines perform activities that require intelligence when they are carried out by humans.

AI is used in numerous sectors, often without us being aware of it: there is AI in your car’s GPS, in Amazon’s recommendations for online shopping and in the Facebook feed that decides which posts to show you. But it is also in the controllers of cranes on construction sites to stop them jolting, in postal sorting offices where letters are sorted by postcode using optical recognition, in the software school principals use to prepare timetables and even in the robots on Mars to control their movement!

What are the main technological challenges linked to developing artificial intelligence?

This question can be understood in many different ways, as “technology” is a concept that encompasses notions ranging from science to usage, by way of technical implementation.

On a scientific level, the difficulty lies in finding the right way to carry out a calculation that gives an acceptable result to a problem for which we know it is mathematically impossible to obtain an exact answer. This is why there are so many AI techniques, each adapted to a particular set of problems.
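As an illustrative aside, the travelling salesman problem is a classic example of this trade-off: computing the truly optimal tour is out of computational reach for large numbers of cities, so a simple heuristic settles for a good-enough answer. The sketch below uses a greedy nearest-neighbour rule on randomly generated coordinates, purely to illustrate one such technique among the many that exist.

```python
import math
import random

# Invented example: an "acceptable" tour for the travelling salesman problem.
# Finding the truly optimal tour is out of reach for large numbers of cities,
# so a greedy heuristic settles for a good-enough answer instead.

def tour_length(points, order):
    """Total length of the closed tour visiting points in the given order."""
    return sum(
        math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
        for i in range(len(order))
    )

def nearest_neighbour_tour(points):
    """Greedy heuristic: always travel to the closest unvisited city."""
    unvisited = set(range(1, len(points)))
    tour = [0]
    while unvisited:
        last = points[tour[-1]]
        nxt = min(unvisited, key=lambda i: math.dist(last, points[i]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(100)]  # made-up data
tour = nearest_neighbour_tour(cities)
print(f"Heuristic tour length: {tour_length(cities, tour):.2f}")
```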

On a technical level, we need to see how AI algorithms can be implemented in technological devices. To give a simple example, once you’ve written an algorithm that calculates the shortest route between point A and point B on a graph, you still need to process geographical data to build that graph from road maps, you need satellites for positioning, and you need a user-friendly interface to enter the destination address and view the journey…
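To make the first step concrete (the route calculation itself, before any maps, satellites or interface are involved), here is a minimal sketch of a classic shortest-path computation, Dijkstra’s algorithm, run on an invented toy graph. Real navigation systems are of course far more elaborate; this only illustrates the kind of calculation involved.

```python
import heapq

def shortest_route(graph, start, goal):
    """Return (distance, path) for the shortest route from start to goal.

    graph: dict mapping each node to a list of (neighbour, distance) pairs.
    """
    # Priority queue of (distance so far, current node, path taken)
    queue = [(0, start, [start])]
    visited = set()
    while queue:
        dist, node, path = heapq.heappop(queue)
        if node == goal:
            return dist, path
        if node in visited:
            continue
        visited.add(node)
        for neighbour, weight in graph.get(node, []):
            if neighbour not in visited:
                heapq.heappush(queue, (dist + weight, neighbour, path + [neighbour]))
    return float("inf"), []

# A toy road graph: distances in kilometres between fictional places.
roads = {
    "A": [("B", 5), ("C", 2)],
    "B": [("D", 4)],
    "C": [("B", 1), ("D", 7)],
    "D": [],
}
print(shortest_route(roads, "A", "D"))  # (7, ['A', 'C', 'B', 'D'])
```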

Finally, in terms of usage, the technology needs to be useful (in the sense that people find a use for it). This is the case for the recommendation, image-recognition and text-analysis algorithms used on social networks, but certain AI tools have not been adopted (such as recommendation systems in the medical field) because the usage simply wasn’t there.

According to Luc Julia, one of the creators of Siri, the way algorithms perform has nothing to do with intelligence. What do you think about that?

First of all, we need to be able to define intelligence, which is a difficult question. One thing’s for certain, though: machines calculate – they do not “think”.

As some problems can be solved very well with calculations, machines are absolutely capable of outperforming us on certain tasks (a human can’t complete a sudoku in a millisecond) and appear to be incredibly competent. We tend to see this as intelligence, especially when a task is difficult for a human (e.g., identifying tumours on an ultrasound scan). But this is only ever the result of a calculation that was programmed with a specific goal. Today’s AI programs are not capable of “general” intelligence, of adapting to very diverse tasks.

Luc Julia also said, “intelligence isn’t only winning at Go, it’s also knowing how to make a sandwich”. This is absolutely true.

What skills are needed for working in the field of artificial intelligence?

If by “working in the field of artificial intelligence” you mean “developing AI programs”, then you’d need strong maths and computer science skills. You can, however, work in the field of AI and interact with AI tools without being an expert on the subject. For example, we have psychology researchers in our lab who help us to develop more “human” conversational tools that are capable of assisting users without interfering with their lifestyle. While these colleagues don’t actually develop AI programs themselves, they do contribute to developing AI systems through their experimental studies and the knowledge they pass on to computer scientists from the results of their research. By doing so, they “work in the field of AI”. I used psychologists as an example, but I could also say the same for biologists, doctors, chemists, energy specialists, etc. You therefore don’t need to be a computer scientist to contribute to AI.

Let’s not forget, no AI program works “by itself” – human expertise is always required.

If, in future, we wanted to create AI programs to help with cultural dissemination, preservation of heritage or historical research, we’d need experts in these fields to tell the machine what to do. The machine wouldn’t be able to figure out what to do by itself (and nor would the computer scientists)!

The history of science and technology shows that technological innovations can be capable of both good and bad. Do you think we should fear artificial intelligence?

There are two ways of looking at this question. Taken very literally, I would say that you shouldn’t fear AI – it’s just a tool and a tool is neither good nor bad. It’s what we do with a tool that is good or bad.

With the same AI algorithm for image-based decision-making, we could give a recommendation to a doctor to inform a diagnosis, or we could automatically trigger a medical operation, even without the patient’s consent. In the first case, we are talking about intelligent use of AI that keeps in mind the possible limits of the technology and the importance of the human’s role. In the second case, the use of the technology is alienating. We therefore should fear (and prevent) the misuse of AI, but not the AI itself as a scientific discipline (in the same way that the misuse of nuclear physics must be regulated, but not the science itself!).

But for some time now, with the increased awareness of the ecological impact of human activities, the question “should we fear AI?” can take on a different meaning. Some AI algorithms, such as deep learning, are extremely energy and natural resource intensive. We can therefore fear that the abusive use of these methods will damage the balance of our ecosystem in the short term.

Finally, I’d like to point out that the wording of your question (“artificial intelligence”) tends to accentuate the personification that we naturally apply to tools that use AI. It would be better to say “AI systems” to remind us that they are indeed systems and not people.

How do you envisage the integration of AI and its role in the field of education?

Like most scientists, I’m unfortunately very bad at suggesting uses based on the research in my field. But I am convinced that we should look at and listen to what artists and creators have to tell us, as, even though they may not fully understand how AI systems work, they know how to unlock their potential better than we do. I can, however, cite two examples in the field of education that I think are promising.

In language learning, speech and language recognition tools, as well as machine translation from AI research, can help in developing devices for learning outside the classroom. That seems fundamental to me, since regular practice is essential when learning a language.

The second example that comes to mind is the automatic correction of students’ work. While teachers need a detailed understanding of their students’ errors in formative assessments, marking summative assessments is often repetitive and of little interest. I think AI could help in designing marking assistants that would allow teachers to focus on passing on knowledge and supporting their students.