News & Views

Interview / The Realities of Robotics

by Sophia Epstein

Right now, in Japan, there are a few thousand people living with a mechanised roommate. Pepper, arguably the most famous humanoid robot, has been built to understand emotions – so it can interact with children and adults at a human level. Besides making a friendly housemate, Pepper has proved an effective customer service representative, and can now be found in stores across Europe, America and Asia helping visitors and giving out information.

The 1.2-metre robot was created by SoftBank Robotics (formerly Aldebaran), and we spoke to the company’s Chief Scientific Officer, Rodolphe Gelin, to find out where the whole industry is going.


A massive talking point right now is artificial intelligence. How does that fit into the world of robotics?

Artificial intelligence is a very broad term. Today we already have artificial intelligence in robots: when you speak to a robot and the robot answers you, that’s artificial intelligence. When the robot recognises your face or an object, these are very simple functions, but it’s already artificial intelligence.

It’s difficult for us, from a very technical point of view, to understand what people expect from artificial intelligence. But I think something we are all watching and working on is machine learning. When the robot arrives in your home, he doesn’t know you: he doesn’t know your habits, the way you do things, what time you get back from work, what you like to drink, what you watch on TV. But living with you he will learn all of that, and after a while he will be able to suggest functions, features and services before you ask.

I think getting to that functionality will take five years. And it will take the same amount of time for robots to be in everyone’s home, which isn’t surprising, because I think learning is something people expect a robot to be able to do. The first thing our Japanese customers who have a robot at home said was: ‘It’s nice, but it doesn’t learn anything about me. I spent two months with the robot at home and the robot is the same.’

It’s really important for the robot to be accepted and for people to invest in their relationship with him. The robot needs to be able to learn and understand what’s happening around him so the service he provides gets better and better over time.



So you think that’s going to be feasible in five years?

Yes, because machine learning is something that already works really well. We have nice demonstrations, but only on very specific tasks. For instance, with machine learning you can learn to recognise objects. Recently we taught a robot to play the ball-and-cup game, where you have a cup and a ball connected together and you throw the ball into the cup. We taught Pepper to do that, so we can teach complex tasks to the robot, but the teaching takes a very long time.
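Gelin doesn’t describe the training method in detail, but teaching a physical skill like ball-and-cup is typically framed as trial-and-error learning: the robot repeats the motion, scores each attempt, and keeps changes that improve the score. A minimal Python sketch of that idea follows; the motion parameters, reward function and numbers are all invented for illustration, not SoftBank’s actual setup.

```python
import random

# Illustrative only: a toy trial-and-error learner for a ball-and-cup-style
# motion task. The parameterisation and reward are invented for this sketch.

TARGET = (0.6, 1.4)  # hypothetical "good" swing parameters (amplitude, timing)

def attempt(params):
    """Simulate one physical trial: reward is higher the closer the swing
    parameters are to the (unknown) values that land the ball in the cup."""
    amplitude, timing = params
    error = abs(amplitude - TARGET[0]) + abs(timing - TARGET[1])
    return -error  # negative distance: 0 would be a perfect catch

def learn(trials=500, step=0.05):
    """Random hill-climbing over the motion parameters."""
    best = [random.uniform(0, 2), random.uniform(0, 2)]
    best_reward = attempt(best)
    for _ in range(trials):
        # Perturb the current best motion slightly and try again.
        candidate = [p + random.gauss(0, step) for p in best]
        reward = attempt(candidate)
        if reward > best_reward:  # keep the change only if the trial improved
            best, best_reward = candidate, reward
    return best, best_reward

if __name__ == "__main__":
    params, reward = learn()
    print(f"learned parameters: {params}, final reward: {reward:.3f}")
```

Because each iteration corresponds to a real physical attempt on the robot, hundreds of trials translate into hours of robot time, which is one reason the teaching takes so long.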

Would you say that robotics and AI are progressing at the same speed or is one being held back by the other?

The artificial intelligence required by robotics is quite specific; that’s the problem. The artificial intelligence that played the game of Go was beautiful – we were really amazed, and the more we got to know about this development, the more impressed we were. But that was only software; there was no connection with the physical world, and with robots the problem is the physical world.


As this all keeps progressing, what impact is robotics going to have on our lives?

At SoftBank Robotics we think the secret of getting people to accept robots is making sure you can interact with them in a very intuitive way. That means dialogue is necessary; the robot needs to be able to detect your emotions, detect if you’re tired, if you’re happy, if you’re excited, if you’re angry – that’s the secret.

That’s also the difference between a robot and any other digital device. When you use your mobile phone, you click on an application when you need it; but because a robot lives with you, is very close to you and knows your habits, he can suggest the application at the time you usually use it. So, in a way, it’s a very intrusive interaction, because the robot can kind of read your mind before you ask for something, but in another way it’s a very convenient and efficient way of living.

Sometimes, for instance, an elderly person doesn’t think to drink when it’s a very hot day. A classical robot would bring them a glass of water if they asked for one, but unfortunately elderly people forget to drink. With machine learning, the robot can suggest they have a drink because it understands that the person hasn’t drunk anything in some time, and through its connection to the smart home it can see the temperature is very high; by analysing that situation it will decide to bring them a glass of water.

This is a tricky step, something you have to tune very accurately, so the robot sits somewhere between being really proactive and just waiting for orders. Of course the robot has to answer and obey orders, but it also needs to propose things, and that’s the balance we have to find.
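As a concrete illustration of that balance, here is a minimal Python sketch of the glass-of-water example above. The thresholds, sensor values and function names are assumptions made for the sketch, not Pepper’s actual software:

```python
from datetime import datetime, timedelta

# Illustrative thresholds, invented for this sketch.
HOT_DAY_CELSIUS = 28.0
MAX_TIME_WITHOUT_DRINK = timedelta(hours=3)

def should_suggest_drink(last_drink_time, room_temperature, now=None):
    """Combine the two observations from the interview: how long it has
    been since the person last drank, and the smart-home temperature.
    Only when both indicate risk does the robot act unprompted - one
    simple way to sit between 'really proactive' and 'waiting for orders'."""
    now = now or datetime.now()
    too_long = (now - last_drink_time) > MAX_TIME_WITHOUT_DRINK
    too_hot = room_temperature > HOT_DAY_CELSIUS
    return too_long and too_hot

# Example: a hot afternoon, four hours since the last observed drink.
last_drink = datetime.now() - timedelta(hours=4)
if should_suggest_drink(last_drink, room_temperature=31.5):
    print("Suggest a glass of water.")
```

Requiring both conditions, rather than either one alone, is the kind of tuning Gelin describes: loosen the thresholds and the robot becomes intrusive; tighten them and it is back to just waiting for orders.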

How are robots progressing in a physical sense, not thinking about AI?

Some aspects are progressing very fast because we benefit from the development of mobile phones, so we have embedded computers that are very powerful, very cheap and very small. We have sensors – microphones and cameras – that are very cheap, and 3D sensors as well.

The technology in sensors and computing power is progressing very fast; the bottleneck for robotics is actuation and motors, because there are no other industries requiring the kind of motor we need in robotics. For me, that’s what makes the difference between robots and any other digital device – the motion, the actuation. In a way that is our strength, but it’s our weakness too, because we are the only ones requiring these specific actuation systems.

So do you think that mechatronics is the bottleneck then, or is AI also holding robotics back?

I don’t think so. From the artificial intelligence point of view it’s just a question of time, and I think we will do everything we want to do. We have been doing research on developmental robotics for a long time; we want to give the robot the ability to learn like a child learns. Babies learn how to use their arms, how to connect what they see with what they do, and we have long-term research based on that. It’s very complicated, but it’s very doable – it’ll probably take five or ten years, but we’ll do it.

But for the motors I can’t say the same. I can’t say it’s just a question of time and the improvement and optimisation of current solutions, because if you compare the performance of human muscle to the performance of electric motors, the motors are very, very far from what we can do with our muscles. Muscles are the result of billions of years of evolution, but we can’t wait billions of years for the evolution of electric muscles – that’s why we have to find disruptive solutions for actuation, whereas with artificial intelligence it’s just a question of time.