Are Autodidactic Robots Coming to Industry?
Posted on Apr 18, 2016 7:00 AM. 5 min read time
Why is AlphaGo such a momentous step in Artificial Intelligence? What does it have to do with robotic depalletizing? In this post, we investigate how recent advances in self-learning programs are making industrial robotics more flexible and easier to use. We also find out why AlphaGo will never be able to pass you the sugar.
You've almost certainly heard about AlphaGo. This March, the program developed by Google DeepMind made Artificial Intelligence history by becoming the first computer program to beat a world-champion player at the ancient game of Go. Why was the win so momentous? For one thing, AlphaGo taught itself how to play the game.
Self-learning algorithms are nothing new in robotics and AI. Ever since the first successful artificial neural network was developed at Stanford in 1959, programmers have been trying to make computer programs that can learn on their own. The motivation for doing this in robotics is fairly clear: if your robot can learn for itself, you don't have to program every task by hand.
How AlphaGo Won (and What This Means for Us)
Although machine learning has been around for a long time, it's only fairly recently that it has started to show real promise. Increasing computing power has allowed previously theoretical ideas to become practical.
AlphaGo uses a method called deep learning, which involves artificial neural networks with multiple hidden layers arranged in a hierarchy. The method has started to receive a lot of media attention recently, possibly due to its flashy-sounding name: "Deep Learning" sounds much more spectacular than "multi-layer neural networks", doesn't it? Architectures such as convolutional and recurrent neural networks both fall under the deep learning umbrella, and AlphaGo itself relies on deep convolutional networks.
One of the most interesting things about AlphaGo's victory is the way it "taught itself" to play the game so well. The programmers started by training one of AlphaGo's deep neural networks on a catalog of around 30 million positions from games between expert human players. AlphaGo then played a huge number of games against itself, using reinforcement learning to improve on the strategies it had learned from the humans.
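To make that two-stage idea concrete, here is a minimal sketch of the first stage, often called behavior cloning: a small "policy" network is trained to imitate recorded expert moves. This is purely illustrative, not AlphaGo's actual architecture, data or code; the network size, the flat board encoding and the random placeholder data are all assumptions, and PyTorch is used only for convenience.

```python
# Minimal sketch of stage one -- supervised "behavior cloning" of expert moves.
# Illustrative only: not AlphaGo's architecture, data or code.
import torch
import torch.nn as nn

BOARD_CELLS = 19 * 19  # a Go board flattened to 361 inputs

policy_net = nn.Sequential(  # tiny stand-in for a deep convolutional network
    nn.Linear(BOARD_CELLS, 256),
    nn.ReLU(),
    nn.Linear(256, BOARD_CELLS),  # one logit per possible move
)

# Placeholder "expert" data: in reality, millions of (position, move played
# by a strong human) pairs.
positions = torch.randn(1024, BOARD_CELLS)
expert_moves = torch.randint(0, BOARD_CELLS, (1024,))

optimizer = torch.optim.Adam(policy_net.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for epoch in range(5):
    logits = policy_net(positions)
    loss = loss_fn(logits, expert_moves)  # learn to match the expert's moves
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# Stage two (not shown) pits copies of the network against each other and
# reinforces the moves that led to wins -- the "self-play" step.
```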
But what advantages can an AI like this give to factory robotics? As DeepMind co-founder Demis Hassabis says: "games are a kind of microcosm for the outside world. That's why games were invented." Machine learning techniques, such as those used in AlphaGo, can be transferred to more practical applications once they have been developed for games. These techniques have some pretty useful applications in industry, including improved visual inspection, reduced calibration time and a better level of overall flexibility for robot workcells.
This is not just another empty prediction. Autodidactic robotic technologies are already showing up in factory robots.
The Self-Calibrating Depalletizing Robot
Depalletizing is a great task to give a robot. It adds no value to a product and is boring for human workers. However, it's not a task which is going to disappear any time soon. Whatever industry you work in, products and materials tend to arrive packed tightly onto pallets.
Robotic depalletization is not a new application. Many commercial options exist to integrate it into a production line. But there is one big caveat: the manipulated objects must all be the same size and orientation, or any deviations must be pre-programmed. This is a big issue if you regularly receive pallets with boxes of multiple, unknown sizes and orientations.
At the start of April this year, a California startup called Kinema Systems announced their new depalletizing system, Kinema Pick. The system, which can be integrated with any robot and end effector, automatically detects the shape and orientation of the boxes. You only have to tell it the positions of the pallet and the conveyor, which you program using augmented reality markers. After that, the system uses 3D vision and motion planning to detect the boxes and move them onto the conveyor.
The system is not unlike the Motoman depalletizer which we covered back in 2013. However, the company told IEEE Spectrum that their method has one big advantage - the Kinema Pick can handle tightly packed boxes, whereas the Motoman system required gaps to be present around the boxes for visual detection.
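To give a feel for what the 3D-vision side of such a system involves, here is a rough sketch of one naive approach: fit the dominant horizontal plane in a point cloud of the pallet (the current layer of box tops), then cluster it into candidate boxes. This is not Kinema's implementation; the Open3D library, the thresholds and the assumption that +Z points up are all illustrative choices.

```python
# A naive box-top detector for depalletizing (illustrative only, NOT Kinema's
# method). Assumes the Open3D library and a point cloud with +Z pointing up.
import numpy as np
import open3d as o3d

def find_box_tops(points_xyz: np.ndarray, plane_tol: float = 0.01):
    """Return one point cluster per candidate box top."""
    pcd = o3d.geometry.PointCloud()
    pcd.points = o3d.utility.Vector3dVector(points_xyz)

    # Fit the dominant plane (the current layer of box tops) with RANSAC.
    plane_model, inliers = pcd.segment_plane(
        distance_threshold=plane_tol, ransac_n=3, num_iterations=1000
    )
    top_layer = pcd.select_by_index(inliers)

    # Split that layer into individual boxes by spatial clustering.
    labels = np.array(top_layer.cluster_dbscan(eps=0.02, min_points=50))
    n_clusters = labels.max() + 1 if labels.size else 0
    clusters = [
        np.asarray(top_layer.points)[labels == k] for k in range(n_clusters)
    ]
    # Each cluster's centroid and extent give a pick position and yaw for the
    # robot's gripper; motion planning takes it from there to the conveyor.
    return clusters
```

Notice that the clustering step above only works when there are visible gaps between boxes, which is exactly the limitation of the older approach that Kinema claims to have overcome for tightly packed layers.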
You can easily imagine the advantages of self-learning programs for industrial robots. For one thing, the training and setup time required could shrink dramatically.
But, are there limits to self-learning? Are some tasks more suitable than others for autodidactic programming?
Why AlphaGo Can't Pass You the Sugar
There are, of course, limits to what machine learning is capable of. In general, autodidactic programs work well when they are applied to very specific, constrained parts of a problem; the space the program has to explore must be kept small for it to be practical.
We can't always wait hours (or days) for a robot to learn a task. For example, one research group back in October programmed a Baxter robot to teach itself how to grasp 150 different objects. The method worked well, with the robot able to predict a successful grasp 80% of the time. However, it took 700 hours (around 29 days) of grasping attempts for the robot to learn how to do this. Obviously, it would be ridiculous to expect industrial roboticists to wait that long while the robot learned how to grasp their tools and parts.
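For a sense of what "teaching itself to grasp" means in code, here is a toy sketch of the self-supervised idea: the robot logs whether each attempted grasp succeeded (using its gripper feedback) and trains a classifier to predict grasp success. This is not the research group's code; the feature vector, network size and random placeholder data are assumptions, and PyTorch is used only for illustration.

```python
# Toy sketch of self-supervised grasp learning (illustrative, not the
# researchers' implementation).
import torch
import torch.nn as nn

FEATURES = 64  # e.g. an image-patch embedding plus the attempted grasp angle

success_net = nn.Sequential(
    nn.Linear(FEATURES, 128),
    nn.ReLU(),
    nn.Linear(128, 1),  # logit for "this grasp will succeed"
)

# Placeholder log of trial-and-error attempts; filling in the real version of
# this table is what took the robot 700 hours of physical grasping.
attempts = torch.randn(2048, FEATURES)
succeeded = torch.randint(0, 2, (2048, 1)).float()  # gripper feedback labels

optimizer = torch.optim.Adam(success_net.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

for epoch in range(10):
    loss = loss_fn(success_net(attempts), succeeded)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

# At run time the robot would score many candidate grasps with success_net
# and execute the highest-scoring one.
```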
When applications get complex, self-learning algorithms can quickly become unwieldy and unreliable. Therefore, for self-learning to be successful, you need to focus on a very small part of the problem. For example, it’s much more realistic to design a robot which can learn to detect the orientation of a known part, like this Fanuc robot, rather than expecting it to detect and grasp any unknown object.
One reason that Kinema Pick looks so promising is that it only handles boxes, which are basically always cuboids. If you were to present it with a pallet containing unrolled duvet covers, I would be surprised if it could handle them. Even as one of the most advanced AIs in the world, AlphaGo will only ever be able to beat you at Go. It will never be able to pass you the sugar at a dinner table. This is not a bad thing. It just shows how powerful a little bit of self-learning can be.
What parts of your application would you like to incorporate self-learning into? What disadvantages can you think of for a self-learning robot? How can you see AlphaGo's win affecting the wider robotics industry? Tell us in the comments below or join the discussion on LinkedIn, Twitter or Facebook.