Over the last few decades, robot manufacturers have struggled to develop robots that are easy to program. With the introduction of collaborative robots that can be programmed by hand-guiding, a new threshold has been reached. However, there is still a great deal to teach robots, and at the moment most robots are not very good at adapting to their immediate environment. With the introduction of multiple sensors and artificial intelligence, some research is leading to a remarkable advance in robot programming: machine learning.
As I said in the introduction, collaborative robots have recently brought big improvements to programming methods in both manufacturing and research, since there is no longer any need to program the robot off-line. You can literally teach the robot the required points by hand-guiding the arm through those points or trajectories; then, simply by registering each point you want it to remember, the robot's program is set. In some cases you can choose between linear or circular interpolation between the points. The fact remains that you are still pretty limited by the options offered. For example, if an object intersects the robot's path, the robot will still hit it because, first, it doesn't care (well, actually, it doesn't recognize the object) and, second, it doesn't know how to avoid it. This is where machine learning comes into the game.
"Machine learning is the field of scientific study that concentrates on induction algorithms and on other algorithms that can be said to 'learn.'" - Ron Kohavi, PhD, Machine Learning Engineer, Microsoft.
In this context, machine learning is the robot's ability to identify constants in its learning process. In other words, when you teach the robot a given operation a certain number of times, certain constants will appear across the different teaching motions. For example, say you want to teach the robot to press a button. The starting position and the finishing position will remain pretty much the same, and so will the action of pushing the button, as you teach the robot repeatedly. Machine learning algorithms record these constants and reproduce them once the robot is in automatic mode. Since the goal is to push the button, the robot will locate the button, reach for it and push it. Even if the button is offset from its initial position, the robot will go to the new position and press the button.
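To make the idea concrete, here is a minimal, hypothetical sketch of how such "constants" could be extracted from a few hand-guided demonstrations. It is not the researchers' actual algorithm, just an illustration: waypoints whose position barely varies across demonstrations are flagged as constants the robot must reproduce, while high-variance waypoints are left free.

```python
from statistics import pstdev

def find_constants(demonstrations, threshold=0.01):
    """Flag waypoints whose position barely varies across demonstrations.

    demonstrations: list of demos, each a list of (x, y) waypoints of
    equal length (assumed pre-aligned, e.g. time-normalized).
    Returns one boolean per waypoint index: True means the waypoint is
    a "constant" the robot should reproduce exactly.
    """
    n_points = len(demonstrations[0])
    constants = []
    for i in range(n_points):
        xs = [demo[i][0] for demo in demonstrations]
        ys = [demo[i][1] for demo in demonstrations]
        spread = pstdev(xs) + pstdev(ys)  # near zero => same point every time
        constants.append(spread < threshold)
    return constants

# Three hand-guided demos of "press the button": start, mid-path, press.
# Start and press repeat exactly; the mid-path point wanders.
demos = [
    [(0.0, 0.0), (0.30, 0.52), (0.50, 1.00)],
    [(0.0, 0.0), (0.41, 0.48), (0.50, 1.00)],
    [(0.0, 0.0), (0.35, 0.60), (0.50, 1.00)],
]
print(find_constants(demos))  # [True, False, True]
```

Here the start and the button press come out as constants, while the path in between is free to vary, which is exactly the distinction the learning algorithm exploits.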
So the path itself is not very important; what remains an important constant is that the robot has to push the button (the goal). If there is an object in the robot's path, the robot can detect the object, change its path and still reach the button without hitting anything. Watch the following video to get a better idea of how the robot applies machine learning.
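The "goal fixed, path flexible" idea can be illustrated with a toy planner. This sketch assumes a simple grid world and uses plain breadth-first search rather than whatever planner the robot actually runs: when an obstacle appears, the path changes, but the goal cell never does.

```python
from collections import deque

def plan(start, goal, obstacles, size=5):
    """Find a shortest obstacle-free path on a size x size grid
    with breadth-first search. Returns the path as a list of cells,
    or None if the goal is unreachable."""
    frontier = deque([(start, [start])])
    seen = {start}
    while frontier:
        (x, y), path = frontier.popleft()
        if (x, y) == goal:
            return path
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < size and 0 <= nxt[1] < size
                    and nxt not in obstacles and nxt not in seen):
                seen.add(nxt)
                frontier.append((nxt, path + [nxt]))
    return None

# Same goal both times; only the environment changes.
clear = plan((0, 0), (4, 0), obstacles=set())
blocked = plan((0, 0), (4, 0), obstacles={(2, 0), (2, 1)})
print(len(clear), len(blocked))  # the detour is longer, same endpoint
```

The point is not the search algorithm; it is that the goal stays constant while the path is recomputed to fit the environment, which is what the taught program alone cannot do.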
The last video was shot after the integration of new algorithms by Gu Ye and Ron Alterovitz from the University of North Carolina at Chapel Hill. They taught the robot a few tasks and, once they had integrated the learning algorithms, the robot could remember the important points and avoid obstacles in its path. This demo uses an Aldebaran robot to execute several common household tasks.
The following video, by Chris Bowen and Ron Alterovitz, uses Rethink Robotics' Baxter to execute the simple task of transporting a substance with a spoon without spilling it all over the table. It may seem like a simple task for you and me, but for a robot it is not that intuitive. In fact, the robot first has to locate the initial receptacle, then grab some of the material in it and transport it without spilling any, then locate the second receptacle and finally pour the material into it. The complexity of the task increases once the researcher begins to put obstacles in the robot's path, or moves the second bowl. Take a look at it, it is quite impressive.
So, what's next? Well, this is just the very beginning of machine learning. Imagine that you show a robot how to do a given task, then move it and ask it to do the same task in a different environment: it should be able to adapt its motions to that environment, so there is no need to teach the robot what to do each and every time. Now imagine sharing these algorithms in the cloud; this is happening with ROS and was one of the initial objectives of Willow Garage.
Imagine that you are working with Baxter and it gets bumped. On most robots this would introduce an offset, and everything done thereafter would be done from the offset position (this could be really bad and throw off all your carefully calibrated measurements). In this case, however, the learning algorithm corrects the offset so that the task can still be executed. It is very clever. In fact, I think this could really revolutionize the way people program robots. Let's see where this research leads robot programming; honestly, I hope it will be available on the market as soon as possible. In the meantime, take a look at Ron Alterovitz's research and don't forget to subscribe to our blog for frequent news on robotics.
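Here is a deliberately naive sketch of that kind of correction, under the simplifying assumption that the whole taught trajectory can just be translated toward the target's newly observed position. A real system would re-localize with its sensors and re-plan rather than rigidly shift, but the sketch shows why sensing the goal beats replaying blind coordinates.

```python
def correct_for_offset(trajectory, taught_goal, observed_goal):
    """Translate a taught (x, y) trajectory so it ends at the goal's
    newly observed position (e.g. after the robot was bumped)."""
    dx = observed_goal[0] - taught_goal[0]
    dy = observed_goal[1] - taught_goal[1]
    return [(x + dx, y + dy) for x, y in trajectory]

# The taught path ends on the button at (0.5, 1.0); after a bump, the
# camera sees the button at (0.55, 0.9), so the path shifts with it.
taught = [(0.0, 0.0), (0.3, 0.5), (0.5, 1.0)]
corrected = correct_for_offset(taught, taught_goal=(0.5, 1.0),
                               observed_goal=(0.55, 0.9))
```

A robot that only replays `taught` would now press empty air; one that re-observes the goal and adjusts still lands on the button.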