A few weeks ago, I wrote an article about ROS-Industrial, explaining how it could be used to create new applications for industrial environments by reusing code produced in academic research. Well, it turns out that the guys at SwRI have been working on a very nice demo that illustrates this concept perfectly. As you can see in the following video, the demo consists of a Motoman robot that autonomously grasps various objects using a Kinect sensor and one of our three-fingered Adaptive Robot Grippers. The video starts with an animation of a PR2 robot doing the same demo, which reminds us that the code was initially developed in a research environment, then transferred to the industrial robot.
How to do autonomous robot grasping?
As explained in the video, the procedure required to perform autonomous grasping of objects can be broken down into five steps:
- The data is first acquired using the Kinect sensor. Its output is a 640x480 image in which each pixel carries four values: the usual R, G and B values determining its color, plus a depth value. Combining the color and depth information yields a full point cloud, i.e. a set of points in space (x, y, z) for which the color (R, G, B) is known.
- The points located in the background are removed, keeping only those which are part of the work surface (the table) and the objects (located on the table).
- The objects are isolated from the work surface.
- The 3D points representing the object to be picked up are then analyzed in order to determine the best achievable grasp (a rough sketch of steps 2 to 4 follows this list).
- A path planning algorithm is run to obtain an efficient, collision-free trajectory that brings the robot gripper to the grasp pose determined in the previous step.
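To give a feel of what steps 2 to 4 can look like in code, here is a minimal sketch using the Open3D Python library. It is not the code from the demo (which is built on ROS packages); the file name, the cropping box, the thresholds and the centroid-based "grasp point" are purely illustrative assumptions.

```python
import numpy as np
import open3d as o3d

# Step 1 output: a colored point cloud captured from the Kinect.
cloud = o3d.io.read_point_cloud("kinect_scene.pcd")  # hypothetical file name

# Step 2: remove the background, keeping only the region around the table.
roi = o3d.geometry.AxisAlignedBoundingBox(
    min_bound=np.array([-0.5, -0.5, 0.2]),
    max_bound=np.array([0.5, 0.5, 1.2]))
workspace = cloud.crop(roi)

# Step 3: isolate the objects by fitting the dominant plane (the table)
# with RANSAC and discarding its points.
plane_model, table_idx = workspace.segment_plane(distance_threshold=0.01,
                                                 ransac_n=3,
                                                 num_iterations=1000)
objects = workspace.select_by_index(table_idx, invert=True)

# Step 4 (a very naive stand-in for real grasp analysis): cluster the
# remaining points into individual objects, then take the centroid of the
# first cluster as a candidate grasp position.
labels = np.array(objects.cluster_dbscan(eps=0.02, min_points=50))
first_object = np.asarray(objects.points)[labels == 0]
grasp_position = first_object.mean(axis=0)
print("Candidate grasp position (x, y, z):", grasp_position)
```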
After these five steps, the trajectory can be executed by the robot (as sketched below). Using the Adaptive Robot Gripper simplifies the grasping because it mechanically adapts itself to the object, compensating for possible imprecision in the algorithms. Also, the Gripper is driven by a simple "close" command, removing the need to compute the trajectory of each finger.
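Here is what step 5 and the execution could look like with off-the-shelf ROS 1 tools: MoveIt plans the collision-free trajectory and a single GripperCommand closes the gripper. The planning group name, the action name and the hard-coded pose are assumptions on my part, not the demo's actual configuration.

```python
#!/usr/bin/env python
# Sketch of step 5 plus execution: plan a collision-free path to the grasp
# pose with MoveIt, then close the gripper with a single command.
import sys
import rospy
import actionlib
import moveit_commander
from geometry_msgs.msg import Pose
from control_msgs.msg import GripperCommandAction, GripperCommandGoal

moveit_commander.roscpp_initialize(sys.argv)
rospy.init_node("grasp_demo")

arm = moveit_commander.MoveGroupCommander("manipulator")  # assumed group name

# Grasp pose produced by the perception steps (placeholder values).
target = Pose()
target.position.x, target.position.y, target.position.z = 0.4, 0.0, 0.25
target.orientation.w = 1.0

# Step 5: plan and execute a collision-free trajectory to the grasp pose.
arm.set_pose_target(target)
arm.go(wait=True)
arm.stop()
arm.clear_pose_targets()

# The Adaptive Gripper only needs a "close" command; the fingers adapt
# mechanically to the object's shape.
gripper = actionlib.SimpleActionClient("gripper_controller/gripper_cmd",
                                       GripperCommandAction)  # assumed action name
gripper.wait_for_server()
goal = GripperCommandGoal()
goal.command.position = 0.0     # fully closed
goal.command.max_effort = 60.0  # limit the grip force
gripper.send_goal(goal)
gripper.wait_for_result()
```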
In the Industry
Autonomous grasping of various objects has enormous potential for industry. One example is kitting, in which different objects have to be placed together in a box or tray. What is now required to turn this demo into a full application in today's industries boils down to two things: ease of integration and cost.
The Kinect sensor is both cheap and simple (I'm not sure about the robustness, though). The algorithms seem to become easier to implement each year, thanks to initiatives such as ROS-Industrial (and, of course, open source rhymes with very low cost, i.e. zero!). Simple, low-cost grippers that adapt to the grasped objects are now entering the market. Finally, the last piece of the puzzle is a low-cost robotic arm. If you are a robotics entrepreneur who can gather and master all these elements, I can guarantee you that your future will be bright!