Technical Article

How to Use Robot Manipulators in Industrial Automated Bin-Picking

June 03, 2020 by Akshay Kumar

Explore the technology of autonomous robotic arms in this article on Industrial Automated Bin-Picking and how they can be used for a wide array of applications.

Traditionally, manipulator arms run hard-coded instructions, but a majority of applications of collaborative robot arms today involve autonomous motion. Automated bin-picking is one of the most sought-after applications for manipulator arms. Automated bin-picking in semi-structured environments without human intervention is seen as one of the most sensible commercial applications because it offers a good ROI and the opportunity to deploy human labor on more meaningful tasks.

This article builds on the previous two articles on visual servoing and motion planning for robot arms and explains the complete automated bin-picking pipeline and its technical aspects.

 

What is Automated Bin-Picking?

Bin-picking refers to the problem of picking up randomly placed objects with unknown poses from a designated location for tasks like sorting and pick-and-place. Using robotics and automation for bin-picking is called automated bin-picking, and it opens up more use cases while reducing labor requirements.

Picking up parts of a machine for assembly, picking workpieces for machine tending, and placing objects on shelves and stands are all examples of bin-picking, largely driven by 3D computer vision and intelligent manipulation techniques. Automated bin-picking relies heavily on computer vision to detect the objects and on robot motion control to articulate the arm that handles them.

Perception modules observe the environment to detect the objects to be picked from the cluttered bin, then determine the correct approach to pick each one up and place it at the target location. The robot thereafter picks up the object and transfers it to the drop location using different motion-planning strategies.

 

The Automated Bin-Picking Pipeline

Visual perception and robot motion are the two broad domains of the automated bin-picking pipeline. Visual perception covers determining the objects in the environment and generating the grasp pose for the robot, while the robot motion control algorithms execute the pick, transfer, and drop.

Figure 1. Automated Bin Picking Pipeline 
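
To make the pipeline concrete, the sketch below strings the stages together in simplified Python. Every name here (camera, robot, detect_objects, and so on) is a hypothetical placeholder rather than a real vendor API; it only illustrates the order in which the perception and motion stages hand off to each other.

```python
# Hypothetical skeleton of one bin-picking cycle; every function and object
# name is a placeholder, not a real library or robot-driver API.
def bin_picking_cycle(camera, robot, drop_pose):
    scene = camera.capture()                            # RGB-D frame of the bin
    detections = detect_objects(scene)                  # 1. object detection
    target = select_object(detections)                  # 2. object recognition/selection
    object_pose = estimate_object_pose(scene, target)   # 3. 6-DoF pose from 3D vision
    grasp_pose = compute_grasp_pose(object_pose)        # 4. collision-free grasp pose
    robot.move_to(grasp_pose)                           # approach via planning or servoing
    robot.close_gripper()
    if robot.grasp_succeeded():                         # tactile/force feedback
        robot.move_to(drop_pose)                        # transfer to the drop location
        robot.open_gripper()                            # drop
```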

 

Objects in the Bin

The type of objects or parts being picked up strongly influences the success of bin-picking. Objects with symmetrical structure, distinctive colors, and other notable features are easier to observe in the scene, while objects with varying heights, deformable bodies, or intricate geometric designs are more challenging.

Structured, semi-structured, and unstructured bin-picking are categories based on how the objects are placed in the bin: completely predictable object arrangement, arrangement with subtle inaccuracies, and completely random object placement, respectively. While structured and semi-structured bin-picking don't need visual sensing, they are the least likely scenarios in industrial settings and require so much upfront work that they defeat the purpose of automation.

 

Figure 2. Structured, semi-structured, and unstructured bins

 

Visual Perception for Automated Bin-Picking

Computer vision and deep learning techniques are used to execute this step of understanding the bin-picking scene.

Comparing the detected objects against their reference 3D CAD models helps in detection and recognition. The components of the visual perception module in automated bin-picking include:

1. Object Detection - Determining the individual objects in the bin, despite occlusions, partial overlaps, and glare, is a challenging first step. It helps the robot understand the possible objects to be picked. Conventional computer vision techniques such as edge detection, color affinities, and cascade classifiers can be used for object detection, but they are largely applicable only to completely visible, discrete objects. Deep learning techniques are better able to detect objects correctly in more complex situations because they are trained on large datasets with substantial inherent as well as induced variation, which makes them robust to many traditional computer vision pitfalls.

2. Object Recognition - Once the objects in the scene have been detected, recognizing the type of each object and deciding which one to pick is the next important step. A sorting application depends on how reliably the correct object is identified and picked. Camouflaged objects or occlusions make it fairly difficult for even the best learning algorithms to determine the object class.

3. Object Pose Estimation - After the object to pick up has been recognized, its pose in the bin is determined using a CAD model of the object and other object localization algorithms (a registration sketch appears after this list). This step is necessary to derive the grasp pose in the next step. To ensure that the pick-up location for a particular object is achievable by the robot and accurate, 3D vision is used.

 

Figure 3. Pose estimation for bin objects 

 

4. Grasp Pose Estimation - Once the object to be picked up is determined, the next task is to compute the grasp pose that lets the robot pick it up successfully. This is a fairly challenging step given the constraints of avoiding collisions and ensuring a safe pick of the object. It depends on a good geometric model (accurate and accessible design) of the robot’s EOAT. A major hack that common commercial solutions use is to approach the object with the EOAT orientation aligned perfectly with the gravity vector and to tweak only the yaw for different objects (see the grasp-pose sketch after the figure below).

Figure 4. Different yaws with vertical grasp pose for different selected objects
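
To make the pose estimation step (item 3 above) concrete, below is a minimal sketch that refines an object's 6-DoF pose by registering a segmented scene point cloud against a point cloud sampled from the part's CAD model, using the open-source Open3D library. The file names and the identity initial guess are assumptions for illustration; a production system would seed ICP with a coarse global-registration result.

```python
import numpy as np
import open3d as o3d

# Illustrative inputs: a point cloud sampled from the part's CAD model and a
# segmented object cluster from the depth camera (file names are hypothetical).
model = o3d.io.read_point_cloud("part_cad_model.ply")
scene_cluster = o3d.io.read_point_cloud("segmented_scene_cluster.ply")

max_corr_dist = 0.005   # maximum correspondence distance in metres
init_guess = np.eye(4)  # coarse initial transform; in practice from global registration

# ICP aligns the scene cluster onto the CAD model.
result = o3d.pipelines.registration.registration_icp(
    scene_cluster, model, max_corr_dist, init_guess,
    o3d.pipelines.registration.TransformationEstimationPointToPoint())

# The inverse of the fitted transform gives the object pose in the camera frame.
object_pose_in_camera = np.linalg.inv(result.transformation)
print(object_pose_in_camera)
```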
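
The gravity-aligned approach described in item 4 (and shown in Figure 4) reduces grasp-pose estimation to choosing a hover height and a yaw angle. The fragment below is a rough sketch of that idea using SciPy's rotation utilities; the frame conventions and the 10 cm approach offset are assumptions, not values from any particular product.

```python
import numpy as np
from scipy.spatial.transform import Rotation as R

def vertical_grasp_pose(object_position, object_yaw, approach_offset=0.10):
    """Top-down grasp pose: the tool z-axis points along gravity and only the
    yaw is matched to the object. Frames and offset are illustrative."""
    # Rotate 180 degrees about the world x-axis so the tool z-axis points down,
    # then apply the object's yaw about the world z-axis (extrinsic x-y-z order).
    orientation = R.from_euler("xyz", [np.pi, 0.0, object_yaw])
    # Pre-grasp position: hover above the object along the gravity vector.
    pre_grasp_position = np.asarray(object_position) + np.array([0.0, 0.0, approach_offset])
    return pre_grasp_position, orientation.as_quat()  # quaternion as (x, y, z, w)

# Example: object at (0.4, -0.1, 0.05) m in the robot base frame, yawed 30 degrees.
position, quaternion = vertical_grasp_pose([0.4, -0.1, 0.05], np.deg2rad(30.0))
```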

 

With these steps, the primary visual perception tasks are accomplished. These steps hold true for a static bin; with a moving bin or conveyor belt, the object pickup involves all of these steps as well as visual servoing to guide the end-effector to the desired pick-up point as the objects are also moving.

Further use of the visual feed depends on whether the transfer step involves a moving target/object drop point or a complex slot that needs visual servoing again.
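
As a reminder of what that servoing loop looks like in its classical image-based form, the sketch below implements the standard control law v = -λ L⁺ e, where e is the image-feature error and L the interaction matrix. How the features and the interaction matrix are obtained was covered in the earlier visual servoing article and is assumed to happen elsewhere.

```python
import numpy as np

def ibvs_velocity(features, desired_features, interaction_matrix, gain=0.5):
    """Classical image-based visual servoing law: v = -gain * pinv(L) @ e.
    Returns a 6-DoF camera velocity command (vx, vy, vz, wx, wy, wz)."""
    error = np.asarray(features) - np.asarray(desired_features)  # image-space error e
    L_pinv = np.linalg.pinv(interaction_matrix)                  # pseudo-inverse of L
    return -gain * L_pinv @ error
```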

 

Robot Motion for Automated Bin-Picking

After the object to be picked up is selected and the robot knows the intended grasp pose, the robot needs to navigate to the target to perform the pickup. The approach to the pick-up point may involve visual servoing or conventional motion planning, depending on whether the pick-up point is reached by relying entirely on visual features or by using the 3D pose obtained from the visual perception pipeline, respectively.

EOATs have tactile sensors or torque sensors to confirm a successful object grasp, which initiates the drop-off process. Successful initial grasps often end up in failure when the EOAT is not able to maintain the grasp due to collisions between the held object and the sides of the bin or other objects in the bin. A robust solution is capable of acknowledging this failure and running a recovery strategy to complete the pick-up.
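
A minimal sketch of such a recovery strategy is shown below, assuming a hypothetical robot driver that exposes motion commands and gripper force feedback; none of these names belong to a real API, and the force threshold is an arbitrary assumption.

```python
MAX_ATTEMPTS = 3
FORCE_THRESHOLD = 5.0  # newtons; assumed value indicating the object is still held

def pick_with_recovery(robot, grasp_pose, retreat_pose):
    """Hypothetical grasp-verification and retry loop; robot and replan_grasp
    are illustrative placeholders, not a real robot-driver API."""
    for attempt in range(MAX_ATTEMPTS):
        robot.move_to(grasp_pose)
        robot.close_gripper()
        robot.move_to(retreat_pose)                  # lift clear of the bin walls
        if robot.gripper_force() > FORCE_THRESHOLD:
            return True                              # grasp held through the retreat
        robot.open_gripper()                         # grasp lost: re-detect and retry
        grasp_pose = replan_grasp(robot)             # hypothetical re-perception step
    return False                                     # fall back to another strategy
```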

The motion-planning strategy for the drop-off has already been discussed in a previous article. For static drop points, motion planning is employed, and visual servoing is used only for unknown or variable locations. Sorting applications change the drop/target point depending on the object class. Autonomous motion allows for a more dynamic task environment and supports the feasibility of the solution in collaborative environments.
 

End-of-Arm Tool (EOAT) for Automated Bin-Picking

The design choice of the end-of-arm tool and its control is critical to implementing high-speed automated bin-picking. Conventional two-fingered grippers serve well for most rigid, non-deformable objects but do not extend well to larger or deformable ones. Also, their design can often limit manipulability, thus slowing the task. Hence, a few companies have started developing suction-based grippers for deformable objects, along with the ability to vary the gripping force.

Vacuum-suction-based EOATs simplify the collision and grasp-pose considerations substantially.

 


Figure 5. Vacuum Gripper from Robotiq, Conventional Force-Controlled Gripper from OnRobot, and Deformable gripper from Soft Robotics

 

What are the Challenges in Automated Bin-Picking?

Automated bin-picking means the robot manipulator and its sensing system have to be adaptable and intelligent enough to solve problems in all possible scenarios. The following requirements of the application make the solutions difficult to scale and limit their accessibility to non-expert users.

  1. Bin-picking needs the robot end-of-arm tool (EOAT) to reach a virtually unlimited range of poses with precision and accuracy.
  2. Grasping objects in intricate orientations while reaching into the corners of a bin and avoiding collisions is extremely challenging for robot manipulators.
  3. Determining the correct object to be picked from a cluttered environment with ambient lighting variations, object occlusions, and camera inaccuracies can be extremely tough in real time.
  4. Different objects to be picked have varying optical characteristics that challenge the extensibility and robustness of the solution.
  5. Determining the 3D pose of the target object (from 3D visual information) to obtain a grasp pose is inconsistent and needs very high accuracy.
  6. Real-time performance of the perception, visual servoing, collision detection, and motion planning modules is very difficult and needs high-performance computational resources.
  7. Camera calibration, or correct 3D vision sensing in general, is always a tricky aspect and prone to hardware inaccuracies.

 

Review of Automated Bin-Picking

Bin-picking is a monotonous task with no explicit skill requirements. Automated bin-picking also takes over the handling of sharp, hazardous, and heavy objects, making it a remarkable example of useful automation. However, the computational efficiency of current solutions is still evolving to meet the cycle-time, precision, and flexibility expectations of industry. The development of more sophisticated and capable EOATs, with larger gripping tolerances and better control mechanisms, is vital to the speed of adoption of automated bin-picking solutions.

Automated bin-picking in random environments is not yet robust and has yet to achieve the performance levels that industries ideally need for productivity and profit. However, SCARA-style robots have been deployed for simplified automated bin-picking in controlled industrial environments for almost a decade now.