Vision systems and robotics come together in all types of industrial applications, which has driven the popularity of systems known as Vision Guided Robotics (VGR). INFAIMON, a Spanish machine vision specialist, has created a Vision Guided Robotics solution that identifies, selects, picks, and transfers defined objects within a bin, using Allied Vision cameras.
Vision Guided Robotic systems give robots a greater degree of freedom. Previously limited to predefined environments, robots can now be used in far more versatile settings. When a robot works without an associated vision system, the work environment must be fixed and the robot must always move to a predetermined position. This requires highly precise positioning systems for the objects to be handled, so that the robot arrives exactly where it needs to go. Vision guided robotic systems (VGR) are far more flexible: the vision system determines the position of any object in space with high precision, so each point in three-dimensional space can be defined and the robot directed to the exact point it must reach.
In robotics and vision systems, picking refers to the combined process of identifying an object with a vision system, determining its position in space, and then picking it up and transferring it to a destination point with a robotic system. The best-known picking application is probably the so-called “pick and place”, which normally involves determining the location of an object on a plane and then picking it up. These applications often aim to locate an object on a conveyor belt.
Infaimon, an Allied Vision Distribution Partner and machine vision specialist, has developed a solution that does more than identify objects placed on a flat conveyor belt. Its bin picking software, “InPicker”, also selects and extracts parts stacked randomly in a bin. The solution uses Allied Vision cameras for recognition and localization and a robotic system for extraction and subsequent relocation. Although the base technology behind bin picking is similar to that used in pick and place, the difficulties involved are much greater. While the task may appear trivial at first sight, it is extremely complex for a robotized computer system to distinguish and separate one part from all the others in the bin.
The bin picking system is based on stereo vision, with two high-resolution cameras mounted in the head piece of the robot. Two synchronized images enable the creation of a very precise three-dimensional map of all the objects, so the best candidate for picking can be localized with great accuracy.
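The depth information behind such a 3D map comes from the classic stereo triangulation relation: a point seen by both cameras appears shifted (the disparity), and depth is inversely proportional to that shift. The sketch below illustrates this principle only; the focal length, baseline, and disparity values are hypothetical, and InPicker's actual processing is not public.

```python
# Illustrative stereo depth-from-disparity sketch (not InPicker's code).
# Assumed values: focal length in pixels, camera baseline in meters.
import numpy as np

def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Pinhole stereo relation: Z = f * B / d (depth in meters)."""
    disparity_px = np.asarray(disparity_px, dtype=float)
    return focal_px * baseline_m / disparity_px

# Hypothetical example: 1200 px focal length, 10 cm baseline
depths = depth_from_disparity([40.0, 60.0, 120.0], focal_px=1200.0, baseline_m=0.10)
print(depths)  # larger disparity -> closer object: [3. 2. 1.]
```

The inverse relationship is why a wider camera baseline improves depth resolution for distant objects, at the cost of a smaller overlapping field of view.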
Stereoscopy: Two are better than one
Two of Allied Vision’s GigE Vision camera models were selected as the two “artificial eyes” delivering the stereo image, just as human eyes would. Depending on the requirements of the application, either the small, low-cost Mako G-125 or the enhanced Prosilica GT1290 is used. For a robotic system to function accurately, it is crucial to use as few cables as possible so that the robot arm’s movement is not restricted. The selected GigE Vision cameras support Power over Ethernet (PoE), carrying both power and data over a single cable.
“In order to identify objects and their position in the bin as fast as possible, the bin picking systems need cameras with small dimensions, which are easy to synchronize and deliver images at a satisfactory frame rate”, describes Salvador Giró, CEO of Infaimon S.L. in Barcelona.
The Mako G-125 is an ultra-compact (29 × 29 mm) industrial GigE camera with Sony’s ICX445 CCD sensor and various mounting options. Mako cameras are so small and light that they can easily be integrated into the robot’s head, which further simplifies the system. They deliver images at 1292 × 964 pixel resolution and 30 frames per second, which is high enough for this kind of application. For true stereo vision, the pair of cameras must capture and transmit synchronized images; with its various input and output options, the camera can easily be connected to an external computer sending a precise trigger signal.
For advanced applications requiring perfectly synchronized images at faster readout times, the bin picking system can also be equipped with Allied Vision’s high-performance, 1.2 Megapixel GigE Vision Prosilica GT1290 camera. This camera supports the Precision Time Protocol (PTP), which synchronizes the cameras to within 2 microseconds across an Ethernet network.
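With PTP-disciplined clocks, each frame carries a timestamp from a shared timebase, so an application can verify that a stereo pair was actually exposed together before using it. The snippet below is a minimal sketch of that check, assuming nanosecond timestamps; the function name and values are hypothetical, not part of any camera API.

```python
# Sketch of validating a stereo pair via PTP timestamps (illustrative;
# the timestamps below are hypothetical, not real camera output).
PTP_TOLERANCE_NS = 2_000  # 2 microseconds, as specified for the GT1290

def frames_synchronized(ts_left_ns, ts_right_ns, tolerance_ns=PTP_TOLERANCE_NS):
    """A stereo pair is usable only if both exposures started within tolerance."""
    return abs(ts_left_ns - ts_right_ns) <= tolerance_ns

print(frames_synchronized(1_000_000_500, 1_000_001_900))  # True  (1.4 µs apart)
print(frames_synchronized(1_000_000_500, 1_000_004_000))  # False (3.5 µs apart)
```

Rejecting mismatched pairs this way prevents a moving scene from producing inconsistent disparities and therefore wrong depth estimates.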
Moreover, the Prosilica GT1290 incorporates a high-quality Sony ICX445 EXview HAD CCD (type 1/3) sensor providing excellent monochrome and color image quality. The Prosilica GT1290 is a rugged camera designed to operate in extreme environments and fluctuating lighting conditions.
Picking step by step
The first step in the process of a bin picking application is the recognition of the object or part to be picked up. This requires accurate three-dimensional information about the object.
Since a randomly stacked object could be lying in any position and orientation in space, the bin picking software must be able to recognize the object in three dimensions. All the morphological parameters of the object must therefore be entered into the system.
Once the morphology of the object and the environment in which it is located are known, the next step is to recognize the objects.
The two Mako G or Prosilica GT cameras placed in the head piece next to the robot arm deliver a 3D image of the identified objects. However, this is not their only function: while the robot moves along a pre-defined trajectory in 3D space, hundreds of images are taken, building a three-dimensional picture of the environment. Views from different positions make hidden objects or zones visible, and the system generates a detailed map of the whole environment.
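Fusing views from several arm positions into one map amounts to transforming each camera-frame measurement into a common world frame using the known robot pose at capture time. The sketch below shows the basic rigid-transform bookkeeping with hypothetical poses and points; it stands in for whatever fusion InPicker actually performs.

```python
# Illustrative multi-view point fusion (hypothetical poses and points).
import numpy as np

def to_world(points_cam, rotation, translation):
    """Rigid transform of Nx3 camera-frame points: p_world = R @ p_cam + t."""
    return points_cam @ np.asarray(rotation).T + np.asarray(translation)

def fuse_views(views):
    """views: list of (points_cam, R, t); returns one Nx3 world-frame cloud."""
    return np.vstack([to_world(p, R, t) for p, R, t in views])

# Two hypothetical views of the same bin from different arm positions:
identity = np.eye(3)
cloud = fuse_views([
    (np.array([[0.1, 0.0, 0.5]]), identity, [0.0, 0.0, 0.0]),
    (np.array([[0.0, 0.1, 0.4]]), identity, [0.2, 0.0, 0.1]),
])
print(cloud.shape)  # (2, 3)
```

Points seen from one pose but occluded from another simply add coverage in the merged cloud, which is how the moving stereo head reveals hidden zones.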
The next step is to determine the best candidate among all recognized objects: the object located in the best position for the robot to grip. The object must be in a reachable position, free of collisions, not trapped by other parts, and the best of the pre-selected candidates.
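The selection logic described above can be sketched as a filter over hard feasibility constraints followed by a ranking. The criteria names and the graspability score below are assumptions for illustration; InPicker's actual scoring is not public.

```python
# Sketch of pick-candidate selection (criteria names are assumptions).
def best_candidate(candidates):
    """Keep only reachable, collision-free, untrapped parts; return the
    one with the highest graspability score, or None if none qualify."""
    feasible = [c for c in candidates
                if c["reachable"] and not c["collision"] and not c["trapped"]]
    return max(feasible, key=lambda c: c["graspability"], default=None)

parts = [
    {"id": 1, "reachable": True,  "collision": False, "trapped": True,  "graspability": 0.9},
    {"id": 2, "reachable": True,  "collision": False, "trapped": False, "graspability": 0.7},
    {"id": 3, "reachable": False, "collision": False, "trapped": False, "graspability": 0.8},
]
print(best_candidate(parts)["id"])  # 2: the others are trapped or unreachable
```

Separating hard constraints (must hold) from the soft score (maximized among survivors) mirrors the distinction the text draws between a reachable, untrapped object and the "best" of the pre-selected candidates.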
Once the object has been selected, the robot must reach it as fast as possible without colliding with the working environment or the other parts, which requires calculating the ideal pathway. Finally, after picking up the object, the robot places it in a previously defined position so that the production process can continue. On some occasions, this trajectory is also used to perform a quality inspection of the part with associated vision systems.
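At the core of any such collision-aware path calculation is a geometric clearance test between a candidate motion and the obstacles around it. The sketch below checks a straight-line gripper approach against sphere-approximated obstacles; real planners use full kinematic models, and all geometry here is hypothetical.

```python
# Illustrative clearance test for a straight-line approach path
# (obstacles approximated as spheres; hypothetical geometry).
import numpy as np

def segment_clears_sphere(a, b, center, radius):
    """True if the straight segment a->b stays outside the sphere."""
    a, b, center = (np.asarray(v, dtype=float) for v in (a, b, center))
    ab = b - a
    # Parameter of the point on the segment closest to the sphere center
    t = np.clip(np.dot(center - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    closest = a + t * ab
    return float(np.linalg.norm(center - closest)) > radius

# Gripper descends from 1 m above the bin straight down to the part:
path_start, path_end = [0.0, 0.0, 1.0], [0.0, 0.0, 0.0]
print(segment_clears_sphere(path_start, path_end, [0.0, 0.5, 0.5], 0.2))  # True
print(segment_clears_sphere(path_start, path_end, [0.0, 0.0, 0.5], 0.1))  # False
```

A planner would run such tests against every nearby part and the bin walls, and fall back to an alternative approach direction whenever the direct path fails.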
Having two cameras working simultaneously has various advantages. With stereo vision, production processes become faster and more flexible: objects don’t have to be stacked accurately and consistently but can be dropped randomly into a bin, so no time is lost sorting and aligning them before they are integrated into the products. Even highly complex structured parts can easily be identified; with other technologies, it is difficult to recognize these types of parts in a pile. The stereo head makes safe and fast identification possible.
Other items particularly suited to this stereo technology are boxes or packages with some sort of texture, such as printing marks or other surface features. Here, a stereo vision system is very useful in palletizing and de-palletizing processes and, of course, in object or part handling systems. With an average processing time of less than 10 seconds per object, the bin picking system can empty a container quickly without mistakes or interruptions. Because the system can run continuously throughout all working shifts, it is very attractive for continuous-production companies.
“Finally, the cost for the hardware being used by this technology has become very competitive in comparison to other 3D systems based on laser triangulation, pattern projection and time of flight”, adds Salvador Giró.