Robot vision systems advance automotive application

The growing use of robots in manufacturing is understandable. They offer incomparable combinations of speed, strength and agility. However, they have no intellect and operate only in the tightly choreographed ways programmed for them. Sick robot vision systems are changing that by giving robots the gift of sight. By giving robots vision, we are increasing their autonomy and expanding their potential uses.

For assembly work, robots need to reliably identify parts presented to them in an unstructured way. The traditional approach has been to develop a feeder system that presents components in an expected orientation. The disadvantage is that this adds cost and complexity to the process. Additionally, a change in the part design or application renders the feeder useless. A vision system that allows the robot to see the part it needs has several advantages. It can handle greater levels of application complexity and simplify the presentation of materials. The camera can also offer simultaneous real-time inspection of every component.

All applications have differing requirements, and there are many possible ways to solve a specific vision task. In some cases the choice of either 2D or 3D vision is obvious, but in other cases both technologies could work, each providing certain benefits. It is important to understand these benefits and how they apply to a given application in order to provide a reliable robot vision solution.

Combining 1D, 2D and 3D vision

The four primary camera tasks are positioning, inspection, measurement and reading. 2D vision is particularly useful for applications with high contrast, or when the texture or colour of the object is the key to the solution. 2D solves all four vision tasks and is the dominant technology for machine vision solutions.

3D is suitable for analysing volume, shape or 3D position of objects, but also for detection of parts and defects that are low contrast, but have a detectable height difference. 3D is mainly used for measuring, inspection and positioning, but there are also cases for using 3D to read imprinted code or text when contrast information is missing.

Capturing the third dimension can be done in many ways. Different machine vision technologies are available, each with its pros and cons. Sick offers 3D imaging in both scanning technologies and snapshot technologies.

Advanced machine vision, smart cameras, laser sensing and LED lighting show potential for robots with stereo vision and 3D sensing needs. At the same time, industry is demanding that robot vision technology become quicker to install and commission on the factory floor. It expects simpler programming and integration, where possible without using a separate PC.

Robot vision

For a human, distinguishing between a pile of different objects in a container, then picking the top one, is easy. For a robot, this remained for some time a challenge for vision technology to accomplish reliably. It is important to accurately calculate the depth and 3D profile of the object so it can be safely gripped. The limitations of 2D vision made it difficult for a robot system to avoid picking occluded objects. Distinguishing between similar colours or backgrounds presents problems, as do components with curved or complex profiles.

Integrating 3D vision reliably solves these previously problematic applications. Vision solutions in robotics automation often combine 1D, 2D and 3D image capture techniques, with high-speed processing and software algorithms bringing the captured data together to solve automation problems.

Automotive vision applications

A typical automotive application is the automated handling of raw materials, such as picking complex blanks, castings or forgings from random configurations in bins or stillages. The SICK PLB500 robot guidance system solves the problem by recognising the correct part profile, calculating which part is uppermost and most accessible for selection, then finding the optimum gripping point and placing the part exactly where required without collisions. It then chooses the next part at another angle and repeats the task at high speed. The robot can then load the parts automatically to, for example, turning machines, fixtures or feeder systems.
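The selection step described above (filter out parts buried under others, then grip the uppermost one) can be sketched in a few lines. This is a minimal, hypothetical illustration of that decision loop, not the PLB500's actual software or API; all names, fields and the occlusion threshold are assumptions.

```python
from dataclasses import dataclass

@dataclass
class PartCandidate:
    """A part detected in the bin (all fields hypothetical)."""
    pose_z: float       # height of the part in the bin (mm)
    occlusion: float    # fraction of the part hidden by others, 0..1
    grip_point: tuple   # (x, y, z) candidate gripping coordinates

def select_next_part(candidates, max_occlusion=0.1):
    """Pick the uppermost part that is accessible enough to grip safely.

    Mirrors the logic described in the text: discard parts that are
    too occluded, then take the one lying highest in the bin.
    """
    accessible = [c for c in candidates if c.occlusion <= max_occlusion]
    if not accessible:
        return None  # bin empty, or it needs re-imaging/agitation
    return max(accessible, key=lambda c: c.pose_z)

parts = [
    PartCandidate(pose_z=120.0, occlusion=0.05, grip_point=(10, 20, 120)),
    PartCandidate(pose_z=150.0, occlusion=0.40, grip_point=(30, 15, 150)),  # partly buried
    PartCandidate(pose_z=95.0,  occlusion=0.00, grip_point=(5, 40, 95)),
]
best = select_next_part(parts)  # highest accessible part
```

In a real system the occlusion estimate itself comes from the 3D point cloud, and the grip point is additionally checked against gripper geometry to avoid collisions.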

Another challenge for vision-guided robot picking in automated assembly is handling large car body panels stored in racks. The parts may sit in different positions and orientations, especially in bent racks or when parts are not precisely located. The SICK PLR is a self-contained robot guidance system that combines state-of-the-art 2D and 3D machine vision techniques. The system works by taking a first picture of the part, looking for contrasting features such as drill holes. It then projects a laser cross onto a flat area of the part and takes a second image. The resulting data enables calculation of the correct distance and any pitch, roll or yaw of the part. The system communicates this information so the robot can safely grip the part.
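The second stage of the process described above can be illustrated with a simplified geometric sketch: 3D points sampled where the laser cross lands on a flat area of the panel define a plane, from which distance, pitch and roll follow. This is a hedged stand-in for the principle, not the PLR's actual algorithm; the function, its angle conventions and the camera-axis assumptions are all hypothetical. (Yaw needs the 2D features from the first image and is omitted.)

```python
import math

def plane_pose(p1, p2, p3):
    """Estimate distance and tilt of a flat surface from three 3D points.

    Assumes a camera looking along +z; angles are returned in degrees.
    """
    # Two in-plane vectors and their cross product give the surface normal.
    u = [b - a for a, b in zip(p1, p2)]
    v = [b - a for a, b in zip(p1, p3)]
    n = [u[1]*v[2] - u[2]*v[1],
         u[2]*v[0] - u[0]*v[2],
         u[0]*v[1] - u[1]*v[0]]
    norm = math.sqrt(sum(c*c for c in n))
    n = [c / norm for c in n]
    if n[2] < 0:                     # orient the normal toward the camera
        n = [-c for c in n]
    distance = sum(a*b for a, b in zip(n, p1))  # plane offset along the normal
    pitch = math.degrees(math.asin(n[0]))       # tilt about the camera y-axis
    roll = math.degrees(math.asin(n[1]))        # tilt about the camera x-axis
    return distance, pitch, roll

# A panel lying flat 500 mm from the camera: zero pitch and roll.
d, pitch, roll = plane_pose((0, 0, 500), (100, 0, 500), (0, 100, 500))
```

A production system would fit the plane to many points along both arms of the laser cross (e.g. by least squares) rather than just three, which makes the estimate robust to measurement noise.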

Future developments of vision-based applications are likely to combine 1D, 2D and 3D imaging to facilitate robotic tasks. The development of powerful processing tools and communications platforms is also integral to integrating image-derived data in increasingly demanding applications. In addition, algorithmic advances enable new applications to be developed by retrofitting into existing pick-and-place solutions.