
Eyes open to robot efficiencies

Author: Martin Short, SICK UK

28 June 2022

Robot picking of automotive parts from a rack using SICK PLR

“Can we get a cobot to do this…?” Such a suggestion might have been dismissed as a pipe dream even just a few years ago. Collaborative robots have changed all that. Their steady march of progress has broken down barriers to accessibility and emboldened more engineering teams to seek out new production efficiencies.

Once project planning gets underway, it may well turn out that the answer is not a cobot after all, but a machine specified from a broadening spectrum of robot types and payloads. Yet cobots have opened up the conversation – and not as a reason to reduce headcounts. Robots are in the vanguard of a growing automation trend to redeploy staff more productively and to improve overall equipment effectiveness (OEE).

Robots must have eyes to ‘see’. So, as the use of robots expands, machine vision systems are developing to complement them. Just like the robots themselves, vision hardware and software are becoming more accessible and are being applied to new applications. In project after project, vision is saving development time by making systems easier to integrate, set up and use.

Simple vision sensors can be configured in minutes to perform a pre-determined repeatable task, while programmable cameras can be uploaded with application-specific vision-guidance software that runs directly on the device itself. For more bespoke applications, high-precision streaming devices now provide super-high resolutions with rapid image processing.

Three robot challenges

When embarking on a new robot project, engineers face three main challenges: programming the path, designing the gripper, and localisation. Robots need vision for localisation. Vision sensors detect the correct part, then tell the robot its position and orientation, even if the part is moving. The vision system captures an accurate image of the scene, processes the image data and communicates the coordinates rapidly to the robot controller. Vision sensors also offer the advantage of doubling up to undertake tasks such as quality inspection and classification simultaneously.
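
To make that hand-off concrete, the Python sketch below shows the single coordinate transform that turns a detection in the camera’s frame into a pick position in the robot’s frame. The calibration matrix and part position are invented for illustration; in a real cell they come from a hand-eye calibration routine and from the vision system respectively.

    import numpy as np

    # Hypothetical hand-eye calibration result mapping the camera frame to
    # the robot base frame; in practice this 4x4 homogeneous matrix comes
    # from a calibration routine, not from hand-typed values.
    T_robot_camera = np.array([
        [0.0, -1.0, 0.0, 0.450],   # 3x3 rotation plus translation (metres)
        [1.0,  0.0, 0.0, 0.120],
        [0.0,  0.0, 1.0, 0.800],
        [0.0,  0.0, 0.0, 1.000],
    ])

    def camera_to_robot(p_camera):
        """Convert a part position (x, y, z in metres) detected in camera
        coordinates into the robot base frame for the controller."""
        p = np.append(np.asarray(p_camera, dtype=float), 1.0)  # homogeneous
        return (T_robot_camera @ p)[:3]

    # Example: a part the vision system located at (0.10, -0.05, 0.60) m.
    print(camera_to_robot([0.10, -0.05, 0.60]))

Orientation is handled the same way, with the part’s detected rotation composed with the calibration rotation before being sent to the controller.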

Is 2D or 3D needed? A 2D vision system will be the correct solution when the image contrast is sufficient to extract data from a scene. A 2D vision sensor can survey a jumble of multiple components of different shapes and sizes, then locate and sort them, for example, to support the assembly of kitted parts. 
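
As an illustrative sketch of that kind of contrast-based localisation, the snippet below uses the open-source OpenCV library (not any specific SICK tool, and with an invented image file name) to separate parts from the background and report each one’s position and orientation:

    import cv2

    # Load a greyscale image of the scene; good, even contrast is assumed.
    image = cv2.imread("parts_scene.png", cv2.IMREAD_GRAYSCALE)

    # Separate parts from the background by intensity (Otsu's threshold).
    _, mask = cv2.threshold(image, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    # Treat each external outline as one candidate part.
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)

    for contour in contours:
        if cv2.contourArea(contour) < 500:   # ignore specks of noise
            continue
        (cx, cy), _, angle = cv2.minAreaRect(contour)
        print(f"part at ({cx:.1f}, {cy:.1f}) px, rotated {angle:.1f} deg")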

Rather than needing a human operator to keep a CNC machine continuously supplied, a cobot can do the job with the help of a SICK Inspector PIM60 2D camera. It can identify up to 32 different parts for the cobot to pick with no need for manual pre-sorting. 

Because robotic applications frequently involve picking and placing, they often need three-dimensional measurements of the height, or depth, of an object. That data must be highly precise to avoid collisions that could damage the robot gripper or the product itself.

Bin picking

A challenging 3D application that is increasingly being automated is the need to pick parts out of a bin or stillage. To do so, a robot must be able to identify the uppermost part to pick from the pile, then orientate and remove it without damaging the product, or the robot gripper itself. 
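
Stripped to its essentials, that selection step amounts to choosing the grasp candidate that sits highest in the pile. The sketch below uses invented candidate data; a production system would also check gripper reachability and collision clearance before committing to a pick:

    # Hypothetical grasp candidates from a 3D match: (x, y, z) in metres,
    # where z is the height above the bin floor.
    candidates = [
        (0.12, 0.30, 0.05),
        (0.40, 0.22, 0.19),   # the uppermost part in the pile
        (0.28, 0.11, 0.08),
    ]

    # Pick the uppermost candidate first, so the gripper does not have to
    # reach down past parts stacked above the target.
    x, y, z = max(candidates, key=lambda p: p[2])
    print(f"pick at ({x:.2f}, {y:.2f}, {z:.2f}) m")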

Robots must have eyes to see: SICK Inspector PIM60 URCap

SICK’s PLB 3D vision-guidance system was initially developed for the automotive industry to pick randomly-orientated blanks, castings or forged metal parts, but is now used with robots of varying types, and is equally at home, for example, picking small components in electronics or semiconductor production.

SICK PLB bin picking

New 3D camera options such as stereovision and snapshot time-of-flight have widened the possibilities for processing depth and height data quickly and easily. But it is the corresponding software development that has lowered the barriers to adoption, by dramatically reducing development time and eliminating the need for specialist programming skills.

SICK’s smart cameras and programmable devices are a versatile platform for a growing portfolio of easily configured software applications. Ready-made applications are as easy to upload to a programmable camera as an app is to a smartphone. Some even come pre-installed as an “out of the box”, all-in-one kit of hardware and software. The SICK Belt Pick Toolkit App, for example, is a 3D-guided robotic belt-picking solution which provides precise height-based image processing directly on the SICK Trispector P1000 programmable 3D vision camera.

Easy integration

SICK’s focus has been to make vision guidance systems easy to integrate, so that they communicate directly with the robot controller and are quick to teach and ready to use in minimal time. In a typical application, a single camera running a SensorApp can ‘talk’ directly to the robot, with no external PC, PLC or other control system in between. For example, it can be trained quickly to find the shape of a part or product, then tell the robot how to pick it up and where to place it accurately.
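
The shape of that direct hand-off can be sketched with a plain TCP socket. The endpoint address and comma-separated message format below are invented for illustration and are not SICK’s actual protocol:

    import socket

    # Hypothetical robot controller endpoint on the cell network.
    ROBOT_ADDRESS = ("192.168.0.10", 30002)

    def send_pick_pose(x, y, z, angle):
        """Send one pick pose straight from the vision device to the robot
        controller over TCP, with no PC or PLC in between."""
        message = f"PICK,{x:.3f},{y:.3f},{z:.3f},{angle:.1f}\n"
        with socket.create_connection(ROBOT_ADDRESS, timeout=2.0) as conn:
            conn.sendall(message.encode("ascii"))

    send_pick_pose(0.500, 0.220, 1.400, 37.5)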

The same integration principles apply to automated guided vehicles (AGVs) and autonomous mobile robots (AMRs). New ‘off-the-shelf’ software applications for SICK’s Visionary-T AP 3D time-of-flight snapshot camera allow a 3D profile to be captured so an AMR can position its forks in pallet pockets, or locate itself correctly under a dolly. The measured values required to pick up the pallet or dolly are pre-processed and evaluated on the sensor, then transmitted directly to the vehicle control.
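
One way to picture that on-sensor evaluation is sketched below: given a horizontal depth profile across the pallet face (the values are made up), the two fork pockets appear as regions of greater depth, and the fork targets are simply their centres.

    import numpy as np

    # Hypothetical depth readings (metres) along a horizontal line across a
    # pallet face: the two fork pockets show up as regions of greater depth.
    profile = np.array([0.50]*10 + [0.95]*8 + [0.50]*8 + [0.95]*8 + [0.50]*10)

    # Anything well behind the pallet face is treated as a pocket opening.
    pocket = profile > 0.75
    edges = np.flatnonzero(np.diff(pocket.astype(int)))  # pocket boundaries
    starts, ends = edges[::2] + 1, edges[1::2] + 1

    # Aim each fork at the centre column of its pocket.
    for s, e in zip(starts, ends):
        print(f"pocket centred at column {(s + e - 1) / 2:.1f}")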

Real-time monitoring

Digitalisation has brought with it the opportunity to monitor and track the data that sensors collect. Using cloud-based systems such as SICK’s Monitoring Box, for example, the collected data can be visualised in an easy-to-use graphical display. How soon will the sensor lens need cleaning? How many products passed or failed the quality inspection? How many pallets were stacked on shift A, compared to shift B? How many hours was the system operating?
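
Behind such a dashboard sits a simple aggregation over logged events. The sketch below tallies pass/fail counts per shift from invented inspection data, purely to illustrate the idea:

    from collections import Counter

    # Hypothetical inspection events as a sensor might log them:
    # (shift, result) pairs streamed to the monitoring system.
    events = [
        ("A", "pass"), ("A", "pass"), ("A", "fail"),
        ("B", "pass"), ("B", "fail"), ("B", "fail"),
    ]

    # Aggregate per shift so the display can compare shift A against shift B.
    totals = Counter(events)
    for (shift, result), count in sorted(totals.items()):
        print(f"shift {shift}: {count} {result}")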

Advances in robotics, vision systems and digital “Internet of Things” monitoring are combining to open the eyes of production teams to efficiency opportunities they did not even expect. Engineers can now feel confident that starting out on a robotics journey is a cast-iron prospect for a good return on investment.

