For decades, machine vision technologies have helped manufacturers — from automobiles to semiconductors and electronics — automate processes, improve productivity and efficiency, and increase revenue. Machine vision technologies, as well as artificial intelligence (AI) software and robots, have become more important as companies protect themselves from disruptions caused by labor shortages and COVID-19.
In the manufacturing space, these technologies continue to evolve to meet the ever-changing needs of today’s manufacturing processes. The main challenge for engineers is to keep up with new developments and their capabilities and evaluate which ones are most suitable for a particular application.
Below, we look at new developments in deep learning, cloud computing, sensor technology and 3D imaging and how they open up unprecedented opportunities for machine vision systems and the manufacturers that deploy them.
Deep learning is not magic
One of the key machine vision trends ahead is the continued emergence of deep learning in automated inspection applications in manufacturing. Deep learning has been hyped in the industry for quite some time and presents both opportunities and challenges.
As far as opportunities go, deep learning is a paradigm shift: instead of hand-crafted rules, real-life data, that is, experience, is used to configure a machine vision application. This is an approach production engineers can relate to, as they expect such a system to be reliable and easy to implement.
As always, hyped technologies raise expectations. With high expectations comes the risk of high disappointment. While it’s true that deep learning can make the implementation of vision systems easier, it’s not the ideal solution for every application, and success doesn’t come without effort.
The first challenge for engineers and systems integrators is to figure out whether deep learning is the right technology for their application. It may not be – just as machine vision itself may not be the best solution to a given problem. The key is to understand the application and how deep learning works. And today there is still quite a bit of confusion about deep learning.
A common myth is that deep learning makes it easy to program an inspection system, even with poor quality images. This is not true. As with any vision application, the quality of the input data – the images – is critical to the quality of the output. This is especially true for data used to train an algorithm. Deep learning is not magic. The better the input data, the better the application will perform.
The workflow for implementing a deep-learning vision system differs from that of a system using exclusively rules-based algorithms. It may be easier to implement, but that doesn’t mean the implementation requires less care or a weaker understanding of the application. Success starts with the training data: not only must its quality be good, the quantity also matters, and first and foremost, sufficient data must be available.
Fortunately, most manufacturing processes do not produce many defective parts. For machine learning, however, this is a curse: engineers often don’t have enough sample images of documented defects to train a system. If the data set is too small or its quality insufficient, deep learning may not be the right technology for that application.
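As a rough first check, an engineer can tally the available defect images per class before committing to a deep-learning approach. The class names, counts, and minimum-count threshold below are hypothetical illustrations, not fixed rules:

```python
from collections import Counter

# Hypothetical inventory of labeled training images, keyed by defect class.
# In practice this would come from the labeling tool's export.
labels = ["ok"] * 5000 + ["scratch"] * 220 + ["dent"] * 35 + ["discoloration"] * 8

# Illustrative threshold: classes with fewer examples than this are
# probably too sparse to train on reliably.
MIN_EXAMPLES_PER_CLASS = 50

counts = Counter(labels)
sparse = {cls: n for cls, n in counts.items() if n < MIN_EXAMPLES_PER_CLASS}

print(counts)
print("Classes that may need more data or a rules-based fallback:", sparse)
```

A strongly imbalanced tally like this one, with thousands of good parts but only a handful of images for some defect types, is exactly the situation in which deep learning may struggle.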
When sufficient training data is available, proper labeling is essential for implementing a deep learning inspection system. Are resources available for that task? Is there a clear and common understanding of what is and what is not a defect? These are aspects to take into account. Tools that promote features such as collaborative labeling with error analysis and validation can help.
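Validation of labels can be as simple as comparing two annotators' work on the same images and reviewing where they disagree. The image IDs and labels below are made-up examples of such a check:

```python
# Sketch of a labeling validation step: compare two annotators' labels
# for the same images and surface disagreements for review.
annotator_a = {"img_001": "scratch", "img_002": "ok", "img_003": "dent", "img_004": "ok"}
annotator_b = {"img_001": "scratch", "img_002": "scratch", "img_003": "dent", "img_004": "ok"}

shared = annotator_a.keys() & annotator_b.keys()
disagreements = sorted(i for i in shared if annotator_a[i] != annotator_b[i])
agreement = 1 - len(disagreements) / len(shared)

print(f"Agreement: {agreement:.0%}, images to review: {disagreements}")
```

Low agreement is a sign that the team lacks a common definition of what constitutes a defect, and that the definition, not the model, needs work first.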
Another aspect is the output data. Deep learning helps identify defects reliably and provides a lot of defect data, but it does not provide root cause analysis of how defects are produced and how to eliminate them. Analyzing output data to solve problems and continuously improve the process is the next challenge for production engineers.
Cloud or edge computing?
The cloud computing trend, which has already transformed many other industries, is sure to reach the machine vision industry as well. However, its use in industrial vision applications is likely to be limited, mainly for two reasons.
First, industrial inspection typically requires fast, real-time, low-latency processing, which is difficult to achieve with cloud computing at this stage. Second, linking industrial inspection systems to a company’s IT infrastructure is a complex undertaking that raises IT security concerns. It also carries the risk of having to shut down a production line for hours in the event of a failure or for maintenance of a remote server, which can cost millions of dollars in a very short time.
The most likely scenario for cloud computing in machine vision will be edge processing. This means that the image processing is performed locally in real time, with the results uploaded to the cloud for further analysis. In a deep learning application, training and storage of data can take place in the cloud, while the actual execution of the inference is done at the edge.
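A minimal sketch of that split is shown below: inference runs locally on each frame, and only the compact results are batched for the cloud. `run_inference` and `upload_batch` are hypothetical placeholders for a real inference engine and cloud client, and the brightness threshold is an arbitrary stand-in for a trained model:

```python
import json

def run_inference(frame):
    # Placeholder for a locally executed model, e.g. a network trained
    # in the cloud and deployed to the edge device.
    return {"frame_id": frame["id"], "defect": frame["brightness"] < 0.5}

def upload_batch(batch):
    # Placeholder for the cloud upload; here we just serialize the results.
    return json.dumps(batch)

# Frames are processed one by one in real time at the edge...
frames = [{"id": i, "brightness": b} for i, b in enumerate([0.9, 0.4, 0.8])]
results = [run_inference(f) for f in frames]

# ...while only the lightweight results, not the raw images, go to the cloud.
payload = upload_batch(results)
print(payload)
```

Keeping the raw images local sidesteps both the latency and the bandwidth problems, while the aggregated results in the cloud remain available for trend analysis and retraining.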
Sensor technology innovations: are more megapixels always better?
Another important trend in the vision industry is innovation in sensor technology. Sensor resolution is constantly increasing, with ever-smaller pixels. Shortwave infrared (SWIR) sensitivity is rapidly gaining ground, powered by Sony’s latest SenSWIR sensors.
Event-based sensors have also opened up new possibilities for vision applications. While all of these new technologies are exciting for engineers in manufacturing facilities, it would be a trap to think that one major development will cover the needs of all applications.
For example, a high-resolution sensor isn’t always the best solution when compared to an array of multiple lower-resolution cameras or a single camera moving across a field of view. Technical challenges in optics and lighting, or cost considerations, may justify a lower resolution setup. Again, a thorough analysis of the specific requirements of an application is needed to select the right technology.
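One way to frame that analysis is to work backwards from the smallest defect that must be resolved. The numbers below (a 600 mm field of view, 0.2 mm defects, three pixels across a defect, a 4096-pixel candidate camera) are illustrative assumptions only:

```python
import math

fov_mm = 600.0          # width of the field of view to inspect
defect_mm = 0.2         # smallest defect that must be detected
pixels_per_defect = 3   # rule of thumb: several pixels across the defect

# Pixels required along one axis to resolve the smallest defect.
required_px = math.ceil(fov_mm / defect_mm * pixels_per_defect)
print(f"Required horizontal resolution: {required_px} px")

# Alternative: split the field of view across several lower-resolution cameras.
camera_px = 4096        # horizontal resolution of one candidate camera
cameras_needed = math.ceil(required_px / camera_px)
print(f"Cameras of {camera_px} px needed (ignoring overlap): {cameras_needed}")
```

Whether one very high-resolution sensor or several modest ones is cheaper then depends on optics, lighting uniformity across the larger field, and integration effort, not on pixel count alone.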
3D imaging continues to mature
3D imaging has been a trend for a few years now and continues to mature. Thanks to advances in 3D imaging, objects no longer need to be fixtured and positioned in a preset way for inspection. This brings the market closer to the holy grail of machine vision: bin picking, the ability to grab randomly placed objects in a bulk container.
Today, the efficiency of bin picking still depends heavily on the geometry of the object: how easily can it be grabbed? How easily can it be separated from the rest? 3D imaging has made great strides: it is no longer necessary to infer an object’s geometry from pixel contrast in a 2D image. Thanks to this technology, we can now take actual geometric measurements of an object.
Several technologies allow the capture of 3D information: laser triangulation, structured light, stereoscopy and time of flight. Again, technical expertise and a deep understanding of the application are required to select the right technology for a particular use case. Engineers, for example, need to know what level of precision is required. Laser triangulation is accurate, but requires movement of the object or the 3D scanner. Is that compatible with the overall application configuration? These are the types of questions to answer when choosing a 3D imaging technology for a production scenario.
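To illustrate the kind of geometry involved, a simplified laser-triangulation model can be sketched as follows. It assumes a camera looking straight down and a laser projected at a known angle from vertical, so a height change of h shifts the laser line sideways by h·tan(angle) in the object plane; the function and its inputs are illustrative assumptions, with the shift already converted to millimetres:

```python
import math

def height_from_shift(shift_mm, laser_angle_deg):
    # Simplified triangulation geometry: camera looks straight down,
    # laser projected at laser_angle_deg from vertical. A surface raised
    # by h shifts the laser line sideways by h * tan(angle), hence:
    return shift_mm / math.tan(math.radians(laser_angle_deg))

# Illustrative numbers: a 0.5 mm lateral shift with a 30-degree laser angle.
h = height_from_shift(0.5, 30.0)
print(f"Estimated height: {h:.3f} mm")
```

The same relation shows the design trade-off: a steeper laser angle gives a larger shift per unit of height, and therefore better height resolution, at the cost of more shadowing and occlusion on complex parts.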
Technical expertise needed
With so many new technologies and possibilities emerging in the machine vision market, expectations are high. However, these technologies can only deliver on their promises if they are implemented correctly in the right application scenarios. With the cost and labor associated with defective products, including discarded and remanufactured parts, damaged reputations and recalls, manufacturers must be diligent in identifying the right applications and technologies for disparate automated inspection tasks.
David L. Dechow is an accomplished engineer, programmer and entrepreneur specializing in the integration of machine vision, robotics and other automation technologies, with an extensive career in the industry. He is Vice President of Outreach and Vision Technology at Landing AI.