Introduction
Machine vision is a branch of artificial intelligence (AI) and computer vision concerned with developing systems and algorithms that enable machines, typically computers, to interpret and understand visual information from their surroundings. In essence, it is the technology that allows machines to "see" and process images or video much as humans do, and then make decisions or take actions based on that visual input.
Key components of a machine vision system include (a minimal end-to-end code sketch follows this list):
1. Image Acquisition: Using cameras, sensors, or other devices to capture visual data from the environment.
2. Pre-processing: Processing raw image data to enhance quality, remove noise, correct distortions, or adjust for lighting conditions.
3. Feature Extraction: Identifying and extracting relevant features from the images, such as edges, shapes, textures, or colours.
4. Pattern Recognition: Analysing and interpreting the extracted features to recognize objects, patterns, or anomalies within the images.
5. Decision Making: Using the analysed visual data to make decisions or trigger actions, such as sorting products, guiding robotic systems, or detecting defects.
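To make these stages concrete, here is a minimal sketch of the pipeline in Python with OpenCV. It assumes a camera is available at index 0, and the edge-density score and pass/fail threshold are illustrative stand-ins for real pattern recognition and decision logic, not a production rule.

```python
# Minimal machine vision pipeline sketch using OpenCV.
# The edge-density rule is an assumed placeholder for real
# pattern recognition and decision making.
import cv2

def acquire_image(camera_index=0):
    """Image acquisition: grab one frame from a camera."""
    cap = cv2.VideoCapture(camera_index)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("Could not read a frame from the camera")
    return frame

def preprocess(frame):
    """Pre-processing: grayscale conversion and noise reduction."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return cv2.GaussianBlur(gray, (5, 5), 0)

def extract_features(gray):
    """Feature extraction: detect edges in the image."""
    return cv2.Canny(gray, 50, 150)

def recognize(edges):
    """Pattern recognition: summarise the edges as a single score."""
    return cv2.countNonZero(edges) / edges.size

def decide(edge_density, threshold=0.05):
    """Decision making: a placeholder pass/fail rule."""
    return "PASS" if edge_density < threshold else "INSPECT"

if __name__ == "__main__":
    frame = acquire_image()
    score = recognize(extract_features(preprocess(frame)))
    print("Edge density:", score, "->", decide(score))
```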
Machine vision is used to:
- Identify the colour of a product;
- Identify the presence of defects in a product;
- Measure the length, width, and depth of a product;
- Compare the outline of a product to a reference, and more. A short code sketch of two of these checks appears after this list.
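As an illustration of the measurement and outline-comparison tasks, the sketch below uses OpenCV contours. It assumes you already have binarised images of a single part on a plain background, and the MM_PER_PIXEL calibration value is a hypothetical placeholder.

```python
# Sketch: measuring a part and comparing its outline to a reference
# using OpenCV contours. Binary input images and the MM_PER_PIXEL
# calibration value are assumptions for illustration.
import cv2

MM_PER_PIXEL = 0.1  # hypothetical camera calibration

def largest_contour(binary_image):
    contours, _ = cv2.findContours(binary_image, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    return max(contours, key=cv2.contourArea)

def measure(binary_image):
    """Length and width from the minimum-area bounding rectangle."""
    (_, _), (w, h), _ = cv2.minAreaRect(largest_contour(binary_image))
    return w * MM_PER_PIXEL, h * MM_PER_PIXEL

def outline_difference(binary_image, reference_binary):
    """Lower values mean the outline is closer to the reference."""
    return cv2.matchShapes(largest_contour(binary_image),
                           largest_contour(reference_binary),
                           cv2.CONTOURS_MATCH_I1, 0.0)
```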
How a Machine Vision System Works
Let’s assume our machine vision system inspects products. First, a sensor detects whether a product is present. Once it is, the sensor triggers the camera to capture an image and the illumination system to highlight the product’s features. Next, a frame grabber converts the captured image into a digital output and stores it in the computer’s memory, where the system software analyses the image against predetermined criteria. If the product fails the quality checks, it is automatically rejected.
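The loop below sketches this inspection cycle in Python with OpenCV. The presence sensor and rejection mechanism are hypothetical placeholders for real I/O (for example, a photoelectric sensor and a pneumatic rejector), and the dark-pixel criterion is an assumed stand-in for whatever quality rule your application actually needs.

```python
# Sketch of the inspection loop described above. product_present()
# and reject_product() are hypothetical placeholders for real I/O.
import cv2

def product_present():
    """Placeholder: poll a presence sensor; always True here."""
    return True

def reject_product():
    """Placeholder: actuate the rejection mechanism."""
    print("Product rejected")

def passes_quality_check(frame, max_dark_fraction=0.02):
    """Assumed criterion: too many dark pixels indicates a defect."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    _, dark = cv2.threshold(gray, 60, 255, cv2.THRESH_BINARY_INV)
    return cv2.countNonZero(dark) / dark.size <= max_dark_fraction

def inspect_once(camera):
    if not product_present():
        return
    ok, frame = camera.read()            # the camera acts as the frame grabber
    if not ok:
        return
    if not passes_quality_check(frame):  # analyse against preset criteria
        reject_product()

if __name__ == "__main__":
    camera = cv2.VideoCapture(0)
    inspect_once(camera)
    camera.release()
```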
Types of Vision Systems
A wide range of machine vision systems is available on the market, each characterised by a different balance of flexibility, performance, and cost. Vision systems can be divided into the following classes:
1. Smart camera-based vision systems
Smart cameras combine a sensor, processor, and I/O in a compact housing, generally no larger than a standard industrial camera. These solutions offer a simple, intuitive interface that allows operation with minimal training. To configure the smart camera for an inspection task, a computer is connected to it via a network interface; this connection, however, is not required at runtime. Major advantages of smart cameras are their compact design and straightforward communication of results.
2. PC based vision systems
A classical PC-based machine vision system has an industrial computer at its heart that manages and communicates with all the other peripheral devices, such as cameras and lights. After processing the camera inputs and analysing the information in software, the computer communicates its decisions to the other devices. PC-based vision systems come into the picture when the application demands high processing power, multiple cameras, or dedicated FPGA processors.
3. Compact vision systems
A compact vision system is essentially a ‘lighter’ version of a PC-based vision system. It is built on embedded processing technology and usually includes a graphics card that captures the data and transfers it to a separate peripheral device, such as an external monitor, for viewing. Compact vision systems generally also have a built-in graphical user interface that can be operated easily with a touch-screen monitor or a mouse. Some compact vision systems not only manage first-level inputs, such as the camera and trigger inputs, but also have embedded ones.
4. Cloud-based vision systems: deploying computer vision models in the cloud and connecting them to cameras to detect defective products
Once the model is trained, deploy it to the cloud with a service such as nstudio.navan.ai, AWS SageMaker, Google AI Platform, or Azure. Then connect the deployed model to your cameras by setting up a pipeline that streams video to your cloud infrastructure; this might involve IoT (Internet of Things) devices or edge computing solutions that preprocess the video feed before sending it to the cloud. Once the video feed reaches the cloud, the deployed model can perform real-time inference on the frames to detect defective products. The results of the inference can then be sent back to the production line for action, such as flagging defective products for removal or further inspection.
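As a rough sketch of such a pipeline, the Python snippet below captures frames, JPEG-encodes them, and posts them to a cloud inference endpoint over HTTP. The endpoint URL, API key, and response format are hypothetical assumptions; adapt them to whatever your chosen deployment service (nstudio.navan.ai, SageMaker, or another serving platform) actually exposes.

```python
# Sketch: streaming camera frames to a cloud-hosted model over HTTP.
# ENDPOINT, API_KEY, and the "defective" response field are
# hypothetical and must be replaced with your deployment's details.
import cv2
import requests

ENDPOINT = "https://example-inference-endpoint/v1/predict"  # hypothetical
API_KEY = "YOUR_API_KEY"                                     # hypothetical

def send_frame(frame):
    """JPEG-encode a frame and post it for inference."""
    ok, jpeg = cv2.imencode(".jpg", frame)
    if not ok:
        return None
    response = requests.post(
        ENDPOINT,
        headers={"Authorization": f"Bearer {API_KEY}"},
        files={"image": ("frame.jpg", jpeg.tobytes(), "image/jpeg")},
        timeout=5,
    )
    response.raise_for_status()
    return response.json()  # assumed to contain a "defective" flag

def stream(camera_index=0):
    camera = cv2.VideoCapture(camera_index)
    try:
        while True:
            ok, frame = camera.read()
            if not ok:
                break
            result = send_frame(frame)
            if result and result.get("defective"):
                print("Defective product detected; flag for removal")
    finally:
        camera.release()

if __name__ == "__main__":
    stream()
```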
Conclusion
Computer vision techniques based on deep learning have expanded and simplified machine vision: we can detect product defects, identify key points in an image, and classify images. All of this is done with machine learning, which gives you more flexibility than traditional rule-based machine vision methods. navan.ai has a no-code platform, nstudio.navan.ai, where users can build computer vision models within minutes without any coding. Developers can sign up for free on nstudio.navan.ai.
Want to add Vision AI-powered machine vision to your business? Reach us at https://navan.ai/contact-us for a free consultation.