Recent media headlines show a leading electric vehicle maker has removed over 100 steps from its battery-making process, 52 pieces of equipment from the body shop and over 500 parts from the design of its flagship vehicles. Rethinking its manufacturing process has delivered a 35% reduction in the cost of materials for vans, with savings of similar scale for its other vehicles. Rudolf Schambeck, Machine Vision Manager, Zebra Technologies explains.
Simplification, flexibility and efficiency are needed in automotive manufacturing, including the leading-edge electric vehicle battery sector. Whether building a new factory or fitting solutions into an established site, carmakers do not want the problems caused by too many pieces of hardware and software from a range of suppliers, which create interoperability, cost and maintenance complications.
Carmakers and electric vehicle battery manufacturers are leveraging easier-to-use machine vision, deep learning and 3D sensor technologies to inspect the uniformity of surface coating, detect defects in cells, read barcodes and serial numbers, ensure the consistent positioning and application of adhesives and thermal beads, and carry out quality assembly of battery packs with vision-guided robotics.
Use and procurement today
Deep learning machine vision software, 3D technology and vision-guided robotics are unlocking new levels of visual inspection for quality, safety and compliance across the electric battery manufacturing process. Forty-three per cent of automotive business leaders surveyed in Germany and 56% in the UK are currently using some form of AI, such as deep learning, in their machine vision projects, according to a Zebra report.
There is a range of ways manufacturers procure machine vision solutions for existing and new factories; two main approaches are selection at the site level with sign-off at the corporate level, and selection and sign-off both at the site level. This site-level focus has its benefits, but it can leave room for less desirable variation, where different sites use different machine vision solutions for similar workflows and expertise and data are not shared across sites.
Deep learning and metrics
Modern machine vision solutions are easier to use than legacy systems, with better user interfaces and better interoperability between software, hardware and upgrades via subscriptions. Some solutions are aimed at data scientists, with readymade studio environments and tools, while other solutions involve the input of programmers working with engineers.
Today’s machine vision software comes with deep learning tools which are needed for higher levels of inspection and are better at handling more complex use cases. Deep learning neural networks (specifically convolutional neural networks) are powerful, advanced AI tools that mimic the human brain.
Neural networks can achieve remarkable results, but they need to be applied in an educated way. Data issues must be addressed for an electric vehicle battery manufacturer to reap the benefits of AI: mixing training and testing datasets, inadequate or unbalanced sample sizes, ambiguous or inconsistent data annotation, and environmental factors all need to be taken into account to ensure deep learning solutions work properly.
Realistic expectations should be based on the areas where neural networks excel, compared to human performance and conventional rules-based machine vision, such as detection of surface defects, detecting or counting objects, reading difficult characters or detecting unexpected deviations from previously seen objects.
Selecting the appropriate evaluation metrics is essential for accurately assessing model performance. The most basic metric is accuracy (the number of correct classifications divided by the number of all classifications), but it may not be suitable for unbalanced datasets. Instead, metrics like the F1 score for classification tasks, or average precision for detection tasks, are often better choices.
As a rule of thumb, it is better to avoid metrics that rely on plotting the true positive rate against the false positive rate (such as ROC-AUC), as the numbers they produce may be misleadingly optimistic, especially when the number of true negatives is high.
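The gap between accuracy and F1 on an unbalanced dataset is easy to demonstrate. A minimal sketch in Python, using synthetic labels (not real inspection data):

```python
# Synthetic defect-detection labels: 1 = defective, 0 = good.
# The dataset is unbalanced: only 5 defective parts out of 100.
y_true = [1] * 5 + [0] * 95
# A naive model that predicts "good" for everything still scores
# 95% accuracy, yet it catches no defects at all.
y_pred = [0] * 100

tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)

accuracy = sum(1 for t, p in zip(y_true, y_pred) if t == p) / len(y_true)
precision = tp / (tp + fp) if (tp + fp) else 0.0
recall = tp / (tp + fn) if (tp + fn) else 0.0
f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0

print(f"accuracy = {accuracy:.2f}")  # 0.95, looks good
print(f"F1 score = {f1:.2f}")        # 0.00, reveals the model is useless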
Deep learning OCR
Some machine vision solutions come ready out of the box as low/no code solutions that require no prior machine vision knowledge. One example is optical character recognition (OCR) based on deep learning.
Getting OCR inspection right can be challenging. A variety of factors including stylised fonts, blurred, distorted or obscured characters, reflective surfaces, changing lighting environments, and complex, non-uniform backgrounds can make it impossible to achieve stable results using traditional OCR techniques.
Deep learning-based OCR can come with a ready-to-use neural network that is pre-trained using thousands of different image samples. It can deliver high accuracy straight out of the box, even when dealing with very difficult cases. Users can create robust OCR applications in just a few simple steps, without the need for machine vision expertise, and an intuitive interface makes set-up easy. Such solutions are also flexible, as they can be deployed on desktop PCs, Android handheld devices and smart cameras. These advanced deep learning machine vision capabilities are also being combined with 3D scanning for advanced data capture and analysis.
The uses and benefits of 3D
3D vision systems can reconstruct the spatial layout of objects within an electric battery, including the object’s shape, size, position and orientation in a three-dimensional space. 3D scanning can provide accurate, detailed data that 3D inspection processes can use to perform comprehensive and precise inspections of cells, solder beads used for cell assembly, tabs and connectors, and adhesive beads for cell stack assembly.
One common approach is stereo vision, in which the system captures images from two slightly offset viewpoints, allowing it to perceive depth and reconstruct the three-dimensional structure of objects. 3D scanning can also be done using one of several other techniques, such as laser scanning, structured light scanning, time-of-flight (ToF) scanning, photogrammetry or contact scanning.
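In stereo vision, depth follows directly from the disparity between the two views: depth = focal length × baseline / disparity. A minimal sketch with illustrative numbers (the focal length, baseline and disparity values are assumptions, not real sensor specifications):

```python
def depth_from_disparity(focal_px: float, baseline_mm: float, disparity_px: float) -> float:
    """Triangulate depth (mm) from stereo disparity (pixels)."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_mm / disparity_px

# Illustrative values: 800 px focal length, 60 mm camera baseline.
# A feature with 12 px disparity lies 4000 mm from the cameras.
print(depth_from_disparity(800.0, 60.0, 12.0))  # 4000.0
```

Note how depth is inversely proportional to disparity: nearby objects shift more between the two views than distant ones.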
The first step in 3D scanning is to acquire data about the surface geometry of the object being scanned. For example, prismatic and pouch cells require laminating and stacking. Electrodes and separators are cut into rectangular sections and stacked to form the battery cell. High-resolution 3D profile sensors can aid inspections where image contrast or low lighting poses a problem. The high-fidelity 3D also helps ensure cell casings are free from contaminants or debris that might compromise safety or functionality.
Creating digital point clouds
Once data is acquired, it is processed to generate a point cloud, a collection of data points in a three-dimensional space, with each point representing a specific location on the object’s surface. The density and accuracy of the point cloud depend on the scanning technique and the output resolution of the 3D scanner. The point cloud data is often processed further to generate a mesh representation of the object’s surface. A mesh is a collection of vertices, edges and faces that define the object’s shape in a more structured and efficient way.
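A point cloud is simply a collection of (x, y, z) samples, so basic properties such as its centroid and bounding box fall out directly. A minimal sketch on a tiny synthetic cloud (plain Python, illustrative values rather than real scanner output):

```python
# A tiny synthetic point cloud: each tuple is one sampled surface point (x, y, z).
cloud = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.25), (0.0, 1.0, 0.25)]

n = len(cloud)
# Centroid: the mean of each coordinate across all points.
centroid = tuple(sum(p[i] for p in cloud) / n for i in range(3))

# Axis-aligned bounding box: the min/max of each coordinate.
bbox_min = tuple(min(p[i] for p in cloud) for i in range(3))
bbox_max = tuple(max(p[i] for p in cloud) for i in range(3))

print(centroid)            # (0.5, 0.5, 0.125)
print(bbox_min, bbox_max)  # (0.0, 0.0, 0.0) (1.0, 1.0, 0.25)
```

Real clouds contain millions of points, but mesh generation, registration and inspection all build on per-point operations of this kind.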
Some 3D profile sensors use a dual-camera, single-laser design, which helps decrease gaps during scanning. This is especially useful for battery module assembly, as it enables quick capture of high-fidelity 3D reproductions of each module’s surface. Combining 3D profilers with pattern-matching software tools ensures cell stacks and all connections are precisely aligned in the assembled electric vehicle battery module.
3D tools for machine vision
3D profile sensors are important for machine vision tasks like quality control and inspection. They extend the capabilities of machine vision systems, enhancing depth perception and providing a rich 3D dataset for modern machine vision software equipped with 3D tools to process and analyse point cloud data. Tools include a 3D surface matcher, to find and estimate the pose of surface-model occurrences in a point cloud, and 3D shape finders, to find and characterise cylinders, (hemi)spheres, rectangular planes and boxes in a point cloud.
Other tools could include 3D blob analysis to segment a point cloud into blobs and calculate their characteristics, and 3D measurement to find transitions in extracted profiles from depth maps and compute metrics on and from these. 3D metrology can be used to compute distances, statistics and volumes in a point cloud.
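As a concrete example of 3D metrology, the volume of an adhesive bead can be approximated from a depth map by summing each pixel's height above the substrate times the pixel's footprint area. A minimal sketch on a synthetic height map (the pixel size and heights are illustrative assumptions, not a real sensor's output):

```python
# Synthetic height map (mm above the substrate), one value per pixel.
# The non-zero cells represent an adhesive bead on the part surface.
height_map = [
    [0.0, 0.0, 0.0, 0.0],
    [0.0, 2.0, 2.0, 0.0],
    [0.0, 2.0, 2.0, 0.0],
    [0.0, 0.0, 0.0, 0.0],
]
pixel_area_mm2 = 0.25  # illustrative 0.5 mm x 0.5 mm pixel footprint

# Volume ~= sum over pixels of (height x pixel footprint area).
volume_mm3 = sum(h * pixel_area_mm2 for row in height_map for h in row)
print(volume_mm3)  # 2.0
```

Comparing the computed volume against a tolerance band is then enough to flag under- or over-dispensed beads on the line.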
3D and vision-guided robots
Robotic arms are used in electric vehicle battery and automotive manufacturing for picking, sorting, and assembly at factory sites and assembly lines. Picking and sorting applications are useful when detecting and removing defective items from the production line.
A machine vision camera connected to machine vision software can inspect items as they pass on the line and alert the robotic arm to pick and remove anomalous items. Robotics also helps in heavy-lifting, repetitive and high-accuracy assembly use cases. Vision-guided robots can be programmed to pick-and-place cells for cell stack and battery module assembly with high levels of accuracy and control, for example.
3D calibration tools can be leveraged for eye-to-hand (stationary camera mounted next to robot) and eye-in-hand (camera mounted on robot) vision-guided robotic applications. Machine vision cameras and 3D sensors attached to robotic arms give greater flexibility for moving along and around the materials or components to be inspected, while stationary cameras are suited to conveyor use cases.
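At the core of vision-guided picking is a coordinate transform: a point detected in the camera frame is mapped into the robot's base frame through the calibrated 4x4 homogeneous transform that hand-eye calibration produces. A minimal sketch with an illustrative transform (pure translation for readability; a real calibration would also include a rotation):

```python
def transform_point(T, p):
    """Apply a 4x4 homogeneous transform T to a 3D point p."""
    x, y, z = p
    v = (x, y, z, 1.0)
    # Only the first three rows are needed for the transformed point.
    return tuple(sum(T[r][c] * v[c] for c in range(4)) for r in range(3))

# Illustrative camera-to-robot-base transform from an eye-to-hand
# calibration: identity rotation plus a (100, 50, 200) mm translation.
T_base_cam = [
    [1.0, 0.0, 0.0, 100.0],
    [0.0, 1.0, 0.0,  50.0],
    [0.0, 0.0, 1.0, 200.0],
    [0.0, 0.0, 0.0,   1.0],
]

# A cell detected at (10, 20, 30) mm in the camera frame...
p_cam = (10.0, 20.0, 30.0)
# ...lands at (110, 70, 230) mm in the robot base frame.
print(transform_point(T_base_cam, p_cam))  # (110.0, 70.0, 230.0)
```

In the eye-in-hand case the same maths applies, but the camera-to-base transform changes with every robot pose, so the current arm pose must be chained into the calculation.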
Conclusion
Fifty-four per cent of manufacturing leaders in Europe (61% globally) expect AI to drive growth by 2029, up from 37% (41% globally) in 2024, according to Zebra’s 2024 Manufacturing Vision Study. This surge in AI adoption, combined with 92% of survey respondents prioritising digital transformation, underscores manufacturers’ intent to improve data management and leverage new technologies that enhance visibility and quality throughout the manufacturing process.
As new electric vehicle battery and vehicle processes and technologies emerge, we can expect to see more companies reviewing their processes and supply chains to reduce waste and costs and drive up production and profitability, and they will need the right tools to help them get there. Deep learning machine vision and 3D data capture and analysis tools are already giving manufacturers a leading edge in electric battery production.