Quality assurance has traditionally relied on a legion of human operators armed with clipboards and pens. The combination of artificial intelligence, edge computing and advanced algorithms is changing that, and fast.
The past ten years have seen an explosion in the number of cameras – specifically surveillance systems – deployed in factories and cities the world over.
Individual unit prices have plummeted while image quality has soared. A high-resolution, high frame-rate camera is now a very cost-effective proposition; more importantly, it offers a tremendous opportunity to leverage the data it captures.
And yet, that’s easier said than done. The data stream from one 4K camera is sizeable; the stream from an entire network’s worth is monumental. Storing, let alone processing and analysing, such a volume just isn’t feasible – at least not in the traditional sense. That’s why edge computing represents a sea-change in capability and thinking.
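To put that volume in perspective, here is a back-of-the-envelope calculation. The bitrate and camera count are illustrative assumptions, not figures from the interview:

```python
# Back-of-the-envelope storage estimate for a camera network.
# All figures are illustrative assumptions, not vendor data.

BITRATE_MBPS = 25          # assumed compressed bitrate for one 4K stream
CAMERAS = 200              # assumed size of a factory-wide network
SECONDS_PER_DAY = 24 * 60 * 60

def daily_terabytes(bitrate_mbps: float, cameras: int) -> float:
    """Raw video generated per day, in terabytes."""
    megabits_per_day = bitrate_mbps * SECONDS_PER_DAY * cameras
    return megabits_per_day / 8 / 1_000_000  # megabits -> megabytes -> TB

print(f"{daily_terabytes(BITRATE_MBPS, CAMERAS):.1f} TB per day")  # 54.0 TB
```

Even at a modest assumed bitrate, a mid-sized network produces tens of terabytes a day – the scale that makes hauling everything back to a data centre impractical.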
To learn more, The Manufacturer sat down with Brian Duffy, Edgeline lead for HPE.
How does advanced digital technology represent a more efficient, effective way of conducting quality assurance?
Brian Duffy: Employing someone to sit and monitor a dozen or more camera screens isn’t the most rewarding task, and it is surprisingly labour-intensive. Naturally, that means you’re going to get lapses in concentration and things are going to get missed.
From a quality assurance (QA) perspective, a business wants its pass rate to be as close to 100% accurate as possible.
With the increase in cameras and the resulting number of screens, it’s just not effective to use human operators anymore. That’s where artificial intelligence (AI) comes in.
Two things make AI a perfect candidate to undertake QA: the models are becoming ever more sophisticated, and that sophistication is driving dependable accuracy.
Therefore, we can say with high confidence whether a finished good meets the quality criteria or whether it must be rejected or reworked.
Video Analytics: Solving today’s business needs
Analysing video data right at the edge can extract critical insights, helping speed up reaction times, reduce the risk of data transfer, and drive better business decisions.
And the edge is playing an incredibly important role in this.
Absolutely. Manufacturers are becoming ever-more aware of the huge amount of data just sitting at the edge, and that leveraging that data could significantly drive efficiencies.
Most CCTV systems have network video recorders at various points and, ultimately, video footage is brought back to a central location, either a cloud or a data centre.
The increase in the number of cameras means it’s just not practical to move data from the source – the network requirements are too large, and the latency involved is too high.
If you’re using video footage for QA, you need to know whether each item has passed or failed quickly – ideally, as close to real-time as possible. You can’t wait, you can’t stop your production line while the data centre traffic gets analysed and returns a result. That creates an even greater need to harness data at the source.
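The latency argument can be made concrete with a simple time budget. The line speed and latency figures below are assumptions for illustration, not measurements from the article:

```python
# Per-item decision budget on a production line - all figures are assumptions.

ITEMS_PER_MINUTE = 120                # assumed line speed
BUDGET_S = 60 / ITEMS_PER_MINUTE      # seconds available to decide pass/fail

EDGE_LATENCY_S = 0.03                 # assumed on-site inference time
CLOUD_ROUND_TRIP_S = 0.25             # assumed network round trip alone

print(f"budget per item: {BUDGET_S:.2f}s")
print(f"edge leaves {BUDGET_S - EDGE_LATENCY_S:.2f}s for capture and actuation")
print(f"cloud leaves {BUDGET_S - CLOUD_ROUND_TRIP_S:.2f}s before any inference runs")
```

Under these assumptions the network round trip alone consumes half the per-item budget before a single frame is analysed, which is why inference at the source matters.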
The other important aspect is that HPE is leading at the edge. Not only can we run the same stack that would be run in the cloud or data centre, but our platform is particularly well-suited to this application.
All our products can be wall-mounted, they are ruggedised to operate in more hostile environmental conditions, and we generally use more energy-efficient components.
We also incorporate accelerators within our system which really provide a performance boost. That allows HPE to say that our platform is ‘perfectly optimised for AI’.
Let’s go back to artificial intelligence, how does that support video analytics?
Artificial intelligence was first used in World War II to predict ballistics and has evolved significantly since the 1940s through ever more complex algorithms.
These algorithms are now so sophisticated that they can mimic the human brain. Therefore, their analysis of something is extremely accurate, so much so that it increasingly exceeds human capabilities.
There are two elements to how video analytics works: training and inference. Let’s take an automotive body panel as an example: say you want to use video analytics to detect whether a panel has any scratches, blemishes, dents or other imperfections before it is fitted.
So, you have to train your AI algorithm to understand what a scratch is, what a dent is, what is not the finished good, and so on.
Inference is taking that trained model and deploying it where you want to conduct the analysis, which, as I’ve mentioned, is as close to the data source as possible.
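The two phases Duffy describes can be sketched with a deliberately tiny classifier. This is a toy nearest-centroid model on synthetic feature vectors – not HPE’s or any partner’s actual software; the feature values and labels are invented for illustration:

```python
import math
import random

random.seed(0)

# --- Training: learn what "good" vs "scratched" panel features look like ---
# Synthetic 2-D features (e.g. edge density, brightness variance) - invented.
good = [(random.gauss(0.2, 0.05), random.gauss(0.3, 0.05)) for _ in range(100)]
scratched = [(random.gauss(0.7, 0.05), random.gauss(0.8, 0.05)) for _ in range(100)]

def centroid(points):
    xs, ys = zip(*points)
    return (sum(xs) / len(xs), sum(ys) / len(ys))

centroids = {"pass": centroid(good), "rework": centroid(scratched)}

# --- Inference: the trained model is deployed at the edge and applied
# --- to each new frame's features, with no further learning required.
def classify(x, y):
    return min(centroids, key=lambda label: math.dist((x, y), centroids[label]))

print(classify(0.22, 0.28))  # near the "good" cluster -> pass
print(classify(0.68, 0.79))  # near the "scratched" cluster -> rework
```

The point of the split is visible in the code: training is the expensive step that can happen anywhere, while inference is a cheap lookup that can run on a small machine next to the camera.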
Could a manufacturer feasibly develop and train these algorithms themselves?
A manufacturer may well have the skills in-house: they could bring in data scientists, code it and do it all themselves, and to a point that has been done.
HPE’s approach is to partner with leading software developers who already have a lot of these models, or at least have a framework on which to hone a model. Then, from an Edgeline perspective, we validate that the software works on our platform. That gives us confidence in the solution itself, as well as the vendor’s ability to conduct the QA.
Our ecosystem can typically solve a problem quicker and we can support the solution, as opposed to it being homegrown. For example, I’ve seen models being adopted and trained in less than a week to give results to a customer.
Do you have any real-world examples of video analytics for QA being deployed?
While I can’t mention names, we are running a number of pilots. One of our customers manufactures servers and we are working with several of our software partners to use cameras to monitor the assembly process.
If a customer wants to buy a server with a particular configuration, that information can be fed into the AI model and, using video analytics, we can check whether an operator has assembled it correctly – the cables are all in the right place, components are seated properly, and so on. A screen then turns green for a pass or red for rework.
Another example is in the food industry. This time, we’re using video analytics to assess cuts of meat coming down a production line. Algorithms are looking at the size and weight of each cut, detecting where the bone is to make the most efficient cut, and analysing the marbling, i.e. the balance of red and white.
Some countries have different preferences, so by leveraging video analytics and AI, we can look at the colouration of a cut of meat and determine whether it’s optimised for country A or country B depending on the natural marbling of the meat.
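The marbling check reduces to a colour-ratio decision, which can be sketched in a few lines. The pixel data and the routing threshold here are invented for illustration; a real system would work on camera frames, not labelled lists:

```python
# Toy routing of a meat cut by marbling ratio.
# Pixel labels and the country threshold are invented assumptions.

def marbling_ratio(pixels):
    """Fraction of 'white' (fat) pixels; pixels are 'R' (red) or 'W' (white)."""
    return pixels.count("W") / len(pixels)

def route(pixels, threshold=0.30):
    """Assumed preference: country A favours heavier marbling than the threshold."""
    return "country A" if marbling_ratio(pixels) >= threshold else "country B"

sample = ["R"] * 60 + ["W"] * 40   # 40% white pixels
print(marbling_ratio(sample))      # 0.4
print(route(sample))               # country A
```

The same pattern – measure a visual property, compare against a per-market threshold, trigger a routing action – generalises to most of the QA applications described above.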
Video analytics is pretty bleeding-edge stuff. What does the future hold?
HPE is investing heavily in this area because we firmly believe that video analytics is going to explode. The appetite to conduct more surveillance is growing rapidly, and that will drive a corresponding surge in the number of cameras being deployed.
It doesn’t make sense to have people sat looking at video screens all day; the number of screens they would have to monitor is becoming unimaginable. The human eye is just not set up to process that amount of visual data.
That is forcing us to look at new ways of analysing video footage, and the way to do that is leveraging AI and using models to make sense of video footage automatically.
The number of applications where we can use video analytics to detect something and then trigger a follow-on action is huge. We’re at the start of a curve which is about to grow exponentially, and HPE is leading the charge because we can process in real time right at the edge.