While at the 2015 Manufacturing 4.0 conference, Jonny Williamson sat down with Rockwell Automation’s vice president of market development, John Nesi, to discuss the company’s vision of the Connected Enterprise.
Rockwell Automation seems pretty certain that the ‘Connected Enterprise’ is where the industrial landscape is heading. What has given you that confidence?
Thirty years ago we started discussing Ethernet becoming the de facto standard for networking. It’s taken this long because, four decades ago, the automation industry had around a dozen Programmable Logic Controller (PLC) suppliers, each propagating their own proprietary networks for their own self-preservation.
Due to industry consolidation, around half that number remain, and many have now embraced the open standard model, so there’s less proprietary networking getting in the way of progress.
Of equal importance is the fact that our customers’ capital expenditure tends to occur over very long periods of time; they don’t invest in upgrading networks and automation systems lightly or frequently. Industry is currently in that investment cycle.
The accelerant that’s really pushing this forward is the cost per Ethernet node dropping dramatically. Today, you can deploy an Ethernet node as cheaply as a node on any other kind of network, meaning the accepted standard has become open Ethernet.
At the same time, common data standards within IT have started to be embraced by more of the larger automation players. As the cost of deploying a standard network comes down, the data standards materialise and the two combine, further proliferating the trend.
How does that trend support Rockwell Automation’s vision of the ‘Connected Enterprise’?
The low cost per node of Ethernet deployment is producing more internet-capable devices, which will populate every sensor, actuator and instrument installed in a plant going forward.
All of that has to be managed: the bandwidth each plant consumes forces infrastructure investment, and the information has to be collected to make smarter, more proactive business decisions.
That’s why having Cisco as a partner is pretty critical: it helps Rockwell Automation bridge the gap between the factory floor and the office environment from a networking standards point of view. It makes it easier to have the conversations between IT and Operational Technology (OT) departments that are now so vital.
For a lot of manufacturers, this still seems to be quite conceptual. What needs to happen to really drive this forward and ensure the transition isn’t a long, protracted process over the next decade?
When I hear about ‘Industry 4.0’, for example, it can come across as a very long academic exercise. There are a lot of standards associated with Industry 4.0 that don’t exist yet, and a lot of wrangling is going to take place before they ever materialise.
When I look at Industry 4.0 or any of the smart manufacturing initiatives, they offer an ultimate vision of the future that’s going to take a lot of work to arrive at. Alternatively, there are intermediate steps that manufacturers can take today to move forward.
This summer, the Manufacturing 4.0 conference – hosted by Hanley Automation in Dublin, Ireland and sponsored by Rockwell Automation – brought together 100 invited delegates to hear from some of the leading operational and information technology minds about the political, economic and practical consequences of Industry 4.0.
Joining Ireland’s Taoiseach (Prime Minister) Enda Kenny to speak at the event were representatives from Hanley Automation; Rockwell Automation; Cisco Systems; Microsoft; and SAS, among others.
What Rockwell Automation is advocating is that the Connected Enterprise is something that’s absolutely achievable now. To prove that it’s possible, we’ve already implemented it across our organisation and, as a result, it becomes a future-proof first step that’s basically a requirement to achieve Industry 4.0 or any smart manufacturing advantages.
Yet this “first step” is one that businesses haven’t taken in 30 years. What’s fuelling the need to do it now?
The proliferation of information combined with the availability of cheaper compute cycles creates an opportunity. Obviously there’s a fear factor in terms of how internet-enabled manufacturers want to make their factories’ equipment, especially regarding data and IP security. Yet equally, it can’t be ignored, because if they don’t embrace it, their competitors will.
Where is manufacturing receiving investment? Typically in those countries where terms such as Industry 4.0 and smart manufacturing originate. That’s not a coincidence because they are the ones who understand the sector’s importance to factors such as employment, exports and GDP.
As more companies embrace the trend to reshore production, they probably aren’t going to make do with broken-down, dated equipment; they’re going to invest in new stuff now to be competitive with the companies that have been investing for the past 20 years.
If you’re investing in new equipment or facilities, it probably makes sense to consider this sort of discussion; but if you’re operating legacy systems and infrastructure, how big of a step is it to take?
You need to look at where your critical profit centre is and what you want to improve to attain the highest level of profit possible, based on a quality metric improvement – e.g. machine availability or operator efficiency.
That’s when you introduce a pilot project connecting one line or a handful of machines. The information from that pilot can be studied to learn how best to impact the chosen metric – such as by migrating from batch processing to continuous processing, or from build-and-check to continuous in-line checking, both of which reduce factors that affect profit, such as cycle times and quality issues.
A phased-in approach like this allows a business to understand the investment required and the benefits gained in a very practical way which makes it easier to justify rolling it out further. You can paint the vision of what it could be, but typically you’ve got to start by taking one step at a time.
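To make that concrete, here is a minimal sketch, in Python and with purely hypothetical figures, of the kind of quality metric a pilot line might track: machine availability rolled into an Overall Equipment Effectiveness (OEE) score, compared before and after the pilot. The numbers are illustrative assumptions, not data from Rockwell Automation.

```python
# A minimal sketch of tracking one quality metric on a pilot line.
# All figures are hypothetical; OEE = availability x performance x quality
# is the standard Overall Equipment Effectiveness breakdown.

def availability(run_time_h: float, planned_time_h: float) -> float:
    """Fraction of planned production time the machine actually ran."""
    return run_time_h / planned_time_h

def oee(avail: float, performance: float, quality: float) -> float:
    """Overall Equipment Effectiveness for one line."""
    return avail * performance * quality

# Hypothetical before/after figures for a one-line pilot (720 planned hours).
before = oee(availability(600, 720), performance=0.85, quality=0.97)
after = oee(availability(660, 720), performance=0.88, quality=0.98)

print(f"OEE before pilot: {before:.1%}")  # ~68.7%
print(f"OEE after pilot:  {after:.1%}")   # ~79.1%
```

Comparing a single line’s score before and after the pilot gives exactly the kind of practical, step-at-a-time justification for wider rollout that Nesi describes.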
With every sensor, actuator and instrument installed in a plant constantly collecting information and feeding it back, how can adopters avoid so-called “analysis paralysis”?
You can collect data for data’s sake, but you really need to understand what it is you’re analysing the data for. The big challenge is then taking a lot of random data points and trying to prognosticate some form of action to take.
On a factory floor, when you’re talking about managing equipment data, you want the company that manufactured the equipment to collect the data and inform you when to change an air filter or a gear, etc.
You don’t want to be the one collecting information, you want it to come direct, and you don’t want to have to gather the information from all of your various different vendors before taking action.
Ideally, the vendor would have access to your machine and equipment so they can run their model on the fleet that they’ve got installed at 100 other sites just like yours, then tell you when you need to do something and identify trends far quicker.
Over time, I think more people will utilise Cloud capabilities to run analytics on big data sets, but they’ll realise the latency involved – it doesn’t occur in real time – so to take more immediate, even pre-emptive, action you need to move the analytics compute cycle closer to the point in question.
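That edge-versus-Cloud split could be sketched roughly as follows. This is a hypothetical Python illustration, not any actual Rockwell Automation or Cisco product; the sensor read, vibration threshold and upload queue are all invented for the example. The latency-sensitive check runs next to the equipment, while every sample is still batched for slower, fleet-wide Cloud analytics.

```python
# Hypothetical edge-vs-cloud split: act locally on latency-sensitive checks,
# batch everything for slower cloud-side trend analysis.
import queue
import random

VIBRATION_LIMIT_MM_S = 7.1  # invented trip level for this machine

cloud_batch: "queue.Queue[float]" = queue.Queue()  # uploaded later, with latency

def read_vibration_sensor() -> float:
    """Stand-in for a real sensor read; here just simulated noise."""
    return random.gauss(4.0, 1.5)

def on_sample(value: float) -> None:
    # Edge: pre-emptive action with no round trip to the cloud.
    if value > VIBRATION_LIMIT_MM_S:
        print(f"edge: pre-emptive alarm, vibration {value:.1f} mm/s")
    # Cloud: queue the sample for big-data analytics across the fleet.
    cloud_batch.put(value)

for _ in range(1000):
    on_sample(read_vibration_sensor())

print(f"{cloud_batch.qsize()} samples queued for cloud analytics")
```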
It sounds intimidating to hear Internet of Things-this and big data-that, but you’ve really got to break it down to: ‘What am I going to use it for?’ ‘How big does it actually have to be?’ ‘How much data do I really need to have an immediate impact on the equipment that I’m working on or monitoring?’
You’ve said that “security by obscurity” is no longer a valid argument. What did you mean by that?
There is an assumption that if a machine is left alone on a proprietary network, then nobody can – or will – hack into it. But chances are that somebody is going to talk to it with a PC anyway and if they do, it’s vulnerable.
Most hacks in industrial environments don’t necessarily come in through the cyber security channel; they come in directly through the front door, i.e. via an infected laptop or flash drive. It’s likely to be an innocent employee or contractor who comes in, connects, and just happens to be infected.
Older PLCs were never designed for role-based access or any form of lockdown protection; nobody cared about that until now, when anyone can tie into machines through the internet. Because everybody carries a laptop or other smart device, they’ll just walk up to any device on the factory floor and start working on it without giving it a second thought.
Many businesses don’t assess their old Data Highway networks for security vulnerabilities or upgrade them to current security standards, and that leaves them very vulnerable. Pretending it isn’t going to happen to you because your network is dated isn’t a safe assumption.
Typically there’s an IP concern and an access protection concern, but it all starts with education on security and defence in depth – that’s a journey every company has to take sooner or later, whether it’s outsourced or conducted in-house.
Do you think that current workforce skills are at a level sophisticated enough to handle disruptive technologies such as the Cloud, big data, and the Internet of Things?
You’ve got two different workforce issues. The first is operating machinery with more embedded intelligence, and that comes down to operator skill – something that probably needs upgrading so operators are more proficient with new technology.
At the same time, technology is smart enough to help an operator without them necessarily needing to have in-depth knowledge. That’s good news in that it makes machinery easier to work with, but conversely it means operators don’t understand processes to such a deep level.
The more idiot-proof you make the operator station, the more blind the operator is to what they are actually affecting.
The other factor is creating an environment where more analytics of processes occurs. Who’s monitoring that and has an understanding of how best to use that information, if at all? That takes a bit of discipline within organisations and it may be a skillset they have to develop themselves.