Cybersecurity issues facing the industry and how to overcome them
Donovan Tindill is a control systems cybersecurity subject matter expert on the Honeywell Industrial Cybersecurity Marketing team, having previously spent more than 17 years as a control systems cybersecurity consultant on projects in Canada and around the globe. Plant Services editor in chief Thomas Wilk caught up with him at the 2022 Honeywell User Group meeting to find out what top-of-mind cybersecurity issues are facing our industry.
PS: How would you describe what you're hearing from customers in terms of what they're looking for when it comes to cybersecurity?
DT: What we're finding with every company, every customer, is what I would call a journey. The first thing they recognize is that there's even a journey in the first place.
Depending on where they are in their journey, they're asking for different things. At the start, it's very basic: “Help me identify my vulnerabilities and my priorities. What do I do next? What are my top five?” We will then help them with those items, whether it's getting the control system off of the business network, or hardening the network perimeter to protect it. Think of a castle: let's put up a moat, a drawbridge, a wall, because we need to protect the control system inside. They focus largely on the perimeter controls at the start.
Then as they move down their journey, they start looking at it like, “we think we're doing okay at the perimeter, let's start looking at the control system itself. Let's learn more about it. Let's discover it.” So you hear things about inventory discovery, and then they want to reduce the attack surface of the control system itself. That becomes kind of the next mode, where they are now improving the cybersecurity of the control system.
Then they move beyond and start looking to automate all of this work. In the beginning it's basic blocking and tackling, “let's get antivirus deployed.” Next, let's try and make sure it's everywhere, and let's measure ourselves, “how well are we doing this?” Then you start looking at whether everything is up to date, and can we start automating the rollout or the deployment or the maintenance of antivirus?
Then if something is detected, what do we do with that data? Early on, they might not even do anything about it. But later on they take it more seriously: it gets alerted, it gets sent to an operations center, somebody starts asking questions about how it got there, and your incident response, investigation, and lessons learned become more mature.
So that's an example journey of where you can take antivirus: from “let's just try and get it deployed,” to “what do you do when something happens,” to alerting and automating, and using tools to help you verify that if we just added a new device to the network, is it fully compliant? Does it have all the minimum requirements we expect? When they get to the far end, you have a better understanding of what you think you should do. You don't rely so much on a third party to give you an assessment; a third party spends more time helping you optimize, improve, or get deeper knowledge.
So when you ask, “what are we being asked to do?” the first question I ask is, “what have you done so far?” And I help them discover what is the next best step.
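To make the “is a new device fully compliant?” idea concrete, here is a minimal sketch of that kind of check, assuming a hypothetical set of minimum requirements (antivirus present and current, recent OS patches, no legacy protocols enabled). The field names and thresholds are illustrative assumptions, not a Honeywell tool or any specific product's checklist.

```python
# Minimal sketch: check a newly added device against an assumed list of
# minimum requirements. All names and thresholds here are illustrative.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    antivirus_installed: bool
    av_signatures_age_days: int   # days since last signature update
    os_patch_age_days: int        # days since last OS patch
    legacy_protocols: list        # e.g., ["telnet", "smbv1"]

def compliance_findings(device: Device,
                        max_signature_age: int = 7,
                        max_patch_age: int = 90) -> list:
    """Return a list of human-readable findings; an empty list means compliant."""
    findings = []
    if not device.antivirus_installed:
        findings.append("antivirus not installed")
    elif device.av_signatures_age_days > max_signature_age:
        findings.append(f"antivirus signatures {device.av_signatures_age_days} days old")
    if device.os_patch_age_days > max_patch_age:
        findings.append(f"OS patches {device.os_patch_age_days} days old")
    for proto in device.legacy_protocols:
        findings.append(f"legacy protocol enabled: {proto}")
    return findings

if __name__ == "__main__":
    new_hmi = Device("HMI-07", antivirus_installed=True,
                     av_signatures_age_days=12, os_patch_age_days=200,
                     legacy_protocols=["telnet"])
    for finding in compliance_findings(new_hmi):
        print(f"{new_hmi.name}: {finding}")
```

In practice this kind of check would pull its data from inventory and endpoint tools rather than hand-entered values; the point is simply that the minimum requirements become explicit and testable.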
PS: You mapped out a really good three-phase journey there. If someone starts at phase one, is there a general timeline for how long it takes to get from that initial phase to maturity, or can it vary based on the site?
DT: If you look at sectors that were heavily regulated, like the power sector, they were given three to five years, and even that was a challenge. Because it's not so much the not knowing; it's not that they didn't know what needed to get done. It's that they didn't realize how much manpower and effort (was required), and the complexity of doing this. Until you project manage and figure out the sequencing, you don't really have an appreciation, and that adds to it too.
PS: I was talking to someone at breakfast yesterday who worked in cybersecurity for his plant. He was sort of chuckling that the most common virus they find is something like Conficker, which is about two decades old. It's on a legacy system, it got in in the 2000s, and it's still hiding there. What are your customers telling you about the kinds of threats they're seeing? Is it new attacks from the outside? Is it legacy viruses that just sort of kick around and are more nuisances than, say, ransomware attacks?
DT: Malware – actual malware infections – are not dominating the discussion. It's more that there's a paranoia, there's a nervousness, over all these more advanced attacks that we hear about: “Could those get into my environment?” The questions are largely: “Am I vulnerable?” and “What should I do next?”
Another analogy: think of an Excel chart where you have a steady gradient, or at least an upward trend, in the capability of the threats. First having knowledge of the control system, then having the ability to use phishing, then the impact of ransomware. These were not commonplace 10 or 20 years ago, so there's just this steady increase. Meanwhile, you have a control system that was designed and engineered 10 or 20 years ago, in an era when the threat was way down here. It was adequate then, but they're still running it.
It's almost like a flat line: if you don't improve your cybersecurity controls, they remain exactly as effective as they were. But now you have this more capable threat coming up from beneath, and you need better safeguards. Encryption protocols are a good example. Over time, encryption protocols get retired and we replace them with stronger ones, because it's easier to defeat the old ones. It's the same kind of thing. If you have an old control system, imagine it's an old encryption protocol from 20 years ago: it's not so strong. It was good when it was built, let's give them credit for that, but it was never intended to last for 20 years.
Then you look at what the upgrades are. You want to move it up the ladder a little bit, like a step change, and make it better. So you add better network architecture, whitelisting, monitoring, hardening, so that the safeguards keeping that threat out of the system are more effective. That's really what it's about.
PS: Plant Services readers are influencers when it comes to purchasing, but they aren't always influencers when it comes to things like network security, and one of their largest questions is one you just framed, which is, “what can I do?” Their version of that question is, “what am I on the hook for?” Best-in-class maintenance teams have factored proactive work into their workload to get ahead of machine breakdowns, so these folks also want to get ahead of security exposure. In your opinion, or based on what you've seen with customers, what are maintenance and reliability (and operations) teams normally on the hook for?
DT: Essentially, what is leadership expecting them to be responsible for?
PS: Yes, do you see responsibility for things like patching and upgrades falling to those teams with any kind of frequency? Or is this the kind of thing where these teams are empowered to reach out to someone else, like the director of OT or the director of IT to help them out?
DT: If there's a director of OT – if it even exists, which it often doesn't – typically by default, that group is assumed responsible for cybersecurity of the system as well. It's just kind of this assumption, like, “You maintain and operate it, right? You're the closest, so you’re the most capable of dealing with cybersecurity as well, regardless of what your cybersecurity training skills might be.”
PS: And this makes some of them very nervous because the skills are variable.
DT: That's right. That's probably another journey area, where they begin trying to patch as best they can, then they start to see how much work it is: they need a test lab, they need time to do it. Maybe they have resource shortages, they have turnover, they have retirements, and “more time” becomes their biggest bottleneck.
Once you're an expert in the technology, then you start moving into what the potential vulnerabilities are and how to address them. So I encourage people to understand the technology really, really well, and then start thinking about it differently: How do I make it more secure? How do I keep others out of it and configure it securely?
It's a shift in mindset: it's not just, “let's worry about keeping the system up and running.” Let's start thinking about, “How could I configure that? How can I make it more secure? Can I just remove old legacy protocols that I don't need, to reduce that attack footprint?” Because those are the things that are targeted, those things you don't need. They're little increments, and over time they add up.
You're reducing the vulnerability, so when you bring in an audit, it's like, “oh, you addressed that vulnerability because you disabled it. You had it out for service, you patched it, you dealt with a whole bunch of things.” It's that discipline that makes a difference over time without having an overwhelming, anxiety-inducing feeling to fix everything all at once.
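As an illustration of the attack-surface reduction DT describes, here is a minimal sketch that checks whether common legacy services are still listening on a host, as a starting point for deciding what to disable. The host address and port list are assumptions for the example, and any check like this should only be run against systems you own, during an approved maintenance window, since even simple connection attempts can disturb sensitive control equipment.

```python
# Minimal sketch: probe a host for legacy service ports that may be
# candidates to disable if unused. Host and port list are illustrative.

import socket

LEGACY_PORTS = {
    21: "FTP",
    23: "Telnet",
    139: "NetBIOS session",
    445: "SMB (verify whether SMBv1 is still enabled)",
}

def open_legacy_ports(host: str, timeout: float = 1.0) -> dict:
    """Return {port: service} for legacy ports that accept a TCP connection."""
    found = {}
    for port, service in LEGACY_PORTS.items():
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as sock:
            sock.settimeout(timeout)
            if sock.connect_ex((host, port)) == 0:
                found[port] = service
    return found

if __name__ == "__main__":
    host = "192.0.2.10"  # placeholder address (TEST-NET-1), not a real target
    for port, service in open_legacy_ports(host).items():
        print(f"{host}:{port} open ({service}) - candidate to disable if unused")
```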
PS: Let me ask a services question. A lot of plant teams are stressed for time, where there's never enough time to get the repair work done, much less cybersecurity, and they're trusting more and more in OEMs and other vendors to provide those kinds of services. I think a lot of our readers expect cybersecurity to be baked in, especially when they put new sensors onto an asset, like a vibration sensor. They want to make sure that the cybersecurity is already baked in there when it links to their network. How is Honeywell serving the kinds of customers who would simply look toward the OEM or partner to take that responsibility off of their plate?
DT: With the initial engagement, it would be the assessments, to help identify and prioritize. If we are then bringing in a solution, whether it's the control system or a cybersecurity solution, we would actually have cybersecurity requirements right into the design and engineering stage.
When you use the term “cybersecurity baked in,” there are a lot of features that are baked into the product, but it is your choice to enable or disable them, or use them. It's enabling those features where we get into the design and engineering, and then we agree with the customer on what needs to be done. Unfortunately, this doesn't take less time. It takes more time, because you need to verify that it's going to work: you're reducing compatibility, you're turning off things you don't need, so there is a cost footprint when you want to be more secure, because you have to test it. All of these security requirements will be part of the design and part of the acceptance testing, and the customer can tell us what they want us to do, or we can have it as part of the design process.
Then once that system is deployed, we can move into managed security services. If they struggle to get staffing at the facility and they don't want to deal with remote access, we can offer remote access as a service. We can help automate patching and antivirus, security and performance monitoring, and security compliance checking. Our latest is advanced monitoring and incident response, where we're actually looking for the patterns of a cyber-attack: say somebody's logging in, somebody's running executables, and we will monitor that on the customer's behalf, because they don't have the time or the funding to develop and build all that. We'll bring it in quickly, in a number of months, and then we can do the managed detection and response.
We're hoping you hear fewer of these stories where a threat gets inside the environment, goes undetected, and the damage happens while it's undetected. Instead, we see initial access, we see some behaviors where we can intervene and potentially help prevent it. Or if it does happen, we can respond to reduce the impact. Quick detection reduces likelihood, and good response reduces impact.
This story originally appeared in the November/December 2022 issue of Plant Services.
Big Picture Interview
This article is part of our monthly Big Picture Interview column.