Efficiency Through Security – Greg Hale | EPISODE #14


A wide-ranging conversation with Greg Hale, Editor and Founder of Industrial Safety and Security Source (ISSSource), about where we are today, how security relates to safety, how to sell security as improving efficiency and other topics.


READ THE FULL TRANSCRIPT HERE:

Intro: The Industrial Security podcast with Andrew Ginter and Nate Nelson, sponsored by Waterfall Security Solutions.

Nate: Welcome listeners to the Industrial Security podcast, my name is Nate Nelson, I’m sitting here with Andrew Ginter, the vice president of Industrial Security at Waterfall Security Solutions and he is going to introduce the guest and subject of today’s episode. Andrew, how are you this afternoon?

Andrew: Hello, Nate, and hello to all our listeners. Our guest today is Greg Hale, the editor and founder of Industrial Safety and Security Source, a publication that’s been going for nearly 10 years now. Greg is going to talk about the progress he’s observed and the challenges he still sees in the marketplace as a whole, from his perspective as a journalist covering industrial security among other topics.

Nate: Then let’s hop on over to Greg Hale.

Greg you’ve been watching and reporting on this industry for years, where did it all start for you?

Greg: Well, it’s funny, I launched the Industrial Safety and Security Source, or isssource.com, site in April 2010, but the genesis of the site actually started back when Y2K was really starting up, when people were freaking out over whether systems would work after the clocks ticked over to the year 2000. At that point, I realized how digital and cyber-connected we had all become. And then when 9/11 occurred, it really struck home that safety and security were going to become paramount to the industry. The catch was, as far as I saw, the main industry players really didn’t take security as seriously as they took safety. Yes, there were security professionals out there fighting the fight, but it was truly an uphill battle at the time. I had proposed starting up a safety and security publication with my company years later, maybe in like ’04, ’05, that time frame, but they just kept saying no. Eventually, I decided the manufacturing automation industry could start focusing on security at a higher level, so I went out and launched my site. And while safety was always a big deal, security hadn’t been as big a deal at that point.

And then when I launched in 2010, Stuxnet occurred and the Deepwater Horizon incident broke out, and those 2 epic incidents I thought would shape the safety and security industries for years to come. And just to add a little bit more, with Stuxnet, we ended up publishing a series of stories by Eric Byres, Joel Langill, and Andrew Ginter focusing on what Stuxnet was all about. And then we also published some stories by Richard Sale about who was behind the attack and why, and they were all truly fascinating stories. So, that’s kind of the background behind ISSSource.

Nate: It’s interesting to me that Greg brings up Y2K, because that was sort of about cybersecurity, but it was an imagined thing. So, I’m surprised that it actually influenced this real world that we’re talking about.

Andrew: Yeah, well, it is interesting that he brings it up. Most people talk about the industrial security space, the industrial security initiative, as having started roughly with the 9/11 attack; it was a physical attack, but it raised awareness about security enormously. The Y2K connection that I heard of had to do with the realization that so much of our infrastructure depended on these computers that could all malfunction at once, that realization of how dependent we were on the computers. In fact though, I learned just a couple of months ago, when I attended a presentation that Marty Edwards did on the history of the industrial security space, he flagged Eric Byres as one of the pioneers; I think Eric published the first peer-reviewed academic paper on the topic of industrial cybersecurity back in 1996, even before Y2K.

Nate: Would I be going too far to say that a good portion of cyber security has its origins in fiction? I’m just remembering now that the term ‘computer worm’ originates from a novel from 1975. Are there other examples you could think of where we sort of just come to realize that we’ve thought about this stuff before it actually happened to us?

Andrew: What I will say is that it’s not so much fiction as research. What I realized a few years ago is that anything anyone ever describes in one of these conferences, saying, “Hey, look, this kind of attack is possible,” you might look at it sideways and go, “Really? How real is that?” It’s theoretical; I wouldn’t call it fiction. But what I did realize is that, by the time any of us mere mortals hear about any of that stuff, odds are the bad guys, and even some of the high-powered militaries (you can decide for yourself if they’re good guys), are out there using pretty much everything that anyone’s ever talked about.

Nate: And before we get on to the next question, Greg mentioned you in his answer in connection with Stuxnet. What was that about?

Andrew: That was a paper that Eric and Joel and I put together. We put our heads together and said, “There’s been a lot of people analyzing the Stuxnet worm.” Symantec put a team together and analyzed the artifact at length, a bunch of other researchers looked at the artifact in detail, but nobody had really put the pieces together and said, “What does this worm mean for industrial control systems? If you let this thing loose on an industrial control system that’s defended the usual ways, what happens?” And so, we showed that given the features of the worm, given the capabilities of the worm, things like it blew through industrial firewalls like they weren’t there; it was nasty. So, the point is it was sort of an application paper that talked about the application of the worm: if it hits your network, what were you likely to see?

Nate: Let’s hop back into my interview with Greg.

Since those early days that you just told me about, can you reflect on what you’ve seen since then? So, the progress that’s been made or maybe the lack of progress, what have been the highlights?

Greg: Well, to me, it’s been a mixed bag. On one hand, there’s been great progress over the past 9 years and security awareness is through the roof. More companies are starting to create plans for security and some are even beginning to start programs. That’s a huge jump from years ago. More and more executives and their boards are demanding that something be done. When that happens, it does draw the attention of everyone in the company; that’s for sure. On the other hand, while security awareness is high, I still feel there is a classic paralysis by analysis going on. I feel many manufacturers are either thinking, “Hey, I’ve never been hit before, so why should I do anything,” or they feel, “Wow, this security thing is incredibly complicated, I really don’t know where to start.” Or the project mentality still exists, where they may implement a security project, but it starts and stops after the project parameters have been met. The idea that security is an ongoing thing goes against the mindset of today’s manufacturers. Also, it used to be that manufacturers didn’t demand security in their proposals, so they wouldn’t get it, and security would either be added in later or would be an additional cost on top of the original bid. Now, I’m hearing from some of my security sources that manufacturers are making sure security is built into the bid. While that may not appear like a big thing, I think that’s true growth. From what I’m told, that’s much like how safety was added into proposals years ago.

Nate: Now that Greg’s painted a picture for us, Andrew, how does your experience square with his?

Andrew: I was a little surprised at Greg’s characterization of the marketplace, but maybe it’s because of the people I work with. We’re a vendor; I work with customers who are deploying sophisticated security systems. I get a look at people who are doing really good stuff, and so the impression I get is, “Well, everyone I work with is doing really good stuff.” Greg is looking at things more from the outside. I thought it was interesting that he characterized it as, “Yeah, we’ve been doing this since about Y2K or 2003,” depending on how you count, or 1996, depending on how you count; we’ve been at this for a long time. I kind of got the impression from him, though, that there was still a lot of progress to be made, that the marketplace wasn’t as (what’s the right word?) ahead of the curve as my own personal experience with the people I work with would suggest. So, I thought it was interesting to get that looking-at-it-from-the-outside perspective.

Nate: So, Greg, you are the editor of ISSSource, Industrial Safety and Security Source, a name that rolls right off my tongue. As many of your articles are about safety as are about security. So, how are those 2 things, safety and security, connected?

Greg: Well, I will say Industrial Safety and Security Source is a mouthful, but it really does describe the topic. And speaking of your question, I often say security protects machines against man and safety protects man against machines. I always got the feeling from safety folks for years that their systems were immune from any kind of cyber-attack, and they would talk about how their systems were impenetrable to attack. And while they may be difficult to penetrate, they are not impenetrable; a security incident can lead to a safety incident. The perfect case in point is the Triton incident that occurred back in August of 2017 and broke to the public in December of 2017. The 2 areas, meaning safety and security, are so similar. They are both all about risk and understanding what you have to do to mitigate that risk. In safety, you have to do a process hazard analysis to understand risk and what to do to lower that risk level. As John Cusimano at aeSolutions has been showing, the same is true with a cyber process hazard analysis: it can show where your cyber risk is, and then you can work to remedy the situation. On top of that, one of the big debates in the safety industry for years was, should the safety system be independent of the control system, or should it be integrated into the control system while still working separately and independently? That was, and quite honestly still is, a very emotional subject, very hotly debated. But after the Triton incident, where a safety system and also a distributed control system were taken over and potentially controlled by an attacker, and the safety system subsequently tripped and shut down a Saudi Arabian petrochemical plant, there was an immediate outcry for a separate safety system. But in this day and age of the industrial Internet of Things environment, where there is an increased level of connectivity, all systems, separate or interconnected, are vulnerable. Even the Stuxnet case showed us an air-gapped system that was not connected to anything was truly vulnerable. And actually, if you really think about it, the Stuxnet incident was really a cyber-incident that was focused on causing a safety incident. But going back to Triton: it was a planned cyber-attack on a DCS and a safety system, with ill intent behind it. And not to be overly dramatic, but that means all systems, especially those in the critical infrastructure, need to remain on high alert at all times.

Andrew: So, Greg’s thrown out a lot of terminology here. The Triton attack was the one that targeted a safety system and shut down a refinery, and he talked about process hazard analysis. For anyone who’s not familiar with these safety instrumented systems: the purpose of a safety instrumented system, an SIS, a safety system for short, is to protect human life, to prevent human casualties. And so, they’re very important at these industrial sites. Every powerful tool is also a weapon; these industrial sites are very powerful tools, and so we have to be very careful around them. If I might digress for a second and just give you a contrast: if you look up the British Petroleum Texas City refinery explosion in 2005, there’s a lovely video on the web that the U.S. Chemical Safety Board put together about what went wrong at that site with regard to safety. And they said the site had a great program for worker safety, for personnel safety. Every stairwell had a sign in it saying, “Hold the handrail, we don’t want you to trip on the way down the stairs.” Every place that had anything overhead had “You’ve got to put your hard hat on” reminders, all of these sort of everyday things. But the organization, British Petroleum, was faulted for having a poor process safety program.

Personnel safety and process safety are 2 different things, and process hazard analysis has to do with process safety. Process hazard analysis asks, “What is an acceptable rate of failure? You can never reduce the rate of failure for these large processes to 0, so what’s an acceptable rate of failure? What’s an acceptable rate of casualties at this site?” And most people look at a number that’s something like, “Well, let’s say we have 1000 of these refineries worldwide. If we have an explosion at a refinery, one anywhere in the world, once a century, that might be a cost of having access to refined petroleum products that society worldwide is willing to pay.” So, you run the numbers and say, “Okay, if once a century is an acceptable rate for one of these failures worldwide and there’s 1000 sites, you’ve got to do the math. Any individual site can only fail, blow up and kill people, once every 100,000 years.” Once a century worldwide, 1000 sites, you multiply it together, it’s 100,000 years per site; oh, that’s getting pretty ambitious. Okay, now you look at all of the ways the site can blow up. Let’s say there are 1000 ways the site can blow up. You run the numbers: each one of those ways can only fail once every 1000 times 100,000 years, once every 100 million years. Okay, that’s extremely reliable.

And it’s these safety systems that are charged with monitoring and triggering shutdowns if they sense an unsafe condition. It’s these safety systems that have to be so reliable that they do not fail more than once every 100 million years. And you say, “Well, nothing lasts 100 million years, so really, I’m going to need at least 2 of these things running in parallel, so that if one of them fails, I can take it down and repair it while the other one is still working. How often do they fail? How long do they take to repair? How many of these do I need running in parallel to get that high a degree of confidence that this thing is going to last 100 million years before I get a combination of random failures and circumstances such that something bad happens?” This is done for every kind of failure in the system. It’s a long, very detailed process. This is process hazard analysis, and it’s focused on the process, not the people walking around or running down the stairs. You can’t trigger an explosion by running down the stairs, tripping and twisting your ankle.
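To make Andrew’s arithmetic concrete, here is a minimal sketch of the failure-budget calculation. The 1000 sites, 1000 failure modes, and once-a-century tolerance are the illustrative numbers from the conversation, not real process hazard analysis figures:

```python
# Back-of-envelope process hazard analysis failure budget,
# using the illustrative numbers from the conversation.

sites = 1000                          # refineries worldwide (illustrative)
failure_modes_per_site = 1000         # ways each site could blow up (illustrative)
tolerable_events_per_year = 1 / 100   # one catastrophe per century, worldwide

# Allowed failure rate per site, then per individual failure mode.
per_site_rate = tolerable_events_per_year / sites
per_mode_rate = per_site_rate / failure_modes_per_site

print(f"One failure per {1 / per_site_rate:,.0f} years per site")          # 100,000
print(f"One failure per {1 / per_mode_rate:,.0f} years per failure mode")  # 100,000,000
```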

Nate: Now to the meat of what Greg was talking about, the debate between independent versus integrated safety systems. Andrew, where do you fall on that?

Andrew: Well, I lean towards independent; a lot of people do. What we’re talking about here is the question, “Is the safety system connected to anything outside the safety system? Can you monitor it from outside, from your cell phone?” That cell phone example is sort of an extreme example of connectivity. A lot of people say, “Every flow of information is a potential attack, and therefore there had better not be any information passing between the safety system and the internet, no matter how indirectly, because that’s an attack path”; it’s the kind of thing Triton exploited. Other people say, “No, no, it’s no good if we can’t monitor it, we’ve got to be able to use the information, connectivity is essential.” This is the debate between the benefits of connectivity and the security risks of connectivity, and the argument is still going on, just like Greg said.

Nate: You’ve told me about where it all started for you and a little bit about what’s happened since then. Can you speak to what’s going on today? What are the hot topics around your water coolers, the hot technologies, the debates, that sort of stuff?

Greg: When you’re talking about today’s hot topics, the word ‘nuance’ comes to mind. And by that I mean, if you go to conferences, and again, I go to plenty of them, or you talk to security professionals, they’re all talking about the same things they talked about 8, 9, 10 years ago. The difference is the type of discussions they’re having; that’s where nuance comes in. Today, as it was years ago, people are talking about the people, process and technology area, they’re talking about the Purdue model, they’re talking about the IEC 62443 security standard and they’re talking about the IT/OT relationship and how they’re working together. Years ago, those subjects were all given surface-level discussions, but now the discussions are going much deeper and are more sophisticated. Along those lines, there are some technologies out there that are giving the industry something it has needed for a very long time, and that is visibility: visibility into what is going on over the network and, more importantly, visibility into what devices and networks the manufacturer has working at a facility.

I remember talking to one of the visibility vendors, and they were saying they went out to one plant and asked the plant manager how many devices they had working on the plant floor, and the answer was, “We probably have about 200 to 300.” And so, they plugged in their device, and it ended up being, I think, something like 10,000 devices working. That’s all stuff that had been added on over the years and they knew nothing about it. If you don’t know what you have, you sure can’t protect it, and I think that visibility is becoming more and more important today. Also, when you talk about hot debates, the IT/OT convergence issue is very interesting. Again, I like to tell the story about when I was at a supplier’s user group years ago, and when someone giving a presentation mentioned the IT department, he ended up being booed by everybody in the audience. The relationship at the time was rocky at best, but today, OT needs IT to ensure a secure environment, and IT needs OT to make sure everything’s done in the proper manner. That’s where IT’s confidentiality, integrity, and availability model comes into conflict with OT’s availability, integrity, and confidentiality environment. Keeping the system up and running is the lifeblood of any manufacturing system. And the IT/OT convergence, it’s going to happen, it’s happening, and it’s going to get even tighter and tighter over the years.
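Greg’s point that “if you don’t know what you have, you sure can’t protect it” is the problem the visibility vendors automate. A minimal sketch of the passive-discovery idea (a hypothetical illustration, not any particular vendor’s product), using Python and scapy to count the distinct devices actually talking on a network segment:

```python
# Passive asset discovery sketch: listen, never transmit --
# important on OT networks where active scanning can disrupt devices.
from scapy.all import sniff, Ether, IP

devices = {}  # source MAC address -> set of IP addresses seen

def note_device(pkt):
    if Ether in pkt:
        ips = devices.setdefault(pkt[Ether].src, set())
        if IP in pkt:
            ips.add(pkt[IP].src)

# Sniff for 5 minutes without keeping packets in memory.
sniff(prn=note_device, store=False, timeout=300)

print(f"Distinct devices observed: {len(devices)}")
for mac, ips in sorted(devices.items()):
    print(mac, sorted(ips))
```

Even a crude count like this is often how a plant that believes it has “200 to 300” devices discovers it actually has thousands.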

Andrew: So, Nate, Greg’s stories there about what things were like ten years ago very much agree with my own experience. I remember being at a conference on automation, not on security. So, you had all sorts of people coming by; you’d have process engineers coming by from some big company that uses automation, and I’d ask them, “Are you interested in security?”

“No, no, no, we’re not interested in cyber security. IT does cyber security, not us, that’s their responsibility. We’re looked after. Thanks anyway.” Okay. And a few hours later, the IT people from the same company would come by and I’d say, “Hi, let’s talk about cyber security. Are you responsible for cyber security in your big company?”

“Yep, yep, we’re responsible company-wide, we do cyber security company-wide.”

“Great. What are you doing on the control systems?”

“Oh, the control systems, oh, them, no, no, they’re special, we don’t handle them.” What’s the lesson? Engineering wasn’t doing security, that was IT’s job; IT wasn’t doing security, the control systems were too special; nothing was happening. These organizations were completely blind to it back then. Today, as Greg said, there are resources, there are standards, there’s stuff out there, there’s a conversation happening that’s much more informed and hopefully much more useful.

Nate: Having watched and written about all this for a decade now, where do you see the industry moving? If we’re not so far removed from the days when bringing up IT got you booed off a stage, where do you see us going 5, 10 years from now?

Greg: Well, one of the things I see as being different in 5 to 10 years is ensuring a secure supply chain, which is also what I see as another hot debate within the industry right now. You may be as secure as you can be, meaning you individually or your organization individually, but if your partners remain vulnerable or your vendors remain vulnerable, then you remain vulnerable. You can use the Target incident as a perfect case in point. Security only goes so far when you’re working with multiple partners, and beyond your partners, is your partners’ supply chain secure? Today, you may not even know the answer to that question. Security used to be, you need a hardened perimeter to fight off the bad guys; then it was the idea that they were going to find a way in, so security has to be stronger on the inside. But lately, I’ve been growing more aware of the idea of resiliency: the thought of being able to sustain an attack and not have that attack totally shut you down, where you continue on producing product. I think that’s going to be a huge issue moving forward.

At this point, I really don’t think manufacturers are there, but I’m thinking they will get there at some point in the next 5, 10 years, if not sooner. Plus, a holistic overarching security program is something I think will gain more traction. Again, we’re not there yet, but I think with the IIoT and the interest out there, we have to get there sooner rather than later. And while I’m not a technologist, and understanding that technology is not the total answer, I will say that with more advances in things like big data analytics and AI, technology will become stronger in the years to come. As it is with most things, humans need to play a big role and get a stronger grasp on what security is all about. And I’m not just talking security professionals here, I’m talking about everyone that’s working within a manufacturing environment; they have to really know what security is all about.

Nate: I’m interested in your having brought up the subject of resiliency. Could you perhaps elaborate on the distinction between security and resiliency and why thinking about resiliency in the first place may be helpful to solving some of our problems?

Greg: Well, I think the idea behind that is bend, but don’t break. Again, security years ago was always about, you’ve got to fight these guys off and just kind of have them bounce off the manufacturing environment. Well, we’ve learned that that’s not going to happen; if the bad guys want to get in, they’re going to get in. So, how do you defend against an attack, and how do you use proper techniques to protect yourself and become more resilient and not necessarily have everything shut down immediately? The zones and conduits model, I think, helps in terms of how you can become more resilient and how you can contain attacks in various areas or various zones.

Nate: Being such a hot topic in the industry, I’m surprised that resiliency hasn’t come up more on our podcast thus far.

Andrew: It is a hot topic, and it’s still in its infancy. The ideal for resilient systems is that they can suffer attacks, they can suffer compromise, and keep going. In the analogy they give from the physical world, the resilience of a spring has to do with how much you can deform the spring and still have it return to its original shape after the force applying the deformation relaxes. So, the whole idea is that you might slow down operations, you might impair operations, but you don’t break anything, and when you fight off the attackers, everything springs back to normal; this is the ideal. When I say it’s in its infancy, most systems are far from that ideal. The example everybody gives is the power sector; they say the power sector is highly redundant, highly resilient, but it’s been difficult to apply that model to most other industries.

Nate: I’m actually somewhat surprised that you say that the power grid is the shining example of what’s most resilient, and I recognize a sort of media bias in what I’m about to say, but the news stories you hear tend to be about power grids that maybe don’t show 100% resiliency. The example I always go back to (and of course this wasn’t actually a cyber-security incident, it was an accident of another kind) is the 2003 Northeast US power outage. Wouldn’t that be an instance of a power grid, an advanced power grid, showing poor resiliency?

Andrew: Yeah, I take your point. When people cite resiliency in the power grid, a lot of the time they’re talking about power generation, where the North American grid produces, at its peak, something like a terawatt, a trillion watts of power. On the generation side, the largest physical generator is like 850 megawatts; it’s less than a gigawatt, less than 1/10 of 1% of the generating capacity of North America. You can knock out 2 or 3 of these and the grid doesn’t even notice; about 3/10 of 1% of the capacity is knocked out if you knock out 3 of the biggest generators. That’s the example that’s given as the ideal for resiliency. The 2003 blackout was a cascading failure; it was a failure of the transmission grid, not a failure of generating capacity. And even then, the blackout is cited as an example of resiliency because, even though a lot of the lights went out, there was no physical damage to almost anything. The computers that were designed to prevent physical damage worked, and within 5, 8, 10 hours, whatever it took, everything was working again because the physical equipment hadn’t been damaged. So, this is the ideal. How to apply it to, I don’t know, a refinery or a water treatment plant, to me is more problematic. You might argue that you can segment your networks; you can have a segment of your network that has the optimization systems in it. And if, let’s say, ransomware falls in and takes out the optimization systems, well, you shut down the connection between that segment and anything else, you isolate the attack. And now, you’re still producing power, you’re still producing gasoline, you may just not be producing it quite as efficiently as in the past. If you’re producing, I don’t know, 30 million dollars’ worth of stuff per day, you might only make, and I’m pulling numbers out of the air, 5 million dollars profit when you could have made 6; this is the purpose of the optimization system. So, that’s an example of a kind of resiliency. But if you imagine an attack, a ransomware attack, getting right into the control network, now everything stops. The question of how to do resiliency, how to tolerate compromise, is very much an open question. It’s a hot topic, but there’s not a lot of answers out there right now.
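Andrew’s generation numbers are easy to check. A minimal sketch of the arithmetic, using the figures from the conversation (roughly a terawatt of North American capacity and an 850 megawatt largest single unit):

```python
# Grid-resiliency arithmetic from the conversation above.

grid_capacity_w = 1e12    # ~1 terawatt North American peak generation
largest_unit_w = 850e6    # largest single generator, ~850 megawatts

one_unit_share = largest_unit_w / grid_capacity_w
three_unit_share = 3 * one_unit_share

print(f"Largest unit: {one_unit_share:.3%} of capacity")    # ~0.085%, under 1/10 of 1%
print(f"Three units:  {three_unit_share:.3%} of capacity")  # ~0.255%, about 3/10 of 1%
```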

Nate: I take your points, but I think that something in what you said, at least in my interpretation, sort of demonstrates the limits of this term, resiliency. Because of course we could say, even in the example that we just talked about, the 2003 power outage, that no equipment was damaged, and that’s all great. But if your lights went off for days on end, I can imagine someone thinking, “Good for you, your equipment’s still fine, but my power went out for 3 days. How is this resiliency thing of any importance to me?”

Andrew: Well, this is just it. If the equipment’s damaged, if a high voltage transformer is damaged, it doesn’t work anymore; it caught fire, it’s gone, it’s finished. There is no worldwide inventory of high voltage transformers; you have to special order every one of them, and there’s a 6 to 9 month lead time. It’s not that you would suffer a 3-day outage; if the physical equipment is damaged, you might suffer a 6 or 9-month outage, or you might suffer rotating outages because you’ve got to move equipment around and share the equipment between the sites. This is the concept of resiliency in the grid. And there are voices out there pointing at the Ukraine example, where a distribution grid was targeted and the lights went out, and they fell back on analog controls, meaning they went to the affected substations, physically unplugged the computers and said, “That’s it, you’re done,” and they turned the power back on by taking a big physical switch, moving it from one position to the other and locking it into position; now you’ve got power flows again. It’s possible to imagine analog, physical, non-digital control for those processes; it’s much harder to imagine that for a refinery or for a pipeline. People talk about analog backups, but I don’t know how to do an analog backup for most physical processes. I don’t know how to say the computer controls the process normally, but there’s an automatic analog failover so that I can tolerate compromise; I just don’t know how to do that. So, we’re getting distracted here a bit from Greg and his commentary, but he’s right on the money. This is a topic that people are going to be talking about for a long time, in part because nobody knows how to solve the problem.

Nate: It occurs to me too that these separate questions that we’ve been talking about, the security and safety and then even the IT and OT, sort of connect to one another. Because when we’re talking about IT people in with ICS security, some of the issue is that, coming from information technologies, you don’t have to think about safety; it’s all about security. On the other hand, the people who are more used to industrial technologies know that safety is maybe their number one priority, even above security. Is there a helpful way to look at the balance between these 2 sort of factions, or am I simplifying the matter? Where do you land on all this?

Greg: Well, the IT side, as they’re getting more and more involved, and they are getting more and more involved, much to many people’s chagrin, they have to learn; they have to become a sponge for what the OT environment is all about, and they have to be able to pick up on and grasp the idea of safety. And they also have to make sure that what they’re doing is not going to compromise the safety environment. OT gets it, they know it. Now, if IT and OT have any kind of relationship, and I will say that relationship is much better today than it ever has been, then they’re going to be able to convey that message. IT can’t come in with their guns blazing saying, “It’s going to be our way or the highway”; that just can’t happen. But at the end of the day, everybody has to understand what this is all about. It’s about keeping the environment safe, it’s about keeping the environment secure, but it’s also about keeping the environment productive, keeping systems up and running, and as profitable as possible. And if there’s a constant state of conflict and lack of understanding between all areas, then it’s not going to happen.

Nate: Now what Greg said there to me recalls what we talked about in the Patrick Miller episode of our show.

Andrew: That’s right. He sounds to me like he’s echoing Patrick’s perspective, where IT is all about the opportunity that is possible from access to data, from moving data around, and of course from applying security mechanisms to protect the data, and OT is all about engineering risk. And especially when we look at the extreme ends of it, the cloud on one end, where we’re talking about big data analytics, and the safety systems on the other end, every organization, every IT/OT decision process, has to find a boundary, has to draw a line between those 2 extremes, however fuzzy, and say, “On one side, we’re going to manage this according to engineering disciplines; on the other, we’re going to manage it according to IT disciplines.” And that line, however crisp, however gray, whether it’s closer to the safety systems or closer to the cloud, is going to be unique to every organization, but the 2 perspectives have to be reflected. And people are talking about this; good things are happening here.

Nate: With these questions that I asked Greg back to back, it occurs to me: is it even a relevant conversation to have, to say that IT systems tend to be more resilient than OT systems?

Greg: I really don’t know, I haven’t really thought about resilience as it applies to IT systems.

Nate: Because theoretically, they’re always under attack and they do stay running. So, isn’t that the definition of the term?

Andrew: Well, take the example I gave a minute ago, ransomware. You infect something with ransomware and everything kind of stops because you’ve encrypted all the useful files and you can’t process the data anymore. But if you’ve been keeping nightly backups, you erase the machine, you restore from backup and you carry on. So, in that sense, resilience is not a black-and-white thing; it’s a gray spectrum as well. Taking the power grid down for 9 months is very, very bad. Taking it down for a day is not good; taking it down for four hours may be tolerable. On the IT side, what’s it take to identify the affected equipment, erase it, restore from backup? It’s a few hours. It’s not completely continuous, but it’s not a 9-month shutdown either. So, that’s a fancy way of saying maybe.

Nate: Okay, just a thought. Let’s jump back to my interview with Greg.

Your publication is in English, but I imagine you get around to other parts of the world as well. Can you compare what’s happening in North America with what you see in other parts of the world?

Greg: You’re right, we do have readership mainly in North America, but from everywhere in the world; you name the continent, I’ve probably got readers on it. But I’m thinking that the readership for our site from Europe is fairly strong. And from my interviews with various executives and security professionals, it seems like Europe is up there in security, because there’s also more regulation going on there. But also the Middle East, that region seems to be really keen and really picking up on security; they’re more in tune with security. And I’m not saying any other region is not, but the Middle East seems to be more tuned into what security is and they’re doing more about it. North America, as I mentioned before, really seems like it’s on the cusp of taking off in security. As I mentioned before, security awareness is just through the roof. I don’t think there’s any manufacturer out there that doesn’t have it as their number 1 or 2 topic, but there’s a kind of kicking of the tires going on. Yeah, more and more companies are doing things, but as a general rule, I think they could be doing better, whereas, like I said, I think the Middle East is much further ahead.

Nate: It doesn’t surprise me that he says that the Middle East is leading in this domain. Andrew, do you think that that’s the legacy of incidents like Stuxnet and Shamoon, Saudi Aramco?

Andrew: I’m not sure it is. I agree with his assessment. I would say that what you have in the Middle East is a very big awareness of physical threat; it’s a place of many conflicts. And in my experience, when I work with people in parts of the world where there is (what’s the right word?) tension, physical tension, the risk of physical conflict, just the whole awareness of security is higher. And when you work in a place like, I don’t know, Central Europe, where you haven’t had a war in 50 years, the awareness just isn’t there. People ask the question, “Why would anybody want to do that to us?” whereas nobody ever asks that question in the Middle East. What I thought was interesting was his comments on North America. I haven’t heard that perspective before, but he’s right about the 2 data points he put together. He said, “Look, there’s a lot of people talking about security, there’s a lot of people flagging security as a high concern, and nobody’s doing anything about it yet.” Those 2 can’t continue indefinitely; at some point, the high concern, the high awareness, the high level of discussion has to turn into a high level of action. It’ll be interesting to see if he’s right. If that happens, it’ll be exciting times.

Nate: So, my takeaway from this is that Canada and the US need to go to war, and then finally, we’ll take ICS security seriously.

Andrew: ‘Seriously’ doesn’t really work in that sentence there, Nate, but yeah, that’s the kind of thing. For example, another area that Greg didn’t mention: we see a heightened awareness of security on the edge of China’s nine-dash line, where China claims the South China Sea and the other nations around the edge have claims that overlap with China’s, and that source of conflict leads to a very heightened awareness of security, even of cyber-security. So, I think Greg very much has a point there.

Nate: Alright, I’ll keep brainstorming. I have no more formal questions for you, is there a parting thought that you can leave with our listeners?

Greg: I’ve been saying this for years and I’ll keep saying it for probably a few more years: security has always been poorly marketed. Originally, marketers played off the fear, uncertainty, and doubt, the FUD factor, and they sold security by calling it an insurance policy. Those messages wear out over a short period of time. I’m looking at security as something more than that: a business enabler. If you’re doing it right and you’re using the proper discipline, it’ll keep your system up and running, which means you’re more productive and more profitable. We’ve done stories showing that the top security players’ systems are up and running more frequently, and with less downtime, obviously, they’re more productive and more profitable. And, yeah, they’re putting more money into security, but it pays off in the long run. With the bottom-tier security players, the opposite is true. They’re not spending much on security, so there’s not that outlay, but they’re also having more downtime, they’re less productive, and they’re not as profitable. So, to me, it ends up being more of a no-brainer, but I understand why people don’t do it; I kind of get that. Over the years, I’ve asked everyone that I interviewed that question, “Do you see security as a business enabler?” In most cases, they just say yes, they get it, but they care more about keeping their names out of the headlines and making sure their systems stay attack-free. But if they don’t take that next step, then I don’t think they’re going to reap the rewards that will come if they have a secure environment. Plus, you throw the IIoT on top of that, and you really, really have to understand your security program. And once you do that, I think the profitability and the uptime are going to be that much greater.

Nate: Andrew, your final thoughts on Greg’s final thoughts?

Andrew: Well, I thought that was a great insight, that increased reliability means productivity, it means profitability, and that is something that security vendors and others, advocates for cyber security in the industrial space, should all emphasize. One way that I actually see that happening with many customers is through the level of understanding, the level of discipline, that a cyber-security program can impose on an organization. For example, for a while many years ago, I was selling and installing intrusion detection systems for industrial networks, and the owners and operators would be looking over our shoulders saying, “So, all this network traffic, what is that?” as we were running through the alerts and looking at different kinds of traffic. Inevitably, one of them looking over our shoulder would say, “That, what’s that?” and I’d explain that that’s Network Time Protocol, or something else, Domain Name Service. And they’d look at that and say, “That’s not supposed to be there, where’s that coming from?” And thinking about security, the fact that it’s not supposed to be there wasn’t necessarily a security vulnerability; it’s just not the way the system was designed. And so, digging into how things work helps people understand them better, helps people make them more reliable. In the same sense, every flow of information, whether it’s a physical flow through a USB drive or a laptop, or an online flow through a firewall, every flow of information can also be an attack. Understanding information flows, controlling information flows (one big kind of information flow we have is security updates), controlling all of this stuff to a higher degree, disciplining the flows of information in a plant, that increased understanding and increased discipline is going to contribute to reliability in the long run. Greg didn’t go into detail, but it really rang a bell; I thought that was a great insight.
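Andrew’s “that’s not supposed to be there” moment can be illustrated with a small sketch: compare observed traffic against the protocols the system was designed to use. Everything here is a hypothetical illustration; the capture file name and the allowlist of ports stand in for whatever a real plant’s design documents specify:

```python
# Flag traffic on a control network that isn't on the engineered allowlist.
from scapy.all import rdpcap, TCP, UDP

# Hypothetical allowlist of protocols the system was designed to use:
# Modbus/TCP (502), EtherNet/IP (44818), S7comm (102).
EXPECTED_PORTS = {502, 44818, 102}

for pkt in rdpcap("plant_capture.pcap"):  # hypothetical capture file
    layer = pkt[TCP] if TCP in pkt else (pkt[UDP] if UDP in pkt else None)
    if layer is not None and layer.dport not in EXPECTED_PORTS:
        print(f"Unexpected: destination port {layer.dport} -- {pkt.summary()}")
```

Hits on ports like 123 (NTP) or 53 (DNS) are exactly the “where’s that coming from?” conversations Andrew describes.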

Nate: Alright. With that, I’d like to thank Greg Hale, and I’d like to thank you, Andrew, for sitting down with me.

Andrew: Always a pleasure, Nate, thank you.

Nate: This has been The Industrial Security podcast, I’ll catch you all next time.
