
I don’t sign s**t – Episode 143

We don't have budget to fix the problem, so we accept the risk? Tim McCreight of TaleCraft Security in his (coming soon) book "I Don't Sign S**t" uses story-telling to argue that front line security leaders should not be accepting multi-billion dollar risks on behalf of the business. We need to escalate those decisions - with often surprising results when we do.


“It always comes down to can I have a meaningful business discussion to talk about the risk? What’s the risk that we’re facing? How can we reduce that risk and can we actually pull this off with the resources that we have?” – Tim McCreight

Transcript of I don’t sign s**t | Episode 143

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Hey everyone, and welcome to the Industrial Security Podcast. My name is Nate Nelson. I’m here as usual with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who is going to introduce the subject and guest of our show today. Andrew, how’s it going?

Andrew Ginter
I’m very well, thank you, Nate. Our guest today is Tim McCreight. He is the CEO and founder of TaleCraft Security, and his topic is the book that he’s working on. The working title is “We Don’t Sign Shit,” which is a bit of a controversial title, but he’s talking about risk. Lots of technical detail, lots of examples, talking about who should really be making high-level decisions about risk in an organization.

Nathaniel Nelson
Then without further ado, here’s your conversation with Tim.

Andrew Ginter
Hello, Tim, and welcome to the podcast. Before we get started, can I ask you to say a few words for our listeners? You know, tell us a bit about yourself and about the good work that you’re doing at TaleCraft.

Tim McCreight
Hi folks, my name is Tim McCreight. I’m the CEO and founder of TaleCraft Security. This is year 44 now in the security industry. I started my career in 1981 when I got out of the military, desperately needed a job and took a role as a security officer in a hotel in downtown Winnipeg, Manitoba.

Shortly after, I was moved into the chief security officer role for that hotel and others, and had an opportunity to move into security as a career path. And I haven’t looked back. I also decided I wanted to learn more about cybersecurity.

Holy smokes, in ’98, ’99, I took myself out of the workforce for two years, learned as much as I could about information systems, and then came back for the latter part of my career and have held roles as a chief information security officer in a number of organizations. So I’ve had the pleasure and the honor of being both in physical and cybersecurity for the past 40 some years.

Andrew Ginter
And tell me about TaleCraft

Tim McCreight
It’s a boutique firm with two taglines. Our first is “new skills from the old guard,” and we are here to help, give back and grow.

And it’s our opportunity to provide services to clients focusing on a risk-based approach to developing security programs. We teach security professionals how to tell their story and how to use the concepts of storytelling to present security risks and ideas to executives.

And finally, we have a series of online courses through our TaleCraft University, where there’s a chance to learn more about the principles of ESRM and other skills that we’re going to be adding to our repertoire of classes in the near future.

Andrew Ginter
And our topic is your new book. You know, I’m eagerly awaiting a look at the book. Can I ask you, before we even get into the content of the book, how’s it coming? When are we going to see this thing?

Tim McCreight
Yeah, well, thank you for asking. I had great intentions to publish the book, hopefully this year. Unfortunately, some things changed last year. I was laid off from a role that I had, and I started TaleCraft Security.

So sadly, my days have been absorbed by the work that it takes to stand up a business and get it up and running. And my hat’s off to all the entrepreneurs out there who do all of these things every day. I’m new to this. So understanding what you have to do to stand up a business, get it running, market it, run the finances, et cetera, it has been all consuming. So the book has unfortunately taken a bit of a backseat, but I’ve got some breathing room now. I’ve got into a bit of a rhythm.

Tim McCreight
It’s a chance for me to get back to the book and start working through it. And to me, it’s appropriate. It’s a really good time. If I’m following the arc of a story, this is the latter part of that story arc. So I get a chance to help fill in that last part of the story, my own personal story, and to put that into the book.

Andrew Ginter
I’m sorry to hear that. I’m, like I said, looking forward to it. We have talked about the book in the past. Let me ask you again, sort of big picture. You know, I’m focused on industrial cybersecurity. I saw a lot of value in the content you described to us as being produced. But can you talk about, you know, how industrial is the book?

We’re talking about risk. We’re talking about leadership, right? How industrial does it get? I know you do a podcast. You do Caffeinated Risk with Doug Leese, who’s a big contributor at Enbridge. He’s deep industrial. How industrial are you? How industrial is this book?

Tim McCreight
It spans around 40 years of my career, starting from, you know, physical security roles that I had, but also dealing with the security requirements for telecommunications back in the eighties into the nineties, getting ready for and helping with the security planning for the Olympics in the early two thousands, working into the cyberspace and understanding the value of first information security, then it turned into cybersecurity, then focusing on the OT environment as well, when I had a chance to work in critical infrastructure and oil and gas.

And then finally, you know, the consistent message throughout the book is this concept of risk, and that our world, when we first began this idea of industrial security back in the forties, needs to be brought up to where we need to be now from a professional perspective and how we view risk.

I do touch on and speak a little bit about the worlds that I had a chance to work in from an industrial perspective. The overarching theme, though, is really this concept of risk and how we need to continue to focus on risk regardless of the environment that we’re in.

And some of the interesting stories I had along the way, some of the, honest to God, some of the mistakes I made along the way as well. I’ve learned more from mistakes than I have from successes.

And understanding the things that I needed to get better at throughout my career. I’m hoping that folks, when they do get a chance to read the book, that they recognize they don’t need to spend 40 some years to get better at their profession. You can do it in less time and you can do it by focusing on risk, regardless of whether you’re in the IT, the OT or the physical space.

Andrew Ginter
So there is some industrial angle in there, but, like I said, industrial or not, I’m fascinated by the topic. I think I’ve beaten around the bush enough. The title, the working title, is “We Don’t Sign Shit.” What does that mean?

Tim McCreight
I came up with “We Don’t Sign Shit.” I have a t-shirt downstairs in my office that I got from my team at an oil and gas company I worked with. And Doug Leese was on the team as well.

And it really came down to this, the principle that for years, security was always asked to sign off on risk or to accept it or to endorse it or my favorite, well, security signed off on it, must be good.

Wait a second. We never should have. That never should have been our role. We never should have been put in a position where we had to accept risk on behalf of an organization because that’s not the role of security. Security’s role is to identify the risk.

Identify mitigation strategies and present it back to the executives so that they can make a business decision on the risks that we face. So in my first couple of weeks, when I was at this oil and gas organization, we had a significant risk that came across my desk and it was a letter that I had to sign off on. A brand new staff member came in and said, “Hi boss, I just need you to take a look at this.”

I’m like, “Hi, who are you? What team do you work on? And what’s the project you’re working on?” When I read this letter, I’m like, are you serious that we’re accepting a potential billion dollar risk on behalf of this organization? Why?

And like, “Well, we always do this.” Not anymore. And we went upstairs. We got a hold of the right vice president to take a look at this to address the risk and work through it. And as I continued to provide this type of coaching and training to the team there, I kept bringing up the same concept. Look, our job is not to sign shit.

That’s not what we’re here for. We don’t sign off on the risk. We identify what the risk is, the impacts to the organization, what the potential mitigation strategies are. And then we provide that to executives to make a business decision.

So when I did leave the organization for another role, they took me out for lunch and I thought it was pretty cool. The whole team got together and they created this amazing t-shirt and it says, “Team We Don’t Sign Shit.” So it worked, right? And that mindset’s still in place today. I have a chance to touch base with them often and ask how they’re doing. And all of them said the same thing: yeah, that mindset is still there. They’ve embraced the idea that security’s role is to identify the risk and present opportunities to mitigate, but not to accept the risk on behalf of the organization.

That was the whole context of where I took this book: wouldn’t it be great if we could finally get folks to recognize, no, we don’t sign shit. This isn’t our job.

Nathaniel Nelson
So Andrew, I get the idea here. Tim isn’t the one who signs off on the risk. He identifies it and passes it on to business decision makers, but I don’t yet see where the passion for this issue comes from, like why this point in the process is such a big deal.

Andrew Ginter
Well, I can’t speak for Tim, but I’m fascinated by the topic because I see so many organizations doing this a different way. In my books, the people who decide how much budget industrial security gets should be the people making decisions about whether these risks are big enough to address today, whether this is a serious problem. Because they’re the ones who have the business context. They can compare the industrial risks to the other risks the business is facing and to the other needs of the business, and make business decisions.

When you have the wrong people making the decisions, there’s a real risk that you make the wrong decisions, because the people executing on industrial cybersecurity do not have the business knowledge of what the business needs. They don’t have the big picture of the business, and the people with the big picture of the business do not have the information about the risk and the mitigations and the costs. And so each of them is making the wrong decision. When you bring these people together and the people with the information convey it to the people with the business knowledge, now the people with the business knowledge can make the right decision for the business.

And again, the industrial team executes on it. If you have the wrong people making the decision, you risk making the wrong decision.

Andrew Ginter
So let me ask, I mean, you take a letter into an executive, you do this over and over again in lots of different organizations. How is that received? How do the executives react when you do that?

Tim McCreight
So, I mean, my standard approach has always been, and I use this as my litmus test: if, in the role I play as a chief security officer or CISO, you’re asking me to accept risk, I come back. And the first question I’m going to ask is, if this is the case and you’re asking me to do this, I’m going to say no. Invariably the room gets really quiet.

People start recognizing, oh, he’s serious. Yeah. Because I have no risk tolerance when it comes to work. I would be giving everybody paper notebooks and crayons and I’d want them back at the end of the day. So I don’t have any tolerance for risk. But to test my theory, when I ask executives, if you’re saying that my role is to sign off on this, then I’m not going to, does that stop the project?

It never does. So the goal then is to ensure that the executives understand it’s their decision, and it’s a business decision that has to be made, not a security decision because my decision is always going to be, I start with no and I’ll negotiate from there.

But when we look at what the process is that I’ve provided and others have followed: I’ll bring the letter with the recommendations to the business for them to review and to either accept the risk, sign off on it, or find me an opportunity to reduce the risk.

That’s when I start getting attention from the executives. So it moves from shock, to he’s serious, to, okay, now we can understand what the risk is. Let’s walk through this as a business decision. That’s when you start making headway with executives, taking that approach.

Andrew Ginter
So, I mean, that sounds simple, but in my experience, what you said there is actually very deep. I mean, I’m at the end of a long career as well, and I’ve never been a CISO. And in hindsight, I’ve come to realize that, bluntly, I’m not a very good manager.

Because when someone comes to me, it doesn’t matter, anyone outside my sphere of influence, my scope of responsibility, saying, hey, Andrew, can you do X for me?

Whenever one of my people comes to me with an idea saying, hey, we should do Y, my first instinct is, what a good idea. Yeah, yeah.

Whereas I know that strong managers, their first instinct is no. And now whoever’s coming at us with the request or with the idea has to justify it, has to give some business reasons.

Again, this is deep. It’s a deep difference between you and people like me.

Tim McCreight
Yeah, well, it is, and don’t get me wrong, there’s an internal struggle every time I’ve worked through these types of requests, where I want to help people too, but I understand the path you’ve got to take and how you have to get the business to understand it, accept it and move forward with it. It’s different, right? This is why some great friends of mine that I’ve known for years, they’re technically brilliant. They have some amazing skills. Like, honest to God, I stopped being a smart technical person a long time ago, and I’ve relied on wizards to help move the programs forward.

And I’ve chatted with them as well, and they’re similar to you, Andrew. They’ve got great technical skills. They’ve been doing this for a long time. And one of the folks I chatted with said, I just can’t give myself the lobotomy to get to that level. I’m like, oh, my God. Okay, fair enough.

And I get it, but the way I’ve always approached this, it’s different, right? So I take myself out of the equation of always wanting to help everybody, to how can I ensure that I’m reducing the risk?

And if I can get to those types of discussions and have them with executives, for me, that’s where I find the value. So all of the work I’ve done in my career to get to this space, the amazing folks that I’ve met along the way, the teams that I’ve helped build, the folks I still call on to mentor me through situations,

It always comes down to, can I have a meaningful business discussion to talk about the risk? And then it takes away some of the emotional response. It takes away that immediate, I need to help everybody do everything because we can’t.

But it gives us a chance to focus on what the problem is. What’s the risk that we’re facing? How can we reduce that risk? And can we actually pull this off with the resources that we have? So yeah, I get it. Not everybody wants to sit in these chairs. I’ve met so many folks throughout my career that they keep looking at me going, Jesus, Tim, why would you ever want to be in that space?

Why would you ever accept the fact that they’re trying to hold you accountable for breaches or for events or incidents? And I challenge back with: for me, it’s that opportunity to speak in a business language, to get the folks at the business level to appreciate what we bring to the table, whether it’s in OT security, IT, physical or cyber.

It’s a chance for all of us to be represented at that table, at that level, but with a business focus. So for me, that’s why I kept looking for these opportunities: can I continue to move the message forward that we’re here to help, but let’s make sure we do it the right way.

Andrew Ginter
So, fascinating principles. Can you give me some examples? I mean, TaleCraft is about telling stories. Can you tell me a story? How did this work? How did it come about? What kind of stories are you telling here?

Tim McCreight
So there’s a lot that I’ve presented over the years, but a really good one is when I was working with Bell Canada many years ago. We were awarded the communication contract and some of the advertising and media supporting contracts for the 2010 Olympics in Vancouver.

And I was working with an amazing team at Bell Canada. Doug Leese was on the team as well, reporting into the structure. So it was very cool to work with Doug on some of these projects. The team that was putting in place the communication structure decided they wanted to use the first instance of commercial voice over IP. It was called hosted IP telephony.

And it was from Nortel. If folks still remember Nortel, it was from Nortel Networks. We looked at the approach that they were taking, how we were going to be applying the technology to the Olympic Village, et cetera.

Doug and the team did this amazing work when the risk assessment came across: they were able to intercept a conversation, decrypt the conversation and play it back as an MP3 file.

You could actually hear them talking. At the time it was the CEO calling his executive assistant to order lunch. And we had that recorded. You could actually hear it. It was just as if they were speaking to you.

So that’s a problem when you’re trying to keep secure communications between endpoints in a communication path. We wrote up the risk assessment. We presented it to the executives. We presented the report up my chain and it was simple.

Here’s the risk. Here’s the mitigation strategy. We need a business decision for the path that we wanted to take. And that generated quite the stir. My boss got back to me and said, well, we have to change the report. No, I said, no, we don’t. We don’t change this shit. We just move it forward.

We’ve objectively uncovered the risk. The team did a fantastic job. And here’s an attached recording if you want to hear it, but let’s keep moving forward. So it went up to the next level of management and same thing. Would you alter the report? No, no I would not.

Move on, move on. Finally got to the chief security officer. And I remember getting the phone call. It’s like, well, Tim, this is, this is going to cause concerns. No, it’s a business decision. It isn’t about concerns. This is a business decision. And what risk is the business willing to accept?

So he submitted the report forward. Next thing, I’m getting a call from an executive office assistant telling me that my flight is going to be booked for the next day. I’ll be flying out to present the report. Like, Jesus Christ. So, all right, I got on a plane and headed out east.

Waited forever to talk to the CEO at the time. And all they asked was: is this real? Would you change this? I said, no, the risk is legitimate.

And here’s the resolution. Here’s the mitigation path. Here’s the strategy. So they asked how much we needed, what we needed for time. It was about six months’ worth of work with the folks at Nortel to fix the problem. And all of that to state that had we done this old school many years ago, we would have just accepted the risk and moved forward with it.

That wasn’t our role. That’s not our job, right? In that whole path, that whole risk assessment needed to be presented to the point where executives understood what could potentially happen. We already proved that it could, but they needed to understand: here’s the mitigation strategy. We found a way to resolve it.

We need this additional funding, time and resources to fix the problem. So that stuck with me. That was over 20 years ago. And that stuck with me because had I altered my report, had I taken away the risk, had he accepted it on behalf of the security team, we don’t know what could have happened to the transmissions back and forth at the Olympics.

But I do know that in following that process, you never read about anyone’s conversations being intercepted at the 2010 Olympics, did you? It works. The process works, but what it takes is an understanding that from a risk perspective, this is the path that we have to take.

It’s not ours to accept. You have to make sure you get that to the executives and let them make that decision. Those are the stories that we need folks to hear now, as we move into this next phase of developing the profession of security.

Andrew Ginter
So Nate, you might ask: the CEO had a conversation intercepted, ordering lunch. Is this worth the big deal that it turned into? And I discussed this offline with Tim, and what he came back with was, Andrew, think about it. Imagine that you’re nine days into the 10-day Olympics, or two weeks, whatever it is.

And someone, pick someone, let’s say the Chinese intelligence is found to have been intercepting and listening in on all of the conversations between the various nations, teams, coaches in the various sports and their colleagues back in their home countries.

They’ve been listening in on them for the whole Olympics. What would that do to the reputation of the Olympics? What would that do to the reputation of Bell Canada? This is a huge issue. It was a material cost to fix. It took six months, and he didn’t say how many people and how much technology.

But this is not something that the security team could say, “Okay, we don’t have any budget to fix this, therefore we have to accept the risk.” That’s the wrong business decision.

When he escalated this, it went all the way up to the CEO, who said, yeah, this needs to be fixed. Take the budget, fix it. We cannot accept this risk as a business. That’s a business decision the CEO could make. It’s not a business decision he could make with the budget authority that he had four levels down in the organization.

Andrew Ginter
So, fascinating stuff. Again, I look forward to stories in the book. But you mentioned stories at the very beginning when you introduced TaleCraft. Can you tell me more about TaleCraft? How does this idea of storytelling dovetail with the work you’re doing right now?

Tim McCreight
When I was first designing this idea of what TaleCraft could be, we reached out to a good friend of ours here in Calgary, Mike Daigle. He does some amazing work. He spent some time just dissecting what I’ve done in my career and what I’ve accomplished. More importantly, some of the things that he wanted to focus on from a company perspective.

And one of the parts he brought up, and this is how TaleCraft was created, the word “tale”, was that I spend a significant amount of my time now telling stories, and it’s to help educate and to inform, stories to influence and to provide meaning and value to executives.

But the common theme for all of this has been this concept of telling a story. One of the things I found throughout my career is that as security professionals move through the ranks, as they begin at junior levels, moving into their first role in management and moving into director positions and eventually chief positions, the principles and the concepts of being able to tell a story or to communicate effectively with executives,

I found that some of my peers weren’t doing a great job. I don’t know about you, Andrew, but if you sit in a presentation that someone’s giving and all you’re reading is the slide deck, Jesus, you could just send that to me. I’ve got this. I don’t need to spend time watching you stagger through a slide deck, or through slides that have a couple of thousand words on them that you’re expecting us to read from 40 feet away.

It doesn’t happen. So what really bothered me is that we started losing this skill set of being able to tell a story, and to effectively use the principles of storytelling to provide input to executives, to make decisions for things like budget or resourcing or allocating staff resources, et cetera.

So that’s one of the things that we do at TaleCraft: we teach security professionals and others the principle and the concept of storytelling, and how the story arc works, those three parts to a story arc that we learned as kids: the beginning of the story, the middle where the conflict occurs and the resolution, and finally the end of the story, when you’re closing off and heading back to the village after you’ve slayed the dragon. Those three things that we learned as kids still apply as adults, because we learn as human beings through stories. We have for hundreds of years, thousands of years, used oral history as a way to pass a story from one generation to the next.

We can use the same skill sets when we’re talking to our executives, when we’re explaining a new technique to our team, or when we’re giving an update in the middle of an incident and how you’re going to react to the next problem and how you’re going to solve it.

Those principles exist. It’s reminding people of what the structure is, teaching people how to follow the story arc when they’re presenting their material, taking away the noise, the distractions and everything else that gets in the way when listening to a story, and focusing on the human.

And that’s one of the things that we’re doing here at TaleCraft: we’re teaching people to be more human in their approach, and the techniques work. My wife is up in Edmonton doing a conference right now, the CIO Conference for Canada.

And she actually asked me to, this is a first folks, for all those of you who are married, what what kind of a progress I’ve made. My wife actually asked if I could dissect her presentation and help her with it. I thought that was pretty amazing. We restructured it so that she was able to use props.

She brought in a medical smock and and a stethoscope to talk about one of the clients that she worked with. And it sounds like it worked because she got some referrals for folks in the audience and she’s spending time right now talking to more clients up in Edmonton. So yeah, I crossed my fingers I was going to get through that one and it seemed to have worked. But these principles of telling a story, if you have a chance to understand how a story works and you’re able to replicate that in a security environment, all of a sudden now you’re speaking from a human to a human.

You’re not bringing in technology. You’re not talking about controls. You’re not spewing off all of these different firewall rules that we have to go through. Nobody cares about that stuff. What they want to hear is what’s the story and can I link the story to risk?

And at the top end of that arc, can I provide you an opportunity to reduce the risk and then finish the story by asking for help? If we can do that, those types of presentations throughout my career, that’s when I’ve been the most successful: when I can focus on the story I need to tell, get the executives to be part of it and focus on the human reaction to the problem that we have.

That’s one of the things that we’re teaching at TaleCraft.

Andrew Ginter
So that makes sense in principle. Let me ask you. I mean, I do a lot of presentations. I had an opportunity to present on a sort of abstract topic at S4, which is currently the world’s biggest OT security-focused conference. And, if you’re curious, the title was “Credibility Versus Likelihood.” So, again, a very abstract, risk-type topic.

And the the the advice I got from Dale Peterson, the organizer, was, “Andrew, I see your slides. You can’t just read the slides. You’ve got to come to this presentation armed with examples for every slide, for every second slide.”

Tim McCreight
Yep.

Andrew Ginter
“Get up there and tell stories.” So I would give examples. Sometimes they would be attack scenarios. Is that the same kind of thing here?

Tim McCreight
It is, I think. And congratulations for being asked to present at that conference. That’s amazing. So kudos to you. That’s awesome, Andrew. That’s great to hear. But you’re right. You touched on one of the things that a lot of presentations lack: the credibility, or how I view the person providing the presentation. Do they have the authority? Do I look at them as someone who’s experienced and understands it?

And you do that by telling the story and providing an example for, let’s say, an attack scenario where you saw how it unfolded, how you were able to detect it, how you were able to contain it, eradicate it, recover back. Those are the stories that people want to hear because it makes it real for people. Providing nothing but a technical description of an attack, or bringing out, as an example, a CVE and breaking it down by different sections on a slide, oh my God, I would probably poke my eye out with a fork.

But if you walk me through how you identified it, the work that you guys did to identify it, to detect it, to contain it, to eradicate it, and then recover, if you can walk me through those steps from a personal example that you’ve had, that to me is the story.

And that’s the part that gets compelling: now you’ve got someone who’s got real-world experience, expertise in this particular problem. They were able to solve it and they provide it to me in a story. So now I can pick up those parts. I’m going to remember that part of the presentation because you gave me a great example, which is really, you gave me a great story. Does that make sense?

Andrew Ginter
It does, to a degree. Let me distract you for a moment here. I’m not sure this is the same topic, but I’ve, again, I’ve written a bit on risk.

Tim McCreight
Okay.

Andrew Ginter
You know, I’ve tried to teach people a bit about what risk is, how you manage risk, especially in critical infrastructure settings. And I find that a lot of risk assessment reports are, it seems to me, not very useful. They’re not useful as tools to make business decisions.

You get a long list of: you still have 8,000 unpatched vulnerabilities in your OT environment, any questions? To me, what business decision makers understand more than a list of 8,000 vulnerabilities is attack scenarios.

And so what I’ve argued is that every risk assessment should finish, or lead if you wish, with what in physical security, and you’re probably more familiar with this than I am, is called a design basis threat: a description of the most capable attack you must defeat, that you’re designed to defeat with a high degree of confidence.

And you look at your existing security posture and decide this class of attack we defeat with a high degree of confidence. These attacks up here, we don’t have that high degree of confidence.

And what I’ve argued is you should tell the story. Go through one or two of these attack scenarios and say, here is an attack that we would not defeat with a high degree of confidence. Is it acceptable that this attack potential is out there? Is that an acceptable risk?

Is that the kind of storytelling we’re talking about here, or have I drifted off into some other space?

Tim McCreight
No, I think you’ve actually applied the principles of telling a story to something as complex as identifying your particular response, or your organization’s response, to either an attack scenario or a more sophisticated attack scenario. So no, I think you’ve nailed it.

What it does, though, in the approach that you just talked about, is it gives a few things to the business audience. One, you have a greater understanding of the assets that are in place and how they apply to the business environment, right? Whether it’s in a physical plant structure for OT or whether it’s a pipeline, et cetera.

If you understand the environment that is being targeted, understand the assets that are in place and the controls that you have in place, that gives you a greater understanding and foundation for what the potential risk is.

By telling the story then of what a particular attack scenario looks like, and whether you have a level of confidence that you’d be able to protect against it, you’d be able to walk through the different parts of the story arc.

This is the context of the attack. This is what the attack could look like. Here’s how we would try to resolve it if we can. And then here’s the closing actions that we would be focused on if the attack was either successful or unsuccessful.

So all of those things, I think, apply to the principles of telling a story. What you’ve given is a great example of how to take something that’s very technical, or the typical risk assessment I’ve seen in my career where, “Andrew, here’s your 200-page report, the last hundred pages are all the CVEs we found.”

And let us know if you need any help. Well, that doesn’t help me. But if you walk me through a particular example where, here in this one set of infrastructure, we’re liable or we’re open to this type of attack,

I think that’s amazing because it gives the executives the story they need. You understand the assets. Here’s the risk. Here’s the potential impact. Here’s what we can and cannot do to defeat or defend against this.

And then we need your help if this is a risk that you can’t accept. So no, I think you’ve covered all parts of what would be an appropriate story arc for using that type of approach. And honest to God, if you could get more folks to include that in reports, I would love to see that, because I’m like you, I have read too many reports that don’t offer value.

But the description you just provided and the way we break it down, that offers huge value to executives moving forward.

Nathaniel Nelson
Tim’s spending a lot of time emphasizing the importance of storytelling in conveying security concepts to the people who make decisions. Andrew, in your experience, is this sort of thing something you think about a lot? Do you frame your information in the same ways that he’s talking about, or do you have a different sort of approach?

Andrew Ginter
This makes sense to me. It’s sort of a step beyond what I usually do. So I’m very much thinking about what he’s done and how to use it going forward. But just to give you an example, close to a decade ago, I came out with a report, the “Top 20 Cyber Attacks on Industrial Control Systems.”

And it wasn’t so much a report looking backwards saying what has happened. It’s a report looking at what’s possible, what kind of capabilities are out there. And I tried to put together a spectrum of attack scenarios with a spectrum of consequences. Some of the attacks were very simple to carry out and had almost no consequence.

Some of them were really difficult to carry out and would take you down hard and cost an organization billions of dollars or dozens of lives. And everything in between.

And I did that because, in my experience, business decision makers understand attack scenarios better than they understand abstract numeric risk metrics or lists of vulnerabilities.

But I described it as attack scenarios. In hindsight, I think what I was really doing there was telling some stories, and I need to update that report.

I’m going to do it by updating it to read in more of a storytelling style so that people can hear stories about attacks that they do defeat reliably and why, and attacks that they probably will not defeat with a high degree of confidence and what the consequences would be, so that they can make these business decisions.
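[A brief aside for readers who want something concrete: the “spectrum of attacks” Andrew describes can be tabulated. The sketch below is purely illustrative and is not taken from the Top 20 report; the scenario names, sophistication scores and the design basis threat threshold are all invented to show the shape of the exercise.]

# Illustrative sketch only: tabulating a spectrum of attack scenarios against a
# design basis threat (DBT). Scenario names, scores and the DBT threshold are
# invented for illustration, not taken from the report discussed above.
from dataclasses import dataclass

@dataclass
class AttackScenario:
    name: str
    sophistication: int   # 1 = trivial, 10 = nation-state grade
    consequence: str      # plain-language business consequence

scenarios = [
    AttackScenario("Commodity ransomware via phishing", 2, "IT outage, minor OT spillover"),
    AttackScenario("Targeted ransomware reaching OT", 5, "Days of lost production"),
    AttackScenario("Insider abusing remote-access credentials", 6, "Unsafe process manipulation"),
    AttackScenario("Every employee bribed by a foreign power", 10, "Total loss of control"),
]

DESIGN_BASIS_THREAT = 6  # most sophisticated attack we commit to defeating with high confidence

for s in sorted(scenarios, key=lambda s: s.sophistication):
    status = ("defeat with high confidence" if s.sophistication <= DESIGN_BASIS_THREAT
              else "escalate: business must fund mitigation or accept")
    print(f"{s.name:45} | {s.consequence:30} | {status}")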

Nathaniel Nelson
Yeah, and that sounds nice in theory, but then I’m imagining, you tell your nice story to someone in the position to make a decision with money and they come back to you and say, well, Andrew, your story is very nice, but why can’t we defeat all of these attack scenarios with the amount of money we’re giving you?

What do you tell them at that point?

Andrew Ginter
That is a very common reaction, saying, “You’ve asked us where to draw the line. We draw the line above the most sophisticated attack, fix them all.” And then I explain what that’s going to cost.

They haven’t even really paid attention to the attack scenarios. They haven’t even asked me about the attack scenarios. I’ve just explained the concept of a spectrum. They said, yeah, put the line at the very top, fix them all. And then you have to explain the cost.

And they go, “Whoa. Okay, so what are these?” And they ask in more detail and you give them the simplest attack, the simplest story that you do not defeat with a high degree of confidence.

And you ask them, is that something we need to fix? And they say, “Yeah, that’s nasty. I could see that happening, fix that. What else do you got?” And you work up the chain and eventually you reach an attack scenario or two where they look at it and say, “That’s just weird.”

I mean, let me give you an extreme example. Imagine that a foreign power has either bribed or blackmailed every employee in a large company. What security program, what policy can the CEO put in place that will defend the organization? Well, there isn’t one. Your entire organization is working against you. Is that a credible threat? The business is probably going to say, no, this is why we have background checks.

A conspiracy that large, the government is going to come in and arrest everyone. That’s not a credible threat. And so the initial reaction might be, yeah, fix it all. Draw the line across the very top of the spectrum.

And when it becomes clear that you can’t do that, this is where you dig into the stories, and they have to understand the individual scenarios. And they will eventually draw the line and say, “These three here that you told me about, fix them.” The rest of them just don’t seem credible.

That’s the decision process that you need to go through. And you need to describe the attacks. And I think the right way to describe the attacks is with storytelling.
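[Again purely as an illustration, with invented scenario names and an invented credibility cutoff, the “walk up the spectrum” conversation Andrew describes can be thought of as stepping through the attacks that are not defeated with confidence, simplest first, and recording where the executives draw the line.]

# Illustrative sketch only: walking executives up the spectrum of attacks not yet
# defeated with confidence. Names and the credibility cutoff are invented; in
# practice the decision on each scenario comes from the business, not from code.
undefeated = [
    ("Targeted ransomware reaching OT safety systems", 7),
    ("Supply-chain implant in a vendor software update", 8),
    ("Insider recruited by organized crime", 9),
    ("Every employee bribed by a foreign power", 10),
]

CREDIBILITY_CUTOFF = 8  # above this, executives judge the scenario "just weird" / not credible

for name, sophistication in sorted(undefeated, key=lambda x: x[1]):
    decision = ("fund the fix" if sophistication <= CREDIBILITY_CUTOFF
                else "judged not credible - accept residual risk")
    print(f"{name}: {decision}")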

Andrew Ginter
So, I mean, this all makes great sense to me. I mean, this is why I asked you to be a guest on the podcast. But let me ask you about sort of the next level of detail at TaleCraft. If, I don’t know, a big business, a CISO, says TaleCraft makes sense to me and they bring you in, what do you actually do? Do you run seminars? Do you review reports and give advice? What does TaleCraft actually do if somebody engages with you?

Tim McCreight
So there are a couple of things that we can offer to organizations that bring us in, from a TaleCraft perspective. First, let me talk about storytelling. What we offer from the storytelling approach is we will go to the client site.

We will run workshops, anywhere from a four-hour workshop to a two-day workshop. We will bring in team members from the security group, as well as others that the security team interacts with. We’ll go over the principles of storytelling and the concepts of storytelling, how to be more mindful in your public speaking and in your preparation.

And we’ll spend the first day going through the theory and the concepts of telling a story and becoming a better public speaker. Then on the second day of the workshop, we ask all participants to stand up for up to 10 minutes and present their stories.

At the end of each one of the sessions, we provide positive feedback and give them opportunities to grow and experience more storytelling opportunities. And then we close out the workshop. We provide reports back to each of the individuals on how we observed them absorbing all of the content from day one, and then offer opportunities for individual mentoring and coaching along the way.

So that’s one of the first services we offer. The second, as we come into organizations, if a CISO or CSO contacts us and asks us for assistance, we can do everything from helping them redesign their security program using the principles of enterprise security risk management, review the current program that they have today, assess the maturity of the controls that they have in place, and identify risks that are facing the organization at a strategic level. And then we can come in and help them map out and design a path to greater maturity by assessing the culture of security across the organization as well, where we go out and interview stakeholders from across the organization, from different departments, different divisions, and different levels of employees in the organization, and identify their perception of security, the value that security brings to the organization, and how the security team can become greater partners and trusted advisors to the company. That’s part of the work that we do at TaleCraft Security.

Andrew Ginter
I understand as well that you’re working with professional associations. I mean, I know that in Canada, there’s the Canadian Information Processing Society. It’s not security focused. Security is an aspect of information processing in the IT space.

In Alberta, there’s APEGA, the Association for Professional Engineers, Geologists, Geophysicists. I would dearly love to see these professions embrace cybersecurity and establish professional standards for practitioners for what is considered acceptable practice so that there is sort of a minimum bar.

So tell me, you’re working with these folks. What is it that you’re doing? How’s that going?

Tim McCreight
Yeah, so I’ve been thinking about this for probably the last 20-some years, and it always bothered me that for the security director, the CISO, et cetera, in an organization, if they did get a chance to come to a board meeting or to be invited to talk to executives, you got a 45-minute time slot. Most times it was less. You had a chance to drink the really good coffee, and then you were asked to leave the room, and that was your time.

Whereas your peers who were running other departments across the organization, in legal, finance, HR, et cetera, they stayed the entire weekend to help map out the strategy for the organization. Yet we weren’t invited to that party.

And that kind of annoyed me for the last 20-some years. So I took it upon myself to begin a journey, and I brought some folks along with me. There are about 15 of us now working on the concept of designing and developing the profession of security, focusing on Canada first, and then working through the Commonwealth model to all those countries that follow the Commonwealth parliamentary system.

And it made sense to me. I couldn’t do much work when I was the president of ASIS in 2023. I didn’t want to have any perceived conflict of interest in anything that I was doing. But what we looked at from this concept of designing the profession of security, it’s an opportunity for those who call this our profession and want to be recognized as such to borrow some of the great work that CIPS has done and that APEGA has done here in Alberta, CIPS across the country, to recognize the path that they took, how they were recognized and established, how they developed their charters, et cetera.

So we’ve had an opportunity to chat with some folks from CIPS, but also to look at the work that they’ve done. And I’ve had a chance to review APEGA, and it made sense to me. So now, spin forward to 2025. We have a group of individuals who are focused on designing and developing what we consider to be a model that will provide a professional designation for security professionals in Canada.

It’s an opportunity to demonstrate your expertise and your body of knowledge. It’s an opportunity to take all of the designations that you’ve received from groups like ISC2, ISACA, ASIS, et cetera, and use them as stepping stones to the next level, where you’re accepted as a professional designation, so that a security designation, whatever we land on for the post-nominals, would be recognized the same as an engineer or as a doctor or potentially a lawyer.

It gives us the validation of the work that we do. It gives us the recognition of the value that security brings to an organization. And it ties together OT, IT, cyber, physical, all of the different parts that make up security. And it’s a chance for us to come under one umbrella. So the way I describe it is that, for years, I said I ran a department, it just happens to be security. Now we can say I’m a security professional and my expertise is in OT security or in forensics or in investigations or in crime prevention through environmental design.

It gives us an umbrella designation for security and a chance to specialize. So a good friend of mine is a surgeon. He started off as a doctor and now he’s a thoracic surgeon. So the way he describes himself is that he’s a doctor, his specialty is thoracic surgery, and now he’s chief of thoracic surgery at Vancouver General Hospital. Super great guy, but the path he took was: become a doctor, demonstrate your expertise, spend more time to create your specialty, focus on that, be recognized for that. And now that’s his designation.

I want to do the same here in Canada for security. The reason why is, look, you and I both know this, Andrew, and we’ve seen this. If I go do a risk assessment for a client or internally, and if I do a bad job, I just go to the next client.

But if we have a doctor or a lawyer who mishandles a file or mishandles an operation or is liable for their actions, they’re held accountable to it. We are not. What I want to be able to do is put in the standards that demonstrate the level of our expertise, that we’re held accountable for our actions, that we maintain our credentials throughout our career, that we’re able to give back to the profession of security, and that if something does happen, we’re actually accountable for the work that we do.

And I think that’s important, right? Like, here in our new house, an engineer stamped our plans. He’s accountable for the work he did. Why can’t we have the same for security? I think we need to, because then that provides executives a greater understanding of how important the work is that we do every day to secure your organization so that you can achieve your goals and objectives.

That’s what I’ve been doing on the side of my desk for the past 20 years. I finally got some breathing room to do it now, with TaleCraft giving me the space to do it. So I’m looking forward to trying to roll this thing out between now and the end of the year, at least the structure of it, and then we engage more people to get their comments and their perceptions, so that we’re trying to reflect and represent as many folks as we can across the security profession.

Andrew Ginter
Well, Tim, this has been tremendous. Again, I look forward to your book. Hopefully you find some time to work on it. Before we let you go, can I ask you to sum up for us? What should we take away from the discussion we’ve had in the episode here and use going forward?

Tim McCreight
Thank you for that. I appreciate it. And yeah, fingers crossed, I can get working on the book over the summertime. That’s my goal. But for this particular episode, I think a couple of things. One, as security professionals, it’s not our job to accept the risk. It’s our job to identify it, provide a mitigation strategy, and present it back to executives. So that’s one of the things that I want to keep stressing for everybody. Our role is to be an advisor to the organization.

It’s not to accept the risk on behalf of the organization. Second, we all have a story to tell. We all understand the value and the power of a story. We all see how important it is when we tell a story to our executives, to our leaders, to our teams, and to others.

You need to focus on those skill sets of how to tell a story, particularly in the role of security, because not everyone understands the value that we bring. And the last point for me is that you need to continue to look for mentors, for instructors, for trainers who can offer you these skill sets and can provide this type of training for you so that you can continue to build your career.

We can’t do this alone. You need to make sure that you have an opportunity to reach out to folks that can help you, whether it’s looking at your security program and trying to build it on a risk-based approach, or teaching people the value of telling a story and then applying those skills to the next presentation you give to executives. If folks remember those things, that’d be terrific.

So for those folks listening to the podcast today, if those points resonate with you, and if you’re looking for opportunities to learn more about telling a story or how to be effective doing that, how to look at your program from a risk-based approach and how to find mentors that can help you in your career path, reach out to TaleCraft Security.

This is what we do. It’s our opportunity to give back to the profession of security, to help organizations build their security programs, and to grow the skill sets of people who want to learn more about telling a story, becoming a better security leader, or understanding the concepts of a risk-based approach to security.

That’s what we’re here at TaleCraft for: to help, to give back, and to grow.

Nathaniel Nelson
Andrew, that seems to have done it with your interview with Tim. Do you have any final word you would like to share today?

Andrew Ginter
Yeah, I mean, I think this is a really important topic. I see way too many security teams saying: this is my budget, this is all I have, I do not have budget to solve that problem, therefore I will accept the risk of that problem. And especially for new projects, for risks that we’ve never considered before, that is often the wrong decision.

When we have new kinds of decisions to make, we need to escalate those decisions to the people who assign budget. We need to tell those people stories so they understand the risk. We have to get the right information, the right stories to the right people so they can make the right decisions. Saying, I have no budget, therefore I’m going to accept the risk many times is the wrong decision for the business. And we cannot afford to be making those wrong decisions time and again.

As the threat environment becomes more dangerous, as the consequences of industrial cyber attacks increase, we need to be making the right decisions. And this seems an essential component of making the right decisions.

Nathaniel Nelson
Well, thanks to Tim McCreight for that. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.



NIS2 and the Cyber Resilience Act (CRA) – Episode 142

NIS2 legislation is late in many EU countries, and the new CRA applies to most suppliers of industrial / OT computerized and software products to the EU. Christina Kiefer, attorney at reuschlaw, walks us through what's new and what it means for vendors, as well as for owner / operators.


“So NIS2 is focusing on cybersecurity of entities, and the CRA is focusing on cybersecurity for products with digital elements.” – Christina Kiefer

Transcript of NIS2 and the Cyber Resilience Act (CRA)  | Episode 142

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome everyone to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today. Andrew, how’s it going?

Andrew Ginter
I’m very well, thank you, Nate. Our guest today is Christina Kiefer. She is an Attorney at Law and a Senior Associate in the Digital Business Department of reuschlaw. And she’s going to be talking to us about cybersecurity regulation in the European Union. As we all know, NIS2 is coming, and there’s other stuff coming too.

Nathaniel Nelson
Then without further ado, here’s your conversation with Christina.

Andrew Ginter
Hello, Christina, and welcome to the podcast. Before we get started, can I ask you to say a few words, introduce yourself and your background, and tell us a bit about the good work that you’re doing at reuschlaw?

Christina Kiefer
Yes, of course. So first of all, thank you very much for the invitation. I’m very happy to be on your podcast today. So, yeah, as for me, my name is Christina Kiefer. I’m an attorney at law working as a senior associate in our digital business unit in the law firm reuschlaw.

Christina Kiefer
We are based in Germany and reuschlaw is one of Europe’s leading commercial law firms specialized in product law. And for more than 20 years, our team of approximately 30 experts has been advising companies in dynamic industries, both nationally but also internationally.

Christina Kiefer
And in my daily work, I advise companies and also public institutions on complex issues in the areas of data protection, cybersecurity, but also IT and contract law.

One focus of my work is supporting clients in the introduction of digital products to the EU market, and also the field of cybersecurity and IT law. Since my studies, I have focused on IT law and cybersecurity, and I have been involved in the legal developments in this area ever since.

Andrew Ginter
Thank you for that. And our topic is, you know, the law in Europe for cybersecurity, its regulation. The big news in Europe is, of course, NIS2. And it’s not a law, it’s a directive to the nation states to produce laws, to produce regulations. So every country is going to have its own laws. Can I ask you for an update? How’s that going? Who’s got the law? I thought there was a deadline. Do the nations of Europe have this covered, or is it still coming?

Christina Kiefer
Yes, so it’s the last point, it’s still coming. Some countries have already transposed the NIS2 Directive into national law, but a lot of countries are still in the development and transposition period.

And that’s a bit confusing, because the NIS2 Directive has already been in force since January 2023, and the deadline for the EU member states to transpose the NIS2 directive into national law was October 2024.

So because a lot of member states haven’t transposed the NIS2 directive into national law, the EU Commission launched an infringement proceeding against 23 member states last fall, in 2024. And this has led to some movement in some EU member states. As of now, 10 countries have fully transposed the directive into national law.

For example, Belgium, Finland, Greece and Italy. Another 14 countries have published at least some draft legislation so far, for example Bulgaria, Denmark and Germany. And then there are two countries, Sweden and Austria, that have published neither a draft nor a final national law, so there we have no public information available on their implementation status yet.

Andrew Ginter
And, you know, for someone watching this from the outside with a command of English and a very limited command of German, is there a standard place that a person like me could go to find all this stuff? Or is it on every country’s national website, in a different language, in a different location? Is there any central repository of these rules?

Christina Kiefer
No, not yet at least. Maybe there will be some private websites where you can find all the different implementation information. But until now, when you are a company, either within the EU or outside the EU, providing your services into the EU market, you have to comply with the NIS2 directive. And this means you have to comply with the national laws in each EU member state.

And this is a big challenge for all international companies, because they have to check each national law of each EU member state and check whether they fall under the scope of application. What is also very important is that the different national laws have different obligations. The NIS2 directive sets a minimum standard which all national legislators have to fulfill, but on top of this, some EU member states have imposed more obligations, or a portal for registration, or new reporting obligations.

So you have to check for each EU member state. But here we can also help, because we see in our daily work that it is a very, very hard challenge for companies to check and understand all the national laws. We offer a NIS2 implementation guide where you can get regular updates and an overview of how the different EU member states have transposed NIS2.

In addition to this, we also have a NIS2 reporting and registration obligation guide, especially looking at the reporting and registration obligations, to see where you have to register in each EU member state. You can book our full guide, but we also post some overviews on LinkedIn and in our newsletter.

Andrew Ginter
So thanks for that. You touched on the goal of NIS2: to increase consistency among the nation states of Europe in terms of their cyber regulations and, in my understanding, to increase the strength of those regulations across the board. How’s that coming? Are the regulations that are coming out stronger than what we saw before? And are they consistent?

Christina Kiefer
Well, it’s correct that the idea behind the NIS2 directive was to create a stronger and also more consistent cybersecurity framework across the whole EU market. The NIS2 directive should also cover a broad set of sectors for regulated companies, so there should be some consistency within the EU. But it’s an EU directive and not an EU regulation. This means the NIS2 directive sets only a minimum standard for all EU member states, which they then transpose into national law. That’s why EU member states are allowed to go beyond it if they want to, and some of them have already done this. So what we’re seeing right now, looking at the national laws which have already been enacted and also at the drafts of some national laws, is quite a mixed picture. We don’t see the consistency that a lot of companies were hoping for. We see more of a mixed picture, with some countries like Belgium, for example.

They have pretty much stuck to the core of the directive and haven’t added much on top. So there, as a company, if you have already looked at the NIS2 directive, you can be fairly positive that you also fulfill the requirements of the law of Belgium. But on the other hand, looking for example at Italy, they have expanded the scope of application. Italy has, for example, included the cultural sector as an additional regulated area. The sector of culture isn’t mentioned in the NIS2 directive at all, but Italy had the idea, well, we can regulate the cultural sector too, and that’s why they have included it in their national law.

And also in France, you can see that they have imposed more obligations and broadened the scope of application of their national law, because they have widened the regulated sectors and added educational institutions, for example. So we have a minimum set of standards set out in the NIS2 directive, but across the EU, looking at the national laws, we have a lot of national differences. And that’s why it’s very hard for companies to comply with the NIS2 directive, or with the national laws, within the EU market.

Nathaniel Nelson
One of the more interesting things that Christina mentioned there, Andrew, was Italy treating its cultural sector as critical infrastructure, which, frankly, sounds very Italian.

Andrew Ginter
Well, I don’t know. It’s not just the Italians. This was back in, I don’t know, the late noughts. One of the original directives that came out of the American administration was a list of critical infrastructures. And at the time it included something like national monuments as a critical infrastructure sector. The justification was, you know, any monument or cultural institution that was seen as essential to national identity and national cohesion.

And then it disappeared in the 2013 update of what counted as critical national infrastructure. So it’s no longer on CISA’s list of critical infrastructures, but it used to be. And in terms of Italy, I don’t have a lot of information, but again, you might imagine that national monuments and certain cultural institutions are vital to national identity. Think of the Roman Colosseum. Should that be regarded as critical infrastructure? It’s certainly critical to tourism, that’s for sure. So that’s what little I know about it.

Andrew Ginter
And in my recollection of NIS2, one of the changes was increased incident disclosure rules. Now, I’ve argued, or I’ve speculated: we did a threat report at Waterfall, and we actually saw the numbers plateau in terms of incidents. I wonder whether increased incident disclosure rules are in fact reducing disclosures, because lawyers see that disclosing too much information can result in lawsuits. For instance, SolarWinds was sued for incorrect disclosures. So I’m guessing that they conclude that minimum disclosure is least risk, and if they get partway into an incident and say, this is not material, we don’t need to disclose it, we’re not going to disclose it. And so we actually see fewer disclosures.

Can you talk about what’s happening with the disclosure rules? How consistent are they? Multinational businesses, how many different ways do they have to file? And are we seeing greater disclosure, or in your estimation, fewer disclosures because of these rules?

Christina Kiefer
Yeah, that’s a really good question, and honestly it’s something we also get asked all the time right now, because we hear it again all over: if we operate in several EU countries, do I need to report a security incident in one EU member state, or via one portal, and then I’m fine? Or do I really have to report a security incident to each EU member state that is affected by the security incident?

And unfortunately, the answer right now is yes, you have to report your security incident to each EU member state, or to each national authority of the EU member state whose national law you fall under. The NIS2 directive does not really require one registration and reporting portal for all EU member states, so it’s up to the national authorities and the EU member states to regulate this field of law. And you can see that many national authorities have already recognized this issue and are looking at ways to simplify the process of registration and of reporting security incidents, and there you can see some member states trying at least to set up a nation-wide portal where you can report your security incident.

Some other national authorities go even further. They say they will implement a scheme or structure where you only have to report to them, and then they will transfer the report to the other relevant EU authorities. But again, this sits in each EU member state’s national law, so you also have to check all the other national laws within the EU. The authorities of the EU member states have at least indicated that they are talking to each other, so maybe in the future we will get one portal to report everything. But as I said before, it’s not regulated in the NIS2 directive and is not foreseen for now.

Yes, and to the other part of your question: you could think that when you’re obliged to report everything and each security incident, the reporting would decrease. But you also have to look at the risk of non-compliance, and the risks are very high, because the NIS2 directive imposes high sanctions and also a lot of authority market measures. That’s why in the daily consulting work it’s better to say, please report an incident, because the national authorities also communicate this to the companies. They say, please report something, because then we can work together. So the focus of the national authorities, at least in Germany as we see it right now, is that they want to cooperate.

They want to ensure a cyber-secure environment and a cyber-secure market. So the focus is to report something so that they can work on it together, and that’s why it would be better to report, and I would say maybe we will also see an increase in reporting.
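
To make that multi-jurisdiction reporting picture concrete, here is a minimal, illustrative Python sketch of how a company might track its notification deadlines across member states. The authority names are placeholders, the deadlines reflect only the directive's baseline timeline, and national transpositions can add to or tighten all of this:

from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ReportingObligation:
    member_state: str
    authority: str                # placeholder; the real authority is named in the national law
    early_warning_due: datetime   # directive baseline: within 24 hours of becoming aware
    notification_due: datetime    # directive baseline: within 72 hours of becoming aware
    final_report_due: datetime    # directive baseline: within one month of the notification

def build_obligations(states_in_scope, aware_at):
    """One notification track per member state whose national law applies."""
    obligations = []
    for state in states_in_scope:
        notification = aware_at + timedelta(hours=72)
        obligations.append(ReportingObligation(
            member_state=state,
            authority=f"CSIRT / competent authority of {state}",   # placeholder name
            early_warning_due=aware_at + timedelta(hours=24),
            notification_due=notification,
            final_report_due=notification + timedelta(days=30),
        ))
    return obligations

# Example: a provider in scope in three member states
for o in build_obligations(["Belgium", "Italy", "Germany"], datetime(2025, 3, 1, 9, 0)):
    print(o.member_state, o.early_warning_due.isoformat(), o.notification_due.isoformat())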

Andrew Ginter
So I’m a little confused by your answer. The rules that I’m a little bit familiar with are the American Securities and Exchange Commission rules. Those rules mandate that any material incident must be reported to the public, any incident that might cause a reasonable investor to either buy or sell or assign a value to shares in a company.

Which means non-material incidents can be kept quiet. And the SEC disclosures are public. Everyone can see them because reasonable people need information to buy and sell shares. The NIS2 system, is it requiring all incidents to be reported? And are those reports public?

Christina Kiefer
That’s a good point. To the first part of your question, the NIS2 directive and its reporting obligation are kind of the same as the regulation you mentioned before, because you have to report only severe security incidents. As a regulated company, you are obliged to check, in the first step, whether there is a security incident, and then, in the second step, whether it is a severe security incident.

And only those security incidents are you obliged to report to the national authorities. So that’s kind of the same structure or mechanism. To the second part of your question, the report will not be published for everyone. First of all, if you report it to the national authorities, only the national authorities have the information. Because in some member states there are laws where people from the public can get access to public information, it can happen that some information will be publicly available. But the first step is that you only report it to the national authority, and the report will not be available to the public as such.

But next to the reporting obligation to the national authorities, you also have information obligations in the NIS2 directive. So it can happen that you are also obliged to inform the consumers of your services.

Andrew Ginter
So thanks for that. The other big news that I’m aware of in Europe is the CRA, which confuses me because I thought NIS2 was the big deal, yet there’s this other thing that sort of came at me out of the blue a year ago, and I’m going, what’s going on? Can you introduce for us what is the CRA, and how’s it different from NIS2?

Christina Kiefer
Yeah, sure. So, as you mentioned before, the CRA is like the sister or brother, the second major piece of the new European cybersecurity framework alongside the NIS2 Directive.

Christina Kiefer
It’s the Cyber Resilience Act, or CRA for short. And while the NIS2 Directive focuses on the cybersecurity requirements for businesses or entities in critical sectors, the CRA takes a different angle and introduces EU-wide cybersecurity rules for products.

So NIS2 is focusing on cybersecurity of entities, and the CRA is focusing on cybersecurity of products with digital elements. The other difference is that NIS2 is an EU directive, so it needs to be transposed into national law by each EU member state, while the Cyber Resilience Act is an EU regulation. So when the Cyber Resilience Act comes into force, it will apply directly in each EU member state.

Andrew Ginter
Okay, so that’s how the CRA fits in alongside NIS2. What is the CRA? What are these rules? Can you give us a high-level summary?

Christina Kiefer
Yeah, sure. So the CRA is the first EU-wide horizontal regulation which imposes cybersecurity rules for products with digital elements. What is regulated are products with digital elements, and this definition is very broad. It covers software and hardware, and also software and hardware components if they are brought to the EU market separately. Products with digital elements are, as I said, connected devices, and software and hardware that can potentially pose a security risk. Also, what is very important, the CRA imposes obligations not only on manufacturers, but also on importers, distributors, and on companies which are not resident in the EU, because the main point for the geographical scope of application is that you place a product on the EU market, whether you are based in the EU or not.

Christina Kiefer
So this also means that the Cyber Resilience Act, just like the General Data Protection Regulation, has a global impact for anyone selling tech products in Europe.

Andrew Ginter
So let me jump in real quick here, Nate. What Christina’s described here is that the scope of the CRA applies to all digital products sold in Europe. To me, the CRA is, in my estimation, and she’s going to explain more in a few minutes, probably the strictest cybersecurity regulation for products generally in the whole world. It sounds to me like this might become just like GDPR. That was a European regulation that came through a few years ago. It had to do with marketing and the use of private information, in particular my email address; basically, it was like an anti-spam act. It’s the strictest in the world. And everybody who has any kind of worldwide customer base, which is almost everybody in the digital world that’s sending out marketing emails, is now following the GDPR pretty much worldwide, because it’s just too hard to apply one law in one country and another law in another. So what you do is you pick the strictest you have to comply with worldwide, which is the GDPR, and you do that worldwide instead of trying to figure out what’s what. It sounds to me like the CRA could very well turn into that kind of thing. It might be the thing that all manufacturers that embed a CPU in their product have to follow worldwide, because it’s just too hard to change what they do in one country versus another.

Andrew Ginter
Okay, so can you dig a little deeper? I mean, an automobile: you buy a new automobile from the dealership. My understanding is that it has 250, 300, maybe 325 CPUs in it, all of them running software. It would seem to me that a new automobile is covered by the CRA. What are the obligations of the manufacturer? What should customers like me expect in automobiles that might be different because of the CRA?

Christina Kiefer
Thank you. First of all, looking at your example, automobiles are not covered by the CRA, because the CRA has some exemptions. The CRA says, we are not regulating products with digital elements which are already regulated by specific product safety laws. And looking at the automotive sector, we have for sure in the EU very strong and very specialized regulation for the product safety of cars. So that covers your example, but looking at other products with digital elements, for example wearables, headphones or smartphones, you can say that there are five core obligations for manufacturers in the CRA. The first obligation is compliance with Annex 1, which means you have to fulfill a list of cybersecurity requirements. And you don’t only have to fulfill those cybersecurity requirements, you also have to declare and show compliance with Annex 1 of the CRA. So it’s a conformity assessment you have to undergo.
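
As a rough, purely illustrative sketch of that scoping logic (the real analysis is legal, not mechanical, and the exemption list below is deliberately incomplete), a first-pass check might look something like this:

# Simplified, illustrative first-pass CRA scope check. Exemptions listed are examples only.
SECTOR_SPECIFIC_EXEMPTIONS = {
    "automotive",      # vehicles fall under dedicated product safety / type-approval rules
    "medical_device",  # covered by the EU medical device regulations
    "civil_aviation",  # covered by aviation safety rules
}

def likely_in_cra_scope(has_digital_elements, placed_on_eu_market, sector=None):
    """Rough first-pass check; the real assessment is a legal one."""
    if not has_digital_elements:
        return False   # the CRA only covers products with digital elements
    if not placed_on_eu_market:
        return False   # the trigger is placing the product on the EU market
    if sector in SECTOR_SPECIFIC_EXEMPTIONS:
        return False   # already regulated by sector-specific product safety law
    return True

print(likely_in_cra_scope(True, True, sector="automotive"))  # False, per the exemption above
print(likely_in_cra_scope(True, True))                       # True, e.g. headphones or a PLC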

Christina Kiefer
The second obligation is cyber risk assessment. If you are a manufacturer of a product with digital elements, you are obliged to assess cyber risks, not only during the development and construction of your product, and not only when placing your product on the EU market, but throughout the whole product life cycle. So if you have a product already placed on the market, you are obliged to undergo cyber risk assessments. Then looking at the third obligation, it’s free security updates.

Christina Kiefer
So manufacturers have to provide free security updates throughout the expected product life cycle. We also have mandatory incident reporting, so we have reporting and registration obligations here as well, such as we already talked about for the NIS2 directive. And, as in every product safety law in the EU, we also have the obligation for technical documentation. So those are the five core obligations: compliance, cyber risk assessment, free security updates, reporting and documentation.

Andrew Ginter
And you mentioned distributors. What are distributors and importers obliged to do?

Christina Kiefer
We have some graduated obligations there. They are not such strict obligations as for manufacturers, but importers and distributors are obliged to assess whether the products they are importing and distributing to the EU market are compliant with the whole set of cybersecurity requirements of the CRA. So they have to check if the manufacturer and the product are compliant, and if not, they have to inform and cooperate with the manufacturer to ensure cybersecurity compliance. But importers are also obliged to impose their own measures to comply with the CRA.

Andrew Ginter
Okay, and you said there were five obligations. You spun through them quickly. Some of them make sense on their own. Do a risk assessment, do it from time to time, see if the risks have changed. That kind of makes sense. The first one, though, comply with Annex 1. That’s like an appendix to the CRA. What’s in there? What are the obligations?

Christina Kiefer
Yes, sure. Annex 1 is, you could also say, Appendix 1 to the CRA. There you can see a list of certain cybersecurity requirements which manufacturers have to fulfill. The list is divided into two main areas. One area is cybersecurity requirements. It focuses on no known vulnerabilities at the time of the market placement, secure default configurations, protection against unauthorized access, ensuring confidentiality, integrity and availability, and also secure deletion and export of user data; so, cybersecurity requirements such as the ones I have mentioned. The other area is vulnerability management. Manufacturers have to ensure that they have a structured vulnerability management process, and they have to maintain a software bill of materials.

They have to provide free security updates. They have to undergo cybersecurity testing and assessments. There needs to be a process to publish information on resolved vulnerabilities. And again, we also need a clear reporting channel for known vulnerabilities here.
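
The software bill of materials requirement is easier to picture with an example. Below is a minimal sketch of an SBOM in the CycloneDX style, one common format; the CRA does not prescribe a specific format, and the component names and versions here are invented for illustration:

import json

# Minimal, illustrative SBOM in the CycloneDX style (one common format; the CRA itself
# does not prescribe a specific SBOM format). Names and versions are invented.
sbom = {
    "bomFormat": "CycloneDX",
    "specVersion": "1.5",
    "version": 1,
    "metadata": {"component": {"type": "firmware", "name": "example-gateway", "version": "2.4.1"}},
    "components": [
        {"type": "operating-system", "name": "linux-kernel", "version": "6.1.90",
         "purl": "pkg:generic/linux-kernel@6.1.90"},
        {"type": "library", "name": "openssl", "version": "3.0.13",
         "purl": "pkg:generic/openssl@3.0.13"},
    ],
}

print(json.dumps(sbom, indent=2))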

Andrew Ginter
So it sounds like you said that a manufacturer is not allowed to ship a product with known vulnerabilities. Practically speaking, how does that work? I mean, a lot of manufacturers in the industrial space use Linux under the hood. The Linux kernel alone is millions of lines of code. And, you know, these devices don’t necessarily run a full desktop-style Linux, but they still have a lot of code that they’re pulling from an open source distribution. In those millions of lines of code, from time to time, people discover vulnerabilities and they get announced. So it’s almost a random process. Do I have to suspend shipments the day that a Linux vulnerability comes to light, until I can get the thing patched, and then three days later start shipments again? Practically speaking, how does this zero known vulnerabilities requirement work?

Christina Kiefer
Basically, it is as you said, because the Cyber Resilience Act requires no known vulnerabilities, not only in your product but also in the whole supply chain. The Cyber Resilience Act focuses not only on products with digital elements but also on the cybersecurity of the whole supply chain. So this means, looking at Annex 1 and the cybersecurity requirements, products with digital elements may only be placed on the EU market if they don’t contain any known exploitable vulnerabilities. So it’s not any vulnerability, it’s any known exploitable vulnerability. That is a clear requirement under Annex 1. And when you’re looking at making a product available on the market, that doesn’t just mean selling it.

Christina Kiefer
It includes any kind of commercial activity. And there is also a very good question in our daily work, looking at making a product available on the market: a lot of companies say, well, I have a batch of products, and if I have placed this batch of products on the EU market, I have already placed the product on the market, so I can also place the other products of this batch on the market in the future. But that is not correct, because looking at EU product safety law, the regulation focuses on each product. So looking at these requirements, you can say, first of all, you really have to check your own product and your own components, but also the products and components you are using from the supply chain. And you have to check whether there are any known exploitable vulnerabilities. So you have to put in place a process to check for known vulnerabilities and also mechanisms to fix those vulnerabilities.

Christina Kiefer
And if you have products already on the market, you don’t have to recall them, because first of all, it’s okay if you have a vulnerability management process which is working and with which you can fix those vulnerabilities. And when you have products already in the shipment process, there it’s up to each company to assess whether they have to recall products in the shipment process, or whether they say, okay, we leave them in the shipment process because we know we can fix the vulnerability within two or three days. So in the end, it’s a risk-based approach, and each company has to assess what measures are applicable and necessary.

Andrew Ginter
So that makes a little more sense. I mean, the Linux kernel and sort of core functions: I don’t have the numbers, but I’m guessing that you’re going to see a vulnerability every week or two in that large set of software. And if that’s part of a router that you’re shipping, or part of a firewall, or part of any kind of product that you’re shipping, does it make sense that you discover the exploitable vulnerability on Thursday and you have to suspend shipment until, you know, three weeks out, when you have incorporated the fix in your build and you’ve repeated all of your product testing, which can be extensive?

Andrew Ginter
And by the time you’re ready to ship that fix, two other problems have been discovered, and now you can’t ship until, you know... It sounds like it’s not quite that strict. That scenario sounds like nonsense to me; it would just never work. You’re saying that there is some flexibility to do reasonable things to keep bringing product to market, as long as you’re managing the vulnerabilities over time. Is that fair?

Christina Kiefer
Yes, yes, that’s right. Because in the CRA we have a risk-based approach, and the basis for each measure you have to impose under the CRA is your cyber risk assessment. So you have to check: what kind of product am I using or manufacturing? Which kind of product am I placing on the EU market right now? What are the cybersecurity risks right now? And what are the specific cybersecurity risks of this known vulnerability?

Christina Kiefer
And then you have to check: do I have a process? Do I have a process with appropriate measures to fix those vulnerabilities? And if I have appropriate measures to fix the vulnerabilities in a timely manner, then you are not obliged to recall the product itself. But in the end, looking at a risk-based approach, it’s up to each company to decide.
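
That risk-based call can be sketched as a simple decision rule. The sketch below is an illustration only; the field names and the three-day threshold are assumptions, not anything the CRA prescribes:

from dataclasses import dataclass

@dataclass
class KnownVulnerability:
    cve_id: str
    exploitable_in_product: bool   # exploitable as shipped, in this product's configuration?
    fix_eta_days: int              # how fast the vulnerability-handling process can ship a fix

def shipment_decision(vulns, acceptable_fix_days=3):
    """Illustrative sketch of the risk-based call described above, for product already
    in the shipment pipeline. The 3-day threshold is an assumption, not a CRA number."""
    exploitable = [v for v in vulns if v.exploitable_in_product]
    if not exploitable:
        return "continue shipment"
    if all(v.fix_eta_days <= acceptable_fix_days for v in exploitable):
        return "continue shipment, push a security update on arrival"
    return "pull back from the shipment pipeline"

print(shipment_decision([KnownVulnerability("CVE-2025-0001", True, 2)]))
print(shipment_decision([KnownVulnerability("CVE-2025-0002", True, 45)]))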

Andrew Ginter
So this is a lot of change for a lot of product vendors. Can I ask you, how’s it going? Is it working? Are the vendors confused? Do you have any sort of insight into how it’s going?

Christina Kiefer
Yeah, sure. So what we’re seeing right now is that a lot of companies, both manufacturers and suppliers, are getting ahead of the curve when it comes to the Cyber Resilience Act, because they see that there is a change and there will be new strict obligations, not only on manufacturers, but also in the whole supply chain. So suppliers, distributors and importers are also coming to us and asking if they are under the scope of the CRA. That is the first point. If you’re a distributor or an importer, you already have to check whether you and your company itself fall under the scope of the CRA. And if so, then you are already obliged to fulfill all the obligations of the CRA. But it can also happen that suppliers are under the scope of the CRA in an indirect manner.

Because to ensure all those new cybersecurity requirements from a manufacturer’s point of view, you have to ensure them within the whole supply chain. And the main instrument to ensure this, as it was in the past and will also be in the future, is contract management. So you have to impose or transpose all those new obligations onto the suppliers via contract management. There we see different reactions, but there’s definitely a growing awareness that cybersecurity needs to be addressed contractually, especially in relation to the CRA obligations. Looking at contract negotiations, of course, we have some negotiations with the suppliers, and one of the main points which is negotiated is the regulation of enforcement.

Christina Kiefer
Because when you have contract management looking at cybersecurity requirements, you not only transpose those obligations onto the suppliers, you also have rules on enforcing those new contractual obligations, for example contractual penalties. And there we see that contractual penalties often spark some debate during negotiations. But to sum up, in practice, we’ve always been able to find a balanced solution that works for all parties involved.

Nathaniel Nelson
I suppose I could think about any number of potentially trivial electronics products, Andrew, but let’s say that I or my neighbor has a smart fridge, a fridge with a computer in it. We generally assume that those devices don’t even really have security in mind at all, and a security update is so far from the universe of how anyone would interact with such a device. And now we’re saying that that kind of thing is going to be regulated in these ways.

Andrew Ginter
I think the short answer is yes. You might ask, what good does this regulation do for a fridge? And, you know, I think about this sometimes. I think the answer is it depends. A lot of the larger home appliances nowadays have touchscreens. There’s a CPU inside. There’s software inside. These are cyber devices. You might ask, well, when was the last time I updated the firmware in my fridge? How many times am I going to update the firmware in my fridge? Those are good questions. Most people never think about something like that. But the law might very reasonably apply to the fridge if the fridge is connected to the Internet so that I can see, for example, how much power my fridge is using on my cell phone app.

Isn’t that clever? But now I’ve connected the fridge to the Internet. We all know what happened with, what was it, the Mirai botnet: it took over hundreds of thousands of Internet of Things devices and used them as attack tools for denial of service attacks. If you’ve got an internet-connected fridge, you risk that if you haven’t updated the software. Worse, if someone gets into your fridge and takes over the CPU, they could change the set point on the temperature and cause all your food to spoil. This is a safety risk.

Andrew Ginter
Again, how many consumers are going to update the software in their fridge? Realistically, I don’t think the majority of consumers will, even if there is a safety threat. To me, this is part of the risk assessment. If there’s a safety threat because of these vulnerabilities, you might well need to, I don’t know, auto-update the firmware. That might be part of your risk assessment, so that the consumer doesn’t have to do it. Or better yet, design the fridge so that safety threats because of a compromised CPU are impossible, physically impossible. Make the temperature setting manual or something. But this, the question of safety-critical devices connected to the cloud, is a bigger problem than I think one regulation can address.

Nathaniel Nelson
Yeah, admittedly, the notion of a smart refrigerator safety threat isn’t totally resonating with me. And then we haven’t even discussed the matter of, OK, let’s say that my refrigerator gets automatic updates, or I just have to click a button in an app when it notifies me to update my firmware. Fridges sit in houses for long periods of time. I can’t recall the last time my fridge was replaced. In that time, any manufacturer could go out of business. And then how do you get those updates, right?

Andrew Ginter
Exactly. So, you know, this is outside the scope of the CRA, but to answer your question, to me the solution is two- or threefold. We need to design safety-critical consumer appliances in such a way that unsafe conditions cannot be brought about by a cyber attack. I mean, we talk about fixing known vulnerabilities. That’s only one kind of vulnerability. What about zero days? There’s logically no way that someone can solve all zero days; it’s a nonsensical proposition. So there are always going to be zero days. What if one is exploited and, you know, a million fridges are set to a set point that’s unsafe?

Andrew Ginter
To me, we’ve got to design the fridges differently, but that’s sort of a different conversation. In fact, that’s the topic of my next book, which is why I care so much about it. These are important questions, and I think the CRA is a step in the direction of answering them, but I don’t know that it has all the answers.

Andrew Ginter
So work with me. What you described there makes sense for manufacturers like IBM or Sony, the big fish who can produce high volumes. But if I’m a small manufacturer, I produce a thousand devices a year. I buy components for these devices. I buy software for these devices from big names like Sony and Microsoft and Oracle. And I go to Oracle and say, you must meet my contract requirements or I won’t buy from you for my thousand products, at a cost of $89 a product. Oracle is going to say, take a flying leap, we’re not signing your contract. Is this realistic?

Christina Kiefer
Yes, and we see this in practice too, because we are consulting not only for the big manufacturers but also for the smaller companies in the supply chain. And there you can take different approaches, because when you are buying products from the big companies, first of all, you have to know that they are, or might be, obliged under the CRA as well, so they are fulfilling all those new cybersecurity requirements. And you also have to check their contracts, because there you can already see they have a lot of new provisions looking at cybersecurity, either implemented into the general contractual documents or into one cybersecurity appendix.

So you see all the companies are looking at the Cyber Resilience Act, and then they are taking measures and also looking at their contract management. If you are lucky, you can see, okay, they have a contract which already regulates all the obligations under the CRA. And if it’s not like this, we take the approach of establishing a cybersecurity appendix. So when you’re already in a contractual relationship with the big players, you don’t have to negotiate the whole contract from the beginning. You can just show them your appendix, and then on the basis of this appendix, you can discuss the cybersecurity requirements. So this is an approach which has also helped smaller companies in the market.

Andrew Ginter
So you gave the example of headphones and smartphones. For the record, does this apply to industrial products as well? I mean, our listeners care about programmable logic controllers and steam turbines that have embedded computer components. Or is it strictly a consumer goods rule?

Christina Kiefer
Now, this is a very important point to highlight: the Cyber Resilience Act explicitly applies not only to consumer products but also to products in the B2B sector. So this means that all software and all hardware products, along with any related remote data processing solutions, fall under the scope of the CRA, in both B2C and B2B relationships.

Andrew Ginter
Well, Christina, thank you so much for joining us. Before we let you go, can you sum up for our listeners? What are the key messages to take away about what’s happening with cyber regulations in Europe, both NIS2 and the CRA, and what should we be doing about them as both consumers and manufacturers?

Christina Kiefer
Yeah, sure, of course. Let me give you a quick recap. First of all, the EU legislature is tightening cybersecurity requirements significantly with both the NIS2 directive and the Cyber Resilience Act. The new requirements affect any company that offers products or services to the EU market, no matter where it is based, so there is a very broad scope of application. Looking at the NIS2 directive, it’s very important to know that the NIS2 directive is already in force, but it has to be transposed into national law, which has not been done by all EU member states, and the national implementation across the EU is still quite varied.

Looking at the Cyber Resilience Act, the CRA brings new security obligations for products with digital elements, so for all software and all hardware products. And it focuses not only on the cybersecurity of products, but also on the whole supply chain. So both frameworks require companies to take proactive steps right now, looking at risk assessment, risk management, reporting, and also contract management, particularly when it comes to managing their supply chain. Looking at the short implementation deadlines ahead, both from the NIS2 Directive and the CRA, it’s very important for companies to act now. And the first step we advise is to identify the relevant laws, because we have a lot of new regulations looking at digital products and digital services. So first of all, check the relevant laws and the relevant obligations which are applicable to your business.

And here we offer a free NIS2 quick check and also a free CRA quick check, where you can just click through the different questions to see if you are under the scope of NIS2 and CRA. And then, once you have clarified that you are affected by one or both of the new regulations, the company needs to review and adapt its cybersecurity processes, both technically and organizationally. So it’s very crucial to continuously monitor and ensure compliance with the ongoing legal requirements, especially looking at contract management and focusing on the supply chain. There we can help national and international companies with a kind of 360-degree approach to cybersecurity compliance, because we offer solutions that range from product development and marketing to reporting and market measures. So we give companies practical and actionable guidance at every step.

So looking at the first step, to act and to identify the relevant laws and obligations for your business, companies can visit our free NIS2 QuickCheck and our free CRA QuickCheck, which is available under nist2-check.com and also... And if you have any further questions, you are free and invited to write to me via email or via LinkedIn. I’m happy to connect. And thank you very much for the invitation.

Nathaniel Nelson
Andrew, that just about concludes your interview with Christina Kiefer. And maybe for a last word today, we could just talk about what all of these rules mean practically for businesses out there, because, you know, it’s one thing to mention this rule and that rule in a podcast, but it sounds like the kind of stuff we’re talking about here is going to mean a lot of work for a lot of people in the future.

Andrew Ginter
I agree completely. It sounds like a lot of new work and a lot of new risk, both for the critical infrastructure entities that are covered by NIS2 or by the local laws, especially the larger businesses that are active in multiple jurisdictions, and certainly for any manufacturer who wants to sell anything remotely CPU-like into the European market. It sounds like a lot of work, but I have some hope that, because it’s such a lot of work, it’s also a business opportunity. We’re going to see entrepreneurs and service providers and even technology providers out there providing services and tools that will automate more and more of this stuff, so that not every manufacturer and critical infrastructure provider, in the European Union or in the world selling to the European Union, has to invent the answers to these new rules by themselves.

Nathaniel Nelson
Well, thank you to Christina for elucidating all of this for us. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.



Network Duct Tape – Episode 141 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/network-duct-tape-episode-141/ Wed, 13 Aug 2025 16:31:00 +0000 https://waterfall-security.com/?p=35075 Hundreds of subsystems with the same IP addresses? Thousands of legacy devices with no modern encryption or other security? Constant acquisitions of facilities "all over the place" network-wise and security-wise? What most of us need is "network duct tape". Tom Sego of Blastwave shows us how their "duct tape" works.



Network Duct Tape – Episode 141

Hundreds of subsystems with the same IP addresses? Thousands of legacy devices with no modern encryption or other security? Constant acquisitions of facilities "all over the place" network-wise and security-wise? What most of us need is "network duct tape". Tom Sego of Blastwave shows us how their "duct tape" works.


“We abstract the policy from the network infrastructure such that you can have a group of devices or a device itself that essentially associates with an IP address that’s an overlay address.” – Tom Sego

Transcript of Network Duct Tape | Episode 141

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome listeners to the Industrial Security Podcast. My name is Nate Nelson. I’m here as usual with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions.

He is going to introduce for all of us the subject and guest of our show today. So Andrew, how are you?

Andrew Ginter
I’m well, thank you, Nate. Our guest today is Tom Sego. He is the CEO and co-founder of BlastWave. And he’s going to be talking about distributed asset protection, which is a fancy name for a very common problem in the industrial space. We have stuff, devices, computers, cyber assets, all over the place; it might be distant, in pumping stations and substations, or it might be local. The stuff was bought on the cheap. It was the lowest bidder.

It’s old. It’s ancient. And we have no budget to rip and replace. So what do we do about cybersecurity? This is something he’ll be walking us through.

Nathaniel Nelson
Then let’s get right into it.

Andrew Ginter
Hello, Tom, and thank you for joining us. Before we get started, can I ask you to say a few words of introduction? Tell us a bit about your background and about the good work that you’re doing at BlastWave.

Tom Sego
Sure, Andrew. Thanks for having me. So my background is I started my career as a chemical engineer at Caterpillar. I also spent eight years at Eli Lilly designing and building processing facilities to make medicine.

I was also a certified safety professional during that period and managed a 24-7 liquid incineration operation, which burned 30,000 gallons of liquid waste per day.

So a shit ton. And then I went to Emerson, where I did business development and corporate strategy. Then I did product management at AltaVista. Then I went on to do sales support at Apple, where I was for almost 10 years.

And then that’s when I started my entrepreneurial career. I started a mobile telephony company, started a solar storage company, started a wine importing business, then played professional poker for a few years, and then eventually started this cybersecurity business called BlastWave.

I co-founded that in 2017. And our mission then was the same as it is today, which is to protect critical infrastructure from cyber threats.

And we wanted to come at this with a very different approach than other cybersecurity companies, in that we started from first principles, thinking about what the three biggest classes and categories of threats are, and whether we can actually eliminate those.

The biggest category is probably no surprise to anybody here, but it’s phishing, credential theft, et cetera. I’m like, well, let’s just get rid of usernames and passwords altogether, and come up with a different model for MFA that can actually apply to industrial settings.

So we did that. The second category of threats was really CVEs and vulnerabilities. Could we make those unexploitable? We came up with a concept called network cloaking, which I’m sure we’ll discuss, which kind of addresses that issue. And then the last one is human error, which is impossible to get rid of.

But if you can make human beings make fewer decisions, they can also make fewer mistakes. So we also incorporated that into a lot of our UI and UX.

Andrew Ginter
Wow, that’s a history like none other I’ve ever heard, Tom. It makes my own, what I thought was a storied background, look completely mundane.

You’ve been in lots of different industries. Now, I understand that a lot of what BlastWave does right now is upstream and midstream. And we’ve never had someone on the show explaining how that works. I mean, I think we’ve had one person on talking about an offshore platform at some point.

But when you’re looking at the industry, can we start with the industry? What’s the physical process? Physically, what’s this stuff look like? What’s it do? How does it work?

Tom Sego
Yeah, it’s really interesting, because I can talk about the physical process, and it’s also evolved quite a bit in the last 20 years. So first of all, just stepping back and looking at the industry, the overall oil and gas market globally generates $2 trillion of revenue per year, and it generates $1 trillion in profit.

So there’s a lot of money in this business. And that also means that there are a lot of gallons of oil and a lot of cubic feet of gas being extracted and transmitted and sent everywhere around the world.

And the other thing that’s interesting is that in spite of how old this industry is, there are between 15 and 20 thousand new oil wells created per year, and in fact, half of those were in the Permian Basin. So about 8,000 wells were created last year in the Permian Basin.

Tom Sego
I don’t think people realize the magnitude at which the oil and gas companies are continuing to create wells and extract oil. The other thing that’s interesting is that 20 years ago, we had a traditional vertical drilling approach to oil and gas.

And in the last two decades, we’ve developed the capability to drill horizontally. What’s pretty interesting is that as you start drilling a well today, you create the initial bore, which is usually a foot or more in diameter.

And then you can send these devices and drill bits down a relatively gentle sloping curve, such that over the course of maybe 100 or 200 meters, you’ve now turned a 90-degree angle.

And then you can start drilling horizontally, which allows you to have higher probabilities of not hitting a dry well. It gives you more capabilities for lower cost extraction.

And so it’s been a great boon for the industry. Hydraulic fracturing, which is another technique that’s been exploited to get much higher yields out of these wells, also contributed to the recent boom in oil and gas.

So there are many, many things that have to be considered when you start doing this process. You’ve got to go through site selection, permitting. You’ve got to do all this site prep. And one thing people may not realize is site prep means building roads.

You have to build an entire infrastructure to get to and from these wells. And then once you start actually drilling the well, it’s much like a CNC machine, if you’ve been in a factory like Caterpillar or something, where there’s a heat transfer fluid that allows you to cut the metal.

In this case, they use a mud that both stabilizes the wellbore and helps you manage pressure. That mud flows down through the drill pipe and then comes back up the outside of that drill pipe in kind of an annulus, almost like a donut, to then be cleaned, having the rock cuttings removed from it using a screening operation.

And then you kind of reuse the mud and so forth. So there’s a lot to it. And increasingly, much of this is being automated.

And connectivity is absolutely essential to be your eyes and ears in these wells, because once you start producing oil and gas, these things are hours and hours away from each other.

They’re in very remote, very rural areas, and so that connectivity is absolutely critical. We have one customer who has 700 sites that they’re trying to manage.

And so they have to have the ability to do this in an automated fashion, which requires not just connectivity, but secure connectivity.

Andrew Ginter
Cool. I mean, it’s a piece of the industry I’d never dug into. So thank you for that. Can I ask you, you’ve said that in the modern world,

you know, increasingly everything is automated. I mean, that makes perfect sense. The example I often use is you buy an automobile, it’s got 300 CPUs in it. Every non-trivial device you buy nowadays has a CPU in it.

Can you talk about the automation in these drilling systems, in these upstream systems? What does that automation look like? Is it built into the device, like an automobile? Is it a programmable logic controller? I mean, I’m familiar with power plants, vaguely. Bluntly, I don’t get out much. I’m a software guy more than a hardware guy, but I’ve had a few tours. I know what a PLC looks like. If I visited one of these well sites, would I recognize the automation? What’s it look like?

Tom Sego
Yeah, you would definitely recognize the automation. So what you see is your classic kind of SCADA tech stack, if you will. So you’ll have remote terminal units. You’re going to have PLCs.

You’re going to have these things mounted on a DIN rail in a cabinet. And there can be various size cabinets at some well locations.

You’re going to have just a small number of devices. And then at some other well sites, again, going back to the horizontal drilling, you’re going to have a much bigger operation there. You’re also going to have those well sites connected to what are called tank batteries,

so that you can essentially manage the flow of oil and gas into these storage facilities. So there’s a lot of automation that’s necessary, using PID control loops to maintain equilibrium within these systems.
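
For listeners who have not worked with control loops, here is a minimal, textbook-style PID sketch in Python; it is illustrative only and not taken from any particular well-site controller:

class PID:
    """Minimal textbook PID controller: output = Kp*e + Ki*integral(e) + Kd*de/dt."""
    def __init__(self, kp, ki, kd, setpoint):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.setpoint = setpoint
        self.integral = 0.0
        self.prev_error = 0.0

    def update(self, measurement, dt):
        error = self.setpoint - measurement
        self.integral += error * dt
        derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative

# Example: holding a separator pressure around a target (numbers are made up)
controller = PID(kp=1.2, ki=0.1, kd=0.05, setpoint=250.0)
valve_command = controller.update(measurement=242.0, dt=1.0)
print(round(valve_command, 2))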

And oftentimes there can also be challenges that happen, shocks to the system, where, let’s say in the case of oil and gas, the price starts dropping.

But when the price starts dropping, the motivation of the business unit is not to just keep cranking production at maximum capacity. And so you actually want to manage your operation dynamically, based on economic conditions that can change over time.

And I’ll tell you something else, Andrew, about what’s happening today. There’s a lot more uncertainty in the business world today than there was four months ago. And I think that is going to affect oil and gas.

It’s going to affect the price of oil and gas. It’s going to affect the supply of oil and gas. It’s going to affect the transmission across borders. So these kinds of things can affect the automation.

I’ll call it uber-automation, not just between the actual plant operations and facilities, but also between different entities in the upstream, downstream and midstream ecosystem.

So there are a lot of very interesting factors that affect that. And I’ll tell you one other thing that’s kind of interesting: everybody’s talking about AI, and some of the larger oil and gas companies are trying to figure out how to apply AI to optimize their operations.

And everybody knows that there’s automation that’s used to help identify ways to deliver predictive maintenance to rotating machines.

But there are also uses of AI in oil and gas to prevent things like spills. And one of the big things is that getting data out is easy. If you go talk to someone at BP or Shell or Chevron and you say, can I get data to the cloud? They’re going to go, well, heck yeah.

There’s all kinds of great things that can allow you to get data out of your process. And in fact, I think you’re associated with a company that does a really good job of doing that kind of one-way transmission of data.

But the other thing is, once you have that data and you’re using it to build AI models, then how do you deliver those set points and control variables back to the process?

It scares the crap out of these people. The idea of connecting their control network to a much less secure cloud network or corporate network.

Because as we all know, security is a continuum. It’s not Boolean, secure or insecure. So I think there are a lot of interesting things happening with that. And just to kind of close the story on that, one company, for example, is pulling that data, analyzing it in AWS, and then taking some of those control variables and using a human-in-the-loop process, so that they’ll say, this is the recommended set point for this process.

And then the human in the loop implements that through their control HMI. So there are a lot of very interesting traditional ways in which automation is applied to oil and gas.

But there are also some very interesting evolving mechanisms that involve machine learning.
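
The human-in-the-loop pattern Tom describes can be sketched roughly as follows; the tag name, limits, and approval step are illustrative assumptions, not details of any specific deployment:

# Rough sketch of the human-in-the-loop pattern described above.
# The tag, limits, and approval step are illustrative assumptions only.
SAFE_LIMITS = {"separator_pressure_psi": (180.0, 300.0)}   # engineering limits, invented here

def recommend_setpoint(tag, model_output):
    """Clamp a cloud/AI recommendation to engineering limits and queue it for a
    human operator, rather than writing it to the control system directly."""
    lo, hi = SAFE_LIMITS[tag]
    clamped = min(max(model_output, lo), hi)
    return {
        "tag": tag,
        "recommended": clamped,
        "was_clamped": clamped != model_output,
        "status": "awaiting operator approval",   # the operator applies it via the HMI
    }

print(recommend_setpoint("separator_pressure_psi", 325.0))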

Andrew Ginter
So, Nate, let me jump in and give a bit of context here. AI and cloud-based systems, in my opinion, are the future of industrial automation in pretty much everything.

The question is not if, the question is when, because different kinds of cloud systems are going to be used in different kinds of industries at different times, with different intensities. So, I care enormously about this topic because I am writing my fourth book. The working subtitle of the book, possibly the title of the book, is CIE for a Safety Critical Cloud.

You know, when you have cloud systems controlling potentially dangerous physical processes, how do you do that? There are designs that work. I was keen to listen, and when I had Tom on, I was keen to learn from him. When I write these books, I try not to make up solutions myself.

I tend to get them wrong when I do that. I try to learn from experts like Tom and, gather up the best knowledge in the industry and try and trying package it up in a digestible format.

So, yeah, that the cloud is the future and I’m, yeah when When we recorded this, I was keen to to learn from Tom about what the future looks like.

Nathaniel Nelson
And I know we’re about to get right back into the interview. And what I’m about to say actually kind of has nothing to do with what you just said. But before we go, a few times now, it feels like you guys have mentioned the terms upstream, downstream, midstream. And I just want to make sure I’m clear on this before we continue.

Andrew Ginter
Sure. This is standard oil and gas terminology. People say, oh, oil and gas, as if it were one industry. It’s not. Really, there are three industries involved, and each of these sub-industries has a lot of different kinds of facilities. The stream is generally considered to be the pipeline.

So upstream is producing stuff to feed into midstream, the pipeline. And downstream is taking stuff out of the pipeline for refining and such. Next level of detail: what’s involved in upstream? Exploration is considered part of upstream.

Initial drilling is part of upstream. Offshore platforms are part of upstream. Onshore pump jacks are part of upstream.

The whole infrastructure, building roads, is part of the upstream process. Midstream is pipelines and tank farms. In the natural gas space, you need to do an initial separation and discard waste from the product. You might even need this with liquids: you can do an initial filter, take water out of the oil, and pump the dirty water back down into the well, or take carbon dioxide out of the natural gas. So there are initial processing facilities that pre-process stuff before sending it into the pipeline. There are tank farms where the pipelines store stuff intermediately. There are liquefied natural gas ports, oil ports, oil tankers. This is all part of midstream, the process of moving stuff from place to place and, to a degree, storing it while you’re moving it.

And then downstream is everything you do after it comes out of the pipeline. So there’s refining, turning it into diesel fuel and jet fuel. There’s the finished processing on natural gas, taking out all of the natural gas liquids, making it basically pure methane with not much else.

There’s even stuff like trucking. Trucking gasoline from the pipeline to the gas stations is considered part of downstream. Midstream kind of rears its head again because you might have the concept of a gasoline pipeline. So you’ve got the oil pipeline bringing the crude oil to the refinery, then you sort of hit midstream again, taking the finished product, gasoline, and sending it to consumers. Then you’ve got the trucks, you’ve got the gas stations.

Each of these upstream, midstream, and downstream sub-industries has many components. I’ve lost it now, but I saw a list once of all the different kinds of things that can be in midstream.

I counted, and it was 27 kinds of things. So it’s a complicated industry, but very loosely, upstream produces, midstream transports, and downstream consumes, in a sense, refines and produces the goods that we actually consume.

Andrew Ginter
So that’s interesting. I mean, human in the loop, I’ve heard that described as open loop in power plants, which I’m more familiar with. You monitor the turbines.

The AI in the cloud comes back and sends you a text message and says, you should really service the turbine in generating unit number three sometime in the next four weeks. And it goes into my eyes, goes into my brain. I go and double-check. With my fingers I type on things, and I say, I think they’re right.

And I schedule the service. That’s open loop. And yeah, it gets scary when you start doing closed loop.

Tom Sego
Yeah. And I would say one of the key things, if you look at some analogous systems that have actually gone from open loop, human in the loop if you will, to closed loop. I’ll give two examples. One would be autopilot on planes and another would be self-driving cars.

And in both of those cases, you don’t just switch from open loop to closed loop. No, you do an extensive amount of testing and validation.

And you also, in many cases, build redundant systems that allow an additional level of supervisory control on top of your normal process control loops.

An example I had heard about was a company that was looking at tank level measurements, with an AI model that would analyze the input feeds to that tank model. It would pull data from third parties to look at the truck routes for the tankers that were pulling oil from that tank.

And so you could actually synthesize that data. Now you would have to put in place a lot of, I’ll call it ancillary systems and ancillary testing, to make that safe enough to be like an autopilot on a car.

Because theoretically now, with all that supporting testing, autopilot on a car is supposed to be safer than humans.

And with people on their phones, like I see them these days, I think that’s become an increasingly low bar.

Andrew Ginter
Fascinating stuff. The future of automation, I’m convinced. But if we could come back to the mundane, you talked about phishing, you talked about CVEs, exploiting vulnerabilities.

We’re talking about protecting these assets in upstream and midstream oil and gas. Can you bring us back to cybersecurity? How does this big picture fit with what you folks do and what you’re focused on cybersecurity-wise?

Tom Sego
Absolutely. So one of the things that’s interesting is, I love talking to customers and I try to spend at least 50% of my time with customers, actually listening more than talking, understanding what their challenges are and how we can solve them.

And in the case of oil and gas, there were three customers that came to us and told us the identical story, and they became our largest customers.

The story they were telling us was that they had these highly distributed assets all over very wide geographic areas, and they had spotty cellular and backup satellite to provide the connectivity they need. They need eyes and ears in the field because it would be cost prohibitive for them to get in a truck and drive out there to monitor everything every few hours.

So the challenge they brought to us was the security team didn’t like the operations team having this insecure connectivity to these remote areas.

And so the security team said, you need to do something about that. And that’s where BlastWave came in. And we said, we can actually use our software-defined networking solution to cloak those assets so they’re undiscoverable to adversaries.

But also segment them, so that if malware were introduced in one area, it would not spread to others. And then finally, you would have the ability to get secure remote access.

And one of the coolest parts about this is this is not a bump in the wire kind of solution. This is a solution that allows routing and switching between groups of devices and users.

So it cuts across firewalls as if they don’t exist. It doesn’t route traffic based on source and destination. It routes it based on identity.

And this is something I think is very unique to us, and it’s something customers absolutely love. It has enabled us to address a benefit we hadn’t even thought about, which is that when oil and gas companies acquire other oil and gas companies, one of the first things they face is the need to re-IP the architecture.

Because oftentimes there are overlapping addresses in the IP space, and that can be problematic. It can take a lot of time.

It can take a lot of money. And that’s another solution that we’ve been able to deliver almost by accident. We had one company, an oil and gas company, that acquired a $30 billion acquisition target.

That’s a big company that you’re acquiring. And they were able to protect that with BlastShield within three weeks of acquiring them. And they didn’t have to re-IP anything.

Again, that’s just because of the way we do this network overlay. So there are a lot of cool use cases that we’ve discovered through the process of listening and talking to customers.

Andrew Ginter
Cool. So you’ve used the phrase SD-WAN, software-defined wide area network. I have never figured out what an SD-WAN is. I mean, I’ve worked with firewalls for 20 years.

I did a lot of different kinds of networking, though not hugely, and I never worked for a telco. But can you work with me? What is an SD-WAN? What is your SD-WAN? How does one of these things actually work? What does it do?

Tom Sego
Yeah. Well, first of all, I said SDN, not SD-WAN. So I said software-defined networking, which is a principle, not SD-WAN, which is an architecture.

I guess the best way for me to think about this, and keep in mind, I’m a chemical engineer, not a software engineer, so it may take me longer to understand these concepts, but when I finally do, I can probably explain them to people.

The way I’ve learned this is that we essentially abstract the policy from the network infrastructure, such that you can have a group of devices, or a device itself, that associates with an IP address that’s an overlay address, much like you get with network address translation.

All right, so you have an original IP address and a translated IP address, and the software-defined network then uses the overlay addresses to communicate and to establish the most efficient route,

because performance is very important in OT environments, unlike IT environments. And this allows us to optimize the path for any given packet, which is also very cool. So that’s one of the elements that I think is important in software-defined networking.

The other thing is that it creates the illusion of a point-to-point connection between two different devices or two different groups.

And that’s part of the abstraction. You don’t have to set the path, which is what firewalls do, looking at the routing, how you go from this firewall to that firewall, from this port to that port. You just abstract that to, I want to go from this centrifuge to that control room.

It doesn’t matter if the infrastructure changes. And this is a very powerful benefit of software-defined networking. Because if you’re just looking at the device you want to protect and the user who wants to connect to that protected device, then as the environment evolves, and it absolutely will, you don’t get put in the penalty box like you would in a firewall situation where you could get firewall rule conflicts.

And one thing to think about, Andrew: when you look at the breaches that occur, about 100 percent of the breached organizations already have firewalls.

So that means the firewall didn’t work properly, which is usually the result of a firewall rule problem, or the environment has evolved in such a way that it’s no longer protected. There’s a hole.

And of course, we all know that adversaries just need to be right once. Whereas us defenders, we’ve got to be right all the time, which is very tough unless you’re my wife.

Andrew Ginter
There you go.

Andrew Ginter
So Nate, let me jump in here. I’ve wondered about this space of software-defined networking and wide area networking for some time, as I told Tom, and I’m beginning to wrap my head around it.

He gave an example. You might imagine that we’ve got the internet, local area networks, wide area networks, all designed so that devices have internet protocol addresses and they talk to each other, and routers move messages from one network to another so they get from the source to the destination.

Why is any of this complicated? Why do we need any more than that? One example that Tom gave was acquisitions. There are internet addresses, the 10-dot series, two to the twenty-fourth addresses, that are private addresses.

Private businesses can assign them to assets on their private networks and never show those addresses to the public internet. That’s fine.

There’s another set, 192.168, a 16-bit address range that everyone uses. So you might say, so what? Company A uses, let’s say, 10.0.1 through 10.0.20.

They’ve got a lot of assets. They use up a bunch of the address space. And then they buy company B that’s used the same addresses, because they’re private addresses, and you don’t have to register publicly that you’re using them.

And now all of the equipment has the same IP addresses. For each IP address, there are two pieces of equipment in the network. How do you route messages from these subnetworks, from these assets, to each other?
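As a concrete illustration of the overlap problem, here is a small, self-contained Python sketch that checks whether two acquired address plans collide. The subnet lists are made up for the example; they are not taken from the episode.

```python
# Hypothetical address plans for an acquirer ("company A") and an
# acquired business ("company B"); both drew from the same private ranges.
import ipaddress

company_a = ["10.0.1.0/24", "10.0.2.0/24", "192.168.10.0/24"]
company_b = ["10.0.1.0/24", "10.0.50.0/24", "192.168.10.0/24"]

def overlapping_subnets(plan_a, plan_b):
    """Return every pair of subnets that overlap between the two plans."""
    pairs = []
    for a in map(ipaddress.ip_network, plan_a):
        for b in map(ipaddress.ip_network, plan_b):
            if a.overlaps(b):
                pairs.append((str(a), str(b)))
    return pairs

# Any hit here means the merged network cannot route both sides as-is:
# something has to be renumbered, NATted, or hidden behind an overlay.
print(overlapping_subnets(company_a, company_b))
# [('10.0.1.0/24', '10.0.1.0/24'), ('192.168.10.0/24', '192.168.10.0/24')]
```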

This is the problem of renumbering when you acquire a business. Often you have to renumber, and it’s a pain in the butt on IT networks.

On OT networks it can shut you down until you’re done and have tested the renumbering, and nobody wants to shut down. The textbook piece of technology here is network address translation, part of most firewalls.

It lets you hide some private addresses and assign a different address to that set of private addresses. But you’ve got to set up a whole bunch of firewall rules. You can do that manually, painfully, but it gets worse than that.

I was talking to Tom after the recording. He gave me an example that I didn’t capture on the recording. He said, Andrew, they’re working with an airport, and the airport’s building a new wing.

This is common, airports expand. Let’s say there are 27 gates in the new wing. Every gate has got one of those ramps, I forget what the name of it is, that snuggles up to the aircraft so the door opens, people come out, step onto the ramp and walk into the airport building.

Every one of these devices has automation, has computers.

Every one of these devices, when you buy it from the manufacturer, comes with the same private addresses assigned to every one of their products. So now you’ve got 27 of these ramps in the new wing, and every batch of 20-odd computers or devices built into a ramp has the same IP addresses.

How do you route this stuff? Again, you can put firewalls in place, but now you need a firewall in every ramp. You need technology. And it gets more complicated than that.

Andrew Ginter
For example, many years ago, I worked with a bunch of pipelines. I remember one pipeline, thousand kilometers long, pumping stations, compressor stations, all the way down the pipeline. Communication was important.

You have to communicate with these stations or you have to shut down the pipeline. It’s illegal to operate a pipeline in that jurisdiction unless there’s human supervision.

And so there was a fiber laid along the right of way for the pipeline. And from time to time, some fool would run a backhoe through it.

So you’d need backup communications. I kid you not, this pipeline had something like seven layers of backup communication. There were satellites, there were DSL modems to the local internet service provider.

There were cable modems where there was a local internet service provider. I think this was before the era of cell phones.

There were analog modems. We’re talking 56 kilobit, 100 kilobit per second modems that you can route internet protocol down, very slowly, in an emergency.

And they had built their own solution by hand. They had rolled their own, what today I think would be called a software-defined wide area network, where the task of that component was to say: I need to send an internet protocol message from the SCADA system to a device 500 kilometers away.

What infrastructure is up, what infrastructure is dead? If a piece of the communications infrastructure has failed, then activate one of the backups, change all the routes, change all the firewall rules, so that

all of the messages that have to get from A to B can get from A to B. It seemed to me ridiculously complicated, but in hindsight, it sounds like the same kind of need that modern software-defined wide area networks address.

They address security needs as well as just the basics of getting the messages from one place to another when the underlying infrastructure changes from moment to moment.
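The auto-healing behavior Andrew describes, pick whichever backup link is still alive and re-route over it, can be sketched in a few lines. This is a toy illustration with made-up link names and a made-up health check, not how any particular SD-WAN product or that pipeline's hand-rolled system was implemented.

```python
# Toy failover selector: try links in priority order, use the first healthy one.

LINKS_BY_PRIORITY = ["fiber", "cable_modem", "dsl", "satellite", "analog_modem"]

def link_is_healthy(link: str, status: dict) -> bool:
    """Stand-in for a real health probe (ping, BFD, keepalive...)."""
    return status.get(link, False)

def choose_path(status: dict) -> str:
    """Return the best available link, falling back down the priority list."""
    for link in LINKS_BY_PRIORITY:
        if link_is_healthy(link, status):
            return link
    raise RuntimeError("no communication path available - escalate to operations")

# Example: the fiber was cut by a backhoe and cable is down at this site,
# so traffic to the remote station should ride the DSL link.
current_status = {"fiber": False, "cable_modem": False, "dsl": True,
                  "satellite": True, "analog_modem": True}
print(choose_path(current_status))   # -> "dsl"
```

A real system would also have to push the matching route and firewall changes everywhere, which is exactly the part Andrew describes as painful to do by hand.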

Andrew Ginter
So that kind of makes sense. When I think of a wide area network, I think of routing. So there’s a routing element. You’ve got multiple paths. The system sort of auto-heals and figures out the best paths, or presumably the cheapest paths.

But you’ve also talked about users and security. How does this routing concept work with security?

How is security part of this? You’ve also mentioned firewalls. Can you dig a little deeper?

Tom Sego
Yeah. Well, I think we in a way are disrupting firewalls that are used for lots of industrial applications.

There are great uses of firewalls. They’re a fantastic tool, but they’ve kind of been used the way that, if you have a hammer, all the world looks like a nail. And again, I’ll talk about these remote oil and gas locations where you may only have five or ten devices.

The idea of deploying a firewall to segment that is ridiculous. The expense would be prohibitive. That’s one of the other reasons it’s so cool that we can scale dramatically, from protecting five devices at a very remote well site to 2,000 devices with a single gateway.

So there’s a lot of flexibility that we have that firewalls can’t deliver. And when you look at a comparison of a project that involves a firewall as a solution versus BlastShield, we take one tenth the time and cost one fourth as much.

We can deliver this with half the administrative lift. It’s much easier to deploy as well. And it actually works. So there are a lot of benefits that we bring over a firewall kind of solution.

Andrew Ginter
Okay, so I understand these are powerful benefits, but can we come back to the technology? Can you tell us what this stuff looks like? I mean, you said it’s not a bump in the wire.

Physically, what does it look like? Is it a DIN rail box at each of these sites? Is it a DIN rail box on a central tower? Is it something in the cloud? Can you talk about what it is that’s solving these problems?

Tom Sego
Sure. So there are basically five components to our platform. The first two create the authentication handshake. One is a client that runs locally on your HMI or on your machine.

And then you also typically have a mobile application that provides the MFA without passwords. That was patterned after Apple Pay.

Again, I spent a decade at Apple, and the idea was, let’s try to use some of that technology to provide stronger authentication. The other thing that we have is a gateway.

The gateway is a software appliance. It can be deployed on x86 bare metal. It can be deployed on containers. It can be deployed on Kubernetes clusters.

It can be deployed in the cloud: AWS, GCP, Azure. It’s very flexible, and it can be operated in both passive mode and active mode, so in the traffic path or outside the traffic path.

We also have an agent that can run locally on a machine, which most people know what agents are. And then finally, there’s an orchestrator that is used to drag and drop devices and people into groups and then establish policies between those groups.

So that’s a little bit about the way the technology is set up. And one of the things we found is that you can have people who are, I’ll say, less sophisticated than many CCNA-trained professionals.

They don’t even need to know how to use a command line to deploy our solution. So it’s relatively simple. We have an example where one person is managing 22,000 devices.

So again, that provides a benefit to them in terms of ongoing OPEX reduction. That’s a little bit about the way the technology works and the way these components fit together. Does that answer your question, Andrew?

Andrew Ginter
That’s close. I mean, what you’ve described is sort of the pieces of the puzzle. But I’m still a little weak on how they work together. Again, we’ve used the word routing a couple of times.

To me, there are two ways to do routing. You can either take the messages into one of your components, I’m not sure which one, figure out where they belong, and send them on their way yourself. You can be a router.

Or, and I understand some software-defined WANs can do this, you reach out to routers, firewalls, and who knows what else that can route messages.

And you send commands to those devices when things need to be routed differently. Is one of these models what you use? How do you guys do the routing?

Tom Sego
Yeah, so let me talk about how these pieces all fit together. The software appliance that is the gateway sits upstream of the switch and usually downstream of the firewall.

And what it often will do is provide what we call layer two isolation. What that means is we can essentially turn a 48-port switch into 48 VLANs, so that each one of those is its own encrypted unit that can’t see its neighbors and can’t talk to its neighbors unless the policy allows that to happen.

And that level of very granular control is something we can deliver because of the way the gateway controls and manages the routing that you’re discussing.

Now, there are two other components I didn’t talk that much about. One was the authenticator, and the second was the client. The client is different from the agent. What the client does, essentially, is a challenge-response with either the SSO, the FIDO2-compliant key, or the mobile authenticator.

What it’ll do is produce a QR code that the mobile application scans, then you apply your Face ID, and then you’re into the system, but not authorized or permitted to see anything unless the policy already allows it.

So that’s the way we manage both the authentication and the authorization. And that’s also the way we manage routing of traffic between devices, gateways, and the groups those devices are encapsulated in.
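For readers who want to see the general shape of a challenge-response login, here is a deliberately simplified Python sketch using an HMAC over a server-issued nonce. Real FIDO2 uses public-key signatures and attestation rather than a shared secret, and nothing here reflects BlastWave's actual protocol; it only illustrates the idea that possession of a key, not a password, answers the challenge.

```python
# Simplified challenge-response: the server issues a random nonce, the
# authenticator proves possession of a key by returning a MAC over it.
# (Illustrative only; FIDO2 uses asymmetric keys, not a shared secret.)
import hmac, hashlib, os

device_key = os.urandom(32)          # enrolled secret held by the authenticator

def issue_challenge() -> bytes:
    return os.urandom(16)            # server-side random nonce

def answer_challenge(challenge: bytes, key: bytes) -> bytes:
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(challenge: bytes, response: bytes, key: bytes) -> bool:
    expected = hmac.new(key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = issue_challenge()
response = answer_challenge(challenge, device_key)
print(verify(challenge, response, device_key))   # True: no password ever sent
```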

Nathaniel Nelson
So in his answer there, Tom was trying to describe things, but admittedly I was getting a little bit mixed up, because there were certain things that were upstream from other things and downstream from other things, and layer two, and switches. Can you, Andrew, just help simplify everything we’re talking about here?

Andrew Ginter
Yeah, sure. So in my understanding, they have a few different kinds of components, and I might have got this wrong, but what I got out of it was this.

Firewalls can do network address translation. They can say, I’ve got a bunch of addresses here, I’m going to show a different address to the world. But managing that at scale, with tens of thousands of devices, can be a real challenge, especially if each firewall is only managing a handful of devices. That’s a ridiculous number of firewalls to manage.

So what Tom has got, I believe, is what I think he called a gateway device. It’s something that sits between, let’s say, a small network of five to ten devices and the infrastructure.

And you can assign whatever IP address you need to that gateway. It might, in fact, have two addresses, one on the infrastructure side and one on the device side.

So it has a device address that is compatible with whatever stupid little network of five local, always reused, ramp IP addresses, the airport ramp addresses; it’s compatible with that bit of address space.

It talks to those five devices. And when those devices send it messages, it forwards those messages into the infrastructure. It figures out the addressing, and it does encryption.

If you’ve got more conventional Windows or Linux communications, you can put his software on those devices. That software will do the crypto, will connect natively into the infrastructure and sort it all out.

And then the thing of beauty, okay, those pieces kind of make sense, the thing of beauty is what I heard was they’ve got a management system, which says, okay, you have 20,000 devices.

Half of them have exactly the same IP address. That doesn’t matter. This device over here, in this building, in this country, can talk to that device over there.

It’s allowed. But when that device wants to talk to Andrew’s laptop, because I’m a maintenance technician, Andrew has to provide two-factor authentication.

So you basically stop caring what IP addresses these devices have. You’re not configuring routing rules. You’re configuring permissions in a sort of high-level, user-friendly permission manager.

And all of the routing nonsense and the encryption nonsense is figured out for you under the hood. So you can think about your big picture of devices that need to talk to each other, and who should be allowed to talk to each other, instead of how do I route this when the IP addresses conflict. You don’t have to ask that question anymore.
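A minimal sketch of what such a permission manager might look like conceptually is shown below. The group names, the policy structure, and the MFA flag are all invented for illustration; this is not BlastWave's orchestrator model, just a way to show identity- and group-based rules replacing address-based routing rules.

```python
# Toy identity/group-based policy: devices and users belong to groups,
# and policy is written between groups, not between IP addresses.

GROUPS = {
    "wellsite-7-plcs":   {"plc-07a", "plc-07b", "rtu-07"},
    "control-room":      {"hmi-01", "historian"},
    "maintenance-users": {"andrew.laptop"},
}

# (source group, destination group) -> requirements for the connection
POLICY = {
    ("control-room", "wellsite-7-plcs"):      {"allowed": True, "mfa": False},
    ("maintenance-users", "wellsite-7-plcs"): {"allowed": True, "mfa": True},
}

def group_of(identity):
    return next((g for g, members in GROUPS.items() if identity in members), None)

def may_connect(src, dst, mfa_presented):
    """Allow a connection only if a group-to-group rule exists and its MFA
    requirement is satisfied; note that IP addresses never appear here."""
    rule = POLICY.get((group_of(src), group_of(dst)), {"allowed": False})
    return rule["allowed"] and (mfa_presented or not rule.get("mfa", False))

print(may_connect("hmi-01", "plc-07a", mfa_presented=False))         # True
print(may_connect("andrew.laptop", "plc-07a", mfa_presented=False))  # False: MFA needed
print(may_connect("andrew.laptop", "plc-07a", mfa_presented=True))   # True
```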

Andrew Ginter
Cool. So that starts to make sense. I mean, can you talk a little bit about, you’ve been doing this since 2017, about eight years. Can you give us some examples to help us understand how this stuff works?

Tom Sego
Well, having run this for almost eight years now, the journey was not a straight line. We originally started out, believe it or not, Andrew, as a hardware company.

The thesis was to build an unhackable stack. This sounds naive, and it was. We were going to start with a chip, a new chip that we had a partner developing, that would have an onboard neural net.

It would create 17 key pairs, and it would encrypt the bootloader in the factory and burn a fuse so it couldn’t be reset. That was the foundation of our product. And then we were going to write our own kernel, our own operating system. And this was from someone who helped write the OS X kernel.

We were going to write it in such a way that it used byte codes and would not be exposed to buffer overflows and other issues. We were going to use formal methods to even prove the kernel.

And then we’d have our networking layer, which is what our company is now. Then we’d have our own SDK to manage applications that would also use formal methods. And finally, we would have the authentication layer that we also have today. So we went from five

very ambitious levels of tech stack to two, and we have other people doing some of those other things. I think the market really wasn’t ready for something that complex, maybe that secure, on the higher end of the security spectrum, if you will.

The market just really wasn’t willing to pay for that. And so we simplified, we pivoted. And by the way, once we did come out with our hardware product in February of 2020, there was another global issue that hit everyone, which caused us to pivot to a software-as-a-service model, which then required some more development and everything else. So we didn’t really launch our product until late in 2021, and we started getting our first customers very shortly thereafter.

And since then, we’ve grown very rapidly, to the point where this most recent year we quadrupled our revenue and tripled our customer count.

So it’s been an exciting ride.

So let me give you an example. One customer, again an oil and gas customer, was faced with a challenge where they were going to have to build their own cell towers, essentially become their own wireless ISP. And this is not unique to this oil and gas customer.

There are many that are facing that. And I don’t know if you or your audience knows, but it’s about a quarter million dollars to build a cell tower. And you have to have many of them. So in a relative sense, we are not just delivering security to this customer, we’re also helping save them a ton of money.

So instead of 10 to 20 million dollars, they’re spending a fraction of that, which is also very interesting. And when they did this acquisition, there was another company that did an acquisition from them.

They wanted to sell off certain components. They wanted to sell off the saltwater rejuvenation, I don’t know exactly what the right word is, but they wanted to offload this asset.

And one of the things that they were able to do very quickly, because all of our segmentation, all of our granularity and access is done in software.

We can essentially just take that new entity. Put their users in a group, put the devices that they control into another group, and they would have complete control of just their newly acquired saltwater assets and no visibility, no access at all to the oil and gas parent company.

So that was another great example of using this in a creative way.

Andrew Ginter
So you’ve mentioned acquisitions a few times. I mean, I live in Calgary. This is oil country. I hear about these acquisitions all the time. Is this sort of part of the genesis of your organization? How often do these things happen? How complicated, technology-wise, are these mergers and acquisitions that happen all the time?

Tom Sego
Well, they happen very frequently, especially, again, in oil and gas. In the case of oil and gas, it’s because one asset owner has a certain tech stack that can only profitably make money up to a point.

And then they can sell that asset to someone else who has a richer skillset that can extract more profit, more money, more revenue from that same resource.

And I would say an example we’ve also seen, where people are pleasantly surprised about BlastShield, is one oil and gas customer that acquired a company.

Their biggest fear was that they were going to have to do an IP space assessment and figure out whether there were overlapping IP addresses. Instead of having to do that, which they didn’t have to do at all, they just deployed our software overlay and immediately were able to segment each one of these devices using software, regardless of whether the underlay IP address was the same.

That saved a lot of money in truck rolls. That saved a lot of money and hassle and headaches in managing that IP space, which they were very happy about. And they actually described it two ways to me.

One way was, my God, this is like a Swiss Army knife. And the other guy said, this is like duct tape, it’s like networking duct tape. It serves lots of different purposes and is very versatile; it basically delivers the network they want with the network they have.

Andrew Ginter
So let me just emphasize something Tom has said. He talked about changing IP addresses a few times; I talked about it a few times. I’ve actually, from time to time, had to change IP addresses on stuff, not so much in an industrial setting, just on internet protocol networks, business infrastructure.

And here’s the tricky bit. It’s very hard to do that remotely.

Imagine that you want to remote into a remote substation. There’s nobody there, but there are 100 devices. And you have to log into each device with, I don’t know, SSH or remote desktop.

And you’ve got to change the IP address on the device. And at some point, you’ve got to tell the firewall that it’s talking to a different network of IP addresses.

And if you do that in the wrong order, if you, let’s say, hit the firewall first, now you can’t send messages to any of the devices, because the firewall doesn’t know how to route to those devices anymore; they have different IP addresses. So you have to undo that. Now you go into a device, a Linux box say, and you give the command-line command over SSH to change the IP address, and it stops talking to you, because you’re connected to the old IP address. You’ve got to try to connect to the new IP address.

Only the firewall won’t connect you to the new IP address, because its rules haven’t been updated. So now you have to sort of blindly change all these addresses, then you change the firewall, and then you see if you can still talk to these devices, and three of them have gone missing.

Why? Did I fumble finger the IP address? Is there some other problem? It’s just really hard to do this remotely. And so, again, if you have 700 sites, you’ve got to put people in trucks and drive out to these wretched sites to make these changes.

If there’s a way to avoid that, you can save a lot of money. So, yeah, I kind of get that it’s really useful to avoid doing that.
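One common way engineers reduce the lock-yourself-out risk Andrew describes is a commit-and-confirm pattern: apply the new address, and automatically roll back unless connectivity is re-confirmed within a timeout. The sketch below shows the idea only; the device methods are hypothetical placeholders, not a real vendor interface, and a real rollout would also stage the firewall and routing changes.

```python
# Sketch of a commit-confirm style renumbering step with automatic rollback.
# apply_address(), schedule_rollback(), reachable() and cancel_rollback() are
# hypothetical stand-ins for whatever the device and tooling actually provide.
import time

def renumber_with_rollback(device, old_ip, new_ip, confirm_timeout_s=300):
    device.apply_address(new_ip)                          # push the new address
    device.schedule_rollback(old_ip, confirm_timeout_s)   # device reverts on its own
    deadline = time.time() + confirm_timeout_s
    while time.time() < deadline:
        if device.reachable(new_ip):      # can we still reach it after the change?
            device.cancel_rollback()      # yes: make the change permanent
            return True
        time.sleep(10)
    # Never confirmed: the device falls back to old_ip by itself, so a fumbled
    # address or a stale firewall rule does not strand the site until a truck roll.
    return False
```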

Andrew Ginter
So this is starting to come together for me. I mean, you can do the network address management in your, what did you call them?

The gateways.

Tom Sego
Gateway, yeah.

Andrew Ginter
And that gives you an enormous amount of flexibility. And it’s the client that does the crypto. Or maybe it’s the agent.

I’ve lost track.

Tom Sego
The client is used to authenticate.

Andrew Ginter
Right.

Tom Sego
The agent typically runs on a server in the cloud, maybe a historian type of use case. The gateway is the workhorse, because so much of OT infrastructure cannot run an agent.

And so because it can’t run an agent, you need to have a gateway that can do the encryption and decryption of traffic. Now, when you think about the way a lot of these processes are controlled, they use PLCs.

And the PLCs, we don’t encrypt the traffic below the switch.

We don’t interfere with that. However, with the traffic that is upstream of the switch, all of that’s encrypted wherever it may go.

So I think that’s the way it’s done.

Andrew Ginter
One other technical question, you mentioned CVEs and exploits and vulnerabilities earlier.

I mean, I’m familiar with, let’s say, firewalls that say they do stuff like virtual patching, meaning if there’s a vulnerability in a PLC and the firewall sees an exploit for that vulnerability come through, it will drop the exploit and prevent it from reaching the device. Is that the kind of thing you do when you talk about protecting from exploits, or are you doing something else?

Tom Sego
We’re definitely doing something else. The approach that we take is this network cloaking concept, where you have to authenticate first before you can see anything.

There’s no management portal, so there are zero exposed web services. If you run a network scan on a factory that’s protected by BlastShield, you’re going to come up with nothing.

And what that means is, if there are CVEs, and I guarantee you there will be, there will also be zero-days, which may not be on anyone’s list.

In both of those cases, as well as for ancient devices that are never going to be patched, you’ve got a way to deal with these unpatchable systems, because they’re unaddressable. So it’s going to be very difficult to exploit them.
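The cloaking idea, nothing answers until the peer has authenticated, can be illustrated with a tiny default-drop filter. This is a conceptual sketch only: the session table and the drop decision are invented for the example and say nothing about how BlastShield actually implements cloaking.

```python
# Conceptual default-drop gate: packets from peers without an authenticated
# session are silently discarded, so scans see nothing and learn nothing.

authenticated_peers = set()   # populated only after a successful challenge-response

def on_authenticated(peer_id: str) -> None:
    authenticated_peers.add(peer_id)

def handle_packet(peer_id: str, payload: bytes):
    if peer_id not in authenticated_peers:
        return None           # drop silently: no reset, no banner, no error
    return payload            # only authenticated peers reach the protected side

print(handle_packet("unknown-scanner", b"SYN"))       # None - the scan gets nothing back
on_authenticated("hmi-01")
print(handle_packet("hmi-01", b"read holding regs"))  # forwarded for an authenticated peer
```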

Andrew Ginter
Cool. So, I understand you’re heavy into oil and gas, with all of the examples we’ve been talking about, but I’m guessing you are active in other industries as well. Given your personal background, are you active in other industries? Can you give me some examples of what’s going on there?

Tom Sego
Yeah, absolutely. I think manufacturing is a fantastic industry for us. They are oftentimes a little bit earlier adopters when it pertains to machine learning, predictive maintenance, advanced analytics, those kinds of things.

We had one manufacturing customer, in fact, who was hacked, and many manufacturers do get hacked from time to time. They were hacked, and the board asked the CISO to do an assessment to figure out what their risk posture was.

And before they could complete that assessment, they were hacked again. And so this really lit a fire under the entire kind of security team.

And they basically came up with a list of findings, and they started implementing them. And they were testing various kinds of solutions.

In one facility, they had 10 different manufacturing lines, and they had deployed BlastShield on one of those lines.

They got hacked a third time. This time, though, nine of the 10 lines shut down, whereas the line that was protected by BlastShield continued to run.

And what was really interesting about that is how quickly the organization responded. The CFO of this company responded and elevated that to the parent private equity company.

And now that’s leading to us becoming the default standard for not just that one company and all of its 17 plants, but also the parent private equity company and all the other manufacturing facilities that they’re trying to manage. Okay.

Andrew Ginter
Cool. I’m delighted to hear it. The world needs more cybersecurity.

I’ve learned a lot. Thank you so much for joining us. Before we let you go, can we ask you to sum up? What are the key concepts we should be taking away from our conversation here?

Tom Sego
Sure. So I think the company, as it was founded, was trying to protect critical infrastructure based on first principles. And the first principle was to try to eliminate entire classes of threats if possible.

So our solution tries to eliminate phishing and credential theft: we have a passwordless MFA feature. We also allow you to segment using software.

We cloak your network so it’s undiscoverable. 35% of all CVEs discovered last year are what are called forever-day vulnerabilities, and that network cloaking capability means they’re not exploitable.

And then finally, we also have a secure remote access component in there. So we’re trying to deliver a lot of value to our oil and gas and manufacturing customers. When you couple this with a continuous monitoring and visibility tool, a Nozomi, Dragos, Darktrace, Armis, SCADAfence, Industrial Defender, Claroty, when you combine those two, you get a ton of protection at a very low price.

Nathaniel Nelson
So that just about does it, Andrew, for your interview with Tom. Do you have any final words to take this episode out with?

Andrew Ginter
Yeah, I mean, I really liked the duct tape analogy from Tom’s customer. You have lots of little networks, sometimes thousands of devices.

Half of them have literally the same IP address, or you have these tiny little subnetworks of five devices on airport ramps or at remote well sites,

networks that you’ve acquired along with an oil field; they all have the same IP address range. None of it’s encrypted. It’s just a mess.

And this is something that lets you patch it all together. You need crypto, you need authentication, and passwordless is good: use certificates instead, they’re harder to phish. You need to hide all of these repeated subnets with the same IP addresses.

You need a permissions manager, saying A can talk to B.

You need infrastructure underneath the permissions manager to make the messages from A get to B. You need some synthetic IP addresses, so that when you set everything up, your SCADA system can talk to an address and a port, probably on the gateway or some piece of the infrastructure, rather than the real address that’s repeated a hundred times in your infrastructure.

This just makes a lot of sense. It seems to me there’s a bright future for this kind of, again, duct tape: just patch it all together, make it work, and throw some security on top of it. Crypto, authentication, this is all good. I’m impressed.

Nathaniel Nelson
Thank you to Tom Sego for speaking with you about all of that, Andrew. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thank you to everybody out there that’s listening.


The post Network Duct Tape – Episode 141 appeared first on Waterfall Security Solutions.

Credibility, not Likelihood – Episode 140 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/credibility-not-likelihood-episode-140/ Wed, 06 Aug 2025 20:52:59 +0000 https://waterfall-security.com/?p=34651 Explore safety, risk, likelihood, credibility, and unhackable cyber defenses in the context of Norwegian offshore platforms.

The post Credibility, not Likelihood – Episode 140 appeared first on Waterfall Security Solutions.


Credibility, not Likelihood – Episode 140

Safety defines cybersecurity - Kenneth Titlestad of Omny joins us to explore safety, risk, likelihood, credibility, and deterministic / unhackable cyber defenses - a lot of it in the context of Norwegian offshore platforms.

For more episodes, follow us on:

Share this podcast:

“Large scale destructive attacks on big machinery is not something that I would consider a credible attack.” – Kenneth Titlestad

Transcript of Credibility, not Likelihood | Episode 140

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome everyone to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today. Andrew, how are you?

Andrew Ginter
I’m very well, thank you, Nate. Our guest today is Kenneth Titlestad. He is the Chief Commercial Officer at Omny, and he’s also the chair of the Norwegian Electrotechnical Committee subgroup working on 62443. So this is the Norwegian delegation to the IEC that produces the widely used IEC 62443 standard.

We’re going to be talking about credible threats. What should we be planning for, security-wise? And by the way, I had the opportunity to be in Norway, and I visited Kenneth at the Omny head office, where they have a lovely recording studio. So we recorded this face to face in their studio at their head office.

Then let’s get right into your conversation with Kenneth.

Andrew Ginter
Hello, Kenneth, and welcome to the podcast. Before we get started, can you tell our listeners a bit about your background, what you’ve been up to, and the good work that you’re doing here at Omny?

Kenneth Titlestad
Thank you so much, Andrew, and welcome to Norway and to our office. I’m so glad to have you visiting us. My name is Kenneth Titlestad and I’m working as Chief Commercial Officer in Omny; I’ve just started here. I came over from Sopra Steria, where I was heading up OT cybersecurity. I’d been doing that for six years.

Before that I was working at Equinor, also on OT cybersecurity, so I’ve been working in the field now for almost 15 years. And for the last five or six years I’ve been chairman of the Norwegian Electrotechnical Committee group that is handling IEC 62443. So I’ve been diving deep into cybersecurity for quite a few years.

And at Omny, we are developing a software platform for handling cybersecurity and security for critical infrastructure. It contains a security knowledge graph and AI that provide actionable insights into security for critical infrastructure. So it’s about IT, OT and physical infrastructure.

Andrew Ginter
OK. Thank you for that. Our topic today is credibility. Now, this is about risk. A lot of people think risk is boring. A lot of people, when they enter the industrial security space, want to know about attacks. They want to know about the technical bits and bytes. You tell me that you got interested in risk a very long time ago. Can you talk about that? Where did that come from?

Kenneth Titlestad
Absolutely. I’m not sure when I first considered it as a risk, or as a field of expertise. When I was just a small boy, my dad worked as a control room technician offshore for Conoco Phillips, or back then it was just called Phillips. When I was only two or three years old, in 1977, he was working at the Palau offshore oil and gas field. I don’t remember this, of course, but it was always a topic around the dinner table at home, where he talked about what it was like working in the oil and gas business. In 1977 he was on his way out to the platform when the big, horrible blowout happened. He hadn’t actually arrived at the platform, he was on his way out there. So the safety risks involved in oil and gas really were a big topic around the dinner table all the time.

So I was always listening with my small ears back then, a bit fascinated by this world. I didn’t see the real danger in it, but I was trying to picture in my mind what it was like to actually work in these kinds of environments.

So I was kind of primed when I was just a small boy, and later on I was more into computers. I did a lot of gaming and programming on the Commodore 64, and I started to work at Equinor on the IT side. But I was still fascinated by the core business, oil and gas production and exploration. So when I actually got my first trip offshore, I kind of felt that the circle was closed: I saw the big industrial world my dad had been talking about for several years, and the risk perspectives also kicked in. The first thing you meet when you step on board such a platform is the HSE focus, a lot of focus on HSE.

And it’s for a reason. I fully got to understand that only when I actually came on board such a facility. I understood why it’s so important, because it can be really dangerous if you don’t have control over what you’re doing. So that’s when I actually saw the big scale of risk as a perspective.

Andrew Ginter
Yeah. Offshore platforms are intense. I’ve never set foot on one myself, but I’ve heard the stories: quite the environment. And we’re talking about industrial cybersecurity here, so, offshore platforms are intense in terms of physical risk. Can you talk about cyber?

Kenneth Titlestad
It’s an emerging topic. When I was working at Statoil, when it was still called Statoil, now it’s Equinor, we started to look into that area around 2010, 2011. I still remember the day when people came charging into the meeting room and started talking about the news of Stuxnet. I think we got to hear about it in 2010. I was working on the IT side, and I was responsible for large parts of our Windows infrastructure in the company, and I started to look into what this SCADA thing was. I didn’t know about PLCs, I had never seen a PLC. I didn’t know that there were actually other kinds of digital equipment operating critical infrastructure. So with Stuxnet I started to dive into the landscape of cybersecurity.

And also as a company, we started a big journey back then on really making OT much more cyber secure. And Stuxnet was kind of a kickstart for it.
And also as a company, we started a big uh journey back then on on really making uh OT much more cybersecurity. And Stuxnet was kind of a kickstart for it.

Nathaniel Nelson
Andrew, it feels like there are certain kinds of seminal cybersecurity incidents in the OT world. We often reference the 2007 Aurora test, maybe Triton and Industroyer. But Stuxnet is that foundational thing that set the timeline for everybody, right?

Andrew Ginter
Indeed. And I was active in the space. I was leading the team at Industrial Defender building the world’s first industrial SIEM at the time. So Stuxnet was big news. I did a lot of work on Stuxnet. I had a blog at the time, and I posted every time I learned something new about it because somebody had published a report, somebody had published another blog.

I did a little research on my own. I published a paper on how Stuxnet spread, because analysis had been done of the artifact, the malware, but it had been done by IT people, at Symantec I think; a bunch of people had analyzed the malware, and that’s work I couldn’t do. I’m not a reverse-engineering analyst.

But I sat down with Joel Langill, I sat down with Eric Byres, and we investigated the impact that Stuxnet would have in a network. What would happen if you let this thing loose in a network, given our understanding of the Siemens systems? Joel was an expert on the Siemens systems; Eric and I were more expert more generally, firewalls and industrial systems. So we all contributed to this paper and said, here’s what happens if you let Stuxnet loose in an industrial network.

And in hindsight, I have to wonder if we didn’t do more damage than good, because a lot of people learned stuff about Stuxnet, but there was only one outfit that benefited, and that was Iran’s nuclear weapons program. That was the only site in the world that was physically impacted.

So yes, I regret some of the stuff that I published about Stuxnet.

Nathaniel Nelson
Do you recall if that research got traction, whether it might have gotten over there or is there no way to tell?

Andrew Ginter
I have no way to tell. I do recall a conversation sometime later. Because I’m a Canadian, I work with the Canadian authorities, and I remember a conversation with Canadian intelligence services. At one point, when I figured out that there was only one place in the world physically benefiting from my research, I stopped publishing anything about Stuxnet. And I remember some time after that telling Canadian intelligence: I’ve stopped publishing anything about Stuxnet. You don’t have to tell me anything. But in the future, if you ever see me putting out information that’s helping our enemy, tap me on the shoulder, would you? Tell me, shut up, Ginter, you’re doing more harm than good, and I will shut up. So, yeah, I look back on Stuxnet with mixed emotions. It was a wake-up call for the industry; a lot of people learned about cybersecurity because of Stuxnet. But who benefited from all that research?

OK. So that’s Stuxnet. A lot of people got started in the OT space because of it; it was the big news years ago.

Andrew Ginter
Can I ask you, let’s talk about industrial security and the work you’ve been doing. Stuxnet is where it got started. Where have you wound up? What are you up to today?

Kenneth Titlestad
Yeah, as you say, it’s been 15 years, and for me I think it’s been a very interesting journey. Back in 2010, when Stuxnet hit the news, I wasn’t immediately diving into OT cybersecurity full time. I was working on the IT side, trying to secure the Windows environment in a large oil and gas company.

But after a while I moved more and more over to OT security, and I had my first trip offshore to an oil and gas platform. I think that first trip was in 2013, so actually three years after Stuxnet, and I was going out just to do some troubleshooting on a firewall. But more and more, I was moving into OT cybersecurity, and eventually I moved over to Sopra Steria, I think it was in 2017. In the end I was really working hard on finding proper solutions for OT cybersecurity: when potential nation states are targeting you, what do you then do? You must sort of have the mindset of assume breach, and these kinds of systems, with the PLCs and all, are really, really vulnerable. What do you do when you are being targeted? So then I started to look into it. I heard rumors that there could be something that was unhackable.

So I started investigating unidirectional data diodes, and was exposed to Waterfall. That was one of the first examples of where I heard about unhackable stuff. I also got to hear about Crown Jewels Analysis and cyber-informed engineering; back then it was consequence-driven, cyber-informed engineering. Those kinds of topics really sparked an extra interest for me, because then, for some attack vectors, for some of the risks, I saw an actual solution that could remove the risk instead of just mitigating it.

Andrew Ginter
So your first sort of foray: everyone was interested in Stuxnet, but you started working on the problem, you said, with a firewall, and to a degree that makes sense. The IT/OT firewall is often the boundary between the engineering discipline on the platform, in the industrial process, and the IT discipline, where information is the asset that needs to be protected. And so that boundary is something that both the engineers and the IT folk care about. That kind of makes sense. I’m curious, you got out to the platform, you were tasked with the firewall. What did you find?

Kenneth Titlestad
Yeah, it was actually a kind of long-lasting ticket we had in our system. There was a firewall between IT and OT that was noisy: it was creating a lot of events and alerts on traffic that it shouldn’t have, so I was tasked to go out there and try to troubleshoot it. We absolutely didn’t think it was a cyber attack or any kind of evil intent; it was an incorrectly configured firewall rule. And when I got out there, I could see that it was just an incorrectly configured firewall.

There was nothing dangerous, no cyber attack involved. But I also got to thinking of a scenario: if it had actually been a cyber attack, one that created this much noise on a security boundary, a security component sitting on the outskirts of OT, shouldn’t the OT environment do something, sort of shut down or go into a more fail-safe state? So I got interested in the instrumentation behind the security components on the outskirts of OT. That’s a topic I continued to explore for several years, having in the back of my mind cyber-informed engineering, unhackable approaches, unidirectional systems. At S4 last year I talked about the safety instrumented system, because safety has always been a particular interest of mine. I talked about the cyber-informed safety instrumented system: at some point, when you’re under attack, shouldn’t the sort of big brain in the room actually take an action, an instrumented, automated action, and fail over, not necessarily to a fail-safe state only, but to a more safe and secure state?

Andrew Ginter
So that makes sense in theory. If the firewall was saying help, help, I'm under attack, over and over again, should some action not have taken place on the OT side? But let me ask you this. It was a false positive. It would have shut down the platform, a very expensive platform, unnecessarily. Can we detect cyber attacks reliably enough to prevent that kind of unnecessary shutdown? And if we do shut down whenever there's a bunch of alarms, is that not a new sort of denial-of-service vulnerability? The bad guys don't even need to get into OT. They just need to launch a few packets, the firewall generates some alarms, and the platform shuts down without them even bothering to break into OT. Is that really the right way forward?

Kenneth Titlestad
No, I totally agree. It's not a good approach going forward. But at the same time, I think shutting down one time too many is better than not doing it at all. We should lean toward overreacting and going into a fail-safe state. It could cause unnecessary downtime, and it is a vulnerability on the production side, but I think the false negatives, where an attack is actually happening and we don't see it, are much more dangerous. We need to reduce the false positives, but it's much more important to reduce the false negatives.

Andrew Ginter
So just listening to the recording here. This is not something I discussed with Kenneth, but we were talking about automatic action when we discover that an attack might be in progress, for example because there's a lot of alarms coming out of the firewall. He agreed with me that shutting down the platform was probably an overreaction, because that introduces a new attack vector: the bad guys just need to send a few packets against the firewall, generate a few alarms, and the whole platform shuts down. I agreed with him that something should be done, but we didn't really figure out what. Here's an idea in hindsight. A number of jurisdictions are introducing what they call islanding rules, meaning if IT is compromised, you need to, basically, power off the IT/OT firewall so nothing gets through into OT anymore.

For the duration of the emergency, you have the ability to shut off all communications into OT. The regulation says you must be able to island, so now you have that capability. I wonder if it isn't reasonable to trigger islanding automatically when you discover a whole bunch of alarms coming out of anything, because most modern-day attacks are not like Stuxnet, where you let it loose and it does its thing. Most modern-day attacks rely on remote control from the Internet.

And if you island, if you break the connection between IT and OT, then even if there was an attack in the OT network, the bad guys can no longer control it. They can no longer send commands. And this is not new. The term islanding is a little bit new, but the concept of an automatic shut-off has been bandied about for many years. Given that the regulators are demanding an islanding capability, maybe engaging it automatically from time to time is not the worst thing that can happen. It increases our security, and the impact on operations is minimal because you've deployed the ability to island already.

You've developed the capability of running your OT system independently, and so interrupting that communication for a period of hours at a time while you track things down and say, oh, that was a false alarm, is, I'm guessing, a minimal cost. So there's an idea.
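As a rough sketch of what an automatic trigger like that could look like (the alarm threshold, the time window, and the islanding hook below are all hypothetical, not taken from any regulation or product), the logic is essentially a rate check on firewall alarms:

from collections import deque
import time

ALARM_WINDOW_SECONDS = 300   # look at the last five minutes of firewall alarms (assumed value)
ALARM_THRESHOLD = 50         # alarms within the window before we island (assumed value)

recent_alarms = deque()

def trigger_islanding() -> None:
    # Placeholder for the site-specific action that severs IT-to-OT communications.
    print("Islanding triggered: IT/OT connections disabled pending investigation")

def on_firewall_alarm(timestamp: float) -> None:
    # Record one firewall alarm and island OT if the alarm rate exceeds the threshold.
    recent_alarms.append(timestamp)
    cutoff = timestamp - ALARM_WINDOW_SECONDS
    while recent_alarms and recent_alarms[0] < cutoff:
        recent_alarms.popleft()
    if len(recent_alarms) >= ALARM_THRESHOLD:
        trigger_islanding()

# Example: call on_firewall_alarm(time.time()) once per alarm received from the firewall.

The point of the sketch is only that, once the islanding capability exists, triggering it can be as simple as an alarm-rate threshold; any real deployment would tune the threshold and window to the site.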

Andrew Ginter
OK. Well, let's come back to our topic here. The topic is credibility. We're talking about the risk equation. The typical risk equation is consequence times likelihood. Generally we do it qualitatively, but we wind up with a number coming out of that to compare different kinds of risks, high-frequency versus high-impact risks. Can you talk about that? Where does credibility fit in that equation?

Kenneth Titlestad
I think it fits very well into that equation, because especially when we talk about the likelihood or probability part of it, the left side of the equation, it's always a very difficult conversation to have when you try to identify the risk levels or the consequence levels involved. It's sad to see that a lot of the conversations go astray because people can't put a number on the probability or the likelihood, and I think the conversation gets much more fruitful if we can get rid of that challenge of trying to figure out a number for the probability or the likelihood.

Credibility gives us tools in our language to actually be able to talk about that left part of the equation. It's a more analog value, and it lets us move toward the consequence-driven approach, where the right side of the equation is what is most important to talk about, as long as you consider the scenario credible.
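For readers who want the contrast spelled out (this is our notation for illustration, not wording from any standard), the classical formulation is

\text{Risk}(s) = \text{Likelihood}(s) \times \text{Consequence}(s)

while the credibility-based screen Kenneth describes replaces the numeric likelihood with a qualitative judgment, roughly

\text{address scenario } s \iff \text{Consequence}(s) \text{ is unacceptable} \;\wedge\; \text{Credible}(s),

with no probability estimate assigned at all.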

Andrew Ginter
Well, I have to agree. I've argued in my last book that likelihood is flawed at the high end of cyber attacks, not the low end; at the low end, likelihood actually works. At the high end, the outcomes of cyber attacks are not random. If the same ransomware hits a factory twice, and all we've done is restore from backup, it took them down the first time, we restore from backup, we make no changes, and when it hits again they're going to go down the same way. It's not random.

I argue that on the high end, nation state targeting is not random either. It's not that they try for a while and, if they don't succeed, go try somewhere else. Nation state threat actors keep targeting the same target until they achieve their mission objective. Once they've targeted you, it's not random. Randomness, to me, doesn't work at the high end. Credibility makes more sense: is the threat credible, is the consequence credible? Credibility is what's reasonable to believe. If this attack comes after us, is it reasonable to believe that the consequence will be realized?

I think it makes a lot of sense, but it's new. I don't see the word credibility in a lot of standards. Where does this sit? Is this something people are talking about?

Kenneth Titlestad
Yeah, absolutely. In my work with the clients and the professionals I've been working with, we have discussed for some years now the big challenge of the likelihood or probability part of the equation. And without following any standard or best practice, we've seen that we need to skip the discussion on the probability or the likelihood, talk about the consequence side first, and then revisit likelihood and probability afterwards. But I also see it in IEC 62443, especially in 62443-3-2, which actually talks about consequence-only cyber risk analysis.

So that gives an opportunity to actually move away from the discussions on probability. And of course with the consequence-driven approach, with cyber-informed engineering, we start to see more focus on the far right side, the consequence side, while leaving open what to do with the likelihood. I think with credibility we get some language-based tools to actually talk about it in a qualitative manner, instead of having to force it into a number.

Andrew Ginter
So that makes sense to me. I have the sense that over the course of time, cyber attacks become more sophisticated, more sophisticated attacks become credible, and attacks that were dismissed a decade ago as theoretical have actually happened. Do you see that? What do you see coming at us in terms of sophisticated attacks in the near future?

Kenneth Titlestad
I think that's a really challenging question, looking far into the future, or far back into history to try to extrapolate what we could expect from the future. We see, with Stuxnet, the attacks against Ukraine, Triton, Colonial Pipeline, incidents that have had a really high impact, but there are not very many of them.

But we see those kinds of capabilities being explored and being put into different tools, so they can be used not only by nation states but also by criminal groups. With that kind of analysis we can expect more and more sophisticated attacks, carried out by more and more unsophisticated groups. So we should expect an increase in high-impact incidents.

Andrew Ginter
OK, so if we’re not talking likelihood, we’re not talking probability, we’re talking credible. How do we decide what’s credible? How do we decide what’s reasonable to believe?

Kenneth Titlestad
Yeah, that's a good question. We need to have some grasp of what is credible and what is not credible. I'm also of the opinion that the credibility part of the equation is a qualitative thing. It's not a zero or a one; it sits on a kind of sliding scale that is not easily defined. But if we were trying to see credibility as a zero or a one, what is credible? Things that have actually happened, once or twice or three times, are credible. So the Triton incident, a cyber attack targeting a safety system, is now a credible attack because it has happened.

And also near misses. Triton was in a sense a near miss: it didn't actually cause the destructive attack, but it could have happened. We also have other near misses, incidents that we should be considering.

Andrew Ginter
So that makes a lot of sense to me, credibility versus likelihood. Credibility sounds like a judgment call, though. How do we decide what's credible?

Kenneth Titlestad
That's a good question. I think there are good recommendations in 62443, for instance in 3-2. It talks about, like I said, consequence-only analysis as an example of how you can approach the risk equation, but it also talks about the need for focusing on worst-case consequences. It talks about essential functions, which basically could be the safety functions. You need to investigate the consequence if those are actually attacked and compromised: what could be the worst-case consequence? So you begin there, and once you have identified the worst-case consequences, then you move over to the probability or likelihood dimension.

And then you need to consider all the factors. What are the vulnerabilities involved? What are the safeguards, or what the standard calls your compensating countermeasures? You consider the function or the asset as well: if there's no actual interest in the asset, then the vulnerability may not be interesting to address or analyze either. But you start with the consequence side, then you look at likelihood and probability, and then you are informed by the consequence approach.
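A minimal sketch of that ordering, with function names, scores and the threshold invented purely for illustration rather than drawn from IEC 62443-3-2 itself, might look like this:

# Consequence-first screening: start from essential functions and worst-case
# consequences, keep only credible scenarios, and only then analyze safeguards.
scenarios = [
    # (essential function, worst-case consequence score 1-5, judged credible?)
    ("safety instrumented system", 5, True),
    ("process control DCS",        4, True),
    ("historian reporting",        2, True),
    ("corporate print server",     1, False),
]

UNACCEPTABLE = 4  # consequence score at or above which the business says "not acceptable"

def screen(items):
    # Return the scenarios that warrant detailed analysis of vulnerabilities,
    # safeguards and compensating countermeasures.
    return [(name, score) for name, score, credible in items
            if score >= UNACCEPTABLE and credible]

for name, score in screen(scenarios):
    print(f"Analyze safeguards for: {name} (worst-case consequence {score})")

The ordering is the point: nothing about probability is computed, and low-consequence or non-credible scenarios are screened out before anyone argues about numbers.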

Andrew Ginter
OK, so let me challenge you on that. I've read the CIE implementation guide. It says start with the worst-case consequences; it says those words. I've not seen those words in 3-2. Are you sure that you're not reading something into 3-2?

Kenneth Titlestad
No, I've been searching for that specific part of 3-2 many times, because I've heard others say the same, and it's actually there. There are real gold nuggets in 3-2: it talks about essential functions, it specifically says worst-case consequence, and it specifically says that you can choose to do a consequence-only risk assessment. That's really important. They are single words or single sentences, but they are worth highlighting in 3-2.

Andrew Ginter
OK. So that makes sense in the abstract. Can you give me some examples of applying these principles? What should we regard as credible?

Kenneth Titlestad
Yeah, interesting question. The thing that comes to mind first is the Triton incident. Before 2017, before it actually happened, we didn't think it was credible that someone would actually target a safety system or cause a safety incident with a cyber attack. With Triton we saw the first of its kind, and the threat became obviously credible. And then SolarWinds as well. It's a very interesting study: with the way they compromised the SolarWinds update mechanism, massive deployment of malware within critical and non-critical infrastructure suddenly became a really credible threat too. And also near misses. Of course we should be informed by things happening out there and coming up in the news, near misses that can tell us something about what is a credible threat.

Another kind of scenario that speaks to credibility, not really a near miss, is where we actually have a safety incident. For instance, we have had lots of them in Norwegian oil and gas, and in oil and gas in general: safety incidents that are not cyber-related at all, but where we can see that the incident could be replicated by a cyber attack. That is something we should consider a credible threat going forward, where the incident could be recreated with a cyber cause.

On credibility, I also think we need to keep in the back of our mind, or in the analysis, a focus on technology evolution, the development and sharing of new technology. I see it as a curve where we are exposed to more and more heavy tooling, heavy software, that can be used on the adversary side.

Kenneth Titlestad
So with Kali Linux and Metasploit, and now there's also AI. What is becoming a credible threat is more and more sophisticated, due to the development of technology. AI is now on both sides of the table: as an attacker's tool that makes more attacks credible, but also on the defensive side, where we actually need to use it to protect against more and more sophisticated attacks.

Andrew Ginter
So Nate, let me go just a little bit deeper into Kenneth's last example. Two days before I recorded the session with Kenneth, I was at another event. I had a half-hour speaking slot, and I was listening politely to the other speakers. One of the speakers was a penetration tester. I remember asking the pen tester a question about AI, and his answer alarmed me.

I've discussed it with Kenneth since, and with others, because predicting the future is difficult. I asked the pen tester: you touched on AI, what should we look for from AI going forward? Should we worry about AI crafting phishing attacks, because I've heard of that happening? Should we worry about AI helping the bad guys write more sophisticated malware, because I've heard of that happening too?

And his answer was: Andrew, you're not thinking hard enough about this problem. Yeah, that stuff's happening. But what you need to worry about is somebody taking a Kali Linux ISO image, the Linux disk image that all the pen testers use, full of attack tools, and coupling that gigabyte of ISO image with a couple of gigabytes of AI model. And the model has not been trained on natural language to craft phishing attacks. The model has been trained by watching professional pen testers attack OT systems, mostly in test beds. This is what pen testers do: they take a test bed that is a copy of the system they're supposed to be pen testing, because no one does a pen test on a live system; they do it on a test bed.

They use the Kali Linux tools. They attack the system and demonstrate how you can get into it and bring about simulated physical consequences. So you've taught this AI model how to use the Kali Linux tools to attack OT systems, to brick stuff and bring about physical consequences. You take that trained model and couple it with the image.

Wrap it up in enough code to run the image as a sort of embedded virtual machine, run the AI model, the million-by-million matrix of numbers that is a neural network, run the Kali Linux image, and have the AI operate the tools to attack a real OT system. Drop those three or three and a half gigabytes of attack code on an OT asset, start it and walk away, and it will figure out what's there. It will figure out how to attack it. It will figure out how to bring about physical consequences.

I heard that and I thought, crap, that's nasty. Back in the day, Stuxnet was autonomous. It did its thing, but it was a massive investment to produce a piece of malware that did its thing without human intervention. This strikes me as something that will again do its thing without human intervention, and it will figure things out as it goes. It's one investment you can leverage across hundreds of different kinds of targets.

I was alarmed. This is something I'm thinking about going forward. To me this is a credible threat, something we all need to worry about. I don't know that this thing exists yet, but I'm pretty sure it will in five years.

Andrew Ginter
OK. So that's a lot to worry about. Can I ask, is everything credible? What, in your mind, is not a credible threat at this point?

Kenneth Titlestad
I would think that large-scale destructive attacks on big machinery are not something I would consider a credible attack, but it also goes back to the motivation of the threat actor. For instance, if you are a small municipality, I would say that a lot of really heavy, sophisticated cyber attacks wouldn't actually be credible, because the target is not interesting to such a threat actor. So large-scale destructive attacks are something that, in a lot of scenarios, wouldn't be a credible attack.

And then we have, for instance, large-scale blackouts, which are quite an interesting story nowadays. A couple of weeks ago I would have said that wasn't a credible attack. Now we see that it can happen; with Spain, for instance, it was probably not a cyber attack, but it showed what is possible on the consequence side. If we can show, or identify, that it actually can be caused by a cyber attack, then within the last weeks that has suddenly become a credible attack.

And also swarm kinds of attacks. I hear discussions on that from time to time, about whether it's a credible thing to attack millions of cars. As of now, I don't see that as a credible attack, but things can change.

Nathaniel Nelson
You know, that's an interesting statement he made there, that large-scale attacks on heavy machinery aren't credible. When I think about what we talk about on this podcast, the purpose of OT security presumably is that there are significant, large-scale risks to really important machines. But maybe at this point we've covered that.

Andrew Ginter
That's a good point. I think one of the lessons here is that determining what is and is not credible is a judgment call. Different experts are going to disagree. A few years ago I saw research published saying, look, let's take for the sake of argument the possibility of attacking, I don't know, a chemical plant and causing a toxic discharge. The researchers concluded that it was theoretically possible, but it was such an enormous amount of effort on the part of the adversary, all of which would have to go undetected by the site, that in the end, they said, we just don't know that it is reasonable to believe this will ever happen. So that was one site.

But again, experts disagree. This is what I learned on the very first book I wrote: I got wildly different feedback from different internationally recognized experts. Here's an insight. To me, this means that when we make judgments about credibility, if we're going to make a mistake, we should make it on the side of caution, err on the side of caution, because different experts have different opinions and we might be wrong. Every expert has to be honest enough to admit that we might be wrong, and build a margin for error into their judgment of what's credible.

So even if we don't believe that an attack that, I don't know, destroys a turbine is credible, we might want to deploy some reasonable defences against such a not terribly credible attack anyway.

Just because we might be wrong. And this is something that is also being discussed: how big a margin for error do we need to build into our planning? I talked to a gentleman who designs pedestrian bridges. I asked, how do you calculate the maximum load? He said, that's easy, Andrew. You build a barrier on either side of the bridge so vehicles can't get on the bridge.

Most people are less than two meters tall, and most people are mostly water, so you model two meters of water over the width of the bridge and the length of the bridge. That's your maximum load. And then, he says, you multiply that by eight and you build the bridge to carry the multiplied load. Because these are people we're talking about, it is unacceptable for the bridge to fail under load. This is the margin for error that engineers routinely build into their safety calculations. I believe we as experts in cybersecurity need to build a margin for error into our security planning as well.
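To make the arithmetic concrete (the bridge dimensions below are invented for illustration; only the two meters of water and the factor of eight come from the anecdote above), the calculation looks like this:

# Hypothetical pedestrian bridge: 3 m wide, 30 m long.
width_m, length_m, depth_m = 3.0, 30.0, 2.0   # people modeled as 2 m of water
water_density_kg_m3 = 1000.0
safety_factor = 8

nominal_load_kg = width_m * length_m * depth_m * water_density_kg_m3  # 180,000 kg
design_load_kg = nominal_load_kg * safety_factor                      # 1,440,000 kg

print(f"Nominal crowd load: {nominal_load_kg:,.0f} kg")
print(f"Design load with margin for error: {design_load_kg:,.0f} kg")

The nominal load is already far beyond any plausible crowd, and the design load is eight times that again; that is the scale of margin the anecdote is describing.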

Andrew Ginter
So this all makes sense. One of the things that appeals to me very much about the credibility concept is using the concept to communicate with non-technical decision makers like boards of directors. You do this, you have experience with this. Can you talk about your experience?

Kenneth Titlestad
Yeah, I think it's interesting. When we talk to board members and the CxOs in different companies, they don't necessarily go into details about risk, but they know that they have a special accountability.

So when we talk about credibility with those kinds of people, they get more on board with the discussions. They know they have a special accountability, and they draw a line in the sand. For instance, if the potential consequence is that somebody could die, then that's an unacceptable risk, and they take that position because of their accountability as board members or heads of the company.

And they are also accountable to the government and to society. Some risks on the consequence side, if we are talking about people dying, are absolutely unacceptable to society. The representatives of that view are elected officials in government, and they hold the heads of the company, the board of directors, accountable at the top of the company.

Andrew Ginter
So that makes sense. Boards care about consequences that the business or the society is going to find unacceptable. You didn't use the word credible. How does credibility fit into acceptability when you're communicating with boards?

Kenneth Titlestad
Yeah, we don't have to defend against all possible cyber attacks. What we do have to protect against are the credible ones. So when we bring credibility in as a concept, it's something that communicates much better to the board of directors and the heads of the companies.

Andrew Ginter
This has been good, but it's a field big enough that I fear we've missed something. Let me ask you an open question: what should I have asked you here?

Kenneth Titlestad
We've been talking about credibility. Credibility is what is reasonable to believe. But it's not enough to talk about reasonable attacks; we also need to be talking about reasonable defence. So what is a reasonable defence?

We need to use all the tools at our disposal for a reasonable defence, and nowadays that also obviously includes AI on the defensive side, not only on the offensive side.

This is also a very important part of the reason I joined Omny. Omny is built on a security knowledge graph, a data model where we can put all the information we need about our assets, the vulnerabilities, the network topologies, the threats and the threat actors. It becomes a digital representation, a digital twin, of our asset. Combining that with AI, which we have built in from the beginning, we get very strong assistance on security where it matters most.

Andrew Ginter
Cool. Well, this has been great. Thank you, Kenneth, for joining us. Before I let you go, can I ask you to sum up for our listeners, what should we take away from this episode?

Kenneth Titlestad
Thank you, Andrew, for having me, and thank you so much for being here in Norway and visiting us at our office. We've had a good conversation about consequence, the focus on worst-case consequences, and then we moved over to talking about credibility, replacing the likelihood concept with credibility, especially for high-impact scenarios where we don't have the probability data to talk about it. We also talked about reasonable attacks and reasonable defences: what is a reasonable defence against increasingly credible, sophisticated attacks with high consequences. So it's been a really good discussion about all of these topics.

Kenneth Titlestad
If people want to know more about these topics or they want to discuss them, please connect with me on LinkedIn and message me there. I’m more than happy to discuss these topics and please visit our webpage Omnysecurity.com. Our platform addresses most of these topics we talked about today.

Nathaniel Nelson
Andrew, that just about does it for your conversation with Kenneth Titlestad. Do you have any final words you'd like to take our episode out with today?

Andrew Ginter
Yeah, we've talked about credibility, and this is a concept that is relevant to the high end of sophisticated attacks, the high end of consequence. But I'm not sure it's intuitive.

Let me try and give a very simple example. I was raised in Brooks, Alberta, a little town of 10,000 people in the middle of nowhere, literally an hour's drive from any larger population centre. In terms of cyber threats, let's pick on, I don't know, the Russian military. Does the Russian military have the money to buy three absolute cyber gurus, train them up on water systems, plant them as a sleeper cell in the workforce of the town of Brooks water treatment system, and have them sit on their hands for three years?

And after three years, using the passwords they've gained, the trust they've gained and the expertise they have, have them launch a crippling cyber attack that damages equipment and takes the water treatment system down for 45 days? Is that a credible threat? Well, the Russians have the money to do that. They have the capability to do that.

But you have to ask, why would they bother? This is a little agricultural community. There's a little bit of oil and gas activity. Why would they bother? It does not seem reasonable to launch that kind of attack against the town of Brooks. It just makes no sense. I don't see that as a credible threat.

Is that a credible threat for the water treatment system in the city of Washington, DC, home of the Pentagon? I do think that's a credible threat. So the question of what's credible is an important question that I see more and more people asking in risk analysis going forward. We have to figure out what's credible for us. What capabilities do our adversaries have? What kind of assets are we protecting? What kind of defences have we deployed? What makes sense, what's reasonable to believe, in terms of the bad guys coming after us? This is an important question going forward, and I see lots of people discussing it. I'm grateful for the chance to explore the concept here with Kenneth.

Nathaniel Nelson
Well, thanks to Kenneth for exploring this with us. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.


The post Credibility, not Likelihood – Episode 140 appeared first on Waterfall Security Solutions.

]]>
Lessons Learned From Incident Response – Episode 139 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/lessons-learned-from-incident-response-episode-139/ Wed, 09 Jul 2025 10:53:04 +0000 https://waterfall-security.com/?p=33748 Tune in to 'Lessons Learned From Incident Response', the latest episode of Waterfall Security's OT cybersecurity Podcast.

The post Lessons Learned From Incident Response – Episode 139 appeared first on Waterfall Security Solutions.

]]>

For more episodes, follow us on:

Share this podcast:

If you didn’t listen to a single thing I said, you can listen to these three things: collaborate, plan, and practice.

Chris Sistrunk, Technical Lead of ICS at Mandiant

Transcript of Lessons Learned From Incident Response | Episode 139

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome listeners to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today. Andrew, how are you?

Andrew Ginter
I’m very well, thank you, Nate. Our guest today is Chris Sistrunk. He is the technical lead of the Mandiant ICS or OT security consulting team, whatever you wish to call it.

Google purchased Mandiant in 2022, but they’re still keeping the Mandiant name. So he still identifies as the technical lead of industrial security consulting at Mandiant.

And our topic, they as part of their consulting practice, they do a lot of incident response. He’s going to talk about lessons learned from incident response in the industrial security space.

Nathaniel Nelson
Then without further ado, here’s your interview.

Andrew Ginter
Hello, Chris, and welcome to the podcast. Before we get started, can I ask you to say a few words for our listeners about your background and about the good work that you're doing at Mandiant?

Chris Sistrunk
Okay, thanks, Andrew. I’m at Mandiant on the ICS/OT consulting team, been doing that for over 11 years now, focused on ICS/OT security consulting around the world with every type of critical infrastructure, doing incident response, strategic and technical assessments, and doing training as well.

Before that, I was an electrical engineer, still am, but for a large electric utility,
Entergy. I was there over 11 years as a senior electrical engineer, transmission distribution SCADA, substation automation and distribution design.

So that’s a little bit about me. And again, just working for Mandiant as part of Google Cloud.

Andrew Ginter
And our topic is incidents. It’s lessons from incidents, but let’s talk the big picture of incidents. I mean, Waterfall puts out a threat report annually. I’m one of the the contributors.

We go through thousands of public incident reports looking for the needles in the haystack, the incidents where there were physical consequences, where there were shutdowns, where sometimes equipment was damaged.

And we rely on the public record, on public disclosure. And so I’ve always believed that we were under-reporting because I’m guessing, again, I don’t have that many confidential disclosures that that people tell me about.

But I’m guessing that there’s a lot more out there that never makes it into the public eye. You folks work behind the scenes, you know, without breaching any non-disclosure agreements or anything, can you talk about the big picture? Do you see incidents, especially incidents in the industrial space with physical consequences, incidents that triggered shutdowns, incidents that are not publicly reported?

How many are there? What do they look like? Can you tell us anything about what I would not see by looking at the public record?

Chris Sistrunk
Sure. Thanks for the question. Yeah I think we’re talking about cybersecurity incidents here. And there’s many incidents that happen every day, right? But life goes on. Squirrels happen, right, in the grid.

But for cybersecurity incidents, I do believe we're seeing an increase. I can't go into how many. We actually have a report, M-Trends, that Mandiant puts out every year. It's going to come out later this month, for RSA.

And this is a yearly report. We report on the different themes, the different targeted victims, the different threat groups, the TTPs. But for cyber attacks that impact, say, production, or cause a company to shut their operations down, I don't have any hard and fast numbers to talk about, but we have seen an increase. And you can look not just in our report but also in the reports of others: IBM X-Force, Verizon DBIR, Dragos and others. There are increasing reports of these, and a lot of it has to do with things like ransomware, either directly impacting the control system environment, which we have responded to at a manufacturer and a few others, or, as we have seen in the public news, a company having to shut down operations due to indirect impact.

Maybe their enterprise resource planning software or manufacturing execution software was impacted. That's an indirect impact: the flow of industry-critical data was halted, which means I can't produce my orders anymore, or track shipping or logistics, things like that. So we're seeing a lot of those.

There are others in the electric sector that have to be reported via OE-417 reports. And if there's a material impact, obviously they'll be filed, or they're supposed to be filed, in the 8-K or 10-K with the SEC.

So I think if you take all of those sources and look at them together, we see there's an increase in operational impact, but the engineers and the folks that run these systems are doing a good job of minimizing the impact in these situations, especially for electric and water and other critical infrastructure. Manufacturing is critical too, and I'd say it is probably the most highly targeted outside of healthcare and a few other areas.

Andrew Ginter
So work with me on the numbers just for one more minute. I'm on the record in the Waterfall threat report speculating as to what's going on with public disclosures. It's my opinion, but I have limited information to back it up. It's my opinion that the new disclosure rules from the SEC and other jurisdictions around the world are in fact reducing the amount of information in the public domain rather than increasing it.

And the reason I suggest this is because it seems to me that with the new rules, every incident response team on the planet, roughly, I overgeneralize, has a new step two in their playbook. Step two is call the lawyers.

And what do the lawyers say? They say, say nothing. Because if you disclose improperly, if you if you if you fail to disclose widely enough, you can be accused of facilitating insider trading.

If you disclose too much information, you might get sued. I mean, people have been sued for disclosing incorrect information about security into the public. People buy and trade shares, and then they they find out the information was incorrect, and they get sued.

And so, to me, the mandate for the lawyers is say the minimum the law requires, because if you say too much, you risk making a mistake and getting sued, and you don’t want to get sued.

And if you say too little, you're going to get sued. So the lawyers minimize. If you have a material incident, you must report it.

If it turns out the incident is not material to the finances of the company, you don't have to report it. And again, to minimize the risk of getting sued for reporting incorrect information, you report nothing.

So my sense is that we're seeing fewer reports because of these mandatory rules, not more of them. What do you see? You see this from the other side. Does this make any sense? Do you have a different perspective?

Chris Sistrunk
Well, I can say that, as an incident responder working with victims in critical infrastructure but also outside it, I think this is a broader question you're raising. I can definitely confirm that we work with external counsel that a victim may have hired to handle a lot of these reporting, or not-reporting, requirements.

I can't say or confirm that the lawyers themselves, external counsel, under-report. I can't say that. I don't know.

I'm not a lawyer, nor do I play one on Facebook. So I will just stick to saying, yes, we have worked with external counsel. And usually we, as the incident responder, do not say anything in public unless the victim company, our client, asks us to. Because sometimes sharing information is a helpful thing, especially if it's a big breach, sharing those lessons learned about what happened to them with others, just like we did back in the day with the SolarWinds breach. So there are two ways of thinking about that, and maybe you can pull on that thread with some other experts, but not me. I don't know about the external counsel part.

Nathaniel Nelson
Andrew, you'd referenced Waterfall's annual threat report, a report which I've covered in the past for Dark Reading. I'm not sure I've seen this year's iteration, so maybe you could tell listeners just a bit about what the report covers and what the numbers are showing lately.

Andrew Ginter
Sure. The report uses a public data set; the entire data set is in the appendix, and you can click through to it if you wish. In our statistics we count deliberate cyber attacks with physical consequences, not stealing money, physical consequences, in heavy industry and critical infrastructure, the industries we serve, and only from the public record, no confidential disclosures. The numbers last year were 72 attacks, with, I forget the exact figures, somewhere around 100 to 150 sites affected; many of the attacks affected multiple sites. This year we are up from 72 to 76 attacks, affecting a little over 1,000 sites. So there were more sites affected, but the number of attacks did not increase sharply.

And this is where, again, I speculate: why have we seen a plateau? We went up from essentially zero in, let's say, 2019 to 72, and then in 2024, 76. Why do we seem to have a bit of a plateau? I'm speculating it has to do with the SEC rules. People are now legally obliged, not just in the United States by the Securities and Exchange Commission, but in other jurisdictions around the world with similar rules.

If you have an incident that is material, that any reasonable investor would use as grounds to buy or sell or value shares, you must disclose it.

But I have the sense that we are seeing fewer disclosures. By law, you're required to disclose material incidents, and I speculate that because the lawyers are involved, we are seeing fewer disclosures overall: they disclose the material incidents and they squash everything else, is the sense I have.

But you asked about the numbers. Seventy-six last year. Nation state attacks are up: there were two the year before, and six last year. Is this a trend? It's still small numbers. Who knows?

And industrial control system capable malware, malware that understands industrial protocols and is apparently designed to manipulate industrial systems, is up sharply. There were three new kinds of malware with that capability disclosed or found in the wild last year, versus seven in the preceding 15 years. Again, small numbers. Is it a blip? Is it a trend? Is AI helping these people write stuff? We don't know. You look at the numbers and you scratch your head and go, I wonder what's going on here. So that's the threat report in a nutshell. There are other statistics in there, but those are the headlines.

Andrew Ginter
So that makes sense. That leads us into the topic of the show, which is lessons learned from incidents. You folks do incident response all the time. Can you talk to me about what you're seeing out there? Is there an incident or three that sticks in your mind as the most important thing you have to tell us, or the most recent? Where would you like to start?

Chris Sistrunk
Okay, sure. We have been doing OT incident response for as long as I've been here, and I can give you a few examples. Last year, in 2024, we responded to a North American manufacturing company whose OT network, if we're looking at a Purdue model the third layer, level 3 of the network, was directly impacted by the Akira ransomware group.

And what had happened was an unknown internet connection was made by this third party who was running the site. They had put in their own Cisco ASA firewall.

And it just so happened that there were two critical vulnerabilities in that firewall at the time, and the Akira ransomware group was targeting those exposed firewalls.

So don’t necessarily think this was a targeted manufacturing OT attack. It’s just ransomware gangs doing what they do, trying to make money.

And so they were able to get in through these vulnerabilities and deploy the ransomware directly on the OT network, which was flat.

And every system, except about five, six or seven, was completely encrypted, including the systems from their OT DCS vendors. There were multiple vendors there, not to pick on any one in particular, but GE, ABB, Rockwell and several others.

And the backup server was impacted, and the backup of the backup server was impacted; they were all on the same flat network. So this was a really tough situation, since the manufacturing company did not have any backups that were offline. The OT vendors I mentioned had to come on site to completely rebuild the Windows servers, the engineering workstations, the HMIs, all the things that were Windows or Linux; they had to completely rebuild them.

The client didn't pay the ransom, in other words. And so the lessons learned here: work with your OT vendors and OEMs, and even your contractors, to make sure that your Windows and Linux systems have antivirus; make sure that you have OT backups that are segmented from the main OT network; and keep offline backups and test them on at least a yearly basis. Backups will get you out of a bad day, even if it's an honest mistake at five o'clock on a Friday.

So this is a basic win here, having a good backup strategy. And in this last case, we recommended they eliminate that external firewall and leverage the existing IT/OT DMZ firewall that came from the main owner of the site. Essentially, this third-party contractor had installed a backdoor in the form of a new internet connection. So get away from the shadow IT, go back to your normal IT/OT DMZ with a jump box, two-factor authentication and all those things. If you do the basics and do them well, keep good segmentation, have backups and patch your firewalls on a regular basis, I think that will go a long way, especially in a case like this.
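As a minimal illustration of the keep-offline-backups-and-test-them advice (the paths, file naming and age threshold below are hypothetical, not from the incident Chris describes), a scheduled check along these lines can catch stale or missing OT backups before they are needed:

import hashlib
import time
from pathlib import Path

# Hypothetical locations and limits; substitute whatever your own site uses.
BACKUP_DIR = Path("/mnt/offline_backups/ot")   # segmented / offline backup share
MAX_AGE_DAYS = 35                              # alert if the newest backup is older than this

def sha256(path: Path) -> str:
    # Hash the backup image so a later restore test can confirm it was not altered.
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1024 * 1024), b""):
            digest.update(chunk)
    return digest.hexdigest()

def check_backups() -> None:
    images = sorted(BACKUP_DIR.glob("*.img"), key=lambda p: p.stat().st_mtime)
    if not images:
        print("ALERT: no backup images found")
        return
    newest = images[-1]
    age_days = (time.time() - newest.stat().st_mtime) / 86400
    status = "OK" if age_days <= MAX_AGE_DAYS else "ALERT: newest backup is stale"
    print(f"{status}: {newest.name}, {age_days:.1f} days old, sha256={sha256(newest)[:16]}...")

if __name__ == "__main__":
    check_backups()

A check like this only tells you that a recent, intact image exists; the yearly restore test Chris recommends is still what proves the backup can actually rebuild a workstation.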

Nathaniel Nelson
You know, I feel like I've heard a variant of the advice that Chris just gave a million times, and I don't work in industrial security. So you folks must hear it all the time, or it must just be such basic knowledge that you don't even think about it.

So are there really industrial sites out there that still need to hear that you shouldn’t be making an internet connection from your critical systems?

Andrew Ginter
Short answer, yes. People who do security assessments, not just the incident response we're talking about here, but security assessments, come back and say they regularly find connections out to the IT network and occasionally straight out to the Internet.

The connections to the IT network tend to have been deployed by the engineering team or the IT team to make their lives easier. People with enough gray hair, like me, talk about how the systems used to be air-gapped. That was a very long time ago, we're talking 30, 40 years, but the systems used to be air-gapped.

And people with gray hair like me might assume that’s still the case. It’s not. Everybody who does audits reports these connections. The really disturbing stuff, yes, it’s disturbing that there are connections to the IT network that are poorly secured.

But the really disturbing stuff is the vendors going in. If you do an audit on a site, time and time again, I hear people saying, yeah, they discovered three different internet connections the vendor’s stuck in there.

And you’re going, well, what? Wouldn’t you notice if there was a new internet connection? I mean, no internet service provider gives you a connection for free. You’ve got to run wires. You’ve got to pay for this thing every month. It’s showing up on your bill. No, it’s not.

There’s a lot of wires being run while stuff is being deployed. You don’t notice a new wire. And the vendors pay month after month for the internet connection. It doesn’t even show up on the the bill of the owner and operator because the vendors are providing a remote management or remote maintenance service, and they want to minimize their costs.

They want to maximize their convenience in terms of getting into the site. So they deploy rogue DSL routers. They deploy rogue firewalls to the the site’s internet connection. They might deploy rogue cellular access points where there’s not even wires to run. It’s just a box sitting there that has a label on it saying, important, do not remove. And of course, that makes it invisible to everybody who’s looking at it. It says, oh, what? That? Don’t touch that one.

Yes, it's very common. The advice I try to give people is: when you do a risk assessment, a walkthrough or an audit of your site, look for these rogue connections.

Unfortunately, you're probably going to find one or two of these. Contractual penalties with the vendor help, but they're no guarantee.

Andrew Ginter
So that triggered something. Let me dig a little deeper. You said that the victim decided not to pay the ransom. Do you see victims ever paying the ransom to recover an OT network, to recover the HMI, or, worse than that, the PLCs and the safety systems?

Why would they? Does anyone trust a criminal enough to take the tool the criminal provides and run it on their safety system to restore it? Does anyone trust a criminal that far? Does that happen?

Chris Sistrunk
Okay, so good question. So we have seen traditional IT systems where they pay the ransom and get access back to these systems.

And some that are OT-adjacent, such as Colonial Pipeline, and in hospitals. In both of those examples, Colonial Pipeline or name a hospital breach, there was OT-type critical information that was impacted, and so of course they paid. And if people didn't trust these ransomware gangs to do what they say they're going to do, those ransomware gangs would be out of business: if you pay them and the decryptor doesn't work, then it's no good, and that gang's ransomware business is over, at least for that operation.

But for OT, in this instance I talked about, they did not pay. I don't have enough data to know whether, for OT directly, for the control systems themselves, the Windows HMIs, engineering workstations, DCS servers, SCADA servers, companies have paid in those instances.

But I'd say it's plausible. It really comes down to a business decision by the plant owner, the CEO of the company, based on what the engineers at the lower level report: do we have backups, can we get the vendors to come in?

So I really don't have enough information to say whether OT asset owners, a plant, a site, someone who directly operates OT, trust these ransomware gangs or not.

It may just roll up higher than them. Also, there's usually advice from a ransomware negotiator, a third party that specializes in negotiating with ransomware gangs. They may advise to pay, or not to pay, or to negotiate a reduced payment. So it's very, very complicated.

I know I didn't answer your question directly, but in the instances we've seen, we have seen victims not pay and we have seen them pay, whether it's OT or not.

Andrew Ginter
That makes sense. So coming back to our theme here, lessons from incidents, the lesson from this incident is: get rid of that firewall, use the existing infrastructure, and, look at your backups. I mean, if the backups are encrypted, it’s it’s all over. That makes perfect sense.

What else have you got? What else, have you been running into lately that that’s that’s interesting and noteworthy?

Chris Sistrunk
Yeah, it basically boils down to either ransomware or commodity malware. I've got another example involving ransomware: an electric utility was impacted by ransomware on the IT side.

But they had a good incident response plan, and they severed the IT and OT connections, even down to the power plant networks. That was really amazing, and it's a good story. Working with the IT team, we were able to verify that the threat actor, the Quantum ransomware group, was scanning the OT DMZ, but they didn't get a chance to get through.

We did do a full assessment of their DMZ, and looked at the firewalls and the domain controller and other systems down inside the OT networks.

And we found that they were actually pretty lucky, because they had some weaknesses in some of the firewalls. If the ransomware actor had persisted long enough, they could have gotten through that firewall and made it to the DMZ. The Active Directory had some weaknesses as well, so they could have gotten domain admin and pivoted to the OT network.

But the great thing, to highlight again, is that they had a good incident response plan, they were able to segment quickly, and they were able to have their OT vendor, in this case Emerson Ovation, go on site. The vendor not only took the IOCs that we had from the ransomware, but was able to sweep, because that was in their contract, through the PLC logs, the OT workstations, endpoint protection and all that.

So we all worked in concert in this incident. And then they hardened the firewalls, hardened the domain controllers and hardened the workstation configurations; before doing anything else, they did all of that.

Only when the IT ransomware was eradicated and the environment hardened did they say, okay, now we'll reconnect everything back the way it was. So that was a really great lesson learned from another ransomware case. It wasn't a direct impact to OT, but it was a great opportunity to leverage that incident response plan they had.

Andrew Ginter
So, Nate, the concept of separating IT from OT networks in an emergency is a concept that I see increasingly. I think we've reported on the show a few times that this is what the TSA demands of petrochemical pipeline operators ever since Colonial: the ability to separate the networks in an emergency so that you can keep the pipeline running while IT is being cleaned up.

I haven’t actually read the the, translated the Danish law, but apparently in Denmark, there’s a recent law in the last 12 months saying exactly the same thing.

You know, the the TSA applies to pipelines and rails. In Denmark, it applies to critical infrastructure. And it says in an emergency, you have to be able to separate. They call it “islanding,” the industrial control network.

And as Chris points out, it can be effective, but it relies on really rapid intrusion detection and rapid response, because, as Chris said, the bad guys had been testing the OT firewall; if they had had just a little bit longer, they could have got through. So even though it's imperfect, it is a measure that I'm seeing increasingly required of critical infrastructure operators and recommended to non-critical operators, a measure that helps, especially on the incident response side.

Andrew Ginter
Have you got another example for us? I mean, three is a magic number. You’ve given us, sort of two sets of insights. What what else have you got for us and in terms of lessons learned?

Chris Sistrunk
Yeah. I can name a few other lessons learned from just about any attack. Make sure that you have, at the very least, Windows systems with antivirus. In a lot of cases, the OT network didn't have even basic antivirus, never mind an agent or an EDR solution.

If you have those, great, but if you don't have any antivirus, you need to get at least a supported version of Windows, or of your operating system, with antivirus.

Having good backups, having good vendor support. Now, this last incident we responded to was using a living off the land attack.

We responded to an electric utility in Ukraine in 2022. It was a distribution utility; the attacker came in through the IT network and deployed their typical wiper malware. This was the group APT44, the Sandworm team, which has been targeting critical infrastructure around the world for quite a while.

And they were able to pivot to the SCADA system and use a feature of the SCADA system to trip breakers, using a tool that was built into the SCADA system itself.

They just gave it a list of breakers to trip and called that executable in the system to trip those breakers on behalf of the attackers. So the lesson learned here is that targeted attacks are not necessarily going to use malware; they're going to use the features, or the inherent vulnerabilities, of an OT network.

Stealing valid credentials, like an operator workstation account or an engineering administrator account. If an attacker can spearphish an engineer or a network admin on the IT network, and you don't have good segmentation of roles from IT to OT, then that attacker is going to use every one of those tools to evade detection and bypass your normal detections,

because they're coming in as a valid user. So the lesson learned there is to limit the amount of administrative access. This is role-based authentication, right? Does the person who got promoted, and is now in a different department, still need admin rights?

Does this person have enough control for just their area only? Are the area responsibilities too wide? And now we say, OK, we need to reduce the amount of admin.

Do we require two-factor authentication or even hardware two-factor authentication to really reduce the attacker down to an insider threat?

Because remotely, that’s very hard to do, to bypass hardware token-based two-factor authentication. And so there’s some there’s some living off the land guides out there.

The U.S. government, the DOE, has put out a threat hunting guide for living-off-the-land attacks after the Volt Typhoon announcements last year.

But I would also go a step above and beyond that: learn good ways to detect anomalous logins, even from your own folks. If it’s outside a normal time, or outside a normal location, you’re really going to have to do some tuning on some of these detections.

And the only way to really test those is with a red team that’s trying to be quiet and not trigger your detections. That is what some of the more advanced asset owners and end users are doing. They’re leveraging red teams, hiring red teams like what we do at Mandiant, to come in and see if we can do living-off-the-land attacks to bypass their detections.

Nathaniel Nelson
Since Chris mentioned it but moved on before we can actually define it, let me just, for listeners, living off the land is the process by which an attacker, rather than using their own malicious tooling, would make use of legitimate software or functionality of the system they are attacking to perform malicious actions on it.

It’s been a growing trend in recent years, I believe, because it’s so effective in that it is so difficult to detect.

You know, you could spot malware with certain kinds of tools, but can you spot somebody doing things with legitimate aspects of Windows or whatever you might be using?

It sounds, though, like Chris is talking about detecting living off the land tactics, which seems difficult, Andrew.

Andrew Ginter
That’s right. I mean, I have been following living off the land to a degree. The short answer is you run an antivirus scan on a machine that’s been compromised by a living-off-the-land attack, and it comes up squeaky clean.

There’s nothing nasty on the machine. And what I heard Chris say is that this is because the bad guys are using normal mechanisms, especially remote access, to log into these systems as if they were normal users, and use the tools on the machine to attack the network, or to wait for a period of time until it’s opportune and then attack the network.

And what I heard him say is that because it’s a lot of remote access, you can detect this by focusing hard on your remote access system. You can prevent it by throwing in some hardware-based two-factor. That will solve a lot of the problem, not necessarily all of it. There are always vulnerabilities and zero days, but the two-factor helps enormously. It’s way better than not having two-factor.

But that’s preventive. On the detective side, he said, pay attention to your remote access. If normal users are logging in at strange times, that should raise a red flag.

If normal users are logging in from strange places, say the IP address coming in is from China, well, is Fred in China this week? No, he’s not. So what I heard was, one way to help detect living-off-the-land techniques is to pay close attention, in your intrusion detection system, to the intelligence that you’re getting about remote users logging in.
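To make that concrete, here is a minimal sketch of the kind of check being described: flagging remote logins that fall outside a user's usual hours or expected source countries. The log format, the file name, and the per-user baselines are all hypothetical; in practice this data would come from a SIEM or from the remote access gateway's own logs.

```python
import csv
from datetime import datetime

# Hypothetical per-user baselines: expected source countries and working hours (24h clock).
BASELINE = {
    "fred":  {"countries": {"CA", "US"}, "hours": range(6, 19)},
    "alice": {"countries": {"US"},       "hours": range(7, 20)},
}

def flag_suspicious(log_path):
    """Read a CSV of remote logins (user,timestamp,country) and report anomalies."""
    alerts = []
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            user, country = row["user"], row["country"]
            when = datetime.fromisoformat(row["timestamp"])
            base = BASELINE.get(user)
            if base is None:
                alerts.append(f"{user}: no baseline on file ({when}, {country})")
                continue
            if country not in base["countries"]:
                alerts.append(f"{user}: login from unexpected country {country} at {when}")
            if when.hour not in base["hours"]:
                alerts.append(f"{user}: login at unusual hour {when}")
    return alerts

if __name__ == "__main__":
    for alert in flag_suspicious("remote_logins.csv"):  # hypothetical export of remote-access logs
        print(alert)
```

Real deployments would learn the baselines from history and enrich source addresses with geolocation, but the principle is the same: the alert is about who logged in, from where, and when, not about malware on the endpoint.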

Andrew Ginter
So one more question. We talk to a lot of folks on the podcast. A lot of them are vendors with technology that we talk about. And sort of a consistent theme for most of these vendors, most of these technologies, is operational benefits.

Yes, the technology, whatever it is, is helping with cybersecurity. But often, this stuff helps with just general operations in sometimes surprising ways.

We’ve been talking about incidents and lessons, and a lot of what you do is incident response. Are there operational benefits that you run into, where people say, I did what you told me and everything is working more smoothly, not just on the security side? Do you have anything like that for us?

Chris Sistrunk
Oh, absolutely. And one of the things that I always tout is that looking at your network, looking at the packet captures in the network, can provide not just cybersecurity benefits, but these operational benefits.

You can see things like switch failures happening, TCP retransmissions happening, all this traffic, like maybe your Windows HMIs trying to reach out to Windows Update, but it’s blocked by the IT/OT firewall, or it may not have a connection at all. It’s trying to reach out. All this unnecessary traffic, or indications of improper configurations, misconfigurations, and things like that.

So just looking at your network with some of these tools that are out there, free tools, paid tools, ICS-specific tools, or IT-specific tools, it doesn’t matter. If you take any one of those, say even just Wireshark, and look in your OT network, you can get an idea of what traffic doesn’t need to be there, so you can eliminate it and make your improvements to the system.

And now I have better visibility. If there is an incident, it’s easier to detect whether there’s a cyber incident or whether something’s operationally wrong, like a switch failure or something.

And so there’s a really great benefit there. It also helps improve reliability. We’ve done an assessment at a company that had a conveyor belt that they were having problems with. If the conveyor belt wasn’t timed exactly right, if they had too much latency on the network, the conveyor belt would stop and all the things on the conveyor belt would just go everywhere, and it was a disaster.

So we just looked in the network: oh, you’ve got all these TCP retransmissions. And you look at the map in the software and say, oh, it’s coming from these two IP addresses.

Oh, we know what that equipment is. And we had the network person come over: oh, I’ve been trying to figure this out for weeks. And just using a tool like that, they were able to find and fix the problem, and they fixed their latency issue because of that.
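For readers who want to try this kind of triage themselves, here is a rough sketch using the scapy library to count likely TCP retransmissions per source/destination pair in a capture file. The capture file name is a placeholder, and the heuristic (a repeated sequence number on a data-carrying segment) is a simplification of the analysis Wireshark performs; it only illustrates the idea of turning a packet capture into a short list of suspect devices.

```python
from collections import Counter
from scapy.all import rdpcap, IP, TCP  # pip install scapy

def count_retransmissions(pcap_path):
    """Count data segments that re-use an already-seen sequence number per flow (a crude retransmission heuristic)."""
    seen = set()         # (src, dst, sport, dport, seq) tuples already observed
    retrans = Counter()  # (src, dst) -> suspected retransmission count
    for pkt in rdpcap(pcap_path):
        if IP in pkt and TCP in pkt and len(pkt[TCP].payload) > 0:
            key = (pkt[IP].src, pkt[IP].dst, pkt[TCP].sport, pkt[TCP].dport, pkt[TCP].seq)
            if key in seen:
                retrans[(pkt[IP].src, pkt[IP].dst)] += 1
            seen.add(key)
    return retrans

if __name__ == "__main__":
    # "ot_capture.pcap" is a placeholder for a capture taken from a span/mirror port.
    for (src, dst), n in count_retransmissions("ot_capture.pcap").most_common(10):
        print(f"{src} -> {dst}: {n} suspected retransmissions")
```

In the conveyor belt story, a report like this is essentially what pointed at the two problem IP addresses.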

So going back to incident response: having these things, having an incident response plan. A lot of OT already has this, because of disasters, fires, floods, storms, spills, air releases, safety issues, and that’s all part of their normal disaster recovery or incident response plans.

If you already have one of those, you’ve already done 90% of the work to have a cyber incident response plan. You just need cyber incident response added to that. So that’s the whole premise behind things like ICS4ICS, the Incident Command System for Industrial Control Systems.

And it means having that, say, chief person in charge of cybersecurity for a site, for a paper mill, for a power plant, for a manufacturing facility, even though that’s not their day-to-day job all the time. If you have multiple plants, have multiple leads for those plants. Every decision will still go through the plant manager, the general manager of the plant or site, but at least you have someone that is in charge of cybersecurity.

Just like you have a designated fire watch person or anything else. So if you take the safety culture that we’ve known about for over 100 years and mold your cybersecurity culture to fit with that, things will make a lot more sense. We’ve already invented this. We’re not reinventing the wheel here. Now we’re just including another paradigm of cybersecurity, network security, and endpoint security into these things that we have been doing.

There’s a fire. Okay, let’s put it out. Incident response. And if you have a plan, that’s great. If you don’t have a plan and you run around, that’s not good. So if you have a plan, you can at least prepare for it. And sometimes that’s the win. Being prepared is better than not being prepared.

Andrew Ginter
Well, this has been tremendous. Thank you, Chris, for joining us. Before we let you go, can you sum up for our listeners? What are the key points we should take away here?

Chris Sistrunk
Sure. If you didn’t listen to a single thing I said, you can listen to these three things: collaborate, plan, and practice. So collaborate. Get your IT teams talking to your OT teams, talking to your manufacturers, and identify the right roles within each of those.

And make sure you get together and talk about these things. Have some donuts and coffee. So collaborating, knowing who is in charge of what, is half the battle, knowing who to call when. Then plan. Having an incident response plan, or including OT security in your incident response plan and/or engineering procedures, that’s going to help when an incident impacts OT directly or indirectly.

And then practice. You can even start with a simple question: hey, what would we do in an incident? Or go as far as having a tabletop exercise, or collecting security logs from a PLC. How long does that take? How many devices do we have? If the general manager asks how long it is going to take to pull all the logs from all of our systems, you won’t have to say, I don’t know.

You’ll just know: this will take two hours and 45 minutes, because we’ve tested it. So collaborate, plan, and practice.

If you need help with OT security or IT security, we do that at Mandiant. We offer an incident response retainer that covers IT and OT. There’s no separate retainer. If you have an IT incident and don’t need OT, not a problem. If you have an OT-only incident, not a problem. If it’s IT, cloud, and OT all at the same time, we can help you, around the world, 24/7.

And lastly, if you want to learn more about this, you can reach out to me, Chris Sistrunk at Google.com, my email, LinkedIn, social media, Bluesky. And check out some of our blogs on the Google Cloud or Mandiant security blog.

We have great content out there that is actually actionable, not marketing fluff. These are actual, actionable reports.

The next M-Trends report is coming out next week, RSA timeframe, end of April. So that’s a free report.

It’s a great report to look at and gain some insights on what we’ve been responding to over the last year. And with that, I appreciate it. Collaborate, plan, and practice.

Nathaniel Nelson
Andrew, that just about concludes your interview with Chris Sistrunk. Do you have any final words to add on to his to take us out with today?

Andrew Ginter
Yeah, I mean, Chris summed it up: collaborate, plan, and practice. What I heard, especially earlier in the interview, was do the basics, guys. Some people call it basic hygiene. It’s basically: do on an OT network as much as you can of what you would do on an IT network.

Put a little antivirus in on the systems that tolerate it. Get some backups. Get some off-site backups so that if the bad guys get in, they can’t encrypt the off-site backups; they’re somewhere else. Look for the vendors leaving behind internet connections and get rid of them.

And in terms of living off the land, he gave some very concrete advice that I’d never heard before, saying, look, these people are coming in as users. Get two-factor. Two-factor will do a lot to break up living-off-the-land attacks.

And in your intrusion detection systems, look hard at what your remote users are doing. And if it seems at all unusual, that’s a clue that you’re being attacked. And, in terms of his collaborate, plan and practice, I really liked the fire warden analogy.

Say, look, if you have an industrial site that is flammable, your fire warden does not just sit on their hands until the place bursts into flames. Okay? The fire warden is someone who’s active, actively looking, managing, raising the alarm when they see dangerous practices in this flammable plant. It’s not just a reactive position, it’s also a proactive position.

And we need that for cybersecurity, because basically every site is, in a sense, a flammable cybersecurity situation.

So it’s not just that they sit on their hands until there’s an incident and then they’re in charge. They are actively looking around, just like a fire warden would, and saying, we shouldn’t be doing this. My job is not just to put the fire out when it occurs, or coordinate putting the fire out.

My job is to help prevent these things. And so I love that analogy. That makes so much sense. Anyhow, that’s what I took from the episode.

Nathaniel Nelson
Sure. Well, thank you, Chris, for speaking with us. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a great pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there who’s listening.


Experience & Challenges Using Asset Inventory Tools – Episode 138 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/experience-challenges-using-asset-inventory-tools-episode-138/ Tue, 27 May 2025 16:28:48 +0000 https://waterfall-security.com/?p=32564 In this episode, Brian Derrico of Trident Cyber Partners walks us through what it's like to use inventory tools - different kinds of tools in different environments - which have become almost ubiquitous as main offerings or add-ons to OT security solutions.



Experience & Challenges Using Asset Inventory Tools – Episode 138

Asset inventory tools have become almost ubiquitous as main offerings or add-ons to OT security solutions. In this episode, Brian Derrico of Trident Cyber Partners walks us through what it's like to use these tools - different kinds of tools in different environments.



“Trying to build a vulnerability management program when you don’t know what’s out there is a fool’s errand…you’re never going to be able to understand your total risk.” – Brian Derrico

Transcript of Experience & Challenges Using Asset Inventory Tools | Episode 138

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome listeners to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions.

He’s going to introduce the subject and guest of our show today. Andrew, how’s it going?

Andrew Ginter
I’m very well, thank you, Nate. Our guest today is Brian Derrico. He is the founder of Trident Cyber Partners, and he’s going to be talking about using asset inventory tools. I mean, we’ve had a lot of people on, vendors mostly, talking about what’s available and how it works.

He’s going to look at the problem from the point of view of the user using these tools and why using these tools turns out to be a little harder than you might expect.

Nathaniel Nelson
Then without further ado, here’s your conversation with Brian Derrico.

Andrew Ginter
Hello Brian, and welcome to the podcast. Before we get started, can I ask you to, say a few words of introduction, tell us a little bit about yourself and about the good work that you’re doing at Trident Cyber Partners.

Brian Derrico
Good morning, Andrew. I’m Brian Derrico. I’ve been in the critical infrastructure sector for about 15 years. I spent my entire career at a large utility solely focused on the cybersecurity requirements for nuclear power plants.

And in my last role there, I was the program manager responsible for the entire cyber program across the fleet. Again, all really dealing with OT-type stuff and regulatory requirements.

I left in October and started my own business, Trident Cyber Partners, mainly aimed at helping other critical infrastructure sectors with their cyber problems.

Andrew Ginter
Thanks for that. And our topic eventually is going to be asset inventory. But, let me ask you, you’ve spent a lot of time working at nuclear.

You’ve worked in very old plants, and you’ve done some work recently with very modern plants. Can you talk about, in terms of automation, what’s the difference between sort of very old automation and very new automation that you’ve been exposed to?

Brian Derrico
So there are a lot of similarities, right? At the end of the day, whether it’s a new plant or an old plant, it is still a nuclear power plant. So there is a nuclear reaction that is heating some water. That water is heating some other water in a secondary loop that is flashing to steam, spinning a turbine, making electricity.

So that is nuclear power 101. It doesn’t matter how new or old the plant is. They’ve all generally worked that way for a long, long period of time. To your point, what you do see is the amount of digital assets in those plants is drastically different from new to old.

So in my previous role, I had done some industry benchmarks to try and figure out what is sort of the average number of digital devices that are in a plant. And it came in around 1700 or 1800 per unit.

These new plants that they’re building, they’re an order of magnitude larger than that. There are potentially 10,000 devices on a single unit because everything is digital.

I don’t know how many people have had an opportunity to tour a nuclear plant. I would certainly advise it if you have that opportunity; it is a really, really cool thing to see. And most plants are largely analog. There is a lot of analog equipment, a lot of analog indication.

And in the new plants, that’s not the case anymore. So trying to keep track of all of your digital devices becomes a very important and critical problem.

For example, in some of the older plants that we worked in, as you’re going through getting asset inventory, you open up the cabinet, you kind of look for what is digital, what are the blinky lights, and you go through and that is generally a manual way that we did a lot of asset inventory.

These newer plants, you open the racks and everything inside is digital. Everything inside could be considered an attack pathway. And there were some discussions, and there’s some thought process out there, that essentially calling locations critical is going to be an easier way to do it, because saying this entire rack, no matter what’s in it, is going to be a critical digital component is easier than trying to label and inventory all 50 or 60 devices. So that was a thought process that was considered.

But again, at the end of the day, every device was considered on a case-by-case basis. But it kind of gives you an idea of just the scale of how much digital equipment there is in newer plants nowadays.

Nathaniel Nelson
Andrew, I’m glad we’re getting the opportunity to talk about nuclear because it seems like a pretty relevant and highly important field.

And yet it never seems like we get a guest on who wants to talk about it. So where does nuclear stand in the panoply that is industrial security for you?

Andrew Ginter
Well, we’re going to be talking mostly about asset inventory, but let’s talk about nuclear for a while. I mean, Brian said a few words. In a sense, he’s lived a lot of this stuff without even knowing how unusual it is.

Nuclear is an extreme. When we talk about worst-case consequences of compromise, what’s the worst case, the worst thing that can happen in a coal-fired power plant? A boiler blows up, people die.

What’s the worst thing that can happen in a nuke? The nuclear core explodes, Chernobyl, and hundreds of square kilometers become unlivable for centuries.

Oh, that’s very bad. So the consequences drive the intensity of your security program, and nukes are an extreme. I mean, the only thing I can imagine that’s possibly more sensitive than nukes is, I don’t know, nuclear weapons, targeting systems, launch protocols. It’s just that extreme.

What does that mean for cybersecurity? Well, let’s start with physical security. In different parts of the world, there’s different rules. In a lot of the world, you need a security clearance to visit the site.

So in North America, you can get tours of the site. But in a lot of places, a lot of stuff is classified. I don’t have a security clearance. I’ve never seen network diagrams for a nuclear site. I’m guessing a bunch of this stuff is classified. It’s national secrets. It’s that intense.

On the cybersecurity side, again, I talk to people. We serve nuclear customers at Waterfall, and they do things that, again, seem extreme.

They might have all of their OT systems in one room, in one building, and all of their IT systems, all their IT servers, email servers and whatnot, elsewhere. They do have IT networks in nuclear plants. You need to schedule work crews. You’ve got to pay your people.

So they have IT and OT networks. And all of the IT servers are in a different room in a different building. Why? Because they cannot afford for someone, any time, someday, to make a mistake and plug a cable from an IT network into an OT asset. That’s completely unacceptable, cybersecurity-wise.

And so they physically separate it so that, as much as possible, they make these kinds of errors impossible. You can’t do it. You can’t plug the wrong cable when it’s in a different building.

Another example: you might imagine that there would be multiple security levels. You might imagine that the technology that controls the core, the control rods into the core that keep the core from exploding, is more sensitive than the OT systems that control the steam turbines. I mean, a coal-fired power plant has steam turbines.

Steam turbines are steam turbines, you would imagine. In fact, again, when I talk to these people, a lot of nuclear sites, in my understanding, have only two security levels: absolutely highest critical, and business, and nothing in between.

Again, why? Why would the steam turbines be protected to the same degree as the core control system? In part, it’s because of the physics of these systems. There are distant physical connections: the liquid from the core heats up the liquid that becomes the steam. And so there’s theoretically a risk that something happening to the steam turbines could leak back into the core.

But more fundamentally, these people just say we cannot afford to make mistakes with security. And so we’re going to dumb it down. We’re not going to have seven or eight or 13 security levels, where you have to remember which is which and apply the right policies to the right equipment.

It’s going to be absolutely critical, end of story. And whichever room you’re in, that’s the policy you apply. Again, as much as possible, they eliminate human error.

Regulations. I’m most familiar with the North American regulations. You might imagine, I mean, NERC CIP handles the power grid. If you fail to live up to your obligations under NERC CIP, what happens? You can be fined as much as a million dollars a day.

That maximum has never been levied, but you get fined. With the nukes, if they fail to live up to their regulations, they’re shut down. They lose their license to operate. It’s that simple. If you cannot operate safely, you cannot operate. Bang, you’re down. So again, intense attention is paid to the detail of cybersecurity and cybersecurity regulations.

Another example: I’m not aware of any nuclear generator... now, I don’t know all the generators in the world, but I’m not aware of any nuclear generator that has any kind of OT remote access, period.

Nothing remotely gets into OT. You want to touch OT, you walk over to the server room. So again, intense. In a sense, though, what I see of the nukes is that they are leaders in the cybersecurity field.

They do things extremely intensely. And as other parts of the field, other power plants, other refineries, other high-consequence sites, as the threat environment continues worsening, as cyberattacks keep getting more sophisticated, they look over at what nuclear is doing, and they pull one technique after another out of the nuclear arsenal and start applying it in their circumstance. So even if you’re not required to follow the nuclear rules, I would encourage people to read NEI, the Nuclear Energy Institute, 08-09 standard, or the NRC, Nuclear Regulatory Commission, 5.71.

I’d actually recommend NEI 08-09. It’s more readable. It’s got more examples. The NRC 5.71 is sort of more terse, saying, here’s the regulation, follow it. But they are leaders in the space. And over time, I see people drawing on their expertise and the way they do things.

Andrew Ginter
And our topic is asset inventory. And so, we’re talking about how much automation there is. We’re talking about how hard it is to count. Can we back up a minute?

In principle, the truism is you cannot defend what you don’t know you have.

And so that’s why we do inventory. Is that it or is there more to it? Why are we doing these inventories? What good is an asset inventory?

Brian Derrico
So it’s a great question, and I’m going to give two answers, right? One is on the nuclear space. The first answer is we have to, right? And sometimes that is an answer. I don’t think it’s a good one, but it is an answer. So we do have regulatory compliance around an asset inventory because, to your point, it does sort of fuel other aspects of your cyber programs, such as supply chain, vulnerability management, configuration management, et cetera.

The flip side is it’s just, it’s a smart thing to do, right? You can’t build a vulnerability management program if you don’t know what software is out there that you’re potentially vulnerable to.

So trying to build a vulnerability management program when you don’t know what’s out there is a fool’s errand, because you’re never going to be able to understand your total risk.

And that’s really the key is understanding your assets gives you the ability to understand your attack surface. And once you understand your attack surface, you can then figure out what are my vulnerabilities? What do I need to mitigate? What is a possible threat vector an adversary could use to attack this device or this process?

And you can’t do any of that without having the asset inventory first.

Andrew Ginter
This brings us back to our topic. We’re talking about asset inventory. We’re talking about tools. There are tools out there to do asset inventory. We don’t have to do a manual walk-down and count the blinky lights in the cabinets.

Do the tools not solve the problem? Is there still a problem when you’ve deployed one of these tools?

Brian Derrico
So there are a number of tools that do this, and some are better than others, right? Nature of the beast. But they do a great job of asset inventory. I currently do professional services for a software company, and a lot of their deployments in the OT space are generally for people that want to use the tool as their asset inventory.

Now, the issue becomes a couple of pieces that can come up often, and I saw this in nuclear all the time. A lot of those tools that we’re talking about depend on network traffic, right? They’re looking at source and destination, and they’re passively trying to piece together: these are the assets on your network, and this is what they do and how they do it. So one problem is going to be that you have assets that are not networked. If you have safety-critical devices, they may be isolated, so you’re not going to be able to deploy a tool to do that. You are going to have to manually enter those in and manually keep track of those in some way, shape or form.

And then the second piece is a lot of these tools that we talked about, they can’t just be deployed instantly. You can’t just throw a box in a rack and call it macaroni. There are architectural changes that have to happen to your network. You have to get traffic from switches. You have to open span ports. You have to deploy sensors.

And that’s where things can get a little difficult on the OT side of the house.
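As an illustration of the passive approach described here, the sketch below walks a capture file taken from a span port and builds a rudimentary inventory of which addresses were seen talking, with which MAC addresses, and on which server-side ports. Commercial tools do far more (protocol fingerprinting, vendor lookups, firmware identification); this only shows the basic idea, and the capture file name is a placeholder.

```python
from collections import defaultdict
from scapy.all import rdpcap, Ether, IP, TCP, UDP  # pip install scapy

def passive_inventory(pcap_path):
    """Build a minimal asset list (MACs and server-side ports per IP) from observed traffic."""
    assets = defaultdict(lambda: {"macs": set(), "ports": set()})
    for pkt in rdpcap(pcap_path):
        if IP not in pkt:
            continue
        # Associate each IP address with the MAC addresses it was seen using.
        if Ether in pkt:
            assets[pkt[IP].src]["macs"].add(pkt[Ether].src)
            assets[pkt[IP].dst]["macs"].add(pkt[Ether].dst)
        # Record the destination port as a hint of a service offered by the destination host.
        if TCP in pkt:
            assets[pkt[IP].dst]["ports"].add(("tcp", pkt[TCP].dport))
        elif UDP in pkt:
            assets[pkt[IP].dst]["ports"].add(("udp", pkt[UDP].dport))
    return assets

if __name__ == "__main__":
    # "span_port_capture.pcap" is a placeholder for traffic mirrored off the OT network.
    for ip, info in sorted(passive_inventory("span_port_capture.pcap").items()):
        print(ip, sorted(info["macs"]), sorted(info["ports"]))
```

It also makes the two limitations Brian raises obvious: a device that never talks on the monitored network never shows up, and none of this works until the traffic is actually mirrored to a sensor.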

Andrew Ginter
So work with me. Modern switches, any kind of managed switch, have got a span port or a mirror port.

You log into the switch, you turn on mirroring, and off you go. You can start seeing the traffic, and a lot of these asset inventory tools can start figuring out what the assets are based on their traffic.

I get that some systems are not on the network, the safety systems, that makes sense. But is it more complicated than that? I mean, I imagine you’re working with some older systems, older switches. Or do any of these plants use non-managed switches?

Brian Derrico
So I’m sure there are some non-managed switches out there. I would not be surprised if there are some hubs that are still out there and kicking.

While in theory, yes, opening up a span port is a simple idea, where it becomes difficult is that with a lot of these OT vendors, and even the environments that you’re in, nobody wants to change the system without the vendor’s involvement, because everybody’s scared about what the consequences are. Because again, this isn’t an IT system, this is an OT system. There could be some huge process changes and huge impacts and risk if whatever you want to do doesn’t go according to plan.

And that’s where I have seen the most struggle come from. You want to get a span port, you reach out to the vendor, you say, hey, this is what we’re looking to do. We just want to span this traffic, and the vendors don’t want to budge.

The vendor hasn’t deployed that. They don’t know what that’s going to look like. They tell you, hey, we’re going to have to refat the entire system after making this change. Now, meanwhile, is there going to be an impact?

No. We can look at switch utilization and see, hey, even if we double it, we’ll double the switch utilization, and you’re not going to see a huge impact, because your switch is only at five or 10% utilization.

But there just isn’t an understanding on the vendor side. So for some of these big control system vendors, it becomes difficult for them to bless, as it were, making these changes. And that’s where we have seen the most struggle.

And we even had projects where we had to provide a lot of the testing, and we provided: this is what needs to happen, because the vendor just didn’t have the knowledge.

And I think as time goes on, for those control system vendors that are out there, that’s going to be more and more of an issue, because more and more of their deployments are going to have a requirement for some form of higher detection capability. We can’t just say these things are in an OT environment, so they’re safe. That is just not the case, right? There needs to be a higher level of detection, and the vendors need to be more willing to work with us. As time goes on, I think it’ll be easier, but retrofitting this sort of technology into existing systems becomes increasingly difficult, because nobody wants to touch a system that isn’t broken.

Andrew Ginter
So, a couple of quick points there. Brian used a couple of acronyms people might not recognize. He said you might have to refat the entire system. What’s that? FAT is factory acceptance test.

It’s: set everything up and test every function of the system, emergency recovery, every function of the system, and make sure that it meets the requirements that were laid out when you issued the contract to get the system built.

It typically takes days. You have to shut the plant down to do it. So nobody wants to refat anything. So that’s what the vendors are threatening, saying, well, if you make a change that we haven’t tested, we have to retest it, don’t we?

Another point he made was about bandwidth. For anyone who’s not real familiar with how mirror or span ports work: you’ve got a switch with, I don’t know, 24 ports on it, 48 ports.

It has to be a managed switch. You log into the switch with a username and password and you can configure the switch. And one of the things you can configure is what’s called a mirror port or a span port.

It’s a port, or multiple ports, where you send copies of stuff. So typically, if you’re going to do an asset inventory, you configure one port and say: every message that anybody sends to anybody else on the system, send a copy of the message out this port.

And now the asset inventory system can look at the messages and say, oh, there are IP addresses in use, I wonder what kind of machine this is, it’s using this TCP port number, and it figures out what kind of stuff is on the network based on the network traffic. And the mirror port gives you that traffic.
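As a concrete illustration of what "turning on mirroring" looks like, here is a Cisco IOS-style configuration fragment. The session number and interface names are placeholders, and other switch vendors use different command syntax for the same idea.

```
! Cisco IOS-style example; syntax varies by vendor and platform.
! Copy traffic seen on the source port(s) out the destination port,
! where the asset-inventory or intrusion-detection sensor is plugged in.
monitor session 1 source interface GigabitEthernet1/0/1 both
monitor session 1 source interface GigabitEthernet1/0/2 both
monitor session 1 destination interface GigabitEthernet1/0/48
```

The sensor on the destination port only receives copies of frames; it is a passive observer, which is part of why this approach is attractive in OT networks.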

And the throughput consideration is, I thought, and now I’m not an expert on switches, I assumed that in modern switches, they have ports, 24 ports out the front, and every message that comes in goes onto a backplane. It’s a very high-speed backplane.

And I thought that the message went to every one of the other ports, and the ports decided, do I send this out or not? And so it would go to the mirror port as well. That’s what I assumed. And so turning on the mirror port would not, in fact, increase the amount of traffic on the backplane, because every message is visible to every port.

But I didn’t get clarification from Brian. What it sounds like is that on at least some of the switches he’s dealing with, if you enable the mirror port, then the source port, if port A is sending a message to port B, first puts the message on the backplane addressed to port B, and a second time puts the same message on the backplane addressed to the mirror port, because it’s been configured to send everything to the mirror port. And that would tend to double the amount of traffic on the backplane.

But these backplanes are massively high speed because they have to support all of the 24 ports simultaneously. So he’s saying, look, your average backplane is barely loaded and doubling the load is immaterial.

What he did not say was that configuring the switch causes the switch to malfunction. I would imagine ancient switches that were around sort of at the beginning of the concept of mirror ports and span ports might have defects in their software, such that if you turn on the mirror port, it might malfunction. But he didn’t say that. I forgot to ask him. And the fact that he didn’t say it says to me he’s never run into it, or he would have mentioned it. So I’m putting words in his mouth there, but I’m guessing that’s not so much a concern. The concern is throughput. The concern is testing.

People worry about things working the way they’re supposed to if you make a change that has not been anticipated. This is the essence of the engineering change control discipline that is, again, used intensely at nuclear sites, and used, but maybe just a little less intensely, at other critical infrastructure sites.

Andrew Ginter
So work with me. In the modern day, you’re saying, the control system vendors don’t get asset inventory. I mean, span ports, mirror ports, they’re also used for intrusion detection systems.

This is what Dragos uses. This is what Nozomi uses. The six pillars of the cybersecurity framework, the NIST framework, include detect, respond, recover. You’ve got to be able to look at what’s happening on the hosts. You’ve got to be able to look at what’s happening on the networks.

Really, the vendors in the modern day don’t get this?

Brian Derrico
And I'll give credit where it’s due: some do get it better than others.

However, there have been some vendors we’ve worked with that did not want to make any changes, because they just wanted to give us the same system that they gave us 20 years ago, with one version higher than what we deployed, again, decades in the past.

And, when pressed, while the people on the vendor side are experts in what they are doing, they are experts in safety design, they are experts in PLCs and how all of these things talk together.

They’re not IT people. So when you start talking, hey, I want to open up a span port, it’s different. They don’t understand. They think it’s going to cause an impact to the system. Meanwhile, as people with an IT background, we can see that, hey, you’re using managed switches, you can enable a span port.

The inputs are 100 meg. And even if all of your PLCs are completely maxing that throughput, the backplane of the switch is going to be nowhere near full utilization, and even doubling that, you’re not going to see a decrease in performance. It just takes a long time to get the vendors on board, and again, we even offered to do some testing and show what the utilization changes were.
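To put rough numbers on that point, here is a back-of-the-envelope calculation. The port count and speed follow the example in the conversation; the switching-fabric capacity and the typical utilization figure are assumptions for illustration, so check the actual switch's datasheet and your own traffic measurements.

```python
# Back-of-the-envelope check of mirror-port impact (figures are illustrative assumptions).
ports = 24
port_speed_mbps = 100            # 100 Mbps access ports, as in the example above
fabric_capacity_mbps = 16_000    # assumed 16 Gbps switching fabric; check the datasheet

worst_case = ports * port_speed_mbps      # every access port saturated at once
typical = worst_case * 0.10               # assume the OT network runs near 10% utilization

for label, traffic in (("worst case", worst_case), ("typical", typical)):
    mirrored = traffic * 2                # mirroring copies each frame onto the fabric once more
    share = mirrored / fabric_capacity_mbps
    print(f"{label}: {traffic:.0f} Mbps -> {mirrored:.0f} Mbps with mirroring "
          f"({share:.0%} of the assumed fabric capacity)")
```

Even in the unrealistic worst case, the doubled load stays well below the assumed fabric capacity, which is the argument Brian says he had to back up with test data for the vendors.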

And we have seen that; again, some vendors are better than others. But I feel like, at the end of the day, it’s: we just want to give you the same system that you’ve already had. And making changes to that is scary.

And: we’re an isolated system, so we don’t need to deploy a lot of that technology, because we’re just going to stay isolated and not connected to anything. And the reality is that isn’t as effective either, because while you lose the network attack path, you still have several others, such as physical, supply chain, and portable media.

So having detection capability is, in my opinion, worth the risk of plugging that thing in, as long as you have a sound architecture. And that’s where some of the struggles begin, with changing that mindset on the vendor side.

So for example, some of the control system vendors, where there are workstations and stuff there, they understand that, yes, there are detection pieces. You’re going to deploy some level of network intrusion detection.

You’re going to deploy some level of SIEM agent, right?

So I need to send syslog, and we’ve had good luck, again, with particular vendors there. Some vendors will actually, included with their control system, also include a security suite.

So they will have their own HIDS, their NIDS, their SIEM, and that’s all included. They have a patching server that distributes Microsoft quick fixes and all that stuff. It’s great.
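For context on the syslog piece, the snippet below shows one minimal way a host in the control network could forward an event to a SIEM collector using Python's standard library. The collector address and the event text are placeholders; in practice this is normally handled by the vendor's agent or the operating system's own logging service rather than custom code.

```python
import logging
import logging.handlers

# Placeholder SIEM collector address; UDP/514 is the traditional syslog transport.
handler = logging.handlers.SysLogHandler(address=("siem.example.local", 514))
logger = logging.getLogger("ot-host")
logger.setLevel(logging.INFO)
logger.addHandler(handler)

# Example event; in a real deployment the vendor's agent or the OS logging service emits these.
logger.warning("Engineering workstation EWS-01: configuration download to PLC-07 started by user 'jdoe'")
```

The point is simply that OT hosts push their security-relevant events to the same place the rest of the detection stack reports to, so the SIEM can correlate them.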

However, when you get to that lower level, your PLC-type stuff, again, we were working with a PLC vendor and they would not budge. They did not want to change their design.

They thought that with the switch there would be a loss in timing of communication, which would affect the safety-related aspect of the design, and they did not want to budge.

And it took two years for us to work with them, for them to understand that we have requirements. When the programs were implemented, specifically across nuclear, it was understood that you’re not going to go in and bolt this stuff onto existing systems. But when you’re starting fresh, when you’re building a system from the ground up, it has to have all of these components. There is no longer an excuse to say, oh, it’s already working, we’re not going to go play around with it, that’s obviously going to cause issues.

Everything has to be baked in from the ground up. The cybersecurity piece has to be foundational. And again, with the PLC vendors, we found it to be, with one particular vendor, very difficult for us to get that through. It took a number of people trying to walk their PLC engineers through why this is okay: we promise, and here’s some data to back it up.

And they finally did agree to use the architecture that we had specified from a design perspective.

Andrew Ginter
So we sweat blood, we fight with the vendors, we get our asset inventory system deployed, we augment it with manual inventory for the air-gapped or isolated networks, and we use it for managing patches and vulnerabilities.

Is there anything else we use it for?

Brian Derrico
Absolutely. To your point, vulnerability management’s a big one, right? Because I think at the end of the day, your asset inventory is going to give you what your risk profile is, what your attack surface is.

Vulnerabilities are one part of that. There is another piece of it that is supply chain, right? So we talked about that a little earlier: being able to understand what are the important devices that I am going to procure, and to procure those with certain sets of requirements. That’s also critical.

Another thing that we would use it for is configuration management. So understanding what your configuration is. You can build tools, you can use tools, that tell you this is the configuration on the device.

And some of those tools out there, some of those network intrusion detection systems that are OT-centric, can also give you alerts and understanding of when changes happen. You have a code download to a PLC.

Is that expected? And then also, this is the running code of that PLC, and this is what changed, and you would have visibility into all of that. And again, all based on your asset inventory and having as much information as you can about those assets.
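A minimal illustration of the configuration-monitoring idea: record a hash of a known-good export of a controller's project or configuration, and alert when a fresh export differs. The OT-centric monitoring tools Brian mentions do this by watching the network for download commands and diffing the logic itself; this sketch only shows the baseline-comparison part, and the file paths and baseline value are hypothetical.

```python
import hashlib
from pathlib import Path

def sha256_of(path):
    """Hash a configuration export so changes can be detected later."""
    return hashlib.sha256(Path(path).read_bytes()).hexdigest()

def check_against_baseline(current_export, baseline_hash):
    """Compare a fresh export of a PLC project/config against the recorded baseline hash."""
    current = sha256_of(current_export)
    if current != baseline_hash:
        print(f"ALERT: {current_export} differs from baseline "
              f"(was {baseline_hash[:12]}, now {current[:12]})")
    else:
        print(f"OK: {current_export} matches baseline")

if __name__ == "__main__":
    # Hypothetical path and hash; the baseline is normally recorded when a change is approved.
    check_against_baseline("exports/plc07_project.bin", "replace-with-recorded-baseline-hash")
```

Tying the alert back to the asset inventory is what turns "something changed somewhere" into "the logic on this specific controller changed, and here is whether a change was scheduled."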

Andrew Ginter
And if we could sort of bring it into the modern world: the latest automation systems have a lot of devices, and asset inventory counts them. This is great.

But there’s a lot more we need to do with the information. So you’ve talked about patching. We’ve had people on the show talking about SBOMs, software bills of materials, keeping track of sort of embedded software when vulnerabilities are announced.

Is there automation for tracking SBOMs and vulnerabilities and doing the mechanics of patching? Arguably, counting the assets is the easiest part of managing the inventory.

Is there more, sort of, that we can expect of modern tools?

Brian Derrico
I think there is. And vulnerability management is always going to be one of the most difficult things to conquer, because if you don’t have an updated software inventory, you’re never going to know what’s out there. You can do all the Windows patches in the world, but there are obviously tens and tens of thousands of non-Windows vulnerabilities, where if you’re running, again, insert whatever software product, right? There are huge vulnerabilities around a lot of those. So can you automate it?

I think it comes down to: you can automate the visibility. So you can at least understand and have up-to-date dashboards of these are the devices that you need to worry about, right? This particular device has five critical vulnerabilities. And then that gives your internal cyber engineers something to go after, to mitigate, to overall reduce that risk.

I also think it’s important from a business perspective to understand what we are going to do, right? On the IT side, there are a lot of patching processes, and there are SLAs associated with whether the vulnerability is critical, high, medium, low, et cetera.

On the OT side in general, OT is very averse to patching and mitigation. And I agree with that in some senses, and I don’t agree with that in other senses. And I think as a business, you need to understand what your tolerance for that risk is. What are you willing to accept?

And are there areas where, yes, we’re comfortable we’re not patching, because we have all these controls in place, and in order to get to the device, there are guns, gates, and guards in the middle of it?

But hey, maybe if something really, really big comes out, we are going to take care of it. And we do have to come up with that. So I don’t think there is a way to fully automate it, but you can at least automate the visibility.

So you don’t have people just manually searching NVD with a software list that they don’t even know is accurate. You can get that part out of the way. There are tools out there that will help you. And then it becomes a business decision, and sort of a business process, around: with all that information, here is your overall risk profile. What are you going to do about it?
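As one small example of automating that visibility, the sketch below queries the public NVD 2.0 REST API for each product in a hypothetical software inventory and prints how many CVEs come back. Keyword search is crude compared with proper CPE matching, the service is rate-limited without an API key, and the inventory list here is invented; it only illustrates replacing the manual NVD searches Brian describes.

```python
import requests  # pip install requests

NVD_API = "https://services.nvd.nist.gov/rest/json/cves/2.0"

# Hypothetical software inventory, e.g. exported from the asset-inventory tool.
INVENTORY = ["openssl 1.1.1", "log4j 2.14", "wincc"]

def cve_count(keyword):
    """Return the number of CVEs NVD reports for a keyword search (CPE matching is more precise)."""
    resp = requests.get(NVD_API, params={"keywordSearch": keyword, "resultsPerPage": 1}, timeout=30)
    resp.raise_for_status()
    return resp.json().get("totalResults", 0)

if __name__ == "__main__":
    for product in INVENTORY:
        print(f"{product}: {cve_count(product)} CVEs found via keyword search")
```

The output is only the visibility piece; deciding which of those findings actually matter on an isolated, compensated OT network is the business discussion that follows.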

And that becomes the deeper discussion, again, around what specifically the business is, how much risk tolerance you do have, how much risk avoidance you want to have, and you kind of go from there.

Andrew Ginter
Well, Brian, thank you so much for joining us today. Before I let you go, can I ask you, can you sum up for our listeners? What should we take away in terms of what we’re doing with asset inventory?

Brian Derrico
Absolutely. I would say asset inventory is the most important part of your program, because if you don’t know what assets are out there, you’re never going to be able to protect your organization from somebody who maybe knows what’s out there when you don’t.

So asset inventory is critical. You cannot build upon your internal program without understanding what your attack surface is. I think another point is there are tools to help you.

This is not something that we need to do manually anymore. You do not have to go into cabinets and count every single blinky light. There are tools and products out there that will help us get closer to where we want to be.

And then, at the end of the day, you still need an internal team that understands what the information coming back is. So if you do need help in deploying these tools, or selecting tools, or understanding what the risk is, I’d be happy to help.

You can connect with me on LinkedIn, Brian Derrico, I think I’m the only one. And I can help you with those problems, because, again, once we conquer assets and get the tools in place, a lot of pieces of the program become a lot easier.

And my goal and what I love is just driving efficiency. So let’s automate, automate, automate, use tools to kind of help us see what we can and just do what we can to protect critical infrastructure.

Nathaniel Nelson
Andrew, that just about concludes your interview with Brian. Do you have any final thoughts about what he talked about there that you can leave our listeners with?

Andrew Ginter
I mean, I think what I took away from here is the importance of inventory and the need for automation. I mean, if a modern nuclear generator has 10,000-plus devices in it that have CPUs in them that have to be managed, that have software that has to be managed, then, you know, I don’t know that a nuclear generator is that much more heavily instrumented than the average industrial thing. If you buy a steam turbine, a modern turbine is heavily instrumented. If you buy any kind of physical equipment, it’s going to be heavily instrumented. There are a lot of CPUs in a modern automobile.

And that’s something that fits in your living room. We’re talking about massive installations. I would imagine that a big refinery has as many as 100,000-plus devices if it’s been upgraded recently.

When was the last time you tried to manage a spreadsheet with 10,000 rows in it? When was the last time you tried to manage a spreadsheet with 100,000 rows in it? Just manually counting the blinking lights takes a long time.

Automation to me is essential. I mean, you look at the NIST cybersecurity framework, sort of the grand compendium of everything that is cyber. What’s the first thing you do? Well, the first thing you do is figure out who’s responsible for the program and, you know, assign budget and responsibility.

What’s the second thing you do? You take asset inventory. You’ve got to understand what you’re protecting. So this all makes sense: you need the inventory, and in the modern world, you need automation. There’s no way you can do this manually anymore. So my thanks to Brian Derrico; I learned something here.

Nathaniel Nelson
Yes, our thanks to Brian and Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.



How to Embed 30 Years of Security Funding into Capital Budgets – Episode 135 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/how-to-embed-30-years-of-security-funding-into-capital-budgets-episode-135/ Sun, 09 Feb 2025 09:50:33 +0000 https://waterfall-security.com/?p=30934 Looking for Security Funding for Capital Budgets? Ian Fleming of Deloitte explains how we can embed up to 20 or 30 years of cybersecurity budget into capital plans, rather than fight for budget every year.



How to Embed 30 Years of Security Funding into Capital Budgets – Episode 135

Most of us struggle to get funding for industrial cybersecurity. Ian Fleming of Deloitte explains how, because cybersecurity is essential to sustaining the value of industrial assets, we can embed up to 20 or 30 years of cybersecurity budget into capital plans, rather than fight for budget every year.



“Budgeting for OT cybersecurity shouldn’t be an afterthought for a capital project. Trying to integrate it into the life of the physical asset, I think is key.” – Ian Fleming

Transcript of How to Embed 30 Years of Security Funding into Capital Budgets | Episode 135

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome listeners to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today. Andrew, how are you?

Andrew Ginter
I’m very well, thank you, Nate. Our guest today is Ian Fleming. He is a solutions architect for OT, industrial control systems and cyber physical solutions at Deloitte. And today we’re going to be talking about how the money flows. We’re going to be talking about working the numbers, arranging the budget so that there is in fact budget for industrial security.

Nathaniel Nelson
Then without further ado, your interview with Ian Fleming.

Andrew Ginter
Hello, Ian, and welcome to the podcast. Before we get started, can I ask you to please introduce yourself and, you know, say a few words about the good work that you’re doing at Deloitte?

Ian Fleming
Yeah. Hello, Andrew. Thanks for having me. My name is Ian Fleming. I lead cybersecurity efforts with operational technologies at Deloitte. My team really focuses on helping organizations secure their industrial control systems, like building automation and physical infrastructure systems that are typically overlooked when it comes to cybersecurity. Prior to Deloitte, I worked really heavily in power. I did a lot of operational technology and cyber, was involved in a lot of NERC CIP work, and actually enabled a lot of the vulnerabilities that we’re trying to patch today. So I feel like I’ve come into the consulting side to pay penance for what I’ve done in industry.

Lately at Deloitte I’ve been working on integrating security as part of core operations, especially in industries and areas of civil government where the line between physical assets and cyber assets is becoming increasingly blurred. We also work to make sure our clients can effectively manage risk related to these systems, and ensure proper alignment of security investments with business goals. It’s good to be here.

Andrew Ginter
And our topic today is budget, you know, shaking the money loose, managing the money. We don’t get anything done in most businesses unless there’s a budget to get it done. And we’re going to talk about sort of the OT security budget, the industrial security budget. But can we start with IT? I mean, do IT teams have the same struggle for budget that we observe in the OT world?

Ian Fleming
That’s a good place to start. I mean, IT teams do face their challenges with budgets, but they’re often more straightforward nowadays when compared to OT. I think in IT, cybersecurity costs are generally tied to a business process or a system that the top floor of the office typically understands. Pretty clear. And often, like with cloud-based solutions where information is an asset, they’re easier to finance, and frankly, it works more from the top down of the organization. Where IT initially couldn’t get funding, they’ve been able to really structure their sales pitch toward real business goals, which is great. It’s something that, with OT, you’d think it would be easy for them to describe, but the top floor tends to just throw money at those problems whenever things break, versus IT, where they see it more as a strategic advantage. If you move data between, say, cloud providers, or you’re doing upgrades of infrastructure, it’s relatively easy; in IT, you can handle the issues in a more agile way. At the same time, IT has been rapidly transitioning from company-owned data centres, which were once inside of an office building, to cloud-based, more operational-expense-type models, where logical security, nowadays we refer to it as security as code, automates much of the security work in IT. Now these models do allow IT teams to dynamically shift their resources and manage security through software, which works really well in environments where assets are entirely virtual and easy to scale. And that’s the reason why operational expenses have really exploded in IT. But let’s look at the other side, like OT, where my clients are working and where I’m focusing some of my time at Deloitte. We’re dealing with physical assets like machines and sensors and industrial equipment, where failures mean real-world, space-and-time consequences.

It goes beyond just the information; it’s physical stoppage of production. So the problem is also compounded by the fact that OT cybersecurity often has to compete with the physical maintenance budget for operations, which is something IT typically doesn’t really see, especially with the advent of cloud and everybody in IT moving in that direction. As far as physical capital projects like industrial automation systems or infrastructure, they are also fundamentally different. Most of the projects in OT are architected, designed, budgeted and financed over really long life cycles, like 20-year life cycles before a refresh. When a capital project such as physical infrastructure is initiated, all costs, including materials, labour and maintenance (think of building a building, or heck, even just renovating the kitchen in your house), are budgeted upfront, and financing is typically secured through a large one-time capital expenditure.

Andrew Ginter
So Nate, you know, we’re talking about budgets here. A lot of our listeners, I’m guessing, are like me and have sort of a limited understanding of accounting and budgets. I mean, we tend to be focused on bits and bytes and buffer overflows and, you know, cryptosystems. So let me give you just a little bit of background here. When I started the episode, I had sort of a small business owner’s understanding of accounting and budgeting. You know, I’ve operated my own small business from time to time. And when I operated my own business, there were two kinds of expenses. There’s what’s called capital expenses and operating expenses. If you buy, let’s say, a delivery truck for a delivery business, the truck is going to deliver value to you. You’re going to use the truck for like a decade. And so the government generally requires you to declare that large expense as a capital investment.

Which means, you know, I always thought it was sort of a liability to declare that, because what I’d like to do is reduce the amount that I pay in taxes. And so if I could claim the entire cost of the truck against my revenues that year, as a small business owner, as a sole proprietor, I would pay less tax. The government says, no, no, you can’t do that. You have to assume a lifespan of three or 10 years or something for the truck, and you can only claim a fraction of the expense against your taxes, and reduce your taxes slowly over time, because the asset is reducing in value over time.

Andrew Ginter
Expenses like gasoline that you use up you know that day or you know the the over the course of the next week, you can claim the entire amount of the expense against your your your income. You can reduce your taxes. This is sort of the the naive model I had of of capital expenses versus operating expenses. You can claim all of operating expenses right away. It turns out that in big business, claiming capital costs over a period of time, let’s say the delivery truck over 10 years is an advantage because bit you know big business wants to show a profit every year, wants to control their expenses every year, control the expenses that they claim. And so if they have to buy you know a fleet of trucks, a thousand trucks in a particular year, and they’re going to last 10 years, then they don’t want to show that they have negative profit in the year that they had to make that, you know in the year that the money left the business, because it left the business that year to buy the thousand trucks. They want to show that, you know to to account for that expense over the the the life of the asset, the trucks, so that they can show a consistent profit.

So capital versus operating is different in small business versus large business. And in heavy industry, which is what industrial security is all about, there tends to be extreme pressure to reduce operating expenses. When you build a mine, you invest, I don't know, $3 billion before the first shovelful of ore with gold or whatever in it comes out of the mine. You invest a massive amount. That is your capital investment.

And once you've made that massive investment, you're generally under pressure to minimize the cost of operating that asset over the course of the next 30 years, because you're producing a commodity. Even gold is a commodity: you sell it at the world price, gold is interchangeable, nobody cares whether it's your gold or somebody else's gold, and you're competing with every other gold mine on the planet to produce it.

And gold gets more expensive to produce every year as the supply diminishes, so you have to produce it at a price that will still show you a profit. Operating expenses are always under extreme pressure in heavy industry, and heavy industry capitalizes its big investments. That's the accounting 101 I came into this with, and what I've since learned from Ian. So let's go back to Ian, and he can correct the naive understanding of accounting I've just explained to you.
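
To make the capital-versus-operating distinction concrete, here is a minimal sketch with hypothetical numbers (the revenue, fleet cost and ten-year life are illustrative, not figures from the episode). It compares expensing a fleet purchase in year one against depreciating it straight-line over the asset's life, and shows how depreciation smooths reported profit.

```python
# Hypothetical numbers: a business earns 2.0M/year before truck costs and
# buys a fleet for 5.0M that will last 10 years.
revenue_less_other_costs = 2.0   # $M per year
fleet_cost = 5.0                 # $M, paid in year 1
life_years = 10

# Option A: expense the whole purchase in year 1 (operating-expense treatment).
profit_expensed = [revenue_less_other_costs - (fleet_cost if year == 0 else 0.0)
                   for year in range(life_years)]

# Option B: capitalize and depreciate straight-line over the asset's life.
annual_depreciation = fleet_cost / life_years
profit_capitalized = [revenue_less_other_costs - annual_depreciation
                      for year in range(life_years)]

print(profit_expensed)     # [-3.0, 2.0, 2.0, ...]  one ugly year, then big profits
print(profit_capitalized)  # [1.5, 1.5, 1.5, ...]   the same total, smoothed over 10 years
```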

Andrew Ginter
So thanks for that. Reflecting on what you just said, the thing I think I caught is that there are roughly two kinds of budget: capital expenses and operating expenses. In the OT world, everybody wants to minimize operating expenses, and capital is kind of what it is. In the IT world, I think I heard you say that everything is becoming operationalized, meaning it all goes into the OpEx budget. But you're saying that capital budgets are still really important in the OT space. Is that the key difference between these two spaces? Is it the budgets?

Ian Fleming
Well, I think that's a really good question, and it's something I've been struggling with: how to operationalize an OT cybersecurity program when it's being funded through what, like I was talking about earlier, in IT is typically more of an operational expense budget. You don't really tie the ongoing maintenance of a computer system that's anticipated to run for five years to a capital expense. Replacing Oracle or a Salesforce application would be a CapEx, unless of course you're buying all the software as a service, so those lines have been greyed. But because those physical assets do have a long lifespan, the security investments, and when I say security I also mean the availability of those assets, are tied to those physical assets. So whether it's built into CapEx or drawn down over time, it needs to be sustainable from a resourcing perspective.

For instance, I worked in power systems for several years, in power delivery and distribution, and we had a financial metric called TIER. It means times interest earned ratio, and a CFO in a prior life taught me about it, because I had no idea how to tie a cybersecurity tool that was designed to protect an operational asset to the finances. The TIER measures a company's ability to meet its debt obligations by comparing its income before interest and taxes to the interest expense on its debts. Basically, over the life cycle of that asset, you don't want to be underwater on your loan. So the TIER ratio can indicate whether your organization has enough profitability from that operational asset to cover its debt obligations and ongoing operational cost. When I figured that out with the CFO, and this was several decades ago, I thought, OK, that's how I'm going to tie my cybersecurity program to a very specific operational asset. And when I say operational asset, it's one that has a cyber-physical component to it.

That actually helped me budget long-term OT cybersecurity measures against the asset, and that's some of the work I've been doing here at Deloitte: getting down to the low level of how an asset is being budgeted and financed in order to convince somebody to take on the risk of installing it and owning it, but also trying to get cybersecurity into the picture, so that the ongoing cost of protecting that asset from cyber threats is also encompassed inside, I'm sorry, the capital expense of that asset.
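
For readers unfamiliar with the metric Ian describes, times interest earned is conventionally computed as earnings before interest and taxes divided by interest expense. A minimal sketch follows; the $12M, $5M and $4M figures are illustrative assumptions, not numbers from the episode.

```python
def times_interest_earned(ebit, interest_expense):
    """TIER = earnings before interest and taxes / interest expense.
    A ratio comfortably above 1.0 means the asset's earnings cover its debt service."""
    return ebit / interest_expense

# Hypothetical asset: a generating unit earning 12M/year before interest and
# taxes, financed with debt costing 4M/year in interest.
print(times_interest_earned(12.0, 4.0))  # 3.0 -> earnings cover interest 3x over

# If recurring costs (maintenance, cybersecurity, insurance) rise and EBIT drops
# to 5M/year, the cushion shrinks and the financing case for the asset weakens.
print(times_interest_earned(5.0, 4.0))   # 1.25
```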

Andrew Ginter
OK, so with TIER you talked about tying costs and interest to income. So when you say you're tying cybersecurity to an asset, we're talking about an asset like a generating unit in a power plant, an asset that generates revenue, not something like a bolt or a PLC that represents only an expense. Is that the sort of size and class of asset you're tying cybersecurity to?

Ian Fleming
Well, cybersecurity can be tied to any single component or group of components inside the power plant, but I like to think of the system itself. I do a lot of model-based systems engineering at Deloitte as well, and we don't typically look at each individual component as being completely autonomous from the process it's designed to operate. So it would be the entire system. The whole idea of doing a capital improvement project is to account for the financial risk of performing the investment on the asset, but also to use proper engineering to reduce the total cost of ownership of that asset. If cybersecurity isn't tied into those models, it's very difficult to do anything but bolt it on, because the system is being designed without cybersecurity in mind. Take your example, a PLC.

If a PLC is designed into a power plant, let's just use that as an example, and there's no cybersecurity maintenance tied to it as part of the financial model for keeping that asset functioning, it's going to be very difficult in the future, over years, or even in response to a new threat or risk, to find financing for it. And then you're running into the patching problems, right? You've got to go through design assessments and everything all over again. However, if a device like a PLC was engineered and designed into that system knowing it had to accommodate a 20-year life cycle, and knowing there will be times when systematic updates and upgrades are needed, whether due to compliance and regulatory change, which is really difficult to plan for, or simply because you know for a fact the equipment itself is probably going to be replaced over time, you're in a much better position. I did one project for a client regarding a transportation tunnel, and they were extremely concerned about this, because they knew the technology was going to improve over time. So as part of the capital improvement project, it was a 50-year life cycle.

Instead of creating another capital project in the future, the budget for cybersecurity improvements and functional improvements over time was just built into the maintenance of that capital asset.

Andrew Ginter
OK, so you're saying that when there's a capital project, that's the time. Lots of people say you need to build cybersecurity into your systems beforehand, not afterwards, because it's always more expensive afterwards. What you're saying, in addition, is that you have to build the cybersecurity budget into the capital budget. At least that's what I'm hearing. Have I got that right? And if I may, you mentioned you've been working with building automation. When people try to make that tie, how is that working in the parts of the industry you're working with?

Ian Fleming
Sure. I do work a lot in government, with a lot of government facilities and those types of things. When it comes to building automation systems, HVAC, lighting, heck, even water treatment systems, it's clear that cybersecurity is an afterthought. We go in and there's not really a clear point of reference for even what assets are on the network, and we're having to delve into IT tools just to determine what physical inventory is out there.

And again, it goes back to the whole idea that in IT, data is the asset. It's easier to justify protecting the data, because you can move it if there's a failure. But in OT, if an HVAC system, refrigeration, or a food processing plant goes down, you're not just losing data, you're risking the physical assets themselves: spoiled food, damaged machinery. And the challenge is that physical operations are always under pressure to reduce those operating expenses, and cybersecurity is seen as an extra cost rather than a central part of keeping the system running safely and being available. Ironically, working in OT versus IT feels to me a lot like how cybersecurity was viewed in the early to mid 1990s. We didn't really have cybersecurity budgets back then. Everybody just looked at IT as operations: I just need the information, and the product was more important than keeping it secure. I feel the same way about a lot of these OT systems.

So, looking at building automation systems that don't really have a cybersecurity component: if we look at the way they're budgeted and brought online as a capital investment, and you design in that cybersecurity component, whether through contract or through the supply chain, that is what sets the budget. That's what gives us the big wins in integrating security as a core part of operations, particularly in industries where there's that vague line between where cyber can control or impact those physical assets. I mentioned the tunnel earlier; that's a great example. We recently worked on a tunnel maintenance project where they wanted us to address cybersecurity as a priority. They basically made us cyber-physical commissioning agents. Any type of PLC or logic controller that touched an Ethernet network, had some kind of routable protocol, or performed some function inside this infrastructure, they wanted us to look at from a design perspective. Knowing the TTPs we've seen happening today and in the past, we looked at how we can make those cyber components more modular where we know we're going to have to upgrade. Take passive network monitoring as an example: maybe we're doing passive network monitoring today, but in the future we might want to do active monitoring. We designed those hooks in, so that in the future it wouldn't require a massive heavy lift. It's akin to having a spare tire, some sort of designed resiliency built in, for cybersecurity purposes on an operational system.

Andrew Ginter
Let me chime in here, Nate. This was part of my learning curve as I went through the episode. Start with IT: one of the points Ian made was that almost everything is becoming an operational cost in IT. In years past, 20 years ago, if I bought a laptop as part of my small business, I would have to claim that as a capital expense. I could only claim a third of the cost of the laptop every year, and I had to keep track of it for three years. To me, it was annoying. But again, big business likes capitalizing things; it normalizes their profits. In the IT space today, though, in many jurisdictions, if you buy a laptop for $1,500, you just claim the thing right then and there.

It's not worth capitalizing; it's not big enough to drag the accounting out over three years. If you buy a server farm at a cost of $50 million and you expect a life of five years out of it, you're still expected to capitalize that. The thing is, almost nobody does that anymore. A lot of businesses don't have their own server farms anymore. They're renting the farms from someone else out of the cloud, and the rent comes out of the operating budget, not the capital budget, because someone else owns the asset. You can't capitalize somebody else's asset. So you don't have big capital expenses in IT anymore.

When you apply that principle naively in OT, you wind up fighting for operating budget every year, and you lose sometimes, and cybersecurity falls by the wayside, and we have all these problems. This is what we're trying to solve. The insight here is that you want to associate the cybersecurity cost with the asset you're protecting, and the asset is not the computer. The asset is the generating unit or the tunnel, a physical asset. To me, that's counterintuitive: it's an ongoing expense every year, yet it's part of the capital plan, the capital budget for the asset. Why does that make sense?

He didn't quite say it in this many words, but in chatting with him, he gave the example of a tunnel and maintenance. What do you maintain in a tunnel? There's equipment in a tunnel. In a long tunnel, you've got to blow air down there, or over time all you're left with is CO2 and nobody has anything to breathe, especially if you're driving through the thing. You have to drain water out of there, and if the tunnel is low enough to be below the water table, or under a body of water or a river, you really need strong pumps. So you've got a lot of equipment in these tunnels.

And what he's saying is that the cost of maintaining that equipment is part of the capital budget. And I'm going, really? And he says yes, the reason is that the value of the asset depends on correctly maintaining that equipment: the pumps for the water, the blowers for the air. If you don't maintain the equipment, the value of the asset declines. You can't use the asset anymore, or the equipment wears out faster than it's supposed to: it's supposed to last 20 years and it only lasts four, because you never maintained it. So the maintenance cost is an ongoing cost every year, but it's part of the capital budget, because it's essential to the asset. And what he's saying is that in the modern world, if you want to protect the automation that controls the equipment that's essential to your asset, that cybersecurity protection should be part of the asset's budget, not part of your cut-to-the-bone operating budget. Which was news to me. So this is the theme going forward.

Andrew Ginter
OK, so what I'm hearing is that we need to build ongoing cybersecurity costs into capital plans. It sounds contradictory: capital sounds like one-time, and cybersecurity is ongoing. Is this new, or is there already precedent for it in the OT space?

Ian Fleming
Oh, absolutely, and that's a really good point. Most OT systems are designed under capital to account for operational expense over the life of the asset. A contrarian example is what happened with the Aliquippa OT breach, the water facility out in Pennsylvania. It's a great example of the potential consequences of cybersecurity in these types of OT environments, these water treatment plants and water utilities, when it's not properly integrated into long-term financial planning and life cycle management. In the Aliquippa case, remote access was added to a PLC, that PLC was exploited, and it led to a breach. If we look at this, it's pretty obvious there was a functional upgrade requirement: they wanted to be able to remotely manage this PLC. What if that functional improvement to the capital asset had been managed as a CapEx project, instead of as an operational improvement out of an OpEx budget, the way IT just adds remote control or interactive remote access as a day-by-day function for regular maintenance of an information technology system?

If it had been designed and built into the system from the very beginning as part of the overall project cost, the change would have been memorialized in documentation. There would have been a change to the as-built description of how the system functions. The architecture engineer, the system integrator, all the people involved in the original design of the system could have included the interactive remote access feature in the initial setup, with a long-term security strategy that embedded that function into the life cycle of the asset. They could also have modularized that cybersecurity function for planned replacement as new remote access protocols came out. Finance might also account for the expected life of the asset, and if the cost was too much, or the risk appetite was low, say no, this isn't worth it. At least you'd have some document showing what the cybersecurity expenses over that asset's life cycle were going to be, and you could have accelerated the depreciation of that asset. It would have been a financial and risk management decision, versus a "hey, we need to enable interactive remote access on this logic controller." That makes it a lot easier to enforce cybersecurity policies and general operations policies, and to adjust to new standards while maintaining existing protections, without having to worry about annual budget constraints.

Say there's a bridge and you want to put more load on it. There are two ways to do it. You could just overload the bridge by changing out the weight limit sign, or you could reinforce the structure and its base to carry the additional load. In operational technology, it's pretty clear that the first option is very unsafe. In information technology it doesn't look that way, because there's no intrinsic tie between the system, the context of operations the system is working under, and the physical component; it's just, OK, we're installing interactive remote access here. So if a project is budgeted through a capital expense, it goes through a long-term plan of how long that asset is supposed to last and how it's supposed to be maintained. It shouldn't be an OpEx budget where we keep adding IT features without taking into account the context that system was supposed to be used in, and whether we're circumventing any of the controls by adding IT-based cybersecurity and

interactive feature sets to that asset. I feel, Andrew, that that's where, in most of my past life, I've gone wrong: taking the IT approach, which says, hey, it's a VPN, it's encrypted, there's nothing wrong, but not giving the operational context of the asset I'm modifying the attention it should get.

Does that make sense? I guess I'm trying to tie the OpEx to the CapEx budget and to the long-term asset. I've seen this over and over again; it has been a pattern, without using too many examples from clients I've worked with. Those were most of the problems. If you're modifying code in a virtual environment, there's very little physical consequence. But when you're doing it to an operational asset, it's a very, very different set of consequences.

Andrew Ginter
OK. So let's assume we can get cybersecurity costs for the life of the asset built into the capital plan for the improvement, whatever it is. You've got those costs built into the plan up front. How do you manage that financially? How do you pull money out of that over time, and what happens if you run out of the money you've budgeted?

Say costs have gone up, or you use the physical asset not for 20 years but for 30, and the number you planned to draw down against doesn't cover it. Is it a fixed number you're drawing down, where you have to guess right, or how does that work?

Ian Fleming
So, about the maintenance cost: I'm not suggesting it all needs to be CapEx. Maintenance is going to be an operational expense over the lifetime of the asset. What I'm advocating for is cybersecurity being part of the CapEx plan.

Think of designing any type of physical asset: you're going to have components that are made to be pulled out and replaced, like conveyor belts, and there's a maintenance plan for that asset. Now, what you just described is a problem that arises when the TCO, the total cost of ownership financial metric, remains static and doesn't account for business growth, added functional demands, asset improvements, those types of things over time. It's the whole overloading-the-bridge thing again: we wouldn't add load just by changing the weight limit sign, we'd have to reinforce the structure itself, because it's a safety issue. Taking the Aliquippa water example, the TCO on the operational expense side has to be dynamic; it has to adapt to the evolving functional demands of the asset, including the cybersecurity threat landscape. But the CapEx part, the capital expense, reduces the operational expense considerably if you plan for those systems to be replaced over time. You might have to accelerate the depreciation of the asset's life cycle.

You know, replace versus fix. If you don't build into the model the componentry that needs to be replaced over time, that's where you get into trouble. So I hear what you're saying; you threw me an interesting one there with "well, it has to be dynamic." I hope I'm being clear that I'm not advocating for the full operational technology security of an OT system to be entirely CapEx. The problem I've seen is asset owners deploying assets without even taking security concerns into account during the development and financing of the capital asset. Think of it this way: the asset is usually commissioned first, and then we go buy a product from cybersecurity vendor A and try to force it on top of that asset. A better approach would be: hey, we need to bring cybersecurity in on this, let's look at the model of the system, figure out where the more significant risks are, and design the system to account for cybersecurity over the long lifespan of the asset. It does create issues, because IT doesn't usually think that way. Remember, they're mostly operational: if Azure comes out with something tomorrow, they'll shift over to it. If you make a decision today with a capital expense, you have to be able to live with that solution for a specific period of time, based on your maintenance budget. Just like if a high-OpEx component fails on a truck, you're going to replace it to keep the capital asset alive, but there are better ways to deal with it than just continually raising that operational expense over time. I hope I'm being clear that I'm not advocating for the entire OT cybersecurity budget to be 100% in the capital expense of the asset; it's just that OT cyber needs a place at the table to influence the design of that OT asset.

Andrew Ginter
Okay, so let me chime in here again on my learning curve. There's a difference between a capital expense and a capital plan. A capital expense is one where you spend, I don't know, $3 billion over the course of eight months, and then you reap the benefits of that over the next 30 years, because you've built a mine, or a power plant, or something.

That's a capital expense: you spend the money once. A capital plan, in my understanding, is setting money aside in future budgets to deal with that asset. You've made a capital investment; you can't just spend the money and expect the thing to run. You've got to maintain this stuff, you've got to secure it, you've got to operate it. All of those costs are built into a plan for the asset.

And from time to time, the financial people have to reevaluate that plan. So for example, let's say we've just put a solar farm in, and we've got lithium batteries that we're using to store the power for overnight use. These batteries wear out every, I don't know, three years and have to be replaced, and the life of the solar farm is expected to be 20 years. If the price of lithium batteries shoots through the roof,

the cost of maintaining this asset has now shot through the roof. The numbers we put together saying the asset is going to pay for itself over 20 years don't work anymore. There may be a point where we say, we're going to shut this down and wait for three years and see if the price of lithium comes back to normal, or we're just going to shut it down and get rid of it. It just doesn't work anymore, because you're reevaluating the capital plan for that asset. In a sense, you might have the same thing with cybersecurity. It's not like you've put maintenance money in a bank account to be drawn down over 20 years, or put cybersecurity money in a bank account to be drawn down over 20 years, where you might run out of money. That's not how it works. It's part of the capital plan.

And if there's a sudden or permanent change in your expenses, for example a new regulation comes down that makes cybersecurity for this asset much, much more expensive than it used to be, so expensive that an asset that was only performing marginally to begin with

is now tipped over and just isn't profitable anymore, we might choose to shut the asset down. In my understanding, that's part of the capital plan for the asset, which needs to be reevaluated in light of current conditions. It's not part of the capital budget; the capital expense happened when you built the asset, but the plan persists. That's my limited understanding of how this works.
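
A minimal sketch of that reevaluation logic, using hypothetical numbers for the solar-farm example (none of these figures come from the episode): the plan stays viable while annual revenue covers recurring costs, and a jump in one recurring cost, whether battery replacement, cybersecurity or insurance, can flip the decision.

```python
# Hypothetical annual figures for the solar farm ($M/year).
revenue = 6.0
fixed_operating_cost = 2.5     # staff, maintenance, insurance, cybersecurity
battery_replacement = 1.5      # amortized cost of replacing batteries every few years

def keep_running(revenue, recurring_costs):
    """The asset plan stays viable while the asset covers its recurring costs;
    otherwise the owner reevaluates: mothball, retire, or re-engineer."""
    margin = revenue - sum(recurring_costs)
    return margin > 0, margin

print(keep_running(revenue, [fixed_operating_cost, battery_replacement]))
# (True, 2.0) -> keep operating

# Lithium prices spike and the amortized battery cost triples:
print(keep_running(revenue, [fixed_operating_cost, battery_replacement * 3]))
# (False, -1.0) -> the plan no longer works; reevaluate the asset
```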

Nathaniel Nelson
The more we talk about long-term capital plans and 20-year timelines and these amortized cybersecurity budgets, are we then accounting for patching and upgrading legacy systems over these many-year timelines?

Andrew Ginter
Yeah, I did not ask Ian that question, but what springs to mind is patching legacy systems: legacy automation, 20-year-old automation, because that's how long the power plant lasts and we put automation in place for that long.

The question is, should money not have been set aside to upgrade the automation? And the answer is yes: if you need to upgrade the automation to reap the benefits out of the asset, then you have to budget for that. But when we're talking cybersecurity, part of the problem is that it's an afterthought. Even if you plan up front and say, well, I'm going to take the system down every five years for essential maintenance, and that's the opportunity to upgrade everything, what do I do in between? There are new vulnerabilities a week after we turn the asset back on. Can we patch those things?

I think that comes down partly to whether it's in the plan, but partly just to cost-benefit. If you can put compensating measures in place, like strong network segmentation or device encryption, that achieve the security objective and are cheaper than the really expensive patching process, with all the engineering that's involved,

maybe you should use the compensating measures. Not because you have no other choice, but because you've rationally looked at costs and benefits and said it's way cheaper to use compensating measures than it is to try to keep the software up to date week by week as new vulnerabilities are announced. Again, I didn't ask Ian this, but applying the principles he's laid out, that's what makes sense to me.
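
One rough way to frame that cost-benefit comparison, with hypothetical annualized figures (the cost categories and numbers below are illustrative assumptions, not from the episode): if a compensating measure meets the same security objective, the cheaper annualized option wins.

```python
# Hypothetical annualized costs ($K/year) of two ways to meet the same security objective.
patching = {
    "engineering_and_testing": 300,   # regression testing, vendor re-certification
    "planned_downtime": 150,          # lost production during patch windows
}
compensating = {
    "segmentation_hardware": 60,      # amortized cost of segmentation devices
    "monitoring_and_upkeep": 40,
}

cost_patch = sum(patching.values())
cost_comp = sum(compensating.values())

# If both options reduce the relevant risk to an acceptable level, prefer the cheaper one.
print(f"patching: {cost_patch}K/yr, compensating measures: {cost_comp}K/yr")
print("prefer:", "compensating measures" if cost_comp < cost_patch else "patching")
```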

Nathaniel Nelson
And the other question I had: as Mike Tyson says, everybody has a plan until they get punched in the mouth. When you have a very long-term cybersecurity plan in place, how do you account for all the ways your needs are going to change, and the ways the threat landscape is going to change, unpredictably, left and right?

Andrew Ginter
That's a good question, and I think that's the difference between a capital expense and a capital or asset plan. An expense happens one time. The plan is something that lives for the life of the asset, and as conditions change, the cost of lithium changes, the threat environment changes, regulations change, the plan might have to be reevaluated. That's part of the answer. A second part is that engineers tend to be heavily involved in asset plans, because they're designing the asset, and they're the ones who have to design it to deliver value over a 10, 20, 30-year period. And this, I think, is why the engineering community, the majority of them that I see, it's not universal, but a majority, are really embracing cyber-informed engineering. It's an upfront process that shows them how to subtly change their designs in ways that take entire classes of risk off the table. The threat of a cyber attack causing a massive boiler to blow up in your face, you can take that off the table with a mechanical overpressure relief valve.

You can take other kinds of threats off the table by subtly changing the design of your network or the design of your automation. These changes, in a sense, are permanent; they take those classes of threat off the table permanently, and that simplifies long-term planning. So they're embracing CIE, and the asset plan is something that's reevaluated periodically over the life of the asset.

New conditions, the cost of maintenance, the cost of security, the need for security, the cost of insurance, all of these are built into the periodic reevaluations of the asset plan. You don't have to get it perfectly right 20 years in advance.

Andrew Ginter
It does make sense. What I'm hearing is this: we've had lots of guests on the show over the course of a hundred episodes talking about building cybersecurity into technical plans for the management of automation assets. What I'm hearing you say is that it's not one number and it's not one time; cybersecurity budgeting needs to be part of the ongoing budgeting, capital and asset management processes that large organizations have. Is that what you're saying?

Ian Fleming
Well, that is the intent of asset management in an operational construct, right? It's about influencing the budget, or influencing the books on the inventory you have on the shelf. That's where really good asset management and forecasting come into play, even from an OT or cyber perspective. It just feels like there's a disconnect because of the financing method, and because of the way things are operating with cloud and virtual software that isn't running inside a data centre

anymore. But we need to be realistic about how long these assets will last and how much it will cost to maintain their security. A really good parallel can be drawn from the history of maritime insurance and the shipping industry. I've been working with the MTS-ISAC lately, so I got a really good crash course on how shipping vessels are classified based on build quality and ongoing maintenance, which directly impacts their insurance premiums. It's actually one of the oldest forms of insurance; I think marine insurers were among the first insurance companies to come into existence. So for instance, ships that receive a high classification rating from a classification society, like an A1 rating from Lloyd's, are indicated to be of very high quality construction and well maintained. And at the last MTS-ISAC meeting I went to, they were even tying cybersecurity rating systems into vessels, which I thought was fascinating.

This is built to lower, maintain, or at least put some sort of marker on what the expected insurance premium will be, because the higher the rating, the lower the insurance premium. Conversely, ships with lower classification ratings from the society, or those that fail to maintain their rating, will have higher premiums, or they'll be considered out of class, which is uninsurable. The same principle would apply to OT cyber: if the asset outlives its original budgeted timeline, or cybersecurity costs increase due to the threat or regulatory landscape, the organization should have a process in place to reevaluate its cybersecurity posture, much like how a ship's classification rating is reassessed over time. If the asset loses its high rating because of neglected security, or because of features bolted on over time, the organization faces increased risk and higher costs for mitigating those risks and maintaining that asset.

Andrew Ginter
Cool. That's more than I thought I was going to learn about finance, so thank you for that. Can I ask you an open question? You've been doing this for a while. What else should we know? What am I not smart enough to ask you about here?

Ian Fleming
Oh, you know, the hard part... I go far back with you folks at Waterfall, in prior lives working in power, and I liked the approach with the data diodes. One thing that opened my eyes, working with Waterfall on other projects in my prior lives with utilities and industry, is the importance of collaboration between the IT leader, the operations people in the field working on things, and the finance team. Getting cybersecurity built into CapEx is not easy. It's a hard thing to describe, and I think I've done a pretty poor job of conveying it here today, but it does require clear communication about the risks, the benefits, and the long-term cost savings.

And I do feel that if we can explore this deeper... I hear a lot of business leaders saying the same thing. There's this disconnect between what's valuable and the IT cybersecurity metrics or KPIs, the number of vulnerabilities we're searching for or the number of threats that were thwarted. It's disconnected from actual production, from maintaining business relevance with cybersecurity.

The longer I've been in the industry, the more I feel that cybersecurity in general is more like quality in engineering, and I find myself focusing more on how to articulate the problem in financial terms and how to use historical references to tie all this together. It's not really about the whiz-bang latest and greatest vulnerability or attack. While those are sensationalized, it's really about how we sustain and adapt as a cybersecurity practice, specifically in operational technology, and really in cybersecurity in general.

It's about how we can look at this differently, and describe it differently, to get the attention the asset deserves, and how, in our profession, we can make things better. I don't know if that answered your question, but this has been top of mind for me for a while. I wish I could tell you about all the things we're actually involved in here, but the ones I did bring up during this call were published in the Wall Street Journal or other places, got some national attention, and were put in for some awards. So I just hope we can challenge everybody here to think a little differently about the cybersecurity problem, and about how cybersecurity as a practice can address some of the problems in the industries we serve.

Andrew Ginter
Thank you for joining us. Before I let you go, can you take us through the highlights? What are the key takeaways from our discussion here?

Ian Fleming
Yeah, sure, Andrew. The key takeaways I have are really just three. One: OT cybersecurity is fundamentally different from IT, mainly because it deals with physical assets that can't be moved to the cloud and can't be replaced or shifted easily. Two: budgeting for OT cybersecurity shouldn't be an afterthought to a capital project. Integrating it into the life of the physical asset is key; that's what's going to keep it budgeted over the life of that asset. And three: seek out collaboration, not just inside your IT circles, but with the operations people, the engineering firms that design these systems, and finance, the CFOs. I think that's really essential for the long-term success of a cybersecurity program. You have to have a resourcing plan, and resourcing usually starts at finance; that's how everything gets paid for and maintained over time. If you're struggling to secure funding for cyber, don't fight for OpEx every year. Try to design that cyber maintenance into modules for those capital projects from the start. It's a smarter way to secure your operations and a safer way to fund the ongoing maintenance of a physical operational asset over its operational life cycle.

Nathaniel Nelson
Andrew, that was your interview with Ian Fleming. Do you have any final words to take us out with today?

Andrew Ginter
Yeah, I learned something here about financing for big business. I learned that accounting for big capital expenses over time is actually a benefit: it stabilizes your profits. And I learned that large assets tend to have a capital plan that associates critical recurring expenses, like maintenance and insurance and cybersecurity, with the asset, so you don't have to fight for those allocations every year. You either spend the money or you retire the asset; the expenses are part of the asset.

I also learned that you kind of have to speak the financial language to make this happen. You've got to be able to communicate with the people who manage the budgets, to talk about assets and depreciation and maintenance, and use that language to work cybersecurity into the equation. The lesson is, if you can get cybersecurity into the asset plan, you're going to have an easier time managing cybersecurity and the other essential operational outlays for that asset over its life.

And Ian didn't mention it, but he's on LinkedIn. He has a lot of papers on this topic, and he does more general cybersecurity work too; this is just a piece of what he does. If you're interested in digging deeper on these or other cybersecurity topics, there's a whole OT section at the Deloitte website, and you can connect with Ian Fleming of Deloitte on LinkedIn. He'll be happy to point you to his writing and help you dig deeper into the topic.

Nathaniel Nelson
Well, thanks to Ian for speaking with you, Andrew. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.


The post How to Embed 30 Years of Security Funding into Capital Budgets – Episode 135 appeared first on Waterfall Security Solutions.

]]>
Where does IT Security END and OT Security BEGIN? https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/where-does-it-security-end-and-ot-security-begin/ Thu, 26 Dec 2024 15:06:48 +0000 https://waterfall-security.com/?p=29897 The standard answer to this questions is "The Consequence Boundary"...but which kind of consequences are we talking about? And aren't there different levels of consequence? We help define these to answer the question.

The post Where does IT Security END and OT Security BEGIN? appeared first on Waterfall Security Solutions.

]]>

Where does IT Security END and OT Security BEGIN?

Where does the consequence boundary between IT and OT actually rest? Where is the line in the sand that separates what is needed to secure OT, and what is needed to secure IT? Let's have a look...

Waterfall team

OT Security vs IT Security

Where does IT end and OT begin? Our research team frequently gets asked this question, and the answer has grown technically more complex over the years, but the basic principles that guide the answer to this question have remained the same. It all has to do with first answering: “What is your risk tolerance? What is your risk appetite?”

With today’s complex and interconnected world, the lines between Information Technology (IT) and Operational Technology (OT) are increasingly blurred. While both IT and OT rely on digital systems to function, their purposes, priorities, and security challenges differ drastically. Understanding these distinctions is critical for crafting effective security strategies.

…the basic principles that guide the answer to this question have remained the same. It all has to do with first answering: “What is your risk tolerance? What is your risk appetite?

So, Where does IT end and OT begin?

In Andrew Ginter’s book Engineering-grade OT Security, he explains that OT begins at the consequence boundary. This boundary will differ for different operations, but the idea is that the IT/OT boundary rests somewhere around where the consequences of the risks actually happening become unacceptable.

Some common unacceptable risks across most industries include any loss of human life, bodily harm, damage to machinery or equipment, and unscheduled downtime. The duration of acceptable unscheduled downtime can vary greatly between industries. For a power plant, it would be unacceptable to shut down operations for half an hour, but for a shoe factory, it might not be as dramatic an issue. Wherever that acceptable/unacceptable risk boundary lies, that is the IT/OT boundary for that business.

OT takes over from IT where the consequences of something going wrong become unacceptable.

The Purpose of IT Vs the Purpose of OT

IT systems manage data and support business processes, such as communication, record-keeping, and analytics. Think of email servers, financial systems, and cloud applications. In contrast, OT systems control physical processes and equipment, often in industries like manufacturing, energy, and transportation. Some classic examples of OT include robotic assembly lines, power generation, nuclear power plants, offshore oil platforms, and railway signaling systems.

Key Difference: IT security focuses on protecting data and business processes, while OT security focuses on protecting physical systems and ensuring operational continuity.

IT Priorities Vs OT Priorities

The core objectives of IT and OT security reflect their drastically different operational priorities.

Anyone who has casually walked by an ongoing cybersecurity classroom has most likely heard about the CIA Triad. This C-I-A concept formed the basis of cybersecurity when it was first introduced. It has grown partially outdated: attacks on data Integrity have not turned out to be that great a threat, while Confidentiality (i.e. data exfiltration) and Availability (i.e. ransomware) have remained very relevant. The triad for OT security differs, prioritizing safety, availability, and operational integrity. When securing OT, the concern for data going into the machines far exceeds the concern for someone accessing outbound operational data from the machinery.

IT Security Priorities:

  • Confidentiality – Protecting sensitive data from unauthorized access.

  • Data Integrity – Ensuring the accuracy and reliability of data.

  • Availability – Maintaining access to IT systems and data when needed.

OT Security Priorities:

  • Availability – Keeping physical systems running and avoiding downtime.

  • Safety – Ensuring the well-being of workers and preventing accidents.

  • Operational Integrity – Guaranteeing the correct operation of equipment and processes.

Key Difference: IT prioritizes confidentiality first, while OT prioritizes safety

The IT Threat Landscape Vs OT Threat Landscape

IT systems face threats such as malware, phishing, and data breaches. The goal of IT attackers is often to steal or encrypt important data, usually for financial gain or some sort of business disruption.

OT systems, however, are exposed to threats where the attacker will try and cause some kind of physical consequence such as machinery malfunctioning and causing downtime.

  • Cyber-physical Attacks – Manipulating equipment to cause damage or outages.

  • Ransomware – Encrypting and shutting down critical systems to extort money.

  • Insider Threats – Human errors or malicious insiders impacting physical operations.

Key Difference: OT threats can directly impact physical infrastructure and human safety, making them potentially far more catastrophic than IT threats.

System Lifespan and Upgrades

IT systems typically have shorter lifespans and are often upgraded or replaced within 3-5 years to keep pace with technology. OT systems, on the other hand, may operate for decades without significant changes.

Additionally, many critical OT systems are prohibitively expensive to upgrade, with price tags in the tens of millions of dollars. Furthermore, the lead time on such an upgrade can stretch into months or even years, during which production must continue uninterrupted.

This longevity of OT systems creates two distinct challenges:

  • Older OT systems may lack built-in security features, as they were designed before such threats needed to be considered.

  • Patching and updates can be difficult, as downtime impacts operations. Even minor patches risk disrupting operations if the patch corrupts a file or dependency.

Key Difference: OT systems are much more likely to rely on outdated, unsupported technology, which can't be updated or replaced without serious risk of impacting operations. Meanwhile, IT can typically roll out patches and updates fairly quickly. Even simple, common IT fixes such as “turning it off and on again” are far more complex when it comes to OT.

Interconnectivity and Access

IT environments are designed from the ground up for high interconnectivity, with users and devices accessing networks remotely and frequently. OT environments were traditionally isolated (“air-gapped”) to reduce exposure to external threats. However, the rise of the Industrial IoT (IIoT) and the demand for ever more remote monitoring have increased OT interconnectivity, expanding the available attack surface.

Key Difference: OT systems are transitioning from isolated to interconnected, introducing new security challenges, while IT systems have always been highly interconnected.

Incident Response

In IT, incident response often involves detecting and isolating compromised systems to prevent data loss. In OT, response plans must consider the impact on physical operations, human safety, and regulatory compliance. A poorly managed response could disrupt critical infrastructure or even endanger lives.

Key Difference: OT incident response requires a multidisciplinary approach involving engineering, safety, and IT teams working together.

Cyber-Informed Engineering for OT Security

As IT and OT systems grew more integrated over the years, organizations tried to adopt unified security strategies that address both IT and OT. This included joint risk assessments, robust monitoring of OT/IT environments, and even some cross-team collaboration. These efforts proved ineffective at fully stopping the threats and risks.

A more centralized effort was needed. In 2022, the US Department of Energy released the National Cyber-informed Engineering Strategy.

The principles of Cyber-informed Engineering strongly recommend building resilience into industrial systems from the ground up. Cyber-informed engineering focuses on designing and operating systems with cybersecurity as a foundational element, rather than an afterthought.

Some of the main recommendations of CIE:

  • Incorporate Cybersecurity Early in Design – Embed security considerations into the design phase of OT systems to mitigate vulnerabilities before deployment.

  • Understand the Mission Impact – Analyze how cyber threats could impact physical operations and engineer systems to minimize those risks.

  • Integrate Safety and Security – Develop solutions that address both operational safety and cybersecurity simultaneously, ensuring one does not compromise the other.

  • Leverage Threat Modeling – Use threat modeling techniques to anticipate potential attack vectors and implement defenses tailored to OT environments.

  • Collaborate Across Disciplines – Bring together engineers, IT professionals, and security experts to foster a holistic approach to protecting systems.

By adopting cyber-informed engineering, organizations can proactively address the unique challenges of OT security and enhance the resilience of their critical systems.

Wrapping it up

So, to summarize: OT begins at the consequence boundary, the place along the network where the consequences of the risks become unacceptable. That is where IT solutions are no longer sufficient and OT security takes over. Furthermore, by having IT and OT teams work together, as outlined in Cyber-informed Engineering, a more resilient network can be achieved for the entire business or organization, securing both IT and OT. When IT and OT work together, everyone is happier.

Want to protect your OT network? Book a consultation >>

About the author

Waterfall team



The post Where does IT Security END and OT Security BEGIN? appeared first on Waterfall Security Solutions.

]]>
Infographic: Top 10 OT Cyberattacks of 2024 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/top-10-cyberattacks-of-2024/ Wed, 18 Dec 2024 08:35:55 +0000 https://waterfall-security.com/?p=29316 Infographic mapping out the top 10 OT cyberattacks of 2024 including WHEN it happened, WHICH industry, WHAT happened and also covering the consequences and significance of each attack

The post Infographic: Top 10 OT Cyberattacks of 2024 appeared first on Waterfall Security Solutions.

]]>

Infographic: Top 10 OT Cyberattacks of 2024

Top 10 Attacks of 2024

2024 had more than its fair share of cyberattacks on OT systems. This infographic helps map out the complex threat environment by breaking down the top 10 OT cyberattacks of 2024.

The infographic explains each incident including:

  • When each of the attacks occurred.

  • Which industry the attack impacted.

  • What happened in each of the incidents.

  • The consequence of each attack, including estimated costs where available.

  • The significance and ripple effect that each event had on the industry.

About the Author


Rees Machtemes, P.Eng.

Rees is the lead threat researcher for the annual Waterfall / ICSStrive OT Threat Report and writes frequently on the topic of OT / ICS cybersecurity. Being solutions-focused, he champions INL’s Cyber-Informed Engineering program and regularly provides advice and commentary to government agencies and standards bodies issuing OT security guidance.

Rees is a professional engineer with 15 years of industry experience in: power engineering, substation automation design, plant automation, telecommunications, data centres, and IT. He holds a degree in Electrical Engineering from the University of Alberta.


The post Infographic: Top 10 OT Cyberattacks of 2024 appeared first on Waterfall Security Solutions.

]]>
Are OT Security Investments Worth It? https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/are-ot-security-investments-worth-it/ Sun, 08 Dec 2024 09:56:23 +0000 https://waterfall-security.com/?p=29186 Spoiler Alert: Yes, investing in OT security is very much “worth it”. It helps prevent financial losses, operational disruptions, and compliance penalties far exceeding initial costs. The average ROI can reach up to 400%, ensuring both protection and operational continuity.

The post Are OT Security Investments Worth It? appeared first on Waterfall Security Solutions.

]]>

Are OT Security Investments Worth It?

Spoiler Alert: Yes, investing in OT security is very much “worth it”. It helps prevent financial losses, operational disruptions, and compliance penalties that far exceed initial costs. The average ROI can reach up to 400%, ensuring both protection and operational continuity.

Waterfall team

Are OT Security Investments Worth It?

The Rising Need for OT Security in Industrial Operations

The growing digitization of industrial operations makes safeguarding operational technology (OT) increasingly vital. OT encompasses the hardware and software that detects or controls physical processes, distinct from IT, which focuses on data. One key difference between OT and IT security though, is that a breach of an OT system can have real-world, physically harmful consequences—and those consequences can arise quickly. For example, if a cyberattack gains access to a manufacturer’s OT systems, it could directly (or indirectly) cause an unplanned shutdown of production, damage machinery, or even harm personnel working near the production line.

FACT: 2023 saw a 19% increase in cyberattacks causing physical damage, highlighting the growing threat to OT environments.

Major challenges in improving OT security include outdated legacy systems that lack modern security features, and complex network architectures that give attackers many potential entry points. Another often underestimated factor is the human element.

In most cases, employees are the first line of defense in cybersecurity efforts. However, inadequate training leaves organizations vulnerable to attacks, as employees are not always equipped to handle the demands of modern cybersecurity operations.

As cyberattacks grow more advanced, all industrial sectors face heightened vulnerabilities. Protecting critical assets is essential, and compliance with regulations alone is no longer sufficient. Comprehensive investment in securing the operational technology that underpins business continuity has become a necessity and is no longer a “nice to have” option.

Neglecting OT security poses significant risks to safety, connectivity, and financial stability. In today’s modern threat landscape, industrial operators understand the need to prioritize security across all processes to safeguard their operations and ensure resilience in the face of growing cyber threats.

Breaking Down the High Costs of OT Security Solutions

The financial burden of securing Operational Technology (OT) is particularly challenging for small and medium enterprises. The expenses include initial investments in hardware and software, as well as ongoing maintenance costs.

“The 2023 Clorox cyberattack inflicted $49 million in damages, underscoring the financial fallout of neglected OT security.”

The secure operation of OT systems is invaluable, as vulnerabilities can threaten worker safety, operational continuity, and system integrity. Research shows that cyberattacks targeting OT environments are on the rise, with a 19% increase in attacks causing physical damage reported in 2023. High-profile incidents, such as the $27 million breach at Johnson Controls, the $49 million damages at Clorox, and the $450 million costs incurred by MKS Instruments, illustrate the financial risks of inadequate OT security.

Investing in OT security may seem costly upfront, but the risks posed by unprotected legacy systems far outweigh these expenses. Legacy systems, with their outdated protocols, expose both OT and IT networks to attacks due to their interdependent nature. Solutions like advanced anomaly detection, real-time monitoring, and network segmentation are designed to mitigate these risks effectively. By using unidirectional gateways, legacy systems can continue to be used safely and securely, without the need for costly upgrades.

Despite the costs, OT security investments in tools like unidirectional security gateways yield significant returns. Businesses report an average ROI of 400%, primarily through incident prevention. This becomes increasingly critical as cybercriminals evolve their tactics, targeting IT and OT networks to disrupt operations. Robust and proactive security measures are essential to protect organizations from the financial and reputational damage caused by cyberattacks.

Calculating ROI: How OT Security Pays Off

Evaluating the return on investment (ROI) for OT security initiatives involves understanding both tangible and intangible benefits. While traditional business investments aim for revenue growth, security investments focus on risk reduction, helping organizations avoid or mitigate potential losses.

PROTIP: Use the Return on Security Investment (ROSI) formula to compare the cost of security measures versus the reduction in potential losses.

A great method for calculating costs and ROI on OT security investments is to use the ROSI formula, which works like this:

ROSI = (Reduction in potential losses – Cost of safety measure) / Cost of safety measure

For example, a $100,000 security solution that reduces potential losses of $500,000 to $250,000 yields a 150% return. Historical data, such as ransomware incidents costing between $250,000 and $850,000, further supports the financial justification of these investments.
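To make the arithmetic concrete, here is a minimal sketch of that calculation in Python. The function name and structure are our own illustration, and the dollar figures are the hypothetical values from the example above, not data from any specific incident.

def rosi(loss_before: float, loss_after: float, cost_of_measure: float) -> float:
    """Return on Security Investment as a fraction (1.5 == 150%)."""
    reduction_in_losses = loss_before - loss_after
    return (reduction_in_losses - cost_of_measure) / cost_of_measure

# A $100,000 solution that cuts potential losses from $500,000 to $250,000:
print(f"ROSI: {rosi(500_000, 250_000, 100_000):.0%}")  # -> ROSI: 150%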

Organizations can refine their calculations by incorporating metrics such as:

  • Single Loss Expectancy (SLE): The financial impact of a single incident.

  • Annual Rate of Occurrence (ARO): The frequency of incidents based on historical data.

  • Annual Loss Expectancy (ALE): The annualized cost of potential incidents, derived from SLE and ARO.

  • Mitigation Ratio: The percentage of incidents prevented by a security measure.

For instance, if a business faces ten annual attacks costing $20,000 each, a $50,000 investment that prevents 90% of those breaches demonstrates a clear financial benefit: a 260% return by the ROSI formula, as sketched below. When using deterministic solutions such as Waterfall’s unidirectional security gateway, the benefit becomes even clearer. See here for more details.
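As a rough sketch only, the same calculation can be extended to the ALE-based metrics listed above. The helper names are ours, and the figures mirror the hypothetical scenario in the text: ten attacks per year at $20,000 each, and a $50,000 measure that prevents 90% of them.

def annual_loss_expectancy(sle: float, aro: float) -> float:
    """ALE = Single Loss Expectancy x Annual Rate of Occurrence."""
    return sle * aro

def rosi_from_ale(sle: float, aro: float, mitigation_ratio: float, cost: float) -> float:
    """ROSI based on the share of annual losses the measure prevents."""
    reduction = annual_loss_expectancy(sle, aro) * mitigation_ratio
    return (reduction - cost) / cost

print(f"ALE:  ${annual_loss_expectancy(20_000, 10):,.0f}")    # -> ALE:  $200,000
print(f"ROSI: {rosi_from_ale(20_000, 10, 0.9, 50_000):.0%}")  # -> ROSI: 260%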

Beyond financial savings, OT security investments safeguard business continuity, customer trust, and reputation. These benefits are critical for companies operating in competitive markets where even minor disruptions can have significant consequences.

Some final words...

Industrial operations today face the dual challenge of addressing increasingly sophisticated cyber threats while managing constrained budgets. Securing OT systems is essential to maintaining a “production-first” approach that underpins modern industrial operations.

OUCH! An unprotected legacy manufacturing machine once allowed malware to move laterally, disrupting operations across an entire company.

Prioritizing resources starts with comprehensive risk assessments. Tools that calculate asset-specific risk scores can help identify the critical areas that most need investment (a simple illustration follows below). Modernizing infrastructure, such as replacing 10- to 20-year-old equipment, also reduces vulnerabilities, but protecting that aging equipment in place, in a way that maintains compliance and enhances security, is often far more cost-effective.
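As one illustration of what such a scoring pass might look like, the sketch below ranks a few hypothetical assets by a simple likelihood-times-impact score. The asset names, scales, and weighting are assumptions for illustration, not a prescribed methodology.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    likelihood: int  # 1 (rare) .. 5 (frequent)
    impact: int      # 1 (negligible) .. 5 (severe: safety impact or long outage)

def risk_score(asset: Asset) -> int:
    # Simple likelihood-times-impact score used to rank where to invest first.
    return asset.likelihood * asset.impact

assets = [
    Asset("20-year-old PLC on a flat network", likelihood=4, impact=5),
    Asset("Plant historian in the DMZ", likelihood=3, impact=3),
    Asset("Engineering workstation", likelihood=2, impact=4),
]

for a in sorted(assets, key=risk_score, reverse=True):
    print(f"{risk_score(a):>2}  {a.name}")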

Collaboration across OT, IT, and security teams is crucial for cohesive strategies. Cross-functional efforts ensure that cybersecurity measures align with business objectives, resulting in shared ownership of protocols. Moving to proactive solutions like Zero Trust Network Access (ZTNA) enhances security by adhering to the principle of “never trust, always verify,” but it still leaves gaps in OT security. A more cohesive approach, such as Cyber-Informed Engineering, addresses threats head-on and saves costs over time by getting OT, IT, and other stakeholders working together to build security in from the start rather than as an afterthought.

Investing in OT security, while expensive, is far less costly than the aftermath of a cyberattack. By adopting a risk-based strategy, securing legacy infrastructure, and fostering collaboration, industrial operators can enhance their resilience to cyber threats while maintaining operational efficiency.

Want to learn how to engineer OT Security into OT systems? Get your complimentary copy of Andrew Ginter’s new book: Engineering-grade OT Security: A Manager’s Guide

FAQs

What is OT security and why is it important for industrial operators?

Operational technology (OT) refers to the systems that control physical processes in industrial operations. Securing OT is essential to prevent breaches that could halt production, damage equipment, or harm workers. As OT systems become prime targets for cybercriminals, protecting them is increasingly critical.

What are some key challenges in implementing OT security?

Common challenges include outdated systems lacking modern security features, complex network architectures with numerous entry points, and human error. Addressing these issues requires securing legacy systems, redesigning network structures, and ensuring employees are adequately trained.

How do cyberattacks affect OT environments in industrial operations?

Cyberattacks on OT systems can cause production downtime, financial losses, equipment damage, and even physical harm to workers.

What are the costs associated with OT security investments?

OT security investments include upfront costs for hardware and software, ongoing maintenance, and compliance expenses. However, these costs are outweighed by the potential financial and operational losses of a cyberattack.

Is OT security investment worth the financial burden?

Yes, the ROI of OT security demonstrates its value. Preventing downtime and damage from cyberattacks saves organizations significant costs, making security investments highly worthwhile.

How can organizations calculate the ROI of OT security measures?

The ROSI formula calculates the financial benefits of security measures by comparing potential losses avoided to the cost of the measures.

What proactive measures can industrial operations take to prioritize OT security?

Industrial operations should conduct risk assessments, secure legacy infrastructure, and adopt strategies like network segmentation between OT and IT. These measures strengthen security and reduce vulnerabilities.

Why is collaboration important for effective OT security?

Collaboration between OT, IT, and security teams ensures aligned strategies and shared ownership of cybersecurity protocols. Approaches such as Cyber-Informed Engineering improve communication, foster cohesive planning, and enhance overall security outcomes.

 


 


Waterfall team

