Industrial Security Podcast – Waterfall Security Solutions – Wed, 10 Sep 2025

I don’t sign s**t – Episode 143

We don't have budget to fix the problem, so we accept the risk? Tim McCreight of TaleCraft Security in his (coming soon) book "I Don't Sign S**t" uses story-telling to argue that front line security leaders should not be accepting multi-billion dollar risks on behalf of the business. We need to escalate those decisions - with often surprising results when we do.


“It always comes down to can I have a meaningful business discussion to talk about the risk? What’s the risk that we’re facing? How can we reduce that risk and can we actually pull this off with the resources that we have?” – Tim McCreight

Transcript of I don’t sign s**t | Episode 143

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Hey everyone, and welcome to the Industrial Security Podcast. My name is Nate Nelson. I'm here as usual with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who is going to introduce the subject and guest of our show today. Andrew, how's it going?

Andrew Ginter
I'm very well, thank you, Nate. Our guest today is Tim McCreight. He is the CEO and founder of TaleCraft Security, and his topic is the book that he's working on. The working title is "We Don't Sign Shit," which is a bit of a controversial title, but he's talking about risk. Lots of technical detail, lots of examples, talking about who should really be making high-level decisions about risk in an organization.

Nathaniel Nelson
Then without further ado, here’s your conversation with Tim.

Andrew Ginter
Hello, Tim, and welcome to the podcast. Before we get started, can I ask you to say a few words for our listeners? You know, tell us a bit about yourself and about the good work that you’re doing at TaleCraft.

Tim McCreight
Hi folks, my name is Tim McCreight. I’m the CEO and founder of TaleCraft Security. This is year 44 now in the security industry. I started my career in 1981 when I got out of the military, desperately needed a job and took a role as a security officer in a hotel in downtown Winnipeg, Manitoba.

Shortly after, I was moved into the chief security officer role for that hotel and others, and had an opportunity to move into security as a career path. And I haven't looked back. I also decided I wanted to learn more about cybersecurity.

Holy smokes, in '98, '99, I took myself out of the workforce for two years, learned as much as I could about information systems, and then came back for the latter part of my career and have held roles as a chief information security officer in a number of organizations. So I've had the pleasure and the honor of being in both physical and cybersecurity for the past 40-some years.

Andrew Ginter
And tell me about TaleCraft

Tim McCreight
It's a boutique firm with two taglines. Our first is that it's new skills from the old guard, and we are here to help give back and grow.

And it’s our opportunity to provide services to clients focusing on a risk-based approach to developing security programs. We teach security professionals how to tell their story and how to use the concepts of storytelling to present security risks and ideas to executives.

And finally, we have a series of online courses through our TaleCraft University, where there's a chance to learn more about the principles of ESRM and other skills that we're going to be adding to our repertoire of classes in the near future.

Andrew Ginter
And our topic is your new book. I'm eagerly awaiting a look at it. Can I ask you, before we even get into the content of the book, how's it coming? When are we going to see this thing?

Tim McCreight
Well, thank you for asking. I had great intentions to publish the book, hopefully this year. Unfortunately, some things changed last year. I was laid off from a role that I had, and I started TaleCraft Security.

So sadly, my days have been absorbed by the work that it takes to stand up a business and get it up and running. And my hat's off to all the entrepreneurs out there who do all of these things every day. I'm new to this. Understanding what you have to do to stand up a business, get it running, market it, run the finances, et cetera, has been all-consuming. So the book has unfortunately taken a bit of a backseat, but I've got some breathing room now. I've gotten into a bit of a rhythm.

Tim McCreight
It's a chance for me to get back to the book and start working through it. And to me, it's appropriate. It's a really good time. If I'm following the arc of a story, this is the latter part of that story arc. So I get a chance to fill in that last part of the story, my own personal story, and to put that into the book.

Andrew Ginter
I'm sorry to hear that. I'm, like I said, looking forward to it. We have talked about the book in the past. Let me ask you again, sort of big picture. You know, I'm focused on industrial cybersecurity. I saw a lot of value in the content you described as being produced. But can you talk about how industrial the book is?

We're talking about risk. We're talking about leadership, right? How industrial does it get? I know you do a podcast. You do Caffeinated Risk with Doug Leese, who's a big contributor at Enbridge. He's deep industrial. How industrial are you? How industrial is this book?

Tim McCreight
It spans around 40 years of my career, starting from the physical security roles that I had, but also dealing with the security requirements for telecommunications back in the eighties into the nineties, getting ready for and helping with the security planning for the Olympics in the early two thousands, working into the cyberspace and understanding the value of first information security, which then turned into cybersecurity, and then focusing on the OT environment as well, when I had a chance to work in critical infrastructure and oil and gas.

And then finally, the consistent message throughout the book is this concept of risk: how our world, from when we first began this idea of industrial security back in the forties, needs to be brought up to where we need to be now from a professional perspective in how we view risk.

I do touch on and speak a little bit about the worlds that I had a chance to work in from an industrial perspective. The overarching theme, though, is really this concept of risk and how we need to continue to focus on risk regardless of the environment that we're in.

And some of the interesting stories I had along the way, and, honest to God, some of the mistakes I made along the way as well. I've learned more from mistakes than I have from successes.

And understanding the things that I needed to get better at throughout my career. I'm hoping that when folks do get a chance to read the book, they recognize they don't need to spend 40-some years to get better at their profession. You can do it in less time, and you can do it by focusing on risk, regardless of whether you're in the IT, the OT, or the physical space.

Andrew Ginter
So there is some industrial angle in there. But, like I said, industrial or not, I'm fascinated by the topic. I think I've beaten around the bush enough. The working title is "We Don't Sign Shit." What does that mean?

Tim McCreight
I came up with "We Don't Sign Shit." I have a t-shirt downstairs in my office that I got from my team at an oil and gas company I worked with. And Doug Leese was on the team as well.

And it really came down to the principle that for years, security was always asked to sign off on risk, or to accept it, or to endorse it, or my favorite: well, security signed off on it, it must be good.

Wait a second. We never should have. That never should have been our role. We never should have been put in a position where we had to accept risk on behalf of an organization because that’s not the role of security. Security’s role is to identify the risk.

Identify mitigation strategies and present them back to the executives so that they can make a business decision on the risks that we face. So in my first couple of weeks at this oil and gas organization, we had a significant risk that came across my desk, and it was a letter that I had to sign off on. A brand new staff member came in and said, "Hi boss, I just need you to take a look at this."

I’m like, “Hi, who are you? What team do you work on? And what’s the project you’re working on?” When I read this letter, I’m like, are you serious that we’re accepting a potential billion dollar risk on behalf of this organization? Why?

And they're like, "Well, we always do this." Not anymore. And we went upstairs. We got hold of the right vice president to take a look at this, to address the risk and work through it. And as I continued to provide this type of coaching and training to the team there, I kept bringing up the same concept: look, our job is not to sign shit.

That’s not what we’re here for. We don’t sign off on the risk. We identify what the risk is, the impacts to the organization, what the potential mitigation strategies are. And then we provide that to executives to make a business decision.

So when I did leave the organization for another role, they took me out for lunch, and I thought it was pretty cool. The whole team got together and they created this amazing t-shirt, and it says, "Team We Don't Sign Shit." So it worked, right? And that mindset's still in place today. I have a chance to touch base with them often and ask how they're doing. And all of them said the same thing: yeah, that mindset is still there. They've embraced the idea that security's role is to identify the risk and present opportunities to mitigate, but not to accept the risk on behalf of the organization.

That was the whole context of where I took this book: wouldn't it be great if we could finally get folks to recognize, no, we don't sign shit. This isn't our job.

Nathaniel Nelson
So Andrew, I get the idea here. Tim isn't the one who signs off on the risk. He identifies it and passes it on to business decision makers. But I don't yet see where the passion for this issue comes from, like why this point in the process is such a big deal.

Andrew Ginter
Well, I can't speak for Tim, but I'm fascinated by the topic because I see so many organizations doing this a different way. In my books, the people who decide how much budget industrial security gets should be the people making decisions about whether these risks are big enough to address today, whether this is a serious problem, because they're the ones who have the business context. They can compare the industrial risks to the other risks the business is facing and to the other needs of the business, and make business decisions.

When you have the wrong people making the decisions, there's a real risk that you make the wrong decisions, because the people executing on industrial cybersecurity do not have the business knowledge of what the business needs. They don't have the big picture of the business, and the people with the big picture of the business do not have the information about the risk, the mitigations, and the costs. And so each of them is making the wrong decision. When you bring these people together, and the people with the information convey it to the people with the business knowledge, now the people with the business knowledge can make the right decision for the business.

And again, the industrial team executes on it. If you have the wrong people making the decision, you risk making the wrong decision.

Andrew Ginter
So let me ask: you take a letter into an executive, and you do this over and over again in lots of different organizations. How is that received? How do the executives react when you do that?

Tim McCreight
So my standard approach has always been, and I use this as my litmus test: in the role I play as a chief security officer or CISO, if you're asking me to accept risk, the first thing I'm going to say is no. Invariably, the room gets really quiet.

People start recognizing, oh, he's serious. Yeah. Because I have no risk tolerance when it comes to work. I would be giving everybody paper notebooks and crayons and wanting them back at the end of the day. So I don't have any tolerance for risk. But to test my theory, when I tell executives, if you're saying that my role is to sign off on this, then I'm not going to. Does that stop the project?

It never does. So the goal then is to ensure that the executives understand it's their decision, and it's a business decision that has to be made, not a security decision, because my decision is always going to be: I start with no and I'll negotiate from there.

But the process that I've provided and others have followed is this: I'll bring the letter with the recommendations to the business for them to review, and they either accept the risk and sign off on it, or they find me an opportunity to reduce the risk.

That's when I start getting attention from the executives. So it moves from shock, to he's serious, to, okay, now we can understand what the risk is. Let's walk through this as a business decision. That's when you start making headway with executives, taking that approach.

Andrew Ginter
So that sounds simple, but in my experience, what you said there is actually very deep. I mean, I'm at the end of a long career as well, and I've never been a CISO. And in hindsight, I've come to realize that, bluntly, I'm not a very good manager.

Because when someone comes to me, anyone outside my sphere of influence, my scope of responsibility, saying, hey, Andrew, can you do X for me?

Or whenever one of my people comes to me with an idea saying, hey, we should do Y, my first instinct is: what a good idea. Yeah, yeah.

Whereas I know that strong managers, their first instinct is no. And now whoever’s coming at us with the request or with the idea has to justify it, has to give some business reasons.

Again, this is deep. It's a deep difference between you and people like me.

Tim McCreight
Yeah, well, it is, and don't get me wrong, there's an internal struggle every time I've worked through these types of requests. I want to help people too, but I understand the path you've got to take and how you have to get the business to understand it, accept it, and move forward with it. It's different, right? This is why some great friends of mine that I've known for years, they're technically brilliant. They have some amazing skills. Honest to God, I stopped being a smart technical person a long time ago, and I've relied on wizards to help move the programs forward.

And I've chatted with them as well, and they're similar to you, Andrew. They've got great technical skills. They've been doing this for a long time. And one of the folks I chatted with was just like, I can't give myself the lobotomy to get to that level. I'm like, oh my God. Okay, fair enough.

And I get it, but the way I've always approached this is different, right? So I take myself out of the equation of always wanting to help everybody, and instead ask: how can I ensure that I'm reducing the risk?

And if I can get to those types of discussions and have them with executives, for me, that's where I find the value. So all of the work I've done in my career to get to this space, the amazing folks that I've met along the way, the teams that I've helped build, the folks I still call on to mentor me through situations,

It always comes down to: can I have a meaningful business discussion to talk about the risk? And then it takes away some of the emotional response. It takes away that immediate, I need to help everybody do everything, because we can't.

But it gives us a chance to focus on what the problem is. What's the risk that we're facing? How can we reduce that risk? And can we actually pull this off with the resources that we have? So yeah, I get it. Not everybody wants to sit in these chairs. I've met so many folks throughout my career who keep looking at me going, Jesus, Tim, why would you ever want to be in that space?

Why would you ever accept the fact that they're trying to hold you accountable for breaches or for events or incidents? And I challenge back: for me, it's that opportunity to speak the business language, to get the folks at the business level to appreciate what we bring to the table, whether it's OT security, IT, physical, or cyber.

It's a chance for all of us to be represented at that table, at that level, but with a business focus. So for me, that's why I kept looking for these opportunities: can I continue to move the message forward that we're here to help, but let's make sure we do it the right way.

Andrew Ginter
So, fascinating principles. Can you give me some examples? I mean, TaleCraft is about telling stories. Can you tell me a story? How did this work? How did it come about? What kind of stories are you telling here?

Tim McCreight
So there's a lot that I've presented over the years, but a really good one is from when I was working with Bell Canada many years ago. We were awarded the communication contract and some of the advertising and media supporting contracts for the 2010 Olympics in Vancouver.

And I was working with an amazing team at Bell Canada. Doug Leese was on the team as well, reporting into the structure. So it was very cool to work with Doug on some of these projects. The team that was putting in place the communication structure decided they wanted to use the first commercial instance of voice over IP. It was called hosted IP telephony.

And it was from Nortel, if folks still remember Nortel Networks. We looked at the approach that they were taking, how we were going to be applying the technology to the Olympic Village, et cetera.

Doug and the team did this amazing work when the risk assessment came across: they were able to intercept a conversation, decrypt the conversation, and play it back as an MP3 file.

You could actually hear them talking. And at the time, it was the CEO calling his executive assistant to order lunch. And we had that recorded. You could actually hear it. It was just as if they were speaking to you.

So that's a problem when you're trying to keep communications secure between endpoints in a communication path. We wrote up the risk assessment. We presented the report up my chain, and it was simple.

Here's the risk. Here's the mitigation strategy. We need a business decision on the path that we wanted to take. And that generated quite the stir. My boss got back to me and said, well, we have to change the report. No, I said, no, we don't. We don't change this shit. We just move it forward.

We've objectively uncovered the risk. The team did a fantastic job. And here's an attached recording, if you want to hear it, but let's keep moving forward. So it went up to the next level of management, and same thing. Would you alter the report? No, I would not.

Move on, move on. Finally it got to the chief security officer. And I remember getting the phone call: well, Tim, this is going to cause concerns. No, it's a business decision. It isn't about concerns. This is a business decision, and what risk is the business willing to accept?

So he submitted the report forward. The next thing I know, I'm getting a call from an executive office assistant telling me that my flight is booked for the next day. I'll be flying out to present the report. Like, Jesus Christ. So, all right, I got on a plane headed out east.

Waited forever to talk to the CEO at the time. And all they asked was: is this real? Would you change this? I said, no, the risk is legitimate.

And here's the resolution. Here's the mitigation path. Here's the strategy. So they asked how much we needed and how much time. It was about six months' worth of work with the folks at Nortel to fix the problem. And all of that to say that had we done this old school, many years ago, we would have just accepted the risk and moved forward with it.

That wasn't our role. That's not our job, right? That whole risk assessment needed to be presented to the point where executives understood what could potentially happen. We had already proved that it could, but they needed to understand: here's the mitigation strategy. We found a way to resolve it.

We need this additional funding, time, and resources to fix the problem. That stuck with me. That was over 20 years ago. And it stuck with me because had I altered my report, had I taken away the risk, had it been accepted on behalf of the security team, we don't know what could have happened to the transmissions back and forth at the Olympics.

But I do know that in following that process, you never read about anyone’s conversations being intercepted at the 2010 Olympics, did you? It works. The process works, but what it takes is an understanding that from a risk perspective, this is the path that we have to take.

It’s not ours to accept. You have to make sure you get that to the executives and let them make that decision. Those are the stories that we need folks to hear now, as we move into this next phase of developing the profession of security.

Andrew Ginter
So Nate, you might ask: the CEO had a conversation intercepted, ordering lunch. Is this worth the big deal that it turned into? And I discussed this offline with Tim, and what he came back with was: Andrew, think about it. Imagine that you're nine days into the two-week 2010 Winter Olympics.

And someone, pick someone, let's say Chinese intelligence, is found to have been intercepting and listening in on all of the conversations between the various nations' teams and coaches in the various sports and their colleagues back in their home countries.

They've been listening in for the whole Olympics. What would that do to the reputation of the Olympics? What would that do to the reputation of Bell Canada? This is a huge issue. It was a material cost to fix. It took six months, and he didn't say how many people and how much technology.

But this is not something that the security team could say, “Okay, we don’t have any budget to fix this, therefore we have to accept the risk.” That’s the wrong business decision.

When he escalated this, it went all the way up to the CEO, who said, yeah, this needs to be fixed. Take the budget, fix it. We cannot accept this risk as a business. That's a business decision the CEO could make. It's not a business decision Tim could make with the budget authority that he had four levels down in the organization.

Andrew Ginter
So, fascinating stuff. Again, I look forward to the stories in the book. But you mentioned stories at the very beginning when you introduced TaleCraft. Can you tell me more about TaleCraft? How does this idea of storytelling dovetail with the work you're doing right now?

Tim McCreight
When I was first designing this idea of what TaleCraft could be, we reached out to a good friend of ours here in Calgary, Mike Daigle. He does some amazing work. He spent some time just dissecting what I've done in my career and what I've accomplished, and more importantly, some of the things that he wanted to focus on from a company perspective.

And one of the parts he brought up, and this is how TaleCraft was created, the word "tale," was that I spend a significant amount of my time now telling stories: stories to help educate and inform, stories to influence, and stories to provide meaning and value to executives.

But the common theme for all of this has been this concept of telling a story. One of the things I found throughout my career is that as security professionals move through the ranks, as they begin at junior levels, move into their first role in management, and move into director positions and eventually chief positions, the principles and concepts of being able to tell a story, or to communicate effectively with executives, often don't keep pace.

I found that some of my peers weren't doing a great job. I don't know about you, Andrew, but if you sit in a presentation someone's giving and all you're reading is the slide deck, Jesus, you could just send that to me. I've got this. I don't need to spend time watching you stagger through a slide deck, or through slides that have a couple of thousand words on them that you're expecting us to read from 40 feet away.

It doesn't happen. So what really bothered me is that we started losing this skill set of being able to tell a story, and to effectively use the principles of storytelling to give executives the input they need to make decisions on things like budget, resourcing, or allocating staff, et cetera.

So that's one of the things that we do at TaleCraft: we teach security professionals and others the principles and concepts of storytelling, and the parts of the story arc that we learned as kids: the beginning of the story; the middle, where the conflict occurs; the resolution; and finally the end of the story, when you're closing off and heading back to the village after you've slayed the dragon. Those things we learned as kids still apply as adults, because we learn as human beings through stories. For hundreds, even thousands of years, we have used oral history as a way to pass a story from one generation to the next.

We can use the same skill sets when we’re talking to our executives, when we’re explaining a new technique to our team, or when we’re giving an update in the middle of an incident and how you’re going to react to the next problem and how you’re going to solve it.

Those principles exist. It's reminding people of what the structure is, teaching people how to follow the story arc when they're presenting their material, taking away the noise, the distractions, and everything else that gets in the way of listening to a story, and focusing on the human.

And that's one of the things that we're doing here at TaleCraft: we're teaching people to be more human in their approach, and the techniques work. My wife is up in Edmonton right now doing a conference, the CIO Conference for Canada.

And, this is a first, folks, for all of you who are married, to gauge what kind of progress I've made: my wife actually asked if I could dissect her presentation and help her with it. I thought that was pretty amazing. We restructured it so that she was able to use props.

She brought in a medical smock and a stethoscope to talk about one of the clients that she worked with. And it sounds like it worked, because she got some referrals from folks in the audience, and she's spending time right now talking to more clients up in Edmonton. So yeah, I crossed my fingers I was going to get through that one, and it seems to have worked. But these principles of telling a story: if you have a chance to understand how a story works and you're able to replicate that in a security environment, all of a sudden now you're speaking human to human.

You’re not bringing in technology. You’re not talking about controls. You’re not spewing off all of these different firewall rules that we have to go through. Nobody cares about that stuff. What they want to hear is what’s the story and can I link the story to risk?

And at the top end of that arc, can I provide you an opportunity to reduce the risk, and then finish the story by asking for help? Those types of presentations, throughout my career, are when I've been the most successful: when I can focus on the story I need to tell, get the executives to be part of it, and focus on the human reaction to the problem that we have.

That’s one of the things that we’re teaching at TaleCraft.

Andrew Ginter
So that makes sense in principle. Let me ask you: I do a lot of presentations. I had an opportunity to present on a sort of abstract topic at S4, which is currently the world's biggest OT-security-focused conference. And, if you're curious, the title was "Credibility Versus Likelihood." So, again, a very abstract risk-type topic.

And the advice I got from Dale Peterson, the organizer, was, "Andrew, I see your slides. You can't just read the slides. You've got to come to this presentation armed with examples for every slide, or every second slide."

Tim McCreight
Yep.

Andrew Ginter
"Get up there and tell stories." So I would give examples. Sometimes they would be attack scenarios. Is that the same kind of thing here?

Tim McCreight
It is, I think. And congratulations for being asked to present at that conference. That's amazing, so kudos to you, Andrew. That's great to hear. But you're right. You touched on one of the things that a lot of presentations lack: the credibility, or how I view the person giving the presentation. Do they have the authority? Do I look at them as someone who's experienced and understands it?

And you do that by telling the story and providing an example, let's say an attack scenario: how you saw it unfold, how you were able to detect it, how you were able to contain it, eradicate it, and recover. Those are the stories people want to hear, because it makes it real for them. Providing nothing but a technical description of an attack, or bringing out, as an example, a CVE and breaking it down by different sections on a slide? Oh my God, I would probably poke my eye out with a fork.

But if you walk me through how you identified it, the work that you did to detect it, to contain it, to eradicate it, and then recover, if you can walk me through those steps from a personal example that you've had, that to me is the story.

And that's the part that gets compelling: now you've got someone with real-world experience and expertise in this particular problem. They were able to solve it, and they presented it to me in a story. So now I can pick up those parts. I'm going to remember that part of the presentation because you gave me a great example, which really means you gave me a great story. Does that make sense?

Andrew Ginter
It does to a degree. Let me distract you for a moment here. I’m not sure this is the same topic, but again, I’ve written a bit on risk.

Tim McCreight
Okay.

Andrew Ginter
You know, I’ve tried to teach people a bit about what risk is and how to manage risk, especially in critical infrastructure settings. And I find that a lot of risk assessment reports are, it seems to me, not very useful. They’re not useful as tools to make business decisions.

You get a long list: you still have 8,000 unpatched vulnerabilities in your OT environment, any questions? To me, what business decision makers understand better than a list of 8,000 vulnerabilities is attack scenarios.

And so what I’ve argued is that every risk assessment should finish, or lead if you wish, with a concept from physical security that you’re probably more familiar with than I am: the design basis threat, a description of the most capable attack you must defeat, the attack you are designed to defeat with a high degree of confidence.

And you look at your existing security posture and decide this class of attack we defeat with a high degree of confidence. These attacks up here, we don’t have that high degree of confidence.

And what I’ve argued is that you should tell the story. Go through one or two of these attack scenarios and say: here is an attack that we would not defeat with a high degree of confidence. Is it acceptable that this attack potential is out there? Is that an acceptable risk?

Is that the kind of storytelling we’re talking about here, or have I drifted off into some other space?

Tim McCreight
No, I think you’ve actually applied the principles of telling a story to something as complex as identifying your organization’s response to either an attack scenario or a more sophisticated attack scenario. So no, I think you’ve nailed it.

What the approach you just talked about does, though, is give a few things to the business audience. One, you have a greater understanding of the assets that are in place and how they apply to the business environment, right? Whether it’s a physical plant structure for OT, or a pipeline, et cetera.

If you understand the environment that is being targeted, and understand the assets and the controls that you have in place, that gives you a greater understanding of, and a foundation for, what the potential risk is.

By telling the story of what a particular attack scenario looks like, and whether you have a level of confidence that you’d be able to protect against it, you’re able to walk through the different parts of the story arc.

This is the context of the attack. This is what the attack could look like. Here’s how we would try to resolve it, if we can. And then here are the closing actions we would focus on, whether the attack was successful or unsuccessful.

So all of those things, I think, apply to the principles of telling a story. What you’ve given is a great example of how to take something that’s very technical, or the typical risk assessment I’ve seen in my career: “Andrew, here’s your 200-page report; the last hundred pages are all the CVEs we found. Let us know if you need any help.”

Well, that doesn’t help me. But if you walk me through a particular example where, in this one set of infrastructure, we’re liable to or open to this type of attack,

I think that’s amazing because it gives the executives the story they need. You understand the assets. Here’s the risk. Here’s the potential impact. Here’s what we can and cannot do to defeat or defend against this.

And then: we need your help if this is a risk that you can’t accept. So no, I think you’ve covered all parts of what would be an appropriate story arc using that type of approach. And honest to God, if you could get more folks to include that in reports, I would love to see it, because I’m like you: I have read too many reports that don’t offer value.

But the description you just provided and the way we break it down, that offers huge value to executives moving forward.

Nathaniel Nelson
Tim’s spending a lot of time emphasizing the importance of storytelling in conveying security concepts to the people who make decisions. Andrew, in your experience, is this sort of thing something you think about a lot? Do you frame your information in the same ways that he’s talking about, or do you have a different sort of approach?

Andrew Ginter
This makes sense to me. It’s sort of a step beyond what I usually do, so I’m very much thinking about what he’s done and how to use it going forward. But just to give you an example, close to a decade ago, I came out with a report, the “Top 20 Cyber Attacks on Industrial Control Systems.”

And it wasn’t so much a report looking backwards saying what has happened. It’s a report looking at what’s possible, what kind of capabilities are out there. And I tried to put together a spectrum of attack scenarios with a spectrum of consequences. Some of the attacks were very simple to carry out and had almost no consequence.

Some of them were really difficult to carry out and would take you down hard and cost an organization billions of dollars or dozens of lives. And everything in between.

And I did that because, in my experience, business decision makers understand attack scenarios better than they understand abstract numeric risk metrics or lists of vulnerabilities.

But I described it as attack scenarios. In hindsight, I think what I was really doing there was telling some stories. And I need to update that report.

I’m going to do it by updating it to read in more of a storytelling style, so that people can hear stories about attacks that they do defeat reliably and why, and attacks that they probably will not defeat with a high degree of confidence and what the consequences will be, so that they can make these business decisions.

Nathaniel Nelson
Yeah, and that sounds nice in theory, but then I’m imagining you tell your nice story to someone in the position to make a decision with money, and they come back to you and say, “Well, Andrew, your story is very nice, but why can’t we defeat all of these attack scenarios with the amount of money we’re giving you?” What do you tell them at that point?

Andrew Ginter
That is a very common reaction: “You’ve asked us where to draw the line. We draw the line above the most sophisticated attack. Fix them all.” And then I explain what that’s going to cost.

They haven’t even really paid attention to the attack scenarios. They haven’t even asked me about the attack scenarios. I’ve just explained the concept of a spectrum. They said, yeah, put the line at the very top, fix them all. And then you have to explain the cost.

And they go, “Whoa. Okay, so what are these?” And they ask in more detail and you give them the simplest attack, the simplest story that you do not defeat with a high degree of confidence.

And you ask them, is that something we need to fix? And they say, “Yeah, that’s nasty. I could see that happening, fix that. What else do you got?” And you work up the chain and eventually you reach an attack scenario or two where they look at it and say, “That’s just weird.”

I mean, let me give you an extreme example. Imagine that a foreign power has either bribed or blackmailed every employee in a large company. What security program, what policy can the CEO put in place that will defend the organization? Well, there isn’t one. Your entire organization is working against you. Is that a credible threat? The business is probably going to say no, this is why we have background checks.

A conspiracy that large, the government is going to come in and arrest everyone. That’s not a credible threat. And so the initial reaction might be: yeah, fix it all, draw the line across the very top of the spectrum.

And when it becomes clear that you can’t do that, this is where you dig into the stories, and they have to understand the individual scenarios. And they will eventually draw the line and say, “These three here that you told me about, fix them. The rest of them just don’t seem credible.”

That’s the decision process that you need to go through. And you need to describe the attacks. And I think the right way to describe the attacks is with storytelling.

Andrew Ginter
So, I mean, this all makes great sense to me. This is why I asked you to be a guest on the podcast. But let me ask you about the next level of detail at TaleCraft. If, I don’t know, a big business, a CISO, says “TaleCraft makes sense to me” and they bring you in, what do you actually do? Do you run seminars? Do you review reports and give advice? What does TaleCraft actually do if somebody engages with you?

Tim McCreight
So there are a couple of things that we can offer, from a TaleCraft perspective, to organizations that bring us in. Let me talk about storytelling first. What we offer from the storytelling approach is: we will go to the client site.

We will run workshops, anywhere from a four-hour workshop to a two-day workshop. We will bring in team members from the security group, as well as others that the security team interacts with. We’ll go over the principles and concepts of storytelling, and how to be more mindful in your public speaking and in your preparation.

And we’ll spend the first day going through the theory and the concepts of telling a story and becoming a better public speaker. Then on the second day of the workshop, we ask all participants to stand up for up to 10 minutes and tell their stories.

At the end of each of the sessions, we provide positive feedback and give them opportunities to grow and gain more storytelling experience. And then we close out the workshop. We provide reports back to each of the individuals on how we observed them absorbing the content from day one, and then offer opportunities for individual mentoring and coaching along the way.

So that’s one of the first services we offer. The second: as we come into organizations, if a CISO or CSO contacts us and asks for assistance, we can do everything from helping them redesign their security program using the principles of enterprise security risk management, to reviewing the program they have today, assessing the maturity of the controls they have in place, and identifying risks facing the organization at a strategic level. And then we can come in and help them map out and design a path to greater maturity by assessing the culture of security across the organization as well, where we go out and interview stakeholders from different departments, different divisions, and different levels of employees, and identify their perception of security, the value that security brings to the organization, and how the security team can become greater partners and trusted advisors to the company. That’s part of the work that we do at TaleCraft Security.

Andrew Ginter
I understand as well that you’re working with professional associations or something similar. I know that in Canada, there’s the Canadian Information Processing Society. It’s not security focused; security is an aspect of information processing in the IT space.

In Alberta, there’s APEGA, the Association of Professional Engineers and Geoscientists of Alberta. I would dearly love to see these professions embrace cybersecurity and establish professional standards for practitioners, for what is considered acceptable practice, so that there is sort of a minimum bar.

So tell me, you’re working with these folks. What is it that you’re doing? How’s that going?

Tim McCreight
Yeah, so I’ve been thinking about this for probably the last twenty-some years, and it always bothered me that the security director, the CISO, et cetera in an organization, if they did get a chance to come to a board meeting or be invited to talk to executives, got a 45-minute time slot. Most times it was less. You had a chance to drink the really good coffee, and then you were asked to leave the room, and that was your time.

Whereas your peers who were running other departments across the organization, in legal, finance, HR, et cetera, stayed the entire weekend to help map out the strategy for the organization. Yet we weren’t invited to that party.

And that kind of annoyed me over those years. So I took it upon myself to begin a journey, and I brought some folks along with me. There are about 15 of us now working on the concept of designing and developing the profession of security, focusing on Canada first, and then working through the Commonwealth model to all those countries that follow the Commonwealth parliamentary system.

And it made sense to me. I couldn’t do much work on it when I was the president of ASIS in 2023; I didn’t want any perceived conflict of interest in anything I was doing. But what we looked at, from this concept of designing the profession of security, is an opportunity for those who call this our profession and want to be recognized as such to borrow some of the great work that CIPS has done across the country and that APEGA has done here in Alberta: to recognize the path that they took, how they were recognized and established, how they developed their charters, et cetera.

So we’ve had an opportunity to chat with some folks from CIPS, and also to look at the work that they’ve done. And I’ve had a chance to review APEGA, and it made sense to me. Now spin forward to 2025. We have a group of individuals who are focused on designing and developing what we consider to be a model that will provide a professional designation for security professionals in Canada.

It’s an opportunity to demonstrate your expertise and your body of knowledge. It’s an opportunity to take all of the designations that you’ve received from groups like ISC2, ISACA, ASIS, et cetera, and use them as stepping stones to the next level, where you’re accepted with a professional designation, so that a security designation, whatever we land on for the post-nominals, would be recognized the same as an engineer’s or a doctor’s or potentially a lawyer’s.

It gives us the validation of the work that we do. It gives us the recognition of the value that security brings to an organization. And it ties together OT, IT, cyber, physical, all of the different parts that make up security. It’s a chance for us to come under one umbrella. The way I describe it is: for years, I said I ran a department, and it just happened to be security. Now we can say, I’m a security professional, and my expertise is in OT security, or in forensics, or in investigations, or in crime prevention through environmental design.

It gives us an umbrella designation for security and a chance to specialize. A good friend of mine is a surgeon. He started off as a doctor and now he’s a thoracic surgeon. So the way he describes himself is: he’s a doctor, his specialty is thoracic surgery, and now he’s chief of thoracic surgery at Vancouver General Hospital. Super great guy, but the path he took was: become a doctor, demonstrate your expertise, spend more time to create your specialty, focus on that, be recognized for that. And now that’s his designation.

I want to do the same here in Canada for security. The reason why is, look, you and I both know this, Andrew, and we’ve seen this: if I do a risk assessment for a client or internally, and I do a bad job, I just go to the next client.

But if a doctor or a lawyer mishandles a file or mishandles an operation, they are liable for their actions; they’re held accountable. We are not. What I want to be able to do is put in the standards that demonstrate the level of our expertise, that we’re held accountable for our actions, that we maintain our credentials throughout our careers, that we’re able to give back to the profession of security, and that if something does happen, we’re actually accountable for the work that we do.

And I think that’s important, right? Like, here in our new house, an engineer stamped our plans. He’s accountable for the work he did. Why can’t we have the same for security? I think we need to, because then executives gain a greater understanding of how important the work we do every day is to securing their organization so that it can achieve its goals and objectives.

That’s what I’ve been doing on the side of my desk for the past 20 years. I’ve finally got some breathing room to do it now, with TaleCraft giving me the space. So I’m looking forward to trying to roll this thing out between now and the end of the year, at least the structure of it, and then we’ll engage more people to get their comments and their perceptions, so that we’re trying to reflect and represent as many folks as we can across the security profession.

Andrew Ginter
Well, Tim, this has been tremendous. Again, I look forward to your book; hopefully you find some time to work on it. Before we let you go, can I ask you to sum up for us? What should we take away from the discussion we’ve had in the episode here and use going forward?

Tim McCreight
Thank you for that. I appreciate it. And yeah, fingers crossed, I can get working on the book over the summertime. That’s my goal. But for this particular episode, a couple of things. One, as security professionals, it’s not our job to accept the risk. It’s our job to identify it, provide a mitigation strategy, and present it back to executives. So that’s one of the things that I want to keep stressing for everybody. Our role is to be an advisor to the organization.

It’s not to accept the risk on behalf of the organization. Second, we all have a story to tell. We all understand the value and the power of a story. We all see how important it is when we tell a story to our executives, to our leaders, to our teams, and to others.

You need to focus on those skill sets of how to tell a story, particularly in the role of security, because not everyone understands the value that we bring. And the last point for me is that you need to continue to look for mentors, instructors, and trainers who can offer you these skill sets and provide this type of training, so that you can continue to build your career.

We can’t do this alone. You need to make sure that you have an opportunity to reach out to folks who can help you, whether it’s looking at your security program and trying to build it on a risk-based approach, or teaching people the value of telling a story and then applying those skills in the next presentation you give to executives. If folks remember those things, that’d be terrific.

So for those folks listening to the podcast today, if those points resonate with you, and if you’re looking for opportunities to learn more about telling a story or how to be effective doing that, how to look at your program from a risk-based approach and how to find mentors that can help you in your career path, reach out to TaleCraft Security.

This is what we do. It’s our opportunity to give back to the profession of security, to help organizations build their security programs, and to grow the skill sets of people who want to learn more about telling a story, becoming a better security leader, or understanding the concepts of a risk-based approach to security.

That’s what we’re here at TaleCraft for: to help, to give back, and to grow.

Nathaniel Nelson
Andrew, that seems to have done it with your interview with Tim. Do you have any final word you would like to leave us with today?

Andrew Ginter
Yeah, I mean, I think this is a really important topic. I see way too many security teams saying: this is my budget, this is all I have; I do not have budget to solve that problem; therefore, I will accept the risk of that problem. And especially for new projects, for risks that we’ve never considered before, that is often the wrong decision.

When we have new kinds of decisions to make, we need to escalate those decisions to the people who assign budget. We need to tell those people stories so they understand the risk. We have to get the right information, the right stories, to the right people so they can make the right decisions. Saying “I have no budget, therefore I’m going to accept the risk” is many times the wrong decision for the business. And we cannot afford to be making those wrong decisions time and again.

As the threat environment becomes more dangerous, and as the consequences of industrial cyber attacks increase, we need to be making the right decisions. And this seems an essential component of making the right decisions.

Nathaniel Nelson
Well, thanks to Tim McCreight for that. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.



]]>
NIS2 and the Cyber Resilience Act (CRA) – Episode 142 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/nis2-and-the-cyber-resilience-act-cra-episode-142/ Mon, 18 Aug 2025 08:29:50 +0000 https://waterfall-security.com/?p=35094 NIS2 legislation is late in many EU countries, and the new CRA applies to most suppliers of industrial / OT computerized and software products to the EU. Christina Kiefer, attorney at reuschlaw, walks us through what's new and what it means for vendors, as well as for owner / operators.

The post NIS2 and the Cyber Resilience Act (CRA) – Episode 142 appeared first on Waterfall Security Solutions.

]]>

NIS2 and the Cyber Resilience Act (CRA) – Episode 142

NIS2 legislation is late in many EU countries, and the new CRA applies to most suppliers of industrial / OT computerized and software products to the EU. Christina Kiefer, attorney at reuschlaw, walks us through what's new and what it means for vendors, as well as for owner / operators.

For more episodes, follow us on:

Share this podcast:

“So NIS2 is focusing on cybersecurity of entities, and the CRA is focusing on cybersecurity for products with digital elements.” – Christina Kiefer

Transcript of NIS2 and the Cyber Resilience Act (CRA)  | Episode 142

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome everyone to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today. Andrew, how’s going?

Andrew Ginter
I’m very well, thank you, Nate. Our guest today is Christina Kiefer. She is an attorney at law and a senior associate in the digital business department of reuschlaw. And she’s going to be talking to us about cybersecurity regulation in the European Union. As we all know, NIS2 is coming, and there’s other stuff coming too.

Nathaniel Nelson
Then without further ado, here’s your conversation with Christina.

Andrew Ginter
Hello, Christina, and welcome to the podcast. Before we get started, can I ask you to say a few words, introduce yourself and your background, and tell us a bit about the good work that you’re doing at reuschlaw?

Christina Kiefer
Yes, of course. So first of all, thank you very much for the invitation. I’m very happy to be on your podcast today. My name is Christina Kiefer. I’m an attorney at law, working as a senior associate in the digital business unit of the law firm reuschlaw.

Christina Kiefer
We are based in Germany, and reuschlaw is one of Europe’s leading commercial law firms specialized in product law. For more than 20 years, our team of approximately 30 experts has been advising companies in dynamic industries, both nationally and internationally.

Christina Kiefer
In my daily work, I advise companies and also public institutions on complex issues in the areas of data protection, cybersecurity, and also IT and contract law.

One focus of my work is on supporting clients in the introduction of digital products into the EU market, and also on the field of cybersecurity and IT law. Since my studies, I have focused on IT law and cybersecurity, and I have been involved in the legal developments in this area since then.

Andrew Ginter
Thank you for that. And our topic is the law in Europe for cybersecurity, its regulation. The big news in Europe is, of course, NIS2. And it’s not a law; it’s a directive to the nation states to produce laws, to produce regulations. So every country is going to have its own laws. Can I ask you for an update? How’s that going? Who’s got the law? I thought there was a deadline. Do the nations of Europe have this covered, or is it still coming?

Christina Kiefer
Yes, so it’s the last point: it’s still coming. Some countries have already transposed the NIS2 Directive into national law, but a lot of countries are still in the transposition period.

And that’s why it’s confusing, because the NIS2 Directive has already been in force since January 2023, and the deadline for the EU member states to transpose the NIS2 Directive into national law was October 2024.

Because a lot of member states hadn’t transposed the NIS2 Directive into national law, the EU Commission launched infringement proceedings against 23 member states last fall, in 2024. And this has led to some movement in some EU member states. So as of now, 10 countries have fully transposed NIS2 into national law.

For example, Belgium, Finland, Greece, and Italy. Another 14 countries have published at least some draft legislation so far; there you can name Bulgaria, Denmark, and also Germany. And then there are two countries, Sweden and Austria, that have published neither a draft nor a final national law, so we have no public information available on their implementation status yet.

Andrew Ginter
And, you know, for someone watching this from the outside with a command of English and a very limited command of German, is there sort of a standard place that a person like me could go to find all this? Or is it on every country’s national website, in a different language, in a different location? Is there any central repository of these rules?

Christina Kiefer
No, not yet at least. Maybe there will be some private websites where you can find all the different implementation information. But for now, when you are a company, whether within the EU or outside it, providing your services into the EU market, you have to comply with the NIS2 Directive. And this means you have to comply with the national laws of each EU member state.

And this is a big challenge for all international companies, because they have to check the national law of each EU member state and whether they fall under its scope of application. What is also very important is that the different national laws have different obligations. The NIS2 Directive sets a minimum standard which all national legislators have to fulfill, but on top of this, some EU member states have imposed more obligations, or a portal for registration, or new reporting obligations.

So you have to check for each EU member state. But here we can also help, because we see in our daily work that it is a very, very hard challenge for companies to check and understand all the national laws. We offer a NIS2 implementation guide where you get regular updates on, and an overview of, how the different EU member states have transposed NIS2.

In addition to this, we also have a NIS2 reporting and obligation guide, looking especially at the reporting and registration obligations, to see where you have to register in each EU member state. You can book our full guide, but we also post some overviews on LinkedIn and in our newsletter.

Andrew Ginter
So thanks for that. You touched on the goal of NIS2, which was to increase consistency among the nation states of Europe in terms of their cyber regulations and, in my understanding, to increase the strength of those regulations across the board. How’s that coming? Are the regulations that are coming out stronger than we saw with the original NIS directive? And are they consistent?

Christina Kiefer
Well, it’s correct that the idea behind the NIS2 Directive was to create a stronger and also more consistent cybersecurity framework across the whole EU and the EU market. The NIS2 Directive also covers a broad set of sectors for regulated companies, so there should be some consistency within the EU. But it’s an EU directive and not an EU regulation. This means the NIS2 Directive sets only a minimum standard, which the EU member states then transpose into national law, and that’s why member states are also allowed to go beyond it if they want to. Some of the EU member states have already done this. So what we’re seeing right now, looking at the national laws which have already been enacted and also at the drafts of some national laws, is quite a mixed picture. We don’t see the overall consistency that a lot of companies were hoping for. We see more of a mixed picture, with some countries, like Belgium again, for example.

They have pretty much stuck to the core of the directive and haven’t added much on top. So there, as a company, when you have already looked at the NIS2 Directive, you can be positive that you also fulfill the requirements of the law of Belgium. But on the other hand, looking for example at Italy, they have expanded the scope of application. Italy has, for example, included the cultural sector as an additional regulated area. The sector of culture isn’t mentioned in the NIS2 Directive at all, but Italy had the idea: well, we can regulate the cultural sector as well. So they have included it in their national law.

And also in France, you can see that they have imposed more obligations and have broadened the scope of application of their national law, because they have also widened the regulated sectors, adding educational institutions, for example. So we have a minimum set of standards set out in the NIS2 Directive, but across the EU, looking at the national laws, we have a lot of national differences. And that’s why it’s very hard for companies to comply with the NIS2 Directive, or with the national laws, within the EU market.

Nathaniel Nelson
One of the more interesting things that Christina mentioned there, Andrew, was Italy treating its cultural sector like critical infrastructure, which sounds, frankly, very Italian.

Andrew Ginter
Well, I don’t know. It’s not just the Italians. This was back in, I don’t know, the late noughts. One of the original directives that came out of the American administration was a list of critical infrastructures. And at the time, it included something like national monuments as a critical infrastructure sector. And the justification was that any monument or cultural institution could be seen as essential to national identity and national cohesion.

And then it disappeared in the 2013 update of what counted as critical national infrastructure. So it’s no longer on CISA’s list of critical infrastructures, but it used to be. In terms of Italy, I don’t have a lot of information, but again, you might imagine that national monuments and certain cultural institutions are vital to national identity. Think of the Roman Colosseum. Should that be regarded as critical infrastructure? It’s certainly critical to tourism, that’s for sure. So that’s what little I know about it.

Andrew Ginter
And in my recollection of NIS2, one of the changes was increased incident disclosure rules. Now, I’ve argued, or I’ve speculated, about this. We did a threat report at Waterfall, and we actually saw numbers plateau in terms of incidents. I wonder whether increased incident disclosure rules are in fact reducing disclosures, because lawyers see that disclosing too much information can result in lawsuits. For instance, SolarWinds was sued for incorrect disclosures. So I’m guessing they conclude that minimum disclosure is least risk. And if they get partway into an incident and say, this is not material, we don’t need to disclose it, we’re not going to disclose it, we actually see fewer disclosures.

Can you talk about what’s happening with the disclosure rules? How consistent are they? Multinational businesses, how many different ways do they have to file? And are we seeing greater disclosure, or in your estimation, fewer disclosures because of these rules?

Christina Kiefer
Yeah, that’s a really good question, and honestly it’s something we also get asked all the time right now, because we hear it over and over: if we operate in several EU countries, do I need to report a security incident in one EU member state, or via one portal, and then I’m fine? Or do I really have to report a security incident to each EU member state that is affected by the security incident?

And unfortunately, the answer right now is yes, you have to report your security incident to each EU member state, or to each national authority of each EU member state whose national law you fall under the scope of. The NIS2 directive does not require one registration and reporting portal for all EU member states. It’s up to the EU member states and their national authorities to regulate this field of law. You can see that many national authorities have already recognized this issue, and they are looking at ways to simplify the process of registration and of reporting security incidents. Some member states are trying at least to set up a nationwide portal where you can report your security incident.

Some other national authorities go even further. They are implementing a scheme or structure where you only have to report to them, and then they will transfer the report to the other relevant EU authorities. But again, this is regulated in each EU member state’s national law, so you have to check all the other national laws within the EU. The authorities of the EU member states have at least indicated that they are talking to each other, so maybe in the future we will get one portal to report everything. But as I said before, it’s not regulated in the NIS2 directive and is not foreseen for now.
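The fan-out Christina describes, with no central EU portal, can be sketched in a few lines. This is purely illustrative: the portal names, the mapping, and the helper function are all hypothetical, not anything NIS2 or any national law prescribes.

```python
# Hypothetical sketch: absent a central EU portal, one incident must be
# reported separately to each national authority whose law covers the company.
# The portal names and mapping below are illustrative assumptions only.

AUTHORITY_PORTALS = {
    "DE": "BSI reporting portal",   # assumed German authority portal
    "IT": "ACN reporting portal",   # assumed Italian authority portal
    "FR": "ANSSI reporting portal", # assumed French authority portal
}

def reports_required(incident_scope, regulated_in):
    """Return one (member_state, portal) reporting task per affected state.

    incident_scope: member states affected by the incident.
    regulated_in:   member states whose national NIS2 law covers the company.
    """
    affected = set(incident_scope) & set(regulated_in)
    return [(state, AUTHORITY_PORTALS[state]) for state in sorted(affected)]

# An incident touching Germany and Italy, for a company regulated in three
# states, yields two separate filings rather than a single EU-wide one.
tasks = reports_required(incident_scope=["DE", "IT"],
                         regulated_in=["DE", "FR", "IT"])
```

The point of the sketch is just that the number of filings grows with the overlap between where the incident hits and where the company is regulated.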

Yes, and to the other part of your question. You could think that when you’re obliged to report each and every security incident, reporting would decrease. But you also have to look at the risk of non-compliance, and the risks are very high, because the NIS2 directive imposes high sanctions and also a lot of authority measures, including market measures. That’s why, in our daily consulting work, it’s better to say: please report an incident. The national authorities communicate this to the companies as well. They say, please report something, because then we can work together. The focus of the national authorities, at least in Germany, as we see it right now, is that they want to cooperate.

They want to ensure a cyber-secure environment and a cyber-secure market. So the focus is on reporting something so that they can work on it together. That’s why it would be better to report, and I would say we may even see an increase in reporting.

Andrew Ginter
So I’m a little confused by your answer. The rules that I’m a little bit familiar with are the American Securities and Exchange Commission rules. Those rules mandate that any material incident must be reported to the public: any incident that might cause a reasonable investor to either buy or sell or assign a value to shares in a company.

Which means non-material incidents can be kept quiet. And the SEC disclosures are public; everyone can see them, because reasonable people need information to buy and sell shares. Does the NIS2 system require all incidents to be reported? And are those reports public?

Christina Kiefer
That’s a good point. To the first part of your question, the reporting obligation in the NIS2 directive is much the same as the regulation you mentioned, because you only have to report severe security incidents. As a regulated company, you are obliged to check, in a first step, whether there is a security incident, and in a second step, whether it is a severe security incident.

And only those security incidents are you obliged to report to the national authorities. So it’s much the same structure or mechanism. To the second part of your question, the report will not be published for everyone. First of all, if you report it to the national authorities, only the national authorities have the information. It can happen, because in some member states we have laws under which members of the public can get access to public information, that some information becomes publicly available. But the first step is that you only report it to the national authority, and the report will not be available to the public as such.

But next to the reporting obligation to the national authorities, you also have information obligations in the NIS2 directive. So it can happen that you are also obliged to inform the consumers of your services.

Andrew Ginter
So thanks for that. The other big news that I’m aware of in Europe is the CRA, which confuses me, because I thought NIS2 was the big deal, yet there’s this other thing that came at me out of the blue a year ago, and I’m going: what’s going on? Can you introduce for us what the CRA is, and how it’s different from NIS2?

Christina Kiefer
Yeah, sure. So, as you mentioned, the CRA is like the sister or brother of the NIS2 Directive, and the second major piece of the new European cybersecurity framework alongside it.

Christina Kiefer
It’s the Cyber Resilience Act, or CRA for short. While the NIS2 Directive focuses on cybersecurity requirements for businesses or entities in critical sectors, the CRA takes a different angle: it introduces EU-wide cybersecurity rules for products.

So the NIS2 directive focuses on the cybersecurity of entities, and the CRA focuses on the cybersecurity of products with digital elements. The other difference is that the NIS2 directive is an EU directive, so it needs to be transposed into national law by each EU member state, while the Cyber Resilience Act is an EU regulation. So when the Cyber Resilience Act comes into force, it will apply directly in each EU member state.

Andrew Ginter
Okay, so that’s how the CRA fits alongside NIS2. What is the CRA? What are these rules? Can you give us a high-level summary?

Christina Kiefer
Yeah, sure. So the CRA is the first EU-wide horizontal regulation that imposes cybersecurity rules on products with digital elements. The definition of a product with digital elements is very broad: it covers software and hardware, and also software and hardware components if they are brought to the EU market separately. Products with digital elements are, roughly, connected devices, and software and hardware that can potentially pose a security risk. Also very important: the CRA imposes obligations not only on manufacturers, but also on importers and distributors, and also on companies that are not resident in the EU, because the main point for the geographical scope of application is that you place a product on the EU market, whether you are based in the EU or not.

Christina Kiefer
So this means that the Cyber Resilience Act, just like the General Data Protection Regulation, has a global impact for anyone selling tech products in Europe.

Andrew Ginter
So let me jump in real quick here, Nate. What Christina’s described here is that the scope of the CRA applies to all digital products sold in Europe. In my estimation, and she’s going to explain more in a few minutes, the CRA is probably the strictest cybersecurity regulation for products generally in the whole world. It sounds to me like this might become just like the GDPR. That was a European regulation that came through a few years ago. It had to do with the use of private information, in particular my email address, and it restricted marketing emails, so in that respect it acted a bit like an anti-spam act. It’s the strictest such law in the world. And everybody who has any kind of worldwide customer base, which is almost everybody in the digital world that’s sending out marketing emails, is now following the GDPR pretty much worldwide, because it’s just too hard to apply one law in one country and another law in another. So what you do is you pick the strictest law you have to comply with worldwide, which is the GDPR, and you apply that worldwide instead of trying to figure out what’s what. It sounds to me like the CRA could very well turn into that kind of thing. It might be the thing that all manufacturers that embed a CPU in their product have to follow worldwide, because it’s just too hard to change what they do in one country versus another.

Andrew Ginter
Okay, so can you dig a little deeper? I mean, an automobile: you buy a new automobile from the dealership. My understanding is that it has 250, 300, maybe 325 CPUs in it, all of them running software. It would seem to me that a new automobile is covered by the CRA. What are the obligations of the manufacturer? What should customers like me expect in automobiles that might be different because of the CRA?

Christina Kiefer
Thank you. First of all, looking at your example, automobiles are not covered by the CRA, because the CRA has some exemptions. The CRA says: we are not regulating products with digital elements that are already regulated by specific product safety laws. And looking at the automotive sector, in the EU we certainly have very strong and very specialized regulation for the product safety of cars and so on. So much for your example. But looking at other products with digital elements, for example wearables, headphones, or smartphones, you can say that there are five core obligations for manufacturers in the CRA. The first obligation is compliance with Annex 1, which means you have to fulfill a list of cybersecurity requirements. And you don’t only have to fulfill those cybersecurity requirements, you also have to declare and show compliance with Annex 1 of the CRA. So there is a conformity assessment you have to undergo.

Christina Kiefer
The second obligation is cyber risk assessment. If you are a manufacturer of a product with digital elements, you are obliged to assess cyber risks, not only during the development and construction of your product, and not only when placing your product on the EU market, but throughout the whole product life cycle. So if you already have a product placed on the market, you are obliged to keep performing cyber risk assessments. The third obligation is free security updates.

Christina Kiefer
So manufacturers have to provide free security updates throughout the expected product life cycle. We also have mandatory incident reporting, so there are reporting and registration obligations here too, such as we already discussed for the NIS2 directive. And as in every product safety law in the EU, we also have the obligation for technical documentation. So those are the five core obligations: compliance, cyber risk assessment, free security updates, reporting, and documentation.

Andrew Ginter
And you mentioned distributors. What are distributors and importers obliged to do?

Christina Kiefer
We have some graduated obligations there. They are not as strict as the obligations for manufacturers, but importers and distributors are obliged to assess whether the products they are importing and distributing to the EU market are compliant with the whole set of cybersecurity requirements of the CRA. So they have to check whether the manufacturer and the product are compliant, and if not, they have to inform and cooperate with the manufacturer to ensure cybersecurity compliance. Importers are also obliged to impose their own measures to comply with the CRA.

Andrew Ginter
Okay, and you said there were five obligations. You spun through them quickly. Some of them make sense on their own. Do a risk assessment, do it from time to time, see if the risks have changed. That kind of makes sense. The first one, though, comply with Annex 1. That’s like an appendix to the CRA. What’s in there? What are the obligations?

Christina Kiefer
Yes, sure. Annex 1 is, you could also say, Appendix 1 to the CRA. It is a list of certain cybersecurity requirements that manufacturers have to fulfill, and the list is divided into two main areas. One area is cybersecurity requirements: no known vulnerabilities at the time of market placement, secure default configurations, protection against unauthorized access, ensuring confidentiality, integrity and availability, and also secure deletion and export of user data. So all kinds of cybersecurity requirements such as the ones I’ve mentioned. The other area is vulnerability management. Manufacturers have to ensure that they have a structured vulnerability management process, and they have to maintain a software bill of materials.

They have to provide free security updates. They have to undergo cybersecurity testing and assessments. There needs to be a process to publish information on resolved vulnerabilities. And again, here we also need a clear reporting channel for known vulnerabilities.
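The "no known exploitable vulnerabilities at market placement" gate, combined with a software bill of materials, can be illustrated with a minimal sketch. All component names, versions, and the advisory data here are invented for illustration; a real process would consult actual advisory feeds, which this sketch does not.

```python
# A minimal sketch of the Annex 1 gate Christina describes: check every
# component in the product's software bill of materials (SBOM) against a
# known-vulnerability list before placing the product on the market.
# The SBOM entries and the advisory data below are illustrative only.

sbom = [
    {"component": "example-kernel", "version": "6.1.0"},
    {"component": "example-tls-lib", "version": "3.0.8"},
]

# Assumed advisory data: (component, version) -> known AND exploitable?
known_vulns = {
    ("example-tls-lib", "3.0.8"): True,   # exploitable: blocks placement
    ("example-kernel", "6.1.0"): False,   # known but not exploitable
}

def blocking_vulns(sbom, known_vulns):
    """Return components with known *exploitable* vulnerabilities.

    Note the distinction Christina draws: any vulnerability is not enough;
    only known exploitable ones block market placement.
    """
    return [
        entry["component"]
        for entry in sbom
        if known_vulns.get((entry["component"], entry["version"]), False)
    ]

blockers = blocking_vulns(sbom, known_vulns)
ok_to_place = not blockers  # here the TLS library would need fixing first
```

The design point the sketch encodes is the one she makes explicitly: the check keys on "known exploitable," so a known-but-not-exploitable finding does not by itself stop shipment.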

Andrew Ginter
So it sounds like you said that a manufacturer is not allowed to ship a product with known vulnerabilities. Practically speaking, how does that work? I mean, a lot of manufacturers in the industrial space use Linux under the hood. The Linux kernel is millions of lines of code. These devices don’t necessarily run a full desktop-style Linux, but they still have a lot of code that they’re pulling from an open source distribution. And in these millions of lines of code, from time to time, people discover vulnerabilities and they get announced. So it’s almost a random process. Do I have to suspend shipments the day that a Linux vulnerability comes to light, until I can get the thing patched, and then three days later start shipments again? Practically speaking, how does this zero-known-vulnerabilities requirement work?

Christina Kiefer
Basically, it is as you said, because the Cyber Resilience Act focuses on no known vulnerabilities not only in your product but also in the whole supply chain. So the Cyber Resilience Act addresses not only products with digital elements but also the cybersecurity of the whole supply chain. This means, looking at Annex 1 and the cybersecurity requirements, products with digital elements may only be placed on the EU market if they don’t contain any known exploitable vulnerabilities. So it’s not any vulnerability, it’s any known exploitable vulnerability. That is a clear requirement under Annex 1. And when you look at making a product available on a market, that doesn’t just mean selling it.

Christina Kiefer
It includes any kind of commercial activity. Another very good question in our daily work concerns making a product available on the market. A lot of companies say: well, I have a batch of products, and if I have placed this batch on the EU market, I have already placed the product on the market, so I can also place the other products of this batch in the future. But that is not correct, because in EU product safety law, the regulation focuses on each individual product. So looking at these requirements: first of all, you really have to check your own product, your own components, but also the products and components you are using from the supply chain. You have to check if there are any known exploitable vulnerabilities. So you have to put in place a process to check for known vulnerabilities, and also mechanisms to fix those vulnerabilities.

Christina Kiefer
And if you have products already on the market, you don’t have to recall them, provided you have a vulnerability management process that is working and through which you can fix those vulnerabilities. When you have products already in the shipment process, it’s up to each company to assess whether they have to recall products from the shipment process, or whether they say: okay, we leave them in the shipment process because we know we can fix the vulnerability within two or three days. So in the end, it’s a risk-based approach, and each company has to assess what measures are applicable and necessary.

Andrew Ginter
So that makes a little more sense. I mean, with the Linux kernel and sort of core functions, I don’t have the numbers, but I’m guessing that you’re going to see a vulnerability every week or two in that large a set of software. And if that’s part of a router that you’re shipping, or part of a firewall, or part of any kind of product that you’re shipping, does it make sense that you discover the exploitable vulnerability on Thursday and you have to suspend shipment until three weeks out, when you have incorporated the fix in your build and you’ve repeated all of your product testing, which can be extensive?

Andrew Ginter
And by the time you’re ready to ship that fix, two other problems have been discovered, and now you can’t ship until... It sounds like it’s not quite that strict. That scenario sounds like nonsense to me; it would just never work. You’re saying that there is some flexibility to do reasonable things to keep bringing product to market, as long as you’re managing the vulnerabilities over time. Is that fair?

Christina Kiefer
Yes, yes, that’s right. Because in the CRA we have a risk-based approach. The basis for each measure you have to impose under the CRA is your cyber risk assessment. So you have to check: what kind of product am I using or manufacturing? Which product am I placing on the EU market right now? What are the cybersecurity risks right now? And what are the specific cybersecurity risks of this known vulnerability?

Christina Kiefer
And then you have to check: do I have a process imposing appropriate measures to fix those vulnerabilities? And if I have appropriate measures to fix the vulnerabilities in a timely manner, then I am not obliged to recall the product itself. But in the end, with a risk-based approach, it’s up to the decision of each company.
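The risk-based shipment decision she walks through can be sketched as a tiny decision function. The thresholds, severity labels, and outcomes below are illustrative assumptions, not anything the CRA specifies; the point is only the shape of the reasoning: weigh severity against the ability to fix in a timely manner.

```python
# A sketch of the risk-based decision for a vulnerability found in products
# already shipped or in transit. Thresholds and labels are hypothetical.

def shipment_decision(severity, days_to_fix, can_patch_in_field):
    """Decide what to do with affected units under a risk-based approach.

    severity:           e.g. "low", "high", "critical" (illustrative labels)
    days_to_fix:        estimated days until a fix ships
    can_patch_in_field: whether units can receive a security update after sale
    """
    if can_patch_in_field and days_to_fix <= 3:
        # Working vulnerability management with a quick fix: keep shipping,
        # matching the "fix within two or three days" example above.
        return "continue, patch via security update"
    if severity == "critical":
        # No timely fix for a critical flaw: pull the units back.
        return "recall"
    return "hold shipments until patched"

decision = shipment_decision(severity="high", days_to_fix=2,
                             can_patch_in_field=True)
```

Each company would plug in its own risk assessment; the sketch just shows that the same vulnerability can lead to three different outcomes depending on that assessment.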

Andrew Ginter
So this is a lot of change for a lot of product vendors. Can I ask you, how’s it going? Is it working? Are the vendors confused? Do you have any insight into how it’s going?

Christina Kiefer
Yeah, sure. So what we’re seeing right now is that a lot of companies, both manufacturers and suppliers, are getting ahead of the curve when it comes to the Cyber Resilience Act, because they see that there is a change coming and there will be strict new obligations, not only on manufacturers, but across the whole supply chain. So suppliers, distributors and importers are also coming to us and asking if they fall under the scope of the CRA. That’s the first point: if you’re a distributor or an importer, you have to check whether your company itself falls under the scope of the CRA. And if it does, then you are already obliged to fulfill all the obligations of the CRA. But it can also happen that suppliers fall under the scope of the CRA in an indirect manner.

Because, from a manufacturer’s point of view, you have to ensure all those new cybersecurity requirements within the whole supply chain. And the main instrument to ensure this, in the past and also in the future, is contract management. You have to transpose all those new obligations onto the suppliers via contract management. There we see different reactions, but there’s definitely a growing awareness that cybersecurity needs to be addressed contractually, especially in relation to the CRA obligations. And looking at contract negotiations, of course we have some negotiations with the suppliers, and one of the main points negotiated is the regulation of enforcement.

Christina Kiefer
Because when you have contract management covering cybersecurity requirements, you can not only transpose those obligations onto the suppliers, you also have rules on enforcing those new contractual obligations, for example contractual penalties. And there we see that contractual penalties often spark some debate during negotiations. But to sum up, in practice we’ve always been able to find a balanced solution that works for all parties involved.

Nathaniel Nelson
I suppose I could think about any number of potentially trivial electronics products, Andrew, but let’s say that I or my neighbor has a smart fridge, a fridge with a computer in it. We generally assume that those devices don’t even really have security in mind at all, and a security update is so far from the universe of how anyone would interact with such a device. And now we’re saying that that kind of thing is going to be regulated in these ways.

Andrew Ginter
I think the short answer is yes. You might ask, what good does this regulation do for a fridge? I think about this sometimes, and I think the answer is: it depends. A lot of the larger home appliances nowadays have touchscreens. There’s a CPU inside, there’s software inside; these are cyber devices. You might ask, well, when was the last time I updated the firmware in my fridge? How many times am I going to update the firmware in my fridge? Those are good questions. Most people never think about something like that. But the law might very reasonably apply to the fridge if the fridge is connected to the Internet, so that I can see, for example, how much power my fridge is using on my cell phone app.

Isn’t that clever? But now I’ve connected the fridge to the Internet. We all know what happened with the Mirai botnet: it took over hundreds of thousands of Internet of Things devices and used them as attack tools for denial of service attacks. If you’ve got an Internet-connected fridge, you risk that if you haven’t updated the software. Worse, if someone gets into your fridge and takes over the CPU, they could change the set point on the temperature and cause all your food to spoil. That is a safety risk.

Andrew Ginter
Again, how many consumers are going to update the software in their fridge? Realistically, I don’t think the majority of consumers will, even if there is a safety threat. To me, this is part of the risk assessment. If there’s a safety threat because of these vulnerabilities, you might well need to auto-update the firmware; that might be part of your risk assessment, so that the consumer doesn’t have to do it. Or better yet, design the fridge so that safety threats from a compromised CPU are physically impossible. Make the temperature setting manual, or something. But the question of safety-critical devices connected to the cloud is a bigger problem than, I think, one regulation can solve.

Nathaniel Nelson
Yeah, admittedly, the notion of a smart refrigerator safety threat isn’t totally resonating with me. And we haven’t even discussed the matter of, okay, let’s say that my refrigerator gets automatic updates, or I just have to click a button in an app when it notifies me to update my firmware. Fridges sit in houses for long periods of time. I can’t recall the last time my fridge was replaced. In that time, any manufacturer could go out of business. And then how do you get those updates, right?

Andrew Ginter
Exactly. So, to me, and this is outside the scope of the CRA, but to answer your question, the solution is two- or threefold. We need to design safety-critical consumer appliances in such a way that unsafe conditions cannot be brought about by a cyber attack. I mean, we talk about fixing known vulnerabilities; that’s only one kind of vulnerability. What about zero days? There’s logically no way that someone can solve all zero days; it’s a nonsensical proposition. So there are always going to be zero days. What if one is exploited and a million fridges are set to a set point that’s unsafe?

Andrew Ginter
To me, we’ve got to design the fridges differently, but that’s sort of a different conversation. In fact, that’s the topic of my next book, which is why I care so much about it. These are important questions, and I think the CRA is a step in the direction of answering them, but I don’t know that it has all the answers.

Andrew Ginter
So work with me. What you described there makes sense for manufacturers like IBM or Sony, the big fish, who can produce high volumes. But if I’m a small manufacturer, I produce a thousand devices a year. I buy components for these devices; I buy software for these devices from big names like Sony and Microsoft and Oracle. And I go to Oracle and say: you must meet my contract requirements, or I won’t buy my thousand products from you at a cost of $89 a product. Oracle is going to say, take a flying leap, we’re not signing your contract. Is this realistic?

Christina Kiefer
Yes, and we see this in practice too, because we consult not only for the big manufacturers but also for the smaller companies in the supply chain. And there you can take different approaches, because when you are buying products from the big companies, first of all, you have to know that they are, or might be, themselves obliged under the CRA, so they are fulfilling all those new cybersecurity requirements. You also have to check their contracts, because there you can already see they have a lot of new provisions addressing cybersecurity, whether implemented in the general contractual documents or in a dedicated cybersecurity appendix.

So you see, all the companies are looking at the Cyber Resilience Act, and they are taking measures, including in their contract management. If you are lucky, you can see: okay, they have a contract that already regulates all the obligations under the CRA. And if that’s not the case, we take the approach of establishing a cybersecurity appendix. When you’re already in a contractual relationship with the big players, you don’t have to negotiate the whole contract from the beginning; you can just show them your appendix, and on the basis of this appendix, you can discuss the cybersecurity requirements. So this is an approach that has helped smaller companies in the market as well.

Andrew Ginter
So you gave the example of headphones and smartphones. For the record, does this apply to industrial products as well? I mean, our listeners care about programmable logic controllers and steam turbines that have embedded computer components. Or is it strictly a consumer goods rule?

Christina Kiefer
Now, this is a very important point to highlight: the Cyber Resilience Act explicitly applies not only to consumer products but also to products in the B2B sector. This means that all software and all hardware products, along with any related remote data processing solutions, fall under the scope of the CRA, in both B2C and B2B relationships.

Andrew Ginter
Well, Christina, thank you so much for joining us. Before we let you go, can I ask you to sum up for our listeners? What are the key messages to take away about what’s happening with cyber regulations in Europe, both NIS2 and CRA, and what should we be doing about them, as both consumers and manufacturers?

Christina Kiefer
Yeah, sure, of course. So let me give you a quick recap. First of all, you see the EU legislature is tightening cybersecurity requirements significantly with both the NIS2 directive and the Cyber Resilience Act. The new requirements affect any company that offers products or services to the EU market, no matter where they are based, so they have a very broad scope of application. Looking at the NIS2 directive, it’s very important to know that the NIS2 directive is already in force, but it has to be transposed into national law, which has not yet been done by all EU member states, and that the national implementation across the EU is still quite varied.

Looking at the Cyber Resilience Act, the CRA brings new security obligations for products with digital elements, so for all software and hardware products. And it focuses not only on the cybersecurity of products, but also on the whole supply chain. So both frameworks require companies to take proactive steps right now: risk assessment, risk management, reporting, and also contract management, particularly when it comes to managing their supply chain. And looking at the short implementation deadlines ahead, both for the NIS2 directive and for the CRA, it’s very important for companies to act now. The first step we advise is to identify the relevant laws, because we have a lot of new regulations covering digital products and digital services. So, first of all, check the relevant laws and the relevant obligations that are applicable to your business.

And here we offer a free NIS2 quick check and also a free CRA quick check, where you can just click through the different questions to see if you are under the scope of NIS2 and CRA. And then, once you have clarified that you are affected by one or both of the new regulations, the company needs to review and adapt its cybersecurity processes, both technically and organizationally. It’s very crucial to continuously monitor and ensure compliance with the ongoing legal requirements, especially with regard to contract management and the supply chain. There we can help national and international companies with a 360-degree approach to cybersecurity compliance, because we offer solutions ranging from product development and marketing to reporting and market measures. So we give companies practical and actionable guidance at every step.

So as the first step, to identify the relevant laws and obligations for your business, companies can visit our free NIS2 quick check and our free CRA quick check, available under nist2-check.com. And if you have any further questions, you are free and invited to write to me via email or via LinkedIn. I'm happy to connect. And thank you very much for the invitation.

Nathaniel Nelson
Andrew, that just about concludes your interview with Christina Kiefer. And maybe for a last word today, we could talk about what all of these rules mean practically for businesses out there. Because it's one thing to mention this rule and that rule in a podcast, but it sounds like the kind of stuff we're talking about here is going to mean a lot of work for a lot of people in the future.

Andrew Ginter
I agree completely. It sounds like a lot of new work and a lot of new risk, both for the critical infrastructure entities that are covered by NIS2 or by the local laws, especially for the larger businesses that are active in multiple jurisdictions, and certainly for any manufacturer who wants to sell anything remotely CPU-like into the European market. It sounds like a lot of work, but I have some hope that, precisely because it's such a lot of work, it's also a business opportunity. We're going to see entrepreneurs and service providers and even technology providers out there providing services and tools that will automate more and more of this, so that not every manufacturer and critical infrastructure provider in the European Union, or in the world selling into the European Union, has to invent the answers to these new rules by themselves.

Nathaniel Nelson
Well, thank you to Christina for elucidating all of this for us. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.

Stay up to date

Subscribe to our blog and receive insights straight to your inbox

The post NIS2 and the Cyber Resilience Act (CRA) – Episode 142 appeared first on Waterfall Security Solutions.

]]>
Network Duct Tape – Episode 141 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/network-duct-tape-episode-141/ Wed, 13 Aug 2025 16:31:00 +0000 https://waterfall-security.com/?p=35075 Hundreds of subsystems with the same IP addresses? Thousands of legacy devices with no modern encryption or other security? Constant acquisitions of facilities "all over the place" network-wise and security-wise? What most of us need is "network duct tape". Tom Sego of Blastwave shows us how their "duct tape" works.

The post Network Duct Tape – Episode 141 appeared first on Waterfall Security Solutions.

]]>

Network Duct Tape – Episode 141

Hundreds of subsystems with the same IP addresses? Thousands of legacy devices with no modern encryption or other security? Constant acquisitions of facilities "all over the place" network-wise and security-wise? What most of us need is "network duct tape". Tom Sego of Blastwave shows us how their "duct tape" works.

For more episodes, follow us on:

Share this podcast:

“We abstract the policy from the network infrastructure such that you can have a group of devices or a device itself that essentially associates with an IP address that’s an overlay address.” – Tom Sego

Transcript of Network Duct Tape | Episode 141

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome listeners to the Industrial Security Podcast. My name is Nate Nelson. I’m here as usual with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions.

He is going to introduce for all of us the subject and guest of our show today. So Andrew, how are you?

Andrew Ginter
I'm well, thank you, Nate. Our guest today is Tom Sego. He is the CEO and co-founder of BlastWave. And he's going to be talking about distributed asset protection, which is a fancy name for a very common problem in the industrial space. We have stuff, devices, computers, cyber assets, all over the place. It might be distant, in pumping stations and substations, or it might be local. The stuff was bought on the cheap, from the lowest bidder.

It's old. It's ancient. And we have no budget to rip and replace. So what do we do about cybersecurity? This is something he'll be walking us through.

Nathaniel Nelson
Then let’s get right into it.

Andrew Ginter
Hello, Tom, and thank you for joining us. Before we get started, can I ask you to say a few words of introduction? Tell us a bit about your background and about the good work that you’re doing at BlastWave.

Tom Sego
Sure, Andrew. Thanks for having me. So my background is I started my career as a chemical engineer at Caterpillar. I also spent eight years at Eli Lilly designing and building processing facilities to make medicine.

I was also a certified safety professional during that period and managed a 24-7 liquid incineration operation, which burned 30,000 gallons of liquid waste per day.

So a shit ton. And then I went to Emerson and did business development and corporate strategy there. Then I did product management at AltaVista. Then I went on to do sales support at Apple, where I was for almost 10 years.

And then that’s when I started my entrepreneurial career. I started a mobile telephony company, started a solar storage company, started a wine importing business, then played professional poker for a few years, and then eventually started this cybersecurity business called BlastWave.

I co-founded that in 2017. And our mission then is the same as it is today, which is to protect critical infrastructure from cyber threats.

And we wanted to come at this with a very different approach than other cybersecurity companies, in that we started from first principles, thinking about what the three biggest classes and categories of threats are, and whether we can actually eliminate those.

The biggest category is probably no surprise to anybody here, but it's phishing, credential theft, et cetera. I'm like, well, let's just get rid of usernames and passwords altogether, and come up with a different model for MFA that can actually apply to industrial settings.

So we did that. The second category of threats was really CVEs and vulnerabilities, and could we make those unexploitable? We came up with a concept called network cloaking, which I'm sure we'll discuss, which addresses that issue. And then the last one is human error, which is impossible to get rid of.

But if you can make human beings make fewer decisions, they can also make fewer mistakes. So we also incorporated that into a lot of our UI and UX.

Andrew Ginter
Wow, that's a history like none other I've ever heard, Tom. It makes my own background, which I thought was storied, look completely mundane.

You’ve been in lots of different industries. Now, I understand that a lot of what BlastWave does right now is upstream and midstream. And we’ve never had someone on the show explaining how that works. I mean, I think we’ve had one person on talking about an offshore platform at some point.

But when you're looking at the industry, can we start with the industry? What's the physical process? Physically, what does this stuff look like? What does it do? How does it work?

Tom Sego
Yeah, it's really interesting, because I can talk about the physical process, and it's also evolved quite a bit in the last 20 years. So first of all, just stepping back and looking at the industry: the overall oil and gas market globally generates $2 trillion of revenue per year, and it generates $1 trillion in profit.

So there's a lot of money in this business. And that also means there are a lot of gallons of oil and a lot of cubic feet of gas being extracted and transmitted and sent everywhere around the world.

And the other thing that's interesting is that, in spite of how old this industry is, there are between 15,000 and 20,000 new oil wells created per year, and in fact half of those are in the Permian Basin. About 8,000 wells were created there last year.

I don’t think people realize the magnitude of which the oil and gas companies are continuing to create wells and extract oil. The other thing that’s interesting about it is 20 years ago, we had a traditional vertical drilling approach to oil and gas.

And in the last two decades, capabilities have emerged to actually drill horizontally. And what's pretty interesting is that as you start drilling a well today, you create the initial bore, which is usually a foot or more in diameter.

And then you can send these devices and drill bits down a relatively gentle sloping curve so that, over the course of maybe 100 or 200 meters, you've turned a 90-degree angle.

And then you can start drilling horizontally, which allows you to have higher probabilities of not hitting a dry well. It gives you more capabilities for lower cost extraction.

And so it's been a great boon for the industry. Hydraulic fracturing, which is another technique that's been exploited to get much higher yields out of these wells, also contributed to the recent boom in oil and gas.

So there are many, many things that have to be considered when you start this process. You've got to go through site selection and permitting. You've got to do all this site prep. And one thing people may not realize is that site prep means building roads.

You have to build an entire infrastructure to get to and from these wells. And then once you actually start drilling the well, it's much like a CNC machine, if you've been in a factory like Caterpillar, where there's a heat-transfer fluid that allows you to cut the metal.

In this case, they use a mud that both stabilizes the wellbore and helps you manage pressure. That mud flows down through the drill pipe, and then it comes back up the outside of the drill pipe in kind of an annulus, almost like a donut, to be cleaned, with the rock cuttings removed using a screening operation.

And then you reuse the mud and so forth. So there's a lot to it. And increasingly, much of this is being automated.

And you have connectivity that is absolutely essential as your eyes and ears in these wells. Because once you start producing oil and gas, these sites are hours and hours away from each other.

They're in very remote, very rural areas, so that connectivity is absolutely critical. We have one customer with 700 sites that they're trying to manage.

And so they have to have the ability to do this in an automated fashion, which requires not just connectivity, but secure connectivity.

Andrew Ginter
Cool. I mean, it's a piece of the industry I'd never dug into, so thank you for that. Can I ask you, you've said that in the modern world,

increasingly everything is automated. That makes perfect sense. The example I often use is that you buy an automobile and it's got 300 CPUs in it. Every non-trivial device you buy nowadays has a CPU in it.

Can you talk about the automation in these drilling systems, in these upstream systems? What does that automation look like? Is it built into the device, like in an automobile? Is it a programmable logic controller? I'm familiar with power plants, vaguely. Bluntly, I don't get out much; I'm a software guy more than a hardware guy, but I've had a few tours, and I know what a PLC looks like. If I visited one of these well sites, would I recognize the automation? What does it look like?

Tom Sego
Yeah, you would definitely recognize the automation. So what you see is your classic kind of SCADA tech stack, if you will. So you’ll have remote terminal units. You’re going to have PLCs.

You’re going to have these things mounted on a DIN rail in a cabinet. And there can be various size cabinets at some well locations.

You're going to have just a few devices. And then at some other well sites, again, I go back to the horizontal drilling, you're going to have a much bigger operation there. You're also going to have those well sites connected to what are called tank batteries,

so that you can essentially manage the flow of oil and gas into these storage facilities. So there's a lot of automation that's necessary, using PID control loops to maintain equilibrium within these systems.

And oftentimes there can also be challenges, shocks to the system, where, let's say in the case of oil and gas, the price starts dropping.

But when the price starts dropping, the motivation of the business unit is not to just keep cranking production at maximum capacity. So you actually want to manage your operation dynamically, based on economic conditions that can change over time.

And I’ll tell you something else, Andrew, about what’s happening today. There’s a lot more uncertainty in the business world today than there was four months ago. And I think that is going to affect oil and gas.

It's going to affect the price of oil and gas. It's going to affect the supply of oil and gas. It's going to affect the transmission across borders. So these kinds of things can affect the automation.

I'll call it uber-automation. Not just between the actual plant operations and facilities, but also between different entities in the upstream, downstream, and midstream ecosystem.

So there are a lot of very interesting factors that affect that. And I'll tell you one other thing that's kind of interesting: everybody's talking about AI, and some of the larger oil and gas companies are trying to figure out how to apply AI to optimize their operations.

And everybody knows that there's automation that's used to help deliver predictive maintenance to rotating machines.

But there are also uses of AI in oil and gas to prevent things like spills. And one of the big challenges is that the easy part is getting the data out. If you go talk to someone at BP or Shell or Chevron and you say, can I get data to the cloud? They're going to go, well, heck yeah.

There are all kinds of great things that can allow you to get data out of your process. And in fact, I think you're associated with a company that does a really good job of that kind of one-way transmission of data.

But once you have that data and you're using it to build AI models, then how do you deliver those set points and control variables back to the process?

It scares the crap out of these people, the idea of connecting their control network to a much less secure cloud network or corporate network.

Because as we all know, security is a continuum; it's not Boolean, secure or insecure. So I think there are a lot of interesting things happening with that. And just to close the story on that: one company, for example, is pulling that data and analyzing it in AWS, and then they take some of those control variables and use a human-in-the-loop process, where the system says, this is the recommended set point for this process.

And then the human in the loop implements that through their control HMI. So there are a lot of very interesting traditional ways in which automation is applied to oil and gas.

But there are also some very interesting evolving mechanisms that involve machine learning.
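The human-in-the-loop pattern Tom describes, where a cloud model only recommends a set point and a person decides whether it reaches the process, can be sketched in a few lines of Python. All names, numbers, and the toy recommendation logic here are purely illustrative; they are not BlastWave's or any vendor's actual implementation.

```python
# Hypothetical sketch: a cloud model recommends a set point, but nothing
# reaches the control HMI until an operator approves it.

def recommend_setpoint(tank_level_pct, target_pct=70.0):
    """Toy stand-in for an AI model: nudge flow toward a target tank level."""
    error = target_pct - tank_level_pct
    return round(50.0 + 0.5 * error, 1)  # base flow plus a proportional nudge

def apply_if_approved(recommended, operator_approves, current_setpoint):
    """The set point changes only when the human in the loop signs off."""
    return recommended if operator_approves else current_setpoint

recommendation = recommend_setpoint(tank_level_pct=60.0)
# Operator rejects: the running set point is left untouched.
setpoint = apply_if_approved(recommendation, operator_approves=False,
                             current_setpoint=50.0)
```

The point of the pattern is that the closed-loop write is gated on a human decision: the model's output is advice, not a command.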

Andrew Ginter
So, Nate, let me jump in and give a bit of context here. AI and cloud-based systems, in my opinion, are the future of industrial automation in pretty much everything.

The question is not if, the question is when, because different kinds of cloud systems are going to be used in different kinds of industries at different times, with different intensities. So I care enormously about this topic, because I am writing my fourth book. The working subtitle of the book, possibly the title of the book, is CIE for a Safety Critical Cloud.

When you have cloud systems controlling potentially dangerous physical processes, how do you do that? There are designs that work. And when I had Tom on, I was keen to learn from him. When I write these books, I try not to make up solutions myself.

I tend to get them wrong when I do that. I try to learn from experts like Tom, gather up the best knowledge in the industry, and try to package it up in a digestible format.

So yeah, the cloud is the future, and when we recorded this, I was keen to learn from Tom about what that future looks like.

Nathaniel Nelson
And I know we're about to get right back into the interview, and what I'm about to say actually has nothing to do with what you just said. But before we go: a few times now you guys have mentioned the terms upstream, downstream, and midstream, and I just want to make sure I'm clear on those before we continue.

Andrew Ginter
Sure. This is standard oil and gas terminology. People say, oh, oil and gas, as if it were one industry. It's not. Really, there are three industries involved, and each of these sub-industries has a lot of different kinds of facilities. The stream is generally considered to be the pipeline.

So upstream is producing stuff to feed into midstream, the pipeline. And downstream is taking stuff out of the pipeline for refining and such. So, next level of detail: what's involved in upstream? Exploration is considered part of upstream.

Initial drilling is part of upstream. Offshore platforms are part of upstream. The onshore pump jacks are part of upstream.

The whole infrastructure, building roads, is part of the upstream process. Midstream is pipelines and tank farms. And in the natural gas space, you need to do an initial separation and discard waste from the product. You might even need this in liquids: do an initial filter, take water out of the oil, and pump the dirty water back down into the well as waste, or take carbon dioxide out of the natural gas. So there are initial processing facilities that process stuff before sending it into the pipeline. There are tank farms where the pipelines store stuff in the interim. There are liquefied natural gas ports. There are oil ports. There are oil tankers. This is all part of midstream, the process of moving stuff from place to place and, to a degree, storing it while you're moving it.

And then downstream is everything you do after it comes out of the pipeline. So there's refining, turning it into diesel fuel and jet fuel. There's the finished processing on natural gas, taking out all of the natural gas liquids, making it basically pure methane with not much else.

There's even stuff like trucking: trucking gasoline from the pipeline to the gas stations is considered part of downstream. Midstream kind of rears its head again, because you might have the concept of a gasoline pipeline. So you've got the oil pipeline bringing the crude oil to the refinery, and then you hit midstream again, taking the finished product, gasoline, and sending it to consumers. Then you've got the trucks, and you've got the gas stations.

Each of these upstream, midstream, and downstream sub-industries has many components. I've lost it now, but I saw a list once of all the different kinds of things that can be in midstream.

I counted, and it was 27 kinds of things. So it's a complicated industry. But very loosely, upstream produces, midstream transports, and downstream consumes, in a sense: it refines and produces the goods that we actually consume.

Andrew Ginter
So that's interesting. I mean, human in the loop: I've heard that described as open loop in power plants, which I'm more familiar with. You monitor the turbines.

The AI in the cloud comes back and sends you a text message and says, you should really service the turbine in generating unit number three sometime in the next four weeks. It goes into my eyes, goes into my brain, I go and double-check, with my fingers I type on things, and I say, I think they're right.

And I schedule the service. That's open loop. And yeah, it gets scary when you start doing closed loop.

Tom Sego
Yeah. Yeah. And I would say that one of the key things, if you look at some analogous systems that have actually gone from open loop, human in the loop if you will, to closed loop: I'll give two examples. One would be autopilot on planes, and another would be self-driving cars.

And in both of those cases, you don’t just switch from open loop to closed loop. No, you do an extensive amount of testing and validation.

And you also, in many cases, build redundant systems that allow an additional level of supervisory control on top of your normal process control loops.

And so an example I had heard about was a company that was looking at tank level measurements, and at an AI model that would analyze the input feeds to that tank. And it would pull data from third parties, looking at the truck routes for the tankers that were pulling oil from that tank.

And so you could actually synthesize that data. Now, you would have to put in place a lot of, I'll call it ancillary systems and ancillary testing, to make that safe enough, like an autopilot on a car.

Because theoretically now, with all that supporting testing, autopilot on a car is supposed to be safer than humans.

And with people on their phones, like I see them these days, I think that’s become an increasingly low bar.

Andrew Ginter
Fascinating stuff. The future of automation, I'm convinced. But if we could come back to the mundane: you talked about phishing, and you talked about CVEs and exploiting vulnerabilities.

We're talking about protecting these assets in upstream and midstream oil and gas. Can you bring us back to cybersecurity? How does this big picture fit with what you folks do and what you're focused on, cybersecurity-wise?

Tom Sego
Absolutely. So one of the things that's interesting is, I love talking to customers, and I try to spend at least 50% of my time actually listening more than talking, understanding what their challenges are and how we can solve them.

And in the case of oil and gas, there were three customers who came to us and told us the identical story, and they became our largest customers.

And the story they were telling us was that they had these highly distributed assets all over very wide geographic areas. And they had spotty cellular, with backup satellite, to enable the connectivity they need. They need eyes and ears in the field, because it would be cost-prohibitive to get in a truck and drive out there to monitor things every few hours.

So the challenge they brought to us was the security team didn’t like the operations team having this insecure connectivity to these remote areas.

And so the security team said, you need to do something about that. And that’s where BlastWave came in. And we said, we can actually use our software-defined networking solution to cloak those assets so they’re undiscoverable to adversaries.

but also segment them, so that if malware were introduced in one area, it would not spread to others. And then finally, you would have the ability to get secure remote access.

And one of the coolest parts about this is that it's not a bump-in-the-wire kind of solution. It's a solution that allows routing and switching between groups of devices and users.

So it cuts across firewalls as if they don’t exist. It doesn’t route traffic based on source and destination. It routes it based on identity.

And this is something I think is very unique to us, and something customers absolutely love. It has also enabled us to address a benefit we hadn't even thought about: when oil and gas companies acquire other oil and gas companies, one of the first things they face is the need to re-IP the architecture.

Because oftentimes in the IP space there are overlapping addresses. And that can be problematic. It can take a lot of time.

It can take a lot of money. And that's another solution that we've been able to deliver almost by accident. We had one oil and gas company that acquired a $30 billion acquisition target.

That's a big company to acquire. And they were able to protect it with BlastShield within three weeks of the acquisition. And they didn't have to re-IP anything.

Again, that's just because of the way we do this network overlay. So there are a lot of cool use cases that we've discovered through the process of listening and talking to customers.

Andrew Ginter
Cool. So you've said the phrase SD-WAN, software-defined wide area network. I have never figured out what an SD-WAN is. I mean, I've worked with firewalls for 20 years.

I've done a lot of different kinds of networking, though not hugely, and I never worked for a telco. But can you work with me? What is an SD-WAN? What is your SD-WAN? How does one of these things actually work? What does it do?

Tom Sego
Yeah. Well, first of all, I said SDN, not SD-WAN. I said software-defined networking, which is a principle, not SD-WAN, which is an architecture.

What's the best way for me to think about this? Keep in mind, I'm a chemical engineer, not a software engineer. That means it may take me longer to understand these concepts, but when I finally do, I can probably explain them to people.

The way I've learned this is that we abstract the policy from the network infrastructure, such that you can have a group of devices, or a device itself, that essentially associates with an IP address that's an overlay address, much like network address translation.

All right, so you have an original IP address and a translated IP address, and the software-defined network then uses the overlay addresses to communicate with each other and to establish the most efficient route,

because performance is very important in OT environments, unlike IT environments. And this allows us to optimize the path for any given packet, which is also very cool. So that’s one of the elements that I think is important in software-defined networking.

The other thing is that it creates the illusion of a point-to-point connection between two different devices or two different groups.

And that's part of the abstraction. You don't have to set the path, which is what firewalls do, looking at the routing: how you go from this firewall to that firewall, from this port to that port. Instead, you abstract that to, I want to go from this centrifuge to that control room.

It doesn't matter if the infrastructure changes, and this is a very powerful benefit of software-defined networking. Because if you're just looking at the device you want to protect and the user who wants to connect to that protected device, then as the environment evolves, and it absolutely will, you don't get put in the penalty box like you would in a firewall situation, where you could get firewall rule conflicts.

And one thing to think about, Andrew: when you think about the breaches that occur, about 100 percent of those breaches already have firewalls.

And so that means the firewall didn't work properly, which is usually the result of a firewall rule problem, or the environment has evolved in such a way that it's no longer protected. There's a hole.

And of course, we all know that adversaries just need to be right once. Whereas us defenders, we’ve got to be right all the time, which is very tough unless you’re my wife.
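A rough way to picture the identity-based routing Tom describes: the policy names devices and users rather than source and destination IPs, the overlay assigns its own addresses, and anything without a matching rule is simply unreachable. This is a toy sketch with made-up identities and addresses, not BlastWave's actual mechanism.

```python
# Toy sketch of identity-based overlay routing: policy is written in terms
# of identities, and overlay IPs are assigned per identity (made-up values).

overlay_addresses = {
    "centrifuge-7":  "100.64.0.7",
    "control-room":  "100.64.0.1",
    "vendor-laptop": "100.64.0.99",
}

# Allowed (source identity, destination identity) pairs.
policy = {("control-room", "centrifuge-7")}

def route(src_identity, dst_identity):
    """Return an overlay (src, dst) address pair if policy allows it.
    With no matching rule the destination stays cloaked: no route at all."""
    if (src_identity, dst_identity) not in policy:
        return None
    return overlay_addresses[src_identity], overlay_addresses[dst_identity]
```

Because the lookup keys are identities, the underlying network can change (new firewalls, new subnets) without the policy itself having to change.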

Andrew Ginter
There you go.

Andrew Ginter
So Nate, let me jump in here. As I told Tom, I've wondered about this space of software-defined networking and wide area networking for some time, and I'm beginning to wrap my head around it.

He gave an example. You might imagine that the internet, local area networks, and wide area networks were designed so that devices have internet protocol addresses and talk to each other, and routers move messages from one network to another so they get from the source to the destination.

Why is any of this complicated? Why do we need any more than that? One example that Tom gave was acquisitions. There are internet addresses, the 10-dot series, 2^24 addresses, that are private addresses.

Private businesses can assign them to assets on their private networks and never show those addresses to the public internet. That's fine.

There's another set: 192.168 is a 16-bit address range that everyone uses. So you might say, so what? Company A uses, let's say, 10.0.1 through 10.0.20.

They've got a lot of assets, so they use up a bunch of the address space. And then they buy company B, which has used the same addresses, because they're private addresses; you don't have to register publicly that you're using them.

And now all of the equipment has the same IP addresses. For each IP address, there are two pieces of equipment in the network. How do you route messages between these subnetworks, between these assets?

This is the problem of renumbering when you acquire a business. Often you have to renumber, and it's a pain in the butt on IT networks.

It can shut you down until you've finished and tested the renumbering, and on OT networks nobody wants to shut down. So is there a piece of technology for this? I mean, the textbook technology is network address translation, part of most firewalls.

It lets you hide some private addresses and assign a different address to that set of private addresses. But you've got to set up a whole bunch of firewall rules. You can do that manually, painfully, but it gets worse than that.

I mean, I was talking to Tom after the recording, and he gave me an example that I didn't capture on the recording. He said, Andrew, they're working with an airport, and the airport's building a new wing.

I mean, this is common; airports expand. Let's say there are 27 gates in the new wing. Every gate has one of those machines, those ramps that snuggle up to the aircraft; the door opens, people come out and step onto this device, I forget what the name of it is, that's moved up to the aircraft, and then they walk into the airport building.

Every one of these devices has automation, has computers.

Every one of these devices, when you buy it from the manufacturer, the manufacturer assigns the same private addresses to every one of their products. So now you've got 27 of these ramps in the new wing, and every batch of 20 computers or devices built into a ramp has the same IP addresses.

How do you route this stuff? Again, you can put firewalls in place, but now you need a firewall in every ramp. You need technology. And it gets more complicated than that.
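To make the NAT idea concrete, here is a minimal sketch in Python of how overlapping private subnets can be mapped to unique translated prefixes. The site names, the subnets, and the one-octet host convention are all invented for illustration; real firewalls do this in their rule tables, not in application code.

```python
import ipaddress

# Hypothetical example: two acquired sites both use 192.168.1.0/24 internally.
# A per-site NAT prefix gives every device a unique "translated" address that
# the rest of the network can route to.
SITE_NAT_PREFIX = {
    "site_a": ipaddress.ip_network("10.101.1.0/24"),
    "site_b": ipaddress.ip_network("10.102.1.0/24"),
}

def translate(site: str, private_ip: str) -> str:
    """Map a (possibly duplicated) private address to a site-unique address
    by keeping the host octet and swapping in the site's NAT prefix."""
    host = int(ipaddress.ip_address(private_ip)) & 0xFF  # host octet of a /24
    nat_net = SITE_NAT_PREFIX[site]
    return str(nat_net.network_address + host)

# The same private address now routes to two different machines:
print(translate("site_a", "192.168.1.10"))  # 10.101.1.10
print(translate("site_b", "192.168.1.10"))  # 10.102.1.10
```

The point is only that each site needs its own translation, and somebody, or something, has to manage all those mappings as sites are acquired.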

Andrew Ginter
For example, many years ago, I worked with a bunch of pipelines. I remember one pipeline, a thousand kilometers long, pumping stations, compressor stations, all the way down the pipeline. Communication was important.

You have to communicate with these stations or you have to shut down the pipeline. It's illegal to operate a pipeline in that jurisdiction unless there's human supervision.

And so there was a fiber laid along the right of way for the pipeline. And from time to time, some fool would run a backhoe through it.

So you'd need backup communications. I kid you not, this pipeline had something like seven layers of backup communication. There were satellites, there were DSL modems to the local internet service provider.

There were cable modems where there was a local internet service provider. There were... I think this was before the era of cell phones.

There were analog modems. We're talking 56 kilobit, 100 kilobit per second modems that you could route internet protocol down, very slowly, in an emergency.

And they had built their own by hand. They had rolled their own, what today I think would be called a software-defined wide area network, where the task of that component was to say: I need to send an internet protocol message from the SCADA system to a device 500 kilometers away.

What infrastructure is up? What infrastructure is dead? If a piece of the communications infrastructure has failed, then activate one of the backups and change all the routes, change all the firewall rules, so that

all of the messages that have to get from A to B can get from A to B. It seemed to me ridiculously complicated, but in hindsight, it sounds like the same kind of need that modern software-defined wide area networks address.

They address security needs as well as just the basics of getting the messages from one place to another when the underlying infrastructure changes from moment to moment.
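The core of the failover logic Andrew describes, pick the most-preferred communications link that is currently alive, can be sketched in a few lines. The link names and the preference order here are assumptions for illustration, not details of that pipeline's actual system.

```python
# Backup links ordered by preference (fastest/cheapest first). A real
# SD-WAN would also re-point routes and firewall rules after switching.
LINKS = ["fiber", "cable_modem", "dsl", "satellite", "analog_modem"]

def pick_path(link_up):
    """Return the most-preferred link that is alive, or None if all are down."""
    for link in LINKS:
        if link_up.get(link, False):
            return link
    return None

# The fiber got backhoed, the cable modem is down: fall back to DSL.
status = {"fiber": False, "cable_modem": False, "dsl": True, "satellite": True}
print(pick_path(status))  # dsl
```

The hard part in practice is not this selection loop, it is detecting link health reliably and rewriting all the routes and rules consistently when the answer changes.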

Andrew Ginter
So that kind of makes sense. When I think of a wide area network, I think of routing. So there's a routing element. You've got multiple paths. The system sort of auto-heals and figures out the best paths, or presumably the cheapest paths.

But you've also talked about users and security. How does this routing concept work with security?

How is security part of this? You've also mentioned firewalls. Can you dig a little deeper?

Tom Sego
Yeah. Well, I think we are in a way disrupting firewalls that are used for lots of industrial applications.

There are great uses of firewalls. They're a fantastic tool, but it's kind of been used like, if you have a hammer, all the world looks like a nail. And especially, again, I'll talk about these remote oil and gas locations where you may only have five or 10 devices.

And so the idea of having a firewall to segment that is ridiculous. The expense would be prohibitive. So that's one of the reasons it's so cool that we can scale dramatically, from protecting five devices at a very remote well site to 2,000 devices with a single gateway.

So there's a lot of flexibility that we have that firewalls can't deliver. And when you look at a comparison of a project that involves a firewall as a solution versus BlastShield, we take one tenth the time and cost one fourth as much.

We can deliver this with half the administrative lift. It's much easier to deploy as well. And it actually works. So there are a lot of benefits that we bring over a firewall kind of solution.

Andrew Ginter
Okay, so I understand these are powerful benefits, but can we come back to the technology? Can you tell us what this stuff looks like? I mean, you said it's not a bump in the wire.

Physically, what does it look like? Is it a DIN rail box at each of these sites? Is it a DIN rail box on a central tower? Is it something in the cloud? Can you talk about what it is that is solving these problems?

Tom Sego
Sure. So there are basically five components to our platform. The first two create the authentication handshake. One is a client that runs locally on your HMI or on your machine.

And then you also typically have a mobile application that provides MFA without passwords. And that was patterned after Apple Pay.

So again, I spent a decade at Apple. And so the idea was, let's try to use some of that technology to provide stronger authentication. The other thing that we have is a gateway.

And the gateway is a software appliance. It can be deployed on x86 bare metal. It can be deployed in containers. It can be deployed on Kubernetes clusters.

It can be deployed in the cloud: AWS, GCP, Azure. It's very flexible, and it can be operated both in passive mode and active mode, so in the traffic path or outside the traffic path.

We also have an agent that can run locally on a machine, which most people know what agents are. And then finally, there’s an orchestrator that is used to drag and drop devices and people into groups and then establish policies between those groups.

So that's a little bit about the way the technology is set up. And one of the things that we found is that you can have people who are, I'll say, less sophisticated than many CCNA-trained professionals.

So they don’t even need to know how to use command line to deploy our solution. So it’s relatively simple. We have an example where one person is managing 22,000 devices.

So again, that provides a benefit to them in terms of ongoing OPEX reduction. So that's a little bit about the way the technology works and the way these components fit together. Does that answer your question, Andrew?

Andrew Ginter
That's close. I mean, what you've described is sort of the pieces of the puzzle. But I'm still a little weak on how they work together. I mean, again, we've used the word routing a couple of times.

To me, there are two ways to do routing. You can either take the messages into one of your components, I'm not sure which one, and figure out where they belong and send them on their way yourself. You can be a router.

Or, and I understand some software WANs can do this, they reach out to routers, like firewalls and just routers and who knows what else that can route messages.

And they send commands to those devices when things need to be routed differently. Is one of these models what you use? How do you guys do the routing?

Tom Sego
Yeah, so let me talk about how these pieces all fit together. So the software appliance that is the gateway sits upstream of the switch and usually downstream of the firewall.

And what it often will do is provide what we call layer two isolation. What that is: we can essentially turn a 48-port switch into 48 VLANs, so that each one of those is its own encrypted unit that can't see its neighbors and can't talk to its neighbors unless the policy allows that to happen.

And so that level of very granular control is something we can deliver because of the way the gateway controls and manages the routing that you're discussing.

Now, there are two other components I didn't really talk that much about. One was the authenticator, and the second was the client. And the client is different than the agent. What the client does essentially is a challenge-response with either the SSO, the FIDO2-compliant key, or the mobile authenticator.

And so what it'll do is essentially produce a QR code that the mobile application would scan, and then apply your Face ID, and then you would be into the system, but not authorized or permitted to see anything unless the policy had already allowed it.

So that's the way we manage both the authentication and the authorization. And that's also the way we manage routing of traffic between devices, gateways, and the groups that those devices are encapsulated in.
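The general shape of a passwordless challenge-response, along the lines Tom describes, can be sketched as follows. The details are my assumption of how such a scheme typically works (a random nonce, a keyed proof of possession, no password on the wire), not the vendor's actual protocol.

```python
import hmac, hashlib, os, secrets

# Provisioned shared secret, e.g. held by the mobile authenticator after
# enrollment. Real systems would more likely use asymmetric keys (FIDO2).
DEVICE_KEY = os.urandom(32)

def make_challenge():
    """Server side: a fresh random nonce (conceptually, the QR code)."""
    return secrets.token_bytes(16)

def respond(key, challenge):
    """Authenticator side: prove possession of the key over the nonce."""
    return hmac.new(key, challenge, hashlib.sha256).digest()

def verify(key, challenge, response):
    """Server side: constant-time check of the proof."""
    return hmac.compare_digest(respond(key, challenge), response)

nonce = make_challenge()
proof = respond(DEVICE_KEY, nonce)
print(verify(DEVICE_KEY, nonce, proof))      # True
print(verify(os.urandom(32), nonce, proof))  # False, wrong key
```

Because the nonce changes every time, a captured response is useless for replay, and there is no password for a phishing page to harvest.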

Nathaniel Nelson
So in his answer there, Tom was trying to describe things, but admittedly I was getting a little bit mixed up, because there were certain things that were upstream from other things and downstream from other things, and layer two and switches. Can you, Andrew, just help simplify everything we're talking about here?

Andrew Ginter
Yeah, sure. So in my understanding, they have a few different kinds of components. And I might have got this wrong. But what I got out of it was, imagine...

You know, firewalls can do network address translation. They can say, I've got a bunch of addresses here; I'm going to show a different address to the world. But managing them at scale, with tens of thousands of devices, can be a real challenge, especially if each firewall is only managing a handful of devices. That's a ridiculous number of firewalls to manage.

So what Tom has got, I believe, is, I think he called it a gateway device. It's something that sits between, let's say, a small network of five to 10 devices and the infrastructure.

And you can assign whatever IP address you need to that gateway. It might, in fact, have two addresses, one on the infrastructure side and one on the device side.

So it has a device address that is compatible with whatever stupid little network of five local, always-reused ramp IP addresses, the airport ramp addresses. It's compatible with that bit of address space.

It talks to those five devices. And when those devices send it messages, it forwards those messages into the infrastructure, and it figures out the addressing. It does encryption.

If you've got more conventional Windows or Linux communications, you can put his software on those devices. That software will do the crypto; the software will connect natively into the infrastructure and sort it all out.

And then, those pieces kind of make sense, but the thing of beauty, what I heard was, they've got a management system which says: okay, you have 20,000 devices.

Half of them have exactly the same IP address. That doesn't matter. This device over here, in this building, in this country, can talk to that device over there.

It’s allowed. But when that device wants to talk to Andrew’s laptop, because I’m a a maintenance technician, Andrew has to provide two-factor authentication.

So basically, you stop caring what IP addresses these devices have. You're not configuring routing rules. You're configuring permissions in a sort of high-level, user-friendly permission manager.

And all of the routing nonsense and the encryption nonsense is figured out for you under the hood. So you can think about your big picture of devices that need to talk to each other, and who should be allowed to talk to each other, instead of how do I route this when the IP addresses conflict? You don't have to ask that question anymore.
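That permission model, groups of devices plus allow rules between groups, with IP addresses nowhere in sight, can be sketched roughly like this. All names and rules here are invented for illustration; this is my reading of the model, not vendor code.

```python
# Devices are identified by name and put in groups; the policy table says
# what one group may do to another. IP addresses never appear in the policy.
GROUPS = {
    "plc_line_1": {"plc-101", "plc-102"},
    "maintenance_laptops": {"andrew-laptop"},
}
POLICY = {
    ("maintenance_laptops", "plc_line_1"): "allow_with_mfa",
    ("plc_line_1", "plc_line_1"): "allow",
}

def group_of(device):
    for group, members in GROUPS.items():
        if device in members:
            return group
    return None

def decision(src, dst):
    """Look up what (if anything) src may do to dst, by group membership.
    Anything without an explicit rule is denied."""
    return POLICY.get((group_of(src), group_of(dst)), "deny")

print(decision("andrew-laptop", "plc-101"))  # allow_with_mfa
print(decision("plc-101", "andrew-laptop"))  # deny
```

The routing and encryption layer underneath then has the job of making "allow" actually work, whatever addresses the two endpoints happen to have.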

Andrew Ginter
Cool. So that starts to make sense. I mean, you've been doing this since 2017, so eight years. Can you give us some examples to help us understand how this stuff works?

Tom Sego
Well, having run this for almost eight years now, the journey was not a straight line. We originally started out, believe it or not, Andrew, as a hardware company.

And the thesis was to build an unhackable stack. So this sounds naive, and it was. We were going to start with a chip, a new chip that we had a partner developing, that would have an onboard neural net.

It would create 17 key pairs, and it would encrypt the bootloader in the factory and burn a fuse so it couldn't be reset. And that was the foundation of our product. And then we were gonna write our own kernel, write our own operating system. And this was from someone who helped write the OS X kernel.

We were gonna write that in such a way that it used byte codes and would not be exposed to buffer overflows and other issues. We were even going to use formal methods to prove the kernel.

And then we'd have our networking layer, which is what our company is now. And then we'd have our own SDK to manage applications, which would also use formal methods. And then finally, we would have the authentication layer that we also have today. So we went from five very ambitious levels of tech stack to two. And then we have other people doing some of those other things.

I think the market really wasn't ready for something that complex, maybe that secure, on the higher end of the security spectrum, if you will. The market just really wasn't willing to pay for that. And so we simplified, we pivoted. And then, by the way, once we did come out with our hardware product in February of 2020, there was another global issue that hit everyone, which caused us to pivot to a software-as-a-service model, which then required some more development and everything else. So we didn't really launch our product until late in 2021, and started getting our first customers very shortly thereafter.

And since then, we've grown very rapidly, to the point where this most recent year we quadrupled our revenue and tripled our customer count.

So it’s been an exciting ride.

So let me give you an example. One customer, again an oil and gas customer, was faced with a challenge where they were going to have to build their own cell towers, essentially become their own wireless ISP. And this is not unique to this oil and gas customer.

There are many that are facing that. And I don't know if you or your audience knows, but it's about a quarter million dollars to build a cell tower. And you have to have many of them. So in a relative sense, we are not just delivering security to this customer, we're also helping save them a ton of money.

So instead of 10 to 20 million dollars, they're spending a fraction of that, which is also very interesting. When they did this acquisition, there was another company that did an acquisition.

They wanted to sell off certain components too. So they wanted to sell off the saltwater rejuvenation or... I don't know exactly what the right word is, but they wanted to offload this asset.

And one of the things that they were able to do very quickly, because all of our segmentation, all of our granularity and access, is done in software:

We can essentially just take that new entity, put their users in a group, put the devices that they control into another group, and they would have complete control of just their newly acquired saltwater assets, and no visibility, no access at all, to the oil and gas parent company.

So that was another great example of using this in a creative way.

Andrew Ginter
So you've mentioned acquisitions a few times. I mean, I live in Calgary. This is oil country. I hear about these acquisitions all the time. Is this sort of part of the genesis of your organization? How often do these things happen? How complicated, technology-wise, are these mergers and acquisitions that happen all the time?

Tom Sego
Well, they happen very frequently, especially, again, in oil and gas. In the case of oil and gas, it's because one asset owner has a certain tech stack that can only profitably make money up to a point.

And then they can sell that asset to someone else who has a richer skillset that can extract more profit, more money, more revenue from that same resource.

And I would say an example that we've also seen, where people are pleasantly surprised about BlastShield: there's one oil and gas customer that acquired a company.

And their biggest fear was they were going to have to do an IP space assessment and figure out whether there were overlapping IP addresses. Instead of having to do that, which they didn't have to do at all, they just deployed our software overlay and immediately were able to segment each one of these devices using software, regardless of whether the underlay IP addresses were the same.

That saved a lot of money in truck rolls. That saved a lot of money and hassle and headaches in managing that IP space, which they were very happy about. And the way they described it, actually, they described it two ways to me.

One way was: my God, this is like a Swiss Army knife. And the other guy said: this is like duct tape. It's like networking duct tape. It serves lots of different purposes and is very versatile, basically delivering the network they want with the network they have.

Andrew Ginter
So let me just emphasize: Tom talked about changing IP addresses a few times, and I talked about it a few times. I've actually, from time to time, had to change IP addresses on stuff. Not so much in an industrial setting, just internet protocol networks, just business infrastructure.

And here's the tricky bit. It's very hard to do that remotely.

You know, imagine that you want to remote into a remote substation. There's nobody there, but there are 100 devices. And you have to log into each device with, I don't know, SSH or remote desktop.

And you’ve got to change the IP address on the device. And at some point, you’ve got to tell the firewall that it’s talking to a different network of IP addresses.

And if you do that in the wrong order, if you, let's say, hit the firewall first, now you can't send messages to any of the devices, because the firewall doesn't know how to route to those devices anymore. They have different IP addresses. So you have to undo that. Now you go into the device, say you SSH into a Linux box, you give the command-line command to change the IP address, and it stops talking to you, because you're connected to the old IP address. You've got to try and connect to the new IP address.

Only the firewall won't connect you to the new IP address, because the firewall hasn't been updated. So now you have to sort of blindly change all these addresses. Then you change the firewall, and then you see if you can still talk to these devices, and three of them have gone missing.

Why? Did I fumble finger the IP address? Is there some other problem? It’s just really hard to do this remotely. And so, again, if you have 700 sites, you’ve got to put people in trucks and drive out to these wretched sites to make these changes.

If there’s a way to avoid that, you can save a lot of money. So, yeah, I kind of get that it’s really useful to avoid doing that.
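The ordering trap Andrew walks through can be shown with a toy model: a message gets through only if the firewall has a route to the device's current address, so whichever side you update first, there is a window where the device is unreachable. This is purely illustrative, not a model of any particular firewall.

```python
def reachable(device_ip, firewall_routes):
    """A message gets through only if the firewall has a route to the
    device's current address."""
    return device_ip in firewall_routes

old, new = "192.168.1.10", "10.0.1.10"

# Order 1: update the firewall first. The device is still on the old
# address, so it is unreachable until it moves too.
assert not reachable(old, {new})

# Order 2: update the device first. The firewall still routes only the
# old address, so the device is just as unreachable.
assert not reachable(new, {old})
```

Either way there is a window with no connectivity, which is why doing this remotely, one box at a time, is so error-prone, and why an overlay that decouples device identity from the underlay address avoids the problem entirely.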

Andrew Ginter
So this is starting to come together for me. I mean, you can do the network address management in your, what did you call them?

The gateways.

Tom Sego
Gateway, yeah.

Andrew Ginter
And that gives you an enormous amount of flexibility. But it's the client that does the crypto. Or maybe it's the agent. I've lost track.

Tom Sego
The client is used to authenticate.

Andrew Ginter
Right.

Tom Sego
The agent typically runs on a server in the cloud, those kinds of things, maybe a historian type of use case. The gateway is the workhorse, because so much of OT infrastructure cannot run an agent.

And so because it can’t run an agent, you need to have a gateway that can do the encryption and decryption of traffic. Now, when you think about the way a lot of these processes are controlled, they use PLCs.

And the PLCs, we don’t encrypt the traffic below the switch.

We don’t interfere with that. However, with the traffic that is upstream of the switch, all of that’s encrypted wherever it may go.

So I think that's the way it's done.

Andrew Ginter
One other technical question: you mentioned CVEs and exploits and vulnerabilities earlier.

I mean, I'm familiar with, let's say, firewalls that say they do stuff like virtual patching, meaning if there's a vulnerability in a PLC, and the firewall sees an exploit for that vulnerability come through, it will drop the exploit and prevent it from reaching the device. Is that the kind of thing you do when you talk about protecting from exploits, or are you doing something else?

Tom Sego
We're definitely doing something else. And I think the approach that we take is, we use this network cloaking concept, where you have to authenticate first before you can see anything.

There's no management portal. So there are zero exposed web services. If you run a network scan on a factory that's protected by BlastShield, you're going to come up with nothing.

And what that means is, if there are CVEs, and I guarantee you there will be, there will also be zero-days, which may not be on anyone's list.

And so in both of those cases, as well as with ancient devices that are never going to be patched, you've got a way to deal with these unpatchable systems, because they're unaddressable. And so it's going to be very difficult to exploit those.

Andrew Ginter
Cool. So, I understand you're heavy into oil and gas; all of the examples we've been talking about are oil and gas. But I'm guessing you are active in other industries as well. Given your personal background, are you active in other industries? Can you give me some examples of what's going on there?

Tom Sego
Yeah, absolutely. I think manufacturing is a fantastic industry for us. They are oftentimes a little bit early adopters as it pertains to machine learning, predictive maintenance, advanced analytics, those kinds of things.

And we had a manufacturing customer, in fact, who was hacked, and many manufacturers do get hacked from time to time. They were hacked, and the board asked the CISO to have an assessment done to figure out what their risk posture was.

And before they could complete that assessment, they were hacked again. And so this really lit a fire under the entire security team.

And they basically came up with a list of findings, and they started implementing those findings. And they were testing various kinds of solutions.

And in one facility, they had 10 different manufacturing lines. And they had deployed BlastShield on one of those manufacturing lines.

They got hacked a third time. Now, this time, though, nine of the 10 lines shut down, whereas the line that was protected by BlastShield continued to run.

And what was really interesting about that is how quickly the organization responded. The CFO of this company responded and elevated that to the parent private equity company.

And now that’s leading to us becoming the default standard for not just that one company and all of its 17 plants, but also the parent private equity company and all the other manufacturing facilities that they’re trying to manage. Okay.

Andrew Ginter
Cool. I'm delighted to hear it. The world needs more cybersecurity.

I mean, I've learned a lot. Thank you so much for joining us. Before we let you go, can we ask you to sum up? What are the key concepts we should be taking away from our conversation here?

Tom Sego
Sure. So I think the company, as it was founded, was trying to protect critical infrastructure based on first principles. And the first principle was to try to eliminate entire classes of threats if possible.

And so our solution tries to eliminate phishing and credential theft. So we have a passwordless MFA feature. We also allow you to segment using software.

We cloak your network so it's undiscoverable. 35% of all CVEs discovered last year are what are called forever-day vulnerabilities. And so that network cloaking capability means that they're not exploitable.

And then finally, we also have a secure remote access component in there. So we're trying to deliver a lot of value to our oil and gas and manufacturing customers. When you couple this with a continuous monitoring and visibility tool, like a Nozomi, Dragos, Darktrace, Armis, SCADAfence, Industrial Defender, or Claroty, when you combine those two, you get a ton of protection at a very low price.

Nathaniel Nelson
So that just about does it, Andrew, for your interview with Tom. Do you have any final words to take this episode out with?

Andrew Ginter
Yeah, I mean, I really liked the duct tape analogy from Tom's customer. You have lots of little networks, sometimes thousands of devices.

Half of them have literally the same IP address. Half of these tiny little subnetworks of five devices, on airport ramps or on well pads,

networks that you've acquired by acquiring an oil field, they all have the same IP address range. None of it's encrypted. It's just a mess.

And this is something that lets you patch it all together. You need crypto. You need authentication, and passwordless is good: use certificates instead, they're harder to phish. You need to hide all of these repeated subnets with the same IP addresses.

You need a permissions manager, saying A can talk to B.

You need infrastructure underneath the permissions manager to make the messages from A go to B. You need to have some synthetic IP addresses, so that when you set everything up, your SCADA system can talk to an address and a port, probably on the gateway or some piece of the infrastructure, rather than the real address that's repeated a hundred times in your infrastructure.

This just makes a lot of sense. It seems to me there's a bright future for this kind of, again, duct tape: just patch it all together, make it work, and throw some security on top of it. Crypto, authentication, this is all good. I'm impressed.

Nathaniel Nelson
Thank you to Tom Sego for speaking with you about all of that, Andrew. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It's always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thank you to everybody out there that’s listening.


The post Network Duct Tape – Episode 141 appeared first on Waterfall Security Solutions.

]]>
Credibility, not Likelihood – Episode 140 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/credibility-not-likelihood-episode-140/ Wed, 06 Aug 2025 20:52:59 +0000 https://waterfall-security.com/?p=34651 Explore safety, risk, likelihood, credibility, and unhackable cyber defenses in the context of Norwegian offshore platforms.

The post Credibility, not Likelihood – Episode 140 appeared first on Waterfall Security Solutions.

]]>

Credibility, not Likelihood – Episode 140

Safety defines cybersecurity - Kenneth Titlestad of Omny joins us to explore safety, risk, likelihood, credibility, and deterministic / unhackable cyber defenses - a lot of it in the context of Norwegian offshore platforms.

For more episodes, follow us on:

Share this podcast:

“Large scale destructive attacks on big machinery is not something that I would consider a credible attack.” – Kenneth Titlestad

Transcript of Credibility, not Likelihood | Episode 140

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome everyone to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today. Andrew, how are you?

Andrew Ginter
I'm very well, thank you, Nate. Our guest today is Kenneth Titlestad. He is the Chief Commercial Officer at Omny, and he's also the chair of the Norwegian Electrotechnical Committee subgroup working on IEC 62443. This is the Norwegian delegation to the IEC that produces the widely used IEC 62443 standard.

We're going to be talking about credible threats: what should we be planning for, security-wise? And by the way, I had the opportunity to be in Norway, and I visited Kenneth at the Omny head office, where they have a lovely recording studio. So we recorded this face to face, in their studio, in their head office.

Then let’s get right into your conversation with Kenneth.

Andrew Ginter
Hello, Kenneth, and welcome to the podcast. Before we get started, can you tell our listeners a bit about your background, what you've been up to, and the good work that you're doing here at Omny Security?

Kenneth Titlestad
Thank you so much, Andrew, and welcome to Norway and our office. I'm so glad to have you visiting us. So my name is Kenneth Titlestad, and I'm working as Chief Commercial Officer in Omny; I've just started here. I came over from Sopra Steria, where I was heading up OT cybersecurity. I was doing that for six years.

Before that I was working at Equinor, also on OT cybersecurity, so I've been working in the field now for almost 15 years. And for the last five or six years I've been chairman of the Norwegian Electrotechnical Committee group that is handling IEC 62443. So I've been diving deep into cybersecurity now for quite a few years.

And at Omny, we are developing a software platform for handling cybersecurity and security for critical infrastructure. It contains a security knowledge graph and AI that provides actionable insights into security for critical infrastructure. So it's about IT, OT and physical infrastructure.

Andrew Ginter
OK. Thank you for that. Our topic today is credibility. Now, this is talking about risk. A lot of people think risk is boring. A lot of people, when they enter the industrial security space, want to know about attacks. They want to know about the technical bits and bytes. You tell me that you got interested in risk a very long time ago. Can you talk about that? Where did that come from?

Kenneth Titlestad
Absolutely. I’m I’m not sure if I when I when I considered it as as a as a risk or as a as a field of expertise. So when I was just a small boy, actually my dad, he worked as a control room technician offshore in Conoco Phillips or back then it was called Phillips. So when I was only two years or three years old in 1977. He was working at the Palau offshore oil and gas. Before and I don’t remember this of course back then. But it it, uh, it was always a topic around the dinner table at my my home where he talked about how it was working in the oil and gas business. So in 1977 he was on his way out to the platform when the big horrible blowout happened. He was not actually. He hadn’t arrived at the platform, but he was on his way out there. So it it really was a big topic around the dinner table all the time about safety risks involved in oil and gas.

So I was always listening with my my small ears back then being a bit fascinated about this world, I didn’t see the real danger in it, but I I was trying to picture it in my mind what it was to actually work on in these kind of environments.

So I was kind of primed back when I was just a small boy. Later on I was more into computers, so I did a lot of gaming and programming on the Commodore 64, and I started to work in Equinor on the IT side. But I was still fascinated by the core business, oil and gas production and exploration. So when I actually got my first trip offshore, I felt the circle was closed. I saw the big industrial world my dad had been talking about for years, and the risk perspective also kicked in. The first thing you meet when you step on board such a platform is the HSE focus. A lot of focus on HSE.

And it’s there for a reason. I fully got to understand that when I first came on board such a facility. I understood why it’s so important, because it can be really dangerous if you don’t have control over what you’re doing. That’s when I actually saw the big scale of risk as a perspective.

Andrew Ginter
Yeah. Offshore platforms are intense. I’ve never set foot on one myself, but I’ve heard the stories. Quite the environment. Now, we’re talking about industrial cybersecurity, and offshore platforms are intense in terms of physical risk. Can you talk about cyber?

Kenneth Titlestad
It’s an emerging topic. When I was working in Statoil, back when it was still called Statoil, now it’s Equinor, we started to look into that area around 2010, 2011. I still remember the day when people came charging into the meeting room and started talking about the news of Stuxnet. I think we got to hear about it in 2010. I was working on the IT side, responsible for large parts of our Windows infrastructure in the company, and I started to look into what this SCADA thing was, what it is, because I didn’t know about PLCs. I had never seen a PLC. I didn’t know there was actually another kind of digital equipment operating critical infrastructure. So with Stuxnet I started to dive into that landscape of cybersecurity.

Kenneth Titlestad
And as a company, we started a big journey back then on really making OT much more cyber secure. Stuxnet was kind of a kickstart for it.

Nathaniel Nelson
Andrew, it feels like there are certain kinds of seminal cybersecurity incidents in the OT world. We often reference the 2007 Aurora test, maybe Triton and Industroyer. But Stuxnet is that foundational thing that set the timeline for everybody, right?

Andrew Ginter
Indeed. And you know I was active in the space. I was leading the team at Industrial Defender building the world’s first industrial SIEM at the time, so Stuxnet was big news. I did a lot of work on Stuxnet. I had a blog at the time, and I wrote every time I learned something new about it, because somebody had published a report, somebody had published another blog.

I did a little research of my own. I published a paper on how Stuxnet spread, because analysis had been done of the artifact, the malware, but it had been done by IT people at Symantec, I think, and a bunch of others who had analyzed the malware. That’s work I couldn’t do; I’m not a reverse analyst.

But I sat down with Joel Langill. I sat down with Eric Byres, and we investigated the impact that Stuxnet would have in a network. What would happen if you let this thing loose in a network, given our understanding of the Siemens systems? Joel was the expert on the Siemens systems; Eric and I were more expert more generally, firewalls and industrial systems. So we all contributed to this paper and said: here’s what happens if you let Stuxnet loose in an industrial network.

And in hindsight, I have to wonder if we didn’t do more damage than good, because a lot of people learned stuff about Stuxnet, but there was only one outfit that benefited, and that was Iran’s nuclear weapons program. Theirs was the only site in the world that was physically impacted.

That’s why I regret some of the stuff that I published about Stuxnet.

Nathaniel Nelson
Do you recall whether that research got traction, whether it might have made it over there, or is there no way to tell?

Andrew Ginter
I have no way to tell. I do recall a conversation sometime later. Because I’m a Canadian, I work with the Canadian authorities, and I remember a conversation with Canadian intelligence services. At one point, when I figured out that there was only one place in the world physically benefiting from my research, I stopped publishing anything about Stuxnet. And I remember, some time after that, telling Canadian intelligence: I’ve stopped publishing anything about Stuxnet. You don’t have to tell me anything. But in the future, if you ever see me putting out information that’s helping our enemy, tap me on the shoulder, would you, and tell me, “Shut up, Ginter. You’re doing more harm than good,” and I will shut up. So yeah, I look back on Stuxnet with mixed emotions. It was a wake-up call for the industry; a lot of people learned about cybersecurity because of Stuxnet. But who benefited from all that research?

OK, so that’s Stuxnet. A lot of people got started in the OT space because of it; it was the big news years ago.

Andrew Ginter
Can I ask you, let’s talk about industrial security and the work you’ve been doing. Stuxnet is where it got started. Where have you wound up? What are you up to today?

Kenneth Titlestad
Yeah, as you say, it’s been 15 years, and for me I think it’s been a very interesting journey. Back in 2010, when Stuxnet hit the news, I wasn’t immediately diving into OT cybersecurity full time. I was working on the IT side, trying to secure the Windows environment in a large oil and gas company.

But after a while I moved more and more over to OT cybersecurity, and I had my first trip offshore to an oil and gas platform. I think that first trip was in 2013, so actually three years after Stuxnet. I was going out just to do some troubleshooting on a firewall. But more and more I was moving into OT cybersecurity, and eventually I moved over to Sopra Steria, I think it was in 2017. In the end I was working really hard on finding proper solutions for OT cybersecurity: when potential nation states are targeting you, what do you do then? You must have the mindset of assume breach, and these kinds of systems, with the PLCs and all, are really, really vulnerable. What do you do when you are being targeted? So then I started to look into it. I had heard rumors that there could be something that was non-hackable.

So I started investigating unidirectional data diodes and was exposed to Waterfall. That was one of the first examples of where I heard about non-hackable stuff. I also got to hear about Crown Jewels Analysis and Cyber-Informed Engineering; back then it was Consequence-Driven Cyber-Informed Engineering. Those topics really sparked an extra interest for me, because for some attack vectors, some of the risks, I saw a solution that could remove the risk instead of just mitigating it.

Andrew Ginter
So your first sort of foray: everyone was interested in Stuxnet, but you started working on the problem, you said, with a firewall, and to a degree that makes sense. The IT/OT firewall is often the boundary between the engineering discipline on the platform, in the industrial process, and the IT discipline, where information is the asset that needs to be protected. So that boundary is something that both the engineers and the IT folk care about. I’m curious: you got out to the platform, you were tasked with the firewall. What did you find?

Kenneth Titlestad
Yeah, it was actually a long-lasting ticket we had in our system. There was a firewall between IT and OT that was noisy; it was creating a lot of events and alerts on traffic that it shouldn’t have, so I was tasked to go out there and try to troubleshoot it. We absolutely didn’t think it was a cyber attack or any kind of evil intent, and when I got out there I could see that it was just an incorrectly configured firewall rule.

There was nothing dangerous, no cyber attack involved. But I got to thinking of a scenario: if it had actually been a cyber attack, one that created that much noise on a security boundary, a security component sitting on the outskirts of OT, shouldn’t the OT environment do something, shut down or go into a more fail-safe state? So I got interested in the instrumentation behind the security components on the outskirts of OT. That’s a topic I continued to explore for several years, with cyber-informed engineering, non-hackable approaches and unidirectional systems in the back of my mind. At S4 last year I talked about the safety instrumented system, because safety has always been a particular interest of mine. I talked about the cyber-informed safety instrumented system: at some point, when you’re under attack, shouldn’t the big brain in the room actually take an action? An instrumented, automated action, going not necessarily fail-safe only, but failing over to a more safe and secure state?

Andrew Ginter
So that makes sense in theory. If the firewall was saying “help, help, I’m under attack” over and over again, should some action not have taken place on the OT side? But let me ask you this. It was a false positive. It would have shut down the platform, a very expensive platform, unnecessarily. Can we detect cyber attacks reliably enough to prevent this kind of unnecessary shutdown? And if we do shut down whenever there’s a bunch of alarms, is that not a new sort of denial-of-service vulnerability? The bad guys don’t even need to get into OT. They just need to launch a few packets, that firewall generates some alarms, and the platform shuts down without them even bothering to break into OT. Is that really the right way forward?

Kenneth Titlestad
No, I totally agree, it’s not a good approach going forward. But at the same time, I think shutting down one time too many is better than not doing it at all, so we should be willing to overreact and go into a fail-safe state. It could cause unnecessary downtime, and it is a vulnerability on the production side, but I think false negatives, where an attack is actually happening and we don’t see it, are much more dangerous. We need to reduce the false positives, but it’s much more important to reduce the false negatives.

Andrew Ginter
So just listening to the recording here, this is not something I discussed with Kenneth, but we were talking about automatic action when we discover that an attack might be in progress, for example because there are a lot of alarms coming out of the firewall. He agreed with me that shutting down the platform was probably an overreaction, because that introduces a new attack vector: the bad guys just need to send a few packets against the firewall, generate a few alarms, and the whole platform shuts down. I agreed with him that something should be done, but we didn’t really figure out what. Here’s an idea in hindsight: a number of jurisdictions are introducing what they call islanding rules, meaning if IT is compromised, you need to, basically, power off the IT/OT firewall so nothing gets through into OT anymore.

For the duration of the emergency, you have the ability to shut off all communications into OT. This is part of the regulation: you must be able to island. So now you have that capability. I wonder if it isn’t reasonable to trigger islanding automatically when you discover a whole bunch of alarms coming out of anything, because most modern-day attacks are not like Stuxnet, where you let it loose and it does its thing. Most modern-day attacks have remote control from the Internet, and if you island, if you break the connection between IT and OT,

then if there was an attack in the OT network, the bad guys can no longer control it. They can no longer send commands. And this is not new. The term “islanding” is a little bit new, but the concept of an automatic shut-off has been bandied about for many years. Given that the regulators are demanding an islanding capability, maybe engaging it automatically from time to time is not the worst thing that can happen. It increases our security, and the impact on operations is minimal, because you’ve already deployed the ability to island.

You’ve developed the capability of running your OT system independently, so interrupting that communication for a period of hours at a time, while you track things down and say, oh, that was a false alarm, is, I’m guessing, a minimal cost. So there’s an idea.
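The automatic-islanding idea above can be pictured in code. A minimal sketch, with everything hypothetical: the alarm threshold, the time window, and the `engage_islanding` hook are illustrative stand-ins, not any regulator’s rule or any vendor’s actual mechanism.

```python
from collections import deque
import time

class IslandingTrigger:
    """Engage IT/OT islanding when the boundary firewall emits a burst of alarms.

    Islanding severs IT-to-OT communications for the duration of an incident.
    Because the site already practices running OT independently, a false
    trigger costs hours of investigation rather than a platform shutdown.
    """

    def __init__(self, max_alarms=20, window_seconds=60):
        self.max_alarms = max_alarms   # alarm-count threshold (assumed value)
        self.window = window_seconds   # sliding window in seconds (assumed value)
        self.alarm_times = deque()
        self.islanded = False

    def on_firewall_alarm(self, now=None):
        """Record one firewall alarm; island if the rate threshold is crossed."""
        now = time.monotonic() if now is None else now
        self.alarm_times.append(now)
        # Discard alarms that have aged out of the sliding window
        while self.alarm_times and now - self.alarm_times[0] > self.window:
            self.alarm_times.popleft()
        if len(self.alarm_times) >= self.max_alarms and not self.islanded:
            self.islanded = True
            self.engage_islanding()

    def engage_islanding(self):
        # Placeholder: in practice this would drive a physical disconnect
        # or disable the IT/OT firewall interfaces.
        print("Islanding engaged: IT-to-OT communications severed")
```

Note the asymmetry in the design: engaging islanding is automatic and cheap to get wrong, while restoring the connection stays a deliberate human decision made after the investigation concludes.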

Andrew Ginter
OK. Well, let’s come back to our topic here. The topic is credibility, and we’re talking about the risk equation. The typical risk equation is consequence times likelihood. Generally we do it qualitatively, but we wind up with a number coming out of it that lets us compare different kinds of risks, high-frequency versus high-impact risks. Can you talk about that? Where does credibility fit in that equation?

Kenneth Titlestad
I think it fits very well into that equation, because especially when we talk about the likelihood or probability part of it, the left side of the equation, it’s always a very difficult conversation when you try to identify the risk levels or the consequence levels involved. It’s sad to see that a lot of the conversations go astray because we’re not able to put a number on the probability or the likelihood, and I think the conversation gets much more fruitful if we can get rid of that challenge.

Credibility gives us tools in our language to actually talk about the left part of the equation. It’s something more analog, an analog value, and it lets us move towards the consequence-driven approach, where the right side of the equation is the more important thing to talk about, as long as you consider the threat credible.

Andrew Ginter
Well, I have to agree. I’ve argued in my last book that likelihood is flawed at the high end of cyber attacks, not the low end; at the low end, likelihood actually works. At the high end, the outcomes of cyber attacks are not random. If the same ransomware hits a factory twice, and all we’ve done is restore from backup, it took them down the first time; we restore from backup, we make no changes, it hits again, and they’re going to go down the same way. It’s not random.

I argue that at the high end, nation-state targeting is not random either. It’s not that they try for a while and, if they don’t succeed, go try somewhere else. Nation-state threat actors keep targeting the same target until they achieve their mission objective. Once they’ve targeted you, it’s not random. Randomness, to me, doesn’t work at the high end. Credibility makes more sense. Is the threat credible? Is the consequence credible? If this attack comes after us, is it reasonable to believe that the consequence will be realized? Credibility is about what’s reasonable to believe.

I think it makes a lot of sense, but it’s new. I don’t see the word credibility in a lot of standards. Where does this sit? Is this something people are talking about?

Kenneth Titlestad
Yeah, absolutely. In my work with clients, and with the professionals I’ve been working with, we have discussed for some years now the big challenge of the likelihood or probability part of the equation. Without actually following standards or best practices, we’ve seen that we need to skip the discussion of probability or likelihood, talk about the consequence side first, and then revisit likelihood and probability afterwards. But I also see in IEC 62443, especially in 3-2, that it actually talks about consequence-only cyber risk analysis.

So that gives us an opportunity to move away from the discussions of probability. And of course, with the consequence-driven approach and cyber-informed engineering, we start to see more focus on the consequence side, but that leaves out what to do with the likelihood. I think with credibility we get some language-based tools to deal with it, to talk about it in a qualitative manner instead of having to force it into a number.

Andrew Ginter
So that makes sense to me. I have the sense that over the course of time, cyber attacks become more sophisticated, and more sophisticated attacks become credible. Attacks that were dismissed a decade ago as theoretical have actually happened. Do you see that? What do you see coming at us in terms of sophisticated attacks in the near future?

Kenneth Titlestad
I think that’s a really challenging question, looking far into the future, or far into the history, to try to extrapolate what we could expect. With Stuxnet, the attacks against Ukraine, Triton, Colonial Pipeline, we see incidents that have had a really high impact, but there are not very many of them.

But we see that those kinds of capabilities are being explored and put into different tools, so they can be used not only by nation states but also by criminal groups. With that kind of analysis, we can expect more and more sophisticated attacks, carried out by more and more unsophisticated groups. So we should expect an increase in high-impact incidents.

Andrew Ginter
OK, so if we’re not talking likelihood, we’re not talking probability, we’re talking credibility. How do we decide what’s credible? How do we decide what’s reasonable to believe?

Kenneth Titlestad
Yeah, that’s a good question. We need to have some grasp of what is credible and what is not. I’m of the opinion that the credibility part of the equation is a qualitative thing. It’s not a zero or a one; it’s on a kind of sliding scale, not easily defined. But if we try to treat credibility as a zero or a one, what is credible? Things that have actually happened, once or twice or three times, are credible. So the Triton incident, a cyber attack on a safety system, is now a credible attack because it has happened.

And also near misses. Triton was kind of a near miss: they didn’t actually cause the destructive attack, but it could have happened. We have other near misses too, incidents that we should be considering.

Andrew Ginter
So that makes a lot of sense to me, credibility versus likelihood. Credibility sounds like a judgment call, though. How do we decide what’s credible?

Kenneth Titlestad
That’s a good question. I think there are good recommendations in 62443. For instance, 3-2 talks about, like I said, consequence-only analysis as an example of how you can approach the risk equation, but it also talks about the need to focus on worst-case consequences. It talks about essential functions, which basically could be the safety functions. You need to investigate the consequence if those are actually attacked and compromised: what could be the worst-case consequence? You begin there, and once you have identified the worst-case consequences, then you move over to the probability or likelihood dimension.

And then you need to consider all the factors. What are the vulnerabilities involved? What are the safeguards, or what the standard calls compensating countermeasures? You consider the function or the asset as well: if there’s no actual interest in the asset, then the vulnerability may not be interesting to address or analyze either. But you start with the consequence side, then you look at likelihood and probability, informed by the consequence approach.
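The consequence-first ordering described above can be sketched roughly as code. This is only an illustration of the idea, not anything specified by IEC 62443-3-2: the scenario list, the 1-to-5 consequence scale, the threshold, and the yes/no credibility flag are all made-up assumptions.

```python
# Consequence-first screening: rank scenarios by worst-case consequence
# first, and only then apply a credibility check (has this class of
# attack happened, or been a near miss, anywhere?).

CONSEQUENCE_THRESHOLD = 3  # assumed 1-5 scale; screen out low-consequence scenarios

SCENARIOS = [
    # (name, worst_case_consequence, credible_precedent_or_near_miss)
    ("Safety instrumented system compromised", 5, True),   # Triton happened
    ("Ransomware halts production",            4, True),   # happens routinely
    ("Swarm attack bricks millions of cars",   5, False),  # no precedent yet
    ("Corporate web page defaced",             1, True),   # low consequence
]

def shortlist(scenarios):
    """Consequence screen first, then a yes/no credibility check."""
    high_consequence = [s for s in scenarios if s[1] >= CONSEQUENCE_THRESHOLD]
    return [name for name, _, credible in high_consequence if credible]

for name in shortlist(SCENARIOS):
    print("Assess further:", name)
```

The point of the ordering is that the likelihood argument never has to produce a number: low-consequence scenarios drop out before anyone debates probability, and the remaining ones only need a qualitative credibility judgment.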

Andrew Ginter
OK, so let me challenge you on that. I’ve read the CIE Implementation Guide. It says start with the worst-case consequences; it says those words. I’ve not seen those words in 3-2. Are you sure you’re not reading into 3-2?

Kenneth Titlestad
No, I’ve searched for that specific part of 3-2 many times, because I’ve heard others say the same, and it’s actually there. There are real gold nuggets in 3-2: talking about essential functions, specifically saying worst-case consequence, and specifically saying that you can choose to do a consequence-only risk assessment. Those are really important single words and single sentences in 3-2, worth highlighting.

Andrew Ginter
OK. So that makes sense in the abstract. Can you give me some examples of applying these principles? What should we regard as credible?

Kenneth Titlestad
Yeah, interesting question. The thing that comes to mind first is the Triton incident. Before 2017, when it actually happened, we didn’t think it was credible that someone would target a safety system or cause a safety incident with a cyber attack. With Triton we saw the first of its kind, and the threat became obviously credible. And then SolarWinds as well, a very interesting study: with the way they compromised the SolarWinds update mechanism, massive deployment of malware within critical and non-critical infrastructure suddenly became a really credible threat too. And also near misses, of course. We should be informed by things happening out there and coming up in the news, near misses that can tell us what a credible threat is.

Another kind of scenario that speaks to credibility, not a near miss exactly, is where we have an actual safety incident. We have had lots of them in Norwegian oil and gas, and in oil and gas in general: safety incidents that are not cyber-related at all, but where we can see that the incident could be replicated by a cyber attack. If the incident could be replicated with a cyber cause, that’s something we should consider a credible threat going forward.

On credibility, I also think we need to have in the back of our mind, in the analysis, a focus on technology evolution, the development and sharing of new technology. I see it as a graph where we are exposed to more and more heavy machinery, or heavy software, that can be used on the adversary side.

Kenneth Titlestad
So with Kali Linux and Metasploit, and now there’s also AI, what is becoming a credible threat is more and more sophisticated stuff, due to the development of technology. AI is now on both sides of the table: as an attacker’s tool that makes more attacks credible, but also on the defensive side, where we actually need to use it to protect against more and more sophisticated attacks.

Andrew Ginter
So Nate, let me go just a little bit deeper into Kenneth’s last example. Two days before I recorded the session with Kenneth, I was at another event. I had a half-hour speaking slot, and I was listening politely to the other speakers. One of the speakers was a penetration tester. I asked the pen tester a question about AI, and his answer alarmed me.

I discussed it with Kenneth, and I’ve discussed it with others since; predicting the future is difficult. I asked the pen tester: you touched on AI, what should we look for from AI going forward? Should we worry about AI crafting phishing attacks? Because I’ve heard of that happening. Should we worry about AI helping the bad guys write more sophisticated malware? Because I’ve heard of that happening too.

I paused, and his answer was: Andrew, you’re not thinking hard enough about this problem. Yeah, that stuff’s happening. But what you need to worry about is somebody taking a Kali Linux ISO image, the Linux disk image that everybody uses, that all the pen testers use, full of attack tools, he says. Taking that gigabyte of ISO image and coupling it with two gigabytes of AI model. And the model has not been trained on natural language and crafting phishing attacks. The model has been trained by watching professional pen testers attack OT systems, mostly in test beds. This is what pen testers do: they take a test bed that is a copy of the system they’re supposed to be pen testing. No one does the pen test on a live system; they do it on a test bed.

They use the Kali Linux tools. They attack the system and demonstrate how you can get into it and bring about simulated physical consequences. So you’ve taught this AI model how to use the Kali Linux tools to attack OT systems, to brick stuff and bring about physical consequences. You take that trained model and couple it with the image.

Wrap it all up in enough code to run the image as a sort of embedded virtual machine, run the AI model, the million-by-million matrix of numbers that is a neural network, run the Kali Linux image, and have the AI operate the tools to attack a real OT system. Drop those three, three and a half gigabytes of attack code on an OT asset, start it, and walk away. It will figure out what’s there. It will figure out how to attack it. It will figure out how to bring about physical consequences.

I heard that and I thought: crap, that’s nasty. Back in the day, Stuxnet was autonomous. It did its thing, but it was a massive investment to produce an asset, a piece of malware, that did its thing without human intervention. This strikes me as again something that will do its thing without human intervention, and it will figure things out as it goes. It’s one investment you can leverage across hundreds of different kinds of targets.

I was alarmed. This is something I’m thinking about going forward. To me this is a credible threat; this is something we all need to worry about. I don’t know that this thing exists yet, but I’m pretty sure it will in five years.

Andrew Ginter
OK, so that’s a lot to worry about. Can I ask: is everything credible? What, in your mind, is not a credible threat at this point?

Kenneth Titlestad
I would think that large-scale destructive attacks on big machinery are not something I would consider a credible attack, but it also goes back to the motivation of the threat actor. For instance, if you are a small municipality, I would say that a lot of really heavy, sophisticated cyber attacks wouldn’t actually be credible, because the target isn’t interesting to such a threat actor. So large-scale destructive attacks are, in a lot of scenarios, not a credible attack.

And then we have, for instance, large-scale blackouts, which are quite an interesting story nowadays. A couple of weeks ago I would have said a blackout wasn’t a credible attack. Now we see that it can happen. Spain, for instance, was probably not a cyber attack, but the consequence side of it happened. If we can show, or identify, that it could be caused by a cyber attack, then suddenly, within the last few weeks, it has become a credible attack.

And also swarm-type attacks. I hear discussions on that from time to time, about whether it’s a credible thing that you could attack millions of cars. As of now, I don’t see that as a credible attack, but things can change.

Nathaniel Nelson
You know, that’s an interesting statement he made there, that large-scale attacks on heavy machinery aren’t credible. When I think about what we talk about on this podcast, the purpose of OT security presumably is that there are significant risks to really important machines, at large scale. But maybe at this point we’ve covered that.

Andrew Ginter

That’s a good point. I think one of the lessons here is that determining what is and is not credible is a judgment call. Different experts are going to disagree. A few years ago I saw research published saying: let’s take, for the sake of argument, the possibility of attacking a chemical plant and causing a toxic discharge. The researchers concluded that it was theoretically possible, but it would take such an enormous amount of effort on the part of the adversary, all of which would have to go undetected by the site, that in the end, they said, we just don’t know that it’s reasonable to believe this will ever happen. So that was one side.

But again, experts disagree. This is what I learned on the very first book I wrote: I got wildly different feedback from different internationally recognized experts. Here’s an insight. To me, this means that when we make judgments about credibility, if we’re going to make a mistake, we should make it on the side of caution, err on the side of caution, because different experts have different opinions and we might be wrong. Every expert has to be honest enough to admit they might be wrong, and build a margin for error into their judgment of what’s credible.

So even if we don’t believe that an attack that, say, destroys a turbine is credible, we might want to deploy some reasonable defences against such a not-terribly-credible attack anyway.

Just because we might be wrong. And this is something that is also being discussed: how big a margin for error do we need to build into our planning? I talked to a gentleman who designs pedestrian bridges. I asked him, how do you calculate the maximum load? He says, that’s easy, Andrew. You build a barrier on either side of the bridge so vehicles can’t get on the bridge.

Most people are less than two meters tall. Most people are mostly water. So you model two meters of water over the width of the bridge and the length of the bridge. That’s your maximum load. And then, he says, you multiply that by eight and build the bridge to carry the multiplied load. Because these are people we’re talking about, it is unacceptable for the bridge to fail under load. This is the margin for error that engineers routinely build into their safety calculations. I believe we as experts in cybersecurity need to build a margin for error into our security planning as well.
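The bridge designer’s rule of thumb works out to concrete numbers. A minimal sketch of the arithmetic: the two meters of water and the factor of eight come from the anecdote, while the deck dimensions below are made up for illustration.

```python
# Pedestrian-bridge design load, per the rule of thumb described above:
# model the crowd as 2 m of water covering the deck, then multiply by a
# safety factor of 8. Deck dimensions are illustrative.

WATER_DENSITY = 1000.0   # kg/m^3 -- people are mostly water
CROWD_HEIGHT  = 2.0      # m -- most people are under two meters tall
SAFETY_FACTOR = 8.0      # engineering margin for error

def design_load_kg(width_m: float, length_m: float) -> float:
    """Design load: worst-credible crowd mass times the safety factor."""
    crowd_mass = WATER_DENSITY * CROWD_HEIGHT * width_m * length_m
    return crowd_mass * SAFETY_FACTOR

# A hypothetical 3 m x 20 m footbridge:
# crowd model = 1000 * 2 * 3 * 20 = 120,000 kg; design load = 960,000 kg
print(f"{design_load_kg(3.0, 20.0):,.0f} kg")
```

The analogy to security planning: estimate the worst credible load, then deliberately over-defend, because the cost of being wrong is unacceptable.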

Andrew Ginter
So this all makes sense. One of the things that appeals to me very much about the credibility concept is using it to communicate with non-technical decision makers like boards of directors. You have experience with this. Can you talk about that experience?

Kenneth Titlestad
Yeah, I think it’s interesting. When we talk to board members and the CXOs in different companies, they don’t necessarily go into details about risk, but they know that they have a special accountability.

So when we talk about credibility with those kinds of people, they get more on board with the discussion. They know they have a special accountability, so they draw a line in the sand. For instance, if the potential consequence is that somebody could die, then that is a non-acceptable risk, and they take on that kind of position because of their accountability as board members or heads of the company.

And they are also held accountable by the government and by society. Some risks, on the consequence side, if we are talking about people dying, are absolutely not acceptable to society. Society’s representatives in that approach are the elected officials in government, and they hold the heads of the company, or the board of directors, accountable at the top of the company.

Andrew Ginter
So that makes sense. Boards care about consequences that the business or society would find unacceptable. But you didn’t use the word credible. How does credibility fit into acceptability when you’re communicating with boards?

Kenneth Titlestad
Yeah. We don’t have to defend against all possible cyber attacks. What we do have to protect against is the credible ones. So when we bring credibility in as a concept, it’s something that communicates much better with boards of directors and the heads of companies.

Andrew Ginter
This has been good, but it’s a field big enough that I fear we’ve missed something. Let me ask you an open question: what should I have asked you here?

Kenneth Titlestad
We’ve been talking about credibility. Credibility is what is reasonable to believe. But it’s not enough to talk about reasonable attacks; we also need to be talking about reasonable defence. So what is a reasonable defence? We then need to be considering all the tools.

We need to use all the tools at our disposal for a reasonable defence, and nowadays that also obviously includes AI on the defensive side, not only on the offensive side.

This is also a big part of the reason I joined Omny. Omny is built on our security knowledge graph, a data model where we can put all the information we need about our assets: the vulnerabilities, the network topologies, the threats, the threat actors. So it becomes a digital representation, a digital twin, of our asset. Combining that with AI, which we have built in from the beginning, we get very strong assistance on security where it matters most.

Andrew Ginter
Cool. Well, this has been great. Thank you, Kenneth, for joining us. Before I let you go, can I ask you to sum up for our listeners, what should we take away from this episode?

Kenneth Titlestad
Thank you, Andrew, for having me, and thank you so much for being here in Norway and visiting us at our office. We’ve had a good conversation about consequence, with the focus on worst-case consequences. We moved on to talking about credibility, replacing the likelihood concept with credibility, especially for high-impact events where we don’t have the probability data to talk about them. We also talked about reasonable attacks and reasonable defences: what is a reasonable defence against increasingly credible, sophisticated attacks with high consequences? So it’s been a really good discussion about all of these topics.

Kenneth Titlestad
If people want to know more about these topics, or want to discuss them, please connect with me on LinkedIn and message me there. I’m more than happy to discuss these topics. And please visit our webpage, Omnysecurity.com. Our platform addresses most of the topics we talked about today.

Nathaniel Nelson
Andrew, that just about does it for your conversation with Kenneth Titlestad. Do you have any final words you would like to take out our episode with today?

Andrew Ginter
Yeah, we’ve talked about credibility, and this is a concept that is relevant to the high end of sophisticated attacks, the high end of consequence.

Let me try to give a very simple example. I was raised in Brooks, Alberta, a little town of 10,000 people in the middle of nowhere, literally an hour’s drive from any larger population centre. In terms of cyber threats, let’s pick on, I don’t know, the Russian military. Does the Russian military have the money to buy three absolute cyber gurus, train them up on water systems, plant them as a sleeper cell in the workforce of the town of Brooks water treatment system, and have them sit on their hands for three years?

And then, after three years, using the passwords they’ve gained, the trust they’ve gained and the expertise that they have, have them launch a crippling cyber attack that damages equipment and takes the water treatment system down for 45 days. Is that a credible threat? Well, the Russians have the money to do that. They have the capability to do that.

But you have to ask: why would they bother? This is a little agricultural community. There’s a little bit of oil and gas activity. Why would they bother? It does not seem reasonable to launch that kind of attack against the town of Brooks. It just makes no sense. I don’t see that as a credible threat.

Is that a credible threat for the water treatment system in the city of Washington, DC, home of the Pentagon? I do think that’s a credible threat. So the question of what’s credible is an important question that I see more and more people asking in risk analysis going forward. We have to figure out what’s credible for us. What capabilities do our adversaries have? What kind of assets are we protecting? What kind of defences have we deployed? What makes sense? What is reasonable to believe in terms of the bad guys coming after us? This is an important question going forward, and I see lots of people discussing it. I’m grateful for the chance to explore the concept here with Kenneth.

Nathaniel Nelson
Well, thanks to Kenneth for exploring this with us. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.


The post Credibility, not Likelihood – Episode 140 appeared first on Waterfall Security Solutions.

Lessons Learned From Incident Response – Episode 139 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/lessons-learned-from-incident-response-episode-139/ Wed, 09 Jul 2025 10:53:04 +0000 https://waterfall-security.com/?p=33748 Tune in to 'Lessons Learned From Incident Response', the latest episode of Waterfall Security's OT cybersecurity Podcast.



“If you didn’t listen to a single thing I said, you can listen to these three things: collaborate, plan, and practice.” – Chris Sistrunk, Technical Lead of ICS at Mandiant

Transcript of Lessons Learned From Incident Response | Episode 139

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome listeners to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today. Andrew, how are you?

Andrew Ginter
I’m very well, thank you, Nate. Our guest today is Chris Sistrunk. He is the technical lead of the Mandiant ICS or OT security consulting team, whatever you wish to call it.

Google purchased Mandiant in 2022, but they’re still keeping the Mandiant name. So he still identifies as the technical lead of industrial security consulting at Mandiant.

And our topic: as part of their consulting practice, they do a lot of incident response. He’s going to talk about lessons learned from incident response in the industrial security space.

Nathaniel Nelson
Then without further ado, here’s your interview.

Andrew Ginter
Hello, Chris, and welcome to the podcast. Before we get started, can I ask you to say a few words for our listeners about your background and about the good work that you’re doing at Mandiant?

Chris Sistrunk
Okay, thanks, Andrew. I’m at Mandiant on the ICS/OT consulting team, been doing that for over 11 years now, focused on ICS/OT security consulting around the world with every type of critical infrastructure, doing incident response, strategic and technical assessments, and doing training as well.

Before that, I was an electrical engineer, still am, but for a large electric utility, Entergy. I was there over 11 years as a senior electrical engineer, working in transmission and distribution SCADA, substation automation, and distribution design.

So that’s a little bit about me. And again, just working for Mandiant as part of Google Cloud.

Andrew Ginter
And our topic is incidents, lessons from incidents. But let’s talk about the big picture of incidents first. Waterfall puts out a threat report annually; I’m one of the contributors.

We go through thousands of public incident reports looking for the needles in the haystack, the incidents where there were physical consequences, where there were shutdowns, where sometimes equipment was damaged.

And we rely on the public record, on public disclosure. And so I’ve always believed that we were under-reporting, because, again, there aren’t that many confidential disclosures that people tell me about.

But I’m guessing that there’s a lot more out there that never makes it into the public eye. You folks work behind the scenes. Without breaching any non-disclosure agreements or anything, can you talk about the big picture? Do you see incidents, especially incidents in the industrial space with physical consequences, incidents that triggered shutdowns, that are not publicly reported?

How many are there? What do they look like? Can you talk about what I would not see by looking at the public record?

Chris Sistrunk
Sure, thanks for the question. Yeah, I think we’re talking about cybersecurity incidents here. And there are many incidents that happen every day, right? But life goes on. Squirrels happen in the grid, right?

But for cybersecurity incidents, I do believe we’re seeing an increase. I can’t go into how many. We actually have a report, M-Trends, that Mandiant puts out every year. It’s going to come out later this month, for RSA.

And this is a yearly report, and we report on the different themes, the different targeted victims, the different threat groups, the TTPs. But for cyber attacks that impact, say, production, or cause a company to shut their operations down, I don’t have any hard and fast numbers to talk about, but we have seen an increase. And you can look at not just our report, but also the reports of others: IBM X-Force, Verizon DBIR, Dragos, others. There are increasing reports of these, and a lot of it has to do with things like ransomware, either directly impacting the control system environment, which we have responded to at a manufacturer and a few others, or indirectly. We have seen in the public news where a company might have to shut down operations due to indirect impact.

Maybe their enterprise resource planning software or manufacturing execution software was impacted. That’s an indirect impact: the flow of industry-critical data was halted, which means I can’t produce my orders anymore, or track shipping or logistics, things like that. So we’re seeing a lot of those.

There are others in the electric sector that have to be reported in OE-417 reports. And if there’s a material impact, obviously they’ll be filed, or they’re supposed to be filed, in the 8-K or 10-K with the SEC.

And so I think if you take in all of those sources and look at them together, we see there’s an increase in operational impact. But I think the engineers and the folks that run these systems are doing a good job of minimizing the impact in these situations, especially for electric and water and other critical infrastructure. Manufacturing is critical too, and I’d say it is probably the most highly targeted outside of healthcare and a few other areas.

Andrew Ginter
So work with me on the numbers for just one more minute. I’m on the record in the Waterfall threat report speculating as to what’s going on with public disclosures. It’s my opinion, and I have limited information to back it up, but it’s my opinion that the new disclosure rules in the SEC and other jurisdictions around the world are in fact reducing the amount of information in the public domain rather than increasing it.

And the reason I suggest this is because it seems to me that with the new rules, every incident response team on the planet, roughly, I overgeneralize, has a new step two in their playbook. Step two is call the lawyers.

And what do the lawyers say? They say: say nothing. Because if you disclose improperly, if you fail to disclose widely enough, you can be accused of facilitating insider trading.

If you disclose too much information, you might get sued. People have been sued for disclosing incorrect information about security incidents to the public. People buy and trade shares, then they find out the information was incorrect, and they sue.

And so, to me, the mandate for the lawyers is say the minimum the law requires, because if you say too much, you risk making a mistake and getting sued, and you don’t want to get sued.

And if you say too little, you’re going to get sued. So the lawyers minimize: if you have a material incident, you must report it.

If it turns out the incident is not material to the finances of the company, then you don’t have to report it. And again, to minimize the risk of getting sued for reporting incorrect information, you report nothing.

So my sense is that we’re seeing fewer reports because of these mandatory rules, not more of them. What do you see? You see this from the other side. Does this make any sense? Do you have a different perspective?

Chris Sistrunk
Well, I can say that, as an incident responder working with victims in critical infrastructure, but also outside it, I think this is a broader question you bring up. I can definitely confirm that we work with external counsel that a victim may have hired to handle a lot of these reporting, or not-reporting, requirements.

I can’t say or confirm that the lawyers themselves, external counsel, under-report. I can’t say that. I don’t know.

I’m not a lawyer, nor do I play one on Facebook. So I will just stick to saying, yes, we have worked with external counsel. And usually we, as the incident responder, do not say anything in public unless the victim company, our client, asks us to. Because sometimes sharing information is a helpful thing, especially if it’s a big breach: sharing the lessons learned about what happened to them with others, just like we did back in the day when we had the SolarWinds breach. So there are two ways of thinking about that. Maybe you can pull on that thread with some other experts, but not me. I don’t know about the external counsel part.

Nathaniel Nelson
Andrew, you’d referenced Waterfall’s annual threat report, a report which I’ve covered in the past for Dark Reading. I’m not sure I’ve seen this year’s iteration, so maybe you could tell listeners a bit about what the report covers and what the numbers are showing lately.

Andrew Ginter
Sure. The report uses a public data set; the entire data set is in the appendix, and you can click through to it if you wish. In our statistics we count deliberate cyber attacks with physical consequences. Not “stole some money”: physical consequences, in heavy industry and critical infrastructure, the industries we serve. In the public record, no confidential disclosures. The numbers last year were 72 attacks, I believe, with some 100, 150 sites affected, something like that, I forget the numbers. Many of the attacks affected multiple sites. This year we are up from 72: we have 76 attacks affecting a little over 1,000 sites. So there were more sites affected, but the number of attacks did not increase sharply.

And this is why, again, I speculate: why have we seen a plateau? We went up from essentially zero in, let’s say, 2019, to 72, and then in 2024, 76. Why do we seem to have a bit of a plateau? I’m speculating it has to do with the SEC rules. People are now legally obliged to disclose, not just in the United States by the Securities and Exchange Commission, but in other jurisdictions around the world, which have similar rules.

If you have an incident that is “material”, that any reasonable investor would use as grounds to buy or sell or value shares, you must disclose it.

But I have the sense that we are seeing fewer disclosures. By law you’re required to disclose material incidents, and I speculate that, because the lawyers are involved, they disclose the material incidents and they squash everything else. That’s the sense I have.

But you asked about the numbers. Seventy-six last year. Nation-state attacks are up: there were two the year before, and six last year. Is this a trend? It’s still small numbers. Who knows?

And industrial control system capable malware, malware that understands industrial protocols and is apparently designed to manipulate industrial systems, is up sharply. Three new kinds of malware with that capability were disclosed or found in the wild last year, versus seven in the preceding 15 years. Again, small numbers. Is it a blip? Is it a trend? Is AI helping these people write this stuff? We don’t know. So you look at the numbers, you scratch your head, and you go: I wonder what’s going on here? So that’s the threat report in a nutshell. There are other statistics in there, but those are the headlines.

Andrew Ginter
So that makes sense, and it leads us into the topic of the show, which is lessons learned from incidents. You folks do incident response all the time. Talk to me: what are you seeing out there? Is there an incident, or three, that sticks in your mind as “Andrew, the most important thing I have to tell you is…”? Or the most recent? Where would you like to start?

Chris Sistrunk
Okay, sure. We have been doing OT incident response since I’ve been here, and I can give you a few examples. Last year, in 2024, we responded to a North American manufacturing company that had their OT network, if we’re looking at the Purdue model, the third layer, level 3 of the network, directly impacted by the Akira ransomware group.

And what had happened was that an unknown internet connection had been made by a third party who was running the site. They had put in their own Cisco ASA firewall.

And it just so happened that there were two critical vulnerabilities in that firewall at the time, and the Akira ransomware group was targeting those exposed firewalls.

So I don’t necessarily think this was a targeted manufacturing OT attack. It’s just ransomware gangs doing what they do, trying to make money.

And so they were able to log in through these vulnerabilities and deploy the ransomware directly on the OT network, which was flat.

And every system but about five, six, or seven was completely encrypted, including the systems of their OT DCS vendors. And there were multiple vendors, not to pick on any one in particular, but GE, ABB, Rockwell and several others were there.

And the backup server was impacted, and the backup of the backup server was impacted; they were all on the same flat network. So this was a really tough situation, since the manufacturing company did not have any backups that were offline. The OT vendors I mentioned had to come on site to completely rebuild the Windows systems, the Windows servers, the engineering workstations, the HMIs, all the things that were Windows or Linux they had to completely rebuild.

The client didn’t pay the ransom, in other words. And so the lessons learned here: work with your OT vendors and OEMs, and even your contractors, to make sure that your Windows systems and Linux systems have antivirus. Make sure that you have OT backups that are segmented from the main OT network, keep offline backups, and test them on at least a yearly basis. Backups will get you out of a bad day, even if it’s an honest mistake at five o’clock on a Friday.

So there’s a basic win here: having a good backup strategy. And then, in this last case, we recommended they eliminate this external firewall and leverage the existing IT/OT DMZ firewall that came in from the main owner of that site. They essentially had a backdoor that this third-party contractor had installed on a new internet connection. So get away from the shadow IT; go back to your normal IT/OT DMZ with a jump box, two-factor authentication and all those things. If you do the basics and do them well, keep good segmentation, have backups, and patch your firewalls on a regular basis, I think that will go a long way, especially in this case.
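Chris’s backup advice, segmented backups, offline copies, and periodic restore tests, can be sketched end to end in a few lines. This is only a toy illustration using scratch paths and an invented config file, nothing from the actual incident: take a backup, store a checksum alongside it, then periodically prove that the backup still verifies and actually restores.

```python
# Toy sketch of the "keep offline backups and test them" advice above.
# All paths are temporary scratch paths invented for the illustration.
import hashlib
import tarfile
import tempfile
from pathlib import Path

def sha256_of(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

work = Path(tempfile.mkdtemp())
config = work / "config"
config.mkdir()
(config / "plc.cfg").write_text("plc_setpoint=42\n")  # stand-in for real OT config

# Take the backup and record its checksum at backup time.
backup = work / "backup.tar.gz"
with tarfile.open(backup, "w:gz") as tar:
    tar.add(config, arcname="config")
(work / "backup.sha256").write_text(sha256_of(backup))

# Later, on a schedule: verify the checksum, then prove the archive restores.
assert sha256_of(backup) == (work / "backup.sha256").read_text(), "backup corrupted"
restore = Path(tempfile.mkdtemp())
with tarfile.open(backup) as tar:
    tar.extractall(restore)
assert (restore / "config" / "plc.cfg").read_text() == "plc_setpoint=42\n"
print("backup OK")
```

The final restore-and-compare step is the lesson in miniature: a backup you have never restored is not yet a backup.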

Nathaniel Nelson
You know, I feel like I’ve heard a variant of the advice that Chris just gave a million times, and I don’t work in industrial security. So you folks must hear it all the time, or it must just be such basic knowledge that you don’t even think about it.

So are there really industrial sites out there that still need to hear that you shouldn’t be making an internet connection from your critical systems?

Andrew Ginter
Short answer: yes. People who do security assessments, not just incident response like we’re talking about here, but security assessments, come back and say they regularly find connections out to the IT network, and occasionally straight out to the internet.

The connections to the IT network tend to have been deployed by the engineering team or the IT team to make their lives easier. People with enough gray hair, like me, talk about how the systems used to be air-gapped. This was a very long time ago, we’re talking 30, 40 years, but the systems used to be air-gapped.

And people with gray hair like me might assume that’s still the case. It’s not. Everybody who does audits reports these connections. Yes, it’s disturbing that there are connections to the IT network that are poorly secured.

But the really disturbing stuff is the vendors going in. When you do an audit on a site, time and time again I hear people saying, yeah, they discovered three different internet connections the vendors stuck in there.

And you’re going, what? Wouldn’t you notice if there was a new internet connection? No internet service provider gives you a connection for free. You’ve got to run wires. You’ve got to pay for this thing every month. It should be showing up on your bill. No, it’s not.

There are a lot of wires being run while stuff is being deployed; you don’t notice one new wire. And the vendors pay month after month for the internet connection, so it doesn’t even show up on the bill of the owner and operator. The vendors are providing a remote management or remote maintenance service, and they want to minimize their costs.

They want to maximize their convenience in terms of getting into the site. So they deploy rogue DSL routers. They deploy rogue firewalls beside the site’s internet connection. They might deploy rogue cellular access points where there aren’t even wires to run; it’s just a box sitting there with a label on it saying, “Important: do not remove.” And of course, that makes it invisible to everybody who’s looking at it. Oh, that? Don’t touch that one.

Yes, it’s very common. The advice I try to give people is: when you do a risk assessment or a walkthrough or an audit of your site, look for these rogue connections.

Unfortunately, you’re probably going to find one or two of them. Contractual penalties with the vendor help, but they’re no guarantee.

Andrew Ginter
So that triggered something; let me dig a little deeper. You said that the victim decided not to pay the ransom. Do you see victims ever paying the ransom to recover an OT network? To recover the HMIs, or worse than that, the PLCs and the safety systems?

Why would they? Does anyone trust a criminal enough to take the tool the criminal provides and run it on their safety system to restore it? Does anyone trust a criminal that far? Does that happen?

Chris Sistrunk
Okay, so good question. We have seen traditional IT systems where they pay the ransom and get access back to those systems.

And some that are OT-adjacent, such as Colonial Pipeline, and in hospitals. In both of those examples, Colonial Pipeline or name-a-hospital breach, we do know there was OT-type critical information that was impacted, and so of course they paid. And if someone didn’t trust these ransomware gangs to do what they say they’re going to do, those ransomware gangs would be out of business. If you pay them and the decryptor doesn’t work, then it’s no good, and their ransomware business is over with, at least for that gang.

But for OT, in the instance I talked about, they did not pay. And I don’t have enough data to know whether, for OT directly, on the control systems themselves, the Windows HMIs, engineering workstations, DCS servers, SCADA servers, victims have paid in those instances.

But I’d say it’s plausible. And it really comes down to a business decision of the plant owner, the CEO of the company, based on what the engineers at the lower level say: hey, do we have backups? Can we get the vendors to come in?

And so I really don’t have enough information to say whether OT asset owners, a plant, a site, someone that directly operates OT, trust these ransomware gangs or not.

It may just roll up higher than them. Also, there’s usually advice from a ransomware negotiator, a third party that specializes in negotiating with ransomware gangs. They may advise to pay, or not to pay, or to negotiate a reduced payment. So it’s very, very complicated.

I know I didn’t answer your question directly, but in the instances we’ve seen, we have seen them not pay and we have seen them pay, OT or not.

Andrew Ginter
That makes sense. So coming back to our theme, lessons from incidents, the lessons from this incident are: get rid of that firewall, use the existing infrastructure, and look after your backups. If the backups are encrypted, it’s all over. That makes perfect sense.

What else have you got? What else have you been running into lately that’s interesting and noteworthy?

Chris Sistrunk
Yeah, it basically boils down to either ransomware or commodity malware. So I’ve got another example about ransomware: an electric utility was impacted by ransomware on the IT side.

But they had a good incident response plan, and they severed the IT and OT connections, even down to the power plant networks. And so that was really amazing; that’s a good story. And we were able to verify with the IT team that the threat actor, the ransomware group Quantum, was scanning the OT DMZ, but they didn’t get a chance to be let through.

We did do a full assessment of their DMZ, and we looked at the firewalls and the domain controller, and others down inside the OT networks.

And we found that they were actually pretty lucky, because they had some weaknesses in some of the firewalls. Eventually, if the ransomware actor had persisted long enough, they could have gotten through that firewall and made it into the DMZ. And the Active Directory had some weaknesses as well, so they could have gotten domain admin access and pivoted to the OT network.

But the great thing, to highlight again, is that they had a good incident response plan, they were able to segment quickly, and they were able to have their OT vendor, in this case Emerson Ovation, go on site. And the vendor was able to not only take the IOCs that we had from the ransomware, but also to sweep, because that was in their contract, looking at the PLC logs, the OT workstations, endpoint protection, and all that stuff.

So we all worked in concert together in this incident. And then they hardened the firewalls, hardened the domain controllers, hardened the workstation configurations. Before doing anything else, they did all of that.

Only when the IT ransomware was eradicated and everything was hardened did they say, okay, now we’ll reconnect everything back the way it was. So that was a really great lesson learned from another ransomware incident. It wasn’t a direct impact on OT, but it was a great opportunity to leverage the incident response plan that they had.

Andrew Ginter
So, Nate, the concept of separating IT from OT networks in an emergency is a concept that I see increasingly. I think we’ve reported on the show a few times that this is what the TSA demands of pipeline operators, petrochemical pipeline operators, ever since Colonial: the ability to separate the networks in an emergency so that you can keep the pipeline running while IT is being cleaned up.

I haven’t actually read a translation of the Danish law, but apparently in Denmark there’s a recent law, within the last 12 months, saying exactly the same thing.

The TSA rules apply to pipelines and rails. In Denmark, the law applies to critical infrastructure. And it says that in an emergency, you have to be able to separate, they call it “islanding”, the industrial control network.

And as Chris points out, it can be effective, but it relies on really rapid intrusion detection and rapid response. As Chris said, the bad guys had been testing the OT firewall; if they had had just a little bit longer, they could have gotten through. So even though it’s imperfect, it is a measure that I see increasingly required of critical infrastructure operators, and recommended to non-critical operators, as a measure that helps, especially on the incident response side.

Andrew Ginter
Have you got another example for us? Three is a magic number, and you’ve given us two sets of insights. What else have you got for us in terms of lessons learned?

Chris Sistrunk
Yeah. There are lessons learned. I can name a few other lessons learned from just about any attack, right? Making sure that you have at least Windows systems with antivirus. In a lot of cases, the OT network didn't have antivirus, even basic antivirus, never mind an agent or EDR solution.

If you have those, great, but if you don't have any antivirus, you need to get at least a supported version of Windows or another operating system, and with antivirus.

Having good backups, having good vendor support. Now, this last incident we responded to was using a living off the land attack.

So we responded to an electric utility in Ukraine in 2022. It was a distribution utility where the attacker came in through the IT network and deployed their typical wiper malware. This was the group APT44, or Sandworm team, which has been targeting critical infrastructure around the world for quite a while.

And they were able to pivot to the SCADA system and use a feature of the SCADA system to trip breakers, using a tool that was built into the SCADA system itself.

So they were just giving it a list of breakers to trip and calling that executable in the system to trip those breakers on behalf of the attackers. And so the lesson learned here is that targeted attacks are not going to use malware; they're going to use the features or the inherent vulnerabilities in an OT network.

Stealing valid credentials, like an operator workstation or an engineering administrator account. If an attacker can even spearphish an engineer or a network admin on the IT network, and you don't have good segmentation of roles from IT to OT, then that attacker is going to use every one of those tools to evade detection and bypass your normal detections,

Because they're coming in as a valid user. So the lesson learned there is to limit the amount of administrative access. And this is role-based access control, right? And the person that got promoted and is now in a different department, does he still need admin rights?

Does this person have enough control for just their area only? Are the area responsibilities too wide? And now we say, OK, we need to reduce the amount of admin.

Do we require two-factor authentication or even hardware two-factor authentication to really reduce the attacker down to an insider threat?

Because remotely, that's very hard to do, to bypass hardware token-based two-factor authentication. And there are some living off the land guides out there.

The U.S. government DOE has put out a threat hunting guide for living off the land attacks after the Volt Typhoon announcements last year.

But I would also go a step above and beyond that: learn good ways to detect anomalous logins, even from your own folks. If a login is at an abnormal time or from an abnormal location, you're really going to have to have some tuning on some of these detections.

And the only way to really test those is with a red team that's trying to be quiet and not trigger your detections. That is what some of the more advanced asset owners and end users are doing: leveraging red teams, hiring red teams like what we do at Mandiant, to come in and see if we can do living off the land attacks to bypass their detections.

Nathaniel Nelson
Since Chris mentioned it but moved on before actually defining it, let me just, for listeners: living off the land is the process by which an attacker, rather than using their own malicious tooling, makes use of legitimate software or functionality of the system they are attacking to perform malicious actions on it.

It’s been a growing trend in recent years, I believe, because it’s so effective in that it is so difficult to detect.

You know, you could spot malware with certain kinds of tools, but can you spot somebody doing things with legitimate aspects of Windows or whatever you might be using?

It sounds, though, like Chris is talking about detecting living off the land tactics, which seems difficult, Andrew.

Andrew Ginter
That's right. I mean, I have been following living off the land to a degree. The short answer is you run an antivirus scan on a machine that's been compromised by a living off the land attack, and it comes up squeaky clean.

There's nothing nasty on the machine. And what I heard Chris say is that this is because the bad guys are using normal mechanisms, especially remote access, to log into these systems as if they were normal users, and then use the tools on the machine to attack the network, or to wait until it's opportune and then attack the network.

And what I heard him say is that because it's a lot of remote access, you can detect this by focusing hard on your remote access system. You can prevent it by throwing in some hardware-based two-factor. That will solve a lot of the problem, not necessarily all of it. There are always vulnerabilities and zero days, but the two-factor helps enormously. It's way better than not having two-factor.

But that’s preventive. On the detective side, he said, pay attention to your remote access. If normal users are logging in at strange times, that should raise a red flag.

If normal users are logging in from strange places, say the IP address coming in is from China. Well, is Fred in China this week? No, he's not. So what I heard was that one way to help detect living off the land techniques is to pay close attention, in your intrusion detection system, to the intelligence that you're getting about remote users logging in.
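Andrew's advice here, watch for logins at odd hours or from odd places, can be sketched in a few lines. The baseline data, field names, and thresholds below are all hypothetical, invented for illustration; a real detection system would learn per-user baselines from weeks of history rather than hard-coding them.

```python
from datetime import datetime

# Hypothetical per-user baseline: typical working hours and the
# countries previously seen for that account. Invented for this sketch.
BASELINES = {
    "fred": {"hours": range(7, 19), "countries": {"US", "CA"}},
}

def login_alerts(user, timestamp, country):
    """Return a list of reasons this login looks anomalous (empty if none)."""
    base = BASELINES.get(user)
    if base is None:
        return ["unknown user"]
    reasons = []
    if timestamp.hour not in base["hours"]:
        reasons.append("login outside normal hours")
    if country not in base["countries"]:
        reasons.append(f"login from unusual country: {country}")
    return reasons

# Fred logging in at 3 a.m. from China trips both checks.
alerts = login_alerts("fred", datetime(2025, 3, 4, 3, 0), "CN")
```

In practice this kind of check sits inside an intrusion detection or SIEM rule, and the tuning Chris mentions is exactly about choosing those baselines and thresholds so your own night-shift staff don't drown you in false alarms.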

Andrew Ginter
So one more question. We talk to a lot of folks on the podcast. A lot of them are vendors with technology that we talk about. And a consistent theme for most of these vendors, most of these technologies, is operational benefits.

Yes, the technology, whatever it is, is helping with cybersecurity. But often, this stuff helps with just general operations in sometimes surprising ways.

We've been talking about incidents and lessons, and a lot of what you do is incident response. Are there operational benefits that you run into, where people say, I did what you told me and everything is working more smoothly, not just on the security side? Do you have anything like that for us?

Chris Sistrunk
Oh, absolutely. One of the things that I always tout is that looking at your network, looking at the packet captures in the network, can deliver not just cybersecurity benefits, but these operational benefits too.

You can see things like switch failures happening, TCP retransmissions happening, all this traffic. Maybe your Windows HMIs are trying to reach out to Windows Update, but it's blocked by the IT/OT firewall, or there may be no connection at all and they're still trying to reach out. All this unnecessary traffic, or indications of improper configurations, misconfigurations, and things like that.

So just look at your network with some of the tools that are out there: free tools, paid tools, ICS-specific tools, or IT-specific tools, it doesn't matter. If you take any one of those, say even just Wireshark, and look in your OT network, you can get an idea of what traffic doesn't need to be there, so that you can eliminate it and make your improvements to the system.

And now I have better visibility. If there is an incident, it's easier to detect whether it's a cyber incident or whether something's operationally wrong, like a switch failure.

And so there's a really great benefit there. It also helps improve reliability. We did an assessment at a company that was having problems with a conveyor belt. If the conveyor belt wasn't timed exactly right, if they had too much latency on the network, the conveyor belt would stop, and all the things on it would go everywhere; it was a disaster.

So we just looked in the network: oh, you've got all these TCP retransmissions. And you look at the map in the software and say, oh, it's coming from these two IP addresses.

Oh, we know what that equipment is. And we had the network person come over: oh, I've been trying to figure this out for weeks. Just using a tool like that, they were able to find and fix the problem, and they fixed their latency issue because of it.
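The retransmission hunting Chris describes can be sketched as a toy analysis. The assumption here is that packet summaries have already been exported (for example from a Wireshark or tshark capture) into simple records; the field names and sample addresses below are invented for illustration.

```python
from collections import Counter

def retransmitters(packets, threshold=3):
    """Count TCP retransmissions per source IP and return the worst offenders."""
    counts = Counter(
        p["src"] for p in packets if p.get("retransmission")
    )
    return {ip: n for ip, n in counts.items() if n >= threshold}

# Invented sample: two chatty devices and one healthy one.
capture = (
    [{"src": "10.0.0.7", "retransmission": True}] * 5
    + [{"src": "10.0.0.9", "retransmission": True}] * 4
    + [{"src": "10.0.0.2", "retransmission": False}] * 50
)
suspects = retransmitters(capture)
```

The same grouping-by-source idea is what the "map in the software" gave Chris's team: once retransmissions cluster on one or two addresses, you know exactly which equipment to go look at.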

So going back to incident response: having these things, having an incident response plan. A lot of OT already has this, because of disasters: fire, floods, storms, spills, air releases, safety issues. That's all part of their normal disaster recovery or incident response plans.

If you already have one of those, you’ve already done 90% of the work to have a cyber incident response plan. You just now have a cyber incident response added to that. So that’s the whole premise behind things like ICS4ICS, incident command system for industrial control systems.

And have, say, a chief person in charge of cybersecurity for a site: for a paper mill, for a power plant, for a manufacturing facility. Even though that's not your day-to-day job all the time, and even if you have multiple plants with multiple leads for those plants, every decision will still go through the plant manager, the general manager of the plant or site, but at least you have someone who is in charge of cybersecurity.

Just like you have a designated fire watch person or anything else. So if you take the safety culture that we've known about for over 100 years and mold your cybersecurity culture to fit with that, things will make a lot more sense. We've already invented this. We're not reinventing the wheel here. Now we're just including another paradigm of cybersecurity, network security, and endpoint security into these things that we have been doing.

There's a fire. Okay, let's put it out. Incident response. If you have a plan, that's great. If you don't have a plan and you run around, that's not good. So if you have a plan, you can at least prepare for it. And sometimes that's the win. Being prepared is better than not being prepared.

Andrew Ginter
Well, this has been tremendous. Thank you, Chris, for joining us. Before we let you go, can you sum up for our listeners? What are the key points we should take away here?

Chris Sistrunk
Sure. If you didn't listen to a single thing I said, you can listen to these three things: collaborate, plan, and practice. So, collaborate. Get your IT teams talking to your OT teams, talking to your manufacturers, and identify the right roles within each of those.

And make sure you get together and talk about these things. Have some donuts and coffee. Collaborating, knowing who is in charge of what, knowing who to call when, is half the battle. Plan: having an incident response plan, or including OT security in your incident response plan and/or engineering procedures, is going to help when an incident impacts OT directly or indirectly.

And then practice. You can even start with a simple question: hey, what would we do in an incident? Or go further, to having a tabletop exercise, or collecting security logs from a PLC. How long does that take? How many devices do we have? If the general manager asks how long it's going to take to pull all the logs from all of our systems, you won't have to say, I don't know.

You'll just know: this will take two hours and 45 minutes, because we've tested it. So collaborate, plan, and practice.
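That "two hours and 45 minutes" only comes from having measured it, but the shape of the estimate is simple arithmetic, sketched below. All the numbers here are hypothetical, invented to illustrate why the answer depends on per-device time and how many collections run in parallel.

```python
import math

def collection_time_minutes(devices, minutes_per_device, parallel_streams):
    """Estimate wall-clock time to pull logs from every device,
    assuming collections run in fixed-size parallel batches."""
    batches = math.ceil(devices / parallel_streams)
    return batches * minutes_per_device

# Hypothetical fleet: 330 devices, 5 minutes each, 10 at a time
# -> 33 batches * 5 minutes = 165 minutes, i.e. 2 hours 45 minutes.
estimate = collection_time_minutes(330, 5, 10)
```

The point of the exercise Chris describes is to replace the formula's guesses with measured values, so the number you give the general manager is one you have actually tested.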

If you need help with OT security or IT security, we do that at Mandiant. We offer an incident response retainer that covers IT and OT. There's no separate retainer. If you have an IT incident and don't need OT, not a problem. If you have an OT-only incident, not a problem. If it's IT, cloud, and OT all at the same time, we can help you, around the world, 24/7.

And lastly, if you want to learn more about this, you can reach out to me, Chris Sistrunk at Google.com, via my email, LinkedIn, or social media like Bluesky. And check out some of our blogs on the Google Cloud or Mandiant security blog.

We have great content out there that is actually actionable, not marketing fluff. These are actual, actionable reports.

The next M-Trends report is coming out next week, RSA timeframe, end of April. So that’s a free report.

It's a great report to look at to gain some insights on what we've been responding to over the last year. And with that, I appreciate it. Collaborate, plan, and practice.

Nathaniel Nelson
Andrew, that just about concludes your interview with Chris Sistrunk. Do you have any final words to take us out with today?

Andrew Ginter
Yeah, I mean, Chris summed it up: collaborate, plan, and practice. What I heard, especially earlier in the interview, was: do the basics, guys. Some people call it basic hygiene. It's basically: do on an OT network as much as you can of what you would do on an IT network.

Put a little antivirus in on the systems that tolerate it. Get some backups. Get some off-site backups so that if the bad guys get in, they can't encrypt the off-site backups; they're somewhere else. Look for the internet connections the vendors leave behind, and get rid of them.

And in terms of living off the land, he gave some very concrete advice that I'd never heard before, saying, look, these people are coming in as users. Get two-factor. Two-factor will do a lot toward breaking up living off the land attacks.

And in your intrusion detection systems, look hard at what your remote users are doing. And if it seems at all unusual, that’s a clue that you’re being attacked. And, in terms of his collaborate, plan and practice, I really liked the fire warden analogy.

Say, look, if you have an industrial site that is flammable, your fire warden does not just sit on their hands until the place bursts into flames. The fire warden is someone who is active, raising the alarm when they see dangerous practices in this flammable plant. It's not just a reactive position, it's also a proactive position.

And we need that for cybersecurity, because basically every site is, in a sense, a flammable cybersecurity situation.

So it's not just that they sit on their hands until there's an incident and then they're in charge. They are actively looking around, just like a fire warden would, and saying, we shouldn't be doing this. My job is not just to put the fire out when it occurs, or to coordinate putting the fire out.

My job is to help prevent these things. I love that analogy. It makes so much sense. Anyhow, that's what I took from the episode.

Nathaniel Nelson
Sure. Well, thank you, Chris, for speaking with us. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a great pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there who’s listening.

The post Lessons Learned From Incident Response – Episode 139 appeared first on Waterfall Security Solutions.

]]>
Experience & Challenges Using Asset Inventory Tools – Episode 138 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/experience-challenges-using-asset-inventory-tools-episode-138/ Tue, 27 May 2025 16:28:48 +0000 https://waterfall-security.com/?p=32564 In this episode, Brian Derrico of Trident Cyber Partners walks us through what it's like to use inventory tools - different kinds of tools in different environments - which have become almost ubiquitous as main offerings or add-ons to OT security solutions.

The post Experience & Challenges Using Asset Inventory Tools – Episode 138 appeared first on Waterfall Security Solutions.

]]>

Experience & Challenges Using Asset Inventory Tools – Episode 138

Asset inventory tools have become almost ubiquitous as main offerings or add-ons to OT security solutions. In this episode, Brian Derrico of Trident Cyber Partners walks us through what it's like to use these tools - different kinds of tools in different environments.

For more episodes, follow us on:

Share this podcast:

“Trying to build a vulnerability management program when you don’t know what’s out there is a fool’s errand…you’re never going to be able to understand your total risk.” – Brian Derrico

Transcript of Experience & Challenges Using Asset Inventory Tools | Episode 138

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome listeners to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions.

He’s going to introduce the subject and guest of our show today. Andrew, how’s going?

Andrew Ginter
I'm very well, thank you, Nate. Our guest today is Brian Derrico. He is the founder of Trident Cyber Partners, and he's going to be talking about using asset inventory tools. We've had a lot of people on, vendors mostly, talking about what's available and how it works.

He’s going to look at the problem from the point of view of the user using these tools and why using these tools turns out to be a little harder than you might expect.

Nathaniel Nelson
Then without further ado, here’s your conversation with Brian Derrico.

Andrew Ginter
Hello Brian, and welcome to the podcast. Before we get started, can I ask you to, say a few words of introduction, tell us a little bit about yourself and about the good work that you’re doing at Trident Cyber Partners.

Brian Derrico
Good morning, Andrew. I'm Brian Derrico. I've been in the critical infrastructure sector for about 15 years. I spent my entire career at a large utility, solely focused on the cybersecurity requirements for nuclear power plants.

My last role there was program manager responsible for the entire cyber program across the fleet. Again, all really dealing with OT-type stuff and regulatory requirements.

I left in October and started my own business, Trident Cyber Partners, mainly aimed at helping other critical infrastructure sectors with their cyber problems.

Andrew Ginter
Thanks for that. And our topic eventually is going to be asset inventory. But let me ask you: you've spent a lot of time working in nuclear.

You've worked in very old plants, and you've done some work recently with very modern plants. Can you talk about, in terms of automation, what's the difference between the very old automation and the very new automation that you've been exposed to?

Brian Derrico
So there are a lot of similarities, right? At the end of the day, whether it's a new plant or an old plant, it is still a nuclear power plant. There is a nuclear reaction that is heating some water. That water is heating some other water in a secondary loop that is flashing to steam, spinning a turbine, making electricity.

So that is nuclear power 101. It doesn’t matter how new or old the plant is. They’ve all generally worked that way for a long, long period of time. To your point, what you do see is the amount of digital assets in those plants is drastically different from new to old.

So in my previous role, I had done some industry benchmarks to try and figure out what is sort of the average number of digital devices that are in a plant. And it came in around 1700 or 1800 per unit.

These new plants that they’re building, they’re an order of magnitude larger than that. There are potentially 10,000 devices on a single unit because everything is digital.

I don't know how many people have had an opportunity to tour a nuclear plant. I would certainly recommend it if you have that opportunity; it is a really, really cool thing to see. And most plants are all analog. There is a lot of analog equipment, a lot of analog indication.

In the new plants, that's not the case anymore. So trying to keep track of all of your digital devices becomes a very important and critical problem.

For example, in some of the older plants that we worked in, as you’re going through getting asset inventory, you open up the cabinet, you kind of look for what is digital, what are the blinky lights, and you go through and that is generally a manual way that we did a lot of asset inventory.

In these newer plants, you open the racks and everything inside is digital. Everything inside could be considered an attack pathway. And there were some discussions, and there's some thought process out there, that essentially calling locations critical is going to be an easier way to do it: saying this entire rack, no matter what's in it, is going to be a critical digital component is easier than trying to label and inventory all 50 or 60 devices. So that was a thought process that was considered.

But again, at the end of the day, every device was considered on a case by case basis. But it kind of gives you an idea of just the scale of how much digital equipment there are in newer plants nowadays.

Nathaniel Nelson
Andrew, I’m glad we’re getting the opportunity to talk about nuclear because it seems like a pretty relevant and highly important field.

And yet it never seems like we get a guest on who wants to talk about it. So where does nuclear stand in the panoply that is industrial security for you?

Andrew Ginter
Well, we're going to be talking mostly about asset inventory, but let's talk about nuclear for a while. I mean, Brian said a few words. In a sense, he's lived a lot of this stuff without even knowing how unusual it is.

Nuclear is an extreme. When we talk about worst case consequences of of compromise, what’s the worst case, the worst thing that can happen in a a coal-fired power plant? A boiler blows up, people die.

What’s the worst thing that can happen in a nuke? The nuclear core explodes, Chernobyl, and hundreds of square kilometers become unlivable for centuries.

Oh, that's very bad. So the consequences drive the intensity of your security program, and nukes are an extreme. I mean, the only thing I can imagine that's possibly more sensitive than nukes is, I don't know, nuclear weapons: targeting systems, launch protocols. It's just that extreme.

What does that mean for cybersecurity? Well, let’s start with physical security. In different parts of the world, there’s different rules. In a lot of the world, you need a security clearance to visit the site.

In North America, you can get tours of a site. But in a lot of places, a lot of stuff is classified. I don't have a security clearance. I've never seen network diagrams for a nuclear site. I'm guessing a bunch of this stuff is classified. It's national secrets. It's that intense.

On the cybersecurity side, again, I talk to people. We serve nuclear customers at Waterfall, and they do things that seem extreme.

They might have all of their OT systems in one room, in one building, and all of their IT systems, all their IT servers, email servers and whatnot, somewhere else. They do have IT networks in nuclear plants: you need to schedule work crews, you've got to pay your people.

So they have IT and OT networks. And all of the IT servers are in a different room in a different building. Why? Because they cannot afford for someone, any time, someday, to make a mistake and plug a cable from an IT network into an OT asset. That's completely unacceptable, cybersecurity-wise.

And so they physically separate it so that, as much as possible, these kinds of errors are impossible. You can't do it. You can't plug in the wrong cable; it's in a different building.

Another example: you might imagine that there would be multiple security levels. You might imagine that the technology that controls the core, the control rods that keep the core from exploding, is more sensitive than the OT systems that control the steam turbines. I mean, a coal-fired power plant has steam turbines.

Steam turbines are steam turbines, you'd imagine. In fact, when I talk to these people, a lot of nuclear sites, in my understanding, have only two security levels: absolutely highest critical, and business, and nothing in between.

Again, why? Why would the steam turbines be protected to the same degree as the core control system? In part, it's because of the physics of these systems: there are direct physical connections, the liquid from the core heats up the liquid in the steam loop. And so there's theoretically a risk that something happening to the steam turbines could leak back into the core.

But more fundamentally, these people just say: we cannot afford to make mistakes with security. So we're going to dumb it down. We're not going to have seven or eight or 13 security levels, where you have to remember which is which and apply the right policies to the right equipment.

Everything is going to be absolutely critical, end of story. Whichever room you're in, that's the policy you apply. Again, as much as possible, they eliminate human error.

Regulations. I'm most familiar with the North American regulations. NERC CIP handles the power grid. If you fail to live up to your obligations under NERC CIP, what happens? You can be fined as much as a million dollars a day.

It's never been levied, but you can get fined. With the nukes, if they fail to live up to their regulations, they're shut down. They lose their license to operate. It's that simple. If you cannot operate safely, you cannot operate. Bang, you're down. So again, intense attention is paid to the details of cybersecurity and cybersecurity regulations.

Another example. I'm not aware of any nuclear generator, and mind you, I don't know all the generators in the world, that has any kind of OT remote access, period.

Nothing remote gets into OT. You want to touch OT, you walk over to the server room. So again, intense. In a sense, though, what I see of the nukes is that they are leaders in the cybersecurity field.

They do things extremely intensely. And as other parts of the field, other power plants, other refineries, other high-consequence sites, watch the threat environment continue worsening and cyberattacks keep getting more sophisticated, they look over at what nuclear is doing, and they pull one technique after another out of the nuclear arsenal and start applying it in their circumstance. So even if you're not required to follow the nuclear rules, I would encourage people to read NEI, the Nuclear Energy Institute, standard 08-09, or the NRC, Nuclear Regulatory Commission, Regulatory Guide 5.71.

I'd actually recommend NEI 08-09. It's more readable; it's got more examples. NRC 5.71 is more terse, saying: here's the regulation, follow it. But they are leaders in the space, and over time I see people drawing on their expertise and the way they do things.

Andrew Ginter
And our topic is asset inventory. So, we're talking about how much automation there is; we're talking about how hard it is to count. Can we back up a minute?

In principle, the truism is you cannot defend what you don’t know you have.

And so that’s why we do inventory. Is that it or is there more to it? Why are we doing these inventories? What good is an asset inventory?

Brian Derrico
So it's a great question, and I'm going to give two answers, right? The first answer, on the nuclear side, is: we have to. And sometimes that is an answer. I don't think it's a good one, but it is an answer. We do have regulatory compliance requirements around an asset inventory because, to your point, it does fuel other aspects of your cyber programs, such as supply chain, vulnerability management, configuration management, et cetera.

The flip side is it’s just, it’s a smart thing to do, right? You can’t build a vulnerability management program if you don’t know what software is out there that you’re potentially vulnerable to.

So trying to build a vulnerability management program when you don't know what's out there is a fool's errand, because you're never going to be able to understand your total risk.

And that’s really the key is understanding your assets gives you the ability to understand your attack surface. And once you understand your attack surface, you can then figure out what are my vulnerabilities? What do I need to mitigate? What is a possible threat vector an adversary could use to attack this device or this process?

And you can’t do any of that without having the asset inventory first.

Andrew Ginter
This brings us back to our topic. We're talking about asset inventory, and we're talking about tools. There are tools out there to do asset inventory; we don't have to do a manual walk-down and count the blinky lights in the cabinets.

Do the tools not solve the problem? Is there still a problem when you've deployed one of these tools?

Brian Derrico
So there are a number of tools that do this, and some are better than others, right? Nature of the beast. But they do a great job of asset inventory. I currently do professional services for a software company, and a lot of their deployments in the OT space are generally for people that want to use the tool as their asset inventory.

Now, a couple of issues can come up often, and I saw this in nuclear all the time. A lot of the tools that we're talking about depend on network traffic. They're looking at source and destination, and they're passively trying to piece together: these are the assets on your network, and this is what they do and how they do it. So one problem is going to be that you have assets that are not networked. If you have safety-critical devices, they may be isolated, so you're not going to be able to deploy a tool to cover those. You are going to have to manually enter those and manually keep track of them in some way, shape, or form.

And the second piece is that a lot of these tools can't just be deployed instantly. You can't just throw a box in a rack and call it macaroni. There are architectural changes that have to happen to your network. You have to get traffic from switches. You have to open span ports. You have to deploy sensors.

And that's where things can get a little difficult on the OT side of the house.
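Brian's description of passive tools, piecing assets together from source and destination traffic, can be sketched as a toy model. This is not how any particular product works; the flow records and port interpretations below are invented for illustration.

```python
from collections import defaultdict

def build_inventory(flows):
    """Derive a crude asset list from observed network flows.
    Each flow is (src_ip, dst_ip, dst_port); the server ports an
    asset listens on hint at what it does (e.g. 502 = Modbus)."""
    assets = defaultdict(set)
    for src, dst, port in flows:
        assets[src]            # talker: recorded even with no server ports
        assets[dst].add(port)  # listener: record the service port
    return dict(assets)

# Invented sample traffic.
flows = [
    ("10.1.1.5", "10.1.1.20", 502),   # HMI polling a PLC over Modbus
    ("10.1.1.5", "10.1.1.21", 502),
    ("10.1.1.8", "10.1.1.5", 3389),   # RDP into the HMI
]
inventory = build_inventory(flows)
```

Note what the model cannot see: an isolated safety system never generates flows, so it never appears in the result. That is exactly the gap Brian describes, and why the non-networked devices still have to be entered and tracked manually.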

Andrew Ginter
So work with me. Modern switches, any kind of managed switch, have a span port or a mirror port.

You log into the switch, you turn on mirroring, and off you go. You can start seeing the traffic, and a lot of these asset inventory tools can start figuring out what the assets are based on their traffic.

I get that some systems are not on the network, the safety systems; that makes sense. But is it more complicated than that? I imagine you're working with some older systems, older switches. Or do any of these plants use non-managed switches?

Brian Derrico
So I’m sure there are some non-managed switches out there. I would not be surprised if there are some hubs that are still out there and kicking.

While in theory, yes, opening up a span port is a simple idea, where it becomes difficult is that in a lot of these OT environments, nobody wants to change the system without the vendor's involvement, because everybody is scared about the consequences. Again, this isn't an IT system, this is an OT system. There could be huge process changes and huge impacts and risk if whatever you want to do doesn't go according to plan.

And that's where I have seen the most struggle come from. You want to get a span port, you reach out to the vendor and say, hey, this is what we're looking to do, we just want to span this traffic, and the vendors don't want to budge.

The vendor hasn't deployed that. They don't know what that's going to look like. They tell you, hey, we're going to have to re-FAT the entire system after making this change. Now, meanwhile, is there going to be an impact?

No. We can look at switch utilization and see: even if we double it, you're not going to see a huge impact, because your switch is only at five or ten percent utilization.

But there just isn't an understanding on the vendor side. So for some of these big control system vendors, it becomes difficult for them to bless these changes, as it were. And that's where we have seen the most struggle.

And we even had projects where we had to provide a lot of the testing ourselves, and lay out exactly what needed to happen, because the vendor just didn't have the knowledge.

And I think as time goes on, for those control system vendors that are out there, that's going to be more and more of an issue, because more and more of their deployments are going to have a requirement for some form of higher detection capability. We can't just say, these things are in an OT environment, they're safe. That's just not the case, right? There needs to be a higher level of detection, and the vendors need to be more willing to work with us. As time goes on, I think it'll be easier, but retrofitting this sort of technology into existing systems becomes increasingly difficult, because nobody wants to touch a system that isn't broken.

Andrew Ginter
A couple of quick points there. Brian used a couple of acronyms people might not recognize. He said you might have to re-FAT the entire system. What's that? FAT is factory acceptance test.

It's: set everything up and test every function of the system, emergency recovery included, and make sure that it meets the requirements that were laid out when you issued the contract to get the system built.

Typically it takes days, and you have to shut the plant down to do it. So nobody wants to re-FAT anything. That's what the vendors are threatening, saying, well, if you make a change that we haven't tested, we have to retest it, don't we?

Another point he made was about bandwidth. For anyone who's not real familiar with how mirror or span ports work: you've got a switch with, I don't know, 24 ports on it, 48 ports.

It has to be a managed switch. You log into the switch with a username and password and you can configure the switch. One of the things you can configure is what's called a mirror port or a span port.

It's a port, or multiple ports, where you send copies of traffic. So typically, if you're going to do an asset inventory, you configure one port and say: every message that anybody sends to anybody else on the system, send a copy of that message out this port.

And now the asset inventory system can look at the messages and say, oh, there are IP addresses in use, I wonder what kind of machine this is, it's using this TCP port number. It figures out what kind of stuff is on the network based on the network traffic, and the mirror port gives you that traffic.
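The traffic-based inference Andrew describes can be illustrated with a toy classifier: tally the endpoints seen on the mirror port and guess device roles from well-known OT protocol ports. Real inventory products parse the protocols themselves; the port numbers below are common defaults, and the code is a simplified sketch, not any vendor's algorithm:

```python
# Common default TCP ports for a few OT protocols (assumed defaults in use)
OT_PORTS = {502: "Modbus/TCP", 102: "Siemens S7", 44818: "EtherNet/IP",
            20000: "DNP3"}

def infer_assets(flows):
    """flows: iterable of (src_ip, dst_ip, dst_port) tuples as observed
    on a mirror port. Returns {ip: set of role guesses}."""
    assets = {}
    for src, dst, port in flows:
        assets.setdefault(src, set())
        assets.setdefault(dst, set())
        if port in OT_PORTS:
            # The destination answers on the protocol port, so it is
            # probably the device; the source is probably the poller.
            assets[dst].add(OT_PORTS[port] + " server")
            assets[src].add(OT_PORTS[port] + " client")
    return assets

inventory = infer_assets([("10.0.0.5", "10.0.0.9", 502),
                          ("10.0.0.5", "10.0.0.10", 102)])
# 10.0.0.9 looks like a Modbus/TCP device polled by 10.0.0.5
```

This also shows why isolated, non-networked devices never appear: no flows, no entry, which is exactly the manual-inventory gap Brian raised earlier.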

And the throughput consideration is, and I'm not an expert on switches, I assumed that modern switches have ports, 24 ports out the front, and every message that comes in goes onto a backplane. It's a very high-speed backplane.

And I thought that the message went to every one of the other ports, and the ports decided, do I send this out or not? So it would go to the mirror port as well. That's what I assumed. And so turning on the mirror port would not, in fact, increase the amount of traffic on the backplane, because every message is already visible to every port.

But, and I didn't get clarification from Brian on this, it sounds like on at least some of the switches he's dealing with, if you enable the mirror port, then the source port does the copying. If port A is sending a message to port B, it first puts the message on the backplane addressed to port B, and then a second time puts the same message on the backplane addressed to the mirror port, because it's been configured to send everything to the mirror port. And that would tend to double the amount of traffic on the backplane.

But these backplanes are massively high speed, because they have to support all 24 ports simultaneously. So he's saying, look, your average backplane is barely loaded, and doubling the load is immaterial.
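That utilization argument is simple arithmetic, sketched below with illustrative numbers: a 24-port gigabit switch with a backplane sized for line rate on every port. Real backplane capacities and mirroring implementations vary by model, so this is a worst-case model, not a measurement:

```python
def backplane_utilization(n_ports: int, port_gbps: float,
                          offered_gbps: float, mirrored: bool) -> float:
    """Percent utilization of a non-blocking backplane sized to carry
    every port at line rate. Mirroring is modeled as a second copy of
    each frame crossing the backplane (the worst case discussed above)."""
    capacity_gbps = n_ports * port_gbps      # e.g. 24 x 1 Gbps = 24 Gbps
    load_gbps = offered_gbps * (2 if mirrored else 1)
    return 100.0 * load_gbps / capacity_gbps

# A lightly loaded switch: 1.2 Gbps of process traffic on a 24 Gbps backplane
before = backplane_utilization(24, 1.0, 1.2, mirrored=False)  # about 5 %
after = backplane_utilization(24, 1.0, 1.2, mirrored=True)    # about 10 %
```

Even in the worst case, mirroring takes a five percent backplane load to ten percent, which is the "immaterial" doubling both speakers describe.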

What he did not say was that configuring the switch causes the switch to malfunction. I would imagine ancient switches, ones that were around at the beginning of the concept of mirror ports and span ports, might have defects in their software, so that if you turn on the mirror port, the switch might malfunction. But he didn't say that, and I forgot to ask him. The fact that he didn't say it tells me he's never run into it, or he would have mentioned it. So I'm putting words in his mouth there, but I'm guessing that's not so much a concern. The concern is throughput. The concern is testing. People worry about

things working the way they're supposed to if you make a change that has not been anticipated. This is the essence of the engineering change control discipline that is, again, used intensely at nuclear sites, and used, maybe just a little less intensely, at other critical infrastructure sites.

Andrew Ginter
So work with me. In the modern day, you're saying the control system vendors don't get asset inventory. I mean, span ports, mirror ports, they're also used for intrusion detection systems.

This is what Dragos uses. This is what Nozomi uses. The six pillars of the NIST cybersecurity framework include detect, respond, recover. You've got to be able to look at what's happening on the hosts. You've got to be able to look at what's happening on the networks.

Really? The vendors in the modern day don't get this?

Brian Derrico
Credit where it's due, some do get it better than others.

However, there have been some vendors we've worked with that did not want to make any changes, because they just wanted to give us the same system that they gave us 20 years ago, with one version higher than what we deployed, again, decades in the past.

And when pressed, while the people on the vendor side are experts in what they are doing, they are experts in safety design, they are experts in PLCs and how all of these things talk together,

they're not IT people. So when you start talking about, hey, I want to open up a span port, it's different. They don't understand; they think it's going to cause an impact to the system. Meanwhile, as people with an IT background, we can see that, hey, you're using managed switches, you can enable a span port.

The inputs are 100 meg. Even if all of your PLCs are completely maxing that throughput, the backplane of the switch is going to be nowhere near full utilization, and even doubling that, you're not going to see a performance decrease. It just takes a long time to get the vendors on board, and again, we even offered to do some testing and show what the utilization changes were.

And we have seen that again; some vendors are better than others. But I feel like at the end of the day, it's: we just want to give you the same system that you've already had, and making changes to that is scary.

And: we're an isolated system, so we don't need to deploy a lot of that technology, because we're just going to stay isolated and not connected to anything. And the reality is, that isn't as effective either, because while you lose the network attack path, you still have several others, such as physical access, supply chain and portable media.

So having detection capability is, in my opinion, worth the risk of plugging that thing in, as long as you have a sound architecture. And that's where some of the struggles begin, with changing that mindset on the vendor side.

So for example, some of the control system vendors, where there are workstations and that sort of thing, understand that, yes, there are detection pieces. You're going to deploy some level of network intrusion detection.

You’re going to deploy some level of SIEM agent, right?

So I need to send syslog, and we've had good luck there, again, with particular vendors. Some vendors will actually include a security suite with their control system.

So they will have their own HIDS, their NIDS, their SIEM, and that's all included. They have a patching server that distributes Microsoft quick fixes and all that stuff. It's great.
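The syslog forwarding Brian mentions is a simple protocol. As a sketch, the RFC 3164 priority value encodes facility and severity into one number, and Python's standard library can forward events to a collector directly; the logger name and collector address below are placeholders, not anything from the systems discussed:

```python
import logging
import logging.handlers

def syslog_pri(facility: int, severity: int) -> int:
    """RFC 3164 priority value: facility * 8 + severity.
    e.g. facility 16 (local0) with severity 4 (warning) gives 132,
    transmitted on the wire as <132>."""
    return facility * 8 + severity

# Forward this host's security events to a SIEM collector over UDP/514.
# 127.0.0.1 is a stand-in for the real collector address.
logger = logging.getLogger("ot-host")
logger.addHandler(logging.handlers.SysLogHandler(
    address=("127.0.0.1", 514),
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0))
# logger.warning("unexpected code download on PLC-7")
```

In practice the vendor's workstations would be configured to emit syslog natively rather than through a script like this; the point is only that the wire format is trivial for a SIEM to consume.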

However, when you get to that lower level, your PLC-type stuff, again, we were working with a PLC vendor and they would not budge. They did not want to change their design.

They thought there would be a loss in communication timing through the switch, which would affect the safety-related aspect of the design, and they did not want to budge.

And it took two years of working with them for them to understand that we have requirements. When the programs were implemented, specifically across nuclear, it was understood that you're not going to go in and bolt this stuff onto existing systems. But when you're starting fresh, when you're building a system from the ground up, it has to have all of these components. There is no longer an excuse to say, oh, it's already working, we're not going to go play around with it, that's obviously going to cause issues.

Everything has to be baked in from the ground up. The cybersecurity piece has to be foundational. And again, with one particular PLC vendor, we found it very difficult

to get that through. It took a number of people trying to walk their PLC engineers through why this is: we promise, and here's some data to back it up.

And they finally did agree to use the architecture that we had specified from a design perspective.

Andrew Ginter
So we sweat blood, we fight with the vendors, we get our asset inventory system deployed, we augment it with manual inventory for the air-gapped or isolated networks, and we use it for managing patches and vulnerabilities.

Is there anything else we use it for?

Brian Derrico
Absolutely. To your point, vulnerability management's a big one, right? Because I think at the end of the day, your asset inventory is going to give you what your risk profile is, what your attack surface is.

Vulnerabilities are one part of that. Another piece of it is supply chain, right? We talked about that a little earlier: being able to understand what are the important devices that I am going to procure, and procure those with certain sets of requirements. That's also critical.

Another thing that we would use it for is configuration management, so understanding what your configuration is. You can build tools, or use tools, that tell you: this is the configuration on the device.

And some of those tools out there, some of those network intrusion systems that are OT-centric, can also give you alerts and an understanding when changes happen. You have a code download to a PLC.

Is that expected? And then also: this is the running code of that PLC, and this is what changed. You would have visibility into all of that, again, all based on your asset inventory and having as much information as you can about those assets.
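The change detection Brian describes amounts to comparing the running program against an approved baseline per asset. A minimal sketch, with byte strings standing in for real PLC program images (extracting the actual image is vendor-specific, and real products track far more than a hash):

```python
import hashlib

def fingerprint(program_image: bytes) -> str:
    """Stable fingerprint of a device's running program or configuration."""
    return hashlib.sha256(program_image).hexdigest()

def detect_changes(baseline: dict[str, str],
                   current: dict[str, bytes]) -> list[str]:
    """Compare each asset's running program against the approved baseline.
    Returns asset names whose running code changed or was never baselined."""
    alerts = []
    for asset, image in current.items():
        if baseline.get(asset) != fingerprint(image):
            alerts.append(asset)   # unexpected code download? investigate
    return alerts

baseline = {"PLC-1": fingerprint(b"ladder-v1"),
            "PLC-2": fingerprint(b"ladder-v7")}
alerts = detect_changes(baseline, {"PLC-1": b"ladder-v1",
                                   "PLC-2": b"ladder-v8"})
# alerts == ["PLC-2"]
```

The asset inventory supplies the key space here: you can only baseline and monitor the devices you know exist, which is Brian's larger point.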

Andrew Ginter
And if we could bring it into the modern world: the latest automation systems have a lot of devices, and asset inventory counts them. This is great.

But there's a lot more we need to do with the information. You've talked about patching. We've had people on the show talking about SBOMs, software bills of materials, keeping track of embedded software when vulnerabilities are announced.

Is there automation for tracking SBOMs and vulnerabilities and doing the mechanics of patching? Arguably, counting the assets is the easiest part of managing the inventory.

Is there more that we can expect of modern tools?

Brian Derrico
I think there is. And vulnerability management is always going to be one of the most difficult things to conquer, because if you don't have an updated software inventory, you're never going to know what's out there. You can do all the Windows patches in the world, but there are obviously tens of thousands of non-Windows vulnerabilities, where if you're running, again, insert whatever software product, there are huge vulnerabilities around a lot of those. So can you automate it?

I think it comes down to: you can automate the visibility. So you can at least understand, and have up-to-date dashboards of, these are the devices that you need to worry about. This particular device has five critical vulnerabilities. And that gives your internal cyber engineers something to go after, to mitigate, to overall reduce that risk.
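Automating the visibility Brian describes is essentially a join between the software inventory and a vulnerability feed. A toy version, with made-up product names and CVE records rather than a live NVD query:

```python
def critical_vulns(inventory, vuln_feed, threshold=9.0):
    """inventory: {asset: [product strings]}.
    vuln_feed: {product: [(cve_id, cvss_score), ...]}.
    Returns {asset: count of vulns at or above the CVSS threshold},
    omitting assets with nothing to report."""
    dashboard = {}
    for asset, products in inventory.items():
        count = sum(1 for product in products
                    for _cve, score in vuln_feed.get(product, [])
                    if score >= threshold)
        if count:
            dashboard[asset] = count
    return dashboard

# Hypothetical inventory and feed entries, for illustration only
inventory = {"HMI-1": ["acme-scada 2.1"], "PLC-1": ["acme-firmware 1.0"]}
feed = {"acme-scada 2.1": [("CVE-0000-0001", 9.8), ("CVE-0000-0002", 5.3)]}
# critical_vulns(inventory, feed) == {"HMI-1": 1}
```

Everything downstream of this, what to patch, what to accept, what to shield with other controls, is the business decision Brian turns to next.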

I also think it's important from a business perspective to understand what we are going to do, right? On the IT side, there are a lot of patching processes, and there are SLAs associated with whether the vulnerability is critical, high, medium, low, et cetera.

On the OT side in general, OT is very averse to patching and mitigation. And I agree with that in some senses, and I don't agree with it in others. I think as a business, you need to understand: what is your tolerance for that risk? What are you willing to accept?

And are there areas where, yes, we're comfortable not patching, because we have all these controls in place, and in order to get to the device there are guns, gates, and guards in the middle of it?

But hey, maybe if something really, really big comes out, we are going to take care of it. So I don't think there is a way to fully automate it, but you can at least automate the visibility.

So you don't have people manually searching the NVD with a software list that they don't even know is accurate. You can get that part out of the way; there are tools out there that will help you. And then it becomes a business decision, and a business process, around: with all that information, here is your overall risk profile. What are you going to do about it?

And that becomes the deeper discussion, again, around what specifically the business is, how much risk tolerance you have, how much risk avoidance you want to have, and kind of go from there.

Andrew Ginter
Well, Brian, thank you so much for joining us today. Before I let you go, can I ask you, can you sum up for our listeners? What should we take away in in terms of what we’re doing with asset inventory?

Brian Derrico
Absolutely. I would say asset inventory is the most important part of your program, because if you don't know what assets are out there, you're never going to be able to protect your organization from somebody who maybe knows what's out there when you don't.

So asset inventory is critical. You cannot build upon your internal program without understanding what your attack surface is. I think another point is there are tools to help you.

This is not something that we need to do manually anymore. You do not have to go into cabinets and count every single blinky light. There are tools and products out there that will help us get closer to where we want to be.

And then at the end of the day, you still need an internal team that understands the information coming back. So if you do need help in deploying these tools, or selecting tools, or understanding what the risk is, I'd be happy to help.

You can connect with me on LinkedIn, Brian Derrico, I think I'm the only one. And I can help you with those problems, because again, once we conquer assets and get the tools in place, a lot of pieces of the program become a lot easier.

And my goal, and what I love, is driving efficiency. So let's automate, automate, automate, use tools to help us see what we can, and do what we can to protect critical infrastructure.

Nathaniel Nelson
Andrew, that just about concludes your interview with Brian. Do you have any final thoughts about what he talked about there that you can leave our listeners with?

Andrew Ginter
I think what I took away from here is the importance of inventory and the need for automation. If a modern nuclear generator has 10,000-plus devices in it with CPUs that have to be managed, with software that has to be managed, well, I don't know that a nuclear generator is that much more heavily instrumented than the average industrial installation. If you buy a steam turbine, a modern turbine is heavily instrumented. If you buy any kind of physical equipment, it's going to be heavily instrumented. A modern automobile has a great many CPUs in it.

And that's something that fits in your living room. We're talking about massive installations. I would imagine that a big refinery has as many as 100,000-plus devices if it's been upgraded recently.

When was the last time you tried to manage a spreadsheet with 10,000 rows in it? When was the last time you tried to manage a spreadsheet with 100,000 rows in it? Just manually counting the blinking lights takes a long time.

Automation, to me, is essential. You look at the NIST cybersecurity framework, sort of the grand compendium of everything that is cyber. What's the first thing you do? Well, the first thing you do is figure out who's responsible for the program and assign budget and responsibility.

What's the second thing you do? You take asset inventory. You've got to understand what you're protecting. So this all makes sense: you need the inventory, and in the modern world, you need automation. There's no way you can do this manually anymore. So my thanks to Brian Derrico; I learned something here.

Nathaniel Nelson
Yes, our thanks to Brian and Andrew, as always, thank you for speaking with me.

Andrew Ginter
It's always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.

Stay up to date

Subscribe to our blog and receive insights straight to your inbox

The post Experience & Challenges Using Asset Inventory Tools – Episode 138 appeared first on Waterfall Security Solutions.

]]>
Insights into Nation State Threats – Podcast Episode 134 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/insights-into-nation-state-threats-episode-134/ Wed, 18 Dec 2024 11:04:48 +0000 https://waterfall-security.com/?p=29619 Nation state threats are often portrayed as the "irresistible forces" of cyber threats, with little qualification. Joseph Price of Deloitte joins us to dig deeper - what are nation states capable of, what are they up to, and how should we interpret the information that is available to the public?

The post Insights into Nation State Threats – Podcast Episode 134 appeared first on Waterfall Security Solutions.

]]>

Insights into Nation State Threats – Podcast Episode 134

Nation state threats are often portrayed as the "irresistible forces" of cyber threats, with little qualification. Joseph Price of Deloitte joins us to dig deeper - what are nation states capable of, what are they up to, and how should we interpret the information that is available to the public?

For more episodes, follow us on:

Share this podcast:

“…We can’t just sit idly by and say…’well, the worst thing we’ve seen is XYZ’…That does not necessarily mean that’s the limit to the imagination and capability of nation states…”

                                              -Joseph Price

About Joseph Price

Joseph Price is a seasoned cybersecurity professional with over 26 years of experience spanning leadership, strategic operations, program management, software and hardware product development, offensive and defensive cyber operations planning and execution, threat hunting, and incident response in both IT and ICS/SCADA environments. He is currently a Senior Manager/Specialist Leader at Deloitte in Idaho Falls, Idaho, where he focuses on delivering value to government and public service customers in ICS/OT cybersecurity to make the world safer and more resilient. He leads a team of professionals in providing products and services to protect and defend ICS/OT/IoT/IIoT systems across various industries, helping organizations manage and mitigate risk.

Prior to joining Deloitte, Joseph held various leadership roles at Idaho National Laboratory, including Manager of Advanced Programs, Deputy Director of the Critical Infrastructure Protection Division, and Program Manager for Cyber Security R&D. He has also served in the U.S. Air Force, notably as Chief of Weapons and Tactics for the 67th Information Operations Wing and Flight Commander of the 33rd Information Operations Squadron.

About Deloitte

Deloitte is one of the “Big Four” accounting firms and a global leader in professional services, offering expertise in audit, consulting, tax, and advisory services. Deloitte Cyber Risk specializes in areas such as cyber strategy, threat intelligence, risk management, incident response, and managed security services. By leveraging advanced technologies like artificial intelligence, machine learning, and cloud security solutions, Deloitte empowers clients to proactively identify vulnerabilities, mitigate threats, and recover swiftly from cyber incidents.

Transcript of Insights into Nation State Threats | Episode 134

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome listeners to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today. Andrew, how are you?

Andrew Ginter
I'm very well, thank you, Nate. Our guest today is Joseph Price. He is a senior manager and the program lead for the OT cybersecurity program at Deloitte. And our topic is nation states, more or less. The word credibility comes to mind. How worried should we be? How likely is the average site to be the target of a nation-state-grade attack? This is the kind of thing that Joseph is an expert on.

Nathaniel Nelson
Then without further ado, here’s your interview with Joseph.

Andrew Ginter
Hello, Joseph, and welcome to the podcast. Before we get started, can I ask you to say a few words of introduction? Please tell us a bit about your background and about the good work that you’re doing at Deloitte.

Joseph Price
Sure. Thank you very much, Andrew, for having me on. I've followed you, and it's exciting to be a part of your podcast, so thanks for this opportunity. My name is Joseph Price, I go by Joseph, and I'm zeroing in on about 30 years of being in cyber. I started back in the mid-90s with what we called information warfare; we didn't use the term cyber back then. As an active duty military officer in the Air Force, I spent about four years defending networks in various places around the world, and then I switched over to the offensive cyber side of things. I don't get to talk a lot about that, obviously, because details are not things we can discuss openly. But I will tell you this: the one thing about spending 16 years in that community is that I didn't just learn how we conduct offensive operations, but how other nations and other groups and organizations can conduct offensive operations, and really what they can do, whether we've seen it mentioned in the news or not. So I enjoyed about 20 years total working for the Department of Defense in various capacities.

After that, I moved here to Idaho Falls, Idaho, where I now live. I joined Idaho National Laboratory and was the deputy director for critical infrastructure protection there. Then three years ago, I shifted over to Deloitte & Touche, or just Deloitte if you prefer. I'm a senior manager there and the program lead for our OT cybersecurity program. I help develop our capabilities and service offerings and deliver them to our clients who have OT systems, to help them secure and protect those systems and create more resilient architectures supporting them. So that's where I focus now, and it's a pleasure to be here.

Andrew Ginter
And the world needs more OT security, so thanks for that. Nation states is our topic, and we read about nation-state threats in the news. I work for a vendor; I go to a lot of these face-to-face conferences and I hear a lot of vendor pitches. A lot of vendors get up there and wave the nation-state threat flag: fear, uncertainty and doubt, the sky is falling, the sky is falling, we're all going to die. And yet, here we are. You being on the inside, without stepping on anything you're not allowed to tell us: how accurate is the news? What's really going on behind the scenes? How worried should we be?

Joseph Price
That's a great question. I think in the absence of details and information, a lot of times people just make presumptions about what a nation state might do. In terms of capability, nation states don't tend to be just opportunistic. There's a certain amount of opportunism in any campaign, but they're not just saying, oh, let's see what we can find; often actions are deliberate. Now, the problem we have is we don't necessarily know what they might target. We might talk about a few examples or ideas around things we've seen recently in the news, but for the most part, it's a deliberate activity. Nation states have the resources, they have access to talent, they have the patience to do things. So in many ways we might conclude that they're ten feet tall and bulletproof. Now, that's not entirely true, but I think we are fooling ourselves to think that the best capability out there is some closely related version of what we've seen in the news when a particular operation was exposed.

I think that capabilities are really only limited by imagination and one's dedication to a particular operation or operational objective. And so I tell people, yes, nation states are highly capable. A lot of people say, well, do I have to worry about them targeting me? Well, that depends. But I would say that, on the whole, operational technology systems are more attractive for targeting for military or diplomatic purposes than IT systems, or I should say they're attractive for a different reason. And that's, as all of us who have tried to defend them know, that impacts from the cyber domain can manifest themselves in the physical domain. So if you think about it, you can achieve military goals, which may be to cause some destruction or to impact the availability of some critical resource, all through the cyber domain. And so I believe there's a lot of capability and a lot of emphasis and focus out there, and so we can't just sit idly by and say, oh well, the worst thing we've seen is XYZ, Ukraine, they flipped a few breakers. That does not necessarily mean that's the limit of the imagination and capability of nation states at this time.

Nathaniel Nelson
um Andrew, to get us started here, we’re talking about nation-state APTs. It could sound like it’s all one thing, but in reality, we’re talking about a wide tapestry of different threat actors from different places with different motivations. Which are the ones that we are most interested in in this podcast today?

Andrew Ginter
There's a lot of different capabilities out there, and this is not comprehensive, but maybe just to give people a taste of what's possible, let me cover off maybe a half dozen of the threat actors and the different ways they approach nation-state-grade attacks. Starting at the low end, Iran is accused of sponsoring hacktivist groups. Most recently, they targeted some internet-exposed PLCs made by an Israeli manufacturer, and disabled water distribution in a small town in Ireland, doing this through low-tech, low-investment targeting of internet-exposed assets. North Korea has more sophisticated professionals that are paid every day; the hacktivists aren't paid, they're amateurs.

Andrew Ginter
The professionals are paid every day to attack things, and mostly what they do is ransomware, because this is how the sanctioned regime makes a lot of its foreign currency: stealing it in ransomware attacks. So they've got some very sophisticated ransomware groups. China is credited with bringing nation-state-grade cyber attacks to the forefront. Back in the day, in 2006, 2007, the DHS at the time put out alerts about advanced persistent threats; that was code for Chinese intelligence agencies.

And they pioneered the public use of what's now the classic remote access trojan, or remote access targeted attack, where you get a foothold on a network. You install a RAT, a remote access trojan, a piece of malware; it calls to a command and control center on the internet, and you operate that malware by remote control. You use it to attack other machines on the compromised network. You spread the RAT to other machines, you might spread different versions of the RAT in case your first version is found out, and you establish a persistent presence. The very latest there is Volt Typhoon, where there isn't even a RAT anymore. They're using the facilities in the operating system to maintain remote control, and it is extremely difficult to detect that the remote control is there.

The Russians take a different approach. Historically, they've produced malware artifacts for attacks. Think BlackEnergy, which had code in it to manipulate DNP3 devices; DNP3 is a widely used protocol in the electric sector.

The latest out of Russia, or credited to Russia, I mean, none of this is officially confirmed, is Pipedream, which again is attack code with a lot of capability in it for manipulating devices in control systems, presumably maliciously. And we haven't heard much about them lately, but back in the day, I think 2010,

American and Israeli intelligence was accused, and has never officially accepted responsibility, but is widely thought to have produced Stuxnet, which is a very sophisticated artifact that, once you let it loose in a target network, just does its thing. It's autonomous: it spreads autonomously, it finds its target, it sabotages the target. It does not need remote control the way the Russian tools do, or the way the Chinese prefer to quietly, Volt Typhoon style, live off the land and remotely control systems. Stuxnet was autonomous. So this is the spectrum, from low-tech hacktivist attacks, to remote control attacks, some of which are very sophisticated, to autonomous attacks, which have historically been very sophisticated. And there's probably more that I've missed, but it's a sobering set of capabilities.

Andrew Ginter
OK. And we read about these nation states in the news. A lot of the nation-state-grade attacks that make the news are espionage: breaking into governments, breaking into nonprofits, breaking into anybody who dares to voice any opposition to a regime, breaking into these places and stealing information. You mentioned a couple of instances of sabotage, Russia breaking into Ukraine twice, causing physical power outages. I guess the question is: we hear a lot, comparatively, about espionage, not so much about sabotage. Is there sabotage happening that just isn't being reported? What's going on there?

Joseph Price
That’s a great question, Andrew. When I mentioned earlier that the activities you see in the news are not the limit of the capabilities of a nation-state-level actor, it’s important to realize these are not singular transactions, especially when you consider targeting OT systems. This is a campaign, right? It evolves over time, and sometimes our defences are good and we catch them early in the campaign. Even in the simple acts within Ukraine 2015, there were a number of circuits that were opened as part of that particular action. It started with a lot of information gathering, a lot of reconnaissance. Right after the 2015 activity, in January of 2016, we even saw that Ukrenergo, the transmission company that was later the target of the follow-on attack in December of 2016, was part of a phishing scam. And some of the particular people they targeted in that scheme were protection engineers.

So you start to put these pieces together and you realize they’re looking at the people who are responsible for the overall protection system of the transmission network. And in December of 2016, rather than throwing several breakers in several different distribution companies, they threw one breaker in a transmission company, and it was something like an order of magnitude more power lost in that one breaker trip than in all of the 2015 attack. And so you realize that there are deliberate processes going on. And sometimes, like I said, we’re lucky and we interrupt the process early. But the goal, to attack a particular OT system, let’s use the United States as an example, the goal is not: get in there, gain access, pull all the information we can, and then cause sabotage. Because when your sabotage takes place in the physical realm, the chance of reprisal, the chance of anything from a diplomatic to a military response, rises considerably.

But if you had those assets to hold at risk, if you can gain access, secure that access, and hold it at risk, you can integrate whatever sabotage or attack scenario into a suite of capabilities that you could have as part of a campaign plan. And it could be very effective too. So the adversary is going to use the most minimal force required to gain access. And if they can use something that, let’s say, is out there in the wild, but they can tell you’re not patched against it, well, sure, they’re going to use that. They’re going to use that before they go to some zero day that they know and no one else knows. Right? You’re going to be economical in your use of your various offensive crown jewels. Once they’ve gained a foothold, once they’ve secured their position, they’ll need to do additional reconnaissance to figure out: what are our options?

I always felt that Ukraine 2015 was kind of a hastily executed operation, because so many things happened at once, and then they burned all the infrastructure at the end. But if you go back and look at each individual action that was taken at each of the distribution companies, you recognize that in some cases they obviously had people that couldn’t read or understand Ukrainian, because there were messages on the screens they were remotely operating that said, this is just a test system. And yet they continued to try to do things. They opened a tiebreaker, which in general, unless you’re under some maintenance function, isn’t going to shut the power down or anything. And so, as things progressed and you get into the December 2016 event, you realize that things are more specific to the equipment that’s in use.

It’s highly targeted. They clearly had someone who knew what was going on in that system, and I think we need to recognize that a nation state adversary will understand your process. They may not understand your systems exactly, your procedures for running through things or your contingency measures, etcetera, but they’ll understand the physical process that you’re controlling, so that they can understand the effects they may have. And then they may just sit on that access and monitor it. It may only phone home once in a blue moon, because they don’t need to risk detection by having frequent and regular communications, or a massive amount of information flowing back and forth with that target. They have it, they can hold it, and they can use it for what I would say are potential military or even just diplomatic influence operations, without having to take any physical action themselves. They can do it remotely. So I think those are the reasons why they’re not necessarily going straight to sabotage. It’s not because, as I’ve seen in an article recently, “oh, they wouldn’t mess with us.” No, actually, this is exactly the way that people would mess with the United States: attacking it asymmetrically, using capabilities to cause damage, or to cause service outages, or even an uncontrolled environmental release, or to violate a safety basis and cause potential harm to humans. Those are all things that could be done from afar via the cyber domain. That’s a nice capability to have, an arrow in your quiver if you will, that nation states would want to hold on to for some future conflict.

Andrew Ginter
So the example you gave of campaigns developing capabilities, that describes Volt Typhoon to a T. But in the news lately, there’s been a lot of lesser stuff. I mean, state-sponsored Russian hacktivists are accused of, I don’t know, overflowing a water tank in Texas. Iran’s nation-state-sponsored hacktivists are accused of targeting an Israeli-made PLC that’s used in a couple of small water systems, and turning off the water to 180 people in Ireland for two days. None of this seems terribly consequential. What really is the goal here? That doesn’t sound like a campaign.

Joseph Price
It’s interesting, and I think this is again that tendency, especially within the media, to presume that what we see is the totality of the operation. And I just don’t think that’s the case. You mentioned a couple of really good examples. In fact, we had a very recent example on Monday: Arkansas City, Kansas. Its water authority was attacked. Very few details have come out. I’m very interested to hear what they find, and we’re trying to get some additional details through some contacts, because on the face of it, it just looks like, well, not only did they not really have much of an effect, the plant in Arkansas City just went into manual mode.

Similar situation with some of the examples from the Cyber Avengers, the ones you mentioned attacking water authorities and defacing the PLCs. The only place that actually caused an impact was that village in Ireland that you mentioned, and now they’re exposed. So like you said, what did they really gain from that? And my answer to that is: let’s think deeper about the campaign. The campaign ultimately has, let’s say, high value targets at the end of it. And maybe that high value target is a major municipal water system in the US, one that cannot be ignored if you were to have significant impacts. So how do you target that? And everyone might think, OK, let’s jump straight to: I’m going to learn about their systems if I can. Who are the key people? I might start phishing, etcetera. But part of you has to ask: wait a minute, if we were to get caught early in the campaign, and there were to be any reprisals, would that completely wipe that campaign opportunity off the map? Do we need to use better tools? Do we need to invest more time in a human-related operation? There’s a lot to consider. And so even starting out, you might ask: how is the US going to react when we launch an attack and cause any impact whatsoever to a water system?

Well, we need a lab environment, right? There are plenty of nation states, and I’m sure they all have labs where they go test things out. But to really measure our response, they need to do it somewhere. If you consider large metropolitan areas, New York City, Los Angeles, Philadelphia, Baltimore, those are going to get pretty big reactions pretty quickly, for sure. A lot of people will know if something is there. Well, what about Muleshoe, Texas? Probably not a large number of people are even going to know where Muleshoe, Texas is on the map. So we’re going to hit some of these smaller rural areas. Number one, it’s going to be an easier target, because these water authorities suffer from what I call STP: same three people. The same three people are responsible for making sure they have all the necessary chemicals for treatment of the water, that the water sourcing and distribution all works. They go and deal with issues. They’ve got to handle and manage the budget. They’ve got to handle the maintenance calls, the late night calls, the water main breaks, all those things. The same three people are responsible for everything, so it’s a pretty good bet they’re not going to have high end cybersecurity capabilities.

So then we’re going to take an action, and that action isn’t going to directly cause loss of life or anything major like that. They had to go into manual operation mode. Big deal, right? Of all the potential impacts, that’s probably the least. Not for those same three people, because now they’re probably a lot busier, even more so than usual. But that’s going to give us a window into: does that cross a threshold? How fervent is the US’s response, at the executive level, at the DHS and CISA level, at the state governor’s level? How do we respond as a community, as a nation, when we recognize that a foreign actor is taking action against these life-critical services that we just take for granted every day? And so I think part of the campaign is figuring out where those limits to government response are. What’s going to trip a greater response? What will those responses look like? It’s no different, in my mind, than when you have Russian bombers flying into our air defence identification zone up near Alaska. They’re not crossing into our national airspace, but they are in those areas just outside of it. And they watch with their radars and their surveillance planes: how quickly we scramble, how quickly we are able to intercept their aircraft, what tactics we use. I believe that’s also going on here. Because in the end, as I mentioned earlier, we can’t gauge our greatest adversaries’ capabilities based on what we see in the news.

I was, quite honestly, shocked in 2019 when the Director of National Intelligence published an unclassified threat assessment, and in it identified a couple of interesting facts. Number one, they named Russia and China in there, which, for those of us who have worked with the intelligence community before, wasn’t surprising; those were the potential adversaries you would expect. What was surprising is that they were saying this at the unclassified level. And it said that Russia could cause an impact to our power, whether it be generation or distribution, that could last from hours to days, and that China could impact our water systems in such a way as to last from days to weeks. Those are pretty bold statements coming out in an unclassified intelligence report. So I think there’s a recognition at other levels of the government that nation state adversaries do have a greater capability than what we might presume just by watching the media and the smaller activities.

You know, yes, they could be isolated incidents. In the case of the Cyber Avengers, they were trying to deface the HMI screens on Israeli-made equipment. OK, that might have been an isolated campaign. But for the other things, I sit there and I think: how could this be used as part of a larger, more diverse campaign, to see how we respond, to see what we put in place as a result of those attacks, and how can they use that in preparing for their higher value target operations, to have capabilities there?

Andrew Ginter
If I were to summarize, the one surprising thing that I took from the detail is the concept of a campaign. It’s not just that small water systems are easier targets, and so let’s go after them. I never really thought of these attacks as stepping stones. I really hadn’t thought of these attacks as testing our response capabilities. I mean, the one concrete example that springs to mind is, I forget, it was a few years ago, the American administration announced that attacks on critical infrastructure, civilian infrastructure, would be regarded as acts of war. Well, someone just overflowed a water tank in Texas. Did anyone declare war?

No. So, yeah, it almost does feel like the bad guys are pushing a little bit to say, well, really? When would you respond? How would you respond? This makes sense.

Nathaniel Nelson
True. And what I didn’t hear him say, that I believe is also occurring, is that nation-state APTs use one of their targets as a springboard or a relay point to another. So for example, you are targeting one major utility or telecommunications organization or whatnot: you go after a smaller target first, and then you can use that as a relay point to hide your malicious communications, among other things.

Andrew Ginter
Yeah, I mean, where I have heard of that is in supply chain, more than targeting one critical infrastructure to get into another. You tend not to have that kind of connection between a smaller water utility and a larger water utility, in my recollection, at least in North America. You might have stronger connections like that in Europe, where things tend to be closer to each other, more connected. So yeah, that’s a good point.

Andrew Ginter
So work with me. We’ve been talking about the threat, and I’m convinced that nation state threats are real. The question becomes: what do we do about them? The truism, I don’t know if it’s true, but the truism is that a nation state military essentially has unlimited money and talent and time to come after us. And when you have that coming after you, it’s hard to imagine how you could stop an attack like that. Given what you’ve said about the threat, we as defenders, from small water systems to large high speed passenger rail switching systems, what should we be doing about the threat?

Joseph Price
The challenge in answering that question is that the problem is multidimensional and multifaceted. But in general, I believe what we should be doing, first and foremost, is recognizing that this is a business risk or an operational risk, not a technical risk. So often, when you bring up the topic of a potential cyber attack, let’s say you’re talking to a CEO or a board, it’s: well, go talk to the CISO, or go talk to the CSO. Right? That’s their responsibility. But when we consider that impacts can directly hit the business, whether we’re brewing beer or providing clean drinking water to millions of citizens, the ability for cyber to create business impacts means it should get some degree of attention.

And the consideration for what should be done should not be reserved to: well, I did the minimum, I followed the checklist, I’m compliant with this standard. Because as we all know, in any standard, your interpretation, your finding for how you’ve met that standard, and the exceptions that you might apply for and get granted for that standard, all could become your own undoing.

So to start with: how do we talk about security of OT systems as a business risk? When you have attention at that level, then you start to recognize the investment that’s made in any business activity. Whether it’s bringing on new equipment, or let’s say we’re a large utility and we’re upgrading to a new energy management system, part of that capital expense is the security. And with that, we’re not trying to meet some minimum requirement. Now we’re recognizing that, just as the adversary is dynamic and can be active at different times, we need to make sure that our systems are actively monitored. That there is a responsibility, whether it’s handled organically within a given company or provider, or contracted out, or provided by some higher level organization. We talked earlier about rural water systems and the fact that you’ve got maybe the same three people responsible for everything. It’s unreasonable to go tell the community of Muleshoe, Texas, or Dubois, Idaho: hey, you have to come up with and fund your own cybersecurity expert, and oh, by the way, you’ve got to pay him or her a healthy sum, because there’s a lot of demand in the market and they’re going to command a hefty price.

But what we could look at is to say: OK, the threat to those smaller water systems is probably lower in terms of somebody trying to cause sabotage, and the resulting impact is probably lower too. If that rural community were without water for, let’s say, hours to days, there are means at certain levels of government, state, federal, etcetera, to help compensate for that temporary outage. It is a lot harder to compensate as the population served by that water system goes up, or as the demand on that water system goes up considerably. There are still challenges within agricultural areas and things like that, which rely on the water supply for growing crops, etcetera. But instead of telling every individual organization, you’re responsible for your own defence, you could give them some minimum set of requirements, or maybe even assist them in meeting some minimum safe configuration. A firewall that’s properly configured to allow business purposes but not allow unsolicited communications in from the outside. Some continuous monitoring in place, even if it’s not monitored by that particular water authority but at, say, the state level. And there are emergency response centres popping up in all states.
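The "minimum safe configuration" Joseph describes, a firewall that allows business traffic but blocks unsolicited inbound communications, amounts to a default-deny inbound policy with a short allowlist. Here is a minimal sketch of that decision logic in Python; the rule set and port numbers are hypothetical illustrations, not a recommended production configuration for any particular product:

```python
from dataclasses import dataclass

# Hypothetical allowlist of inbound services the business actually needs.
# Everything not listed here is denied by default.
ALLOWED_INBOUND = {("tcp", 443)}  # e.g. one HTTPS service, as an example

@dataclass
class Packet:
    direction: str     # "inbound" or "outbound"
    proto: str         # "tcp" or "udp"
    dst_port: int
    established: bool  # part of a connection we initiated?

def firewall_decision(pkt: Packet) -> str:
    """Default-deny inbound: allow outbound traffic, allow replies to
    connections we opened, allow an explicit short allowlist, drop the rest."""
    if pkt.direction == "outbound":
        return "allow"   # business traffic we initiate
    if pkt.established:
        return "allow"   # replies to connections we opened
    if (pkt.proto, pkt.dst_port) in ALLOWED_INBOUND:
        return "allow"   # explicitly permitted inbound service
    return "drop"        # unsolicited inbound is denied

# An unsolicited inbound RDP attempt is dropped; outbound web traffic passes.
print(firewall_decision(Packet("inbound", "tcp", 3389, False)))  # drop
print(firewall_decision(Packet("outbound", "tcp", 80, False)))   # allow
```

The point of the sketch is the ordering: outbound and established traffic first, then a small explicit allowlist, then a deny-everything default, which is the posture that keeps "same three people" utilities reachable for business without being reachable by strangers.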

Joseph Price
And being able to handle different incidents, right? Some sort of incident management or incident response capability at the state level, and maybe you bring it up there. I’ve always said, when I look at the state of Idaho, we have three kind of population centres: Idaho Falls and Pocatello, where I live, on the southeastern side; the capital city of Boise on the southwest side; and then the town of Coeur d’Alene, not that far from Spokane, Washington, up in the northern end of the Panhandle. So you might be able to attract some talent to those population centres and have a regional security operations centre for, let’s say, the water sector. When we pivot over to power, now you’re talking about regulated utilities. You have NERC CIP, and certainly a lot more investment in what is being done right now to set the bar, to begin with, for regulated utilities. You also have private owner-operators, right? You have companies that might have a little more bandwidth, if you will, within the budget to do things, and so you might require more self-sufficiency in that kind of scenario. Because in the end, what you don’t want to do is pass all of these costs on to the consumer. I think we all probably pay for it one way or another, but you don’t want to suddenly triple somebody’s water bill or their power bill and say, oh well, we have to do this particular cyber thing because we have these requirements.

You want to look at: how can I pool resources, and use, where it makes sense, other sources of funding and support for those activities where it’s just not feasible to bring in the talent or the capability and run it organically within that organization? Then we start to expand to the federal level and ask: what’s the federal government’s responsibility? Now, to be clear, I’m not speaking on behalf of my company, or the Department of Defense, my former employer, or anyone like that. But I did notice that recently Jen Easterly, the director of CISA, started talking about pushing responsibility for vulnerabilities onto the vendors themselves, software or hardware. So that is one tack that can be taken: spreading the responsibility around the equipment and software manufacturers, in addition to requiring the owners and operators to provide some level of protection, in addition to looking for communities of interest that might be able to come together and assist in providing active monitoring where

It’s just not feasible to have the organic capabilities. So those are some of the ways I think we get off the dime and stop thinking of this as just an issue of checklist security. No, we need to move beyond that: we need someone actively monitoring our systems, and we need to be able to share that information. We’ve got a great model: we’ve got information sharing and analysis centres, ISACs, out there. Let’s make sure that they’re properly funded and resourced, so that when something does happen in Muleshoe, Texas, or in Arkansas City, Kansas,

That information can be pulled in quickly and shared elsewhere, so that if part of that campaign is hitting multiple small utilities, you can make them aware and quickly disseminate response measures to help protect against the attacks, or to counter anything that’s been done. I think those are some ways we can start getting after this problem. But again, it requires a shift in our thinking away from: this is just a CISO problem, or this is just the network shop’s problem to solve.

Joseph Price
You know, as I was talking about what we should do, how we should change our approach, I’m reminded of when I attended my first SANS ICS security conference in 2015. I had, just less than a year before, moved to Idaho from Germany. I knew Mike Assante, who many in this community, if they’ve been around at all, will know. I was listening to somebody give a talk at that conference. Kim Zetter was in attendance, and she’s the author of the book Countdown to Zero Day. And so, I think we were on day two, and almost every speaker had received some sort of question about Stuxnet, based on Zetter’s book. They wanted to know: how do I protect against the nation state level attack that is Stuxnet? And this speaker, I forget his name, said: I find it kind of funny. Everyone’s sitting here going around saying, how do we defend against Stuxnet? Most of you don’t even know what assets you have on your network. So there’s probably a preparatory comment to be made, which is: if you have no cybersecurity program, or maybe a very nascent one, you can be bombarded with all these different tools that people will bring you, or they’ll say, oh, bring us on and we’ll do this for you, we’ll do that for you, and it can become quite noisy and confusing.

What is the best step I should take? What are the first steps? And so I will caveat my previous response by just saying: consider, first and foremost, knowing yourself. Knowing what you have on your network, identifying it, and certainly there’s automation and tools that can assist you in doing that, but know what you have. Have some sort of policy for how you’re going to treat these systems. And there are lots of policy examples out there; you can use somebody to assist you with that, or if you’ve got the ability, you can study the examples that are out there. But know what you have, and have some policies for how you’re going to treat it: how to onboard and off-board that equipment, how to dispose of it, how it’s going to be configured, how you’re going to let users access it.

And then put some sort of monitoring capability in place, so that you can assess what is going on. Then you can start to graduate to the more complex cases: how do I integrate threat intelligence? How do I do attack surface management? What are my exposures to a very highly capable adversary or an advanced persistent threat? It’s important to recognize that you can’t just make all that happen overnight. So I would just say, broadly, we need to think about active monitoring, having responses, rehearsing our incident response plans, and knowing what assets we have in our systems. If we can get there, then I think as a nation we’ll be better prepared to start dealing with the more nuanced and advanced threats, and to respond when we see a noise somewhere in the system and recognize that it might be part of a broader campaign: how do I need to respond to whatever happened there, to make myself more protected, more resilient?
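The "know yourself" first step Joseph describes, an asset inventory continuously checked against what is actually seen on the network, can be sketched very simply. The device records below are made-up examples; a real inventory would come from a maintained register or a passive discovery tool:

```python
# Approved asset register: MAC address -> description.
# These entries are fictitious examples for illustration only.
INVENTORY = {
    "00:1a:2b:3c:4d:5e": "PLC, chlorine dosing pump",
    "00:1a:2b:3c:4d:5f": "HMI workstation, control room",
}

def audit_observed(observed: dict) -> list:
    """Compare devices seen on the network (MAC -> IP) against the
    approved inventory and return alerts for anything unaccounted for."""
    alerts = []
    for mac, ip in sorted(observed.items()):
        if mac not in INVENTORY:
            alerts.append(f"UNKNOWN DEVICE {mac} at {ip}: investigate")
    for mac, desc in INVENTORY.items():
        if mac not in observed:
            alerts.append(f"MISSING {desc} ({mac}): offline or replaced?")
    return alerts

# Simulated observation: the HMI is present, the PLC is silent,
# and an unrecognized device has appeared on the network.
seen = {
    "00:1a:2b:3c:4d:5f": "192.168.10.20",
    "aa:bb:cc:dd:ee:ff": "192.168.10.99",
}
for alert in audit_observed(seen):
    print(alert)
```

Even this trivial loop captures the two questions an asset inventory answers: is anything on the network we didn’t approve, and is anything we approved no longer answering? Real tools add fingerprinting and alert routing on top of the same comparison.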

Andrew Ginter
So Nate, what struck me there: a long discussion of what smaller utilities can do, and how important detection is. I’m reminded of the incident in Denmark. The SektorCERT documented the Russians compromising some 22 internet-facing firewalls that they had been monitoring. What is not widely known about that incident is the funding model for the Danish SektorCERT. The SektorCERT is not publicly funded.

It serves some 200 or 300 utilities, most of which are tiny. It serves three large utilities. I don’t know if they’re power or water, but three large utilities is my recollection from when I was talking to these people. I might have the numbers off by one or two, but it’s a very small number of large utilities. And those large utilities pay for the SektorCERT, and the SektorCERT provides its services to the hundreds of tiny utilities for free.

What’s the benefit? Well, part of it is the larger utilities giving back to society. Part of it, in Joseph’s analysis here, is that the larger utilities benefit from visibility into what’s going on in the smaller utilities. If the smaller utilities are being attacked as part of a larger campaign, the larger utilities want to know what steps the enemy is taking, want to know how much trouble they’re in. So this is an interesting funding model. He’s right: the same three people have neither the skills, nor the ability, nor the money to set up their own monitoring system or pay for their own threat intelligence feeds. Whereas a central SektorCERT-style organization providing service to the smaller utilities can afford to buy threat intelligence feeds from the commercial providers of these things, and can afford to have a relationship with their government and get access to classified information. Having the big fish, be it the government or the larger utilities, pay for these services for smaller utilities seems to me to make a lot of sense as a funding model to bring about the kind of capabilities that Joseph was talking about.

Andrew Ginter
So I’m putting words in your mouth here, but what I kind of heard you describe was the perspective of the government. In the United States, the federal government, and in other nations the national government, may have a somewhat different perspective from the tiny utilities, the same three people. You’ve talked about the need for monitoring. Absolutely, the nation needs to monitor these campaigns and figure out how many doors the enemy is knocking on. But in terms of monitoring, most small utilities just want the attacks kept out. They don’t want to focus on the detect part of the NIST Cybersecurity Framework; they want to focus on the protect part. And to me, this is them saying: if the nation wants insight into my systems, let the nation pay for the monitoring, because that benefits the nation, not me. I need to put protection in. So for those small utilities, when they’re designing their security programs, should there be assistance? I guess I don’t want to drift into the monetary side, but: how much should the small utility be focused on assisting the nation in detecting widespread campaigns, and how much of the nation state threat should each small or large utility regard as a credible threat to their own user base, their own citizens?

Joseph Price
Yeah, those are great questions. Let’s start by recognizing that, as we discussed earlier, smaller utilities are not going to have the resources or access to the skill sets to take on all the responsibilities on their own. And I agree with you, let’s not drift too much into the policy of who pays, etcetera. But let’s think in terms of: where is that expertise? Who can assess what is credible and what is not? I pause a little bit at the use of that term, because in engineering, when we talk about design basis threats, we look in terms of, OK, I have two gears made of a certain metal. We put them together, they’re going to turn, we’re going to use some sort of lubrication. And I can, with relative accuracy, predict when that’s going to fail, or when it needs to be replaced to avoid it failing in operation. Right? Because we know how metals break down over time when exposed to certain elements and temperatures and stresses.

When we look at measuring risk for natural disasters, we look historically. We rely on the fact that, well, there’s a 30% chance that we’re going to have a hurricane, between category 1 and category 2, strike somewhere within 100 miles of our shoreline in the next three years. We base everything off of the historic occurrences and extend that into a probability statement for it happening again. The challenge we have in cyber is that, in most cases, there’s a human actor involved, and really, at some level, there’s a human actor deciding to take certain actions. And so when you start talking about whether the threat is credible and whether you need to be worried, it’s very difficult. I think you’ll get some broad statements made based on how critical that service or that utility or that function is. And then you’ll think in terms of: how likely is it that a nation state level adversary would want to have that impact? And I say, well, go back to our earlier conversation. I think holding that infrastructure at risk is a much bigger coin in their pocket than causing some impact.

So for that reason, in prioritizing and looking at credible threats, I think: OK, if you could cause interruption of a critical service like water, power, or transportation in a large metropolitan area, there is the potential of bending political will. I’d always tell people: why is the US Navy the most powerful fighting force on the seas, anywhere in the world? Well, it’s because they can park a dozen acres of sovereign territory 12 miles off somebody’s shore and give them pause. Give them time to think, and to recognize that maybe whatever action prompted that, there might be a diplomatic solution to it.

Well, if suddenly the populace of the US, or a significant number of the populace of the US, are threatened with the loss of life-critical services, I think we’d be foolish not to believe that that might give us political pause, right? That might cause the executive branch to think carefully about what the next move is. If they could hold that large a population at risk, what are our options now? I’m sure it would drive multiple different options: political, military, etc.

Andrew Ginter
It occurred to me while you were talking: Volt Typhoon is in the news, living off the land, these adversaries extremely difficult to detect. Is it reasonable to believe that hundreds of other utilities have been compromised in the same way, and that the Chinese deliberately leaked the fact that they’ve taken over these 50-odd utilities this way, to make the authorities aware that this capability exists? Because it does no good to hold assets at risk if nobody knows. When the Navy parks off the shore of some other nation and says, let’s think twice about this, the response capability, the capability of the Navy, is clear: these ships are sitting there. If nobody knows that the Chinese have the ability to cause widespread physical consequences, is it credible that the Chinese leaked Volt Typhoon deliberately, or accidentally but weren’t that dismayed by it, because these other capabilities do no good as a threat if nobody knows they exist?

Joseph Price
So that’s a great question. Volt Typhoon, in my mind, is an example, or I would say an exposition, of an extended campaign. As you mentioned in your question, it uses living off the land techniques, very difficult to detect. And in fact, in the instances of Volt Typhoon attacks that I reviewed, quite often they say we have no idea how they landed. And so that, to me, reeks of an extended campaign of holding assets at risk. Because once you have them, number one, you remove all traces of how you got there, you use living off the land techniques to maintain that access, and, like I said, occasionally phone home, and when I say phone home it’s probably to some other listening post, so that you know you have access. But if you’ve done that, you sit back and say, aha, we have all these infrastructure operations that we hold at risk. Do you need to actually cause sabotage or create mayhem to have an effect? The answer is no.

But it might be worth letting them know you have a certain amount of assets held at risk. Now, if you’re smart, and I believe our nation state level adversaries are very smart, you’re not going to, let’s say, manage and care for all of the places you hold at risk with the exact same infrastructure, right? You’re going to spread around the technique by which you connect with them and contact them, and do any of your maintenance of that connection. If you do collect information, you’ll use different infrastructure to get that information back to you, so you don’t necessarily have to burn the entirety of your targets held at risk.

But you absolutely could take a portion and leak sufficient information. Or maybe it was found because of just great sleuths looking carefully at crash dumps. But the point is, at some point, when your target knows they’ve been owned significantly, you might have leverage to, let’s say, accomplish some diplomatic objective or some other political objective, short of military conflict or things of that nature. That might be very helpful in, let’s say, upcoming talks about trade, or about conditions in adjacent territories or other nations that are allies to one of the countries in question and not to the other.

I mean, there’s a lot of ways that that could be useful. And again, it causes a response: you see how willing the target is to negotiate as a result of recognizing that you hold some of their key infrastructure at risk. So I think that also would explain, in my mind, why the government has been so united and adamant that we do what is necessary to identify, root out and cleanse Volt Typhoon from our systems. To me it’s a compelling conjecture, and again, this is all conjecture; neither one of us is talking from a position of some greater knowledge of exactly what happened with Volt Typhoon. But it certainly makes sense to me that you would possibly burn some of your infrastructure, or show one or two of your cards, to give you leveraging power in whatever’s going on globally, or between those nations, at that time.

Andrew Ginter
Well, Joseph, this has been sobering. Thank you for joining us. Before we let you go, can you sum up for us the key things we should take away from this nation state threat business?

Joseph Price
I would say the first nugget is: let’s keep in mind that the capabilities of any adversary are not merely defined by what we read in the news, by what events or activities were essentially caught and then publicized. Computers will do exactly what we tell them to do, right? The computers and digital devices that run our OT systems are not all that different from the ones that are running our IT systems, and if someone with sufficient access and authority tells one to do something, it will absolutely do it. And when those logical actions are tied to physical systems, or impact the physical world, the range of potential effects is limited only by our adversaries’ imagination, and further by what we do to actively defend and protect those systems from mal-operation.

The other point that I would say to keep in mind is that we can’t protect everything against everything. We need to prioritize. But if you consider where OT systems and OT cybersecurity are, I often feel like we’re 20 years or more behind where we are with IT. And yet these are the systems that most affect our day-to-day lives, and an impact to them would be felt much more strongly. I always tell people: somebody hacks my computer and gets my online banking password, it’s a bad day for me. But if someone hacks a power distribution substation, or hacks the water treatment facility, it’s a bad day for a whole lot of people. So there’s a certain degree of scale, and again, reliance upon our critical infrastructure, and we should give it due diligence. That includes resourcing, funding and attention to those systems, over and above some of the other areas that we maybe emphasize right now.

And then the last nugget is recognizing that these capabilities are out there. That obviously doesn’t hit the easy button on solutions. There’s really no excuse not to have basic levels of hygiene. But in order to achieve that and move on to, like you said earlier, protecting against these capabilities, not just defending when they’re already there, we really need to take a much more active role, and we need to move the decision from maybe the lower end of the C-suite to the higher end, certainly for OT systems. Whatever it is, whether you’re manufacturing pharmaceuticals or treating wastewater in a city, those OT systems control your business, and therefore it is a business risk that takes the attention of not just the CSO or CISO, but the CEO, the COO, even the board: those who recognize that the proper investment needs to be made to protect these systems that are core to whatever service or product they provide.

I’ve really enjoyed getting to be on this podcast, Andrew. This is an area that’s been near and dear to me for quite some time. Like you, I’ve spent a lot of my career focused on cybersecurity in various areas, the last 10 years of it solely focused on OT systems. I work at Deloitte, and I have to tell people when I show up, hey, I’m not here to do your taxes, because Deloitte is often known as a tax and audit company, which it is, for sure. But we have also, for 12 years running, had the largest cybersecurity consultancy within the United States. So if anyone wants to learn more about how Deloitte can assist them in tackling some of these challenges, I urge you to go to www.deloitte.com and look at the services there. You can certainly reach out to me on LinkedIn, and I can connect you if there’s an interest in having the professional discussion.

But in the meantime, Andrew, great podcast. Again, I really appreciate you inviting me and allowing me to come on here and talk about these subjects with you. You’ve actually encouraged me to think a little bit deeper on some things too, so I’m excited.

Andrew Ginter
I’m delighted to hear it. Thank you so much. The podcast would be nowhere without guests like you, experts coming in and sharing, you know, I call it a piece of the elephant. Show us the face of the elephant, and the nation state face is something a lot of people, like I said, bandy about. But it’s tremendous to be able to dig into it in some depth. Thank you so much.

Nathaniel Nelson
So I know it’s just one little sentence in a much longer answer there, but Joseph mentioned that in his view, IT was like 20 years ahead of OT security, which struck me as very surprising. In what universe is IT that far ahead, if ahead at all? I mean, based on the conversations we have here, these are much more in-depth, technical, forward-thinking conversations than I tend to have with people in IT.

Andrew Ginter
I fear that your perspective on OT security has been tainted by a hundred episodes of the podcast here. On the podcast, we interview people who are very active in OT security, and in the examples I gave out of my own experience at Waterfall, we work with the most cyber-secure industrial operations on the planet. We’re on the very high end of industrial cybersecurity. So you’ve been seeing that side of the coin. Joseph, in my recollection, worked at Idaho National Laboratory with lots of different kinds of stakeholders in the OT security space, large and small, advanced and not, and now at Deloitte he’s working with presumably a very wide cross-section of the industry, much more so than we have on the show here, much more so than I have in my practice. The leading edge of industrial cybersecurity is very sophisticated.

The average is probably much closer to what he’s pointing out. There’s a lot of people out there at zero. We had an episode, I don’t know, a year ago, talking about starting from zero. We interviewed a gentleman who made it sort of his calling to walk into industrial sites that had done absolutely nothing, one after another after another. So there’s a lot of zero out there.

What I took away from the episode was sort of two things. One was sobering: thinking about bigger-picture campaigns. I have been focused on individual breaches, individual sites, what the small sites can do. I wasn’t really thinking about how a multi-site campaign might work, and what the advantages would be to a nation state in carrying out such campaigns. So that’s some sobering food for thought.

The other thing I took away: I’m reminded of the Denmark SektorCERT model, where the largest utilities, or if you’d rather, the government, the big fish, pay for a facility that, A, protects the little fish because it’s the right thing to do, and B, provides intelligence to the big fish about large-scale campaigns that might be feeling their way through the little fish in the course of eventually targeting the big fish. To me, that’s a nugget of a solution here that maybe we should be, as a society, considering applying more widely.

Nathaniel Nelson
All right, well, with that, thank you to Joseph Price for speaking with you, Andrew. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure Nate, thank you.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.

Stay up to date

Subscribe to our blog and receive insights straight to your inbox

The post Insights into Nation State Threats – Podcast Episode 134 appeared first on Waterfall Security Solutions.

]]>
Andrew Ginter’s Top 3 Podcast Episodes of 2024 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/andrew-ginters-top-3-podcast-episodes-of-2024/ Mon, 16 Dec 2024 15:12:04 +0000 https://waterfall-security.com/?p=29337 Sit back and enjoy Andrew Ginter's top 3 picks from 2024's Industrial Security Podcast series.

The post Andrew Ginter’s Top 3 Podcast Episodes of 2024 appeared first on Waterfall Security Solutions.

]]>

Andrew Ginter’s Top 3 Podcast Episodes of 2024

As 2024 winds down, kick back and enjoy some of Andrew Ginter's best podcast picks

Over the past 12 months, it has been a pleasure and a privilege to co-host the Industrial Security Podcast. When I started the podcast 5-ish years ago, bluntly, I did not know if there was enough industrial security content in the world for more than a year or two of episodes. It turns out the OT security space is much broader and deeper than I knew. I’ve learned something in every episode, and become aware of how much more there is that I don’t know, that every one of my guests does know, and that each gives us a few insights into in every episode.

Choosing three from this year’s episodes was hard, but here are three that stood out for me. If you ask me for a theme for these episodes, I’d have to say all three provide insights into high-consequence attacks, risk blind spots, and of course defenses against these attacks. This is all consistent with the perspective of the Cyber-Informed Engineering initiative and with the themes I explore in my latest book, Engineering-Grade OT Security: A Manager’s Guide.

I hope you enjoy listening to these podcasts as much as I enjoyed the interviews and discussions. And stay tuned, we are working on many more guests and discussions in 2025!

My Top Three Episodes of 2024:

Episode #134: Insights into Nation State Threats with Joseph Price

In this episode, Joseph Price explores nation-state threats and attacks. Nation states are often held up as “bogeymen,” able to do anything to anyone for reasons that are opaque to mere mortals. Joseph peels back a couple of layers for us, explaining how to interpret the data that is available in the public domain. He walks us through what to expect in terms of attack capabilities, how the world’s superpowers routinely test each other’s defenses, responses and capabilities in both physical and cyber domains, and what this means for both small and large infrastructure sites and defensive programs.

Episode #123: Tractors to Table Industrial Security in the Industry of Human Consumables with Marc Sachs

In this episode, Marc Sachs, Senior Vice President and Chief Engineer at the Center for Internet Security, Chief Security Officer for Pattern Computer, and a former White House National Security Council Presidential Appointee, takes a deep dive into the cybersecurity challenges facing the food production industry.

He examines the industry’s growing reliance on automation, from farmers leveraging GPS, drones and self-driving equipment, to large-scale food production facilities dependent on interconnected systems. While these advancements have dramatically improved efficiency and productivity, automation has also created important new vulnerabilities. Marc walks us through real-world examples of cyber threats targeting this critical industry, the potential consequences of future attacks, and practical measures that organizations can take to bolster their defenses.

This episode provides an eye-opening look at how completely automated the high end of agriculture and food production has become, and how this is a problem as more and more operations deploy this kind of automation.

Episode #131: Hitting Tens of Thousands of Vehicles At Once with Matt MacKinnon

In this episode, Matt MacKinnon, Head of Global Strategic Alliances at Upstream Security, looks at a cybersecurity niche in the automotive industry that I did not know existed: protecting the cloud systems that vehicle manufacturers rely on to manage and interact with the vehicles they produce. From passenger cars to 18-wheelers and massive mining equipment, connected vehicles enable everything from diagnostics and updates to real-time remote control.

Matt explains how digital transformation and the pervasive use of cloud systems in the automotive and heavy equipment industries have introduced new attack opportunities, with potential consequences ranging from unauthorized manipulation of vehicular systems to data breaches and threats to safe and reliable operations.

How to manage these risks and protect cloud systems connected to vehicles? Matt walks us through protective technology and how it works – technology I did not know existed.



The post Andrew Ginter’s Top 3 Podcast Episodes of 2024 appeared first on Waterfall Security Solutions.

]]>
Driving Change – Cloud Systems and Japanese CCE | Episode 132 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/driving-change-cloud-systems-and-japanese-cce-episode-132/ Tue, 19 Nov 2024 11:30:34 +0000 https://waterfall-security.com/?p=28325 Tomomi Aoyama translated the book Countering Cyber Sabotage - Consequence-Driven, Cyber-Informed Engineering - to Japanese. Tomomi recalls the effort of translating CCE to Japanese and looks forward to applying CCE and OT security principles to industrial cloud systems at Cognite.

The post Driving Change – Cloud Systems and Japanese CCE | Episode 132 appeared first on Waterfall Security Solutions.

]]>

Driving Change – Cloud Systems and Japanese CCE | Episode 132

Tomomi Aoyama translated the book Countering Cyber Sabotage - Consequence-Driven, Cyber-Informed Engineering - to Japanese. Tomomi recalls the effort of translating CCE to Japanese and looks forward to applying CCE and OT security principles to industrial cloud systems at Cognite.


Waterfall team

Driving Change - Cloud Systems and Japanese CCE - Industrial Security Podcast Episode 132

“…security was mostly discussed as technical topic. And there was not enough frameworks or ways of conveying important security and security risk in the way that the stakeholders can easily engage with. And CCE for me enabled that…”

Available on:

About Tomomi Aoyama and Cognite

Dr. Tomomi Aoyama is a distinguished figure in the field of industrial cybersecurity, currently serving as Private SaaS Operations Lead at Cognite (Website). With a robust academic background, Dr. Aoyama has dedicated her career to advancing cybersecurity practices, particularly in the realm of industrial control systems (ICS).

Her expertise spans several critical areas, including the application of Process Hazard Analysis (PHA) to cyber risk assessment, lifecycle security management, and the role of human factors in cyber incident response. Dr. Aoyama’s work is globally recognized, and she actively contributes to both public and private sectors. She serves as an expert advisor to Japan’s National Centre of Incident Readiness & Strategy for Cyber Security (NISC) and the Industrial Cyber Security Center of Excellence (ICSCoE) in Japan.

In addition to her advisory roles, Dr. Aoyama is committed to knowledge sharing and education. She has translated essential ICS security literature into Japanese, including NIST SP 800-82 Rev.2 and the book “Countering Cyber Sabotage” by A. Bochman and S. Freeman. Her contributions have significantly enhanced the understanding and implementation of cybersecurity measures in Japan and beyond.

Dr. Aoyama’s career is a testament to her dedication to improving cybersecurity frameworks and her influence continues to shape the future of industrial cybersecurity on a global scale.

Cognite (LinkedIn) was founded in 2016 and has over 700 employees, including top-notch software developers, data scientists, designers, and 3D specialists. Over the years, Cognite has positioned itself as a global industrial Software-as-a-Service (SaaS) leader, with an eye on the future and a drive to digitalize the industrial world. Cognite has created a new class of industrial software which allows asset-intensive industries to operate more sustainably, securely, and efficiently. Their core software product is Cognite Data Fusion (CDF), designed to quickly contextualize OT/IT data to develop and scale company solutions, using technology like hybrid AI, big data, machine learning, and 3D modelling to get there. Cognite’s clients include oil & gas, power utilities, renewable energy, manufacturing, and other heavy-asset industries. Cognite helps them operate through transitions, sustainably and to scale, without sacrificing bottom lines, paving the way for a full-scale digital transformation of heavy industry.


Transcript of this podcast episode #132: 

Driving Change – Cloud Systems and Japanese CCE | Episode 132

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome listeners to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today. Andrew, how are you?

Andrew Ginter
I’m very well, thank you Nate. Our guest today is Tomomi Aoyama. She is the principal development lead for Private SaaS, that’s Software as a Service, at Cognite, which produces industrial control system software. We’re going to talk a little bit about what she’s doing, but mostly we’re going to talk about her translation of the consequence-driven, cyber-informed engineering textbook, Countering Cyber Sabotage, into Japanese.

Nathaniel Nelson
Then without further ado, your conversation with Tomomi.

Andrew Ginter
Hello, Tomomi, and welcome to the podcast. Before we get started, can I ask you to say a few words about yourself for our listeners and about the good work that you’re doing at Cognite?

Tomomi Aoyama
Thank you very much for having me, Andrew. I’m Tomomi, and I’ve been in the ICS security domain for over a decade. I started as an academic researcher, and my fascination with this domain was always about how we can enable collaboration. I started from trying to understand how safety and security risk assessment can be combined, how security risk specialists can communicate with safety risk specialists and share the metrics, share the value. That was the first research topic that I was working on. And then I gradually shifted more towards, okay, how cyber risk or OT security risk can be expressed as business continuity risk or business risk.

And through the academic position I had in Japan, while I was doing my PhD and working as an assistant professor and teaching, I was lucky enough to be able to join some government projects, where I was able to support asset owners in designing and evaluating cyber tabletop exercises and business continuity drills, including earthquake drills, and also help the government develop this large OT security capability building center called ICSCoE, where I supported building up the training curriculum and international engagement. And now I’m at Cognite, and I’m still fascinated by this collaboration piece in the OT security area. Cognite is a company that builds OT data platform software for oil and gas, chemical, energy, manufacturing and so on. And the Cognite operation is based on software as a service on a cloud data platform.

And when we talk about cloud security, there is a shared responsibility model that divides the security operation and responsibility between the cloud service providers and the asset owners. But the usual model is a very simplified two-party model, and there is no space in it for a SaaS company like Cognite. And that’s especially a problem when you consider that most organizations, most critical infrastructure operators, would select a hybrid model where they have public cloud, private cloud and on-prem systems all together.

And asset owners want to have total visibility and data governance over all the platforms and all the systems. There’s really no guideline for that. There is no established model for that. So at Cognite, what I’m doing is using my background in research and in the OT security domain, trying to understand and navigate the conversation with customers, trying to navigate Cognite towards how we can support this new era for asset owners, where they want to have data control and strong data ownership. So that’s where I am today.

Andrew Ginter
Cool. So you know the industrial cloud is coming. You know it’s great that you’re contributing to that at Cognite. Our topic is a little different today. Our topic today is Consequence-Driven Cyber Informed Engineering. And a couple of years ago, you translated the the book on the topic, Countering Cyber Sabotage, Consequence-Driven Cyber Informed Engineering, the book that Andrew Bachman and Sarah Friedman wrote, you translated the book into Japanese. So I wanted to ask you about that, but before I do, can I ask you maybe introduce the book to our listeners? What is ah you know CCE? What is consequence-driven cyber-informed engineering?

Tomomi Aoyama
Sure. CCE is quite a mouthful: Consequence-Driven, Cyber-Informed Engineering. It was originally part of Cyber-Informed Engineering. It’s one of the pillars of Cyber-Informed Engineering, which is the framework for combining the cyber and engineering sides, for how we can enable security more by design, security built into the engineering process.

And INL, Idaho National Lab, especially focused on this consequence-driven risk analysis part, and they developed this CCE method. It comes with four phases, starting from phase one, consequence prioritization, which is quite an important one for me. Phase two is system-of-systems analysis, meaning how systems, or dependencies between the systems, resources, information, data and people, contribute to the consequence, the worst, worst, worst case that you want to avoid happening.

And phase three is consequence-based targeting. This is where you bring in a little bit of the attacker’s perspective: how those dependencies between the systems, or the paths to the consequence, can be compromised, how attackers can take advantage of those dependencies to make the consequence happen. And then phase four is all about mitigations and protections: how can we cut off the paths that let attackers bring about the consequence most efficiently, and preferably, how can we do that by combining engineering methods and traditional cybersecurity tools and solutions.

Nathaniel Nelson
Andrew, these are concepts that we’ve talked about in a number of episodes before, but for anybody who hasn’t listened to those, could you just do a quick review of CCE?

Andrew Ginter
Sure, CIE is the big tent, Cyber-Informed Engineering. It’s all about engineering and cybersecurity together. The engineering part has been neglected historically: overpressure relief valves, manual operations as a fallback. These techniques that are used to manage physical risk can also be used to manage cyber risk.

CCE fits within the big tent. A great deal of engineering is under the big tent, and all of cybersecurity. CCE is a bunch of techniques, and it’s more than what’s in the book, but the book itself has really three big chunks.

One is consequence evaluation, and they recommend: don’t start with your simplest attacks. They recommend you start with your biggest fish and do something about them first. So, consequence analysis.

And then there are a few chapters on engineering mitigations. But the bulk of the book is about system-of-systems analysis: understanding your defenses, looking for choke points in your defenses where you can choke off attacks most efficiently, maximum return in terms of security for minimal investment. So that’s the big picture. CIE is the big umbrella. CCE is actually a formal training program; it’s a piece of CIE.

But CIE is big enough that just about anything that has to do with industrial security fits under it. And CCE is a chunk of that.

Andrew Ginter
All right, so that’s CCE. Let’s come to the translation. Translating a book is a big job. The CCE book is hundreds of pages, and you’ve got to be sure that the translation is right. It’s a huge investment. Why would you undertake that big a job with this book?

Tomomi Aoyama
Right. So when I first met the idea of CCE, I was a researcher at a university in Japan. My research area was trying to understand how we can communicate and engage with stakeholders about OT security in an efficient way, and how we can do risk assessment that considers both security risk and safety risk, and also their implication for business impact. We struggled to find a way that this could be achieved in one simple way. And my running hypothesis back then, and also now, this is my belief, is that OT security is a communication problem. It’s a team effort. OT security is definitely a team effort. You cannot just have the very experienced expert Bob save the world.

Every time, we need to engage the internal stakeholders, different teams, so that they understand security in their own job language. If it’s an operator, they need to understand what security means for their operation. If it’s a business leader, they need to understand the security implications in terms of how they impact their initiatives and their investment.

And I found it very difficult, because, at least back then when I was doing the academic research, security was mostly discussed as a technical topic. There were not enough frameworks or ways of conveying security and security risk in a way that the stakeholders can easily engage with. And CCE, for me, enabled that, especially the first part of CCE, the consequence prioritization. You don’t talk about threats, you don’t talk about threat actors, you don’t talk about security solutions. You talk about what matters most for your business and business continuity. That makes it very simple and easy to align any stakeholder in the organization.

So that’s why I thought: this idea, I really want to convey it to my community in Japan, in my mother language, and I want to be the catalyst to deliver the message. That’s why.

Andrew Ginter
Okay, so that's why you felt it was important to translate CCE into Japanese. Can I ask you how it came about? It's one thing to read a book and say, hey, this is good stuff. It's another thing to reach out to the authors and actually make it happen. How did this happen?

Tomomi Aoyama
When I first met the idea of CCE, it didn't immediately inspire me to translate the book. I think back then the book was not yet published either. I got to meet Andrew at S4, where he was presenting the idea of CCE. That's when CCE very much clicked with my academic interest.

And I talked to Andrew at the beer bash and said, hey, I really like your idea, and I really want to promote this method in the community in Japan. That was the beginning of my engagement with the CCE team.

And one of the big turning points was when the Japanese government, in collaboration with the US government, organized capacity-building training for Indo-Pacific countries. And ICSCoE, the Industrial Cyber Security Center of Excellence, which is the OT security training organization that I support in Japan, was the one that provided the training together with the US trainer teams, which were from INL. And we ended up providing the CCE training for the Indo-Pacific countries together with Andrew and the CCE team at INL and trainers at ICSCoE.

And it was a very fun engagement, and it was interesting how CCE was received by the participants. And after Andy and I were celebrating the successful delivery of that training, it came to my mind immediately, and I said to Andrew: can I translate this book? I really think I can translate it in a meaningful way. Can you support this? And that was the beginning. It took another two years or so to actually translate the book.

Andrew Ginter
Okay, so you ran into Andrew, one of the authors of the book, at S4 - sort of where the world of industrial cybersecurity comes together. You also mentioned the Industrial Cyber Security Center of Excellence in Japan, a government agency. How were you connected with them? How did you connect those dots?

Tomomi Aoyama
So I was fortunate enough to be involved from the very early stage of ICSCoE, from its establishment phase in 2017. My university - the university where I used to be an assistant professor, and which I still support as a visiting researcher - takes care of one third to one quarter of the curriculum at ICSCoE. So that is my connection to the organization, and currently I also support the international engagement that ICSCoE does. When they want to do international engagement, such as overseas training or inviting international speakers into the ICSCoE curriculum, I tend to support it. So the joint training we provided between Japan and the US - that's also a project that I supported.

And that's why I was involved in suggesting that CCE could be a good topic to introduce to the Japanese and Indo-Pacific audience.

Andrew Ginter
Cool, so you were at the university, you had an opportunity to connect the dots, and you did - good job. Let's talk about the translation. I mean, today you can take a Word document and pump it through, I don't know, Google Translate or something - there are other translators on the market as well - and say, here, try translating this into Japanese. When I've done this with my documents for the German market in particular - I speak a little German - I looked at the result and it was full of mistakes, and I had to correct it.

So what was involved in the translation? Did you press a button and it worked? Did you have to review it in detail? Did you have other people reviewing it? How did the actual mechanics of the translation come about?

Tomomi Aoyama
Andrew, it was all me. It was a one-person operation, and it was painfully long. I have done translations before - for example, some documents in the NIST 800 series I have translated into Japanese.

So I have done many projects, but not a book. It was a really different level of beast. I definitely used machine translation, sentence by sentence, just to create a baseline, but most of the time it was more confusing than helpful. So the most important thing I needed to create was the dictionary - a translation dictionary, so that the translation of each term would be consistent throughout the book.

For example, as you can see in the title of the book, the word "consequence" appears a lot in the book. And I was very intentional, and also a little bit cheeky, when I translated it into Japanese. I intentionally translated it as "business consequence," because I didn't want readers to mistake "consequence" for an information breach or some technical consequence. I wanted this book to be the starter of a conversation from a different perspective - seeing security more from the business perspective, the business risk perspective. So I intentionally changed the translation of "consequence" in Japanese to "business consequence."

And so this process of creating the dictionary, and being happy with the dictionary - that was a very challenging part. There are a lot of terms in the CCE book that are very common for the military domain or for government people.

But they don't resonate so much when directly translated. So I also needed to understand each concept very deeply. And Andrew Bochman, one of the authors, was kind and generous enough to have multiple sessions walking through those terms - what they mean, what the backstory of each term is, one by one. That really helped me a lot.

Andrew Ginter
So Nate, I've written a couple of books. I've translated some material, especially into German. And in my experience, it's exactly what Tomomi talks about: terminology is important, especially when you're translating a technical document. In a lot of the world's languages, a lot of computer concepts show up as English words transplanted or adopted into the language.

This despite the language often having its own words for those concepts. In German in particular, a technical concept that in English has a comparatively short, simple word might be rendered by jamming a few adjectives and nouns together into a single, very long, very complicated word.

And what I observe in the German community that I interact with is that they've adopted a lot of the short English words rather than using the long, formal German words. When you're putting together a translation, you've got to figure this out. If you use the native-language words and the community that you're addressing isn't using those words, they're going to look at your stuff and it's going to be a harder read - it's not the terminology they expect. And vice versa: if you transplant a bunch of English words into the translation and that's not what the community is used to, again, it impairs comprehension. And this is not the only challenge with translation. What I found with German in particular - I don't know Japanese, but I know that in German there are linguistic concepts that don't exist in English. Gender in particular: everything is gendered. And when you're translating a little bit of dialogue - A said this and B said that - and you use the word "you," you've got to select the word very carefully. There's the familiar "you," there's the formal "you."

And in English, you don't have all this stuff. When I translated material from English to German, I used a machine translator, and the machine translator just gets it wrong. The machine translator says, well, I need this concept in the German translation, and it doesn't exist in English, so I'll just make it up - and it picks the wrong one pretty consistently. So there's a lot of repair: choose the terminology carefully, and then you've got to go through it and repair what the machine translator does.

Nathaniel Nelson
And I'm wondering how you felt about the particular point of translation she highlighted in her answer - how she translated "consequence" to "business consequence" - because you and I talk about these concepts a lot, and we don't really focus on them through the business lens. Usually it's physical consequences, for example.

Andrew Ginter
I was thinking about that myself after the interview, and reflecting on it a little bit, I wonder if it's because it reflects Tomomi's focus on risk assessment. She was doing a lot of risk assessment work in her research, and who consumes the results of a risk assessment?

It's generally the business decision makers who have to decide: am I going to provide funding to my engineering teams, to my IT teams, to fix this problem? Explain to me in one-syllable words how much trouble we're in - they want to understand the impact on the business. My own focus is different: I tend to work more with the engineering teams who are tasked with, okay, you have a budget, solve this problem. And they change the design of the systems in order to prevent physical consequences - to keep things from blowing up, to keep trains from colliding. So if I were doing this, I might have been tempted to substitute "physical consequence" rather than "business consequence."

But thinking about it, that might just be because of who I communicate with. And as we said at the beginning, it's all about communication. You've got to get these concepts across these chasms of understanding.

Andrew Ginter
And if I may - I'm an author myself; I pushed my third book out just under a year ago - I'm curious about intellectual property. I see the Idaho National Laboratory logo on the CCE book. I know that Sarah Freeman and Andrew Bochman were employees, I think, of Idaho National Laboratory at the time they wrote the book. I'm assuming that INL owns the copyright on the book. But you did the translation. Can you talk about intellectual property? Do you own the Japanese translation? How does that work?

Tomomi Aoyama
At least I know I don't own the copyright. It was primarily work for hire - a kind of twofold contract. One side is my contract with INL as the service provider, meaning that I would provide the translation service for them so that they could have the Japanese manuscript in their organization. And on behalf of INL, I was sending the manuscript to the publisher. ICSCoE in Japan funded the publication of the book in Japanese. So I was just bridging in between.

Andrew Ginter
Okay, so, a lot of work doing the translation. How’s it been received?

Tomomi Aoyama
I got very kind words from people in Japan saying they enjoyed the book, and some people mentioned a specific part of the book that resonated with them very well, which is super rewarding to me. But the first review I got on a public platform, on Amazon, was very funny to me. It said: four stars - great book, great content, minus one star for the bad translation. So that really made me laugh.

Yes, I know I'm not a professional translator. I cannot translate at the same level as the people who translate great novels into Japanese. I can't yet. But at least I made them read. So that's a win for me.

Andrew Ginter
Indeed. It's disappointing when you get stuff like that. I remember when I published my books - you get positive, you get negative. You've got to shrug it off. I think the lesson is that the material is now available to a Japanese audience that doesn't speak English. So have you gotten any reaction, even verbal or face-to-face, from the industrial security community in Japan? How useful has CCE been in Japan?

Tomomi Aoyama
Most of the people - the majority of people - reach out to me saying that CCE is a very inspiring method and approach. But reading between the lines, most of the time CCE is a little bit too big of a project, and it's not something bite-sized that most people can easily adopt tomorrow.

So that is one challenge that I found during and after this translation project. The great feedback I got - not necessarily negative, but I think it really represents the character of the Japanese community - came from one person, an OT risk assessment specialist who has supported many, many organizations. He said: Tomomi, CCE needs to be dumbed down. It needs to be easy to do for anyone. Right now, CCE is only useful for people who understand OT security at the deepest level. That's not enough. It needs to be easy for as many people as possible.

And that's something I'm thinking about a lot these days. All the security solutions and security projects naturally target the critical asset operators, the critical infrastructure companies, the large organizations, and government-funded organizations. The project funding on that side is huge.

But there is the concept of the cyber poverty line, where organizations, even if they know about cybersecurity and know about the risk, simply can't afford it. They just don't have the resources available, or any solution at hand, to mitigate the risk.

And CCE is an elegant concept, and right now I'm thinking about how we can make CCE - and any other OT security or cybersecurity concepts, frameworks, and solutions - as affordable and easy as possible to implement fast. Especially when we think about supply chain security and security as a whole.

Andrew Ginter
Another - I don't know - legal nit, maybe. In my understanding, CCE is trademarked. Idaho National Laboratory certifies training providers: you can only call yourself a certified CCE training provider if you've been certified by INL. I'm curious - is the Industrial Cyber Security Center of Excellence certified?

Tomomi Aoyama
No, ICSCoE is not certified to provide CCE training, at least to my knowledge. But I can talk a little bit about how we introduce CCE as a concept.

Tomomi Aoyama
So ICSCoE runs a one-year curriculum for industry professionals. They basically leave their jobs for one year to focus on OT security training, basically nine to five, plus their own research project hours. In there we teach many principles, from traditional IT security and network security to OT and engineering disciplines, and risk management and business disciplines. Recently we also added the cloud and digital transformation domains. And CCE fits into the category of security leadership.

And one of the trainers, Hiroshi Sasaki, a dear colleague of mine, introduces CCE as one of the methods trainees can use when they are building the security strategy for their own organization, when they go back to their company. Some of the other frameworks they introduce are the NIST CSF, IEC 62443, and the ISO 27000 series. And as one more tool that trainees can use to frame their own security strategy, they introduce CCE.

So we don't go into detail in the same way that the INL folks provide CCE training, but we explain the CCE concept, and the trainees at ICSCoE engage with how they can use the CCE concept and framework to present their security strategy to their executives.

Andrew Ginter
So that makes sense. I'm curious - in the course of translating the book, you presumably developed a deep understanding of the material. You have to understand the material in order to translate it correctly. How has that served you? Personally, you've developed a deep understanding of CCE by translating the book, and your name is on the book. Can you talk about whether the experience of doing this translation has changed your career at all?

Tomomi Aoyama
The book was published last year, in June 2023, in Japanese, and we haven't done any book tour or anything. And I'm also based in the UK now - I'm not based in Japan - so I don't really have a day-to-day way to engage with people and get the book into their hands. So I'm not really feeling any burning change externally, but internally, it was such a privilege to be able to dissect the book word by word, to really print it in my brain by translating the work, and to feel Andy and Sarah's work so close. Also, the book has a part written by Mike Assante. I never met him in person, but I can't really express how I felt about translating his part of the book, because his words - the opening section that he wrote - were so powerful, and it was such an honor to translate them into Japanese. When I hear good words and good feedback from people in Japan, I always think about the part that Mike wrote in English, and how I tried to match his energy in the translation.

And yeah - externally, career-trajectory-wise, I didn't see a lot of changes, but internally it was a big change for me.

Andrew Ginter
And if I may come back to the present day - you're working at Cognite, doing cloud work on the industrial side, and the industrial cloud is coming for everyone sooner or later, in some capacity or another. Is your deep background in cybersecurity part of your role at Cognite today?

Tomomi Aoyama
Yes. And I have to say, when I first learned about Cognite's mission and what they are trying to achieve, it made me really anxious, because I was - and still am - very much focused on security, reliability, and operations, and I was worried about how these new technologies might disrupt reliable operations. That was in the beginning. But right now, what we are trying to achieve in the project is: how can we make sure that when we provide software as a service, it doesn't disrupt the security or reliability of the operation, the physical operation itself? Digital transformation started in the enterprise area, and it's getting closer and closer to the critical operations. And when I look into most of the documents on how to deploy cloud technology in a secure way, a lot of government guidance and best practice treats the public cloud as the starting point. There is not enough information about how you manage the security and governance of a hybrid setup or a private cloud setup - and especially how you continue providing a service

when the stakeholders are the SaaS provider like Cognite, the asset owner, and the cloud service provider. How can you manage these three parties - or potentially more parties - involved? How do you make this tight connection while giving the data owners and asset owners full visibility and full control over security?

Given that this is largely driven by security requirements, my background gives me a bit of perspective to balance the need for digital transformation and pushing through the boundary against understanding and accommodating the asset owner's needs and the IT and security teams' concerns. So that is where I am. And I also see quite a connection to CCE. Again, I see CCE as a tool to help the communication - understanding what the consequences are and, especially in terms of what we do at Cognite, understanding the dependencies between systems, between the data and systems and people and critical processes. That's really important. Having the CCE framework in the back of my head really helps me have a dialogue with customers, industry, and stakeholders, internally and externally.

Andrew Ginter
Well, Tomomi, thank you for joining us. It's been a real pleasure talking to you. Before I let you go, can I ask you to sum up for us: what are the key messages we should take away here? We've been talking about CCE, about translating a book, about the importance of the cloud. What should we take away from this episode and from your experience in these arenas?

Tomomi Aoyama
Oh, it was really great fun doing this interview with you, Andrew. Thank you for having me. My takeaway is that communication and collaboration are really the key to enabling security, especially at the same speed as digital transformation. CCE is a useful tool to enable that communication and collaboration. You get to examine your security strategy and program from different perspectives.

And now the CCE book is available in both English and Japanese. So if you have Japanese colleagues, or if you know somebody in Japan, reach out. They may know about CCE, and now you can talk about CCE together, which is awesome.

And right now, at Cognite, I'm looking forward to adapting the CCE principles to industrial cloud systems, and trying to, again, enable that collaboration between the cloud service providers, the asset owners, and SaaS providers like Cognite - and learning about how we can bring data governance back to the asset owners.

Again, the CCE book is available on Amazon. And if you are coming to Japan, let me know, or let ICSCoE know - we'll always be happy to talk with you. And if you have experience with industrial cloud - public cloud, private cloud, hybrid, or if you decided not to use a cloud in the industrial space, and why - let me know. I'm on LinkedIn. I'm happy to talk with you about your challenges and your experience, and to learn from you. Thank you.

Nathaniel Nelson
Andrew, that just about concludes your interview. Do you have any final word to take us out with today?

Andrew Ginter
Yeah, I mean, a lot of the topics we talked about are very timely. I'm a big fan of CCE and CIE. It's all about consequences - consequences drive the strength of required security programs. I'm at the end of my career, and I started in technology and worked my way into cybersecurity and risk assessments. In my most recent book, the topic is risk. It's not in the title, but it's all about how you use an understanding of risk to decide how much cybersecurity and how much engineering to do.

I see Tomomi working the other way. She started with risk and with communicating with business decision makers, and is now tackling what I believe is the future of industrial automation - and of course, industrial cybersecurity goes with industrial automation. She's tackling the future, which is the cloud. And the vision for the cloud is very compelling: the cloud can save enormous amounts of money, it can add flexibility. It's a tremendous vision. The question is how much of the vision we can realize safely. And I think the answer is almost all of it. We just don't know how yet.

So I look forward to keeping track of what Tomomi is doing at Cognite. I look forward to an opportunity to invite her back in a year, when she's figured out a bunch of this stuff, because the world needs to understand how to reap the benefits of the industrial cloud without incurring unacceptable physical risk. To me, it's huge that she's taking this deep understanding of risk and risk assessments, diving into the technology, and hopefully leading the way for us in terms of the industrial cloud.

Nathaniel Nelson
Thank you to Tomomi Aoyama for speaking with you, Andrew. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.


The post Driving Change – Cloud Systems and Japanese CCE | Episode 132 appeared first on Waterfall Security Solutions.

Hitting Tens of Thousands of Vehicles At Once | Episode 131 https://waterfall-security.com/ot-insights-center/transportation/hitting-tens-of-thousands-of-vehicles-at-once-episode-131/ Thu, 26 Sep 2024 08:44:39 +0000 https://waterfall-security.com/?p=27586 Compromise a cloud service and tens of thousands of vehicles can be affected all at once. Matt MacKinnon of Upstream Security walks us through the world of cloud security for connected vehicles, transport trucks, tractors, and other "stuff that moves."



Hitting Tens of Thousands of Vehicles At Once | Episode 131

Compromise a cloud service, and tens of thousands of vehicles can be affected at once. Matt MacKinnon of Upstream Security walks us through the world of cloud security for connected vehicles, transport trucks, tractors, and other "stuff that moves."


Podcast: 131 about OT Security for Cars

“…the idea that someone might impact a bunch of vehicles to cause accidents is real. That absolutely could happen.”


About Matt MacKinnon and Upstream Security

Matt’s experience prior to his role at Upstream Security includes working at JupiterOne, Shift5 and Armis Security.

Upstream Security (LinkedIn Page) provides a cloud-based data management platform specifically designed for connected vehicles. This platform specializes in automotive cybersecurity detection and response (V-XDR) and data-driven applications. Essentially, it transforms highly distributed vehicle data into a centralized and structured data lake, allowing customers to build connected vehicle applications. A key component of this platform is AutoThreat® Intelligence, an automotive cybersecurity threat intelligence solution that provides cyber threat protection and actionable insights. Upstream integrates seamlessly into the customer’s existing environment and vehicle security operations centers (VSOC). Upstream’s clientele includes major automotive OEMs, suppliers, and other stakeholders, and they protect millions of vehicles.


Transcript of this podcast episode #131: 
Hitting Tens of Thousands of Vehicles At Once | Episode 131

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome, everyone, to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today. Andrew, how’s it going?

Andrew Ginter
I'm very well, thank you, Nate. Our guest today is Matt MacKinnon, the Director of Global Strategic Alliances at Upstream Security. And I don't know if you remember, a number of episodes ago we had a gentleman on talking about the CAN bus in automobiles - the hundreds of CPUs in a modern automobile, and how that CAN bus, that network of automation, reaches out to the cloud, the cloud of the vendor who built the automobile.

Matt and Upstream secure that cloud. So we're going to be talking about the security of cloud systems connected to automobiles.

Nathaniel Nelson
Then without further ado, here’s your conversation with Matt.

Andrew Ginter
Hello, Matt, and welcome to the show. Before we get started, can I ask you to introduce yourself, to say a few words about your background and about the good work that you’re doing at Upstream Security?

Matt MacKinnon
Andrew, thanks for having me today. Yeah, I've been working in network security, or cybersecurity in general, for the better part of the last 25 years. I got started in network security, then endpoint security, IoT security; I even did some DOD work and some cloud security. So I've been around the cybersecurity market in a lot of different ways. Most recently, I've been working in automotive, or mobility, IoT security.

In particular, where I am today is Upstream Security, where we protect cars and trucks and tractors and pretty much anything that moves around and is connected via a cellular network. I was really drawn to this company because of the connection between mobility - physical things that move around - and cybersecurity. It's easy to relate to everyday life, and very rewarding to work on something that we can see and feel and observe in our everyday lives.

Andrew Ginter
And our topic today is automobiles. We had a guest on a little while ago talking about the CAN bus in automobiles, in trucks, in things that move. You're not talking about the CAN bus. You're still talking about things that move, but you're up in the cloud. Can you explain to us: what is that? What's happening out there? How does it work, and why should we be worried?

Matt MacKinnon
It's a great question. And it's really important to think about what's happening with cars and trucks, how they operate today, and how we think they're going to change in the future as well. If we think about your modern car, it has really got a lot of computers in it - everything from the infotainment system to, in the most modern vehicles, autonomous driving. So the car itself can be compromised.

Those cars communicate with the cloud. They send a lot of telematics data about where they are and what they're doing into the cloud, which is useful for a lot of different purposes. We also have apps on our phones: we can trigger a remote start, or schedule service with the dealer, and things like that.

When we get into electric vehicles, we have to charge them. So we connect them to charging stations, and we have to authenticate and pay for electricity. What Upstream realized and recognized many years ago was that you can no longer worry about just securing the car itself. The car is part of this connected ecosystem, and if you're not looking at that entire ecosystem at once, you're really not looking at the full spectrum of what can be compromised. The other interesting thing to look at from the last five or ten years: Upstream does an annual report on the state of automotive cybersecurity, and we've been doing it since about 2019. There has really been a pretty dramatic shift in automotive cybersecurity over that time. If you look back to 2014, 2015, people were trying to compromise or hack or steal one car at a time. But if you look at the data today, that's not the case at all.

Over 95% of the attacks that happened last year didn't even require physical access to the vehicle at all. Over 50% of the attacks last year were attacks against thousands, if not millions, of vehicles at one time. So we're no longer talking about bad actors just trying to steal your car or my car. We're talking about bad actors who are really going after these connected systems we just talked about, asking how they can compromise that entire system, not just one car at a time.

Nathaniel Nelson
Andrew, before we get into all of the detail of what he said there, can you just give me a brief overview? We've talked about it in a couple of episodes before, but what does the attack surface of my car look like? Because I have some notion that my center console is a computer, and maybe some other parts of the car, but it sounds like it's more than that.

Andrew Ginter
Yeah, we had Ken Tyndall on, and he was one of the designers of the CAN bus, which is the dominant communication system used in modern vehicles. I recall he talked about the rate at which features were being added to vehicles. For example, take a feature that says you can only start the car if your foot’s on the brake. He said for each feature they used to run a wire, a small wire with an analog signal, from let’s say the brake sensor directly to the logic that controlled the key and the ignition.

And there were a lot of features being added. And so for every feature where one part of the car was relevant to another part of the car, you had to run a new wire. He said they did a projection: at the rate at which new features were being added, they figured that new cars by the year 2050 would be solid copper, which is, of course, nonsense. And so they invented the CAN bus. And so now most devices in vehicles that are relevant to a feature, like the brakes when you’re starting the car, have a little CPU.

They get power on one wire and network communications on another little wire, so now every piece of the car has one or two wires running to it, maybe just one if you can run both power and signal over the same wire, rather than a gazillion wires, one for each feature that affects another part of the car. Which means a modern car has two or three hundred CPUs in it, each with a little wire or two running to it. This is the modern vehicle. There’s a lot of software in the vehicle.
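To make the CAN bus idea concrete, here is a minimal sketch in Python. This is purely illustrative, not real automotive firmware: the frame layout is simplified, and the arbitration ID is a made-up value. It shows the pattern Andrew describes, where the brake sensor broadcasts its state on a shared bus instead of having a dedicated wire to the ignition logic.

```python
from dataclasses import dataclass

# A real CAN frame carries an 11-bit arbitration ID (which also sets
# message priority) and up to 8 data bytes; every node sees every frame.
# The ID below is a made-up value for illustration.
BRAKE_STATUS_ID = 0x120

@dataclass
class CanFrame:
    arbitration_id: int
    data: bytes

def brake_node_frame(pedal_pressed: bool) -> CanFrame:
    """The brake sensor's CPU broadcasts its state on the shared bus
    instead of running a dedicated analog wire to the ignition logic."""
    return CanFrame(BRAKE_STATUS_ID, bytes([1 if pedal_pressed else 0]))

class IgnitionNode:
    """Listens on the shared bus; no direct wire to the brake sensor."""
    def __init__(self):
        self.brake_pressed = False

    def on_frame(self, frame: CanFrame) -> None:
        # Ignore frames that aren't the brake status message.
        if frame.arbitration_id == BRAKE_STATUS_ID:
            self.brake_pressed = frame.data[0] == 1

    def allow_start(self) -> bool:
        return self.brake_pressed

ignition = IgnitionNode()
ignition.on_frame(brake_node_frame(False))
assert not ignition.allow_start()   # foot off the brake: no start
ignition.on_frame(brake_node_frame(True))
assert ignition.allow_start()       # foot on the brake: start allowed
```

Adding another feature that depends on brake state means subscribing another node to the same frame, not running another wire, which is the point of the projection Ken Tyndall described.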

Nathaniel Nelson
And then how does that connect to Matt’s domain, the cloud?

Andrew Ginter
Yeah, so many vehicles are connected through the cellular network or by other means, satellite, whatever, but most often I think it’s cellular, to the vendor, whoever made the car. Matt’s business, Upstream Security, is interested in the big 18-wheelers and tractors, anything that moves, but let’s stay with cars for now. You buy a car from whoever, Chrysler, Ford, whatever. A lot of the cars are connected cellularly into the cloud so that you can start them remotely from your cell phone, or manage charging for electric vehicles. These networks of two or three hundred CPUs in the vehicle are now connected through the internet into cloud systems. And of course, anything connected through the internet can be attacked through the internet. The cloud systems can be attacked through the internet. And this is the focus of today’s conversation: what’s happening in these cloud systems, and how are they being protected?

Nathaniel Nelson
Great. Understood. And maybe you get to this later in the interview, I don’t know, but the statement that stood out most to me from Matt was this notion that over 50 percent of attacks that happened in the last year were against thousands or millions of vehicles at one time.

Now, personally, and I don’t know if I’m just not up on the news, I have never heard of a cyber attack against a vehicle that wasn’t conducted in a laboratory setting or in an experiment of some kind. So what exactly was Matt referring to there?

Andrew Ginter
Well, that’s a good question. And that in fact is kind of the next question I asked our guest. So why don’t we get back to Matt and have him give us the answer first?

Andrew Ginter
So that’s a lot, hundreds, thousands, millions of vehicles at once. Can you give us an example? What has happened? What are we worried is going to happen?

Matt MacKinnon
Yeah, there’s a variety of things that are happening, and I can give you a couple of real-world examples of things that we’ve seen in our company’s interactions. One is what we like to call a VIN-spray attack. And this is kind of interesting. Imagine a bad actor using their app on their phone to try to authenticate to many vehicles at one time. So not just connecting to their car, but connecting to many vehicles at one time.

If you can trick a user into accepting, sure, you can connect, then they’ve basically given over control of their vehicle, and the attacker can remote start or modify the car, or steal data off the car. The attacker doesn’t have to be anywhere near you. They could be on the other side of the world, using the APIs that connect your phone the way you’re supposed to, but using them in a malicious way.
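One way a telematics backend might notice the VIN-spray pattern Matt describes is to flag any client that attempts pairing against an unusual number of distinct VINs in one window. This sketch is an assumption-laden illustration, not Upstream's detection logic: the event format, the `client_id`/`vin` field names, and the threshold of 10 are all invented for the example.

```python
from collections import defaultdict

# Flag API clients that attempt to pair with unusually many distinct
# VINs in one observation window -- the "VIN-spray" pattern. The
# threshold and the (client_id, vin) event shape are illustrative.
VIN_SPRAY_THRESHOLD = 10

def find_vin_sprayers(auth_events, threshold=VIN_SPRAY_THRESHOLD):
    """auth_events: iterable of (client_id, vin) pairing attempts
    from one time window. Returns the set of client_ids that touched
    at least `threshold` distinct VINs."""
    vins_per_client = defaultdict(set)
    for client_id, vin in auth_events:
        vins_per_client[client_id].add(vin)
    return {c for c, vins in vins_per_client.items() if len(vins) >= threshold}

# A legitimate owner retries against one VIN; a sprayer walks many VINs.
events = [("app-legit", "VIN0001"), ("app-legit", "VIN0001")]
events += [("app-spray", f"VIN{n:04d}") for n in range(50)]
assert find_vin_sprayers(events) == {"app-spray"}
```

The legitimate app's repeated attempts against a single VIN never trip the rule; only breadth across vehicles does, which is what distinguishes spraying from a forgetful owner.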

Matt MacKinnon
There are similar kinds of examples using enterprise IT and API security techniques to generate tokens to connect to many vehicles at one time and execute remote commands. But there are also cases that aren’t directly stealing data, things like odometer fraud: rolling back odometers so that the mileage on your car isn’t as high as it really is, to be able to get a warranty claim.
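Odometer fraud of the kind Matt mentions is detectable from telematics history, because reported mileage should never decrease. A toy sketch, with an invented record format rather than any vendor's actual schema:

```python
# Odometer rollback detection: mileage reported over telematics should
# be non-decreasing for a given vehicle. Any reading below the previous
# high-water mark is a fraud signal. The (timestamp, km) record format
# is an illustrative assumption.
def find_rollbacks(readings):
    """readings: list of (timestamp, km) tuples for one vehicle,
    ordered by timestamp. Returns timestamps where the reported
    mileage dropped below an earlier reading."""
    suspicious, high_water = [], None
    for ts, km in readings:
        if high_water is not None and km < high_water:
            suspicious.append(ts)
        high_water = max(km, high_water or km)
    return suspicious

history = [(1, 80_000), (2, 81_500), (3, 62_000), (4, 62_100)]
assert find_rollbacks(history) == [3, 4]
```

Note that every reading after the rollback stays flagged until the odometer passes its old high-water mark, so even a one-time tamper leaves a lasting trail in the data.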

Matt MacKinnon
Or stealing power from an EV charging station. So these are all variations on real things that are happening right now, today. Some are very bad, with people trying to take over vehicles. Other times people are trying to steal data, and other times people are just trying to steal service or steal some money.

Andrew Ginter
So can we talk a little bit about who’s doing this? I mean, rolling back the odometer, anybody who wants to cheat someone does this for their own vehicle, for one vehicle. There’s little benefit to be had in rolling back the odometers of a million vehicles. So people might want to tamper with their own vehicle. Who’s tampering with other vehicles? Why would people do this? What’s in it for them?

Matt MacKinnon
Like a lot of things, at the end of the day, a lot of times it just comes down to money. A lot of these attacks are based around stealing data. And stealing data can be done by anybody, a lot of people all over the world, bad organizations. It’s ransomware, effectively, just a specific variety of ransomware: people trying to steal data, sell data, collect data from a variety of things. There’s another aspect which we’re not seeing a whole lot of, but it’s definitely a concern, which would be brand damage. Imagine if someone were able to take control of an entire fleet of vehicles of some brand, some make and model. Imagine the fear that would arise if a certain make, I don’t want to name a specific one, obviously, just stopped working tomorrow morning, right? That would be tremendously upsetting to many, many people. So there’s a variety of things there, but at the end of the day, the vast majority of it is really about stealing data that they can sell, and other variations on ransomware, trying to get data from these automotive manufacturers.

Andrew Ginter
OK. Now, we’re on the Industrial Security Podcast. I worry about heavy industry. Now, what I don’t know is how diverse the North American fleet of 18-wheelers, the big heavy trucks, is. But I’m wondering, is it credible that, let’s say, a nation state, Russia or China, someone who is involved in a physical conflict and wants to impair the delivery of goods, either in the country they’re fighting with or in an ally like, let’s say, Ukraine, could break into one or two or three vendors, the people who build the big 18-wheelers, and, I don’t know, remotely turn them all off? Cripple a third of the nation’s 18-wheeler fleet by GPS coordinate? Is that a credible scenario?

Matt MacKinnon
It is, and there are sort of two different dimensions that are worth talking about there. One is, as you’re describing, trucking is a huge part of our critical infrastructure under the CISA definition of what is critical infrastructure. And it ranges from manufacturing to emergency services to food and agriculture to healthcare and public safety. And it’s true that if you’re able to impact transportation, you can impact massively important components of the economy and our defense systems.

So to your specific question, can you go after trucks and disable a fleet? When we’re talking about cybersecurity, the big trucks are no different than cars. And frankly, heavy machinery for manufacturing or mining or agriculture, they’re really all connected in very similar kinds of ways.

And we have actually seen real attacks like that. Last year, there was an attack against something that’s called an electronic logging device. It’s not actually the truck itself. It’s an IoT device that gets installed in a truck. And that device is used primarily for logging things like hours of service, speed and location, and used for expense management, fuel and tax records, and things like that.

But they’re also connected directly to the trucks and to the CAN bus of the trucks. So they become an attack vector. And if you can compromise this device, you now have access to the actual operating system of the truck. And this did happen last year. It was pretty massive. There are over 14 million trucks in the United States that use these things. I don’t know how many of them were actually impacted, but these devices were out for the better part of a month. Drivers had to resort to paper and pencil to be able to track and log their hours. And to my knowledge, it didn’t actually impact the safety of those vehicles. So your worst case scenario that you described didn’t actually happen. But it gave us a real eye opener of how close you could get if you really wanted to.

Nathaniel Nelson
I was waiting for Matt to give some real life examples there, and that one sounds serious, but despite the severity of the case he only mentioned it in one or two sentences. Andrew, I’m wondering if you have any more detail about that story he just referenced, or any other similar ones like it.

Andrew Ginter
Well, Waterfall does a threat report, and I remember considering that incident for the threat report. Our criteria are different, though. We count events that had physical consequences. And I remember looking at this event and saying, the logging was impaired, but the physical process, the trucks, kept moving. They still delivered goods all over the nation. They weren’t delayed at all. Some of the electronics, the logging mechanism, was impaired, and the operators, the drivers of the trucks, had to fall back to manual operations, but the trucks kept going.

Andrew Ginter
In the report, what I recall is that transportation is the second biggest industry hit by cyber attacks with physical consequences. And most of those incidents were ones where IT systems were impaired that were essential to, let’s say, dispatching the trucks. So you had to stop the movement of the trucks because you couldn’t figure out where stuff had to go anymore. Shipments were delayed. This is the most common sort of physical consequence of attacks in transportation. But the scenario here, where the cloud’s involved, is more reminiscent of a story we talked about a few episodes ago. In Ukraine, the battlefront with the Russian invasion moved back and forth. And at one point, the Russian army stole a bunch of John Deere farm equipment, $5 million worth of it, from a John Deere dealership in a small town that they’d taken over. John Deere was unhappy with this, having their stolen equipment driven 700 kilometers into Russia. And so they reached through the cloud, because they have cloud connections to all these vehicles, and turned off all of the stolen equipment. So that’s an example, not of a cyber attack, but of a capability. A lot of people looked at that incident and said, yay, stick it to the invaders. And then they said, just a minute. What just happened here? What if John Deere gets it into their head to turn off all of the vehicles, all of the tractors in Europe, at planting time? What if the Russians get it into their head to break into the John Deere cloud and do that? So this is kind of the scenario that we worry about. But in the Upstream threat report, most of the incidents I saw that affected thousands or millions of vehicles had to do with theft of information from those vehicles and holding it for ransom.

Andrew Ginter
So that all makes sense. Now, one of the reasons I asked you on as a guest is because you folks at Upstream have stuff that I’ve never heard of to address this problem. So, having defined the problem as, cloud systems can reach into cars, and there on the Internet they can be compromised, can you talk about your solution? What do you guys do, and how does that work?

Matt MacKinnon
Yeah. For those of your listeners that are in enterprise IT or are familiar with enterprise security, maybe I’ll make an analogy and then I can dive into the details. If you understand endpoint security or network security, you’re familiar with the term XDR platform. You also need a Security Operations Center to manage that, and you probably want some threat intelligence to support it. That’s effectively what we’ve developed for mobile devices: cars and trucks and tractors and other vehicles.

The first of the three components really is that XDR platform. And what does that mean? That means we collect data from the vehicle itself, from the telematics cloud, from the APIs that are calling in and out of it. And we stitch that all together in the cloud in what amounts to a digital twin of a vehicle. So for every vehicle we monitor, and we monitor over 25 million vehicles today, we’ve got a digital twin of exactly what it is, where it’s going, what it’s doing, how fast it’s going, everything from oil pressure to geolocation to the last remote command that came to it from some API in the cloud. That gives us the ability to look for anomalies, look for patterns of bad behavior, to identify something like, hey, why did a remote start of that vehicle come from a country that the vehicle isn’t in?
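The geolocation mismatch Matt mentions can be sketched as a simple rule over a digital twin's last known state. This is a toy illustration under stated assumptions, not Upstream's implementation: the twin and command dictionaries, their field names, and the two rules are all invented for the example.

```python
# Toy digital-twin record and two anomaly rules: a remote-start command
# arriving from a different country than the vehicle's last reported
# location is suspicious, as is a remote start while the vehicle is
# moving. All field names here are illustrative assumptions.
def remote_command_anomalies(twin, command):
    """twin: dict with the vehicle's last telemetry;
    command: dict describing an incoming remote API call.
    Returns a list of human-readable findings (empty if clean)."""
    findings = []
    if command["origin_country"] != twin["last_country"]:
        findings.append("command origin differs from vehicle location")
    if command["name"] == "remote_start" and twin["speed_kph"] > 0:
        findings.append("remote start while vehicle is moving")
    return findings

twin = {"vin": "VIN0001", "last_country": "US", "speed_kph": 0}
ok = {"name": "remote_start", "origin_country": "US"}
bad = {"name": "remote_start", "origin_country": "RU"}
assert remote_command_anomalies(twin, ok) == []
assert remote_command_anomalies(twin, bad) == [
    "command origin differs from vehicle location"
]
```

As Matt notes, each rule is trivial on its own; the hard part is having vehicle telemetry and API traffic stitched together in one place so the comparison is even possible.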

Or little things like that, that seem very simple on the surface but are very complex to see unless you have the breadth of data that we do. So that’s one piece. That’s the technology piece. But you then need someone to actually operate this thing, right? So a Security Operations Center, or we’ve coined the term the Vehicle SOC, the V-SOC.

Matt MacKinnon
A lot of operators don’t really have this capability or the skill set themselves. So we offer that as a service on top of our platform. Sometimes people do it themselves. Sometimes people bring in an MSSP to do it. The last component of the solution, though, of course, is threat intelligence. And there are lots of vendors out there, lots of providers, that will do threat intelligence for classic enterprise things and some OT things. But what we do there is very, very specific to the automotive industry: every engine control unit and software version and hardware version. Cars are aggregations of many, many components. So we take that whole software bill of materials and hardware bill of materials, and we actually have a team that does research on the deep web and the dark web, interacts with the bad guys, and figures out what they’re up to. And so when you put that all together, the XDR-like monitoring, the SOC service to actually operate the platform, and then the threat intelligence of what the bad guys are really doing and what they’re working on, you end up with this really complete end-to-end solution for being able to monitor and make sure that vehicles and these devices are actually secure.

Andrew Ginter
So you just described a detective capability: detection, threat intel, a deep understanding of these systems. When there’s an incident, do you also respond and recover? And to prevent incidents, do you have anything that you embed in the vehicles or in the cloud of your protected customers?

Matt MacKinnon
Yeah, so you’re right. Our primary focus is on detection. But all those other pieces, respond and recover and protection, are equally important. So you’re right, we are not in-line. We don’t have a way ourselves to natively block something that’s happening. But we do that via integration in the partner ecosystem around us. So it may be that if it is a more modern vehicle, a software-defined vehicle, there are ways that we can actually send commands or updates back to a vehicle to tell it to stop a behavior, or we can integrate with the network itself. So if a device is cellular connected, can we talk to the cellular provider to drop that connection? So we can’t do it directly, but we can integrate to do it. From a protection standpoint, in the design time phase, we do work directly with the automotive manufacturers themselves, the chip makers, as well as the software providers, everybody from Red Hat to Amazon and Google to Qualcomm and others, where we’re involved and can be influential in the way that those systems are designed, using our threat intelligence, using our knowledge of what bad actors are doing, to help make sure that there is a secure development process and that these devices have the right level of onboard protection in place.

Andrew Ginter
And you folks have been doing this for a while. You have customers, the big automobile makers all over the world. Can you talk about your customers’ experience using this technology? What have you been finding? What’s of value to them?

Matt MacKinnon
It’s very interesting to see what people can use the platform for. We do see a lot of cyber attacks, and we talked about the VIN-spray and some of the API examples before. But the platform we have, the visibility and vulnerability insight that we provide, definitely lends itself to a bunch of other things. We’re seeing customers use the platform for identifying theft, stolen vehicles, and seeing vehicles being in places they shouldn’t be.

We’re seeing fleet operators use the data that we have to be able to monitor where fleets are and whether the vehicles are being used appropriately, everything from fast accelerations and hard braking to other types of usage and mileage for fleet management. The other use case that’s emerging to be more common is related to electric vehicles and the use of their batteries.

And there are a lot of new behaviors people need to learn about properly managing a battery. How do you charge it? When do you charge it? Things like that. And we can provide some really interesting insights for those kinds of use cases. So customer satisfaction kinds of things as well. So one of the fascinating and fun things about the company and the product and the technology is the uses of the technology beyond just traditional cybersecurity.

Andrew Ginter
Nate, let me jump in here. The reason I asked that question of Matt is that he’s got basically a detective, intrusion detection, attack detection technology here. And what I’ve observed is that almost whenever we deploy a detective technology into an OT system, we get operational insights as well as security insights. I remember 20 years ago, when I was deploying intrusion detection systems, the first intrusion detection systems that went into industrial networks, the engineers at the site would be looking over our people’s shoulders while we were tuning the system, tuning out false alarms and figuring out the right way to report on these systems. And they’d look over our shoulders and say, what’s that? That’s a lot of traffic between the engineering workstation and a particular PLC, sucking up 80% of the bandwidth of the network going to that family of PLCs. What is that? And we’d dig into it. And well, a test had been left running on the engineering workstation that should have been turned off. This is why the whole system was a little bit sluggish, not slow enough that anyone raised an alarm about it. But once you lift the lid on these OT systems and you see what’s inside, often there’s operational benefits.

I mean, Matt talked about electric vehicles. Batteries are a huge part of electric vehicles. And these batteries are chemical systems. If you deep discharge them, or don’t discharge them enough, or charge them sub-optimally, battery life is reduced: the lifetime of the battery, the years of battery life, the range you get on the battery. And so the sense I had is that before the Upstream security technology went in, fleet vehicle owners and electric vehicle vendors might not have had the data. They didn’t have the instrumentation to gather all this data. Well, Upstream gathered all the data to figure out if there was an attack in progress, looked at the data and said, nope, there’s no attack in progress, and then went back to the vendors and said, by the way, we have all this data. Would you like to use it to change the design, or improve the design, or optimize the design of your electric vehicles so your batteries last longer? Yes, please.

So a lesson here is that there are often secondary benefits to deploying detective security measures. You get insights by looking at data that you just didn’t have before.

Andrew Ginter
So this is all good. As someone involved in industrial cybersecurity, heavy industry, mines, high speed passenger trains, I always worry about safety.

We’ve talked about credible threats to safety as future concerns. Can you talk about what’s happening there? How worried should I be about the safety of my cloud connected vehicle?

Matt MacKinnon
It’s a really important topic. I think the good news is, as an individual consumer, should you be worried about your connected vehicle from a safety perspective? Probably not. I certainly don’t worry about driving my car every day. But on a grander scale, safety really is important. Right? We’re talking about software in vehicles, the connection between software and the physical world. You’ve got vehicles, cars, trucks, tractors; these things are thousands of pounds and they move at very high speeds. The implication of a cyber incident for safety is pretty dramatic. And fortunately, we’re not seeing that a whole lot, but it is possible and certainly could happen.

And so the idea that someone might impact a bunch of vehicles to cause accidents is real. That absolutely could happen. We have seen, not quite safety, but we’ve seen attacks that were designed to cause congestion and gridlock, by car services all being called into one location, and a lot of people start to panic when there’s gridlock. So there are variations on safety. But the other related concept that I think is also really important is one I borrow from the military world, and that is the concept of readiness. It applies to almost any industry, really. And that is: is your vehicle ready? Today, when a lot of people think about vehicles and readiness, they think about, is there gas in the tank? Did you change the oil? Is there air in the tires?

Well, now that these vehicles are also software defined, or have software connectivity, readiness includes: is it cyber secure? Has someone impacted it from a cybersecurity perspective? It’s not a concept that I hear a lot of talk about today, but I do think it’s something we’re going to see more and more, especially in industries that rely on vehicles for their business, like delivery and trucking and things like that.

Andrew Ginter
So that makes sense. You are deep into automotive cybersecurity. We’ve covered in this podcast a bit of what’s happening in the vehicle with you folks, a bit of what’s happening in the cloud. What does the future hold? What is the future of automation in vehicles large and small?

Matt MacKinnon
Yeah, what we’re seeing for sure is what is known in the industry as the software-defined vehicle, where the cars and trucks and tractors and all these devices become computers first and vehicles second, almost. And so that increases the attack surface. I mean, the power of these vehicles is pretty amazing in what they can do. And we’ve all been watching the future of autonomous driving. But that also applies to connected agriculture, autonomous agriculture, robotics in all sorts of ways. So we’re seeing more and more of these vehicles, or mobile devices, become connected and become software defined.

And that has amazing business benefits and productivity benefits that we’re all going to benefit from. But it does increase the attack surface, and it makes these things much more complicated, much more targeted, and much harder to secure. So it is an area that is rapidly evolving. We’d be remiss to talk about this without throwing in the implications of Gen AI, the data that these things are going to generate, and how that’s going to both make the bad guys better and make us better at protecting. But yeah, the software-defined vehicle, the increased volume of software in vehicles, is really the future of the industry, and the impacts on cybersecurity are clear.

Andrew Ginter
Software-defined vehicles. That’s a scary thought for someone like me who’s focused on the worst that can possibly happen. But if we have people working on the problem, I’m confident we can work something out that’s going to keep us all safe. Thank you for bringing these insights and these worries to the podcast. Before I let you go, can you sum up for our listeners: what are the key takeaways here?

Matt MacKinnon
Yeah, thanks, Andrew. I would start by reiterating what you just said, which is, the good news is, for the average consumer, the average driver, it’s just not something you have to spend that much time worrying about. The manufacturers are taking it seriously. There are software vendors like Upstream that are taking it seriously. We’re working on it. It does happen, but it’s not something everybody needs to panic about: don’t stop driving. The next thing, though, is to also be aware that this isn’t just about cars. There are cars and trucks, and I have alluded to agriculture and tractors, but this is continuing to get bigger and bigger. The notion of software-defined anything, and software-defined vehicles of all varieties, is growing, not slowing down.

As we get into autonomous vehicles, that’s going to make it even more complex. Don’t worry about it too much, but it is getting bigger at the same time. The last thing is, this is what we do at Upstream. The company was formed for this. It’s what we do. We take it seriously. We also care very much about giving back and contributing. And that’s why we do the annual report and the research that we publish, and host webinars, most of which is information sharing and thought leadership, not trying to sell stuff. So please check us out and take a look at that report. It is free, anybody can take a look at it, and we’re already starting to work on next year’s now.

Nathaniel Nelson
So, Andrew, cars are a microcosm for cybersecurity at large.

Andrew Ginter
Indeed, and the cloud is coming. The cloud is coming to many industries. In my experience, manufacturing, all kinds of manufacturing, is using cloud systems quite intensively. More conventional critical infrastructure, water systems, power plants, are using cloud systems somewhat, and increasingly. And it looks like the cloud has arrived for automobiles and other kinds of moving equipment and is being used fairly intensively. All of those uses, I think, are going to increase. This is the future. And of course, what we have then is lots more software involved, and lots of opportunity to attack that software.

Attacks are targeting cloud systems, and there can be physical consequences. So I think it’s a big new field. It’s just going to become more important as the years go by, and is, I guess, something new to worry about in the field of industrial cybersecurity.

Nathaniel Nelson
Well with that, thank you to Matt McKinnon for his interview with you. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure Nate, thank you.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.


The post Hitting Tens of Thousands of Vehicles At Once | Episode 131 appeared first on Waterfall Security Solutions.

]]>