Credibility, not Likelihood – Episode 140
Safety defines cybersecurity - Kenneth Titlestad of Omny joins us to explore safety, risk, likelihood, credibility, and deterministic / unhackable cyber defenses - a lot of it in the context of Norwegian offshore platforms.
Share this podcast:
“Large scale destructive attacks on big machinery is not something that I would consider a credible attack.” – Kenneth Titlestad
Transcript of Credibility, not Likelihood | Episode 140
Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.
Nathaniel Nelson
Welcome everyone to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today. Andrew, how are you?
Andrew Ginter
I’m very well, thank you, Nate. Our guest today is Kenneth Titlestad. He is the Chief Commercial Officer at Omny, and he’s also the chair of the Norwegian Electrotechnical Committee subgroup working on IEC 62443. This is the Norwegian delegation to the IEC, which produces the widely used IEC 62443 standard.
We’re going to be talking about credible threats: what should we be planning for, security-wise? And by the way, I happened to have the opportunity to be in Norway, and I visited Kenneth at the Omny head office, where they have a lovely recording studio. So we recorded this face to face, in their studio at their head office.
Nathaniel Nelson
Then let’s get right into your conversation with Kenneth.
Andrew Ginter
Hello, Kenneth, and welcome to the podcast. Before we get started, can you give our listeners a bit of information about your background, what you’ve been up to, and the good work that you’re doing here at Omny Security?
Kenneth Titlestad
Thank you so much, Andrew, and welcome to Norway and our office. I’m so glad to have you visiting us. My name is Kenneth Titlestad, and I’m working as Chief Commercial Officer in Omny; I’ve just started here. I came over from Sopra Steria, where I was heading up OT cybersecurity. I’d been doing that for six years.
Before that I was working in Equinor, also on OT cybersecurity, so I’ve been working in the field now for almost 15 years. And for the last five or six years I’ve been chairman of the Norwegian Electrotechnical Committee group that is handling IEC 62443. So I’ve been diving deep into cybersecurity for quite many years.
And at Omny, we are developing a software platform for handling cybersecurity and security for critical infrastructure. It contains a security knowledge graph and AI that provides actionable insights into security for critical infrastructure. So it’s about IT, OT and physical infrastructure.
Andrew Ginter
OK. Thank you for that. Our topic today is credibility, and that means talking about risk. A lot of people think risk is boring. When people enter the industrial security space, they want to know about attacks, the technical bits and bytes. You tell me that you got interested in risk a very long time ago. Can you talk about that? Where did that come from?
Kenneth Titlestad
Absolutely. I’m not sure I considered it as a risk, or as a field of expertise, back then. When I was just a small boy, my dad worked as a control room technician offshore for ConocoPhillips, or Phillips as it was called back then. In 1977, when I was only two or three years old, he was working at the Ekofisk field offshore. I don’t remember this myself, of course, but it was always a topic around the dinner table at home: what it was like working in the oil and gas business. In 1977 he was on his way out to the platform when the big, horrible blowout happened. He hadn’t arrived at the platform yet, but he was on his way out there. So the safety risks involved in oil and gas really were a big topic around the dinner table all the time.
So I was always listening with my small ears back then, a bit fascinated by this world. I didn’t see the real danger in it, but I was trying to picture in my mind what it was like to actually work in those kinds of environments.
So I was kind of primed when I was just a small boy. Later on I was more into computers: I did a lot of gaming and programming on the Commodore 64, and I started to work at Equinor on the IT side. But I was still fascinated by the core business, oil and gas, production and exploration. So when I got my first trip offshore, I felt that the circle was closed. I saw the big industrial world that my dad had been talking about for years, and the risk perspective also kicked in. The first thing you meet when you step on board such a platform is the HSE focus; there is a lot of focus on HSE.
And it’s for a reason. I fully understood it only when I actually came on board such a facility: it can be really dangerous if you don’t have control over what you’re doing. That’s when I actually saw the full scale of risk as a perspective.
Andrew Ginter
Yeah. Offshore platforms are intense. I’ve never set foot on one myself, but I’ve heard the stories: quite the environment. And since we’re talking about industrial cybersecurity, offshore platforms are intense in terms of physical risk. Can you talk about cyber?
Kenneth Titlestad
It’s an emerging topic. When I was working in Statoil (now it’s Equinor), we started to look into that area around 2010, 2011. I still remember the day when people came charging into the meeting room and started talking about the news of Stuxnet. I think we got to hear about it in 2010. I was working on the IT side, responsible for large parts of our Windows infrastructure in the company, and I started to look into what this SCADA thing was that I didn’t know about. PLCs? I had never seen a PLC. I didn’t know there was this other kind of digital equipment operating critical infrastructure. So with Stuxnet I started to dive into the landscape of OT cybersecurity.
Kenneth Titlestad
And as a company, we started a big journey back then on really making OT much more cyber secure. Stuxnet was kind of a kickstart for it.
Nathaniel Nelson
Andrew, it feels like there are certain seminal cybersecurity incidents in the OT world. We often reference the 2007 Aurora test, maybe Triton and Industroyer. But Stuxnet is that foundational thing that set the timeline for everybody, right?
Andrew Ginter
Indeed. And I was active in the space: I was leading the team at Industrial Defender building the world’s first industrial SIEM at the time. So Stuxnet was big news. I did a lot of work on Stuxnet. I had a blog at the time, and I wrote every time I learned something new about it, because somebody had published a report, somebody had published another blog.
I also did a little research of my own. I published a paper on how Stuxnet spread, because analysis had been done of the artifact, the malware itself, but it had been done by IT people, at Symantec I think, and a bunch of other people had analyzed the malware. That’s work I couldn’t do; I’m not a reverse analyst.
But I sat down with Joel Langill, I sat down with Eric Byres, and we investigated the impact that Stuxnet would have in a network. What would happen if you let this thing loose in a network, given our understanding of the Siemens systems? Joel was an expert on the Siemens systems; Eric and I were more expert more generally, on firewalls and industrial systems. So we all contributed to this paper and said: here’s what happens if you let Stuxnet loose in an industrial network.
And in hindsight, I have to wonder if we didn’t do more damage than good, because a lot of people learned stuff about Stuxnet, but there was only one outfit that benefited, and that was Iran’s nuclear weapons program. That was the only site in the world that was physically impacted.
That’s why I regret some of the stuff that I published about Stuxnet.
Nathaniel Nelson
Do you recall if that research got traction, whether it might have gotten over there or is there no way to tell?
Andrew Ginter
I have no way to tell. I do recall a conversation sometime later. Because I’m a Canadian, I work with the Canadian authorities, and I remember a conversation with Canadian intelligence services. At one point, when I figured out that there was only one place in the world physically benefiting from my research, I stopped publishing anything about Stuxnet. And I remember some time after that telling Canadian intelligence: I’ve stopped publishing anything about Stuxnet. You don’t have to tell me anything. But in the future, if you ever see me putting out information that’s helping our enemies, tap me on the shoulder, would you? Tell me, “Shut up, Ginter, you’re doing more harm than good,” and I will shut up. So yeah, I look back on Stuxnet with mixed emotions. It was a wake-up call for the industry; a lot of people learned about cybersecurity because of Stuxnet. But who benefited from all that research?
OK. So that’s Stuxnet. A lot of people got started in the OT space because of it; it was the big news years ago.
Andrew Ginter
Can I ask you: let’s talk about industrial security and the work you’ve been doing. Stuxnet is where it got started. Where have you wound up? What are you up to today?
Kenneth Titlestad
Yeah, as you say, it’s been 15 years, and for me it’s been a very interesting journey. Back in 2010, when Stuxnet hit the news, I wasn’t immediately diving into OT cybersecurity full time. I was working on the IT side, trying to secure the Windows environment in a large oil and gas company.
But after a while I moved more and more over to the OT side of security, and I had my first trip offshore to an oil and gas platform. I think that first trip was in 2013, so actually three years after Stuxnet, and then I was going out just to do some troubleshooting on a firewall. More and more, though, I was moving into OT cybersecurity. I moved over to Sopra Steria, I think it was in 2017, and in the end I was working hard on finding really proper solutions for OT cybersecurity. When potential nation states are targeting you, what do you do? You must have the mindset of assume breach, and these kinds of systems, with the PLCs and all, are really, really vulnerable. What do you do when you are being targeted? That’s when I started to look into it. I heard rumors that there could be something that was non-hackable.
So I started investigating unidirectional data diodes and was exposed to Waterfall; that was one of the first examples of where I heard about non-hackable stuff. I also got to hear about Crown Jewel Analysis and Cyber-Informed Engineering; back then it was Consequence-driven, Cyber-informed Engineering. Those kinds of topics really sparked an extra interest for me, because for some attack vectors, for some of the risks, I actually saw a solution that could remove the risk instead of just mitigating it.
Andrew Ginter
So your first foray: everyone was interested in Stuxnet, but you started working on the problem, you said, with a firewall, and to a degree that makes sense. The IT/OT firewall is often the boundary between the engineering discipline (the platform, the industrial process) and the IT discipline, where information is the asset that needs to be protected. That boundary is something that both the engineers and the IT folks care about, so that kind of makes sense. I’m curious: you got out to the platform, you were tasked with the firewall. What did you find?
Kenneth Titlestad
Yeah, it was actually a long-lasting ticket we had in our system. There was a firewall between IT and OT that was noisy: it was creating a lot of events and alerts on traffic that it shouldn’t have, so I was tasked to go out there and try to troubleshoot it. We absolutely didn’t think it was a cyber attack or any kind of evil intent, and when I got out there I could see that it was just an incorrectly configured firewall rule.
Nothing dangerous, no cyber attack involved. But I got to thinking of a scenario where it had actually been a cyber attack, one that created that much noise on a security boundary, a security component sitting on the outskirts of OT. Shouldn’t the OT environment do something, shut down or go into a more fail-safe state? So I got interested in the instrumentation behind your security components on the outskirts of OT. That’s a topic I continued to explore for several years, having in the back of my mind cyber-informed engineering, non-hackable approaches, unidirectional systems. And at S4 last year I talked about the safety instrumented system, because safety has always been a particular interest of mine. I talked about the cyber-informed safety instrumented system: at some point, when you’re under attack, shouldn’t the safety instrumented system, sort of the big brain in the room, actually take an action? An instrumented, automated action, not necessarily fail-safe only, but a failover to a more safe and secure state?
Andrew Ginter
So that makes sense in theory. If the firewall was saying “help, help, I’m under attack” over and over again, should some action not have taken place on the OT side? But let me ask you this: it was a false positive. It would have shut down the platform, a very expensive platform, unnecessarily. Can we detect cyber attacks reliably enough to prevent that kind of unnecessary shutdown? And if we do shut down whenever there’s a bunch of alarms, is that not a new sort of denial-of-service vulnerability? The bad guys don’t even need to get into OT. They just need to launch a few packets, the firewall generates some alarms, and the platform shuts down without them even bothering to break into OT. Is that really the right way forward?
Kenneth Titlestad
No, I totally agree, it’s not a good approach going forward. But at the same time, I think shutting down one time too many is better than not doing it at all. We should be kind of overreacting and going into a fail-safe state. It could cause unnecessary downtime, and it is a vulnerability on the production side, but I think the false negatives are much more dangerous: the cases where we don’t see any attack but one is actually happening. So we need to reduce the false positives, but it’s much more important to reduce the false negatives.
Andrew Ginter
So, just listening to the recording here (this is not something I discussed with Kenneth), we were talking about automatic action when we discover that an attack might be in progress, for example because there are a lot of alarms coming out of the firewall. He agreed with me that shutting down the platform was probably an overreaction, because that introduces a new attack vector: the bad guys just need to send a few packets against the firewall, generate a few alarms, and the whole platform shuts down. I agreed with him that something should be done, but we didn’t really figure out what. Here’s an idea in hindsight: a number of jurisdictions are introducing what they call islanding rules, meaning if IT is compromised, you need to be able to, basically, power off the IT firewall so nothing gets through into OT anymore.
For the duration of the emergency, you have the ability to shut off all communications into OT. The regulation says you must be able to island, so you have that capability. I wonder if it isn’t reasonable to trigger islanding automatically when you discover a whole bunch of alarms coming out of anything. Most modern-day attacks are not like Stuxnet, where you let it loose and it does its thing; most modern-day attacks have remote control from the Internet. And if you island, if you break the connection between IT and OT, then the attack loses that control.
If there was an attack in the OT network, the bad guys can no longer control it. They can no longer send commands. And this is not new. The term islanding is a little bit new, but the concept of an automatic shut-off has been bandied about for many years. Given that the regulators are demanding an islanding capability, maybe engaging it automatically from time to time is not the worst thing that can happen. It increases our security, and the impact on operations is minimal, because you’ve already deployed the ability to island.
You’ve developed the capability of running your OT system independently, so interrupting that communication for a period of hours at a time, while you track things down and say “oh, that was a false alarm,” comes, I’m guessing, at minimal cost. So there’s an idea.
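As an illustration only, the alarm-burst islanding trigger described above might look something like this sliding-window sketch in Python. The threshold, the window size, and the `island()` action are hypothetical values chosen for the example, not taken from any regulation or standard.

```python
# Illustrative sketch: trigger OT "islanding" when the IT/OT firewall
# emits a burst of alarms within a short time window. All numbers and
# the actuation step are assumptions for demonstration purposes.
from collections import deque
import time


class IslandingTrigger:
    def __init__(self, threshold=50, window_seconds=60):
        self.threshold = threshold      # alarms tolerated per window
        self.window = window_seconds    # sliding window length, seconds
        self.events = deque()           # timestamps of recent alarms

    def record_alarm(self, timestamp=None):
        """Record one firewall alarm; return True if islanding fired."""
        now = timestamp if timestamp is not None else time.time()
        self.events.append(now)
        # Discard alarms that have fallen out of the sliding window
        while self.events and now - self.events[0] > self.window:
            self.events.popleft()
        if len(self.events) >= self.threshold:
            self.island()
            return True
        return False

    def island(self):
        # In a real deployment this would sever the IT/OT conduit,
        # e.g. disable the firewall interface. Here it only logs.
        print("ISLANDING: severing IT/OT communications")
```

One design point worth noting: because islanding is assumed to be cheap (the site can already run independently), the threshold can be set aggressively low, accepting occasional false-positive islanding in exchange for cutting off remote-controlled attacks early.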
Andrew Ginter
OK. Well, let’s come back to our topic here. The topic is credibility, and we’re talking about the risk equation. The typical risk equation is consequence times likelihood. Generally we do it qualitatively, but we wind up with a number coming out of it that lets us compare different kinds of risks, high-frequency versus high-impact risks. Can you talk about that? Where does credibility fit in that equation?
Kenneth Titlestad
I think it fits very well into that equation, especially when we talk about the likelihood or probability part of it, the left side of the equation. It’s always a very difficult conversation to have when you try to identify the risk levels we are talking about, or the consequence levels involved. It’s sad to see that a lot of these conversations go astray because we’re not able to put a number on the probability or the likelihood, and I think the conversation gets much more fruitful if we can get rid of that challenge of trying to figure out a number for the probability or the likelihood.
Credibility gives us tools in our language to actually talk about the left part of the equation. It’s something that is a bit more analog, an analog value, and it lets us move more towards the consequence-driven approach, where the right side of the equation is the more important thing to talk about, as long as you consider the threat credible.
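To make the contrast concrete, here is a hypothetical sketch of what a consequence-first, credibility-gated screening could look like, as opposed to multiplying consequence by a likelihood number. The consequence categories, the `Scenario` fields, and the ordering rule are illustrative assumptions, not taken from IEC 62443 or any other standard.

```python
# Illustrative sketch: rank risk scenarios by worst-case consequence,
# using credibility as a qualitative yes/no gate instead of a numeric
# likelihood. Categories and field names are assumptions.
from dataclasses import dataclass

CONSEQUENCE_RANK = {"negligible": 0, "minor": 1, "major": 2, "catastrophic": 3}


@dataclass
class Scenario:
    name: str
    worst_case_consequence: str  # qualitative category, see CONSEQUENCE_RANK
    credible: bool               # judgment call: is it reasonable to believe?


def prioritize(scenarios):
    """Keep only credible scenarios, ordered worst consequence first."""
    credible = [s for s in scenarios if s.credible]
    return sorted(
        credible,
        key=lambda s: CONSEQUENCE_RANK[s.worst_case_consequence],
        reverse=True,
    )


# Example screening: a non-credible scenario is dropped entirely,
# regardless of how severe its consequence would be.
scenarios = [
    Scenario("ransomware shuts down production", "major", True),
    Scenario("swarm attack on millions of cars", "catastrophic", False),
    Scenario("noisy firewall causes nuisance alarms", "minor", True),
]
for s in prioritize(scenarios):
    print(s.name, "->", s.worst_case_consequence)
```

The point of the sketch is only that credibility acts as a filter on the left side of the equation, so the remaining discussion can focus entirely on ordering and addressing consequences.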
Andrew Ginter
Well, I have to agree. I’ve argued in my last book that likelihood is flawed at the high end of cyber attacks, not the low end; at the low end, likelihood actually works. At the high end, the outcomes of cyber attacks are not random. If the same ransomware hits a factory twice and all we’ve done is restore from backup (it took them down the first time, we restored from backup, we made no changes), then when it hits again, they’re going to go down the same way. It’s not random.
I argue that at the high end, nation-state targeting is not random either. It’s not that they try for a while and, if they don’t succeed, go try somewhere else. Nation-state threat actors keep targeting the same target until they achieve their mission objective. Once they’ve targeted you, it’s not random. Randomness, to me, doesn’t work at the high end. Credibility makes more sense: is the threat credible? Is the consequence credible? If this attack comes after us, is it reasonable to believe the consequence will be realized? Credibility is about what’s reasonable to believe.
I think it makes a lot of sense, but it’s new. I don’t see the word credibility in a lot of standards. Where does this sit? Is this something people are talking about?
Kenneth Titlestad
Yeah, absolutely. With the clients I’ve been working with, and also the professionals, we have discussed for some years now the big challenge of the likelihood or probability part of the equation. And without actually following standards or best practices, we’ve seen that we need to skip the discussion on probability or likelihood, talk about the consequence side first, and then revisit likelihood and probability afterwards. But I also see that IEC 62443, especially 62443-3-2, actually talks about consequence-only cyber risk analysis.
So that gives us an opportunity to move away from the discussions on probability. And of course, with the consequence-driven approach in cyber-informed engineering, we start to see more focus on the far right side, the consequence side, while leaving out what to do with the likelihood. I think with credibility we get some language-based tools to actually apply there, where we talk about it in a qualitative manner instead of having to force it into a number.
Andrew Ginter
So that makes sense to me. I have the sense that over the course of time, cyber attacks become more sophisticated, and more sophisticated attacks become credible. Attacks that were dismissed a decade ago as theoretical have actually happened. Do you see that? What do you see coming at us in terms of sophisticated attacks in the near future?
Kenneth Titlestad
I think that’s a really challenging question, looking far into the future, or far back into history, to try to extrapolate what we could expect. With Stuxnet, the attacks against Ukraine, Triton, Colonial Pipeline, we see incidents that have had a really high impact, but there are not very many of them.
But we see that those kinds of capabilities are being explored and put into different tools, so they can be used not only by nation states but also by criminal groups. With that kind of analysis, we can expect more and more sophisticated attacks, carried out by more and more unsophisticated groups. So we should expect an increase in high-impact incidents.
Andrew Ginter
OK, so if we’re not talking likelihood, we’re not talking probability, we’re talking credible. How do we decide what’s credible? How do we decide what’s reasonable to believe?
Kenneth Titlestad
Yeah, that’s a good question. We need to have some grasp of what is credible and what is not. I’m also of the opinion that the credibility part of the equation is a qualitative thing. It’s not a zero or a one; it’s on a kind of sliding scale, not easily defined. But if we were trying to see credibility as a zero or one: things that have actually happened, once or twice or three times, are credible. So the Triton incident, a cyber attack on a safety system, is now a credible attack, because it has happened.
And also near misses. Triton was kind of a near miss: they didn’t actually cause the destructive attack, but it could have happened. We have other near misses too, incidents that we should be considering.
Andrew Ginter
So that makes a lot of sense to me, credibility versus likelihood. But credibility sounds like a judgment call. How do we decide what’s credible?
Kenneth Titlestad
That’s a good question. I think there are good recommendations in 62443. For instance, 3-2 talks about, like I said, consequence-only analysis as an example of how you can approach the risk equation, but it also talks about the need to focus on worst-case consequences. It talks about essential functions, which basically could be the safety functions, for instance. You need to investigate the consequence if those are actually attacked and compromised: what could be the worst-case consequence? You begin there, and once you identify the worst-case consequences, then you move over to the probability or likelihood dimension.
Then you need to consider all the factors. What are the vulnerabilities involved? What are the safeguards, or what the standard calls compensating countermeasures? You consider those, and you consider the function or the asset as well: if there’s no actual interest in the asset, then the vulnerability could also be uninteresting to address or analyze. But you start with the consequence side, then you look at likelihood and probability, informed by the consequence approach.
Andrew Ginter
OK, so let me challenge you on that. I’ve read the CIE implementation guide. It says start with the worst-case consequences; it says those words. I’ve not seen those words in 3-2. Are you sure you’re not reading into 3-2?
Kenneth Titlestad
No, I’ve searched for that specific part of 3-2 many times, because I’ve heard others say the same, and it’s actually there. There are real gold nuggets in 3-2: it talks about essential functions, it specifically says worst-case consequence, and it specifically says you can choose to do a consequence-only risk assessment. Those are really important single words and single sentences in 3-2, worth highlighting.
Andrew Ginter
OK. So that makes sense in the abstract. Can you give me some examples of applying these principles? What should we regard as credible?
Kenneth Titlestad
Yeah, interesting question. The thing that comes to mind first is the Triton incident. Before 2017, when it actually happened, we didn’t think it was credible that someone would actually target a safety system or cause a safety incident with a cyber attack. With Triton we saw the first of its kind, and the threat became obviously credible. And then SolarWinds as well, a very interesting study. The way they compromised the SolarWinds update mechanism, suddenly massive deployment of malware within critical and non-critical infrastructure became a really credible threat too. And near misses, of course: we should be informed by things happening out there and coming on the news, near misses that can tell us what is a credible threat.
Another kind of scenario (not quite a near miss) that speaks to credibility is where we actually have a safety incident. We have had lots of them in Norwegian oil and gas, and in oil and gas in general: safety incidents that are not cyber-related at all, but where we see that the incident could be replicated by a cyber attack. That’s something we should be considering as a credible threat going forward, where the incident could be replicated with a cyber cause.
On credibility, I also think we need to have in the back of our mind, or in the analysis, a focus on technology evolution, the development and sharing of new technology. I see it as a graph where we are exposed to more and more heavy machinery, or heavy software, that can be used on the adversary side.
Kenneth Titlestad
So with Kali Linux, Metasploit, and now also AI, what is becoming a credible threat is more and more sophisticated, due to the development of technology. AI now is on both sides of the table: as an attacker’s tool that makes more attacks credible, but also on the defensive side, where we actually need to use it to protect against more and more sophisticated attacks.
Andrew Ginter
So Nate, let me go just a little bit deeper into Kenneth’s last example. Two days before I recorded the session with Kenneth, I was at another event. I had a half-hour speaking slot, and I was listening politely to the other speakers. One of the speakers was a penetration tester, and I remember asking him a question about AI. His answer alarmed me.
I discussed it with Kenneth, and I’ve discussed it with others since, because the future is difficult. I asked the pen tester: you touched on AI, so what should we look for from AI going forward? Should we worry about AI crafting phishing attacks, because I’ve heard of that happening? Should we worry about AI helping the bad guys write more sophisticated malware, because I’ve heard of that happening too?
And I paused, and his answer was: Andrew, you’re not thinking hard enough about this problem. Yeah, that stuff’s happening. But what you need to worry about, he said, is somebody taking a Kali Linux ISO image (the Linux disk image that everybody uses, that all the pen testers use, with lots of attack tools), taking that gigabyte of ISO image and coupling it with a couple of gigabytes of AI model. And the model has not been trained on natural language and crafting phishing attacks; the model has been trained by watching professional pen testers attack OT systems, mostly in test beds. That’s what pen testers do: they take a test bed that is a copy of the system that they’re supposed to be doing the pen test on. No one does the pen test on a live system; they do it on a test bed.
They use the Kali Linux tools, attack the system, and demonstrate how you can get into it and bring about simulated physical consequences. So you’ve taught this AI model how to use the Kali Linux tools to attack OT systems, to brick stuff and bring about physical consequences. You take that trained model and couple it with the image.
Wrap it up in enough code to run the image as a sort of embedded virtual machine, to run the AI model (the million-by-million matrix of numbers that is a neural network), run the Kali Linux image, and have the AI operate the tools to attack a real OT system. Drop that 3 or 3.5 gigabytes of attack code on an OT asset, start it, and walk away. It will figure out what’s there. It will figure out how to attack it. It will figure out how to bring about physical consequences.
I heard that and I thought: crap, that’s nasty. Back in the day, Stuxnet was autonomous. It did its thing, but it was a massive investment to produce a piece of malware that did its thing without human intervention. This strikes me as, again, something that will do its thing without human intervention, and it will figure things out as it goes. It’s one investment you can leverage across hundreds of different kinds of targets.
I was alarmed. This is something I’m thinking about going forward. To me this is a credible threat; this is something we all need to worry about. I don’t know that this thing exists yet, but I’m pretty sure it will in five years.
Andrew Ginter
OK. So that’s that’s a lot to worry about. Can I ask you know? Is everything credible? What? What in your mind is not a credible threat at this point.
Kenneth Titlestad
I would think that large scale destructive attacks on big machinery are not something that I would consider a credible attack. But it also goes back to the motivation of the threat actor. For instance, if you have a small municipality, I would say that really heavy, sophisticated cyber attacks, a lot of them, wouldn't actually be credible, due to the target not being interesting for such a threat actor. So large scale destructive attacks are something that in a lot of scenarios wouldn't be a credible attack.
And then we have, for instance, large scale blackouts, which is quite an interesting story nowadays, because a couple of weeks ago I would have said it wasn't actually a credible attack. Now we see that it can happen, for instance in Spain. That was probably not a cyber attack, but it was something that happened on the consequence side. If we can show, or identify, that it actually can be caused by a cyber attack, then that, suddenly, within the last week, has become a credible attack.
And also swarm kinds of attacks. I hear the discussions on that from time to time, where they talk about whether it's a credible thing, where you attack millions of cars. As of now, I don't see that as a credible attack, but things can change.
Nathaniel Nelson
You know, that's an interesting statement he made there, that large scale attacks on heavy machinery aren't credible. When I think about what we're talking about on this podcast, the purpose of OT security, presumably, is that there are significant risks to really important, large scale machines. But maybe at this point we've covered that.
Andrew Ginter
That's a good point. I think one of the lessons here is that determining what is and is not credible is a judgment call, and different experts are going to disagree. A few years ago I saw research published saying, look, let's take, for the sake of argument, the possibility of attacking, I don't know, a chemical plant and causing a toxic discharge. And the researchers concluded that it was theoretically possible, but it was such an enormous amount of effort on the part of the adversary, all of which would have to go undetected by the site, that they said, in the end, we just don't know that it is reasonable to believe this will ever happen. So that was one side of the debate.
But again, experts disagree. This is what I learned on the very first book I wrote: I got wildly different feedback from different internationally recognized experts. So here's an insight. To me, this means that when we make judgments about credibility, if we're going to make a mistake, we should make the mistake on the side of caution, err on the side of caution, because different experts have different opinions and we might be wrong. Every expert has to be honest enough to admit that we might be wrong, and build a margin for error into their judgment of what's credible.
So even if we don't believe that an attack that, I don't know, destroys a turbine is credible, we might want to deploy some reasonable defences against such a not terribly credible attack anyway.
Just because we might be wrong. And this is something that is also being discussed: how big a margin for error do we need to build into our planning? I mean, I talked to a gentleman who designs pedestrian bridges. I asked him, how do you calculate the maximum load? He said, that's easy, Andrew. You build a barrier to either side of the bridge so vehicles can't get on the bridge.
Most people are less than two meters tall. Most people are mostly water. So you model two meters of water, the width of the bridge, the length of the bridge. That's your maximum load. And then, he says, you multiply that by eight, and you build the bridge to carry the multiplied load. Because these are people we're talking about, it is unacceptable for the bridge to fail under load. And so this is the margin for error that engineers routinely build into their safety calculations. I believe we, as experts in cybersecurity, need to build a margin for error into our security planning as well.
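[Editor's note: the bridge designer's rule of thumb above can be sketched as a quick calculation. The specific figures, two meters of water over the deck and a safety factor of eight, are as recalled in the conversation, not taken from any engineering design code.]

```python
# Rough sketch of the pedestrian-bridge load estimate described above.
# Assumption: the crowd is modeled as a 2 m deep layer of water covering
# the full deck, then multiplied by a safety factor of 8 (figures as
# recalled in the conversation, not an engineering standard).

WATER_DENSITY_KG_M3 = 1000.0  # water is roughly 1000 kg per cubic metre


def design_load_kg(width_m: float, length_m: float,
                   person_height_m: float = 2.0,
                   safety_factor: float = 8.0) -> float:
    """Load the bridge is built to carry, in kilograms."""
    # Volume of the "water crowd" times density gives the worst-case mass.
    crowd_load_kg = width_m * length_m * person_height_m * WATER_DENSITY_KG_M3
    # The engineering margin for error: build for many times the worst case.
    return crowd_load_kg * safety_factor


if __name__ == "__main__":
    # A hypothetical 3 m wide, 20 m long footbridge:
    # 3 * 20 * 2 * 1000 = 120,000 kg of "people", times 8 = 960,000 kg.
    print(design_load_kg(3, 20))  # 960000.0
```

The point of the sketch is the last multiplication: the margin for error dwarfs the nominal worst case, which is the attitude the conversation suggests carrying over into judgments about credible cyber threats.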
Andrew Ginter
So this all makes sense. One of the things that appeals to me very much about the credibility concept is using it to communicate with non-technical decision makers like boards of directors. You have experience with this. Can you talk about that experience?
Kenneth Titlestad
Yeah, I think it's interesting. When we talk to board members and the CxOs in different companies, they don't necessarily go into details about risk, but they know that they have a special accountability.
So when we talk about credibility with those kinds of people, they get more on board with the discussions. They know they have a special accountability, and they draw a line in the sand. For instance, if the potential consequence is that somebody could die, then that's an unacceptable risk, and they take on that kind of position due to their accountability as board members or heads of the company.
And they are also held accountable by the government and by society. So some risks, on the consequence side, if we talk about people dying, are absolutely unacceptable risks for society. And the representatives for society in that kind of approach are elected persons in the government, and they hold the heads of the company, or the board of directors, accountable at the top of the company.
Andrew Ginter
So that makes sense. Boards care about consequences that the business or the society is going to find unacceptable. You didn't use the word credible. How does credibility fit into acceptability when you're communicating with boards?
Kenneth Titlestad
Yeah, we don't have to defend against all possible cyber attacks. What we do have to protect against is the credible ones. So when we bring credibility in as a concept, it's something that communicates much better to boards of directors and the heads of companies.
Andrew Ginter
This has been good, but it's a field big enough that I fear we've missed something. So let me ask you an open question: what should I have asked you here?
Kenneth Titlestad
We've been talking about credibility. Credibility is what is reasonable to believe. But it's not enough to talk about reasonable attacks. We also need to be talking about reasonable defence. So what is a reasonable defence?
We need to use all the tools at our disposal for a reasonable defence, and nowadays that obviously includes AI on the defensive side, not only on the offensive side.
This is also a very important part of the reason I joined Omny. Omny is built on a security knowledge graph, a data model where we can put all the information we need about our assets, the vulnerabilities, the network topologies, the threats, the threat actors. So it becomes a digital representation, or a digital twin, of our asset. Combining that with AI, which we have built in from the beginning, we get very strong assistance on security where it matters most.
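[Editor's note: Omny's actual data model is not described in detail here. As a hedged illustration only, a security knowledge graph of the kind Kenneth describes, with assets, vulnerabilities, and threat actors connected by labeled relationships, could be sketched as a small typed graph. All node names and IDs below are hypothetical and do not reflect Omny's product.]

```python
# Minimal, hypothetical sketch of a security knowledge graph: typed nodes
# (assets, vulnerabilities, threat actors) connected by labeled edges.
# Illustrates the general concept only, not any particular product.
from collections import defaultdict


class KnowledgeGraph:
    def __init__(self):
        self.nodes = {}                 # node id -> {"type": ..., plus attributes}
        self.edges = defaultdict(list)  # node id -> list of (relation, target id)

    def add_node(self, node_id, node_type, **attrs):
        self.nodes[node_id] = {"type": node_type, **attrs}

    def add_edge(self, src, relation, dst):
        self.edges[src].append((relation, dst))

    def neighbors(self, node_id, relation=None):
        # All targets reachable from node_id, optionally filtered by relation.
        return [dst for rel, dst in self.edges[node_id]
                if relation is None or rel == relation]


# Hypothetical example: a PLC in a process-control zone, a made-up
# vulnerability ID, and a threat actor known to target that asset class.
kg = KnowledgeGraph()
kg.add_node("plc-01", "asset", zone="process-control")
kg.add_node("CVE-2023-0001", "vulnerability", severity="high")  # made-up ID
kg.add_node("actor-x", "threat_actor", capability="sophisticated")
kg.add_edge("plc-01", "has_vulnerability", "CVE-2023-0001")
kg.add_edge("actor-x", "targets", "plc-01")

if __name__ == "__main__":
    print(kg.neighbors("plc-01", "has_vulnerability"))  # ['CVE-2023-0001']
```

Queries over a graph like this, such as "which assets does a capable actor target, and what vulnerabilities do they carry", are one way the credibility question in this conversation could be grounded in data.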
Andrew Ginter
Cool. Well, this has been great. Thank you, Kenneth, for joining us. Before I let you go, can I ask you to sum up for our listeners, what should we take away from this episode?
Kenneth Titlestad
Thank you, Andrew, for having me, and thank you so much for being here in Norway and visiting us at our office. We've had a good conversation about consequence, the focus on the worst case consequences. We moved over to talking about credibility, replacing the likelihood concept with credibility, especially for high impact events where we don't have the probability or the data to talk about likelihood. We also talked about reasonable attacks and reasonable defences: what is a reasonable defence against increasingly credible, sophisticated attacks with high consequences? It's been a really good discussion about all of these topics.
Kenneth Titlestad
If people want to know more about these topics, or they want to discuss them, please connect with me on LinkedIn and message me there. I'm more than happy to discuss these topics. And please visit our webpage, Omnysecurity.com. Our platform addresses most of the topics we talked about today.
Nathaniel Nelson
Andrew, that just about does it for your conversation with Kenneth Titlestad. Do you have any final words you would like to take us out of our episode with today?
Andrew Ginter
Yeah, we've talked about credibility, and this is a concept that is relevant to sort of the high end of sophisticated attacks, the high end of consequence.
Let me try and give a very simple example. I was raised in Brooks, Alberta, a little town of 10,000 people in the middle of nowhere, literally an hour's drive from any larger population centre. In terms of cyber threats, let's pick on, I don't know, the Russian military. Does the Russian military have the money to buy three absolute cyber gurus, train them up on water systems, plant them as a sleeper cell in the workforce of the town of Brooks water treatment system, and have them sit on their hands for three years?
And after three years, using the passwords they've gained, the trust they've gained, and the expertise that they have, have them launch a crippling cyber attack that damages equipment and takes the water treatment system down for 45 days? Is that a credible threat? Well, the Russians have the money to do that. They have the capability to do that.
But you have to ask, why would they bother? I mean, this is a little agricultural community. There's a little bit of oil and gas activity. Why would they bother? It does not seem reasonable to launch that kind of attack against the town of Brooks. It just makes no sense. I don't see that as a credible threat.
Is that a credible threat for the water treatment system in the city of Washington, DC, home of the Pentagon? I do think that's a credible threat. So the question of what's credible is an important question that I see more and more people asking in risk analysis going forward. We have to figure out what's credible for us. What capabilities do our adversaries have? What kind of assets are we protecting? What kind of defences have we deployed? What makes sense, what's reasonable to believe, in terms of the bad guys coming after us? This is an important question going forward, and I see lots of people discussing it. I'm grateful for the chance to explore the concept here with Kenneth.
Nathaniel Nelson
Well, thanks to Kenneth for exploring this with us. And Andrew, as always, thank you for speaking with me.
Andrew Ginter
It’s always a pleasure. Thank you, Nate.
Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.