Managing Risk with Digital Twins – What Do We Do Next? – Episode 144
Asset inventory, networks and router / firewall configurations, device criticality - a lot of information. How can we USE this information to make useful decisions about next steps to address cyber risk? Vivek Ponnada of Frenos joins us to explore a new kind of OT / industrial digital twin - grab all that data and work it to draw useful conclusions.
“Lots of people have different data sets. They have done some investment in OT security, but they’re all struggling to identify what’s the logical next step in their journey.” – Vivek Ponnada
Managing Risk with Digital Twins – What Do We Do Next? | Episode 144
Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.
Nathaniel Nelson
Welcome listeners to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today.
Andrew, how’s it going?
Andrew Ginter
I’m very well, thank you, Nate. Our guest today is Vivek Ponnada. You might remember him from an episode a little while ago. He was the co-lead on the Top 20 Secure PLC Coding Practices document that came out a year ago, two years ago.
Today, he’s the Senior Vice President of Growth and Strategy at Frenos. And our topic is digital twins for managing risk. And it sounds like a bunch of marketing buzzwords, you know, digital twins, managing risk, but they’ve got some real technology behind this. So I’m looking forward to this.
Nathaniel Nelson
Then without further ado, here’s you with Vivek.
Andrew Ginter
Hello, Vivek, and welcome to the show. Before we get started, can I ask you to say a few words about yourself for our listeners and about the good work that you’re doing at Frenos?
Vivek Ponnada
Sure, thanks Andrew. Hey everyone, my name is Vivek Ponnada. I am the SVP of Growth and Strategy at Frenos. I’ve been in the OT security space for quite some time. Back in the day, I was a gas turbine controls engineer for GE, then I became a controls and cybersecurity solutions upgrade sales manager for them.
I initially covered power and utilities and then of course added oil and gas. I’m based in Houston, so that was a natural thing. Before joining Frenos, I worked at Nozomi Networks as a regional sales director for three years. So I’ve been in the OT security space for quite some time, and I am happy to be on this podcast.
And at Frenos, we’re doing something cool. We’re doing attack path analysis and risk assessment at scale, bringing autonomous risk assessments to a space that’s been lacking this kind of approach. So we’re looking forward to our conversation, discussing more about that.
Andrew Ginter
Thanks for that. And our topic today is risk, which a lot of people find boring. I mean, people new to the field tend to want to focus on attacks. Attacks are interesting. Attacks are technical. It’s not until they have failed to secure funding as a manager of, you know, their security team for the last 10 years that they start being interested in risk, which is the language and the decision-making of business.
We’re going to talk about risk, and we’re going to talk about digital twins, which is a real buzzword nowadays, but, you know, this is our topic.
And you’ve mentioned, you know, risk assessments, you’ve mentioned attack path analysis. I look forward to looking into all of this. To me, risk is fascinating. It’s how we make progress. It’s how we shake the money loose.
But before we dig into it, can we start at the beginning? What is the problem, the risk problem, that we’re trying to address here?
Vivek Ponnada
Yeah, great question, Andrew. The past 10-plus years in OT security has been, let’s find out what we have, right? So lots of people started figuring out that they need asset inventory solutions. The likes of Dragos, Nozomi, and Claroty have been at the forefront of that kind of approach: network security monitoring leading to passive asset discovery and vulnerability identification.
So now, 10-plus years into this, people have a lot of datasets. At several sites, especially the ones that they would consider important to their production, they’ve installed sensors. They have lots of information.
Now they’re asking, what’s next, right? The real use case is risk identification and risk mitigation, as you mentioned, but there’s a real struggle out there: people have different data sets and aren’t able to figure out what the actual risk is that they should address next. So that’s the problem we’re trying to solve.
We are trying to aggregate information and provide contextual analysis of what the riskiest path to a crown jewel is, or what might be the logical way to isolate and segment, because not every risk can be mitigated by just patching a vulnerability, for whatever reason. That’s the main problem.
The conclusion is that lots of people have different data sets. They have done some investment in OT security, but they’re all struggling to identify what to do with that information, what’s the logical next step in their journey.
Andrew Ginter
So that makes sense. I mean, it’s one thing to sketch out what the NIST Cybersecurity Framework says a complete security program should look like.
It’s another thing to say, I’ve only got so much budget this year and a comparable amount, hopefully, next year. What do I do this year? What do I do next year? What’s most important to do first? That’s a really important question.
How does a person figure that out? What’s the decision path there?
Vivek Ponnada
Yeah, that’s the real question. Lots of people in the past used to say, we are isolated, or we are segmented, or we have a DMZ between IT and OT. A lot of these assumptions have not been validated.
In other cases, where they have different data sets, it’s not very clear what the next problem that they could solve is, right? So everybody, like you said, has limited budget or resources.
So the honest question is, hey, where should we focus next? It’s not very clear. People have done linear projects, right? They’ll pick a firewall project or a segmentation project or a vulnerability management program.
And all these are good, but overall they’re not fixing the immediate problem, or not solving the immediate problem first, right? So the commonly requested feature of many of these tools, from Dragos, Nozomi or other vendors, has been, hey, can you please tell me what my riskiest asset is or what my riskiest path is?
And they have not been able to do it, because that contextual summarization is not in their current portfolio, right? So let’s say you have an asset at Purdue model level 2, for example, that is talking to another asset at level 3, and then there’s a DMZ above that with some kind of firewall rules isolating it. If someone has real-world knowledge of this network, and that’s what we are talking about, right, a digital twin that’s replicating the network, then you analyze whether that firewall rule and that path make it possible to get to level 2. Or maybe there are other compensating controls in the path, allowing them to say, yep, my level 2 is secure, this network, this location, is not reachable easily, or it takes a lot of complicated daisy-chaining of attacks to get to. Then that would be an identification of what the risk is and whether you need to address something.
The common consensus has been, one, that of course you can’t really assess these in real time in the production environment, right? So you need to build something that’s a replica of that network.
And then you analyze all these scenarios to see if that asset that you deem important, or that network that you deem critical for your environment, is reachable or not reachable from the outside, or from any other attack vector that you choose, right? The assumed breach could be your corporate enterprise network, it could be a wireless network, or it could be anything else that you deem an attack vector, and you assess in this digital replica, or digital twin, whether that asset can be reached.
So that’s what, in general, most people have been asking for, and that’s what’s been missing in the currently available set of tools.
Andrew Ginter
So Nate, Vivek’s answer there was a little abstract. Let me be a little more concrete. He’s saying, look, a lot of people in the last 10 years have deployed Dragos and Nozomi and Industrial Defender and, you name it, asset inventory tools.
And in a large organization, these tools come back and say, you have 10,000, you might have 50,000 industrial control system assets. Okay.
And many of them are poorly patched because they’re deep down in areas where it’s really hard to patch them. Patching them is dangerous. You have to test these patches, blah, blah, blah.
So you’ve got 107,000 vulnerabilities in these 50-odd thousand assets. Okay. And they’re arranged into 800, 2,000, whatever, subnetworks.
And the networks are all interconnected. Right. So now you’re scratching your head, and the question is, what do I do next with my security?
And one of the things the asset inventory folks have done is they’ve allowed you to go through these assets, understand what they are, and assign a criticality to them. These are the safety instrumented systems. They’re really important.
Nothing touches them. These are the protective relays. They prevent damage to equipment and so on. And so what he’s saying is you can’t just look at the list of assets and vulnerabilities and figure out what to do next.
You need a model. And so this is what he’s talking about, a digital twin that is looking at attack paths and looking at which assets are really important, and telling you which really important assets have really short and easy attack paths.
That’s probably what you need to focus on next.
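To make that ranking idea concrete, here is a minimal sketch of the kind of analysis Andrew is describing: model the network as a graph, count the hops an attacker needs from an assumed entry point, and surface the critical assets that are only a few pivots away. The topology, asset names, and criticality scores below are invented for illustration; this is not Frenos’ actual model or algorithm.

```python
# Minimal sketch: rank critical assets by how few pivots an attacker needs to
# reach them from an assumed entry point. All names and numbers are invented.
from collections import deque

# Hypothetical adjacency list: which assets/networks can talk to which.
edges = {
    "enterprise": ["dmz-jump-host"],
    "dmz-jump-host": ["hmi-1", "historian"],
    "historian": ["plc-7"],
    "hmi-1": ["plc-7", "sis-3"],
    "plc-7": [],
    "sis-3": [],
}

criticality = {"hmi-1": 3, "historian": 2, "plc-7": 4, "sis-3": 5}  # 5 = crown jewel

def hops_from(source):
    """Breadth-first search: fewest pivots needed to reach each asset."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for nxt in edges.get(node, []):
            if nxt not in dist:
                dist[nxt] = dist[node] + 1
                queue.append(nxt)
    return dist

dist = hops_from("enterprise")
# Rank: high criticality reachable in few hops floats to the top of the list.
ranked = sorted(criticality, key=lambda a: (dist.get(a, 99), -criticality[a]))
for asset in ranked:
    print(asset, "hops:", dist.get(asset, "unreachable"), "criticality:", criticality[asset])
```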
Nathaniel Nelson
Yeah, and I fear this is one of those things where everybody else in the world knows something that I don’t, but like, what is a digital twin?
Andrew Ginter
You know… That word is a marketing buzzword, and it means whatever the marketing team wants it to mean. The first time I heard the word was in a presentation a few years ago at S4.
The sales guy from GE got up and did a sales pitch, in my opinion a very smooth, a very, what’s the right word, cleverly scripted sales pitch. But he basically said a digital twin is a computer model of a physical system.
And GE at the time had technology, they probably still have it, that will, let’s say you’ve got a chemical process, it’s got a physical emulator built in. It can simulate the chemistry.
It’s got emulators built in for all of the GE PLCs in the solution, for all of the GE iHistorian and other components. It’s got a complete simulation. And the measurements coming out of the physical world, they correlate them against the measurements that should be coming out based on the simulation.
Whenever there’s a material discrepancy, they would say, oh, that’s potentially a cyber attack, investigate this, something has gone really weird here, and it would take all sorts of automatic action to correct it.
It was amazing in principle, yet I’ve heard dozens of other vendors use the term digital twin to mean other things. The best definition that I’ve heard is, look, your cell phone, Nate, your cell phone is a digital twin of you.
What does that mean?
It’s not, probably not, a biological simulation of your body, though some apps kind of do that. They’re measuring heartbeat and whatnot.
It is an enormous amount of different kinds of information about you. Somebody who steals your cell phone steals all that information, knows an enormous amount about you.
And so I like that definition, because it’s much broader than the very specific original definition that I heard at S4 from GE. A digital twin can be anything that holds a lot of detailed information.
And so, I can’t remember if it’s on the recording or not, but I remember asking Vivek, is your digital twin that kind of physical simulation? And he’s going, no, no, no. It’s a network simulation. It’s a different kind of digital twin than the physical simulation that some people talk about, and they use it for different purposes. So, again, it’s a marketing buzzword, but it means, generally speaking, a system that has, uses, analyzes, and does good things with a lot of information about another thing, like my cell phone does for me.
Andrew Ginter
So that makes sense in the abstract. I mean, you folks do this. You’re building this technology. You’ve got this digital twin concept. Can you talk about what you folks have? I mean, maybe give us an example of deciding what to patch next using this digital twin, and sort of give us some insight into what data you have, what data you need, and how you use that to make these decisions.
Vivek Ponnada
Yeah, great question, Andrew. Patching has been a significantly challenging problem to solve in OT, as you’re well aware, right? In IT, if it’s vulnerable, you apply a patch and there’s a limited downtime impact, but you run with it.
In OT, of course, it’s not practical, because a patch might not be available, an outage window might not be available, and of course there’s production and downtime issues to deal with, so patching has been really hard.
With what we’re doing, though, it’s actually highlighting what to patch and what might be skipped for the moment. Right? So when we’re doing this attack path analysis, we come up with a mitigation prioritization score and we say that, hey, this particular network is easy to get to, the complexity of the attack is pretty low.
In just one or two hops from the enterprise network, I’m able to get to this asset, and this asset is vulnerable. And we do provide other options besides patching, right? We’ll say maybe segmentation or adjusting the firewall rule might be a way to go in some cases. But if you do decide that patching is relevant, and our recommendation provides that, you’ll see that if something is not on that attack path, right, it might be another asset in the vicinity, but the complexity of the attack to get to that asset is much, much higher, then you could deprioritize patching that asset, even if those two assets we’re talking about have the exact same vulnerability, right?
So if something is on the attack path and it’s easier to execute an attack on that asset, maybe you want to prioritize that more than another asset that has exactly the same vulnerability but is not on a critical attack path, if you will.
And so getting to it is harder. So you would want to deprioritize that compared to the other ones.
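As a rough illustration of that triage rule, the sketch below ranks assets that share the same vulnerability, but only some of them sit on a low-complexity attack path. The data fields, placeholder CVE identifiers, and scoring weights are hypothetical; they are not the actual mitigation prioritization score Frenos computes.

```python
# Illustrative patch-triage sketch: two assets share a CVE, but only the one
# sitting on a low-complexity attack path gets prioritized. Fields are made up.
vulns = [
    {"asset": "eng-wkstn-2", "cve": "CVE-XXXX-0001", "on_attack_path": True,  "path_complexity": 1},
    {"asset": "spare-wkstn", "cve": "CVE-XXXX-0001", "on_attack_path": False, "path_complexity": 6},
    {"asset": "historian",   "cve": "CVE-XXXX-0042", "on_attack_path": True,  "path_complexity": 3},
]

def mitigation_priority(v):
    # Lower path complexity plus presence on an attack path means higher priority.
    score = 10 - v["path_complexity"]
    if v["on_attack_path"]:
        score += 5
    return score

for v in sorted(vulns, key=mitigation_priority, reverse=True):
    action = "patch (or segment) soon" if v["on_attack_path"] else "defer to next outage window"
    print(f'{v["asset"]:12s} {v["cve"]}: priority {mitigation_priority(v)} -> {action}')
```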
Andrew Ginter
All right, so you used the word reachable. Is that loosely the same as, or connected to, the concept of pivoting, where an adversary takes over an asset, a computer, a PLC, something, and uses the compromised CPU, basically, to attack other things, pivots through a compromised device to attack other things, and then repeats, uses the newly compromised things to attack other things?
Eventually, you find, let’s say, computers that have permission to go through a firewall into a deeper network, and now you can use that compromised computer to reach through the firewall. Is this what reachable means? Reachable by a pivoting path?
Vivek Ponnada
It certainly could be, right? So pivoting would be jumping from one host or one asset to another, right, or from one network to another.
The concept of living off the land means that you have ownership of an asset and you’re using native functionality, and eventually you get to another asset from there, because you have a direct connection, or through a firewall, for example. And so reachable essentially means that you’re able to get to that asset.
Now, how you get to that asset or network: is it because a firewall rule has an any-any, for example, that allowed you to just get there? Or in another case you were able to use RDP or some kind of insecure remote access to get there, or in other cases maybe a USB, right? Somebody plugged in the USB and now you have access to that asset. So a lot of these scenarios are very much dependent on what the end user is trying to evaluate the risk for.
So if they are, for example, heavily segmented, and their primary mitigations are all segmentation and firewall based, then they would want to know if those firewall rules are working according to plan, or if the last time there was an exception, it poked a hole in their firewall, and now they are allowing access from level 4 to their critical networks, not realizing that their firewall has a hole.
In other cases, they might have assumed that RDP was disabled in this level 3 device, in this workstation, but it is actually enabled. And so now, suddenly, someone from outside of their enterprise network is able to get to that level 3, and once they’re there, they could do a lot more, right, further exploration. So reachable essentially means that you’re able to get to a network that’s of interest from another area that’s your starting point.
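Here is a toy version of the kind of check Vivek is describing: evaluate a firewall policy top-down, flag overly permissive any-any rules, and test whether a path an attacker cares about is actually allowed. The rule format, zone names, and ports are made up for illustration, not taken from any particular firewall vendor or from the Frenos platform.

```python
# Toy reachability check over a hypothetical firewall policy.
rules = [
    {"src": "enterprise", "dst": "dmz",    "port": 443,   "action": "allow"},
    {"src": "dmz",        "dst": "level3", "port": 3389,  "action": "allow"},  # RDP left open
    {"src": "any",        "dst": "level2", "port": "any", "action": "allow"},  # old exception, never removed
    {"src": "any",        "dst": "any",    "port": "any", "action": "deny"},
]

def allowed(src, dst, port):
    """First-match rule evaluation, the way most firewalls process policies."""
    for r in rules:
        if r["src"] in (src, "any") and r["dst"] in (dst, "any") and r["port"] in (port, "any"):
            return r["action"] == "allow"
    return False

# Flag overly broad allow rules, then test one path an attacker would care about.
for r in rules:
    if r["action"] == "allow" and r["src"] == "any" and r["port"] == "any":
        print("Review: any/any allow into", r["dst"])
print("enterprise -> level2 on 502 (Modbus):", allowed("enterprise", "level2", 502))
```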
Andrew Ginter
So, Nate, I remember a couple of episodes, a year and a half, two years ago, Robin Berthier was on from Network Perception. He was doing, it sounded like, a bunch of similar stuff.
I don’t think they were taking the output of Dragos’ tools, but I could be wrong. What I remember was that he was taking firewall configurations and putting a reachability map, what’s reachable from where, together for large, complex OT networks, and would issue alarms, would issue alerts, when sort of reality deviated from policy. You could say policy is this: safety instrumented systems never talk to the internet.
That’s a reasonable policy. And he would ingest hundreds, sometimes thousands, of firewall configurations and router configurations and come back with an alert saying, these three devices over here are safety systems and they can reach the internet. So that’s what he was doing. What seems to me to be different here, but I could be wrong, is we’re talking here about pivoting paths, not only network paths.
Not just network configuration, not just reachability, but the difficulty of pivoting as well.
Nathaniel Nelson
Yeah, and is the reason pivoting becomes relevant in a discussion about PLC security that these devices make for such an efficient means of movement, that they connect your, let’s say, lesser IT assets to more important safety-critical systems? So PLCs sort of seem like a natural point for an attacker to move through.
Andrew Ginter
Sort of. PLCs tend to be the targets of pivoting attacks in OT, sophisticated attacks, because they’re the ones that control the physical world. You want to reach the PLC to cause it to misoperate the physical process.
Pivoting through PLCs is possible in theory, and it’s a little bit more possible in practice when the PLC is based on a popular operating system like a stripped-down Windows or a stripped-down Linux.
But a lot of PLCs are just weird. Their operating system, their code, does one thing. It does the PLC thing. In theory, you could break into the PLC and give it new code.
But if I want to pivot through a PLC to a Windows device, how am I going to get into the Windows device? I might want to get into it with a remote desktop. There is no remote desktop client on a PLC. It doesn’t exist.
And so pivoting through PLCs, the attacker, depending on the version of the PLC, might have to do an enormous amount more work to get pivoted through a PLC.
And so if the only way into, let’s say, a safety system target, a really critical system, is to pivot through three different PLCs, pivoting through firewalls each time, that’s going to be really hard to do.
Whereas, I remember a presentation from Dale Peterson at S4 last year, year before, where he was talking about network segmentation. He says network segmentation, firewalls, are almost always the second thing that industrial sites do to launch their security program.
And I’m going, excuse me, excuse me, what’s the second thing? What’s the first thing? I thought firewalls were the first thing everybody does. “Andrew,” he says, “the first thing is to take the passwordless HMI off of the internet. That’s the first thing you have to do.” And I’m going, yep, you’re right.
And a tool like this will be able to look at your network and say, if the bad guys want to get into this HMI, it’s on the internet, it has no password.
That’s your number one. It can tell you that. Not just policy, but it says, and the safety systems back there, you’ve got to pivot through three PLCs to reach them.
That’s going to be really hard to do. You might have some other security you might want to deploy in between. So this is the concept of pivoting that I found very attractive in this tool: measuring the difficulty of an attacker from the internet reaching a target inside of a defensive posture.
Andrew Ginter
That’s interesting. We’ve had guests on the show talking about attack paths. These are tools that build a model of the system and count all of the ways that an attacker can get from where they are to a consequence that we want to avoid.
And it’s not just count them, but evaluate, let’s call it, the difficulty. I mean, the classic approximation for risk is likelihood times frequency.
Sorry, likelihood times consequence, or impact, if you wish. And likelihood is a really murky, difficult concept for high-consequence attacks. And so what a lot of people do is they substitute likelihood with difficulty. And they try to evaluate how difficult really nasty attacks with really nasty consequences are.
It sounds vaguely like you’re doing this. You’re talking about attack paths. You’re talking about difficulty. Is this where you’re going? The one thing you haven’t mentioned is consequence.
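For readers who want the approximation Andrew mentions spelled out, here is a back-of-the-envelope version with attack difficulty standing in for likelihood. The scale and the numbers are purely illustrative; this is not how Frenos scores risk.

```python
# Back-of-the-envelope risk approximation: risk ~ likelihood x consequence,
# with attack difficulty used as a stand-in for likelihood, since likelihood
# is hard to estimate for rare, high-impact OT attacks. Numbers are illustrative.
def risk_score(attack_difficulty, consequence, max_difficulty=10):
    """Harder attacks (higher difficulty) drive the stand-in likelihood down."""
    likelihood_proxy = (max_difficulty - attack_difficulty) / max_difficulty
    return likelihood_proxy * consequence

print(risk_score(attack_difficulty=2, consequence=9))  # easy path to a crown jewel -> higher risk
print(risk_score(attack_difficulty=8, consequence=9))  # same consequence, hard path -> lower risk
```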
Vivek Ponnada
Yeah, that’s a good point, because we are doing something unique in that we are allowing the user to evaluate, in this digital twin, in this digital replica, how an adversary might be not only pivoting but exploiting different components to get to their crown jewels, right? The way we’re doing that is showcasing different views of TTPs that are well documented, with all the IOCs and the threat intel that we’ve aggregated. So if it’s a power customer, for example, they could use a Volt Typhoon view to see how a Volt Typhoon actor might be able to leverage initial access, then credential exploitation, then other kinds of exploits within the environment. And there might be a manufacturing customer with a whole different set of interesting TTPs that they want to evaluate. But the idea behind this is you figure out what the generally documented TTPs are for a certain type of adversary, and how they might go about getting from your starting point, which is initial access or the starting point of your threat analysis, all the way to the crown jewels. And in doing so, you’re making assumptions, right? Because we’re not in this production environment, we’re not actually exploiting something, but you’re evaluating the different scenarios where you say, OK, I have this Windows workstation and I’m going to use RDP, right? I’m going to exploit something there.
What if RDP was disabled? So these days people have some datasets that they can export from an EDR tool to provide open ports and services, right? Then we know upfront, for example, that some of these services, like SMB or whatever you think is typically exploited by the TTP or the threat actor of choice or interest, are exposed, and if you disable that, you now know that at least that path is closed, right?
In other cases, the attack path might show three or four different types of exploits needed to be able to get to that crown jewel or the crown jewel network.
Then that layer of difficulty, or the complexity of the daisy-chaining, is much higher compared to another network or another attack path that is trivial, right? If that one uses native credentials and it only takes one hop in the attack path to get to that asset or network, then, for example, the previous one was more complex to even get to, right?
But at the end of the day, all this conversation so far is about how difficult it is to get to that crown jewel network or the crown jewel asset, right? It’s not about what the attacker might do once they get there, because that part is the impact or the consequence. There we actually have an automatic assessment based on the types of PLCs, or types of controllers, or the types of assets we see in general, based on our threat intel and our initial assessment.
But an end user that’s running this tool, or a consultant that’s running this tool, can adjust that, right? So there’s a manual way for them to say, hey, this network is of a higher priority for me compared to this other network.
Show me what the impact of getting to this network is for me, because this is higher priority for me. So, to be fair, we’re not doing quantification yet. In this tool we’re limiting ourselves at the moment to how easy or difficult it is to get to a particular crown jewel network, and what the adversary might be able to do in that kind of a network, right? So it’s one of those interesting aspects of the analysis where you’re not doing the analysis of what an attacker would do once they get to a crown jewel, because that’s a whole different ballgame. You’re trying to break the kill chain, break the path, way before that. So you’re assessing or analyzing what all the attack paths are and how easy or difficult it is to get to the crown jewels that you’re trying to protect.
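The “complexity of the daisy-chaining” idea can be sketched roughly like this: score each candidate path by how many hops it takes and how many steps need a working exploit rather than native credentials or living-off-the-land techniques. The paths, step names, and weights below are invented for illustration only.

```python
# Sketch: score candidate attack paths by hop count and required exploits.
paths = {
    "via DMZ jump host": [
        {"step": "phish enterprise user", "needs_exploit": False},
        {"step": "RDP to level-3 workstation", "needs_exploit": False},
        {"step": "SMB exploit on historian", "needs_exploit": True},
    ],
    "via wireless gateway": [
        {"step": "break wireless pre-shared key", "needs_exploit": True},
        {"step": "firmware exploit on gateway", "needs_exploit": True},
        {"step": "pivot through PLC", "needs_exploit": True},
        {"step": "reach SIS network", "needs_exploit": True},
    ],
}

def path_difficulty(steps, hop_weight=1, exploit_weight=3):
    # More hops is harder; each step that needs a working exploit is much harder.
    return hop_weight * len(steps) + exploit_weight * sum(s["needs_exploit"] for s in steps)

for name, steps in sorted(paths.items(), key=lambda kv: path_difficulty(kv[1])):
    print(f"{name}: difficulty {path_difficulty(steps)} over {len(steps)} hops")
```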
Andrew Ginter
Good going. I mean, I have maintained for some time, and it’s easy for me to do because I’m on the outside, I don’t have to do the work, but I’ve maintained for some time that part of a risk assessment should be a description of the simplest attack, or three, that remain credible threats in the defensive posture, threats able to bring about unacceptable consequences. There’s always a path that will let an attacker bring about an unacceptable consequence. The question is how difficult it is.
And so to me, the risk assessment should include a description of the simplest such attack, or attacks, plural.
So that’s sort of one. Is this kind of what you’re doing? Can you give me the next level of detail on what you’re looking at and how you’re making these decisions?
Vivek Ponnada
Yeah, definitely. So the problem like you described is that there might be some open ports or services that are vulnerable.
However, if those ports are closed or those services are disabled, then that problem is solved, at least for the moment, right? Unless there’s another vulnerability discovered on the particular asset. So what we’re doing is we’re ingesting information from the various sources that they have.
In other cases, we provide options to add that in the tool, so that you have the contextual information as to what attacks are possible with what’s relevant in that environment, right?
And in the past, people did this using questionnaires, asking people, or evaluating with subject matter experts using a tabletop or something like that. But the beauty of our Frenos platform is that you’re actually able to do this in an automated fashion and at scale, because if you have, like a typical customer, dozens of end-user sites and hundreds or even thousands of networks, you’re not actually able to manually analyze the risk of each network, of each asset, down to the level of what’s possible with the given ports and services, or installed and not-installed software, in that environment, right?
But you’re able to ingest all this information, right from the IP addresses and the different types of assets and the vulnerabilities tied to them, to the ports and services that are enabled or disabled, or in other cases making an exception to say, hey, I’m disabling this using some kind of application whitelisting or some kind of segmentation.
All that information can be analyzed at scale, and you can get a view that shows a realistic and more or less validated attack path, versus someone just looking at a piece of paper or a complex network in a manual fashion.
So this is where I think the big difference is: we’re looking at the attack complexity and the attack path at scale, whether it’s tens of sites or thousands of networks, and we’re able to decipher what the context is for exploitation, or just lateral movement, or whatever the path might be to get to your crown jewels.
Andrew Ginter
So you’ve mentioned “at scale” a couple of times; you’ve mentioned the potential for ingesting information about a lot of assets and networks. The asset inventory tools out there produce that knowledge already. I’m guessing you’re interfacing with them.
Can you talk about that? How do you get data? How do you get the data about the system that you’re going to analyze?
Vivek Ponnada
Yeah, that’s a great question. We definitely can ingest information from a variety of sources. So the platform can ingest information offline: drag and drop a CSV or an XML file or any kind of spreadsheet.
And we also have API hooks to be able to automatically ingest information from the likes of Dragos, Nozomi and Claroty, which are the OT security product vendors. We can also ingest information from CMDBs or any kind of centralized data repositories like Rapid7 or Tenable.
In other cases, the customers might have just spreadsheets from the last time they did a site walk. We can ingest that too. So we’re not restricted to ingesting any specific type of format. We have a command line tool that can ingest other sources as well.
But the basis, the digital twin, starts with the firewall config files. So we ingest information from the likes of Fortinet, Cisco, Palo Alto, you name it.
Then we ingest information from these IT or OT tools. At the end of the day, the more information that’s provided, the higher the fidelity of the data. But the beauty of the platform is that if you don’t have any kind of information,
we can not only create mitigating controls and options within the platform, but we’ve also built an extension of the Frenos platform called Optica, where you can quickly leverage existing templates, for example, Dell servers or Cisco routers or Rockwell PLCs.
Within a few minutes, you can drag and drop and build a template, which you then import into Frenos to replicate what might be in the system already. So, long story short, any kind of asset information or vulnerability information out there, we can ingest.
And if there is none, or there’s limited visibility in certain sections or locations, we can build something that’s very similar, so that the customers can have a view of what the risk is in a similar environment.
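To picture the offline ingestion path Vivek describes, here is a rough sketch that normalizes rows from a spreadsheet export into one structure an attack-path model could consume. The column names and asset records are assumed for illustration; they are not any vendor’s actual export format or Frenos’ schema.

```python
# Sketch: normalize a dropped-in CSV export into one asset structure.
import csv
import io

# Stand-in for a spreadsheet export from a site walk or inventory tool.
raw = """ip,name,type,zone,open_ports
10.1.2.10,hmi-1,HMI,level2,"3389,502"
10.1.3.5,historian,Server,level3,"445,1433"
"""

def load_assets(csv_text):
    assets = []
    for row in csv.DictReader(io.StringIO(csv_text)):
        assets.append({
            "ip": row["ip"],
            "name": row["name"],
            "type": row["type"],
            "zone": row["zone"],
            # Normalize the port list into integers once, at ingest time.
            "open_ports": [int(p) for p in row["open_ports"].split(",") if p],
        })
    return assets

for a in load_assets(raw):
    print(a["name"], a["zone"], a["open_ports"])
```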
Andrew Ginter
And you mentioned a couple of times, if I remember, compensating controls. I mean, the compensating control everybody talks about is more firewalls, more firewall rules: keep the bad guys away from the vulnerable assets that we can’t patch, because we can’t afford to shut everything down and test everything again.
Can you talk about compensating controls? What other kinds of compensating controls might your system recommend?
Vivek Ponnada
That’s a great question, because as we were discussing earlier, in OT not everything is fixable, because a patch might not be available or an outage window is not available, right? So historically, most people have used a combination of allow listing or deny listing, or some kind of ports and services disabled, or, to your point, firewall rules, and segmentation has a place in that as well.
Overall, the key is to figure out what the attack path is and in which fashion you can break that attack path. So if the consideration is from level 4 through a DMZ or firewall, and the firewall rule was any-any or something that was allowing too much, maybe too many protocols, or something that could be disabled, you can start there as a preference, right? If that’s not possible, or that’s not a project you can take on, the next thing could be, hey, I’m leveraging this kind of SMB or other exploit at that level 3 device before going to level 2.
Let’s look at what this service was on that particular asset, right? So you can disable that. Within the tool, we’ve built in almost 20 or so different options for combinations of all these compensating controls that are historically used in OT, right? So it could be a combination of a firewall rule, or a service or port disabled, or in other cases it could be disconnecting them to put them in a different segment. Again, this is not new, right? This is how, historically, OT has been able to mitigate some of the risk.
We’re just bringing that to the forefront to see or show you what other things can be done to break the attack path versus strictly talking about vulnerability management and fixing the problem by applying a patch, which is not practical as we talked about.
Andrew Ginter
Compensating controls are tricky, Nate. Say we identify a vulnerability, a weakness in a defensive posture: there’s a new vulnerability announced in some piece of software that we use on some PLC or safety system or who knows what, deep in our architecture. What do we do about that is a question everybody asks. Sort of the consensus that’s building up is that if that system is exposed to attack, then we have to put compensating measures in.
If it’s not exposed, or if it’s really hard to reach, maybe we don’t need to change anything in the short term, until our next opportunity to do an upgrade, or a planned outage, or something.
And a tool like this one, like the Frenos tool, is one that can tell us how reachable it is, how exposed it is, and we compare that to our risk tolerance. Are we running a passenger rail switching system? Are we running a small bakery?
Different levels of exposure are acceptable in different circumstances. So having the tool give us a sense of how exposed we are is useful in making that decision: are we gonna patch or not? And if we have to do something, it’s useful to have a list of compensating controls, sort of the list that I heard Vivek go through, but they’re probably gonna add to this if they haven’t already.
You can change permissions. If you’ve got a file server where sharing files is the problem and the bad guys can put a nasty on the file server, change permissions so that it’s harder to do that.
Turn off services, programs that are running. Windows ships with, I don’t know, 73 services running. Most industrial systems don’t need all of these services. It would have been nice to turn them off ages ago. If you haven’t already turned them off, and there’s a vulnerability in one of these services and you’re pretty sure you’re not using it, you can turn it off.
Add firewall rules that make it harder to reach the system. Add firewall rules that say, fine, I need to reach the system for some of the services, but I don’t think I ever need to reach this service from the outside, even if I need to use it on the inside, so add a firewall rule that blocks access to that service on that host from the outside.
None of this is easy. For every change you make to an important system, the engineering team has to ask the question: how likely is it that I’m messing stuff up here? How likely is it that I’m introducing a problem that’s gonna bite me with a really serious consequence? How likely is it that the cure is worse than the disease here? So compensating controls aren’t easy, but what I see this tool doing is giving us more information about the vulnerable system, about how reachable that vulnerable system is. What are the paths that are easiest to get to that vulnerable system? If I can turn off, I don’t know, remote desktop halfway through the attack path and make the attack that much more difficult, now you have to go through, I don’t know, PLCs instead of Windows boxes.
That’s useful knowledge. This is all useful knowledge. We need as much ammunition as we can get when we’re making these difficult decisions about, shoot, I have to change the system to make it less vulnerable. What am I going to change without breaking something?
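The “what do I change without breaking something” loop Andrew describes can be mocked up as a what-if analysis: apply one candidate compensating control to the model at a time and re-check whether the crown jewel is still reachable. Everything below, the edges, the services, and the candidate controls, is illustrative; a real tool would re-run its full attack-path analysis rather than this toy reachability check.

```python
# Sketch: test candidate compensating controls by re-checking reachability.
from collections import deque

# Edges are labeled with the service an attacker would ride across them.
edges = [
    ("enterprise", "level3-wkstn", "rdp"),
    ("level3-wkstn", "historian", "smb"),
    ("historian", "plc-7", "modbus"),
]

def reachable(target, start, active_edges):
    visited = {start}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        for src, dst, _service in active_edges:
            if src == node and dst not in visited:
                visited.add(dst)
                queue.append(dst)
    return target in visited

candidate_controls = [
    ("disable RDP into level 3", lambda e: e[2] != "rdp"),
    ("block SMB at the level-3 firewall", lambda e: e[2] != "smb"),
]

print("baseline, plc-7 reachable:", reachable("plc-7", "enterprise", edges))
for name, keep in candidate_controls:
    remaining = [e for e in edges if keep(e)]
    print(f"after '{name}': plc-7 reachable:", reachable("plc-7", "enterprise", remaining))
```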
Andrew Ginter
Well, thank you so much for joining us, Vivek. Before I let you go, can I ask, can you sum up for our listeners, what are the most important points to take away from this new technology? And, I don’t know, what can they do next?
Vivek Ponnada
Yeah, for sure. So the quick summary is we’re trying to solve a problem that’s been around for a decade plus. Lots of customers do not have a risk assessment in place. They’re not quite sure where they stand currently.
So some of them are early in their journey, with this lack of information. They still need to figure out where they have to invest their next dollar or next hour of resources. And in other cases, they have spent the past three or five years developing an OT security program.
A lot of information is available, lots of alerts, but again, they’re not so sure how they compare to maybe their industry peers, or how they compare to where they should be in their security posture management.
So what Frenos is able to do is both leverage their existing data sets and fill in missing information by providing something that’s a replica of their environment, showcase where they should be focusing in terms of breaking the attack paths, and highlight not just where they currently stand but also where they were compared to yesterday. So overall, this is what most executives have been asking before investing in OT security: where do we stand currently? How good are we compared to an existing known attack vector or campaign, if you will? And then, how good can we be, currently, as in today? Because the risks are not staying constant, so how do we keep up with them? So the outcome of the Frenos platform is both a point-in-time assessment, if you like, and also continuous posture management, because you’re able to validate what compensating controls and preventive measures you are deploying or implementing, and whether they’re going well or not.
So the conclusion is that we are a security posture management and visibility company that’s able to bring out the best in your existing data sets, provide you the gaps and the gap analysis, and help you figure out where to invest your next dollar or resource, on what site or what location.
And if you’d like to know more, hit me up on LinkedIn. My email is Vivek at Frenos.io, or I’m happy to connect with you on LinkedIn to take it from there. If you’d like more information, check out our website, Frenos.io, as well. You’ll see all the information about our current use cases and the different products and services we have to offer. So I’m looking forward to connecting with more of you.
Nathaniel Nelson
Andrew, that just about does it for your interview with Vivek Ponnada. Do you have any final word to take us out with today?
Andrew Ginter
Yeah. This topic is timely, the topic of risk-based decision-making. I mean, NIS2 is coming into effect in a lot of countries, particularly in Europe. The regulation in every country is different, but the directive says you have to be making risk-based decisions.
And I’m sorry, a risk assessment is, should be, much more than a list of unpatched vulnerabilities. A list of unpatched vulnerabilities does not tell you how vulnerable you are.
It’s just a list of vulnerabilities. To figure out how much trouble you’re in, you need a lot more information. You need information about which assets are most critical. You need information about how reachable those critical assets are for your adversaries.
And when new vulnerabilities are announced or arise that simplify the pivoting path, that simplify reachability of a critical asset for your adversaries, you need advice as to, that’s what you need to fix next, and here are your options for fixing that. So I see this kind of tool as a step in the right direction. This is the kind of information that a lot of us need, not just in the world of NIS2, but in the world of managing risk, managing reachability.
You know, we’ve all segmented our networks. What does that mean? You can still reach, bang, bang, bang, pivot on through. Well, then, what does that mean? This kind of tool tells us what that means. It gives us deeper visibility into reachability and vulnerability of the critical assets, risk, opportunity to attack. You know, I don’t like the word vulnerability; too often it means software vulnerability. This kind of tool exposes attack opportunities and tells us what to do about them. So to me, that’s a very useful thing to do.
Nathaniel Nelson
Well, thank you to Vivek for highlighting all that for us. And Andrew, as always, thank you for speaking with me.
Andrew Ginter
It’s always a pleasure. Thank you, Nate.
Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.