8 (and a Half) Questions for Your OT “Secure” Remote Access Vendors https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/8-and-a-half-questions-for-your-ot-secure-remote-access-vendors/ Wed, 01 Apr 2026 05:26:23 +0000 Ask different questions, get different answers. What should you be asking your OT “secure” remote access (SRA) vendor?

The post 8 (and a Half) Questions for Your OT “Secure” Remote Access Vendors appeared first on Waterfall Security Solutions.


8 (and a Half) Questions for Your OT “Secure” Remote Access Vendors

Ask different questions, get different answers: What should you be asking your OT “secure” remote access (SRA) vendor?

Waterfall team

Terminology first. The word “secure” is in quotes, because cybersecurity (like safety) is a continuum, not a pair of discrete yes/no states. We can always be safer, or less safe. We can always be more secure, or less. The question “Are we secure?” is meaningless. The question “How secure are we?” has an answer. The question “How secure should we be?” is even more important. Anyone who uses “secure” as an adjective is selling something – “secure” communications (really: encrypted and/or authenticated), “secure” boot (really: cryptographically authenticated firmware), “secure” by design (really: better security by designing security in), and so on.

There is no such thing as “secure” remote access.

Want to learn more about OT remote access? Join our next webinar: “13 Ways To Break “Secure” OT Remote Access Systems”

Question 1: For SRA into OT systems, does your vendor provide IT-grade protection we HOPE can detect attacks in time, or do they provide hardware-enforced, engineering-grade protection?

What is IT-grade protection? Imagine a long suspension bridge has dangerous harmonic frequencies – people simply walking over the bridge risk setting up oscillations that build up, eventually to the point of tearing the bridge apart. See the 1940 Tacoma Narrows disaster for an example. Imagine that a bridge you cross every day on the way to work has this problem, and so is stabilized by hydraulic dampers – multiply redundant dampers, redundant power supplies and “secure” control systems. How happy would you be driving across that bridge every day if you knew the design engineer merely HOPED that, if there was a cyber attack on the control system, we could detect the attack before the bridge tore itself apart? How happy would you be knowing the design engineer HOPED that, if we detected the attack in time, we could scramble an incident response team fast enough to prevent disaster?

Hope is not what we expect of design engineers. We expect bridges to carry a specified load, in a specified operating environment, for a specified number of decades, with a large margin for error. Engineering-grade solutions, like over-pressure relief valves and unidirectional gateways, behave deterministically, no matter how sophisticated a cyber attack is launched at them.

Question 2: If someone phishes an SRA credential, can they exploit a vulnerability in the Multi-Factor Authentication (MFA) to get into the protected OT systems?

“Secure” Remote Access vendors boast about their MFA, but MFA is software. Yes, the little dongle on our keychain looks like hardware, but the “secure” SRA system we are logging into with the dongle is software. All software has defects, and some defects are security vulnerabilities. Some of those vulnerabilities are known to the SRA product developers, who are madly trying to develop patches / security updates for the vulnerabilities. Others are known only to our enemies, who are using these zero-day vulnerabilities against us without our knowledge. Our attackers phish our “secure” password, ignore our RSA dongle or cell phone authentication app, and exploit a zero-day in the “secure” system to break in with our credentials and work their will upon our OT networks. Is this possible in the “secure” system we are using or considering using?

Question 3: Is that SRA a H2M solution, or an M2M solution?

Terminology:

  • H2M = human-to-machine = sends keystrokes & mouse movements in / receives screen images back out.
  • M2M = machine-to-machine = software talking to software – for example: an HMI running on our remote laptop, talking through a VPN to PLCs or OPC servers in the OT network, or a PLC programming tool on our remote laptop, talking through a VPN to update firmware in our safety-instrumented systems (SIS).


When “secure” remote access supports M2M, then any malware that might be present on our laptops can reach across the M2M/VPN and connect to any vulnerable, out-of-date (eg: XP) OT systems in our OT network. Such systems are a bonanza to common malware that relies on exploiting known vulnerabilities.

Question 4: Can users override SRA encryption / certificate warnings?

Many “secure” OT solutions use industry standard Transport Layer Security (TLS) to protect their connections across the Internet. This is the same technology used by web browsers, M2M applications, and the vast majority of Internet and IT applications. TLS uses certificates. If an attacker intercepts our communications, they can substitute their certificates. Our software – eg: our web browsers – is supposed to diagnose the substitution. A lot of these applications, like many web browsers, caution their users when they see an unexpected certificate and ask if the user really wants to proceed. Most users answer, “yes of course – override the warning / force the connection to complete / finally I’m connected through this nonsense!” And they successfully use their MFA and other credentials to log into the “secure” remote access system in a way that lets the bad guys take over their session.
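The difference between failing closed and letting users click through the warning can be sketched in a few lines of Python, using the standard library's ssl module; the function names here are illustrative, not part of any SRA product:

```python
import ssl

def strict_tls_context() -> ssl.SSLContext:
    """TLS context that fails closed: an unexpected or substituted
    certificate aborts the handshake, with no user override possible."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = True               # certificate must match the host
    ctx.verify_mode = ssl.CERT_REQUIRED     # unverifiable cert -> connection fails
    return ctx

def overridable_tls_context() -> ssl.SSLContext:
    """What an "are you sure? [proceed]" button amounts to: validation is
    off, so a man-in-the-middle certificate is silently accepted."""
    ctx = ssl.create_default_context()
    ctx.check_hostname = False              # must be cleared before CERT_NONE
    ctx.verify_mode = ssl.CERT_NONE
    return ctx
```

The question for the vendor is which of these two behaviors their client ships with, and whether the second can ever be reached by an end user.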

Question 5: Can you paste or file-transfer arbitrarily complex files into OT equipment remotely?

A lot of OT equipment is sensitive – it malfunctions if anti-virus is running on it, so we do not run AV on it. It costs a lot of money to re-certify for safety if anything changes, so we have not applied any security updates, nor upgraded the operating system. These systems are often found still running obsolete versions of Windows XP. What risk is there in downloading a PDF file to this device? Or a software update executable? Or a clever new OT tool we just found on the Internet that claims it can “clean the hard drive” on this very old, very vulnerable, very important OT system? If people can transfer files that can contain malware, sooner or later they will do so. Does our “secure” remote access permit this very dangerous operation?

Question 6: Is there a session timeout?

Many users find session timeouts to be really annoying. Users must log in repeatedly when they get distracted by other emergencies during OT SRA sessions. But what happens if there is no session timeout? We log in and finish a job in the evening on our home computer. We go to work the next day. Our kids log into the home computer to do their homework. They find our session still open, still connected. What harm could that cause? Or – we put no password on our cell phones, because constantly entering PINs is annoying. Now open a “secure” remote access session, set the phone down and forget it. A stranger picks it up. There is no PIN. The remote session is still active into our critical infrastructure operations. What harm could be done?
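What idle-timeout enforcement looks like can be sketched in a few lines, assuming a Python service; the 15-minute policy, the class name and the injectable clock are all illustrative:

```python
import time

IDLE_TIMEOUT_SECONDS = 15 * 60  # illustrative policy, not a vendor default

class RemoteSession:
    """Minimal sketch of idle-timeout enforcement for a remote access
    session: every request checks elapsed idle time before proceeding."""

    def __init__(self, now=time.monotonic):
        self._now = now                  # injectable clock, for testing
        self._last_activity = now()
        self.active = True

    def touch(self) -> bool:
        """Record activity; returns False (and ends the session) if the
        idle timeout has already expired."""
        if not self.active:
            return False
        if self._now() - self._last_activity > IDLE_TIMEOUT_SECONDS:
            self.active = False          # a session left open on a home PC dies here
            return False
        self._last_activity = self._now()
        return True
```

With a rule like this in place, the forgotten session on the home computer or unlocked phone expires on its own instead of waiting for a stranger to find it.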

Question 7: Do you require deny-by-default on firewalls protecting OT networks?

Many “secure” remote access vendors claim we can install their software on the OT computer of our choice, and the software will connect straight out to the Internet through IT/OT and IT firewalls, without needing to do anything to reconfigure the firewalls. This design assumes that OT firewalls are configured like most IT firewalls are configured – they allow any outbound connection by default, disallowing only inbound connections and outbound connections to known-dangerous destinations.

Such configuration means the “secure” remote access solution counts on a firewall configuration that any well-meaning technician on the OT network can use to install their own rogue remote access solution, among other things. For example: open a persistent SSH connection to a home Linux computer that is able to forward connections back into OT systems, or download a “free” remote access / support solution, connect it out to the cloud and, at home, rendezvous with this solution from a home computer. Well-meaning technicians imagine that there is no need to “bother” IT or engineering with matters like this when anyone with the most modest of computer skills can download and install whatever “secure” remote access software they wish, using their XP admin credentials.
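By contrast, a deny-by-default posture at the OT boundary can be sketched as follows, assuming iptables; every address and port below is an illustrative placeholder, not a recommended value:

```shell
# Sketch of a deny-by-default policy for an OT boundary firewall.
# Default: drop everything, in every direction.
iptables -P INPUT   DROP
iptables -P OUTPUT  DROP
iptables -P FORWARD DROP

# Permit only the specific, documented outbound flows OT actually needs,
# e.g. a historian (10.0.50.10) replicating to one named IT server:
iptables -A FORWARD -s 10.0.50.10 -d 172.16.1.20 -p tcp --dport 443 \
         -m conntrack --ctstate NEW -j ACCEPT
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT

# With this policy, a technician's "free" remote access tool or outbound
# SSH tunnel is dropped unless someone deliberately adds a rule for it.
```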

Question 8: Does your OT SRA need a firewall?

Most SRA vendors assume there is a firewall between the IT and OT networks, and their SRA software relies on establishing connections through this firewall. Firewalls, however, are vulnerable to many attacks. For examples, see Thirteen Ways to Break a Firewall. In contrast, hardware-enforced remote access (HERA), for example, is compatible with, but does not require, a vulnerable firewall at the IT/OT interface.

Question 8 1/2: Does your SRA support MFA?

We count this as only half a question, because all commercial-grade OT SRA supports MFA. The only SRA without MFA is the “roll your own” kind, where you are hard-pressed to find any vendor to ask these questions of in the first place. Internet-exposed, and even IT-exposed, OT facilities should all support MFA, and we must enable that MFA without fail.

Digging Deeper

To better understand why these questions are important, or to dig deeper into the simple attack scenarios that lie behind these questions, please join us in our April webinar 13 Ways To Break “Secure” OT Remote Access Systems – And questions you should be asking your OT SRA vendor about these attacks.

About the author

Waterfall team



How to Apply the NCSC/CISA 2026 Guidance https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/how-to-apply-the-ncsc-cisa-secure-connectivity-principles-for-operational-technology-2026-guidance/ Sun, 01 Mar 2026 14:33:08 +0000 https://waterfall-security.com/?p=38805 Hardware-enforced OT Security solutions help industrial operators follow the latest multi-government OT security guidance

The post How to Apply the NCSC/CISA 2026 Guidance appeared first on Waterfall Security Solutions.


How to Apply the NCSC/CISA 2026 Guidance

Hardware-enforced OT Security solutions help industrial operators follow the latest multi-government OT security guidance.

Waterfall team

How to Apply the NCSC CISA Secure Connectivity Principles for Operational Technology (OT) 2026 Guidance

For the first time, joint guidance from the UK NCSC, co-signed by CISA, BSI, Australia’s ACSC and others, calls for centralizing risky connections into OT networks, simplifying instructions sent into OT so they can be inspected for safety, and even “browsing down” for engineering workstation access. Alongside these newer ideas, it reinforces more established advice, such as hardening OT boundaries with hardware-enforced protections like Unidirectional Gateways and Hardware-Enforced Remote Access.

The challenge is that the guidance is fairly abstract. The principles are clear, but how to apply them in real OT architectures is not always obvious.

What are the 8 core principles of the NCSC / CISA “Secure connectivity principles for Operational Technology (OT)” guidance, and how does Waterfall support their application?

1) Balance the risks and opportunities – Waterfall’s Unidirectional Gateways dramatically reduce cyber risks to connected OT networks. One-way hardware prevents attack information from reaching back into OT networks, significantly reducing risks for even obsolete, unpatchable targets.

2) Limit the exposure of your connectivity – Waterfall’s Secure Bypass product is a time-limited switch, controlling how often and how long vulnerable software components are exposed to external networks. Waterfall’s Unidirectional Gateways are intrinsically outbound-only connections – no inbound threat is possible to connected devices through the gateways.

3) Centralise and standardise network connections – Waterfall’s Unidirectional Gateways scale from the smallest DIN rail form factors to 10Gbps rack-mount devices supporting dozens of simultaneous connectors & replications, making both distributed and centralized deployment straightforward.

4) Use standardised and secure protocols – Waterfall’s Unidirectional Gateways support dozens of OT protocols and applications, both plain-text and encrypted versions. Better yet, even when using plain-text communications into IT networks, no session hijack or other plain-text attack can reach through the unidirectional hardware back into the OT network to put physical operations at risk.

5) Harden your OT boundary – The guidance recommends hardware-enforced unidirectionality and integrity filtering. Waterfall’s Unidirectional Gateways enforce unidirectionality in hardware. Waterfall’s Hardware-Enforced Remote Access (HERA) uses a hardware filter to ensure only HERA protocol information can enter the OT side of the HERA device.

6) Limit the impact of compromise – Waterfall Unidirectional Gateway and FLIP products are compatible with a wide variety of anti-virus systems, patch management systems, zero trust, and other systems that provide this second level of defense in defense-in-depth programs.

7) Ensure all connectivity is logged and monitored – Waterfall for IDS is hardware-enforced protection for SPAN and mirror ports sending data to IT-resident OT intrusion detection system (IDS) sensors. Waterfall is partnered with all the most important OT IDS vendors.

8) Establish an isolation plan – Waterfall’s Unidirectional Gateways are used by TSA-compliant sites and other sites with isolation / islanding requirements. The gateways ensure critical data continues to move, even during “isolation” emergencies where firewalls are not permitted to connect OT with IT networks, or the Internet.

Waterfall’s Unidirectional Gateway, HERA remote access and other hardware-enforced products are dramatically stronger than software and are used routinely at the sensitive IT/OT trust/consequence boundary.

FAQ about the NCSC / CISA “Secure Connectivity Principles for Operational Technology (OT)” guidance

What are the key recommendations from the NCSC / CISA “Secure Connectivity Principles for Operational Technology (OT)” guidance?

The guidance heavily emphasizes a “Push-Only” architecture, where data is sent from the secure OT zone to lower-trust corporate zones, preventing external, unsolicited inbound connections. The guidance recommends unidirectional hardware as a powerful tool to enforce the “push only” rule.

The guidance is for OT asset owners and operators, cybersecurity professionals, integrators and manufacturers and risk managers and engineers – at medium-sized to large industrial sites or enterprises. The guidance is fairly abstract and requires expertise to understand, expertise that is generally not available at the smallest of industrial sites.


About the author

Waterfall team



Groundbreaking OT Security Guidance https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/groundbreaking-ot-security-guidance/ Thu, 05 Feb 2026 14:22:54 +0000 https://waterfall-security.com/?p=38306 The UK National Cyber Security Centre (NCSC) in conjunction with many others, including CISA, CCCS, BSI, FBI, NCSC-NL and NCSC-NZ, has just issued new guidance: Secure connectivity principles for Operational Technology (OT).

The post Groundbreaking OT Security Guidance appeared first on Waterfall Security Solutions.


Groundbreaking OT Security Guidance

I’ve been working in OT security for decades and I don’t say this lightly: I’ve never seen guidance like this before. The UK NCSC, alongside CISA, the Canadian CCCS, and others, just released new guidance on securing OT connectivity that includes topics rarely (if ever) covered before.

Andrew Ginter

Groundbreaking OT Security Guidance

The UK National Cyber Security Centre (NCSC) in conjunction with many others, including CISA, CCCS, BSI, FBI, NCSC-NL and NCSC-NZ, has just issued new guidance: Secure connectivity principles for Operational Technology (OT). The guidance is designed for medium-sized through large industrial sites and includes many topics that are either unique in the industry – that I’ve never seen in guidance before – or are otherwise unusual or infrequent – and useful.

These topics include: keeping the most IT / Internet-exposed equipment the most patched, centralizing the most dangerous connections, abstracting any instructions that OT receives from IT or the Internet if we can, hardening IT/OT interfaces with cross-domain solutions, using unidirectional hardware and hardware-enforced remote access, microsegmenting east/west OT communications, paying special attention to “break glass” accounts and workstations, not permitting anything like a remote-access engineering workstation, and using unidirectional hardware to help with islanding / emergency isolation requirements.

The document is, however, 33 pages long, and much of the language is general and abstract – it can be hard to figure out what the real point is. Here is a condensed version, with simplified language and occasional examples. This introduction may not be as accurate as the original, but I hope to give readers enough of a head start on the tricky bits to have a fighting chance of getting through the document.

Overview

Let’s begin – the NCSC document describes 8 principles – with my summaries & paraphrasing in italics.

  1. Balance the risks and opportunities – a somewhat confusing mix of OT cyber risk, brownfield cautions, and supply chain advice – most readers have seen this stuff before.
  2. Limiting the exposure of your connectivity – when we have to connect stuff to IT or worse to the Internet, keep it patched, scan regularly for Internet-exposed IP addresses and services, and be paranoid about wireless communications. None of the individual bits of advice are new, but some of the combinations are unusually useful.
  3. Centralise and standardise network connections – minimise our external connectivity, and ideally route it all through a central facility for intrusion detection and active management – of rules, vulnerabilities, actionable intel, etc. This is practical advice that I have not seen before.
  4. Use standardised and secure protocols – use encryption and authentication inside our ICS as much as is practical, and always encrypt and authenticate communications across IT, Internet and other external networks. Good advice, not terribly new.
  5. Harden your OT boundary – lots of good advice for this important consequence boundary, including hardware-enforced unidirectional gateways, hardware-enforced remote access and (unusual) cross-domain solutions. Good advice – some of it right out on the edge of state-of-the-practice.
  6. Limit the impact of compromise – a surprisingly old discussion of types of firewall packet filtering that everyone really should know already, coupled with a newer discussion of options for microsegmentation to control “lateral movement” (pivoting attacks).
  7. Ensure all connectivity is logged and monitored – the usual exhortation to monitor connectivity, especially remote access from IT networks and the Internet, with an interesting segue into “break glass” connectivity.
  8. Establish an isolation plan – talks about different kinds of site and/or subsystem emergency isolation / islanding approaches, including a brand-new discussion of the business value of hardware-enforced unidirectional communications as part of the emergency islanding plan.

With that introduction, let’s dig into what’s new and what’s interesting.

Keep Exposed Gear Patched

Lots of OT guidance talks about how important it is to patch systems. Lots talks about how hard it is to patch change-controlled or obsolete (or both) OT systems. Very few bits of guidance talk about how important it is to patch IT-exposed or Internet-exposed equipment. This document does – Section (2) says in rather abstract language, look – if we’ve had to connect something to the IT network or to the Internet – like a firewall, or a software service through a firewall – keep it patched. And if we cannot patch the connected device or software, then it should not be connected to the Internet. And if we cannot patch the underlying OS, that’s as bad as not being able to patch the application – get it off the Internet!

Centralize

I’ve never seen guidance tell us to centralize our most dangerous communications connections before. To a lot of practitioners this is second nature – if we do not have the people or skills at remote or unstaffed sites to keep communications infrastructure up to date, monitored, documented and maintained, then most of us already try to do it centrally where we do have the people. This is worth saying in guidance, and again, Section (3) is the first time I’ve seen this advice written down and endorsed by such a wide range of authorities.

Abstraction

Section (4) talks about encryption, authentication and – abstraction. The section does not use the word “abstraction” but does talk about “protocol validation.” For example, if a cloud-based AI is making complex optimization decisions and writing encrypted / authenticated Modbus into a bunch of OT PLCs, does an NGFW (next-generation firewall) looking at that traffic have any hope of figuring out whether the instructions to the PLCs are safe?

If instead, the AI sent an XML file into a Manufacturing Execution System (MES) in the OT network, and the XML file said to orient the <drum> to <low> or <high> orientation, rather than 23.2 degrees, or heat the drum to the 73% point in the allowed, safe operating temperature range rather than to 352 °C, verification of the safety of the communication would be as simple as checking the XML document to make sure it agrees with the XML schema.
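The check described above can be sketched in a few lines of Python, using the <drum> example; the element names and the explicit allowed-value checks are illustrative stand-ins for validation against a formal XML schema:

```python
import xml.etree.ElementTree as ET

# Abstracted instructions are easy to validate: the allowed vocabulary is
# small and enumerable, unlike raw setpoints written straight into PLCs.
ALLOWED_ORIENTATIONS = {"low", "high"}     # no arbitrary angles allowed

def instruction_is_safe(xml_text: str) -> bool:
    """Return True only if the inbound instruction stays inside the
    abstract, known-safe vocabulary."""
    root = ET.fromstring(xml_text)
    if root.tag != "drum":
        return False
    if root.findtext("orientation", default="") not in ALLOWED_ORIENTATIONS:
        return False
    try:
        # a percentage of the safe operating range, never a raw temperature
        heat_pct = float(root.findtext("heat_percent", default=""))
    except ValueError:
        return False
    return 0.0 <= heat_pct <= 100.0
```

An instruction like `<drum><orientation>low</orientation><heat_percent>73</heat_percent></drum>` passes; anything outside the vocabulary, like a raw angle or a raw temperature, is rejected before it ever reaches physical equipment.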

Now, this is easier said than done – most of us are stuck with whatever communications protocol the application vendors give us, but the concept makes sense. And this is the first time I’ve seen a piece of multi-government guidance talk about the concept. If owners and operators start demanding this capability (citing the NCSC guidance) and using the capability to decide which external systems to purchase / connect to, vendors (hopefully) will eventually respond or lose business.

IT/OT Hardening

Section (5) starts with some introduction and then repeats the exhortation to keep our IT/OT firewalls patched. The section continues and eventually recommends hardware-based unidirectional security controls. This is not the first time we’ve seen that advice, but the unidirectional option is often missed – these people caught it.

And then the advice gets a little confusing. It talks about Cross Domain Solutions (CDS), which is a military term for (oversimplified) cleaning malware out of documents going into high-security / classified networks. In OT, an emerging use I’ve observed for this kind of CDS technology is to keep malware and other attack information out of communications that arrive in OT networks from IT, or worse from the Internet. 

And then the advice gets more confusing. It starts talking about “data diodes” (hardware-enforced unidirectional communications), but the advice does not make a lot of sense unless we apply it to communications going into an OT network. This is not intuitive. Most unidirectional hardware is oriented to send stuff out of an OT network, not in. That said, I do see inbound unidirectional traffic in customer deployments increasingly frequently, and this is the first government guidance I’ve seen for sending stuff unidirectionally into an OT network.

Simplifying the advice, it says:

  1. The simplest (inbound) hardware diodes only forward data, sometimes including attack data, into OT networks. These devices do not check for malicious content the way a CDS can. 
  2. Two diodes, one in and one out, where a communications protocol is split so that inbound packets go in through one diode and answers come out through the other is not useful – this is an “antipattern.”
  3. The best inbound unidirectional hardware checks the validity of data passing into OT – checks the data in hardware, not in external software.
  4. Pushing inbound data unidirectionally into a “unidirectional DMZ” (unidirectional hardware inbound one side, and a second unidirectional gateway outbound on the other side) with data validation (eg: a software CDS) done “in the middle” is a useful design.

All four are true. (1) and (2) basically say “caveat emptor.” There are diode hardware vendors out there making claims that are not defensible. (2) in particular confuses a lot of people. When I see Waterfall’s Unidirectional Gateways deployed to send information both into and out of an OT network, I never see nor recommend a round-trip protocol like the “anti-pattern” in (2). (2) is how command and control (C2) loops work. 

Recommendations (3) and (4) are confusing as well – in my read (4) contradicts (3) – (4) says data validation should be done in the software CDS, while (3) says to do the validation in the unidirectional hardware. Don’t get me wrong, (4) is still a good idea, but (4) is not as powerful as (3)’s validation done in the unhackable hardware. In the past I’ve seen (4) discussed only in the context of classified networks, and even then only in the most abstract terms, because I have no security clearance. But in principle, yes, we can use the concept of a CDS between a pair of hardware-enforced gateways to push data into OT as well.

Point (3) is unusual in another respect – the requirement for hardware filtering / validation of data entering OT. I’ve only seen the hardware filtering recommendation once before – in the 2024 Modern Approaches to Network Access Security guidance, talking about hardware-enforced remote access (HERA).

Microsegmentation

Section (6) talks about lateral movement. Other documentation calls these pivoting attacks: using compromised equipment to attack other equipment in the same network, eventually reaching equipment that can push attack connections through firewalls into more critical networks. The IT buzzword to address this risk is “microsegmentation.” Section (6) is a good discussion of the role firewalls play in slowing down attack propagation inside OT networks. There is a nice discussion of using built-in host firewalls, but that discussion is missing a caveat that host firewalls are more practical higher in an OT architecture, closer to the IT network. Vendor support agreements and change control constraints make managing host firewalls harder when we get deeper into OT architectures.

And as mentioned earlier, the section has a surprisingly long discussion of the difference between routing, static firewall rules, stateful inspection and deep-packet inspection (DPI), a discussion that I’m pretty sure every OT practitioner can already recite backwards. The information is correct, but could have been much shorter, saying essentially “modern firewalls do the good stuff, and we should not pretend that what looks like firewall rules in switches and routers has much security value at all.”

What is surprisingly good is a very short section entitled “Browse Down.” I had to dig into some of their references, but what they’re saying is:

  • Give only a very small number of machines the ability to make far-reaching configuration and security changes – eg: minimize the number of engineering workstations, and
  • Lock down and secure those machines nine ways to Sunday – they are prime targets when intruders get into the systems.

Said negatively – do not allow Internet-exposed machines to carry out sensitive reconfiguration of our OT systems. For example – do not let any remote access laptop carry out these functions. I read the advice as saying, to the greatest extent feasible, “remote engineering workstations” should be an oxymoron. I agree completely – but have never heard anyone write this down before. Good job.

Break Glass Access

Section (7) has an interesting discussion of “break glass” access. Again, I had to look up what this was: accounts, and especially remote access accounts, that can be used to bypass normal security mechanisms in an emergency, such as when our password vault is compromised, or goes up in smoke. The term was easily findable, so I’m guessing it’s widely used in IT. The concept makes sense – common wisdom in IT for “break glass” accounts is to secure them really thoroughly. “Break glass” accounts do not need to be convenient to use – these are emergency measures only.

The guidance recommends that if our IDS or logging ever sees anyone use a break-glass account, then those tools should issue the highest priority alarms they can to our security operations center (SOC). This makes sense. Use these powerful accounts in emergencies, not for routine remote access.
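The alerting rule can be sketched as a trivial log monitor, assuming Python; the account names and log format below are illustrative assumptions, not a real product's conventions:

```python
# Any use of a break-glass account triggers the highest-priority alert
# the SOC pipeline supports - there is no "normal" use of these accounts.
BREAK_GLASS_ACCOUNTS = {"bg-ot-admin", "bg-scada-recovery"}
HIGHEST_PRIORITY = 1

def alerts_for(log_lines):
    """Yield (priority, message) for every login by a break-glass account."""
    for line in log_lines:
        fields = line.split()  # e.g. "2026-02-05T14:22:54 LOGIN bg-ot-admin"
        if len(fields) >= 3 and fields[1] == "LOGIN":
            user = fields[2]
            if user in BREAK_GLASS_ACCOUNTS:
                yield (HIGHEST_PRIORITY,
                       f"break-glass account {user} used - verify emergency")
```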

Islanding

Section (8) talks about isolation / islanding: disconnecting IT from OT in IT emergencies, such as a ransomware infection, so OT can continue working throughout the IT emergency. This advice is not unique – the US TSA continues to require emergency isolation for rail systems in TSA SD 1580-21-01E, for pipelines in TSA SD 2021-02F, and the Danes require it in their latest Executive Order 260 of 2025. What is unique is the connection to hardware-enforced unidirectional gateway technology. The advice suggests either:

  • Deploy a gateway as the sole connection outbound from OT to IT, which amounts to “permanent” islanding – no malware from IT can ever propagate back into OT through the gateway, or
  • Deploy an outbound gateway in parallel to a firewall at the IT/OT interface, so that when we power off that firewall for the duration of an IT / ransomware emergency, critical communications can still flow from OT to IT, or to the Internet – for partners, government regulators, etc.

While I’ve seen many of these kinds of unidirectional islanding deployments in the last several years, and I’m aware that regulators seem happy with those designs, this is the first time I’ve seen unidirectional hardware actually described and recommended in guidance in the context of an islanding / isolation discussion.

Conclusions

There are minor nits I could pick with the document: the guidance uses “secure” as an adjective (first law of OT security – nothing is “secure”), it talks about CIA / AIC / etc. as if information was the asset we are protecting (we in fact protect safe, reliable and efficient physical operations), and talks about “compensating controls” as if boundary protection were a secondary priority, rather than the first priority for preventing cyber-sabotage (see Biba’s 50-year-old cybersecurity theory).

But there is no point in picking nits. While difficult to understand sometimes, this is a groundbreaking piece of guidance, covering useful topics that I’ve never seen covered before. Good job.


About the author

Andrew Ginter

Andrew Ginter is the most widely-read author in the industrial security space, with over 35,000 copies of his three books in print. He is a trusted advisor to the world's most secure industrial enterprises, and contributes regularly to industrial cybersecurity standards and guidance.

The post Groundbreaking OT Security Guidance appeared first on Waterfall Security Solutions.

IT/OT Cyber Theory: Espionage vs. Sabotage https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/it-ot-cyber-theory-espionage-vs-sabotage/ Tue, 06 Jan 2026 14:35:13 +0000 https://waterfall-security.com/?p=38210 The second-generation of OT security advice started to emerge in 2012-2016.


IT/OT Cyber Theory: Espionage vs. Sabotage


Andrew Ginter


The second generation of OT security advice started to emerge in 2012-2016. At the time, the difference between the second and first gen advice was a bit confusing. In hindsight, one important difference has become clear – the difference between preventing cyber-sabotage vs. cyber-espionage. We do not prevent sabotage the same way we prevent espionage. 50-year-old cybersecurity theory (wow – we’ve been at this a long time) makes the difference clear. Bell / La Padula’s theory is how we prevent espionage, while Biba’s theory is how we prevent cyber-sabotage.

Let’s look at each of these theories and at how they define one of the fundamental differences between our approach to OT vs IT security.

First Gen Security Advice

First-gen OT security advice said, loosely:

  1. Information is the asset we protect, so
  2. Assure the confidentiality, integrity and availability (CIA) of the information assets.

And of course, we muttered at the time a bit about CIA vs AIC vs IAC as priorities, but we all agreed, however hard the concept seemed at the time, that information was the asset we were protecting. This was and is, back of the envelope, exactly what we still do on IT networks. After all, when engineering teams first started looking at cybersecurity, who were the experts we could call on for help? There were no OT security experts back then, and so we called on IT experts. It is therefore no surprise that first-gen OT security advice was close to indistinguishable from IT security advice.

The theory backing up preventing theft of information was defined by Bell and La Padula. The theory had its roots in timeshared computers – 50 years ago, large organizations had only small numbers of computers with hundreds of users each. And in some organizations, like the military, it was really important that we prevent low-classification users from reading high-classification national secrets. Bell / La Padula theory mandated that, to prevent espionage:

  1. A “subject” or “actor” at a given security level must never be able to read information from a higher security / classification level, and
  2. That actor must never be able to write information to any lower security level.

Rule (1) is obvious to most people encountering the theory for the first time. (2) often seems a little strange. To make sense of (2), imagine that malware has established a foothold in a classified user’s account. If the user can write sensitive classified information into less-sensitive areas of the computer, then so can the malware. In the worst case, the information may be steganographically encoded – such as spreading the information through the low-order bits of pixels in images. To prevent all information leakage, we must forbid any information flowing from high-security to low-security users and systems, because steganographic encoding is always possible, at least in theory.
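As an illustration (my sketch, not anything from the original paper), Bell / La Padula’s two rules reduce to a couple of comparisons in a reference monitor. The classification labels and numeric ordering are assumptions for the example:

```python
# Higher number = more classified. Labels are illustrative.
LEVELS = {"unclassified": 0, "secret": 1, "top-secret": 2}

def blp_allows(subject_level, op, object_level):
    """Bell / La Padula checks: no read-up, no write-down."""
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if op == "read":
        return s >= o   # rule 1: never read from a higher level
    if op == "write":
        return s <= o   # rule 2: never write to a lower level
    raise ValueError("unknown operation: " + op)

assert not blp_allows("secret", "read", "top-secret")     # read-up denied
assert not blp_allows("secret", "write", "unclassified")  # write-down denied (steganography risk)
assert blp_allows("secret", "read", "unclassified")       # reading down is fine
```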

Second-Gen OT Security

Second-gen advice said, loosely, that in most OT systems, information is not the most important asset we protect, but rather:

  a. Safe, reliable and efficient physical operations are what we protect, and
  b. All cyber-sabotage is (by definition) information, so to protect physical operations, we must control the flow of attack information into high-consequence automation systems and networks from lower-consequence networks.

At the time this advice came out, (a) made a lot of sense to a lot of engineering teams. They had never been comfortable with the idea that information was the asset they were trying to protect. (b) seemed a bit strange at first to a lot of people but made sense if you thought about it for a day or two. Nobody can deny that cyber-sabotage is information – the only way an automation system can change from a normal state to a compromised state is if attack information enters the system, somehow. Controlling the flow of information therefore makes sense – and if we think about first-gen OT security advice, such as the IEC 62443-1-1 standard, a good half of that first standard was focused on network segmentation – controlling the flow of attack information.

The theory backing up this second-gen perspective was defined by Biba, not Bell and La Padula. Biba’s theory also had its roots in timeshared computers for the military, but was focused on preventing sabotage, not preventing espionage. Eg: think the difference between preventing re-targeting of nuclear weapons, vs. preventing the theft of the knowledge of how to build those same weapons. Biba’s theory mandated that, to prevent cyber-sabotage:

  1. A “subject” or “actor” at a given security level must never be able to read information from a lower security level, and
  2. That actor must never be able to write information to any higher level.

Rule (2) is easier to understand for most people encountering the theory for the first time – a malicious actor must not be able to write malware into a higher security level (eg: to change the missiles’ targets). In Biba’s theory, (1) is the strange one. To make sense of it, imagine that malware has established a foothold in a less-secured, less-sensitive network, like the Internet. If a sensitive network pulls information from the Internet, we risk pulling malware, which if activated, can wreak havoc.
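Biba’s rules are the mirror image of the espionage-prevention rules: integrity levels replace classification levels, and the comparisons flip. A minimal sketch, with the network names assumed purely for illustration:

```python
# Higher number = higher integrity / higher consequence. Labels are illustrative.
LEVELS = {"internet": 0, "it": 1, "ot": 2}

def biba_allows(subject_level, op, object_level):
    """Biba checks: no read-down, no write-up."""
    s, o = LEVELS[subject_level], LEVELS[object_level]
    if op == "read":
        return s <= o   # rule 1: never read from a lower integrity level
    if op == "write":
        return s >= o   # rule 2: never write to a higher integrity level
    raise ValueError("unknown operation: " + op)

assert not biba_allows("ot", "read", "internet")  # pulling from less-trusted nets: denied
assert not biba_allows("it", "write", "ot")       # pushing into high-consequence nets: denied
assert biba_allows("ot", "write", "it")           # outbound-only flow: fine
```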

Second-gen advice therefore generally forbade any online transfer of information from less-secure networks into high-consequence safety-critical or equipment-critical networks.

Data Diodes + Unidirectional Gateways

Data Diodes were the military’s answer to Bell / La Padula and Biba. Unidirectional Gateways were OT security’s answer. The difference?

  • Data Diodes send information into confidential military networks and are physically unable to leak any national secrets back out.
  • Unidirectional Gateways send information out of OT networks into IT, and are physically unable to leak cyber-sabotage attacks back in.

There are secondary differences as well. For example, data diodes typically transmit a very limited number of data types into military networks through custom-engineered software, while unidirectional gateways replicate OPC, historian and many other kinds of servers out to IT networks using off-the-shelf software components.

And every rule has exceptions. Many manufacturing operations use trade secrets that they cannot afford to have stolen, for example. And most industrial operations need some very small, very select data to flow back into the system from time to time.

Both Bell / La Padula and Biba’s theories provided for these exceptions, and demanded that any data flow that violated the primary principles be minimal, simple, understandable, and deeply scrutinized to ensure that the primary objective (preventing espionage, or sabotage, respectively) was not compromised by these secondary objectives and data flows.

Resilience

Third-gen OT security advice, for the record, is still emerging and is focused on resilience. The theoretical framework behind resilience is more engineering practice than mathematics, but we are working on it. The most thorough, most widely-used resilience framework today is Idaho National Laboratory’s (INL’s) Cyber-Informed Engineering (CIE). CIE is positioned as “the big umbrella.” CIE encompasses cyber-relevant parts of safety engineering, protection engineering, automation engineering, and network engineering, as well as most of the cybersecurity discipline, including all of Bell / La Padula and Biba’s theories.

Using This Knowledge

An important difference between IT and OT networks is the difference between preventing espionage and preventing sabotage. First-gen advice seemed a hard fit for OT, in part because that advice tried to apply the language and concepts of preventing espionage to the task of preventing sabotage. In hindsight, second-gen advice corrected this, though neither generation of advice used the words “espionage” or “sabotage,” nor referenced the 50-year-old theory.

Today our terminology is maturing, and OT security’s connections to the theoretical foundations of cybersecurity are becoming clearer. Clarifying this understanding and terminology helps a lot when trying to get our engineering and enterprise security teams to work together. If we are to cooperate effectively, we need to understand foundational differences between the assets and networks we protect, and we need a terminology to express those differences as we design our joint security programs.

Digging Deeper

This is one of the topics that will be covered in Waterfall’s Jan 28 webinar Bringing Engineering on Board and Resetting IT Expectations. Please <click here> to register.

Ships Re-Routed, Ships Run Aground https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/ships-re-routed-ships-run-aground/ Tue, 06 Jan 2026 09:38:29 +0000 https://waterfall-security.com/?p=38185 “Everyone” has heard of the 5-week shutdown of Jaguar Land Rover by a cyber attack. That attack is the obvious headline for Waterfall's up-coming webinar “Top 10 OT Cyber Attacks of 2025” that I'm currently researching.

The post Ships Re-Routed, Ships Run Aground appeared first on Waterfall Security Solutions.

]]>

Ships Re-Routed, Ships Run Aground


Andrew Ginter


“Everyone” has heard of the 5-week shutdown of Jaguar Land Rover by a cyber attack. That attack is the obvious headline for Waterfall’s upcoming webinar “Top 10 OT Cyber Attacks of 2025” that I’m currently researching. But – is this attack the most interesting of 2025?

Here are a couple of other incidents for consideration:

While details of the investigations into these events have not been published, on the surface the three incidents seem to be evidence of the importance of evaluating residual risk when we design automation and cybersecurity systems.

GPS Spoofing

A bit of background first: GPS spoofing (as opposed to simpler GPS jamming) is when false geolocation signals are transmitted, either directionally to affect a specific target, or broadcast in a region to indiscriminately affect all nearby receivers. GPS satellite signals are comparatively weak, and it does not take a very powerful transmitter to overwhelm legitimate signals. GPS spoofing has become fairly common in kinetic conflict areas such as the Middle East (the Red Sea in particular), the North/South Korean border, the Black Sea and Baltic Sea, Northern Europe, and anywhere near Ukraine and western Russia. All of which means that anyone who cares about where they are in these and other regions really cannot rely exclusively on GPS.

Rerouting Tankers

The original report of the teenager’s hack of ship routes included graphics with the appearance of an Electronic Chart Display and Information System (ECDIS), which is a shipboard system that regulators allow as a substitute for paper charts. An ECDIS displays the position and heading of vessels automatically, pulling information from the ship’s GPS and other location systems, as well as from Automatic Identification System (AIS) broadcasts from nearby ships detailing those ships’ location, speed, heading and other navigational data. Some (all?) of these ECDIS can also steer ships by auto-pilot, once a route is entered. While the news report’s ECDIS-looking graphic was entitled “Maritime traffic in the Mediterranean” and subsequent reports claimed the teenager in fact hacked into one or more ECDIS, these reports may not be accurate. It seems more plausible, to me at least, that the individual hacked into a shore-side system that managed route planning for multiple ships, rather than hacking into multiple ships at sea and modifying their shipboard systems to bring about the diversions.

Assessing Residual Risks & Consequences

Managing cyber risk to physical operations involves more than blindly deploying a bunch of OT security controls, dusting our hands off, and walking away. It’s easy to say “Hah! They should have had two-factor!” or some such, but 2FA isn’t going to help with GPS spoofing, is it?

Once we’ve deployed an automation or security system, we need to evaluate residual risk – what’s left over? The right way to do this is not just to produce a list of missing patches in our PLCs. The right way is to look at a representative spectrum of credible attacks – attacks that are reasonable to believe may be leveled against us, the system, or someone much like us or the system, within our planning horizon. Evaluate these credible attacks against our defensive posture and determine the credible consequences – what consequences are reasonable to expect when a credible attack hits us? And when those consequences are unacceptable (eg: ship runs aground, oil tanker is diverted into environmentally sensitive waters), we need to change something.

For example, given the prevalence of GPS spoofing in many regions, and the prevalence of GPS jammers in many more, it seems reasonable to me that anyone (operating a ship, an aircraft, or a locomotive) who needs to know their precise position or even the precise time needs multiple, independent sources of that information. And we need alarms to sound when those independent sources disagree materially, and we need manual or other fall-back procedures when we detect such disagreement.
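A hedged sketch of that cross-check: compare two independent position fixes (say, GPS vs. inertial) and raise an alarm when they disagree materially. The haversine distance and the one-nautical-mile threshold are illustrative assumptions – real thresholds would come from the navigation team:

```python
from math import radians, sin, cos, asin, sqrt

def nm_between(a, b):
    """Great-circle distance between two (lat, lon) fixes, in nautical miles (haversine)."""
    lat1, lon1, lat2, lon2 = map(radians, (*a, *b))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * asin(sqrt(h)) * 3440.065   # Earth mean radius in nautical miles

def positions_disagree(fix_a, fix_b, threshold_nm=1.0):
    """True when two independent fixes differ by more than the alarm threshold."""
    return nm_between(fix_a, fix_b) > threshold_nm

# GPS and inertial fixes that agree closely: no alarm.
assert not positions_disagree((35.0, 18.0), (35.001, 18.001))
# Fixes roughly 30 nm apart (possible spoofing): sound the alarm.
assert positions_disagree((35.0, 18.0), (35.5, 18.0))
```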

Another example – given the importance of a big vessel’s route, it seems reasonable that when the route changes for any reason, the captain should be notified of the change, and the change logged in an indelible / WORM ship’s log. It also seems reasonable that captains or acting captains are trained to examine unexpected route changes to make sure they make sense – not just because of potential attacks, but because of potential errors and omissions of shipboard or on-shore personnel. Note: I’m not an expert on shipboard systems – for all I know all this happens already and is how the teenager’s hack was detected? One can hope.
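True WORM behavior is a property of the storage hardware or media, but a hash chain is a simple software analogue that makes after-the-fact tampering with a route-change log detectable. A sketch, with the record fields invented for illustration:

```python
import hashlib
import json

def append_entry(log, record):
    """Append a record whose hash covers the previous entry's hash (a chain)."""
    prev = log[-1]["hash"] if log else "0" * 64
    body = json.dumps(record, sort_keys=True)
    entry_hash = hashlib.sha256((prev + body).encode()).hexdigest()
    log.append({"record": record, "prev": prev, "hash": entry_hash})

def verify(log):
    """Recompute the chain; any edited or removed entry breaks it."""
    prev = "0" * 64
    for e in log:
        body = json.dumps(e["record"], sort_keys=True)
        if e["prev"] != prev or e["hash"] != hashlib.sha256((prev + body).encode()).hexdigest():
            return False
        prev = e["hash"]
    return True

log = []
append_entry(log, {"event": "route change", "by": "shore office"})
append_entry(log, {"event": "captain acknowledged"})
assert verify(log)
log[0]["record"]["by"] = "attacker"   # tampering breaks the chain
assert not verify(log)
```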

Reasonable Responses to Credible Threats

When we make decisions about other people’s safety, we have ethical and often legal obligations to make reasonable decisions. For that matter, when we make decisions about other people’s money, especially large amounts of it, we have similar obligations. OT security is more than OT putting our head in the sand and saying “Ship route planning is an IT system.” It is more than IT putting their head in the sand and saying “Not running aground is the captain’s responsibility.” Every business has an obligation to make reasonable design, training and other decisions about the safety of the public and workers, and reasonable decisions about the large amounts of money invested in physical processes like large ships.

More generally, we study attacks to understand what is reasonable to defend against. And we study breaches and defensive failures to try to understand whether our own management processes would really have prevented analogous breaches and failures.

New CISA, CCCS et al Alert | Advice on Pro-Russian Hacktivists Targeting https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/new-cisa-cccs-et-al-alert-advice-on-pro-russian-hacktivists-targeting/ Tue, 06 Jan 2026 08:49:25 +0000 https://waterfall-security.com/?p=38047 The most recent CISA, CCCS et al alert / advice on pro-Russian hacktivists targeting critical infrastructures is a lot of good work, with one or two exceptions.


New CISA, CCCS et al Alert | Advice on Pro-Russian Hacktivists Targeting


Andrew Ginter


The most recent CISA, CCCS et al alert / advice on pro-Russian hacktivists targeting critical infrastructures is a lot of good work, with one or two exceptions. The alert documents poorly resourced hacktivists connecting with ICS gear over the Internet and hacking it. That gear tends to control critical infrastructures in the smallest, poorest and weakest of critical infrastructure installations – infrastructures most in need of simple, clear advice.

To its credit, the guide documents threats and tactics, and provides advice to both owners / operators and device manufacturers. However, the guide misses the mark in the section “OT Device Manufacturers.” I find this language very misleading:

“Although critical infrastructure organizations can take steps to mitigate risks, it is ultimately the responsibility of OT device manufacturers to build products that are secure by design.”

And,

“By using secure by design tactics, software manufacturers can make their product lines secure “out of the box” without requiring customers to spend additional resources making configuration changes, purchasing tiered security software and logs, monitoring, and making routine updates.”

When I read these words, the message I get is “If device manufacturers would only do their job better, then critical infrastructure owners and operators could ignore security and go forth to connect as much of their control systems as they wish to the Internet.”

This is of course nonsense.

We can configure “secure” products into hopelessly insecure systems, just as we routinely (with a bit of care) configure “insecure” ICS products into “secure” systems. That manufacturers should “take ownership of security outcomes” does not mean they can or should ever take sole ownership of such outcomes. A sentence or two to this effect would help readers better understand the relative responsibilities of manufacturers vs. owners & operators.

By analogy, automobile manufacturers can build all the seat belts, turn signals and rear-view mirrors they want into their vehicles, but owners and operators still need to be taught to use these features to improve their driving safety. More specifically, owners and operators of the smallest, poorest and most vulnerable critical infrastructures need to hear that it is never reasonable for them to deploy safety-critical or reliability-critical HMIs on the Internet, no matter what “secure” by design features have been built into these products.

And again, while I commend these organizations for doing the work of putting out the alert / guidance, a second piece of feedback is that their advice to owners and operators missed the mark. It is not that the advice is wrong – it is aimed at the wrong audience. The advice is appropriate for larger “medium-sized” infrastructures with a larger workforce, some of whom are knowledgeable in basic computer and cybersecurity concepts. The hacktivist attacks we’re talking about are targeting the smallest, poorest and least well-defended of critical infrastructures globally. These are organizations that uniformly suffer from STP Syndrome – Same Three People.

There are no staff in these organizations who will understand the carefully phrased, completely general and abstract language of the guide’s 8 major recommendations and 17 sub-recommendations. These smallest organizations need the simplest advice possible. Eg:

  • Don’t connect any of your OT systems to the Internet. Ever.
  • Don’t enable remote access into any of your OT systems. Ever.
  • Auto-update all of your ICS firewalls, and religiously replace these devices every 3 years, because let’s face it, some time after that the manufacturer is going to stop providing updates, and when they do, you’re not going to notice are you?
  • Lock the doors to rooms containing your OT gear, and change the locks annually to control who has access to the space, because again, let’s face it, you’re going to lose track of who has those keys aren’t you?
  • Make sure you have backups and spare equipment to restore those backups into when your main equipment breaks, or when that gear is hacked irrecoverably.
  • Buy insurance from a reliable provider who can send someone who knows what they’re doing to your site when you have an emergency, to clean up the mess and restore your systems.

Again – I commend these organizations for making the effort. Securing the smallest, least-capable critical infrastructures is a hard problem to solve. This document is much better than nothing but would benefit from clearer and stronger guidance targeting owners and operators of the smallest critical infrastructure control systems, not just manufacturers of the control devices in those systems.

We can’t – and shouldn’t – fix everything – Episode 147 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/we-cant-and-shouldnt-fix-everything-episode-147/ Wed, 17 Dec 2025 14:47:15 +0000 https://waterfall-security.com/?p=38027 We know there are problems in our security systems, but we can't and shouldn't fix everything. What do we fix? Who decides? How do we explain what's reasonable to people who do decide? Kayne McGladrey, CISO in Residence at Hyperproof, joins us to explore risk, communication, and a surprising role for insurance.

The post We can’t – and shouldn’t – fix everything – Episode 147 appeared first on Waterfall Security Solutions.

]]>

We can’t – and shouldn’t – fix everything – Episode 147

We know there are problems in our security systems, but we can't and shouldn't fix everything. What do we fix? Who decides? How do we explain what's reasonable to people who do decide? Kayne McGladrey, CISO in Residence at Hyperproof, joins us to explore risk, communication, and a surprising role for insurance.


“We have new intel. The threat has changed, the probability has changed, the impact has changed, whatever it might be. Do we still feel good about our previous judgment of this?” – Kayne McGladrey

Medical Device Cybersecurity Is Tricky – Episode 146 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/medical-device-cybersecurity-is-tricky-episode-146/ Thu, 11 Dec 2025 14:21:50 +0000 https://waterfall-security.com/?p=37991 Yes the device has to be safe to use on patients, and yes it has to produce its results reliably, but patient / data confidentiality is also really important. Naomi Schwartz of Medcrypt joins us to explore the multi-faceted world of medical device cybersecurity - from MRI's to blood sugar testers.


Medical Device Cybersecurity Is Tricky – Episode 146

Yes the device has to be safe to use on patients, and yes it has to produce its results reliably, but patient / data confidentiality is also really important. Naomi Schwartz of Medcrypt joins us to explore the multi-faceted world of medical device cybersecurity - from MRI's to blood sugar testers.


“I would estimate that somewhere between 30 and 50% of medical devices that are submitted to FDA today qualify as a cyber device per the Food, Drug and Cosmetic Act.” – Naomi Schwartz

Hardware Hacking – Essential OT Attack Knowledge – Episode 145 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/hardware-hacking-essential-ot-attack-knowledge-episode-145/ Wed, 26 Nov 2025 01:46:31 +0000 https://waterfall-security.com/?p=37609 If you can touch it, you can hack it, usually. And having hacked it, you can often more easily find exploitable vulnerabilities. Marcel Rick-Cen of Foxgrid walks us through the basics of hacking industrial hardware and software systems.


Hardware Hacking – Essential OT Attack Knowledge – Episode 145

If you can touch it, you can hack it, usually. And having hacked it, you can often more easily find exploitable vulnerabilities. Marcel Rick-Cen of Foxgrid walks us through the basics of hacking industrial hardware and software systems.


“Security doesn’t stop at the network interface and also the PCB, the hardware level should be taken into consideration. And in general, I think OT security needs more curious minds that are looking under the hood.” – Marcel Rick-Cen

Hardware Hacking – Essential OT Attack Knowledge | Episode 145

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome listeners to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today. Andrew, how’s it going?

Andrew Ginter
I’m doing very well. Thank you, Nate. Our guest today is Marcel Rick-Cen. He is the founder and lead instructor at Fox Grid International. And our topic is hardware hacking, picking apart the hardware, finding the vulnerabilities, arguably essential attack knowledge. We need to understand how we’re going to be attacked if we’re going to design effective defenses. So that’s the topic for today.

Nathaniel Nelson
Then without further ado, here’s your conversation with Marcel.

Andrew Ginter
Hello, Marcel, and welcome to the podcast. Before we get started, can I ask you to introduce yourself, please? Tell our listeners a little bit about your background and about the good work that you’re doing at FoxGrid.

Marcel Rick-Cen
Yeah, thank you, Andrew. Hi, everyone. My name is Marcel Rick-Cen, and if I were to introduce myself in one sentence, I am an automation engineer turned OT security nerd. As to my background, I have a master’s in automation engineering.

I have global experience in commissioning automation systems, as well as programming, planning, industrial operations. Now, during my day job, I am an OT and IIoT security consultant and a product owner of our in-house OT remote access solution.

During my nighttime, I am a hacker, or if you want to put it more formally, I am an independent OT security researcher who looks at what makes and breaks OT devices. Coming from that, I also founded FoxGrid, where I want to teach industrial cybersecurity and safety to newcomers.

Andrew Ginter
Thank you for that. And our topic is hardware hacking. Can we start with an example? You’ve got a couple of reports out. Can you pick one? Can you tell us about a concrete example of what that is?

Marcel Rick-Cen
Yeah, let’s talk about hardware hacking that led to a CVE that I found last year, where I found hard-coded root credentials hidden deep in the device’s firmware memory.

Andrew Ginter
Okay, so can you go a little deeper? What was the device? How is it supposed to work? And how important is what you found?

Marcel Rick-Cen
So the device is a remote access gateway that machine builders usually built into the electric cabinet, so which connects the machine to the service provider.

In case there’s an unplanned interruption or any other operational bug or coming up, the service provider can directly connect over the cloud portal to this Edge device and start troubleshooting.

Andrew Ginter
So if I may, this is something that’s used in manufacturing. When you say the machine builder, you mean someone who’s building a robot, someone who’s building a stamping machine, someone who’s building, I don’t know, a conveyor. Is that the use case here?

Marcel Rick-Cen
This basically can be used in any operation, from your maybe water treatment plant to your manufacturing to your building automation. There are really no limits. This is really a network connection from the service engineer’s laptop directly into the heart of the device or into the heart of the operation.

Andrew Ginter
Okay, so it’s not just used for a robot, for a manufacturer of equipment. It might also be used by a service provider, by the engineer who’s responsible for occasionally coming in and servicing parts of a water treatment system. It’s used to access systems as well as devices, is what I’m hearing.

Marcel Rick-Cen
Yes, correct. So this acts as the gateway to the machine or operational network.

Andrew Ginter
Okay, and so you found the default credentials. Does that mean that any fool who wants to can connect to the cloud, connect into this thing? Or how would you use those default credentials?

Marcel Rick-Cen
Luckily, the attack vector is really narrow. These default credentials, they grant root access to the device, and you only can get root access when you are physically connected to the device. So luckily, the cloud attack surface or the cloud is not exposed to this vulnerability.

Nathaniel Nelson
Okay. Andrew, I don’t know if I just missed it, but what is the actual device that we’re talking about here?

Andrew Ginter
It is an IXON device, I-X-O-N. I forget the exact name of it, but physically it's a little device about six inches square and an inch thick. And in my understanding, it's a remote access device. You can connect into it from the cloud. Who uses this?

The sense I have is that it's used in manufacturing. If you're building a laser cutter or a stamping machine, you might build one of these into the thing so that when the customer calls you up and says, your machine isn't working, something is worn out.

You can remote into it, do the diagnostics and say, I think it's this part, replace this part, see if the problem is solved. Because moving parts wear out. Friction is the enemy of moving parts.

Andrew Ginter
But when I asked the gentleman, Rick, he said, yeah, manufacturers of physical equipment use it so they can maintain the equipment or diagnose the equipment remotely.

But it's remote access. A service provider, an engineer who's responsible for keeping the automation running at a dozen small water utilities in the geography, might well buy a half dozen of these and drop one of them into each water system to access the HMI and the automation and whatnot.

So it's remote access. The sense I have is that it's used most frequently by manufacturers of equipment that's used in manufacturing, but it could be used by service providers as well.

Nathaniel Nelson
And I think it's the remote access thing that has me a little bit confused here. We're talking about hard-coded credentials as a vulnerability, something I'm rather used to in the IT space, right? Like a public repository or a server that's been incorrectly configured will leak credentials to the web that hackers could then use to get in. And we're talking about a remote access device. And yet, I think he mentioned there that you can only actually exploit this vulnerability if you have local physical access to the machine. So can you help me explain that gap?

Andrew Ginter
We go into this in sort of more detail later in the interview, but let me let people know kind of what’s happening. There are basically two user interfaces to the device.

One is the remote access user interface, with users configured and so on. That's not where the vulnerability is. The other interface is available if you touch the device: if you connect through the USB port, or, I was a little weak on the details, if you connect electrically to pins sitting bare on the circuit board when you open the device up, you can get access to the operating system of the device.

And it's the operating system credentials that were leaked. Those credentials don't work on the remote user interface. They work locally, when you're able to physically touch the device and plug stuff into it.

The CVE was CVE-2024-577990. It was given a 5.9 or something like this, not a 10. This is not a remote code execution vulnerability. You can't do this remotely. You have to be local. It's a local escalation-of-privilege vulnerability.

Nathaniel Nelson
And that explanation makes a lot of sense to me, but how can you even leak credentials to somebody who's physically using a computer, right? Any credentials on my computer that get leaked to me don't matter, because I'm the user. So I suppose what I'm asking is: what attack scenarios are we worried about with this vulnerability?

Andrew Ginter
Actually, we didn't go into that, but as far as I know, the scenario is that you're there locally, touching the device. Now, normally you look at the device, it's got network ports, one of the ports is connected out to the world, you come in remotely, it does its thing.

There is no other supported user interface. But if you touch the device, you can get in there, you can tamper with the firmware, you could presumably create credentials that you could use remotely. You'd need a little bit of skill to do that. You could also brick the device, but then again, if you're standing there with a hammer, you could also brick the device.

This is why it was given a lower priority. Yes, technically it's a vulnerability, but it's not a really alarming one. What's interesting is: how did you find it? Because the technique he used to find it is what he teaches at FoxGrid, and you can use it to find more interesting stuff as well.

Andrew Ginter
Okay, so this is something that is a local vulnerability. It's not a remotely exploitable thing, which, yeah, is lower priority. But still.

What I wanted to ask you about is, we've never had someone on the show who picks things apart like this, or maybe we did once about three years ago, but it's been a long time.

You know, can you talk about the process? How did you find this? How does one pick these things apart? What does that involve?

Marcel Rick-Cen
Yes, absolutely. If I were to describe it to you in a pub or over coffee, this really is like a hardware and digital scavenger hunt, because you have to look at so many things. You also go down a rabbit hole, see it's the wrong way, turn around, and then keep digging.

And to find this, I just needed tools worth about 30 euros. A multimeter, screwdrivers, prying tools, a USB logic analyzer, and a USB UART interface were all I needed to find this vulnerability.

Andrew Ginter
So can you give me a little more detail? I've never used a logic analyzer. How technical is that? Do I need to be an engineer to use one? How did you go about this? Can you tell us a story? What did you start with? What did you do next? What does that mean? And a blind alley you went down, how did that work?

Marcel Rick-Cen
Yeah, I can walk you through all the six or seven steps that led to root access, from opening up the device until I was greeted with the root banner, okay? But before anyone gets started with hardware hacking and taking a device apart, here are four electrical safety rules that you should follow, because your own life is more important than your curiosity.

First, never ever open wall-plugged devices, because if they're directly plugged into the socket, that means hazardous voltage is inside the device, on the PCB, and you risk touching a live wire and getting an electric shock. Therefore, second, use only devices that have external power adapters, so that the voltage conversion happens outside the area you're working in. Third, avoid mixing power sources; this is important, for example, if you really get into firmware extraction. And fourth, of course, prevent short circuits, because a short will fry your PCB and then you have a very expensive brick.

But if you stick to these rules, you can open up the device and start with the hardware reconnaissance, where you just take a look at what chips are on there. Many industrial embedded devices run on a so-called system-on-chip, and somewhere close to the system-on-chip they have the flash memory firmware chip.

This is basically where the brains and all the information are stored: once the system is powered on, the system-on-chip pulls the firmware from the firmware memory chip. Once you've identified these, you take a look around the board: are there any debug interfaces? On this device I found a so-called UART debugging interface.

So with these, we can move to the next steps. First we do some electrical measurements, just to prevent our USB UART and USB logic analyzers from getting fried, because they are very sensitive to voltage. First things first, we confirm the common electrical ground on the debug interface we identified. Once we've identified the common ground, we know where to connect the ground wire of our USB logic analyzer. A UART interface usually has two more pins, RX and TX, which stand for receive and transmit.

Then we turn on the power and measure these pins against the electrical common ground. In most cases, we will find a voltage range between three volts and five volts, which means these devices are communicating at transistor-transistor logic levels. Once this is identified, we can move on with the logic analyzer: power off the device, connect the logic analyzer's ground to the board's ground, and then the RX and TX wires.

On this board, the RX and TX pins were already labeled, and we could have connected only to TX, but it's always good to connect to all pins so that we have a full picture of what's going on. Because, Andrew, on this board, this was easy mode. The pins are already labeled, but that's not always the case. Sometimes you just have three, four, five pins sticking out, and you don't know what they mean.

Then you do the same procedure: you measure for the electrical common ground, then start measuring the voltage levels, and this gives you an idea of whether there are logic signals going on.

Andrew Ginter
So thanks for that. I mean, that's giving us some insight into the mechanics of dealing with a device. I'm a software guy; I never have to worry about electrocuting myself if I bring up a compiler on my laptop. But let me ask: you've talked about two sorts of devices here that seem to ring a bell with me. There's the UART. So, quick question: is the UART USB, or is it RS-232?

Marcel Rick-Cen
This is an RS-232-to-USB connection. It basically converts these logic-level serial signals to USB so that your machine, your computer, can work with them.

Andrew Ginter
Nate, real quick here, I had a very short interaction with Rick there, asking about the difference between USB and RS-232. For anyone who didn't quite track that, RS-232 is a very old hardware signaling protocol. I remember using RS-232 back in the day, this was 30 years ago, to connect to 300-bits-per-second modems, okay? Ancient, ancient technology.

Why would there be such an ancient interface on this modern device, is roughly what I asked him. And he said there isn't. What there is, is a USB port. It turns out that the TX and RX he connected to were signaling USB, and when he looked at the signals, he discovered that the messages coming across the USB were RS-232 over USB.

So he looked around and said, well, I have a dongle that can take USB and give me RS-232, and he connected it, and there he could see the messages coming across. So that's what's going on there: it's a USB connector on the device, but the signaling is RS-232 over USB.

Andrew Ginter
The other one that struck me, and again, I'm a software guy, is the flash chip you mentioned. To me, if I get what's on the flash, I can start looking at instructions, I can start running my disassembler. Is it possible to sort of go under the nose of the device and just read the flash chip? Or do you have to go through the front door, through the CPU, in order to get access to the flash?

Marcel Rick-Cen
No, you don't. You can also perform a chip-off of the flash chip and then read out the contents with a programmer. That is also possible, but at the time of my research I didn't have such equipment here, so I went through the front door.

Andrew Ginter
Okay, that's fair. So please carry on. You were talking about the UART. I interrupted you. Finish the story here. How did you get in?

Marcel Rick-Cen
Okay, the logic analyzer revealed that logical data is indeed transmitted over the TX pin. This means we can connect our USB UART interface and open a serial console to it.

Andrew Ginter
That’s fair. I didn’t realize that USB was, I mean, I knew USB was serial. That’s what it means, universal serial bus.

Andrew Ginter
But I guess I never put two and two together that you could just, I don’t know, connect an RS-232 to it.

Marcel Rick-Cen
Well, you always need this interface device that you plug into your USB socket, and then on the other end you can connect to the target device. So once the USB UART interface was connected and I had started the serial console on my Linux machine, I powered up the device again and could see the boot log flashing in front of the screen.

You know, it was a basic Linux boot log, and at the very end, there was a prompt to log into the device. And this is really where it got interesting, and here my curiosity was really on fire, because I really wanted to get into this device. I started by looking at the boot log itself.

Here I learned that the firmware memory is partitioned into several partitions, and if you look at the common IoT hardware hacking courses, they always tell you to go for the rootfs file system, because that's where all the binaries of this Linux device are stored.

But there was another partition that was interesting to me: the so-called factory partition. Scrolling further up in the boot log, there was also a brief prompt to press the spacebar to enter the bootloader. But Andrew, the timing for this was so narrow that it was almost impossible to hit it right and enter the bootloader.

You can imagine, I was hammering the spacebar like a lunatic. And then maybe at the fourth or fifth time I succeeded in getting the timing right, and I was presented with the next option, to choose the operation. And here a very interesting option was presented to me: by pressing the number four, I would be able to enter the boot command-line interface.

And this was where I wanted to go, but with this narrow timing, I turned to ChatGPT, asking it: is there a way I can automate the key presses and send a spacebar press and a number-four press at rapid speed? The AI gave me a five-line shell script which uses an onboard tool of Kali Linux to send spacebars and number fours.
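
The actual five-line Kali shell script isn't reproduced in the episode. As a rough illustration of the same idea, here is a minimal Python sketch that hammers the two interrupt keys at a serial port. The device path `/dev/ttyUSB0`, the key choice, and the timing values are all assumptions for illustration, not details from the interview.

```python
import io
import time

def hammer_keys(port, keys=b" 4", interval=0.01, duration=2.0):
    """Repeatedly write the interrupt keys (spacebar, then '4') to an
    open serial-port-like object, fast enough to hit the bootloader's
    narrow autoboot-interrupt window. Returns the number of write rounds."""
    rounds = 0
    deadline = time.monotonic() + duration
    while time.monotonic() < deadline:
        port.write(keys)   # spacebar interrupts autoboot, '4' picks the menu entry
        port.flush()
        rounds += 1
        time.sleep(interval)
    return rounds

# Against real hardware this would look something like (pyserial assumed):
#   with serial.Serial("/dev/ttyUSB0", 115200, timeout=0) as port:
#       hammer_keys(port, duration=10.0)
# Here we demonstrate with an in-memory stand-in for the port:
fake_port = io.BytesIO()
rounds = hammer_keys(fake_port, duration=0.1)
sent = fake_port.getvalue()
```

The point is only the timing trick: blast the keys continuously from before power-on, so the brief prompt cannot be missed.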

And this immediately landed me in the boot shell. The boot shell of this device is based on the U-Boot bootloader, and all the hardware hackers out there who are familiar with U-Boot would immediately see that this was already a very stripped-down, restricted version of U-Boot. There was almost no way of manipulating the device, but they left in the so-called SPI command, which enabled me to read the content of the factory partition.

So that's what I did. I issued the command to read the factory partition, and it printed out the content of the factory partition in hexadecimal format. And here something really strange occurred to me: the data was not always represented in two hexadecimal digits, and hexadecimal byte data always needs to have two digits.

If not, the data gets misaligned and corrupted. So the problem I was facing was that some of the data was represented with single digits, missing the second digit, and the data was not usable for me. Then I used another script to realign the data and convert the text hex dump back into binary data.
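
Marcel's realignment script isn't shown in the episode; a minimal Python sketch of the idea might look like the following. It assumes the single-digit bytes simply lost a leading zero, as typically happens when bytes are printed with an unpadded hex format, and the sample dump line is invented for illustration.

```python
def realign_hexdump(tokens):
    """Re-pad hex byte tokens that lost a leading zero ('9' -> '09'),
    then convert the text hex dump back into raw bytes."""
    fixed = [t.zfill(2) for t in tokens]
    if any(len(t) != 2 for t in fixed):
        raise ValueError("token longer than one byte in dump")
    return bytes.fromhex("".join(fixed))

# Invented example dump line; note the single-digit 9 that would
# otherwise misalign every byte after it:
raw = realign_hexdump("53 4e 3a 9 41 42".split())
```

With the bytes realigned, the dump can be interpreted as binary data again.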

And then I was able to view the binary data and its ASCII interpretation. And here something really interesting stood out. There were basically three strings of data that at first made no real sense to me, but somehow felt familiar. And suddenly I realized this is the information that is also printed on the device's label on the side. Suddenly I could see that this factory partition stored the version number, the serial number, the device version, and the login password for the web management interface.
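
Spotting label-like text in a binary blob is essentially what the Unix `strings` tool does: scan for runs of printable ASCII. A small Python sketch of that scan is below; the sample blob is entirely made up, since the real factory-partition layout is not published.

```python
import string

# Printable ASCII byte values, excluding whitespace control characters
PRINTABLE = set(string.printable.encode()) - set(b"\t\n\r\x0b\x0c")

def ascii_strings(data, min_len=4):
    """Return runs of printable ASCII of at least min_len bytes,
    the way the Unix `strings` tool surfaces serial numbers and passwords."""
    found, run = [], bytearray()
    for b in data:
        if b in PRINTABLE:
            run.append(b)
        else:
            if len(run) >= min_len:
                found.append(run.decode())
            run = bytearray()
    if len(run) >= min_len:
        found.append(run.decode())
    return found

# Made-up blob standing in for the factory partition dump:
blob = b"\x00\x01SN-A1234567\x00\xffVx7kQ2pLm9\x00\x02"
labels = ascii_strings(blob)
```

Anything that survives this filter, like a 10-character mixed-case alphanumeric run, is a natural candidate for a serial number or a password.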

But there was another string that kept me guessing and puzzling for quite a while. This unknown string had the same characteristics as the web login password: it had 10 characters, capital and lowercase letters, and numbers. And I tell you, this had to be another password. So I restarted the device, and at the very end of the boot process, I was prompted for the login once more. I entered the username root, entered this string I had found inside the memory, and this gave me root access to the device.

Nathaniel Nelson
Andrew, it’s not that anything that Marcel said at any point there wasn’t clear, but we’ve now gone a while and he’s expressed a lot of technical steps. Can you just give me the big picture summary, what we’re talking about here, what he achieved and why it’s important?

Andrew Ginter
Absolutely. Real quick: he managed to connect to the boot shell with the spacebar script constantly blasting. He got in and discovered there was almost nothing he could do there, but he could look at this one tiny partition. And he managed to get the data out and decode it. And he looked at it and said, this looks like a serial number.

This looks like a password. And so he said, well, let's try it. And he reboots the device again. He doesn't do the spacebar thing this time. He lets it completely boot. And it comes up and says, okay, I'm ready, log in. You want to log in? And he said, yeah, let's log in as root. And it says, well, what's the root password? And he says, well, here's the string that I saw in the partition. He enters it, and he's in.

And now he’s in as root.

Andrew Ginter
Cool. So it's not like you looked at the file system and said, oh, here's files, look, there's a file with the name password. It wasn't nearly that obvious.

Marcel Rick-Cen
No, it was very well hidden, but I think also on purpose, because there's nothing written in this memory area before or after this partition. You really just find the version number, the serial number, the web management password, and, well, the root password. So somewhere in production, when the device, so to speak, gets the breath of life and the data for its label, at that moment the data must be flashed into this firmware memory chip.

Andrew Ginter
Okay, so this is a very small partition then. We're not talking tens of megabytes. We're talking tens of kilobytes.

Marcel Rick-Cen
Yeah, it was very small indeed.

Andrew Ginter
Okay, cool. So you found the vulnerability. And then, I assume, there's something called a responsible disclosure process. I assume you contacted the vendor, you contacted the government.

Marcel Rick-Cen
Right

Andrew Ginter
What was the next step there?

Marcel Rick-Cen
So the next step was to contact the security contact of this company and luckily I was already in contact with him on LinkedIn. So on a Sunday morning I sent him a screenshot of, hey Mr. XYZ, I got root access to your OT gateway.

And within two hours, he replied and said, okay, this is very concerning. Please send your findings and everything you have to our security email address and we will look into this first thing Monday morning.

Then I wrote a quick report with screenshots and a proof-of-concept video attached. Around Monday lunchtime they replied and said, yes, this root password is uniquely generated per device and inserted during production. And since everything is unique per device, they kind of hinted that they are accepting the risk, because the probability of this being exploited is rather low. They also said that if machine builders, integrators, and operators stick to their security requirements, they do not really see a risk of this being exploited.

Andrew Ginter
Okay, so the vendor said it's a low priority because people are expected to have physical security. No fool off the street can come in, take one of these devices, walk away with it, pick it apart, and bring it back. That's not a realistic threat. Do you agree with that?

Marcel Rick-Cen
Yes, totally, I agree with that. From all my experience on the shop floor and in the field, you cannot just walk up to an electrical cabinet, take out a device, screw it open, extract the root credentials, and then put it back in with a backdoor you implanted, right? That would hopefully catch some attention.

Andrew Ginter
Okay. And can you finish the thought? I mean, you wound up with a CVE for this. You've interacted with the vendor, then what? How do you finish the process?

Marcel Rick-Cen
Then I contacted MITRE to file a CVE, reported what I found and its implications, and after two months the CVE was assigned.

Andrew Ginter
And at that point, you’re able to disclose publicly. Is that right?

Marcel Rick-Cen
Yes.

All that being said, there is a tiny, tiny risk that you may receive a backdoored device, but then someone really must be targeting your operations. They need to know that you are operating such a device. And if you're expecting a new shipment, they could intercept the shipment, open up the device, extract the root credentials, implant the backdoor, pack it back up, and ship it on to your operations. So if you're running critical infrastructure and operations, you should definitely opt for tamper detection and protection. You know, some devices have this little sticker on them: warranty void if removed.

Andrew Ginter
So, fascinating stuff, at least to me. I'd always wondered how some of this hardware hacking was done. But as far as I know, you don't get paid to do the hardware hacking unless, I don't know, there's a bounty or something. How does this relate to making a living for you?

Marcel Rick-Cen
Yeah, no, this is not my day job, and I also don't get paid to find these vulnerabilities. Let's just say this is a very expensive hobby. I've been in the domain of automation systems for half of my life, and after my work, I'm still interested, especially in what makes and breaks these devices.

And that's also how my trainings were born. I took all the experience I gained from, well, breaking these devices and turned it into training.

Andrew Ginter
Okay, and this is what you do at FoxGrid. Can you go a little deeper? I mean, if I sign up for one of these courses, what are you going to lead me through?

Marcel Rick-Cen
If we stay with hardware hacking, you could sign up for Industrial Embedded Systems Hardware Penetration Testing, where you also go through these six or seven steps, from investigating the PCB to hopefully getting root access. But this course has a unique approach, because if you look at IoT hardware hacking courses, you usually hack some IoT camera or home router. It's almost impossible to practice on an industrial device, because there is an entry-barrier problem.

First of all, this hardware is really expensive. You usually pay $500 or more, and it’s risky because you can brick it and then you wasted $500. To get around this, I built a custom firmware for a cheap ESP8266 microcontroller that mimics the behavior of an industrial device and introduces the student to the same challenges I faced.

Andrew Ginter
Okay, so that’s the hardware hacking. Have you got other courses?

Marcel Rick-Cen
Yes, I have my flagship course, Practical Offensive Industrial Security Essentials, which gives students from diverse backgrounds, whether they're automation engineers, IT professionals, or total newcomers, a holistic introduction to industrial cybersecurity.

Of course, there are some gaps that need to be filled, but anyone with enough curiosity will get through the course and come away with a holistic understanding of industrial cybersecurity.

Andrew Ginter
So if I can take you on a quick side trip here: throughout this interview, I have been surprised by you personally. I mean, I always had a stereotype in mind for people who find vulnerabilities, who hack stuff, hardware, software, whatever. The stereotype I had in mind was somebody with a big ego, somebody saying to themselves, I'm smarter than you are. I can find these problems. You, the vendor, have messed it up.

I always thought you needed that kind of attitude to be able to go in and tackle the defenses the vendor incorporated into the product. But what's coming across from you is something different. Can you talk about what you need in your brain, in your personality, to be successful here?

Marcel Rick-Cen
Well, to make it short, you just need curiosity and persistence. I think people with a big ego may be more successful in finding more vulnerabilities, but like I said earlier, this is more of an expensive hobby for me, so I don't really have the pressure to find vulnerability after vulnerability. For me, it's more, well, being on this scavenger hunt: finding a way to make the device operate in a way it was not intended to, and then really finding a way in. And to be honest, I also have a whole box of scrap OT devices where I did not find a vulnerability.

So this is where we come back to the expensive hobby. I think if someone understands a bit of the domain these devices are operated in, and has enough curiosity and persistence to stick with it, they can definitely find some vulnerabilities. And if not, well, they can at least learn a lot about the devices, how they operate, and how they interact with other devices in the OT domain.

Nathaniel Nelson
So Andrew, we've been talking hardware vulnerabilities.

It seems relatively serious, but bring it to a practical level for me. If I'm running an industrial site and I discover a hard-coded credential issue in one of my gateways, am I running around with red alarms ringing, trying to patch immediately? Or am I more focused on the systems and data flows around it, the ones that allow legacy technologies to occasionally have vulnerabilities like this? How would you interpret it in the grand scheme of things?

Andrew Ginter
Well, in the grand scheme of things, there are a couple of different questions. Let's pick it apart. What we've been talking about primarily is how to find these vulnerabilities. Once you've found a vulnerability, you've got to ask the question: A, can I patch the system? Because if it's a vulnerability in your safety system, well, I'm sorry, the testing cost of the new version is going to be prohibitive.

It's just really hard to patch some things. Other things are easier to patch. So: can I patch it? The second question is: do I urgently need to patch it? And that's sort of a different skill set. It's one skill set to find the vulnerability. It's a different skill set, an imagination thing, to ask: how would an attacker use this against me?

And we talked about two scenarios for this vulnerability. One is physically walking up and stealing the device, taking it apart, and putting it back, which seems not a very credible threat, because you're going to be discovered. The second scenario was: someone with much more resources discovers that you've just ordered 50 of these, intercepts the shipment, bribes the driver to take a long coffee break, breaks into five or 10 or 15 of these devices, inserts malware, and packages them all up again. Again, is that a credible threat? It's a credible threat for some people: very high-value targets. Is it a credible threat for a small bakery? Probably not. So: first step is find it. Second step is figure out, can I even patch it? Third step is, how would a bad guy exploit this?

Are there credible threats? Is there a third scenario that we haven't imagined? So it's a question of imagination, and of studying what people have done in the past. And then part of the decision is: how easy is this to exploit? We're talking about devices generally, but we're also talking about cloud-connected devices, because a lot of the devices that Marcel focuses on, that he teaches you about, are industrial internet devices. They're connected out to the cloud.

So that's more internet-connected, more internet-exposed. But really, what he looked at here was an OT cloud remote access device. It's arguably the most exposed piece of technology in the OT network; it's the technology that gives internet-based users access to the OT system. So normally you would set these things on automatic update. Why? What if they blue-screen? Well, nobody cares if they blue-screen; it's merely inconvenient. But if the bad guys get in, they can work whatever sabotage they want on your OT network. So normally people pay a lot of attention to vulnerabilities in their OT remote access.

This one, we just couldn't imagine a credible attack scenario for mere mortals, so it might not be that big a worry. But generally speaking, this is the kind of device you want people like Marcel picking apart most thoroughly, because this is the device that has to be most thoroughly protected.

Andrew Ginter
Well, thank you so much. I mean, I learned something this episode. Before we let you go, can I ask you to sum up for our listeners? What should we take away from this? What's important to know about this stuff, and how do we use it going forward?

Marcel Rick-Cen
Okay, looking at the vulnerability I found, this was a prime example of how just one part of security can be completely overlooked. When you look at the device from a network perspective, you see a very fortified device.

But security doesn't stop at the network interface; the PCB, the hardware level, should also be taken into consideration. And in general, I think OT security needs more curious minds looking under the hood. For example, if you're an engineer, you already understand the industrial processes.

And here I can only recommend that you level up your cybersecurity skills. This is exactly what I'm doing with FoxGrid. The platform exists to teach industrial security in an affordable and practical way. The flagship course, Practical Offensive Industrial Security Essentials, comes with an open-source lab where you not only learn about penetration testing tools, but also how to use them on simulated industrial controllers. That way, you can also understand how your real devices would behave under such conditions. So for next steps, if you're curious, check out FoxGrid for resources and connect with me on LinkedIn. And of course, keep pushing OT security forward.

Nathaniel Nelson
So that seems to just about do it for your interview with Marcel Rick-Cen, Andrew. Do you have any final thoughts that you'd like to share before we leave today?

Andrew Ginter
I guess so. I mean, I had always been curious how people do this stuff. What surprised me about the interview was that I actually followed what he did. I kind of understood it. I thought it'd be harder than that. And I suppose it could be, if you don't have a small amount of information to look at, if you've got to look at the entire firmware and start, I don't know, disassembling megabytes of firmware looking for vulnerabilities.

That would strike me as harder. This seemed really straightforward. I don’t know if I don’t know if I’m curious enough about how this stuff works that I would do the work myself, but I sure wouldn’t mind another two or three guests like this to to walk us through how they did the hard work so that we can satisfy our curiosity.


Andrew Ginter
And beyond my curiosity, I agree with Marcel, we need people tracking down vulnerabilities. That’s the good way to persuade vendors to invest more in security, to make these devices more secure to begin with: point out afterwards that they’ve got problems. And the next time around, hopefully they will be more careful. The bad way is to wait for the bad guys to find the vulnerabilities and exploit them and take advantage of us. So we need more of the good guys. We need more technical, curious people out there fighting the fight. So, thank you to Marcel.

Nathaniel Nelson
Well, thanks to Marcel for satisfying our curiosity. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.

Stay up to date

Subscribe to our blog and receive insights straight to your inbox

The post Hardware Hacking – Essential OT Attack Knowledge – Episode 145 appeared first on Waterfall Security Solutions.

]]>
Managing Risk with Digital Twins – What Do We Do Next? – Episode 144 https://waterfall-security.com/ot-insights-center/ot-cybersecurity-insights-center/managing-risk-with-digital-twins-what-do-we-do-next-episode-144/ Mon, 20 Oct 2025 15:17:50 +0000 https://waterfall-security.com/?p=36741 How can we USE this information to make useful decisions about next steps to address cyber risk? Vivek Ponnada of Frenos joins us to explore a new kind of OT / industrial digital twin - grab all that data and work it to draw useful conclusions.

The post Managing Risk with Digital Twins – What Do We Do Next? – Episode 144 appeared first on Waterfall Security Solutions.

]]>

Managing Risk with Digital Twins – What Do We Do Next? – Episode 144

Asset inventory, networks and router / firewall configurations, device criticality - a lot of information. How can we USE this information to make useful decisions about next steps to address cyber risk? Vivek Ponnada of Frenos joins us to explore a new kind of OT / industrial digital twin - grab all that data and work it to draw useful conclusions.

For more episodes, follow us on:

Share this podcast:

“Lots of people have different data sets. They have done some investment in OT security, but they’re all struggling to identify what’s the logical next step in their journey.” – Vivek Ponnada

Managing Risk with Digital Twins – What Do We Do Next? | Episode 144

Please note: This transcript was auto-generated and then edited by a person. In the case of any inconsistencies, please refer to the recording as the source.

Nathaniel Nelson
Welcome listeners to the Industrial Security Podcast. My name is Nate Nelson. I’m here with Andrew Ginter, the Vice President of Industrial Security at Waterfall Security Solutions, who’s going to introduce the subject and guest of our show today.

Andrew, how’s it going?

Andrew Ginter
I’m very well, thank you, Nate. Our guest today is Vivek Ponnada. You might remember him from an episode a little while ago. He was the co-lead on the Top 20 Secure PLC Coding Practices document that came out a year ago, two years ago.

Today, he’s the Senior Vice President of Growth and Strategy at Frenos. And our topic is digital twins for managing risk. And it sounds like a bunch of marketing buzzwords, you know, digital twins, managing risk, but they’ve got some real technology behind this. So I’m looking forward to this.

Nathaniel Nelson
Then without further ado, here’s you with Vivek.

Andrew Ginter
Hello, Vivek, and welcome to the show. Before we get started, can I ask you to say a few words about yourself for our listeners and about the good work that you’re doing at Frenos?

Vivek Ponnada
Sure, thanks Andrew. Hey everyone, my name is Vivek Ponnada. I am the SVP of Growth and Strategy at Frenos. I’ve been in the OT security space for quite some time. Back in the day, I was a gas turbine controls engineer for GE, then I became a controls and cybersecurity solutions upgrade sales manager for them.

I initially covered power and utilities and then of course added oil and gas. I’m based in Houston, so that was a natural thing. Before joining Frenos I worked at Nozomi Networks as the regional sales director for three years. So I’ve been in the OT security space for quite some time, and I am happy to be on this podcast.

And at Frenos, we’re doing something cool. We’re doing an attack path analysis and risk assessment at scale, bringing autonomous risk assessments to a space that’s been lacking this kind of approach. So we’re looking forward to our conversation discussing more about that.

Andrew Ginter
Thanks for that. And our topic today is risk, which a lot of people find boring. I mean, people new to the field tend to want to focus on attacks. Attacks are interesting. Attacks are technical. It’s not until they have failed to secure funding as a manager of, you know, their security team for the last 10 years that they start being interested in risk, which is the language and the decision-making of business.

We’re going to talk about risk. We’re going to talk about digital twins, which is a real buzzword nowadays. You know, this is our topic.

And you’ve mentioned, you know, risk assessments, you’ve mentioned attack path analysis. I look forward to looking into all of this. You know, to me, risk is fascinating. It’s how we make progress. It’s how we shake the money loose.

But before we dig into it, can we start at the beginning? What is the problem, the risk problem, that we’re trying to address here?

Vivek Ponnada
Yeah, great question, Andrew. The past 10-plus years in OT security have been, let’s find out what we have, right? So lots of people start figuring out that they need asset inventory solutions. So the likes of Dragos, Nozomi, Claroty have been at the forefront of that kind of an approach. So network security monitoring leading to passive asset discovery and vulnerability identification.

So now, 10-plus years into this, people have a lot of datasets. They have several sites, especially the ones that they would consider important to their production. They’ve installed sensors. They have lots of information.

Now they’re asking, what next, right? The real use case is risk identification and risk mitigation, as you mentioned, but there’s a real struggle out there: with different data sets, people are not able to figure out what the actual risk is for them to address next. So that’s the problem we’re trying to solve.

We are trying to aggregate information, provide contextual analysis of what’s the riskiest path to a crown jewel, or what might be the logical way to isolate and segment, because not every risk can be mitigated by just patching a vulnerability, for whatever reason. That’s the main problem.

The conclusion is that lots of people have different data sets. They have done some investment in OT security, but they’re all struggling to identify what to do with that information, what’s the logical next step in their journey.

Andrew Ginter
So that makes sense. I mean, it’s one thing to sketch what the NIST Cybersecurity Framework says a complete security program should look like.

It’s another thing to say, I’ve only got so much budget this year and a comparable amount, hopefully, next year. What do I do this year? What do I do next year? What’s most important to do first? That’s a really important question.

How does a person figure that out? What’s the decision path there?

Vivek Ponnada
Yeah, that’s the real question. Lots of people in the past used to say, we are isolated, or we are segmented, or we have a DMZ between IT and OT. A lot of these assumptions have not been validated.

In other cases, where they have different data sets, it’s not very clear what the next problem that they could solve is, right? So everybody, like you said, has limited budget or resources.

So the honest question is, hey, where should we focus next? It’s not very clear. People have done linear projects, right? They’ll pick a firewall project or a segmentation project or a vulnerability management program.

And all these are good, but overall they’re not fixing the immediate problem, or not solving the immediate problem first, right? So the commonly requested feature of many of these tools, from Dragos, Nozomi or other vendors, has been, hey, can you please tell me what my riskiest asset is, or what my riskiest path is?

And they have not been able to do it, because that contextual summarization is not in their current portfolio, right? So let’s say you have an asset at Purdue model level 2, for example, that is talking to another asset at level 3, and then there’s a DMZ above that with some kind of firewall rules isolating it. If someone has real-world knowledge of this network (and that’s what we are talking about, right, a digital twin that’s replicating the network), you analyze whether that firewall rule and that path make it possible to get to level 2. Or maybe they have other compensating controls in the path, allowing them to say, yep, my level 2 is secure, this network, this location is not reachable easily, or it takes a lot of complicated daisy-chaining of attacks to get to. Then that would be an identification of what the risk is, and whether you need to address something.

The common consensus has been, of course, that you can’t really assess these in real time in the production environment, right? So you need to build something that’s a replica of that network.

And then you analyze all these scenarios to see whether that asset that you deem important, or that network that you deem critical for your environment, is reachable or not reachable from the outside, or from any other attack vector that you choose, right? The assumed breach could be your corporate enterprise network, it could be a wireless network, or it could be anything else that you deem an attack vector, and you assess in this digital replica or digital twin whether that asset can be reached.

So that’s what, in general, most people have been asking for, and that’s what has been missing in the currently available set of tools.

Andrew Ginter
So Nate, Vivek’s answer there was a little abstract. Let me let me be a little more concrete. He’s saying, look, a lot of people in the last 10 years have deployed Dragos and Nozomi and Industrial Defender and you name it, asset inventory tools.

And in a large organization, these tools come back and say, you have 10,000, you might have 50,000 industrial control system assets. Okay.

And many of them are poorly patched because they’re deep down in areas where it’s really hard to patch them. Patching them is dangerous. You have to test these patches, blah, blah, blah.

So you’ve got 107,000 vulnerabilities in these 50-odd thousand assets. Okay. And they’re arranged into 800, 2,000, whatever, subnetworks.

And the networks are all interconnected. Right. So now you’re scratching your head, and the question is, what do I do next with my security?

And one of the things the asset inventory folks have done is they’ve allowed you to go through these assets, understand what they are, and assign a criticality to them. These are the safety instrumented systems. They’re really important.

Nothing touches them. These are the protective relays. They prevent damage to equipment and so on. And so what he’s saying is you can’t just look at the list of assets and vulnerabilities and figure out what to do next.

You need a model. And so this is what he’s talking about, a digital twin that is looking at attack paths, looking at which assets are really important, and telling you which really important assets have really short and easy attack paths.

That’s probably what you need to focus on next.

Nathaniel Nelson
Yeah, and I fear this is one of those things where everybody else in the world knows something that I don’t, but like, what is a digital twin?

Andrew Ginter
You know… That word is a marketing buzzword, and it means whatever the marketing team wants it to mean. The first time I heard the word was in a presentation a few years ago at S4.

The sales guy from GE got up and did a sales pitch, in my opinion a very smooth, what’s the right word, cleverly scripted sales pitch. But he basically said a digital twin is a computer model of a physical system.

And GE at the time had technology, they probably still have it, that will, let’s say you’ve got a chemical process, it’s got a physical emulator built in. It can simulate the chemistry.

It’s got emulators built in for all of the GE PLCs in the solution, for all of the GE iHistorian and other components. It’s got a complete simulation. And the measurements coming out of the physical world are correlated against the measurements that should be coming out based on the simulation.

Whenever there’s a material discrepancy, they would say, oh, that’s potentially a cyber attack. Investigate this. Something has gone really weird here and would take all sorts of automatic action to correct it.

It was amazing in principle, yet I’ve heard dozens of other vendors use the term digital twin to mean other things. The best definition that I’ve heard is, look, your cell phone, Nate, your cell phone is a digital twin of you.

What does that mean?

It’s not, probably not, a biological simulation of your body, though some apps kind of do that. They’re measuring heartbeat and whatnot.

It is an enormous amount of different kinds of information about you. Somebody who steals your cell phone steals all that information, knows an enormous amount about you.

And so I like that definition, because it’s much broader than the very specific original definition that I heard at S4 from GE. A digital twin can be anything that holds a lot of detailed information.

And so, I can’t remember if it’s on the recording or not, but I remember asking Vivek, is your digital twin that kind of physical simulation? And he’s going, no, no, no. It’s a network simulation. It’s a different kind of digital twin than the physical simulation that some people talk about. And they use it for different purposes. So, again, it’s a marketing buzzword, but it means, generally speaking, a system that collects, analyzes, and does good things with a lot of information about another thing, like my cell phone does for me.

Andrew Ginter
So that makes sense in the abstract. I mean, you folks do this. You’re building this technology. You’ve got this digital twin concept. Can you talk about what you folks have? Maybe give us an example of deciding what to patch next using this digital twin, and give us some insight into what data you have, what data you need, and how you use that to make these decisions.

Vivek Ponnada
Yeah, great question, Andrew. Patching has been a significantly challenging problem to solve in OT, as you’re well aware, right? In IT, if it’s vulnerable, you apply a patch; there’s a limited downtime impact, but you run with it.

In OT, of course, it’s not practical, because a patch might not be available, an outage window might not be available, and of course there are production downtime issues to deal with. So patching has been really hard.

With what we’re doing, though, it’s actually highlighting what to patch and what might be skipped for the moment. Right, so we’re doing this attack path analysis and we come up with a mitigation prioritization score, and we say that, hey, this particular network is easy to get to, the complexity of the attack is pretty low.

In just one or two hops from the enterprise network, I’m able to get to this asset, and this is vulnerable. And we do provide other options besides patching, right? We’ll say maybe segmentation or adjusting the firewall rule might be the way to go in some cases. But if you do decide that patching is relevant, and our recommendation provides that, you’ll see that if something is not on that attack path, right, it might be another asset in the vicinity, but the complexity of the attack to that asset is much, much higher, then you could deprioritize patching that asset, even if those two assets we’re talking about have the exact same vulnerability, right?

So if something is on the attack path and it’s easier to execute an attack on that asset, maybe you want to prioritize that more than another asset that has exactly the same vulnerability but is not on a critical attack path, if you will.

And so getting to it is harder. So you would want to deprioritize that compared to the other ones.
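The prioritization Vivek describes can be sketched in a few lines. This is a minimal illustration only, not the Frenos algorithm; every field name, score formula, and weight below is an invented assumption, and the three assets deliberately share the same CVSS score to show that path exposure, not the vulnerability itself, drives the ranking:

```python
# Sketch: two assets with the SAME vulnerability can deserve very different
# patch priority depending on attack-path exposure. All fields are invented.

def mitigation_priority(asset):
    """Higher score = patch sooner. An asset on a short, low-complexity
    attack path from the enterprise network outranks an identical asset
    that is hard to reach, or not on any known attack path at all."""
    if not asset["on_attack_path"]:
        return 0.0
    # Fewer hops and lower attack complexity mean an easier attack.
    return asset["cvss"] / (asset["hops"] * asset["complexity"])

assets = [
    {"name": "HMI-1",  "cvss": 9.8, "on_attack_path": True,  "hops": 1, "complexity": 1.0},
    {"name": "HMI-2",  "cvss": 9.8, "on_attack_path": True,  "hops": 4, "complexity": 3.0},
    {"name": "ENG-WS", "cvss": 9.8, "on_attack_path": False, "hops": 2, "complexity": 2.0},
]

# Patch order: HMI-1 first (one easy hop), then HMI-2, then ENG-WS.
for a in sorted(assets, key=mitigation_priority, reverse=True):
    print(a["name"], round(mitigation_priority(a), 2))
```

The same idea generalizes to any scoring function that discounts severity by the difficulty of actually reaching the asset.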

Andrew Ginter
All right, so you used the word reachable. Is that loosely the same as, or connected to, the concept of pivoting, where an adversary takes over an asset, a computer, a PLC, something, and uses the compromised CPU, basically, to attack other things, pivots through a compromised device to attack other things, and then repeats, using the newly compromised things to attack other things?

Eventually, you find, let’s say, computers that have permission to go through a firewall into a deeper network, and now you can use that compromised computer to reach through the firewall. Is this what reachable means? Reachable by a pivoting path?

Vivek Ponnada
It certainly could be, right? So pivoting would be jumping from one host or one asset to another, or from one network to another.

The concept of living off the land means that you have ownership of an asset and you’re using native functionality, and eventually you get to another asset from there because you have a direct connection, or through a firewall, for example. And so reachable essentially means that you’re able to get to that asset.

Now, how do you get to that asset or network? Is it because a firewall rule has an any-any entry, for example, that allowed you to just get there? Or in another case, you were able to use RDP or some kind of insecure remote access to get there. Or in other cases, maybe a USB, right? Somebody plugged in a USB and now you have access to that asset. So a lot of these scenarios are very much dependent on what the end user is trying to evaluate the risk for.

So if they are, for example, heavily segmented, and their primary mitigations are all segmentation and firewall based, then they would want to know whether those firewall rules are working according to plan, or whether the last time there was an exception it poked a hole in their firewall, and now they are allowing access from level 4 to their critical networks, not realizing that their firewall has a hole.

In other cases, they might have assumed that RDP was disabled on this level 3 device, this workstation, but it is actually enabled. And so now suddenly someone from outside of their enterprise network is able to get to that level 3, and once you’re there, they could do a lot more, right, further exploration. So reachable essentially means that you’re able to get to a network that’s of interest from another area that’s your starting point.
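Reachability in this sense is just graph search over the connections that firewall rules (or direct links) permit. Here is a toy sketch with a hypothetical four-node network; the node names and the leftover "hole" rule are invented for illustration:

```python
# Sketch: "reachable" as breadth-first search over firewall-allowed edges.
# Each edge means some rule or direct link permits traffic in that direction.
from collections import deque

# Hypothetical network: enterprise -> DMZ -> level 3 -> level 2.
# The level3 -> level2 edge represents a forgotten exception in a firewall.
allowed = {
    "enterprise": ["dmz"],
    "dmz": ["level3_historian"],
    "level3_historian": ["level2_plc"],  # the hole nobody remembers
    "level2_plc": [],
}

def reachable(src, dst):
    """Can traffic starting at src ever arrive at dst?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in allowed.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("enterprise", "level2_plc"))  # True: the hole exists
```

Real tools ingest hundreds of firewall and router configs to build this graph, but the check against policy ("level 2 must not be reachable from enterprise") reduces to exactly this kind of query.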

Andrew Ginter
So, Nate, I remember a couple of episodes, a year and a half, two years ago, Robin Berthier was on from Network Perception. He was doing, it sounded like, a bunch of similar stuff.

He wasn’t, I don’t think they were taking the output of Dragos’s tools, but I could be wrong. What I remember was that he was taking firewall configurations and putting together a sort of reachability map, what’s reachable from where, for large, complex OT networks, and would issue alerts when reality deviated from policy. You could say policy is this: safety instrumented systems never talk to the internet.

That’s a reasonable policy. And he would ingest hundreds, sometimes thousands, of firewall configurations and router configurations and come back with an alert saying, these three devices over here are safety systems and they can reach the internet. So that’s what he was doing. What seems to me to be different here, but I could be wrong, is that we’re talking about pivoting paths, not only network paths.

Sort of network configuration, not just reachability, but the difficulty of pivoting as well.

Nathaniel Nelson
Yeah, and is the reason why pivoting becomes relevant in a discussion about PLC security that these devices connect your, let’s say, lesser IT assets to more important safety-critical systems? So PLCs sort of seem like a natural point through which an attacker would move.

Andrew Ginter
Sort of. PLCs tend to be the targets of pivoting attacks in OT, sophisticated attacks, because they’re the ones that control the physical world. You want to reach the PLC to cause it to misoperate the physical process.

Pivoting through PLCs is possible in theory, and it’s a little bit more possible in practice when the PLC is based on a popular operating system like a stripped-down Windows or a stripped-down Linux.

But a lot of PLCs are just weird. Their operating system, their code, does one thing. It does the PLC thing. In theory, you could break into the PLC and give it new code.

But if I want to pivot through a PLC to a Windows device, how am I going to get into the Windows device? I might want to get into it with a remote desktop. There is no remote desktop client on a PLC. It doesn’t exist.

And so to pivot through a PLC, the attacker, depending on the version of the PLC, might have to do an enormous amount more work.

And so if the only way into, let’s say, a safety system target, a really critical system, is to pivot through three different PLCs, pivoting through firewalls each time, that’s going to be really hard to do.

Whereas, I remember a presentation from Dale Peterson at S4 last year, or the year before, where he was talking about network segmentation. He says, network segmentation, firewalls, are almost always the second thing that industrial sites do to launch their security program.

And I’m going, excuse me, the second thing? What’s the first thing? I thought firewalls were the first thing everybody does. “Andrew,” he says, “the first thing is to take the passwordless HMI off of the internet. That’s the first thing you have to do.” And I’m going, yep, you’re right.

And a tool like this will be able to look at your network and say, if the bad guys want to get into this HMI: it’s on the internet, it has no password.

That’s your number one. It can tell you that. Not just policy, but it says, and the safety systems back there, you’ve got to pivot through three PLCs.

That’s going to be really hard to do. You might have some other security you might want to deploy in between. So this is the concept of pivoting that I found very attractive in this tool: measuring the difficulty of an attacker from the internet reaching a target inside of a defensive posture.
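One common way to model this difficulty is a weighted graph, where each pivot carries a cost and a PLC hop costs far more than a Windows hop; the easiest attack is then a shortest-path query. This is a toy sketch, not any vendor's model; every node name and weight is an illustrative guess:

```python
# Sketch: measure attack difficulty as a weighted shortest path (Dijkstra).
# Pivoting through a PLC is far harder than through an exposed Windows box,
# so PLC edges carry a much larger weight. All numbers are invented.
import heapq

edges = {
    "internet":        [("hmi_no_password", 1), ("firewall_dmz", 5)],
    "hmi_no_password": [],
    "firewall_dmz":    [("plc_a", 50)],
    "plc_a":           [("plc_b", 50)],
    "plc_b":           [("safety_system", 50)],
    "safety_system":   [],
}

def easiest_attack(src, dst):
    """Total 'difficulty' of the easiest pivoting path from src to dst."""
    dist = {src: 0}
    pq = [(0, src)]
    while pq:
        d, node = heapq.heappop(pq)
        if node == dst:
            return d
        if d > dist.get(node, float("inf")):
            continue  # stale queue entry
        for nxt, w in edges.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, float("inf")):
                dist[nxt] = nd
                heapq.heappush(pq, (nd, nxt))
    return float("inf")  # unreachable

print(easiest_attack("internet", "hmi_no_password"))  # 1   - fix this first
print(easiest_attack("internet", "safety_system"))    # 155 - three PLC pivots
```

The passwordless HMI scores as trivially easy, while the safety system behind three PLC pivots scores as very hard, exactly the prioritization signal Dale Peterson's "first thing" anecdote points at.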

Andrew Ginter
That’s interesting. We’ve had guests on the show talking about attack paths. These are tools that build a model of the system and count all of the ways that an attacker can get from where they are to a consequence that we want to avoid.

And it’s not just counting them, but evaluating, let’s call it, the difficulty. I mean, the classic approximation for risk is likelihood times frequency.

Sorry, likelihood times consequence, or impact, if you wish. And likelihood is a really murky, difficult concept for high-consequence attacks. And so what a lot of people do is they substitute likelihood with difficulty. And they try to evaluate how difficult really nasty attacks with really nasty consequences are.

It sounds vaguely like you’re doing this. You’re talking about attack paths. You’re talking about difficulty. Is this where you’re going? The one thing you haven’t mentioned is consequence.

Vivek Ponnada
Yeah, that’s a good point, because we are doing something unique, in that we are allowing the user to evaluate, in this digital replica, how an adversary might be not only pivoting but exploiting different components to get to their crown jewels, right? The way we’re doing that is showcasing different views of TTPs that are well documented, with all the IOCs and the threat intel that we aggregated. So if it’s a power customer, for example, they could use a Volt Typhoon view to see how a Volt Typhoon actor might be able to leverage initial access, to credential exploitation, to other kinds of exploits within the environment. And there might be a manufacturing customer with a whole different set of interesting TTPs that they want to evaluate. But the idea behind this is, you figure out what the generally documented TTPs are for a certain type of adversary, and how they might go from your starting point, which is initial access or the starting point of your threat analysis, all the way to the crown jewels. And in doing so, you’re making assumptions, right? Because we’re not in this production environment, we’re not actually exploiting something, but you’re evaluating the different scenarios where you say, OK, I have this Windows workstation and I’m going to use RDP, right? I’m going to exploit something there.

What if RDP was disabled? So these days people have some datasets they can export from an EDR tool that provide open ports and services, right? Then we know up front, for example, that some of these services, like SMB or whatever you think is typically exploited by the TTP or the threat actor of choice or interest, is exploited, and if you disable that, you now know that at least that path is closed, right?

In other cases, the attack path might show three or four different types of exploits needed to get to that crown jewel or the crown jewel network.

Then that layer of difficulty, or the complexity of the daisy-chaining, is much higher compared to another network or another attack path that is trivial, right? So if one path uses native credentials and it only takes one hop to get to that asset or network, then, for example, the previous one was more complex to even get to, right?

But at the end of the day, all this conversation so far is about how difficult it is to get to that crown jewel network or crown jewel asset, right? We’re not talking about what the attacker might do once they get there, because that part is the impact or the consequence. Here we actually have an automatic assessment based on the types of PLCs or types of controllers or the types of assets we see in general, based on our threat intel and our initial assessment.

But an end user that’s running this tool, or a consultant that’s running this tool, can adjust that, right? So there’s a manual way for them to say, hey, this network is of a higher priority for me compared to this other network.

Show me what the impact of getting to this network is, because this is higher priority for me. So, to be fair, we’re not doing quantification yet. In this tool, we’re limiting ourselves at the moment to how easy or difficult it is to get to a particular crown jewel network, and what the adversary might be able to do in that kind of a network, right? So it’s one of those interesting aspects of the analysis: you’re not doing the analysis of what an attacker would do once they get to a crown jewel, because that’s a whole different ballgame. You’re trying to break the kill chain, break the path, way before that. So you’re assessing or analyzing what all the attack paths are, and how easy or difficult it is to get to the crown jewels that you’re trying to protect.
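Substituting difficulty for likelihood, as discussed above, leads to a simple ranking: a user-assigned impact for each crown jewel, discounted by how many exploits an attacker must chain to reach it. The sketch below is a toy with invented names and numbers, not the Frenos scoring; note how the high-impact but hard-to-reach safety system ranks below a trivially reachable network of equal impact:

```python
# Sketch: rank crown-jewel networks by impact / attack difficulty.
# Difficulty stands in for likelihood; here it is simply the number of
# exploits that must be chained on the easiest known path. All invented.

paths = [
    # (crown jewel network, exploits chained on easiest path, impact 1-5)
    ("safety_instrumented_system", 4, 5),
    ("turbine_control_network",    1, 5),
    ("plant_historian",            1, 2),
]

def rank(entry):
    _, exploit_steps, impact = entry
    return impact / exploit_steps  # fewer chained exploits = easier = riskier

for jewel, steps, impact in sorted(paths, key=rank, reverse=True):
    print(jewel, round(impact / steps, 2))
```

Run against these numbers, the turbine network (one trivial hop, impact 5) outranks both the historian and the four-exploit safety system, which is the "break the path, way before the crown jewel" prioritization the conversation describes.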

Andrew Ginter
Good going. I mean, I have maintained for some time, and it’s easy for me to do because I’m on the outside, I don’t have to do the work, but I’ve maintained for some time that part of a risk assessment should be a description of the simplest attack, or three, that remain credible threats in the defensive posture, threats able to bring about unacceptable consequences. There’s always a path that will let an attacker bring about an unacceptable consequence. The question is how difficult it is.

And so to me, the risk assessment should include a description of the simplest such attack, or attacks, plural.

So that’s sort of one. Is this kind of what you’re doing? Can you give me the next level of detail on what you’re looking at and how you’re making these decisions?

Vivek Ponnada
Yeah, definitely. So the problem like you described is that there might be some open ports or services that are vulnerable.

However, if those ports are closed or those services are disabled, then that problem is solved, at least for the moment, right? Unless there’s another vulnerability discovered on the particular asset. So what we’re doing is we’re ingesting information from the various sources that they have.

In other cases, provide options to add that in the tool so that you have the contextual information as to what attacks are possible with what’s relevant in that environment, right?

And in the past, people did this using questionnaires, asking people, or evaluating with subject matter experts, using a tabletop or something like that. But the beauty of our Frenos platform is that you’re actually able to do this in an automated fashion and at scale. Because if you have a typical customer with dozens of end-user sites and hundreds or even thousands of networks, you’re not actually able to analyze the risk of each network, of each asset, down to the level of what’s possible given the ports and services, or installed software or not-installed software, in that environment, right?

But if you’re able to ingest all this information, from the IP addresses and different types of assets and the vulnerabilities tied to them, to the ports and services that are enabled or disabled, or in other cases making an exception to say, hey, I’m disabling this using some kind of application whitelisting or some kind of segmentation,

all the information at scale can be analyzed, and you can get a view that shows a realistic and more or less validated attack path, versus someone that’s just looking at a piece of paper or a complex network in a manual fashion.

So this is where I think the big difference is: we’re looking at the attack complexity and the attack path at scale, whether it’s tens of sites or thousands of networks, and we’re able to decipher what the context is for exploitation, or just lateral movement, or whatever the path might be to get to your crown jewels.

Andrew Ginter
So you’ve mentioned at scale a couple of times, you’ve mentioned a couple of times the potential for ingesting information about a lot of assets and networks. The asset inventory tools out there produce that knowledge already. I’m guessing you interface with them.

Can you talk about that? How do you get the data about the system that you’re going to analyze?

Vivek Ponnada
Yeah, that’s a great question. Yeah, we definitely can ingest information from a variety of sources. So the platform can ingest information both offline. So drag and drop a CSV or an XML file or any kind of spreadsheet.

And we also have API hooks to be able to automatically ingest information from the likes of Dragos, Nozomi and Claroty, which are the OT security product vendors. We can also ingest information from CMDBs or any kind of centralized data repositories like Rapid7 or Tenable.

In other cases, the customers might just have spreadsheets from the last time they did a site walk. We can ingest that too. So we're not restricted to ingesting any specific type of format. We have a command line tool that can ingest other sources as well.

But the basis, the digital twin, starts with the firewall config file. So we ingest configurations from the likes of Fortinet, Cisco, Palo Alto, you name it.

Then we ingest information from these IT or OT tools. At the end of the day, the more information that's provided, the higher the fidelity of the data. But the beauty of the platform is that if you don't have that kind of information,

we can not only create mitigating controls and options within the platform, but we've also built an extension of the Frenos platform called Optica, where you can quickly leverage existing templates, for example for Dell servers or Cisco routers or Rockwell PLCs.

Within a few minutes, you can drag and drop and build a template, which you then import into Frenos to replicate what might be in the system already. So, long story short: any kind of asset information or vulnerability information out there, we can ingest.

And if there is none, or there's limited visibility in certain sections or locations, we can build something very similar, so that the customers can have a view of the risk in a comparable environment.
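The multi-source ingestion Vivek describes, CSV site-walk spreadsheets, XML tool exports, JSON from APIs, all normalized into one asset inventory, could be sketched roughly like this. This is a hypothetical illustration only, not the actual Frenos API; every function and field name here is invented.

```python
# Hypothetical sketch of multi-format asset ingestion: normalize CSV,
# JSON, and XML exports into a flat list of asset records. Names and
# file shapes are illustrative assumptions, not the Frenos implementation.
import csv
import io
import json
import xml.etree.ElementTree as ET

def ingest(raw: str, fmt: str) -> list[dict]:
    """Normalize one export file into a list of asset records (dicts)."""
    if fmt == "csv":
        return list(csv.DictReader(io.StringIO(raw)))
    if fmt == "json":
        data = json.loads(raw)
        return data if isinstance(data, list) else [data]
    if fmt == "xml":
        # Assumes exports of the form <assets><asset ip="..." .../></assets>
        root = ET.fromstring(raw)
        return [dict(node.attrib) for node in root.iter("asset")]
    raise ValueError(f"unsupported format: {fmt}")

# A site-walk spreadsheet and an XML tool export normalize the same way:
sheet = "ip,vendor\n10.0.0.5,Rockwell\n10.0.0.6,Siemens"
xml_export = '<assets><asset ip="10.0.0.7" vendor="Cisco"/></assets>'
inventory = ingest(sheet, "csv") + ingest(xml_export, "xml")
print(len(inventory))  # 3 normalized asset records
```

The point of the sketch is the normalization step: once every source lands in the same record shape, the downstream attack-path analysis doesn't care whether the data came from a monitoring tool's API or a spreadsheet from a site walk.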

Andrew Ginter
And you've mentioned compensating controls a couple of times, I remember. I mean, the compensating control everybody talks about is more firewalls and more firewall rules: keep the bad guys away from the vulnerable assets that we can't patch, because we can't afford to shut everything down and test everything again.

Can you talk about compensating controls? What other kinds of compensating controls might your system recommend?

Vivek Ponnada
That's a great question, because as we were discussing earlier, in OT not everything is fixable: a patch might not be available, or an outage window is not available, right? So historically, most people have used a combination of allow listing or deny listing, or disabling some ports and services, and, to your point, firewall rules and segmentation have a place in that as well.

Overall, the key is to figure out what the attack path is, and how, or in which fashion, you can break that attack path. So if the path runs from level 4 through a DMZ or firewall, and the firewall rule was any/any, or something that was allowing too much, maybe too many protocols, or something that could be disabled, you can start there as a preference. If that's not possible, or that's not a project you can take on, the next thing could be: hey, the attacker is leveraging this SMB or other exploit on that level 3 device before going to level 2.

Let's look at what service was running on that particular asset, right? You can disable that. So within the tool we've built in almost 20 or so different options for combinations of all these compensating controls that are historically used in OT. It could be a combination of a firewall rule, or a service or port disabled, or in other cases disconnecting assets to put them in a different segment. Again, this is not new, right? This is how OT has historically been able to mitigate some of the risk.

We're just bringing that to the forefront, to show you what other things can be done to break the attack path, versus strictly talking about vulnerability management and fixing the problem by applying a patch, which is not practical, as we talked about.
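The "break the attack path" idea Vivek describes can be illustrated with a toy graph model: hosts are nodes, reachable services are edges, and a compensating control such as disabling one service removes edges and may disconnect the crown jewel from the attacker. This is a minimal sketch under invented assumptions, not the Frenos implementation; the host names and services are made up.

```python
# Toy attack-path model (illustrative only, not the Frenos engine):
# edges are (source, destination, service the attacker would use).
# Disabling a service removes its edges; BFS then tests whether the
# crown-jewel asset is still reachable from the attacker's start point.
from collections import deque

edges = [
    ("IT-laptop", "DMZ-historian", "smb"),
    ("DMZ-historian", "L3-server", "smb"),
    ("L3-server", "L2-hmi", "rdp"),
    ("L2-hmi", "PLC", "modbus"),
]

def reachable(start: str, target: str, disabled: set[str]) -> bool:
    """Breadth-first search, skipping edges whose service is disabled."""
    graph: dict[str, list[str]] = {}
    for src, dst, svc in edges:
        if svc not in disabled:
            graph.setdefault(src, []).append(dst)
    queue, seen = deque([start]), {start}
    while queue:
        node = queue.popleft()
        if node == target:
            return True
        for nxt in graph.get(node, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable("IT-laptop", "PLC", disabled=set()))    # True: path exists
print(reachable("IT-laptop", "PLC", disabled={"smb"}))  # False: path broken
```

Disabling SMB at the DMZ boundary breaks the path without patching anything, which is exactly the kind of option a patch-averse OT site wants surfaced.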

Andrew Ginter
Compensating controls are tricky, Nate. Imagine we identify a vulnerability, a weakness in a defensive posture: there's a new vulnerability announced in some piece of software that we use on some PLC or safety system or who knows what, deep in our architecture. What do we do about that? It's a question everybody asks. The consensus that's building up is that if that system is exposed to attack, then we have to put compensating measures in.

If it's not exposed, or if it's really hard to reach, maybe we don't need to change anything in the short term, until our next opportunity to do an upgrade or a planned outage or something.

And a tool like this one, like the Frenos tool, can tell us how reachable it is, how exposed it is, and compare that to our risk tolerance. Are we running a passenger rail switching system? Are we running a small bakery?

Different levels of exposure are acceptable in different circumstances. So having the tool give us a sense of how exposed we are is useful in making that decision: are we gonna patch or not? And if we have to do something, it's useful to have a list of compensating controls, sort of the list I heard Vivek go through, and they're probably gonna add to it if they haven't already.

You can change permissions. If you've got a file server where sharing files is the problem, and the bad guys can put a nasty on the file server, change permissions so that it's harder to do that.

You can turn off services. Windows ships with, I don't know, 73 services running, and most industrial systems don't need all of them. It would have been nice to turn them off ages ago. If you haven't already turned them off, and there's a vulnerability in one of these services, and you're pretty sure you're not using it, you can turn it off.

You can add firewall rules that make it harder to reach the system. Add firewall rules that say: fine, I need to reach the system for some of its services, but I don't think I ever need to reach this service from the outside, even if I need to use it on the inside. Add a firewall rule that blocks access to that service on that host from the outside.

None of this is easy. For every change you make to an important system, the engineering team has to ask: how likely is it that I'm messing stuff up here? How likely is it that I'm introducing a problem that's gonna bite me with a really serious consequence? How likely is it that the cure is worse than the disease? So compensating controls aren't easy. But what I see this tool doing is giving us more information about the vulnerable system: how reachable is that vulnerable system? What are the paths that are easiest to get to it? If I can turn off, I don't know, remote desktop halfway through the attack path and make the attack that much more difficult, now you have to go through, I don't know, PLCs instead of Windows boxes.

That's useful knowledge. This is all useful knowledge. We need as much ammunition as we can get when we're making these difficult decisions about: shoot, I have to change the system to make it less vulnerable. What am I going to change without breaking something?
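Andrew's patch-or-defer reasoning, weigh exposure against criticality, compare against the site's risk tolerance, then either defer, add compensating controls, or patch, can be reduced to a toy decision rule. This is purely illustrative: the scores, thresholds, and function are invented for this sketch, not drawn from the interview or any product.

```python
# Toy decision helper (invented for illustration): combine how exposed
# a vulnerable asset is with how critical it is, then compare against
# the site's risk tolerance. A passenger-rail operator would set a low
# tolerance; a small bakery could accept a higher one.
def patch_decision(exposure: float, criticality: float, tolerance: float) -> str:
    """exposure and criticality are 0..1 scores; tolerance is a 0..1 threshold."""
    risk = exposure * criticality
    if risk <= tolerance:
        # Hard to reach or low consequence: wait for the next outage window.
        return "defer to next planned outage"
    if exposure > criticality:
        # Reachability dominates: cut the attack path instead of patching.
        return "add compensating controls to cut reachability"
    # Consequence dominates: the fix itself is what matters most.
    return "patch at the next maintenance window"

print(patch_decision(exposure=0.9, criticality=0.8, tolerance=0.2))
# -> add compensating controls to cut reachability
```

Real tools compute the exposure input from validated attack paths rather than a hand-assigned score; the sketch only shows why that input, and not a raw vulnerability list, drives the decision.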

Andrew Ginter
Well, thank you so much for joining us, Vivek. Before I let you go, can you sum up for our listeners the most important points to take away from this new technology? And, I don't know, what can they do next?

Vivek Ponnada
Yeah, for sure. So the quick summary is we’re trying to solve a problem that’s been around for a decade plus. Lots of customers do not have a risk assessment in place. They’re not quite sure where they stand currently.

So some of them are early in their journey, with this lack of information. They still need to figure out where to invest their next dollar or next hour of resources. In other cases, they've spent the past three or five years developing an OT security program.

They have a lot of information available, lots of alerts, but again, they're not so sure how they compare to their industry peers, or how they compare to where they should be in their security posture management.

So what Frenos is able to do is both leverage their existing data sets and fill in missing information, by providing a replica of their environment that showcases where they should be focusing in terms of breaking the attack paths, highlighting not just where they currently stand, but also where they stand compared to yesterday. Overall, this is what most executives have been asking before investing in OT security: where do we stand currently? How good are we compared to a known

attack vector or campaign, if you will? And then, how good can we be as of today? Because the risks are not staying constant, so how do we keep up? So the outcome of the Frenos platform is both a point-in-time assessment, if you like, and also continuous posture management, because you're able to validate whether the compensating controls and preventive measures you're deploying or implementing are going well or not.

So, in conclusion, we are a security posture management and visibility company that's able to bring out the best in your existing data sets, provide a gap analysis, and help you figure out where to invest your next dollar or resource, at what site or location.

And if you'd like to know more, hit me up on LinkedIn. My email is Vivek at Frenos.io, or I'm happy to connect with you on LinkedIn and take it from there. For more information, check out our website, Frenos.io, as well. You'll see all the information about our current use cases and the different products and services we have to offer. Looking forward to connecting with more of you.

Nathaniel Nelson
Andrew, that just about does it for your interview with Vivek Ponnada. Do you have any final word to take us out with today?

Andrew Ginter
Yeah. This topic is timely, the topic of risk-based decision-making. I mean, regulation is coming into effect in a lot of countries, particularly in Europe. The regulation in every country is different, but the NIS2 directive says you have to be making risk-based decisions.

And I'm sorry, but a risk assessment should be much more than a list of unpatched vulnerabilities. A list of unpatched vulnerabilities does not tell you how vulnerable you are.

It's just a list of vulnerabilities. To figure out how much trouble you're in, you need a lot more information. You need to know which assets are most critical. You need to know how reachable those critical assets are for your adversaries.

And when new vulnerabilities arise that simplify the pivoting path, that simplify reachability of a critical asset for your adversaries, you need advice: this is what you need to fix next, and here are your options for fixing it. So I see this kind of tool as a step in the right direction. This is the kind of information a lot of us need, not just in the world of NIS2, but in the world of managing risk, managing reachability.

You know, we've all segmented our networks. What does that mean? You can still reach things: bang, bang, bang, pivot on through. Well then, what does segmentation really mean? This kind of tool tells us what it means. It gives us deeper visibility into reachability and vulnerability of the critical assets: risk, opportunity to attack. You know, I don't like the word vulnerability; too often it means software vulnerability. This kind of tool exposes attack opportunities and tells us what to do about them. So to me, that's a very useful thing to do.

Nathaniel Nelson
Well, thank you to Vivek for highlighting all that for us. And Andrew, as always, thank you for speaking with me.

Andrew Ginter
It’s always a pleasure. Thank you, Nate.

Nathaniel Nelson
This has been the Industrial Security Podcast from Waterfall. Thanks to everyone out there listening.


The post Managing Risk with Digital Twins – What Do We Do Next? – Episode 144 appeared first on Waterfall Security Solutions.
