Groundbreaking OT Security Guidance

The UK National Cyber Security Centre (NCSC), in conjunction with many others including CISA, CCCS, BSI, the FBI, NCSC-NL and NCSC-NZ, has just issued new guidance: Secure connectivity principles for Operational Technology (OT). The guidance is designed for medium to large industrial sites and covers many topics that are either unique in the industry – topics I've never seen in guidance before – or otherwise unusual, infrequent and useful.
Andrew Ginter

These topics include:

  • keeping the most IT / Internet-exposed equipment the most aggressively patched,
  • centralizing the most dangerous connections,
  • abstracting, where we can, any instructions that OT receives from IT or the Internet,
  • hardening IT/OT interfaces with cross-domain solutions,
  • using unidirectional hardware and hardware-enforced remote access,
  • microsegmenting east/west OT communications,
  • paying special attention to “break glass” accounts and workstations,
  • not permitting anything like a remote-access engineering workstation, and
  • using unidirectional hardware to help meet islanding / emergency isolation requirements.

The document is, however, 33 pages long, and much of the language is general and abstract – it can be hard to figure out what the real point is. Here is a condensed version, with simplified language and occasional examples. This introduction may not be as accurate as the original, but I hope to give readers enough of a head start on the tricky bits to have a fighting chance of getting through the document.

Overview

Let’s begin – the NCSC document describes 8 principles, with my summary and paraphrasing after each title.

  1. Balance the risks and opportunities – a somewhat confusing mix of OT cyber risk, brownfield cautions, and supply chain advice – most readers have seen this material before.
  2. Limiting the exposure of your connectivity – when we have to connect equipment to IT, or worse to the Internet, keep it patched, scan regularly for Internet-exposed IP addresses and services, and be paranoid about wireless communications. None of the individual bits of advice are new, but some of the combinations are unusually useful.
  3. Centralise and standardise network connections – minimise our external connectivity, and ideally route it all through a central facility for intrusion detection and active management – of rules, vulnerabilities, actionable intel, etc. This is practical advice that I have not seen before.
  4. Use standardised and secure protocols – use encryption and authentication inside our ICS as much as is practical, and always encrypt and authenticate communications across IT, Internet and other external networks. Good advice, not terribly new.
  5. Harden your OT boundary – lots of good advice for this important consequence boundary, including hardware-enforced unidirectional gateways, hardware-enforced remote access and (unusually) cross-domain solutions. Good advice – some of it right out on the edge of the state of the practice.
  6. Limit the impact of compromise – a surprisingly dated discussion of types of firewall packet filtering that everyone really should know already, coupled with a newer discussion of options for microsegmentation to control “lateral movement” (pivoting attacks).
  7. Ensure all connectivity is logged and monitored – the usual exhortation to monitor connectivity, especially remote access from IT networks and the Internet, with an interesting segue into “break glass” connectivity.
  8. Establish an isolation plan – talks about different kinds of site and/or subsystem emergency isolation / islanding approaches, including a brand-new discussion of the business value of hardware-enforced unidirectional communications as part of the emergency islanding plan.

With that introduction, let’s dig into what’s new and what’s interesting.

Keep Exposed Gear Patched

Lots of OT guidance talks about how important it is to patch systems. Lots talks about how hard it is to patch change-controlled or obsolete (or both) OT systems. Very little guidance talks about how important it is to patch IT-exposed or Internet-exposed equipment. This document does. Section (2) says, in rather abstract language: if we have had to connect something to the IT network or to the Internet – a firewall, say, or a software service through a firewall – keep it patched. If we cannot patch the connected device or software, then it should not be connected to the Internet. And if we cannot patch the underlying OS, that is as bad as not being able to patch the application – get it off the Internet!

Centralize

I’ve never seen guidance tell us to centralize our most dangerous communications connections before. To a lot of practitioners this is second nature – if we do not have the people or skills at remote or unstaffed sites to keep communications infrastructure up to date, monitored, documented and maintained, then most of us already try to do it centrally, where we do have the people. This is worth saying in guidance, and again, Section (3) is the first time I’ve seen this advice written down and endorsed by such a wide range of authorities.

Abstraction

Section (4) talks about encryption, authentication and – abstraction. The section does not use the word “abstraction” but does talk about “protocol validation.” For example, if a cloud-based AI is making complex optimization decisions and writing encrypted / authenticated Modbus into a bunch of OT PLCs, does an NGFW looking at that traffic have any hope of figuring out whether the instructions to the PLCs are safe?

If instead the AI sent an XML file to a Manufacturing Execution System (MES) in the OT network, and the XML file said to orient the <drum> to a <low> or <high> orientation rather than to 23.2 degrees, or to heat the drum to the 73% point of the allowed, safe operating temperature range rather than to 352 °C, then verifying the safety of the communication would be as simple as checking the XML document against its XML schema.

Now, this is easier said than done – most of us are stuck with whatever communications protocol the application vendors give us, but the concept makes sense. And this is the first time I’ve seen a piece of multi-government guidance talk about the concept. If owners and operators start demanding this capability (citing the NCSC guidance) and using the capability to decide which external systems to purchase / connect to, vendors (hopefully) will eventually respond or lose business.
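To make the idea concrete, here is a minimal sketch of the kind of check an abstracted interface enables, using the hypothetical <drum> instruction above. The element names and allowed values are illustrative only, and Python’s standard library stands in for a true XML Schema validator (a real deployment would validate against an XSD, or do the check in a CDS):

```python
# Minimal sketch of "abstraction" / protocol validation, using the article's
# hypothetical <drum> example. Python's standard library cannot validate
# against a full XML Schema, so the allowed abstract values are checked
# directly here; element names and values are illustrative assumptions.
import xml.etree.ElementTree as ET

ALLOWED = {
    "orientation": {"low", "high"},           # abstract positions, not raw degrees
    "temperature": {"low", "medium", "high"}, # abstract setpoints, not raw degrees C
}

def validate_instruction(xml_text: str) -> bool:
    """Accept the instruction only if every field uses an allowed abstract value."""
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return False
    if root.tag != "drum":
        return False
    for field, allowed_values in ALLOWED.items():
        element = root.find(field)
        if element is None or element.text not in allowed_values:
            return False
    return True

safe = "<drum><orientation>high</orientation><temperature>medium</temperature></drum>"
unsafe = "<drum><orientation>23.2</orientation><temperature>352</temperature></drum>"
print(validate_instruction(safe))    # True  - abstract values within the schema
print(validate_instruction(unsafe))  # False - raw engineering values rejected
```

The point of the abstraction is visible in the last line: a boundary device can reject raw engineering values without needing to understand drum physics, because anything outside the small abstract vocabulary is by definition suspect.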

IT/OT Hardening

Section (5) starts with some introduction and then repeats the exhortation to keep our IT/OT firewalls patched. The section continues and eventually recommends hardware-based unidirectional security controls. This is not the first time we’ve seen that advice, but the unidirectional option is often missed – these people caught it.

And then the advice gets a little confusing. It talks about Cross Domain Solutions (CDS), which is a military term for (oversimplified) cleaning malware out of documents going into high-security / classified networks. In OT, an emerging use I’ve observed for this kind of CDS technology is to keep malware and other attack information out of communications that arrive in OT networks from IT, or worse from the Internet. 

And then the advice gets more confusing. It starts talking about “data diodes” (hardware-enforced unidirectional communications), but the advice does not make a lot of sense unless we apply it to communications going into an OT network. This is not intuitive. Most unidirectional hardware is oriented to send stuff out of an OT network, not in. That said, I do see inbound unidirectional traffic in customer deployments increasingly frequently, and this is the first government guidance I’ve seen for sending stuff unidirectionally into an OT network.

Simplifying the advice, it says:

  1. The simplest (inbound) hardware diodes only forward data, sometimes including attack data, into OT networks. These devices do not check for malicious content the way a CDS can. 
  2. Splitting a communications protocol across two diodes – one in and one out, so that inbound packets go in through one diode and answers come out through the other – is not useful; this is an “antipattern.”
  3. The best inbound unidirectional hardware checks the validity of data passing into OT – checks the data in hardware, not in external software.
  4. Pushing inbound data unidirectionally into a “unidirectional DMZ” (unidirectional hardware inbound one side, and a second unidirectional gateway outbound on the other side) with data validation (eg: a software CDS) done “in the middle” is a useful design.

All four are true. (1) and (2) basically say “caveat emptor.” There are diode hardware vendors out there making claims that are not defensible. (2) in particular confuses a lot of people. When I see Waterfall’s Unidirectional Gateways deployed to send information both into and out of an OT network, I never see nor recommend a round-trip protocol like the “anti-pattern” in (2). (2) is how command and control (C2) loops work. 

Recommendations (3) and (4) are confusing as well – in my read (4) contradicts (3) – (4) says data validation should be done in the software CDS, while (3) says to do the validation in the unidirectional hardware. Don’t get me wrong, (4) is still a good idea, but (4) is not as powerful as (3)’s validation done in the unhackable hardware. In the past I’ve seen (4) discussed only in the context of classified networks, and even then only in the most abstract terms, because I have no security clearance. But in principle, yes, we can use the concept of a CDS between a pair of hardware-enforced gateways to push data into OT as well.

Point (3) is unusual in another respect – the requirement for hardware filtering / validation of data entering OT. I’ve only seen the hardware filtering recommendation once before – in the 2024 Modern Approaches to Network Access Security talking about hardware-enforced remote access (HERA).

Microsegmentation

Section (6) talks about lateral movement. Other documentation calls these pivoting attacks: using compromised equipment to attack other equipment in the same network, eventually reaching equipment that can push attack connections through firewalls into more critical networks. The IT buzzword to address this risk is “microsegmentation.” Section (6) is a good discussion of the role firewalls play in slowing down attack propagation inside OT networks. There is a nice discussion of using built-in host firewalls, but that discussion is missing a caveat that host firewalls are more practical higher in an OT architecture, closer to the IT network. Vendor support agreements and change control constraints make managing host firewalls harder when we get deeper into OT architectures.

And as mentioned earlier, the section has a surprisingly long discussion of the differences between routing, static firewall rules, stateful inspection and deep-packet inspection (DPI) – a discussion that I’m pretty sure every OT practitioner can already recite backwards. The information is correct, but could have been much shorter, saying essentially “modern firewalls do the good stuff, and we should not pretend that the firewall-like rules in switches and routers have much security value at all.”
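For readers who have not recited it backwards recently, the core difference between static rules and stateful inspection can be sketched in a few lines. This is a toy illustration, not a real firewall, and the host names are made up:

```python
# Toy illustration of static packet filtering vs stateful inspection.
# A static rule judges every packet in isolation; a stateful filter also
# remembers which connections the protected (OT) side opened, and admits
# inbound packets only as replies to those. Host names are hypothetical.

class StatefulFilter:
    def __init__(self):
        self.established = set()  # connections opened from the OT side

    def outbound(self, src, dst):
        """OT initiates a connection: remember it so replies can come back."""
        self.established.add((dst, src))  # replies will flow dst -> src
        return True

    def inbound(self, src, dst):
        """Admit an inbound packet only if it answers an OT-initiated connection."""
        return (src, dst) in self.established

def static_inbound_allow(src, dst, allowed_pairs):
    """Static rule: admit the packet if the (src, dst) pair is on the list,
    whether or not anything inside ever asked for this traffic."""
    return (src, dst) in allowed_pairs

fw = StatefulFilter()
fw.outbound("ot-historian", "it-server")          # OT pushes data out
print(fw.inbound("it-server", "ot-historian"))    # True  - reply to OT's request
print(fw.inbound("attacker", "ot-historian"))     # False - unsolicited inbound

print(static_inbound_allow("it-server", "ot-historian",
                           {("it-server", "ot-historian")}))  # True - always open
```

The last line is why static rules in switches and routers have so little security value: the permitted path is open around the clock, solicited or not, which is exactly the kind of standing inbound channel an attacker wants.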

What is surprisingly good is a very short section entitled “Browse Down.” I had to dig into some of their references, but what they’re saying is:

  • Give only a very small number of machines the ability to make far-reaching configuration and security changes – eg: minimize the number of engineering workstations, and
  • Lock down and secure those machines nine ways to Sunday – they are prime targets when intruders get into the systems.

Said negatively – do not allow Internet-exposed machines to carry out sensitive reconfiguration of our OT systems. For example – do not let any remote access laptop carry out these functions. I read the advice as saying, to the greatest extent feasible, “remote engineering workstations” should be an oxymoron. I agree completely – but have never heard anyone write this down before. Good job.

Break Glass Access

Section (7) has an interesting discussion of “break glass” access. Again, I had to look up what this was: accounts, and especially remote access accounts, that can be used to bypass normal security mechanisms in an emergency – such as when our password vault is compromised, or goes up in smoke. The term was easily findable, so I’m guessing it’s widely used in IT. The concept makes sense – common wisdom in IT is to secure “break glass” accounts really thoroughly. “Break glass” accounts do not need to be convenient to use – these are emergency measures only.

The guidance recommends that if our IDS or logging ever sees anyone use a break-glass account, then those tools should issue the highest priority alarms they can to our security operations center (SOC). This makes sense. Use these powerful accounts in emergencies, not for routine remote access.
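The detection side of that recommendation is simple enough to sketch. The account names and log format below are hypothetical – real deployments would hang this logic off their SIEM or IDS rules rather than a script:

```python
# Hypothetical sketch of the alerting the guidance recommends: any login by a
# break-glass account triggers the highest-priority alert the SOC pipeline
# supports. Account names and the log line format are illustrative only.
BREAK_GLASS_ACCOUNTS = {"bg-ot-admin", "bg-firewall-recovery"}

def scan_auth_log(lines):
    """Yield a critical alert for every login by a break-glass account."""
    for line in lines:
        # Assumed line shape: "<timestamp> LOGIN <account> <host>"
        fields = line.split()
        if len(fields) >= 4 and fields[1] == "LOGIN" and fields[2] in BREAK_GLASS_ACCOUNTS:
            yield {"severity": "CRITICAL", "account": fields[2], "host": fields[3]}

log = [
    "2025-06-01T03:14:00Z LOGIN bg-ot-admin host7",
    "2025-06-01T03:15:10Z LOGIN operator22 host3",
]
for alert in scan_auth_log(log):
    print(alert)  # one CRITICAL alert, for bg-ot-admin only
```

Note that the rule fires on every use, not on anomalous use – that is the point. If break-glass accounts are reserved for genuine emergencies, any login at all is either an incident or an emergency, and both deserve the SOC’s immediate attention.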

Islanding

Section (8) talks about isolation / islanding: disconnecting IT from OT in IT emergencies, such as a ransomware infection, so OT can continue working throughout the IT emergency. This advice is not unique – the US TSA continues to require emergency isolation for rail systems in TSA SD 1580-21-01E, for pipelines in TSA SD 2021-02F, and the Danes require it in their latest Executive Order 260 of 2025. What is unique is the connection to hardware-enforced unidirectional gateway technology. The advice suggests either:

  • Deploy a gateway as the sole connection outbound from OT to IT, which amounts to “permanent” islanding – no malware from IT can ever propagate back into OT through the gateway, or
  • Deploy an outbound gateway in parallel to a firewall at the IT/OT interface, so that when we power off that firewall for the duration of an IT / ransomware emergency, critical communications can still flow from OT to IT, or to the Internet – for partners, government regulators, etc.

While I’ve seen many of these kinds of unidirectional islanding deployments in the last several years, and I’m aware that regulators seem happy with those designs, this is the first time I’ve seen unidirectional hardware actually described and recommended in guidance in the context of an islanding / isolation discussion.

Conclusions

There are minor nits I could pick with the document: the guidance uses “secure” as an adjective (first law of OT security – nothing is “secure”), it talks about CIA / AIC / etc. as if information were the asset we are protecting (we in fact protect safe, reliable and efficient physical operations), and it talks about “compensating controls” as if boundary protection were a secondary priority, rather than the first priority for preventing cyber-sabotage (see Biba’s 50-year-old cybersecurity theory).

But there is no point in picking nits. While difficult to understand sometimes, this is a groundbreaking piece of guidance, covering useful topics that I’ve never seen covered before. Good job.


About the author

Andrew Ginter is the most widely-read author in the industrial security space, with over 23,000 copies of his three books in print. He is a trusted advisor to the world's most secure industrial enterprises, and contributes regularly to industrial cybersecurity standards and guidance.