The post How to Apply the NCSC/CISA 2026 Guidance appeared first on Waterfall Security Solutions.
For the first time, joint guidance from the UK NCSC, co-signed by CISA, BSI, Australia’s ACSC and others, calls for centralizing risky connections into OT networks, simplifying instructions sent into OT so they can be inspected for safety, and even “browsing down” for engineering workstation access. Alongside these newer ideas, it reinforces more established advice, such as hardening OT boundaries with hardware-enforced protections like Unidirectional Gateways and Hardware-Enforced Remote Access.
The challenge is that the guidance is fairly abstract. The principles are clear, but how to apply them in real OT architectures is not always obvious.
What are the 8 core principles of the NCSC / CISA “Secure connectivity principles for Operational Technology (OT)” guidance, and how does Waterfall support their application?
1) Balance the risks and opportunities – Waterfall’s Unidirectional Gateways dramatically reduce cyber risks to connected OT networks. One-way hardware prevents attack information from reaching back into OT networks, significantly reducing risks for even obsolete, unpatchable targets.
2) Limit the exposure of your connectivity – Waterfall’s Secure Bypass product is a time-limited switch, controlling how often and how long vulnerable software components are exposed to external networks. Waterfall’s Unidirectional Gateways are intrinsically outbound connections – no inbound threat can reach connected devices through the gateways.
3) Centralise and standardise network connections – Waterfall’s Unidirectional Gateways scale from the smallest DIN rail form factors to 10Gbps rack-mount devices supporting dozens of simultaneous connectors & replications, making both distributed and centralized deployment straightforward.
4) Use standardised and secure protocols – Waterfall’s Unidirectional Gateways support dozens of OT protocols and applications, both plain-text and encrypted versions. Better yet, even when using plain-text communications into IT networks, no session hijack or other plain-text attack can reach through the unidirectional hardware back into the OT network to put physical operations at risk.
5) Harden your OT boundary – The guidance recommends hardware-enforced unidirectionality and integrity filtering. Waterfall’s Unidirectional Gateways enforce unidirectionality in hardware. Waterfall’s Hardware-Enforced Remote Access (HERA) uses a hardware filter to ensure only HERA protocol information can enter the OT side of the HERA device.
6) Limit the impact of compromise – Waterfall Unidirectional Gateway and FLIP products are compatible with a wide variety of anti-virus systems, patch management systems, zero trust, and other systems that provide this second level of defense in defense-in-depth programs.
7) Ensure all connectivity is logged and monitored – Waterfall for IDS is hardware-enforced protection for SPAN and mirror ports sending data to IT-resident OT intrusion detection system (IDS) sensors. Waterfall partners with all the most important OT IDS vendors.
8) Establish an isolation plan – Waterfall’s Unidirectional Gateways are used by TSA-compliant sites and other sites with isolation / islanding requirements. The gateways ensure critical data continues to move, even during “isolation” emergencies where firewalls are not permitted to connect OT with IT networks, or the Internet.
Waterfall’s Unidirectional Gateway, HERA remote access and other hardware-enforced products are dramatically stronger than software and are used routinely at the sensitive IT/OT trust/consequence boundary.
The guidance heavily emphasizes a “Push-Only” architecture, where data is sent from the secure OT zone to lower-trust corporate zones, preventing external, unsolicited inbound connections. The guidance recommends unidirectional hardware as a powerful tool to enforce the “push only” rule.
The guidance is for OT asset owners and operators, cybersecurity professionals, integrators and manufacturers and risk managers and engineers – at medium-sized to large industrial sites or enterprises. The guidance is fairly abstract and requires expertise to understand, expertise that is generally not available at the smallest of industrial sites.
Subscribe to our blog and receive insights straight to your inbox
The post Webinar: 2026 OT Cyber Threat Report appeared first on Waterfall Security Solutions.
Join us on March 25th, 2026, 11AM NY Time
This is the only industry report focused exclusively on verified cyber incidents with physical consequences. The data set is public, all the incidents we use are included in the report’s appendix with links to public news reports.
Highlighted attacks include:
Record-breaking costs of consequences
What is behind the drop in ransomware attacks
Key defensive developments of 2025, in light of these threats
The post 2026 OT Cyber Threat Report appeared first on Waterfall Security Solutions.
Cyber breaches with physical consequences in the public record for heavy industry and critical industrial infrastructures decreased 25% to 57 in 2025 from 76 in 2024. Most of this reduction is because of temporary factors affecting ransomware attacks. Nation-state and hacktivist attacks doubled, with most attacks targeting critical infrastructures.
The report is unique in its focus, and in that the entire 2025 data set is included in the Appendix.
The Waterfall Threat Report 2026 brings you comprehensive, verifiable data on cyber attacks that caused physical consequences in OT environments. Unlike other industry reports, it focuses exclusively on verified incidents with physical consequences, helping you understand today’s threat landscape and what’s required to face it.
The post Groundbreaking OT Security Guidance appeared first on Waterfall Security Solutions.
The UK National Cyber Security Centre (NCSC) in conjunction with many others, including CISA, CCCS, BSI, FBI, NCSC-NL and NCSC-NZ, has just issued new guidance: Secure connectivity principles for Operational Technology (OT). The guidance is designed for medium-sized through large industrial sites and includes many topics that are either unique in the industry – that I’ve never seen in guidance before – or are otherwise unusual or infrequent – and useful.
These topics include: keeping the most IT / Internet-exposed equipment the most patched, centralizing the most dangerous connections, abstracting any instructions that OT receives from IT or the Internet if we can, hardening IT/OT interfaces with cross-domain solutions, using unidirectional hardware and hardware-enforced remote access, microsegmenting east/west OT communications, paying special attention to “break glass” accounts and workstations, not permitting anything like a remote-access engineering workstation, and using unidirectional hardware to help with islanding / emergency isolation requirements.
The document is, however, 33 pages long, and much of the language is general and abstract – it can be hard to figure out what the real point is. Here is a condensed version, with simplified language and occasional examples. This introduction may not be 100% as accurate as the original, but I hope to give readers enough of a head start on the tricky bits to have a fighting chance of getting through the document.
Let’s begin – the NCSC document describes 8 principles: (1) balance the risks and opportunities, (2) limit the exposure of your connectivity, (3) centralise and standardise network connections, (4) use standardised and secure protocols, (5) harden your OT boundary, (6) limit the impact of compromise, (7) ensure all connectivity is logged and monitored, and (8) establish an isolation plan.
With that introduction, let’s dig into what’s new and what’s interesting.
Lots of OT guidance talks about how important it is to patch systems. Lots talks about how hard it is to patch change-controlled or obsolete (or both) OT systems. Very few bits of guidance talk about how important it is to patch IT-exposed or Internet-exposed equipment. This document does – Section (2) says in rather abstract language, look – if we’ve had to connect something to the IT network or to the Internet – like a firewall, or a software service through a firewall – keep it patched. And if we cannot patch the connected device or software, then it should not be connected to the Internet. And if we cannot patch the underlying OS, that’s as bad as not being able to patch the application – get it off the Internet!
I’ve never seen guidance tell us to centralize our most dangerous communications connections before. To a lot of practitioners this is second nature – if we do not have the people or skills at remote or unstaffed sites to keep communications infrastructure up to date, monitored, documented and maintained, then most of us already try to do it centrally where we do have the people. This is worth saying in guidance, and again, Section (3) is the first time I’ve seen this advice written down and endorsed by such a wide range of authorities.
Section (4) talks about encryption, authentication and – abstraction. The section does not use the word “abstraction” but does talk about “protocol validation.” For example, if a cloud-based AI is making complex optimization decisions and writing encrypted / authenticated Modbus into a bunch of OT PLCs, does an NGFW looking at that traffic have any hope of figuring out if the instructions to the PLCs are safe?
If instead, the AI sent an XML file into a Manufacturing Execution System (MES) in the OT network, and the XML file said to orient the <drum> to <low> or <high> orientation, rather than 23.2 degrees, or heat the drum to the 73% point in the allowed, safe operating temperature range rather than to 352 °C, verification of the safety of the communication would be as simple as checking the XML document to make sure it agrees with the XML schema.
Now, this is easier said than done – most of us are stuck with whatever communications protocol the application vendors give us, but the concept makes sense. And this is the first time I’ve seen a piece of multi-government guidance talk about the concept. If owners and operators start demanding this capability (citing the NCSC guidance) and using the capability to decide which external systems to purchase / connect to, vendors (hopefully) will eventually respond or lose business.
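To make the abstraction idea concrete, here is a minimal sketch of schema-style validation on the receiving side. The element names, allowed values and document structure are hypothetical, invented for illustration – the point is only that abstract instructions can be checked against a whitelist in a few lines, where raw Modbus writes cannot:

```python
import xml.etree.ElementTree as ET

# Whitelist of abstract, safe instruction values -- a "schema" in miniature.
# All element names and values here are hypothetical examples.
ALLOWED = {
    "drum_orientation": {"low", "high"},
    "drum_temperature_pct": range(0, 101),  # percent of the safe operating range
}

def validate_instruction(xml_text: str) -> bool:
    """Return True only if every element is known and its value is allowed."""
    root = ET.fromstring(xml_text)
    for elem in root:
        if elem.tag == "drum_orientation":
            if elem.text not in ALLOWED["drum_orientation"]:
                return False
        elif elem.tag == "drum_temperature_pct":
            if not (elem.text or "").isdigit() or \
                    int(elem.text) not in ALLOWED["drum_temperature_pct"]:
                return False
        else:
            return False  # unknown instruction: reject outright
    return True

ok = validate_instruction(
    "<instruction><drum_orientation>low</drum_orientation>"
    "<drum_temperature_pct>73</drum_temperature_pct></instruction>")
bad = validate_instruction(
    "<instruction><drum_temperature_pct>352</drum_temperature_pct></instruction>")
```

A real deployment would validate against a full XML schema rather than a hand-written whitelist, but the safety check stays this simple precisely because the instructions are abstract.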
Section (5) starts with some introduction and then repeats the exhortation to keep our IT/OT firewalls patched. The section continues and eventually recommends hardware-based unidirectional security controls. This is not the first time we’ve seen that advice, but the unidirectional option is often missed – these people caught it.
And then the advice gets a little confusing. It talks about Cross Domain Solutions (CDS), which is a military term for (oversimplified) cleaning malware out of documents going into high-security / classified networks. In OT, an emerging use I’ve observed for this kind of CDS technology is to keep malware and other attack information out of communications that arrive in OT networks from IT, or worse from the Internet.
And then the advice gets more confusing. It starts talking about “data diodes” (hardware-enforced unidirectional communications), but the advice does not make a lot of sense unless we apply it to communications going into an OT network. This is not intuitive. Most unidirectional hardware is oriented to send stuff out of an OT network, not in. That said, I do see inbound unidirectional traffic in customer deployments increasingly frequently, and this is the first government guidance I’ve seen for sending stuff unidirectionally into an OT network.
Simplifying the advice, it says:
All four are true. (1) and (2) basically say “caveat emptor.” There are diode hardware vendors out there making claims that are not defensible. (2) in particular confuses a lot of people. When I see Waterfall’s Unidirectional Gateways deployed to send information both into and out of an OT network, I never see nor recommend a round-trip protocol like the “anti-pattern” in (2). (2) is how command and control (C2) loops work.
Recommendations (3) and (4) are confusing as well – in my read (4) contradicts (3) – (4) says data validation should be done in the software CDS, while (3) says to do the validation in the unidirectional hardware. Don’t get me wrong, (4) is still a good idea, but (4) is not as powerful as (3)’s validation done in the unhackable hardware. In the past I’ve seen (4) discussed only in the context of classified networks, and even then only in the most abstract terms, because I have no security clearance. But in principle, yes, we can use the concept of a CDS between a pair of hardware-enforced gateways to push data into OT as well.
Point (3) is unusual in another respect – the requirement for hardware filtering / validation of data entering OT. I’ve only seen the hardware filtering recommendation once before – in the 2024 Modern Approaches to Network Access Security talking about hardware-enforced remote access (HERA).
Section (6) talks about lateral movement. Other documentation calls these pivoting attacks: using compromised equipment to attack other equipment in the same network, eventually reaching equipment that can push attack connections through firewalls into more critical networks. The IT buzzword to address this risk is “microsegmentation.” Section (6) is a good discussion of the role firewalls play in slowing down attack propagation inside OT networks. There is a nice discussion of using built-in host firewalls, but that discussion is missing a caveat that host firewalls are more practical higher in an OT architecture, closer to the IT network. Vendor support agreements and change control constraints make managing host firewalls harder when we get deeper into OT architectures.
And as mentioned earlier, the section has a surprisingly long discussion of the difference between routing, static firewall rules, stateful inspection and deep-packet inspection (DPI), a discussion that I’m pretty sure every OT practitioner can already recite backwards. The information is correct, but could have been much shorter, saying essentially “modern firewalls do the good stuff, and we should not pretend that what looks like firewall rules in switches and routers has much security value at all.”
What is surprisingly good is a very short section entitled “Browse Down.” I had to dig into some of their references, but what they’re saying is: sensitive systems should only be administered from workstations at least as trustworthy as the systems being administered – a high-trust workstation may “browse down” to less-trusted systems, but never the reverse.
Said negatively – do not allow Internet-exposed machines to carry out sensitive reconfiguration of our OT systems. For example – do not let any remote access laptop carry out these functions. I read the advice as saying, to the greatest extent feasible, “remote engineering workstations” should be an oxymoron. I agree completely – but have never heard anyone write this down before. Good job.
Section (7) has an interesting discussion of “break glass” access. Again, I had to look up what this was: accounts and especially remote access accounts that can be used to bypass normal security mechanisms in an emergency, such as when our password vault is compromised, or goes up in smoke. The term was easily findable, so I’m guessing it’s widely used in IT. The concept makes sense – common wisdom in IT for “break glass” accounts is to secure them really thoroughly. “Break glass” accounts do not need to be convenient to use – these are emergency measures only.
The guidance recommends that if our IDS or logging ever sees anyone use a break-glass account, then those tools should issue the highest priority alarms they can to our security operations center (SOC). This makes sense. Use these powerful accounts in emergencies, not for routine remote access.
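The alerting rule the guidance describes is straightforward to sketch. The account names and log format below are hypothetical, purely for illustration – the point is that any use of a break-glass account maps directly to the highest-priority alert a SOC tool can raise:

```python
# Hypothetical break-glass account names -- any site would use its own list.
BREAK_GLASS_ACCOUNTS = {"bg-ot-admin", "bg-firecall"}

def triage(log_line: str) -> str:
    """Return 'CRITICAL' for any login by a break-glass account, 'normal' otherwise.

    Assumes a simple 'key=value' log format, invented for this sketch.
    """
    fields = dict(f.split("=", 1) for f in log_line.split() if "=" in f)
    if fields.get("action") == "login" and fields.get("user") in BREAK_GLASS_ACCOUNTS:
        return "CRITICAL"  # emergency account in use: page the SOC immediately
    return "normal"

alert = triage("user=bg-ot-admin action=login src=10.0.0.7")
routine = triage("user=operator1 action=login src=10.0.0.8")
```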
Section (8) talks about isolation / islanding: disconnecting IT from OT in IT emergencies, such as a ransomware infection, so OT can continue working throughout the IT emergency. This advice is not unique – the US TSA continues to require emergency isolation for rail systems in TSA SD 1580-21-01E, for pipelines in TSA SD 2021-02F, and the Danes require it in their latest Executive Order 260 of 2025. What is unique is the connection to hardware-enforced unidirectional gateway technology. The advice suggests either:
While I’ve seen many of these kinds of unidirectional islanding deployments in the last several years, and I’m aware that regulators seem happy with those designs, this is the first time I’ve seen unidirectional hardware actually described and recommended in guidance in the context of an islanding / isolation discussion.
There are minor nits I could pick with the document: the guidance uses “secure” as an adjective (first law of OT security – nothing is “secure”), it talks about CIA / AIC / etc. as if information was the asset we are protecting (we in fact protect safe, reliable and efficient physical operations), and talks about “compensating controls” as if boundary protection were a secondary priority, rather than the first priority for preventing cyber-sabotage (see Biba’s 50-year-old cybersecurity theory).
But there is no point in picking nits. While difficult to understand sometimes, this is a groundbreaking piece of guidance, covering useful topics that I’ve never seen covered before. Good job.
Click here for more information on Unidirectional Gateways.
Click here for more information on Hardware-Enforced Remote Access.
New guidance from the UK NCSC, co-signed by CISA, BSI, Australia’s ACSC and others, introduces significant updates for securing critical infrastructure.
In this webinar we will review the 8 principles and dozens of sub-principles, while introducing a simple grid for visualizing coverage. We apply the grid to network architectures typically seen in power generation, pipelines and passenger metros, evaluating the residual risk for each architecture in light of this guidance.
Aggressive patching for Internet-exposed and IT-exposed equipment.
Centralizing dangerous IT and Internet connectivity
Designing communications to simplify inspection
Hardening the IT/OT interface with hardware-enforced remote access and unidirectional technologies
Firewalled micro-segmentation to control lateral movement
“Browsing down” for engineering workstations
Managing “break-glass” accounts
New designs for unidirectional hardware in emergency islanding / isolation scenarios
The post Applying the New NCSC / CISA Guidance appeared first on Waterfall Security Solutions.
The post IT/OT Cyber Theory: Espionage vs. Sabotage appeared first on Waterfall Security Solutions.
The second generation of OT security advice started to emerge in 2012-2016. At the time, the difference between the second and first gen advice was a bit confusing. In hindsight, one important difference has become clear – the difference between preventing cyber-sabotage vs. cyber-espionage. We do not prevent sabotage the same way we prevent espionage. 50-year-old cybersecurity theory (wow – we’ve been at this a long time) makes the difference clear. Bell / La Padula’s theory is how we prevent espionage, while Biba’s theory is how we prevent cyber-sabotage.
Let’s look at each of these theories and at how they define one of the fundamental differences between our approach to OT vs IT security.
First-gen OT security advice said, loosely: protect OT networks much the way we protect IT networks – protect the confidentiality, integrity and availability (CIA) of information.
And of course, we muttered at the time a bit about CIA vs AIC vs IAC as priorities, but we all agreed, however hard the concept seemed at the time, that information was the asset we were protecting. This was and is, back of the envelope, exactly what we still do on IT networks. After all, when engineering teams first started looking at cybersecurity, who were the experts we could call on for help? There were no OT security experts back then, and so we called on IT experts. It is therefore no surprise that first-gen OT security advice was close to indistinguishable from IT security advice.
The theory backing up preventing theft of information was defined by Bell and La Padula. The theory had its roots in timeshared computers – 50 years ago, large organizations had only small numbers of computers with hundreds of users each. And in some organizations, like the military, it was really important that we prevent low-classification users from reading high-classification national secrets. Bell / La Padula theory mandated that, to prevent espionage:
1) No “read up” – users must not read information classified above their own clearance level, and
2) No “write down” – users must not write information into areas classified below their own clearance level.
Rule (1) is obvious to most people encountering the theory for the first time. (2) often seems a little strange. To make sense of (2), imagine that malware has established a foothold in a classified user’s account. If the user can write sensitive classified information into less-sensitive areas of the computer, then so can the malware. In the worst case, the information may be steganographically encoded – such as spreading the information through the low-order bits of pixels in images. To prevent all information leakage, we must forbid any information flowing from high-security to low-security users and systems, because steganographic encoding is always possible, at least in theory.
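The low-order-bits trick mentioned above is easy to demonstrate. This toy sketch (my own illustration, not from any real system) hides one secret byte in the least significant bits of eight “pixel” values – each pixel changes by at most one brightness level, invisible to casual inspection:

```python
def hide_byte(pixels, secret):
    """Hide one secret byte in the LSBs of eight pixel values."""
    out = list(pixels)
    for i in range(8):
        bit = (secret >> i) & 1
        out[i] = (out[i] & ~1) | bit  # overwrite only the least significant bit
    return out

def recover_byte(pixels):
    """Reassemble the hidden byte from the LSBs."""
    return sum((pixels[i] & 1) << i for i in range(8))

cover = [200, 201, 198, 197, 202, 203, 199, 196]   # original pixel brightnesses
stego = hide_byte(cover, ord("A"))                  # visually indistinguishable
secret = recover_byte(stego)                        # the hidden byte comes back out
max_change = max(abs(a - b) for a, b in zip(cover, stego))
```

Since a channel this subtle can be built from any high-to-low data flow, the theory’s only airtight answer is to forbid such flows entirely.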
Second-gen advice said, loosely, that in most OT systems, information is not the most important asset we protect, but rather:
(a) the assets we protect are safe, reliable and efficient physical operations, and
(b) since all cyber-sabotage attacks are information, we protect those operations by controlling the flow of information into our systems.
At the time this advice came out, (a) made a lot of sense to a lot of engineering teams. They had never been comfortable with the idea that information was the asset they were trying to protect. (b) seemed a bit strange at first to a lot of people but made sense if you thought about it for a day or two. Nobody can deny that cyber-sabotage is information – the only way an automation system can change from a normal state to a compromised state is if attack information enters the system, somehow. Controlling the flow of information therefore makes sense – and if we think about first-gen OT security advice, such as the IEC 62443-1-1 standard, a good half of that first standard was focused on network segmentation – controlling the flow of attack information.
The theory backing up this second-gen perspective was defined by Biba, not Bell and La Padula. Biba’s theory also had its roots in timeshared computers for the military, but was focused on preventing sabotage, not preventing espionage. Eg: think the difference between preventing re-targeting of nuclear weapons, vs. preventing the theft of the knowledge of how to build those same weapons. Biba’s theory mandated that, to prevent cyber-sabotage:
1) No “read down” – high-integrity users and systems must not pull information from less-trusted, lower-integrity sources, and
2) No “write up” – less-trusted users and systems must not write information into higher-integrity systems.
Rule (2) is easier to understand for most people encountering the theory for the first time – a malicious actor must not be able to write malware into a higher security level (eg: to change the missiles’ targets). In Biba’s theory, (1) is the strange one. To make sense of it, imagine that malware has established a foothold in a less-secured, less-sensitive network, like the Internet. If a sensitive network pulls information from the Internet, we risk pulling malware, which if activated, can wreak havoc.
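The symmetry between the two theories is easiest to see in code. This sketch (levels and scenarios are my own illustration, with higher numbers meaning higher security or integrity) checks read and write requests against each model’s rules:

```python
def blp_allows(op, subject_level, object_level):
    """Bell / La Padula (confidentiality): no read up, no write down."""
    if op == "read":
        return subject_level >= object_level   # may read at or below own level
    if op == "write":
        return subject_level <= object_level   # may write at or above own level
    return False

def biba_allows(op, subject_level, object_level):
    """Biba (integrity): no read down, no write up."""
    if op == "read":
        return subject_level <= object_level   # may read at or above own level
    if op == "write":
        return subject_level >= object_level   # may write at or below own level
    return False

# Example levels: Internet = 0, IT = 1, OT = 2 (hypothetical assignment).
ot_reads_internet = biba_allows("read", 2, 0)    # forbidden: risk of pulling malware
internet_writes_ot = biba_allows("write", 0, 2)  # forbidden: risk of pushing malware
clerk_reads_secret = blp_allows("read", 0, 2)    # forbidden: espionage risk
ot_pushes_to_it = biba_allows("write", 2, 1)     # permitted: the "push only" pattern
```

Note how Biba’s permitted direction – high-integrity OT pushing data outward to lower-trust networks – is exactly the “push only” architecture the NCSC guidance recommends.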
Second-gen advice therefore generally forbade any online transfer of information from less-secure networks into high-consequence safety-critical or equipment-critical networks.
Data Diodes were the military’s answer to Bell / La Padula and Biba. Unidirectional Gateways were OT security’s answer. The difference? Data diodes are most often deployed to keep classified information in (Bell / La Padula), while unidirectional gateways are most often deployed to keep attack information out of OT networks (Biba).
There are secondary differences as well. For example, data diodes typically transmit a very limited number of data types into military networks through custom-engineered software, while unidirectional gateways replicate OPC, historian and many other kinds of servers out to IT networks using off-the-shelf software components.
And every rule has exceptions. Many manufacturing operations use trade secrets that they cannot afford to have stolen, for example. And most industrial operations need some very small, very select data to flow back into the system from time to time.
Both Bell / La Padula and Biba’s theories provided for these exceptions, and demanded that any data flow that violated the primary principles be minimal, simple, understandable, and deeply scrutinized to ensure that the primary objective (preventing espionage, or sabotage, respectively) was not compromised by these secondary objectives and data flows.
Third-gen OT security advice, FTR, is still emerging and is focused on resilience. The theoretical framework behind resilience is more engineering practice than mathematics, but we are working on it. The most thorough, most widely-used resilience framework today is Idaho National Laboratory’s (INL’s) Cyber-Informed Engineering (CIE). CIE is positioned as “the big umbrella.” CIE encompasses cyber-relevant parts of safety engineering, protection engineering, automation engineering, and network engineering, as well as most of the cybersecurity discipline, including all of Bell / La Padula and Biba’s theories.
An important difference between IT and OT networks is the difference between preventing espionage and preventing sabotage. First-gen advice seemed a hard fit for OT, in part because that advice tried to apply the language and concepts of preventing espionage to the task of preventing sabotage. In hindsight, second-gen advice corrected this, though neither generation of advice used the words “espionage” or “sabotage,” nor did they reference 50-year-old theory.
Today our terminology is maturing, and OT security’s connections to the theoretical foundations of cybersecurity are becoming clearer. Clarifying this understanding and terminology helps a lot when trying to get our engineering and enterprise security teams to work together. If we are to cooperate effectively, we need to understand foundational differences between the assets and networks we protect, and we need a terminology to express those differences as we design our joint security programs.
This is one of the topics that will be covered in Waterfall’s Jan 28 webinar Bringing Engineering on Board and Resetting IT Expectations. Please click here to register.
The post Ships Re-Routed, Ships Run Aground appeared first on Waterfall Security Solutions.
“Everyone” has heard of the 5-week shutdown of Jaguar Land Rover by a cyber attack. That attack is the obvious headline for Waterfall’s upcoming webinar “Top 10 OT Cyber Attacks of 2025” that I’m currently researching. But – is this attack the most interesting of 2025?
Here are a few other incidents for consideration:
While details of the investigations into these events have not been published, on the surface the three incidents seem to be evidence of the importance of evaluating residual risk when we design automation and cybersecurity systems.
A bit of background first: GPS Spoofing (as opposed to simpler GPS jamming) is when false geolocation signals are transmitted, either directionally to affect a specific target, or broadcast in a region to affect indiscriminately all nearby receivers. GPS satellite signals are comparatively weak, and it does not take a very powerful transmitter to overwhelm legitimate signals. GPS spoofing has become fairly common in kinetic conflict areas such as the Middle East (the Red Sea in particular), the North/South Korean border, the Black Sea and Baltic Sea, Northern Europe, and anywhere near Ukraine and western Russia. All of which means that anyone who cares about where they are in these and other regions really cannot rely exclusively on GPS.
The original report of the teenager’s hack of ship routes included graphics with the appearance of an Electronic Chart Display and Information System (ECDIS), which is a shipboard system that regulators allow as a substitute for paper charts. An ECDIS displays the position and heading of vessels automatically, pulling information from the ship’s GPS and other location systems, as well as Automatic Identification System (AIS) broadcasts from nearby ships detailing those ships’ location, speed, heading and other navigational data. Some (all?) of these ECDIS can also steer ships by auto-pilot, once a route is entered. While the news report’s ECDIS-looking graphic was entitled “Maritime traffic in the Mediterranean” and subsequent reports claimed the teenager in fact hacked into one or more ECDIS, these reports may not be accurate. It seems more plausible, to me at least, that the individual hacked into a shore-side system that managed route planning for multiple ships, rather than hacked into multiple ships at sea and modified their shipboard systems to bring about the diversions.
Managing cyber risk to physical operations involves more than blindly deploying a bunch of OT security controls, dusting our hands off, and walking away. It’s easy to say “Hah! They should have had two factor!” or some such, but 2FA isn’t going to help with GPS spoofing is it?
Once we’ve deployed an automation or security system, we need to evaluate residual risk – what’s left over? The right way to do this is not just to produce a list of missing patches in our PLCs. The right way is to look at a representative spectrum of credible attacks – attacks that are reasonable to believe may be leveled against us, the system, or someone much like us or the system, within our planning horizon. Evaluate these credible attacks against our defensive posture and determine the credible consequences – what consequences are reasonable to expect when a credible attack hits us? And when those consequences are unacceptable (eg: ship runs aground, oil tanker is diverted into environmentally sensitive waters), we need to change something.
For example, given the prevalence of GPS spoofing in many regions, and the prevalence of GPS jammers in many more, it seems reasonable to me that anyone (operating a ship, an aircraft, or a locomotive) who needs to know their precise position or even the precise time needs multiple, independent sources of that information. And we need alarms to sound when those independent sources disagree materially, and we need manual or other fall-back procedures when we detect such disagreement.
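The cross-check idea can be sketched in a few lines. This illustration (thresholds and coordinates are invented) compares a GPS fix against an independent dead-reckoning estimate and raises an alarm when the two diverge materially:

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius, meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def position_disagrees(gps_fix, inertial_fix, threshold_m=500.0):
    """Alarm when independent position sources diverge beyond a threshold."""
    return haversine_m(*gps_fix, *inertial_fix) > threshold_m

# A spoofed GPS fix roughly 5 km off the dead-reckoning estimate should alarm;
# a few meters of ordinary sensor disagreement should not.
alarm = position_disagrees((31.0, 32.30), (31.0, 32.25))
quiet = position_disagrees((31.0, 32.3000), (31.0, 32.3001))
```

Real navigation systems fuse many more sources with far more sophistication, but the principle – alarm on material disagreement, then fall back to manual procedures – is this simple.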
Another example – given the importance of a big vessel’s route, it seems reasonable that when the route changes for any reason, the captain should be notified of the change, and the change logged in an indelible / WORM ship’s log. It also seems reasonable that captains or acting captains are trained to examine unexpected route changes to make sure they make sense – not just because of potential attacks, but because of potential errors and omissions of shipboard or on-shore personnel. Note: I’m not an expert on shipboard systems – for all I know all this happens already and is how the teenager’s hack was detected? One can hope.
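The “indelible log” property can itself be sketched in software. True WORM storage enforces immutability in hardware or media; this hypothetical hash-chained log only illustrates the idea – each entry’s digest covers the previous entry, so after-the-fact tampering breaks the chain and is detectable:

```python
import hashlib

class ChainedLog:
    """Append-only log where each entry's hash covers the previous entry's hash,
    so editing any past entry breaks verification. A software sketch only --
    real WORM media enforces immutability physically."""

    def __init__(self):
        self.entries = []            # list of (text, digest) pairs
        self._last = "0" * 64        # genesis value for the hash chain

    def append(self, text: str) -> None:
        digest = hashlib.sha256((self._last + text).encode()).hexdigest()
        self.entries.append((text, digest))
        self._last = digest

    def verify(self) -> bool:
        prev = "0" * 64
        for text, digest in self.entries:
            if hashlib.sha256((prev + text).encode()).hexdigest() != digest:
                return False         # chain broken: someone edited history
            prev = digest
        return True

log = ChainedLog()
log.append("route changed: waypoint 4 moved; captain notified")
log.append("route change acknowledged by captain")
intact = log.verify()
# Tamper with the first entry's text while keeping its recorded digest:
log.entries[0] = ("route changed: nothing to see here", log.entries[0][1])
tampered = not log.verify()
```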
When we make decisions about other people’s safety, we have ethical and often legal obligations to make reasonable decisions. For that matter, when we make decisions about other people’s money, especially large amounts of it, we have similar obligations. OT security is more than OT putting our head in the sand and saying “Ship route planning is an IT system.” It is more than IT putting their head in the sand and saying “Not running aground is the captain’s responsibility.” Every business has an obligation to make reasonable design, training and other decisions about the safety of the public and workers, and reasonable decisions about the large amounts of money invested in physical processes like large ships.
More generally, we study attacks to understand what is reasonable to defend against. And we study breaches and defensive failures to try to understand whether our own management processes would really have prevented analogous breaches and failures.
Subscribe to our blog and receive insights straight to your inbox
The post Ships Re-Routed, Ships Run Aground appeared first on Waterfall Security Solutions.
The most recent CISA, CCCS et al alert / advice on pro-Russian hacktivists targeting critical infrastructures is a lot of good work, with one or two exceptions. The alert documents poorly resourced hacktivists connecting with ICS gear over the Internet and hacking it. That gear tends to control critical infrastructures in the smallest, poorest and weakest of critical infrastructure installations – infrastructures most in need of simple, clear advice.
To its credit, the guide documents threats and tactics, and provides advice to both owners / operators and device manufacturers. However, the guide misses the mark in the section “OT Device Manufacturers.” I find this language very misleading:
“Although critical infrastructure organizations can take steps to mitigate risks, it is ultimately the responsibility of OT device manufacturers to build products that are secure by design.”
And,
“By using secure by design tactics, software manufacturers can make their product lines secure “out of the box” without requiring customers to spend additional resources making configuration changes, purchasing tiered security software and logs, monitoring, and making routine updates.”
When I read these words, the message I get is “If device manufacturers would only do their job better, then critical infrastructure owners and operators could ignore security and go forth to connect as much of their control systems as they wish to the Internet.”
This is of course nonsense.
We can configure “secure” products into hopelessly insecure systems, just as we routinely (with a bit of care) configure “insecure” ICS products into “secure” systems. That manufacturers should “take ownership of security outcomes” does not mean they can or should ever take sole ownership of such outcomes. A sentence or two to this effect would help readers better understand the relative responsibilities of manufacturers vs. owners & operators.
By analogy, automobile manufacturers can build all the seat belts, turn signals and rear-view mirrors they want into their vehicles, but owners and operators still need to be taught to use these features to improve their driving safety. More specifically, owners and operators of the smallest, poorest and most vulnerable critical infrastructures need to hear that it is never reasonable for them to deploy safety-critical or reliability-critical HMIs on the Internet, no matter what “secure by design” features have been built into these products.
And again, while I commend these organizations for doing the work of putting out the alert / guidance, my second piece of feedback is that the advice to owners and operators missed the mark. It is not that the advice is wrong – it is aimed at the wrong audience. The advice is appropriate for larger, “medium-sized” infrastructures with a larger workforce, some of whom are knowledgeable in basic computer and cybersecurity concepts. The hacktivist attacks we’re talking about are targeting the smallest, poorest and least well-defended of critical infrastructures globally. These are organizations that uniformly suffer from STP Syndrome – the Same Three People do everything.
There are no staff in these organizations who will understand the carefully phrased, completely general and abstract language of the guide’s 8 major recommendations and 17 sub-recommendations. These smallest organizations need the simplest advice possible.
Again – I commend these organizations for making the effort. Securing the smallest, least-capable critical infrastructures is a hard problem to solve. This document is much better than nothing but would benefit from clearer and stronger guidance targeting owners and operators of the smallest critical infrastructure control systems, not just manufacturers of the control devices in those systems.
The post New CISA, CCCS et al Alert | Advice on Pro-Russian Hacktivists Targeting appeared first on Waterfall Security Solutions.
In many organizations the relationship between IT/enterprise security and OT/engineering teams is dysfunctional. These teams work in the same organization, support the same mission, and even address many of the same threats, but when they sit down together it sounds like they need relationship counseling. Much has been written about the problem. Most of that writing misses the point, focusing on symptoms, not root causes. In this webinar we dig into causes, solutions and how to ask the right questions to guide the relationship into healthy cooperation.
Consequence is one root cause of OT/IT differences – we cannot restore human lives and damaged equipment from backups
Another root cause – we defeat OT sabotage with many of the same tools as we defeat IT espionage, but we must use those tools differently
Who manages OT equipment is less important than how that equipment is managed
We need to avoid common mistakes regarding inertia, criticality, credibility, and consequences
The post Bringing Engineering on Board and Resetting IT Expectations appeared first on Waterfall Security Solutions.
“We have new intel. The threat has changed, the probability has changed, the impact has changed, whatever it might be. Do we still feel good about our previous judgment of this?” – Kayne McGladrey
The post We can’t – and shouldn’t – fix everything – Episode 147 appeared first on Waterfall Security Solutions.