IT/OT Cyber Theory: Espionage vs. Sabotage

Andrew Ginter

The second generation of OT security advice started to emerge in 2012-2016. At the time, the difference between second-gen and first-gen advice was a bit confusing. In hindsight, one important difference has become clear: the difference between preventing cyber-sabotage vs. cyber-espionage. We do not prevent sabotage the same way we prevent espionage. **50**-year-old cybersecurity theory (wow, we’ve been at this a long time) makes the difference clear: Bell / La Padula’s theory is how we prevent espionage, while Biba’s theory is how we prevent cyber-sabotage.

Let’s look at each of these theories and at how they define one of the fundamental differences between our approach to OT vs IT security.

First-Gen Security Advice

First-gen OT security advice said, loosely:

  1. Information is the asset we protect, so
  2. Assure the confidentiality, integrity and availability (CIA) of the information assets.

And of course, we muttered at the time a bit about CIA vs. AIC vs. IAC as priorities, but we all agreed, however hard the concept seemed at the time, that information was the asset we were protecting. This was, and roughly speaking still is, exactly what we do on IT networks. After all, when engineering teams first started looking at cybersecurity, who were the experts we could call on for help? There were no OT security experts back then, and so we called on IT experts. It is therefore no surprise that first-gen OT security advice was close to indistinguishable from IT security advice.

The theory behind preventing theft of information was defined by Bell and La Padula. The theory had its roots in timeshared computers: 50 years ago, large organizations had only small numbers of computers, with hundreds of users each. And in some organizations, like the military, it was really important to prevent low-classification users from reading high-classification national secrets. Bell / La Padula theory mandated that, to prevent espionage:

  1. A “subject” or “actor” at a given security level must never be able to read information from a higher security / classification level, and
  2. That actor must never be able to write information to any lower security level.

Rule (1) is obvious to most people encountering the theory for the first time. (2) often seems a little strange. To make sense of (2), imagine that malware has established a foothold in a classified user’s account. If the user can write sensitive classified information into less-sensitive areas of the computer, then so can the malware. In the worst case, the information may be steganographically encoded – such as spreading the information through the low-order bits of pixels in images. To prevent all information leakage, we must forbid any information flowing from high-security to low-security users and systems, because steganographic encoding is always possible, at least in theory.
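
To make rules (1) and (2) concrete, here is a minimal sketch of a Bell / La Padula check in Python. The level names and the tiny reference-monitor functions are illustrative assumptions for this post, not any real system’s implementation:

```python
# A minimal sketch of Bell / La Padula's "no read up, no write down"
# rules. Level names are illustrative assumptions.
from enum import IntEnum

class Level(IntEnum):
    UNCLASSIFIED = 0
    SECRET = 1
    TOP_SECRET = 2

def blp_can_read(subject: Level, obj: Level) -> bool:
    """Rule (1), "no read up": a subject may read only information
    at or below its own clearance level."""
    return subject >= obj

def blp_can_write(subject: Level, obj: Level) -> bool:
    """Rule (2), "no write down": a subject may write only at or above
    its own level, so a compromised high-level account (or its malware)
    cannot leak secrets downward -- not even steganographically, because
    no downward flow of any kind is permitted."""
    return subject <= obj

# A SECRET user may read UNCLASSIFIED data, but may not write to it.
assert blp_can_read(Level.SECRET, Level.UNCLASSIFIED)
assert not blp_can_write(Level.SECRET, Level.UNCLASSIFIED)
```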

Second-Gen OT Security

Second-gen advice said, loosely, that in most OT systems, information is not the most important asset we protect, but rather:

  a. Safe, reliable and efficient physical operations are what we protect, and
  b. All cyber-sabotage is (by definition) information, so to protect physical operations, we must control the flow of attack information into high-consequence automation systems and networks from lower-consequence networks.

At the time this advice came out, (a) made a lot of sense to a lot of engineering teams. They had never been comfortable with the idea that information was the asset they were trying to protect. (b) seemed a bit strange at first to a lot of people but made sense if you thought about it for a day or two. Nobody can deny that cyber-sabotage is information – the only way an automation system can change from a normal state to a compromised state is if attack information enters the system, somehow. Controlling the flow of information therefore makes sense – and if we think about first-gen OT security advice, such as the IEC 62443-1-1 standard, a good half of that first standard was focused on network segmentation – controlling the flow of attack information.

The theory behind this second-gen perspective was defined by Biba, not Bell and La Padula. Biba’s theory also had its roots in timeshared computers for the military, but was focused on preventing sabotage rather than espionage. E.g., think of the difference between preventing the re-targeting of nuclear weapons vs. preventing the theft of the knowledge of how to build those same weapons. Biba’s theory mandated that, to prevent cyber-sabotage:

  1. A “subject” or “actor” at a given security level must never be able to read information from a lower security level, and
  2. That actor must never be able to write information to any higher level.

Rule (2) is easier to understand for most people encountering the theory for the first time: a malicious actor must not be able to write malware into a higher security level (e.g., to change the missiles’ targets). In Biba’s theory, rule (1) is the strange one. To make sense of it, imagine that malware has established a foothold in a less-secured, less-sensitive network, like the Internet. If a sensitive network pulls information from the Internet, we risk pulling malware, which, if activated, can wreak havoc.
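
Biba’s rules mirror Bell / La Padula’s exactly, with the inequalities flipped. A minimal sketch, using hypothetical network names in place of military integrity levels:

```python
# A minimal sketch of Biba's "no read down, no write up" integrity rules,
# mirroring the Bell / La Padula sketch above. Level names are
# illustrative assumptions, not a standard taxonomy.
from enum import IntEnum

class Integrity(IntEnum):
    INTERNET = 0         # low trust, low integrity
    ENTERPRISE_IT = 1
    SAFETY_CRITICAL = 2  # high consequence, high integrity

def biba_can_read(subject: Integrity, obj: Integrity) -> bool:
    """Rule (1), "no read down": a high-integrity system must not pull
    information from a lower-integrity source, because that information
    may be, or may carry, an attack."""
    return subject <= obj

def biba_can_write(subject: Integrity, obj: Integrity) -> bool:
    """Rule (2), "no write up": a low-integrity actor must never write
    into a higher-integrity system."""
    return subject >= obj

# The safety-critical network may not read from the Internet, and the
# Internet may not write into the safety-critical network.
assert not biba_can_read(Integrity.SAFETY_CRITICAL, Integrity.INTERNET)
assert not biba_can_write(Integrity.INTERNET, Integrity.SAFETY_CRITICAL)
```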

Second-gen advice therefore generally forbade any online transfer of information from less-secure networks into high-consequence safety-critical or equipment-critical networks.

Data Diodes + Unidirectional Gateways

Data Diodes were the military’s answer to Bell / La Padula and Biba. Unidirectional Gateways were OT security’s answer. The difference?

  • Data Diodes send information into confidential military networks and are physically unable to leak any national secrets back out.
  • Unidirectional Gateways send information out of OT networks into IT, and are physically unable to leak cyber-sabotage attacks back in.

There are secondary differences as well. For example, data diodes typically transmit a very limited number of data types into military networks through custom-engineered software, while unidirectional gateways replicate OPC, historian and many other kinds of servers out to IT networks using off-the-shelf software components.
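
In a real unidirectional gateway, the one-way property is enforced in hardware (for example, a transmit-only fiber link), not in software. The Python sketch below only illustrates the replication pattern: an OT-side sender pushes historian values out to IT and contains no receive path at all. The host, port and tag names are hypothetical:

```python
# Software illustration only: hardware, not code, enforces one-way flow
# in a real unidirectional gateway. The OT-side sender below replicates
# historian values outbound and never calls recvfrom() -- there is no
# code path for anything to flow back in.
import json
import socket
import time

IT_REPLICA = ("192.0.2.10", 5000)  # hypothetical IT-side replica (RFC 5737 address)

def read_tags() -> dict:
    # Stand-in for polling the OT historian; tag names and values are made up.
    return {"pump1.flow": 42.0, "pump1.pressure": 3.1, "ts": time.time()}

def replicate_forever() -> None:
    tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    while True:
        tx.sendto(json.dumps(read_tags()).encode(), IT_REPLICA)
        time.sleep(1.0)  # send-only loop: outbound data, no inbound path
```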

And every rule has exceptions. Many manufacturing operations use trade secrets that they cannot afford to have stolen, for example. And most industrial operations need some very small, very select data to flow back into the system from time to time.

Both Bell / La Padula and Biba’s theories provided for these exceptions, and demanded that any data flow that violated the primary principles be minimal, simple, understandable, and deeply scrutinized to ensure that the primary objective (preventing espionage, or sabotage, respectively) was not compromised by these secondary objectives and data flows.
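
What might a “minimal, simple, understandable” exception path look like in practice? One hedged sketch, with hypothetical tag names and ranges: the only inbound data accepted is a single numeric setpoint for a named tag, checked against a hard-coded allowlist and range before it is passed on. Anything else is rejected:

```python
# A sketch of a deeply-constrained exception path: the only inbound flow
# permitted is a numeric setpoint for an allowlisted tag, range-checked
# before acceptance. Tag names and ranges are hypothetical.
ALLOWED_SETPOINTS = {
    "boiler1.temp_setpoint": (80.0, 120.0),  # (min, max), degrees C
}

def accept_inbound(tag: str, value: float) -> float:
    if tag not in ALLOWED_SETPOINTS:
        raise ValueError(f"tag {tag!r} is not on the inbound allowlist")
    lo, hi = ALLOWED_SETPOINTS[tag]
    if not (lo <= value <= hi):
        raise ValueError(f"{value} outside permitted range [{lo}, {hi}]")
    return value  # minimal, fully-scrutinized flow; nothing else gets in
```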

Resilience

Third-gen OT security advice, for the record, is still emerging and is focused on resilience. The theoretical framework behind resilience is more engineering practice than mathematics, but we are working on it. The most thorough, most widely-used resilience framework today is Idaho National Laboratory’s (INL’s) Cyber-Informed Engineering (CIE). CIE is positioned as “the big umbrella”: it encompasses cyber-relevant parts of safety engineering, protection engineering, automation engineering, and network engineering, as well as most of the cybersecurity discipline, including all of Bell / La Padula’s and Biba’s theories.

Using This Knowledge

An important difference between IT and OT networks is the difference between preventing espionage and preventing sabotage. First-gen advice seemed a hard fit for OT, in part because that advice tried to apply the language and concepts of preventing espionage to the task of preventing sabotage. In hindsight, second-gen advice corrected this, though neither generation of advice used the words “espionage” or “sabotage,” nor did either reference 50-year-old theory.

Today our terminology is maturing, and OT security’s connections to the theoretical foundations of cybersecurity are becoming clearer. Clarifying this understanding and terminology helps a lot when trying to get our engineering and enterprise security teams to work together. If we are to cooperate effectively, we need to understand foundational differences between the assets and networks we protect, and we need a terminology to express those differences as we design our joint security programs.

Digging Deeper

This is one of the topics that will be covered in Waterfall’s Jan 28 webinar, Bringing Engineering on Board and Resetting IT Expectations. Please click here to register.

About the author
Andrew Ginter

Andrew Ginter is the most widely-read author in the industrial security space, with over 23,000 copies of his three books in print. He is a trusted advisor to the world's most secure industrial enterprises, and contributes regularly to industrial cybersecurity standards and guidance.