The design process for safety systems is well established, and has been improved incrementally over the last few decades, specifically to cover the emergence of digital systems (see IEC 61508). For example, the concepts of addressing common cause failures, avoiding systematic errors, analysing hazards, and designing layers of protection are commonplace within the control and safety system community.
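To make one of those concepts concrete, the sketch below shows a Layer of Protection Analysis (LOPA) style calculation, one standard way of quantifying "layers of protection": the frequency of a hazardous event after mitigation is the initiating event frequency multiplied by the probability of failure on demand (PFD) of each independent layer. All figures are purely illustrative, not taken from any real assessment:

```python
# LOPA-style sketch: the mitigated frequency of a hazardous event is the
# initiating event frequency multiplied by the probability of failure on
# demand (PFD) of each independent protection layer.
# All figures below are illustrative, not from any real assessment.

initiating_event_freq = 1e-1  # demands/year, e.g. a loss-of-cooling event

# PFDs of independent protection layers (illustrative values)
protection_layers = {
    "basic process control system": 1e-1,
    "operator response to alarm": 1e-1,
    "safety instrumented function (SIL 2)": 1e-2,
}

mitigated_freq = initiating_event_freq
for layer, pfd in protection_layers.items():
    mitigated_freq *= pfd

print(f"Mitigated event frequency: {mitigated_freq:.1e} events/year")
# 1.0e-05 events/year, to be compared against a tolerable frequency target
```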

For safety systems, the demonstration of adequate design (design justification) is equally important: recorded evidence that safety analysis has been performed thoroughly, and that risks have been assessed and reduced.

In the early days of my career, working within the civil nuclear industry, these design processes were the focus of our work in the engineering team. Following the rigorous design process for a new system or a modification provided the opportunity to ensure that safety analysis had been performed in the right manner: with appropriately skilled and experienced personnel, selecting the right technology, engineering the solution, and ultimately demonstrating that the risks to the worker and the public had been reduced to as low as reasonably practicable (ALARP).

These design processes have evolved and improved over the years to address new technology and hazards, such as digital systems and devices, as well as external hazards such as space weather or changes to analysed seismic activity predictions. These improvements led to standards, guidance and recommendations that help engineering and safety teams address them; that is, they helped answer the question of "what do I do about it?"

With the emergence of cyber threats to digital control and safety systems, that same question needs answering. There have been enough incidents, reports and expert analysis to show that adversaries have the capability, and that control and safety systems are a target. Denial and security-by-obscurity cannot be considered legitimate options.

However, for those responsible for designing and justifying the operation of safety systems, where any change or modification requires assessment, it is legitimate to ask, in engineering terms: "OK, so what do I do about it? What can I do about it? How well do I need to engineer this?" The "Secure by Design" concept is widely touted, but what does it actually mean, and how do we achieve it for control and safety systems?

There are now a number of approaches being put forward that provide a methodology for assessing cyber risks to control and protection systems, using language and analysis that integrate with the long-standing design and justification processes.

Adopting and incorporating these approaches will provide huge benefits to those responsible for safety systems, helping them reach a state where the input required from the engineering, safety, cyber, threat intelligence and operations teams is defined and understood by all, including board members and regulators. This provides assurance for the system(s) in question, and lets the engineers get on with the engineering.

Additionally, if more asset owners adopt a similar approach, then their processes can be audited, benchmarked, and regularly reviewed and improved, providing benefits to the wider community.

Whilst this post does not attempt to compare these approaches, it does highlight the positive trend of integrating security into the design process rather than bolting it on at the end. This trend is great to see: incorporating security into the likes of Fault Tree Analysis and HAZOP studies is surely the best way to consider cybersecurity within the design process.
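To illustrate that last point, here is a minimal fault tree sketch showing how a cyber compromise can be modelled as a basic event alongside random hardware failures. The probabilities are assumed purely for the example; the point is structural: a successful attack on the logic solver acts as a common cause that bypasses channel redundancy:

```python
# Minimal fault tree sketch: modelling a cyber compromise as a basic event.
# All probabilities are assumed for illustration only.

def and_gate(*probs):
    """Output occurs only if all inputs occur (independent events)."""
    p = 1.0
    for x in probs:
        p *= x
    return p

def or_gate(*probs):
    """Output occurs if any input occurs (independent events)."""
    p = 1.0
    for x in probs:
        p *= (1.0 - x)
    return 1.0 - p

# Basic events: per-demand failure probabilities (illustrative)
sensor_channel_fails = 1e-3
logic_solver_fails = 1e-4
final_element_fails = 1e-3
cyber_compromise = 1e-3  # assumed; would come from a threat assessment

# Redundant sensor channels protect against random failure...
both_channels_fail = and_gate(sensor_channel_fails, sensor_channel_fails)

# ...but a compromise of the logic solver defeats the redundancy entirely,
# so it enters the tree as a single basic event at the top OR gate.
protection_fails = or_gate(
    both_channels_fail, logic_solver_fails, final_element_fails, cyber_compromise
)

print(f"P(protection fails on demand) = {protection_fails:.2e}")
# Without the cyber event this is ~1.1e-03; with it, ~2.1e-03.
```

The analogous move in a HAZOP study is to consider deliberate manipulation as a credible cause of each deviation, rather than only equipment failure or human error.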