As security professionals helping customers build secure solutions, we’re often told to “think like an attacker”. That is to say, put yourself in an attacker’s shoes to work out how you might attack an application, a server or an infrastructure, and let this drive your thinking on what defences to build. This is normally followed by a quote from Sun Tzu about knowing your enemy and not fearing the battle.
I don’t disagree that this is one way of addressing the problem, but is it still the best way in today’s threat landscape? Questions that often arise with this approach are “who is our attacker?”, “what are their skills and capabilities?” and “what is their motivation?” This analysis can be subjective, and the answers can be so wide-ranging as to be unmanageable. Is it a rogue ex-employee seeking collateral damage, for example, a wannabe looking to be malicious and deface a website, or a hacktivist group attempting to DDoS an organisation?
In his book “Threat Modeling: Designing for Security”, Adam Shostack outlines approaches to threat modeling and argues that focusing on attackers is not as useful as it may appear to be. While it can make the threats seem more real, there is a danger that bias comes into play: people may dismiss whole categories of threat actor, come up with wildly imaginative attack scenarios, or focus only on the threats they have already mitigated with existing controls. Also, security professionals in the blue corner may not find it natural to think like their nemeses in the red corner, and could miss key threats.
Instead, Adam recommends focusing on the software because this is where most of the “knowns” are known.
As an alternative approach, what if security professionals were to “think like a developer” instead? I appreciate there are similar challenges here around understanding development processes and practices but, if we focused on the tools and techniques of developers, could this help us stay ahead of the game?
Another reason to consider a different approach is that, in today’s app-centric, fast-moving world of application development, it’s not necessarily a lack of understanding of the attacker that threatens our apps. It’s more often an inability to build security into applications and a reliance on reactive controls, or a lack of visibility into security weaknesses and an inability to respond in a timely fashion.
For the former, I’ve spoken previously about the need to move from DevOps to DevSecOps practices and embed security testing into application development pipelines, part of the “shift left” approach to secure development.
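To make the “shift left” idea concrete, here is a minimal, hypothetical sketch of the kind of gate that could run on every code commit: a check that fails the build if a pinned dependency matches a known-vulnerable version. The package names and the `KNOWN_VULNERABLE` entries below are purely illustrative, not a real advisory feed, and a real pipeline would query a proper vulnerability database or use an off-the-shelf scanner instead.

```python
# Illustrative "shift left" build gate: fail the pipeline when a pinned
# dependency matches a known-vulnerable version.
# KNOWN_VULNERABLE is a stand-in for a real advisory feed.
KNOWN_VULNERABLE = {
    ("struts", "2.3.31"),      # illustrative entry only
    ("libexample", "1.0.0"),   # illustrative entry only
}

def parse_requirements(text):
    """Parse simple 'name==version' pins from a requirements-style file."""
    pins = []
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):
            continue  # skip blanks and comments
        name, _, version = line.partition("==")
        pins.append((name.lower(), version))
    return pins

def security_gate(requirements_text):
    """Return the vulnerable pins; an empty list means the gate passes."""
    return [pin for pin in parse_requirements(requirements_text)
            if pin in KNOWN_VULNERABLE]

if __name__ == "__main__":
    sample = "requests==2.31.0\nstruts==2.3.31\n"
    findings = security_gate(sample)
    if findings:
        print(f"FAIL: vulnerable dependencies: {findings}")
    else:
        print("PASS")
```

Wired into a commit hook or CI stage, a check like this surfaces the problem at the point the developer introduces it, rather than months later in a penetration test.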
For the latter, it’s best to accept that there will always be bugs, but to have processes in place to respond quickly once new bugs are found. WhiteHat Security’s 2018 Application Security Statistics Report noted that the average “Time to Fix” across vulnerability risk levels had, in general, increased from the previous year; critical vulnerabilities took an average of 139 days to fix. Interestingly, consider the timeline of the Apache Struts vulnerability behind the Equifax breach: on March 6th Apache released a fix for the vulnerability, and the very next day an exploit appeared. Equifax was breached two months later, on 14th May, and it was a further two months before Equifax detected the breach.
Of course, organisations need to be aware of bugs and vulnerabilities before they can look to remediate them, so an effective monitoring strategy is a must, as outlined in the National Cyber Security Centre’s guidance. How, then, do we remediate within the very limited window between bug disclosure and potential breach, and significantly reduce the “Time to Fix”?
For both requirements, this is where using the tools and techniques of developers could serve a tremendous purpose. By breaking down the silos between development and security teams and embedding a collaborative approach, we can start to see where security can become part of the development lifecycle.
For instance, for each user story that a developer creates for an application, could there be an associated security story, with both becoming part of the product backlog? Could security testing become part of the development pipeline, triggered on each code commit alongside existing practices such as unit or integration testing? And when a threat or vulnerability is found, what about adding it to the existing defect-tracking system that, again, developers already work from as part of the product backlog?
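The third of those questions can be sketched in a few lines: mapping a raw scanner finding onto an item in the same backlog developers already triage. This is a hypothetical illustration; the field names (`rule`, `component`, `severity`) and the `BacklogItem` shape are assumptions for the sketch, not any real tracker’s API.

```python
# Hypothetical sketch: turn a security scanner finding into a backlog item,
# so security defects land in the same tracker developers already use.
from dataclasses import dataclass, field

@dataclass
class BacklogItem:
    title: str
    severity: str
    component: str
    labels: list = field(default_factory=list)

def finding_to_backlog_item(finding):
    """Map a raw scanner finding (a dict) onto a tracked backlog item."""
    return BacklogItem(
        title=f"[Security] {finding['rule']} in {finding['component']}",
        severity=finding.get("severity", "unknown"),
        component=finding["component"],
        labels=["security", "shift-left"],
    )

# Example finding, as a scanner might emit it (illustrative data).
finding = {"rule": "SQL injection", "component": "orders-api",
           "severity": "critical"}
item = finding_to_backlog_item(finding)
print(item.title)  # -> [Security] SQL injection in orders-api
```

The point of the design is that nothing new is asked of developers: a security finding becomes just another defect, prioritised and burned down through the backlog they already work from.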
These are just three examples where thinking like a developer can help with both building security into applications, and being able to respond in the most agile of ways to reduce the window of exposure.
Moving into 2019, there’s likely to be a greater requirement to “shift left”. While the traditional way of thinking like an attacker to build controls is still valid, perhaps in today’s world more value can be achieved by ensuring security is agile and iterative, with automation playing a key role: practices so frequently used by our developer colleagues.