The principles, methods, and tools for performing good risk measurement already exist and are being used successfully by organizations today. They take some effort, and they are well worth it.
There's an old saying in marketing: "Half of your marketing dollars are wasted. You just don't know which half." This has become far less true in recent years for organizations that apply rigorous quantitative marketing analysis techniques.
Unfortunately, given common practices in cybersecurity today, you could substitute "cybersecurity" for "marketing" in that old saying and have to wonder whether it's still accurate. At the very least, you'd have to decide how you would argue that it isn't. For example, if I asked what your organization's most valuable cybersecurity investment has been over the past three years, how would you answer?
How Do We Define Cybersecurity Value?
You can't reliably measure what you haven't clearly defined, so before we can have an intelligent conversation about cybersecurity value, we first have to clearly define what we mean. For this, I turn to the question I've heard executives ask many times over the years: "How much less risk will we have if we spend these dollars on cybersecurity?" Clearly, from their perspective (and it's their perspective that matters), cybersecurity value should be measured by how much less risk the organization faces.
Unfortunately, what I commonly see in board reports, budget justifications, and conference presentations is something different. Most of the time, we as an industry appear to lean on implicit proxies for risk reduction: improvements against NIST CSF (National Institute of Standards and Technology Cybersecurity Framework) benchmarks, credit-like security scores, and higher compliance ratings. Don't get me wrong; these are useful directional references that generally mean an organization has less risk. The problem is that we don't know how much less risk, and the "how much" matters.
For example, if your organization's overall NIST CSF score went from 2.5 to 2.9 last year, what does that 0.4 improvement mean in terms of risk reduction? Along the same lines, how much less risk comes from reducing the time to patch or shortening the time to detect a breach?
Measuring Risk Reduction
Everything we do in cybersecurity in some way affects, directly or indirectly, the probable frequency and/or magnitude of loss-event scenarios. That being the case, measuring the value of our efforts begins with clearly defining the loss-event scenarios we're trying to affect. At a superficial level, this often boils down to confidentiality breaches, availability outages, and compromises of data integrity. That level of abstraction isn't usually very useful in risk measurement though, so we need to be more specific.
A more reasonable level of specificity would include, for example, a confidentiality breach of which information, by which threat community, via which vector. At this level of abstraction, you can begin to evaluate the effect of cybersecurity controls on the frequency and magnitude of loss for that scenario.
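To make that concrete, here is a minimal sketch (my illustration, not part of the original discussion) of how a loss-event scenario might be captured with that level of specificity, along with calibrated range estimates for frequency and magnitude. The field names and every number shown are hypothetical.

```python
# Minimal sketch: recording a loss-event scenario with enough specificity
# (which information, which threat community, which vector) to analyze it.
# All fields and estimates below are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LossEventScenario:
    asset: str                   # e.g., "consumer credit card records"
    threat_community: str        # e.g., "external cybercriminals"
    vector: str                  # e.g., "compromised DBA credentials"
    effect: str                  # confidentiality, integrity, or availability
    loss_event_frequency: tuple  # (min, most likely, max) events per year
    loss_magnitude: tuple        # (min, most likely, max) dollars per event

scenario = LossEventScenario(
    asset="consumer credit card records",
    threat_community="external cybercriminals",
    vector="compromise of a database administrator's credentials",
    effect="confidentiality",
    loss_event_frequency=(0.05, 0.2, 0.5),   # illustrative numbers only
    loss_magnitude=(1e6, 5e6, 25e6),          # illustrative numbers only
)
print(scenario)
```

Capturing scenarios in a structured form like this is what later makes it possible to compare how different controls affect the same scenario.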
If that sounds like more work than you're used to applying in risk measurement, it's not surprising. Most of what passes for risk measurement today is nothing more than someone proclaiming high/medium/low risk.
Value Analysis
To drive my point home, let me share a high-level example from my past as a CISO. The organization I worked for had huge databases containing millions of consumer credit card records. The Payment Card Industry (PCI) standard called for data-at-rest encryption (DaRE), which at the time would have cost the organization well over a million dollars, required modifications to key applications, and taken more than a year and a half to implement.
Rather than simply go to my executives with an expensive compliance problem, I took a couple of days to do the following:
1. Identify which loss-event scenarios DaRE was relevant to as a control.
2. Perform a quantitative risk analysis using Factor Analysis of Information Risk (FAIR) to determine how much risk we currently faced from those scenarios.
3. Perform a second analysis estimating the reduction in risk if we implemented DaRE.
4. Identify a set of alternative controls that were also relevant to the same loss-event scenarios. (These controls cost a fraction as much as DaRE, didn't require application changes, and could be implemented in a few months.)
5. Perform a third analysis estimating the reduction in risk if we implemented the alternative controls (which turned out to be a greater reduction in risk than DaRE).

The upshot is that I was able to go to my executives and the PCI auditor with options that included clearly described cost-benefit analyses. From their perspective, it was a no-brainer. (A simplified sketch of this kind of comparison appears below.)
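For illustration only, here is a minimal sketch of the kind of FAIR-style Monte Carlo comparison described above. The distributions, the estimates, and the assumption about how each option affects frequency versus magnitude are all hypothetical; a real analysis would use calibrated estimates from the actual environment.

```python
# Minimal sketch (my illustration, not the author's actual model) of comparing
# control options with a FAIR-style Monte Carlo simulation. Annualized loss
# exposure is approximated as frequency x magnitude; all numbers are invented.
import numpy as np

rng = np.random.default_rng(42)
TRIALS = 100_000

def simulate_annual_loss(freq_min, freq_mode, freq_max,
                         mag_min, mag_mode, mag_max):
    """Sample annualized loss exposure from triangular frequency/magnitude estimates."""
    frequency = rng.triangular(freq_min, freq_mode, freq_max, TRIALS)
    magnitude = rng.triangular(mag_min, mag_mode, mag_max, TRIALS)
    return frequency * magnitude

# Baseline: current risk from the scenario (illustrative estimates).
baseline = simulate_annual_loss(0.05, 0.2, 0.5, 1e6, 5e6, 25e6)

# Option 1: DaRE assumed to mostly reduce loss magnitude (stolen data unreadable).
dare = simulate_annual_loss(0.05, 0.2, 0.5, 0.2e6, 1e6, 5e6)

# Option 2: alternative controls assumed to mostly reduce event frequency.
alternative = simulate_annual_loss(0.01, 0.05, 0.15, 1e6, 5e6, 25e6)

for name, losses in [("baseline", baseline), ("DaRE", dare),
                     ("alternative controls", alternative)]:
    print(f"{name:>22}: mean ${losses.mean():,.0f}  "
          f"p90 ${np.percentile(losses, 90):,.0f}")
```

The value of each option is then the difference between its simulated loss exposure and the baseline, which can be weighed directly against its cost and implementation time.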
Because I didn't simply tell my executives that we had to bite the compliance bullet, the organization was able to save over a million dollars, avoid significant operational disruption, and reduce more risk in a shorter time frame.
The Bottom Line
Every dollar spent on cybersecurity is a dollar that can't be spent on the many other business imperatives with which an organization must deal. For this reason (and because we have an inherent obligation to be good stewards of our resources), we must be able to effectively measure and communicate the value proposition of our cybersecurity efforts.
Fortunately, the principles, methods, and tools for performing good risk measurement already exist and are being used successfully by organizations today. Do these analyses take more effort than proclaiming high/medium/low risk or falling back on ambiguous metrics? Absolutely. Is the extra effort worthwhile? I'll answer based on my experience as a CISO: yes, and it's not even close.