Information Security Management

6 Cybersecurity Metrics that Financial Institutions Should NOT Report to the BoD

16 Jan 2020 | Randy Lindberg

If you are responsible for, or at least involved in, cybersecurity compliance at a financial institution, you know that several key controls in the FFIEC CAT and NCUA ACET require organizations to report on cybersecurity to the Board of Directors (BoD). The BoD is ultimately responsible for overseeing cybersecurity, and to oversee the program effectively, they need to know what is going on.

Some examples of the controls related to BoD reporting in the CAT/ACET (these are performed annually, if not more frequently):

  • The board or an appropriate board committee reviews and approves the institution’s cybersecurity program on an annual basis.
  • Management provides a written report on the overall status of the information security and business continuity programs to the board or an appropriate board committee.
  • The institution prepares an annual report of security incidents or violations for the board or an appropriate board committee.

Unfortunately, there is no prescriptive guidance on what specifically to report and more importantly, what NOT to report. When we help financial institutions improve their cybersecurity programs or perform a CAT/ACET assessment, we dig into Board reporting. We look at, understand, and make recommendations about what is being reported to the BoD.

In many cases, we see some form of numbers often referred to as vanity metrics. The numbers look good at first glance, such as 1 bazillion spam emails blocked, but upon further consideration don’t have any actual business value. In reality, these kinds of values being reported may actually be a detriment to the security program by providing a false sense of security! With that in mind, we have assembled a short list to help out our friends in the financial world.

Below are six “metrics” (using the term loosely here) that we commonly see reported to Boards or Board committees, and that should not be.

(These items have no business showing up in a Board packet. Ever.)

1. Number of spam emails blocked

This may be the most common number we see in cybersecurity reports to the Board. The number is touted by the anti-spam filtering tool, so it’s easy to find, and it looks cool. The only reason it looks cool is that it’s typically very large, given the massive amount of spam being sent constantly. The spam filter should be fine-tuned, if possible, to block the highest number of spam emails without introducing an unacceptable level of false positives. Reporting the huge number of messages blocked might trick a Board member into thinking user training is less important because 99% of spam messages are being blocked.

Unfortunately, the real concern is the spam emails that are crafty enough to get by the filtering algorithm, and therefore more likely to get by a human’s mental filter. More important than the number of messages blocked is what’s getting through. Most important of all is how employees respond when they see a spam email that made it past the filter. Can they identify and report it, or do they click the links?

What to report instead: Report on employee cybersecurity awareness training results. Let the Board know how successful your training program is by demonstrating a decrease in happy-clickers over time.
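As an illustration of the kind of trend worth reporting, here is a minimal Python sketch of computing a click-rate trend from phishing-simulation results. The quarters, send counts, and click counts are all hypothetical numbers, not data from any real program.

```python
# Hypothetical quarterly phishing-simulation results: (quarter, emails sent, clicks).
results = [
    ("2019-Q1", 250, 45),
    ("2019-Q2", 250, 31),
    ("2019-Q3", 250, 22),
    ("2019-Q4", 250, 14),
]

def click_rates(rows):
    """Return (quarter, click rate as a percentage) for each simulation round."""
    return [(q, round(100 * clicks / sent, 1)) for q, sent, clicks in rows]

for quarter, rate in click_rates(results):
    print(f"{quarter}: {rate}% of recipients clicked")  # 18.0% falling to 5.6%
```

A falling click rate across quarters is a single, business-relevant line on a Board slide, unlike a raw count of blocked spam.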

2. Qualitative measures of risk

Unfortunately, the most common method of measuring cybersecurity risk is qualitative, using some flavor of ordinal scale. Even the most well-known risk assessment guide, NIST SP 800-30, prescribes a qualitative approach to measuring risk.

Here is the problem: ordinal scales. We as an industry don’t measure IT risk well. We are still in the wild west of information security decision making, and to make matters worse, we’re not getting value out of our risk assessment efforts. We just know we need to do it because the CAT/ACET and our examiners tell us it needs to be done.

An ordinal scale is one that denotes an order. For example, Medium is higher than Low, and High is higher than Medium; two is more than one, and Neutral is better than Bad. There are scenarios where ordinal scales work very well.

For example: restaurant ratings. We all have a pretty good understanding of what it means for a restaurant to have a five-star rating (or a one-star rating... yikes!). We also understand what it means when it is rated as $ versus $$$.

Let’s say you’re shopping for a home, and the lender tells you your payments are going to be $$$$. What the heck does that really mean? Is that $1,000 a month or is that $4,000 per month?

It is the same concept with cybersecurity risk. We security people have been telling executives for years that we have high risks, or a level-5 risk, and that we need to spend thousands of dollars to mitigate them. But what does that actually mean?

What to report instead: Find a quantitative measurement approach that works for you. At the very least, use financial ranges in place of simple ordinal scales, and preferably a more advanced method like Monte Carlo analysis.
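As a minimal sketch of what Monte Carlo analysis looks like for a single risk scenario, the following Python snippet simulates an annual loss distribution. The event probability, loss range, and uniform loss model are illustrative assumptions for the sake of the example, not real figures or a recommended model.

```python
import random
import statistics

def simulate_annual_loss(p_event, loss_low, loss_high, trials=100_000, seed=42):
    """Monte Carlo sketch: in each simulated year, the loss event either occurs
    (with probability p_event) and costs a uniform draw from [loss_low, loss_high],
    or does not occur and costs nothing."""
    rng = random.Random(seed)
    losses = []
    for _ in range(trials):
        if rng.random() < p_event:
            losses.append(rng.uniform(loss_low, loss_high))
        else:
            losses.append(0.0)
    return losses

# Illustrative inputs: a 10% annual chance of a breach costing $50k-$500k.
losses = simulate_annual_loss(0.10, 50_000, 500_000)
print(f"Expected annual loss:  ${statistics.mean(losses):,.0f}")
print(f"95th percentile loss:  ${statistics.quantiles(losses, n=20)[-1]:,.0f}")
```

“Expected annual loss of roughly $28k, with a 5% chance of exceeding $X” is something a Board can weigh against the cost of a control; “High risk” is not.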

3. Additional Security Tools

More people, applications, and tools are often considered a measure of success, but more is not always better. A longer list of resources doesn’t necessarily mean better security, and telling the Board that another security appliance was plugged in doesn’t demonstrate it.

What to report instead: Tell the Board what risks have been reduced and which gaps in the security program have been filled, whether the improvements were done with existing tools or new ones.

4. CVSS Scores

When reporting vulnerabilities, prioritization needs to consider the real risk to the organization. When you run a scan, the scanning tool doesn’t know the context of the asset being scanned, so it has to use generic ratings that serve only as a starting point.

For example: we see a lot of SSL issues reported as Medium-rated vulnerabilities. In an external network scan, most of them legitimately are Medium. However, if the same SSL vulnerabilities show up in an internal network scan, many of them realistically should be rated Low.

Also, consider the likelihood of the vulnerabilities being exploited. Again, the scanner knows very little about the assets being scanned. In many cases a vulnerability will be present on a system that is unlikely to be exploited.

Side note: this adjustment is a standard part of our vulnerability assessments. Hopefully your security vendor does it too.

What to report instead: The Board would be better off seeing adjusted vulnerability ratings rather than the raw results spit out by the vulnerability scanning tool.
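To make the adjustment concrete, here is a minimal Python sketch of a rating-adjustment rule. The one-notch up/down logic and the `internet_facing`/`asset_critical` flags are our own illustrative assumptions for the example, not a standard formula from CVSS or any scanner.

```python
SEVERITIES = ["Low", "Medium", "High", "Critical"]

def adjusted_rating(scanner_severity, internet_facing, asset_critical):
    """Illustrative rule: drop the scanner's rating a notch for findings that
    are not reachable from the internet (e.g. internal-only SSL issues), and
    raise it a notch when the finding is internet-facing on a critical asset."""
    idx = SEVERITIES.index(scanner_severity)
    if not internet_facing:
        idx = max(idx - 1, 0)
    elif asset_critical:
        idx = min(idx + 1, len(SEVERITIES) - 1)
    return SEVERITIES[idx]

print(adjusted_rating("Medium", internet_facing=False, asset_critical=False))  # Low
print(adjusted_rating("Medium", internet_facing=True, asset_critical=True))    # High
```

The point is not this particular rule, but that the rating reaching the Board reflects the asset’s exposure and value rather than the scanner’s generic score.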

5. Perimeter Attacks Blocked

On a daily basis, there are likely to be thousands of threats hitting your perimeter firewall from all over the world. Some organizations like to report the number because, much like spam messages being blocked, it looks cool. However, telling the Board about relatively normal network activity doesn’t provide any value. Rather, it might give the Board a false sense of security.

What to report instead: Results of firewall testing, along with attacks that made it past the firewall and were subsequently detected or blocked.

6. Unpatched Vulnerabilities

Certainly patching vulnerabilities is important, but the number of vulnerabilities patched, by itself, doesn’t provide any actionable information. Some people might report that they’ve patched 200 vulnerabilities in the last month or quarter, which sounds great on the surface, but there might still be 200 critical vulnerabilities awaiting patches.

The number of vulnerabilities patched, outside the context of the IT risk assessment, does not provide information around the importance of the assets being patched, or the number of assets.

What to report instead: Telling the Board the ratio of critical and high vulnerabilities patched would at least give them an idea of what is left to patch, so they don’t develop a false sense of security. Also, providing historical context for the number of vulnerabilities lets them know whether the situation is improving or the team is not keeping up with new vulnerabilities.
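A minimal Python sketch of that historical context, using hypothetical monthly counts of critical/high vulnerabilities found versus patched (the months and numbers are made up for illustration):

```python
# Hypothetical monthly counts: (month, critical/high found, critical/high patched).
history = [
    ("Oct", 180, 120),
    ("Nov", 150, 140),
    ("Dec", 130, 135),  # patched more than were newly found: backlog shrinking
]

def patch_trend(rows):
    """Return (month, percent of new findings patched, running backlog) tuples."""
    backlog = 0
    out = []
    for month, found, patched in rows:
        backlog = max(backlog + found - patched, 0)
        out.append((month, round(100 * patched / found, 1), backlog))
    return out

for month, pct, backlog in patch_trend(history):
    print(f"{month}: patched {pct}% of new findings, backlog now {backlog}")
```

The running backlog is what tells the Board whether the team is gaining ground or falling behind; the raw count of patches applied, on its own, hides that entirely.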