28. January 2014 · Comments Off on Prioritizing Vulnerability Remediation – CVSS vs. Threat Intelligence · Categories: Uncategorized · Tags: , , , ,

The CVSS vulnerability scoring system is probably the most popular method for prioritizing vulnerability remediation. Unfortunately, it's wildly inaccurate as a predictor of which vulnerabilities actually get exploited. Dan Geer, CISO of In-Q-Tel, and Michael Roytman, predictive analytics engineer at Risk I/O, published a paper in December 2013, entitled Measuring vs. Modeling, that shows empirically just how badly CVSS performs.

The authors had access to 30 million live vulnerabilities across 1.1 million assets from 10,000 organizations. In addition, they had a second data set of SIEM logs from 20,000 organizations, from which they extracted exploit signatures. They then paired those exploits with vulnerability scans of the same organizations. The time period for their analysis was June to August 2013.

Although the two sets of data come from different organizations, the authors believe the data sets are large enough that correlating them produces significant insights. Maybe more importantly, they say, "Because this is observed data, per se, we contend that it is a better indicator than the qualitative analysis done during CVSS scoring."

The first step of their analysis was to establish a base rate, i.e., the probability that a randomly selected vulnerability is one that resulted in a breach. They determined that the base rate was 2%. Then they used CVSS scores to correlate vulnerabilities to breaches: a CVSSv2 score of 9 corresponded to a 2.4% breach probability, and a CVSSv2 score of 10 to 3.5%.
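To make that calculation concrete, here is a minimal sketch of the conditional probability the authors are estimating, P(breach | signal), computed from paired vulnerability/breach records. The record layout and field names here are hypothetical; the paper does not publish its code:

```python
def breach_rate(vulns, predicate):
    """Fraction of vulnerabilities matching `predicate` that were breached."""
    matched = [v for v in vulns if predicate(v)]
    if not matched:
        return 0.0
    return sum(1 for v in matched if v["breached"]) / len(matched)

# Hypothetical records, one dict per observed vulnerability.
vulns = [
    {"cvss": 10.0, "in_exploit_db": True,  "in_metasploit": True,  "breached": True},
    {"cvss": 9.0,  "in_exploit_db": False, "in_metasploit": False, "breached": False},
    # ... 30 million of these in the actual study
]

base_rate = breach_rate(vulns, lambda v: True)               # paper: ~2%
p_cvss_9  = breach_rate(vulns, lambda v: v["cvss"] == 9.0)   # paper: ~2.4%
p_cvss_10 = breach_rate(vulns, lambda v: v["cvss"] == 10.0)  # paper: ~3.5%
```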

So how did threat intelligence do? As a proxy for threat intelligence, they used Exploit-DB and Metasploit, individually and combined. The numbers were 12.6%, 25.1%, and 29.2% respectively! In other words, a vulnerability's presence in both Exploit-DB and Metasploit was roughly eight to twelve times more predictive of a breach than a CVSSv2 score of 10 or 9 (29.2% vs. 3.5% and 2.4%).
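For reference, here is the lift arithmetic behind that comparison, using only the percentages published in the paper (simple arithmetic, not a reanalysis of their data):

```python
# Lift of each signal over the paper's 2% base rate.
base = 0.02
signals = {
    "CVSSv2 = 9":              0.024,
    "CVSSv2 = 10":             0.035,
    "Exploit-DB":              0.126,
    "Metasploit":              0.251,
    "Exploit-DB + Metasploit": 0.292,
}
for name, p in signals.items():
    print(f"{name:24s} {p:5.1%}  lift over base rate: {p / base:4.1f}x")
```

Running this shows the combined Exploit-DB/Metasploit signal at about 14.6x the base rate, versus 1.75x for a CVSSv2 score of 10.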

This jibes with similar work done by Luca Allodi of the University of Trento. He found that 87.8% of vulnerabilities that had a CVSS score of 9 or 10 were never exploited. "Conversely, a large portion of Exploit-DB and Symantec's intelligence go unflagged by CVSS scoring; however, this is still a definitional analysis."

This USENIX paper is well worth reading in its entirety, as are the references it provides.

One caveat: the second author's company, Risk I/O, offers a vulnerability prioritization service based on threat intelligence, so you might suspect that this study was performed with the end in mind of proving the value of their service. However, I find it hard to believe that Dan Geer would participate in such a scam, nor do I think USENIX would be easily fooled. In addition, this study's results are consistent with Luca Allodi's. I would surely be interested in hearing from anyone who can show that CVSS is a better predictor of exploitation than threat intelligence.


27. November 2010 · Comments Off on Why Counting Flaws is Flawed — Krebs on Security · Categories: blog · Tags: , ,

Why Counting Flaws is Flawed — Krebs on Security.

Krebs calls into question Bit9's "Dirty Dozen" Top Vulnerable Application List, which placed Google's Chrome at number one. The key issue is that ranking applications simply by vulnerability count and severity creates a misleading picture.

Certainly severity is an important criterion, but it does not equal risk. Krebs highlights several additional factors that affect risk level (a toy scoring sketch follows the list):

  • Was the vulnerability discovered in-house — or was the vendor first alerted to the flaw by external researchers (or attackers)?
  • How long after being initially notified or discovering the flaw did it take each vendor to fix the problem?
  • Which products had the broadest window of vulnerability, from notification to patch?
  • How many of the vulnerabilities were exploitable using code that was publicly available at the time the vendor patched the problem?
  • How many of the vulnerabilities were being actively exploited at the time the vendor issued a patch?
  • Which vendors make use of auto-update capabilities? For those vendors that include auto-update capabilities, how long does it take “n” percentage of customers to be updated to the latest, patched version?
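Krebs proposes no formula, but to show how these factors might move a score, here is a purely illustrative sketch. Every weight below is a hypothetical assumption, not anything from his post:

```python
from dataclasses import dataclass

@dataclass
class Vuln:
    severity: float            # e.g. CVSS base score, 0-10
    externally_reported: bool  # found by outsiders/attackers, not in-house
    days_to_patch: int         # notification-to-patch window
    public_exploit: bool       # exploit code public before the patch shipped
    actively_exploited: bool   # attacks observed before the patch shipped
    auto_update: bool          # vendor pushes the fix automatically

def risk_score(v: Vuln) -> float:
    # Hypothetical weights chosen only to illustrate the factors' direction.
    score = v.severity
    score *= 1.5 if v.externally_reported else 1.0
    score *= 1.0 + min(v.days_to_patch, 365) / 365  # longer window, more risk
    score *= 2.0 if v.public_exploit else 1.0
    score *= 3.0 if v.actively_exploited else 1.0
    score *= 0.5 if v.auto_update else 1.0          # fast rollout cuts exposure
    return score

print(risk_score(Vuln(7.5, True, 120, True, True, False)))  # ~90.4
```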

When these factors are taken into consideration, Krebs opines that Adobe comes in first, second, and third!