28. January 2014 · Prioritizing Vulnerability Remediation – CVSS vs. Threat Intelligence

The CVSS vulnerability scoring system is probably the most popular method for prioritizing vulnerability remediation. Unfortunately, it’s wildly inaccurate. Dan Geer, CISO of In-Q-Tel, and Michael Roytman, predictive analytics engineer at Risk I/O, published a paper in December 2013 entitled Measuring vs. Modeling that shows empirically just how bad CVSS is.

The authors had access to 30 million live vulnerabilities across 1.1 million assets from 10,000 organizations. In addition, they had another data set of SIEM logs of 20,000 organizations from which they extracted exploit signatures. They then paired those exploits with vulnerability scans of the same organizations. The time period for their analysis was June to August 2013.

Although the two data sets come from different organizations, the authors believe they are large enough that correlating them produces significant insights. Perhaps more importantly, they say, “Because this is observed data, per se, we contend that it is a better indicator than the qualitative analysis done during CVSS scoring.”

The first step of their analysis was to establish a base rate, i.e. the probability that a randomly selected vulnerability is one that resulted in a breach. They determined that the base rate was 2%. Then they used CVSS scores to correlate vulnerabilities to breaches: a CVSSv2 score of 9 yielded a probability of 2.4%, and a CVSSv2 score of 10 yielded 3.5%.

So how did threat intelligence do? As proxies for threat intelligence they used Exploit-DB and Metasploit, individually and combined. The numbers were 12.6%, 25.1%, and 29.2% respectively! Clearly, using Exploit-DB and Metasploit together was almost 10 times better than CVSSv2.
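To make the comparison concrete, here is a minimal sketch of the conditional-probability calculation the paper describes. The record layout and sample records are hypothetical placeholders; the paper’s actual analysis pairs tens of millions of live vulnerabilities with observed breach data.

```python
# Hypothetical records: (cvss_score, in_exploit_db, in_metasploit, breached)
vulns = [
    (10.0, True,  True,  True),
    (9.0,  False, False, False),
    (5.0,  True,  False, False),
    # ... tens of millions of records in the real data set
]

def p_breach(records, predicate):
    """P(breach | predicate): the share of vulnerabilities matching
    the predicate that were also tied to an observed breach."""
    matching = [r for r in records if predicate(r)]
    return sum(1 for r in matching if r[3]) / len(matching) if matching else 0.0

base_rate  = p_breach(vulns, lambda r: True)            # paper: ~2.0%
cvss_10    = p_breach(vulns, lambda r: r[0] == 10.0)    # paper: ~3.5%
exploit_db = p_breach(vulns, lambda r: r[1])            # paper: ~12.6%
metasploit = p_breach(vulns, lambda r: r[2])            # paper: ~25.1%
both       = p_breach(vulns, lambda r: r[1] and r[2])   # paper: ~29.2%
```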

This jibes with similar work done by Luca Allodi of the University of Trento. He found that 87.8% of vulnerabilities with a CVSS score of 9 or 10 were never exploited. “Conversely, a large portion of Exploit-DB and Symantec’s intelligence go unflagged by CVSS scoring; however, this is still a definitional analysis.”

This Usenix paper is well worth reading in its entirety, as are the references it provides.

One caveat: the second author’s company, Risk I/O, offers a vulnerability prioritization service based on threat intelligence, so you might suspect that this study was performed with the predetermined goal of proving the value of that service. However, I find it hard to believe that Dan Geer would participate in such a scam, nor do I think Usenix would be easily fooled. In addition, this study’s results are consistent with Luca Allodi’s. I would be very interested in hearing from anyone who can show that CVSS is a better predictor of vulnerability exploitation than threat intelligence.

27. January 2014 · Detecting unknown malware using sandboxing or anomaly detection

It’s been clear for several years that signature-based anti-virus and Intrusion Prevention / Detection controls are not sufficient to detect modern, fast-changing malware. Sandboxing has become a popular (rightfully so) complementary control for detecting “unknown” malware, i.e. malware for which no signature yet exists. The concept is straightforward: analyze suspicious inbound files by allowing them to run in a virtual machine environment and observing their behavior. While sandboxing has been successful, it’s worthwhile to understand its limitations. Here they are:

  • Access to the malware in motion, i.e. on the network, is not always available.
  • Most sandboxing solutions are limited to Windows.
  • Malware authors have developed techniques to discover virtualized or testing environments (see the sketch after this list).
  • Newer malware communication techniques use random, one-time domains and non-HTTP protocols.
  • Sandboxing cannot confirm that malware actually installed on and infected the endpoint.
  • Droppers, the first stage of multi-stage malware, are often the only part that gets analyzed.
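As one example of the virtualization-detection limitation, here is a toy sketch of a trick malware uses: comparing the host’s MAC address prefix (OUI) against prefixes assigned to hypervisor vendors. Real malware combines many such checks (timing, registry artifacts, CPU features); the OUI list here is partial and the logic is illustrative only.

```python
import uuid

# MAC address prefixes (OUIs) assigned to common hypervisor vendors.
# A partial, illustrative list: VMware and VirtualBox.
VM_OUIS = {"00:50:56", "00:0c:29", "08:00:27"}

mac = uuid.getnode()  # this host's MAC address as a 48-bit integer
mac_str = ":".join(f"{(mac >> shift) & 0xff:02x}" for shift in range(40, -8, -8))

if mac_str[:8] in VM_OUIS:
    print("virtual machine suspected -- malware would stay dormant")
else:
    print("likely a real endpoint -- malware would detonate")
```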

Please check out Damballa’s Webcast on the Shortfalls of Security Sandboxing for more details.

Let me reiterate: I am not saying that sandboxing is not valuable. It surely is. However, due to the limitations listed above, we recommend that it be complemented by a log-based anomaly detection control that analyzes one or more of the following: outbound DNS traffic; all outbound traffic through the firewall and proxy server; user connections to servers; for retailers, POS terminal connections to servers; and application authentications and authorizations. In addition to these different network traffic sources, a variety of statistical approaches are available, including supervised and unsupervised machine learning algorithms.
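As a flavor of what anomaly detection over outbound DNS traffic can look like, here is a minimal, unsupervised sketch that flags domains whose names look algorithmically generated, using Shannon entropy as the statistic. The single feature and the threshold are simplifications of my own; production systems combine many features and per-environment baselines.

```python
import math
from collections import Counter

def entropy(label: str) -> float:
    """Shannon entropy of a DNS label; algorithmically generated (DGA)
    domains tend to score higher than human-chosen names."""
    counts = Counter(label)
    n = len(label)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def flag_suspicious(queried_domains, threshold=3.5):
    """Flag domains whose second-level label looks random.
    The threshold is illustrative, not tuned."""
    flagged = []
    for domain in queried_domains:
        labels = domain.rstrip(".").split(".")
        sld = labels[-2] if len(labels) >= 2 else labels[0]
        if entropy(sld) > threshold:
            flagged.append(domain)
    return flagged

# Toy usage: the second domain's random-looking label gets flagged.
print(flag_suspicious(["mail.example.com", "x7kq9zpl2vw8ra.info"]))
```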

So, to substantially reduce the risk of a data breach from unknown malware, the question is not sandboxing or anomaly detection; it’s sandboxing and anomaly detection.

20. January 2014 · How Palo Alto Networks could have prevented the Target breach

Brian Krebs’ recent posts on the Target breach, A First Look at the Target Intrusion, Malware, and A Closer Look at the Target Malware, provide the most detailed and accurate analysis available.

The malware the attackers used captured the complete credit card data contained on the mag stripe via “memory scraping”:

This type of malicious software uses a technique that parses data stored briefly in the memory banks of specific POS devices; in doing so, the malware captures the data stored on the card’s magnetic stripe in the instant after it has been swiped at the terminal and is still in the system’s memory. Armed with this information, thieves can create cloned copies of the cards and use them to shop in stores for high-priced merchandise. Earlier this month, U.S. Cert issued a detailed analysis of several common memory scraping malware variants.
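To make the technique concrete, here is a toy sketch of the scanning step: searching a captured memory buffer for Track 2 data (PAN, ‘=’ separator, expiry, service code). The buffer and regular expression are simplified placeholders of my own, not the actual Target malware’s code.

```python
import re

# Simplified Track 2 pattern: 13-19 digit PAN, '=' separator,
# 4-digit expiry (YYMM), 3-digit service code.
TRACK2 = re.compile(rb"(\d{13,19})=(\d{4})(\d{3})")

# Hypothetical snapshot of POS process memory (PAN is a test number).
memory_snapshot = b"...;4111111111111111=25121010000000000000?..."

for match in TRACK2.finditer(memory_snapshot):
    pan, expiry, service = (g.decode() for g in match.groups())
    print(pan, expiry, service)
```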

Furthermore, no known antivirus software at the time could detect this malware.

The source close to the Target investigation said that at the time this POS malware was installed in Target’s environment (sometime prior to Nov. 27, 2013), none of the 40-plus commercial antivirus tools used to scan malware at virustotal.com flagged the POS malware (or any related hacking tools that were used in the intrusion) as malicious. “They were customized to avoid detection and for use in specific environments,” the source said.

The key point I want to discuss, however, is that the attackers took control of an internal Target server and used it to collect and store the stolen credit card information from the POS terminals.

Somehow, the attackers were able to upload the malicious POS software to store point-of-sale machines, and then set up a control server within Target’s internal network that served as a central repository for data hoovered by all of the infected point-of-sale devices.

“The bad guys were logging in remotely to that [control server], and apparently had persistent access to it,” a source close to the investigation told KrebsOnSecurity. “They basically had to keep going in and manually collecting the dumps.”

First, the POS terminals obviously have to communicate with specific Target servers to complete and store transactions. Second, the communications between the POS terminals and the malware on the compromised server(s) could have been denied had policies been defined and enforced to do so. Palo Alto Networks’ Next Generation Firewalls are ideal for this use case for two reasons:

  1. Palo Alto Networks enables you to include zone, IP address, port, user, protocol, application information, and more in a single policy.
  2. Palo Alto Networks firewalls monitor all ports for all protocols and applications, all of the time, to enforce these policies and establish a Positive Control Model (default deny, i.e. application traffic whitelisting), as sketched below.
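To illustrate the Positive Control Model this enables, here is a minimal sketch of default-deny policy evaluation over zone, application, and port. All of the names are hypothetical, and the code models the policy logic only; it is not Palo Alto Networks’ actual configuration syntax.

```python
from dataclasses import dataclass

@dataclass
class Rule:
    src_zone: str
    dst_zone: str
    app: str   # application identified by content inspection, not port alone
    port: int

# The whitelist: POS terminals may speak only the payment application
# to the transaction servers -- nothing else, on any port.
ALLOW_RULES = [
    Rule("pos-zone", "txn-servers", "pos-payment-app", 443),
]

def evaluate(src_zone: str, dst_zone: str, app: str, port: int) -> str:
    for rule in ALLOW_RULES:
        if (rule.src_zone, rule.dst_zone, rule.app, rule.port) == \
           (src_zone, dst_zone, app, port):
            return "allow"
    return "deny"  # default deny: anything unmatched is dropped

# Legitimate payment traffic is allowed...
print(evaluate("pos-zone", "txn-servers", "pos-payment-app", 443))   # allow
# ...but malware exfiltrating card dumps to an internal staging server
# is denied even on an "open" port, because the application doesn't match.
print(evaluate("pos-zone", "internal-servers", "custom-exfil", 443))  # deny
```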

You might very well ask: why couldn’t Router Access Control Lists be used? Or why not a traditional port-based, stateful inspection firewall? Because these types of network controls limit policy definitions to ports, IP addresses, and protocols, and therefore cannot enforce a Positive Control Model. They are simply not detailed enough to control traffic with a high degree of confidence. One or the other might have worked in the 1990s, but by the mid-2000s, network-based applications were regularly bypassing both of these types of controls.

Therefore, if Target had deployed Palo Alto Networks firewalls between the POS terminals and their servers with granular policies to control POS terminals’ communications by zone, port, and application, the malware on the POS terminals would never have been able to communicate with the server(s) the attackers compromised.

In addition, it’s possible that the POS terminals might never have become infected in the first place, because the server(s) the attackers initially compromised would not have been able to communicate with the POS terminals. Note that I am not assuming the servers used to compromise the POS terminals were the same servers used to collect the stolen credit card data.

Unfortunately, a control with the capabilities of Palo Alto Networks’ is not specified by the Payment Card Industry (PCI) Data Security Standard (DSS). Yes, “Requirement #1: Install and maintain a firewall configuration to protect cardholder data” seems to cover the subject. However, you can fully meet this PCI DSS requirement with a port-based, stateful inspection firewall, and as I said above, an attacker can easily bypass this 1990s type of network control. Retailers and e-commerce sites need to go beyond PCI DSS to actually protect themselves. What you need is a Next Generation Firewall like Palo Alto Networks’, which enables you to define and enforce a Positive Control Model.

06. January 2014 · Two views on FireEye’s Mandiant acquisition

There are two views on the significance of FireEye’s acquisition of Mandiant. One is the consensus view typified by Arik Hesseldahl in Why FireEye Is the Internet’s New Security Powerhouse. Arik sees synergy between FireEye’s network-based appliances and Mandiant’s endpoint agents.

Richard Stiennon has a different view in Will FireEye’s Acquisition Strategy Work? Richard believes that FireEye’s stock is wildly overvalued compared to more established players like Check Point and Palo Alto Networks. While FireEye initially led the market with network-based “sandboxing” technology to detect unknown threats, most of the major security vendors have since matched or even exceeded FireEye’s capabilities. IMHO, you should not even consider a network-based security manufacturer that doesn’t provide integrated sandboxing technology to detect unknown threats. Therefore, Richard argues, the only way FireEye can meet Wall Street’s revenue expectations is via acquisitions made with its inflated stock:

The best strategy for a high-flying public company whose products do not have staying power is to embark on an acquisition spree that juices revenue. In those terms, trading overvalued stock for Mandiant, with estimated 2013 revenue of $150 million, will easily satisfy Wall Street’s demand for continued growth to sustain valuations. FireEye has already locked in 100% growth for 2014.

It will probably take a couple of years to determine who is correct.