02. November 2016 · Comments Off on Cybersecurity Architecture in Transition to Address Incident Detection and Response · Categories: blog · Tags: ,

I would like to discuss the most significant cybersecurity architectural change in the last 15 years. It’s being driven by new products built to respond to the industry’s abysmal incident detection record.

Overview

There are two types of products driving this architectural change. The first is new “domain-level” detection countermeasures, primarily endpoint and network domains. They dramatically improve threat detection by (1) expanding the types of detection methods used and (2) leveraging threat intelligence to detect zero day threats.

The second is a new type of incident detection and response productivity tool, which Gartner would categorize as SOAR (Security Operations, Analytics and Reporting). It provides SOC analysts with (1) playbooks and orchestration to improve analyst productivity, efficiency, and consistency; (2) automated correlation of alerts from SIEMs and the above-mentioned “next-generation” domain-based countermeasures; (3) the ability to query the domain-level countermeasures and other sources of information, such as CMDBs and vulnerability management solutions, via their APIs to provide SOC analysts with improved context; and (4) a graphical user interface supported by a robust database that enables rapid SOC analyst investigations.

I am calling the traditional approach we’ve been using for the last 15 years, “Monolithic,” because a SIEM or a single log repository has been at the center of the detection process. I’m calling the new approach “Composite” because there are multiple event repositories – the SIEM/log repository and those of the domain-level countermeasures.

First I will review why this change is needed, and then I’ll go into more detail about how the Composite architecture addresses the incident detection problems we have been experiencing.

Problems with SIEM

Back around 2000, the first Security Information and Event Management (SIEM) solutions appeared. The rationale for the SIEM was the need to consolidate and analyze the events, as represented by logs, being generated by disparate domain technologies such as anti-virus, firewalls, IDSs, and servers. While SIEMs did OK with compliance reporting and forensics investigation, they were poor at incident detection. The question is, why?

First, for the most part, SIEMs are limited to log analysis. While most of the criticism of SIEMs relates to the limitations of rule-based alerting (more on that below), limiting analysis to logs is a problem in itself for two reasons. One, capturing the details of actual activities from logs is difficult. Two, a log often represents just a summary of an event. So SIEMs were handicapped by their data sources.

Additionally, the SIEM’s rule-based analysis approach only alerts on predefined, known bad scenarios. New, creative, unknown scenarios are missed. Tuning rules for a particular organization is also very time consuming, especially when the number of rules that need to be tuned can run into the hundreds.

Another issue that is often overlooked is that SIEM vendors are generalists. They know a little about a lot of domains, but don’t have the same in-depth knowledge about a specific domain as a vendor who specializes in that domain. SIEM vendors don’t know as much about endpoint security as endpoint security vendors, and they don’t know as much about network security as network security vendors.

Finally, SIEMs have not addressed two other key issues that have plagued security operations teams for years – (1) lack of consistent, repeatable processes among different SOC analysts, and (2) mind-numbing repetitive manual tasks that beg to be automated. These issues, plus SIEMs’ high rate of false positives, sap SOC team morale, which results in high turnover. Considering cybersecurity’s “zero unemployment” environment, this is a costly problem indeed.

Problems with Traditional Domain-level Countermeasures

But the industry’s poor incident detection track record is not just due to SIEMs. Traditional domain-level detection products also bear responsibility. Let me explain.

Traditional domain-level detection products, whether endpoint or networking, must make their “benign (allow) vs suspicious (alert but allow) vs malicious (alert and block)” decisions in microseconds or milliseconds. A file appears. Is it malicious or benign? The anti-virus software on the endpoint must decide in a fraction of a second. Then it’s on to the next file. In-line IDS countermeasures face a similar problem. Analyze a packet. Good, suspicious, or malicious? Move on to the next packet. In some out-of-band cases, the countermeasure has the luxury of assembling a complete file before making the decision. File detonation/sandboxing products can take longer, but are still limited to minutes at most. Then it’s on to the next file.

So it’s really been the combination of the limitations of traditional domain-level countermeasures and SIEMs that has resulted in the poor record of incident detection, high false positive rates, and low morale and high turnover among SOC analysts. But there is hope. I am seeing a new generation of security products built around a new, Composite architecture that addresses these issues.

Next Generation Domain-level countermeasures

First, there are new, “next-generation” domain-level security companies that have expanded their analysis timeframe from microseconds to months. A next-gen endpoint product not only analyzes events on the endpoint itself, but also collects and stores hundreds of event types for further analysis over time. This also gives the next-gen endpoint product the ability to leverage threat intelligence, i.e. apply new threat intelligence retrospectively over the event repository to detect previously unknown zero-day threats.
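To make the retrospective idea concrete, here is a minimal sketch (the repository layout and field names are hypothetical, not any vendor's actual API): newly received indicators are swept back over events that were recorded when the files still looked benign.

```python
# Illustrative sketch only: the event repository and field names below are
# invented for this example, not any vendor's actual API.

def retrospective_sweep(event_repository, new_iocs):
    """Return stored events whose file hash matches an indicator we only
    learned about after the events were recorded."""
    return [e for e in event_repository if e.get("file_hash") in new_iocs]

# Events recorded when the files first appeared and looked benign...
events = [
    {"host": "laptop-17", "file_hash": "abc123", "seen": "2016-10-01"},
    {"host": "server-02", "file_hash": "def456", "seen": "2016-10-03"},
]

# ...get flagged weeks later, once threat intelligence catches up.
hits = retrospective_sweep(events, new_iocs={"def456"})
print([h["host"] for h in hits])  # ['server-02']
```

The point is that detection no longer has to happen in the instant the file arrives; the stored events give the countermeasure a second, third, and hundredth chance.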

A “next-gen” network security vendor collects, analyzes, and stores full packets. With full packets captured, the nature of threat intelligence actually expands beyond IP addresses, URLs, and file hashes to include new signatures. In addition, combinations of “weak” signals can be correlated over time to generate high fidelity alerts.
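The weak-signal correlation can be sketched as a simple scoring scheme (the signal names, weights, and threshold below are all invented for illustration): no single observation is alert-worthy, but an entity whose accumulated score crosses a threshold over the time window generates one high-fidelity alert.

```python
# Hypothetical sketch: signal types, weights, and threshold are invented.
from collections import defaultdict

WEIGHTS = {"rare_domain": 1, "odd_hour_login": 1, "beacon_interval": 2}
THRESHOLD = 3

def correlate(signals):
    """signals: (entity, signal_type) pairs observed over a time window.
    Returns the entities whose accumulated score crosses the threshold."""
    scores = defaultdict(int)
    for entity, signal in signals:
        scores[entity] += WEIGHTS.get(signal, 0)
    return [e for e, score in scores.items() if score >= THRESHOLD]

observed = [
    ("host-a", "rare_domain"),
    ("host-a", "beacon_interval"),   # host-a: 1 + 2 = 3 -> alert
    ("host-b", "odd_hour_login"),    # host-b: 1 -> stays quiet
]
print(correlate(observed))  # ['host-a']
```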

These domain specific security vendors are also using machine learning and other statistical algorithms to detect malicious scenarios across combinations of multiple events that traditional rule-based analysis would miss.

Finally, these next-gen domain-level countermeasures provide APIs that enable (1) a SOAR product to pull more detailed event information to add context for the SOC analyst, and (2) threat hunting with third-party threat intelligence.

Architectural issue created by next-gen domain-level countermeasures

But replacing traditional domain specific security countermeasures with these next gen ones actually creates an architectural problem. Instead of having a single Monolithic event repository, i.e. the SIEM or log repository, you have multiple event repositories because it no longer makes sense to add the next-gen domain specific event data into what has been your single event repository. Why? First, the analysis of the raw domain data has already been done by the domain product. Second, if you want to access the data, you can via APIs. Third, as already stated, the type of analysis a SIEM does has not been effective at detecting incidents anyway. Fourth, you are already paying the domain-level vendor for storing the data. Why pay the SIEM or log repository vendor to store that data again?

Having said all this, your primary log repository is not going away anytime soon because you still need it for traditional log sources such as firewalls, Active Directory, and Data Loss Prevention. But, over time, there will be fewer traditional countermeasures as these vendors expand their analysis timeframes. Some are already doing this.

So by embracing these next-gen domain specific countermeasures we are creating multiple silos of security information that don’t talk to each other. So how do we correlate these different domains if the events they generate are not in a single repository?

Security Operations, Analytics, and Reporting (SOAR)

This issue is addressed by a new type of correlation analysis product, the second architectural component, which Gartner calls Security Operations, Analytics, and Reporting (SOAR). I believe Gartner first published research on this in the fall of 2015. Here are my SOAR solution requirements:

  1. Receive and correlate alerts from the next-gen domain-level security products.
  2. Query the next-gen domain-level products for more detailed information related to each alert to provide context.
  3. Access CMDBs and vulnerability repositories for additional context.
  4. Receive and correlate alerts from SIEMs. Rule-based alerts from SIEMs need to be correlated by entity, i.e. user.
  5. Query Splunk and other types of log repositories.
  6. Correlate alerts and events from all these sources.
  7. Take threat intelligence feeds and generate queries to the various data repositories. While the next-gen domain specific countermeasures have their own sources of threat intelligence, I fully expect organizations to still subscribe to product independent threat intelligence.
  8. Use a robust database of its own, preferably a graph database, to store all this collected information, and provide fast query responses to SOC analysts pivoting on known data during investigations.
  9. Provide playbooks and the tools to build and customize playbooks to assure consistent incident response processes.
  10. Provide orchestration/automation functions to reduce repetitive manual tasks.
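As a minimal sketch of requirements 9 and 10, a playbook can be modeled as an ordered list of steps applied identically to every alert. The step names and alert fields below are invented for illustration; a real SOAR product would execute such steps by orchestrating the underlying products' APIs.

```python
# Toy playbook runner: step names and alert fields are hypothetical.

def run_playbook(alert, steps):
    """Apply each step to the alert in order, accumulating findings."""
    findings = {}
    for name, step in steps:
        findings[name] = step(alert)
    return findings

phishing_playbook = [
    ("enrich", lambda a: f"pull context for {a['entity']}"),
    ("scope", lambda a: f"query repositories for {a['indicator']}"),
    ("contain", lambda a: f"isolate {a['entity']}"),
]

alert = {"entity": "jsmith", "indicator": "evil.example"}
result = run_playbook(alert, phishing_playbook)
print(list(result))  # ['enrich', 'scope', 'contain'] - same order, every analyst
```

This is what buys the consistency: two analysts handling the same alert type walk through the same steps in the same order, and the repetitive ones can be automated away.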

User Entity Behavior Analysis (UEBA)

At this point, you may be thinking, where does User Entity Behavior Analysis (UEBA) fit in? If you are concerned about a true insider threat, i.e. malicious user activity with no malware involved, then UEBA is a must. UEBA solutions definitely fit into the Composite architecture, querying multiple event repositories and sending alerts to the SOAR solution. They should also have APIs so they can be queried by the SOAR solution.

Future Evolution

Looking to the future, I expect the Composite architecture to evolve. Here are some possibilities:

  • A UEBA solution could add SOAR functionality
  • A SIEM solution could add UEBA and/or SOAR functionality
  • A SOAR solution could add log repository and/or SIEM functionality
  • A next-gen domain solution could add SOAR functionality

Summary

To summarize, the traditional Monolithic architecture, consisting of domain countermeasures limited to microsecond/millisecond analysis feeding a SIEM for incident detection, has failed. It’s being replaced by the Composite model featuring (1) next-gen domain-level countermeasures that play an expanded analysis role, (2) traditional SIEMs and/or primary log repositories that, for the near term, continue to serve the remaining traditional security countermeasures, and (3) a SOAR solution at the “top of the stack” that serves as the SOC analysts’ primary incident detection and response tool.

Originally posted on LinkedIn on November 2, 2016

https://www.linkedin.com/pulse/cybersecurity-architecture-transition-bill-frank?trk=mp-author-card

27. January 2014 · Comments Off on Detecting unknown malware using sandboxing or anomaly detection · Categories: blog · Tags: ,

It’s been clear for several years that signature-based anti-virus and Intrusion Prevention / Detection controls are not sufficient to detect modern, fast-changing malware. Sandboxing has become a popular (rightfully so) complementary control to detect “unknown” malware, i.e. malware for which no signature exists yet. The concept is straightforward: analyze suspicious inbound files by allowing them to run in a virtual machine environment. While sandboxing has been successful, I believe it’s worthwhile to understand its limitations. Here they are:

  • Access to the malware in motion, i.e. on the network, is not always available.
  • Most sandboxing solutions are limited to Windows
  • Malware authors have developed techniques to discover virtualized or testing environments
  • Newer malware communication techniques use random, one-time domains and non-HTTP protocols
  • Sandboxing cannot confirm malware actually installed and infected the endpoint
  • Droppers, the first stage of multi-stage malware, are often the only part that is analyzed

Please check out Damballa’s Webcast on the Shortfalls of Security Sandboxing for more details.

Let me reiterate, I am not saying that sandboxing is not valuable. It surely is. However, due to the limitations listed above, we recommend that it be complemented by a log-based anomaly detection control that analyzes one or more of the following: outbound DNS traffic; all outbound traffic through the firewall and proxy server; user connections to servers; for retailers, POS terminal connections to servers; and application authentications and authorizations. In addition to the different network traffic sources, there is also a variety of statistical approaches available, including supervised and unsupervised machine learning algorithms.
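As one hedged example of what log-based anomaly detection on outbound DNS can look like (the threshold and sample log are invented for illustration): flag queried domains whose character entropy is unusually high, a common trait of the machine-generated, one-time domains mentioned above.

```python
# Illustrative only: a real control would baseline the environment rather
# than use a fixed threshold like the 3.5 bits chosen here.
import math
from collections import Counter

def entropy(s):
    """Shannon entropy, in bits, of the characters in s."""
    counts = Counter(s)
    return -sum((c / len(s)) * math.log2(c / len(s)) for c in counts.values())

def suspicious_domains(queries, threshold=3.5):
    """Flag queried names whose leftmost label looks machine-generated."""
    return [d for d in queries if entropy(d.split(".")[0]) > threshold]

dns_log = ["mail.example.com", "x7f9qk2zpv0m1b3c.net", "intranet.corp"]
print(suspicious_domains(dns_log))  # ['x7f9qk2zpv0m1b3c.net']
```

Entropy alone is a crude signal; in practice it would be one feature among many fed to the statistical approaches mentioned above.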

So in order to substantially reduce the risk of a data breach from unknown malware, the issue is not sandboxing or anomaly detection, it’s sandboxing and anomaly detection.

20. January 2014 · Comments Off on How Palo Alto Networks could have prevented the Target breach · Categories: blog · Tags: , , , , ,

Brian Krebs’ recent posts on the Target breach, “A First Look at the Target Intrusion, Malware” and “A Closer Look at the Target Malware,” provide the most detailed and accurate analysis available.

The malware the attackers used captured complete credit card data contained on the mag stripe by “memory scraping.”

This type of malicious software uses a technique that parses data stored briefly in the memory banks of specific POS devices; in doing so, the malware captures the data stored on the card’s magnetic stripe in the instant after it has been swiped at the terminal and is still in the system’s memory. Armed with this information, thieves can create cloned copies of the cards and use them to shop in stores for high-priced merchandise. Earlier this month, U.S. Cert issued a detailed analysis of several common memory scraping malware variants.

Furthermore, no known antivirus software at the time could detect this malware.

The source close to the Target investigation said that at the time this POS malware was installed in Target’s environment (sometime prior to Nov. 27, 2013), none of the 40-plus commercial antivirus tools used to scan malware at virustotal.com flagged the POS malware (or any related hacking tools that were used in the intrusion) as malicious. “They were customized to avoid detection and for use in specific environments,” the source said.

The key point I want to discuss however, is that the attackers took control of an internal Target server and used it to collect and store the stolen credit card information from the POS terminals.

Somehow, the attackers were able to upload the malicious POS software to store point-of-sale machines, and then set up a control server within Target’s internal network that served as a central repository for data hoovered by all of the infected point-of-sale devices.

“The bad guys were logging in remotely to that [control server], and apparently had persistent access to it,” a source close to the investigation told KrebsOnSecurity. “They basically had to keep going in and manually collecting the dumps.”

First, obviously the POS terminals have to communicate with specific Target servers to complete and store transactions. Second, the communications between the POS terminals and the malware on the compromised server(s) could have been denied had there been policies defined and enforced to do so. Palo Alto Networks’ Next Generation Firewalls are ideal for this use case for the following two reasons:

  1. Palo Alto Networks enables you to include zone, IP address, port, user, protocol, application information, and more in a single policy.
  2. Palo Alto Networks firewalls monitor all ports for all protocols and applications, all of the time, to enforce these polices to establish a Positive Control Model (default deny or application traffic white listing).
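To illustrate the Positive Control Model in policy terms (a sketch only; the rule fields and names below are hypothetical and not PAN-OS syntax): rules whitelist specific zone/user/application combinations, and anything unmatched is denied by default.

```python
# Hypothetical policy table: fields and values are invented for illustration.
RULES = [
    # (src_zone, dst_zone, user, application, action)
    ("pos", "payment", "pos-service", "payment-app", "allow"),
]

def evaluate(flow):
    """Return the action for a flow: first matching rule wins,
    and anything unmatched is denied (positive control)."""
    for src, dst, user, app, action in RULES:
        if (flow["src_zone"], flow["dst_zone"],
                flow["user"], flow["app"]) == (src, dst, user, app):
            return action
    return "deny"  # default deny is the essence of the model

legit = {"src_zone": "pos", "dst_zone": "payment",
         "user": "pos-service", "app": "payment-app"}
exfil = {"src_zone": "pos", "dst_zone": "internal",
         "user": "unknown", "app": "custom-tcp"}
print(evaluate(legit), evaluate(exfil))  # allow deny
```

Under such a policy, the malware's custom traffic from the POS terminals to an arbitrary internal server simply never matches an allow rule.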

You might very well ask, why couldn’t Router Access Control Lists be used? Or why not a traditional port-based, stateful inspection firewall? Because these types of network controls limit policy definition to ports, IP addresses, and protocols, which cannot enforce a Positive Control Model. They are simply not detailed enough to control traffic with a high degree of confidence. One or the other might have worked in the 1990s. But by the mid-2000s, network-based applications were regularly bypassing both of these types of controls.

Therefore, if Target had deployed Palo Alto Networks firewalls between the POS terminals and their servers with granular policies to control POS terminals’ communications by zone, port, and application, the malware on the POS terminals would never have been able to communicate with the server(s) the attackers compromised.

In addition, it’s possible that the POS terminals may never have become infected in the first place because the compromised server(s) the attackers initially compromised would not have been able to communicate with the POS terminals. Note, I am not assuming that the servers used to compromise the POS terminals were the same servers used to collect the credit card data that was breached.

Unfortunately, a control with the capabilities of Palo Alto Networks is not specified by the Payment Card Industry (PCI) Data Security Standard (DSS). Yes, “Requirement #1: Install and maintain a firewall configuration to protect cardholder data,” seems to cover the subject. However, you can fully meet these PCI DSS requirements with a port-based, stateful inspection firewall. But, as I said above, an attacker can easily bypass this 1990s-era network control. Retailers and e-Commerce sites need to go beyond PCI DSS to actually protect themselves. What you need is a Next Generation Firewall, like Palo Alto Networks’, which enables you to define and enforce a Positive Control Model.

06. January 2014 · Comments Off on Two views on FireEye’s Mandiant acquisition · Categories: blog · Tags: , , , , , ,

There are two views on the significance of FireEye’s acquisition of Mandiant. One is the consensus typified by Arik Hesseldahl, Why FireEye is the Internet’s New Security Powerhouse. Arik sees the synergy of FireEye’s network-based appliances coupled with Mandiant’s endpoint agents.

Richard Stiennon has a different view, Will FireEye’s Acquisition Strategy Work? Richard believes that FireEye’s stock price is way overvalued compared to more established players like Check Point and Palo Alto Networks. While FireEye initially led the market with network-based “sandboxing” technology to detect unknown threats, most of the major security vendors have since matched or even exceeded FireEye’s capabilities. IMHO, you should not even consider a network-based security manufacturer that doesn’t provide integrated sandboxing technology to detect unknown threats. Therefore, the only way FireEye can meet Wall Street’s revenue expectations is via acquisitions using their inflated stock.

The best strategy for a high-flying public company whose products do not have staying power is to embark on an acquisition spree that juices revenue. In those terms, trading overvalued stock for Mandiant, with estimated 2013 revenue of $150 million, will easily satisfy Wall Street’s demand for continued growth to sustain valuations. FireEye has already locked in 100% growth for 2014.

It will probably take a couple of years to determine who is correct.


04. November 2013 · Comments Off on Response to Stiennon’s attack on NIST Cybersecurity Framework · Categories: blog · Tags: , , , ,

In late October, NIST issued its Preliminary Cybersecurity Framework based on President Obama’s Executive Order 13636, Improving Critical Infrastructure Cybersecurity.

The NIST Cybersecurity Framework is based on one of the most basic triads of information security – Prevention, Detection, Response. In other words, start by preventing as many threats as possible. But you also must recognize that 100% prevention is not possible, so you need to invest in Detection controls. And of course, there are going to be security incidents, therefore you must invest in Response.

The NIST Framework defines a “Core” that expands on this triad. It defines five basic “Functions” of cybersecurity – Identify, Protect, Detect, Respond, and Recover. Each Function is made up of related Categories and Subcategories.

Richard Stiennon, as always provocative, rails against the NIST Framework, calling it “fatally flawed” because it’s “poisoned with Risk Management thinking.” He goes on to say:

The problem with frameworks in general is that they are so removed from actually defining what has to be done to solve a problem. The problem with critical infrastructure, which includes oil and gas pipelines, the power grid, and city utilities, is that they are poorly protected against network and computer attacks. Is publishing a turgid high-level framework going to address that problem? Will a nuclear power plant that perfectly adopts the framework be resilient to cyber attack? Are there explicit controls that can be tested to determine if the framework is in place? Sadly, no to all of the above.

He then says:

IT security Risk Management can be summarized briefly:

1. Identify Assets

2. Rank business value of each asset

3. Discover vulnerabilities

4. Reduce the risk to acceptable value by patching and deploying defenses around the most critical assets

He then summarizes the problems with this approach as follows:

1. It is impossible to identify all assets

2. It is impossible to rank the value of each asset

3. It is impossible to determine all vulnerabilities

4. Trying to combine three impossible tasks to manage risk is impossible

Mr. Stiennon’s solution is to focus on Threats.

How many ways has Stiennon gone wrong?

First, if your Risk Management process is as Stiennon outlines, then your process needs to be updated. Risk Management is surely not just about identifying assets and patching vulnerabilities. Threats are a critical component of Risk Management. Furthermore, while the NIST Framework surely includes identifying assets and patching vulnerabilities, they are only two Subcategories within the rich Identify and Protect Functions. The whole Detect Function is focused on detecting threats! Therefore Stiennon is completely off-base in his criticism. I wonder if he actually read the NIST document.

Second, all organizations perform Risk Management either implicitly or explicitly. No organization has enough money to implement every administrative and technical control that is available. And that surely goes for all of the controls recommended by the NIST Framework’s Categories and Subcategories. Even organizations that want to fully commit to the NIST Framework will still need to prioritize the order in which controls are implemented. Trade-offs have to be made. Is it better to make these trade-offs implicitly and unsystematically? Or is it better to have an explicit Risk Management process that can be improved over time?

I am surely not saying that we have reached the promised land of cybersecurity risk management, just as we have not in virtually any other field to which risk management is applied. There is a lot of research going on to improve risk management and decision theory. One example is the use of Prospect Theory.

Third, if IT security teams are to communicate successfully with senior management and Boards of Directors, how else can they do it? IT security risks, which are technical in nature, have to be translated into business terms, i.e. how a threat will impact core business processes. Is Richard saying that an organization cannot and should not expect to identify the IT assets related to a specific business process? I think not.

When we in IT security look for a model to follow, I believe it should be akin to the role of lawyers’ participation in negotiating a business transaction. At some point, the lawyers have done all the negotiating they can. They then have to explain to the business executives responsible for the transaction the risks involved in accepting a particular paragraph or sentence in the contract. In other words, lawyers advise and business executives decide.

In the same way, it is up to IT security folks to explain a particular IT security risk in business terms to the business executive, who will then decide to accept the risk or reduce it by allocating funds to implement the proposed administrative or technical control. And of course meaningful metrics that can show the value of the requested control must be included in the communication process.

Given the importance of information technology to the success of any business, cybersecurity decisions must be elevated to the business level. Risk Management is the language of business executives. While cybersecurity risk management is clearly a young field, we surely cannot give up. We have to work to improve it. I believe the NIST Cybersecurity Framework is a big step in the right direction.


23. October 2013 · Comments Off on Detection Controls Beyond Signatures and Rules · Categories: blog · Tags: ,

Charles Kolodgy of IDC has a thoughtful post on SecurityCurrent entitled, Defending Against Custom Malware: The Rise of STAP.

STAP (Specialized Threat Analysis and Protection) technical controls are designed to complement, maybe in the future replace, traditional detection controls that require signatures and rules. STAP controls focus on threats/attacks that have not been seen before or that can morph very quickly and therefore are missed by signature-based controls.

Actors such as criminal organizations and nation states are interested in the long haul.  They create specialized malware, intended for a specific target or groups of targets, with the ultimate goal of becoming embedded in the target’s infrastructure.  These threats are nearly always new and never seen before.  This malware is targeted, polymorphic, and dynamic.  It can be delivered via Web page, spear-phishing email, or any other number of avenues.

Mr. Kolodgy breaks STAP controls into three categories:

  • Virtual sandboxing/emulation and behavioral analysis
  • Virtual containerization/isolation
  • Advanced system scanning

Based on Cymbel’s research, we would add a fourth category: advanced log analysis. There is considerable research, and there are a number of funded companies, going beyond traditional rule-based and statistical/threshold-based techniques. Many of these efforts are leveraging Hadoop and/or advanced machine learning algorithms.

23. October 2013 · Comments Off on The Secrets of Successful CIOs (and CISOs) · Categories: blog

Rachel King, a reporter with the CIO Journal of the Wall St. Journal published an article last week entitled, The Secrets of Successful CIOs. She reports on a study performed by Accenture to determine the priorities of high-performing IT executives.

Ms. King highlights high performers’ top three business objectives compared to lower performing IT executives. Unfortunately, what’s missing from the article is how Accenture measured the performances of IT executives. Having said that, this chart is interesting:

[Chart: “Accenture Says Highest-Performing CIOs Focus on Customers, Business,” The CIO Report, WSJ]

One would naturally jump to the conclusion that high performing Chief Information Security Officers would need to orient themselves to these top priorities as well. But what if you work for one of the CIOs who is focused on cutting business operational costs, increasing workforce productivity, and automating core business processes?

22. April 2013 · Comments Off on DropSmack: Using Dropbox Maliciously · Categories: blog · Tags: , ,

I found an interesting article on TechRepublic, “DropSmack: Using Dropbox to steal files and deliver malware.”

Given that 50 million people are using Dropbox, it surely looks like an inviting attack vector for cyber adversaries. Jacob Williams (@MalwareJake) has developed malware, DropSmack, that embeds in a Word file already synchronized by Dropbox to infect an internal endpoint and provide Command & Control communications.

What technical control do you have in place that would detect and block DropSmack? A network security product would have to be able to decode application files such as Word, Excel, PowerPoint, PDF, and then detect the malware and/or anomalies embedded in the document.

Can you prevent Dropbox from being used in your organization? Should you? What about other file sharing applications?

04. April 2013 · Comments Off on The Real Value of a Positive Control Model · Categories: blog · Tags: , , , ,

During the last several years I’ve written a lot about the fact that Palo Alto Networks enables you to re-establish a network-based Positive Control Model from the network layer up through the application layer. But I never spent much time on why it’s important.

Today, I will reference a blog post by Jack Whitsitt, Avoiding Strategic Cyber Security Loss and the Unacceptable Offensive Advantage (Post 2/2), to help explain the value of implementing a Positive Control Model.

TL;DR: All information breaches result from human error. The human error rate per unit of information technology is fairly constant. However, because IT is always expanding (more applications and more functions per application), the actual number of human errors resulting in Vulnerabilities (used in the most general sense of the word) per time period is always increasing. Unfortunately, the information security team has limited resources (Defensive Capability) and cannot cope with the users’ ever increasing number of errors. This has created an ever growing “Offensive Advantage (Vulnerabilities – Defensive Capability).”  However, implementing a Positive Control Model to influence/control human behavior will reduce the number of user errors per time interval, which will reduce the Offensive Advantage to a manageable size.

On the network side Palo Alto Networks’ Next Generation Firewall monitors and controls traffic by user and application across all 65,535 TCP and UDP ports, all of the time, at specified speeds. Granular policies based on any combination of application, user, security zone, IP address, port, URL, and/or Threat Protection profiles are created with a single unified interface that enables the infosec team to respond quickly to new business requirements.

On the endpoint side, Trusteer provides a behavioral type of whitelisting that prevents device compromise and confidential data exfiltration. It requires little to no administrative configuration effort. Thousands of agents can be deployed in days. When implemented on already deployed Windows and Mac devices, Trusteer will detect compromised devices that traditional signature-based anti-virus products miss.

Let’s start with Jack’s basic truths about the relationships between technology, people’s behavior, and infosec resources. Cyber security is a problem that occurs over unbounded time. So it’s a rate problem driven by the ever increasing number of human errors per unit of time. While the number of human errors per unit of time per “unit of information technology” is steady, complexity, in the form of new applications and added functions to existing applications, is constantly increasing. Therefore the number of human errors per unit of time is constantly increasing.

Unfortunately, information security resources (technical and administrative controls) are limited, so the organization’s Defense Capability cannot keep up with the increasing number of Vulnerabilities. Since the number of human errors increases at a faster rate than the resource-limited Defense Capability, an Unacceptable Offensive Advantage is created. Here is a diagram that shows this.

[Graph: offensiveadvantage1, the Vulnerability curve outpacing the Defense curve]
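Jack’s rate argument can be sketched numerically. The following toy model (all constants are illustrative assumptions of mine, not measured values) treats errors per unit of IT per period as constant, grows IT complexity linearly, and gives the defense team a fixed remediation capacity per period; the unremediated backlog is the Offensive Advantage:

```python
# Toy model of the "Offensive Advantage" rate argument.
# Assumptions (illustrative only): the human-error rate per unit of
# IT is constant, IT complexity grows linearly, and the defense team
# can remediate a fixed number of vulnerabilities per period.

ERRORS_PER_UNIT_IT = 2   # constant error rate per unit of IT per period
COMPLEXITY_GROWTH = 5    # new "units of IT" added each period
DEFENSE_CAPACITY = 40    # remediations per period (limited resources)

def offensive_advantage(periods: int) -> list[int]:
    backlog, advantage = 0, []
    complexity = 10      # starting units of IT
    for _ in range(periods):
        complexity += COMPLEXITY_GROWTH
        new_vulns = ERRORS_PER_UNIT_IT * complexity
        # Unremediated vulnerabilities accumulate as the backlog.
        backlog = max(0, backlog + new_vulns - DEFENSE_CAPACITY)
        advantage.append(backlog)
    return advantage

print(offensive_advantage(10))
# [0, 0, 10, 30, 60, 100, 150, 210, 280, 360]
```

The backlog grows without bound because the error rate keeps rising while capacity stays flat, which is exactly the widening gap the diagram depicts.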

What’s even worse, most Defensive controls cannot significantly shrink the gap between the Vulnerability curve and the Defense curve because they do not bend the vulnerability curve, as this graph shows.

[Graph: offensiveadvantage2, typical controls fail to bend the Vulnerability curve]

So the only real hope of reducing organizational cybersecurity risk, i.e. the adversaries’ Offensive Advantage, is to bend the Vulnerability curve, as this graph shows.

[Graph: offensiveadvantage3, bending the Vulnerability curve shrinks the Offensive Advantage]

Once you do that, you can apply additional controls to further shrink the gap between Vulnerability and Defense curves as this graph shows.

[Graph: offensiveadvantage4, additional controls further shrink the gap between the curves]

The question is how to do this. Perhaps Security Awareness Training can have some impact.

I recommend implementing network and host-based technical controls that can establish a Positive Control Model. In other words, only by defining what people are allowed to do and denying everything else can you actually bend the Vulnerability curve, i.e. reduce human errors, both unintentional and intentional.
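At its core, a Positive Control Model is default-deny: allow what is explicitly permitted and drop everything else. A minimal sketch of that evaluation logic (the rule fields and values are hypothetical, not any vendor’s schema):

```python
# Default-deny policy evaluation: traffic is allowed only if it
# matches an explicit allow rule; everything else falls through to
# an implicit deny. Rule fields are illustrative only.

RULES = [
    {"user_group": "finance", "application": "ms-exchange", "action": "allow"},
    {"user_group": "any",     "application": "web-browsing", "action": "allow"},
]

def evaluate(user_group: str, application: str) -> str:
    for rule in RULES:
        if rule["application"] == application and \
           rule["user_group"] in (user_group, "any"):
            return rule["action"]
    return "deny"   # Positive Control Model: implicit final deny

print(evaluate("finance", "ms-exchange"))  # allow
print(evaluate("finance", "bittorrent"))   # deny
```

Everything the policy does not name is blocked, so a user error (or a piece of malware) that tries something outside the defined behavior simply fails.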

Implementing a Positive Control Model does not happen instantly; it, too, is a rate problem. But if you don’t have the technical controls in place, no amount of process is going to improve the organization’s security posture.

This is why firewalls are such a critical network technical control. They are placed at critical choke points in the network, between subnets of different trust levels, with the express purpose of implementing a Positive Control Model.

Firewalls first became popular in the mid-1990s. At that time, when a new application was built, it was assigned a port number. For example, the mail protocol, SMTP, was assigned port 25, and the HTTP protocol was assigned port 80. Back then, (1) protocol and application meant the same thing, and (2) all applications “behaved,” i.e. they ran only on their assigned ports. Given this environment, all a firewall had to do was use the port numbers (and IP addresses) to control traffic. Hence the popularity of port-based stateful inspection firewalls.
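In that environment the firewall’s decision reduced to a port lookup. A simplified sketch (the port list is illustrative):

```python
# Mid-1990s assumption: port number == application, so the policy
# decision is just a table lookup. No payload inspection happens.

ALLOWED_PORTS = {25: "smtp", 80: "http", 443: "https"}

def port_based_decision(dst_port: int) -> str:
    # Traffic on an allowed port is presumed to be the assigned
    # application; anything on another port is dropped.
    return "allow" if dst_port in ALLOWED_PORTS else "deny"

print(port_based_decision(80))    # allow, even if the payload isn't HTTP
print(port_based_decision(6881))  # deny
```

The weakness is visible in the first example: the firewall allows anything on port 80, which is exactly what application developers later exploited.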

Unfortunately, starting in the early 2000s, developers began writing applications that bypassed the port-based stateful inspection firewall so they could be deployed quickly, without waiting for the security team to change firewall policies. Different applications were also written to share a port like port 80, which was always open to give people access to the Internet. Other techniques, like port-hopping and encryption, were used to bypass the port-based stateful inspection firewall as well.

Security teams started deploying additional network security controls, like URL Filtering, to complement firewalls. This increase in complexity created new problems: (1) policy coordination between URL Filtering and the firewalls, (2) performance issues, and (3) because URL Filtering products were mostly proxy-based, they would break some of the newer applications, frustrating users trying to do their jobs.

By 2005 it was obvious to some people that application technology had made port-based firewalls and their helpers obsolete. A completely new firewall architecture was needed, one that (1) classified traffic by application first, regardless of port, and (2) was backward compatible with port-based firewalls to ease the conversion process. This is exactly what the Palo Alto Networks team built, releasing its first “Next Generation” Firewall in 2007.

Palo Alto Networks classifies traffic at the beginning of the policy process by application. It monitors all 65,535 TCP and UDP ports for all applications, all of the time, at specified speeds. This enables organizations to re-establish the Positive Control Model, which bends the Vulnerability curve and allows an infosec team with limited resources to reduce what Jack Whitsitt calls the adversaries’ “Offensive Advantage.”
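Conceptually, the next-generation ordering inverts the old lookup: identify the application from the traffic itself, then apply policy to the application regardless of which port carried it, denying anything unidentified. A hypothetical sketch (the classification step is simulated here; real products decode the traffic):

```python
# App-first policy: the application is identified from packet
# content (simulated by a payload tag), and policy applies to the
# application no matter which port carried it. Policy entries are
# illustrative, not a real product's configuration.

APP_POLICY = {"smtp": "allow", "web-browsing": "allow", "bittorrent": "deny"}

def classify(payload: str) -> str:
    # Stand-in for real traffic classification; here the payload
    # simply names its application.
    return payload

def app_first_decision(payload: str, dst_port: int) -> str:
    app = classify(payload)
    # The port is irrelevant to identification, and unknown
    # applications fall through to deny (Positive Control Model).
    return APP_POLICY.get(app, "deny")

print(app_first_decision("bittorrent", 80))  # deny, despite port 80
print(app_first_decision("smtp", 8080))      # allow, despite the odd port
```

Note the contrast with the port lookup: bittorrent on port 80 is now denied, and legitimate traffic on a nonstandard port is still recognized.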

On the endpoint side, Trusteer provides a type of Positive Control Model / whitelisting in which highly targeted applications, such as browsers, Java, Adobe Flash, PDF readers, and Microsoft Office, are protected automatically based on their behavior. The Trusteer agent understands the relationship between memory state and file I/O well enough to know the difference between good and malicious I/O behavior. Trusteer then blocks the malicious I/O before any damage can be done.

Thus human errors resulting from social engineering, such as clicking on links to malicious web pages or opening documents containing malicious code, are automatically blocked. This requires no policy configuration effort on the part of the infosec team; Trusteer updates the policies periodically. Furthermore, thousands of agents can be deployed in days. Finally, when implemented on already deployed Windows and Mac endpoints, it will detect devices that have already been compromised.

Trusteer, founded in 2006, has over 40 million agents deployed across the banking industry to protect online banking users. So their agent technology has been battle tested.

In closing then, only by implementing technical controls that establish a Positive Control Model to reduce human errors can an organization bend the Vulnerability curve sufficiently to reduce the adversaries’ Offensive Advantage to an acceptable level.

25. February 2013 · Comments Off on Surprising Application-Threat Analysis from Palo Alto Networks · Categories: blog · Tags: , , ,

This past week, Palo Alto Networks released its H2/2012 Application Usage and Threat Report. Actually, it’s the first time Palo Alto has integrated Application Usage and Threat Analysis. Previous reports were focused only on Application Risk. This report analyzed 12.6 petabytes of data from 3,056 networks, covering 1,395 applications. 5,307 unique threats were identified from 268 million threat logs.

Here are the four most interesting items I noted:

1. Of the 1,395 applications found, 10 were responsible for 97% of all Exploit* logs. One of these was web-browsing. This is to be expected. However, the other nine were internal applications representing 82% of the Exploit* logs!!

This proves once again that perimeter traffic security monitoring is not adequate. Internal network segmentation and threat monitoring are required.

2. Custom or Unknown UDP traffic represented only 2% of all the bandwidth analyzed, yet it accounted for 55% of the Malware* logs!!

This clearly shows the importance of minimizing unidentified application traffic. Therefore the ratio of unidentified to identified traffic is a key security performance indicator and ought to trend down over time.
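Tracking that KPI is straightforward if your firewall logs carry a classified application name per session. A sketch, assuming (my assumption) records with "app" and "bytes" fields where unidentified traffic is tagged with an "unknown-" prefix:

```python
def unidentified_ratio(records: list[dict]) -> float:
    """Bytes of unidentified traffic divided by identified traffic.

    Assumes each record has "app" and "bytes" keys, with unidentified
    traffic tagged by an "unknown-" prefix (e.g. "unknown-udp").
    These field conventions are illustrative.
    """
    unknown = sum(r["bytes"] for r in records if r["app"].startswith("unknown-"))
    known = sum(r["bytes"] for r in records if not r["app"].startswith("unknown-"))
    return unknown / known if known else float("inf")

logs = [
    {"app": "web-browsing", "bytes": 9000},
    {"app": "unknown-udp",  "bytes": 1000},
]
print(unidentified_ratio(logs))  # ~0.11
```

Computed per week or per month, a downward trend in this ratio is evidence that the Positive Control Model is taking hold.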

3. DNS traffic accounted for only 0.4% of total bytes but 25.4% of sessions, and ranked third for Malware* logs at 13%.

No doubt most, if not all, of this represents malicious Command & Control traffic. If you are not actively monitoring and analyzing DNS traffic, you are missing a key method of detecting compromised devices in your network.
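One inexpensive way to start mining DNS logs for command-and-control is to flag query names that look machine-generated, as domain-generation-algorithm malware produces. A minimal character-entropy heuristic; the length and entropy thresholds are my guesses, not tuned values:

```python
import math
from collections import Counter

def entropy(label: str) -> float:
    # Shannon entropy of the characters in a DNS label; DGA-style
    # machine-generated names tend to score higher than real words.
    counts = Counter(label)
    n = len(label)
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def looks_generated(hostname: str, threshold: float = 3.5) -> bool:
    # Check only the leftmost label; both the length cutoff and the
    # entropy threshold are illustrative guesses.
    label = hostname.split(".")[0]
    return len(label) >= 10 and entropy(label) > threshold

print(looks_generated("mail.example.com"))            # False
print(looks_generated("xk7q2m9zpa4w8r.example.com"))  # True
```

In practice you would combine this with volume signals (the session-to-bytes skew noted above) and a whitelist of known domains to keep false positives down.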

4. 85 of the 356 applications that use SSL never use port 443.

If your firewall is not monitoring all ports for all applications all of the time, you are simply not getting complete visibility and cannot re-establish a Positive Control Model.

*If you are not familiar with Palo Alto Networks’ Threat Protection function, “Exploit” and “Malware” are the two main categories of “Threat” logs. There is a table at the top of page 4 of this AUT report that summarizes the categories and sub-categories of the 268 million Threat Logs captured and analyzed. The “Exploit” logs refer to matches against vulnerability signatures which are typical of Intrusion Prevention Systems. The “Malware” logs are for Anti-Virus and Anti-Spyware signature matches.

What is not covered in this report is Palo Alto’s cloud-based WildFire zero-day analysis service, which analyzes files not seen before to determine whether they are benign or malicious. If malicious behavior is found, signatures of the appropriate types are generated in less than one hour and update Threat Protection. In addition, the appropriate IP addresses and URLs are added to their respective blacklists.

This report is well worth reading.