04. April 2013 · Comments Off on The Real Value of a Positive Control Model · Categories: blog

During the last several years I’ve written a lot about the fact that Palo Alto Networks enables you to re-establish a network-based Positive Control Model from the network layer up through the application layer. But I never spent much time on why it’s important.

Today, I will reference a blog post by Jack Whitsitt, Avoiding Strategic Cyber Security Loss and the Unacceptable Offensive Advantage (Post 2/2), to help explain the value of implementing a Positive Control Model.

TL;DR: All information breaches result from human error. The human error rate per unit of information technology is fairly constant. However, because IT is always expanding (more applications, and more functions per application), the actual number of human errors resulting in Vulnerabilities (used in the most general sense of the word) per time period is always increasing. Unfortunately, the information security team has limited resources (Defensive Capability) and cannot cope with users’ ever-increasing errors. This has created an ever-growing “Offensive Advantage” (Vulnerabilities minus Defensive Capability). However, implementing a Positive Control Model to influence and control human behavior will reduce the number of user errors per time interval, which in turn reduces the Offensive Advantage to a manageable size.

On the network side, Palo Alto Networks’ Next Generation Firewall monitors and controls traffic by user and application across all 65,535 TCP and UDP ports, all of the time, at specified speeds. Granular policies based on any combination of application, user, security zone, IP address, port, URL, and/or Threat Protection profile are created in a single unified interface, which enables the infosec team to respond quickly to new business requirements.
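
To make this concrete, here is a rough Python sketch of what such a granular, application-aware rule might look like conceptually. The field names and match logic are my own illustration, not the PAN-OS configuration schema or matching engine.

```python
# Illustrative sketch only: a conceptual model of an application-aware,
# multi-attribute firewall rule. Field names are hypothetical and do not
# reflect any vendor's actual configuration schema.
from dataclasses import dataclass
from typing import Optional, Set

@dataclass
class Session:
    app: str                 # application identified by content, not by port
    user_group: str
    src_zone: str
    dst_zone: str
    dst_port: int
    url_category: Optional[str] = None

@dataclass
class PolicyRule:
    name: str
    apps: Set[str]                        # e.g. {"smtp", "sharepoint"}
    user_groups: Set[str]                 # e.g. {"finance", "it-admins"}
    src_zones: Set[str]
    dst_zones: Set[str]
    ports: Optional[Set[int]] = None      # None = application-default ports
    url_categories: Optional[Set[str]] = None
    threat_profile: Optional[str] = None  # IPS/AV profile applied on allow
    action: str = "allow"

    def matches(self, s: Session) -> bool:
        return (s.app in self.apps
                and s.user_group in self.user_groups
                and s.src_zone in self.src_zones
                and s.dst_zone in self.dst_zones
                and (self.ports is None or s.dst_port in self.ports)
                and (self.url_categories is None
                     or s.url_category in self.url_categories))

# Example: allow the finance group to reach SharePoint in the DMZ, with IPS applied.
rule = PolicyRule(name="allow-finance-sharepoint",
                  apps={"sharepoint"}, user_groups={"finance"},
                  src_zones={"inside"}, dst_zones={"dmz"},
                  threat_profile="default-ips")
print(rule.matches(Session("sharepoint", "finance", "inside", "dmz", 443)))  # True
```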

On the endpoint side, Trusteer provides a behavioral type of whitelisting that prevents device compromise and confidential data exfiltration. It requires little to no administrative configuration effort. Thousands of agents can be deployed in days. When implemented on already deployed Windows and Mac devices, Trusteer will detect compromised devices that traditional signature-based anti-virus products miss.

Let’s start with Jack’s basic truths about the relationships between technology, people’s behavior, and infosec resources. Cyber security is a problem that occurs over unbounded time, so it is a rate problem driven by the ever-increasing number of human errors per unit of time. While the number of human errors per unit of time per “unit of information technology” is steady, complexity, in the form of new applications and added functions to existing applications, is constantly increasing. Therefore the number of human errors per unit of time is constantly increasing.

Unfortunately, information security resources (technical and administrative controls) are limited, so the organization’s Defensive Capability cannot keep up with the increasing number of Vulnerabilities. Since the number of human errors grows at a faster rate than the resource-limited Defensive Capability, an Unacceptable Offensive Advantage is created. Here is a diagram that shows this.

[Figure: offensiveadvantage1 - the Vulnerability curve outpaces the Defensive Capability curve, creating an Offensive Advantage]
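
To make the rate argument concrete, here is a toy numerical sketch (my own illustration with made-up parameters, not Jack’s math). Vulnerabilities grow with IT complexity while defensive capability grows only linearly, so the gap widens; reducing the error rate, i.e. bending the Vulnerability curve, closes it.

```python
# Toy model of Offensive Advantage = Vulnerabilities - Defensive Capability.
# All parameters are invented purely for illustration.
def offensive_advantage(years, errors_per_unit=5.0, complexity_growth=1.15,
                        defense_per_year=8.0, error_reduction=1.0):
    """Return (vulnerabilities, defense, advantage) after `years`.
    error_reduction < 1.0 models a Positive Control Model bending the curve."""
    vulnerabilities = defense = 0.0
    complexity = 1.0
    for _ in range(years):
        vulnerabilities += errors_per_unit * complexity * error_reduction
        defense += defense_per_year          # limited, roughly linear resources
        complexity *= complexity_growth      # IT keeps expanding
    return vulnerabilities, defense, max(0.0, vulnerabilities - defense)

print(offensive_advantage(10))                       # gap keeps widening
print(offensive_advantage(10, error_reduction=0.5))  # curve bent: gap closes
```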

What’s even worse, most defensive controls cannot significantly shrink the gap between the Vulnerability curve and the Defense curve, because they do not bend the Vulnerability curve, as this graph shows.

[Figure: offensiveadvantage2 - most defensive controls do not bend the Vulnerability curve]

So the only real hope of reducing organizational cyber security risk, i.e. the adversaries’ Offensive Advantage, is to bend the Vulnerability curve, as this graph shows.

[Figure: offensiveadvantage3 - bending the Vulnerability curve]

Once you do that, you can apply additional controls to further shrink the gap between the Vulnerability and Defense curves, as this graph shows.

[Figure: offensiveadvantage4 - additional controls shrink the remaining gap once the Vulnerability curve is bent]

The question is how to do this. Perhaps Security Awareness Training can have some impact.

I recommend implementing network and host-based technical controls that can establish a Positive Control Model. In other words, only by defining what people are allowed to do and denying everything else can you actually bend the Vulnerability curve, i.e. reduce human errors, both unintentional and intentional.

Implementing a Positive Control Model does not happen instantly; it, too, is a rate problem. But if you don’t have the technical controls in place, no amount of process is going to improve the organization’s security posture.

This is why firewalls are such a critical network technical control. They are placed at critical choke points in the network, between subnets of different trust levels, with the express purpose of implementing a Positive Control Model.

Firewalls first became popular in the mid-1990s. At that time, when a new application was built, it was assigned a port number. For example, the mail protocol, SMTP, was assigned port 25, and the web protocol, HTTP, was assigned port 80. In that environment, (1) protocol and application meant the same thing, and (2) all applications “behaved,” i.e. they ran only on their assigned ports. Given this, all a firewall had to do was use port numbers (and IP addresses) to control traffic. Hence the popularity of port-based stateful inspection firewalls.
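
As a sketch of that era’s logic, the decision below keys entirely on the destination port (the well-known IANA assignments); the moment applications stopped honoring those assignments, the model broke.

```python
# Sketch of mid-1990s port-based filtering: the destination port *was* the
# application. Rule logic is simplified for illustration.
PORT_TO_APP = {25: "smtp", 53: "dns", 80: "http", 443: "https"}
ALLOWED_PORTS = {25, 53, 80, 443}

def port_based_decision(dst_port: int) -> str:
    app = PORT_TO_APP.get(dst_port, "unknown")
    action = "allow" if dst_port in ALLOWED_PORTS else "deny"
    return f"{action} (assumed application: {app})"

print(port_based_decision(25))    # allow (assumed application: smtp)
print(port_based_decision(80))    # allow -- anything tunneled over 80 also passes
print(port_based_decision(6881))  # deny
```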

Unfortunately, starting in the early 2000s, developers began writing applications that bypassed the port-based stateful inspection firewall so that their applications could be deployed quickly in organizations without waiting for security teams to change firewall policies. Different applications were also written to share a port, typically port 80, because it was always open to give people access to the Internet. Other techniques, such as port hopping and encryption, were likewise used to bypass the port-based stateful inspection firewall.

Security teams started deploying additional network security controls, such as URL Filtering, to complement firewalls. This increase in complexity created new problems: (1) policy coordination between URL Filtering and the firewalls, (2) performance issues, and (3) because URL Filtering products were mostly proxy-based, they would break some of the newer applications, frustrating users trying to do their jobs.

By 2005 it was obvious to some people that application technology had made port-based firewalls and their helpers obsolete. A completely new firewall architecture was needed, one that (1) classified traffic by application first, regardless of port, and (2) was backward compatible with port-based firewalls to enable the conversion process. This is exactly what the Palo Alto Networks team did, releasing their first “Next Generation” Firewall in 2007.

Palo Alto Networks classifies traffic by application at the beginning of the policy process. It monitors all 65,535 TCP and UDP ports for all applications, all of the time, at specified speeds. This enables organizations to re-establish the Positive Control Model, which bends the “Vulnerability” curve and allows an infosec team with limited resources to reduce what Jack Whitsitt calls the adversaries’ “Offensive Advantage.”
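
The sketch below shows the idea of classifying by content first and trusting the port last; the payload patterns are simplified stand-ins, not App-ID’s actual signatures or decoders.

```python
# Conceptual sketch of application-first classification. The patterns below
# are simplified stand-ins, not a real NGFW's signature set.
def classify_application(payload: bytes) -> str:
    if payload.startswith((b"GET ", b"POST ")):
        return "web-browsing"
    if payload.startswith((b"EHLO", b"HELO")):
        return "smtp"
    if b"bittorrent" in payload.lower():
        return "bittorrent"
    return "unknown"

# Three flows on the same port 80 become three different policy decisions:
for payload in (b"GET /index.html HTTP/1.1",
                b"EHLO mail.example.com",
                b"\x13BitTorrent protocol"):
    print(classify_application(payload))
```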

On the endpoint side, Trusteer provides a type of Positive Control Model / whitelisting whereby highly targeted applications such as browsers, Java, Adobe Flash, PDF readers, and Microsoft Office are automatically protected behaviorally. The Trusteer agent understands the relationship between application memory state and file I/O well enough to know the difference between good and malicious I/O behavior. Trusteer then blocks the malicious I/O before any damage can be done.
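
Conceptually, the decision looks something like the sketch below: an I/O operation is allowed only if the application, in its current state, is known to legitimately perform it. This is my own illustration of the general idea, not Trusteer’s implementation.

```python
# Conceptual illustration of state-aware behavioral whitelisting (not
# Trusteer's actual engine): I/O is permitted only when the application's
# current state is known to legitimately perform that operation.
KNOWN_GOOD = {
    ("pdf_reader", "rendering_document"): {"read_document", "read_font"},
    ("browser", "user_download"): {"write_downloads_folder"},
}

def allow_io(app: str, state: str, operation: str) -> bool:
    return operation in KNOWN_GOOD.get((app, state), set())

print(allow_io("pdf_reader", "rendering_document", "read_font"))         # True
print(allow_io("pdf_reader", "rendering_document", "write_executable"))  # False: blocked
```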

Thus human errors resulting from social engineering, such as clicking on links to malicious web pages or opening documents containing malicious code, are automatically blocked. This is all done with no policy configuration effort on the part of the infosec team; Trusteer updates the policies periodically. Furthermore, thousands of agents can be deployed in days. Finally, when implemented on already deployed Windows and Mac endpoints, it will detect devices that are already compromised.

Trusteer, founded in 2006, has over 40 million agents deployed across the banking industry to protect online banking users, so its agent technology has been battle-tested.

In closing, then: only by implementing technical controls that establish a Positive Control Model to reduce human errors can an organization bend the Vulnerability curve sufficiently to reduce the adversaries’ Offensive Advantage to an acceptable level.

09. March 2012 · Comments Off on The six most dangerous infosec attacks – Hackers – SC Magazine Australia – Secure Business Intelligence · Categories: blog

The six most dangerous infosec attacks – Hackers – SC Magazine Australia – Secure Business Intelligence.

SC Magazine Australia summarized Ed Skoudis’s and Johannes Ullrich’s RSA presentation on the six most dangerous IT security threats of 2011 and what to expect in the year ahead. They are:

  1. DNS as command-and-control
  2. SSL slapped down
  3. Mobile malware as a network infection vector
  4. Hacktivism is back
  5. SCADA at home
  6. Cloud Security
Additional trends:
  • IPv6
  • Oldies
  • Social Networking
  • Malware
  • DNSSEC
The point of the Malware item above is that blacklisting is a losing proposition and organizations need to move to whitelisting. IMHO, this is especially true for establishing positive network control at the application level.

23. February 2012 · Comments Off on Black Cat, White Cat | InfoSec aXioms · Categories: blog

Ofer Shezaf highlights one of the fundamental ways of categorizing security tools in his post Black Cat, White Cat | InfoSec aXioms.

Black listing, sometimes called negative security or “open by default,” focuses on catching the bad guys by detecting attacks. Security controls such as Intrusion Prevention Systems and Anti-Virus software use various methods to do so. The most common method is matching signatures against network traffic or files. Other methods include rules that detect conditions which cannot be expressed as a pattern, and abnormal behavior detection.

White listing, on the other hand, allows only known good activity. Other terms associated with the concept are positive security, “closed by default,” and policy enforcement. White listing is commonly embedded in systems; the obvious example is the authentication and authorization mechanism found in virtually every information system. Dedicated security controls that use white listing either ensure the built-in policy enforcement is used correctly or provide a second enforcement layer. The former include configuration and vulnerability assessment tools, while the latter include firewalls.
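
The difference is easy to see in code. Here is a minimal sketch; the patterns and application names are invented purely for illustration.

```python
# Black list: open by default, block only known-bad patterns.
# White list: closed by default, allow only known-good activity.
ATTACK_SIGNATURES = [b"' OR 1=1 --", b"<script>alert("]
ALLOWED_APPS = {"smtp", "dns", "web-browsing"}

def blacklist_verdict(payload: bytes) -> str:
    return "block" if any(sig in payload for sig in ATTACK_SIGNATURES) else "allow"

def whitelist_verdict(app: str) -> str:
    return "allow" if app in ALLOWED_APPS else "block"

print(blacklist_verdict(b"GET /?q=never-seen-before-exploit"))  # allow: the blind spot
print(whitelist_verdict("never-seen-before-app"))               # block: closed by default
```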

Unfortunately, when manufacturers apply the term “Next Generation” to firewalls, they may be misleading the marketplace. As Ofer says, a firewall, by definition, performs white listing, i.e. policy enforcement. One of the key functions of a NGFW is the ability to white list applications. This means the applications that are allowed must be defined in the firewall policy. On the other hand, if you are defining applications that are to be blocked, that’s black listing, not a firewall.

Note that Next Generation Firewalls also perform Intrusion Prevention, which is a black listing function. So clearly, NGFWs perform both white listing and black listing functions. But to truly earn the right to be called a “Next Generation” Firewall, a network security appliance must enable application white listing. Adding “application awareness” as a blacklist function is nice, but it does not make a NGFW. For more information, I have written about Next Generation Firewalls and the difference between UTMs and NGFWs.

26. October 2011 · Comments Off on Australia DSD’s Top Four Security Strategies · Categories: blog

The SANS Institute has endorsed the Australian Defence Signals Directorate’s (DSD) top four strategies for mitigating information security risk:

  1. Patching applications and using the latest version of an application
  2. Patching operating systems
  3. Keeping admin rights under strict control (and forbidding the use of administrative accounts for email and browsing)
  4. Whitelisting applications
While there is nothing new in these four strategies, I would like to discuss #4. The Australian DSD Strategies to Mitigate Targeted Cyber Intrusions defines Application Whitelisting as preventing unapproved programs from running on PCs. I recommend extending whitelisting to the network. In other words, define which applications are allowed on the network by user group, both internal and Web-based, and deny all others.

My recommendation is not really a new idea either. After all, that’s what firewalls are supposed to do. The issue is that the traditional stateful inspection firewall does it using port numbers and IP addresses. For at least the last five years, applications and users have routinely bypassed these firewalls by using applications that share open ports.

This is why, in October 2009, Gartner started talking about “Next Generation Firewalls,” which enable you to implement whitelisting on the network at Layer 7 (Application) as well as down the stack at Layers 4 and 3. In other words, they extend the traditional “Positive Control Model” firewall functionality up through the Application Layer. (If you have not seen that Gartner research report, please contact me and I will arrange for you to receive a copy.)

30. January 2011 · Comments Off on Schneier on Security: Whitelisting vs. Blacklisting · Categories: blog

Schneier on Security: Whitelisting vs. Blacklisting.

Excellent discussion of whitelisting vs. blacklisting. In theory, it’s clear which approach is more appropriate for a given situation. For example:

Physical security works generally on a whitelist model: if you have a key, you can open the door; if you know the combination, you can open the lock. We do it this way not because it’s easier — although it is generally much easier to make a list of people who should be allowed through your office door than a list of people who shouldn’t–but because it’s a security system that can be implemented automatically, without people.

In corporate environments, application control, if done at all, has been done with blacklists, it seems to me, mainly because whitelisting was simply too difficult. In other words, in theory white listing is the right thing to do, but in practice the tools were simply not there.

However, this is changing. Next Generation Firewalls hold the promise of application whitelisting. If the NGFW can identify and classify all of the applications traversing the organization’s network, then you have the visibility to implement application whitelisting.

The advantage of network-based application whitelisting is that you get off the treadmill of having to identify every new potentially malicious application and add it to the blacklist.

The objective is that the last firewall policy rule is, “If application is unknown, then block.” At that point you have returned to the Positive Control Model for which firewalls were conceived.
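
A minimal sketch of that end state, assuming a hypothetical ordered rule base (the rules and group names are invented for illustration):

```python
# Ordered rule base whose final rule blocks anything whose application could
# not be positively identified; anything that matches no rule is also denied.
RULES = [
    {"app": "web-browsing", "group": "any",       "action": "allow"},
    {"app": "ms-exchange",  "group": "any",       "action": "allow"},
    {"app": "ssh",          "group": "it-admins", "action": "allow"},
    {"app": "unknown",      "group": "any",       "action": "block"},  # the last rule
]

def evaluate(app: str, group: str) -> str:
    for rule in RULES:
        if rule["app"] in ("any", app) and rule["group"] in ("any", group):
            return rule["action"]
    return "block"  # implicit deny

print(evaluate("web-browsing", "finance"))  # allow
print(evaluate("unknown", "finance"))       # block: the Positive Control Model backstop
print(evaluate("bittorrent", "finance"))    # block: falls through to the implicit deny
```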