24. February 2012 · Botnet communicates via P2P instead of C&C · Categories: blog

Symantec reported on a version of Zeus/Spyeye that communicates via P2P among its bot peers rather than communicating directly with its control servers via “traditional” C&C. (I put traditional in quotes because I don’t want to give the impression that detecting C&C traffic is easy.)

…it seems that the C&C server has disappeared entirely for this functionality. Where they were previously sending and receiving control messages to and from the C&C, these control messages are now handled by the P2P network.

This means that every peer in the botnet can act as a C&C server, while none of them really are one. Bots are now capable of downloading commands, configuration files, and executables from other bots—every compromised computer is capable of providing data to the other bots. We don’t yet know how the stolen data is communicated back to the attackers, but it’s possible that such data is routed through the peers until it reaches a drop zone controlled by the attackers.

Now if you are successfully blocking all P2P traffic on your network, you might think you don’t have to worry about this new development. However, when P2P is blocked, this version of Zeus/Spyeye reverts to traditional C&C methods. So you still need a technical network security control that can reliably detect compromised endpoints by monitoring egress traffic through your proxies and firewalls, as well as DNS traffic, because you surely cannot rely on your host-based security controls. (If you doubt my claim, please contact me and I will prove it to you.)
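To make the DNS monitoring idea concrete, here is a minimal sketch of one heuristic an egress-monitoring control might apply to DNS query logs: flagging long, high-entropy domain labels, a common trait of algorithmically generated C&C domains. The domains, threshold, and minimum label length are assumptions for illustration, not any product’s actual detection logic.

```python
# Illustrative sketch only: one heuristic an egress-monitoring control
# might apply to DNS query logs. The domains, threshold, and minimum
# label length below are hypothetical, not any product's actual logic.
import math
from collections import Counter

def shannon_entropy(s: str) -> float:
    """Bits of entropy per character of the string."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def looks_machine_generated(domain: str, threshold: float = 3.5) -> bool:
    """Flag long, high-entropy second-level labels, a common trait of
    algorithmically generated C&C domain names."""
    label = domain.rstrip(".").split(".")[-2]
    return len(label) >= 10 and shannon_entropy(label) >= threshold

queries = ["www.example.com", "x7f2kq9zr0b1m3.net", "mail.google.com"]
flagged = [d for d in queries if looks_machine_generated(d)]
```

A real control would combine many such signals (beaconing intervals, newly registered domains, reputation feeds); entropy alone produces false positives on legitimate CDN hostnames.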

But what if you have a business requirement for access to one or more P2P networks? Do you have a way to implement a positive control policy that only allows the specific P2P networks you need and blocks all the others? A Next Generation Firewall ought to enable you to meet this business requirement. I say “ought to” because not all of them do. I have written about NGFWs here, here, here, and here.

23. February 2012 · Black Cat, White Cat | InfoSec aXioms · Categories: blog

Ofer Shezaf highlights one of the fundamental ways of categorizing security tools in his post Black Cat, White Cat | InfoSec aXioms.

Black listing, sometimes called negative security or “open by default”, focuses on catching the bad guys by detecting attacks. Security controls such as Intrusion Prevention Systems and Anti-Virus software use various methods to do so. The most common method to detect attacks is matching signatures against network traffic or files. Other methods include rules, which detect conditions that cannot be expressed in a pattern, and abnormal behavior detection.

White listing, on the other hand, allows only known good activity. Other terms associated with the concept are positive security, “closed by default”, and policy enforcement. White listing is commonly embedded in systems; the obvious example is the authentication and authorization mechanism found in virtually every information system. Dedicated security controls which use white listing either ensure the built-in policy enforcement is used correctly or provide a second enforcement layer. The former include configuration and vulnerability assessment tools, while the latter include firewalls.
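The signature matching Ofer describes can be sketched in a few lines. This is black listing at its simplest: match known-bad patterns against a payload, and allow everything that matches nothing. The signatures below are invented for illustration, not real detection content.

```python
# Minimal black listing sketch: match known-bad byte patterns against a
# payload, as an IPS or anti-virus engine does at its simplest. These
# signatures are invented for illustration, not real detection content.
SIGNATURES = {
    "eicar-test": b"EICAR-STANDARD-ANTIVIRUS-TEST-FILE",
    "fake-nop-sled": b"\x90\x90\x90\x90\xcc",  # hypothetical pattern
}

def scan(payload: bytes) -> list[str]:
    """Return the names of all signatures found in the payload.
    Note the negative model: a payload matching nothing is allowed."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

hits = scan(b"X5O!...EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*")
```

The weakness is inherent in the model: the control can only block what someone has already seen and written a signature for.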

Unfortunately, when manufacturers apply the term “Next Generation” to firewalls, they may be misleading the marketplace. As Ofer says, a firewall, by definition, performs white listing, i.e. policy enforcement. One of the key functions of a NGFW is the ability to white list applications. This means the applications that are allowed must be defined in the firewall policy. On the other hand, if you are defining applications that are to be blocked, that’s black listing, and black listing alone does not make a device a firewall.

Note that Next Generation Firewalls also perform Intrusion Prevention, which is a black listing function. So clearly, NGFWs perform both white listing and black listing functions. But to truly earn the right to call a network security appliance a “Next Generation” Firewall, the device must enable application white listing. Adding “application awareness” as a black listing function is nice, but it does not make a device a NGFW. For more information, I have written about Next Generation Firewalls and the difference between UTMs and NGFWs.


19. February 2012 · Stiennon’s confusion between UTM and Next Generation Firewall · Categories: blog

Richard Stiennon has published a blog post about Netasq, a European UTM vendor, called A brief history of firewalls and the rise of the UTM. I found the post indirectly from Alan Shimmel’s post about it.

Stiennon seems to think that Next Generation Firewalls are just a type of UTM. Shimmel also seems to go along with Stiennon’s view. Stiennon gives credit to IDC for defining the term UTM, but does not acknowledge Gartner’s work in defining the Next Generation Firewall.

My purpose here is not to get into a debate about terms like UTM and NGFW. The real question is which network security device provides the best network security “prevention” control. The reality is that marketing people have so abused the terms UTM and NGFW that you cannot depend on either term to mean anything. My remarks here are based on Gartner’s definition of Next Generation Firewall, which they published in October 2009.

All the UTMs I am aware of, whether software-based or with hardware assist, use port-based (stateful inspection) firewall technology. They may do a lot of other things like IPS, URL filtering and some DLP, but these UTMs have not really advanced the state (pardon the pun) of “firewall” technology. These UTMs do not enable a positive control model (default-deny) from the network layer up through the application layer. They depend on the negative control model of their IPS and application modules/blades.

Next Generation Firewalls, on the other hand, as defined by Gartner’s 2009 research report, enable positive network traffic control policies from the network layer up through the application layer. Therefore true NGFWs are something totally new and were developed in response to the changes in the way applications are now written. In the early days of TCP/IP, port-based firewalls worked well because each new application ran on its assigned port. For example, SMTP ran on port 25. In the 90s, you could be sure that traffic running on port 25 was SMTP and that SMTP would run only on port 25.

About ten years ago applications began using port-hopping, encryption, tunneling, and a variety of other techniques to circumvent port-based firewalls. In fact, we have now reached the point where port-based firewalls are pretty much useless at controlling traffic between networks of different trust levels. UTM vendors responded by adding application identification functionality using their intrusion detection/prevention engines. This is surely better than nothing, but IPS engines use a negative enforcement model, i.e. default allow, and only monitor a limited number of ports. A true NGFW monitors all 65,535 ports for all applications at all times.

In closing, there is no doubt about the value of a network security “prevention” control performing multiple functions. The real question is, does the device you are evaluating fulfill its primary function of reducing the organization’s attack surface by (1) enabling positive control policies from the network layer through the application layer, and (2) doing it across all 65,535 ports all the time?


As I look over my experience in Information Security since 1999, I see three distinct eras with respect to the motivation driving technical control purchases:

  • Basic (mid-90s to early 2000s) – Organizations implemented basic host-based and network-based technical security controls, i.e. anti-virus and firewalls respectively.
  • Compliance (early 2000s to mid-2000s) – Compliance regulations such as Sarbanes-Oxley and PCI drove major improvements in security.
  • Breach Prevention and Incident Detection & Response (BPIDR) (late 2000s to present) – Organizations realize that regulatory compliance represents a minimum level of security and is not sufficient to cope with the fast-changing methods used by cyber predators. Meeting compliance requirements will not effectively reduce the likelihood of a breach by more skilled and aggressive adversaries, nor detect their malicious activity.

I have three examples to support the shift from the Compliance era to the Breach Prevention and Incident Detection & Response (BPIDR) era. The first is the increasing popularity of Palo Alto Networks. No compliance regulation I am aware of makes the distinction between a traditional stateful inspection firewall and a Next Generation Firewall as defined by Gartner in their 2009 research report.  Yet in the last four years, 6,000 companies have selected Palo Alto Networks because their NGFWs enable organizations to regain control of traffic at points in their networks where trust levels change or ought to change.

The second example is the evolution of Log Management/SIEM. One can safely say that the driving force for most Log/SIEM purchases in the early to mid 2000s was compliance. The fastest growing vendors of that period had the best compliance reporting capabilities. However, by the late 2000s, many organizations began to realize they needed better detection controls. We began to see a shift in the SIEM market toward those solutions which not only provided the necessary compliance reports, but could also function satisfactorily as the primary detection control within limited budgets. Hence the ascendancy of Q1 Labs, which actually passed ArcSight in number of installations prior to being acquired by IBM.

The third example is email security. From a compliance perspective, Section 5 of PCI DSS, for example, is very comprehensive regarding anti-virus software. However, it is silent regarding phishing. The popularity of products from Proofpoint and FireEye show that organizations have determined that blocking email-borne viruses is simply not adequate. Phishing and particularly spear-phishing must be addressed.

Rather than simply call the third era “Breach Prevention,” I chose to add “Incident Detection & Response” because preventing all system compromises that could lead to a breach is not possible. You must assume that Prevention controls will have failures. Therefore you must invest in Detection controls as well. Too often, I have seen budget imbalances in favor of Prevention controls.

The goal of a defense-in-depth architecture is to (1) prevent breaches by minimizing attack surfaces, controlling access to assets, and preventing threats and malicious behavior on allowed traffic, and (2) to detect malicious activity missed by prevention controls and detect compromised systems more quickly to minimize the risk of disclosure of confidential data.

18. December 2011 · Gartner December 2011 Firewall Magic Quadrant Comments · Categories: blog

Gartner released their 2011 Enterprise Firewall Magic Quadrant just days before Christmas, 21 months after their last one. I received a copy today via distribution from one of the firewall manufacturers. Here are the key highlights:

  • Palo Alto Networks moved up from the Visionary to Leader quadrant
  • Juniper slid back from the Leader to the Challenger quadrant
  • Cisco remained in the Challenger quadrant
  • There are no manufacturers in the Visionary quadrant

In fact, there are only two manufacturers in the Leader quadrant – the aforementioned Palo Alto Networks and Check Point. And these two manufacturers are the only ones to the right of center!

Given Gartner’s strong belief in the value of Next Generation Firewalls, one might conclude that both of these companies actually meet the NGFW criteria outlined in Gartner’s 2009 research paper. Unfortunately that is not the case today. Check Point’s latest generally available release simply does not meet Gartner’s NGFW requirements.

So the question is, why did Gartner include them in the Leader quadrant? The only explanation I can think of is that Check Point’s next release meets Gartner’s NGFW criteria. Gartner alludes to Project Gaia, which is in beta at a few sites, but says only that it is a blending of Check Point’s three different operating systems. So let’s follow through on this thought experiment. First, this would mean that none of the other vendors will meet Gartner’s NGFW criteria in their next release. If any of them did, why wouldn’t they too be placed to the right of center?

Before I go on, let’s review what a NGFW is. Let’s start with a basic definition of a firewall – a network security device that enables you to define a “Positive Control Model” for what traffic is allowed to pass between two network segments of different trust levels. By Positive Control Model I mean you define what is allowed and deny everything else. Another term for this is “default deny.”

Traditional stateful firewalls enable this Positive Control Model at the port and protocol levels. NGFWs do this also, but most importantly do it at the application level. In fact, an NGFW enables policies that combine port, protocol, and application (and more). Stateful inspection firewalls have no ability to control applications sharing open ports. Some have added application identification and blocking to their IPS modules, but this is a negative enforcement model. In other words, block what I tell you to block and allow everything else. Some have called this the “Whack-A-Mole” approach to application control.
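A positive control policy of this kind can be sketched in a few lines. The rules and application names below are illustrative, not any vendor’s actual policy language; the point is the implicit deny at the end, which blocks an unknown application even when it rides an open port.

```python
# Sketch of the Positive Control Model: a rule base combining port,
# protocol, and application, with an implicit deny at the end. The
# rules and application names are illustrative, not a real policy.
from dataclasses import dataclass

@dataclass
class Rule:
    port: int
    protocol: str
    application: str
    action: str  # "allow" or "deny"

POLICY = [
    Rule(443, "tcp", "salesforce", "allow"),
    Rule(25, "tcp", "smtp", "allow"),
]

def evaluate(port: int, protocol: str, application: str) -> str:
    """First matching rule wins; anything unmatched is denied by default."""
    for rule in POLICY:
        if (rule.port, rule.protocol, rule.application) == (port, protocol, application):
            return rule.action
    return "deny"  # default deny: only what is explicitly allowed passes

# An unrecognized application sharing an open port is still blocked:
evaluate(443, "tcp", "bittorrent")  # "deny" even though port 443 is open
```

Contrast this with the IPS-style negative model, where the final fallthrough would be `return "allow"` and only explicitly listed applications get blocked.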

In order then to qualify as a NGFW, the core traffic analysis engine has to be built from the ground up to perform deep packet inspection and application detection at the beginning of the analysis/decision process to allow or deny the session. Since that was Palo Alto Networks’ vision when they were founded in 2005, that’s what they did. All the other firewall manufacturers have to start from scratch and build an entirely new platform.

So let’s pick up where I left off three paragraphs ago, i.e. the only traditional stateful inspection firewall manufacturer that might have a technically true NGFW coming in its next release is Check Point. Since Palo Alto Networks shipped its first NGFW in mid-2007, this would mean that Check Point is, at best, four and a half years, four major releases, and six thousand customers behind Palo Alto Networks.

On the other hand, if Check Point is in the Leader quadrant because it’s Palo Alto Networks’ toughest competitor, then Palo Alto Networks is in an even better position in the firewall market.