13. September 2012 · The story behind the Microsoft Nitol Botnet takedown · Categories: blog

Earlier today Microsoft announced the takedown of the Nitol botnet and takeover of the 3322.org domain. However, if you are using the Damballa flow-based Detection Control, this was a non-event. Full disclosure – Cymbel partners with Damballa.

Gunter Ollmann, Damballa’s CTO, today commented on Nitol and 3322.org and the ramifications of the Microsoft takedown, which I will summarize.

First, Damballa has been tracking Nitol and the roughly 70 other botnets leveraging 3322.org for quite some time. Therefore, as a Damballa user, any device on your network infected with Nitol or any of these other botnets would be identified. Furthermore, if you were using Damballa’s blocking capabilities, those devices would be prevented from communicating with their malware’s Command & Control (C&C) servers.
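
To make the DNS angle concrete, here is a minimal sketch, in Python, of the kind of check a DNS-based detection control performs against known-abused dynamic DNS zones. It is illustrative only, not Damballa’s actual implementation:

```python
# Illustrative sketch only -- not Damballa's actual implementation.
# Flags DNS queries whose name falls under a known-abused dynamic-DNS
# zone such as 3322.org (the zone Microsoft seized in the Nitol action).

SUSPECT_ZONES = {"3322.org"}  # zones heavily abused for botnet C&C

def is_suspect(qname: str) -> bool:
    """Return True if qname is, or is a subdomain of, a suspect zone."""
    labels = qname.lower().rstrip(".").split(".")
    # Check every suffix of the query name against the zone list.
    for i in range(len(labels) - 1):
        if ".".join(labels[i:]) in SUSPECT_ZONES:
            return True
    return False

print(is_suspect("evil-cnc.3322.org"))  # True (hypothetical C&C name) -> alert / block
print(is_suspect("www.example.com"))    # False
```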

Second, most of these 70-plus botnets make use of “multiple C&C domain names distributed over multiple DNS providers. Botnet operators are only too aware of domain takedown orders from law enforcement, so they add a few layers of resilience to their C&C infrastructure to protect against that kind of disruption.” Therefore, this takedown did not kill these botnets.

In closing, while botnet and DNS provider takedowns are interesting, they simply do not reduce an organization’s risk of data breaches. Damballa does.

21. August 2012 · Zero-day exploit trade impact on enterprises · Categories: blog

SC Magazine’s Dan Kaplan, in The Hypocrisy of the zero-day exploit trade, shows that enterprises can no longer rely on signature-based Detection Controls to mitigate the risks of confidential data breaches resulting from compromised devices.

I am surely not saying that signature-based IPS/IDS controls are dead; you do want to detect and block known threats. However, they are no longer sufficient on their own. They must be complemented by a behavior analysis Detection Control (flow and DNS analysis) as part of a redesigned Defense-in-Depth architecture.
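
To give a feel for what flow-based behavior analysis looks at, consider that C&C malware often “calls home” at near-regular intervals. Here is a minimal sketch of a beaconing heuristic over flow timestamps; the thresholds are illustrative, not tuned values from any product:

```python
from statistics import mean, stdev

def looks_like_beaconing(timestamps: list[float],
                         min_flows: int = 8,
                         max_cv: float = 0.1) -> bool:
    """Flag a (source, destination) pair whose inter-flow intervals
    are suspiciously regular (low coefficient of variation)."""
    if len(timestamps) < min_flows:
        return False
    ts = sorted(timestamps)
    intervals = [b - a for a, b in zip(ts, ts[1:])]
    m = mean(intervals)
    if m <= 0:
        return False
    return stdev(intervals) / m < max_cv

# Ten connections exactly 300 seconds apart -> flagged for investigation
print(looks_like_beaconing([i * 300.0 for i in range(10)]))  # True
```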

29. July 2012 · Speaking of Next Gen Firewalls – Forbes · Categories: blog

I would like to respond to Richard Stiennon’s Forbes article, Speaking of Next Gen Firewalls. Richard starts off his article as follows:

“As near as I can tell the salient feature of Palo Alto Networks’ products that sets them apart is application awareness. … In my opinion application awareness is just an extension of URL content filtering.”

First, application awareness, out of context, is almost meaningless. Second, I view technical controls from a risk management perspective, i.e. I judge the value of a proposed technical control by the risks it can mitigate.

Third, the purpose of a firewall is to establish a positive control model, i.e. limit traffic into and out of a defined network to what is allowed and block everything else. The reason everyone is focused on application awareness is that traditional stateful inspection firewalls are port-based and cannot control modern applications that do not adhere to the network layer port model and conventions established when the Internet protocols were first designed in the 1970s.
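
To make the positive control model concrete, here is a toy policy evaluator; the rule fields and application names are hypothetical, and a real firewall obviously does far more:

```python
# Toy positive control model: traffic is allowed only if it matches an
# explicit rule; everything else falls through to deny. The application
# label is assumed to come from the firewall's own traffic classification.

RULES = [
    {"src_zone": "inside", "dst_zone": "outside", "app": "web-browsing", "action": "allow"},
    {"src_zone": "inside", "dst_zone": "dmz",     "app": "smtp",         "action": "allow"},
]

def evaluate(src_zone: str, dst_zone: str, app: str) -> str:
    for rule in RULES:
        if (rule["src_zone"], rule["dst_zone"], rule["app"]) == (src_zone, dst_zone, app):
            return rule["action"]
    return "deny"  # positive model: anything not explicitly allowed is blocked

print(evaluate("inside", "outside", "web-browsing"))  # allow
print(evaluate("inside", "outside", "bittorrent"))    # deny -- never whitelisted
```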

The reason Palo Alto Networks is so popular is that it extends firewall functionality from the network layer up through the application layer in a single unified policy view. This is unlike most application awareness solutions which, as Richard says, are just extensions of URL filtering, because they are based on proxy technology.

For those more technically inclined, URL Filtering solutions are generally based on proxy technology and therefore only monitor a small set of ports, including 80 and 443. Palo Alto Networks, by contrast, monitors all 65,535 TCP and UDP ports, at rated speeds, all the time, from the network layer up through the application layer. If you doubt this, try it yourself. It’s easy. Simply run a standard application on a non-standard port and see what the logs show.
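
One minimal way to run that test is to serve plain HTTP on an arbitrary non-standard port, generate some client traffic, and then watch how your firewall’s traffic log classifies the session. A port-based device will log a generic or unknown TCP session; an application-aware firewall should still identify the traffic as HTTP. Port 8081 below is an arbitrary choice:

```python
# Serve plain HTTP on a non-standard port, browse to it from a client,
# then check what the firewall's traffic log calls the session.
import http.server
import socketserver

PORT = 8081  # any non-standard port works for this experiment

with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    print(f"Serving HTTP on port {PORT}; generate traffic, then review the firewall logs")
    httpd.serve_forever()
```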

Furthermore, Palo Alto provides a single policy view that includes user, application, zone, URL filtering, and threat prevention columns in addition to the traditional five tuples – source IP, destination IP, source port, destination port, and service.

To the best of my knowledge, Palo Alto Networks is the only firewall, whether called a Next Generation Firewall or a UTM, that has this set of features. Therefore, from a risk management perspective, Palo Alto Networks is the only firewall that can establish a positive enforcement model from the network layer up through the application layer.

21. June 2012 · Bromium Micro-Virtualization: New Approach to endpoint security · Categories: blog

Bromium is emerging from stealth mode with the announcement of a $26.5 million Series B venture capital round, participation in GigaOm Structure, and a lengthy blog post from CTO Simon Crosby explaining what Bromium is doing. Cymbel has been working with Bromium for close to six months and we are very excited about its unique approach to endpoint security.

Bromium has built what they are calling a “micro-hypervisor” that leverages Intel’s Virtualization Technology hardware to isolate each task and each browser tab (Internet Explorer, Firefox, and Chrome) within a version of Windows running on an endpoint.

The result is to greatly reduce the likelihood that malware infecting a document or a browser tab can cross to another browser tab or task, to the base operating system, or to the network. One way of measuring the risk reduction Bromium achieves is by looking at the reduction in attack surface. This can be approximated by comparing the number of lines of code in Windows vs. Bromium’s micro-hypervisor – 10 million vs. 10,000, a thousandfold reduction.

Furthermore, if a browser tab or a downloaded document is infected, not only won’t the malware spread, but when you close the tab or document, the malware is erased from memory. Even if you save a malware-laden document, as long as you open it on a Bromium-installed endpoint, the malware will not infect the underlying machine or spread via the network.

In addition, the user experience is not altered at all, i.e. the user does not realize Bromium is even there unless she opens the Task Manager and sees only the current task on which she is working.

If you are interested in learning more, please contact me and I will send a white paper which goes into a lot more detail.

20. April 2012 · A response to Stiennon’s analysis of Palo Alto Networks · Categories: blog

I was dismayed to read Richard Stiennon’s article in Forbes, Tearing away the veil of hype from Palo Alto Networks’ IPO. My knowledge of network security and experience with Palo Alto Networks appear to be very different from Stiennon’s.

Full disclosure, my company has been a Palo Alto Networks partner for about four years. I noticed on Stiennon’s LinkedIn biography that he worked for one of PAN’s competitors, Fortinet. I don’t own any of the stocks individually mentioned in Stiennon’s article, although from time to time, I own mutual funds that might. Finally, I am planning on buying PAN stock when they go public.

Let me first summarize my key concerns and then I will go into more detail:

  • Stiennon overstates and IMHO misleads the reader about the functionality of stateful inspection firewall technology. While he seems to place value in it, he fails to mention what security risks it can actually mitigate in today’s environment.
  • He does not seem to understand the difference between UTMs and Next Generation Firewalls (NGFWs). UTMs combine multiple functions on an appliance, with each function processed independently and sequentially, and each managed with a separate user interface. NGFWs integrate multiple functions which share information, execute in parallel, and are managed with a unified interface. These differences result in dramatically different risk mitigation capabilities.
  • He does not seem to understand Palo Alto Networks’ unique ability to reduce attack surfaces by enabling a positive control model (default deny) from the network layer up through the application layer.
  • He seems to have missed the fact that Palo Alto Networks NGFWs were designed from the ground up to deliver Next Generation Firewall capabilities, while other manufacturers have simply added features to their stateful inspection firewalls.
  • He erroneously states that Palo Alto Networks does not have stateful inspection capabilities. It does, and it is backward compatible with traditional stateful inspection firewalls to ease conversions.
  • He claims that Palo Alto Networks uses a lot of third-party components when in fact there are only two that I am aware of. And he completely ignores several of Palo Alto Networks’ latest innovations, including WildFire and GlobalProtect.
  • He missed the reason, clearly stated in the S-1, why Palo Alto Networks’ January 2012 quarter revenue was slightly lower than its October 2011 quarter.

Here are my detailed comments.

“Stateful inspection is a core functionality of firewalls introduced by Check Point Software over 15 years ago. It allows an inline gateway device to quickly determine, based on a set policy, if a particular connection is allowed or denied. Can someone in accounting connect to Facebook? Yes or no.”

The question at the end of that quote is misleading and wrong in the context of stateful inspection. Stateful inspection has nothing to do with concepts like who is in accounting or whether the session is attempting to connect to Facebook. Stateful inspection is purely a Layer 3/Layer 4 technology and defines security policies based on Source IP, Destination IP, Source Port, Destination Port, and network protocol, i.e. UDP or TCP.

If you wanted to implement a stateful inspection firewall policy that says Joe in accounting cannot connect to Facebook, you would first have to know the IP address of Joe’s device and the IP address of Facebook. Of course this presents huge administrative problems because somebody would have to keep track of this information and the policy would have to be modified if Joe changed locations. Not to mention the huge number of policy rules that would have to be written for all the possible sites Joe is allowed to visit. No organization I have ever known would attempt to control Joe’s access to Facebook using stateful inspection technology.
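
To see why, here is a toy sketch of a stateful inspection rule match. Note that the only fields available to the decision are addresses, ports, and protocol; there is no notion of a user or an application. The addresses are illustrative:

```python
import ipaddress

# Toy stateful-inspection policy: blocking "Joe" from "Facebook" means
# pinning down his device's IP and Facebook's address ranges by hand.
# All addresses here are illustrative.

STATEFUL_RULES = [
    # (name, src_net, dst_net, protocol, dst_port, action)
    ("joe-no-facebook", "10.1.2.3/32", "157.240.0.0/16", "tcp", 443, "deny"),
]

def evaluate(src_ip: str, dst_ip: str, proto: str, dst_port: int) -> str:
    for name, src_net, dst_net, r_proto, r_port, action in STATEFUL_RULES:
        if (ipaddress.ip_address(src_ip) in ipaddress.ip_network(src_net)
                and ipaddress.ip_address(dst_ip) in ipaddress.ip_network(dst_net)
                and proto == r_proto and dst_port == r_port):
            return action
    return "allow"  # port-based rule sets typically end in broad allows

print(evaluate("10.1.2.3", "157.240.22.35", "tcp", 443))  # deny -- until Joe moves desks
```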

Since the early 2000s, hundreds and hundreds of applications have been written, including Facebook and its subcomponents, that no longer obey the “rules” that were in place in the mid-90s when stateful inspection was invented. At that time, when a new application was built, it would be assigned a specific port number that only that application would use. For example, email transport agents using SMTP were assigned Port 25. Therefore the stateful inspection firewall policy implementer could safely control access to the email transport service by defining policies using Port 25.

At present, the usage of ports is totally chaotic and abused by malicious actors. Applications share ports. Applications hop from port to port looking for a way to bypass stateful inspection firewalls. Cyber predators use this weakness of stateful inspection for their gain and your loss. Of course the security industry understood this issue, and many new types of network security devices were invented and added to the network, as Stiennon acknowledges.

“But, inspecting 100% of traffic to implement these advanced capabilities is extremely stressful to the appliance, all of them still use stateful inspection to keep track of those connections that have been denied. That way the traffic from those connections does not need to be inspected, it is just dropped, while approved connections can still be filtered by the enhanced capability of these Unified Threat Management (UTM) devices (sometimes called Next Generation Firewalls (NGFW), a term coined by Palo Alto Networks).”

The first claim – that inspecting 100% of traffic is extremely stressful to the appliance – is true when a manufacturer adds advanced capabilities like application identification to an existing appliance. Palo Alto Networks understood this and designed an appliance from the ground up specifically to implement these advanced functions under load with low latency.

In the second part of that passage, Stiennon casually lumps together the terms UTM and Next Generation Firewall as if they are synonymous. They are not. While it is true that Palo Alto Networks coined the term Next Generation Firewall, it only became an industry-defined term when Gartner published a research paper in October 2009 (ID Number G00171540) and applied a rigorous definition.

The key point is that a next generation firewall provides fully integrated Application Awareness and Intrusion Prevention with stateful inspection. Fully integrated means that (1) the application identification occurs in the firewall, which enables positive traffic control from the network layer up through the application layer, (2) all intrusion prevention is applied to the resulting allowed traffic, (3) all this is accomplished in a single pass to minimize latency, and (4) there is a unified interface for creating firewall policies. Running multiple inspection processes sequentially, controlled by independently defined policies, results in increased latency and excessive use of security management resources, and thus does not qualify as a Next Generation Firewall.

“But PAN really has abandoned stateful inspection, at a tremendous cost to their ability to establish connections fast enough to address the needs of large enterprises and carriers.”

This is simply false. Palo Alto Networks supports standard stateful inspection for two purposes. First, to ease the conversion process from a traditional stateful inspection firewall: most of our customers start by converting their existing stateful inspection firewall policy rules and then add the more advanced NGFW functions.

Second, the use of ports in policies can be very useful when combined with application identification. For example, you can build a policy that says (a) SMTP can run only on port 25 and (b) only SMTP can run on port 25. The first part (a) assures that if SMTP is detected on any of the other 65,534 ports it will be blocked. This means that no cyber predator can set up an email service on any of your non-SMTP servers. The second part (b) says that no other application besides SMTP can run on port 25. Therefore when you open a port for a specific application, you can ensure it will be the only application running on that port. Palo Alto Networks can do this because its core functionality monitors all 65,535 ports for all applications all the time.
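
Here is a minimal sketch of that two-way policy, assuming a hypothetical identify_app() classifier has already labeled the session’s application independently of its port:

```python
# Two-way port/application policy: (a) SMTP may run only on port 25, and
# (b) only SMTP may run on port 25. The "app" label is assumed to come
# from a hypothetical identify_app() that classifies traffic on any port.

def action(app: str, dst_port: int) -> str:
    if app == "smtp" and dst_port != 25:
        return "deny"   # (a) SMTP found hiding on some other port
    if dst_port == 25 and app != "smtp":
        return "deny"   # (b) a non-SMTP application squatting on port 25
    if app == "smtp" and dst_port == 25:
        return "allow"
    return "deny"       # everything else falls through to default deny

print(action("smtp", 25))    # allow
print(action("smtp", 8080))  # deny -- a rogue mail service
print(action("ssh", 25))     # deny -- tunneling over the mail port
```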

Stiennon then goes on to quote Bob Walder of NSS Labs and interprets his statement as follows:

“In other words, an enterprise deploying PAN’s NGFW is getting full content inspection all the time with no ability to turn it off. That makes the device performance unacceptable as a drop-in replacement for Juniper, Cisco, Check Point, or Fortinet firewalls.”

This statement has no basis in any facts that I am aware of. Palo Alto firewalls are used all the time to replace the above-mentioned companies’ firewalls. Palo Alto has over 6,500 customers! Does full packet inspection take more resources than simple stateful inspection? Of course. But that misses the point. As I said above, stateful inspection is completely useless at providing an organization a Positive Enforcement Model, which after all is the sine qua non of a firewall. By Positive Enforcement Model, I mean the ability to define what is allowed and block everything else. This is also described as “default deny.”

Furthermore, based on my experience, in a bake-off situation where the criteria are a combination of real-world traffic, real-world security policy requirements designed to mitigate defined high risks, and total cost of ownership, Palo Alto Networks will always win. I’ll go a step further and say that in today’s world there is simply no significant risk mitigation value in traditional stateful inspection.

“It’s the application awareness feature. This is where PAN’s R&D spending is going. All the other features made possible by their hardware acceleration and content inspection ability are supported by third parties who provide malware signatures and URL databases of malicious websites and categorization of websites by type.”

This is totally wrong. In fact, the URL filtering database and the endpoint host-checking software in GlobalProtect (explained further on) are the only third-party components Palo Alto Networks uses that I am aware of. PAN built a completely new firewall engine capable of performing stateful inspection (for backward compatibility and for the highly granular policies described above), application control, anti-virus, anti-spyware, anti-malware, and URL Filtering in a single pass. PAN writes all of its own malware signatures and of course participates in security intelligence sharing arrangements with other companies.

Palo Alto Networks has further innovated with (1) WildFire, which provides the ability to analyze executables being downloaded from the Internet to detect zero-day attacks, and (2) GlobalProtect, which enables remote and mobile users to stay under the control and protection of PAN NGFWs.

“While anecdotal, the reports I get from enterprise IT professionals are that PAN is being deployed behindexisting (sic) firewalls. If that is the general case PAN is not the Next Generation Firewall, it is a stand alone technology that provides visibility into application usage. Is that new? Not really. Flow monitoring technology has been available for over a decade from companies like Lancope and Arbor Networks that provides this visibility at a high level. Application fingerprinting was invented by SourceFire and is the basis of their RNA product.”

Wow. Let me try to deconstruct this. First, it is true that some companies start by putting Palo Alto Networks behind existing firewalls. Why not? I see this as an advantage for PAN as it gives organizations the ability to leverage PAN’s value without waiting until it’s time to do a firewall refresh. Also, PAN can replace a proxy to improve content filtering; I’ll save the proxy discussion for another time. I am surely not privy to PAN’s complete breakdown of installation architectures, but “anecdotally” I would say most organizations are doing straight firewall replacements.

Much more importantly, the idea of doing application identification in an IPS or in a flow product totally misses the point. Palo Alto Networks ships the only firewall that does application identification in the firewall itself to enable positive control (default deny) from the network layer up through the application layer. I am surely not saying that there is no value in adding application awareness to IPSs or flow products. There is. But IPSs use a negative control model, i.e. they define what should be blocked and allow everything else. Firewalls are supposed to provide attack surface reduction, and they cannot unless they are able to exert positive control.

“While I will agree that application identification and the ability to enforce policies that control what applications can be used within the enterprise is important I contend that application awareness is ultimately a feature that belongs in a UTM appliance or stand alone device behind the firewall. Like other UTM features it must be disabled for high connection rate environments such as large corporate gateways, data centers, and within carrier networks.”

This may be Stiennon’s opinion, but I would ask: what meaningful risks, besides not meeting the requirements of a compliance regime, does a stateful inspection firewall mitigate, considering the ease with which attackers can bypass it? I have nothing against compliance requirements per se, but our focus is on information security risk mitigation.

“In the three months ending Jan. 31 2012 PAN’s revenue is off from the previous quarter. The fourth quarter is usually the best quarter for technology vendors. There may be some extraordinary situation that accounts for that, but it is not evident in the S-1.”

“There is no denying that year-over-year PAN has been on a tear, almost doubling its revenue from Q4 2010 to Q4 2011. But the glaring fact is that PAN’s revenue growth has completely stalled out in what was a great quarter for the industry.”

Perhaps my commenting on these last paragraphs does not belong in this blog post as they are not technical in nature, but IMHO Stiennon is wrong again. Stiennon glosses over the excellent quarter that preceded the last one, in which PAN grew its revenue from $40.22 million to $57.11 million – a roughly 42% sequential jump. Thus the last quarter’s $56.68 million looks to Stiennon like a stall with no explanation. However, here is the exact quote from the S-1 explaining what happened: “For the three month period ended October 31, 2011, the increase in product revenue was driven by strong performance in our federal business, as a result of improved productivity from our expanded U.S. government sales force and increased U.S. government spending at the end of its September 30 fiscal year.” My translation from investment banker/lawyer speak to English is that PAN did so well with the Federal government that quarter that the following quarter suffered by comparison. I could be wrong.

In closing, let me say I fully understand that there is no single silver bullet in security. Our approach is about balancing resources among Prevention, Detection, and Incident Response controls. There is never enough budget to implement every technical control that mitigates some risk. The exercise is to prioritize the selection of controls within budget constraints to provide the maximum information security risk reduction based on an organization’s understanding of its risks. While these priorities vary widely among organizations, I can confidently say that based on my experience, Palo Alto Networks provides the best network-based Prevention Control risk mitigation available today. Its technology is, yes, revolutionary, and well worth investing the time to understand.

09. March 2012 · The six most dangerous infosec attacks · Categories: blog

The six most dangerous infosec attacks – SC Magazine Australia.

SC Magazine Australia summarized Ed Skoudis’s and Johannes Ullrich’s RSA presentation on the six most dangerous IT security threats of 2011 and what to expect in the year ahead. They are:

  1. DNS as command-and-control
  2. SSL slapped down
  3. Mobile malware as a network infection vector
  4. Hacktivism is back
  5. SCADA at home
  6. Cloud Security
Additional trends:
  • IPv6
  • Oldies
  • Social Networking
  • Malware
  • DNSSEC
The point of the Malware item above is that blacklisting is a losing proposition and organizations need to move to whitelisting. IMHO, this is especially true for establishing positive network control at the application level.

04. March 2012 · Anonymous, Decentralized and Uncensored File-Sharing is Booming | TorrentFreak · Categories: blog

Anonymous, Decentralized and Uncensored File-Sharing is Booming | TorrentFreak.

Despite efforts to curb file-sharing, it’s booming. New file-sharing apps have been developed that are harder for enterprises to control.

“The file-sharing landscape is slowly adjusting in response to the continued push for more anti-piracy tools, the final Pirate Bay verdict, and the raids and arrests in the Megaupload case. Faced with uncertainty and drastic changes at file-sharing sites, many users are searching for secure, private and uncensored file-sharing clients. Despite the image its name suggests, RetroShare is one such future-proof client.”

If your Next Generation Firewall uses a Positive Control Model and monitors all 65,535 ports all the time, you do not have to worry about these new file-sharing products because they will be blocked as unknown applications. Of course, before you go into production, you must investigate all of the unknown apps to assure that all business-required apps are identified, defined, and allowed by policy.
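
As a sketch of that pre-production investigation step, you might summarize the sessions your firewall classified as unknown so that each one can be identified as business-required (then defined and allowed) or left to the default block. The log field names here are hypothetical:

```python
from collections import Counter

def unknown_app_report(sessions: list[dict]) -> Counter:
    """Count unknown-application sessions by (destination, port)."""
    unknown = Counter()
    for s in sessions:
        if s["app"] in ("unknown-tcp", "unknown-udp"):
            unknown[(s["dst_ip"], s["dst_port"])] += 1
    return unknown

sessions = [
    {"app": "web-browsing", "dst_ip": "93.184.216.34", "dst_port": 80},
    {"app": "unknown-tcp",  "dst_ip": "203.0.113.9",   "dst_port": 6889},
    {"app": "unknown-tcp",  "dst_ip": "203.0.113.9",   "dst_port": 6889},
]
for (dst, port), n in unknown_app_report(sessions).most_common():
    print(f"{n:4d} unknown sessions -> {dst}:{port}")  # investigate before go-live
```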

24. February 2012 · Botnet communicates via P2P instead of C&C · Categories: blog

Symantec reported on a version of Zeus/SpyEye that communicates via P2P among its bot peers rather than via “traditional” C&C directly to its control servers. (I put traditional in quotes because I don’t want to give the impression that detecting C&C traffic is easy.)

“…it seems that the C&C server has disappeared entirely for this functionality. Where they were previously sending and receiving control messages to and from the C&C, these control messages are now handled by the P2P network.

This means that every peer in the botnet can act as a C&C server, while none of them really are one. Bots are now capable of downloading commands, configuration files, and executables from other bots—every compromised computer is capable of providing data to the other bots. We don’t yet know how the stolen data is communicated back to the attackers, but it’s possible that such data is routed through the peers until it reaches a drop zone controlled by the attackers.”

Now, if you are successfully blocking all P2P traffic on your network, you don’t have to worry about this new development. However, when P2P is blocked, this version of Zeus/SpyEye reverts to traditional C&C methods. So you still need a technical network security control that can reliably detect compromised endpoints by monitoring egress traffic at proxies and firewalls, along with DNS traffic, because you surely cannot rely on your host-based security controls. (If you doubt my claim, please contact me and I will prove it to you.)
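
As one illustration of the kind of egress DNS monitoring I mean: infected endpoints running DGA or P2P malware often generate an unusually high rate of failed lookups. This sketch flags endpoints by NXDOMAIN count; the threshold and log fields are illustrative:

```python
from collections import Counter

def flag_noisy_resolvers(dns_log: list[dict], max_nxdomain: int = 50) -> list[str]:
    """Return endpoint IPs whose failed-lookup (NXDOMAIN) count exceeds
    the threshold over the analysis window."""
    failures = Counter()
    for query in dns_log:
        if query["rcode"] == "NXDOMAIN":
            failures[query["src_ip"]] += 1
    return [ip for ip, n in failures.items() if n > max_nxdomain]

# Feed this a day's DNS logs; the flagged endpoints are candidates for
# incident response, not proof of infection by themselves.
```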

But what if you have a business requirement for access to one or more P2P networks? Do you have a way to implement a positive control policy that only allows the specific P2P networks you need and blocks all the others? A Next Generation Firewall ought to enable you to meet this business requirement. I say “ought to” because not all of them do. I have written about NGFWs here, here, here, and here.

23. February 2012 · Black Cat, White Cat | InfoSec aXioms · Categories: blog

Ofer Shezaf highlights one of the fundamental ways of categorizing security tools in his post Black Cat, White Cat | InfoSec aXioms.

“Black listing, sometimes called negative security or ‘open by default’, focuses on catching the bad guys by detecting attacks. Security controls such as Intrusion Prevention Systems and Anti-Virus software use various methods to do so. The most common method to detect attacks is matching signatures against network traffic or files. Other methods include rules which detect conditions that cannot be expressed in a pattern and abnormal behavior detection.

White listing on the other hand allows only known good activity. Other terms associated with the concept are positive security, ‘closed by default’, and policy enforcement. White listing is commonly embedded in systems; the obvious example is the authentication and authorization mechanism found in virtually every information system. Dedicated security controls which use white listing either ensure the built-in policy enforcement is used correctly or provide a second enforcement layer. The former include configuration and vulnerability assessment tools while the latter include firewalls.”
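
To make the distinction concrete, here is the same session evaluated under both models; all the application names are illustrative:

```python
BLACKLIST = {"bittorrent", "tor"}           # negative model: block known bad
WHITELIST = {"web-browsing", "ssl", "dns"}  # positive model: allow known good

def blacklist_decision(app: str) -> str:
    return "deny" if app in BLACKLIST else "allow"   # open by default

def whitelist_decision(app: str) -> str:
    return "allow" if app in WHITELIST else "deny"   # closed by default

# A brand-new file-sharing app slips past the blacklist (no signature yet)
# but is stopped by the whitelist:
print(blacklist_decision("new-p2p-app"))  # allow
print(whitelist_decision("new-p2p-app"))  # deny
```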

Unfortunately, when manufacturers apply the term “Next Generation” to firewalls, they may be misleading the marketplace. As Ofer says, a firewall, by definition, performs white listing, i.e. policy enforcement. One of the key functions of a NGFW is the ability to white list applications. This means the applications that are allowed must be defined in the firewall policy. If, on the other hand, you are defining applications that are to be blocked, that’s black listing, and not a firewall.

Note that Next Generation Firewalls also perform Intrusion Prevention, which is a black listing function. So clearly, NGFWs perform both white listing and black listing functions. But to truly earn the right to be called a “Next Generation” Firewall, a network security appliance must enable application white listing. Adding “application awareness” as a blacklist function is nice, but it does not make a NGFW. For more information, I have written about Next Generation Firewalls and the difference between UTMs and NGFWs.

19. February 2012 · Stiennon’s confusion between UTM and Next Generation Firewall · Categories: blog

Richard Stiennon has published a blog post about Netasq, a European UTM vendor, called A brief history of firewalls and the rise of the UTM. I found the post indirectly through Alan Shimel’s post about it.

Stiennon seems to think that Next Generation Firewalls are just a type of UTM. Shimel also seems to go along with Stiennon’s view. Stiennon gives credit to IDC for defining the term UTM, but does not acknowledge Gartner’s work in defining the term Next Generation Firewall.

My purpose here is not to get into a debate about terms like UTM and NGFW. The real question is which network security device provides the best network security “prevention” control. The reality is that marketing people have so abused the terms UTM and NGFW, you cannot depend on the term to mean anything. My remarks here are based on Gartner’s definition of Next Generation Firewall which they published in October 2009.

All the UTMs I am aware of, whether software-based or with hardware assist, use port-based (stateful inspection) firewall technology. They may do a lot of other things like IPS, URL filtering and some DLP, but these UTMs have not really advanced the state (pardon the pun) of “firewall” technology. These UTMs do not enable a positive control model (default-deny) from the network layer up through the application layer. They depend on the negative control model of their IPS and application modules/blades.

Next Generation Firewalls, on the other hand, as defined by Gartner’s 2009 research report, enable positive network traffic control policies from the network layer up through the application layer. Therefore true NGFWs are something totally new and were developed in response to the changes in the way applications are now written. In the early days of TCP/IP, port-based firewalls worked well because each new application ran on its assigned port. For example, SMTP ran on port 25. In the 90s, you could be sure that traffic that ran on port 25 was SMTP and that SMTP would run only on port 25.

About ten years ago applications began using port-hopping, encryption, tunneling, and a variety of other techniques to circumvent port-based firewalls. In fact, we have now reached the point where port-based firewalls are pretty much useless at controlling traffic between networks of different trust levels. UTM vendors responded by adding application identification functionality using their intrusion detection/prevention engines. This is surely better than nothing, but IPS engines use a negative enforcement model, i.e. default allow, and only monitor a limited number of ports. A true NGFW monitors all 65,535 ports for all applications at all times.
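
To illustrate the difference, here is a toy comparison of the two models evaluating the same port-hopping application; the names and ports are illustrative:

```python
def port_based_action(dst_port: int) -> str:
    # Classic rule set: block the P2P app's well-known port, allow the rest.
    return "deny" if dst_port == 6881 else "allow"

def app_based_action(app: str) -> str:
    # Positive model: only explicitly allowed applications pass.
    return "allow" if app in {"web-browsing", "dns", "smtp"} else "deny"

# The P2P client hops from its well-known port to 443 to evade the rule:
print(port_based_action(6881))         # deny  -- caught on the default port
print(port_based_action(443))          # allow -- evasion succeeds
print(app_based_action("bittorrent"))  # deny  -- identified regardless of port
```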

In closing, there is no doubt about the value of a network security “prevention” control performing multiple functions. The real question is, does the device you are evaluating fulfill its primary function of reducing the organization’s attack surface by (1) enabling positive control policies from the network layer through the application layer, and (2) doing it across all 65,535 ports all the time?
