20. January 2014 · Comments Off on How Palo Alto Networks could have prevented the Target breach · Categories: blog

Brian Krebs’ recent posts on the Target breach, A First Look at the Target Intrusion, Malware, and A Closer Look at the Target Malware, provide the most detailed and accurate analysis available.

The malware the attackers used captured complete credit card data contained on the mag stripe by “memory scraping.”

This type of malicious software uses a technique that parses data stored briefly in the memory banks of specific POS devices; in doing so, the malware captures the data stored on the card’s magnetic stripe in the instant after it has been swiped at the terminal and is still in the system’s memory. Armed with this information, thieves can create cloned copies of the cards and use them to shop in stores for high-priced merchandise. Earlier this month, U.S. Cert issued a detailed analysis of several common memory scraping malware variants.

Furthermore, no known antivirus software at the time could detect this malware.

The source close to the Target investigation said that at the time this POS malware was installed in Target’s environment (sometime prior to Nov. 27, 2013), none of the 40-plus commercial antivirus tools used to scan malware at virustotal.com flagged the POS malware (or any related hacking tools that were used in the intrusion) as malicious. “They were customized to avoid detection and for use in specific environments,” the source said.

The key point I want to discuss, however, is that the attackers took control of an internal Target server and used it to collect and store the stolen credit card information from the POS terminals.

Somehow, the attackers were able to upload the malicious POS software to store point-of-sale machines, and then set up a control server within Target’s internal network that served as a central repository for data hoovered by all of the infected point-of-sale devices.

“The bad guys were logging in remotely to that [control server], and apparently had persistent access to it,” a source close to the investigation told KrebsOnSecurity. “They basically had to keep going in and manually collecting the dumps.”

First, obviously the POS terminals have to communicate with specific Target servers to complete and store transactions. Second, the communications between the POS terminals and the malware on the compromised server(s) could have been denied had there been policies defined and enforced to do so. Palo Alto Networks’ Next Generation Firewalls are ideal for this use case for the following two reasons:

  1. Palo Alto Networks enables you to include zone, IP address, port, user, protocol, application information, and more in a single policy.
  2. Palo Alto Networks firewalls monitor all ports for all protocols and applications, all of the time, to enforce these policies and establish a Positive Control Model (default deny, or application traffic whitelisting).

You might very well ask, why couldn’t Router Access Control Lists be used? Or why not a traditional port-based, stateful inspection firewall? Because these types of network controls limit policy definition to ports, IP addresses, and protocols, which cannot enforce a Positive Control Model. They are simply not detailed enough to control traffic with a high degree of confidence. One or the other might have worked in the 1990s. But by the mid-2000s, network-based applications were regularly bypassing both of these types of controls.

Therefore, if Target had deployed Palo Alto Networks firewalls between the POS terminals and their servers with granular policies to control POS terminals’ communications by zone, port, and application, the malware on the POS terminals would never have been able to communicate with the server(s) the attackers compromised.
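
To make this concrete, here is a minimal sketch, written in Python rather than actual PAN-OS configuration, of what a default-deny policy for the POS segment might look like; the zone, application, and user names are hypothetical:

```python
# Minimal sketch of a default-deny (Positive Control Model) policy for a POS
# segment. Illustrative only -- not PAN-OS syntax; all names are hypothetical.
from dataclasses import dataclass

@dataclass
class Session:
    src_zone: str
    dst_zone: str
    dst_port: int
    app: str      # application as identified by the firewall, not inferred from the port
    user: str

# Everything that is allowed is listed explicitly; anything else is denied.
ALLOW_RULES = [
    # (src_zone, dst_zone, dst_port, app, user)
    ("pos-terminals", "payment-servers", 443, "pos-payment-app", "svc-pos"),
]

def evaluate(s: Session) -> str:
    for rule in ALLOW_RULES:
        if (s.src_zone, s.dst_zone, s.dst_port, s.app, s.user) == rule:
            return "allow"
    return "deny"  # default deny

# Legitimate transaction traffic matches the single allow rule:
print(evaluate(Session("pos-terminals", "payment-servers", 443, "pos-payment-app", "svc-pos")))  # allow
# The malware's attempt to reach the attackers' staging server matches nothing:
print(evaluate(Session("pos-terminals", "internal-servers", 445, "unknown-tcp", "svc-pos")))     # deny
```

Because the model is default deny, the dump transfers to the attackers' staging server match no allow rule and are dropped, even though the malware itself was never flagged by antivirus.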

In addition, it’s possible that the POS terminals would never have become infected in the first place, because the server(s) the attackers initially compromised would not have been able to communicate with the POS terminals. Note, I am not assuming that the servers used to compromise the POS terminals were the same servers used to collect the stolen credit card data.

Unfortunately, a control with the capabilities of Palo Alto Networks is not specified by the Payment Card Industry (PCI) Data Security Standard (DSS). Yes, “Requirement #1: Install and maintain a firewall configuration to protect cardholder data,” seems to cover the subject. However, you can fully meet these PCI DSS requirements with a port-based, stateful inspection firewall. But, as I said above, an attacker can easily bypass this 1990s-era network control. Retailers and e-commerce sites need to go beyond PCI DSS to actually protect themselves. What you need is a Next Generation Firewall like Palo Alto Networks’, which enables you to define and enforce a Positive Control Model.

04. April 2013 · Comments Off on The Real Value of a Positive Control Model · Categories: blog

During the last several years I’ve written a lot about the fact that Palo Alto Networks enables you to re-establish a network-based Positive Control Model from the network layer up through the application layer. But I never spent much time on why it’s important.

Today, I will reference a blog post by Jack Whitsitt, Avoiding Strategic Cyber Security Loss and the Unacceptable Offensive Advantage (Post 2/2), to help explain the value of implementing a Positive Control Model.

TL;DR: All information breaches result from human error. The human error rate per unit of information technology is fairly constant. However, because IT is always expanding (more applications and more functions per application), the actual number of human errors resulting in Vulnerabilities (used in the most general sense of the word) per time period is always increasing. Unfortunately, the information security team has limited resources (Defensive Capability) and cannot cope with the users’ ever increasing number of errors. This has created an ever growing “Offensive Advantage (Vulnerabilities – Defensive Capability).”  However, implementing a Positive Control Model to influence/control human behavior will reduce the number of user errors per time interval, which will reduce the Offensive Advantage to a manageable size.
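
To illustrate the rate argument numerically, here is a toy sketch; every starting value and growth rate below is an illustrative assumption of mine, not a figure from Jack’s post:

```python
# Toy model of the Offensive Advantage argument. All starting values and growth
# rates are illustrative assumptions, not data from Jack Whitsitt's post.

def offensive_advantage(years, error_rate_per_it_unit, it_growth, defense_growth):
    it_units, defense = 100.0, 80.0          # arbitrary starting sizes
    gaps = []
    for _ in range(years):
        vulnerabilities = it_units * error_rate_per_it_unit
        gaps.append(round(vulnerabilities - defense, 1))   # Offensive Advantage = Vulns - Defense
        it_units *= 1 + it_growth            # IT keeps expanding...
        defense *= 1 + defense_growth        # ...faster than defensive capability grows
    return gaps

# Without bending the curve, the gap widens every year:
print(offensive_advantage(5, error_rate_per_it_unit=1.0, it_growth=0.20, defense_growth=0.05))
# A Positive Control Model that lowers the effective error rate shrinks the gap:
print(offensive_advantage(5, error_rate_per_it_unit=0.6, it_growth=0.20, defense_growth=0.05))
```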

On the network side Palo Alto Networks’ Next Generation Firewall monitors and controls traffic by user and application across all 65,535 TCP and UDP ports, all of the time, at specified speeds. Granular policies based on any combination of application, user, security zone, IP address, port, URL, and/or Threat Protection profiles are created with a single unified interface that enables the infosec team to respond quickly to new business requirements.

On the endpoint side, Trusteer provides a behavioral type of whitelisting that prevents device compromise and confidential data exfiltration. It requires little to no administrative configuration effort. Thousands of agents can be deployed in days. When implemented on already deployed Windows and Mac devices, Trusteer will detect compromised devices that traditional signature-based anti-virus products miss.

Let’s start with Jack’s basic truths about the relationships between technology, people’s behavior, and infosec resources. Cyber security is a problem that occurs over unbounded time. So it’s a rate problem driven by the ever increasing number of human errors per unit of time. While the number of human errors per unit of time per “unit of information technology” is steady, complexity, in the form of new applications and added functions to existing applications, is constantly increasing. Therefore the number of human errors per unit of time is constantly increasing.

Unfortunately, information security resources (technical and administrative controls) are limited. Therefore the organization’s Defense Capability cannot keep up with the increasing number of Vulnerabilities. Since the number of human errors increases at a faster rate than the limited-resource Defense Capability, an Unacceptable Offensive Advantage is created. Here is a diagram that shows this.

[Figure: the Vulnerability curve grows faster than the Defense curve, creating an ever-widening Offensive Advantage]

What’s even worse, most Defensive controls cannot significantly shrink the gap between the Vulnerability curve and the Defense curve because they do not bend the vulnerability curve, as this graph shows.

[Figure: most defensive controls narrow the gap only slightly because they do not bend the Vulnerability curve]

So the only real hope of reducing organizational cyber security risk, i.e. the adversaries’ Offensive Advantage, is to bend the Vulnerability curve, as this graph shows.

[Figure: bending the Vulnerability curve reduces the Offensive Advantage]

Once you do that, you can apply additional controls to further shrink the gap between Vulnerability and Defense curves as this graph shows.

[Figure: with the Vulnerability curve bent, additional controls further shrink the gap between the Vulnerability and Defense curves]

The question is how to do this. Perhaps Security Awareness Training can have some impact.

I recommend implementing network and host-based technical controls that can establish a Positive Control Model. In other words, only by defining what people are allowed to do and denying everything else can you actually bend the Vulnerability curve, i.e. reduce human errors, both unintentional and intentional.

Implementing a Positive Control Model does not happen instantly, i.e. it, too, is a rate problem. But if you don’t have the technical controls in place, no amount of process is going to improve the organization’s security posture.

This is why firewalls are such a critical network technical control. They are placed at critical choke points in the network, between subnets of different trust levels, with the express purpose of implementing a Positive Control Model.

Firewalls first became popular in the mid 1990s. At that time, when a new application was built, it was assigned a port number. For example, the mail protocol, SMTP, was assigned port 25, and the HTTP protocol was assigned port 80. At that time, (1) protocol and application meant the same thing, and (2) all applications “behaved,” i.e. they ran only on their assigned ports. Given this environment, all a firewall had to do was use the port numbers (and IP addresses) to control traffic. Hence the popularity of port-based stateful inspection firewalls.

Unfortunately, starting in the early 2000s, developers began writing applications to bypass the port-based stateful inspection firewall in order to get their applications deployed quickly in organizations without waiting for the security teams to change policies. Applications were also developed that could share a port, such as port 80, because it was always open to give people access to the Internet. Other techniques like port-hopping and encryption were likewise used to bypass the port-based, stateful inspection firewall.

Security teams started deploying additional network security controls like URL Filtering to complement firewalls. This increase in complexity created new problems: (1) policy coordination between URL Filtering and the firewalls, (2) performance issues, and (3) because URL Filtering products were mostly proxy based, they would break some of the newer applications, frustrating users trying to do their jobs.

By 2005 it was obvious to some people that application technology had made port-based firewalls and their helpers obsolete. A completely new firewall architecture was needed that (1) classified traffic by application first, regardless of port, and (2) was backward compatible with port-based firewalls to ease the conversion process. This is exactly what the Palo Alto Networks team did, releasing their first “Next Generation” Firewall in 2007.

Palo Alto Networks classifies traffic by application at the beginning of the policy process. It monitors all 65,535 TCP and UDP ports for all applications, all of the time, at specified speeds. This enables organizations to re-establish the Positive Control Model, which bends the “Vulnerability” curve and allows an infosec team with limited resources to reduce what Jack Whitsitt calls the adversaries’ “Offensive Advantage.”

On the endpoint side, Trusteer provides a type of Positive Control Model / whitelisting whereby highly targeted applications like browsers, Java, Adobe Flash, PDF, and Microsoft Office applications are automatically protected behaviorally. The Trusteer agent understands the memory state – file I/O relationship to the degree that it knows the difference between good I/O and malicious I/O behavior. Trusteer then blocks the malicious I/O before any damage can be done.

Thus human errors resulting from social engineering, such as clicking on links to malicious web pages or opening documents containing malicious code, are automatically blocked. This is all done with no policy configuration effort on the part of the infosec team; Trusteer updates the policies periodically, so there are no policies to configure. Furthermore, thousands of agents can be deployed in days. Finally, when implemented on already deployed Windows and Mac endpoints, it will detect devices that have already been compromised.

Trusteer, founded in 2006, has over 40 million agents deployed across the banking industry to protect online banking users. So their agent technology has been battle tested.

In closing, only by implementing technical controls that establish a Positive Control Model to reduce human errors can an organization bend the Vulnerability curve sufficiently to reduce the adversaries’ Offensive Advantage to an acceptable level.

25. February 2013 · Comments Off on Surprising Application-Threat Analysis from Palo Alto Networks · Categories: blog

This past week, Palo Alto Networks released its H2/2012 Application Usage and Threat Report. Actually, it’s the first time Palo Alto has integrated Application Usage and Threat Analysis. Previous reports were focused only on Application Risk. This report analyzed 12.6 petabytes of data from 3,056 networks, covering 1,395 applications. 5,307 unique threats were identified from 268 million threat logs.

Here are the four most interesting items I noted:

1. Of the 1,395 applications found, 10 were responsible for 97% of all Exploit* logs. One of these was web-browsing. This is to be expected. However, the other nine were internal applications representing 82% of the Exploit* logs!!

This proves once again that perimeter traffic security monitoring is not adequate. Internal network segmentation and threat monitoring are required.

2. Custom or Unknown UDP traffic represented only 2% of all the bandwidth analyzed, yet it accounted for 55% of the Malware* logs!!

This clearly shows the importance of minimizing unidentified application traffic. Therefore the ratio of unidentified to identified traffic is a key security performance indicator and ought to trend down over time.
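
As a rough illustration, here is one way you might compute that indicator from an exported traffic-log summary; the CSV columns and the “unknown” application labels are assumptions to adapt to whatever your firewall actually exports:

```python
# Sketch: compute the unidentified-to-identified traffic ratio from an exported
# traffic-log summary. Column names and "unknown-*" labels are assumptions.
import csv

UNKNOWN_APPS = {"unknown-tcp", "unknown-udp", "unknown-p2p"}

def unknown_traffic_ratio(path: str) -> float:
    known = unknown = 0
    with open(path, newline="") as f:
        for row in csv.DictReader(f):          # assumed columns: app, bytes
            if row["app"] in UNKNOWN_APPS:
                unknown += int(row["bytes"])
            else:
                known += int(row["bytes"])
    return unknown / (known + unknown)

# Example (hypothetical export file):
# print(f"{unknown_traffic_ratio('traffic_export.csv'):.1%}")
# Track this value over time; it should trend toward zero as unknown
# applications are investigated and either defined in policy or blocked.
```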

3. DNS traffic accounted for only 0.4% of total bytes but 25.4% of sessions, and ranked third for Malware* logs at 13%.

No doubt most, if not all, of this represents malicious Command & Control traffic. If you are not actively monitoring and analyzing DNS traffic, you are missing a key method of detecting compromised devices in your network.
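
A first pass at this does not have to be elaborate. Here is a sketch that flags internal hosts querying an unusually large number of unique domains, one common indicator of command-and-control activity; the log line format and the threshold are my assumptions:

```python
# Sketch: flag internal hosts that query an unusually large number of unique
# domains. The log format ("client_ip domain") and threshold are assumptions.
from collections import defaultdict

def suspicious_dns_clients(dns_log_lines, max_unique_domains=500):
    domains_by_client = defaultdict(set)
    for line in dns_log_lines:
        client, domain = line.split()[:2]
        domains_by_client[client].add(domain)
    return sorted(c for c, d in domains_by_client.items() if len(d) > max_unique_domains)

# Tiny in-memory example:
sample = [
    "10.1.1.5 example.com",
    "10.1.1.9 a1b2c3.bad-domain.net",
    "10.1.1.9 d4e5f6.bad-domain.net",
]
print(suspicious_dns_clients(sample, max_unique_domains=1))  # ['10.1.1.9']
```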

4. 85 of the 356 applications that use SSL never use port 443.

If your firewall is not monitoring all ports for all applications all of the time, you are simply not getting complete visibility and cannot re-establish a Positive Control Model.

*If you are not familiar with Palo Alto Networks’ Threat Protection function, “Exploit” and “Malware” are the two main categories of “Threat” logs. There is a table at the top of page 4 of this AUT report that summarizes the categories and sub-categories of the 268 million Threat Logs captured and analyzed. The “Exploit” logs refer to matches against vulnerability signatures which are typical of Intrusion Prevention Systems. The “Malware” logs are for Anti-Virus and Anti-Spyware signature matches.

What is not covered in this report is Palo Alto’s cloud-based Wildfire zero-day analysis service, which analyzes files not seen before to determine whether they are benign or malicious. If malicious behavior is found, signatures of the appropriate types are generated in less than one hour and update Threat Protection. In addition, the appropriate IP addresses and URLs are added to their respective blacklists.

This report is well worth reading.

29. July 2012 · Comments Off on Speaking of Next Gen Firewalls – Forbes · Categories: blog

I would like to respond to Richard Stiennon’s Forbes article, Speaking of Next Gen Firewalls. Richard starts off his article as follows:

“As near as I can tell the salient feature of Palo Alto Networks’ products that sets them apart is application awareness. … In my opinion application awareness is just an extension of URL content filtering.”

First, let me say that application awareness, out of context, is almost meaningless. Second, I view technical controls from a risk management perspective, i.e. I judge the value of a proposed technical control by the risks it can mitigate.

Third, the purpose of a firewall is to establish a positive control model, i.e. limit traffic into and out of a defined network to what is allowed and block everything else. The reason everyone is focused on application awareness is that traditional stateful inspection firewalls are port-based and cannot control modern applications that do not adhere to the network layer port model and conventions established when the Internet protocols were first designed in the 1970s.

The reason Palo Alto Networks is so popular is that it extends firewall functionality from the network layer up through the application layer in a single unified policy view. This is unlike most application awareness solutions which, as Richard says, are just extensions of URL filtering, because they are based on proxy technology.

For those more technically inclined, URL Filtering solutions are generally based on proxy technology and therefore only monitor a small set of ports including 80 and 443. However, Palo Alto Networks monitors all 65,535 TCP and UDP ports at specified speeds, all the time from the network layer up through the application layer. If you doubt this, try it yourself. It’s easy. Simply run a standard application on a non-standard port and see what the logs show.
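
For example, one quick way to run that test, assuming a lab host behind the firewall and an arbitrary non-standard port, is to serve plain HTTP on port 8081 and then check how the firewall log classifies the session:

```python
# Serve plain HTTP from a lab host on a non-standard port (8081 is an arbitrary
# choice), browse to it from across the firewall, and see whether the logs
# identify the session by application rather than lumping it in by port number.
from http.server import HTTPServer, SimpleHTTPRequestHandler

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", 8081), SimpleHTTPRequestHandler)
    print("Serving HTTP on port 8081; check how your firewall classifies the traffic.")
    server.serve_forever()
```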

Furthermore, Palo Alto provides a single policy view that includes user, application, zone, URL filtering, and threat prevention columns in addition to the traditional five-tuple – source IP, destination IP, source port, destination port, and protocol.

To the best of my knowledge, Palo Alto Networks is the only firewall, whether called a Next Generation Firewall or a UTM, that has this set of features. Therefore, from a risk management perspective, Palo Alto Networks is the only firewall that can establish a positive enforcement model from the network layer up through the application layer.

20. April 2012 · Comments Off on A response to Stiennon’s analysis of Palo Alto Networks · Categories: blog

I was dismayed to read Richard Stiennon’s article in Forbes, Tearing away the veil of hype from Palo Alto Networks’ IPO. I will say my knowledge of network security and experience with Palo Alto Networks appear to be very different from Stiennon’s.

Full disclosure, my company has been a Palo Alto Networks partner for about four years. I noticed on Stiennon’s LinkedIn biography that he worked for one of PAN’s competitors, Fortinet. I don’t own any of the stocks individually mentioned in Stiennon’s article, although from time to time, I own mutual funds that might. Finally, I am planning on buying PAN stock when they go public.

Let me first summarize my key concerns and then I will go into more detail:

  • Stiennon overstates and, IMHO, misleads the reader about the functionality of stateful inspection firewall technology. While he seems to place value in it, he fails to mention what security risks it can actually mitigate in today’s environment.
  • He does not seem to understand the difference between UTMs and Next Generation Firewalls (NGFW). UTMs combine multiple functions on an appliance, with each function processed independently and sequentially, and each managed with a separate user interface. NGFWs integrate multiple functions which share information, execute in parallel, and are managed with a unified interface. These differences result in dramatically different risk mitigation capabilities.
  • He does not seem to understand Palo Alto Networks’ unique ability to reduce attack surfaces by enabling a positive control model (default deny) from the network layer up through the application layer.
  • He seems to have missed the fact that Palo Alto Networks NGFWs were designed from the ground up to deliver Next Generation Firewall capabilities, while other manufacturers have simply added features to their stateful inspection firewalls.
  • He erroneously states that Palo Alto Networks does not have stateful inspection capabilities. It does, and it is backward compatible with traditional stateful inspection firewalls to ease conversions.
  • He claims that Palo Alto Networks uses a lot of third party components when in fact there are only two that I am aware of. And he completely ignores several of Palo Alto Networks’ latest innovations, including Wildfire and GlobalProtect.
  • He missed the reason why Palo Alto Networks Jan 2012 quarter revenue was slightly lower than its Oct 2011 quarter which was clearly stated in the S-1.

Here are my detailed comments.

Stateful inspection is a core functionality of firewalls introduced by Check Point Software over 15 years ago. It allows an inline gateway device to quickly determine, based on a set policy, if a particular connection is allowed or denied. Can someone in accounting connect to Facebook? Yes or no.

The sentence about someone in accounting connecting to Facebook is misleading and wrong in the context of stateful inspection. Stateful inspection has nothing to do with concepts like who is in accounting or whether the session is attempting to connect to Facebook. Stateful inspection is purely a Layer 3/Layer 4 technology and defines security policies based on Source IP, Destination IP, Source Port, Destination Port, and network protocol, i.e. UDP or TCP.

If you wanted to implement a stateful inspection firewall policy that says Joe in accounting cannot connect to Facebook, you would first have to know the IP address of Joe’s device and the IP address of Facebook. Of course this presents huge administrative problems because somebody would have to keep track of this information and the policy would have to be modified if Joe changed locations. Not to mention the huge number of policy rules that would have to be written for all the possible sites Joe is allowed to visit. No organization I have ever known would attempt to control Joe’s access to Facebook using stateful inspection technology.

Since the early 2000s, hundreds and hundreds of applications have been written, including Facebook and its subcomponents, that no longer obey the “rules” that were in place in the mid-90s when stateful inspection was invented. At that time, when a new application was built, it would be assigned a specific port number that only that application would use. For example, email transport agents using SMTP were assigned Port 25. Therefore the stateful inspection firewall policy implementer could safely control access to the email transport service by defining policies using Port 25.

At present, the usage of ports is totally chaotic and abused by malicious actors. Applications share ports. Applications hop from port to port looking for a way to bypass stateful inspection firewalls. Cyber predators use this weakness of stateful inspection for their gain and your loss. Of course the security industry understood this issue, and many new types of network security devices were invented and added to the network, as Stiennon acknowledges.

But, inspecting 100% of traffic to implement these advanced capabilities is extremely stressful to the appliance, all of them still use stateful inspection to keep track of those connections that have been denied. That way the traffic from those connections does not need to be inspected, it is just dropped, while approved connections can still be filtered by the enhanced capability of these Unified Threat Management (UTM) devices (sometimes called Next  Generation Firewalls (NGFW), a term coined by Palo Alto Networks).

The first claim, that inspecting 100% of traffic is extremely stressful to the appliance, is true when a manufacturer adds advanced capabilities like application identification to an existing appliance. Palo Alto Networks understood this and designed an appliance from the ground up specifically to implement these advanced functions under load with low latency.

Stiennon also casually lumps together the terms UTM and Next Generation Firewall as if they are synonymous. They are not. While it is true that Palo Alto Networks coined the term Next Generation Firewall, it only became an industry-defined term when Gartner published a research paper in October 2009 (ID Number G00171540) and applied a rigorous definition.

The key point is that a next generation firewall provides fully integrated Application Awareness and Intrusion Prevention with stateful inspection. Fully integrated means that (1) the application identification occurs in the firewall, which enables positive traffic control from the network layer up through the application layer, (2) all intrusion prevention is applied to the resulting allowed traffic, (3) all this is accomplished in a single pass to minimize latency, and (4) there is a unified interface for creating firewall policies. Running multiple inspection processes sequentially, controlled by independently defined policies, results in increased latency and excessive use of security management resources, and thus does not qualify as a Next Generation Firewall.

But PAN really has abandoned stateful inspection, at a tremendous cost to their ability to establish connections fast enough to address the needs of large enterprises and carriers.

This is simply false. Palo Alto Networks supports standard stateful inspection for two purposes. First, to ease the conversion process from a traditional stateful inspection firewall: most of our customers start by converting their existing stateful inspection firewall policy rules and then add the more advanced NGFW functions.

Second, the use of ports in policies can be very useful when combined with application identification. For example, you can build a policy that says (a) SMTP can run only on port 25 and (b) only SMTP can run on port 25. The first part (a) assures that if SMTP is detected on any of the other 65,534 ports it will be blocked. This means that no cyber predator can set up an email service on any of your non-SMTP servers. The second part (b) says that no other application besides SMTP can run on port 25. Therefore when you open a port for a specific application, you can assure it will be the only application running on that port. Palo Alto Networks can do this because its core functionality monitors all 65,535 ports for all applications all the time.
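
Expressed as a sketch, illustrative logic only and not vendor policy syntax, the two halves of that pairing look like this:

```python
# Sketch of the two-part pairing described above (not vendor policy syntax).
def violates_smtp_pairing(app: str, port: int) -> bool:
    if app == "smtp" and port != 25:
        return True    # (a) SMTP detected on any of the other 65,534 ports is blocked
    if port == 25 and app != "smtp":
        return True    # (b) nothing other than SMTP may use port 25
    return False

print(violates_smtp_pairing("smtp", 25))    # False - legitimate mail on its assigned port
print(violates_smtp_pairing("smtp", 8080))  # True  - a rogue mail service hiding on a web port
print(violates_smtp_pairing("ssh", 25))     # True  - another application masquerading on port 25
```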

Stiennon then goes on to quote Bob Walder of NSS Labs and interprets his statement as follows:

In other words, an enterprise deploying PAN’s NGFW is getting full content inspection all the time with no ability to turn it off. That makes the device performance unacceptable as a drop-in replacement for Juniper, Cisco, Check Point, or Fortinet firewalls.

This statement has no basis in any facts that I am aware of. Palo Alto firewalls are used all the time to replace the above-mentioned companies’ firewalls. Palo Alto has over 6,500 customers! Does full packet inspection take more resources than simple stateful inspection? Of course. But that misses the point. As I said above, stateful inspection is completely useless at providing an organization a Positive Enforcement Model, which after all is the sine qua non of a firewall. By Positive Enforcement Model, I mean the ability to define what is allowed and block everything else. This is also described as “default deny.”

Furthermore, based on my experience, in a bake-off situation where the criteria are a combination of real-world traffic, real-world security policy requirements designed to mitigate defined high risks, and total cost of ownership, Palo Alto Networks will always win. I’ll go a step further and say that in today’s world there is simply no significant risk mitigation value in traditional stateful inspection.

It’s the application awareness feature. This is where PAN’s R&D spending is going. All the other features made possible by their hardware acceleration and content inspection ability are supported by third parties who provide malware signatures and URL databases of malicious websites and categorization of websites by type. 

This is totally wrong. In fact, the URL filtering database and the endpoint-checking host software in GlobalProtect (explained further on) are the only third party components Palo Alto Networks uses that I am aware of. PAN built a completely new firewall engine capable of performing stateful inspection (for backward compatibility and for the highly granular policies described above), application control, anti-virus, anti-spyware, anti-malware, and URL Filtering in a single pass. PAN writes all of its own malware signatures and, of course, participates in security intelligence sharing arrangements with other companies.

Palo Alto Networks has further innovated with (1) Wildfire which provides the ability to analyze executables being downloaded from the Internet to detect zero-day attacks, and (2) GlobalProtect which enables remote and mobile users to stay under the control and protection of PAN NGFWs.

While anecdotal, the reports I get from enterprise IT professionals are that PAN is being deployed behindexisting (sic) firewalls. If that is the general case PAN is not the Next Generation Firewall, it is a stand alone technology that provides visibility into application usage.  Is that new? Not really. Flow monitoring technology has been available for over a decade from companies like Lancope and Arbor Networks that provides this visibility at a high level. Application fingerprinting was invented by SourceFire and is the basis of their RNA product.

Wow. Let me try to deconstruct this. First, it is true that some companies start by putting Palo Alto Networks behind existing firewalls. Why not? I see this as an advantage for PAN, as it gives organizations the ability to leverage PAN’s value without waiting until it’s time to do a firewall refresh. Also, PAN can replace a proxy to improve content filtering; I’ll save the proxy discussion for another time. I am surely not privy to PAN’s complete breakdown of installation architectures, but “anecdotally” I would say most organizations are doing straight firewall replacements.

Much more importantly, the idea of doing application identification in an IPS or in a flow product totally misses the point. Palo Alto Networks ships the only firewall that does it to enable positive control (default deny) from the network layer up through the application layer. I am surely not saying that there is no value in adding application awareness to IPSs or flow products. There is. But IPSs use a negative control model, i.e. define what should be blocked and allow everything else. Firewalls are supposed to provide attack surface reduction and cannot unless they are able to exert positive control.

While I will agree that application identification and the ability to enforce policies that control what applications can be used within the enterprise is important I contend that application awareness is ultimately a feature that belongs in a UTM appliance or stand alone device behind the firewall. Like other UTM features it must be disabled for high connection rate environments such as large corporate gateways, data centers, and within carrier networks.

This may be Stiennon’s opinion, but I would ask: what meaningful risks, besides not meeting the requirements of a compliance regime, does a stateful inspection firewall mitigate, considering the ease with which attackers can bypass it? I have nothing against compliance requirements per se, but our focus is on information security risk mitigation.

In the three months ending Jan. 31 2012 PAN’s revenue is off from the previous quarter. The fourth quarter is usually the best quarter for technology vendors. There may be some extraordinary situation that accounts for that, but it is not evident in the S-1

There is no denying that year-over-year PAN has been on a tear, almost doubling its revenue from Q4 2010 to Q4 2011. But the glaring fact is that PAN’s revenue growth has completely stalled out in what was a great quarter for the industry.

Perhaps my commenting on these last paragraphs does not belong in this blog post as they are not technical in nature, but IMHO Stiennon is wrong again. Stiennon glosses over the excellent quarter that preceded the last one where PAN grew its revenue from $40.22 million to $57.11 million. Thus the last quarter’s $56.68 million looks to Stiennon like a stall with no explanation. However, here is the exact quote from the S-1 explaining what happened, “For the three month period ended October 31, 2011, the increase in product revenue was driven by strong performance in our federal business, as a result of improved productivity from our expanded U.S. government sales force and increased U.S. government spending at the end of its September 30 fiscal year.” My translation from investment banker/lawyer speak to English is that PAN did so well with the Federal government that quarter that the following quarter suffered by comparison. I could be wrong.

In closing, let me say I fully understand that there is no single silver bullet in security. Our approach is about balancing resources among Prevention, Detection, and Incident Response controls. There is never enough budget to implement every technical control that mitigates some risk. The exercise is to prioritize the selection of controls within budget constraints to provide the maximum information security risk reduction based on an organization’s understanding of its risks. While these priorities vary widely among organizations, I can confidently say that based on my experience, Palo Alto Networks provides the best network-based, Prevention Control, risk mitigation available today. Its, yes, revolutionary technology is well worth investing time to understand.

04. March 2012 · Comments Off on Anonymous, Decentralized and Uncensored File-Sharing is Booming | TorrentFreak · Categories: blog

Anonymous, Decentralized and Uncensored File-Sharing is Booming | TorrentFreak.

Despite efforts to curb file-sharing, it’s booming. New file-sharing apps have been developed that are harder for enterprises to control.

The file-sharing landscape is slowly adjusting in response to the continued push for more anti-piracy tools, the final Pirate Bay verdict, and the raids and arrests in the Megaupload case. Faced with uncertainty and drastic changes at file-sharing sites, many users are searching for secure, private and uncensored file-sharing clients. Despite the image its name suggests, RetroShare is one such future-proof client.

If your Next Generation Firewall uses a Positive Control Model and monitors all 65,535 ports all the time, you do not have to worry about these new file-sharing products because they will be blocked as unknown applications. Of course, before you go into production, you must investigate all of the unknown apps to assure that all business-required apps are identified, defined, and allowed by policy.

23. February 2012 · Comments Off on Black Cat, White Cat | InfoSec aXioms · Categories: blog

Ofer Shezaf highlights one of the fundamental ways of categorizing security tools in his post Black Cat, White Cat | InfoSec aXioms.

Black listing, sometimes called negative security or “open by default”, focuses on catching the bad guys by detecting attacks.  Security controls such as Intrusion Prevention Systems and Anti-Virus software use various methods to do so. The most common method to detect attacks matching signatures against network traffic or files. Other methods include rules which detect conditions that cannot be expressed in a pattern and abnormal behavior detection.

 White listing on the other hand allows only known good activity. Other terms associated with the concept are positive security and “closed by default” and policy enforcement. White listing is commonly embedded in systems and the obvious example is the authentication and authorization mechanism found in virtually every information system. Dedicated security controls which use white listing either ensures the build-in policy enforcement is used correctly or provide a second enforcement layer. The former include configuration and vulnerability assessment tools while the latter include firewalls.

Unfortunately, when manufacturers apply the term “Next Generation” to firewalls, they may be misleading the marketplace. As Ofer says, a firewall, by definition, performs white listing, i.e. policy enforcement. One of the key functions of a NGFW is the ability to white list applications. This means the applications that are allowed must be defined in the firewall policy. On the other hand, if you are defining applications that are to be blocked, that’s black listing, and not a firewall.

Note that Next Generation Firewalls also perform Intrusion Prevention, which is a black listing function. So clearly, NGFWs perform both white listing and black listing functions. But to truly earn the right to be called a “Next Generation” Firewall, a network security appliance must enable application white listing. Adding “application awareness” as a blacklist function is nice, but it does not make a NGFW. For more information, I have written about Next Generation Firewalls and the difference between UTMs and NGFWs.

19. February 2012 · Comments Off on Stiennon’s confusion between UTM and Next Generation Firewall · Categories: blog

Richard Stiennon has published a blog post about Netasq, a European UTM vendor, called A brief history of firewalls and the rise of the UTM. I found the post indirectly from Alan Shimmel’s post about it.

Stiennon seems to think that Next Generation Firewalls are just a type of UTM. Shimmel also seems to go along with Stiennon’s view. Stiennon gives credit to IDC for defining the term UTM, but has not acknowledged Gartner’s work in defining the term Next Generation Firewall.

My purpose here is not to get into a debate about terms like UTM and NGFW. The real question is which network security device provides the best network security “prevention” control. The reality is that marketing people have so abused the terms UTM and NGFW that you cannot depend on either term to mean anything. My remarks here are based on Gartner’s definition of Next Generation Firewall, which they published in October 2009.

All the UTMs I am aware of, whether software-based or with hardware assist, use port-based (stateful inspection) firewall technology. They may do a lot of other things like IPS, URL filtering and some DLP, but these UTMs have not really advanced the state (pardon the pun) of “firewall” technology. These UTMs do not enable a positive control model (default-deny) from the network layer up through the application layer. They depend on the negative control model of their IPS and application modules/blades.

Next Generation Firewalls, on the other hand, as defined by Gartner’s 2009 research report, enable positive network traffic control policies from the network layer up through the application layer. Therefore true NGFWs are something totally new and were developed in response to the changes in the way applications are now written. In the early days of TCP/IP, port-based firewalls worked well because each new application ran on its assigned port. For example, SMTP on port 25. In the 90s, you could be sure that traffic that ran on port 25 was SMTP and that SMTP would run only on port 25.

About ten years ago applications began using port-hopping, encryption, tunneling, and a variety of other techniques to circumvent port-based firewalls. In fact, we have now reached the point where port-based firewalls are pretty much useless at controlling traffic between networks of different trust levels. UTM vendors responded by adding application identification functionality using their intrusion detection/prevention engines. This is surely better than nothing, but IPS engines use a negative enforcement model, i.e. default allow, and only monitor a limited number of ports. A true NGFW monitors all 65,535 ports for all applications at all times.

In closing, there is no doubt about the value of a network security “prevention” control performing multiple functions. The real question is, does the device you are evaluating fulfill its primary function of reducing the organization’s attack surface by (1) enabling positive control policies from the network layer through the application layer, and (2) doing it across all 65,535 ports all the time?

18. December 2011 · Comments Off on Gartner December 2011 Firewall Magic Quadrant Comments · Categories: blog

Gartner just released its 2011 Enterprise Firewall Magic Quadrant, 21 months after the last one and just days before Christmas. Via distribution from one of the firewall manufacturers, I received a copy today. Here are the key highlights:

  • Palo Alto Networks moved up from the Visionary to Leader quadrant
  • Juniper slid back from the Leader to the Challenger quadrant
  • Cisco remained in the Challenger quadrant
  • There are no manufacturers in the Visionary quadrant

In fact, there are only two manufacturers in the Leader quadrant – the aforementioned Palo Alto Networks and Check Point. And these two manufacturers are the only ones to the right of center!!

Given Gartner’s strong belief in the value of Next Generation Firewalls, one might conclude that both of these companies actually meet the criteria of Gartner’s 2009 research paper outlining the features of a NGFW. Unfortunately that is not the case today. Check Point’s latest generally available release simply does not meet Gartner’s NGFW requirements.

So the question is, why did Gartner include them in the Leader quadrant? The only explanation I can think of is that Check Point’s next release meets Gartner’s NGFW criteria. Gartner alludes to Project Gaia, which is in beta at a few sites, but says only that it is a blending of Check Point’s three different operating systems. So let’s follow through on this thought experiment. First, this would mean that none of the other vendors will meet Gartner’s NGFW criteria in their next release. If any of them did, why wouldn’t they too be placed to the right of center?

Before I go on, let’s review what a NGFW is. Let’s start with a basic definition of a firewall – a network security device that enables you to define a “Positive Control Model” for what traffic is allowed to pass between two network segments of different trust levels. By Positive Control Model I mean you define what is allowed and deny everything else. Another term for this is “default deny.”

Traditional stateful firewalls enable this Positive Control Model at the port and protocol levels. NGFWs do this also but, most importantly, do it at the application level. In fact, an NGFW enables policies that combine port, protocol, and application (and more). Stateful inspection firewalls have no ability to control applications sharing open ports. Some have added application identification and blocking to their IPS modules, but this is a negative enforcement model. In other words, block what I tell you to block and allow everything else. Some have called this the “Whack-A-Mole” approach to application control.

In order then to qualify as a NGFW, the core traffic analysis engine has to be built from the ground up to perform deep packet inspection and application detection at the beginning of the analysis/decision process to allow or deny the session. Since that was Palo Alto Networks’ vision when they were founded in 2005, that’s what they did. All the other firewall manufacturers have to start from scratch and build an entirely new platform.

So let’s pick up where I left off three paragraphs ago, i.e. the only traditional stateful inspection firewall manufacturer that might have a technically true NGFW coming in its next release is Check Point. Since Palo Alto Networks shipped its first NGFW in mid-2007, this would mean that Check Point is, at best, four and a half years, four major releases, and six thousand customers behind Palo Alto Networks.

On the other hand, if Check Point is in the Leader quadrant because it’s Palo Alto Networks’ toughest competitor, then Palo Alto Networks is in an even better position in the firewall market.

26. October 2011 · Comments Off on Australia DSD’s Top Four Security Strategies · Categories: blog

The SANS Institute has endorsed the Australian Defence Signals Directorate’s (DSD) top four strategies for mitigating information security risk:

  1. Patching applications and using the latest version of an application
  2. Patching operating systems
  3. Keeping admin rights under strict control (and forbidding the use of administrative accounts for email and browsing)
  4. Whitelisting applications

While there is nothing new with these four strategies, I would like to discuss #4. The Australian DSD Strategies to Mitigate Targeted Cyber Intrusions defines Application Whitelisting as preventing unapproved programs from running on PCs. I recommend extending whitelisting to the network. In other words, define which applications are allowed on the network by user group, both internally and Web-based, and deny all others.

My recommendation is not really a new idea either. After all, that’s what firewalls are supposed to do. The issue is that the traditional stateful inspection firewall does it using port numbers and IP addresses. For at least the last five years, applications and users have routinely bypassed these firewalls by using applications that share open ports.

This is why, in October 2009, Gartner started talking about “Next Generation Firewalls,” which enable you to implement whitelisting on the network at Layer 7 (Application) as well as down the stack to Layers 4 and 3. In other words, they extend the traditional “Positive Control Model” firewall functionality up through the Application Layer. (If you have not seen that Gartner research report, please contact me and I will arrange for you to receive a copy.)