20. January 2014 · Comments Off on How Palo Alto Networks could have prevented the Target breach · Categories: blog

Brian Krebs’ recent posts on the Target breach, A First Look at the Target Intrusion, Malware, and A Closer Look at the Target Malware, provide the most detailed and accurate analysis available.

The malware the attackers used captured complete credit card data contained on the mag stripe by “memory scraping.”

This type of malicious software uses a technique that parses data stored briefly in the memory banks of specific POS devices; in doing so, the malware captures the data stored on the card’s magnetic stripe in the instant after it has been swiped at the terminal and is still in the system’s memory. Armed with this information, thieves can create cloned copies of the cards and use them to shop in stores for high-priced merchandise. Earlier this month, U.S. Cert issued a detailed analysis of several common memory scraping malware variants.

Furthermore, no known antivirus software at the time could detect this malware.

The source close to the Target investigation said that at the time this POS malware was installed in Target’s environment (sometime prior to Nov. 27, 2013), none of the 40-plus commercial antivirus tools used to scan malware at virustotal.com flagged the POS malware (or any related hacking tools that were used in the intrusion) as malicious. “They were customized to avoid detection and for use in specific environments,” the source said.

The key point I want to discuss, however, is that the attackers took control of an internal Target server and used it to collect and store the stolen credit card information from the POS terminals.

Somehow, the attackers were able to upload the malicious POS software to store point-of-sale machines, and then set up a control server within Target’s internal network that served as a central repository for data hoovered by all of the infected point-of-sale devices.

“The bad guys were logging in remotely to that [control server], and apparently had persistent access to it,” a source close to the investigation told KrebsOnSecurity. “They basically had to keep going in and manually collecting the dumps.”

First, obviously the POS terminals have to communicate with specific Target servers to complete and store transactions. Second, the communications between the POS terminals and the malware on the compromised server(s) could have been denied had there been policies defined and enforced to do so. Palo Alto Networks’ Next Generation Firewalls are ideal for this use case for the following two reasons:

  1. Palo Alto Networks enables you to include zone, IP address, port, user, protocol, application information, and more in a single policy.
  2. Palo Alto Networks firewalls monitor all ports for all protocols and applications, all of the time, to enforce these policies and establish a Positive Control Model (default deny, i.e., application traffic whitelisting), as sketched conceptually below.
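
Here is a minimal sketch in Python of how such a default-deny policy evaluates traffic on more than just port and IP address. It is not Palo Alto Networks' policy syntax; the zone names, service account, application label, and port are hypothetical.

```python
# Sketch of a Positive Control Model (default deny) for POS traffic.
# This is NOT Palo Alto Networks policy syntax; the zone, user, application,
# and port values are hypothetical.
from dataclasses import dataclass

@dataclass
class Session:
    src_zone: str
    dst_zone: str
    user: str
    application: str   # identified from content, not inferred from the port
    dst_port: int

# Every field must match for traffic to be allowed.
ALLOW_RULES = [
    {"src_zone": "pos-zone", "dst_zone": "payment-servers",
     "user": "pos-service-account", "application": "payment-app", "dst_port": 5858},
]

def evaluate(s: Session) -> str:
    for r in ALLOW_RULES:
        if (s.src_zone == r["src_zone"] and s.dst_zone == r["dst_zone"]
                and s.user == r["user"] and s.application == r["application"]
                and s.dst_port == r["dst_port"]):
            return "allow"
    return "deny"   # default deny: anything not explicitly allowed is blocked

# The attackers' collection traffic does not match any allow rule:
print(evaluate(Session("pos-zone", "internal-servers", "pos-service-account",
                       "custom-exfil", 443)))   # -> deny
```

Under a policy like this, the malware's attempts to reach the attackers' collection server fall through to the default deny rule, even over a commonly open port.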

You might very well ask, why couldn’t Router Access Control Lists be used? Or why not a traditional port-based, stateful inspection firewall? Because these types of network controls limit policy definition to ports, IP addresses, and protocols, which cannot enforce a Positive Control Model. They are simply not detailed enough to control traffic with a high degree of confidence. One or the other might have worked in the 1990s. But by the mid-2000s, network-based applications were regularly bypassing both of these types of controls.

Therefore, if Target had deployed Palo Alto Networks firewalls between the POS terminals and their servers with granular policies to control POS terminals’ communications by zone, port, and application, the malware on the POS terminals would never have been able to communicate with the server(s) the attackers compromised.

In addition, it’s possible that the POS terminals may never have become infected in the first place, because the server(s) the attackers initially compromised would not have been able to communicate with the POS terminals. Note, I am not assuming that the servers used to compromise the POS terminals were the same servers used to collect the credit card data that was breached.

Unfortunately, a control with the capabilities of Palo Alto Networks is not specified by the Payment Card Industry (PCI) Data Security Standard (DSS). Yes, “Requirement #1: Install and maintain a firewall configuration to protect cardholder data,” seems to cover the subject. However, you can fully meet these PCI DSS requirements with a port-based, stateful inspection firewall. But, as I said above, an attacker can easily bypass this 1990s type of network control. Retailers and e-Commerce sites need to go beyond PCI DSS to actually protect themselves. You need a Next Generation Firewall like Palo Alto Networks’, which enables you to define and enforce a Positive Control Model.

06. January 2014 · Comments Off on Two views on FireEye’s Mandiant acquisition · Categories: blog

There are two views on the significance of FireEye’s acquisition of Mandiant. One is the consensus typified by Arik Hesseldahl, Why FireEye is the Internet’s New Security Powerhouse. Arik sees the synergy of FireEye’s network-based appliances coupled with Mandiant’s endpoint agents.

Richard Stiennon has a different view, Will FireEye’s Acquisition Strategy Work? Richard believes that FireEye’s stock price is way overvalued compared to more established players like Check Point and Palo Alto Networks. While FireEye initially led the market with network-based “sandboxing” technology to detect unknown threats, most of the major security vendors have matched or even exceeded FireEye’s capabilities. IMHO, you should not even consider any network-based security manufacturer that doesn’t provide integrated sandboxing technology to detect unknown threats. Therefore the only way FireEye can meet Wall Street’s revenue expectations is via acquisitions using its inflated stock.

The best strategy for a high-flying public company whose products do not have staying power is to embark on an acquisition spree that juices revenue. In those terms, trading overvalued stock for Mandiant, with estimated 2013 revenue of $150 million, will easily satisfy Wall Street’s demand for continued growth to sustain valuations. FireEye has already locked in 100% growth for 2014.

It will probably take a couple of years to determine who is correct.

04. April 2013 · Comments Off on The Real Value of a Positive Control Model · Categories: blog

During the last several years I’ve written a lot about the fact that Palo Alto Networks enables you to re-establish a network-based Positive Control Model from the network layer up through the application layer. But I never spent much time on why it’s important.

Today, I will reference a blog post by Jack Whitsitt, Avoiding Strategic Cyber Security Loss and the Unacceptable Offensive Advantage (Post 2/2), to help explain the value of implementing a Positive Control Model.

TL;DR: All information breaches result from human error. The human error rate per unit of information technology is fairly constant. However, because IT is always expanding (more applications and more functions per application), the actual number of human errors resulting in Vulnerabilities (used in the most general sense of the word) per time period is always increasing. Unfortunately, the information security team has limited resources (Defensive Capability) and cannot cope with the users’ ever increasing number of errors. This has created an ever growing “Offensive Advantage (Vulnerabilities – Defensive Capability).”  However, implementing a Positive Control Model to influence/control human behavior will reduce the number of user errors per time interval, which will reduce the Offensive Advantage to a manageable size.
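
To make the rate argument concrete, here is a back-of-the-envelope model in Python. Every number is invented for illustration; none come from Jack's post.

```python
# Toy model of the "Offensive Advantage" argument. All numbers are invented
# for illustration; they are not figures from Jack Whitsitt's post.
years = range(5)

it_units = [100 * 1.25 ** t for t in years]           # IT footprint keeps expanding
error_rate = 0.4                                       # human errors per IT unit per year (held constant)
vulnerabilities = [u * error_rate for u in it_units]   # so vulnerabilities grow with IT
defense = [40 + 3 * t for t in years]                  # defensive capability grows only linearly

for t in years:
    gap = vulnerabilities[t] - defense[t]
    print(f"year {t}: vulnerabilities={vulnerabilities[t]:.0f} "
          f"defense={defense[t]} offensive advantage={gap:.0f}")

# "Bending the curve": a Positive Control Model that halves the error rate
# shrinks the gap even though IT keeps growing.
bent = [u * (error_rate / 2) for u in it_units]
print([round(v - d) for v, d in zip(bent, defense)])
```

The specific values don't matter; the point is the shape: with a constant error rate and an expanding IT footprint, the gap grows every year, and only reducing the error rate itself bends the curve.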

On the network side Palo Alto Networks’ Next Generation Firewall monitors and controls traffic by user and application across all 65,535 TCP and UDP ports, all of the time, at specified speeds. Granular policies based on any combination of application, user, security zone, IP address, port, URL, and/or Threat Protection profiles are created with a single unified interface that enables the infosec team to respond quickly to new business requirements.

On the endpoint side, Trusteer provides a behavioral type of whitelisting that prevents device compromise and confidential data exfiltration. It requires little to no administrative configuration effort. Thousands of agents can be deployed in days. When implemented on already deployed Windows and Mac devices, Trusteer will detect compromised devices that traditional signature-based anti-virus products miss.

Let’s start with Jack’s basic truths about the relationships between technology, people’s behavior, and infosec resources. Cyber security is a problem that occurs over unbounded time. So it’s a rate problem driven by the ever increasing number of human errors per unit of time. While the number of human errors per unit of time per “unit of information technology” is steady, complexity, in the form of new applications and added functions to existing applications, is constantly increasing. Therefore the number of human errors per unit of time is constantly increasing.

Unfortunately, information security resources (technical and administrative controls) are limited. Therefore the organization’s Defense Capability cannot keep up with the increasing number of Vulnerabilities. Since the number of human errors increases at a faster rate than the resource-limited Defense Capability, an Unacceptable Offensive Advantage is created. Here is a diagram that shows this.

[Figure: offensiveadvantage1]

What’s even worse, most Defensive controls cannot significantly shrink the gap between the Vulnerability curve and the Defense curve because they do not bend the vulnerability curve, as this graph shows.

[Figure: offensiveadvantage2]

So the only real hope of reducing organizational cyber security risk, i.e. the adversaries’ Offensive Advantage, is to bend the Vulnerability curve, as this graph shows.

[Figure: offensiveadvantage3]

Once you do that, you can apply additional controls to further shrink the gap between Vulnerability and Defense curves as this graph shows.

[Figure: offensiveadvantage4]

The question is how to do this. Perhaps Security Awareness Training can have some impact.

I recommend implementing network and host-based technical controls that can establish a Positive Control Model. In other words, only by defining what people are allowed to do and denying everything else can you actually bend the Vulnerability curve, i.e. reduce human errors, both unintentional and intentional.

Implementing a Positive Control Model does not happen instantly, i.e. it’s also a rate problem. But if you don’t have the technical controls in place, no amount of process is going to improve the organization’s security posture.

This is why firewalls are such a critical network technical control. They are placed at critical choke points in the network, between subnets of different trust levels, with the express purpose of implementing a Positive Control Model.

Firewalls first became popular in the mid-1990s. At that time, when a new application was built, it was assigned a port number. For example, the mail protocol, SMTP, was assigned port 25, and the HTTP protocol was assigned port 80. In that environment, (1) protocol and application meant the same thing, and (2) all applications “behaved,” i.e. they ran only on their assigned ports. So all a firewall had to do was use port numbers (and IP addresses) to control traffic. Hence the popularity of port-based stateful inspection firewalls.

Unfortunately, starting in the early 2000s, developers began writing applications to bypass the port-based stateful inspection firewall in order to get their applications deployed quickly, without waiting for security teams to change policies. Different applications were also developed to share a port such as port 80, because it was always open to give people access to the Internet. Other techniques, like port-hopping and encryption, were also used to bypass the port-based, stateful inspection firewall.

Security teams started deploying additional network security controls like URL Filtering to complement firewalls. This increase in complexity created new problems such as (1) policy coordination between URL Filtering and the firewalls, (2) performance issues, and (3) broken applications, since URL Filtering products were mostly proxy-based and would break some of the newer applications, frustrating users trying to do their jobs.

By 2005 it was obvious to some people that application technology had made port-based firewalls and their helpers obsolete. A completely new approach to firewall architecture was needed, one that (1) classified traffic by application first, regardless of port, and (2) was backward compatible with port-based firewalls to ease the conversion process. This is exactly what the Palo Alto Networks team did, releasing their first “Next Generation” Firewall in 2007.

Palo Alto Networks classifies traffic by application at the beginning of the policy process. It monitors all 65,535 TCP and UDP ports for all applications, all of the time, at specified speeds. This enables organizations to re-establish the Positive Control Model, which bends the “Vulnerability” curve and allows an infosec team with limited resources to reduce what Jack Whitsitt calls the adversaries’ “Offensive Advantage.”

On the endpoint side, Trusteer provides a type of Positive Control Model / whitelisting whereby highly targeted applications like browsers, Java, Adobe Flash, PDF, and Microsoft Office applications are automatically protected behaviorally. The Trusteer agent understands the memory state – file I/O relationship to the degree that it knows the difference between good I/O and malicious I/O behavior. Trusteer then blocks the malicious I/O before any damage can be done.

Thus human errors resulting from social engineering, such as clicking on links to malicious web pages or opening documents containing malicious code, are automatically blocked. This requires no policy configuration effort on the part of the infosec team; Trusteer updates the policies periodically. Furthermore, thousands of agents can be deployed in days. Finally, when deployed to existing Windows and Mac endpoints, it will detect already compromised devices.

Trusteer, founded in 2006, has over 40 million agents deployed across the banking industry to protect online banking users. So their agent technology has been battle tested.

In closing, only by implementing technical controls that establish a Positive Control Model to reduce human errors can an organization bend the Vulnerability curve sufficiently to reduce the adversaries’ Offensive Advantage to an acceptable level.

25. February 2013 · Comments Off on Surprising Application-Threat Analysis from Palo Alto Networks · Categories: blog

This past week, Palo Alto Networks released its H2/2012 Application Usage and Threat Report. Actually, it’s the first time Palo Alto has integrated Application Usage and Threat Analysis. Previous reports were focused only on Application Risk. This report analyzed 12.6 petabytes of data from 3,056 networks, covering 1,395 applications. 5,307 unique threats were identified from 268 million threat logs.

Here are the four most interesting items I noted:

1. Of the 1,395 applications found, 10 were responsible for 97% of all Exploit* logs. One of these was web-browsing. This is to be expected. However, the other nine were internal applications representing 82% of the Exploit* logs!!

This proves once again that perimeter traffic security monitoring is not adequate. Internal network segmentation and threat monitoring are required.

2. Custom or Unknown UDP traffic represented only 2% of all the bandwidth analyzed, yet it accounted for 55% of the Malware* logs!!

This clearly shows the importance of minimizing unidentified application traffic. Therefore the ratio of unidentified to identified traffic is a key security performance indicator and ought to trend down over time.
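
As a sketch of how that indicator might be tracked, the snippet below computes the share of unknown traffic from an exported traffic log. The CSV layout, column names, and the "unknown-tcp"/"unknown-udp" labels are assumptions for illustration, not an actual Palo Alto Networks export format.

```python
# Sketch: track the unknown-to-identified traffic ratio over time.
# The log layout ("app", "bytes" columns) is a hypothetical CSV export.
import csv
from collections import defaultdict

def unknown_ratio(log_path: str) -> float:
    bytes_by_app = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            bytes_by_app[row["app"]] += int(row["bytes"])
    unknown = sum(b for app, b in bytes_by_app.items()
                  if app in ("unknown-tcp", "unknown-udp"))
    total = sum(bytes_by_app.values())
    return unknown / total if total else 0.0

# Trend this weekly; the ratio should move toward zero as unknown traffic
# is identified, sanctioned as a custom application, or blocked.
# print(f"{unknown_ratio('traffic_week42.csv'):.1%}")
```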

3. DNS traffic accounted for only 0.4% of total bytes but 25.4% of sessions, and ranked third for Malware* logs at 13%.

No doubt most, if not all, of this represents malicious Command & Control traffic. If you are not actively monitoring and analyzing DNS traffic, you are missing a key method of detecting compromised devices in your network.

4. 85 of the 356 applications that use SSL never use port 443.

If your firewall is not monitoring all ports for all applications all of the time, you are simply not getting complete visibility and cannot re-establish a Positive Control Model.

*If you are not familiar with Palo Alto Networks’ Threat Protection function, “Exploit” and “Malware” are the two main categories of “Threat” logs. There is a table at the top of page 4 of this AUT report that summarizes the categories and sub-categories of the 268 million Threat Logs captured and analyzed. The “Exploit” logs refer to matches against vulnerability signatures which are typical of Intrusion Prevention Systems. The “Malware” logs are for Anti-Virus and Anti-Spyware signature matches.

What is not covered in this report is Palo Alto’s cloud-based Wildfire zero-day analysis service, which analyzes files not seen before to determine whether they are benign or malicious. If malicious behavior is found, signatures of the appropriate types are generated in less than one hour and update Threat Protection. In addition, the appropriate IP addresses and URLs are added to their respective blacklists.

This report is well worth reading.

29. July 2012 · Comments Off on Speaking of Next Gen Firewalls – Forbes · Categories: blog

I would like to respond to Richard Stiennon’s Forbes article, Speaking of Next Gen Firewalls. Richard starts off his article as follows:

“As near as I can tell the salient feature of Palo Alto Networks’ products that sets them apart is application awareness. … In my opinion application awareness is just an extension of URL content filtering.”

First, let me start my comment by saying that application awareness, out of context, is almost meaningless. Second, I view technical controls from a risk management perspective, i.e. I judge the value of a proposed technical control by the risks it can mitigate.

Third, the purpose of a firewall is to establish a positive control model, i.e. limit traffic into and out of a defined network to what is allowed and block everything else. The reason everyone is focused on application awareness is that traditional stateful inspection firewalls are port-based and cannot control modern applications that do not adhere to the network layer port model and conventions established when the Internet protocols were first designed in the 1970s.

The reason Palo Alto Networks is so popular is that it extends firewall functionality from the network layer up through the application layer in a single unified policy view. This is unlike most application awareness solutions which, as Richard says, are just extensions of URL filtering, because they are based on proxy technology.

For those more technically inclined, URL Filtering solutions are generally based on proxy technology and therefore only monitor a small set of ports, including 80 and 443. However, Palo Alto Networks monitors all 65,535 TCP and UDP ports at specified speeds, all the time, from the network layer up through the application layer. If you doubt this, try it yourself. It’s easy. Simply run a standard application on a non-standard port and see what the logs show.
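
If you want to run that test yourself, a minimal way to generate such traffic is to serve plain HTTP on a non-standard port from a lab host behind the firewall; the port number below is an arbitrary choice.

```python
# Serve plain HTTP on a non-standard port (8081 is an arbitrary choice).
# A port-based firewall sees only "tcp/8081"; an application-aware firewall
# should still log the traffic as web browsing / HTTP.
from http.server import HTTPServer, SimpleHTTPRequestHandler

if __name__ == "__main__":
    server = HTTPServer(("0.0.0.0", 8081), SimpleHTTPRequestHandler)
    print("Serving HTTP on port 8081; browse to it from another host "
          "and compare what the firewall logs show.")
    server.serve_forever()
```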

Furthermore, Palo Alto provides a single policy view that includes user, application, zone, URL filtering, and threat prevention columns in addition to the traditional five-tuple – source IP, destination IP, source port, destination port, and service.

To the best of my knowledge, Palo Alto Networks is the only firewall, whether called a Next Generation Firewall or a UTM, that has this set of features. Therefore, from a risk management perspective, Palo Alto Networks is the only firewall that can establish a positive enforcement model from the network layer up through the application layer.

20. April 2012 · Comments Off on A response to Stiennon’s analysis of Palo Alto Networks · Categories: blog

I was dismayed to read Richard Stiennon’s article in Forbes, Tearing away the veil of hype from Palo Alto Networks’ IPO. I will say my knowledge of network security and experience with Palo Alto Networks appear to be very different from Stiennon’s.

Full disclosure, my company has been a Palo Alto Networks partner for about four years. I noticed on Stiennon’s LinkedIn biography that he worked for one of PAN’s competitors, Fortinet. I don’t own any of the stocks individually mentioned in Stiennon’s article, although from time to time, I own mutual funds that might. Finally, I am planning on buying PAN stock when they go public.

Let me first summarize my key concerns and then I will go into more detail:

  • Stiennon overstates and IMHO misleads the reader about the functionality of stateful inspection firewall technology. While he seems to place value in it, he fails to mention what security risks it can actually mitigate in today’s environment.
  • He does not seem to understand the difference between UTMs and Next Generation Firewalls (NGFW). UTMs combine multiple functions on an appliance with each function processed independently and sequentially, and each managed with a separate user interface. NGFWs integrate multiple functions which share information, execute in parallel, and are managed with a unified interface. These differences result in dramatically different risk mitigation capabilities.
  • He does not seem to understand Palo Alto Networks’ unique ability to reduce attack surfaces by enabling a positive control model (default deny) from the network layer up through the application layer.
  • He seems to have missed the fact that Palo Alto Networks NGFWs were designed from the ground up to deliver Next Generation Firewall capabilities, while other manufacturers have simply added features to their stateful inspection firewalls.
  • He erroneously states that Palo Alto Networks does not have stateful inspection capabilities. It does, and it is backward compatible with traditional stateful inspection firewalls to ease conversions.
  • He claims that Palo Alto Networks uses a lot of third-party components when in fact there are only two that I am aware of. And he completely ignores several of Palo Alto Networks’ latest innovations, including Wildfire and GlobalProtect.
  • He missed the reason why Palo Alto Networks’ Jan 2012 quarter revenue was slightly lower than its Oct 2011 quarter, which was clearly stated in the S-1.

Here are my detailed comments.

Stateful inspection is a core functionality of firewalls introduced by Check Point Software over 15 years ago. It allows an inline gateway device to quickly determine, based on a set policy, if a particular connection is allowed or denied. Can someone in accounting connect to Facebook? Yes or no.

The bolded sentence is misleading and wrong in the context of stateful inspection. Stateful inspection has nothing to do with concepts like who is in accounting or whether the session is attempting to connect to Facebook. Stateful Inspection is purely a Layer 3/Layer 4 technology and defines security policies based on Source IP, Destination IP, Source Port, Destination Port, and network protocol, i.e. UDP or TCP.

If you wanted to implement a stateful inspection firewall policy that says Joe in accounting cannot connect to Facebook, you would first have to know the IP address of Joe’s device and the IP address of Facebook. Of course this presents huge administrative problems because somebody would have to keep track of this information and the policy would have to be modified if Joe changed locations. Not to mention the huge number of policy rules that would have to be written for all the possible sites Joe is allowed to visit. No organization I have ever known would attempt to control Joe’s access to Facebook using stateful inspection technology.

Since the early 2000s, hundreds and hundreds of applications have been written, including Facebook and its subcomponents, that no longer obey the “rules” that were in place in the mid-90s when stateful inspection was invented. At that time, when a new application was built, it would be assigned a specific port number that only that application would use. For example, email transport agents using SMTP were assigned Port 25. Therefore the stateful inspection firewall policy implementer could safely control access to the email transport service by defining policies using Port 25.

At present, the usage of ports is totally chaotic and abused by malicious actors. Applications share ports. Applications hop from port to port looking for a way to bypass stateful inspection firewalls. Cyber predators use this weakness of stateful inspection for their gain and your loss. Of course the security industry understood this issue, and many new types of network security devices were invented and added to the network, as Stiennon acknowledges.

But, inspecting 100% of traffic to implement these advanced capabilities is extremely stressful to the appliance, all of them still use stateful inspection to keep track of those connections that have been denied. That way the traffic from those connections does not need to be inspected, it is just dropped, while approved connections can still be filtered by the enhanced capability of these Unified Threat Management (UTM) devices (sometimes called Next  Generation Firewalls (NGFW), a term coined by Palo Alto Networks).

The first bolded phrase is true when a manufacturer adds advanced capabilities like application identification to an existing appliance. Palo Alto Networks understood this and designed an appliance from the ground up specifically to implement these advanced functions under load with low latency.

In the second bolded phrase, Stiennon casually lumps together the terms UTMs and Next Generation Firewalls as if they are synonymous. They are not. While it is true that Palo Alto Networks coined the term Next Generation Firewalls, it only became an industry-defined term when Gartner published a research paper in October 2009 (ID Number G00171540) and applied a rigorous definition.

The key point is that a next generation firewall provides fully integrated Application Awareness and Intrusion Prevention with stateful inspection. Fully integrated means that (1) the application identification occurs in the firewall, which enables positive traffic control from the network layer up through the application layer, (2) all intrusion prevention is applied to the resulting allowed traffic, (3) all this is accomplished in a single pass to minimize latency, and (4) there is a unified interface for creating firewall policies. Running multiple inspection processes sequentially, each controlled by independently defined policies, results in increased latency and excessive use of security management resources, and thus does not qualify as a Next Generation Firewall.

But PAN really has abandoned stateful inspection, at a tremendous cost to their ability to establish connections fast enough to address the needs of large enterprises and carriers.

This is simply false. Palo Alto Networks supports standard stateful inspection for two purposes. First, to ease the conversion process from a traditional stateful inspection firewall. Most of our customers start by converting their existing stateful inspection firewall policy rules and then add the more advanced NGFW functions.

Second, the use of ports in policies can be very useful when combined with application identification. For example, you can build a policy that says (a) SMTP can run only on port 25 and (b) only SMTP can run on port 25. The first part (a) assures that if SMTP is detected on any of the other 65,534 ports it will be blocked. This means that no cyber predator can set up an email service on any of your non-SMTP servers. The second part (b) says that no other application besides SMTP can run on port 25. Therefore when you open a port for a specific application, you can assure it will be the only application running on that port. Palo Alto Networks can do this because its core functionality monitors all 65,535 ports for all applications all the time.
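
Expressed as a conceptual sketch in Python (pseudocode, not Palo Alto Networks policy syntax), the two complementary rules pin the application and the port to each other:

```python
# Conceptual sketch of the two complementary rules (not PAN syntax):
#   (a) the SMTP application is allowed only on port 25
#   (b) port 25 is allowed to carry only the SMTP application
def decide(application: str, dst_port: int) -> str:
    if application == "smtp" and dst_port != 25:
        return "deny"    # rule (a): SMTP found hiding on another port
    if dst_port == 25 and application != "smtp":
        return "deny"    # rule (b): something other than SMTP on the mail port
    if application == "smtp" and dst_port == 25:
        return "allow"
    return "deny"        # everything else falls through to default deny

print(decide("smtp", 8080))         # deny: rogue mail service on a web port
print(decide("custom-tunnel", 25))  # deny: non-SMTP traffic abusing port 25
print(decide("smtp", 25))           # allow
```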

Stiennon then goes on to quote Bob Walder of NSS Labs and interprets his statement as follows:

In other words, an enterprise deploying PAN’s NGFW is getting full content inspection all the time with no ability to turn it off. That makes the device performance unacceptable as a drop-in replacement for Juniper, Cisco, Check Point, or Fortinet firewalls.

This statement has no basis in facts that I am aware of. Palo Alto firewalls are used all the time to replace the above-mentioned companies’ firewalls. Palo Alto has over 6,500 customers! Does full packet inspection take more resources than simple stateful inspection? Of course. But that misses the point. As I said above, stateful inspection is completely useless at providing an organization with a Positive Enforcement Model, which after all is the sine qua non of a firewall. By Positive Enforcement Model, I mean the ability to define what is allowed and block everything else. This is also described as “default deny.”

Furthermore, based on my experience, in a bake-off situation where the criteria are a combination of real-world traffic, real-world security policy requirements designed to mitigate defined high risks, and total cost of ownership, Palo Alto Networks will always win. I’ll go a step further and say that in today’s world there is simply no significant risk mitigation value for traditional stateful inspection.

It’s the application awareness feature. This is where PAN’s R&D spending is going. All the other features made possible by their hardware acceleration and content inspection ability are supported by third parties who provide malware signatures and URL databases of malicious websites and categorization of websites by type. 

This is totally wrong. In fact, the URL filtering database and the endpoint-checking host software in GlobalProtect (explained further on) are the only third-party components Palo Alto Networks uses that I am aware of. PAN built a completely new firewall engine capable of performing stateful inspection (for backward compatibility and for the highly granular policies described above), application control, anti-virus, anti-spyware, anti-malware, and URL Filtering in a single pass. PAN writes all of its own malware signatures and, of course, participates in security intelligence sharing arrangements with other companies.

Palo Alto Networks has further innovated with (1) Wildfire which provides the ability to analyze executables being downloaded from the Internet to detect zero-day attacks, and (2) GlobalProtect which enables remote and mobile users to stay under the control and protection of PAN NGFWs.

While anecdotal, the reports I get from enterprise IT professionals are that PAN is being deployed behindexisting (sic) firewalls. If that is the general case PAN is not the Next Generation Firewall, it is a stand alone technology that provides visibility into application usage.  Is that new? Not really. Flow monitoring technology has been available for over a decade from companies like Lancope and Arbor Networks that provides this visibility at a high level. Application fingerprinting was invented by SourceFire and is the basis of their RNA product.

Wow. Let me try to deconstruct this. First, it is true that some companies start by putting Palo Alto Networks behind existing firewalls. Why not? I see this as an advantage for PAN as it gives organizations the ability to leverage PAN’s value without waiting until it’s time to do a firewall refresh. Also, PAN can replace a proxy to improve content filtering. I’ll save the proxy discussion for another time. I am surely not privy to PAN’s complete breakdown of installation architectures, but “anecdotally” I would say most organizations are doing straight firewall replacements.

Much more importantly, the idea of doing application identification in an IPS or in a flow product totally misses the point. Palo Alto Networks ships the only firewall that does it to enable positive control (default deny) from the network layer up through the application layer. I am surely not saying that there is no value in adding application awareness to IPSs or flow products. There is. But IPSs use a negative control model, i.e. define what should be blocked and allow everything else. Firewalls are supposed to provide attack surface reduction and cannot unless they are able to exert positive control.

While I will agree that application identification and the ability to enforce policies that control what applications can be used within the enterprise is important I contend that application awareness is ultimately a feature that belongs in a UTM appliance or stand alone device behind the firewall. Like other UTM features it must be disabled for high connection rate environments such as large corporate gateways, data centers, and within carrier networks.

This may be Stiennon’s opinion, but I would ask: what meaningful risks, besides not meeting the requirements of a compliance regime, does a stateful inspection firewall mitigate, considering the ease with which attackers can bypass it? I have nothing against compliance requirements per se, but our focus is on information security risk mitigation.

In the three months ending Jan. 31 2012 PAN’s revenue is off from the previous quarter. The fourth quarter is usually the best quarter for technology vendors. There may be some extraordinary situation that accounts for that, but it is not evident in the S-1

There is no denying that year-over-year PAN has been on a tear, almost doubling its revenue from Q4 2010 to Q4 2011. But the glaring fact is that PAN’s revenue growth has completely stalled out in what was a great quarter for the industry.

Perhaps my commenting on these last paragraphs does not belong in this blog post as they are not technical in nature, but IMHO Stiennon is wrong again. Stiennon glosses over the excellent quarter that preceded the last one where PAN grew its revenue from $40.22 million to $57.11 million. Thus the last quarter’s $56.68 million looks to Stiennon like a stall with no explanation. However, here is the exact quote from the S-1 explaining what happened, “For the three month period ended October 31, 2011, the increase in product revenue was driven by strong performance in our federal business, as a result of improved productivity from our expanded U.S. government sales force and increased U.S. government spending at the end of its September 30 fiscal year.” My translation from investment banker/lawyer speak to English is that PAN did so well with the Federal government that quarter that the following quarter suffered by comparison. I could be wrong.

In closing, let me say I fully understand that there is no single silver bullet in security. Our approach is about balancing resources among Prevention, Detection, and Incident Response controls. There is never enough budget to implement every technical control that mitigates some risk. The exercise is to prioritize the selection of controls within budget constraints to provide the maximum information security risk reduction based on an organization’s understanding of its risks. While these priorities vary widely among organizations, I can confidently say that based on my experience, Palo Alto Networks provides the best network-based Prevention Control risk mitigation available today. Its, yes, revolutionary technology is well worth investing time to understand.

As I look over my experience in Information Security since 1999, I see three distinct eras with respect to the motivation driving technical control purchases:

  • Basic (mid-90’s to early 2000’s) – Organizations implemented basic host-based and network-based technical security controls, i.e. anti-virus and firewalls respectively.
  • Compliance (early 2000’s to mid 2000’s) – Compliance regulations such as Sarbanes-Oxley and PCI drove major improvements in security.
  • Breach Prevention and Incident Detection & Response (BPIDR) (late 2000’s to present) – Organizations realize that regulatory compliance represents a minimum level of security, and is not sufficient to cope with the fast changing methods used by cyber predators. Meeting compliance requirements will not effectively reduce the likelihood of a breach by more skilled and aggressive adversaries or detect their malicious activity.

I have three examples to support the shift from the Compliance era to the Breach Prevention and Incident Detection & Response (BPIDR) era. The first is the increasing popularity of Palo Alto Networks. No compliance regulation I am aware of makes the distinction between a traditional stateful inspection firewall and a Next Generation Firewall as defined by Gartner in their 2009 research report.  Yet in the last four years, 6,000 companies have selected Palo Alto Networks because their NGFWs enable organizations to regain control of traffic at points in their networks where trust levels change or ought to change.

The second example is the evolution of Log Management/SIEM. One can safely say that the driving force for most Log/SIEM purchases in the early to mid 2000s was compliance. The fastest growing vendors of that period had the best compliance reporting capabilities. However, by the late 2000s, many organizations began to realize they needed better detection controls. We began to see a shift in the SIEM market to those solutions which not only provided the necessary compliance reports, but could also function satisfactorily as the primary detection control within limited budget requirements. Hence the ascendancy of Q1 Labs, which actually passed ArcSight in number of installations prior to being acquired by IBM.

The third example is email security. From a compliance perspective, Section 5 of PCI DSS, for example, is very comprehensive regarding anti-virus software. However, it is silent regarding phishing. The popularity of products from Proofpoint and FireEye shows that organizations have determined that blocking email-borne viruses is simply not adequate. Phishing, and particularly spear-phishing, must be addressed.

Rather than simply call the third era “Breach Prevention,” I chose to add “Incident Detection & Response” because preventing all system compromises that could lead to a breach is not possible. You must assume that Prevention controls will have failures. Therefore you must invest in Detection controls as well. Too often, I have seen budget imbalances in favor of Prevention controls.

The goal of a defense-in-depth architecture is to (1) prevent breaches by minimizing attack surfaces, controlling access to assets, and preventing threats and malicious behavior on allowed traffic, and (2) to detect malicious activity missed by prevention controls and detect compromised systems more quickly to minimize the risk of disclosure of confidential data.

12. October 2011 · Comments Off on Controlling remote access tool usage in the enterprise · Categories: blog

Palo Alto Networks’ recent advice on controlling remote access tools in the enterprise was prompted by Google releasing a remote desktop control feature for Chrome, which also has the ability to be configured “to punch through the firewall.”

As Palo Alto Networks points out, the 2011 Verizon Data Breach Report showed that the initial penetrations in over 1/3 of the 900 incidents analyzed could be tracked to remote access errors.

Here are Palo Alto Networks’ recommendations:

  1. Learn which remote access tools are in use, who is using them and why.
  2. Establish a standard list of remote access tools for those who need them.
  3. Establish a list of who should be allowed to use these tools.
  4. Document the usage guidelines, complete with ramifications of misuse and educate ALL users.
  5. Enforce the usage using traffic monitoring tools or, better yet, a Palo Alto Networks next-generation firewall.

During the last several years we have observed dramatic changes in the identity of attackers, their goals, and methods. Today’s most dangerous attackers are cyber criminals and nation-states who are stealing money and intellectual property. Their primary attack vector is no longer the traditional “outside-in” method of directly penetrating the enterprise at the network level through open ports and exploiting operating system vulnerabilities.

The new dominant attack vector is at the application level. It starts with baiting the end-user via phishing or some other social engineering technique to click on a link which takes the unsuspecting user to a malware-laden web page. The malware is downloaded to the user’s personal device, steals the person’s credentials, establishes a back-channel out to a controlling server, and, using the person’s credentials, steals money from corporate bank accounts, credit card information, and/or intellectual property. We call this the “Inside-Out” attack vector.

Here are my recommendations for mitigating these modern malware risks:

  • Reduce the enterprise’s attack surface by limiting the web-based applications to only those that are necessary to the enterprise and controlling who has access to those applications. This requires an application-based Positive Control Model at the firewall.
  • Deploy heuristic analysis coupled with sandbox technology to block the user from downloading malware.
  • Leverage web site reputation services and blacklists.
  • Deploy effective Intrusion Prevention functionality which is rapidly updated with new signatures.
  • Segment the enterprise’s internal network to:
    • Control users’ access to internal applications and data
    • Deny unknown applications
    • Limit the damage when a user or system is compromised
  • Provide remote and mobile users with the same control and protection as itemized above
  • Monitor the network security devices’ logs in real-time on a 24x7x365 basis

Full disclosure: For the last four years my company Cymbel has partnered with Palo Alto Networks to provide much of this functionality. For the real-time 24x7x365 log monitoring, we partner with Solutionary.

20. January 2011 · Comments Off on ‘Cyberlockers’ present new challenges to music industry · Categories: blog

PaidContent.org published an interesting article yesterday entitled, How ‘Cyberlockers’ Became The Biggest Problem In Piracy.

PaidContent uses the term “cyberlocker” to refer to browser-based file sharing applications, which pose a new challenge to the music industry’s efforts to thwart illegal sharing of music, aka piracy.

The article highlights some of the better known applications like RapidShare, Hotfile, Mediafire, and Megaupload. It also points out that Google Docs qualifies as a cyberlocker, although it’s used mostly for Word and Excel documents.

What the article fails to mention is the amount of malware lurking in these cyberlockers. The file you download may be the song you think it is, or it may be a trojan.

Palo Alto Networks, the Next Generation Firewall manufacturer, has the statistics to corroborate PaidContent’s claim that browser-based file sharing is growing rapidly.

Palo Alto Networks’ Applipedia identifies 141 file sharing applications, of which 65 are browser-based.

Any organization which has deployed Palo Alto Networks can control the use of browser-based file sharing with the same ease as the older peer-to-peer file sharing applications.

Furthermore, if you configure Palo Alto to block the “file sharing” sub-category of applications, not only will all of the known file sharing applications be blocked, but any newly discovered ones will also be blocked. However, there are valid business use cases for file sharing applications, so you would want to add an exception for the one you have selected.
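
Conceptually, the policy is "allow the one sanctioned application first, then block the entire file sharing sub-category." Here is a hedged sketch of that rule ordering in Python; it is not actual Palo Alto Networks configuration, and "box" as the sanctioned application is just an example.

```python
# Sketch of rule ordering for category-based blocking (not PAN configuration).
# "box" as the sanctioned file sharing application is only an example.
RULES = [
    {"name": "allow-sanctioned-file-sharing", "match_app": "box", "action": "allow"},
    {"name": "block-file-sharing-category", "match_subcategory": "file-sharing", "action": "deny"},
]

def decide(app: str, subcategory: str) -> str:
    for rule in RULES:                      # first match wins, so order matters
        if rule.get("match_app") == app:
            return rule["action"]
        if rule.get("match_subcategory") == subcategory:
            return rule["action"]
    return "deny"                           # default deny for everything else

print(decide("box", "file-sharing"))              # allow: the sanctioned exception
print(decide("rapidshare", "file-sharing"))       # deny: caught by the category rule
print(decide("new-cyberlocker", "file-sharing"))  # deny: future apps in the sub-category too
```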

Finally, should you choose to allow a file sharing application, Palo Alto will provide protection against malware.