23. October 2013 · Comments Off on The Secrets of Successful CIOs (and CISOs) · Categories: blog

Rachel King, a reporter with the CIO Journal of the Wall St. Journal, published an article last week entitled The Secrets of Successful CIOs. She reports on a study performed by Accenture to determine the priorities of high-performing IT executives.

Ms. King highlights the top three business objectives of high performers compared to those of lower-performing IT executives. Unfortunately, the article does not explain how Accenture measured the performance of IT executives. Having said that, this chart is interesting:

[Chart: Accenture Says Highest-Performing CIOs Focus on Customers, Business - The CIO Report - WSJ]

One would naturally jump to the conclusion that high-performing Chief Information Security Officers would need to orient themselves to these top priorities as well. But what if you work for one of the CIOs who is focused on cutting business operational costs, increasing workforce productivity, and automating core business processes?

22. April 2013 · Comments Off on DropSmack: Using Dropbox Maliciously · Categories: blog

I found an interesting article on TechRepublic, "DropSmack: Using Dropbox to steal files and deliver malware."

Given that 50 million people are using Dropbox, it surely looks like an inviting attack vector for cyber adversaries. Jacob Williams (@MalwareJake) has developed proof-of-concept malware, DropSmack, which is embedded in a Word file already synchronized by Dropbox; it infects an internal endpoint and provides Command & Control communications.

What technical control do you have in place that would detect and block DropSmack? A network security product would have to be able to decode application files such as Word, Excel, PowerPoint, and PDF, and then detect the malware and/or anomalies embedded in the document.
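As a rough illustration of what that kind of document inspection involves (a toy sketch only, not a substitute for a real network security control), here is a small Python snippet that flags modern Office files carrying an embedded VBA project, one common carrier for document-borne payloads of this kind. The function name and command-line usage are my own assumptions.

```python
import sys
import zipfile

def has_vba_project(path: str) -> bool:
    """Return True if an Office Open XML file carries an embedded VBA project.

    Modern Office files (.docx, .docm, .xlsx, .xlsm, .pptm) are ZIP archives;
    a 'vbaProject.bin' entry indicates embedded macros, which is worth flagging
    when macros are unexpected (e.g., in a plain .docx).
    """
    try:
        with zipfile.ZipFile(path) as archive:
            return any(name.endswith("vbaProject.bin") for name in archive.namelist())
    except zipfile.BadZipFile:
        # Legacy binary formats (.doc, .xls) are not ZIP-based; they need a
        # different parser (e.g., OLE2) and are not handled by this sketch.
        return False

if __name__ == "__main__":
    for document in sys.argv[1:]:
        if has_vba_project(document):
            print(f"WARNING: {document} contains an embedded VBA project")
```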

Can you prevent Dropbox from being used in your organization? Should you? What about other file sharing applications?

04. April 2013 · Comments Off on The Real Value of a Positive Control Model · Categories: blog

During the last several years I’ve written a lot about the fact that Palo Alto Networks enables you to re-establish a network-based Positive Control Model from the network layer up through the application layer. But I never spent much time on why it’s important.

Today, I will reference a blog post by Jack Whitsitt, Avoiding Strategic Cyber Security Loss and the Unacceptable Offensive Advantage (Post 2/2), to help explain the value of implementing a Positive Control Model.

TL;DR: All information breaches result from human error. The human error rate per unit of information technology is fairly constant. However, because IT is always expanding (more applications and more functions per application), the actual number of human errors resulting in Vulnerabilities (used in the most general sense of the word) per time period is always increasing. Unfortunately, the information security team has limited resources (Defensive Capability) and cannot cope with users' ever-increasing number of errors. This has created an ever-growing "Offensive Advantage" (Vulnerabilities – Defensive Capability). However, implementing a Positive Control Model to influence/control human behavior will reduce the number of user errors per time interval, which will reduce the Offensive Advantage to a manageable size.

On the network side, Palo Alto Networks' Next Generation Firewall monitors and controls traffic by user and application across all 65,535 TCP and UDP ports, all of the time, at specified speeds. Granular policies based on any combination of application, user, security zone, IP address, port, URL, and/or Threat Protection profiles are created in a single unified interface that enables the infosec team to respond quickly to new business requirements.

On the endpoint side, Trusteer provides a behavioral type of whitelisting that prevents device compromise and confidential data exfiltration. It requires little to no administrative configuration effort. Thousands of agents can be deployed in days. When implemented on already deployed Windows and Mac devices, Trusteer will detect compromised devices that traditional signature-based anti-virus products miss.

Let's start with Jack's basic truths about the relationships between technology, people's behavior, and infosec resources. Cyber security is a problem that occurs over unbounded time, so it's a rate problem driven by the ever-increasing number of human errors per unit of time. While the number of human errors per unit of time per "unit of information technology" is steady, complexity, in the form of new applications and added functions to existing applications, is constantly increasing. Therefore the number of human errors per unit of time is constantly increasing.

Unfortunately, information security resources (technical and administrative controls) are limited, so the organization's Defensive Capability cannot keep up with the increasing number of Vulnerabilities. Since the number of human errors increases at a faster rate than the resource-limited Defensive Capability, an Unacceptable Offensive Advantage is created. Here is a diagram that shows this.

[Graph: Vulnerabilities growing faster than Defensive Capability, creating the Offensive Advantage]

What's even worse, most defensive controls cannot significantly shrink the gap between the Vulnerability curve and the Defense curve because they do not bend the Vulnerability curve, as this graph shows.

[Graph: most defensive controls do not bend the Vulnerability curve]

So the only real hope of reducing organizational cyber security risk, i.e. the adversaries' Offensive Advantage, is to bend the Vulnerability curve, as this graph shows.

[Graph: bending the Vulnerability curve]

Once you do that, you can apply additional controls to further shrink the gap between the Vulnerability and Defense curves, as this graph shows.

[Graph: additional controls shrinking the gap between the Vulnerability and Defense curves]
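To make the rate argument concrete, here is a toy numerical model of the curves above. Every number in it (the error rate, IT growth, defense budget, and the 50% error reduction attributed to a Positive Control Model) is an illustrative assumption, not data; the point is only to show how a constant error rate multiplied by compounding complexity outruns a linearly growing defense, and how reducing the error rate bends the gap.

```python
# Toy model of Jack Whitsitt's "Offensive Advantage" argument.
# All numbers are illustrative assumptions, not measurements.

YEARS = range(10)

ERROR_RATE_PER_UNIT_IT = 5.0      # human errors per "unit of IT" per year (constant)
IT_GROWTH_PER_YEAR = 1.25         # complexity: 25% more applications/functions each year
DEFENSE_START, DEFENSE_GROWTH = 40.0, 5.0   # defensive capability grows only linearly
POSITIVE_CONTROL_FACTOR = 0.5     # assumed error reduction from a Positive Control Model

def vulnerabilities(year: int, error_rate: float) -> float:
    it_units = 10 * (IT_GROWTH_PER_YEAR ** year)   # expanding IT footprint
    return error_rate * it_units

def defense(year: int) -> float:
    return DEFENSE_START + DEFENSE_GROWTH * year

for year in YEARS:
    baseline = vulnerabilities(year, ERROR_RATE_PER_UNIT_IT)
    controlled = vulnerabilities(year, ERROR_RATE_PER_UNIT_IT * POSITIVE_CONTROL_FACTOR)
    d = defense(year)
    print(f"year {year}: offensive advantage without controls = {baseline - d:6.1f}, "
          f"with positive control model = {controlled - d:6.1f}")
```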

The question is how to do this. Perhaps Security Awareness Training can have some impact.

I recommend implementing network and host-based technical controls that can establish a Positive Control Model. In other words, only by defining what people are allowed to do and denying everything else can you actually bend the Vulnerability curve, i.e. reduce human errors, both unintentional and intentional.
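In code terms, a Positive Control Model is simply an allow-list evaluated with a default-deny at the end. The sketch below is a generic illustration of that policy shape; the users, applications, and zones are hypothetical, and this is not PAN-OS configuration syntax.

```python
from dataclasses import dataclass

@dataclass
class Session:
    user: str
    application: str   # the identified application, not just a port number
    src_zone: str
    dst_zone: str

# Positive Control Model: enumerate what is allowed...
ALLOW_RULES = [
    {"user": "finance-group", "application": "oracle-erp",   "src_zone": "inside", "dst_zone": "datacenter"},
    {"user": "any",           "application": "web-browsing", "src_zone": "inside", "dst_zone": "untrust"},
]

def is_allowed(session: Session) -> bool:
    for rule in ALLOW_RULES:
        if (rule["user"] in ("any", session.user)
                and rule["application"] == session.application
                and rule["src_zone"] == session.src_zone
                and rule["dst_zone"] == session.dst_zone):
            return True
    return False   # ...and deny everything else (the implicit default-deny rule)

print(is_allowed(Session("finance-group", "oracle-erp", "inside", "datacenter")))  # True
print(is_allowed(Session("alice", "dropbox", "inside", "untrust")))                # False: never explicitly allowed
```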

Implementing a Positive Control Model does not happen instantly, i.e. it's also a rate problem. But if you don't have the technical controls in place, no amount of process is going to improve the organization's security posture.

This is why firewalls are such a critical network technical control. They are placed at critical choke points in the network, between subnets of different trust levels, with the express purpose of implementing a Positive Control Model.

Firewalls first became popular in the mid-1990s. At that time, when a new application was built, it was assigned a port number. For example, the mail protocol, SMTP, was assigned port 25, and the HTTP protocol was assigned port 80. At that time, (1) protocol and application meant the same thing, and (2) all applications "behaved," i.e. they ran only on their assigned ports. Given this environment, all a firewall had to do was use port numbers (and IP addresses) to control traffic. Hence the popularity of port-based stateful inspection firewalls.

Unfortunately, starting in the early 2000s, developers began writing applications to bypass the port-based stateful inspection firewall in order to get their applications deployed quickly in organizations without waiting for security teams to change policies. Different applications were also developed to share a port such as port 80, because it was always open to give people access to the Internet. Other techniques like port-hopping and encryption were likewise used to bypass the port-based stateful inspection firewall.

Security teams started deploying additional network security controls like URL Filtering to complement firewalls. This increase in complexity created new problems: (1) policy coordination between URL Filtering and the firewalls, (2) performance issues, and (3) because URL Filtering products were mostly proxy-based, they would break some of the newer applications, frustrating users trying to do their jobs.

By 2005 it was obvious to some people that application technology had rendered port-based firewalls and their helpers obsolete. A completely new approach to firewall architecture was needed that (1) classified traffic by application first, regardless of port, and (2) was backward compatible with port-based firewalls to enable the conversion process. This is exactly what the Palo Alto Networks team did, releasing their first "Next Generation" Firewall in 2007.

Palo Alto Networks classifies traffic by application at the beginning of the policy process. It monitors all 65,535 TCP and UDP ports for all applications, all of the time, at specified speeds. This enables organizations to re-establish the Positive Control Model, which bends the "Vulnerability" curve and allows an infosec team with limited resources to reduce what Jack Whitsitt calls the adversaries' "Offensive Advantage."
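As a loose analogy only (Palo Alto Networks has not published App-ID's internals, and the signatures below are toy ones), classifying traffic by application first means deciding what a session is from its payload before consulting the port number:

```python
# Toy illustration: identify the application from payload patterns, not from the port.
TOY_SIGNATURES = {
    "web-browsing": [b"GET ", b"POST ", b"HTTP/1."],
    "smtp":         [b"EHLO", b"HELO", b"MAIL FROM:"],
    "ssh":          [b"SSH-2.0"],
}

def identify_application(payload: bytes) -> str:
    """Return a best-guess application name for the first bytes of a session."""
    for app, patterns in TOY_SIGNATURES.items():
        if any(payload.startswith(p) or p in payload[:64] for p in patterns):
            return app
    return "unknown"

# The same policy decision applies whether SMTP shows up on port 25 or port 8080:
for port, payload in [(25, b"EHLO mail.example.com"), (8080, b"EHLO mail.example.com")]:
    print(port, identify_application(payload))   # both sessions classified as smtp
```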

On the endpoint side, Trusteer provides a type of Positive Control Model / whitelisting whereby highly targeted applications like browsers, Java, Adobe Flash, PDF readers, and Microsoft Office are automatically protected behaviorally. The Trusteer agent understands the relationship between memory state and file I/O well enough to know the difference between good and malicious I/O behavior. Trusteer then blocks the malicious I/O before any damage is done.

Thus human errors resulting from social engineering, such as clicking on links to malicious web pages or opening documents containing malicious code, are automatically blocked. This is all done with no policy configuration effort on the part of the infosec team; the policies are updated periodically by Trusteer. Furthermore, thousands of agents can be deployed in days. Finally, when deployed to existing Windows and Mac endpoints, it will detect already compromised devices.

Trusteer, founded in 2006, has over 40 million agents deployed across the banking industry to protect online banking users, so its agent technology has been battle-tested.

In closing, then, only by implementing technical controls that establish a Positive Control Model to reduce human errors can an organization bend the Vulnerability curve sufficiently to reduce the adversaries' Offensive Advantage to an acceptable level.

23. March 2013 · Comments Off on Practical Zero Trust Recommendations · Categories: Slides

[Image: Cymbel Zero Trust Recommendations]

Cymbel has adopted Forrester's Zero Trust Model for Information Security. Zero Trust means there are no longer "trusted" networks, devices, or users. There is no such thing as 100% Prevention, if there ever was. In light of the changes we've seen during the last several years, this is the only approach that makes sense. There is simply no way to prevent endpoints and servers from ever becoming compromised. For more details, see Cymbel's Zero Trust Recommendations.


25. February 2013 · Comments Off on Application Usage and Threat Report · Categories: Slides

[Image: Palo Alto Networks Application Usage and Threat Report - Real Data, Real Threats]

The latest Application Usage and Threat Report from Palo Alto Networks provides a global view into enterprise application usage and the associated threats by summarizing network traffic assessments conducted in 3,056 organizations worldwide between May 2012 and December 2012.

This report discusses application usage patterns and the specific types of threats they may or may not introduce. The application and threat patterns discussed within this report dispel the position that social networking, file sharing, and video applications are the most common threat vectors, while reaffirming that internal applications are highly prized targets. Rather than use more obvious, commercially available applications, attackers are masking their activities through custom or encrypted applications.

If you would like a copy of this report, please fill out the form on the right side of this page.


25. February 2013 · Comments Off on Surprising Application-Threat Analysis from Palo Alto Networks · Categories: blog

This past week, Palo Alto Networks released its H2/2012 Application Usage and Threat Report. It is the first time Palo Alto has integrated application usage and threat analysis; previous reports focused only on application risk. This report analyzed 12.6 petabytes of data from 3,056 networks, covering 1,395 applications; 5,307 unique threats were identified from 268 million threat logs.

Here are the four most interesting items I noted:

1. Of the 1,395 applications found, 10 were responsible for 97% of all Exploit* logs. One of these was web-browsing, which is to be expected. However, the other nine were internal applications, representing 82% of the Exploit* logs!!

This proves once again that perimeter traffic security monitoring is not adequate. Internal network segmentation and threat monitoring are required.

2. Custom or Unknown UDP traffic represented only 2% of all the bandwidth analyzed, yet it accounted for 55% of the Malware* logs!!

This clearly shows the importance of minimizing unidentified application traffic. The ratio of unidentified to identified traffic is therefore a key security performance indicator and ought to trend down over time; one way to compute it is sketched below, after the footnote.

3. DNS traffic accounted for only 0.4% of total bytes but 25.4% of sessions, and ranked third in Malware* logs at 13%.

No doubt most, if not all, of this represents malicious Command & Control traffic. If you are not actively monitoring and analyzing DNS traffic, you are missing a key method of detecting compromised devices in your network; a simple DNS-log heuristic is also sketched below.

4. 85 of the 356 applications that use SSL never use port 443.

If your firewall is not monitoring all ports for all applications all of the time, you are simply not getting complete visibility and cannot re-establish a Positive Control Model.

*If you are not familiar with Palo Alto Networks’ Threat Protection function, “Exploit” and “Malware” are the two main categories of “Threat” logs. There is a table at the top of page 4 of this AUT report that summarizes the categories and sub-categories of the 268 million Threat Logs captured and analyzed. The “Exploit” logs refer to matches against vulnerability signatures which are typical of Intrusion Prevention Systems. The “Malware” logs are for Anti-Virus and Anti-Spyware signature matches.
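Regarding the unidentified-traffic indicator from item 2: once you can export traffic logs with an application column, the ratio is trivial to compute. Here is a minimal sketch, assuming hypothetical "app" and "bytes" CSV columns and hypothetical labels for unclassified traffic.

```python
import csv
from collections import defaultdict

def unidentified_ratio(log_path: str) -> float:
    """Compute unidentified bytes / identified bytes from an exported traffic log.

    Assumes hypothetical 'app' and 'bytes' columns; adjust the unidentified
    labels to however your own logs name unclassified traffic.
    """
    bytes_by_class = defaultdict(int)
    with open(log_path, newline="") as f:
        for row in csv.DictReader(f):
            app = row["app"].lower()
            cls = "unidentified" if app in ("unknown-udp", "unknown-tcp", "unknown") else "identified"
            bytes_by_class[cls] += int(row["bytes"])
    return bytes_by_class["unidentified"] / max(bytes_by_class["identified"], 1)

# Trend this ratio week over week; it should move toward zero.
# print(unidentified_ratio("traffic_export_week12.csv"))
```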
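And regarding the DNS observation in item 3: even without a dedicated product, passive DNS logs can surface likely C&C activity. The sketch below is deliberately crude; the column names, thresholds, and NXDOMAIN heuristic are all assumptions you would tune for your own environment.

```python
import csv
from collections import defaultdict

def flag_suspicious_resolvers(dns_log_path: str,
                              max_distinct_domains: int = 500,
                              max_nxdomain_ratio: float = 0.3):
    """Yield internal clients with many distinct lookups or a high NXDOMAIN ratio,
    two coarse hints of DGA-style C&C behavior. Assumes hypothetical
    'client', 'qname', and 'rcode' columns in the exported DNS log."""
    domains = defaultdict(set)
    queries = defaultdict(int)
    failures = defaultdict(int)
    with open(dns_log_path, newline="") as f:
        for row in csv.DictReader(f):
            client = row["client"]
            queries[client] += 1
            domains[client].add(row["qname"].lower())
            if row["rcode"] == "NXDOMAIN":
                failures[client] += 1
    for client, count in queries.items():
        nx_ratio = failures[client] / count
        if len(domains[client]) > max_distinct_domains or nx_ratio > max_nxdomain_ratio:
            yield client, len(domains[client]), round(nx_ratio, 2)

# for host in flag_suspicious_resolvers("dns_queries.csv"):
#     print("investigate:", host)
```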

What is not covered in this report is Palo Alto's cloud-based WildFire zero-day analysis service, which analyzes files not seen before to determine whether they are benign or malicious. If malicious behavior is found, signatures of the appropriate types are generated in less than one hour and update Threat Protection. In addition, the appropriate IP addresses and URLs are added to their respective blacklists.

This report is well worth reading.

13. September 2012 · Comments Off on The story behind the Microsoft Nitol Botnet takedown · Categories: blog

Earlier today Microsoft announced the takedown of the Nitol botnet and takeover of the 3322.org domain. However, if you are using the Damballa flow-based Detection Control, this was a non-event. Full disclosure – Cymbel partners with Damballa.

Gunter Ollman, Damballa's CTO, commented today on Nitol and 3322.org and the ramifications of the Microsoft takedown; I will summarize his points here.

First, Damballa has been tracking Nitol and the other 70 or so botnets leveraging 3322.org for quite some time. Therefore, if you are a Damballa user, any device on your network infected with Nitol, or with any of the other botnets leveraging 3322.org, would already have been identified by Damballa. Furthermore, if you were using Damballa's blocking capabilities, those devices would be prevented from communicating with their malware's Command & Control (C&C) servers.

Second, most of these 70+ botnets make use of “multiple C&C domain names distributed over multiple DNS providers. Botnet operators are only too aware of domain takedown orders from law enforcement, so they add a few layers of resilience to their C&C infrastructure to protect against that kind of disruption.” Therefore this takedown did not kill these botnets.

In closing, while botnet and DNS provider takedowns are interesting, they simply do not reduce an organization's risk of data breaches. Damballa does!!

21. August 2012 · Comments Off on Zero-day exploit trade impact on enterprises · Categories: blog

SC Magazine's Dan Kaplan, writing on The Hypocrisy of the zero-day exploit trade, shows that enterprises can no longer rely on signature-based Detection Controls to mitigate the risks of confidential data breaches resulting from compromised devices.

I am surely not saying that signature-based IPS/IDS controls are dead, as you do want to detect and block known threats. However, IPS/IDSs are no longer sufficient. They must be complemented by a behavioral analysis Detection Control (flow and DNS analysis) as part of a redesigned Defense-in-Depth architecture.

29. July 2012 · Comments Off on Speaking of Next Gen Firewalls – Forbes · Categories: blog

I would like to respond to Richard Stiennon’s Forbes article, Speaking of Next Gen Firewalls. Richard starts off his article as follows:

“As near as I can tell the salient feature of Palo Alto Networks’ products that sets them apart is application awareness. … In my opinion application awareness is just an extension of URL content filtering.”

First, let me say that application awareness, out of context, is almost meaningless. Second, I view technical controls from a risk management perspective, i.e. I judge the value of a proposed technical control by the risks it can mitigate.

Third, the purpose of a firewall is to establish a positive control model, i.e. limit traffic into and out of a defined network to what is allowed and block everything else. The reason everyone is focused on application awareness is that traditional stateful inspection firewalls are port-based and cannot control modern applications that do not adhere to the network layer port model and conventions established when the Internet protocols were first designed in the 1970s.

The reason Palo Alto Networks is so popular is that it extends firewall functionality from the network layer up through the application layer in a single unified policy view. This is unlike most application awareness solutions which, as Richard says, are just extensions of URL filtering, because they are based on proxy technology.

For those more technically inclined, URL Filtering solutions are generally based on proxy technology and therefore only monitor a small set of ports, including 80 and 443. However, Palo Alto Networks monitors all 65,535 TCP and UDP ports, all the time, at specified speeds, from the network layer up through the application layer. If you doubt this, try it yourself. It's easy: simply run a standard application on a non-standard port and see what the logs show.
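One quick way to run that experiment, assuming you have a host on each side of the firewall, is to serve standard HTTP on an arbitrary non-standard port (the port number below is arbitrary) and then see how the traffic log classifies the sessions. An application-aware firewall should log them by application, not merely as TCP on an unfamiliar port.

```python
# Minimal test: serve standard HTTP on a non-standard port and watch the firewall logs.
# (Equivalent to running `python3 -m http.server 8444` from a shell.)
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8444  # any non-standard port will do
print(f"Serving plain HTTP on port {PORT}; browse to it from across the firewall "
      f"and check how the traffic log classifies the sessions.")
HTTPServer(("0.0.0.0", PORT), SimpleHTTPRequestHandler).serve_forever()
```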

Furthermore, Palo Alto provides a single policy view that includes user, application, zone, URL filtering, and threat prevention columns in addition to the traditional five-tuple: source IP, destination IP, source port, destination port, and service.

To the best of my knowledge, Palo Alto Networks is the only firewall, whether called a Next Generation Firewall or a UTM, that has this set of features. Therefore, from a risk management perspective, Palo Alto Networks is the only firewall that can establish a positive control model from the network layer up through the application layer.

21. June 2012 · Comments Off on Bromium Micro-Virtualization: New Approach to endpoint security · Categories: blog

Bromium is emerging from stealth mode with the announcement of a $26.5 million Series B venture capital round, participation in GigaOm Structure, and a lengthy blog post from CTO Simon Crosby explaining what Bromium is doing. Cymbel has been working with Bromium for close to six months and we are very excited about its unique approach to endpoint security.

Bromium has built what they are calling a "micro-hypervisor" that leverages Intel's Virtualization Technology hardware to isolate each task and each browser tab (Internet Explorer, Firefox, and Chrome) within a version of Windows running on an endpoint.

The result is to greatly reduce the likelihood that malware infecting a document or a browser tab can cross to another browser tab or task, to the base operating system, or to the network. One way of measuring the risk reduction Bromium achieves is by looking at the reduction in attack surface, which can be approximated by comparing the lines of code in Windows vs. Bromium's micro-hypervisor: roughly 10 million vs. 10,000.

Furthermore, if a browser tab or a downloaded document is infected, not only won't the malware spread, but when you close the tab or document, the malware is erased from memory. Even if you save a malware-laden document, as long as you open it on a Bromium-installed endpoint, the malware will not infect the underlying machine or spread via the network.

In addition, the user experience is not altered at all, i.e. the user does not realize Bromium is even there unless she opens the Task Manager and sees only the current task on which she is working.

If you are interested in learning more, please contact me and I will send a white paper which goes into a lot more detail.