As I look over my experience in Information Security since 1999, I see three distinct eras with respect to the motivation driving technical control purchases:

  • Basic (mid-90’s to early 2000’s) – Organizations implemented basic host-based and network-based technical security controls, i.e. anti-virus and firewalls respectively.
  • Compliance (early 2000’s to mid 2000’s) – Compliance regulations such as Sarbanes-Oxley and PCI drove major improvements in security.
  • Breach Prevention and Incident Detection & Response (BPIDR) (late 2000’s to present) – Organizations realize that regulatory compliance represents a minimum level of security and is not sufficient to cope with the fast-changing methods used by cyber predators. Meeting compliance requirements will not effectively reduce the likelihood of a breach by more skilled and aggressive adversaries, nor will it detect their malicious activity.

I have three examples to support the shift from the Compliance era to the Breach Prevention and Incident Detection & Response (BPIDR) era. The first is the increasing popularity of Palo Alto Networks. No compliance regulation I am aware of makes the distinction between a traditional stateful inspection firewall and a Next Generation Firewall as defined by Gartner in their 2009 research report. Yet in the last four years, 6,000 companies have selected Palo Alto Networks because their NGFWs enable organizations to regain control of traffic at points in their networks where trust levels change or ought to change.

The second example is the evolution of Log Management/SIEM. One can safely say that the driving force for most Log/SIEM purchases in the early to mid 2000s was compliance. The fastest growing vendors of that period had the best compliance reporting capabilities. However, by the late 2000s, many organizations began to realize they needed better detection controls. We began to see a shift in the SIEM market toward those solutions that not only provided the necessary compliance reports, but could also function satisfactorily as the primary detection control within limited budget requirements. Hence the ascendancy of Q1 Labs, which actually passed ArcSight in number of installations prior to being acquired by IBM.

The third example is email security. From a compliance perspective, Section 5 of PCI DSS, for example, is very comprehensive regarding anti-virus software. However, it is silent regarding phishing. The popularity of products from Proofpoint and FireEye show that organizations have determined that blocking email-borne viruses is simply not adequate. Phishing and particularly spear-phishing must be addressed.

Rather than simply call the third era “Breach Prevention,” I chose to add “Incident Detection & Response” because preventing all system compromises that could lead to a breach is not possible. You must assume that Prevention controls will have failures. Therefore you must invest in Detection controls as well. Too often, I have seen budget imbalances in favor of Prevention controls.

The goal of a defense-in-depth architecture is to (1) prevent breaches by minimizing attack surfaces, controlling access to assets, and preventing threats and malicious behavior on allowed traffic, and (2) to detect malicious activity missed by prevention controls and detect compromised systems more quickly to minimize the risk of disclosure of confidential data.

18. December 2011 · Gartner December 2011 Firewall Magic Quadrant Comments

Gartner just released their 2011 Enterprise Firewall Magic Quadrant, 21 months after their last one and just days before Christmas. Via distribution from one of the firewall manufacturers, I received a copy today. Here are the key highlights:

  • Palo Alto Networks moved up from the Visionary to Leader quadrant
  • Juniper slid back from the Leader to the Challenger quadrant
  • Cisco remained in the Challenger quadrant
  • There are no manufacturers in the Visionary quadrant

In fact, there are only two manufacturers in the Leader quadrant – the aforementioned Palo Alto Networks and Check Point. And these two manufacturers are the only ones to the right of center!!

Given Gartner’s strong belief in the value of Next Generation Firewalls, one might conclude that both of these companies actually meet the NGFW criteria Gartner outlined in their 2009 research paper. Unfortunately that is not the case today. Check Point’s latest generally available release simply does not meet Gartner’s NGFW requirements.

So the question is, why did Gartner include them in the Leader quadrant? The only explanation I can think of is that Check Point’s next release will meet Gartner’s NGFW criteria. Gartner alludes to Project Gaia, which is in beta at a few sites, but says only that it is a blending of Check Point’s three different operating systems. So let’s follow through on this thought experiment. First, this would mean that none of the other vendors will meet Gartner’s NGFW criteria in their next release. If any of them did, why wouldn’t they too be placed to the right of center?

Before I go on, let’s review what an NGFW is. Let’s start with a basic definition of a firewall – a network security device that enables you to define a “Positive Control Model” for what traffic is allowed to pass between two network segments of different trust levels. By Positive Control Model I mean you define what is allowed and deny everything else. Another term for this is “default deny.”

Traditional stateful firewalls enable this Positive Control Model at the port and protocol levels. NGFWs do this also but, most importantly, do it at the application level. In fact, an NGFW enables policies that combine port, protocol, and application (and more). Stateful inspection firewalls have no ability to control applications sharing open ports. Some have added application identification and blocking to their IPS modules, but this is a negative enforcement model. In other words, block what I tell you to block and allow everything else. Some have called this the “Whack-A-Mole” approach to application control.

In order to qualify as an NGFW, then, the core traffic analysis engine has to be built from the ground up to perform deep packet inspection and application detection at the beginning of the analysis/decision process to allow or deny the session. Since that was Palo Alto Networks’ vision when they were founded in 2005, that’s what they did. All the other firewall manufacturers have to start from scratch and build an entirely new platform.
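
To make the contrast concrete, here is a minimal Python sketch of a positive control model evaluated the way an NGFW would evaluate it: the application is identified first via deep packet inspection, and the allow/deny verdict is then made against rules that combine zones, protocol, port, and application, with everything else denied. The zones, rule set, and identify_application() stub are hypothetical illustrations, not any vendor’s actual policy engine.

```python
# Minimal sketch of a positive control model ("default deny") evaluated the way
# an NGFW would: identify the application first, then match the full tuple.
# All zones, rules, and the application-identification stub are hypothetical.

from dataclasses import dataclass

@dataclass
class Session:
    src_zone: str
    dst_zone: str
    protocol: str       # "tcp" or "udp"
    dst_port: int
    payload: bytes

# Allow rules; any session not matching one of them is denied.
ALLOW_RULES = [
    # (src_zone, dst_zone, protocol, dst_port, application)
    ("inside", "outside", "tcp", 443,  "salesforce"),
    ("inside", "outside", "tcp", 80,   "web-browsing"),
    ("inside", "dmz",     "tcp", 1433, "mssql"),
]

def identify_application(session: Session) -> str:
    """Stand-in for deep packet inspection / application identification.
    A real engine classifies the application from payload and behavior
    before the allow/deny verdict is reached."""
    if b"BitTorrent protocol" in session.payload:
        return "bittorrent"
    if b"salesforce.com" in session.payload:
        return "salesforce"
    return "web-browsing"

def decide(session: Session) -> str:
    app = identify_application(session)       # application determined first
    for src, dst, proto, port, allowed_app in ALLOW_RULES:
        if ((session.src_zone, session.dst_zone) == (src, dst)
                and session.protocol == proto
                and session.dst_port == port
                and app == allowed_app):
            return "allow"
    return "deny"                             # default deny: everything else

# Example: BitTorrent tunneled over TCP/80 is denied even though the port is
# open, because the identified application matches no allow rule.
```

A negative enforcement model inverts this: it lists the applications to block and allows everything else, which is exactly why applications that hop across open ports keep slipping through.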

So let’s pick up where I left off a few paragraphs ago, i.e. the only traditional stateful inspection firewall manufacturer that might have a technically true NGFW coming in its next release is Check Point. Since Palo Alto Networks shipped its first NGFW in mid-2007, this would mean that Check Point is, at best, four and a half years, four major releases, and six thousand customers behind Palo Alto Networks.

On the other hand, if Check Point is in the Leader quadrant because it’s Palo Alto Networks’ toughest competitor, then Palo Alto Networks is in an even better position in the firewall market.

19. November 2011 · Water supply system reportedly hacked, with physical damage

Bellovin comments on Krebs’s blog post about CNN’s report on the water supply system breach.

According to press reports, a water utility’s SCADA network was hacked. The attacker turned a pump on and off repeatedly, resulting in physical damage to the pump. This is an extremely significant incident, for three reasons:

  • The attack actually happened.
  • Ordinary, off-the-shelf hacking tools were used, rather than something custom like Stuxnet.
  • Physical damage resulted.

This is the scenario that security people and the Department of Homeland Security have been predicting for years. Sophisticated methods with 0-day vulnerabilities were not needed. When the FBI investigates, will the Curran-Gardner Public Water District (near Springfield, IL) be called out for lax security practices as Nasdaq was?

18. November 2011 · FBI says lax security at Nasdaq helped hackers

Exclusive: Lax security at Nasdaq helped hackers | Reuters.

A federal investigation into last year’s cyber attack on Nasdaq OMX Group found surprisingly lax security practices that made the exchange operator an easy target for hackers, people with knowledge of the probe said. The sources did not want to be identified because the matter is classified.

The ongoing probe by the Federal Bureau of Investigation is focused on Nasdaq’s Directors Desk collaboration software for corporate boards, where the breach occurred. The Web-based software is used by directors to share confidential information and to collaborate on projects.

…investigators were surprised to find some computers with out-of-date software, misconfigured firewalls and uninstalled security patches that could have fixed known “bugs” that hackers could exploit. Versions of Microsoft Corp’s Windows 2003 Server operating system, for example, had not been properly updated.

This story is interesting on several fronts. First, we find out that when the FBI is brought into a criminal breach investigation, it evaluates the victim organization’s information security posture, i.e. whether the organization is following best practices. While this may be obvious, one might want to know what the FBI’s definition of best practices is.

Second, this leak could have a chilling effect on organizations’ willingness to report cybercrimes to the FBI. On the other hand, the breach laws in most states will most likely still compel organizations to report breaches.

Overall though, I believe the compounded loss of reputation from disclosing a breach and from the disclosure of lax information security practices will increase organizations’ motivation to strengthen those practices in order to reduce the risk of a breach.

16. November 2011 · Tor launches do-it-yourself privacy bridge in Amazon cloud

Tor launches do-it-yourself privacy bridge in Amazon cloud.

While the core routers are known to the Tor team, bridges are “hidden” from the network and act as a way for users to get around Internet service providers who block access to Tor’s core network for one reason or another. A Tor bridge configured in Amazon’s EC2 environment would be available to route traffic from anyone who can reach Amazon over the Internet, concealing the true destination of a user’s Web traffic.

Tor Cloud will make it even easier for individuals within enterprises to bypass traditional internet security controls.

05. November 2011 · Branden R. Williams, Business Security Specialist » Where is your Chaos Monkey?

Branden R. Williams, Business Security Specialist » Where is your Chaos Monkey?.

Branden Williams discusses applying the Chaos Monkey to information security.

We need more of the semi-controlled security events to keep our employees fresh and ready for the uncontrolled ones coming from the outside. Our version of the Chaos Monkey could do things like:

  • Interrupt backup routines
  • Phish employees
  • Hijack caller-id and place “trusted calls from IT” to unsuspecting users
  • Forward requests to common sites to look-alikes to see if employees are fooled
  • Pop up bad certificate errors
  • Offer new software packages as “security patches”

What features would you add into the chaos monkey?

The goal is to improve the organization’s IT resilience. Incidents are inevitable. The question is, how will the organization respond? Practice will improve response.
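
Here is a minimal Python sketch of what a scheduler for such a security chaos monkey might look like: pick one semi-controlled drill at random, run it against a random target, and repeat on an interval. The drill names and the run_drill() stub are hypothetical placeholders; a real implementation would wrap vetted, reversible scripts behind an approval workflow.

```python
# Minimal sketch of a "security chaos monkey": periodically pick one
# semi-controlled drill and run it against a random target.
# Drill names, targets, and run_drill() are hypothetical placeholders.

import random
import time

DRILLS = [
    "interrupt_backup_job",
    "send_internal_phishing_test",
    "simulate_bad_certificate_warning",
    "offer_fake_security_patch",
]

def run_drill(drill: str, target: str) -> None:
    # A real implementation would dispatch to vetted, reversible scripts
    # behind an approval workflow; here we only record what would happen.
    print(f"[chaos-monkey] running '{drill}' against {target}")

def chaos_monkey(targets: list[str], interval_seconds: int = 86400) -> None:
    while True:
        run_drill(random.choice(DRILLS), random.choice(targets))
        time.sleep(interval_seconds)    # e.g. one drill per day
```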

05. November 2011 · lcamtuf’s blog: In praise of anarchy: metrics are holding you back

lcamtuf’s blog: In praise of anarchy: metrics are holding you back.

Michal Zalewski presents two risks of a security metrics program – reduced adaptability and agility.

The frameworks for constructing security metrics often promise to advance one’s adaptability and agility, but that’s very seldom true. These attributes depend entirely on having bright, inquisitive security engineers thriving in a healthy corporate culture. A dysfunctional organization, or a security team with no technical insight, will not be saved by a checklist and a set of indicators; while a healthy team is unlikely to truly benefit from having them.

While I am surely not advocating against security metrics, it is worth noting the risks.

26. October 2011 · Australia DSD’s Top Four Security Strategies

The SANS Institute has endorsed the Australian Defence Signals Directorate’s (DSD) top four strategies for mitigating information security risk:

  1. Patching applications and using the latest version of an application
  2. Patching operating systems
  3. Keeping admin rights under strict control (and forbidding the use of administrative accounts for email and browsing)
  4. Whitelisting applications
While there is nothing new with these four strategies, I would like to discuss #4. The Australian DSD Strategies to Mitigate Targeted Cyber Intrusions defines Application Whitelisting as preventing unapproved programs from running on PCs. I recommend extending whitelisting to the network. In other words, define which applications are allowed on the network by user groups, both internally and Web-based, and deny all others.
My recommendation is not really a new idea either. After all, that’s what firewalls are supposed to do. The issue is that the traditional stateful inspection firewall does it using port numbers and IP addresses. For at least the last five years applications and users have routinely bypassed these firewalls by using applications that share open ports.
This is why in October 2009, Gartner started talking about “Next Generation Firewalls,” which enable you to implement whitelisting on the network at Layer 7 (Application) as well as down the stack to Layers 4 and 3. In other words, they extend the traditional “Positive Control Model” firewall functionality up through the Application Layer. (If you have not seen that Gartner research report, please contact me and I will arrange for you to receive a copy.)
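
As a rough illustration of that recommendation, here is a minimal Python sketch of a network application whitelist keyed by user group: each group has an explicit set of allowed applications, and anything not on the list is denied. The group names and application names are hypothetical.

```python
# Minimal sketch of application whitelisting extended to the network:
# allowed applications are defined per user group; anything else is denied.
# Group names and application names are hypothetical.

APP_WHITELIST = {
    "finance":     {"web-browsing", "ssl", "sap", "office365"},
    "engineering": {"web-browsing", "ssl", "ssh", "github"},
    "default":     {"web-browsing", "ssl"},
}

def is_allowed(user_group: str, application: str) -> bool:
    allowed = APP_WHITELIST.get(user_group, APP_WHITELIST["default"])
    return application in allowed     # not on the list -> denied

# e.g. is_allowed("finance", "bittorrent") -> False, even over an open port
```
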
18. October 2011 · Practical SIEM Deployment | SecurityWeek.Com

Practical SIEM Deployment | SecurityWeek.Com. Chris Poulin, from Q1 Labs, has an excellent article in SecurityWeek on practical SIEM deployment.

Chris points out that SIEM is more challenging than say Anti-Virus because (1) there are so many possible use cases for a modern SIEM and (2) context is a factor.

Chris describes the general use cases that apply to most organizations and are mostly ready to deploy using out-of-the-box rules, dashboard widgets, reports, and saved searches:

  • Botnet detection
  • Excessive authentication failures
  • Traffic from darknets
  • IDS alerts that a particular attack is targeting an asset that the VA scanner confirms is vulnerable to that exploit
These cover many of the controls in compliance mandates and provide a good foundation for your log management program, not to mention they’re the main log sources used by default rules in most SIEMs. 
It surely makes sense that Phase 1 of a SIEM project should focus on collecting telemetry from key log sources and implementing the general use cases itemized above.
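
As a concrete illustration of one of those general use cases, here is a minimal Python sketch of the excessive-authentication-failures rule: count failed logins per user and source in a sliding window and raise an alert once a threshold is crossed. The event field names, window, and threshold are hypothetical; in practice this is expressed as a correlation rule in the SIEM rather than custom code.

```python
# Minimal sketch of the "excessive authentication failures" use case:
# count failed logins per (user, source) in a sliding window and alert
# once a threshold is crossed. Field names and thresholds are hypothetical.

from collections import defaultdict, deque

WINDOW_SECONDS = 300     # 5-minute sliding window
THRESHOLD = 10           # failures that trigger an alert

failures = defaultdict(deque)    # (user, source_ip) -> failure timestamps

def process_event(event: dict) -> bool:
    """Return True if this log event should raise an alert."""
    if event.get("outcome") != "failure":
        return False
    key = (event["user"], event["source_ip"])
    window = failures[key]
    window.append(event["timestamp"])
    # Drop failures that have aged out of the window.
    while window and event["timestamp"] - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) >= THRESHOLD
```
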
Chris points out that while IPS/IDS is core telemetry, it should not be part of Phase 1 because there can be a lot of tuning work needed if the IPS/IDSs have not already been tuned. So I will call IPS/IDS integration Phase 2, although Phase 1 includes basic IPS/IDS – Vulnerability matching.
If the IPS/IDSs are well tuned for direct analysis using the IPS/IDS’s console, then by all means include them in Phase 1. In addition, if the IPS/IDSs are well tuned, it’s been my experience that you ought to consider “de-tuning” or opening up the IPS/IDSs somewhat and leveraging the SIEM to generate more actionable intelligence.
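
For the basic IPS/IDS–Vulnerability matching mentioned above, the logic is simple to sketch: escalate an IDS alert only when the vulnerability scanner has confirmed the targeted asset is actually exposed to the exploited vulnerability. The alert fields and scan-result structure below are hypothetical; SIEMs implement this as built-in correlation against the asset and vulnerability model.

```python
# Minimal sketch of IDS/VA matching: escalate an IDS alert only when the
# vulnerability scanner has confirmed the target is exposed to that CVE.
# The alert fields and scan-result structure are hypothetical.

vulnerable_assets = {
    # asset IP -> CVE IDs reported by the vulnerability assessment scanner
    "10.1.2.15": {"CVE-2011-1234", "CVE-2011-2002"},
}

def escalate(ids_alert: dict) -> bool:
    """Escalate only if the target is confirmed vulnerable to the exploited CVE."""
    confirmed = vulnerable_assets.get(ids_alert["target_ip"], set())
    return ids_alert["cve"] in confirmed

# e.g. escalate({"target_ip": "10.1.2.15", "cve": "CVE-2011-1234"}) -> True,
# while the same alert against a patched host would stay at low priority.
```
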
Chris says that the next phase (by my count, Phase 3) ought to be integrating network activity, i.e. flows if the SIEM “has natively integrated network activity so it adds context and situational awareness.” If not, save flow integration for organization-specific use cases.
Network activity can automatically discover and profile assets in your environment, and dramatically simplify the tuning process. Vulnerability Assessment (VA) scanners can also build an asset model; however, VA scanners are noisy, performing active probes rather than passively watching the network, and only give you a point-in-time view of the network. To be sure, VA scanners are core telemetry, and every SIEM deployment should integrate them, but not as a replacement for native network activity monitoring, which provides a near-real-time view and can alert to new assets popping up, network scans—even low and slow ones—and DDoS attacks. And if you’re monitoring network activity and collecting flows, don’t forget to collect the events from the routers and switches as well.
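
To illustrate the passive asset-discovery idea, here is a minimal Python sketch that builds an asset model from flow records: any internal address seen as a flow destination is added to the inventory and its server ports are profiled over time. The flow field names are hypothetical; a SIEM with native network activity monitoring does this continuously from NetFlow/IPFIX.

```python
# Minimal sketch of passive asset discovery from flow records: any internal
# address seen as a flow destination is added to the asset model and its
# server ports are profiled over time. Flow field names are hypothetical.

import ipaddress

known_assets: dict[str, set[int]] = {}    # asset IP -> observed server ports

def observe_flow(flow: dict) -> None:
    dst = flow["dst_ip"]
    if not ipaddress.ip_address(dst).is_private:
        return                            # only model internal assets
    if dst not in known_assets:
        print(f"[asset-discovery] new asset seen: {dst}")
        known_assets[dst] = set()
    known_assets[dst].add(flow["dst_port"])
```
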
At this point, with the first three phases complete, the security team has demonstrated the value of SIEM and is well down the product learning curve. Therefore you are ready to focus on organization-specific use cases (my Phase 4), for example adding application and database logs.
13. October 2011 · Looking for Infected Systems as Part of a Security Assessment

Looking for Infected Systems as Part of a Security Assessment. Lenny Zeltser describes techniques for identifying signs of malware or compromise in an enterprise setting.

Lenny mentions Damballa’s consultant-friendly licensing option, Damballa Failsafe. We partner with Seculert, who provides a cloud-based service for detecting botnet infected devices in the enterprise.