During the last several years we have observed dramatic changes in the identity of attackers, their goals, and their methods. Today’s most dangerous attackers are cyber criminals and nation-states stealing money and intellectual property. Their primary attack vector is no longer the traditional “outside-in” method of directly penetrating the enterprise at the network level through open ports and operating system vulnerabilities.

The new dominant attack vector is at the application level. It starts by baiting the end user, via phishing or some other social engineering technique, into clicking a link that takes the unsuspecting user to a malware-laden web page. The malware is downloaded to the user’s personal device, steals the person’s credentials, establishes a back-channel out to a controlling server, and, using those credentials, steals money from corporate bank accounts and exfiltrates credit card information and/or intellectual property. We call this the “Inside-Out” attack vector.

Here are my recommendations for mitigating these modern malware risks:

  • Reduce the enterprise’s attack surface by limiting web-based applications to only those necessary to the enterprise and controlling who has access to them. This requires an application-based Positive Control Model at the firewall (see the sketch after this list).
  • Deploy heuristic analysis coupled with sandbox technology to block the user from downloading malware.
  • Leverage web site reputation services and blacklists.
  • Deploy effective Intrusion Prevention functionality that is rapidly updated with new signatures.
  • Segment the enterprise’s internal network to:
    • Control users’ access to internal applications and data
    • Deny unknown applications
    • Limit the damage when a user or system is compromised
  • Provide remote and mobile users with the same control and protection as itemized above
  • Monitor the network security devices’ logs in real-time on a 24x7x365 basis
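
To make the Positive Control Model concrete, here is a minimal sketch in Python, with entirely hypothetical zone, application, and group names; a real NGFW such as Palo Alto’s implements this logic in its policy engine, not in user code:

    from dataclasses import dataclass

    @dataclass
    class Flow:
        src_zone: str    # network segment the traffic comes from
        dst_zone: str    # network segment it is going to
        app: str         # application identified by the firewall, or "unknown"
        user_group: str  # group from the directory service

    # Explicit allow rules (hypothetical): (src_zone, dst_zone, app, user_group)
    ALLOW_RULES = [
        ("users",  "dmz",      "web-browsing", "employees"),
        ("users",  "internal", "crm",          "sales"),
        ("branch", "internal", "erp",          "finance"),
    ]

    def evaluate(flow: Flow) -> str:
        """First matching allow rule wins; everything else, including
        unknown applications, is denied (the Positive Control Model)."""
        for rule in ALLOW_RULES:
            if (flow.src_zone, flow.dst_zone, flow.app, flow.user_group) == rule:
                return "allow"
        return "deny"  # final implicit rule: unlisted or unknown -> deny

    print(evaluate(Flow("users", "internal", "crm", "sales")))      # allow
    print(evaluate(Flow("users", "internal", "unknown", "sales")))  # deny

Note the absence of port numbers: the rules key on the identified application and the user’s directory group, which is what makes the default-deny rule workable.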

Full disclosure: For the last four years my company Cymbel has partnered with Palo Alto Networks to provide much of this functionality. For the real-time 24x7x365 log monitoring, we partner with Solutionary.

30. January 2011 · Schneier on Security: Whitelisting vs. Blacklisting

Schneier on Security: Whitelisting vs. Blacklisting.

Excellent discussion of whitelisting vs. blacklisting. In theory, it’s clear which approach is more appropriate for a given situation. For example:

Physical security works generally on a whitelist model: if you have a key, you can open the door; if you know the combination, you can open the lock. We do it this way not because it’s easier — although it is generally much easier to make a list of people who should be allowed through your office door than a list of people who shouldn’t — but because it’s a security system that can be implemented automatically, without people.

In corporate environments, application control, when done at all, has been done with blacklists, it seems to me mainly because whitelisting was simply too difficult. In other words, in theory whitelisting is the right thing to do, but in practice the tools were simply not there.

However, this is changing. Next Generation Firewalls hold the promise of application whitelisting: if the NGFW can identify and classify all of the applications traversing the organization’s network, you have the visibility needed to implement it.

The advantage of network-based application whitelisting is that you get off the treadmill of identifying every new potentially malicious application and adding it to the blacklist.

The objective is that the last firewall policy rule is, “If application is unknown, then block.” At that point you have returned to the Positive Control Model for which firewalls were conceived.
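
To make the structural difference concrete, here is a toy contrast in Python (hypothetical application names). The point is the default: a blacklist permits anything it has not yet heard of, while a whitelist blocks it.

    # Negative Control Model: enumerate the bad, allow the rest.
    BLACKLIST = {"tor", "bittorrent"}

    # Positive Control Model: enumerate the good, block the rest.
    WHITELIST = {"web-browsing", "ssl", "dns"}

    def blacklist_permits(app: str) -> bool:
        # Unknown applications slip through until someone adds them.
        return app not in BLACKLIST

    def whitelist_permits(app: str) -> bool:
        # Unknown applications are blocked, with no signature update needed.
        return app in WHITELIST

    new_tool = "brand-new-proxy"  # hypothetical app not yet on anyone's list
    print(blacklist_permits(new_tool))  # True: the treadmill continues
    print(whitelist_permits(new_tool))  # False: blocked by default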

17. January 2011 · Top 3 Tools For Busting Through Firewalls — Internet Censorship — InformationWeek

Top 3 Tools For Busting Through Firewalls — Internet Censorship — InformationWeek.

The three tools described in this article are Tor (The Onion Router), Circumventor, and Glype. If you are unfamiliar with them, here is a brief description of each; the article provides a deeper analysis.

Tor – Tor is nominally used for the sake of anonymity, but also works as a circumvention tool, and its decentralized design makes it resilient to attacks. It started as a U.S. Naval Research Laboratory project but has since been developed by a 501(c)(3) nonprofit, and is open source software available for a variety of platforms. Human Rights Watch, Reporters without Borders, and the United States International Broadcasting Bureau (Voice of America) all advocate using Tor as a way to avoid compromising one’s anonymity. With a little care, it can also be used to route around information blocking.

Circumventor – Developed by Bennett Haselton of the anti-Internet-censorship site Peacefire.org, Circumventor works a little like Tor in that each machine running the Circumventor software is a node in a network.

Circumventor is most commonly used to get around the Web-blocking system in a workplace or school. The user installs Circumventor on an unblocked PC — e.g., their own PC at home — and then uses their home PC as a proxy. Since most blocking software works by blocking known Web sites and not random IP addresses, setting up a Circumventor instance ought to be a bit more effective than attempting to use a list of proxies that might already be blocked.

Glype – The Glype proxy was created in the same spirit as Circumventor. It’s installed on an unblocked computer, which the user then accesses to retrieve Web pages that are normally blocked. It differs from Circumventor in that it needs to be installed on a Web server running PHP, not just any old PC with Internet access. For that reason, it’s best suited to situations where a Web server is handy or the user knows how to set one up.

While these tools are used in certain countries to bypass censorship, in the U.S. they are mostly used to bypass organizational firewall policies.

In order to block these tunneling and proxy applications, organizations have turned to Palo Alto Networks, the leading Next Generation Firewall manufacturer.

However, the real issue is much bigger than blocking the three most popular tools for bypassing traditional stateful inspection firewalls, or even peer-to-peer applications. The real goal is to enable a Positive Control Model, i.e., to allow only the applications that are needed and block everything else. This is a much harder goal to achieve. Why?

In order to achieve a Positive Control Model, your firewall, not your IPS, has to be able to identify every application you are running. So in addition to the applications the firewall manufacturer identifies, the firewall must give you the ability to identify your home-grown, proprietary applications. Then you have to build policies (leveraging your directory service when possible) to control who can use which applications.

Once you have implemented the policies covering all the identified applications the organization is using, and who can use them, then the final policy rule can be, “If application is unknown, then deny.”
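
Here is a minimal sketch of what that takes, in Python, with entirely hypothetical application names and matching logic; a real NGFW does the classification with vendor-supplied and user-defined application signatures rather than code like this:

    # Application identifiers: the vendor supplies most of them...
    def looks_like_web_browsing(payload: bytes) -> bool:
        return payload.startswith((b"GET ", b"POST "))

    # ...and the firewall lets you define one for a home-grown app.
    def looks_like_acme_inventory(payload: bytes) -> bool:
        return payload.startswith(b"ACME-INV/1.0")  # hypothetical protocol header

    IDENTIFIERS = {
        "web-browsing":   looks_like_web_browsing,
        "acme-inventory": looks_like_acme_inventory,
    }

    def classify(payload: bytes) -> str:
        for app, matches in IDENTIFIERS.items():
            if matches(payload):
                return app
        return "unknown"

    # Policy: which directory-service groups may use which application.
    POLICY = {
        "web-browsing":   {"employees"},
        "acme-inventory": {"warehouse", "purchasing"},
    }

    def permit(payload: bytes, user_groups: set) -> bool:
        app = classify(payload)
        if app == "unknown":
            return False  # the final rule: unknown application -> deny
        return bool(POLICY.get(app, set()) & user_groups)

    print(permit(b"ACME-INV/1.0 list-parts", {"warehouse"}))  # True
    print(permit(b"\x00\x01mystery-tunnel", {"warehouse"}))   # False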

Once you have implemented the Positive Control Model, you don’t really care about the next new proxy or peer-to-peer application that is developed. It’s the Negative Control Model that keeps you in the never-ending cycle of identifying and blocking every possible undesirable application in existence.

Achieving this Positive Control Model is one of the primary reasons organizations are deploying Palo Alto Networks at the perimeter and on internal network segments.

27. November 2010 · Securosis Blog | No More Flat Networks

Securosis Blog | No More Flat Networks.

Mike Rothman at Securosis is tentatively calling for more internal network segmentation in light of the Stuxnet worm. We at Cymbel, who for the last three years have been recommending Palo Alto Networks for its ability to define security zone policies by application and LDAP user group, say welcome.

Using firewalls on internal networks to define zones is not new with Palo Alto. Netscreen (now Juniper) had the zone concept ten years ago.

Palo Alto was the first, and, as far as I know, is still the only firewall vendor that enables you to classify traffic by application rather than port. This lets you implement a Positive Control Model from the network layer up through the application layer and, with some work over time, the “unknown application – deny” rule. In other words, if there is application traffic for which no policies are defined, deny it.
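
To illustrate why classifying by application rather than port matters, here is a toy contrast in Python with hypothetical flows: a port-based rule sees only “port 443,” while an application-aware rule distinguishes what is actually running over it.

    # Three flows, all on TCP port 443 (hypothetical examples).
    flows = [
        {"port": 443, "app": "ssl"},         # ordinary HTTPS
        {"port": 443, "app": "tor"},         # Tor tunneled over 443
        {"port": 443, "app": "bittorrent"},  # P2P masquerading as HTTPS
    ]

    # A port-based (stateful inspection) rule passes all three:
    port_allowed = [f for f in flows if f["port"] == 443]

    # An application-aware rule passes only what policy names:
    app_allowed = [f for f in flows if f["app"] == "ssl"]

    print(len(port_allowed), len(app_allowed))  # prints: 3 1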