26. October 2011 · Australia DSD’s Top Four Security Strategies

The SANS Institute has endorsed the Australian Defence Signals Directorate’s (DSD) top four strategies for mitigating information security risk:

  1. Patching applications and using the latest version of an application
  2. Patching operating systems
  3. Keeping admin rights under strict control (and forbidding the use of administrative accounts for email and browsing)
  4. Whitelisting applications
While there is nothing new about these four strategies, I would like to discuss #4. The Australian DSD Strategies to Mitigate Targeted Cyber Intrusions defines Application Whitelisting as preventing unapproved programs from running on PCs. I recommend extending whitelisting to the network. In other words, define which applications, both internal and Web-based, are allowed on the network by user group, and deny all others.
My recommendation is not really a new idea either. After all, that’s what firewalls are supposed to do. The issue is that the traditional stateful inspection firewall does it using port numbers and IP addresses. For at least the last five years, applications and users have routinely bypassed these firewalls by using applications that share open ports.
This is why, in October 2009, Gartner started talking about “Next Generation Firewalls,” which enable you to implement whitelisting on the network at Layer 7 (Application) as well as down the stack at Layers 4 and 3. In other words, they extend the traditional “Positive Control Model” firewall functionality up through the Application Layer. (If you have not seen that Gartner research report, please contact me and I will arrange for you to receive a copy.)
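To make the distinction concrete, here is a minimal Python sketch of port-based versus application-based policy decisions. It is an illustration of the concept only: the application names are invented, and the pre-classified app field stands in for whatever a real NGFW’s application identification engine would determine from traffic.

```python
from dataclasses import dataclass

@dataclass
class Session:
    dst_port: int
    app: str  # hypothetical result of Layer 7 application identification

def port_based_policy(session: Session) -> str:
    # Traditional stateful inspection: the decision rests on port numbers alone.
    return "allow" if session.dst_port in (80, 443) else "deny"

def app_based_policy(session: Session, allowed_apps: set) -> str:
    # Layer 7 positive control: only whitelisted applications pass,
    # regardless of which port they ride on.
    return "allow" if session.app in allowed_apps else "deny"

# An evasive application tunneling over port 443 looks like ordinary web
# traffic to a port-based rule, but fails the application whitelist.
evasive = Session(dst_port=443, app="p2p-file-sharing")
print(port_based_policy(evasive))                              # allow
print(app_based_policy(evasive, {"web-browsing", "webmail"}))  # deny
```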
14. August 2011 · The Six Dumbest Ideas in Computer Security – Revisited

Marcus Ranum’s The Six Dumbest Ideas in Computer Security, written in 2005, is still regularly cited as gospel. I think it should be revisited in the context of the 2011 threat landscape.

  1. Default Permit – I agree. This still ranks as number one. In this age of application level threats, it’s more important than ever to implement “default deny” policies at the application level in order to reduce the organization’s attack surface. The objective must be “Unknown applications – Deny.”
  2. Enumerating Badness – While I understand that in theory it’s far better to enumerate goodness and deny all other actions (see #1 above), in practice, I have not yet seen a host or network intrusion prevention product that uses this model that is reliable enough to replace traditional IPS technology that uses vulnerability-based signatures. If you know of such a product, I would surely be interested in learning about it.
  3. Penetrate and Patch – I’m not sure I would call this dumb. Practically speaking, I would say it’s necessary but not sufficient.
  4. Hacking is Cool – I agree, although this is no different than “analog hacking.” Movies about criminals are still being made because the public is fascinated by them.
  5. Educating Users – I strongly disagree here. The issue is that our methods for educating users have left out the idea of “incentives.” In other words, in most organizations, users’ merely inappropriate or thoughtless behavior does not cost them anything. Employee and contractor behavior will change if their compensation is affected by their actions. Of course, you have to have the right technical controls in place to ensure you can properly attribute observed network activity to people. Because these controls are relatively new, we are only at the beginning of using economic incentives to influence employee behavior. I wrote about Security Awareness Training and Incentives last week.
  6. Action is better than Inaction – Somewhat valid. The management team of each organization will have to decide for itself whether it is an “early adopter” or what I would call a “fast follower.” Ranum uses the term “pause and thinkers,” clearly indicating his view. However, if there are no early adopters, there will be no innovation. And as has been shown regularly, there are only a small number of early adopters anyway.

Of the “Minor Dumbs,” I agree with all of them except the last one – “We can’t stop the occasional problem.” Ranum says yes you can. I say it’s not possible: you must assume you will suffer successful attacks. Good security means budget is allocated to Detection and Response in addition to Prevention.

24. July 2011 · Lenny Zeltser on Information Security — 3 Reasons Why People Choose to Ignore Security Recommendations

Lenny Zeltser on Information Security — 3 Reasons Why People Choose to Ignore Security Recommendations.

Lenny Zeltser relates a general psychology paper on Information Avoidance ($30 if you want to read the paper) to why security recommendations are ignored.

Here are the three reasons outlined in the paper:

(a) the information may demand a change in beliefs,
(b) the information may demand undesired action, and
(c) the information itself or the decision to learn information may cause unpleasant emotions or diminish pleasant emotions.

On the third point, Lenny hits on one of the age-old concerns – the unpleasant emotion of “I bought the wrong security products.”

While this could be true in some situations, the more likely issue is that the security landscape has changed and rendered the purchased security product in question obsolete before it has been fully amortized.

We are seeing this today with respect to firewalls. The changes in the way browser-based applications communicate with servers and the related attack vectors have left traditional port-based firewall policies helpless to defend the organization.

During the last several years we have observed dramatic changes in the identity of attackers, their goals, and methods. Today’s most dangerous attackers are cyber criminals and nation-states who are stealing money and intellectual property. Their primary attack vector is no longer the traditional “outside-in” method of directly penetrating the enterprise at the network level through open ports and exploiting operating system vulnerabilities.

The new dominant attack vector is at the application level. It starts with baiting the end-user via phishing or some other social engineering technique to click on a link which takes the unsuspecting user to a malware-laden web page. The malware is downloaded to the user’s personal device, steals the person’s credentials, establishes a back-channel out to a controlling server, and, using the person’s credentials, steals money from corporate bank accounts, credit card information, and/or intellectual property. We call this the “Inside-Out” attack vector.

Here are my recommendations for mitigating these modern malware risks:

  • Reduce the enterprise’s attack surface by limiting the web-based applications to only those that are necessary to the enterprise and controlling who has access to those applications. This requires an application-based Positive Control Model at the firewall.
  • Deploy heuristic analysis coupled with sandbox technology to block the user from downloading malware.
  • Leverage web site reputation services and blacklists.
  • Deploy effective Intrusion Prevention functionality which is rapidly updated with new signatures.
  • Segment the enterprise’s internal network (see the sketch after this list) to:
    • Control users’ access to internal applications and data
    • Deny unknown applications
    • Limit the damage when a user or system is compromised
  • Provide remote and mobile users with the same control and protection as itemized above
  • Monitor the network security devices’ logs in real-time on a 24x7x365 basis
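
As a rough illustration of the segmentation item above, here is a hedged Python sketch of zone-to-zone rules keyed on user group and application, ending in a default deny. The zone, group, and application names are invented for this example and are not taken from any product.

```python
# Ordered segmentation rules: (from_zone, to_zone, user_group, application, action).
# "any" is a wildcard. First match wins; no match means deny.
RULES = [
    ("user-lan", "finance-db", "finance", "oracle-app", "allow"),
    ("user-lan", "crm",        "sales",   "crm-web",    "allow"),
    ("user-lan", "finance-db", "any",     "any",        "deny"),  # limit lateral movement
]

def evaluate(from_zone: str, to_zone: str, group: str, app: str) -> str:
    for fz, tz, g, a, action in RULES:
        if fz == from_zone and tz == to_zone and g in (group, "any") and a in (app, "any"):
            return action
    return "deny"  # default: unknown applications and unlisted paths are denied

print(evaluate("user-lan", "finance-db", "finance", "oracle-app"))  # allow
print(evaluate("user-lan", "finance-db", "sales",   "oracle-app"))  # deny
print(evaluate("user-lan", "crm",        "sales",   "unknown"))     # deny
```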

Full disclosure: For the last four years my company Cymbel has partnered with Palo Alto Networks to provide much of this functionality. For the real-time 24x7x365 log monitoring, we partner with Solutionary.

19. March 2011 · IT in the Age of the Empowered Employee

I recently came across this blog post from Harvard Business Review, IT in the Age of the Empowered Employee. The author, Ted Schadler, who recently co-authored a book entitled Empowered, seems to have coined the term “highly empowered and resourceful operatives” (HEROes). These are the 20% of employees in an organization who aggressively seek out information technology solutions on their own, without the IT department’s support.

Schadler recommends that managers and IT support HEROes’ efforts.

What caught my eye, of course, is his recommendation to “Provide tools to manage risk.” Yes, enable the use of Web 2.0 applications and social networking by mitigating the risks they create. Next Generation Firewalls come to mind.

30. January 2011 · Schneier on Security: Whitelisting vs. Blacklisting

Schneier on Security: Whitelisting vs. Blacklisting.

Excellent discussion of whitelisting vs. blacklisting. In theory, it’s clear which approach is more appropriate for a given situation. For example:

Physical security works generally on a whitelist model: if you have a key, you can open the door; if you know the combination, you can open the lock. We do it this way not because it’s easier – although it is generally much easier to make a list of people who should be allowed through your office door than a list of people who shouldn’t – but because it’s a security system that can be implemented automatically, without people.

In corporate environments, application control, if done at all, has been done with blacklists, it seems to me, mainly because whitelisting was simply too difficult. In other words, in theory whitelisting is the right thing to do, but in practice the tools were simply not there.

However, this is changing. Next Generation Firewalls hold the promise of application whitelisting. If the NGFW can identify and classify all of the applications traversing the organization’s network, then you have the visibility to implement application whitelisting.

The advantage of network-based application whitelisting is that you get off the treadmill of identifying every new potentially malicious application and adding it to the blacklist.

The objective is that the last firewall policy rule is, “If application is unknown, then block.” At that point you have returned to the Positive Control Model for which firewalls were conceived.
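
As a toy sketch of that final rule, assuming hypothetical application names, the policy below allows a handful of known applications and ends with the unknown-block rule, so a brand-new evasive application is blocked without any signature update.

```python
# Ordered whitelist whose last rule restores the Positive Control Model:
# anything not explicitly identified and allowed is blocked.
WHITELIST_POLICY = [
    ("web-browsing", "allow"),
    ("dns",          "allow"),
    ("smtp",         "allow"),
    ("unknown",      "block"),  # final rule: unknown application -> block
]

def decide(app: str) -> str:
    for name, action in WHITELIST_POLICY:
        if name == app or name == "unknown":
            return action
    return "block"

print(decide("web-browsing"))       # allow
print(decide("new-evasive-proxy"))  # block, with no blacklist update required
```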

20. January 2011 · ‘Cyberlockers’ present new challenges to music industry

PaidContent.org published an interesting article yesterday entitled, How ‘Cyberlockers’ Became The Biggest Problem In Piracy.

PaidContent uses the term “cyberlocker” to refer to browser-based file sharing applications, which pose a new challenge to the music industry’s efforts to thwart illegal sharing of music, aka piracy.

The article highlights some of the better known applications like RapidShare, Hotfile, Mediafire, and Megaupload. It also points out that Google Docs qualifies as a cyberlocker, although it’s used mostly for Word and Excel documents.

What the article fails to mention is the amount of malware lurking in these cyberlockers. The file you download may be the song you think it is, or it may be a trojan.

Palo Alto Networks, the Next Generation Firewall manufacturer, has the statistics to corroborate PaidContent’s claim that browser-based file sharing is growing rapidly.

Palo Alto Networks’ Applipedia identifies 141 file sharing applications, of which 65 are browser-based.

Any organization which has deployed Palo Alto Networks can control the use of browser-based file sharing with the same ease as the older peer-to-peer file sharing applications.

Furthermore, if you configure Palo Alto to block the “file sharing” sub-category of applications, not only will all of the known file sharing applications be blocked, but any newly discovered ones will be blocked as well. However, there are valid business use cases for file sharing applications, so you would want an exception for the one you have selected, as sketched below.
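
A minimal Python sketch of that rule ordering follows: the exception for the sanctioned application is evaluated before the category-wide block. The application and category names are hypothetical, not actual Palo Alto Networks identifiers.

```python
SANCTIONED_APP = "corporate-file-share"  # hypothetical sanctioned file sharing app

def decide(app: str, category: str) -> str:
    if app == SANCTIONED_APP:
        return "allow"   # the explicit exception must come first
    if category == "file-sharing":
        return "block"   # catches known and newly discovered apps alike
    return "continue"    # fall through to the rest of the policy

print(decide("corporate-file-share",  "file-sharing"))  # allow
print(decide("rapidshare",            "file-sharing"))  # block
print(decide("brand-new-cyberlocker", "file-sharing"))  # block, no policy update needed
```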

Finally, should you choose to allow a file sharing application, Palo Alto will provide protection against malware.

19. January 2011 · Highlights from Sophos threat report

Highlights from Sophos threat report.

The recently released Sophos Threat Report claims that with more than 50 percent of companies allowing free and open access to social networking sites:

  • 67 percent of users were spammed on social networks – double from when the survey began in 2009 (33.4 percent)
  • 40 percent were sent malware
  • 43 percent were phished – more than double from when the survey began in 2009 (21 percent)

The answer is not totally blocking access to social networking sites. People in marketing and sales need access, but they don’t need to be playing Farmville. Totally blocking all aspects of social networking might also create a morale issue.

Anti-virus can play a role, but a defense-in-depth strategy is needed that includes Next Generation Firewalls.
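
As a sketch of that middle ground, with invented group and application names, the rules below give marketing and sales the core social networking application while blocking the embedded games for everyone.

```python
# Ordered rules: (user_group, application, action). "any" matches every group.
POLICY = [
    ("any",       "social-games",      "block"),  # nobody needs Farmville at work
    ("marketing", "social-networking", "allow"),
    ("sales",     "social-networking", "allow"),
    ("any",       "social-networking", "block"),  # everyone else
]

def decide(group: str, app: str) -> str:
    for g, a, action in POLICY:
        if g in (group, "any") and a == app:
            return action
    return "block"

print(decide("marketing",   "social-networking"))  # allow
print(decide("marketing",   "social-games"))       # block
print(decide("engineering", "social-networking"))  # block
```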

17. January 2011 · Top 3 Tools For Busting Through Firewalls — Internet Censorship — InformationWeek

Top 3 Tools For Busting Through Firewalls — Internet Censorship — InformationWeek.

The three tools described in this article are Tor (The Onion Router), Circumventor, and Glype. If you are unfamiliar with them, here are brief descriptions; the article provides a deeper analysis of each.

Tor – Tor is nominally used for the sake of anonymity, but also works as a circumvention tool, and its decentralized design makes it resilient to attacks. It started as a U.S. Naval Research Laboratory project but has since been developed by a 501(c)(3) nonprofit, and is open source software available for a variety of platforms. Human Rights Watch, Reporters without Borders, and the United States International Broadcasting Bureau (Voice of America) all advocate using Tor as a way to avoid compromising one’s anonymity. With a little care, it can also be used to route around information blocking.

Circumventor – Developed by Bennett Haselton of the anti-Internet-censorship site Peacefire.org, Circumventor works a little bit like Tor in that each machine running the Circumventor software is a node in a network.

Circumventor is most commonly used to get around the Web-blocking system in a workplace or school. The user installs Circumventor on an unblocked PC — e.g., their own PC at home — and then uses their home PC as a proxy. Since most blocking software works by blocking known Web sites and not random IP addresses, setting up a Circumventor instance ought to be a bit more effective than attempting to use a list of proxies that might already be blocked.

Glype – The Glype proxy has been created in the same spirit as Circumventor. It’s installed on an unblocked computer, which the user then accesses to retrieve Web pages that are normally blocked. It’s different from Circumventor in that it needs to be installed on a Web server running PHP, not just any old PC with Internet access. To that end, it’s best for situations where a Web server is handy or the user knows how to set one up manually.

While these tools are used in certain countries to bypass censorship, in the U.S. they are mostly used to bypass organizational firewall policies.

In order to block these tunneling and proxy applications, organizations have turned to Palo Alto Networks, the leading Next Generation Firewall manufacturer.

However, the real issue is much bigger than blocking the three most popular tools for bypassing traditional stateful inspection firewalls, or even blocking peer-to-peer applications. The real goal is to enable a Positive Control Model, i.e., only allow the applications that are needed and block everything else. This is a much harder goal to achieve. Why?

In order to achieve a Positive Control Model, your firewall, not your IPS, has to be able to identify every application you are running. So in addition to the applications the firewall manufacturer identifies, the firewall must give you the ability to identify your home-grown proprietary applications. Then you have to build policies (when possible leveraging your directory service) to control who can use which applications.

Once you have implemented the policies covering all the identified applications the organization is using, and who can use them, then the final policy rule can be, “If application is unknown, then deny.”

Once you have implemented the Positive Control Model, you don’t really care about the next new proxy or peer-to-peer application that is developed. It’s the Negative Control Model that keeps you on the never-ending cycle of identifying and blocking every possible undesirable application in existence.
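
To illustrate the workflow described above, here is a hedged Python sketch: register identifiers for home-grown applications so they classify as themselves rather than as “unknown,” gate them by directory group, and let the final unknown-deny rule handle everything else. The signature markers, application names, and group names are all invented for illustration.

```python
# Hypothetical custom application signatures for home-grown apps,
# so they are identified instead of falling into "unknown".
CUSTOM_APP_SIGNATURES = {
    b"X-Acme-Payroll":   "acme-payroll",
    b"X-Acme-Inventory": "acme-inventory",
}

# Which directory groups may use which identified applications.
ALLOWED_GROUPS = {"acme-payroll": {"hr"}, "acme-inventory": {"ops", "hr"}}

def identify_app(payload: bytes) -> str:
    # Stand-in for an NGFW's application identification engine.
    for marker, app in CUSTOM_APP_SIGNATURES.items():
        if marker in payload:
            return app
    return "unknown"

def decide(payload: bytes, user_group: str) -> str:
    app = identify_app(payload)
    if app == "unknown":
        return "deny"  # the final rule: if application is unknown, then deny
    return "allow" if user_group in ALLOWED_GROUPS.get(app, set()) else "deny"

print(decide(b"POST /pay HTTP/1.1\r\nX-Acme-Payroll: 1", "hr"))     # allow
print(decide(b"POST /pay HTTP/1.1\r\nX-Acme-Payroll: 1", "sales"))  # deny
print(decide(b"opaque tunnel bytes", "hr"))                         # deny (unknown)
```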

Achieving this Positive Control Model is one of the primary reasons organizations are deploying Palo Alto Networks at the perimeter and on internal network segments.

06. December 2010 · Enterprises Riding A Tiger With Consumer Devices | threatpost

Enterprises Riding A Tiger With Consumer Devices | threatpost.

George Hulme highlights two technology trends which are increasing enterprise security risks – employee-owned smartphones and Web 2.0 applications including social networking.

Today, more than ever, employees are bucking efforts to be forced to work on stale and stodgy corporate notebooks, desktops or clunky, outdated mobile phones. They want to use the same trendy smart phones, tablets, or netbooks that they have at home for both play and work. And that, say security experts, poses a problem.

“If you prohibit access to the services people want to use for their jobs, they end up ignoring you and doing it from their own phone or netbook with their own data connection,” says Josh Corman, research director, security at the analyst firm 451 Group. “Workers are always going to find a way to share data and information more efficiently, and people will always embrace ways to do their job as effectively as possible.”

To control and mitigate the risks of Web 2.0 applications and social networking, we’ve been recommending and deploying Palo Alto Networks’ Next Generation Firewalls for our clients.

Palo Alto posted a well written response to Hulme’s article, Which is Riskier: Consumer Devices or the Applications in Use? Clearly, Palo Alto’s focus is on (1) controlling application usage, (2) providing intrusion detection/prevention for allowed applications, and (3) blocking the methods people have been using (remote access tools, external proxies, circumventors) to get around traditional network security solutions.

We have been big supporters of the thinking that the focus of information security must shift from protecting devices to protecting information. That is the core of the next generation defense-in-depth architecture we’ve assembled.

Corman agrees that the focus needs to shift from protecting devices to protecting data. “Security managers need to focus on the things they can control. And if they can control the computation platforms, and the entry and exit points of the network, they can control the access to sensitive data, regardless of who is trying to access it,” he says. Corman advises enterprises to deploy, or increase their focus on, technologies that help to control data access: file and folder encryption, enterprise digital rights management, role-based access control, and network segmentation.

Having said that, we are currently investigating a variety of new solutions directly aimed at bringing smartphones under enterprise control, at least for the enterprise applications and data portion of smartphone usage.