06. December 2010 · Enterprises Riding A Tiger With Consumer Devices | threatpost

Enterprises Riding A Tiger With Consumer Devices | threatpost.

George Hulme highlights two technology trends that are increasing enterprise security risk: employee-owned smartphones and Web 2.0 applications, including social networking.

Today, more than ever, employees are bucking efforts to be forced to work on stale and stodgy corporate notebooks, desktops or clunky, outdated mobile phones. They want to use the same trendy smart phones, tablets, or netbooks that they have at home for both play and work. And that, say security experts, poses a problem.

“If you prohibit access to the services people want to use for their jobs, they end up ignoring you and doing it from their own phone or netbook with their own data connection,” says Josh Corman, research director, security at the analyst firm 451 Group. “Workers are always going to find a way to share data and information more efficiently, and people will always embrace ways to do their job as effectively as possible.”

To control and mitigate the risks of Web 2.0 applications and social networking, we have been recommending and deploying Palo Alto Networks’ Next-Generation Firewalls for our clients.

Palo Alto posted a well-written response to Hulme’s article, Which is Riskier: Consumer Devices or the Applications in Use? Clearly, Palo Alto’s focus is on (1) controlling application usage, (2) providing intrusion detection/prevention for allowed applications, and (3) blocking the methods people have been using (remote access tools, external proxies, circumventors) to get around traditional network security solutions.

We strongly support the view that the focus of information security must shift from protecting devices to protecting information. That shift is the core of the next-generation defense-in-depth architecture we’ve assembled.

Corman agrees that the focus needs to shift from protecting devices to protecting data. “Security managers need to focus on the things they can control. And if they can control the computation platforms, and the entry and exit points of the network, they can control the access to sensitive data, regardless of who is trying to access it,” he says. Corman advises enterprises to deploy, or increase their focus on, technologies that help to control data access: file and folder encryption, enterprise digital rights management, role-based access control, and network segmentation.
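
Corman's list is abstract, so here is a minimal sketch of what role-based access control looks like in practice. It is a hypothetical illustration, not any particular vendor's product; the role names, users, and resources are invented for the example.

```python
# Minimal role-based access control (RBAC) sketch.
# Hypothetical roles and resources for illustration only.

ROLE_PERMISSIONS = {
    "finance":     {"read:ledger", "write:ledger"},
    "engineering": {"read:source", "write:source"},
    "contractor":  {"read:source"},          # read-only by design
}

USER_ROLES = {
    "alice": {"finance"},
    "bob":   {"engineering"},
    "carol": {"contractor"},
}

def is_allowed(user: str, permission: str) -> bool:
    """Grant access only if one of the user's roles carries the permission."""
    roles = USER_ROLES.get(user, set())
    return any(permission in ROLE_PERMISSIONS.get(role, set()) for role in roles)

if __name__ == "__main__":
    print(is_allowed("alice", "write:ledger"))   # True
    print(is_allowed("carol", "write:source"))   # False -- anything not granted is denied
```

The property worth noticing is the default: access that is not explicitly granted is denied, which is the same mindset behind the segmentation and data-centric controls Corman describes.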

Having said that, we are currently investigating a variety of new solutions directly aimed at bringing smartphones under enterprise control, at least for the enterprise applications and data portion of smartphone usage.

05. December 2010 · Microsoft Research Develops Zozzle JavaScript Malware Detection Tool | threatpost

Microsoft Research Develops Zozzle JavaScript Malware Detection Tool | threatpost.

Microsoft Research just released a paper on Zozzle, software they developed to detect certain types of JavaScript malware.

There are two ways Zozzle can be used:

  • In the browser to block malicious JavaScript before it does any damage
  • Scanning websites to detect malware-laden pages which can then be blacklisted
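
The paper describes a mostly static, classifier-based approach. As a rough illustration of the general idea (not Zozzle's actual classifier, which extracts AST-based features and uses a Bayesian model), here is a toy scanner that flags JavaScript containing feature combinations commonly seen in drive-by and heap-spray code. The feature list and threshold are invented for the example.

```python
import re

# Toy static scanner for suspicious JavaScript. Illustrative only.
SUSPICIOUS_FEATURES = {
    "eval":            re.compile(r"\beval\s*\("),
    "unescape":        re.compile(r"\bunescape\s*\("),
    "fromCharCode":    re.compile(r"String\.fromCharCode"),
    "long_hex_string": re.compile(r"(%u[0-9a-fA-F]{4}){20,}"),   # shellcode-like blob
    "big_spray_loop":  re.compile(r"for\s*\(.*<\s*0x[0-9a-fA-F]{4,}"),
}

def score_script(js_source: str) -> list:
    """Return the list of suspicious features found in a script."""
    return [name for name, pattern in SUSPICIOUS_FEATURES.items()
            if pattern.search(js_source)]

def looks_malicious(js_source: str, threshold: int = 3) -> bool:
    """Flag the script only if it matches several suspicious features at once."""
    return len(score_script(js_source)) >= threshold

if __name__ == "__main__":
    sample = 'var s = unescape("%u9090%u9090..."); eval(s);'
    print(score_script(sample))        # ['eval', 'unescape']
    print(looks_malicious(sample))     # False -- only two features matched
```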

The question is whether this will be a valuable tool for detecting and stopping malicious JavaScript. For some reactions, I went to the slashdot.org thread, Microsoft Builds JavaScript Malware Detection Tool.

Clearly, the slashdot crowd is anti-Microsoft, but it seems to me there was one insightful comment which I have paraphrased:

03. December 2010 · Schneier on Security: Risk Reduction Strategies on Social Networking Sites

Schneier on Security: Risk Reduction Strategies on Social Networking Sites.

Two good ways to reduce security risks on social networking sites:

  • super-logoff – deactivate your account each time you log off, so your profile and wall are unreachable while you are away
  • wall-scrubbing – delete wall messages and status updates once you have read them

29. November 2010 · Clear-text is Fine…It’s Internal.

Clear-text is Fine…It’s Internal.

In light of the recent discussions about whether public websites should use SSL, our Managed Security Services Provider partner Solutionary lays out the reasons NOT to use clear-text protocols even within the enterprise, based on three threat scenarios:

  • Corporate Insider / Disgruntled Employee
  • DMZ Host Compromised Externally
  • Internal Host Compromised Externally

Some examples of clear-text protocols and their encrypted alternatives are:
  • FTP -> SFTP
  • HTTP -> HTTPS
  • telnet -> SSH
  • SNMPv1 & v2 -> SNMPv3
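
As a practical starting point, here is a small sketch (standard library only) that checks a list of internal hosts for listening clear-text services so they can be migrated to the encrypted alternatives above. The host addresses and port list are placeholders, not a recommendation of any particular scanner.

```python
import socket

# Common clear-text services and their encrypted replacements.
CLEAR_TEXT_PORTS = {
    21:  "FTP (use SFTP)",
    23:  "telnet (use SSH)",
    80:  "HTTP (use HTTPS)",
    161: "SNMPv1/v2 (use SNMPv3)",   # note: SNMP is UDP; a TCP check is only a rough proxy
}

def open_clear_text_ports(host: str, timeout: float = 1.0) -> list:
    """Return the clear-text TCP ports that accept a connection on this host."""
    found = []
    for port, label in CLEAR_TEXT_PORTS.items():
        try:
            with socket.create_connection((host, port), timeout=timeout):
                found.append((port, label))
        except OSError:
            pass   # closed, filtered, or unreachable
    return found

if __name__ == "__main__":
    for host in ["10.0.0.10", "10.0.0.11"]:       # placeholder internal hosts
        for port, label in open_clear_text_ports(host):
            print(f"{host}:{port} is listening -- {label}")
```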

29. November 2010 · Zscaler Research: Why the web has not switched to SSL-only yet?

Zscaler Research: Why the web has not switched to SSL-only yet?

Great post following up on the Firesheep threat, detailing the reasons why more websites are not using SSL:

  • Server overhead
  • Increased latency
  • Challenge for CDNs
  • Wildcard certificates are not enough
  • Mixed HTTP/HTTPS: the chicken & the egg problem
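
The latency point is easy to see for yourself. Here is a rough client-side sketch comparing the time to open a plain TCP connection with the time to complete a full TLS handshake against the same host; the host name is just an example, and the numbers will vary with distance and server load.

```python
import socket
import ssl
import time

HOST = "www.google.com"   # example host; any HTTPS-enabled site will do

def tcp_connect_time(host: str, port: int = 443) -> float:
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5):
        pass
    return time.perf_counter() - start

def tls_handshake_time(host: str, port: int = 443) -> float:
    context = ssl.create_default_context()
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=5) as sock:
        with context.wrap_socket(sock, server_hostname=host):
            pass   # the handshake completes when wrap_socket returns
    return time.perf_counter() - start

if __name__ == "__main__":
    tcp = tcp_connect_time(HOST)
    tls = tls_handshake_time(HOST)
    print(f"TCP connect:  {tcp * 1000:.1f} ms")
    print(f"TCP + TLS:    {tls * 1000:.1f} ms")
    print(f"TLS overhead: {(tls - tcp) * 1000:.1f} ms (extra round trips plus crypto)")
```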

Zscaler did a follow-up blog post, SSL: the sites which don’t want to protect their users, highlighting popular sites that do not use SSL.

Full disclosure – Zscaler is a Cymbel partner.

29. November 2010 · What is Information Security: New School Primer « The New School of Information Security

What is Information Security: New School Primer « The New School of Information Security.

I would like to comment on each of the three components of Alex’s “primer” on Information Security.

First, InfoSec is a hypothetical construct. It is something that we can all talk about, but it’s not directly observable and therefore measurable like, say, speed that we can describe km/hr.   “Directly” is to be stressed there because there are many hypothetical constructs of subjective value that we do create measurements and measurement scales for in order to create a state of (high) intersubjectivity between observers (don’t like that wikipedia definition, I use it to mean that you and I can kind of understand the same thing in the same way).

Clearly InfoSec cannot be measured like speed or acceleration or weight. Therefore I would agree with Alex’s classification.

Second, security is not an engineering discipline, per se.  Our industry treats it as such because most of us come from that background, and because the easiest thing to do to try to become “more secure” is buy a new engineering solution (security product marketing).   But the bankruptcy of this way of thinking is present in both our budgets and our standards.   A security management approach focused solely on engineering fails primarily because of the “intelligent” or adaptable attacker.

Again, clearly InfoSec involves people and therefore is more than purely an engineering exercise like building a bridge. On the other hand, if you look at the statistics from the Verizon Business 2010 Data Breach Investigations Report, page 3, 85% of the analyzed attacks were not considered highly difficult. In other words, if “sound” security engineering practices were applied, the number of breaches would decline dramatically.

This is why we at Cymbel have embraced the SANS 20 Critical Security Controls for Effective Cyber Defense.

Finally, InfoSec is a subset of Information Risk Management (IRM).  IRM takes what we know about “secure” and adds concepts like probable impacts and resource allocation strategies.  This can be confusing to many because of the many definitions of the word “risk” in the english language, but that’s a post for a different day.

This is the part of Alex’s primer with which I have the most concern – “probable impacts.” The problem is that estimating probabilities with respect to exploits is almost entirely subjective, and there is still far too little data available to support those estimates. On the other hand, there is enough information about successful exploits and threats in the wild to give infosec teams a plan to move forward, like the SANS 20 Critical Controls.

My biggest concern is Alex referencing FAIR, Factor Analysis of Information Risk, in a positive light. From my perspective, any tool that produces wildly different results when two independent groups analyze the same environment separately is simply not valid. Richard Bejtlich provided a thoughtful analysis of FAIR in 2007, here and here.

Bejtlich shows that FAIR is just a more elaborate version of ALE, Annual Loss Expectancy. For a more detailed analysis of the shortcomings of ALE, see Security Metrics by Andrew Jaquith, page 31. In summary, the problems with ALE are:

  • The inherent difficulty of modeling outliers
  • The lack of data for estimating probabilities of occurrence or loss expectancies
  • Sensitivity of the ALE model to small changes in assumptions
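
The textbook formula is ALE = SLE × ARO, where single loss expectancy (SLE) is asset value times exposure factor and ARO is the annualized rate of occurrence. A few lines of arithmetic illustrate the third problem, sensitivity to assumptions: the inputs below are made-up numbers, and nudging the subjective ARO estimate swings the “answer” by an order of magnitude.

```python
# Annual Loss Expectancy: ALE = SLE * ARO
# SLE (single loss expectancy) = asset value * exposure factor
# All inputs below are invented for illustration.

def ale(asset_value: float, exposure_factor: float, aro: float) -> float:
    sle = asset_value * exposure_factor
    return sle * aro

ASSET_VALUE = 2_000_000      # e.g. notional value of a customer database
EXPOSURE_FACTOR = 0.4        # fraction of value lost per incident (a guess)

# Three "reasonable" guesses at how often a breach occurs per year.
for aro in (0.05, 0.2, 0.5):
    print(f"ARO={aro:>4}: ALE = ${ale(ASSET_VALUE, EXPOSURE_FACTOR, aro):,.0f}")

# ARO=0.05: ALE = $40,000
# ARO= 0.2: ALE = $160,000
# ARO= 0.5: ALE = $400,000  -- a 10x spread from equally defensible assumptions
```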

I am surely not saying that there are no valid methods of measuring risk. It’s just that I have not seen any that work effectively. I am intrigued by Douglas Hubbard’s theories expressed in his two books, How to Measure Anything and The Failure of Risk Management. Anyone using them? I would love to hear your results.

I look forward to Alex’s post on Risk.

28. November 2010 · User activity monitoring answers the age-old questions of who, what and when

User activity monitoring answers the age-old questions of who, what and when.

NetworkWorld ran a comprehensive article on the City of Richmond, Virginia’s deployment of PacketMotion’s User Activity Management solution to:

…provide information and snapshots to discover whether a folder or database was only being accessed by the appropriate groups of people, or if there was an access problem.

Conventional approaches for file activity monitoring and managing file permissions aren’t sufficient for many organizations. Third-party administrative tools and other widely used solutions, such as directory services groups and the file auditing built in to operating systems, often cannot keep pace with organizational changes or the sheer volume and growth of unstructured data.  Many times with these approaches, there is also a home grown or manual system required to aggregate data across the multiple point solutions to support the ever increasing need to answer the burning questions:  Who has accessed What? When? And perhaps even Why?
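
Full user activity monitoring requires an appliance like PacketMotion's, but the narrower question in that passage, whether a folder is open only to the appropriate groups, can at least be spot-checked with a script. Here is a Unix-only sketch using the standard library; the path and the set of approved groups are placeholders.

```python
import grp
import os
import stat

SENSITIVE_ROOT = "/srv/finance"            # placeholder path
APPROVED_GROUPS = {"finance", "backup"}    # placeholder group names

def audit(root: str) -> None:
    """Flag directories that are world-accessible or owned by an unapproved group."""
    for dirpath, dirnames, filenames in os.walk(root):
        st = os.stat(dirpath)
        try:
            group = grp.getgrgid(st.st_gid).gr_name
        except KeyError:
            group = str(st.st_gid)
        world_access = bool(st.st_mode & (stat.S_IROTH | stat.S_IWOTH | stat.S_IXOTH))
        if world_access:
            print(f"WORLD-ACCESSIBLE: {dirpath}")
        if group not in APPROVED_GROUPS:
            print(f"UNEXPECTED GROUP '{group}': {dirpath}")

if __name__ == "__main__":
    audit(SENSITIVE_ROOT)
```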

Well worth reading the whole article. Full disclosure – Cymbel partners with PacketMotion.

27. November 2010 · Securosis Blog | No More Flat Networks

Securosis Blog | No More Flat Networks.

Mike Rothman at Securosis is tentatively calling for more internal network segmentation in light of the Stuxnet worm. We at Cymbel, who for the last three years have been recommending Palo Alto Networks for its ability to define security zone policies by application and LDAP user group, say welcome.

Using firewalls on internal networks to define zones is not new with Palo Alto. Netscreen (now Juniper) had the zone concept ten years ago.

Palo Alto was the first, and, as far as I know, is still the only firewall vendor that enables you to classify traffic by application rather than port. This means you can implement a Positive Control Model from the network layer up through the application layer and, with some work over time, the “unknown application – deny” rule: if there is application traffic for which no policy is defined, deny it.
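
Conceptually, a positive control model is just an allow-list evaluated with a default deny. The sketch below is generic pseudologic, not PAN-OS syntax; the zones, groups, and application names are examples.

```python
# Positive control model: allow only what is explicitly permitted,
# including the application itself, and deny unknown applications.
# Generic illustration -- not any vendor's policy language.

RULES = [
    # (source zone, destination zone, user group, application) -> allow
    ("users", "dmz",        "sales",       "salesforce"),
    ("users", "datacenter", "engineering", "ssh"),
    ("users", "dmz",        "any",         "web-browsing"),
]

def evaluate(src_zone, dst_zone, user_group, application):
    """Return 'allow' only for traffic matching an explicit rule; deny everything else."""
    if application == "unknown":
        return "deny"                        # the "unknown application - deny" rule
    for s, d, g, app in RULES:
        if (s, d, app) == (src_zone, dst_zone, application) and g in ("any", user_group):
            return "allow"
    return "deny"                            # default deny

if __name__ == "__main__":
    print(evaluate("users", "dmz", "sales", "salesforce"))   # allow
    print(evaluate("users", "datacenter", "sales", "ssh"))   # deny -- wrong user group
    print(evaluate("users", "dmz", "sales", "unknown"))      # deny -- unclassified application
```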

27. November 2010 · Why Counting Flaws is Flawed — Krebs on Security

Why Counting Flaws is Flawed — Krebs on Security.

Krebs calls into question Bit9’s “Dirty Dozen” Top Vulnerable Application List, which placed Google’s Chrome at number one. The key issue is that categorizing vulnerabilities simply by severity creates a misleading picture.

Certainly severity is an important criterion, but it does not equal risk. Krebs highlights several additional factors that affect risk level:

  • Was the vulnerability discovered in-house — or was the vendor first alerted to the flaw by external researchers (or attackers)?
  • How long after being initially notified or discovering the flaw did it take each vendor to fix the problem?
  • Which products had the broadest window of vulnerability, from notification to patch?
  • How many of the vulnerabilities were exploitable using code that was publicly available at the time the vendor patched the problem?
  • How many of the vulnerabilities were being actively exploited at the time the vendor issued a patch?
  • Which vendors make use of auto-update capabilities? For those vendors that include auto-update capabilities, how long does it take “n” percentage of customers to be updated to the latest, patched version?

When taking these factors into consideration, Krebs opines that Adobe comes in first, second, and third!!
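
To make the point concrete, here is a toy sketch that scores a product's patch record on several of the factors Krebs lists rather than on raw severity counts. The weights and the sample records are invented; the point is only that a ranking can flip once the exposure window and active exploitation are counted.

```python
from dataclasses import dataclass

@dataclass
class Flaw:
    severity: float            # 0..10, e.g. a CVSS base score
    days_exposed: int          # days from disclosure/notification to patch
    exploit_public: bool       # exploit code available before the patch
    actively_exploited: bool   # attacks seen in the wild before the patch

def risk_score(flaw: Flaw) -> float:
    """Toy weighting: severity matters, but exposure and exploitation matter more."""
    score = flaw.severity
    score += min(flaw.days_exposed, 180) / 30      # up to +6 for a long exposure window
    score += 3 if flaw.exploit_public else 0
    score += 5 if flaw.actively_exploited else 0
    return score

# Invented sample records for two hypothetical products.
product_a = [Flaw(9.3, 4,  False, False), Flaw(7.5, 7,  False, False)]   # fast patcher
product_b = [Flaw(6.8, 90, True,  True),  Flaw(5.0, 60, True,  False)]   # slow patcher

for name, flaws in (("Product A", product_a), ("Product B", product_b)):
    print(name,
          "severity-only:", round(sum(f.severity for f in flaws), 1),
          "weighted:", round(sum(risk_score(f) for f in flaws), 1))
```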

27. November 2010 · Stuxnet source code released? Unlikely

Larry Seltzer believes the report from Britain’s Sky News, which claims that the source code for the Stuxnet worm is being traded on the black market and could be used by terrorists, is highly unlikely to be true. (The link to the story appears to be broken as of Saturday, 4:11pm EST. Try the main blog link, http://blogs.pcmag.com/securitywatch/, and scroll to the story.)

Larry is by no means the only skeptical security blogger. AVG’s Roger Thompson and Sophos’s Paul Ducklin agree.

Sky News may have been confused by the fact that an exploit for one of the Stuxnet 0-day vulnerabilities was released a few days ago. While that is problematic, it is by no means Stuxnet itself.