29. November 2010 · Comments Off on Clear-text is Fine…It’s Internal. · Categories: blog

Clear-text is Fine…It’s Internal.

In light of the recent discussions about whether public websites should use SSL, our Managed Security Services Provider partner Solutionary discusses the reasons for NOT using clear-text protocols even within the enterprise:

  • Corporate Insider / Disgruntled Employee
  • DMZ Host Compromised Externally
  • Internal Host Compromised Externally

Some examples of clear-text protocols and their encrypted alternatives are (see the sketch after the list):

  • FTP -> SFTP
  • HTTP -> HTTPS
  • telnet -> SSH
  • SNMPv1 & 2 -> SNMPv3
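
As a minimal sketch of what the swap looks like in practice, here is a hypothetical Python example that replaces a clear-text FTP transfer with SFTP over SSH using the paramiko library; the host name, credentials, and file name are illustrative only:

```python
# Plain FTP sends credentials and file contents across the network in clear text:
# from ftplib import FTP
# ftp = FTP("files.example.com")                    # hypothetical internal host
# ftp.login("user", "secret")                       # password crosses the wire unencrypted
# ftp.retrbinary("RETR report.csv", open("report.csv", "wb").write)

# SFTP (file transfer tunneled over SSH) encrypts both credentials and data:
import paramiko

client = paramiko.SSHClient()
client.load_system_host_keys()                      # verify the server's SSH host key
client.connect("files.example.com", username="user", password="secret")
sftp = client.open_sftp()
sftp.get("report.csv", "report.csv")                # file contents are encrypted end to end
sftp.close()
client.close()
```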

29. November 2010 · Comments Off on Zscaler Research: Why the web has not switched to SSL-only yet? · Categories: blog

Zscaler Research: Why the web has not switched to SSL-only yet?

Great post following up on the Firesheep threat, detailing the reasons why more websites are not using SSL (a small latency sketch follows the list):

  • Server overhead
  • Increased latency
  • Challenge for CDNs
  • Wildcard certificates are not enough
  • Mixed HTTP/HTTPS: the chicken & the egg problem
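
To make the overhead and latency bullets concrete, here is a minimal Python sketch that times a bare TCP connection versus a TCP connection plus a TLS handshake; the host name is illustrative, and actual numbers depend on network distance, TLS version, and session resumption:

```python
import socket
import ssl
import time

HOST = "www.example.com"  # hypothetical host used for illustration

def time_tcp_connect(host, port=80):
    """Time a bare TCP connection, the starting point of a plain HTTP request."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=10):
        pass
    return time.perf_counter() - start

def time_tls_handshake(host, port=443):
    """Time a TCP connection plus the TLS handshake layered on top of it."""
    context = ssl.create_default_context()
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host):
            pass
    return time.perf_counter() - start

if __name__ == "__main__":
    print(f"TCP only : {time_tcp_connect(HOST) * 1000:.1f} ms")
    print(f"TCP + TLS: {time_tls_handshake(HOST) * 1000:.1f} ms")
```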

Zscaler did a follow-up blog post, SSL: the sites which don’t want to protect their users, highlighting popular sites which do not use SSL.

Full disclosure – Zscaler is a Cymbel partner.

29. November 2010 · Comments Off on What is Information Security: New School Primer « The New School of Information Security · Categories: blog

What is Information Security: New School Primer « The New School of Information Security.

I would like to comment on each of the three components of Alex’s “primer” on Information Security.

First, InfoSec is a hypothetical construct. It is something that we can all talk about, but it is not directly observable, and therefore not measurable the way, say, speed can be described in km/hr. “Directly” is to be stressed there because there are many hypothetical constructs of subjective value for which we do create measurements and measurement scales in order to create a state of (high) intersubjectivity between observers (I don’t like that Wikipedia definition; I use it to mean that you and I can understand roughly the same thing in the same way).

Clearly InfoSec cannot be measured like speed or acceleration or weight. Therefore I would agree with Alex’s classification.

Second, security is not an engineering discipline, per se. Our industry treats it as such because most of us come from that background, and because the easiest thing to do to try to become “more secure” is to buy a new engineering solution (security product marketing). But the bankruptcy of this way of thinking is present in both our budgets and our standards. A security management approach focused solely on engineering fails primarily because of the “intelligent” or adaptable attacker.

Again, clearly InfoSec involves people and is therefore more than a purely engineering exercise like building a bridge. On the other hand, if you look at the statistics from the Verizon Business 2010 Data Breach Investigations Report (page 3), 85% of the analyzed attacks were not considered highly difficult. In other words, if “sound” security engineering practices were applied, the number of breaches would decline dramatically.

This is why we at Cymbel have embraced the SANS 20 Critical Security Controls for Effective Cyber Defense.

Finally, InfoSec is a subset of Information Risk Management (IRM). IRM takes what we know about “secure” and adds concepts like probable impacts and resource allocation strategies. This can be confusing to many because of the many definitions of the word “risk” in the English language, but that’s a post for a different day.

This is the part of Alex’s primer with which I have the most concern – “probable impacts.” The problem is that estimating probabilities with respect to exploits is almost totally subjective, and there is still far too little data available to support such estimates. On the other hand, there is enough information about successful exploits and threats in the wild to give infosec teams a plan to move forward, like the SANS 20 Critical Controls.

My biggest concern is Alex referencing FAIR, Factor Analysis of Information Risk, in a positive light. From my perspective, any tool which, when used by two independent groups sitting in different rooms to analyze the same environment, can generate wildly different results is simply not valid. Richard Bejtlich provided a thoughtful analysis of FAIR in 2007, here and here.

Bejtlich shows that FAIR is just a more elaborate version of ALE, Annual Loss Expectancy. For a more detailed analysis of the shortcomings of ALE, see Security Metrics by Andrew Jaquith, page 31. In summary, the problems with ALE are (a small numeric sketch follows the list):

  • The inherent difficulty of modeling outliers
  • The lack of data for estimating probabilities of occurrence or loss expectancies
  • Sensitivity of the ALE model to small changes in assumptions
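
To illustrate the sensitivity problem, here is a minimal sketch of the ALE calculation (ALE = Single Loss Expectancy × Annualized Rate of Occurrence) using purely hypothetical numbers, showing how two analysts’ modestly different frequency estimates for the same scenario yield wildly different answers:

```python
def ale(single_loss_expectancy, annual_rate_of_occurrence):
    """Annual Loss Expectancy: expected loss per year for one threat scenario."""
    return single_loss_expectancy * annual_rate_of_occurrence

# Hypothetical scenario: a breach estimated to cost $500,000 per incident.
sle = 500_000

# Two analysts looking at the same environment, with subjective frequency estimates:
print(ale(sle, 0.05))  # "roughly once every 20 years" -> $25,000 per year
print(ale(sle, 0.50))  # "roughly once every 2 years"  -> $250,000 per year
```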

I am surely not saying that there are no valid methods of measuring risk. It’s just that I have not seen any that work effectively. I am intrigued by Douglas Hubbard’s theories expressed in his two books, How to Measure Anything and The Failure of Risk Management. Anyone using them? I would love to hear your results.

I look forward to Alex’s post on Risk.

28. November 2010 · Comments Off on User activity monitoring answers the age-old questions of who, what and when · Categories: blog

User activity monitoring answers the age-old questions of who, what and when.

NetworkWorld ran a comprehensive article on the City of Richmond, Virginia’s deployment of PacketMotion’s User Activity Management solution to:

…provide information and snapshots to discover whether a folder or database was only being accessed by the appropriate groups of people, or if there was an access problem.

Conventional approaches for file activity monitoring and managing file permissions aren’t sufficient for many organizations. Third-party administrative tools and other widely used solutions, such as directory services groups and the file auditing built into operating systems, often cannot keep pace with organizational changes or the sheer volume and growth of unstructured data. Many times with these approaches, there is also a home-grown or manual system required to aggregate data across the multiple point solutions to support the ever-increasing need to answer the burning questions: Who has accessed What? When? And perhaps even Why?
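
As a minimal illustration of that “Who accessed What, and When?” question, here is a hypothetical Python sketch that aggregates a stream of file-access events by file; the event fields and values are invented for the example:

```python
from collections import defaultdict

# Hypothetical file-access events, as a monitoring tool might collect them.
events = [
    {"user": "alice", "path": "/shares/finance/payroll.xlsx", "time": "2010-11-22 09:14"},
    {"user": "bob",   "path": "/shares/finance/payroll.xlsx", "time": "2010-11-22 17:41"},
    {"user": "alice", "path": "/shares/hr/reviews.docx",      "time": "2010-11-23 08:03"},
]

# Group accesses by file, so each answer reads: this file was touched by whom, and when.
by_file = defaultdict(list)
for event in events:
    by_file[event["path"]].append((event["user"], event["time"]))

for path, accesses in by_file.items():
    print(path)
    for user, when in accesses:
        print(f"  {user} at {when}")
```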

Well worth reading the whole article. Full disclosure – Cymbel partners with PacketMotion.

27. November 2010 · Comments Off on Securosis Blog | No More Flat Networks · Categories: blog

Securosis Blog | No More Flat Networks.

Mike Rothman at Securosis is tentatively calling for more internal network segmentation in light of the Stuxnet worm. We at Cymbel, who have been recommending Palo Alto Networks for the last three years for its ability to define security zone policies by application and LDAP user group, say welcome.

Using firewalls on internal networks to define zones is not new with Palo Alto. Netscreen (now Juniper) had the zone concept ten years ago.

Palo Alto was the first, and, as far as I know, is still the only firewall vendor that enables you to classify traffic by application rather than port. This lets you implement a Positive Control Model from the network layer up through the application layer. As a result, with some work over time, you can implement the “unknown application – deny” rule. In other words, if there is application traffic for which no policies are defined, deny it.
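
Here is a minimal sketch of that idea; the zone names, user groups, applications, and rule format are illustrative only and are not actual firewall configuration syntax:

```python
# A Positive Control Model rulebase in miniature: traffic is classified by
# application (not port), matched against explicit allow rules per zone pair
# and user group, and anything unidentified or unmatched is denied by default.

RULES = [
    {"from_zone": "users", "to_zone": "dmz",      "group": "Finance", "app": "oracle",       "action": "allow"},
    {"from_zone": "users", "to_zone": "internet", "group": "any",     "app": "web-browsing", "action": "allow"},
]

def evaluate(from_zone, to_zone, group, app):
    if app == "unknown":          # the "unknown application -> deny" rule
        return "deny"
    for rule in RULES:
        if (rule["from_zone"] == from_zone and rule["to_zone"] == to_zone
                and rule["group"] in ("any", group) and rule["app"] == app):
            return rule["action"]
    return "deny"                 # positive control: no matching rule means deny

print(evaluate("users", "dmz", "Finance", "oracle"))    # allow
print(evaluate("users", "dmz", "Sales", "bittorrent"))  # deny
```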

27. November 2010 · Comments Off on Why Counting Flaws is Flawed — Krebs on Security · Categories: blog

Why Counting Flaws is Flawed — Krebs on Security.

Krebs calls into question Bit9’s “Dirty Dozen” Top Vulnerable Application List, which placed Google’s Chrome at number one. The key issue is that categorizing vulnerabilities simply by severity creates a misleading picture.

Certainly severity is an important criterion, but it does not equal risk. Krebs highlights several additional factors which affect risk level:

  • Was the vulnerability discovered in-house — or was the vendor first alerted to the flaw by external researchers (or attackers)?
  • How long after being initially notified or discovering the flaw did it take each vendor to fix the problem?
  • Which products had the broadest window of vulnerability, from notification to patch?
  • How many of the vulnerabilities were exploitable using code that was publicly available at the time the vendor patched the problem?
  • How many of the vulnerabilities were being actively exploited at the time the vendor issued a patch?
  • Which vendors make use of auto-update capabilities? For those vendors that include auto-update capabilities, how long does it take “n” percentage of customers to be updated to the latest, patched version?

When taking these factors into consideration, Krebs opines that Adobe comes in first, second, and third!!
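
As a rough sketch of how such factors could be combined into something closer to risk than raw severity, here is a hypothetical scoring function; the weights and inputs are invented for illustration and are not Krebs’s or Bit9’s methodology:

```python
def risk_score(severity, externally_reported, days_to_patch,
               public_exploit, actively_exploited):
    """Combine a base severity with exposure factors into a rough risk score."""
    score = severity                          # CVSS-style base severity, 0-10
    score += 2 if externally_reported else 0  # vendor learned of the flaw from outsiders
    score += min(days_to_patch / 30.0, 3)     # longer window of vulnerability
    score += 3 if public_exploit else 0       # exploit code public before the patch
    score += 4 if actively_exploited else 0   # attacks seen in the wild before the patch
    return round(score, 1)

# Same severity, very different risk:
print(risk_score(9.0, False, 5, False, False))   # found in-house, patched quickly
print(risk_score(9.0, True, 90, True, True))     # exploited in the wild before patching
```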

27. November 2010 · Comments Off on Stuxnet source code released? Unlikely · Categories: blog

Larry Seltzer finds the report from Britain’s Sky News, which claims that the source code for the Stuxnet worm is being traded on the black market and could be used by terrorists, highly unlikely. (The link to the story appears to be broken as of Saturday, 4:11pm EST. Try the main blog link, http://blogs.pcmag.com/securitywatch/, and scroll to the story.)

Larry is by no means the only skeptical security blogger. AVG’s Roger Thompson and Sophos’s Paul Ducklin agree.

Sky News may be confused by the fact that an exploit for one of the Stuxnet 0-day vulnerabilities was released a few days ago. While this is problematic, it is by no means Stuxnet itself.

27. November 2010 · Comments Off on Gartner: Security policy should factor in business risks · Categories: blog

Gartner: Security policy should factor in business risks.

Understanding the business risk posed by security threats is crucial for IT managers and security officers, two analysts have claimed.

Viewing and analyzing security threats from a business risk perspective is surely a worthwhile goal.

How do you operationalize this objective? Deploy a Log/SIEM solution with integrated IT/Business Service Management capabilities. These include (see the sketch after the list):

  • Device and Software Discovery
  • Layer 2 and Layer 3 Topology Discovery and Mapping
  • User interface to group devices and applications into IT/Business Services
  • Change Management Monitoring
  • Alerts/Incidents with IT/Business Service context
  • IT/Business Service Management Reports and Dashboards
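
As a minimal sketch of what “IT/Business Service context” can mean in practice, here is a hypothetical Python example that enriches a security alert with the business service of the affected host; the host names, service names, and alert fields are invented for illustration:

```python
# Hypothetical mapping from discovered hosts to the business services they support.
SERVICE_MAP = {
    "db-01.example.com":   "Online Billing",
    "web-03.example.com":  "Online Billing",
    "mail-01.example.com": "Corporate Email",
}

def enrich_alert(alert):
    """Tag an alert with the IT/Business Service of the affected host."""
    alert["business_service"] = SERVICE_MAP.get(alert["host"], "Unassigned")
    return alert

alert = {"host": "db-01.example.com", "signature": "Brute-force login attempt"}
print(enrich_alert(alert))
# {'host': 'db-01.example.com', 'signature': 'Brute-force login attempt',
#  'business_service': 'Online Billing'}
```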

25. November 2010 · Comments Off on Schneier on Security: Me on Airport Security · Categories: blog

Schneier on Security: Me on Airport Security.

Short history of airport security from Bruce Schneier:

A short history of airport security: We screen for guns and bombs, so the terrorists use box cutters. We confiscate box cutters and corkscrews, so they put explosives in their sneakers. We screen footwear, so they try to use liquids. We confiscate liquids, so they put PETN bombs in their underwear. We roll out full-body scanners, even though they wouldn’t have caught the Underwear Bomber, so they put a bomb in a printer cartridge. We ban printer cartridges over 16 ounces — the level of magical thinking here is amazing — and they’re going to do something else.

This is a stupid game, and we should stop playing it.

25. November 2010 · Comments Off on Escrow Co. Sues Bank Over $440K Cyber Theft — Krebs on Security · Categories: blog

Escrow Co. Sues Bank Over $440K Cyber Theft — Krebs on Security.

Choice Escrow and Land Title, an escrow company, had $440,000 stolen from its bank account in one fraudulent online transaction. Choice Escrow is suing the bank, BancorpSouth, Inc. of Tupelo, Miss.

The fraudulent transaction was to a corporate account payee in Cyprus.

Technically the bank is not responsible for commercial account losses unless they are reported within 48 hours of the transaction. However, Choice Escrow is suing on the basis that BancorpSouth did not provide the two-factor authentication required by the Federal Financial Institutions Examination Council (FFIEC).

Even if the bank had provided it, two-factor authentication is no longer enough to thwart online banking fraud. The problem is that if the end user’s computer is compromised with a “man-in-the-browser” trojan like Zeus, the illicit transactions are performed after the authentication process completes, while the end user is still logged on!

Think of it this way. No number of locks on your front door will stop a bad guy from walking into your house right behind you after you have opened the door.

We have partnered with Becrypt, which provides a “Trusted Client” solution that either (1) resides on an encrypted USB stick from which you boot, or (2) resides on a dedicated PC that you use only for banking.