28. December 2009 · Comments Off on Verizon Business 2009 DBIR Supplemental Report provides empirical guidance for unifying security and compliance priorities · Categories: Breaches, Compliance, Risk Management, Security Management, Theory vs. Practice

The Verizon Business security forensics group's recently released 2009 Data Breach Investigations Supplemental Report provides common ground between those in the enterprise who are compliance oriented and those who are security oriented. While in theory, there should be no difference between these groups, in practice there is.   

Table 8 on page 28 evaluates the breach data set from the perspective of data types breached. Number one by far is Payment Card Data at 84%. Second is Personal Information at 31%. (Each case in the data set can fall into more than one category, which is why the percentages add up to more than 100%.) These are exactly the data types that regulatory compliance standards like PCI and breach disclosure laws like Mass 201 CMR 17 focus on.

Therefore there is high value in using the report's "threat action types" analysis to prioritize risk reduction as well as compliance programs, processes, and technologies.

While the original 2009 DBIR did provide similar information in Figure 29 on page 33, it's the Supplemental report which provides the threat action type analysis that can drive a common set of risk reduction and compliance priorities.

27. December 2009 · Comments Off on First Heartland suit dismissed – executives off the hook – for now · Categories: Breaches, Compliance, Legal

The first adjudicated lawsuit against the executives of Heartland Payment Systems went in favor of the defense.

As I am sure you are aware, Heartland Payment Systems is embroiled in countless lawsuits as a result of the disclosure it had to make in January 2009 of a breach of over 130 million credit card numbers. It is considered the largest breach of credit card data in history.

A class action shareholder lawsuit filed against the executives of Heartland was dismissed earlier this month by Judge Anne Thompson of the U.S. District Court of New Jersey on the basis that the executives' claim that they took security seriously was not a lie. Here is the actual opinion.

Gene Schultz weighed in with a thoughtful opinion here.

While I am no lawyer, this lawsuit seems to have been very narrowly focused, and based on my reading of the opinion it's hard to see how the judge could have found for the plaintiff.

A lawsuit that would bring out the emails and memos associated with a variety of compliance and security decisions made by the Heartland executives would be more interesting.

23. October 2009 · Comments Off on Relational databases dead for log management? · Categories: Compliance, Log Management, Security Management

Larry Walsh wrote an interesting post this week, Splunk Disrupts Security Log Auditing, in which he claims that Splunk's success is due to capturing market share in the security log auditing market because of its Google-like approach to storing log data rather than using a "relational database."

There was also a very good blog post at Securosis in response – Splunk and Unstructured Data.

While there is no doubt that Splunk has been successful as a company, I am not so sure it's due to security log auditing.

It's my understanding that the primary use case for Splunk is actually in Operations where, for example, a network administrator wants to search logs to resolve an Alert generated by an SNMP-based network management system. Most SNMP-based network management systems are good at telling you "what" is going on, but not very good at telling you "why."

So when the network management system generates an Alert, the admin goes to Splunk to find the logs that would show what actually happened so s/he can fix the root cause of the Alert. For this use case, you don't really need more than a day's worth of logs.

Splunk's brilliant move was to allow "free" usage of the software for one day's worth of logs or some limited amount of storage that generally would not exceed one day. In reality, a few hours of logs is very valuable. This freemium model has been very successful.

Security log auditing is a very different use case. It can require a year or more of data and sophisticated reporting capabilities. That is not to say that a Google-like storage approach cannot accomplish this.

In fact, security log auditing is just another online analytical processing (OLAP) application, albeit with potentially huge amounts of data. It has been at least ten years since the IT industry realized that OLAP applications require a different way to organize stored data than online transaction processing (OLTP) applications do. OLTP applications still use traditional relational databases.

There has been much experimentation with ways to store data for OLAP applications. However, there is still a lot of value in the SQL language as a kind of open industry standard API to the stored data.

So I would agree that traditional relational database products are not appropriate for log management data storage, but SQL as a language has merit as the "API layer" between the query and reporting tools and the data.
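
To make that concrete, here is a minimal sketch of SQL-as-the-API-layer, using Python's built-in sqlite3 module and a hypothetical auth_events table standing in for a log store. The table and columns are invented for illustration; no vendor's actual schema is implied. The point is that the reporting query stays plain SQL even if the engine underneath is a Google-like index or column store rather than a traditional relational database.

```python
import sqlite3

# Hypothetical log store: a single auth_events table. In a real deployment the
# storage engine would likely be purpose-built for log volumes; SQL is used here
# only as the "API layer" between the reporting tool and the data.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE auth_events (
        event_time TEXT,   -- ISO 8601 timestamp
        username   TEXT,
        src_ip     TEXT,
        outcome    TEXT    -- 'SUCCESS' or 'FAILURE'
    )
""")
conn.executemany(
    "INSERT INTO auth_events VALUES (?, ?, ?, ?)",
    [
        ("2009-10-01T08:00:00", "alice", "10.0.0.5", "FAILURE"),
        ("2009-10-01T08:00:05", "alice", "10.0.0.5", "FAILURE"),
        ("2009-10-01T08:00:10", "alice", "10.0.0.5", "SUCCESS"),
        ("2009-10-02T09:15:00", "bob",   "10.0.0.9", "SUCCESS"),
    ],
)

# A typical audit-style report: failed log-ins per user over the retention window.
for row in conn.execute("""
    SELECT username,
           COUNT(*)               AS failed_logins,
           COUNT(DISTINCT src_ip) AS distinct_sources
    FROM auth_events
    WHERE outcome = 'FAILURE'
    GROUP BY username
    ORDER BY failed_logins DESC
"""):
    print(row)
```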

Wired Magazine reported this week that Wal-Mart kept secret a breach it discovered in November 2006 that had been ongoing for 17 months. According to the article, Wal-Mart claimed there was no reason to disclose the exploit at the time because it believed no customer data or credit card information was breached.

Wal-Mart did admit that custom-developed point-of-sale software was breached. The California breach law covering the financial information of California residents had gone into effect on July 1, 2003, and was extended to health information on January 1, 2009. I blogged about that here.

I think it would be more accurate to say that the forensics analysts hired by Wal-Mart could not "prove" that customer data was breached, i.e., could not find specific evidence that customer data was breached. One key piece of information the article revealed, "The company’s server logs recorded only unsuccessful log-in attempts, not successful ones, frustrating a detailed analysis."

Based on my background in log management, I understand the approach of only collecting "bad" events like failed log-ins. Other than that sentence, the article does not discuss what types of events were and were not collected. Without the successful log-ins, though, the analysts had very little idea of what was really going on.
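
As a small illustration of why successful log-ins matter, here is a sketch using made-up events, obviously not Wal-Mart's actual logs. It flags the classic brute-force pattern: a burst of failures followed by a success from the same source. If only the failures are collected, the most important event, the success, is invisible to the analysts.

```python
from collections import defaultdict

# Hypothetical, already-parsed log-in events: (timestamp, src_ip, username, outcome)
events = [
    ("2006-06-01T02:00:01", "192.0.2.10", "posadmin", "FAILURE"),
    ("2006-06-01T02:00:04", "192.0.2.10", "posadmin", "FAILURE"),
    ("2006-06-01T02:00:07", "192.0.2.10", "posadmin", "FAILURE"),
    ("2006-06-01T02:00:11", "192.0.2.10", "posadmin", "SUCCESS"),
    ("2006-06-01T09:30:00", "10.1.1.20",  "clerk42",  "SUCCESS"),
]

FAILURE_THRESHOLD = 3  # arbitrary threshold for this sketch

failures = defaultdict(int)
for ts, src, user, outcome in events:
    key = (src, user)
    if outcome == "FAILURE":
        failures[key] += 1
    elif outcome == "SUCCESS":
        # The success event is what turns "noise" into an incident.
        if failures[key] >= FAILURE_THRESHOLD:
            print(f"ALERT: {user} logged in from {src} at {ts} "
                  f"after {failures[key]} failed attempts")
        failures[key] = 0
```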

The problem Wal-Mart was facing at the time was that the cost of collecting and storing all the logs in an accessible manner was prohibitive. Fortunately, log data management software has improved and hardware costs have dropped dramatically. In addition there are new tools for user activity monitoring.

However, my key reaction to this article is my disappointment that Wal-Mart chose to keep this incident a secret. It's possible that news of a Wal-Mart breach might have motivated other retailers to strengthen their security defenses and increase their vigilance, which might have reduced the number of breaches that occurred since 2006. It may also have more quickly increased the rigor QSAs applied to PCI DSS audits.

In closing, I would like to call attention to Adam Shostack's and Andrew Stewart's book, "The New School of Information Security," and quote a passage from page 78 which talks about the value of disclosing breaches aside from the need to inform people whose personal financial or health information may have been breached:

"Breach data is bringing us more and better objective data than any past information-sharing initiative in the field of information security. Breach data allows us to see more about the state of computer security than we've been able to with traditional sources of information. … Crucially, breach data allows us to understand what sorts of issues lead to real problems, and this can help us all make better security decisions."

28. September 2009 · Comments Off on All enterprises have infected hosts controlled by botnets · Categories: Botnets, Breaches, Compliance, Malware

If you think your organization is free of botnet-controlled hosts (aka zombies), it's only because you don't have the right detection tools! For example, Damballa, a botnet detection company, claims that every organization it has tested was infected. And the number of infected hosts is rising – from 5% to 7% last year to 7% to 9% this year.

In one sense this is a shocking number, i.e. almost 10% of the hosts in your network are controlled by botnets. On the other hand, it is not so surprising, because I have yet to find an enterprise that does not have hosts running non-compliant or non-monitored software.

Another interesting finding from Damballa's research is the proliferation of small, customized botnets. Here is a quote from the Dark Reading article:

"The bad guys are also finding that deploying a
small botnet inside a targeted organization is a more efficient way of
stealing information than deploying a traditional exploit on a specific
machine. And [Damballa VP of Research Gunter] Ollmann says many of the smaller botnets appear to have
more knowledge of the targeted organization as well. "They are very
strongly associated with a lot of insider knowledge…and we see a lot
of hands-on command and control with these small botnets," he says.

There are several advanced security tools that can be easily deployed in a couple of days that will pinpoint non-compliant and non-monitored software and network communications.

McKinsey's just-released report on its third annual survey of the usage and benefits of Web 2.0 technology was enlightening as far as it went. However, it completely ignores the IT security risks Web 2.0 creates, and traditional IT security products do not mitigate these risks. If we are going to deploy Web 2.0 technology, then we need to upgrade our security to, dare I say, "IT Security 2.0."

Even if Web 2.0 products had no vulnerabilities for cybercriminals to exploit, which is not possible, there is still the need for a control function, i.e. which applications should be allowed and who should be able to use them. Unfortunately traditional security vendors have had limited success with both. Fortunately, there are security vendors who have recognized this as an opportunity and have built solutions which mitigate these new risks.

In the past, I had never subscribed to the concept of security enabling innovation, but I do in this case. There is no doubt that improved communication, learning, and collaboration within the organization and with customers and suppliers enhances the organization's competitive position. Ignoring Web 2.0 or letting it happen by itself is not an option. Therefore when planning Web 2.0 projects, we must also include plans for mitigating the new risks Web 2.0 applications create.

The Web 2.0 good news – The survey results are very positive:

"69 percent of respondents report that their companies have gained
measurable business benefits, including more innovative products and
services, more effective marketing, better access to knowledge, lower
cost of doing business, and higher revenues.

Companies that made
greater use of the technologies, the results show, report even greater
benefits. We also looked closely at the factors driving these
improvements—for example, the types of technologies companies are
using, management practices that produce benefits, and any
organizational and cultural characteristics that may contribute to the
gains. We found that successful companies not only tightly integrate
Web 2.0 technologies with the work flows of their employees but also
create a “networked company,” linking themselves with customers and
suppliers through the use of Web 2.0 tools. Despite the current
recession, respondents overwhelmingly say that they will continue to
invest in Web 2.0."

The Web 2.0 bad news – Web 2.0 technologies introduce IT security risks that cannot be ignored. The main risk comes from the fact that these applications are purposely built to bypass traditional IT security controls in order to simplify deployment and increase usage. They use techniques such as port hopping, encrypted tunneling, and browser based applications. If we cannot identify these applications and the people using them, we cannot monitor or control them. Any exploitation of vulnerabilities in these applications can go undetected until it's too late.
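
To see why port-based controls fall short, consider this toy sketch. It is not how any commercial firewall actually works, and the payload "signatures" are invented for illustration, but it shows the difference between classifying traffic by port number and classifying it by what is actually in the traffic.

```python
# Toy flow records: (dst_port, first_payload_bytes). The signatures below are
# invented for illustration only -- real application identification involves
# protocol decoders, heuristics, and sometimes decryption.
flows = [
    (443, b"\x16\x03\x01"),      # looks like a TLS handshake
    (443, b"P2PTUNNEL-HELLO"),   # hypothetical file-sharing app hopping onto 443
    (80,  b"GET /index.html"),   # plain HTTP
]

def classify_by_port(port):
    # Traditional firewall view: the port number is the application.
    return {80: "web-browsing", 443: "ssl"}.get(port, "unknown")

def classify_by_payload(payload):
    # Application-aware view: look at what is actually in the traffic.
    if payload.startswith(b"\x16\x03"):
        return "ssl"
    if payload.startswith(b"P2PTUNNEL"):
        return "p2p-file-sharing"
    if payload.startswith(b"GET "):
        return "web-browsing"
    return "unknown"

for port, payload in flows:
    print(port, classify_by_port(port), classify_by_payload(payload))
```

Both flows on port 443 look identical to the port-based rule; only the payload-aware check tells them apart, which is the whole point of application-aware controls.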

A second risk is bandwidth consumption. For example, unauthorized and uncontrolled consumer-oriented video and audio file sharing applications consume large chunks of bandwidth. How much? Hard to know if we cannot see them.

In case we need some examples of the bad news, just in the last few days see here, here, here, and here.

The IT Security 2.0 good news – There are new IT Security 2.0 vendors who are addressing these issues in different ways as follows:

Database Activity Monitoring – Since we cannot depend on traditional perimeter defenses, we must protect the database itself. Database encryption is another useful technology, but if someone has stolen authorized credentials (very common with trojan keyloggers), encryption is of no value. I discussed Database Activity Monitoring in more detail here. It's also useful for compliance reporting when it can tie database activity to application users.

User Activity Monitoring – Network appliances designed to monitor internal user activity and block actions that are out of policy. Also useful for compliance reporting.

Web Application Firewalls – Web server host-based software or appliances specifically designed to analyze anomalies in browser-based applications. WAFs are not meant to be primary firewalls but rather to monitor the Layer 7 fields of browser-based forms into which users enter information. Cybercriminals enter malicious code which, if not detected and blocked, can trigger a wide range of exploits. WAFs are also useful for PCI compliance.
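
As a toy illustration of what Layer 7 field inspection means, the sketch below checks submitted form fields against a few common injection patterns. Real WAF rule sets are far larger and carefully tuned to avoid false positives; this is only meant to show the general idea.

```python
import re

# A few illustrative patterns only; production WAF rule sets are much larger.
SUSPICIOUS_PATTERNS = [
    re.compile(r"('|\")\s*or\s+1\s*=\s*1", re.IGNORECASE),  # classic SQL injection probe
    re.compile(r"<\s*script", re.IGNORECASE),               # reflected XSS attempt
    re.compile(r"union\s+select", re.IGNORECASE),           # SQL UNION probing
]

def inspect_form(fields):
    """Return the names of form fields whose values match a suspicious pattern."""
    flagged = []
    for name, value in fields.items():
        if any(p.search(value) for p in SUSPICIOUS_PATTERNS):
            flagged.append(name)
    return flagged

# Example submission a WAF might see in front of a web application.
print(inspect_form({"username": "alice", "comment": "' OR 1=1 --"}))
# -> ['comment']
```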

"Web 2.0" Firewalls – Next generation network firewalls that can detect and control Web 2.0 applications in addition to traditional firewall functions. They also identify users and can analyze content. They can also perform URL filtering, intrusion prevention, proxying, and data leak prevention. This multi-function capability can be used to generate significant cost reductions by (1) consolidating network appliances and (2) unifying policy management and compliance reporting.

I have heard this type of firewall referred to as an Application Firewall. But it seems confusing to me because it's too close to Web Application Firewall, which I described above and performs completely different functions. Therefore, I prefer the term, Web 2.0 Firewall.

In conclusion, Web 2.0 is real and IT Security 2.0 must be part of Web 2.0 strategy. Put another way, IT Security 2.0 enables Web 2.0.

I thought a post about Database Activity Monitoring was timely because one of the DAM vendors, Sentrigo, published a Microsoft SQL Server vulnerability today along with a utility that mitigates the risk. Also of note, Microsoft denies that this is a real vulnerability.

I generally don't like to write about a single new vulnerability because there are just so many of them. However, Adrian Lane, CTO and Analyst at Securosis, wrote a detailed post about this new vulnerability, Sentrigo's workaround, and Sentrigo's DAM product, Hedgehog. Therefore I wanted to put this in context.

Also of note, Sentrigo sponsored a SANS report called "Understanding and Selecting a Database Activity Monitoring Solution." I found this report to be fair and balanced, as I have found all of SANS's activities.

Database Activity Monitoring is becoming a key component of a defense-in-depth approach to protecting "competitive advantage" information like intellectual property, customer data, and financial information, and to meeting compliance requirements.

One of the biggest issues organizations face when selecting a Database Activity Monitoring solution is the method of activity collection, of which there are three – logging, network-based monitoring, and host-based (agent) monitoring. Each has pros and cons:

  • Logging – This requires turning on the database product's native logging capability. The main advantage of this approach is that it is a standard feature included with every database. Also some database vendors like Oracle have a complete, but separately priced Database Activity Monitoring solution, which they claim will support other databases. Here are the issues with logging:
    • You need a log management or Security Information and Event Management (SIEM) system to normalize each vendor's log format into a standard format so you can correlate events across different databases and store the large volume of events that are generated. If you have already committed to a SIEM product, this might not be an issue, assuming the SIEM vendor does a good job with database logs.
    • There can be significant performance overhead on the database associated with logging, possibly as high as 50%.
    • Database administrators can tamper with the logs. Also if an external hacker gains control of the database server, he/she is likely to turn logging off or delete the logs. 
    • Logging is not a good alternative if you want to block out-of-policy actions. Logging is after the fact and cannot be expected to block malicious activity. While SIEM vendors may have the ability to take actions, by the time the events are processed by the SIEM, seconds or minutes have passed, which means the exploit could already be completed.
  • Network based – An appliance is connected to a tap or a span port on the switch that sits in front of the database servers. Traffic to and, in most cases, from the databases is captured and analyzed. Clearly this puts no performance burden on the database servers at all. It also provides a degree of isolation from the database administrators. Here are the issues:
    • Local database calls and stored procedures are not seen. Therefore you have an incomplete picture of database activity.
    • You must have the network infrastructure to support these appliances.
    • It can get expensive depending on how many databases you have and how geographically dispersed they are.
  • Host based – An agent is installed directly on each database server. The overhead is much lower than with native database logging, as low as 1% to 5%, although you should test this for yourself. Also, the agent sees everything, including stored procedures. Database administrators will have a hard time interfering with the process without being noticed. Deployment is simple, i.e. neither the networking group nor the datacenter team need be involved. Finally, the installation process should not require a database restart. (A minimal sketch of the agent-side policy check this approach enables appears after this list.) As for disadvantages, this is where Adrian Lane's analysis comes in. Here are his concerns:
    • Building and maintaining the agent software is difficult and more time consuming for the vendor than the network approach. However, this is the vendor's issue not the user's.
    • The analysis is performed by the agent right on the database. This could mean additional overhead, but has the advantage of being able to block a query that is not "in policy."
    • Under heavy load, transactions could be missed. But even if this is true, it's still better than the network based approach which surely misses local actions and stored procedures.
    • IT administrators could use the agent to snoop on database transactions to which they would not normally have access.
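
For a sense of what blocking a query that is not "in policy" could look like, here is a minimal sketch. The policy rules and function names are invented for illustration; this is not how Hedgehog or any other product actually works. The idea is simply that an intercepted SQL statement is checked against a policy before it is allowed through.

```python
import re

# Invented policy for illustration: which statement types an application account
# may run, and which tables it may never touch. A real agent hooks the database's
# execution path and applies far richer rules (time of day, row counts, etc.).
POLICY = {
    "app_user": {
        "allowed_statements": {"SELECT", "INSERT", "UPDATE"},
        "denied_tables": {"credit_cards_raw"},
    }
}

def check_statement(db_user, sql):
    """Return (allowed, reason) for an intercepted SQL statement."""
    rules = POLICY.get(db_user)
    if rules is None:
        return False, "unknown database user"
    verb = sql.strip().split()[0].upper()
    if verb not in rules["allowed_statements"]:
        return False, f"statement type {verb} not permitted"
    for table in rules["denied_tables"]:
        if re.search(rf"\b{table}\b", sql, re.IGNORECASE):
            return False, f"access to {table} denied"
    return True, "ok"

print(check_statement("app_user", "SELECT * FROM orders WHERE id = 7"))
print(check_statement("app_user", "SELECT pan FROM credit_cards_raw"))
print(check_statement("app_user", "DROP TABLE orders"))
```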

Dan Sarel, Sentrigo's Vice President of Product, responded in the comments section of Adrian Lane's post. (Unfortunately there is no dedicated link to the response; you just have to scroll down to it.) He addressed the "losing events under heavy load" issue by saying Sentrigo has customers processing heavy loads without losing transactions. He addressed the IT administrator snooping issue by saying that the Sentrigo sensors do not require database credentials, so database passwords are not available to IT administrators.

Controversy around the PCI DSS compliance program increased recently when Robert Carr, the CEO of Heartland Payment Systems, in an article in CSO Online, attacked his QSAs saying, "The audits done by our QSAs (Qualified Security Assessors) were of no value whatsoever. To the extent that they were telling us we were secure beforehand, that we were PCI compliant, was a major problem."

Mike Rothman, Senior VP of eIQNetworks, responded to Mr. Carr's comments not so much to defend PCI as to place PCI in perspective, i.e. compliance does not equal security. I discussed this myself in my post about the 8 Dirty Secrets of IT Security, specifically in my comments on Dirty Secret #6 – Compliance Threatens Security.

Eric Ogren, a security industry analyst, continued the attack on PCI in his article in SearchSecurity last week where he said, "The federal indictment this week of three men for their roles in the largest data security breach in U.S. history also serves as an indictment of sorts against the fraud conducted by PCI – placing the burden of security costs onto retailers and card processors when what is really needed is the payment card industry investing in a secure business process."

The federal indictment to which Eric Ogren referred was that of Albert Gonzalez and others for the breaches at Heartland Payment Systems, 7-Eleven, Hannaford, and two national retailers referred to as Company A and Company B. Actually, this is the second federal indictment of Albert Gonzalez that I am aware of. The first, filed in Massachusetts in August 2008, was for the breaches at BJ's Wholesale Club, DSW, OfficeMax, Boston Market, Barnes & Noble, Sports Authority, and TJX.

Bob Russo, the general manager of the PCI Security Standards Council, disagreed with Eric Ogren's characterizations of PCI, saying that retailers and credit card processors must take responsibility for protecting cardholder information.

Rich Mogull, CEO and Analyst at Securosis, responded to Bob Russo's article with recommendations to improve the PCI compliance program which he characterized as an "overall positive development for the state of security." He went on to say, "In other words, as much as PCI is painful, flawed, and ineffective, it has also done more to improve security than any other regulation or industry initiative in the past 10 years. Yes, it's sometimes a distraction; and the checklist mentality reduces security in some environments, but overall I see it as a net positive."

Rich Mogull seems to agree with Eric Ogren that the credit card companies have the responsibility and the power to improve the technical foundations of credit card transactions. In addition, he calls the PCI Council to task for such issues as:

  • incomplete and/or weak compliance requirements
  • QSA shopping
  • the conflict of interest created by allowing QSAs to perform audits and then sell security services based on the findings of those audits.

Clearly organizations have no choice but to comply with mandatory regulations. But the compliance process must be part of an overall risk management process. In other words, the compliance process is not equal to the risk management process but a component of it.

Finally, and most importantly, the enterprise risk management process must be more agile and responsive to new security threats than a bureaucratic regulatory body can be. For example, it may be some time before the PCI standards are updated to specify that firewalls must be able to work at the application level so that all the Web 2.0 applications traversing the enterprise network can be controlled. This is an important issue today, as this has been a major vector for compromising systems that are then used for funds transfer fraud.

The Department of Health and Human Services this week published the regulations for the "breach notification" provision of the Health Information Technology for Economic and Clinical Health (HITECH) Act, which is part of the American Recovery and Reinvestment Act of 2009 (ARRA). In effect, this is an extension of HIPAA and further strengthens HIPAA's Privacy Rule and Security Rule.

The new breach notification regulations are in a 121-page document. HHS also issued a press release that summarizes the new regulations.

This type of breach notification regulation started in California with SB 1386, which went into effect on July 1, 2003. Since then, about 40 other states have passed similar laws.

In 2008, California went on to pass a specific health care information protection law, SB 541, which requires notification of breaches and financial penalties up to $250,000 per incident. Here is a Los Angeles law firm's presentation on it.  Since SB 541 went into effect on January 1, 2009, there have been over 800 incidents reported.

20. August 2009 · Comments Off on 8 Dirty Secrets of the IT Security Industry – Provocative headline; content not so much · Categories: Application Security, Compliance, Innovation, Risk Management, Security Management

CSO Online Magazine has an article about IBM ISS Security Strategist Joshua Corman's concerns with the security industry. While I agree with much of what he says, I disagree with his core premise, expressed in Dirty Secret 1. Here are my comments on each of Josh's eight dirty secrets.

"Dirty Secret 1: Vendors don't need to be ahead of the threat, just the buyer – This is the problem that leads to the seven "dirty secrets" that
follow. In essence, Corman said, the goal of the security market is to
make money, not to ensure the customer's security.
"

I find it surprising that a representative of one of the largest and most profitable enterprises in the world attacks other vendors for wanting to make money, as if making money is bad. Is he serious about attacking capitalism? Is security some special market where profits are bad? From my perspective, making money is the result of solving client problems and helping them meet their objectives.

"Dirty Secret 2: AV certification omissions – While AV tools detect replicating malware like worms, they fail to identify such as [sic] non-replicating malware as Trojans."

Aside from the grammar issue, I agree that some vendors are having difficulty keeping up with the constantly evolving threat landscape. However, this creates opportunities for new vendors. Joseph Schumpeter called this "creative destruction."

"Dirty Secret 3: There is no perimeter – Corman said those who truly believe there's still a network "Perimeter" may as well believe in Santa Claus."

There has never been a perimeter in the sense that if you just protect the edge of your network, you are safe. I do agree that it can be difficult to know where that edge is. However, there is still an important role to be played by a perimeter firewall that understands applications, users, and content. Beyond that, good security has always been about "defense-in-depth."

"Dirty Secret 4: Risk management threatens vendorsRisk
management really helps an organization understand its business and its
highest level of risk, Corman said. But a company's priorities don't
always map to what the vendors are selling."

Again, this allusion to disreputable vendors. At any point in time, there surely are disreputable vendors. But they don't last long. Of course any IT Security control being deployed should be in the context of how risk is being reduced.

"Dirty Secret 5: There is more to risk than weak software – Corman
said the lion's share of the security market is focused on software
vulnerabilities. But software represents only one of the three ways to
be compromised, the other two being weak configurations and people."

No argument here, but not really new. The issues around security awareness training, for example, are much deeper than lack of money being spent on it. Regarding configuration management, has the issue been lack of attention or lack of good products to deal with the issues? It's a hard problem.

"Dirty Secret 6: Compliance threatens security – Compliance with such laws and industry standards as Sarbanes-Oxley and PCI DSS
drives companies to spend far more on security than they might
otherwise. Security vendors have obviously seized upon this fact,
offering products that do everything from offer PCI compliance out of
the box to ultimate cure-alls for healthcare entities coping with the
demands of HIPAA. Of course, this too leads to companies buying security tools that fail to properly address the particular risks they face."

I agree that compliance can threaten security, and there surely are cases where vendors have succeeded by focusing on compliance rather than on reducing risk. When an organization focuses only on compliance requirements, it falls short of what it can and should be doing to protect its assets. In fact, compliance represents a floor, a bare minimum level of security.

Put another way, if you only focus on compliance, you will surely not be maximizing the value of your security investment. At the very least, there is no way that regulatory bureaucracies can keep up with the changing threat landscape. 

"Dirty Secret 7: Vendor blind spots allowed for Storm – The Storm botnet, as an archetype, is being copied and improved. The Storm era of botnets is alive and well, nearly two years from when it first appeared, Corman said."

As I said in my comment on Dirty Secret 2, some vendors may not be responding to the changing threat landscape, but there are others who are. If you feel your vendors are not responding, look for new ones. There is a lot of innovation in the IT Security industry.

"Dirty Secret 8: Security has grown well past "do it yourself" – Technology
without strategy is chaos, Corman said. The sheer volume of security
products and the rate of change has super-saturated most organizations
and exceeded their ability to keep up."

Any action or tactic that is not part of a strategy is obviously chaos. First Corman says that vendors are not keeping up, and now he is saying that enterprises cannot keep up (without his help). With all due respect, let's remember that Corman is part of IBM's consulting organization. On the other hand, there is no harm in repeating that technology by itself is not the answer. It's people, process, and technology, as it has always been.