NetworkWorld has an interesting article today on the perils of social networking. The article focuses on the risk of employees transmitting confidential data. However, it's actually worse than that. There are also risks of malware infection via spam and other social engineering tactics. Twitter is notorious for its lax security. See my post, Twitter is Dead.

Blocking social networks completely is not the answer, just as disconnecting from the Internet was not the answer in the '90s. Facebook, Twitter, and LinkedIn, among others, can be powerful marketing and sales tools.

The answer is "IT Security 2.0" tools that can monitor these and hundreds of other Web 2.0 applications to block incoming malware and outgoing confidential documents.

26. October 2009 · Comments Off on Evil Maid attack shows that laptop hard drive encryption not the silver bullet · Categories: Breaches, Malware, Risk Management

As important as laptop hard drive encryption is, it's not a silver bullet for protecting confidential data on laptops. Bruce Schneier described Joanna Rutkowska's "evil maid" attack against a disk encryption product. This type of attack would probably work against any disk encryption product, because disk encryption does not defend against an attacker who can get at your encryption key, for example by replacing the unencrypted boot loader with one that captures your passphrase.

As usual, risk management is about understanding the threat you are trying to mitigate. Disk encryption does solve the stolen-laptop problem. But if an attacker can get access to your laptop multiple times without your realizing it, the evil maid attack can defeat disk encryption.
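To make the attack concrete, here is a toy Python sketch, purely illustrative and not tied to any real disk encryption product, of why encryption cannot survive a tampered boot path:

```python
# Toy illustration (not a real attack): disk encryption is only as
# trustworthy as the unencrypted boot code that prompts for the passphrase.

class Bootloader:
    """Legitimate boot code: prompts for the passphrase and unlocks the disk."""
    def boot(self, passphrase: str) -> str:
        return f"disk unlocked with {passphrase!r}"

class EvilMaidBootloader(Bootloader):
    """Tampered boot code: behaves identically from the victim's point of
    view, but records the passphrase for the attacker's second visit."""
    def __init__(self):
        self.stolen = []
    def boot(self, passphrase: str) -> str:
        self.stolen.append(passphrase)   # visit 1: silently capture the key
        return super().boot(passphrase)  # the victim notices nothing

# Visit 1: the "maid" swaps the bootloader; the victim later types the passphrase.
evil = EvilMaidBootloader()
print(evil.boot("hunter2"))  # disk unlocks normally
# Visit 2: the attacker reads the captured passphrase and decrypts at leisure.
print(evil.stolen)           # ['hunter2']
```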

PGP, a disk encryption vendor, discusses the limitations of disk encryption as well as the other defenses available to protect against evil maid and similar attacks.

Bruce Schneier notes that two-factor authentication will defeat the evil maid attack. BTW, don't leave your token in the hotel room for the evil maid to find. 🙂

23. October 2009 · Comments Off on Relational databases dead for log management? · Categories: Compliance, Log Management, Security Management

Larry Walsh wrote an interesting post this week, Splunk Disrupts Security Log Auditing, in which he claims that Splunk's success comes from capturing market share in the security log auditing market thanks to its Google-like approach to storing log data rather than using a "relational database."

There was also a very good blog post at Securosis in response – Splunk and Unstructured Data.

While there is no doubt that Splunk has been successful as a company, I am not so sure it's due to security log auditing.

It's my understanding that the primary use case for Splunk is actually in operations, where, for example, a network administrator wants to search logs to resolve an alert generated by an SNMP-based network management system. Most SNMP-based network management systems are good at telling you "what" is going on, but not very good at telling you "why."

So when the network management system generates an alert, the admin goes to Splunk to find the logs that show what actually happened, so s/he can fix the root cause. For this use case, you don't really need more than a day's worth of logs.
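As a minimal sketch of that workflow (the log path, log format, and alert time below are hypothetical, not taken from any particular product):

```python
# Given an alert timestamp from the network management system, pull the
# surrounding log lines to find the "why" behind the "what".
from datetime import datetime, timedelta

ALERT_TIME = datetime(2009, 10, 23, 14, 5)   # hypothetical SNMP alert time
WINDOW = timedelta(minutes=10)

def lines_near_alert(path: str):
    with open(path) as f:
        for line in f:
            # Assume each line starts with an ISO timestamp, e.g.
            # "2009-10-23T14:03:11 router1 %LINK-3-UPDOWN: ..."
            ts = datetime.fromisoformat(line.split(" ", 1)[0])
            if abs(ts - ALERT_TIME) <= WINDOW:
                yield line.rstrip()

for event in lines_near_alert("/var/log/network.log"):
    print(event)
```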

Splunk's brilliant move was to allow "free" usage of the software for one day's worth of logs, or some limited amount of storage that generally would not exceed one day. In reality, even a few hours of logs are very valuable. This freemium model has been very successful.

Security log auditing is a very different use case. It can require a year or more of data and sophisticated reporting capabilities. That is not to say that a Google-like storage approach cannot accomplish this.

In fact, security log auditing is just another online analytical processing (OLAP) application, albeit one with potentially huge amounts of data. It's been at least ten years since the IT industry realized that OLAP applications require a different way to organize stored data than online transaction processing (OLTP) applications do. OLTP applications still use traditional relational databases.

There has been much experimentation with ways to store data for OLAP applications. However, there is still a lot of value in the SQL language as a kind of open industry-standard API to stored data.

So I would agree that traditional relational database products are not appropriate for log management data storage, but SQL as a language has merit as the "API layer" between the query and reporting tools and the data.
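Here is a minimal sketch of that idea, using sqlite3 purely for illustration; the point is the SQL interface between the reporting tool and the data, not the storage engine underneath, and the table layout and event names are my own assumptions:

```python
# SQL as the "API layer" for log reporting, independent of physical storage.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE logs (ts TEXT, host TEXT, event TEXT)")
conn.executemany(
    "INSERT INTO logs VALUES (?, ?, ?)",
    [
        ("2009-01-15T03:12:44", "pos-17", "login_failed"),
        ("2009-01-15T03:12:59", "pos-17", "login_failed"),
        ("2009-02-02T11:01:03", "pos-03", "login_ok"),
    ],
)

# A typical audit-style report: failed log-ins per host per month.
for row in conn.execute(
    """SELECT substr(ts, 1, 7) AS month, host, COUNT(*) AS failures
       FROM logs WHERE event = 'login_failed'
       GROUP BY month, host ORDER BY failures DESC"""
):
    print(row)
```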

21. October 2009 · Comments Off on Phishing emails have become more convincing · Categories: Botnets, Funds Transfer Fraud, Malware, Social Engineering

The "quality" of phishing emails continues to improve. In other words, the attackers continue to make their phishing emails seem legitimate and thus trick more people into taking the emails' suggested actions. An article in Dark Reading this week discusses research done by F-Secure about new, more convincing, phishing attacks generated by the Zbot botnet which attempts to infect victims with the Zeus trojan. I wrote about how the Zeus trojan is used as a keylogger to steal banking credentials which enable funds transfer fraud

While one might consider the Dark Reading article a public relations piece for F-Secure, its validity was bolstered for me by Rich Mogull at Securosis, who wrote about "the first phishing email I almost fell for," i.e., one of these Zbot phishing emails.

If a security person like Rich Mogull, who has the requisite security "paranoia DNA," can almost be fooled, then the phishing attackers are indeed improving their social engineering craft.

Wired Magazine reported this week that Wal-Mart kept secret a breach it discovered in November 2006, one that had been ongoing for 17 months. According to the article, Wal-Mart claimed there was no reason to disclose the exploit at the time because it believed no customer data or credit card information was breached.

They did admit that custom-developed point-of-sale software was breached. The California breach disclosure law, covering the financial information of California residents, went into effect on July 1, 2003, and was extended to health information on January 1, 2009. I blogged about that here.

I think it would be more accurate to say that the forensics analysts hired by Wal-Mart could not "prove" that customer data was breached, i.e., they could not find specific evidence that it was. One key piece of information the article revealed: "The company’s server logs recorded only unsuccessful log-in attempts, not successful ones, frustrating a detailed analysis."

Based on my background in log management, I understand the approach of collecting only "bad" events like failed log-ins. Other than this sentence, the article does not discuss which types of events were and were not collected. Either way, the analysts had very little visibility into what was really going on.
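A toy illustration of why that collection policy frustrates forensics: the event that matters most, a successful log-in by an attacker using valid stolen credentials, is exactly the event that never gets stored. The event records below are invented for illustration:

```python
# If the collection filter keeps only failed log-ins, a successful log-in
# by an attacker simply never appears in the stored logs.
events = [
    {"user": "clerk01", "result": "failure"},
    {"user": "attacker", "result": "success"},   # the event that matters
    {"user": "clerk02", "result": "failure"},
]

# Keep only unsuccessful attempts, per the policy the article describes.
stored = [e for e in events if e["result"] == "failure"]

print(stored)  # the attacker's successful log-in is not there at all
```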

The problem Wal-Mart faced at the time was that the cost of collecting and storing all the logs in an accessible manner was prohibitive. Fortunately, log data management software has improved and hardware costs have dropped dramatically. In addition, there are new tools for user activity monitoring.

However, my key reaction to this article is disappointment that Wal-Mart chose to keep this incident secret. News of a Wal-Mart breach might have motivated other retailers to strengthen their security defenses and increase their vigilance, which might have reduced the number of breaches that have occurred since 2006. It might also have increased the rigor QSAs applied to PCI DSS audits sooner.

In closing, I would like to call attention to Adam Shostack and Andrew Stewart's book, "The New School of Information Security," and quote a passage from page 78 about the value of disclosing breaches beyond the need to inform people whose personal financial or health information may have been exposed:

"Breach data is bringing us more and better objective data than any past information-sharing initiative in the field of information security. Breach data allows us to see more about the state of computer security than we've been able to with traditional sources of information. … Crucially, breach data allows us to understand what sorts of issues lead to real problems, and this can help us all make better security decisions."

12. October 2009 · Comments Off on IBM CIO study ranks Risk Management and Compliance #3 of 10 CIO visionary plans · Categories: IT Security 2.0, Risk Management

On September 10th, IBM released the results of a global study (registration required) it conducted of 2,500 CIOs from around the world. Of the ten top "visionary plans," these CIOs ranked Risk Management and Compliance third. Business Intelligence and Analytics was first, followed by Virtualization. I also found it significant that Customer and Partner Collaboration came in fourth.

Unfortunately, the report did not divulge details of the methodology beyond saying that over 2,500 CIOs were interviewed. Granting that IBM is an able marketing organization, it genuinely wants to understand the priorities of CIOs so it can respond with the right services and increase its revenue. On that reading, these priorities do represent what CIOs are thinking.

A more cynical view is that this study is simply a marketing tool for IBM Global Services, which in this case is advising CIOs that Risk Management and Compliance should be their third-highest priority. Either way, the report highlights the importance of Risk Management and Compliance.

Looking at the study as a whole, it correlates the use of information technology to drive innovation with higher corporate profits. (Reminder: correlation and causation are not the same thing.) In addition, information technology creates new risks which must be understood and mitigated.

Perhaps I am writing this because it supports my previously stated position that risk management enables innovation: Web 2.0, for example, creates new risks which, if not mitigated, can completely outweigh its value.

09. October 2009 · Comments Off on Cloud-based Data Leak Detection complements Data Leak Prevention – Monitoring P2P Networks · Categories: Breaches, Data Loss Prevention, IT Security 2.0, Privacy

Can you imagine your Data Leak Prevention system not being perfect? Is there value in a service that scans P2P networks looking for leaked data that eluded your Data Leak Prevention (DLP) controls?

Tiversa offers such a service. As an example of its value, a Washington Post article reports Tiversa's claim that "the personal data of tens of thousands of U.S. soldiers – including those in the Special Forces – continue to be downloaded to unauthorized computer users in countries such as China and Pakistan…"

On a separate but possibly related note, there was an Ars Technica article late last week on a bill working its way through Congress called the "Informed P2P User Act." From the Ars Technica article:

"First, it requires P2P software vendors to provide "clear and
conspicuous" notice about the files being shared by the software and
then obtain user consent for sharing them. Second, it prohibits P2P
programs from being exceptionally sneaky; surreptitious installs are
forbidden, and the software cannot prevent users from removing it."

It's clear that P2P represents risks that can be reduced by both technical and legal means.


04. October 2009 · Comments Off on Bogus Identity Theft Study – Conclusions? Who cares. · Categories: Identity Theft

Slashdot posted a story about an identity theft study based on interviews with people convicted of identity theft. The post Slashdot referred to provides more detail.

The supposed conclusion drawn by Heith Copes (University of Alabama at Birmingham) and Lynne Vieraitis (University of Texas at Austin) is:

"Despite public perceptions of identity theft being a high-tech,
computer driven crime, it is rather mundane and requires few technical
skills. Identity thieves do not need to know how to hack into large,
secure databases. They can simply dig through garbage or pay insiders
for information. No particular group has a monopoly on the skills
needed to be a capable identity thief."

The flaw in their work, of course, is that they interviewed only thieves who were caught! This is classic selection bias: to draw meaningful conclusions about identity theft as a whole, they would also need to interview thieves who did not get caught, which of course increases the difficulty dramatically.

It reminds me of a story I heard years ago from my friend Joe. One night he was walking down a narrow street that had only one street light. Under it was a drunk who seemed to be looking for something. My friend Joe went up to him and asked if he could help. The drunk said, "Sure, I lost my keys and I'm looking for them." My friend asked the drunk, "Where did you lose them?" The drunk responded, "Over there." My friend asked, "Then why are you looking over here?" The drunk answered, "Well it's dark over there. The light is over here."

04. October 2009 · Comments Off on Canadian study reports breaches triple in 2009. Is this a valid statistic? · Categories: Breaches, Security Management

Earlier this week, Telus released the results of its 2009 study on Canadian IT Security Practices, conducted jointly with the Rotman School of Management at the University of Toronto, which claimed that the average number of breaches tripled to 11.3. Here is the press release. But are these valid claims?

First, let's take a deeper look at the average of 11.3. Simply averaging the raw answers to the 2009 question about the number of breaches during the last 12 months does indeed yield a mean of 11.3. However, let's take a closer look at the actual responses:

Number of Breaches    Percentage of Organizations
0                     14%
1                      6%
2 to 5                33%
6 to 10                9%
11 to 25               7%
26 to 50               3%
51 to 100              2%
More than 100          2%
Don't know            23%

Given the bucketed responses and the outliers at the high end, the mean is not a meaningful summary statistic; the small number of organizations reporting 50 or more breaches significantly skews it. The mode, the 2-to-5 bucket, is far more representative.
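A rough back-of-the-envelope reconstruction illustrates the skew. The study published only percentage buckets, so every midpoint below, including 150 for "More than 100", is my assumption, not study data:

```python
# Approximate the reported 11.3 mean from the published buckets.
buckets = [            # (assumed bucket midpoint, share of respondents)
    (0, 0.14), (1, 0.06), (3.5, 0.33), (8, 0.09),
    (18, 0.07), (38, 0.03), (75.5, 0.02), (150, 0.02),
]
known = sum(share for _, share in buckets)  # 0.76; excludes the 23% "don't know"
mean = sum(mid * share for mid, share in buckets) / known
print(round(mean, 1))  # ~11.6, close to the reported 11.3

# How much of that mean comes from the 4% of respondents above 50 breaches?
tail = (75.5 * 0.02 + 150 * 0.02) / known
print(round(tail, 1))  # ~5.9, i.e. roughly half the mean from 4% of organizations
```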

Also, there is no attempt to correlate the number of breaches an organization suffered with its size. Of the 500 participating organizations, 31% had under 100 employees and 23% had over 10,000. The point here is that the outliers may very well be a small group of very large organizations.

Now let's address the claim of "tripling." What could account for this huge increase?

  1. It may just be a case of people being more honest this year, i.e. reporting more breaches. After all, this is just a survey.
  2. It may be that organizations actually have better security controls in place and therefore detected more breaches.
  3. It may be a function of the organizations participating. There were only 297 participants in 2008 versus 500 in 2009.
  4. It could be the change in the wording of the question in 2009 versus 2008. Here is the question from 2008 (in fact, the only place in the study that uses real numbers rather than percentages):

Q40. A major IT security incident can be defined as one which causes a disruption to normal activities such that significant time, resources, and/or payments are required to resolve the situation. Based on this definition, how many major security incidents do you estimate your organization has experienced in the last 12 months?

1 to 5           63%
6 to 10           2%
More than 10      1%
Don't know       24%

The 2009 study question:

Q48. How many Security breaches do you estimate your organization has experienced in the past 12 months?

I provided the responses earlier in this post. The point is that in 2008 the question specifically asked about major incidents, while in 2009 it asked about all breaches.

Also note that in both cases the organizations were asked for "estimates." Don't most of these organizations have security incident response teams? At least the 69% with over 100 full-time employees? Wouldn't they know exactly how many incidents they investigated and how many were actual breaches?

I suppose studies like these, based on surveys, have some value, but we really need information based on facts and analysis based on sound techniques.

04. October 2009 · Comments Off on URLZone – Funds Transfer Fraud innovation accelerates · Categories: Botnets, Breaches, Funds Transfer Fraud, Innovation, Malware

The Web security firm Finjan published a report (Issue 2, 2009) this week on a more advanced funds transfer fraud trojan called URLZone. It basically follows the now well-understood process I blogged about previously, where:

  1. Cybercriminals infect Web sites using, for example, Cross Site Scripting.
  2. Web site visitors are infected with a trojan, in this case URLZone.
  3. The trojan is used to collect bank credentials.
  4. Cybercriminals transfer money from the victims to mules.
  5. The money is transferred from the mules to the cybercriminals.

URLZone is a more advanced trojan because of the level of automation of the funds transfer fraud (direct quotes from the Finjan report):

  • It hides its fraudulent transaction(s) in the report screen of the compromised account.
  • Its C&C [Command and Control] server sends instructions over HTTP about the amount to be stolen and where the stolen money should be deposited.
  • It logs and reports on other web accounts (e.g., Facebook, PayPal, Gmail) and banks from other countries.
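To make the first of those bullets concrete, here is a toy sketch of what "hiding" a fraudulent transaction in the victim's report screen amounts to. It is entirely illustrative; the field names and logic are invented and imply nothing about URLZone's actual implementation:

```python
# Toy illustration: the trojan filters the rendered account statement so
# its own transfer never appears on screen.
statement = [
    {"desc": "Salary", "amount": 2500.00},
    {"desc": "Rent", "amount": -900.00},
    {"desc": "Transfer to mule account", "amount": -480.00},  # the fraud
]

def doctor(statement):
    # Drop the fraudulent line before display; the victim only notices
    # when the real balance eventually fails to add up.
    return [tx for tx in statement if "mule" not in tx["desc"]]

for tx in doctor(statement):
    print(tx)
```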

In the past, such trojans were merely keyloggers that sent credentials back to the cybercriminals. Those exploits were mostly against small businesses and schools, where relatively large amounts of money could be stolen. The URLZone trojan's much more sophisticated command and control enables a much higher volume of transactions: Finjan reports 6,400 victims losing 300,000 euros in 22 days. So far all the victims have been in Germany.