12. September 2009 · Comments Off on Apache.org site hacked – details published · Categories: Breaches, Risk Management, Security Management

The Apache.org team published the details of a recent incident in which one of their web sites was breached. While the details are, of course, very technical, they provide a great learning experience for the rest of us. Dan Goodin of the Register summarized the incident.

Unfortunately, most organizations are very reluctant to even admit when they are hacked, let alone share the details of the experience. Hence the various federal and state laws forcing organizations to report incidents where people's personal and/or financial information may have been disclosed.

Given that Apache produces open source software (including the number one web server), it is fitting that they would be equally open about a breach.

07. September 2009 · Comments Off on Court allows bank customer to sue bank for “negligent” security practices · Categories: Authentication, Breaches, Funds Transfer Fraud, Legal, Risk Management, Security Management, Vendor Liability

Computerworld reported last week that a judge in Illinois ruled that a couple who lost $26,500 when their bank account was breached can sue the bank for negligence for not implementing "state-of-the-art" security measures which would have prevented the breach.

While bank credit card issuers have regularly sued credit card processors and retailers to recoup losses from breaches, this is the first case I am aware of in which a judge has ruled that a customer can sue the bank itself for negligence.

The more detailed blog post by attorney David Johnson, upon which the Computerworld article is based, raises some really interesting aspects of this case.

The plaintiffs sued Citizens Financial Bank for negligence because it had not implemented multifactor authentication. The timeline is important here. The Federal Financial Institutions Examination Council (FFIEC) issued multifactor authentication guidelines in 2005. By 2007, when the plaintiffs' breach occurred, the bank had still not implemented multifactor authentication. The judge, Rebecca Pallmeyer of the U.S. District Court for the Northern District of Illinois, found this two-year delay unacceptable.

Two interesting complications: (1) the account from which the money was stolen was a home equity line of credit, not a deposit or consumer asset account; (2) this credit account was linked to the plaintiffs' business checking account. I discussed the differences between consumer and business account liability here. Fortunately for the plaintiffs, the judge brushed these issues aside and focused on the lack of multifactor authentication.
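For readers unfamiliar with what the FFIEC guidance was asking for, here is a minimal sketch of one common second factor, a time-based one-time password (TOTP) per RFC 6238, using only the Python standard library. The shared secret below is hypothetical, and a real deployment would use a vetted library rather than hand-rolled crypto.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute an RFC 6238 time-based one-time password."""
    key = base64.b32decode(secret_b32, casefold=True)
    counter = int(time.time()) // interval          # 30-second time step
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Hypothetical shared secret; the bank runs the same computation and
# compares, so a stolen password alone is no longer enough to log in.
print(totp("JBSWY3DPEHPK3PXP"))
```

The point of the second factor is exactly what this case turned on: even if a keylogger captures the password, the attacker still needs the current six-digit code.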

One issue that was not addressed: where was Fiserv in all of this? They are the provider of the online banking software used by Citizens Financial Bank. Were they offering some type of multifactor authentication? I would assume yes, although I have not been able to confirm this.

In conclusion, attorney David Johnson makes clear that this ruling increases the risk to banks (and possibly other organizations responsible for protecting money and/or other assets of value) if they do not implement state-of-the-art security measures.

07. September 2009 · Comments Off on Older versions of WordPress are under attack – Welcome to the real world · Categories: Breaches, Risk Management

In the last week, vulnerabilities in older versions of WordPress software have been exploited resulting in blog posts being deleted and the blog sites being used for malicious purposes. Welcome to the real world.

The shock that some people are expressing, like Robert Scoble on Scobleizer, is somewhat surprising. It's clear that WordPress knew about the vulnerabilities for some time and urged self-hosting customers to upgrade to WordPress version 2.8.4. Some of those who did not upgrade have paid the price. Here is some additional useful information.

People have been snickering for years about Microsoft's security travails. We have since learned that Microsoft does not have a monopoly on security vulnerabilities and exploits. All software products have vulnerabilities. The issue is that as a software product becomes popular, it attracts cyber criminals. Therefore, software companies, as they become successful, must increase their focus on security issues, which WordPress seems to have done.

And we as consumers of software have risk management responsibilities too:

  • Upgrading to current releases
  • Backing up regularly to increase resiliency, i.e. the ability to recover quickly from an attack (a minimal backup sketch follows this list)
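As a sketch of the second item, here is a minimal nightly-backup script for a self-hosted WordPress blog. The paths and database name are hypothetical, and it assumes mysqldump can read credentials from the usual ~/.my.cnf; treat it as a starting point, not a hardened backup strategy.

```python
import datetime
import subprocess
import tarfile
from pathlib import Path

# Hypothetical locations -- adjust for your installation.
WP_CONTENT = Path("/var/www/blog/wp-content")
BACKUP_DIR = Path("/var/backups/wordpress")

def backup_blog() -> None:
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    stamp = datetime.date.today().isoformat()

    # 1. Dump the WordPress database (credentials from ~/.my.cnf).
    sql_path = BACKUP_DIR / f"blog-db-{stamp}.sql"
    with sql_path.open("wb") as out:
        subprocess.run(["mysqldump", "wordpress"], stdout=out, check=True)

    # 2. Archive uploaded media, themes, and plugins.
    with tarfile.open(BACKUP_DIR / f"wp-content-{stamp}.tar.gz", "w:gz") as tar:
        tar.add(WP_CONTENT, arcname="wp-content")

if __name__ == "__main__":
    backup_blog()
```

Run from cron and copy the output off the server; a backup that lives on the compromised host does little for resiliency.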

Controversy around the PCI DSS compliance program increased recently when Robert Carr, the CEO of Heartland Payment Systems, attacked his QSAs in an article in CSO Online, saying, "The audits done by our QSAs (Qualified Security Assessors) were of no value whatsoever. To the extent that they were telling us we were secure beforehand, that we were PCI compliant, was a major problem."

Mike Rothman, Senior VP of eIQNetworks, responded to Mr. Carr's comments, not so much to defend PCI as to place it in perspective, i.e. compliance does not equal security. I discussed this myself in my post about the 8 Dirty Secrets of IT Security, specifically in my comments on Dirty Secret #6 – Compliance Threatens Security.

Eric Ogren, a security industry analyst, continued the attack on PCI in his article in SearchSecurity last week, where he said, "The federal indictment this week of three men for their roles in the largest data security breach in U.S. history also serves as an indictment of sorts against the fraud conducted by PCI – placing the burden of security costs onto retailers and card processors when what is really needed is the payment card industry investing in a secure business process."

The federal indictment to which Eric Ogren referred was that of Albert Gonzalez and others for the breaches at Heartland Payment Systems, 7-Eleven, Hannaford, and two national retailers referred to as Company A and Company B. Actually, this is the second federal indictment of Albert Gonzalez that I am aware of. The first, filed in Massachusetts in August 2008, was for the breaches at BJ's Wholesale Club, DSW, OfficeMax, Boston Market, Barnes & Noble, Sports Authority, and TJX.

Bob Russo, the general manager of the PCI Security Standards Council disagreed with Eric Ogren's characterizations of PCI, saying that retailers and credit card processors must take responsibility for protecting cardholder information.

Rich Mogull, CEO and Analyst at Securosis, responded to Bob Russo's article with recommendations to improve the PCI compliance program, which he characterized as an "overall positive development for the state of security." He went on to say, "In other words, as much as PCI is painful, flawed, and ineffective, it has also done more to improve security than any other regulation or industry initiative in the past 10 years. Yes, it's sometimes a distraction; and the checklist mentality reduces security in some environments, but overall I see it as a net positive."

Rich Mogull seems to agree with Eric Ogren that the credit card companies have the responsibility and the power to improve the technical foundations of credit card transactions. In addition, he takes the PCI Council to task for issues such as:

  • incomplete and/or weak compliance requirements
  • QSA shopping
  • the conflict of interest the Council created by allowing QSAs to perform audits and then sell security services based on the findings of those audits.

Clearly organizations have no choice but to comply with mandatory regulations. But the compliance process must be part of an overall risk management process. In other words, the compliance process is not equal to the risk management process but a component of it.

Finally, and most importantly, the enterprise risk management process must be more agile and responsive to new security threats than a bureaucratic regulatory body can be. For example, it may be some time before the PCI standards are updated to specify that firewalls must be able to work at the application level so that all the Web 2.0 applications traversing the enterprise network can be controlled (a toy illustration follows below). This is an important issue today, as this has been a major vector for compromising systems that are then used for funds transfer fraud.
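To make the distinction concrete, here is a toy illustration of application-level (layer 7) policy versus port-based filtering. The domain names and blocked-application list are invented for illustration; a real application-aware firewall classifies traffic far more deeply than a Host-header lookup.

```python
# Port 80/443 alone tells a traditional firewall nothing about which
# Web 2.0 application is in use; the HTTP Host header (or TLS SNI)
# does. Classify outbound requests by hostname instead of port.
BLOCKED_APPS = {
    "filesharing.example.com": "peer-to-peer file sharing",
    "webmail.example.net": "unsanctioned webmail",
}

def policy_decision(host: str) -> str:
    """Return an allow/deny decision for an outbound HTTP request."""
    app = BLOCKED_APPS.get(host.lower())
    if app:
        return f"DENY ({app})"
    return "ALLOW"

print(policy_decision("filesharing.example.com"))  # DENY (peer-to-peer ...)
print(policy_decision("www.example.org"))          # ALLOW
```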

The recent Goldman Sachs breach, in which proprietary trading software was stolen, highlights the risk of insider fraud and abuse. RGE, Nouriel Roubini's website, has the best analysis I've read on the implications of such an incident.

Here is the money quote, "What is troubling about the Goldman leak is how unprepared our infrastructure is against active measures. We already have good security practices, defamation laws and laws against market manipulation. What we don't have is a mechanism for dealing with threats that appear to be minor, but where the resulting disinformation is catastrophic."

I cannot imagine any better proof of the need for better user, application, content, and transaction monitoring and control tools.

Read the whole article.

03. August 2009 · Comments Off on LoJack-For-Laptops creates rootkit-like BIOS vulnerability · Categories: Breaches, Malware, Risk Management, Security Management, Security Policy

Alfredo Ortega and Anibal Sacco, researchers for penetration testing software company Core Security Technologies, demonstrated at Black Hat how Absolute Software's Computrace LoJack for Laptops contains a BIOS rootkit-like vulnerability. This is significant because about 60% of laptops ship with it installed, including those from Dell, HP, Toshiba, and Lenovo; these companies are listed as OEM partners on Absolute's web site.

Here is a good article which describes how LoJack for Laptops works and the vulnerability. Lest you think this is only a Windows issue, the software is also used on Macs, although Apple is not listed as an OEM partner.

In order for this vulnerability to be exploited, the bad guy would need physical access to your laptop or remote access with Admin/root privileges. If you are running in User-mode, which should be an enforced policy, the risk drops significantly (a quick privilege check is sketched after the list below). The high-risk exploits are:

  • A keylogger is installed and used to capture the passwords that you use, for example, to access your bank accounts
  • An agent is installed that enables the bad guy to retrieve whatever data is stored on the system, such as intellectual property, financial records, etc.
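As a small illustration of enforcing that User-mode policy, the sketch below does a best-effort check for elevated privileges on Windows or Unix-like systems. It is a hypothetical compliance snippet, not part of any vendor's product.

```python
import ctypes
import os

def running_with_admin_rights() -> bool:
    """Best-effort check for elevated privileges on Windows or Unix."""
    if os.name == "nt":
        # Real Win32 call; nonzero when the process token is elevated.
        return bool(ctypes.windll.shell32.IsUserAnAdmin())
    return os.geteuid() == 0  # root on Unix-like systems

if __name__ == "__main__":
    if running_with_admin_rights():
        print("WARNING: running with admin rights; policy violation.")
```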

There are always trade-offs in technology. By definition, adding features increases the attack surface. The good news is that LoJack for Laptops reduces the risk of disclosing information on lost or stolen laptops. The bad news is that by using it, you are increasing the risk of a rootkit-like attack on the laptop.

03. August 2009 · Comments Off on Vendor “fined” by customer for lax security that resulted in an incident · Categories: Breaches, Risk Management, Security Management, Vendor Liability

Richard Bejtlich reports on a story that appeared in the Washington Times last week: "Apptis Inc., a military information technology provider, repaid $1.3 million of a $5.4 million Pentagon contract after investigators said the company provided inadequate computer security and a subcontractor's system was hacked from an Internet address in China …"

As Richard said, this may be a first. When is this going to happen in the commercial market?

02. August 2009 · Comments Off on The most severe breaches result from application level attacks · Categories: Application Security, Breaches, Risk Management, Security Management

Last week, I highlighted the Methods of Attack data from the Verizon Business 2009 Data Breach Investigations Report. Today, I would like to discuss an equally important finding they reported about Attack Vectors (page 18).

The surprise is that only 10% of the breaches were traced to network devices, and network devices accounted for only 11% of the actual records breached. The top vector was Remote Access and Management at 39%; Web Applications came in second at 37%. Even more interesting, 79% of all records breached were the result of the Web Application vector!

Clearly there has been a major shift in attack vectors. While this may not be a total surprise, we now have empirical evidence. We must focus our security efforts on applications, users, and content.

31. July 2009 · Comments Off on Clampi malware plus exploit raises risk to extremely high · Categories: Uncategorized

The risk associated with a known three-year-old Trojan-type virus called Clampi has gone from low to extremely high due to the sophisticated exploit created and being executed by an Eastern European cyber-crime group.

Just as businesses can differentiate themselves by applying creative processes to commodity technology, so now can cyber-criminals. Clampi has been around since 2007. Symantec, as of July 23, 2009, considered the risk posed by Clampi to be Risk Level 1: Very Low. I don’t mean to pick on Symantec; McAfee, which calls the virus LLomo, had its Risk Level set to Low as of July 16, 2009. TrendMicro’s ThreatInfo site was so slow, I gave up trying to find the Risk Level they chose.

The exploit process used was first reported (to my knowledge) by Brian Krebs of the Washington Post on July 20, 2009.

On July 29, 2009, Joe Stewart, Director of Malware Research for the Counter Threat Unit (CTU) of SecureWorks released a summary of his research about Clampi and how it’s being used, just prior to this week’s Black Hat Security Conference in Las Vegas.

Clampi is a Trojan-type virus which, when installed on your desktop or laptop, can be used by this cyber-crime group to steal financial data, apparently including the user identification and password credentials used for online banking and other types of online commerce. This Eastern European cyber-crime group controls a large number of PCs infected with Clampi and is stealing money from both consumers and businesses.

Brian Krebs of the Washington Post ran a story on July 2, 2009 about a similar exploit using a different PC-based Trojan called Zeus. $415,000 was stolen from Bullitt County, KY.

Trojans like Clampi and Zeus have been around for years. What makes these exploits so high risk is the methods by which these Trojans infect us and the sophistication of the exploits’ processes for extracting money from bank accounts.

Security has always been a “cat-and-mouse” game where the bad guys develop new exploits and the good guys respond. So now I am sure we are going to see the creativity of the security vendor industry applied to reducing the risk associated with this type of exploit. At the most basic level, firewalls need to be much more application and user aware. Intrusion detection systems may already be able to detect some aspect of this type of exploit. We also need better anomaly detection capabilities.
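As a sketch of what crude anomaly detection might look like for funds transfer fraud, the following flags a transfer whose amount deviates sharply from an account's history. The sample amounts and the three-sigma threshold are invented for illustration; production systems model many more signals than amount alone.

```python
import statistics

def flag_anomalous_transfer(history, candidate, threshold=3.0):
    """Flag a transfer more than `threshold` standard deviations above
    the account's historical mean -- a crude baseline check a
    monitoring system might start with."""
    mean = statistics.mean(history)
    stdev = statistics.pstdev(history) or 1.0  # avoid division by zero
    return (candidate - mean) / stdev > threshold

past = [120.0, 85.5, 240.0, 99.0, 150.0]        # invented sample amounts
print(flag_anomalous_transfer(past, 180.0))     # False -- ordinary
print(flag_anomalous_transfer(past, 25000.0))   # True  -- investigate
```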

Detailed empirical data on IT security breaches is hard to come by despite laws like California SB1386. So there is much to be learned from Verizon Business’s April 2009 Data Breach Investigations Report.

The specific issue I would like to highlight now is the section on methods by which the investigated breaches were discovered (Discovery Methods, page 37). 83% were discovered by third parties or non-security employees going about their normal business. Only 6% were found by event monitoring or log analysis. Routine internal and external audits combined came in at a rousing 2%.

These numbers are truly shocking considering the amount of money that has been spent on Intrusion Detection systems, Log Management systems, and Security Information and Event Management systems. Actually, the Verizon team concludes that many breached organizations did not invest sufficiently in detection controls. Based on my experience, I agree.

Given a limited security budget there needs to be a balance between prevention, detection, and response. I don’t think anyone would argue against this in theory. But obviously, in practice, it’s not happening. Too often I have seen too much focus on prevention to the detriment of detection and response.

In addition, these numbers point to the difficulties in deploying viable detection controls, as there were a significant number of organizations that had purchased detection controls but had not put them into production. Again, I have seen this myself, as most of the tools are too difficult to manage and it’s difficult to implement effective processes.
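To end on a constructive note, here is a minimal sketch of the kind of routine log analysis that, per the report, caught only 6% of breaches. It counts failed SSH logins per source address in a syslog-style auth log; the log path and alert threshold are assumptions, and a real deployment would feed a SIEM rather than a standalone script.

```python
import re
from collections import Counter

# Matches the stock OpenSSH failure message in a syslog-style auth log.
FAILED = re.compile(r"Failed password for (?:invalid user )?\S+ from (\S+)")

def noisy_sources(log_path, limit=10):
    """Return (source IP, count) pairs with at least `limit` failures."""
    counts = Counter()
    with open(log_path, encoding="utf-8", errors="replace") as fh:
        for line in fh:
            match = FAILED.search(line)
            if match:
                counts[match.group(1)] += 1
    return [(ip, n) for ip, n in counts.most_common() if n >= limit]

if __name__ == "__main__":
    for ip, n in noisy_sources("/var/log/auth.log"):  # assumed path
        print(f"ALERT: {n} failed logins from {ip}")
```

Even something this simple only helps if it actually runs in production and someone reads the alerts, which is exactly the gap the Verizon numbers expose.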