I like the idea of maturity models: they can help an organization improve the state of a process in an organized fashion and enable it to compare itself to others. The granddaddy of maturity models is Carnegie Mellon University's software development Capability Maturity Model, which was started in 1987. Now comes the Building Security In Maturity Model, which is focused on building security into the software development process.

Here is the opening paragraph of their web site:

The Building Security In Maturity Model (BSIMM) described on this website is designed to help you understand and plan a software security initiative. BSIMM was created through a process of understanding and analyzing real-world data from nine leading software security initiatives. Though particular methodologies differ (think OWASP CLASP, Microsoft SDL, or the Cigital Touchpoints), many initiatives share common ground. This common ground is captured and described in BSIMM. As an organizing feature, we introduce and use a Software Security Framework (SSF), which provides a conceptual scaffolding for BSIMM. Properly used, BSIMM can help you determine where your organization stands with respect to real-world software security initiatives and what steps can be taken to make your approach more effective.

The organizers are Gary McGraw and Sammy Migues of Cigital and Brian Chess of Fortify. Cigital and Fortify are both leading vendors in the software security market. Please do not interpret this as a negative. Putting out valuable information for free and enabling two-way communication with users is about as ethical as marketing gets.

They are promoting the very worthwhile and intuitively obvious notion that your software will be more secure if you build security in during design and development rather than bolt it on afterward.

BTW, Carnegie Mellon's Software Engineering Institute is still very active with respect to maturity models. Check them out here. Wikipedia provides a nice summary here.

30. September 2009 · Twitter is dead · Categories: Application Security, Breaches

According to Robert X. Cringely, longtime computer industry pundit, Twitter is dead. Why?

"Twitter is dead because it is now so popular that the spammers and
the scammers have arrived in force. And history tells us that once they
sink their teeth into something, they do not let go. Ever.

Twitter scams aren't new. But I've never seen so many hit in a single week or with such rigorous precision."

Symantec has a nice blog post about one of the underlying problems with Twitter: because tweets are limited to 140 characters, people use "URL shorteners" instead of the actual URLs to which they are referring. As a result, you have no idea where you are going when you click on a shortened URL.
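One simple defense is to expand a shortened link before clicking it. Here is a minimal Python sketch of the idea, assuming the shortening service redirects via ordinary HTTP status codes and answers HEAD requests; the example link is hypothetical, not a real campaign URL.

```python
import urllib.request

def expand_short_url(short_url: str) -> str:
    """Follow redirects on a shortened URL and return the final destination."""
    # A HEAD request avoids downloading the page body; urllib follows
    # HTTP redirects automatically, so geturl() reports the final URL.
    req = urllib.request.Request(short_url, method="HEAD")
    with urllib.request.urlopen(req, timeout=10) as resp:
        return resp.geturl()

if __name__ == "__main__":
    # Hypothetical shortened link; substitute any real short URL to test.
    print(expand_short_url("http://bit.ly/example"))
```

Several browser plug-ins and preview services do essentially the same thing, which is exactly the kind of third-party mitigation discussed below.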

Cringely closes with this:

Spam will kill Twitter's usefulness for everyone but relentless Internet marketers, unless the brainiacs at TwitCentral can figure out a better way to block it. Smart people have tried and failed everywhere else, though. I don't hold out much hope.

My view is that, just as with any new technology, if there are real benefits people will tolerate the risks for some period of time, and third parties will develop solutions to mitigate those risks. This is the history of the whole IT security industry.

Take email, for example. Email has been so valuable that people tolerated spam for some time. Then third parties developed anti-spam solutions, for which enterprises were willing to pay and which consumers got as a feature of either their email client or their anti-malware product.

On the other hand, there is still a huge amount of email spam, which means that email spamming is still profitable. Evidently, plenty of people either are not availing themselves of anti-spam filters or, for some reason, still fall for spam scams.

Yet with all that spam, there is no sign of email dying, due to its immense value.


The recent Goldman Sachs breach involving its proprietary trading software highlights the risk of insider fraud and abuse. RGE, Nouriel Roubini's website, has the best analysis I've read on the implications of such an incident.

Here is the money quote, "What is troubling about the Goldman leak is how unprepared our infrastructure is against active measures. We already have good security practices, defamation laws and laws against market manipulation. What we don't have is a mechanism for dealing with threats that appear to be minor, but where the resulting disinformation is catastrophic."

I cannot imagine any better proof of the need for better user, application, content, and transaction monitoring and control tools.

Read the whole article.

02. August 2009 · The most severe breaches result from application level attacks · Categories: Application Security, Breaches, Risk Management, Security Management

Last week, I highlighted the Methods of Attack data from the Verizon Business 2009 Data Breach Investigations Report. Today, I would like to discuss an equally important finding they reported about Attack Vectors (page 18).

The surprise is that only 10% of the breaches were traced to network devices. And network devices represented only 11% of the actual records breached. The top vector was Remote Access and Management at 39%. Web Applications came in second at 37%. Even more interesting is that 79% of all records breached were the result of the Web Application vector!

Clearly there has been a major shift in attack vectors. While this may not be a total surprise, we now have empirical evidence. We must focus our security efforts on applications, users, and content.

31. July 2009 · Clampi malware plus exploit raises risk to extremely high · Categories: Uncategorized

The risk associated with a known three-year-old Trojan-type virus called Clampi has gone from low to extremely high due to the sophisticated exploit created and being executed by an Eastern European cyber-crime group.

Just as businesses can differentiate themselves by applying creative processes to commodity technology, so can cyber-criminals. Clampi has been around since 2007. Symantec, as of July 23, 2009, considered the risk posed by Clampi to be Risk Level 1: Very Low. I don't mean to pick on Symantec. McAfee, which calls the virus LLomo, had the Risk Level set to Low as of July 16, 2009. TrendMicro's ThreatInfo site was so slow that I gave up trying to find the Risk Level they chose.

The exploit process used was first reported (to my knowledge) by Brian Krebs of the Washington Post on July 20, 2009.

On July 29, 2009, Joe Stewart, Director of Malware Research for the Counter Threat Unit (CTU) of SecureWorks released a summary of his research about Clampi and how it’s being used, just prior to this week’s Black Hat Security Conference in Las Vegas.

Clampi is a Trojan-type virus which, when installed on your desktop or laptop, can be used by this cyber-crime group to steal financial data, apparently including User Identification and Password credentials used for online banking and other types of online commerce. This Eastern European cyber-crime group controls a large number of PCs infected with Clampi and is stealing money from both consumers and businesses.

Brian Krebs of the Washington Post ran a story on July 2, 2009 about a similar exploit using a different PC-based Trojan called Zeus. $415,000 was stolen from Bullitt County, KY.

Trojans like Clampi and Zeus have been around for years. What makes these exploits so high risk are the methods by which the Trojans infect us and the sophistication of the processes used to extract money from bank accounts.

Security has always been a “cat-and-mouse” game where the bad guys develop new exploits and the good guys respond. So now I am sure we are going to see the creativity of the security vendor industry applied to reducing the risk associated with this type of exploit. At the most basic level, firewalls need to be much more application and user aware. Intrusion detection systems may already be able to detect some aspect of this type of exploit. We also need better anomaly detection capabilities.
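To illustrate the kind of anomaly detection I mean, here is a toy Python sketch that flags a bank transfer whose amount is wildly out of line with an account's history. The account data, threshold, and function name are invented for illustration; real fraud-detection systems weigh far more signals (payee, geography, timing, device) than a simple statistical outlier test.

```python
from statistics import mean, stdev

def is_anomalous(history, amount, z_threshold=3.0):
    """Flag a transfer whose amount is far outside the account's normal range."""
    if len(history) < 2:
        return False  # not enough data to judge
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return amount != mu
    # z-score: how many standard deviations this transfer is from the norm
    return abs(amount - mu) / sigma > z_threshold

# Hypothetical account history (typical transfer amounts in dollars).
past_transfers = [120.0, 85.5, 240.0, 150.0, 99.0, 180.0]

print(is_anomalous(past_transfers, 175.0))     # False: consistent with history
print(is_anomalous(past_transfers, 41_500.0))  # True: wildly out of pattern
```

A transfer on the scale of the Bullitt County theft would trip even a crude check like this; the hard part, and the opportunity for security vendors, is doing it accurately at scale without drowning in false positives.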