26. March 2010 · Comments Off on HSBC database breach highlights need for better database security · Categories: Breaches, Database Activity Monitoring

Dark Reading reports that more details are emerging about the HSBC database breach: it now appears that data on 25% of HSBC's private clients' accounts was stolen by a "privileged" user.

Click on the Database Activity Monitoring Category on the right for my other posts about the need for Database Activity Monitoring.

10. February 2010 · Comments Off on Insiders abuse poor database account provisioning and lack of database activity monitoring · Categories: Breaches, Database Activity Monitoring, Log Management, Security Information and Event Management (SIEM)

Dark Reading published a good article about breaches caused by malicious insiders who gain direct access to databases because account provisioning is poor and there is little or no database activity monitoring.

There are lots of choices out there for database activity monitoring but only three collection methods, which I wrote about here. I wrote about why database security lags behind network and endpoint security here.

A week later, "Operation Aurora," which I discussed in detail here, is still the most important IT security story. PC Magazine provided additional details here.

Early in the week it appeared that the exploit took advantage of a vulnerability in Internet Explorer 6, the version of Microsoft's browser originally released on August 27, 2001. Larry Seltzer blogged about the ridiculously long support cycles that corporate customers demand from Microsoft. Why any organization, especially Google, would allow the use of this nine-year-old browser is a mystery to me!

Later in the week, we found out that the exploit could be retooled to exploit IE7 and IE8.

In conclusion, let me restate perhaps the obvious point that a defense-in-depth security architecture can minimize the risk of this exploit:

  • Next Generation Firewall
  • Secure Web Gateway
  • Well-configured mail server
  • Desktop Anti-malware that includes web site checking
  • Latest version of browser, perhaps not Internet Explorer
  • Latest version of Windows, realistically at least XP Service Pack 3, with all patches
  • Database Activity Monitoring
  • Data Loss Prevention
  • Third Generation Security Information and Event Management

28. December 2009 · Comments Off on Database security – the last frontier · Categories: Database Activity Monitoring

I just stumbled on a blog post by John Oltsik of ESG entitled "Database Security Is In Need of Repair," written on August 26, 2009. John reports on a survey ESG conducted showing that database security is surprisingly weak given that 58% of the survey respondents said databases contain the highest percentage of their organizations' confidential data. File servers came in a distant second at 15%.

How can this be? John says:

1. No one owns database security, rather it appears to be a collective effort done by security administrators, IT operations, data center managers, system administrators, DBAs, etc. With this many people involved, it is likely that database security is fraught with redundant processes, numerous "root" access passwords, and human error.

This resonates with my experience. The worlds of DBAs and IT Security professionals rarely meet. They speak different languages. DBAs are all about availability and performance, just as network administrators traditionally were.

There are two types of database security solutions – encryption and Database Activity Monitoring. Encryption solutions are used for compliance purposes, for example to encrypt the Social Security Number column of a database to block unauthorized users who gain access to the database server. However, encryption does nothing to stop authorized users from violating access policies.
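
To make that distinction concrete, here is a minimal sketch of column-level encryption, assuming Python and the third-party cryptography library; the column and sample value are hypothetical. Note that anyone presenting valid credentials still reaches the plaintext, which is exactly the gap described above.

```python
# Minimal sketch of column-level encryption (hypothetical schema).
# Requires the third-party "cryptography" package.
from cryptography.fernet import Fernet

key = Fernet.generate_key()  # in practice, held in a key management system
cipher = Fernet(key)

def encrypt_ssn(ssn: str) -> bytes:
    """Encrypt an SSN value before it is written to the database column."""
    return cipher.encrypt(ssn.encode())

def decrypt_ssn(token: bytes) -> str:
    """Decrypt the column value for an authorized application path."""
    return cipher.decrypt(token).decode()

stored = encrypt_ssn("078-05-1120")  # the famous Woolworth sample SSN
print(stored)              # ciphertext: useless to a thief who copies the table
print(decrypt_ssn(stored)) # but any authorized (or credential-stealing) user sees plaintext
```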

Database Activity Monitoring, which I wrote about here, comes in three flavors – logging, network-based, and host-based. In some cases, Database Activity Monitoring can provide a layer of policy control to restrict authorized users (insiders) to just the data they need to do their jobs. And even those solutions can have limitations.

In summary, 1) the solutions available are improving and 2) it behooves database administrators to expand their vision to include database security.


McKinsey's just-released report on its third annual survey of the usage and benefits of Web 2.0 technology was enlightening as far as it went. However, it completely ignores the IT security risks Web 2.0 creates. Furthermore, traditional IT security products do not mitigate these risks. If we are going to deploy Web 2.0 technology, then we need to upgrade our security to, dare I say, "IT Security 2.0."

Even if Web 2.0 products had no vulnerabilities for cybercriminals to exploit, which is not possible, there would still be the need for a control function, i.e. which applications should be allowed and who should be able to use them. Unfortunately, traditional security vendors have had limited success with both. Fortunately, there are security vendors who have recognized this as an opportunity and have built solutions that mitigate these new risks.
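
Conceptually, that control function is just a policy lookup keyed on application and user. The sketch below is purely illustrative; the application names, user groups, and policy structure are all hypothetical.

```python
# Illustrative application/user control policy (all names hypothetical).
POLICY = {
    "webex":      {"allowed_groups": {"sales", "support"}},
    "sharepoint": {"allowed_groups": {"engineering", "sales"}},
    "bittorrent": {"allowed_groups": set()},  # identified, but allowed for no one
}

def is_allowed(application: str, user_groups: set[str]) -> bool:
    """Allow traffic only if the app is known and the user is in an approved group."""
    policy = POLICY.get(application)
    if policy is None:
        return False  # default-deny applications the policy has never heard of
    return bool(policy["allowed_groups"] & user_groups)

print(is_allowed("webex", {"sales"}))       # True
print(is_allowed("bittorrent", {"sales"}))  # False
```

The important design choice is default-deny: an unknown application is blocked, not waved through.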

In the past, I had never subscribed to the concept of security enabling innovation, but I do in this case. There is no doubt that improved communication, learning, and collaboration within the organization and with customers and suppliers enhances the organization's competitive position. Ignoring Web 2.0 or letting it happen by itself is not an option. Therefore when planning Web 2.0 projects, we must also include plans for mitigating the new risks Web 2.0 applications create.

The Web 2.0 good news – The survey results are very positive:

"69 percent of respondents report that their companies have gained
measurable business benefits, including more innovative products and
services, more effective marketing, better access to knowledge, lower
cost of doing business, and higher revenues.

Companies that made
greater use of the technologies, the results show, report even greater
benefits. We also looked closely at the factors driving these
improvements—for example, the types of technologies companies are
using, management practices that produce benefits, and any
organizational and cultural characteristics that may contribute to the
gains. We found that successful companies not only tightly integrate
Web 2.0 technologies with the work flows of their employees but also
create a “networked company,” linking themselves with customers and
suppliers through the use of Web 2.0 tools. Despite the current
recession, respondents overwhelmingly say that they will continue to
invest in Web 2.0."

The Web 2.0 bad news – Web 2.0 technologies introduce IT security risks that cannot be ignored. The main risk comes from the fact that these applications are purposely built to bypass traditional IT security controls in order to simplify deployment and increase usage. They use techniques such as port hopping, encrypted tunneling, and browser-based applications. If we cannot identify these applications and the people using them, we cannot monitor or control them. Any exploitation of vulnerabilities in these applications can go undetected until it's too late.
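
To see why port numbers alone are useless against port-hopping applications, consider the toy classifier below; it trusts payload content rather than the port. The signatures are simplified stand-ins for the protocol decoders a real product uses.

```python
import re

# Simplified payload signatures; real products use far richer protocol decoders.
SIGNATURES = {
    "bittorrent": re.compile(rb"\x13BitTorrent protocol"),
    "http":       re.compile(rb"^(GET|POST|HEAD) "),
}

def classify(payload: bytes, dst_port: int) -> str:
    """Classify by content; dst_port is deliberately ignored, since apps port-hop."""
    for app, pattern in SIGNATURES.items():
        if pattern.search(payload):
            return app
    return "unknown"  # encrypted or unrecognized traffic needs other techniques

# A peer-to-peer handshake sent over port 80 still classifies as bittorrent:
print(classify(b"\x13BitTorrent protocol...", dst_port=80))
```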

A second risk is bandwidth consumption. For example, unauthorized and uncontrolled consumer-oriented video and audio file sharing applications consume large chunks of bandwidth. How much? Hard to know if we cannot see them.

In case we need some examples of the bad news, just in the last few days see here, here, here, and here.

The IT Security 2.0 good news – There are new IT Security 2.0 vendors who are addressing these issues in different ways as follows:

Database Activity Monitoring – Since we cannot depend on traditional perimeter defenses, we must protect the database itself. Database encryption is also useful, but if someone has stolen authorized credentials (very common with trojan keyloggers), encryption is of no value. I discussed Database Activity Monitoring in more detail here. It's also useful for compliance reporting when integrated with application user identities.

User Activity Monitoring – Network appliances designed to monitor internal user activity and block actions that are out of policy. Also useful for compliance reporting.

Web Application Firewalls – Web server host-based software or appliances specifically designed to analyze anomalies in browser-based applications. WAFs are not meant to be primary firewalls but rather to be used to monitor the Layer 7 fields of browser-based forms into which users enter information. Cybercriminals enter malicious code which, if not detected and blocked, can trigger a wide range of exploits. It's also useful for PCI compliance.
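
As a rough illustration of that Layer 7 inspection, the sketch below screens submitted form fields against a few well-known SQL injection and cross-site scripting patterns. Real WAFs parse requests far more deeply; the patterns here are deliberately simplistic.

```python
import re

# Deliberately simplistic patterns; production WAFs parse rather than grep.
SUSPICIOUS = [
    re.compile(r"('|\")\s*(OR|AND)\s+\d+\s*=\s*\d+", re.IGNORECASE),  # classic ' OR 1=1
    re.compile(r"<\s*script", re.IGNORECASE),                         # reflected XSS attempt
    re.compile(r";\s*DROP\s+TABLE", re.IGNORECASE),                   # stacked-query injection
]

def inspect_form(fields: dict[str, str]) -> list[str]:
    """Return the names of form fields whose values look malicious."""
    return [name for name, value in fields.items()
            if any(p.search(value) for p in SUSPICIOUS)]

print(inspect_form({"username": "alice", "comment": "' OR 1=1 --"}))  # ['comment']
```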

"Web 2.0" Firewalls – Next generation network firewalls that can detect and control Web 2.0 applications in addition to traditional firewall functions. They also identify users and can analyze content. They can also perform URL filtering, intrusion prevention, proxying, and data leak prevention. This multi-function capability can be used to generate significant cost reductions by (1) consolidating network appliances and (2) unifying policy management and compliance reporting.

I have heard this type of firewall referred to as an Application Firewall. But that seems confusing to me because it's too close to Web Application Firewall, which I described above and which performs completely different functions. Therefore, I prefer the term Web 2.0 Firewall.

In conclusion, Web 2.0 is real and IT Security 2.0 must be part of Web 2.0 strategy. Put another way, IT Security 2.0 enables Web 2.0.

I thought a post about Database Activity Monitoring was timely because one of the DAM vendors, Sentrigo, published a Microsoft SQL Server vulnerability today along with a utility that mitigates the risk. Also of note, Microsoft denies that this is a real vulnerability.

I generally don't like to write about a single new vulnerability because there are just so many of them. However, Adrian Lane, CTO and Analyst at Securosis, wrote a detailed post about this new vulnerability, Sentrigo's workaround, and Sentrigo's DAM product, Hedgehog. Therefore I wanted to put this in context.

Also of note, Sentrigo sponsored a SANS report called "Understanding and Selecting a Database Activity Monitoring Solution." I found this report to be fair and balanced, as I have found all of SANS's activities.

Database Activity Monitoring is becoming a key component of a defense-in-depth approach to protecting "competitive advantage" information such as intellectual property and customer and financial data, and to meeting compliance requirements.

One of the biggest issues organizations face when selecting a Database Activity Monitoring solution is the method of activity collection, of which there are three – logging, network-based monitoring, and agent-based monitoring. Each has pros and cons:

  • Logging – This requires turning on the database product's native logging capability. The main advantage of this approach is that it is a standard feature included with every database. Also some database vendors like Oracle have a complete, but separately priced Database Activity Monitoring solution, which they claim will support other databases. Here are the issues with logging:
    • You need a log management or Security Information and Event Management (SIEM) system to normalize each vendor's log format into a standard format so you can correlate events across different databases and store the large volume of events that are generated. If you already committed to a SIEM product this might not be an issue assuming the SIEM vendor does a good job with database logs.
    • There can be significant performance overhead on the database associated with logging, possibly as high as 50%.
    • Database administrators can tamper with the logs. Also if an external hacker gains control of the database server, he/she is likely to turn logging off or delete the logs. 
    • Logging is not a good alternative if you want to block out-of-policy actions. Logging is after the fact and cannot be expected to block malicious activity. While SIEM vendors may have the ability to take actions, by the time the events are processed by the SIEM, seconds or minutes have passed, which means the exploit could already be complete.
  • Network based – An appliance is connected to a tap or a span port on the switch that sits in front of the database servers. Traffic to and, in most cases, from the databases is captured and analyzed. Clearly this puts no performance burden on the database servers at all. It also provides a degree of isolation from the database administrators. Here are the issues:
    • Local database calls and stored procedures are not seen. Therefore you have an incomplete picture of database activity.
    • You must have the network infrastructure to support these appliances.
    • It can get expensive depending on how many databases you have and how geographically dispersed they are.
  • Host based – An agent is installed directly on each database server. The overhead is much lower than with native database logging, as low as 1% to 5%, although you should test this for yourself. Also, the agent sees everything including stored procedures. Database administrators will have a hard time interfering with the process without being noticed. Deployment is simple, i.e. neither the networking group nor the datacenter team need be involved. Finally, the installation process should not require a database restart. (A sketch of the kind of in-line policy check an agent can perform appears after this list.) As for disadvantages, this is where Adrian Lane's analysis comes in. Here are his concerns:
    • Building and maintaining the agent software is difficult and more time consuming for the vendor than the network approach. However, this is the vendor's issue not the user's.
    • The analysis is performed by the agent right on the database. This could mean additional overhead, but has the advantage of being able to block a query that is not "in policy."
    • Under heavy load, transactions could be missed. But even if this is true, it's still better than the network-based approach, which surely misses local actions and stored procedures.
    • IT administrators could use the agent to snoop on database transactions to which they would not normally have access.
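
To make the host-based approach concrete, here is a minimal sketch of the kind of in-line policy check an agent might apply before a statement reaches the database engine. The policy structure, roles, and table names are all hypothetical; the point is that in-line evaluation can block an out-of-policy query outright rather than merely report it afterward.

```python
# Hypothetical in-line policy check, the kind a host-based DAM agent
# could apply before a statement reaches the database engine.
import re

# Which tables each role may touch (illustrative only).
TABLE_POLICY = {
    "billing_app": {"invoices", "customers"},
    "support_rep": {"tickets"},
}

TABLE_RE = re.compile(r"\b(?:FROM|JOIN|INTO|UPDATE)\s+(\w+)", re.IGNORECASE)

def check_statement(role: str, sql: str) -> bool:
    """Return True only if every table referenced is permitted for this role."""
    allowed = {t.lower() for t in TABLE_POLICY.get(role, set())}
    referenced = {m.lower() for m in TABLE_RE.findall(sql)}
    return referenced <= allowed

# An in-policy query passes; a snooping query against another table is blocked:
print(check_statement("support_rep", "SELECT * FROM tickets WHERE id = 7"))  # True
print(check_statement("support_rep", "SELECT ssn FROM customers"))           # False -> block
```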

Dan Sarel, Sentrigo's Vice President of Product, responded in the comments section of Adrian Lane's post. (Unfortunately there is no dedicated link to the response; you have to scroll down to it.) He addressed the "losing events under heavy load" issue by saying Sentrigo has customers processing heavy loads without losing transactions. He addressed the IT administrator snooping issue by saying that the Sentrigo sensors do not require database credentials; therefore database passwords are not available to IT administrators.