10 February 2010 · Insiders abuse poor database account provisioning and lack of database activity monitoring · Categories: Breaches, Database Activity Monitoring, Log Management, Security Information and Event Management (SIEM)

Dark Reading published a good article about breaches caused by malicious insiders who get direct access to databases because account provisioning is poor and there is little or no database activity monitoring.

There are lots of choices out there for database activity monitoring, but only three collection methods, which I wrote about here. I wrote about why database security lags behind network and endpoint security here.

A week later, "Operation Aurora," which I discussed in detail here, is still the most important IT security story. PC Magazine provided additional details here.

Early in the week it appeared that the exploit took advantage of a vulnerability in Internet Explorer 6, the version of Microsoft's browser originally released on August 27, 2001. Larry Seltzer blogged about Microsoft's ridiculously long support cycles, demanded by corporate customers. Why any organization would allow the use of this nine-year-old browser is a mystery to me, especially at Google!

Later in the week, we found out that the exploit could be retooled to exploit IE7 and IE8.

In conclusion, let me restate perhaps the obvious point that a defense-in-depth security architecture can minimize the risk of this exploit:

  • Next Generation Firewall
  • Secure Web Gateway
  • Well-configured mail server
  • Desktop Anti-malware that includes web site checking
  • Latest version of browser, perhaps not Internet Explorer
  • Latest version of Windows, realistically at least XP Service Pack 3, with all patches
  • Database Activity Monitoring
  • Data Loss Prevention
  • Third Generation Security Information and Event Management

NSS Labs, the well-respected UK-based security product research and testing service, just published the results of its consumer anti-malware test. The most popular products, Symantec and McAfee, both came in at only 82%. Therefore you cannot rely on this single security control to protect you against malware. A layered, defense-in-depth strategy is a must.

While all organizations are different, complementary technologies include Secure Web Gateways, Intrusion Prevention, Data Loss Prevention, or an advanced firewall that performs all of these functions, and possibly a Security Information and Event Management system. If you are running web applications, you will also need a Web Application Firewall. I wrote about this in my post about the 20 Top Security Controls.

The top vendor was Trend Micro, with a roughly 96% success rate when you combine the 91% caught at download time and the 5.5% caught at execution time. I also read about this report in a Dark Reading article written by Tim Wilson. However, Tim said Trend Micro only blocked 70% of the malware. I am not sure where he got that number.
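
To make the arithmetic concrete, here is a small Python sketch. The 91% and 5.5% figures are the NSS Labs numbers quoted above; the two-layer calculation is purely illustrative and assumes the layers fail independently, which real controls rarely do.

```python
# Illustrative arithmetic only; detection rates are the NSS Labs figures
# quoted above, and the layering math assumes independent failures.
download_catch = 0.91     # caught at download time (Trend Micro, per the report)
execution_catch = 0.055   # caught at execution time

combined = download_catch + execution_catch
print(f"Combined catch rate: {combined:.1%}")        # ~96.5%

# Why a single 82% control is not enough: pair it with a second,
# hypothetical layer that also catches 82% of what reaches it.
single_layer_miss = 1 - 0.82
two_layer_miss = single_layer_miss ** 2              # assumes independence
print(f"One layer misses {single_layer_miss:.0%}; two layers miss {two_layer_miss:.1%}")
```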

I thought a post about Database Activity Monitoring was timely because one of the DAM vendors, Sentrigo, published a Microsoft SQL Server vulnerability today along with a utility that mitigates the risk. Also of note, Microsoft denies that this is a real vulnerability.

I generally don't like to write about a single new vulnerability because there are just so many of them. However, Adrian Lane, CTO and Analyst at Securosis, wrote a detailed post about this new vulnerability, Sentrigo's workaround, and Sentrigo's DAM product, Hedgehog. Therefore I wanted to put this in context.

Also of note, Sentrigo sponsored a SANS report called "Understanding and Selecting a Database Activity Monitoring Solution." I found this report to be fair and balanced, as I have found all of SANS' work.

Database Activity Monitoring is becoming a key component in a defense-in-depth approach to protecting "competitive advantage" information such as intellectual property and customer and financial data, and to meeting compliance requirements.

One of the biggest issues organizations face when selecting a Database Activity Monitoring solution is the method of activity collection, of which there are three: logging, network-based monitoring, and agent-based monitoring. Each has pros and cons:

  • Logging – This requires turning on the database product's native logging capability. The main advantage of this approach is that it is a standard feature included with every database. Also, some database vendors like Oracle have a complete, but separately priced, Database Activity Monitoring solution, which they claim will support other databases. Here are the issues with logging:
    • You need a log management or Security Information and Event Management (SIEM) system to normalize each vendor's log format into a standard format so you can correlate events across different databases and store the large volume of events that are generated (see the normalization sketch after this list). If you have already committed to a SIEM product, this might not be an issue, assuming the SIEM vendor does a good job with database logs.
    • There can be significant performance overhead on the database associated with logging, possibly as high as 50%.
    • Database administrators can tamper with the logs. Also, if an external hacker gains control of the database server, he or she is likely to turn logging off or delete the logs.
    • Logging is not a good option if you want to block out-of-policy actions. Logging is after the fact and cannot be expected to block malicious activity. While SIEM vendors may have the ability to take actions, by the time the events are processed by the SIEM, seconds or minutes have passed, which means the exploit could already be complete.
  • Network based – An appliance is connected to a tap or a SPAN port on the switch that sits in front of the database servers. Traffic to and, in most cases, from the databases is captured and analyzed. Clearly this puts no performance burden on the database servers at all. It also provides a degree of isolation from the database administrators. Here are the issues:
    • Local database calls and stored procedures are not seen. Therefore you have an incomplete picture of database activity.
    • You must have the network infrastructure to support these appliances.
    • It can get expensive depending on how many databases you have and how geographically dispersed they are.
  • Host based – An agent is installed directly on each database server. The overhead is much lower than with native database logging, as low as 1% to 5%, although you should test this for yourself. Also, the agent sees everything, including stored procedures. Database administrators will have a hard time interfering with the process without being noticed. Deployment is simple, i.e. neither the networking group nor the datacenter team need be involved. Finally, the installation process should not require a database restart. As for disadvantages, this is where Adrian Lane's analysis comes in. Here are his concerns:
    • Building and maintaining the agent software is difficult and more time-consuming for the vendor than the network approach. However, this is the vendor's issue, not the user's.
    • The analysis is performed by the agent right on the database server. This could mean additional overhead, but has the advantage of being able to block a query that is not "in policy" (a minimal sketch of this kind of in-line check appears after the Sentrigo discussion below).
    • Under heavy load, transactions could be missed. But even if this is true, it's still better than the network-based approach, which surely misses local actions and stored procedures.
    • IT administrators could use the agent to snoop on database transactions to which they would not normally have access.
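
To illustrate the normalization point in the logging bullet above, here is a minimal Python sketch. The raw log formats, field names, and regular expressions are invented for illustration; real Oracle and SQL Server audit records are far richer, and a real SIEM would also handle timestamps, sessions, storage, and cross-database correlation.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class DbAuditEvent:
    """Vendor-neutral event record a SIEM or log manager could correlate on."""
    source: str   # which DBMS produced the record
    user: str
    action: str   # e.g. SELECT, UPDATE, GRANT
    obj: str      # table or other schema object

# Hypothetical raw audit lines; actual vendor formats differ and carry
# far more fields (timestamps, session IDs, client host, and so on).
RAW_LINES = [
    ("oracle",    "AUDIT: user=SCOTT action=SELECT object=HR.EMPLOYEES"),
    ("sqlserver", "LOGIN sa EXECUTED UPDATE ON dbo.Payroll"),
]

def normalize(source: str, line: str) -> Optional[DbAuditEvent]:
    """Map each vendor-specific format onto the common schema."""
    if source == "oracle":
        m = re.search(r"user=(\S+) action=(\S+) object=(\S+)", line)
    else:  # the toy "sqlserver" format above
        m = re.search(r"LOGIN (\S+) EXECUTED (\S+) ON (\S+)", line)
    return DbAuditEvent(source, m.group(1), m.group(2), m.group(3)) if m else None

for source, line in RAW_LINES:
    print(normalize(source, line))
```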

Dan Sarel, Sentrigo's Vice President of Product, responded in the comments section of Adrian Lane's post. (Unfortunately there is no dedicated link to the response; you just have to scroll down to find it.) He addressed the "losing events under heavy load" issue by saying Sentrigo has customers processing heavy loads without losing transactions. He addressed the IT administrator snooping issue by saying that the Sentrigo sensors do not require database credentials, so database passwords are not available to IT administrators.
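
To make the blocking distinction concrete, here is a minimal sketch of the kind of in-line decision a host-based agent can make before a statement executes. This is not Sentrigo's Hedgehog or any vendor's actual policy engine; the roles, table names, and matching rules are invented, and real products evaluate far more context than the statement text.

```python
import re

# Toy policy: which statement types each role may run against sensitive tables.
# The roles, table names, and rules are invented purely for illustration.
SENSITIVE_TABLES = {"payroll", "customers"}
ALLOWED_ACTIONS = {
    "app_user": {"SELECT", "INSERT", "UPDATE"},
    "report_user": {"SELECT"},
}

def in_policy(role: str, sql: str) -> bool:
    """Decide whether a statement may proceed. An in-line agent makes this
    call before the database executes the query, which is what allows it
    to block rather than merely report after the fact."""
    verb = sql.strip().split()[0].upper()
    tables = {t.lower() for t in
              re.findall(r"\b(?:from|into|update|join)\s+(\w+)", sql, re.I)}
    if tables & SENSITIVE_TABLES and verb not in ALLOWED_ACTIONS.get(role, set()):
        return False
    return True

print(in_policy("report_user", "SELECT * FROM payroll"))           # True  - allowed
print(in_policy("report_user", "DELETE FROM payroll WHERE 1=1"))   # False - blocked in-line
```

The point is simply that the decision happens before execution; a log- or SIEM-based approach sees the same statement only after the fact.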

Detailed empirical data on IT security breaches is hard to come by despite laws like California SB1386. So there is much to be learned from Verizon Business's April 2009 Data Breach Investigations Report.

The specific issue I would like to highlight now is the section on the methods by which the investigated breaches were discovered (Discovery Methods, page 37). 83% were discovered by third parties or non-security employees going about their normal business. Only 6% were found by event monitoring or log analysis. Routine internal and external audits combined came in at a rousing 2%.

These numbers are truly shocking considering the amount of money that has been spent on Intrusion Detection systems, Log Management systems, and Security Information and Event Management systems. Actually, the Verizon team concludes that many breached organizations did not invest sufficiently in detection controls. Based on my experience, I agree.

Given a limited security budget, there needs to be a balance between prevention, detection, and response. I don't think anyone would argue against this in theory. But obviously, in practice, it's not happening. Too often I have seen too much focus on prevention to the detriment of detection and response.

In addition, these numbers point to the difficulties in deploying viable detection controls, as there were a significant number of organizations that had purchased detection controls but had not put them into production. Again, I have seen this myself, as most of the tools are too difficult to manage and it's difficult to implement effective processes.