05. September 2010 · Mitre releases log standards architecture – Common Event Expression (CEE) · Categories: Log Management, Security-Compliance

Finally, on August 27, 2010, Mitre released the Common Event Expression (CEE) Architecture Overview, its proposed event log standard. The goal of CEE is to standardize event logs to simplify collection, correlation, and reporting, which will drive down the costs of implementing and operating log management controls and improve audit and event analysis.

At present there are no accepted log standards. Each commercial application and security product implements logs in a proprietary way. In addition, the most commonly used log transport protocol, syslog, is unreliable since it's usually implemented on UDP. The custom application environment is even worse, as developers have no accepted standards to guide their implementation of logs for audit and event management.

Why, after ten years of log management efforts, are there still no standards? In my opinion, it's because government agencies and enterprises have not recognized that they are indirectly bearing the costs of the lack of standardization. Now that log management has become mandatory for compliance and strongly recommended for effective cyber defense, organizations will realize the need for log standardization. Initially, it's going to be up to the Federal Government and large enterprises to make CEE compatibility a purchase requirement in order to get product manufacturers to adhere to CEE. The log management vendors will embrace CEE once they see product manufacturers using it.

Here is the Common Event Expression Architecture Overview (CEE AO) Abstract:

This Common Event Expression (CEE) Architecture defines the structure and components that comprise the CEE event log standard. This architecture was developed by MITRE, in collaboration with industry and government, and builds upon the Common Event Expression Whitepaper [1]. This document defines the CEE Architecture for an open, practical, and industry-accepted event log standard. This document provides a high-level overview of CEE along with details on the overall architecture and introduces each of the CEE components including the data dictionary, syntax encodings, event taxonomies, and profiles. The CEE Architecture is the first in a collection of documents and specifications, whose combination provides the necessary pieces to create the complete CEE event log standard.
KEYWORDS: CEE, Logs, Event Logs, Audit Logs, Log Analysis, Log Management, SIEM

There are four components of the CEE Architecture – CEE Dictionary and Taxonomy (CDET), Common Log Syntax (CLS), Common Log Transport (CLT), and Common Event Log Recommendations (CELR). The list below breaks CDET into its Dictionary and Taxonomy parts:
  • Common Log Syntax (CLS) – how the event and its data are represented. The event syntax is what an event producer writes and what an event consumer processes. (A hypothetical record in this spirit is sketched below, after this list.)
  • CEE Dictionary – defines a collection of event fields and value types that can be used within event records to specify the values of an event property associated with a specific event instance.
  • CEE Taxonomy – defines a collection of “tags” that can be used to categorize events. Its goal is to provide a common vocabulary, through sets of tags, to help classify and relate records that pertain to similar types of events.
  • Common Event Log Recommendations (CELR) – provides recommendations to developers and implementers of applications or systems as to which events and fields should be recorded in certain situations and what log messages should be recorded for various circumstances. CELR provides this guidance in the form of a machine-readable profile. The CELR also defines a function – a group of event structures that comprise a certain capability. For example, a “firewall” function can be defined consisting of “connection allow” and “connection block” event structures. Similarly, an “authentication management” function can be composed of “account logon,” “account logoff,” “session started,” and “session stopped.”
  • Common Log Transport (CLT) – provides the technical support necessary for an improved log transport framework. A good framework requires more than just standardized event records; it also needs support for international string encodings, standardized event record interfaces, and reliable, verifiable log trails. In addition to the application support, the CLT event streams supplement the CLS event record encodings to allow systems to share event records securely and reliably.
The CEE Architecture Overview document also defines the CEE “product” approval management process and four levels of CEE conformance.
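To make the Dictionary, Taxonomy, and Syntax components concrete, here is a minimal sketch of what a CEE-style event record could look like, assuming a JSON encoding carried over plain syslog. The field names, the tags, and the "@cee:" cookie are illustrative assumptions on my part, not the published CEE dictionary, taxonomy, or syntax.

```python
import json
import socket
from datetime import datetime, timezone

# Hypothetical CEE-style event record. The field names ("time", "action",
# "status", ...) and the taxonomy-style tags are illustrative assumptions,
# not entries from an official CEE dictionary or taxonomy.
event = {
    "time": datetime.now(timezone.utc).isoformat(),
    "tags": ["login", "failure"],        # taxonomy-style classification tags
    "host": "web01.example.com",
    "user": "jsmith",
    "action": "account_logon",
    "status": "denied",
    "src_ip": "203.0.113.7",
}

# Encode the record in a common syntax (JSON here) and ship it over the
# familiar, unreliable UDP syslog transport the post mentions.
message = "@cee: " + json.dumps(event)
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(message.encode("utf-8"), ("127.0.0.1", 514))  # local syslog daemon
```

The value of a standard like CEE would come from every producer agreeing on the field names and tags, so that consumers can correlate records without per-product parsers.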

CEE holds the promise of driving down the costs of implementing Log Management systems and improving the quality of audit and event analysis. However, there is still much work to be done, for example in defining taxonomies and in defining and testing interoperability at the Transport and Syntax levels.

Mitre has had mixed results over the years in its efforts to standardize security processes. CVE (Common Vulnerabilities and Exposures) has been its biggest success, as virtually all vulnerability publishers use CVE numbers. CEE is much more ambitious though and will require more money and resources than Mitre is accustomed to having at its disposal.

10. February 2010 · Insiders abuse poor database account provisioning and lack of database activity monitoring · Categories: Breaches, Database Activity Monitoring, Log Management, Security Information and Event Management (SIEM)

DarkReading published a good article about breaches caused by malicious insiders who get direct access to databases because account provisioning is poor and there is little or no database activity monitoring.

There are lots of choices out there for database activity monitoring, but only three collection methods, which I wrote about here. I also wrote about why database security lags behind network and endpoint security here.

23. October 2009 · Relational databases dead for log management? · Categories: Compliance, Log Management, Security Management

Larry Walsh wrote an interesting post this week, Splunk Disrupts Security Log Auditing, in which he claims that Splunk's success comes from capturing market share in the security log auditing market because of its Google-like approach to storing log data rather than using a "relational database."

There was also a very good blog post at Securosis in response – Splunk and Unstructured Data.

While there is no doubt that Splunk has been successful as a company, I am not so sure it's due to security log auditing.

It's my understanding that the primary use case for Splunk is actually in Operations where, for example, a network administrator wants to search logs to resolve an Alert generated by an SNMP-based network management system. Most SNMP-based network management systems are good at telling you "what" is going on, but not very good at telling you "why."

So when the network management system generates an Alert, the admin goes to Splunk to find the logs that would show what actually happened so s/he can fix the root cause of the Alert. For this use case, you don't really need more than a day's worth of logs.
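As a sketch of that workflow, the snippet below pulls the log lines around an alert's timestamp for the affected host, assuming a local syslog file with ISO-style timestamps. The file path, host name, and line format are assumptions for illustration; a real shop would run the equivalent search in whatever log tool it uses.

```python
from datetime import datetime, timedelta

# Hypothetical example: find syslog lines for a given host within a few
# minutes of an SNMP alert, to see *why* the alert fired.
ALERT_TIME = datetime(2009, 10, 20, 14, 32, 0)   # when the alert fired
WINDOW = timedelta(minutes=5)
HOST = "core-switch-01"                          # host named in the alert
LOGFILE = "/var/log/syslog"                      # adjust for your environment

with open(LOGFILE, errors="replace") as f:
    for line in f:
        # Assumed line format: "2009-10-20T14:31:58 host process: message"
        try:
            ts = datetime.fromisoformat(line.split(" ", 1)[0])
        except ValueError:
            continue  # skip lines that do not start with a timestamp
        if HOST in line and abs(ts - ALERT_TIME) <= WINDOW:
            print(line.rstrip())
```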

Splunk's brilliant move was to allow free usage of the software for one day's worth of logs, or some limited amount of storage that generally would not exceed one day. In practice, even a few hours of logs is very valuable. This freemium model has been very successful.

Security log auditing is a very different use case. It can require a year or more of data and sophisticated reporting capabilities. That is not to say that a Google-like storage approach cannot accomplish this.

In fact, security log auditing is just another online analytical processing (OLAP) application, albeit with potentially huge amounts of data. The IT industry realized at least ten years ago that OLAP applications require a different way of organizing stored data than online transaction processing (OLTP) applications do. OLTP applications still use traditional relational databases.

There has been much experimentation with ways to store data for OLAP applications. However, there is still a lot of value in the SQL language as a kind of open industry-standard API to stored data.

So I would agree that traditional relational database products are not appropriate for log management data storage, but SQL as a language has merit as the "API layer" between the query and reporting tools and the data.
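A minimal sketch of that idea: let reporting tools speak SQL while the underlying engine stores events however it likes. SQLite stands in here for the log store, and the table layout and sample events are invented for illustration.

```python
import sqlite3

# SQLite stands in for whatever engine actually stores the events; the
# point is that query and reporting tools can speak SQL regardless of
# how the backend organizes the data.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE events (ts TEXT, host TEXT, event_type TEXT, user TEXT)"
)
conn.executemany(
    "INSERT INTO events VALUES (?, ?, ?, ?)",
    [
        ("2009-10-20T09:01:00", "db01", "login_failure", "admin"),
        ("2009-10-20T09:01:05", "db01", "login_failure", "admin"),
        ("2009-10-20T09:02:00", "db01", "login_success", "admin"),
    ],
)

# A typical audit-style report: failed logins per host and user.
report = conn.execute(
    """
    SELECT host, user, COUNT(*) AS failures
    FROM events
    WHERE event_type = 'login_failure'
    GROUP BY host, user
    ORDER BY failures DESC
    """
)
for host, user, failures in report:
    print(host, user, failures)
```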

Wired Magazine reported this week that Wal-Mart kept secret a breach it discovered in November 2006 that had been ongoing for 17 months. According to the article, Wal-Mart claimed there was no reason to disclose the exploit at the time because it believed no customer data or credit card information was breached.

Wal-Mart did admit that custom-developed point-of-sale software was breached. The California Breach Law covering breaches of California residents' financial information had gone into effect on July 1, 2003, and was extended to health information on January 1, 2009. I blogged about that here.

I think it would be more accurate to say that the forensics analysts hired by Wal-Mart could not "prove" that customer data was breached, i.e., could not find specific evidence that customer data was breached. One key piece of information the article revealed, "The company’s server logs recorded only unsuccessful log-in attempts, not successful ones, frustrating a detailed analysis."

Based on my background in log management, I understand the approach of collecting only "bad" events like failed log-ins. Other than that sentence, the article does not discuss which types of events were and were not collected. Either way, Wal-Mart had very little visibility into what was really going on.

The problem Wal-Mart faced at the time was that the cost of collecting and storing all the logs in an accessible manner was prohibitive. Fortunately, log data management software has improved and hardware costs have dropped dramatically. In addition, there are now new tools for user activity monitoring.
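The specific gap the forensics team ran into, failures recorded but successes not, is easy to illustrate. The sketch below logs both outcomes of an authentication attempt in the same record shape so a later investigation can reconstruct who actually got in. The function and field names are hypothetical, not taken from any real point-of-sale system.

```python
import logging

# Record both successful and failed authentication attempts. With only
# failures logged, investigators cannot tell which attempts succeeded.
logging.basicConfig(
    filename="auth_audit.log",
    level=logging.INFO,
    format="%(asctime)s %(levelname)s %(message)s",
)

def record_login_attempt(user: str, src_ip: str, success: bool) -> None:
    # Hypothetical helper: same record shape for both outcomes, so later
    # queries can line successes up against failures.
    outcome = "SUCCESS" if success else "FAILURE"
    logging.info("login outcome=%s user=%s src_ip=%s", outcome, user, src_ip)

record_login_attempt("posadmin", "10.1.2.3", success=False)
record_login_attempt("posadmin", "10.1.2.3", success=True)
```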

However, my key reaction to this article is my disappointment that Wal-Mart chose to keep this incident a secret. It's possible that news of a Wal-Mart breach might have motivated other retailers to strengthen their security defenses and increase their vigilance, which might have reduced the number of breaches that have occurred since 2006. It might also have more quickly increased the rigor QSAs applied to PCI DSS audits.

In closing, I would like to call attention to Adam Shostack's and Andrew Stewart's book, "The New School of Information Security," and quote a passage from page 78 which talks about the value of disclosing breaches aside from the need to inform people whose personal financial or health information may have been breached:

"Breach data is bringing us more and better objective data than any past information-sharing initiative in the field of information security. Breach data allows us to see more about the state of computer security than we've been able to with traditional sources of information. … Crucially, breach data allows us to understand what sorts of issues lead to real problems, and this can help us all make better security decisions."

I thought a post about Database Activity Monitoring was timely because one of the DAM vendors, Sentrigo, published a Microsoft SQL Server vulnerability today along with a utility that mitigates the risk. Also of note, Microsoft denies that this is a real vulnerability.

I generally don't like to write about a single new vulnerability because there are just so many of them. However, Adrian Lane, CTO and Analyst at Securosis, wrote a detailed post about this new vulnerability, Sentrigo's workaround, and Sentrigo's DAM product, Hedgehog. Therefore I wanted to put this in context.

Also of note, Sentrigo sponsored a SANS report called "Understanding and Selecting a Database Activity Monitoring Solution." I found this report to be fair and balanced, as I have found all of SANS' activities.

Database Activity Monitoring is becoming a key component in a defense-in-depth approach to protecting "competitive advantage" information such as intellectual property and customer and financial data, and to meeting compliance requirements.

One of the biggest issues organizations face when selecting a Database Activity Monitoring solution is the method of activity collection, of which there are three – logging, network-based monitoring, and host-based (agent) monitoring. Each has pros and cons:

  • Logging – This requires turning on the database product's native logging capability. The main advantage of this approach is that it is a standard feature included with every database. Also, some database vendors like Oracle have a complete but separately priced Database Activity Monitoring solution, which they claim will support other databases. Here are the issues with logging:
    • You need a log management or Security Information and Event Management (SIEM) system to normalize each vendor's log format into a standard format so you can correlate events across different databases and store the large volume of events that are generated (see the normalization sketch after this list). If you have already committed to a SIEM product this might not be an issue, assuming the SIEM vendor does a good job with database logs.
    • There can be significant performance overhead on the database associated with logging, possibly as high as 50%.
    • Database administrators can tamper with the logs. Also if an external hacker gains control of the database server, he/she is likely to turn logging off or delete the logs. 
    • Logging is not a good alternative if you want to block out-of-policy actions. Logging is after the fact and cannot be expected to block malicious activity. While SIEM vendors may have the ability to take actions, by the time the events are processed by the SIEM, seconds or minutes have passed, which means the exploit could already be complete.
  • Network-based – An appliance is connected to a tap or a span port on the switch that sits in front of the database servers. Traffic to and, in most cases, from the databases is captured and analyzed. Clearly this puts no performance burden on the database servers at all. It also provides a degree of isolation from the database administrators. Here are the issues:
    • Local database calls and stored procedures are not seen. Therefore you have an incomplete picture of database activity.
    • You must have the network infrastructure to support these appliances.
    • It can get expensive depending on how many databases you have and how geographically dispersed they are.
  • Host-based – An agent is installed directly on each database server. The overhead is much lower than with native database logging, as low as 1% to 5%, although you should test this for yourself. Also, the agent sees everything, including stored procedures. Database administrators will have a hard time interfering with the process without being noticed. Deployment is simple, i.e., neither the networking group nor the datacenter team need be involved. Finally, the installation process should not require a database restart. As for disadvantages, this is where Adrian Lane's analysis comes in. Here are his concerns:
    • Building and maintaining the agent software is difficult and more time-consuming for the vendor than the network approach. However, this is the vendor's issue, not the user's.
    • The analysis is performed by the agent right on the database. This could mean additional overhead, but has the advantage of being able to block a query that is not "in policy."
    • Under heavy load, transactions could be missed. But even if this is true, it's still better than the network based approach which surely misses local actions and stored procedures.
    • IT administrators could use the agent to snoop on database transactions to which they would not normally have access.
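As mentioned in the logging bullet above, here is a minimal sketch of what normalizing different vendors' database audit logs into one common record could look like. Both input formats and all field names are invented for illustration; real log management and SIEM products ship far more elaborate parsers.

```python
import re

# Hypothetical raw audit lines from two different database products.
# Both formats are invented for illustration, not real vendor formats.
RAW_EVENTS = [
    "2009-07-09 10:15:02|dbserver1|SCOTT|SELECT * FROM CUSTOMERS|ALLOWED",
    "jul 9 10:15:07 dbserver2 audit: user=app_svc stmt='DROP TABLE ORDERS' result=denied",
]

KV_PATTERN = re.compile(
    r"(\S+ \d+ [\d:]+) (\S+) audit: user=(\S+) stmt='([^']*)' result=(\S+)"
)

def normalize(line):
    """Map a vendor-specific audit line to a common event record."""
    if "|" in line:  # first invented format: pipe-delimited fields
        ts, host, user, stmt, result = line.split("|")
        return {"time": ts, "host": host, "user": user,
                "statement": stmt, "result": result.lower()}
    m = KV_PATTERN.search(line)
    if m:  # second invented format: key=value style
        ts, host, user, stmt, result = m.groups()
        return {"time": ts, "host": host, "user": user,
                "statement": stmt, "result": result.lower()}
    return None  # unknown format; a real parser would flag this line

for raw in RAW_EVENTS:
    print(normalize(raw))
```

Once the records share one shape, correlating events across databases becomes a query problem rather than a parsing problem.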

Dan Sarel, Sentrigo's Vice President of Product, responded in the comments section of Adrian Lane's post. (Unfortunately there is no dedicated link to the response; you just have to scroll down to it.) He addressed the "losing events under heavy load" issue by saying Sentrigo has customers processing heavy loads without losing transactions. He addressed the IT administrator snooping issue by saying that the Sentrigo sensors do not require database credentials, so database passwords are not available to IT administrators.

Detailed empirical data on IT security breaches is hard to come by despite laws like California SB1386. So there is much to be learned from Verizon Business's April 2009 Data Breach Investigations Report.

The specific issue I would like to highlight now is the section on methods by which the investigated breaches were discovered (Discovery Methods, page 37). 83% were discovered by third parties or non-security employees going about their normal business. Only 6% were found by event monitoring or log analysis. Routine internal or external audit combined came in at a rousing 2%.

These numbers are truly shocking considering the amount of money that has been spent on Intrusion Detection systems, Log Management systems, and Security Information and Event Management systems. Actually, the Verizon team concludes that many breached organizations did not invest sufficiently in detection controls. Based on my experience, I agree.

Given a limited security budget there needs to be a balance between prevention, detection, and response. I don't think anyone would argue against this in theory. But obviously, in practice, it's not happening. Too often I have seen too much focus on prevention to the detriment of detection and response.

In addition, these numbers point to the difficulties in deploying viable detection controls, as there were a significant number of organizations that had purchased detection controls but had not put them into production. Again, I have seen this myself, as most of the tools are too difficult to manage and it's difficult to implement effective processes.