Dark Reading recently published an article about the problems that plague Security Information and Event Management (SIEM) deployments, Five Reasons SIEM Deployments Fail. First, I would say that you could use these five reasons to explain why almost any “enterprise” information technology project fails. Having said that, I would like to address each of the five points individually:

1. SIEM is too hard to use.

The nut of it really comes down to the fact that SIEM is not an easy technology to use. Part of that rests squarely at the feet of SIEM vendors, who still have not done enough to simplify their products — particularly for small and midsize enterprises, says Mike Rothman, analyst and president of Securosis.

There is no doubt that some SIEM products are harder than others to use. Ease of use must surely be one of the criteria you apply when evaluating SIEM solutions. On the other hand, “too hard to use” may be code for not having the resources needed to deploy and operate a SIEM solution. For those organizations, there is an alternative to buying a SIEM solution: use a Managed Security Service Provider (MSSP) to provide the service. This is a particularly appropriate approach for small and midsize enterprises.

“I think that we need to see more of a set of deployment models [that] make it easier for folks that aren’t necessarily experts on this stuff to use it. In order for this market to continue to grow and to continue to drive value to customers, it has to be easier to use, and it has to be much more applicable to the midmarket customer,” Rothman says. “Right now the technology is still way too complicated for that.”

There is an alternate deployment model which Mike seems to be ignoring. Incident detection and response is complicated. If you don’t have skilled resources or the budget to hire and train people, you need to go with an MSSP. A good MSSP will have multiple deployment models to support different customer needs.

A more correct statement might be that an organization has to decide whether it has the resources to select, deploy, and operate a SIEM.

2. Log management lacks standardization.

In order to truly automate the collection of data from different devices and automate the parsing of all that data, organizations need standardization within their logged events, says Scott Crawford, analyst for Enterprise Management Associates. “This is one of the biggest issues of event management,” Crawford says. “A whole range of point products can produce a very wide variety of ways to characterize events.”

There is no doubt that logs lack standardization. That’s like saying there is no standardization in operating systems, firewalls, or any of the other products for which you need to collect logs. Even if there were a standard, there would still be ways for manufacturers to differentiate themselves. Just take a look at SNMP, one of the most widely used industry standards: manufacturers always add proprietary functions for which systems management products must account. So logs may get somewhat more standardized if, for example, Mitre’s Common Event Expression (CEE) were to become a standard, but SIEM manufacturers and MSSPs will always be dealing with integrating custom logs.
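To make the parsing burden concrete, here is a minimal Python sketch using two made-up log lines (they are not actual vendor formats); every source tends to need its own parser, and the SIEM has to map them all onto one normalized event schema:

```python
import re

# Two made-up log lines in different styles -- illustrative only, not actual
# vendor formats.
VENDOR_A = "2010-08-22 14:03:11 deny tcp src=10.1.1.5 dst=192.0.2.9 dport=445"
VENDOR_B = "Aug 22 14:03:12 fw01 action=allow proto=udp 10.1.1.7:5353 -> 224.0.0.251:5353"

def parse_vendor_a(line):
    m = re.search(r"(deny|allow) (\w+) src=(\S+) dst=(\S+) dport=(\d+)", line)
    if not m:
        return None
    action, proto, src, dst, dport = m.groups()
    return {"action": action, "proto": proto, "src": src, "dst": dst, "dport": int(dport)}

def parse_vendor_b(line):
    m = re.search(r"action=(\w+) proto=(\w+) (\S+):(\d+) -> (\S+):(\d+)", line)
    if not m:
        return None
    action, proto, src, _sport, dst, dport = m.groups()
    return {"action": action, "proto": proto, "src": src, "dst": dst, "dport": int(dport)}

# Each format needs its own parser; the SIEM's job is to map them all onto
# one normalized event schema.
for parser, line in ((parse_vendor_a, VENDOR_A), (parse_vendor_b, VENDOR_B)):
    print(parser(line))
```

Multiply that by hundreds of products and versions and you can see why "custom log integration" never goes away.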

3. IT can’t rise above organizational power struggles.

“One of the key challenges our customers face is really getting all parts of the company to work together to actually make the connections to get the right scope of monitoring,” says Joe Gottlieb, president and CEO of SenSage. “And the things you want to monitor sit in different places within the organization and are controlled by different parts of the organization.”

Yes, by definition SIEM cuts across departmental lines when the goal is to provide organization-wide security posture and incident visibility. As with most “enterprise” solutions, you need senior management support in order to have any hope of success.

4. Security managers see SIEM as magic.

SIEM expectations frequently don’t jibe with reality because many IT managers believe SIEM is about as powerful as Merlin’s wand.

“A lot of people look at SIEM like it’s this magical box — I get a SIEM and it’s going to do all my work for me,” says Eric Knapp, vice president of technology marketing for NitroSecurity. “SIEM has different levels of ease of use, but they all come back to looking at information and drawing conclusions. Unless you’re looking at it in the correct context for your specific environment, it’s not going to help you as much as it should.”

SIEM has been around for ten years now. Is it really possible that SIEM still has some kind of magical mystique about it? SIEM vendors that let their sales people sell this way don’t last, because the resources the vendor has to commit to alleviate customer dissatisfaction are huge and profit-sapping. On the other hand, caveat emptor. Any organization buying SIEM without understanding how it works and what resources it needs to make it successful has only itself to blame. Again, if you are not sure what you are getting yourself into, consider an MSSP as an alternative to buying a SIEM solution.

5. Scalability nightmares continue to reign.

There is no doubt that scalability is a particularly important attribute of a SIEM solution. And there are SIEM products out there that do not scale well. If the vendor tells you (1) “We store log data in a traditional relational database,” or (2) “You only need to save the ‘relevant’ logs,” RUN. These statements are sure signs of a lack of scalability. On the other hand, you do need to know or estimate how many events per second and per day you will actually generate in order to configure the underlying hardware to get reasonable performance.
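A simple back-of-the-envelope calculation is usually enough to start that sizing conversation. The sketch below uses purely hypothetical numbers (2,000 sustained events per second, 300 bytes per event, one year of retention):

```python
# Back-of-the-envelope SIEM sizing. The numbers below are illustrative
# assumptions, not measurements from any particular environment.
average_eps = 2000       # sustained events per second across all log sources
peak_factor = 5          # peak bursts relative to the sustained average
bytes_per_event = 300    # average raw event size in bytes
retention_days = 365     # how long logs must be kept online

events_per_day = average_eps * 86400
raw_gb_per_day = events_per_day * bytes_per_event / 1e9
total_raw_tb = raw_gb_per_day * retention_days / 1000

print("Events per day:       {:,}".format(events_per_day))
print("Peak EPS to size for: {:,}".format(average_eps * peak_factor))
print("Raw storage per day:  {:.1f} GB".format(raw_gb_per_day))
print("Raw storage, {} days: {:.1f} TB".format(retention_days, total_raw_tb))
```

Doubling any one of those assumptions roughly doubles the storage requirement, which is why getting the event-rate estimate right up front matters so much.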

There are SIEM solutions that do scale well. They don’t use traditional relational databases to store log data. As for which log events are unimportant, it’s practically impossible to determine in advance. If you are in doubt, there is no doubt. Collect them.

22. August 2010 · Intel, McAfee, and vPro · Categories: Security Management

How many people remember Intel’s vPro? Do you know if your PC supports vPro? Do you care? It was announced by Intel at least six years ago.

As Intel says on its vPro home page:

Notebook and desktop PCs with Intel® vPro™ technology enable IT to take advantage of hardware-assisted security and manageability capabilities that enhance their ability to maintain, manage, and protect their business PCs. And with the latest IT management consoles from Independent Software Vendors (ISVs) with native Intel vPro technology support, IT can now take advantage of enhanced features to manage notebooks over a wired or corporate wireless network, or even outside the corporate firewall through a wired LAN connection.

PCs with Intel vPro technology integrate robust hardware-based security and enhanced maintenance and management capabilities that work seamlessly with ISV consoles. Because these capabilities are built into the hardware, Intel vPro technology provides IT with the industry’s first solution for OS-absent manageability and down-the-wire security even when the PC is off, the OS is unresponsive, or software agents are disabled.

While vPro looks intriguing, it does not appear to me that ISVs really embraced it. Perhaps one of the reasons Intel acquired McAfee was that it felt it had to force the issue. The Microsoft approach of “loose” integration was not working and Intel decided to place a bet on the Apple strategy of “tight” integration.


08. June 2010 · Facebook – Read-Only · Categories: Palo Alto Networks, Security Management, Security-Compliance

What kind of access to Facebook do you give your employees? What about those in Marketing who want to use Facebook to monitor a competitor’s social marketing efforts? Or just gather competitive intelligence? Completely blocking Facebook for everyone in the organization may not make sense anymore because there are legitimate business uses for Facebook.

Palo Alto Networks has been a leader in enabling fine-grained policy control of web-based applications. Today, they extended their Facebook policy capabilities by creating a “Read-Only” option. I have no doubt that this was a customer driven enhancement to their already robust Facebook policy capabilities.

This is a great example of enabling business value while minimizing risk.

04. June 2010 · SANS Twenty Critical Controls · Categories: Palo Alto Networks, Security Management, Security-Compliance

An important part of Cymbel’s approach to IT Security and Compliance leverages the SANS Twenty Critical Controls for Effective Cyber Defense: Consensus Audit Guidelines (20CC). We have embraced 20CC for the following reasons:

  • Comprehensiveness – All the major critical IT Security functions are covered.
  • Credentials – The document was generated by a strong group of experienced security professionals from government and industry.
  • Concreteness – The document provides very specific recommendations.
  • Automation – Fifteen of the twenty controls are readily automated.
  • Metrics – One or more simple, specific, measurable tests are provided to assess the effectiveness of each recommended control.
  • Phases – Each of the twenty controls has sub-controls which can be implemented in phases. In fact, each control describes at least one “Quick Win.” This lessens the potentially overwhelming nature of other security models.
  • Brevity – The current version of the document is only 58 pages as compared to other approaches which are spread over multiple books.
  • Price – The document is free.

If there is any weakness in the 20CC, it’s its consensus nature. However, in our opinion this weakness shows up only in its understandable unwillingness to recommend a solution that would inure to the benefit of a single manufacturer. This is particularly evident in the “Boundary Defense” control, which recommends stateful inspection firewalls and separate Intrusion Prevention Systems.

For boundary defense, Cymbel recommends the only next-generation firewall on the market – Palo Alto Networks. That’s not just us saying it. Gartner said it in its 2010 Enterprise Firewall Magic Quadrant.

I would love to hear your opinions on the SANS Twenty Critical Security Controls.

13. March 2010 · Verizon Business extends its thought leadership in security incident metrics · Categories: Breaches, Research, Risk Management, Security Management, Theory vs. Practice

The Verizon Business Security Incident Response team, whose yearly Data Breach Investigations Reports I've written about here, has extended its thought leadership in security incident metrics with the release of its Incident Sharing Framework. Their purpose is to enable those responsible for incident response to "create data sets that can be used and compared because of their commonality. Together, we can work to eliminate both equivocality (sic) and uncertainty, and help defend the organizations we serve." The document can be found here.

Of course Verizon Business is a for-profit organization and the license terms are as follows:

Verizon grants you a limited, revocable, personal and nontransferable license to use the Verizon Incident Sharing Framework for purposes of collecting, organizing and reporting security incident information for non-commercial purposes.

Nevertheless, I do hope that this or an alternative incident sharing framework becomes an industry standard that enables the publishing and sharing of a larger number of incidents from which we can all learn and improve our security policies and processes.

20. February 2010 · Top 25 Most Dangerous Programming Errors · Categories: Research, Security Management

Mitre, via its Common Weakness Enumeration effort, in conjunction with SANS, just published the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors. Heading the list are:

  1. Cross-site Scripting (Score = 346)
  2. SQL Injection (330)
  3. Classic Buffer Overflow (273)
  4. Cross-Site Request Forgery (261)
  5. Improper Access Control (219)

For each weakness, the report provides a description, prevention and mitigation techniques, and links to further reference material. It is well worth reading.
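As an illustration of the kind of prevention advice the report gives for #2, SQL Injection, here is a minimal sketch in Python with SQLite (the table and data are hypothetical) contrasting string concatenation with a parameterized query:

```python
import sqlite3

# Hypothetical table and data, purely for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")
conn.execute("INSERT INTO users VALUES ('bob', 'user')")

user_input = "alice' OR '1'='1"   # attacker-controlled value

# Vulnerable: concatenating the input into the SQL text lets the attacker's
# quote characters change the meaning of the query.
vulnerable = "SELECT role FROM users WHERE name = '" + user_input + "'"
print(conn.execute(vulnerable).fetchall())   # returns every row

# Safer: a parameterized query treats the input strictly as data.
safe = "SELECT role FROM users WHERE name = ?"
print(conn.execute(safe, (user_input,)).fetchall())   # returns nothing
```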

16. January 2010 · Google discloses breach and new threat type from China – Advanced Persistent Threats · Categories: Advanced Persistent Threat (APT), Books, Botnets, Breaches, Malware, Phishing, Privacy, Risk Management, Security Management, Trade Secrets Theft

Earlier this week Google took the unprecedented step of disclosing a breach which does not legally require disclosure. Google's reasons for the disclosure are tightly linked to its concerns about human rights in China and its views on China's reasons for breaching Google's email systems. These last two points are well worth discussing and are being discussed at length all over the blogosphere. However, I am going to focus on the security and disclosure issues.

First regarding disclosure, IT risk reduction strategies greatly benefit from public breach disclosure information. In other words, organizations learn best what to do and avoid overreacting to vendor scare tactics by understanding the threats that actually result in breaches. This position is best articulated by Adam Shostack and Andrew Stewart in their book, "The New School of Information Security."

I blogged about Verizon Business's forensic team's empirical 2009 Data Breach Investigations Supplemental Report here. This report shows cause-and-effect between threat types and breaches. You could not ask for better data to guide your IT risk reduction strategies.

Organizations have been so reluctant to publicly admit they suffered breaches that the federal government and many state governments had to pass laws forcing organizations to disclose breaches when customer or employee personal information is stolen.

Regarding the attack itself, it represents a relatively new type of attack called an "advanced persistent threat" (APT), which in the past had primarily been focused on governments. Now APTs are targeting companies to steal intellectual property. McAfee describes the combination of spear phishing, zero-day threats, and crafted malware here. The implications:

The world has changed. Everyone’s threat model now needs to be adapted to the new reality of these advanced persistent threats. In addition to worrying about Eastern European cybercriminals trying to siphon off credit card databases, you have to focus on protecting all of your core intellectual property, private nonfinancial customer information and anything else of intangible value.

Gunter Ollman, VP of Research at Damballa, discusses APTs further here, focusing on detecting these attacks by detecting and breaking the Command and Control (CnC) component of the threat. The key point he makes is:

Malware is just a tool. The fundamental element to these (and any espionage attack) lies with the tether that connects the victim with the attacker. Advanced Persistent Threats (APT), like their bigger and more visible brother “botnets”, are meaningless without that tether – which is more often labeled as Command and Control (CnC).
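One simple way to hunt for that tether is to look for unnaturally regular outbound connections. The sketch below is only illustrative, with made-up timestamps, and is not Damballa's method; it flags destinations whose connection intervals show very little jitter:

```python
from statistics import mean, pstdev

# Hypothetical outbound-connection timestamps (in seconds) per destination,
# e.g. extracted from proxy or firewall logs -- made-up data for illustration.
connections = {
    "203.0.113.50": [0, 300, 601, 899, 1201, 1500],   # suspiciously regular
    "198.51.100.7": [12, 45, 300, 310, 900, 2400],    # bursty, human-like
}

def beacon_score(timestamps):
    """Coefficient of variation of the gaps between connections; low jitter
    between intervals suggests automated beaconing."""
    intervals = [b - a for a, b in zip(timestamps, timestamps[1:])]
    if len(intervals) < 2:
        return None
    return pstdev(intervals) / mean(intervals)

for dst, ts in sorted(connections.items()):
    score = beacon_score(ts)
    if score is None:
        continue
    flag = "possible CnC beacon" if score < 0.1 else "ok"
    print("{}: interval cv={:.2f} -> {}".format(dst, score, flag))
```

Real CnC detection is of course far more sophisticated, but the principle is the same: the tether has to phone home, and that behavior is observable.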

Jeremiah Grossman points out the implications of Google's breach disclosure for all cloud-based product offerings here, countering Google's announcement of Default https access for Gmail.

Indeed, the threat landscape has changed.

30. December 2009 · Schneier’s take on aviation security as theater · Categories: Security Management, Security Policy

In light of TSA's reaction to the near-miss catastrophe on Northwest Flight 253 on Christmas Day, I'm glad to see that CNN republished an article by Bruce Schneier entitled "Is aviation security mostly for show?"

28. December 2009 · Verizon Business 2009 DBIR Supplemental Report provides empirical guidance for unifying security and compliance priorities · Categories: Breaches, Compliance, Risk Management, Security Management, Theory vs. Practice

The Verizon Business security forensics group's recently released 2009 Data Breach Investigations Supplemental Report provides common ground between those in the enterprise who are compliance oriented and those who are security oriented. While in theory, there should be no difference between these groups, in practice there is.   

Table 8 on page 28 evaluates the breach data set from the perspective of data types breached. Number one by far is Payment Card Data at 84%. Second is Personal Information at 31%. (Obviously each case in their data set can be categorized in multiple data breach categories.) These are exactly the types of breaches regulatory compliance standards like PCI and breach disclosure laws like Mass 201 CMR 17 are focused on.

Therefore there is high value in using the report's "threat action types" analysis to prioritize risk reduction as well as compliance programs, processes, and technologies.

While the original 2009 DBIR did provide similar information in Figure 29 on page 33, it's the Supplemental Report that provides the threat action type analysis that can drive a common set of risk reduction and compliance priorities.

23. October 2009 · Relational databases dead for log management? · Categories: Compliance, Log Management, Security Management

Larry Walsh wrote an interesting post this week, Splunk Disrupts Security Log Auditing, in which he claims that Splunk's success is due to capturing market share in the security log auditing market because of its Google-like approach to storing log data rather than using a "relational database."

There was also a very good blog post at Securosis in response – Splunk and Unstructured Data.

While there is no doubt that Splunk has been successful as a company, I am not so sure it's due to security log auditing.

It's my understanding that the primary use case for Splunk is actually in Operations where, for example, a network administrator wants to search logs to resolve an Alert generated by an SNMP-based network management system. Most SNMP-based network management systems are good at telling you "what" is going on, but not very good at telling you "why."

So when the network management system generates an Alert, the admin goes to Splunk to find the logs that would show what actually happened so s/he can fix the root cause of the Alert. For this use case, you don't really need more than a day's worth of logs.
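A rough sketch of that workflow, with made-up log records and an assumed alert time, might look like this: pull every log line within a few minutes of the alert and read the "why" directly from them.

```python
from datetime import datetime, timedelta

# Hypothetical syslog-style records -- made-up data for illustration.
logs = [
    ("2009-10-23 14:02:11", "core-sw01", "interface Gi1/0/24 down"),
    ("2009-10-23 14:02:13", "core-sw01", "spanning-tree topology change"),
    ("2009-10-23 16:45:02", "core-sw01", "config saved by admin"),
]

def logs_near(alert_time, window_minutes=10):
    """Return log lines within +/- window_minutes of an alert timestamp --
    roughly what an admin does by hand in a log search tool."""
    center = datetime.strptime(alert_time, "%Y-%m-%d %H:%M:%S")
    delta = timedelta(minutes=window_minutes)
    return [
        (ts, host, msg)
        for ts, host, msg in logs
        if abs(datetime.strptime(ts, "%Y-%m-%d %H:%M:%S") - center) <= delta
    ]

# The SNMP manager raised "core-sw01 unreachable" at 14:05; show the "why".
for entry in logs_near("2009-10-23 14:05:00"):
    print(entry)
```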

Splunk's brilliant move was to allow "free" usage of the software for one day's worth of logs or some limited amount of storage that generally would not exceed one day. In reality, a few hours of logs is very valuable. This freemium model has been very successful.

Security log auditing is a very different use case. It can require a year or more of data and sophisticated reporting capabilities. That is not to say that a Google-like storage approach cannot accomplish this.

In fact, security log auditing is just another online analytical processing (OLAP) application, albeit with potentially huge amounts of data. It's been at least ten years since the IT industry realized that OLAP applications require a different way to organize stored data than online transaction processing (OLTP) applications. OLTP applications still use traditional relational databases.

There has been much experimentation with ways to store data for OLAP applications. However, there is still a lot of value in the SQL language as a kind of open industry-standard API to stored data.

So I would agree that traditional relational database products are not appropriate for log management data storage, but SQL as a language has merit as the "API layer" between the query and reporting tools and the data.
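For example, a typical audit question such as failed logins per user per day is naturally expressed in SQL regardless of how the logs are physically stored. The sketch below uses SQLite purely as a toy stand-in back end, with hypothetical data:

```python
import sqlite3

# SQL as the "API layer" for audit queries; the table below is a toy stand-in
# for whatever back end actually stores the logs. Data is hypothetical.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE auth_events (day TEXT, username TEXT, outcome TEXT)")
conn.executemany(
    "INSERT INTO auth_events VALUES (?, ?, ?)",
    [
        ("2009-10-21", "alice",   "failure"),
        ("2009-10-21", "alice",   "success"),
        ("2009-10-22", "mallory", "failure"),
        ("2009-10-22", "mallory", "failure"),
        ("2009-10-22", "mallory", "failure"),
    ],
)

# A typical audit question: failed logins per user per day.
query = """
    SELECT day, username, COUNT(*) AS failures
    FROM auth_events
    WHERE outcome = 'failure'
    GROUP BY day, username
    ORDER BY failures DESC
"""
for row in conn.execute(query):
    print(row)
```

The point is not the storage engine; it's that the query and reporting layer can stay SQL-shaped even when the data underneath is not in a traditional relational database.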