20. July 2015 · The evolution of SIEM

In the last several years, a new “category” of log analytics for security has arisen called “User Behavior Analytics.” From my 13-year perspective, UBA is really the evolution of SIEM.

The term “Security Information and Event Management (SIEM)” was defined by Gartner 10 years ago. At the time, some people were arguing between Security Information Management (SIM) and Security Event Management (SEM). Gartner just combined the two and ended that debate.

The focus of SIEM was on consolidating and analyzing log information from disparate sources such as firewalls, intrusion detection systems, operating systems, etc. in order to meet compliance requirements, detect security incidents, and provide forensics.

At the time, correlation was designed mostly around IP addresses, although some systems could also correlate on ports, protocols, and even users. All log sources were in the datacenter, and most correlation was rule-based, although some statistical analysis was being done as early as 2003. Finally, most SIEMs used relational databases to store the logs.
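
To make that era concrete, here is a minimal sketch, in Python with an invented event format and threshold, of the IP-keyed, rule-based correlation those systems ran:

```python
# A minimal sketch of early SIEM-style correlation: keyed on source IP,
# driven by a hand-set rule. The event format and threshold are invented
# for illustration.
from collections import defaultdict

FAILED_LOGIN_THRESHOLD = 5  # static, hand-tuned value typical of rule-based systems

def correlate(events):
    """Alert when an IP accumulates enough failed logins and then succeeds."""
    failures = defaultdict(int)
    alerts = []
    for event in events:  # each event: {"src_ip": str, "action": "fail" or "success"}
        ip = event["src_ip"]
        if event["action"] == "fail":
            failures[ip] += 1
        elif event["action"] == "success" and failures[ip] >= FAILED_LOGIN_THRESHOLD:
            alerts.append(f"possible brute force followed by success from {ip}")
            failures[ip] = 0  # reset after alerting
    return alerts
```

The hard-coded threshold is the weak point: a value that is quiet on one network is noisy on another, which is why rule libraries demanded constant tuning.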

Starting in the late 2000s, organizations began to realize that while they were meeting compliance requirements, they were still being breached because of limitations in “traditional” SIEM solutions’ incident detection capabilities:

  • They were designed to focus on IP addresses rather than users. At present, correlating by IP addresses is useless given the increasing number of remote and mobile users, and the number of times a day those users’ IP addresses can change. Retrofitting the traditional SIEM for user analysis has proven to be difficult.
  • They are notoriously difficult to administer, mostly because of the rule-based method of event correlation. Customizing hundreds of rules and keeping them up to date is time consuming. Too often organizations did not realize this when they purchased the SIEM and therefore under-budgeted the resources to administer it.
  • They tend to generate too many false positives, again mostly because of rule-based event correlation. This is particularly insidious, as analysts start to ignore alerts because investigating most of them turns out to be a waste of time. It also hurts morale, resulting in high turnover.
  • They miss true positives, either because the generated alerts are simply missed by analysts overwhelmed by too many alerts, or because no rule was built to detect the attacker’s activity. The rule-building cycle is usually backward looking: an incident happens, and then rules are built to detect that situation should it happen again. Since attackers are constantly innovating, the rule-building process is a losing proposition.
  • They tend to have sluggish performance, due in part to organizations underestimating, and therefore under-budgeting, infrastructure requirements, and in part to the limitations of relational databases.

In the last few years, we have seen a new security log analysis “category” defined as “User Behavior Analytics” (UBA), which focuses on analyzing user credentials and user-oriented event data. The data stores are almost never relational, and the algorithms are mostly machine learning based: predictive in nature and requiring much less tuning.
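
As a rough illustration of the difference (my own simplified sketch, not any vendor’s algorithm), the statistical approach scores each user against his or her own learned baseline instead of a fixed, hand-tuned threshold:

```python
# A simplified sketch of user-keyed behavioral analytics: learn a per-user
# baseline, then flag statistically unusual days. The feature (daily login
# count) and the z-score cutoff are illustrative assumptions.
import statistics

def train_baselines(history):
    """history: {user: [daily login counts]} -> {user: (mean, stdev)}"""
    return {
        user: (statistics.mean(counts), statistics.stdev(counts))
        for user, counts in history.items()
        if len(counts) > 1  # need at least two observations for a stdev
    }

def is_anomalous(user, todays_count, baselines, z_cutoff=3.0):
    """Compare today's activity to the user's own history, not a global rule."""
    if user not in baselines:
        return True  # a never-before-seen user is itself worth a look
    mean, stdev = baselines[user]
    if stdev == 0:
        return todays_count != mean  # no observed variance: any change stands out
    return abs(todays_count - mean) / stdev > z_cutoff
```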

Notice how UBA solutions address most of the shortcomings of traditional SIEMs for incident detection. So the question is: why is UBA considered a separate category? It seems to me that UBA is the evolution of SIEM: better user interfaces (in some cases), better algorithms, better log storage systems, and a more appropriate “entity” on which to focus, i.e. users. In addition, UBAs can support user data coming from SaaS as well as on-premises applications and controls.

I understand that some UBA vendors’ short-term, go-to-market strategy is to complement the installed SIEM. It seems to me this is the justification for considering UBA and SIEM as separate product categories. But my question is, how many organizations are going to be willing to use two or three different products to analyze logs?

In my view, in 3-5 years there won’t be a separate UBA market. The traditional SIEM vendors are already attempting to add UBA capabilities with varying degrees of success. We are also beginning to see SIEM vendors acquire UBA vendors. We’ll see how successful the integration process will be. A couple of UBA vendors will prosper/survive as SIEM vendors due to a combination of superior user interface, more efficacious analytics, faster and more scalable storage, and lower administrative costs.

As I look over my experience in Information Security since 1999, I see three distinct eras with respect to the motivation driving technical control purchases:

  • Basic (mid-90’s to early 2000’s) – Organizations implemented basic host-based and network-based technical security controls, i.e. anti-virus and firewalls respectively.
  • Compliance (early 2000’s to mid 2000’s) – Compliance regulations such as Sarbanes-Oxley and PCI drove major improvements in security.
  • Breach Prevention and Incident Detection & Response (BPIDR) (late 2000’s to present) – Organizations realize that regulatory compliance represents a minimum level of security, and is not sufficient to cope with the fast changing methods used by cyber predators. Meeting compliance requirements will not effectively reduce the likelihood of a breach by more skilled and aggressive adversaries or detect their malicious activity.

I have three examples to support the shift from the Compliance era to the Breach Prevention and Incident Detection & Response (BPIDR) era. The first is the increasing popularity of Palo Alto Networks. No compliance regulation I am aware of makes the distinction between a traditional stateful inspection firewall and a Next Generation Firewall as defined by Gartner in their 2009 research report. Yet in the last four years, 6,000 companies have selected Palo Alto Networks because their NGFWs enable organizations to regain control of traffic at points in their networks where trust levels change or ought to change.

The second example is the evolution of Log Management/SIEM. One can safely say that the driving force for most Log/SIEM purchases in the early to mid 2000s was compliance. The fastest growing vendors of that period had the best compliance reporting capabilities. However, by the late 2000s, many organizations began to realize they needed better detection controls. We began to see a shift in the SIEM market to those solutions which not only provided the necessary compliance reports, but could also function satisfactorily as the primary detection control within limited budgets. Hence the ascendancy of Q1 Labs, which actually passed ArcSight in number of installations prior to being acquired by IBM.

The third example is email security. From a compliance perspective, Requirement 5 of PCI DSS, for example, is very comprehensive regarding anti-virus software. However, it is silent regarding phishing. The popularity of products from Proofpoint and FireEye shows that organizations have determined that blocking email-borne viruses is simply not adequate. Phishing and particularly spear-phishing must be addressed.

Rather than simply call the third era “Breach Prevention,” I chose to add “Incident Detection & Response” because preventing all system compromises that could lead to a breach is not possible. You must assume that Prevention controls will have failures. Therefore you must invest in Detection controls as well. Too often, I have seen budget imbalances in favor of Prevention controls.

The goal of a defense-in-depth architecture is to (1) prevent breaches by minimizing attack surfaces, controlling access to assets, and preventing threats and malicious behavior on allowed traffic, and (2) to detect malicious activity missed by prevention controls and detect compromised systems more quickly to minimize the risk of disclosure of confidential data.

18. October 2011 · Practical SIEM Deployment | SecurityWeek.Com

Practical SIEM Deployment | SecurityWeek.Com. Chris Poulin, from Q1 Labs, has an excellent article in SecurityWeek on practical SIEM deployment.

Chris points out that SIEM is more challenging than say Anti-Virus because (1) there are so many possible use cases for a modern SIEM and (2) context is a factor.

Chris describes the general use cases that apply to most organizations and are mostly ready to deploy using out-of-the-box rules, dashboard widgets, reports, and saved searches:

  • Botnet detection
  • Excessive authentication failures
  • Traffic from darknets
  • IDS alerts that a particular attack is targeting an asset that the VA scanner confirms is vulnerable to that exploit
These cover many of the controls in compliance mandates and provide a good foundation for your log management program, not to mention they’re the main log sources used by default rules in most SIEMs.

It surely makes sense that Phase 1 of a SIEM project should focus on collecting telemetry from key log sources and implementing the general use cases itemized above.
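
As a concrete illustration of the last use case on that list, here is a minimal sketch, with invented field names and CVE ids, of IDS/VA matching: an IDS alert survives only if the vulnerability scanner has confirmed the targeted asset is exposed to that exploit.

```python
# A minimal sketch of IDS-to-VA matching. Field names and CVE ids are
# invented; real products carry far richer alert and finding records.
def confirmed_attacks(ids_alerts, va_findings):
    """ids_alerts: [{"dst_ip": str, "cve": str}, ...]
    va_findings: {asset_ip: set of confirmed CVE ids}"""
    return [
        alert for alert in ids_alerts
        if alert["cve"] in va_findings.get(alert["dst_ip"], set())
    ]

# Example: only the alert against the host actually vulnerable survives.
alerts = [
    {"dst_ip": "10.0.0.5", "cve": "CVE-2011-0001"},
    {"dst_ip": "10.0.0.9", "cve": "CVE-2011-0002"},
]
vulns = {"10.0.0.5": {"CVE-2011-0001"}}
print(confirmed_attacks(alerts, vulns))  # -> the 10.0.0.5 alert only
```
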
Chris points out that while IPS/IDS is core telemetry, it should not be part of Phase 1 because there can be a lot of tuning work needed if the IPS/IDSs have not already been tuned. So I will call IPS/IDS integration Phase 2, although Phase 1 includes basic IPS/IDS – Vulnerability matching.

If the IPS/IDSs are well-tuned for direct analysis using the IPS/IDS’s console, then by all means include them in Phase 1. In addition, if the IPS/IDSs are well-tuned, it’s been my experience that you ought to consider “de-tuning” or opening up the IPS/IDSs somewhat and leveraging the SIEM to generate more actionable intelligence.

Chris says that the next phase (by my count, Phase 3) ought to be integrating network activity, i.e. flows, if the SIEM “has natively integrated network activity so it adds context and situational awareness.” If not, save flow integration for organization-specific use cases.

Network activity can automatically discover and profile assets in your environment, and dramatically simplify the tuning process. Vulnerability Assessment (VA) scanners can also build an asset model; however, VA scanners are noisy, performing active probes rather than passively watching the network, and only give you a point-in-time view of the network. To be sure, VA scanners are core telemetry, and every SIEM deployment should integrate them, but not as a replacement for native network activity monitoring, which provides a near-real-time view and can alert to new assets popping up, network scans (even low and slow ones), and DDoS attacks. And if you’re monitoring network activity and collecting flows, don’t forget to collect the events from the routers and switches as well.
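
The passive discovery described above is simple in principle; here is a toy sketch (the flow field names are my assumptions) of how watching flows yields an asset model and near-real-time alerts on new hosts:

```python
# A toy sketch of passive asset profiling from flow records: every flow
# teaches us which hosts exist and which server ports they answer on,
# with no active probing. Flow field names are assumptions.
from collections import defaultdict

assets = defaultdict(set)  # ip -> ports observed on the server side

def observe(flow):
    """flow: {"dst_ip": str, "dst_port": int} for the server end of a session."""
    ip, port = flow["dst_ip"], flow["dst_port"]
    if ip not in assets:
        print(f"new asset discovered passively: {ip}")  # unlike a periodic VA scan
    assets[ip].add(port)
```
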
At this point, with the first three phases complete, the security team has demonstrated the value of SIEM and is well down the product learning curve. Therefore you are ready to focus on organization-specific use cases (my Phase 4), for example adding application and database logs.

19. March 2011 · SIEM resourcing – in-house or outsource?

Anton Chuvakin wrote an article on the costs associated with Security Information & Event Management (SIEM) and log management, which will help you decide whether you should do SIEM in-house or outsource it to a Managed Security Services Provider. Anton breaks the costs down into the following categories:

  • Hard costs
    • Initial costs
    • Ongoing operating costs
    • Periodic or occasional costs
  • Soft costs
    • Initial costs
    • Ongoing operating costs
    • Periodic or occasional costs

BTW, in my experience, I have seen the total cost of a SIEM project (hard + soft) range from 10% of SIEM license costs (for shelfware SIEM “deployments”) to a mind-boggling 20x of license cost.
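
To see how quickly the soft costs can dominate, here is a back-of-the-envelope sketch that follows Anton’s categories; every figure is an invented placeholder to be replaced with your own quotes and loaded labor rates:

```python
# A back-of-the-envelope SIEM cost model following Anton's categories.
# All figures are invented placeholders, not benchmarks.
def total_cost(costs, years=3):
    """costs: {"initial": ..., "annual": ..., "periodic": ...} over a planning horizon."""
    return costs["initial"] + costs["annual"] * years + costs["periodic"]

hard = {"initial": 150_000, "annual": 30_000, "periodic": 20_000}   # license, support, hardware refresh
soft = {"initial": 80_000, "annual": 120_000, "periodic": 15_000}   # deployment labor, analysts, training

tco = total_cost(hard) + total_cost(soft)
print(f"3-year TCO: ${tco:,} ({tco / hard['initial']:.1f}x the license cost)")
# -> 3-year TCO: $715,000 (4.8x the license cost), well inside the range above
```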

19. March 2011 · RSA breach and APT – Detection Controls and Access Control

I would like to comment on RSA’s use of the term Advanced Persistent Threat (APT) in their Open Letter to RSA Customers. From my perspective, any company’s trade secrets are subject to APTs from someone. There is always some competitor or government that can benefit from your trade secrets. All APT means is that someone is willing to focus on your organization with resources of approximately the value of a penetration test plus the cost of acquiring a 0-day attack.

This means that you must assume that you are or will be compromised and therefore you must invest in “detection controls.”  In other words, your security portfolio must include detection as well as prevention controls. Important detection controls include intrusion detection, behavior anomaly detection, botnet command & control communications detection, and Security Information & Event Management (SIEM). If you don’t have the resources to administer and monitor these controls then you need to hire a managed security services provider (MSSP).

Furthermore, organizations must take a close look at their internal access control systems. Are they operationally and cost effective? Are you compromising effectiveness due to budget constraints? Are you suffering from “role explosion?” A three thousand person company with 800 Active Directory Groups is difficult to manage, to say the least. Does your access control system impede your responsiveness to changes in business requirements? Have you effectively implemented Separation of Duties? Can you cost effectively audit authorization?

27. November 2010 · Gartner: Security policy should factor in business risks

Gartner: Security policy should factor in business risks.

Understanding the business risk posed due to security threats is crucial for IT managers and security officers, two analysts have claimed.

Viewing and analyzing security threats from a business risk perspective is surely a worthwhile goal.

How do you operationalize this objective? Deploy a Log/SIEM solution with integrated IT/Business Service Management capabilities. These include:

  • Device and Software Discovery
  • Layer 2 and Layer 3 Topology Discovery and Mapping
  • User interface to group devices and applications into IT/Business Services
  • Change Management Monitoring
  • Alerts/Incidents with IT/Business Service context
  • IT/Business Service Management Reports and Dashboards

09. October 2010 · NitroSecurity Fuels Momentum With New Funding and Technology Acquisition – MarketWatch

NitroSecurity Fuels Momentum With New Funding and Technology Acquisition – MarketWatch.

Having spent eight years of my life at LogMatrix (which had been called OpenService until it was renamed in 2009) helping develop its security business, I am glad to see it in the hands of the fast-growing NitroSecurity.

We brought to market several innovative concepts to improve the effectiveness of SIEM solutions including a risk-based quantitative algorithm that worked on both network and application logs, and a user-based behavioral anomaly algorithm.

I wish my friends at LogMatrix who moved over to NitroSecurity all the best.

Dark Reading recently published an article about the problems that plague Security Information and Event Management deployments, Five Reasons SIEM Deployments Fail. First, I would say that you could use these five reasons to explain why almost any “enterprise” information technology project fails. Having said that, I would like to address each of the five points individually:

1. SIEM is too hard to use.

The nut of it really comes down to the fact that SIEM is not an easy technology to use. Part of that rests squarely at the feet of SIEM vendors, who still have not done enough to simplify their products — particularly for small and midsize enterprises, says Mike Rothman, analyst and president of Securosis.

There is no doubt that some SIEM products are harder than others to use. Ease-of-use must surely be one of the criteria you use when evaluating SIEM solutions. On the other hand, “too hard to use” may be code for not having the resources needed to deploy and operate a SIEM solution. For those organizations, there is an alternative to buying a SIEM solution: use a Managed Security Service Provider (MSSP) to provide the service. This is a particularly appropriate approach for small and midsize enterprises.

“I think that we need to see more of a set of deployment models [that] make it easier for folks that aren’t necessarily experts on this stuff to use it. In order for this market to continue to grow and to continue to drive value to customers, it has to be easier to use, and it has to be much more applicable to the midmarket customer,” Rothman says. “Right now the technology is still way too complicated for that.”

There is an alternate deployment model which Mike seems to be ignoring. Incident detection and response is complicated. If you don’t have skilled resources or the budget to hire and train people, you need to go with a MSSP. A good MSSP will have multiple deployment models to support different customer needs.

A more correct statement might be that an organization has to decide whether it has the resources to select, deploy, and operate a SIEM.

2. Log management lacks standardization.

In order to truly automate the collection of data from different devices and automate the parsing of all that data, organizations need standardization within their logged events, says Scott Crawford, analyst for Enterprise Management Associates. “This is one of the biggest issues of event management,” Crawford says. “A whole range of point products can produce a very wide variety of ways to characterize events.”

There is no doubt that there is no standardization in logs. That’s like saying there is no standardization in operating systems, firewalls, or any of the other products for which you need to collect logs. Even if there were to be a standard, there would still be ways for manufacturers to differentiate themselves. Just take a look at SNMP. It represents one of the most used industry standards. Yet manufacturers always add proprietary functions for which systems management products must account. So logs may get somewhat more standardized if, for example, Mitre’s CEE were to become a standard. But the SIEM manufacturers and MSSPs will always be dealing with integrating custom logs.
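
To illustrate the integration burden, here is a minimal sketch that maps two invented vendor formats into one common schema; real SIEMs maintain parsers like these for hundreds of products:

```python
# A minimal sketch of log normalization: two invented vendor formats mapped
# into one common event schema. The formats below are made up; real SIEM
# parser libraries cover hundreds of products and versions.
import re

PARSERS = [
    # hypothetical vendor A: "DENY tcp 10.0.0.5 -> 8.8.8.8"
    re.compile(r"(?P<action>DENY|ALLOW) (?P<proto>\w+) (?P<src>[\d.]+) -> (?P<dst>[\d.]+)"),
    # hypothetical vendor B: "act=deny proto=tcp src=10.0.0.5 dst=8.8.8.8"
    re.compile(r"act=(?P<action>\w+) proto=(?P<proto>\w+) src=(?P<src>[\d.]+) dst=(?P<dst>[\d.]+)"),
]

def normalize(raw_line):
    """Return a common-schema event dict, or None if no parser matches."""
    for pattern in PARSERS:
        match = pattern.search(raw_line)
        if match:
            event = match.groupdict()
            event["action"] = event["action"].lower()  # unify vendor vocabularies
            return event
    return None  # unparsed lines should still be stored for forensics

print(normalize("act=deny proto=tcp src=10.0.0.5 dst=8.8.8.8"))
```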

3. IT can’t rise above organizational power struggles.

“One of the key challenges our customers face is really getting all parts of the company to work together to actually make the connections to get the right scope of monitoring,” says Joe Gottlieb, president and CEO of SenSage. “And the things you want to monitor sit in different places within the organization and are controlled by different parts of the organization.”

Yes, by definition SIEM cuts across departmental lines when the goal is to provide organization-wide security posture and incident visibility. As with most “enterprise” solutions, you need senior management support in order to have any hope of success.

4. Security managers see SIEM as magic.

SIEM expectations frequently don’t jibe with reality because many IT managers believe SIEM is about as powerful as Merlin’s wand.

“A lot of people look at SIEM like it’s this magical box — I get a SIEM and it’s going to do all my work for me,” says Eric Knapp, vice president of technology marketing for NitroSecurity. “SIEM has different levels of ease of use, but they all come back to looking at information and drawing conclusions. Unless you’re looking at it in the correct context for your specific environment, it’s not going to help you as much as it should.”

SIEM has been around for ten years now. Is it really possible that SIEM still has some kind of magical mystique about it? SIEM vendors that let their sales people sell this way don’t last, because the resources the vendor has to commit to alleviate customer dissatisfaction are huge and profit-sapping. On the other hand, caveat emptor: any organization that buys a SIEM without understanding how it works and what resources are needed to make it successful has only itself to blame. Again, if you are not sure what you are getting yourself into, consider a MSSP as an alternative to buying a SIEM solution.

5. Scalability nightmares continue to reign.

There is no doubt that scalability is a particularly important attribute of a SIEM solution. And there are SIEM products out there that do not scale well. If the vendor tells you, (1) We store log data in a traditional relational database, or (2) You only need to save the “relevant” logs, RUN. These statements are sure signs of lack of scalability. On the other hand, you do need to know or estimate how many events per second and per day you will actually generate in order to configure the underlying hardware to get reasonable performance.
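
The sizing arithmetic itself is simple; the hard part is getting honest inputs. Here is a back-of-the-envelope sketch in which every input is an assumption to be replaced with measurements from a collection pilot:

```python
# Back-of-the-envelope SIEM sizing. All inputs are assumptions; measure
# your own rates during a pilot before configuring hardware.
avg_eps = 2_000          # sustained events per second across all log sources
avg_event_bytes = 500    # stored size per event, including index overhead
retention_days = 90

events_per_day = avg_eps * 86_400                     # seconds in a day
gb_per_day = events_per_day * avg_event_bytes / 1e9
total_gb = gb_per_day * retention_days

print(f"{events_per_day:,} events/day, ~{gb_per_day:.0f} GB/day, ~{total_gb:,.0f} GB retained")
# -> 172,800,000 events/day, ~86 GB/day, ~7,776 GB retained
```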

There are SIEM solutions that do scale well. They don’t use traditional relational databases to store log data. As for which log events are unimportant, that is practically impossible to determine in advance. If you are in doubt, there is no doubt. Collect them.

I thought a post about Database Activity Monitoring was timely because one of the DAM vendors, Sentrigo, published a Microsoft SQL Server vulnerability today along with a utility that mitigates the risk. Also of note, Microsoft denies that this is a real vulnerability.

I generally don't like to write about a single new vulnerability because there are just so many of them. However, Adrian Lane, CTO and Analyst at Securosis, wrote a detailed post about this new vulnerability, Sentrigo's workaround, and Sentrigo's DAM product, Hedgehog. Therefore I wanted to put this in context.

Also of note, Sentrigo sponsored a SANS report called "Understanding and Selecting a Database Activity Monitoring Solution." I found this report to be fair and balanced, as I have found all of SANS' activities.

Database Activity Monitoring is becoming a key component in a defense-in-depth approach to protecting "competitive advantage" information like intellectual property and customer and financial information, and to meeting compliance requirements.

One of the biggest issues organizations face when selecting a Database Activity Monitoring solution is the method of activity collection, of which there are three – logging, network-based monitoring, and host-based (agent) monitoring. Each has pros and cons:

  • Logging – This requires turning on the database product's native logging capability. The main advantage of this approach is that it is a standard feature included with every database. Also, some database vendors like Oracle have a complete, but separately priced, Database Activity Monitoring solution, which they claim will support other databases. Here are the issues with logging:
    • You need a log management or Security Information and Event Management (SIEM) system to normalize each vendor's log format into a standard format so you can correlate events across different databases and store the large volume of events that are generated. If you already committed to a SIEM product this might not be an issue assuming the SIEM vendor does a good job with database logs.
    • There can be significant performance overhead on the database associated with logging, possibly as high as 50%.
    • Database administrators can tamper with the logs. Also if an external hacker gains control of the database server, he/she is likely to turn logging off or delete the logs. 
    • Logging is not a good alternative if you want to block out-of-policy actions. Logging is after the fact and cannot be expected to block malicious activity. While SIEM vendors may have the ability to take actions, by the time the events are processed by the SIEM, seconds or minutes have passed, which means the exploit could already be completed.
  • Network based – An appliance is connected to a tap or a span port on the switch that sits in front of the database servers. Traffic to and, in most cases, from the databases is captured and analyzed. Clearly this puts no performance burden on the database servers at all. It also provides a degree of isolation from the database administrators. Here are the issues:
    • Local database calls and stored procedures are not seen. Therefore you have an incomplete picture of database activity.
    • You must have the network infrastructure to support these appliances.
    • It can get expensive depending on how many databases you have and how geographically dispersed they are.
  • Host based – An agent is installed directly on each database server. The overhead is much lower than with native database logging, as low as 1% to 5%, although you should test this for yourself. Also, the agent sees everything, including stored procedures. Database administrators will have a hard time interfering with the process without being noticed. Deployment is simple, i.e. neither the networking group nor the datacenter team need be involved. Finally, the installation process should not require a database restart. As for disadvantages, this is where Adrian Lane's analysis comes in. Here are his concerns:
    • Building and maintaining the agent software is difficult and more time consuming for the vendor than the network approach. However, this is the vendor's issue not the user's.
    • The analysis is performed by the agent right on the database. This could mean additional overhead, but has the advantage of being able to block a query that is not "in policy" (see the sketch after this list).
    • Under heavy load, transactions could be missed. But even if this is true, it's still better than the network based approach which surely misses local actions and stored procedures.
    • IT administrators could use the agent to snoop on database transactions to which they would not normally have access.
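
To make the blocking advantage mentioned above concrete, here is a toy sketch of the in-line policy check a host-based agent can perform before a query executes. Real agents hook the database engine itself; the regex rules and table name here are invented for illustration.

```python
# A toy sketch of an in-line, host-based DAM policy check: inspect each
# query before it executes and block out-of-policy statements. The regex
# rules and table name are invented; real agents hook the database engine.
import re

POLICY_RULES = [
    re.compile(r"\bselect\b.+\bfrom\s+credit_cards\b", re.IGNORECASE),  # direct reads of cardholder data
    re.compile(r"\bdrop\s+table\b", re.IGNORECASE),                     # destructive DDL
]

def inspect(user, query):
    """Return ("block" | "allow", audit_record) before the query runs."""
    for rule in POLICY_RULES:
        if rule.search(query):
            return "block", {"user": user, "query": query, "rule": rule.pattern}
    return "allow", {"user": user, "query": query}

print(inspect("app_svc", "SELECT pan FROM credit_cards"))  # -> blocked
```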

Dan Sarel, Sentrigo's Vice President of Product, responded in the comments section of Adrian Lane's post. (Unfortunately there is no dedicated link to the response; you just have to scroll down to it.) He addressed the "losing events under heavy load" issue by saying Sentrigo has customers processing heavy loads without losing transactions. He addressed the IT administrator snooping issue by saying that the Sentrigo sensors do not require database credentials; therefore database passwords are not available to IT administrators.

Detailed empirical data on IT Security breaches is hard to come by despite laws like California SB1386. So there is much to be learned from Verizon Business’s April 2009 Data Breach Investigations Report.

The specific issue I would like to highlight now is the section on methods by which the investigated breaches were discovered (Discovery Methods, page 37). 83% were discovered by third parties or non-security employees going about their normal business. Only 6% were found by event monitoring or log analysis. Routine internal or external audit combined came in at a rousing 2%.

These numbers are truly shocking considering the amount of money that has been spent on Intrusion Detection systems, Log Management systems, and Security Information and Event Management systems. Actually, the Verizon team concludes that many breached organizations did not invest sufficiently in detection controls. Based on my experience, I agree.

Given a limited security budget there needs to be a balance between prevention, detection, and response. I don’t think anyone would argue against this in theory. But obviously, in practice, it’s not happening. Too often I have seen too much focus on prevention to the detriment of detection and response.

In addition, these numbers point to the difficulties in deploying viable detection controls, as there were a significant number of organizations that had purchased detection controls but had not put them into production. Again, I have seen this myself, as most of the tools are too difficult to manage and it’s difficult to implement effective processes.