20. July 2015 · Comments Off on The evolution of SIEM · Categories: Uncategorized

In the last several years, a new “category” of log analytics for security has arisen called “User Behavior Analytics.” From my 13-year perspective, UBA is really the evolution of SIEM.

The term “Security Information and Event Management (SIEM)” was defined by Gartner 10 years ago. At the time, some people were arguing between Security Information Management (SIM) and Security Event Management (SEM). Gartner just combined the two and ended that debate.

The focus of SIEM was on consolidating and analyzing log information from disparate sources such as firewalls, intrusion detection systems, operating systems, etc. in order to meet compliance requirements, detect security incidents, and provide forensics.

At the time, correlation was designed mostly around IP addresses, although some systems could also correlate on ports, protocols, and even users. All log sources were in the datacenter. Most correlation was rule-based, although there was some statistical analysis done as early as 2003. Finally, most SIEMs used relational databases to store the logs.
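
To make that concrete, the sketch below shows, in Python, the kind of IP-centric, rule-based correlation a mid-2000s SIEM encoded. It is illustrative only, not any vendor’s actual rule language; the event names, threshold, and time window are all invented for the example:

```python
from collections import defaultdict

# Hypothetical normalized log records: (timestamp_seconds, source_ip, event_type).
FAILED_LOGIN_THRESHOLD = 5   # rule parameter: alert on 5 failures...
WINDOW_SECONDS = 60          # ...within any 60-second window

def correlate_failed_logins(events):
    """A classic IP-keyed correlation rule: N failed logins in a sliding window."""
    failures_by_ip = defaultdict(list)
    for ts, src_ip, event_type in events:
        if event_type == "failed_login":
            failures_by_ip[src_ip].append(ts)

    alerts = []
    for src_ip, times in failures_by_ip.items():
        times.sort()
        start = 0
        for end in range(len(times)):
            # Shrink the window until it spans at most WINDOW_SECONDS.
            while times[end] - times[start] > WINDOW_SECONDS:
                start += 1
            if end - start + 1 >= FAILED_LOGIN_THRESHOLD:
                alerts.append((src_ip, times[end]))
                break  # one alert per IP is enough for the sketch
    return alerts

sample = [(i * 10, "10.0.0.5", "failed_login") for i in range(5)]
print(correlate_failed_logins(sample))  # [('10.0.0.5', 40)]
```

Note that the rule is keyed entirely on the source IP and on hand-tuned constants. Every new attack pattern needs another rule like this one, which is exactly the maintenance burden described below.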

Starting in the late 2000s, organizations began to realize that while they were meeting compliance requirements, they were still being breached, thanks to the following limitations in “traditional” SIEM solutions’ incident-detection capabilities:

  • They were designed to focus on IP addresses rather than users. Today, correlating by IP address is all but useless given the increasing number of remote and mobile users and the number of times a day those users’ IP addresses can change. Retrofitting a traditional SIEM for user analysis has proven difficult.
  • They are notoriously difficult to administer, due mostly to the rule-based method of event correlation. Customizing and maintaining hundreds of rules is time-consuming. Too often, organizations did not realize this when they purchased the SIEM and therefore under-budgeted the resources to administer it.
  • They tend to generate too many false positives, again mostly due to rule-based event correlation. This is particularly insidious because analysts start to ignore alerts once investigating most of them proves to be a waste of time. It also hurts morale, resulting in high turnover.
  • They miss true positives, either because the generated alerts are simply overlooked by analysts overwhelmed with too many alerts, or because no rule was ever built to detect the attacker’s activity. The rule-building cycle is usually backward-looking: an incident happens, and then rules are built to detect that situation should it happen again. Since attackers are constantly innovating, the rule-building process is a losing proposition.
  • They tend to have sluggish performance, partly because organizations underestimate, and therefore under-budget, the infrastructure requirements, and partly because of the limitations of relational databases.

In the last few years, we have seen a new security log analysis “category” defined as “User Behavior Analytics” (UBA), which focuses on analyzing user credentials and user-oriented event data. The data stores are almost never relational, and the algorithms are mostly machine learning-based: predictive in nature and requiring much less tuning.
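
By way of contrast with the rule above, here is a toy per-user baseline in Python. Real UBA products use far more sophisticated machine-learning models; this sketch, with invented users and an arbitrary z-score threshold, is only meant to illustrate the shift from static rules to behavioral baselines:

```python
import statistics

# Hypothetical history: hour-of-day of each user's past successful logins.
login_history = {
    "alice": [9, 9, 10, 8, 9, 10, 9, 8, 9, 10],          # daytime worker
    "bob":   [22, 23, 22, 21, 23, 22, 23, 22, 21, 23],   # night-shift worker
}

def is_anomalous(user, login_hour, z_threshold=3.0):
    """Score a new login against the user's own baseline instead of a fixed rule."""
    hours = login_history[user]
    mean = statistics.mean(hours)
    stdev = statistics.pstdev(hours) or 0.5  # floor so flat histories don't divide by zero
    z_score = abs(login_hour - mean) / stdev
    return z_score > z_threshold

print(is_anomalous("alice", 9))   # False: within alice's normal pattern
print(is_anomalous("alice", 3))   # True: 3 a.m. is far outside her baseline
print(is_anomalous("bob", 22))    # False: normal for bob, anomalous for alice
```

The same 3 a.m. login is anomalous for one user and routine for another; no static, IP-keyed rule captures that distinction.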

Notice how UBA solutions address most of the shortcomings of traditional SIEMs for incident detection. So the question is: why is UBA considered a separate category? It seems to me that UBA is the evolution of SIEM: better user interfaces (in some cases), better algorithms, better log storage systems, and a more appropriate “entity” on which to focus, i.e. users. In addition, UBA solutions can ingest user data from SaaS applications as well as on-premises applications and controls.

I understand that some UBA vendors’ short-term, go-to-market strategy is to complement the installed SIEM. It seems to me this is the justification for considering UBA and SIEM as separate product categories. But my question is, how many organizations are going to be willing to use two or three different products to analyze logs?

In my view, in 3-5 years there won’t be a separate UBA market. The traditional SIEM vendors are already attempting to add UBA capabilities with varying degrees of success. We are also beginning to see SIEM vendors acquire UBA vendors. We’ll see how successful the integration process will be. A couple of UBA vendors will prosper/survive as SIEM vendors due to a combination of superior user interface, more efficacious analytics, faster and more scalable storage, and lower administrative costs.

07. February 2014 · Comments Off on Jumping to conclusions about the Target breach · Categories: Uncategorized

On Feb 5, 2014, Brian Krebs published a story entitled Target Hackers Broke in Via HVAC Company, which provided more details about the Target breach. The story connects the breach to the fact that Target allowed Fazio Mechanical Services, a provider of refrigeration and HVAC systems, to remotely connect to Target stores in the Pennsylvania area. Fazio provides these same services to Trader Joe’s, Whole Foods, and BJ’s Wholesale Club in Pennsylvania, Maryland, Ohio, Virginia, and West Virginia. Krebs goes on to explain that this practice is common, and why.

Krebs rightly never jumps to a conclusion about how this remote access resulted in the breach, because there are no known facts on which to base such a conclusion. However, that did not stop Network World from publishing a story on Feb 6, 2014 claiming that the Target breach happened because of a basic network segmentation error. The problem with the story is that no one has shown, much less stated, that the attackers’ ability to move around the network was due to an error in network segmentation in the Target stores.

In fact, one of the commenters, “LT,” in the Krebs story actually stated:

Target does have separate VLANs for Registers, Security cameras, office computers, registry scanners/kiosks, even a separate VLAN for the coupon printers at the registers. The problem is not lack of VLAN’s, they use them everywhere and each VLAN is configured for exactly the number of devices it needs to support. The problem is somehow lateral movement was allowed that allowed the hackers to enter in through the HVAC system and eventually get to the POS VLAN.

So there are really TWO possible conclusions one can draw from this, not just the one Network World jumped to:

  1. There were, in fact, VLAN configuration errors that allowed the attackers to move around undetected more easily.
  2. The attackers knew how to circumvent VLAN controls. For some reason, Network World failed to consider this possibility. To me, it is a reasonable alternative: VLAN hopping is a well-understood attack vector, as the sketch below illustrates.
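
For readers unfamiliar with the technique, here is what the classic double-tagging variant of VLAN hopping looks like, sketched with Scapy. The VLAN IDs, addresses, and interface below are hypothetical, and the attack only succeeds against a switch misconfigured so that the attacker’s access port shares the trunk’s native VLAN:

```python
# A double-tagged frame: the first switch strips the outer (native VLAN) tag
# and forwards the frame, inner tag intact, onto the trunk; the next switch
# then delivers it into VLAN 100 without the attacker ever being in VLAN 100.
# Requires scapy (pip install scapy) and root privileges to send.
from scapy.all import Ether, Dot1Q, IP, ICMP, sendp

frame = (
    Ether(dst="ff:ff:ff:ff:ff:ff")
    / Dot1Q(vlan=1)       # outer tag: the (assumed) native VLAN
    / Dot1Q(vlan=100)     # inner tag: the target VLAN, e.g. a POS VLAN
    / IP(src="10.1.0.50", dst="10.100.0.10")
    / ICMP()
)
sendp(frame, iface="eth0")
```

This yields one-way traffic and is easily defeated by not carrying user traffic on the native VLAN, which is precisely why a configuration error (conclusion 1) and a deliberate circumvention (conclusion 2) are hard to tell apart from the outside.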

So one might ask, why was Target relying on VLANs for network segmentation rather than firewalls? Based on my interpretation of the PCI DSS 3.0 Requirements and Security Assessment Procedures published in November 2013, there is no requirement to deploy firewalls in stores. Requirement 1.3 is fairly clear that firewalls are only relevant when there is an Internet (public) connection present. Based on my experience, retail stores do not have direct Internet access. They communicate on “private” networks to internal datacenters. Therefore, the use of VLANs to segment store traffic is not a violation of PCI DSS requirements.

Finally, even if PCI DSS specified that “stateful inspection” firewalls be deployed in stores, such firewalls do not provide adequate network security control against attackers, as I wrote previously.

28. January 2014 · Comments Off on Prioritizing Vulnerability Remediation – CVSS vs. Threat Intelligence · Categories: Uncategorized

The CVSS vulnerability scoring system is probably the most popular method of prioritizing vulnerability remediation. Unfortunately, it’s wildly inaccurate. Dan Geer, CISO of In-Q-Tel, and Michael Roytman, predictive analytics engineer at Risk I/O, published a paper in December 2013, entitled Measuring vs. Modeling, that shows empirically just how bad CVSS is.

The authors had access to 30 million live vulnerabilities across 1.1 million assets from 10,000 organizations. In addition, they had another data set of SIEM logs of 20,000 organizations from which they extracted exploit signatures. They then paired those exploits with vulnerability scans of the same organizations. The time period for their analysis was June to August 2013.

Although the two sets of data come from different organizations, the authors believe the data sets are large enough that correlating them produces significant insights. Perhaps more importantly, they say, “Because this is observed data, per se, we contend that it is a better indicator than the qualitative analysis done during CVSS scoring.”

The first step of their analysis was to establish a base rate, i.e. the probability that a randomly selected vulnerability is one that resulted in a breach. They determined that the base rate was 2%. They then correlated CVSS scores to breaches: vulnerabilities with a CVSSv2 score of 9 had a 2.4% probability of being linked to a breach, and those with a CVSSv2 score of 10 a 3.5% probability.

So how did threat intelligence do? As a proxy for threat intelligence they used presence in Exploit-DB and in Metasploit, individually and combined. The numbers for these were 12.6%, 25.1%, and 29.2% respectively! Clearly, using Exploit-DB and Metasploit together was roughly 8 to 12 times more predictive than high CVSSv2 scores.
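
The arithmetic is easy to check. A few lines of Python turn the reported probabilities into lift over the 2% base rate (the numbers come straight from the paper as quoted above; nothing new is added):

```python
# P(breach | signal) as reported by Geer and Roytman, expressed as lift
# over the 2% base rate for a randomly selected vulnerability.
BASE_RATE = 0.02

signals = {
    "CVSSv2 score = 9":         0.024,
    "CVSSv2 score = 10":        0.035,
    "In Exploit-DB":            0.126,
    "In Metasploit":            0.251,
    "In both EDB + Metasploit": 0.292,
}

for name, prob in signals.items():
    print(f"{name:26} P = {prob:5.1%}  lift = {prob / BASE_RATE:4.1f}x")

# The output shows CVSSv2's best signal at 1.8x the base rate, while the
# combined Exploit-DB + Metasploit signal is 14.6x the base rate, and
# 8.3x the CVSSv2 = 10 rate (0.292 / 0.035).
```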

This jibes with similar work done by Luca Allodi of the University of Trento. He found that 87.8% of vulnerabilities with a CVSS score of 9 or 10 were never exploited. “Conversely, a large portion of Exploit-DB and Symantec’s intelligence go unflagged by CVSS scoring; however, this is still a definitional analysis.”

This Usenix paper is well worth reading in its entirety, as are the references it provides.

One caveat: the second author’s company, Risk I/O, offers a vulnerability prioritization service based on threat intelligence. You might suspect that this study was performed with the predetermined goal of proving the value of their service. However, I find it hard to believe that Dan Geer would participate in such a scam, nor do I think Usenix would be easily fooled. In addition, this study produced results similar to Luca Allodi’s. I would surely be interested in hearing from anyone who can show that CVSS is a better predictor of vulnerability exploitation than threat intelligence.

10. October 2010 · Comments Off on Ukraine Detains 5 Individuals Tied to $70 Million in U.S. eBanking Heists — Krebs on Security · Categories: Uncategorized

Ukraine Detains 5 Individuals Tied to $70 Million in U.S. eBanking Heists — Krebs on Security.

Authorities in Ukraine this week detained five individuals believed to be the masterminds behind sophisticated cyber thefts that siphoned $70 million (out of an attempted $220 million) from hundreds of U.S.-based small to mid-sized businesses over the last 18 months, the FBI said Friday.

At a press briefing on “Operation Trident Breach,” FBI officials described the Ukrainian suspects as the “coders and exploiters” behind a series of online banking heists that have led to an increasing number of disputes and lawsuits between U.S. banks and the victim businesses that are usually left holding the bag.

This is an excellent article by Brian Krebs detailing the latest in a series of arrests related to electronic funds transfer fraud.

In another article Brian Krebs details a specific incident where hackers stole $600,000 from the town of Brigantine, NJ.

No business should be using a general-purpose computer for electronic funds transfer transactions. As I said in my last post, either use a dedicated computer or an encrypted bootable USB stick like the one we offer from Becrypt.

10. October 2010 · Comments Off on Bill would protect towns, schools from cybertheft losses – Computerworld · Categories: Uncategorized

Bill would protect towns, schools from cybertheft losses – Computerworld.

Sen. Charles Schumer (D-N.Y.) has introduced a bill that would protect municipalities and school districts against financial losses resulting from certain types of cybertheft.

Under the proposed bill, cities, towns and school districts would not be held liable for losses tied to online account takeovers and fraudulent electronic funds transfers initiated by cyberthieves, as long as the theft is reported in a timely manner.

It is the same sort of protection that consumers have under the Electronic Fund Transfer Act, which caps consumer liability for an unauthorized EFT at $50. Schumer’s bill (S. 3898) would modify portions of the EFTA to offer the same protection to schools and municipalities.

Moving the liability for electronic funds transfer fraud from the bank account holder to the bank will force banks to implement better protection measures.

In our opinion, there are only two ways online account holders can protect themselves from online bank fraud: (1) use a dedicated computer for online bank transactions, or (2) use a dedicated, encrypted, bootable USB stick. Using just a separate browser, even in a separate virtual machine, is not good enough.

If a dedicated computer is not feasible, we at Cymbel recommend Becrypt’s Trusted Client solution.

19. August 2010 · Comments Off on Internet Explorer 6 still represents more than 16% of web traffic · Categories: Uncategorized

I was reviewing Zscaler’s State of the Web – Q2 2010 and was surprised to learn that Zscaler sees 16% of web traffic still coming from Internet Explorer 6! Since Zscaler can be configured to prevent the use of IE 6, my guess is that IE 6 usage in the general population is even higher.

There is good news though: the trends for IE 6 and IE 7 are down and IE 8 is up. But IE 7 is still the most-used browser by far, at 25%; Firefox is second at 10%.

20. February 2010 · Comments Off on The only time it makes sense to use a pie chart · Categories: Uncategorized

via emergentchaos.com

An amusing image from Adam Shostack's blog to help you understand when to use pie charts, i.e. never. In the image, the yellow is the pie not yet eaten and the silver is the pie that has been eaten.

31. July 2009 · Comments Off on Clampi malware plus exploit raises risk to extremely high · Categories: Uncategorized

The risk associated with Clampi, a known, three-year-old Trojan-type virus, has gone from low to extremely high due to the sophisticated exploit created and being executed by an Eastern European cyber-crime group.

Just as businesses can differentiate themselves by applying creative processes to commodity technology, so now are cyber-criminals. Clampi has been around since 2007. As of July 23, 2009, Symantec considered the risk posed by Clampi to be Risk Level 1: Very Low. I don’t mean to pick on Symantec; McAfee, which calls the virus LLomo, had the Risk Level set to Low as of July 16, 2009. TrendMicro’s ThreatInfo site was so slow that I gave up trying to find the Risk Level they chose.

The exploit process used was first reported (to my knowledge) by Brian Krebs of the Washington Post on July 20, 2009.

On July 29, 2009, Joe Stewart, Director of Malware Research for the Counter Threat Unit (CTU) of SecureWorks released a summary of his research about Clampi and how it’s being used, just prior to this week’s Black Hat Security Conference in Las Vegas.

Clampi is a Trojan-type virus which, when installed on your desktop or laptop, can be used by this cyber-crime group to steal financial data, apparently including the user ID and password credentials used for online banking and other types of online commerce. This Eastern European cyber-crime group controls a large number of PCs infected with Clampi and is stealing money from both consumers and businesses.

Brian Krebs of the Washington Post ran a story on July 2, 2009 about a similar exploit using a different PC-based Trojan called Zeus. $415,000 was stolen from Bullitt County, KY.

Trojans like Clampi and Zeus have been around for years. What makes these exploits so high-risk is the method by which these Trojans infect us and the sophistication of the exploits’ processes for extracting money from bank accounts.

Security has always been a “cat-and-mouse” game where the bad guys develop new exploits and the good guys respond. So now I am sure we are going to see the creativity of the security vendor industry applied to reducing the risk associated with this type of exploit. At the most basic level, firewalls need to be much more application and user aware. Intrusion detection systems may already be able to detect some aspect of this type of exploit. We also need better anomaly detection capabilities.