26. October 2011 · Comments Off on Australia DSD’s Top Four Security Strategies · Categories: blog

The SANS Institute has endorsed the Australian Defence Signals Directorate’s (DSD) top four strategies for mitigating information security risk:

  1. Patching applications and using the latest version of an application
  2. Patching operating systems
  3. Keeping admin rights under strict control (and forbidding the use of administrative accounts for email and browsing)
  4. Whitelisting applications
While there is nothing new about these four strategies, I would like to discuss #4. The Australian DSD’s Strategies to Mitigate Targeted Cyber Intrusions defines application whitelisting as preventing unapproved programs from running on PCs. I recommend extending whitelisting to the network. In other words, define which applications are allowed on the network, by user group, both internally and Web-based, and deny all others.
My recommendation is not really a new idea either. After all, that’s what firewalls are supposed to do. The issue is that the traditional stateful inspection firewall does it using only port numbers and IP addresses. For at least the last five years, applications and users have routinely bypassed these firewalls by using applications that share open ports.
This is why, in October 2009, Gartner started talking about “Next Generation Firewalls,” which enable you to implement whitelisting on the network at Layer 7 (the application layer) as well as down the stack at Layers 4 and 3. In other words, they extend the traditional “Positive Control Model” firewall functionality up through the application layer. (If you have not seen that Gartner research report, please contact me and I will arrange for you to receive a copy.)
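To make the idea concrete, here is a minimal Python sketch of the positive control model extended to Layer 7: traffic is allowed only when the user group and application pair is explicitly whitelisted, and everything else is denied by default. The policy table, group names, and application identifiers are hypothetical illustrations, not any vendor’s actual rule syntax.

```python
# Minimal sketch of a Layer-7 "positive control model": allow traffic only
# when the (user group, application) pair is explicitly whitelisted; deny
# everything else. The policy table below is hypothetical.

WHITELIST = {
    "finance":     {"ms-exchange", "salesforce", "ssl"},
    "engineering": {"ssh", "git", "ms-exchange"},
    "all-users":   {"dns", "ms-exchange", "web-browsing"},
}

def is_allowed(user_group: str, application: str) -> bool:
    """Return True only if the application is explicitly whitelisted
    for the user's group or for all users; deny everything else."""
    allowed = WHITELIST.get(user_group, set()) | WHITELIST["all-users"]
    return application in allowed

# Example: a finance user running an unapproved P2P application is denied.
print(is_allowed("finance", "salesforce"))   # True
print(is_allowed("finance", "bittorrent"))   # False (default deny)
```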
18. October 2011 · Comments Off on Practical SIEM Deployment | SecurityWeek.Com · Categories: blog

Practical SIEM Deployment | SecurityWeek.Com. Chris Poulin, from Q1 Labs, has an excellent article in SecurityWeek on practical SIEM deployment.

Chris points out that SIEM is more challenging than, say, anti-virus because (1) there are so many possible use cases for a modern SIEM and (2) context is a factor.

Chris describes the general use cases that apply to most organizations and are mostly ready to deploy using out-of-the-box rules, dashboard widgets, reports, and saved searches:

  • Botnet detection
  • Excessive authentication failures
  • Traffic from darknets
  • IDS alerts that a particular attack is targeting an asset that the VA scanner confirms is vulnerable to that exploit
These cover many of the controls in compliance mandates and provide a good foundation for your log management program, not to mention they’re the main log sources used by default rules in most SIEMs. 
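To illustrate the last use case in the list above, here is a minimal Python sketch of IDS-to-vulnerability correlation: an alert is escalated only when the vulnerability assessment scanner has confirmed that the targeted asset is vulnerable to that exploit. The record layouts (dicts keyed by CVE and destination IP) are hypothetical, not any particular SIEM’s native schema.

```python
# Minimal sketch: raise an offense only when an IDS alert targets an asset
# the VA scanner has confirmed is vulnerable to that exploit.
# All field names and values below are hypothetical.

ids_alerts = [
    {"src": "203.0.113.7",  "dst": "10.0.0.5", "cve": "CVE-2011-3192"},
    {"src": "198.51.100.9", "dst": "10.0.0.8", "cve": "CVE-2011-0611"},
]

# Output of the VA scanner, indexed by asset IP.
va_findings = {
    "10.0.0.5": {"CVE-2011-3192", "CVE-2010-2568"},
    "10.0.0.8": {"CVE-2009-3129"},
}

def correlate(alerts, findings):
    """Yield only the alerts whose target is confirmed vulnerable."""
    for alert in alerts:
        if alert["cve"] in findings.get(alert["dst"], set()):
            yield alert

for offense in correlate(ids_alerts, va_findings):
    print("Actionable:", offense)   # only the 10.0.0.5 alert is emitted
```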
It surely makes sense that Phase 1 of a SIEM project should focus on collecting telemetry from key log sources and implementing the general use cases itemized above.
Chris points out that while IPS/IDS is core telemetry, it should not be part of Phase 1 because a lot of tuning work may be needed if the IPS/IDS sensors have not already been tuned. So I will call full IPS/IDS integration Phase 2, although Phase 1 includes basic IPS/IDS-to-vulnerability matching.
If the IPS/IDS sensors are already well tuned for direct analysis from their own consoles, then by all means include them in Phase 1. In that case, it has been my experience that you ought to consider “de-tuning,” or opening up, the sensors somewhat and leveraging the SIEM to generate more actionable intelligence.
Chris says that the next phase (by my count, Phase 3) ought to be integrating network activity, i.e., flows, if the SIEM “has natively integrated network activity so it adds context and situational awareness.” If not, save flow integration for organization-specific use cases.
Network activity can automatically discover and profile assets in your environment, and dramatically simplify the tuning process. Vulnerability Assessment (VA) scanners can also build an asset model; however, VA scanners are noisy, performing active probes rather than passively watching the network, and only give you a point-in-time view of the network. To be sure, VA scanners are core telemetry, and every SIEM deployment should integrate them, but not as a replacement for native network activity monitoring, which provides a near-real-time view and can alert to new assets popping up, network scans—even low and slow ones—and DDoS attacks. And if you’re monitoring network activity and collecting flows, don’t forget to collect the events from the routers and switches as well.
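As a rough illustration of what passive profiling looks like, here is a minimal Python sketch that builds an asset model from flow records and flags previously unseen hosts. The flow record fields are hypothetical and stand in for whatever flow format your SIEM actually consumes.

```python
# Minimal sketch of passively building an asset model from flow records,
# in contrast to active VA probing: each flow updates what we know about
# the hosts involved, and previously unseen hosts are flagged as new assets.
# The flow record fields below are hypothetical.

from collections import defaultdict

asset_model = defaultdict(lambda: {"ports_served": set(), "last_seen": None})

def observe_flow(flow, known_assets):
    """Update the asset model from one flow record; report new assets."""
    for ip in (flow["src_ip"], flow["dst_ip"]):
        if ip not in known_assets and ip not in asset_model:
            print("New asset discovered:", ip)
        asset_model[ip]["last_seen"] = flow["timestamp"]
    # The responding side tells us which service the asset offers.
    asset_model[flow["dst_ip"]]["ports_served"].add(flow["dst_port"])

observe_flow(
    {"src_ip": "10.0.0.23", "dst_ip": "10.0.0.5",
     "dst_port": 443, "timestamp": "2011-10-18T09:12:00Z"},
    known_assets={"10.0.0.5"},
)
```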
At this point, with the first three phases complete, the security team has demonstrated the value of SIEM and is well down the product learning curve. Therefore, it is ready to focus on organization-specific use cases (my Phase 4), for example adding application and database logs.
13. October 2011 · Comments Off on Looking for Infected Systems as Part of a Security Assessment · Categories: blog

Looking for Infected Systems as Part of a Security Assessment. Lenny Zeltser describes techniques for identifying signs of malware or compromise in an enterprise setting.

Lenny mentions Damballa’s consultant-friendly licensing option, Damballa Failsafe. We partner with Seculert, which provides a cloud-based service for detecting botnet-infected devices in the enterprise.

12. October 2011 · Comments Off on Controlling remote access tool usage in the enterprise · Categories: blog

Palo Alto Networks’ recent advice on controlling remote access tools in the enterprise was prompted by Google’s release of a remote desktop control feature for Chrome, which can also be configured to “punch through the firewall.”

As Palo Alto Networks points out, the 2011 Verizon Data Breach Investigations Report showed that the initial penetrations in more than one-third of the 900 incidents analyzed could be traced to remote access errors.

Here are Palo Alto Networks’ recommendations:

  1. Learn which remote access tools are in use, who is using them, and why.
  2. Establish a standard list of remote access tools for those who need them.
  3. Establish a list of who should be allowed to use these tools.
  4. Document the usage guidelines, complete with the ramifications of misuse, and educate ALL users.
  5. Enforce the policy using traffic monitoring tools or, better yet, a Palo Alto Networks next-generation firewall (a sketch of such enforcement follows below).
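As a rough sketch of recommendation 5, the following Python snippet compares observed application usage against the sanctioned tool list and authorized user list and reports violations. The application names, user names, and log fields here are hypothetical; a next-generation firewall that identifies applications at Layer 7 would supply equivalent data.

```python
# Minimal sketch: flag remote access tool usage that falls outside policy.
# Tool names, user names, and log fields are hypothetical examples.

SANCTIONED = {"approved-vpn"}            # tools from recommendation 2
AUTHORIZED_USERS = {"jsmith", "akumar"}  # users from recommendation 3

REMOTE_ACCESS_APPS = {"approved-vpn", "chrome-remote-desktop",
                      "teamviewer", "logmein", "rdp"}

def audit(log_records):
    """Yield records where a remote access tool is used outside policy."""
    for rec in log_records:
        if rec["app"] not in REMOTE_ACCESS_APPS:
            continue
        if rec["app"] not in SANCTIONED or rec["user"] not in AUTHORIZED_USERS:
            yield rec

traffic_log = [
    {"user": "jsmith", "app": "approved-vpn"},
    {"user": "bdoe",   "app": "chrome-remote-desktop"},
]
for violation in audit(traffic_log):
    print("Policy violation:", violation)
```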
12. October 2011 · Comments Off on Rethinking the balance between Prevention, Detection, and Response · Categories: blog

One of Information Security’s basic triads is Prevention, Detection, and Response. How many organizations consciously use these categories when allocating InfoSec budgets? Whether intentionally or not, I have found that most organizations are over-weighted toward Prevention.

Perhaps spending most of the InfoSec budget on Prevention made sense in the late 1990s and the first half of the 2000s. But the changes we’ve seen during the last five to seven years in technology, threats, and the economy have made it all but inevitable that organizations will experience successful attacks. Therefore, more budget must be allocated to Detection and Response.

What’s changed during the last several years?

Technology

  • The rise of Web 2.0 applications and social networking for business use, in response to the need to improve collaboration with customers and suppliers, and among employees.
  • Higher-speed networks, in response to the convergence of data, voice, and video, which helps organizations cut operating costs.
  • An increased number of remote and mobile workers, in response to efforts to reduce real estate costs and avoid wasting time commuting. I put this under technology because without high-speed, low-cost Internet connections this would not be happening.

Threats

  • Attacker motives have changed from glory to profits.
  • Attackers no longer bother building fast-spreading worms like Code Red and Nimda. Instead, adversaries work stealthily while they steal credit card information, bank account credentials, and intellectual property.
  • The main threat vector has shifted to the application layer and what I call the “inside-out” attack vector, where social engineering actions like phishing lure users out to malware-laden web pages.

Economy

  • The Great Recession of 2008-2009 and the slow growth of the last couple of years have put enormous pressure on InfoSec budgets.

In terms of Bejtlich’s Security Effectiveness Model, the Threat Actions have changed but, for the most part, the Defensive Plans and Live Defenses have not kept up.

Organizations cannot continue to simply add new prevention controls in response to the new reality. More effective, lower-cost prevention controls must replace obsolete ones, both to improve Prevention and to free up budget for Detection and Response.

10. October 2011 · Comments Off on California Governor Vetoes Bill Requiring Warrant to Search Mobile Phones | Threat Level | Wired.com · Categories: blog

California Governor Vetoes Bill Requiring Warrant to Search Mobile Phones | Threat Level | Wired.com.

California Gov. Jerry Brown is vetoing legislation requiring police to obtain a court warrant to search the mobile phones of suspects at the time of any arrest.

The Sunday veto means that when police arrest anybody in the Golden State, they may search that person’s mobile phone — which in the digital age likely means the contents of the person’s e-mail, call records, text messages, photos, banking activity, cloud-storage services, and even where the phone has traveled.

My question is, what if you password-protect your phone? Must you give the police the password? Would that not be akin to incriminating yourself? In other words, could you refuse to give the police the password to your phone on the grounds of Fifth Amendment protection?

04. October 2011 · Comments Off on The 20 Controls That Aren’t – The Falcon’s View · Categories: blog

The 20 Controls That Aren’t – The Falcon’s View.

I would like to respond to Ben Tomhave’s attack on the SANS 20 Critical Security Controls.

Ben says they are not actionable. They surely are actionable. While SANS refrains from specifying actual implementation recommendations, Cymbel does not. Also, each control includes metrics that enable you to evaluate its effectiveness.

Ben says they are not scalable, i.e., that they are only appropriate for large organizations with deep pockets. In reality, the SANS 20CCs provide a maturity model with four levels, so you can start with the basics and mature over time.

Ben says they are designed to sell products. Sure, 15 of the 20 are technical controls. As the SANS 20CCs document says, the attackers are automated, so the defenders must be as well. And while technical controls without well-trained people and good processes are useless, the inverse is also true, and SANS surely covers this in the 20CCs document. I’ve seen too many really good security people forced to waste their time with poor tools.

Most importantly, I would contend that the SANS 20CCs were developed from a threat perspective, while the IT UCF, which Ben favors (and which is the basis of the GRC product his employer, LockPath, sells), is more compliance-oriented. In fact, UCF stands for “Unified Compliance Framework.”

While I surely don’t agree with every aspect of the SANS 20CCs, there is a lot of value there.

For example, the first four controls relate to discovering devices and the adherence of their configurations to policies. How can you argue with that? If you don’t know what’s connected to your network, how can you assure the devices are configured properly?

How many organizations can actually demonstrate that all network-attached devices are known and properly configured? Who would attempt to do this manually? How many organizations perform the recommended metric, i.e., add several new devices and see how long it takes to discover them: minutes, hours, days, or months?
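Here is a minimal Python sketch of that metric: plant a few test devices on the network, then measure how long it takes for all of them to appear in your inventory. The get_discovered_devices function is a hypothetical stand-in for whatever discovery or monitoring tool you actually query.

```python
# Minimal sketch of the discovery metric: plant test devices, then measure
# how long the monitoring system takes to notice them. get_discovered_devices
# is a hypothetical stand-in for your real inventory source.

import time

def time_to_discover(test_devices, get_discovered_devices, poll_seconds=60):
    """Return seconds elapsed until every test device appears in inventory."""
    start = time.time()
    remaining = set(test_devices)
    while remaining:
        remaining -= set(get_discovered_devices())
        if remaining:
            time.sleep(poll_seconds)
    return time.time() - start

# Example usage with a fake inventory source that already knows the devices.
elapsed = time_to_discover(
    ["00:16:3e:aa:bb:cc", "00:16:3e:dd:ee:ff"],
    get_discovered_devices=lambda: ["00:16:3e:aa:bb:cc", "00:16:3e:dd:ee:ff"],
    poll_seconds=1,
)
print(f"All test devices discovered in {elapsed:.1f} seconds")
```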

In closing, I find SANS to be a great organization and I applaud their efforts at developing a set of threat-oriented controls. In fact, I post a summary of the 20 Critical Security Controls on our web site.