Lenny mentions Damballa’s consultant-friendly licensing option, Damballa Failsafe. We partner with Seculert, which provides a cloud-based service for detecting botnet-infected devices in the enterprise.
As Palo Alto Networks points out, the 2011 Verizon Data Breach Report showed that the initial penetrations in over one-third of the 900 incidents analyzed could be traced to remote access errors.
Here are Palo Alto Networks’ recommendations:
Learn which remote access tools are in use, who is using them and why.
Establish a standard list of remote access tools for those who need them.
Establish a list of who should be allowed to use these tools.
Document the usage guidelines, complete with the ramifications of misuse, and educate ALL users.
Enforce usage with traffic monitoring tools or, better yet, a Palo Alto Networks next-generation firewall.
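As a rough illustration of that last step, enforcement can start with something as simple as flagging remote-access traffic from users who are not on the approved list. The port-to-tool mapping, user names, and allow-list below are hypothetical; a real deployment would feed this from firewall or flow logs:

```python
# Sketch: flag remote-access connections from unapproved users.
# The port mapping and allow-list here are illustrative assumptions.
REMOTE_ACCESS_PORTS = {3389: "RDP", 5900: "VNC", 22: "SSH"}
APPROVED_USERS = {"alice", "bob"}  # hypothetical users of the standard tools

def flag_violations(connections):
    """connections: iterable of (user, dest_port) tuples from traffic logs.

    Returns a list of (user, tool) pairs where a remote access tool
    was used by someone not on the approved list.
    """
    violations = []
    for user, port in connections:
        tool = REMOTE_ACCESS_PORTS.get(port)
        if tool and user not in APPROVED_USERS:
            violations.append((user, tool))
    return violations

log = [("alice", 3389), ("mallory", 5900), ("carol", 443)]
print(flag_violations(log))  # [('mallory', 'VNC')]
```

This is only a minimal sketch of the policy-enforcement idea; a next-generation firewall identifies the application directly rather than inferring it from port numbers.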
One of Information Security’s basic triads is Prevention, Detection, and Response. How many organizations consciously use these categories when allocating InfoSec budgets? Whether intentionally or not, most organizations I have seen are over-weighted toward Prevention.
Perhaps spending most of the InfoSec budget on Prevention made sense in the late 90s and the first half of the 2000s. But the changes we’ve seen during the last five to seven years in technology, threats, and the economy have made successful attacks inevitable. Therefore, more budget must be allocated to Detection and Response.
What’s changed during the last several years?
Technology
The rise of Web 2.0 applications and social networking for business use, in response to the need to improve collaboration with customers and suppliers, and among employees.
Higher speed networks in response to the convergence of data, voice, and video, which helps organizations cut operating costs.
Increased numbers of remote and mobile workers, in response to efforts to reduce real estate costs and avoid wasting time commuting. I put this under technology because without high-speed, low-cost Internet connections it would not be happening.
Threats
Attacker motives have changed from glory to profits.
Attackers no longer bother building fast-spreading worms like Code Red and Nimda. Instead, adversaries work stealthily while they steal credit card information, bank account credentials, and intellectual property.
The main threat vector has shifted to the application layer and what I call the “inside-out” attack vector where social engineering actions like phishing lure users out to malware-laden web pages.
Economy
The Great Recession of 2008-2009 and the slow growth of the last couple of years have put enormous pressure on InfoSec budgets.
Using Bejtlich’s Security Effectiveness Model, the Threat Actions have changed but, for the most part, the Defensive Plans and Live Defenses have not kept up.
Organizations cannot continue to simply add new prevention controls to respond to the new reality. More effective and lower cost prevention controls must replace obsolete ones to improve Prevention and to free up budget for Detection and Response.
10 October 2011 · California Governor Vetoes Bill Requiring Warrant to Search Mobile Phones | Threat Level | Wired.com
California Gov. Jerry Brown has vetoed legislation that would have required police to obtain a court warrant to search the mobile phones of suspects at the time of arrest.
The Sunday veto means that when police arrest anybody in the Golden State, they may search that person’s mobile phone — which in the digital age likely means the contents of persons’ e-mail, call records, text messages, photos, banking activity, cloud-storage services, and even where the phone has traveled.
My question is, what if you password protect your phone? Must you give the police the password? Would that not be akin to incriminating yourself? In other words, could you refuse to give the police the password to your phone on the grounds of 5th Amendment protection?
Ben says the SANS 20 Critical Controls are not actionable. They surely are actionable. While SANS refrains from specifying actual implementation recommendations, Cymbel does not. Also, each control includes metrics to enable you to evaluate its effectiveness.
Ben says they are not scalable, i.e., that they are only appropriate for large organizations with deep pockets. In reality, the SANS 20CCs provide a maturity model with four levels, so you can start with the basics and mature over time.
Ben says they are designed to sell products. Sure, 15 of 20 are technical controls. As the SANS 20CCs document says, the attackers are automated so the defenders must be as well. And while technical controls without well trained people and good process are useless, the inverse is also true. And SANS surely covers this in the 20CCs document. I’ve seen too many really good security people forced to waste their time with poor tools.
Most importantly, I would contend that the SANS 20CCs were developed from a threat perspective, while the IT UCF which Ben favors (and is the basis of the GRC product Ben’s employer, LockPath sells) is more compliance oriented. In fact, UCF stands for “Unified Compliance Framework.”
While I surely don’t agree with every aspect of the SANS 20CCs, there is a lot of value there.
For example, the first four controls relate to discovering devices and the adherence of their configurations to policies. How can you argue with that? If you don’t know what’s connected to your network, how can you assure the devices are configured properly?
How many organizations can actually demonstrate that all network-attached devices are known and properly configured? Who would attempt to do this manually? How many organizations perform the recommended metric, i.e., add several new devices and see how long it takes to discover them: minutes, hours, days, or months?
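A crude version of that discovery check can be automated by diffing the list of devices actually seen on the network against the authorized inventory. The MAC addresses below are placeholders; in practice the two sets would be fed by a discovery tool and an asset database:

```python
# Sketch: compare discovered devices against the authorized inventory.
# Both sets are placeholders here; real data would come from a network
# discovery scan and a configuration management database.
authorized = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}
discovered = {"00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"}

unknown = discovered - authorized   # on the wire but not in inventory
missing = authorized - discovered   # in inventory but not seen

print(sorted(unknown))  # ['de:ad:be:ef:00:01']
print(sorted(missing))  # ['00:1a:2b:3c:4d:5f']
```

Running this continuously, and timing how long a newly added device takes to appear in `unknown`, is one way to operationalize the metric the controls recommend.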
In closing, I find SANS to be a great organization and I applaud their efforts at developing a set of threat-oriented controls. In fact, I post a summary of the 20 Critical Security Controls on our web site.
I like Richard Bejtlich’s Security Effectiveness Model because it highlights the key notion that information security must start with (my words) an understanding of your organization’s adversaries’ motives and methods. Richard calls these “Threat Actions.” From there, you would develop a “Defensive Plan,” and implement “Live Defenses.”
This is represented as a Venn diagram made up of three circles. The more overlap among them, the more effective your information security program is. Here is the diagram:
Bejtlich calls this “threat-centric” security.
So the first question that needs to be addressed in making this approach operational is, how do you get the needed visibility to understand the Threat Actions?
I see this visibility coming from two sources:
Third party, generally available research. One such source would be SANS, which developed the SANS 20 Critical Security Controls specifically in response to its understanding of threat actions. Indeed, the latest version provides a list of “Attack Types” in Appendix C on page 72.
Organizational assessment. At the organizational level, it seems to me you are faced with an evaluation problem: selecting controls that are good at finding Threat Actions. Based on my experience, there is agreement that the primary attack vector today is at the application level. If this is correct, then the organizational assessment would focus on (a) a black-box vulnerability assessment of the organization’s customer-facing web applications and (b) an assessment of the web applications (and related threats) the organization’s employees and contractors are using.
I am looking forward to Richard and others expanding on these ideas. It could be that another book is coming. 🙂
The PCI Guru defends the PCI standard as a good framework for security in general, arguing against the refrain that compliance is not security.
My view is that the PCI Guru is missing the point. PCI DSS is a decent enough security framework. Personally I feel the SANS 20 Critical Security Controls is more comprehensive and has a maturity model to help organizations build a prioritized plan.
The issue is the approach management teams of organizations take to mitigate the risks of information technology. COSO has called this “Tone at the Top.”
A quote that rings true to me is, “In theory, there is no difference between theory and practice. But in practice there is.”
Applying it here, I would say: in theory there should be no difference between compliance and security. But in practice there often is, when an organization’s management team does not take an earnest approach to mitigating the risks of information technology. Rather, they take a “check-box” approach, i.e., going for the absolute minimum the QSA will sign off on. It is for this reason that many in our industry say that compliance does not equal security.
Marcus Ranum’s The Six Dumbest Ideas in Computer Security, written in 2005, is still regularly cited as gospel. I think it should be reviewed in the context of the 2011 threat landscape.
Default Permit – I agree. This still ranks as number one. In this age of application level threats, it’s more important than ever to implement “default deny” policies at the application level in order to reduce the organization’s attack surface. The objective must be “Unknown applications – Deny.”
Enumerating Badness – While I understand that in theory it’s far better to enumerate goodness and deny all other actions (see #1 above), in practice, I have not yet seen a host or network intrusion prevention product that uses this model that is reliable enough to replace traditional IPS technology that uses vulnerability-based signatures. If you know of such a product, I would surely be interested in learning about it.
Penetrate and Patch – I’m not sure I would call this dumb. Practically speaking, I would say it’s necessary but not sufficient.
Hacking is Cool – I agree, although this is no different than “analog hacking.” Movies about criminals are still being made because the public is fascinated by them.
Educating Users – I strongly disagree here. The issue is that our methods for educating users have left out the idea of “incentives.” In other words, the problem is that in most organizations, users’ inappropriate or thoughtless behavior does not cost them anything. Employee and contractor behavior will change if their compensation is affected by their actions. Of course, you have to have the right technical controls in place to assure you can properly attribute observed network activity to people. Because these controls are relatively new, we are at the beginning of the use of economic incentives to influence employee behavior. I wrote about Security Awareness Training and Incentives last week.
Action is better than Inaction – Somewhat valid. The management team of each organization will have to decide for itself whether it is an “early adopter” or what I would call a “fast follower.” Ranum uses the term “pause and thinkers,” clearly indicating his view. However, if there are no early adopters, there will be no innovation. And as has been shown regularly, there are only a small number of early adopters anyway.
Of the “Minor Dumbs” I agree with all of them except the last one – “We can’t stop the occasional problem.” Ranum says “yes you can.” Not possible. You must assume you will suffer successful attacks. Good security means budget is allocated to Detection and Response in addition to Prevention.
I had an interesting conversation last week about the importance of security awareness training. I know this is a controversial topic, with many in the industry believing that it’s a waste of time. Ben Tomhave makes a really important point about getting users to pay attention to security policies.
The problem is this: people are once again falling into that rut of blaming the users for making bad security decisions, all the while having created, sustained, and grown an enablement culture that drastically abstracts users from the impact of those decisions. Plainly put: if the users don’t feel the pain of their bad decisions, then they have no incentive to make a change. This is basic psychology.
It’s time to quit trying the same old stupid donkey tricks. What we’re doing has failed, and will continue to fail. The rules of this game mean we lose – every. single. time. We need to change those rules, and fast. Specifically, we need to:
Include security responsibilities in all job descriptions.
Tie security performance into employee performance reviews.
Include disciplinary actions for all security incidents.
Tomhave calls this psychology. I relate it to the “economics of security” as described by Tyler Moore and Ross Anderson.
Yesterday I wrote about Apple’s latest fixes for iWork and iOS and encouraged folks to update. Now that more information is available it is clearly critical that all users update as soon as possible, unless they only use their device for telephone calls.
The flaws in iOS 4.3.4, 4.2.9, and 5.0b3 and lower are far more serious than Apple’s description of its fix suggests: “This issue is addressed through improved validation of X.509 certificate chains.”
Do not do any e-commerce or banking transactions until you upgrade.
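To illustrate what “validation of X.509 certificate chains” means in practice (this is not Apple’s code, just a generic sketch using Python’s ssl module), compare a properly configured TLS client with one that skips verification. A client that skips verification will accept any certificate chain, including a forged one, which is the class of flaw this update addresses:

```python
import ssl

# A properly configured TLS client validates the server's full X.509
# chain against trusted root CAs and checks that the certificate
# matches the hostname. Python's default context does both.
ctx = ssl.create_default_context()
print(ctx.verify_mode == ssl.CERT_REQUIRED)  # chain must validate
print(ctx.check_hostname)                    # hostname must match

# The broken configuration, for contrast -- never do this in real code:
# with verify_mode CERT_NONE, any chain is accepted, enabling
# man-in-the-middle attacks like the one the iOS fix prevents.
bad = ssl._create_unverified_context()
print(bad.verify_mode == ssl.CERT_NONE)
```

The vulnerable iOS versions behaved, in effect, like the second context: an attacker on the network path could present a crafted chain and intercept supposedly secure traffic.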