I would like to comment on RSA's use of the term Advanced Persistent Threat (APT) in its Open Letter to RSA Customers. From my perspective, any company's trade secrets are subject to APTs from someone. There is always some competitor or government that can benefit from your trade secrets. All APT really means is that someone is willing to focus on your organization with resources roughly equal to the cost of a penetration test plus the cost of acquiring a 0-day exploit.
This means that you must assume you are or will be compromised, and therefore you must invest in "detection controls." In other words, your security portfolio must include detection as well as prevention controls. Important detection controls include intrusion detection, behavior anomaly detection, botnet command & control (C&C) communications detection, and Security Information & Event Management (SIEM). If you don't have the resources to administer and monitor these controls, then you need to hire a managed security services provider (MSSP).
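To make "detection" concrete, here is a minimal sketch, in Python with made-up data and thresholds, of one such control: flagging hosts that phone home at suspiciously regular intervals, the classic botnet beaconing pattern. Real products do far more, but the core idea is this simple.

```python
from statistics import mean, pstdev

# Hypothetical connection log: (source host, destination, unix timestamp).
CONN_LOG = [
    ("10.0.0.5", "203.0.113.9", t) for t in range(0, 3600, 60)  # every 60s: beacon-like
] + [
    ("10.0.0.7", "198.51.100.2", t) for t in (3, 410, 422, 1100, 2905)  # irregular
]

def find_beacons(log, min_events=10, max_jitter=0.1):
    """Flag (host, dest) pairs whose inter-connection intervals are
    nearly constant -- low jitter suggests automated check-ins."""
    by_pair = {}
    for src, dst, ts in log:
        by_pair.setdefault((src, dst), []).append(ts)
    suspects = []
    for pair, times in by_pair.items():
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        if len(gaps) + 1 < min_events:
            continue
        avg = mean(gaps)
        if avg > 0 and pstdev(gaps) / avg < max_jitter:
            suspects.append((pair, avg))
    return suspects

for (src, dst), interval in find_beacons(CONN_LOG):
    print(f"{src} -> {dst}: beacon-like traffic every ~{interval:.0f}s")
```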
Furthermore, organizations must take a close look at their internal access control systems. Are they operationally effective and cost-effective? Are you compromising effectiveness due to budget constraints? Are you suffering from "role explosion"? A three-thousand-person company with 800 Active Directory groups is difficult to manage, to say the least. Does your access control system impede your responsiveness to changes in business requirements? Have you effectively implemented Separation of Duties? Can you cost-effectively audit authorization?
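On that last question, here is a minimal sketch of what an automated Separation of Duties audit might look like. All role names and conflict pairs are hypothetical; in practice the assignments would come from your directory (e.g., Active Directory groups).

```python
# Hypothetical user-to-role assignments, e.g. exported from directory groups.
USER_ROLES = {
    "alice": {"AP_InvoiceEntry", "AP_PaymentApproval"},   # conflict!
    "bob":   {"AP_InvoiceEntry", "Helpdesk"},
    "carol": {"AP_PaymentApproval", "Auditor"},
}

# Role pairs that violate Separation of Duties if held by one person.
SOD_CONFLICTS = [
    ("AP_InvoiceEntry", "AP_PaymentApproval"),  # can't both create and approve payments
]

def audit_sod(user_roles, conflicts):
    """Return (user, role_a, role_b) for every SoD violation found."""
    violations = []
    for user, roles in user_roles.items():
        for a, b in conflicts:
            if a in roles and b in roles:
                violations.append((user, a, b))
    return violations

for user, a, b in audit_sod(USER_ROLES, SOD_CONFLICTS):
    print(f"SoD violation: {user} holds both {a} and {b}")
```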
RSA's announcement was not specific in the information it gave, so exactly what this means for SecurID isn't clear. In the likely worst case, the seed values and their distribution among RSA's 25,000 SecurID-using customers may have been compromised. This would make it considerably easier for attackers to compromise systems dependent on SecurID: rather than having to acquire a suitable token, they would be required only to eavesdrop on a single authentication attempt (so that they could determine how far through the sequence a particular token was), and from then on would be able to generate numbers at their whim.
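To see why seed compromise is so serious, here is a deliberately simplified sketch of how any time-synchronized token derives codes from a shared seed. This is NOT RSA's proprietary SecurID algorithm (I'm substituting HMAC-SHA256 for illustration), but the principle holds: the code is a pure function of seed and time, so whoever holds the seed needs no token at all.

```python
import hashlib
import hmac
import struct
import time

def token_code(seed, step=60, digits=6, at=None):
    """Derive a time-based code from a secret seed. Generic illustration
    only -- SecurID's real algorithm is proprietary -- but the principle
    is the same: code = f(seed, time), so whoever holds the seed can
    compute every future code."""
    counter = int((time.time() if at is None else at) // step)
    mac = hmac.new(seed, struct.pack(">Q", counter), hashlib.sha256).digest()
    return str(int.from_bytes(mac[:4], "big") % 10**digits).zfill(digits)

seed = b"example-seed-0001"   # if this leaks, the token adds nothing
print(token_code(seed))       # identical on the token and on the attacker's laptop
```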
The article also covers other possibilities, some more benign, some more grave, and some less likely. I would think that RSA customers are receiving more precise information.
While SecurID is probably the most popular two-factor authentication solution, it may be worth noting that there are many other choices available from RSA and its competitors.
14 March 2011 · Fear, Information Security, and a TED Talk « The New School of Information Security
Thomas Goetz gave a great TEDMED talk about making health information understandable to patients in order to motivate them to act. Adam blogged about it because it reinforces his notion that fear does not motivate management to invest in information security.
Thomas suggests a four-step feedback loop: Personalized Data, Relevance, Choices, Action.
For health care, Thomas shows that the key problem is poor information presentation design. Is the problem the same in information security, or is it a lack of relevant information to present?
In information security, people, and especially management, don't act because they don't believe that more firewalls, SSL, and IDS will protect their cloud services. They don't believe that because we don't talk about how well those things actually work. Do companies that have a firewall experience fewer breaches than those with just a filtering router? Does Brand X's firewall work better than Brand Y's? Who knows? And absent knowing, why invest? There's no evidence of efficacy. Without evidence, there's no belief in efficacy. Without a belief in efficacy, there's no investment.
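If we did collect that evidence, the analysis itself would be trivial. A minimal sketch with entirely made-up survey numbers, comparing breach rates of firewall and filtering-router populations with a two-proportion z-test:

```python
from math import sqrt

# Entirely hypothetical survey data: (companies surveyed, companies breached).
firewall      = (400, 48)   # 12% breached
filter_router = (300, 54)   # 18% breached

def two_proportion_z(a, b):
    """z-statistic for the difference between two breach rates."""
    (n1, x1), (n2, x2) = a, b
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)                   # pooled rate
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))  # standard error
    return (p1 - p2) / se

z = two_proportion_z(firewall, filter_router)
print(f"breach rates: {48/400:.0%} vs {54/300:.0%}, z = {z:.2f}")
# |z| > 1.96 would be significant at the 5% level -- i.e., actual
# evidence of efficacy, rather than fear, to justify the investment.
```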
We're going to need to move away from fear and toward evidence of efficacy. Doing so is going to require us all to talk about investments and outcomes. When we do, we're going to start getting better rapidly.
The PCI Guru (a pseudonymous PCI QSA) wrote a nice introduction to virtualization security with respect to PCI compliance. If you are not familiar with virtualization, he/she starts with the basics – defining “bare-metal” vs. “hosted” hypervisors and pointing out that hypervisors are operating systems.
Maybe the PCI Guru is planning another post that will go further, but I feel it's important to point out that along with the virtual machines, there are virtual switches located on the host system. As a result, traditional network-based security solutions have no visibility into, and therefore no control over, the traffic between VMs on the same host.
In addition, when organizations take advantage of the flexibility of virtualization by quickly creating and moving VMs as needed to meet application performance and availability requirements, it’s very difficult, to say the least, for network security administrators to keep up with the changes.
For these reasons, a new type of product has entered the market: the hypervisor-based firewall, which resides right in the hypervisor. In addition to controlling traffic among VMs on a host, a hypervisor-based firewall needs to be able to identify newly added VMs and automatically apply the appropriate policies.
Furthermore, a good hypervisor-based firewall should perform host intrusion detection functions since it’s in the hypervisor and can see into the VMs.
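To make the automation requirement concrete, here is a minimal sketch of the policy engine such a product needs. All names and APIs here are hypothetical — real products hook into the hypervisor and its virtual switch — but the auto-apply-on-new-VM behavior is the point.

```python
# Hypothetical model of a hypervisor-based firewall's policy engine.

POLICIES = {
    # security group -> allowed inbound (source group, port)
    "web": {("any", 443)},
    "app": {("web", 8080)},
    "db":  {("app", 5432)},
}

class HypervisorFirewall:
    def __init__(self, policies):
        self.policies = policies
        self.vm_groups = {}          # vm name -> security group

    def on_vm_created(self, vm_name, group):
        """Called when the hypervisor reports a new VM: policy is
        applied immediately, with no network admin in the loop."""
        self.vm_groups[vm_name] = group
        print(f"applied '{group}' policy to new VM {vm_name}")

    def allow(self, src_vm, dst_vm, port):
        """Decide inter-VM traffic that a physical firewall never sees."""
        src = self.vm_groups.get(src_vm, "unknown")
        rules = self.policies.get(self.vm_groups.get(dst_vm, "unknown"), set())
        return (src, port) in rules or ("any", port) in rules

fw = HypervisorFirewall(POLICIES)
fw.on_vm_created("web01", "web")
fw.on_vm_created("db01", "db")           # e.g. spun up to meet demand
print(fw.allow("web01", "db01", 5432))   # False: web tier can't reach the DB directly
```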
Finally, there are performance considerations. Since we are talking about host-based technology, the question of CPU resource drain must be examined. In other words, how much performance are you giving up in return for the security you are gaining?
As you may have gathered from previous posts, I recommend the SANS 20 Critical Security Controls for Effective Cyber Defense as an information security road map for medium and large enterprises. The controls are selected and prioritized by answering the following questions:
Who are the attackers?
What are their objectives?
What attack vectors do they use?
What target systems do they use to gain entry?
What types of protection could have stopped them?
Roger Grimes provides a comprehensive answer to the first question, enumerating seven types of attackers.
04 March 2011 · Carpe Breachum: How the HBGary breach can make us stronger – CSO Online
Nick Selby makes an interesting point in his analysis of the HBGary Federal breach: we are all targets, and we all get hacked. Therefore we should be more willing to share information about attacks, which would enable us all to better defend ourselves.
A famous security researcher once answered my question about how he avoids being hacked: "Hell, Nick, I get hacked all the time." He said it as if I were asking a really stupid question, because in fact, I was.
Admitting that we are all targets; admitting that we’ve all been hacked; admitting that we all face the same issues, means that we can move from psychological and marketing objections, and look instead to solving or at least addressing the logistical and pragmatic barriers to information and intelligence sharing.
Rich Mogull at Securosis argues that security vendors should not use the $4.3 million HHS HIPAA fine levied against Cignet Health as a motivator to improve information security.
While I agree that this HHS fine and the $1 million Mass General fine had nothing to do with IT security, it seems to me that HHS is signaling that it is serious about enforcing HIPAA security and privacy rules. After all, HIPAA was passed in 1996, and these are the first-ever fines issued.
You certainly can take Rich’s approach that the Cignet fine is just about “big boxes of paper and a bad attitude.” But I would not want to be the organization that suffers an information security breach due to lax controls.
For example, suppose you had decided to use the SANS 20 Critical Security Controls as your prescriptive information security guide and had implemented all of the Quick Wins and Visibility/Attribution sub-controls, some or most of the Config/Hygiene sub-controls (with a plan for the rest), and the appropriate Advanced sub-controls. If you still suffered a breach, you surely could not be tagged with "willful neglect."
We will see what fine, if any, HHS levies against the New York City hospital system that admitted to a breach affecting 1.7 million hospital staff, patients, vendors, and contractors.
The W3C today released a draft specification for a method to detect and block XSS-type attacks:
The purpose of this specification is to provide a method for web applications to broadly address a large class of vulnerabilities known as content injection, which is the primary focus of Content Security Policy. Other threats, such as cross-site request forgery, are not a focus of this specification.
Content Security Policy is a declarative policy framework that enables web authors and server administrators to specify the permitted sources of content in their web applications and to restrict the capabilities of that content. Content Security Policy mitigates and detects content injection attacks such as cross-site scripting (XSS).
Content Security Policy is not intended to be a fool-proof security system, but it is intended to provide an effective layer of security that will dovetail with any site’s existing web application security program.
Content Security Policy is an opt-in mechanism which requires that servers explicitly declare a security policy in order to receive any of the protection described in this document. Content Security Policies are applied by the user agent on a per-resource basis, so servers must emit a security policy with each resource that the server wants protected.
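As a concrete illustration of that opt-in, here is a minimal sketch of a server emitting a policy with every resource. The policy values are my own illustration rather than anything from the draft, and note that early browser implementations used vendor-prefixed header names such as X-Content-Security-Policy.

```python
# Minimal WSGI app that opts in to CSP by sending a policy with every
# resource it serves. The directives restrict where content may load from.
from wsgiref.simple_server import make_server

CSP = "default-src 'self'; script-src 'self' https://cdn.example.com"

def app(environ, start_response):
    headers = [
        ("Content-Type", "text/html"),
        # An inline <script> injected by an attacker is blocked by the
        # browser, because 'unsafe-inline' is absent from script-src.
        ("Content-Security-Policy", CSP),
    ]
    start_response("200 OK", headers)
    return [b"<html><body>Hello, CSP</body></html>"]

if __name__ == "__main__":
    make_server("localhost", 8000, app).serve_forever()
```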
Last week Cignet Health was fined $4.3 million by the HHS Office for Civil Rights (OCR) for violating privacy provisions of HIPAA. The fine was based on the organization's failure to comply with requests from 41 patients to access their records and its subsequent failure to cooperate with the OCR's investigation. In addition, Massachusetts General Hospital was fined $1 million for potential HIPAA violations.
These are the first two fines HHS has issued, and they were large because HHS classified these incidents as "willful neglect."
I would say the answer is yes: it's time to take HIPAA seriously.