OAuth 2.0 is sweeping through the industry, becoming the standard method for authorizing access across web applications and sites. Alternatives such as SAML and WS-Security are losing out because they are too difficult for web developers to learn and use.
Unfortunately, there is a growing opinion that security was compromised in the effort to make OAuth 2.0 simple for developers to use.
The main concern is that rather than using digital signatures to ensure that the “tokens” transmitted between sites have not been tampered with, the sites simply connect to each other via SSL. That protection is only as strong as each client’s certificate validation, so a careless client is susceptible to man-in-the-middle attacks, and a bearer token captured in transit can be replayed by anyone who holds it.
Eran Hammer-Lahav, Yahoo’s director of standards development and one of the creators of OAuth, said:
“It is clear that once discovery is used, clients will be manipulated to send their tokens to the wrong place, just like people are phished. Any solution based solely on a policy enforced by the client is doomed.”
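To make the signature-versus-bearer distinction concrete, here is a minimal sketch, in Python, of the style of HMAC-SHA1 request signing that OAuth 1.0a mandates and that OAuth 2.0's bearer tokens drop. The endpoint, keys, and parameters are invented for illustration:

```python
import base64
import hashlib
import hmac
from urllib.parse import quote

def sign_request(method, url, params, consumer_secret, token_secret=""):
    """Compute an OAuth 1.0a style HMAC-SHA1 signature over a request.

    Tampering with the method, URL, or any parameter in transit breaks
    the signature; a bare bearer token offers no such integrity check.
    """
    # Percent-encode and sort the parameters into a canonical string.
    pairs = sorted((quote(k, safe=""), quote(str(v), safe="")) for k, v in params.items())
    param_str = "&".join(f"{k}={v}" for k, v in pairs)
    base_string = "&".join([method.upper(), quote(url, safe=""), quote(param_str, safe="")])
    # Only holders of the secrets can produce a valid signature.
    key = quote(consumer_secret, safe="") + "&" + quote(token_secret, safe="")
    digest = hmac.new(key.encode(), base_string.encode(), hashlib.sha1).digest()
    return base64.b64encode(digest).decode()

# Hypothetical values, for illustration only.
print(sign_request(
    "POST",
    "https://api.example.com/1/statuses/update",
    {"oauth_consumer_key": "abc123", "oauth_nonce": "xyz", "status": "hello"},
    consumer_secret="s3cret",
))
```

Note that whoever computes the signature must hold the consumer secret in plaintext; as the Twitter discussion below shows, that becomes a real problem once the "client" is an application shipped to end users.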
I meant to post this last week. Ryan Paul at Ars Technica wrote an important article detailing the flaws in Twitter's implementation of OAuth. This is serious because OAuth is now the only method for "users to grant a third-party application access to their account without having to provide that application with their credentials." He also details the flaws of OAuth 1.0a itself, but holds out hope for OAuth 2.0, which the IETF is currently working on. Let's hope they get it right this time.
Twitter officially disabled Basic authentication this week, the final step in the company’s transition to mandatory OAuth authentication. Sadly, Twitter’s extremely poor implementation of the OAuth standard offers a textbook example of how to do it wrong. This article will explore some of the problems with Twitter’s OAuth implementation and some potential pitfalls inherent to the standard. I will also show you how I managed to compromise the secret OAuth key in Twitter’s very own official client application for Android.
The article goes on to trash OAuth 1.0a as well:
…OAuth 1.0a is a horrible solution to a very difficult problem. It works acceptably well for server-to-server authentication, but there are far too many unresolved issues in the current specification for it to be used as-is on a widespread basis for desktop applications. It’s simply not mature enough yet.
There is hope though:
I think that OAuth 2.0—the next version of the standard—will address many of the problems and will make it safer and more suitable for adoption. The current IETF version of the 2.0 draft still requires a lot of work, however. It still doesn’t really provide guidance on how to handle consumer secret keys for desktop applications, for example. In light of the heavy involvement in the draft process by Facebook’s David Recordon, I’m really hopeful that the official standard will adopt Facebook’s sane and reasonable approach to that problem.
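The desktop consumer-secret problem the quote alludes to is easy to demonstrate: any secret compiled into a distributed client binary can be pulled back out. Here is a rough Python sketch in the spirit of the Unix strings utility; this is not a description of how Paul extracted Twitter's key, just the underlying principle:

```python
import re
import sys

def printable_strings(path, min_len=8):
    """A rough equivalent of the Unix strings tool: dump every run of
    printable ASCII characters found in a binary file."""
    data = open(path, "rb").read()
    return re.findall(rb"[ -~]{%d,}" % min_len, data)

# Usage: python strings_sketch.py some_client_binary
for s in printable_strings(sys.argv[1]):
    print(s.decode("ascii"))
```

Obfuscation can slow this down, but it cannot stop it, which is why a secret shipped to every user is not really a secret.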
Finally:
Although I think that OAuth is salvageable and may eventually live up to the hype, my opinion of Twitter is less positive. The service seriously botched its OAuth implementation and demonstrated, yet again, that it lacks the engineering competence that is needed to reliably operate its service. Twitter should review the OAuth standard and take a close look at how Google and Facebook are using OAuth for guidance about the proper approach.
Adobe Flash Player 10.1 will make "its privacy settings more prominent and explicit to the user" and "also supports private browsing, which lets a user browse without logging his browsing history on his machines," according to an article in Dark Reading. The side effect is that Flash's Local Storage, which e-commerce sites have been using to store machine IDs without users' consent or knowledge, will no longer be a viable machine authentication method.
This is actually good news, because e-commerce sites will be forced to use technology designed specifically for machine authentication rather than relying on this Adobe externality.
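As one illustration of what purpose-built machine authentication can look like, here is a minimal sketch of issuing and verifying an HMAC-signed device token carried in an ordinary cookie. The key handling and token format are hypothetical, not a description of any particular vendor's product:

```python
import hashlib
import hmac
import secrets

SERVER_KEY = secrets.token_bytes(32)  # hypothetical server-side signing key

def issue_device_token():
    """Create a random device ID and sign it so the server can detect tampering."""
    device_id = secrets.token_hex(16)
    tag = hmac.new(SERVER_KEY, device_id.encode(), hashlib.sha256).hexdigest()
    return f"{device_id}.{tag}"  # stored in a normal, user-visible cookie

def verify_device_token(token):
    """Return the device ID if the signature checks out, else None."""
    try:
        device_id, tag = token.split(".")
    except ValueError:
        return None
    expected = hmac.new(SERVER_KEY, device_id.encode(), hashlib.sha256).hexdigest()
    return device_id if hmac.compare_digest(tag, expected) else None

token = issue_device_token()
print(verify_device_token(token))        # the device ID
print(verify_device_token(token + "x"))  # None: tampered token is rejected
```

Unlike a hidden Flash object, a cookie like this is visible to the user and can be cleared, which is exactly the transparency Flash 10.1 is now enforcing.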
Anti-Virus – Signature-based anti-virus products simply cannot keep up with the speed and creativity of the attackers. What's needed are behavior- and anomaly-based approaches to complement traditional anti-virus products; the sketch after this list shows the basic idea.
Firewalls – The article talks about the disappearing perimeter, but that is less than half the story. The bigger issue is that traditional firewalls, built on the stateful inspection technology Check Point introduced more than 15 years ago, simply cannot control the hundreds of "Web 2.0" applications now in use. I've written about or referenced "Next Generation Firewalls" here, here, here, here, and here.
IAM and multi-factor authentication – Perhaps IAM and multi-factor authentication belong on the list, but the rationale in the article was vague. The biggest issue I see with access management is deciding on groups and managing access rights; I've seen companies with over 2,000 groups, clearly an administrative and operational nightmare. I see access management merging with network security as network security products become more application-, content-, and user-aware. Then you can start by watching what people actually do in practice rather than theorizing about how groups should be organized.
NAC – The article talks about the high deployment and ongoing administrative and operational costs outweighing the benefits. Another important issue is that NAC does not address the current high-risk threats. The theory in 2006, somewhat but not overly simplified, was that if we checked an endpoint device to make sure its anti-virus signatures and patches were up to date before letting it on the network, we would keep worms from spreading.
In practice today, (a) worms are not a major security risk, (b) while patching is important, up-to-date anti-virus signatures do not significantly reduce risk, and (c) an endpoint can just as easily be compromised once it is already on the network.
A combination of (yes, again) Next Generation Firewalls for large locations and data centers, and cloud-based Secure Web Gateways for remote offices and traveling laptop users, will provide much more effective risk reduction.
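On the anomaly-based complement to signatures mentioned in the anti-virus item above, the core idea is simple: learn a baseline of normal behavior and flag large deviations. A deliberately minimal sketch, with invented event counts and an arbitrary threshold:

```python
from statistics import mean, stdev

def find_anomalies(baseline, observed, threshold=3.0):
    """Flag hosts whose event rate deviates from the baseline by more
    than `threshold` standard deviations (a simple z-score test)."""
    mu, sigma = mean(baseline), stdev(baseline)
    return {host: count for host, count in observed.items()
            if sigma > 0 and abs(count - mu) / sigma > threshold}

# Hypothetical outbound-connection counts per hour for a normal host.
normal_hours = [12, 9, 15, 11, 13, 10, 14, 12]
current = {"host-a": 13, "host-b": 480}  # host-b looks like beaconing malware
print(find_anomalies(normal_hours, current))  # {'host-b': 480}
```

Real products are far more sophisticated, but the contrast with signature matching is the point: no prior knowledge of the specific malware is required.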
Computerworld reported last week that a judge in Illinois ruled that a couple who lost $26,500 when their bank account was breached can sue the bank for negligence for not implementing "state-of-the-art" security measures which would have prevented the breach.
While bank credit card issuers have regularly sued credit card processors and retailers to recoup losses due to breaches, this is the first case I am aware of in which a judge has ruled that a customer can sue the bank for negligence.
The more detailed blog post by attorney David Johnson, upon which the Computerworld article is based, discusses some really interesting details of this case.
The plaintiffs sued Citizens Financial Bank for negligence because it had not implemented multifactor authentication. The timeline is important here. The Federal Financial Institutions Examination Council (FFIEC) issued multifactor authentication guidelines in 2005. By 2007, when the plaintiffs' breach occurred, the bank had still not implemented multifactor authentication. The judge, Rebecca Pallmeyer of the U.S. District Court for the Northern District of Illinois, found this two-year delay unacceptable.
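For context on what the FFIEC guidance points toward, here is a minimal sketch of one common second factor, a time-based one-time password; the shared secret is hypothetical, and a real deployment would use a vetted implementation:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, interval=30, digits=6):
    """Derive a time-based one-time password from a shared secret.

    The server computes the same value independently; a stolen static
    password alone is no longer enough to log in.
    """
    key = base64.b32decode(secret_b32)
    counter = int(time.time()) // interval
    msg = struct.pack(">Q", counter)
    digest = hmac.new(key, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F  # dynamic truncation, per the HOTP scheme
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10**digits).zfill(digits)

# Hypothetical shared secret, provisioned on a customer's token or phone.
print(totp("JBSWY3DPEHPK3PXP"))
```

The point of the FFIEC guidance is exactly this: a phished or keylogged static password by itself should not be enough to move money.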
Two interesting complications: (1) the money was stolen from a home equity line of credit account, not a deposit or consumer asset account; (2) this credit account was linked to the plaintiffs' business checking account. I discussed the differences between consumer and business account liability here. Fortunately for the plaintiffs, the judge brushed these issues aside and focused on the lack of multifactor authentication.
One issue that was not addressed: where was Fiserv in all of this? They are the provider of the online banking software used by Citizens Financial Bank. Were they offering some type of multifactor authentication? I would assume yes, although I have not been able to confirm this.
In conclusion, attorney David Johnson makes clear that this ruling increases the risk to banks (and possibly other organizations responsible for protecting money or other assets of value) if they do not implement state-of-the-art security measures.
Roger Grimes at InfoWorld's Security Central wrote a very good article about password management. I agree with everything he said, except that Roger did not go far enough. For several of Roger's attack types (password guessing, keystroke logging, and hash cracking), one of the mitigation techniques is strong (high-entropy) passwords.
True enough. However, I am convinced that it is simply not possible to memorize really strong (high-entropy) passwords.
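To put numbers on "high entropy": a password chosen uniformly at random from an alphabet of N characters, L characters long, carries L x log2(N) bits. A quick illustrative sketch:

```python
from math import log2

def entropy_bits(alphabet_size, length):
    """Entropy of a truly random password: length * log2(alphabet size)."""
    return length * log2(alphabet_size)

print(round(entropy_bits(26, 8), 1))   # 8 lowercase letters: ~37.6 bits
print(round(entropy_bits(94, 8), 1))   # 8 printable ASCII chars: ~52.4 bits
print(round(entropy_bits(94, 16), 1))  # 16 printable ASCII chars: ~104.9 bits
```

These figures apply only to truly random strings, not human-chosen passwords, and a fully random 16-character string is exactly the kind of password almost no one can memorize. That is the point.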
I wrote about this earlier and included a link to a review of password managers.
Weak passwords and other password issues continue to be the bane of every security manager's existence. Becky Waring from Windows Secrets reports on a Gmail vulnerability where an attacker can repeatedly guess your password using Gmail's "Check for mail using POP3" capability, a service Gmail provides that enables you to use an email client rather than the Gmail browser interface. You can read the details of the vulnerability at Full Disclosure.
The unfortunate reality is that we have reached a point in the evolution of technology where, if an attacker can mount an unimpeded, repeated "guessing" attack on your password, as in this Gmail vulnerability, no password you can remember will survive. In other words, if you can remember the password, it's too weak, and it will be cracked.
NIST Special Publication 800-63 rev1, "Electronic Authentication Guideline," Appendix A (page 86) discusses the concept of password strength (entropy) in detail.
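To see why no memorable password survives unimpeded guessing, here is a back-of-the-envelope sketch; the guess rate is an assumption for illustration, not a measurement of Gmail:

```python
SECONDS_PER_YEAR = 365 * 24 * 3600

def years_to_exhaust(entropy_bits, guesses_per_second):
    """Worst-case time to try every one of 2**entropy_bits passwords."""
    return 2**entropy_bits / guesses_per_second / SECONDS_PER_YEAR

# NIST 800-63 Appendix A credits a typical user-chosen 8-character
# password with only about 18-30 bits; a random 16-character printable
# ASCII password has about 105 bits (see the earlier sketch).
for bits in (30, 105):
    print(f"{bits} bits: {years_to_exhaust(bits, 1000):.2e} years at 1,000 guesses/sec")
```

At that assumed rate the memorable password falls in a matter of days, while the random one is effectively out of reach; server-side rate limiting and high-entropy secrets are the only real defenses.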
The only way you can really protect yourself is by using an automated password manager. Lifehacker has a very good review of the top choices available. One of the side benefits of these products is that you do not have to physically type your passwords, reducing the risk associated with keyloggers, which I discussed in previous posts here and here.
Steve Gibson has a site called Perfect Passwords that automatically generates high entropy passwords.
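If you would rather generate such passwords yourself, here is a minimal sketch using Python's standard secrets module; the length and character set are my choices, not recommendations from Gibson's page:

```python
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 chars

def random_password(length=16):
    """Build a password one cryptographically secure random character at
    a time; 16 characters from 94 symbols is about 105 bits of entropy."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

print(random_password())
```

Precisely because no one can memorize strings like these, they belong in a password manager.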
At the very least, follow the advice in Becky Waring's column.