12. February 2012 · Comments Off on OAuth – the privacy time bomb · Categories: blog

Andy Baio writes in Wired about the privacy dangers of OAuth.

While OAuth lets providers replace passwords with tokens, improving the security of authentication and authorization for third-party applications, in many cases it gives those applications access to far more of your personal information than they need to perform their functions. That only magnifies the damage when one of those third-party providers suffers a breach of personal data.

Andy focuses on Gmail because the risk of using Google as an OAuth Provider is greater. As Andy says:

“For Twitter, the consequences are unlikely to be serious since almost all activity is public. For Facebook, a mass leak of private Facebook photos could certainly be embarrassing. But for Gmail, I’m very concerned that it opens a major security flaw that’s begging to be exploited.

“You may trust Google to keep your email safe, but do you trust a three-month-old Y Combinator-funded startup created by three college kids? Or a side project from an engineer working in his 20 percent time? How about a disgruntled or curious employee of one of these third-party services?”

If you are using your Gmail (Google) credentials merely to authenticate to a third-party application, why should that application have access to your email? In the case of Xobni or Unsubscribe, for example, you do need to grant access rights because they provide functions that require Gmail content. But why does Unsubscribe need access to message content when all it really needs is the list of senders? When you decided to use Unsubscribe, why couldn’t you limit it to just your senders? The bottom line is that by using OAuth you are trusting third-party applications both not to abuse the privileges you grant them and to have implemented effective security controls.
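To make the scope issue concrete, here is a minimal Python sketch of how a third-party application asks for access during an OAuth flow. The endpoint URL, scope strings, and client values are all illustrative, not any provider's real API; the point is that the `scope` parameter is where an application declares how much access it wants, and a login-only application has no need to request mailbox scopes.

```python
from urllib.parse import urlencode

# Illustrative authorization endpoint; real values vary by provider.
AUTH_ENDPOINT = "https://accounts.example.com/oauth2/auth"

def authorization_url(client_id: str, redirect_uri: str, scopes: list) -> str:
    """Build the consent-screen URL the user is sent to.

    The scopes list is where coarse- vs. fine-grained access is
    decided -- whatever the app asks for, the user must grant
    all-or-nothing.
    """
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),
    }
    return AUTH_ENDPOINT + "?" + urlencode(params)

# An app that only needs sign-in should request identity scopes...
login_only = authorization_url("my-app", "https://app.example/cb",
                               ["openid", "email"])

# ...while a mail-reading app today must ask for full mailbox access,
# even though a narrower (hypothetical) "senders-only" scope would do.
full_mail = authorization_url("my-app", "https://app.example/cb",
                              ["https://mail.example.com/"])
```

Until providers offer narrower scopes, all a user can do is read the consent screen and decline when the requested access seems out of proportion to the application's function.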

While Andy offers good advice to people who use their Google, Twitter, or Facebook credentials for other applications, there is no technical reason for third-party applications to get access to so much personal information. In other words, when you allow a third-party application to use one of your primary applications (OAuth Providers) for authentication and/or authorization, you should be able to control which functions and data the third party can access. For this to happen, the Googles, Facebooks, and Twitters must build in more fine-grained access controls.

At present, the OAuth Providers do not seem motivated to limit third-party applications' access to user content based on what those applications actually need. One reason might be that most users simply don’t realize how much access they are granting when they use an OAuth Provider. With no user pressure for finer-grained access, why would the OAuth Providers bother?

Aside from the lack of user pressure, it seems to me that the OAuth Providers are economically motivated to maintain the status quo for two reasons. First, they are competing with each other to become the cornerstone of their users’ online lives and want to keep the OAuth user interface as simple as possible. In other words, if authorization is too fine-grained, users will face too many choices and may decide not to use that OAuth Provider. Second, the OAuth Providers want to keep things as simple as possible for third-party developers in order to attract them.

I would hate to see the Federal Government get involved to force the OAuth Providers to offer more fine-grained access control. But I am afraid that a few highly publicized breaches will have that effect.

As Enterprises are moving to a Zero Trust Model, so must consumers.

11. February 2012 · Comments Off on You Can Never Really Get Rid of Botnets · Categories: blog

You Can Never Really Get Rid of Botnets.

Gunter Ollmann, the Vice President of Research at Damballa, provides insight into botnets in general and specifically into the Kelihos botnet takedown.

What is lost in these disclosures is an appreciation of the number of people and the breadth of talent needed to build and operate a profitable criminal botnet business. Piatti and the dotFREE Group were embroiled in the complaint because they inadvertently provisioned the DNS on which the botnet depended. Other external observers and analysts of the Kelihos botnet believe it to be a relative of the much bigger and more damaging Waledac botnet, going as far as naming one Peter Severa as the mastermind behind both botnets.

Botnets are a business. Like any successful business they have their own equivalents of financiers, architects, construction workers and even routes to market.

Past attempts to take down botnets have focused on shutting down the servers that command the infected zombie computers. Given the agile nature of modern botnet design, the vast majority of those attempts have failed. Microsoft’s pursuit of the human operators behind botnets such as Kelihos and Waledac is widely seen as the most viable technique for permanently shutting them down. But, even then, there are problems that still need to be addressed.

While taking down botnet servers is a worthy activity for companies like Microsoft, enterprises still must deal with finding and remediating compromised endpoints.
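As a toy illustration of what finding compromised endpoints can look like (this is not Damballa's actual method), the sketch below flags hosts whose DNS lookups are dominated by never-before-seen domains, a common symptom of the domain-generation algorithms modern bots use to locate their command-and-control servers. The domain names and threshold are invented.

```python
from collections import defaultdict

# Hypothetical set of domains previously seen on this network.
KNOWN_DOMAINS = {"example.com", "windowsupdate.com", "intranet.corp"}

def suspicious_hosts(dns_log, threshold=0.5):
    """Return hosts whose ratio of unknown-domain lookups exceeds
    the threshold. dns_log is an iterable of (host, domain) pairs."""
    counts = defaultdict(lambda: [0, 0])  # host -> [unknown, total]
    for host, domain in dns_log:
        c = counts[host]
        c[1] += 1
        if domain not in KNOWN_DOMAINS:
            c[0] += 1
    return {h for h, (unknown, total) in counts.items()
            if unknown / total > threshold}

# pc1 mostly queries algorithmically generated names; pc2 does not.
log = [("pc1", "xkqjh3f.net"), ("pc1", "x9qpa.info"),
       ("pc1", "example.com"),
       ("pc2", "example.com"), ("pc2", "windowsupdate.com")]
```

Real detection products combine many such signals; the design point here is simply that remediation starts with telemetry the enterprise already has, such as its own DNS logs.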

29. January 2012 · Comments Off on Cloud Provider security requirements · Categories: blog

Grok Computer Security: I’ll tell you what I want, what I really, really want from a Cloud Provider.

Michael Berman, the CTO of Catbird, summarizes his cloud provider requirements. For security, he is looking for:

  • Auditing: network and management
  • Control: policy and assurance
  • Metrics: continuous and interoperable

Are these capabilities to be provided by the cloud provider, or should the enterprise adopt a solution it can use across multiple cloud providers? What about compatibility with private cloud deployments?


29. January 2012 · Comments Off on Anticipating The Future of User Account Access Sharing · Categories: blog

Anticipating The Future of User Account Access Sharing.

Insightful post by Lenny Zeltser regarding teenagers and adults sharing accounts, i.e., sharing passwords.

Of course, those of us in security find this horrifying. Teenagers see this as a way of expressing affection. Adults in business do this to expedite accomplishing goals.

Can Security Awareness Training effectively communicate the risks of this behavior?

29. January 2012 · Comments Off on Encryption Key Management Primer – Requirement 3.6 « PCI Guru · Categories: blog

Encryption Key Management Primer – Requirement 3.6 « PCI Guru.

Insightful article on PCI DSS requirement 3.6 – encryption key management, which is very complex when done manually. If you doubt it, read this article.

The PCIGuru also points out that “… for users of PGP or hardware security module (HSM), you will have no problem complying with the sub-requirements of 3.6.”


29. January 2012 · Comments Off on Financial Cryptography: Why Threat Modelling fails in practice · Categories: blog

“…threat modelling will always fail in practice, because by definition, threat modelling stops before practice.”

via Financial Cryptography: Why Threat Modelling fails in practice.

Insightful post highlighting the difference between threat and risk.

Let us now turn that around and consider *threat modelling*. By its nature, threat modelling only deals with threats and not risks and it cannot therefore reach out to its users on a direct, harmful level. Threat modelling is by definition limited to theoretical, abstract concerns. It stops before it gets practical, real, personal.

Risks are where harm is done to users. Risk modelling therefore is the only standard of interest to users.


23. January 2012 · Comments Off on Wall St. Journal and NYTimes interest in Information Security · Categories: blog

The subject of Information Security and its risks to the enterprise is becoming more mainstream. Last week, the World Economic Forum called out Cyber Attacks as a top risk. Today both the Wall St. Journal and the New York Times have significant information security articles:

Bassam Alghanims Email-Hacking Allegations Against His Brother, Kutayba, Exposes Hackers-For-Hire Trade – WSJ.com.

Flaws in Videoconferencing Systems Make Boardrooms Vulnerable – NYTimes.com.


17. January 2012 · Comments Off on Adopt Zero Trust to help secure the extended enterprise · Categories: blog

John Kindervag, a principal analyst at Forrester, has developed an interesting approach to securing the extended enterprise. He calls it the Zero Trust Model which he describes in this article: Adopt Zero Trust to help secure the extended enterprise.

First, let me say I am not connected to Forrester in any way. I am connected to John Kindervag on LinkedIn through a relationship from a prior company.

Second, the Zero Trust Model rings true for me in that the incident data available for review shows that we must assume that prevention controls can never be perfect. We must assume that (1) devices will be compromised including user authentication credentials and (2) some users interacting with systems will behave badly either accidentally or on purpose.

John uses the term Extended Enterprise to refer to an organization’s functional network which extends to (1) remote and mobile employees and contractors connecting via smartphones and tablets as well as laptops, and (2) business partners.

The Zero Trust Model of information security simplifies how information security is conceptualized by assuming there are no longer “trusted” interfaces, applications, traffic, networks, or users. It takes the old model — “trust but verify” — and inverts it, since recent breaches have proven that when an organization trusts, it doesn’t verify.

Here are the three basic ideas behind the Zero Trust Model:

  1. Ensure all resources are accessed securely – regardless of location
  2. Adopt the principle of least privilege, and strictly enforce access control
  3. Inspect and log all traffic
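A deny-by-default access check makes ideas 2 and 3 concrete. This is a schematic sketch, not a Forrester artifact: the resource names, principals, and log format are invented, and in a real zero-trust network this decision point would sit in front of every resource, internal or external.

```python
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("zero-trust")

# Least privilege (idea 2): each resource lists exactly the
# principals allowed to reach it; anything absent is denied.
ACCESS_POLICY = {
    "payroll-db": {"hr-app"},
    "wiki": {"hr-app", "eng-app"},
}

def authorize(principal: str, resource: str) -> bool:
    """Deny by default, and log every decision so all access
    attempts are inspectable (idea 3)."""
    allowed = principal in ACCESS_POLICY.get(resource, set())
    log.info("principal=%s resource=%s allowed=%s",
             principal, resource, allowed)
    return allowed
```

Note that an unlisted resource is denied to everyone; nothing is trusted merely for being on the inside of the network.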

Here are Kindervag’s (Forrester) top recommendations:

  • Conduct a data discovery and classification project
  • Embrace encryption
  • Deploy NAV (Network Analysis & Visibility) tools to watch dataflows and user behavior
  • Begin designing a zero-trust network

The article provides some detail on each of these key ideas and recommendations.

14. January 2012 · Comments Off on Cyber attacks a top risk says World Economic Forum · Categories: blog

Via Clerkendweller’s blog post, I learned that in the 2012 edition of the Global Risks report from the World Economic Forum, Cyber attacks came in at #4 among the top 50 global risks as a function of likelihood.

The report divides risks into five categories — Economic, Environmental, Geopolitical, Societal, and Technological. What I also found interesting is that within the Technological category, the Cyber attacks risk scores highest as a function of likelihood and impact. See the chart below:

The report further defines “connectivity” as one of the “Three distinct constellations of risks that present a very serious threat to our future prosperity and security…” The report then goes on to identify the three types of objectives of cyber attacks using physical world “military strategy” and “intelligence analysis” analogies: sabotage, espionage, and subversion. Here are the examples they provide:

Sabotage:

  • Users may not realize when data has been maliciously, surreptitiously modified and make decisions based on the altered data. In the case of advanced military control systems, effects could be catastrophic.
  • National critical infrastructures are increasingly connected to the Internet, often using bandwidth leased from private companies, outside of government protection and oversight.

Espionage:

  • Sufficiently skilled hackers can steal vast quantities of information remotely, including highly sensitive corporate, political and military communications.

Subversion:

  • The Internet can spread false information as easily as true. This can be achieved by hacking websites or by simply designing misinformation that spreads virally.
  • Denial-of-service attacks can prevent people from accessing data, most commonly by using “botnets” to drown the target in requests for data, which leaves no spare capacity to respond to legitimate users.

These do not map neatly onto our traditional method of categorizing threats as risks to the confidentiality, integrity, and availability of information. But the framing may be useful, because what’s really important is the focus on adversaries and the actions they take to threaten the confidentiality, integrity, and availability of our cyber assets.

Of course we need to focus on assets in the sense that we have to “harden” them to reduce the likelihood of a successful attack. But we cannot stop there, for the following reason.

The Connectivity case provides two axioms for the Cyber Age:

  • Any device with software-defined behaviour can be tricked into doing things its creators did not intend.
  • Any device connected to a network of any sort, in any way, can be compromised by an external party. Many such compromises have not been detected.

If these axioms are true, then we must go beyond hardening assets. We must also invest in technical controls that can detect overtly malicious and anomalous behavior of assets.

Overall, a document well worth reading.

30. December 2011 · Comments Off on XSS and Verizon DBIR; PCI DSS and anti-malware · Categories: blog

Alex’s post, Web Application Security – from the start: XSS and Verizon DBIR, suggests that since the Verizon 2010 DBIR (released in April 2011) shows that only 1% of breaches resulted from XSS, OWASP is putting too high a priority on XSS.

Here are my thoughts based on my review of the Verizon 2010 DBIR:

  1. Table 2 shows that of the 761 analyzed breaches, only 163 were from companies with 1,001 or more employees. Over 70% (522 of 761) had fewer than 101 employees or an unknown number of employees. It’s been my experience that there is a huge disparity in deployed security controls between small and large companies, which, it seems to me, might alter the conclusions you could draw from the report.
  2. Figure 33 shows that the number of records stolen in the report is only 3.9 million. In the previous five years, the numbers ranged from 104M to 361M. I find this odd. It may reflect the high number of small companies in the report. Also, the number of records lost may not be the best indicator of breach severity. If Coca Cola lost only one record, but it was the Coke formula, the breach would be severe indeed.
  3. This report is heavily tied to Verizon’s PCI DSS practice. Table 15 shows that 96% of stolen records were payment card numbers/data. Yet we have seen very serious breaches where email addresses were the main data lost; see Epsilon, where some estimate that 250 million email addresses were breached.
  4. Another indicator of the heavy PCI DSS orientation is that they perform a PCI DSS analysis for each company examined. Table 16 shows the low percentage of these 761 companies that met basic PCI DSS security requirements. These percentages are not surprising given the large number of small companies in the report.

Of course, the conclusion they draw is the significant value of PCI DSS compliance in reducing breaches.

However, there is something else in the report that is worth noting that might refute the value of limiting your security goals to complying with PCI DSS. Figure 15 shows that 49% of the breaches involved Malware, representing 79% of the records breached. Of the malware analyzed, 63% (Figure 21) was custom! Could one conclude then that traditional anti-virus controls are not sufficient?

So what does the PCI DSS standard have to say about this? Requirement 5 is all about anti-virus. In fact, the recommended testing procedures are simply to “verify that anti-virus software is deployed” and “verify that automatic updates and periodic scans are enabled.” So, based on PCI DSS, one might conclude that as long as you have anti-virus deployed, you are safe from malware. However, since most of the malware that results in breaches is custom, and traditional anti-virus is not sufficient against custom malware, one could conclude that PCI DSS compliance is not a sufficient goal for mitigating malware risk.
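The weakness of signature-only controls against custom malware can be shown with a toy hash-blacklist “scanner” (the sample bytes are obviously invented): even a one-byte variant of a known sample evades it.

```python
import hashlib

# Toy signature database: hashes of known-bad samples.
KNOWN_BAD = {hashlib.sha256(b"malware-v1").hexdigest()}

def signature_scan(sample: bytes) -> bool:
    """Return True if the sample's hash matches a known signature."""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD

caught = signature_scan(b"malware-v1")    # known sample: detected
missed = signature_scan(b"malware-v1!")   # one-byte custom variant: not
```

Real anti-virus products use fuzzier signatures and heuristics than a raw hash, but custom malware is built and tested against them before release, so the underlying problem remains.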

I am not saying that PCI DSS does not have any value in risk reduction. But I am saying that in the all-important anti-malware area, PCI DSS is insufficient. Cymbel’s 12 Best Practices for mitigating the risks of modern malware is much more comprehensive and is aimed at larger organizations with more to protect than just credit card data.