24. February 2012 · Botnet communicates via P2P instead of C&C

Symantec reported on a version of Zeus/Spyeye that communicates via P2P among its bot peers rather than talking directly to its control servers over “traditional” C&C channels. (I put traditional in quotes because I don’t want to give the impression that detecting C&C traffic is easy.)

…it seems that the C&C server has disappeared entirely for this functionality. Where they were previously sending and receiving control messages to and from the C&C, these control messages are now handled by the P2P network.

This means that every peer in the botnet can act as a C&C server, while none of them really are one. Bots are now capable of downloading commands, configuration files, and executables from other bots—every compromised computer is capable of providing data to the other bots. We don’t yet know how the stolen data is communicated back to the attackers, but it’s possible that such data is routed through the peers until it reaches a drop zone controlled by the attackers.

Now if you are successfully blocking all P2P traffic on your network, this particular development may not worry you. However, when P2P is blocked, this version of Zeus/Spyeye reverts to traditional C&C methods. So you still need a technical network security control that can reliably detect compromised endpoints by monitoring egress traffic at your proxies and firewalls, along with DNS traffic, because you surely cannot rely on your host-based security controls. (If you doubt my claim, please contact me and I will prove it to you.)
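To make the DNS monitoring point concrete, here is a minimal sketch, not a product recommendation, of what flagging compromised endpoints from DNS logs can look like. The log format ("timestamp client_ip queried_domain") and the blocklist file are assumptions invented for the example; a real control would use threat intelligence feeds and far more sophisticated analytics.

```python
# Minimal sketch: flag internal hosts that resolve domains on a known C&C blocklist.
# Assumes a hypothetical log format: "<timestamp> <client_ip> <queried_domain>".
from collections import defaultdict

def load_blocklist(path):
    """Load known-bad C&C domains, one per line."""
    with open(path) as f:
        return {line.strip().lower() for line in f if line.strip()}

def flag_suspect_hosts(dns_log_path, blocklist):
    """Return {client_ip: [domains]} for hosts that queried blocklisted domains."""
    suspects = defaultdict(list)
    with open(dns_log_path) as log:
        for line in log:
            parts = line.split()
            if len(parts) < 3:
                continue
            client_ip = parts[1]
            domain = parts[2].lower().rstrip(".")
            if domain in blocklist:
                suspects[client_ip].append(domain)
    return suspects

if __name__ == "__main__":
    hits = flag_suspect_hosts("dns_queries.log", load_blocklist("cc_domains.txt"))
    for ip, domains in hits.items():
        print(f"possible compromised endpoint {ip}: {sorted(set(domains))}")
```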

But what if you have a business requirement for access to one or more P2P networks? Do you have a way to implement a positive control policy that allows only the specific P2P networks you need and blocks all the others? A Next Generation Firewall ought to enable you to meet this business requirement. I say “ought to” because not all of them do. I have written about NGFWs here, here, here, and here.
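To illustrate what such a positive control policy means, here is a toy sketch in Python rather than any vendor’s policy language. The application names and zone labels are invented, and the application identification itself is assumed to happen elsewhere; the point is simply that one sanctioned P2P application is named in the policy and everything else, including other P2P applications, falls through to deny.

```python
# Toy positive-control (default-deny) policy: only explicitly allowed
# (application, source zone) pairs pass; everything else is denied.
ALLOWED_APPS = {
    ("bittorrent", "engineering-subnet"),  # hypothetical sanctioned P2P use case
    ("smtp", "mail-servers"),
    ("https", "any"),
}

def policy_decision(app, source_zone):
    """Default deny: traffic passes only if the pair is explicitly allowed."""
    if (app, source_zone) in ALLOWED_APPS or (app, "any") in ALLOWED_APPS:
        return "allow"
    return "deny"  # everything not named in the policy, including other P2P apps

# A sanctioned and an unsanctioned P2P application
print(policy_decision("bittorrent", "engineering-subnet"))  # allow
print(policy_decision("gnutella", "engineering-subnet"))    # deny
```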

23. February 2012 · Black Cat, White Cat | InfoSec aXioms

Ofer Shezaf highlights one of the fundamental ways of categorizing security tools in his post Black Cat, White Cat | InfoSec aXioms.

Black listing, sometimes called negative security or “open by default”, focuses on catching the bad guys by detecting attacks. Security controls such as Intrusion Prevention Systems and Anti-Virus software use various methods to do so. The most common method is to detect attacks by matching signatures against network traffic or files. Other methods include rules that detect conditions which cannot be expressed in a pattern, and abnormal behavior detection.

White listing, on the other hand, allows only known good activity. Other terms associated with the concept are positive security, “closed by default”, and policy enforcement. White listing is commonly embedded in systems; the obvious example is the authentication and authorization mechanism found in virtually every information system. Dedicated security controls that use white listing either ensure the built-in policy enforcement is used correctly or provide a second enforcement layer. The former include configuration and vulnerability assessment tools, while the latter include firewalls.

Unfortunately, when manufacturers apply the term “Next Generation” to firewalls, they may be misleading the marketplace. As Ofer says, a firewall, by definition, performs white listing, i.e. policy enforcement. One of the key functions of a NGFW is the ability to white list applications: the applications that are allowed must be defined in the firewall policy. If, on the other hand, you are defining applications to be blocked, that is black listing, and the device is not acting as a firewall.

Note that Next Generation Firewalls also perform Intrusion Prevention, which is a black listing function. So clearly, NGFWs perform both white listing and black listing functions. But to truly earn the right to be called a “Next Generation” Firewall, a network security appliance must enable application white listing. Adding “application awareness” as a black listing function is nice, but it does not make the device a NGFW. For more information, I have written about Next Generation Firewalls and the difference between UTMs and NGFWs.
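To make the distinction concrete, here is a small sketch, using invented signature and application names, of the two enforcement models side by side: a black-list (default-allow) check that blocks only what it recognizes as bad, and a white-list (default-deny) check that passes only what is explicitly defined as good.

```python
# Contrast of the two enforcement models, with invented example data.
BAD_SIGNATURES = {"zeus-c2-beacon", "sql-injection-attempt"}   # black list
ALLOWED_APPLICATIONS = {"smtp", "https", "dns"}                # white list

def blacklist_verdict(observed_signature):
    """Default allow: traffic passes unless it matches a known-bad signature."""
    return "block" if observed_signature in BAD_SIGNATURES else "allow"

def whitelist_verdict(identified_application):
    """Default deny: traffic passes only if the application is explicitly allowed."""
    return "allow" if identified_application in ALLOWED_APPLICATIONS else "block"

# An unknown, never-before-seen application slips past the black list
print(blacklist_verdict("unknown-custom-protocol"))   # allow (default allow)
print(whitelist_verdict("unknown-custom-protocol"))   # block (default deny)
```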


19. February 2012 · Stiennon’s confusion between UTM and Next Generation Firewall

Richard Stiennon has published a blog post about Netasq, a European UTM vendor, called A brief history of firewalls and the rise of the UTM. I found the post indirectly through Alan Shimmel’s post about it.

Stiennon seems to think that Next Generation Firewalls are just a type of UTM, and Shimmel seems to go along with that view. Stiennon gives credit to IDC for defining the term UTM, but does not acknowledge Gartner’s work in defining the Next Generation Firewall.

My purpose here is not to get into a debate about terms like UTM and NGFW. The real question is which network security device provides the best network security “prevention” control. The reality is that marketing people have so abused the terms UTM and NGFW that you cannot depend on either term to mean anything. My remarks here are based on Gartner’s definition of Next Generation Firewall, which it published in October 2009.

All the UTMs I am aware of, whether software-based or with hardware assist, use port-based (stateful inspection) firewall technology. They may do a lot of other things like IPS, URL filtering and some DLP, but these UTMs have not really advanced the state (pardon the pun) of “firewall” technology. These UTMs do not enable a positive control model (default-deny) from the network layer up through the application layer. They depend on the negative control model of their IPS and application modules/blades.

Next Generation Firewalls, on the other hand, as defined by Gartner’s 2009 research report, enable positive network traffic control policies from the network layer up through the application layer. True NGFWs are therefore something genuinely new, developed in response to changes in the way applications are now written. In the early days of TCP/IP, port-based firewalls worked well because each application ran on its assigned port; SMTP, for example, ran on port 25. In the 90s, you could be sure that traffic on port 25 was SMTP and that SMTP ran only on port 25.

About ten years ago applications began using port-hopping, encryption, tunneling, and a variety of other techniques to circumvent port-based firewalls. In fact, we have now reached the point where port-based firewalls are pretty much useless at controlling traffic between networks of different trust levels. UTM vendors responded by adding application identification functionality using their intrusion detection/prevention engines. This is surely better than nothing, but IPS engines use a negative enforcement model, i.e. default allow, and only monitor a limited number of ports. A true NGFW monitors all 65,535 ports for all applications at all times.
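A toy sketch of why the port-based assumption breaks down, using a couple of simplistic payload markers as stand-ins for a real application-identification engine: classifying by destination port mislabels an application that hops onto port 80, while classifying by content does not.

```python
# Toy illustration: port-based vs. content-based application classification.
# The payload markers below are simplistic stand-ins for real app-ID signatures.
PORT_MAP = {25: "smtp", 80: "http", 443: "https"}

APP_MARKERS = {
    b"EHLO": "smtp",
    b"GET /": "http",
    b"\x13BitTorrent protocol": "bittorrent",
}

def classify_by_port(dst_port):
    """1990s model: the port number is assumed to identify the application."""
    return PORT_MAP.get(dst_port, "unknown")

def classify_by_content(payload):
    """App-ID model: inspect the traffic itself, regardless of port."""
    for marker, app in APP_MARKERS.items():
        if payload.startswith(marker):
            return app
    return "unknown"

# A BitTorrent client port-hopping onto 80 to slip past a port-based firewall
payload = b"\x13BitTorrent protocol..."
print(classify_by_port(80))          # "http" -- wrong
print(classify_by_content(payload))  # "bittorrent" -- identified by content
```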

In closing, there is no doubt about the value of a network security “prevention” control performing multiple functions. The real question is, does the device you are evaluating fulfill its primary function of reducing the organization’s attack surface by (1) enabling positive control policies from the network layer through the application layer, and (2) doing it across all 65,535 ports all the time?


12. February 2012 · OAuth – the privacy time bomb

Andy Baio writes in Wired about the privacy dangers of OAuth.

While OAuth enables OAuth Providers to replace passwords with tokens to improve the security of authentication and authorization to third party applications, in many cases it gives those applications access to much more of your personal information than is needed for them to perform their functions. This only increases the risk associated with breaches of personal data at these third party application providers.

Andy focuses on Gmail because the risk of using it as an OAuth Provider is greater. As Andy says:

For Twitter, the consequences are unlikely to be serious since almost all activity is public. For Facebook, a mass leak of private Facebook photos could certainly be embarrassing. But for Gmail, I’m very concerned that it opens a major security flaw that’s begging to be exploited.

“You may trust Google to keep your email safe, but do you trust a three-month-old Y Combinator-funded startup created by three college kids? Or a side project from an engineer working in his 20 percent time? How about a disgruntled or curious employee of one of these third-party services?”

If you are using your Gmail (Google) credentials just to authenticate to a third party application, why should that application have access to your email? In the case of Xobni or Unsubscribe, for example, you do need to grant access rights because they provide specific functions that require access to Gmail content. But why does Unsubscribe need access to message content when all it really needs is access to email senders? When you decide to use Unsubscribe, why can’t you limit it to only your senders? The bottom line is that by using OAuth you are trusting the third party applications not to abuse the privileges you are giving them, and trusting that they have implemented effective security controls.

While Andy provides some good advice to people who use their Google, Twitter, or Facebook credentials for other applications, there is no technical reason for the third party applications to get access to so much personal information. In other words, when you allow a third party application to use one of your primary applications (OAuth Providers) for authentication and/or authorization, you should be able to control the functions and data to which the third party has access. In order for this to happen, the Googles, Facebooks, and Twitters must build in more fine-grained access controls.
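To illustrate what finer-grained access could look like from the third party application’s side, here is a generic OAuth 2.0 authorization-request sketch. The endpoint, client ID, redirect URI, and scope strings are placeholders, not any provider’s actual values; the point is that the scope parameter is where an application asks for less rather than more.

```python
# Sketch of a generic OAuth 2.0 authorization request; all values are placeholders.
from urllib.parse import urlencode

AUTHORIZE_ENDPOINT = "https://provider.example.com/oauth2/authorize"

def build_authorization_url(client_id, redirect_uri, scopes):
    """Build the URL the user is sent to; `scopes` is the access the app requests."""
    params = {
        "response_type": "code",
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": " ".join(scopes),          # what the third party asks to touch
        "state": "opaque-anti-csrf-value",  # protects the redirect from forgery
    }
    return f"{AUTHORIZE_ENDPOINT}?{urlencode(params)}"

# Over-broad request: everything the provider exposes about the mailbox
print(build_authorization_url("example-app", "https://app.example.com/cb",
                              ["mail.full_access"]))

# What an "Unsubscribe"-style tool actually needs: sender metadata only
print(build_authorization_url("example-app", "https://app.example.com/cb",
                              ["mail.sender_metadata.readonly"]))
```

Whether a narrow scope like the second one even exists is entirely up to the OAuth Provider, which is exactly the fine-grained access control argued for above.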

At present, the OAuth providers do not seem to be motivated to limit access to user content by third party applications based on the needs of those applications. One reason might be that most users simply don’t realize how much access they are giving to third party applications when they use an OAuth Provider. With no user pressure requesting finer grained access, why would the OAuth Providers bother?

Aside from the lack of user pressure, it seems to me that the OAuth Providers are economically motivated to maintain the status quo for two reasons. First, they are competing with each other to become the cornerstone of their users’ online lives and want to keep the OAuth user interface as simple as possible; if authorization is too fine-grained, users will face too many choices and may decide not to use that OAuth Provider. Second, the OAuth Providers want to keep things as simple as possible for third party developers in order to attract them.

I would hate to see the Federal Government get involved to force the OAuth Providers to offer more fine-grained access control. But I am afraid that a few highly publicized breaches will have that effect.

As Enterprises are moving to a Zero Trust Model, so must consumers.


11. February 2012 · You Can Never Really Get Rid of Botnets

You Can Never Really Get Rid of Botnets.

Gunter Ollmann, the Vice President of Research at Damballa, provides insight into botnets in general and specifically into the Kelihos botnet takedown.

What is lost in these disclosures is an appreciation of the number of people and the breadth of talent needed to build and operate a profitable criminal botnet business. Piatti and the dotFREE Group were embroiled in the complaint because they inadvertently provisioned the DNS upon which the botnet depended. Other external observers and analysts of the Kelihos botnet believe it to be a relative of the much bigger and more damaging Waledac botnet, going as far as naming a Peter Severa as the mastermind behind both botnets.

Botnets are a business. Like any successful business they have their own equivalents of financiers, architects, construction workers and even routes to market.

Past attempts to takedown botnets have focused on shutting down the servers that command the infected zombie computers. Given the agile nature of modern botnet design, the vast majority of attempts have failed. Microsoft’s pursuit of the human operators behind botnets such as Kelihos and Waledac are widely seen as the most viable technique for permanently shutting them down. But, even then, there are problems that still need to be addressed.

While taking down botnet servers is a worthy activity for companies like Microsoft, enterprises still must deal with finding and remediating compromised endpoints.
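As one concrete, if simplified, way an enterprise might hunt for those compromised endpoints in its own egress data, here is a sketch that looks for beacon-like behavior in proxy logs: an internal host contacting the same external domain at suspiciously regular intervals. The log format ("epoch_seconds client_ip domain") and the thresholds are assumptions for the example, not a tested detection rule.

```python
# Sketch: flag hosts whose outbound requests to a domain recur at near-constant
# intervals (beaconing), a common sign of a bot checking in with its controller.
# Assumes a hypothetical proxy log format: "<epoch_seconds> <client_ip> <domain>".
from collections import defaultdict
from statistics import mean, pstdev

def find_beacons(proxy_log_path, min_events=10, max_jitter_seconds=5.0):
    timestamps = defaultdict(list)
    with open(proxy_log_path) as log:
        for line in log:
            parts = line.split()
            if len(parts) < 3:
                continue
            ts, client_ip, domain = float(parts[0]), parts[1], parts[2]
            timestamps[(client_ip, domain)].append(ts)

    beacons = []
    for (client_ip, domain), times in timestamps.items():
        if len(times) < min_events:
            continue
        times.sort()
        gaps = [b - a for a, b in zip(times, times[1:])]
        # Near-constant gaps (low jitter) suggest automated check-ins, not a human.
        if pstdev(gaps) <= max_jitter_seconds:
            beacons.append((client_ip, domain, round(mean(gaps), 1)))
    return beacons

if __name__ == "__main__":
    for client_ip, domain, interval in find_beacons("proxy.log"):
        print(f"{client_ip} -> {domain} about every {interval}s")
```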