17. January 2017 · Comments Off on Evolution of Network Intrusion Detection · Categories: Network Security

Traditional Network Intrusion Detection Systems (NIDS), which became popular in the late 1990s, still have limited security efficacy. I will briefly discuss the issues limiting NIDS effectiveness, attempts at improvements that provided only minimal incremental advances, the underlying design flaw, and a new approach that shows a lot of promise.

The initial problem with signature-based NIDS was “tuning.” When they are tuned too loosely, they generate too many false positives. When tuned too tightly, false negatives become a problem. Furthermore, most organizations don’t have the resources for continual tuning.

In the early 2000s, Security Information and Event Management (SIEM) systems were developed, in part, to address this issue. The idea was to leave the NIDS loosely tuned, and let the SIEM’s analytics figure out what’s real and what’s not by correlating with other log sources and contextual information. Unfortunately, SIEMs still generate too many false positives, and can only alert on predefined patterns of attack.

Over the next 15+ years, there have been several innovations in NIDS and network security. These include (1) using operating system and application data to reduce false positives and reduce tuning efforts, (2) complementing NIDS with sandboxed file detonation, (3) adding machine learning based static file analysis, and (4) reducing the network attack surface with next generation firewalls.

There has also been a school of thought claiming anomaly detection is the answer to complement or even replace signature-based NIDS. Different statistical approaches have been tried over the years, the latest being various types of machine learning.

However, we are still seeing far too many successful cyber attacks that take weeks and even months to detect. The question is why?

The underlying design flaw for the last 20 years in virtually all NIDS is that they are restricted to examining an individual packet at line speed, deciding if it’s good or bad, and then going on to the next packet. There are some NIDS that are session oriented rather than stream oriented, but the decision-making time frame is still line speed. If malware or a protocol violation is detected, the NIDS generates an alert. Typically the alert is sent to a log repository or SIEM for further analysis.

This approach means that network security experts have been limiting themselves to detection algorithms that can run in an appliance (physical or virtual) at line speed. This is the key flaw that must be addressed in order to enable these experts to build an advanced NIDS with dramatically improved efficacy.

The cost effectiveness of cloud computing and storage makes a new kind of NIDS possible. Now all you need on the network, at the perimeter, on internal subnets, and/or in your cloud environment, are lightweight software sensors to capture, filter, and compress full packet streams. The captured packets are stored in the cloud. This means that traditional line speed analysis is only step one.
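To make the sensor idea concrete, here is a minimal sketch (illustrative only, not any vendor's actual code) of the filter-and-compress step. The filter predicate and the length-prefixed record format are my own assumptions:

```python
import gzip

def filter_and_compress(packets, keep):
    """Keep only packets matching the filter predicate, then compress
    the batch for cheap transport to cloud storage. Each packet is
    stored with a 4-byte big-endian length prefix."""
    kept = [p for p in packets if keep(p)]
    return gzip.compress(b"".join(len(p).to_bytes(4, "big") + p for p in kept))

def decompress_batch(blob):
    """Recover the individual packet payloads from a compressed batch."""
    data, packets, i = gzip.decompress(blob), [], 0
    while i < len(data):
        n = int.from_bytes(data[i:i + 4], "big")
        packets.append(data[i + 4:i + 4 + n])
        i += 4 + n
    return packets
```

In practice the filter would drop known-benign bulk traffic (backups, streaming media, and the like) so that storage costs stay manageable.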

The full packets are available for retrospective analysis based on new incoming threat intelligence.  And because full packets are available, threat intelligence not only includes IP addresses, URLs, and file hashes, but new signatures built to detect attacks on newly discovered vulnerabilities. Ideally, this is done automatically by the vendor with no effort required by the customer. Also, adding new signatures from your own sources is supported.
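A retrospective scan can be sketched in a few lines. The signature name and pattern below are hypothetical, and a real system would match against parsed sessions rather than raw bytes:

```python
def retro_scan(stored_packets, new_signatures):
    """Re-scan previously captured packets whenever new threat
    intelligence arrives; returns (signature name, packet) pairs."""
    hits = []
    for pkt in stored_packets:
        for name, pattern in new_signatures.items():
            if pattern in pkt:
                hits.append((name, pkt))
    return hits
```

The point is architectural: because the raw packets are retained, yesterday's traffic can be judged against today's intelligence.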

Furthermore, additional methods of analysis are performed including anomaly detection, machine learning static file analysis, and sandboxed file detonation. And the solution uses multiple correlation methods to analyze the results of these different processes. In other words, initially detected weak signals are correlated to generate strong signals with a low false positive rate and without increasing false negatives.
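As a rough illustration of weak-signal correlation (my own simplification, not any product's actual algorithm), alert only when multiple independent analysis methods flag the same host:

```python
def correlate(events, threshold=2):
    """Each event is (host, detection_method). A host becomes a strong
    alert only when several independent methods agree, which suppresses
    the false positives any single method produces on its own."""
    by_host = {}
    for host, method in events:
        by_host.setdefault(host, set()).add(method)
    return {host for host, methods in by_host.items() if len(methods) >= threshold}
```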

Alerts are categorized using the Lockheed Martin Kill Chain™ to enable SOC analysts to prioritize their efforts. The NIDS user interface provides layered levels of detail, down to the packets if necessary, showing why the alert was generated, thus shortening the time needed to triage each alert.

Finally, new methods of analysis can be added in the cloud without worrying about on-premise appliances having to be forklift upgraded due to added processing and/or memory requirements.

This is the type of solution I recommend you evaluate to reduce the risk of successful cyber attacks. Dare I call it a “Next Generation NIDS?” “Advanced NIDS?” What would you call this solution?

This article was originally posted on LinkedIn. https://www.linkedin.com/pulse/evolution-network-intrusion-detection-bill-frank?trk=mp-author-card

30. April 2010 · Comments Off on Four questions to ask your firewall vendor and Gartner on the future of firewalls · Categories: Application Security, Innovation, IT Security 2.0, Network Security, Next Generation Firewalls, Web 2.0 Network Firewalls

Gartner's John Pescatore blogged about his view on the future of firewalls today. Many pundits have opined about enterprise deperimeterization. Not so, says Pescatore, although the functionality of the firewall is changing in response to changes in technology and the threat landscape. Gartner calls this new technology "next-generation firewalls."

It is really just border control – we don’t declare countries “deperimeterized” because airplanes were invented, we extend border control into the airport terminals.

Unfortunately every firewall vendor in the industry has jumped on the term. So in order to help you separate marketing fluff from reality, whenever you are speaking to a firewall vendor, be ready with these questions:

  • How have you adapted your stateful inspection engine in your next-generation firewall?
  • When in the firewall's packet/session analysis is the application detected?
  • Is all packet analysis performed in a single pass?
  • How does your appliance hardware support your analysis approach?
  • Is there a single user interface for all aspects of policy definition?
  • What is the degradation in performance as functionality is turned on?

If you like the answers, ask for one more thing – show me.

11. April 2010 · Comments Off on Spotlighting the Botnet business model · Categories: Malware, Network Security

TrendLabs has a nice article on the botnet business model. It features an illustration showing the relationships between different botnets including CUTWAIL, BREDO, KOOBFACE, ZEUS, WALEDEC, and others.

The level of cooperation and coordination is stunning. If you are not monitoring for and blocking botnet activity in your organization, you are exposing your organization to serious risks. If you are seeing no botnet activity in your organization, you are not using the right tools.

CSOonline published an article entitled, "What Are the Most Overrated Security Technologies?" At the head of the list are, no surprise, Anti-Virus and Firewalls.

Anti-Virus – signature based anti-virus products simply cannot keep up with the speed and creativity of the attackers. What's needed is better behavior anomaly based approaches to complement traditional anti-virus products.

Firewalls – The article talks about the disappearing perimeter, but that is less than half the story. The bigger issue is that traditional firewalls, using stateful inspection technology introduced by Check Point over 15 years ago, simply cannot control the hundreds and hundreds of "Web 2.0" applications. I've written about or referenced "Next Generation Firewalls" here, here, here, here, and here.

IAM and multi-factor authentication – Perhaps IAM and multi-factor authentication belong on the list. But the rationale in the article was vague. The biggest issue I see with access management is deciding on groups and managing access rights. I've seen companies with over 2,000 groups – clearly an administrative and operational nightmare. I see access management merging with network security as network security products become more application, content, and user aware. Then you can start by watching what people actually do in practice rather than theorizing about how groups should be organized.

NAC – The article talks about the high deployment and ongoing administrative and operational costs outweighing the benefits. Another important issue is that NAC does not address the current high-risk threats. The theory in 2006, somewhat but not overly simplified, was that if we checked the end point device to make sure its anti-virus signatures and patches were up-to-date before letting it on the network, we would prevent worms from spreading.

At present in practice, (a) worms are not a major security risk, (b) while patches are important, up-to-date anti-virus signatures do not significantly reduce risk, and (c) an end point can just as easily be compromised when it's already on the network.

A combination of (yes again) Next Generation Firewalls for large locations and data centers, and cloud-based Secure Web Gateways for remote offices and traveling laptop users will provide much more effective risk reduction.

31. December 2009 · Comments Off on Good guys bring down a botnet. Or did they? · Categories: Botnets, Malware, Network Security

Earlier this week PC World reported that a security researcher at FireEye took down a major botnet, Mega-D. However, LonerVamp weighed in with a more objective analysis of what FireEye accomplished.

Symantec's Hon Lau, senior security response manager, is reporting that the Koobface worm/botnet began a new attack using fake Christmas messages to lure Facebook users to download the Koobface malware.

This again shows the flexibility of the command and control function of the Koobface botnet. I previously wrote about Koobface creating new Facebook accounts to lure users to fake Facebook (or YouTube) pages.

These Facebook malware issues are a serious security risk for enterprises. While simply blocking Facebook altogether may seem like the right policy, it may not be for two reasons: 1) No access to Facebook could become a morale problem for a segment of your employees, and 2) Employees may be using Facebook to engage customers in sales/marketing activities.

Network security technology must be able to detect Facebook usage and block threats while allowing productive activity.

22. November 2009 · Comments Off on Koobface botnet creates fake Facebook accounts to lure you to fake Facebook or YouTube page · Categories: Botnets, IT Security 2.0, Malware, Network Security, Next Generation Firewalls, Risk Management, Security Policy

TrendMicro's Malware Blog posted information about a new method of luring Facebook users to a fake Facebook or YouTube page to infect the user with the Koobface malware agent.

The Koobface botnet has pushed out a new component that automates the following routines:

  • Registering a Facebook account
  • Confirming an email address in Gmail to activate the registered Facebook account
  • Joining random Facebook groups
  • Adding Facebook friends
  • Posting messages to Facebook friends’ walls

Overall, this new component behaves like a regular Internet user that starts to connect with friends in Facebook. All Facebook accounts registered by this component are comparable to a regular account made by a human. 

Here is yet another example of the risks associated with allowing Facebook to be used within the enterprise. However, simply blocking Facebook may not be an option either, because (1) it's demotivating to young employees used to accessing Facebook, or (2) it's a good marketing/sales tool you want to take advantage of.

Your network security solution, for example a next generation firewall, must enable you to implement fine grained policy control and threat prevention for social network sites like Facebook.

NetworkWorld has an interesting article today on the perils of social networking. The article focuses on the risk of employees transmitting confidential data. However, it's actually worse than that. There are also risks of malware infection via spam and other social engineering tactics. Twitter is notorious for its lax security. See my post, Twitter is Dead.

Blocking social networks completely is not the answer just as disconnecting from the Internet was not the answer in the 90's. Facebook, Twitter, and LinkedIn, among others can be powerful marketing and sales tools.

The answer is "IT Security 2.0" tools that can monitor these and hundreds of other web 2.0 applications to block incoming malware and outgoing confidential documents.

17. September 2009 · Comments Off on How to leverage Facebook and minimize risk · Categories: Application Security, IT Security 2.0, Network Security, Web 2.0 Network Firewalls

Marketing and Sales teams can benefit from using Web 2.0 social networks like Facebook to reach new customers and get customer feedback. It's about conversations rather than broadcasting. So simply denying the use of Facebook due to security risks and time-wasting applications is not a good option, much as in the 90's denying access to the Internet due to security risks was not feasible.

IT Security 2.0 requires finer grained monitoring and control of social networks like Facebook as follows:

  1. Restrict access to Facebook to only those people in sales and marketing who legitimately need access.
  2. Facebook is not a single monolithic application. It's actually a platform or an environment with many functions and many applications, some of which are pure entertainment and thus might be considered business time wasters. Create policies that restrict usage of Facebook to only those functions that are relevant to business value.
  3. Monitor the Facebook stream to detect and block incoming malware and outgoing confidential information.
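A sketch of what point 2 might look like as policy (the group names and Facebook sub-application labels are invented for illustration):

```python
# Hypothetical mapping of user groups to permitted Facebook sub-applications.
ALLOWED = {
    "sales":     {"pages", "messaging", "ads"},
    "marketing": {"pages", "ads", "analytics"},
}

def allow(user_group, facebook_function):
    """Permit a Facebook function only for groups with a business need;
    anything unlisted (games, quizzes, other groups) is denied."""
    return facebook_function in ALLOWED.get(user_group, set())
```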

Palo Alto Networks, which provides an "Application/User/Content aware" firewall (is that a mouthful?), appears to be able to provide such capabilities. Perhaps we might call it a Web 2.0 network firewall.

Is anyone aware of another firewall that can provide similar functionality?

McKinsey's just-released report on its third annual survey of the usage and benefits of Web 2.0 technology was enlightening as far as it went. However, it completely ignores the IT security risks Web 2.0 creates. Furthermore, traditional IT security products do not mitigate these risks. If we are going to deploy Web 2.0 technology, then we need to upgrade our security to, dare I say, "IT Security 2.0."

Even if Web 2.0 products had no vulnerabilities for cybercriminals to exploit, which is not possible, there is still the need for a control function, i.e. which applications should be allowed and who should be able to use them. Unfortunately traditional security vendors have had limited success with both. Fortunately, there are security vendors who have recognized this as an opportunity and have built solutions which mitigate these new risks.

In the past, I had never subscribed to the concept of security enabling innovation, but I do in this case. There is no doubt that improved communication, learning, and collaboration within the organization and with customers and suppliers enhances the organization's competitive position. Ignoring Web 2.0 or letting it happen by itself is not an option. Therefore when planning Web 2.0 projects, we must also include plans for mitigating the new risks Web 2.0 applications create.

The Web 2.0 good news – The survey results are very positive:

"69 percent of respondents report that their companies have gained measurable business benefits, including more innovative products and services, more effective marketing, better access to knowledge, lower cost of doing business, and higher revenues. Companies that made greater use of the technologies, the results show, report even greater benefits. We also looked closely at the factors driving these improvements—for example, the types of technologies companies are using, management practices that produce benefits, and any organizational and cultural characteristics that may contribute to the gains. We found that successful companies not only tightly integrate Web 2.0 technologies with the work flows of their employees but also create a “networked company,” linking themselves with customers and suppliers through the use of Web 2.0 tools. Despite the current recession, respondents overwhelmingly say that they will continue to invest in Web 2.0."

The Web 2.0 bad news – Web 2.0 technologies introduce IT security risks that cannot be ignored. The main risk comes from the fact that these applications are purposely built to bypass traditional IT security controls in order to simplify deployment and increase usage. They use techniques such as port hopping, encrypted tunneling, and browser based applications. If we cannot identify these applications and the people using them, we cannot monitor or control them. Any exploitation of vulnerabilities in these applications can go undetected until it's too late.
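The identification problem comes down to classifying traffic by what it contains rather than what port it uses. A toy version (real products use far richer heuristics than these few byte patterns):

```python
# Hypothetical payload signatures; real products use far richer heuristics.
APP_SIGNATURES = {
    "http":       lambda p: p.startswith((b"GET ", b"POST ", b"HTTP/")),
    "tls":        lambda p: p[:1] == b"\x16",  # TLS handshake record type
    "bittorrent": lambda p: b"BitTorrent protocol" in p,
}

def identify_app(payload):
    """Classify a flow by its payload, not its port, so port-hopping
    applications are still recognized."""
    for app, match in APP_SIGNATURES.items():
        if match(payload):
            return app
    return "unknown"
```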

A second risk is bandwidth consumption. For example, unauthorized and uncontrolled consumer-oriented video and audio file sharing applications consume large chunks of bandwidth. How much? Hard to know if we cannot see them.

In case we need some examples of the bad news, just in the last few days see here, here, here, and here.

The IT Security 2.0 good news – There are new IT Security 2.0 vendors who are addressing these issues in different ways as follows:

Database Activity Monitoring – Since we cannot depend on traditional perimeter defenses, we must protect the database itself. Database encryption, another technology, is also useful. But if someone has stolen authorized credentials (very common with trojan keyloggers), encryption is of no value. I discussed Database Activity Monitoring in more detail here. It's also useful for compliance reporting when integrated with application users.

User Activity Monitoring – Network appliances designed to monitor internal user activity and block actions that are out of policy. Also useful for compliance reporting.

Web Application Firewalls – Web server host-based software or appliances specifically designed to analyze anomalies in browser-based applications. WAFs are not meant to be primary firewalls but rather to be used to monitor the Layer 7 fields of browser-based forms into which users enter information. Cybercriminals enter malicious code which, if not detected and blocked, can trigger a wide range of exploits. WAFs are also useful for PCI compliance.
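A toy version of the field inspection a WAF performs might look like this; the deny-list below is a tiny, illustrative sample of the kinds of patterns real products match:

```python
import re

# Hypothetical deny-list of patterns a WAF might flag in form fields.
SUSPICIOUS = [
    re.compile(r"<script\b", re.IGNORECASE),               # reflected XSS attempt
    re.compile(r"\bunion\b.*\bselect\b", re.IGNORECASE),   # SQL injection
    re.compile(r"\.\./"),                                  # path traversal
]

def inspect_field(value):
    """Return the indices of the rules matched by a submitted form value."""
    return [i for i, rule in enumerate(SUSPICIOUS) if rule.search(value)]
```

A production WAF also normalizes encodings first, since attackers routinely obfuscate payloads to slip past naive pattern matching.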

"Web 2.0" Firewalls – Next generation network firewalls that can detect and control Web 2.0 applications in addition to traditional firewall functions. They also identify users and can analyze content. They can also perform URL filtering, intrusion prevention, proxying, and data leak prevention. This multi-function capability can be used to generate significant cost reductions by (1) consolidating network appliances and (2) unifying policy management and compliance reporting.

I have heard this type of firewall referred to as an Application Firewall. But it seems confusing to me because it's too close to Web Application Firewall, which I described above and performs completely different functions. Therefore, I prefer the term, Web 2.0 Firewall.

In conclusion, Web 2.0 is real and IT Security 2.0 must be part of Web 2.0 strategy. Put another way, IT Security 2.0 enables Web 2.0.