26. October 2011 · Comments Off on Australia DSD’s Top Four Security Strategies · Categories: blog

The SANS Institute has endorsed the Australian Defence Signals Directorate’s (DSD) top four strategies for mitigating information security risk:

  1. Patching applications and using the latest version of an application
  2. Patching operating systems
  3. Keeping admin rights under strict control (and forbidding the use of administrative accounts for email and browsing)
  4. Whitelisting applications

While there is nothing new about these four strategies, I would like to discuss #4. The Australian DSD’s Strategies to Mitigate Targeted Cyber Intrusions defines Application Whitelisting as preventing unapproved programs from running on PCs. I recommend extending whitelisting to the network: define which applications are allowed on the network by user group, both internally and Web-based, and deny all others.
My recommendation is not really a new idea either; after all, that’s what firewalls are supposed to do. The issue is that the traditional stateful inspection firewall does it using only port numbers and IP addresses. For at least the last five years, applications and users have routinely bypassed these firewalls by using applications that share open ports.
This is why, in October 2009, Gartner started talking about “Next Generation Firewalls,” which enable you to implement whitelisting on the network at Layer 7 (Application) as well as down the stack at Layers 4 and 3. In other words, they extend the traditional “Positive Control Model” firewall functionality up through the Application Layer. A sketch of what such a policy might look like follows. (If you have not seen that Gartner research report, please contact me and I will arrange for you to receive a copy.)
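To make network whitelisting concrete, here is a minimal sketch of a default-deny, application-aware policy check in Python. The group names and application identifiers are hypothetical; a real next-generation firewall identifies applications by inspecting traffic at Layer 7 rather than trusting port numbers.

```python
# Minimal sketch of a default-deny ("positive control") policy check.
# Group names and application identifiers are hypothetical.

ALLOWED_APPS = {
    "finance":     {"erp-client", "outlook", "https-browsing"},
    "engineering": {"ssh", "git", "https-browsing"},
}

def is_permitted(user_group: str, application: str) -> bool:
    """Permit traffic only if the application is whitelisted for the group."""
    return application in ALLOWED_APPS.get(user_group, set())

# Everything not explicitly allowed is denied.
assert is_permitted("finance", "erp-client")
assert not is_permitted("finance", "bittorrent")   # unknown app: denied
assert not is_permitted("contractors", "ssh")      # unknown group: denied
```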
20. February 2010 · Comments Off on Top 25 Most Dangerous Programming Errors · Categories: Research, Security Management

MITRE, via its Common Weakness Enumeration (CWE) effort and in conjunction with SANS, has just published the 2010 CWE/SANS Top 25 Most Dangerous Programming Errors. Heading the list are:

  1. Cross-site Scripting (Score = 346)
  2. SQL Injection (330)
  3. Classic Buffer Overflow (273)
  4. Cross-Site Request Forgery (261)
  5. Improper Access Control (219)

For each weakness, the report provides a description, prevention and mitigation techniques, and links to further reference material. It is well worth reading. As a concrete illustration of #2, the sketch below shows the standard defense against SQL injection.
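Here is a minimal sketch of that defense, a parameterized query, using Python’s built-in sqlite3 module. The table and data are invented for the example.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

user_input = "alice' OR '1'='1"  # a classic injection attempt

# Vulnerable pattern: string concatenation lets input rewrite the query.
#   "SELECT role FROM users WHERE name = '" + user_input + "'"

# Safe pattern: the placeholder binds the input as data, never as SQL.
row = conn.execute(
    "SELECT role FROM users WHERE name = ?", (user_input,)
).fetchone()
print(row)  # None -- the injection string matches no user name
```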

21. September 2009 · Comments Off on Empirical evidence shows that the top cyber security risks are related to Web 2.0 · Categories: Application Security, IT Security 2.0, Risk Management

Every consultant and vendor has a theory about the top cyber security risks. But what's really going on? SANS has the answer. Last week they released their analysis of threat and vulnerability data collected from 6,000 organizations and 9 million systems during the period from March 2009 to August 2009.

SANS says that two threat types dominate the analysis, both of which are tied to Web 2.0:

  • Threats against the people using Web 2.0 applications, i.e. unpatched workstation vulnerabilities that are exploited when users visit web sites.

My take: While the hype around NAC has definitely waned, the importance of comprehensive, continuous endpoint discovery, vulnerability analysis, configuration compliance checking, and patching at the application level as well as the operating system level is only increasing.

  • Organizations' Internet-facing web sites remain vulnerable to threats like SQL Injection and Cross-Site Scripting.

My take: It's clear that a rigorous Software Development Life Cycle process alone is just not getting the job done. Web application firewalls are a must-have; a simplified sketch of what they do follows.
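To make that concrete, here is a deliberately simplified sketch of the kind of request inspection a web application firewall performs. Real WAFs use full protocol parsing, normalization, and large, maintained rule sets; the two signatures below are toy examples only.

```python
import re

# Toy signatures for two of the attack classes named above.
SIGNATURES = {
    "sql_injection": re.compile(r"('|%27)\s*(or|and)\s+.*(=|like)", re.I),
    "xss":           re.compile(r"<\s*script", re.I),
}

def inspect(param_value: str) -> list:
    """Return the names of any signatures the request parameter trips."""
    return [name for name, sig in SIGNATURES.items() if sig.search(param_value)]

print(inspect("alice' OR 1=1"))              # ['sql_injection']
print(inspect("<script>alert(1)</script>"))  # ['xss']
print(inspect("plain old search term"))      # []
```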

I thought a post about Database Activity Monitoring (DAM) was timely because one of the DAM vendors, Sentrigo, published a Microsoft SQL Server vulnerability today, along with a utility that mitigates the risk. Also of note, Microsoft denies that this is a real vulnerability.

I generally don't like to write about a single new vulnerability because there are just so many of them. However, Adrian Lane, CTO and Analyst at Securosis, wrote a detailed post about this new vulnerability, Sentrigo's workaround, and Sentrigo's DAM product, Hedgehog. Therefore I wanted to put this in context.

Also of note, Sentrigo sponsored a SANS report called "Understanding and Selecting a Database Activity Monitoring Solution." I found this report to be fair and balanced, as I have found all of SANS's activities.

Database Activity Monitoring is becoming a key component of a defense-in-depth approach to protecting "competitive advantage" information, such as intellectual property and customer and financial data, and to meeting compliance requirements.

One of the biggest issues organizations face when selecting a Database Activity Monitoring solution is the method of activity collection, of which there are three: logging, network-based monitoring, and host-based (agent) monitoring. Each has pros and cons:

  • Logging – This requires turning on the database product's native logging capability. The main advantage of this approach is that logging is a standard feature of every database. Also, some database vendors, such as Oracle, offer a complete but separately priced Database Activity Monitoring solution, which they claim will support other vendors' databases. Here are the issues with logging:
    • You need a log management or Security Information and Event Management (SIEM) system to normalize each vendor's log format into a standard format so you can correlate events across different databases, and to store the large volume of events generated. If you have already committed to a SIEM product, this may not be an issue, assuming the SIEM vendor does a good job with database logs. (A simplified normalization sketch appears after this list.)
    • Logging can impose significant performance overhead on the database, possibly as high as 50%.
    • Database administrators can tamper with the logs. Also, if an external hacker gains control of the database server, he/she is likely to turn logging off or delete the logs.
    • Logging is not a good alternative if you want to block out-of-policy actions. Logging is after the fact and cannot be expected to block malicious activity. While SIEM vendors may have the ability to take actions, by the time the events are processed by the SIEM, seconds or minutes have passed, which means the exploit could already be complete.
  • Network-based – An appliance is connected to a tap or a SPAN port on the switch that sits in front of the database servers. Traffic to, and in most cases from, the databases is captured and analyzed. Clearly this puts no performance burden on the database servers at all. It also provides a degree of isolation from the database administrators. Here are the issues:
    • Local database calls and stored procedures are not seen. Therefore you have an incomplete picture of database activity.
    • You must have the network infrastructure to support these appliances.
    • It can get expensive depending on how many databases you have and how geographically dispersed they are.
  • Host-based – An agent is installed directly on each database server. The overhead is much lower than with native database logging, as low as 1% to 5%, although you should test this for yourself. Also, the agent sees everything, including stored procedures. Database administrators will have a hard time interfering with the process without being noticed. Deployment is simple, i.e. neither the networking group nor the datacenter team need be involved. Finally, the installation process should not require a database restart. As for disadvantages, this is where Adrian Lane's analysis comes in. Here are his concerns:
    • Building and maintaining the agent software is more difficult and time-consuming for the vendor than the network approach. However, this is the vendor's issue, not the user's.
    • The analysis is performed by the agent right on the database server. This could mean additional overhead, but it has the advantage of being able to block a query that is not "in policy" (see the blocking sketch after this list).
    • Under heavy load, transactions could be missed. But even if this is true, it's still better than the network-based approach, which surely misses local actions and stored procedures.
    • IT administrators could use the agent to snoop on database transactions to which they would not normally have access.
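To illustrate the normalization burden mentioned under the logging option, here is a minimal sketch of the work a SIEM does to reduce each vendor's log format to a common event schema. The log formats and field names below are invented; real formats vary by product and version, which is exactly why this work is costly.

```python
import re
from dataclasses import dataclass
from typing import Optional

@dataclass
class DbEvent:
    """A minimal common schema for database audit events."""
    source: str   # which database product produced the event
    user: str
    action: str

# Hypothetical per-vendor log formats, one regex each.
PATTERNS = {
    "oracle": re.compile(r"AUDIT user=(?P<user>\w+) action=(?P<action>\w+)"),
    "mssql":  re.compile(r"login:(?P<user>\w+)\s+stmt:(?P<action>\w+)"),
}

def normalize(source: str, line: str) -> Optional[DbEvent]:
    """Map one raw log line onto the common schema, or None if unparsed."""
    m = PATTERNS[source].search(line)
    return DbEvent(source, m["user"], m["action"]) if m else None

print(normalize("oracle", "AUDIT user=scott action=SELECT"))
print(normalize("mssql", "login:sa stmt:DROP"))
```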
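And to make the blocking advantage of the host-based approach concrete, here is a sketch of an in-line policy check of the kind an agent can apply before a statement executes. This is not modeled on Sentrigo's Hedgehog or any other product; the rules are invented for illustration.

```python
# Illustrative in-line policy check. A real agent hooks the database's
# execution path; here we just gate a statement before it would run.

BLOCKED_KEYWORDS = ("DROP TABLE", "TRUNCATE", "GRANT ALL")

class PolicyViolation(Exception):
    pass

def enforce(statement: str) -> str:
    """Raise on out-of-policy statements; pass the rest through."""
    upper = statement.upper()
    for keyword in BLOCKED_KEYWORDS:
        if keyword in upper:
            raise PolicyViolation(f"blocked: statement contains {keyword!r}")
    return statement

enforce("SELECT name FROM customers")   # allowed
try:
    enforce("DROP TABLE customers")     # blocked before execution
except PolicyViolation as err:
    print(err)
```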

Dan Sarel, Sentrigo's Vice President of Product, responded in the comments section of Adrian Lane's post. (Unfortunately there is no dedicated link to the response; you just have to scroll down to find it.) He addressed the "losing events under heavy load" issue by saying Sentrigo has customers processing heavy loads without losing transactions. He addressed the IT administrator snooping issue by saying that Sentrigo's sensors do not require database credentials, so database passwords are not exposed to IT administrators.