15. April 2010 · Comments Off on Conventional password policy recommendations questioned · Categories: Security Policy

Microsoft researcher Cormac Herley recently published a paper casting doubt on the economic value of following conventional password policy recommendations. Whether you agree with Herley or not, his economic analysis is well worth reading.

Security Watch has a nice summary.

10. February 2010 · Comments Off on Schneier vs. Ranum: Should we (can we) ban anonymity? · Categories: Security Policy, Theory vs. Practice

The February 2010 issue of Information Security magazine has a face-off between Bruce Schneier, the realist, and Marcus Ranum, the dreamer, on the topic of anonymity on the Internet. 

Schneier says attempting to eliminate anonymity cannot work. More importantly, he goes on to say:

"Mandating universal identity and attribution is the wrong goal. Accept that there will always be anonymous speech on the Internet. Accept that you'll never truly know where a packet came from. Work on the problems you can solve: software that's secure in the face of whatever packet it receives, identification systems that are secure enough in the face of the risks. We can do far better at these things than we're doing, and they'll do more to improve security than trying to fix insoluble problems."

Schneier's piece is so good, you must read the whole thing.

I like the idea of maturity models because they can help an organization improve the state of a process in an organized fashion and enable it to compare itself to others. The granddaddy of maturity models is Carnegie Mellon University's Capability Maturity Model for software development, which was started in 1987. Now comes the Building Security In Maturity Model, which is focused on building security into the software development process.

Here is the opening paragraph of their web site:

The Building Security In Maturity Model (BSIMM) described on this website is designed to help you understand and plan a software security initiative. BSIMM was created through a process of understanding and analyzing real-world data from nine leading software security initiatives. Though particular methodologies differ (think OWASP CLASP, Microsoft SDL, or the Cigital Touchpoints), many initiatives share common ground. This common ground is captured and described in BSIMM. As an organizing feature, we introduce and use a Software Security Framework (SSF), which provides a conceptual scaffolding for BSIMM. Properly used, BSIMM can help you determine where your organization stands with respect to real-world software security initiatives and what steps can be taken to make your approach more effective.

The organizers are Gary McGraw and Sammy Migues of Cigital and Brian Chess of Fortify. Cigital and Fortify are both leading vendors in the software security market. Please do not interpret this as a negative. Putting out valuable information for free and enabling two-way communication with users is about as ethical as marketing gets.

They are promoting the very worthwhile and intuitively obvious notion that your software will be more secure if you build security in during design and development rather than bolt it on afterward.

BTW, Carnegie Mellon's Software Engineering Institute is still very active with respect to maturity models. Check them out here. Wikipedia provides a nice summary here.

30. December 2009 · Comments Off on Schneier’s take on aviation security as theater · Categories: Security Management, Security Policy

In light of TSA's reaction to the near-miss catastrophe on Northwest Flight 253 on Christmas Day, I'm glad to see that CNN republished an article by Bruce Schneier entitled, "Is Aviation security mostly for show?"

Symantec's Hon Lau, senior security response manager, is reporting that the Koobface worm/botnet began a new attack using fake Christmas messages to lure Facebook users to download the Koobface malware.

This again shows the flexibility of the command and control function of the Koobface botnet. I previously wrote about Koobface creating new Facebook accounts to lure users to fake Facebook (or YouTube) pages.

These Facebook malware issues are a serious security risk for enterprises. While simply blocking Facebook altogether may seem like the right policy, it may not be for two reasons: 1) No access to Facebook could become a morale problem for a segment of your employees, and 2) Employees may be using Facebook to engage customers in sales/marketing activities.

Network security technology must be able to detect Facebook usage and block threats while allowing productive activity.

22. November 2009 · Comments Off on Koobface botnet creates fake Facebook accounts to lure you to fake Facebook or YouTube page · Categories: Botnets, IT Security 2.0, Malware, Network Security, Next Generation Firewalls, Risk Management, Security Policy

TrendMicro's Malware Blog posted information about a new method of luring Facebook users to a fake Facebook or YouTube page to infect the user with the Koobface malware agent.

The Koobface botnet has pushed out a new component that automates the following routines:

  • Registering a Facebook account
  • Confirming an email address in Gmail to activate the registered Facebook account
  • Joining random Facebook groups
  • Adding Facebook friends
  • Posting messages to Facebook friends’ walls

Overall, this new component behaves like a regular Internet user that starts to connect with friends in Facebook. All Facebook accounts registered by this component are comparable to a regular account made by a human. 

Here is yet another example of the risks associated with allowing Facebook to be used within the enterprise. However, simply blocking Facebook may not be an option either, because (1) it's demotivating to young employees used to accessing Facebook, and (2) it's a good marketing/sales tool you want to take advantage of.

Your network security solution, for example a next generation firewall, must enable you to implement fine grained policy control and threat prevention for social network sites like Facebook.
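To make the idea of fine-grained policy control concrete, here is a minimal sketch of the kind of per-group, per-application rule set a next-generation firewall might enforce. The group names, application identifiers, and actions are all hypothetical, not a real product's API; real firewalls classify applications by inspecting traffic, not by string matching.

```python
# Hypothetical sketch of fine-grained application policy rules,
# in the spirit of what a next-generation firewall might enforce.
# All group names, application IDs, and actions are illustrative.

POLICY = [
    # (user_group, application, action)
    ("marketing", "facebook-posting",   "allow"),  # engage customers and prospects
    ("all-users", "facebook-browsing",  "allow"),  # morale: read-only access for everyone
    ("all-users", "facebook-apps",      "block"),  # common malware vector
    ("all-users", "facebook-file-xfer", "block"),  # block file downloads
]

def evaluate(user_group: str, application: str) -> str:
    """Return the first matching action; default-deny if nothing matches."""
    for group, app, action in POLICY:
        if group in (user_group, "all-users") and app == application:
            return action
    return "block"

print(evaluate("marketing", "facebook-posting"))   # allow
print(evaluate("engineering", "facebook-apps"))    # block
```

The point of the sketch is the shape of the policy: allow the productive uses (sales and marketing engagement, casual browsing) while blocking the high-risk application functions, rather than an all-or-nothing decision.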

01. October 2009 · Comments Off on Block Facebook? · Categories: Application Security, IT Security 2.0, Risk Management, Security Policy

I just received an email advertisement from a "Web 2.0 security" vendor recommending that I use its product to block the evil Facebook. This is rather heavy handed.

Sales and marketing people want to use Facebook to reach prospects and interact with customers.
Sure there are issues with Facebook, but an all-or-nothing solution does not make sense. A more granular approach is much better. I discussed this issue recently in a post entitled, How to leverage Facebook and minimize risk.

22. September 2009 · Comments Off on Twenty Critical Cyber Security Controls – a blueprint for reducing IT security risk · Categories: Risk Management, Security Management, Security Policy

The Center for Strategic & International Studies, a think tank founded in 1962 focused on strategic defense and security issues, published a consensus driven set of "Twenty Critical Controls for Effective Cyber Defense." While aimed at federal agencies, their recommendations are applicable to commercial enterprises as well. Fifteen of the twenty can be validated at least in part in an automated manner.

Also of note, the SANS' Top Cyber Security Risks report of September 2009 refers to this document as, "Best Practices in Mitigation and Control of The Top Risks."

Here are the twenty critical controls:

  1. Inventory of authorized and unauthorized devices
  2. Inventory of authorized and unauthorized software
  3. Secure configurations of hardware and software on laptops, workstations, and servers
  4. Secure configurations for network devices such as firewalls, routers, and switches
  5. Boundary defense
  6. Maintenance, monitoring, and analysis of Security Audit Logs
  7. Application software security
  8. Controlled use of administrative privileges
  9. Controlled access based on need to know
  10. Continuous vulnerability assessment and remediation
  11. Account monitoring and control
  12. Malware defenses
  13. Limitation and control of network ports, protocols, and services
  14. Wireless device control
  15. Data loss prevention
  16. Secure network engineering
  17. Penetration tests and red team exercises
  18. Incident response capability
  19. Data recovery capability
  20. Security skills assessment and appropriate training to fill gaps

I find this document compelling because of its breadth and brevity at only 49 pages. Furthermore, for each control it lays out "Quick Wins … that can help an organization rapidly improve its security stance generally without major procedural, architectural, or technical changes to its environment," and three successively more comprehensive categories of subcontrols.
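Since fifteen of the twenty controls can be validated at least in part automatically, here is a minimal sketch of what automated validation of Control 1 (inventory of authorized and unauthorized devices) might look like: compare what a network scan discovers against the asset inventory. The MAC addresses and data sources are illustrative; in practice the lists would come from a scanner and an asset-management database.

```python
# Minimal sketch of automated validation for Control 1
# (inventory of authorized and unauthorized devices).
# The device lists are illustrative placeholders.

authorized = {"00:1a:2b:3c:4d:5e", "00:1a:2b:3c:4d:5f"}   # asset inventory
discovered = {"00:1a:2b:3c:4d:5e", "de:ad:be:ef:00:01"}   # network scan results

unauthorized = discovered - authorized   # on the wire, not in the inventory
missing      = authorized - discovered   # in the inventory, not seen on the wire

print("Unauthorized devices:", sorted(unauthorized))
print("Missing devices:", sorted(missing))
```

The same diff-against-a-baseline pattern extends naturally to Control 2 (software inventory) and Control 3/4 (secure configurations), which is why those controls lend themselves to automation.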

Roger Grimes at InfoWorld's Security Central wrote a very good article about password management. I agree with everything he said, except Roger did not go far enough. For several of Roger's attack types (password guessing, keystroke logging, and hash cracking), one of the mitigation techniques is strong (high entropy) passwords.

True enough. However, I am convinced that it's simply not possible to memorize really strong (high entropy) passwords.

I wrote about this earlier and included a link to a review of password managers.
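The arithmetic behind that conviction is straightforward. A uniformly random password of length n drawn from a character set of size c has n · log2(c) bits of entropy, and the character counts needed to reach commonly cited entropy targets quickly exceed what people can memorize. The 80-bit target below is an illustrative threshold, not a universal standard.

```python
import math

def entropy_bits(length: int, charset_size: int) -> float:
    """Entropy of a uniformly random password: length * log2(charset size)."""
    return length * math.log2(charset_size)

# An 8-character password drawn from the 94 printable ASCII characters:
print(round(entropy_bits(8, 94), 1))        # ~52.4 bits

# Characters needed to reach ~80 bits, an illustrative target for
# resistance to offline cracking:
print(math.ceil(80 / math.log2(94)))        # 13 random characters
```

Thirteen truly random printable characters are easy for a password manager to generate and store, and very hard for a person to memorize, especially across dozens of accounts.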

I thought a post about Database Activity Monitoring was timely because one of the DAM vendors, Sentrigo, published a Microsoft SQL Server vulnerability today along with a utility that mitigates the risk. Also of note, Microsoft denies that this is a real vulnerability.

I generally don't like to write about a single new vulnerability because there are just so many of them. However, Adrian Lane, CTO and Analyst at Securosis, wrote a detailed post about this new vulnerability, Sentrigo's workaround, and Sentrigo's DAM product, Hedgehog. Therefore I wanted to put this in context.

Also of note, Sentrigo sponsored a SANS report called "Understanding and Selecting a Database Activity Monitoring Solution." I found this report to be fair and balanced, as I have found all of SANS' activities.

Database Activity Monitoring is becoming a key component in a defense-in-depth approach to protecting "competitive advantage" information such as intellectual property and customer and financial data, and to meeting compliance requirements.

One of the biggest issues organizations face when selecting a Database Activity Monitoring solution is the method of activity collection, of which there are three: logging, network-based monitoring, and host-based (agent) monitoring. Each has pros and cons:

  • Logging – This requires turning on the database product's native logging capability. The main advantage of this approach is that it is a standard feature included with every database. Also some database vendors like Oracle have a complete, but separately priced Database Activity Monitoring solution, which they claim will support other databases. Here are the issues with logging:
    • You need a log management or Security Information and Event Management (SIEM) system to normalize each vendor's log format into a standard format so you can correlate events across different databases and store the large volume of events that are generated. If you have already committed to a SIEM product this might not be an issue, assuming the SIEM vendor does a good job with database logs.
    • There can be significant performance overhead on the database associated with logging, possibly as high as 50%.
    • Database administrators can tamper with the logs. Also if an external hacker gains control of the database server, he/she is likely to turn logging off or delete the logs. 
    • Logging is not a good alternative if you want to block out-of-policy actions. Logging is after the fact and cannot be expected to block malicious activity. While SIEM vendors may have the ability to take actions, by the time the events are processed by the SIEM, seconds or minutes have passed, which means the exploit could already be completed.
  • Network-based – An appliance is connected to a tap or a span port on the switch that sits in front of the database servers. Traffic to and, in most cases, from the databases is captured and analyzed. Clearly this puts no performance burden on the database servers at all. It also provides a degree of isolation from the database administrators. Here are the issues:
    • Local database calls and stored procedures are not seen. Therefore you have an incomplete picture of database activity.
    • You must have the network infrastructure to support these appliances.
    • It can get expensive depending on how many databases you have and how geographically dispersed they are.
  • Host-based – An agent is installed directly on each database server. The overhead is much lower than with native database logging, as low as 1% to 5%, although you should test this for yourself. Also, the agent sees everything, including stored procedures. Database administrators will have a hard time interfering with the process without being noticed. Deployment is simple, i.e. neither the networking group nor the datacenter team need be involved. Finally, the installation process should not require a database restart. As for disadvantages, this is where Adrian Lane's analysis comes in. Here are his concerns:
    • Building and maintaining the agent software is difficult and more time consuming for the vendor than the network approach. However, this is the vendor's issue not the user's.
    • The analysis is performed by the agent right on the database. This could mean additional overhead, but has the advantage of being able to block a query that is not "in policy."
    • Under heavy load, transactions could be missed. But even if this is true, it's still better than the network based approach which surely misses local actions and stored procedures.
    • IT administrators could use the agent to snoop on database transactions to which they would not normally have access.

Dan Sarel, Sentrigo's Vice President of Product, responded in the comments section of Adrian Lane's post. (Unfortunately there is no dedicated link to the response. You just have to scroll down to it.) He addressed the "losing events under heavy load" issue by saying Sentrigo has customers processing heavy loads without losing transactions. He addressed the IT administrator snooping issue by saying that the Sentrigo sensors do not require database credentials. Therefore database passwords are not available to IT administrators.