31. January 2010 · Comments Off on Top IT Security Risk stories of the week · Categories: Top Stories

Due to time constraints this week, I'm trying a new type of post. Rather than commenting on the stories I found most interesting, I am simply listing them. For each one, I provide the headline linked to the story and the first paragraph or two so you can decide whether it's worth reading in its entirety.

Monday, January 25, 2010

What's Your DEP and ASLR Status? If you recall, Google says it was attacked by hackers based in China using a zero-day vulnerability in Internet Explorer. That vulnerability affected almost all versions of IE, but on some systems the attack was mitigated by systemic defenses like DEP and ASLR.
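
As a quick way to see where a Windows machine stands (my own illustration, not from the story), the system-wide DEP policy can be queried via the Win32 API. A minimal sketch, assuming Windows Vista or later:

```python
import ctypes
import sys

# GetSystemDEPPolicy is a kernel32 call returning the machine-wide setting:
# 0=AlwaysOff, 1=AlwaysOn, 2=OptIn, 3=OptOut.
DEP_POLICIES = {0: "AlwaysOff", 1: "AlwaysOn", 2: "OptIn", 3: "OptOut"}

if sys.platform == "win32":
    policy = ctypes.windll.kernel32.GetSystemDEPPolicy()
    print(f"System DEP policy: {DEP_POLICIES.get(policy, policy)}")
else:
    print("DEP is a Windows mitigation; nothing to query on this platform.")
```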

Flaws in the 'Aurora' Attacks  The attackers who unleashed the recent wave of targeted attacks against Google, Adobe, and other companies made off with valuable intellectual property and source code, shocking the private sector into the reality of the potential threat of state-sponsored cyberespionage. But they also made a few missteps along the way that might have prevented far worse damage.

Tuesday, January 26, 2010

'Aurora' code circulated for years on English sites; Where's the China connection?  An error-checking algorithm found in software used to attack Google and other large companies circulated for years in English-language books and websites, casting doubt on claims that it provided strong evidence the malware was written by someone inside the People's Republic of China.
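
For context, the algorithm in question was a CRC (cyclic redundancy check) variant. The sketch below is a standard bitwise CRC-16/CCITT in Python, offered purely as a generic illustration; it is not the specific table-driven variant analyzed in the Aurora code:

```python
def crc16_ccitt(data: bytes, crc: int = 0xFFFF) -> int:
    """Bitwise CRC-16/CCITT (polynomial 0x1021, initial value 0xFFFF)."""
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            # Shift left; if the top bit fell off, fold in the polynomial.
            crc = ((crc << 1) ^ 0x1021) if crc & 0x8000 else crc << 1
            crc &= 0xFFFF  # keep it a 16-bit value
    return crc

# Standard check value: the CRC of b"123456789" is 0x29B1 for this variant.
print(hex(crc16_ccitt(b"123456789")))
```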

Aurora-style attacks swiped oil field data from energy giants; Social networks implicated in planning Google assault   At least three US oil giants were hit by cyberattacks aimed at stealing secrets in the months before the high-profile Operation Aurora attacks against Google, Adobe et al in December.

Targeted attacks against Marathon Oil, ConocoPhillips, and ExxonMobil took place in 2008 and followed the same pattern as the later Aurora assaults. Information harvested by the attacks included "bid data" that gave information on new energy discoveries, according to documents obtained by the Christian Science Monitor.

Wednesday, January 27, 2010

Hydraq (aka Aurora) attack's resiliency uncovered   Security researchers continue to peel back the layers on the Trojan.Hydraq aka Operation Aurora attacks first reported publicly earlier this month, and the techniques employed by the threat to stay alive on infected machines were apparently neither cutting-edge nor particularly sophisticated.

According to researchers with Symantec — who've published a series of blogs examining various technical elements of the Trojan.Hydraq campaign — the attack used methods commonly observed in other malware programs to remain alive inside the organizations it infiltrated and to restart after a system reboot.

Cost of data breaches increased in 2009; Ponemon Institute research says malicious attacks are the most costly breaches   The cost of data breaches continues to rise, and malicious attacks accounted for more of them in 2009 than in previous years, according to a study published today.

In conjunction with study sponsor PGP Corp., Ponemon Institute today released the results of its fifth annual "U.S. Cost of a Data Breach" report. The news isn't good, according to the research firm's founder, Larry Ponemon.

Personal data stolen? Don't count on being told promptly  Andrea Rock of Consumer Reports highlights one of the findings of the new Ponemon report: Not only are data breaches from criminal attacks on U.S.-based companies’ financial and customer data on the rise, but your odds of being promptly informed if you’re a breach victim aren’t very high, according to a new data breach report just released by the Ponemon Institute.

The rise of point-and-click botnets  This post highlights a graphic from Team Cymru, a group that monitors online attacks and other badness in the underground economy. It suggests an increasing divergence in the way criminals are managing botnets, those large amalgamations of hacked PCs that are used for everything from snarfing up passwords to relaying spam and anonymizing traffic for the bad guys, to knocking a targeted host or website offline.

Where art thou, Conficker?  Researchers noted this week that the buzzworthy Trojan.Hydraq campaign that was used to hack Google and some other tech giants employed some of the same techniques used by our dear old pal Conficker to remain resident on infected PCs. Which causes one to ponder: whatever happened to the attack that captured so much attention a year ago?

Thursday, January 28, 2010

Haiti spam leads to new malware  As rescue efforts continue in Haiti, the world waits with bated breath for more good news about survivors. Unfortunately, while most people are thinking of ways to help victims, cybercriminals are using the tragedy to further their own malicious causes. Blackhat search engine optimization (SEO) poisoning attacks related to this tragedy have already led to FAKEAV infections. However, the most recent FAKEAV run appears to be only the start of more Haiti-related malware attacks.

Friday, January 29, 2010

The state of computer security in the UK  eSecurity Planet reports: British security consulting firm 7Safe and the University of Bedfordshire have released the UK Security Breach Investigations Report 2010, which looks at the current state of computer security in the UK through an analysis of actual data breaches.

Key findings include the fact that 69 percent of data compromises occurred in the retail sector, 85 percent of cases resulted in stolen payment card information, and SQL injection was used in 60 percent of attacks.
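
That last number is worth dwelling on, because the primary defense against SQL injection is cheap: parameterized queries. A minimal sketch (my own illustration, using Python's built-in sqlite3 for brevity):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT, card TEXT)")
conn.execute("INSERT INTO users VALUES ('alice', '4111-xxxx')")

user_input = "alice' OR '1'='1"  # a classic injection payload

# Vulnerable pattern: string-building lets the payload rewrite the query.
#   f"SELECT card FROM users WHERE name = '{user_input}'"  -> returns every card

# Safe pattern: a bound parameter is treated strictly as data, never as SQL.
rows = conn.execute(
    "SELECT card FROM users WHERE name = ?", (user_input,)
).fetchall()
print(rows)  # [] -- the payload matches no user instead of dumping the table
```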

Simmering over a 'Cyber Cold War'  New reports released this week on recent, high-profile data breaches make the compelling case that a simmering Cold War-style cyber arms race has emerged between the United States and China.

A study issued Thursday by McAfee and the Center for Strategic and International Studies found that more than half of the 600 executives surveyed worldwide said they had been subject to “stealthy infiltration” by high-level adversaries, and that 59 percent believed representatives of foreign governments had been involved in the attacks.

Here is a link to another story about the above-mentioned McAfee survey.

CIA, PayPal under bizarre SSL assault   The Central Intelligence Agency, PayPal, and hundreds of other organizations are under an unexplained assault that's bombarding their websites with millions of compute-intensive requests. The "massive" flood of requests is made over the websites' SSL (Secure Sockets Layer) port, causing them to consume more resources than normal connections, according to researchers at the Shadowserver Foundation, a volunteer security collective. The torrent started about a week ago and appears to be caused by recent changes made to a botnet known as Pushdo.

Saturday, January 30, 2010

A tad too late, Google begins phase-out of IE6  Not long after a Google employee running Internet Explorer 6 was hacked, creating an international incident, Google announced that it will begin withdrawing support for IE6 in its own services.

New security features in Google Chrome  Google has announced a number of security enhancements that are being implemented in Chrome. Some have already been implemented in other browsers, including Firefox and IE, and in significant add-ons like NoScript.

12. October 2009 · Comments Off on IBM CIO study ranks Risk Management and Compliance #3 of 10 CIO visionary plans · Categories: IT Security 2.0, Risk Management

On September 10th, IBM released the results of a global study (registration required) of 2,500 CIOs from around the world. Of the ten top "visionary plans," these CIOs ranked Risk Management and Compliance third. Business Intelligence and Analytics was first, followed by Virtualization. I also found it significant that Customer and Partner Collaboration came in fourth.

Unfortunately, the report did not divulge details of the methodology beyond saying that over 2,500 CIOs were interviewed. Granting that IBM is an able marketing organization, it genuinely wants to understand the priorities of CIOs so it can respond with the right services and increase its revenue. Therefore these priorities likely do represent what CIOs are thinking.

A more cynical opinion would be that this study is simply a marketing tool of IBM Global Services, in which case IBM Global Services is advising CIOs that Risk Management and Compliance should be their third highest priority. Either way, the report highlights the importance of Risk Management and Compliance.

Taken as a whole, the study correlates the use of information technology to drive innovation with higher corporate profits. (Reminder – correlation and causation are not the same thing.) In addition, information technology creates new risks which must be understood and mitigated.

Perhaps I am writing this because it supports my previously stated position that risk management enables innovation; for example, Web 2.0 creates new risks which, if not mitigated, can completely outweigh its value.

01. October 2009 · Comments Off on Block Facebook? · Categories: Application Security, IT Security 2.0, Risk Management, Security Policy

I just received an email advertisement from a "Web 2.0 security" vendor recommending that I use its product to block the evil Facebook. This is rather heavy-handed.

Sales and marketing people want to use Facebook to reach prospects and interact with customers. Sure, there are issues with Facebook, but an all-or-nothing solution does not make sense; a more granular approach is much better. I discussed this issue recently in a post entitled, How to leverage Facebook and minimize risk.

22. September 2009 · Comments Off on Twenty Critical Cyber Security Controls – a blueprint for reducing IT security risk · Categories: Risk Management, Security Management, Security Policy

The Center for Strategic & International Studies, a think tank founded in 1962 and focused on strategic defense and security issues, published a consensus-driven set of "Twenty Critical Controls for Effective Cyber Defense." While aimed at federal agencies, the recommendations are applicable to commercial enterprises as well. Fifteen of the twenty can be validated, at least in part, in an automated manner.

Also of note, the SANS Top Cyber Security Risks report of September 2009 refers to this document as "Best Practices in Mitigation and Control of The Top Risks."

Here are the twenty critical controls:

  1. Inventory of authorized and unauthorized devices
  2. Inventory of authorized and unauthorized software
  3. Secure configurations of hardware and software on laptops, workstations, and servers
  4. Secure configurations for network devices such as firewalls, routers, and switches
  5. Boundary defense
  6. Maintenance, monitoring, and analysis of Security Audit Logs
  7. Application software security
  8. Controlled use of administrative privileges
  9. Controlled access based on need to know
  10. Continuous vulnerability assessment and remediation
  11. Account monitoring and control
  12. Malware defenses
  13. Limitation and control of network ports, protocols, and services
  14. Wireless device control
  15. Data loss prevention
  16. Secure network engineering
  17. Penetration tests and red team exercises
  18. Incident response capability
  19. Data recovery capability
  20. Security skills assessment and appropriate training to fill gaps

I find this document compelling because of its breadth and brevity at only 49 pages. Furthermore, for each control it lays out "Quick Wins … that can help an organization rapidly improve its security stance generally without major procedural, architectural, or technical changes to its environment," and three successively more comprehensive categories of subcontrols.
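
To give a flavor of what a quick win for control #1 might look like in practice, here is a toy ping sweep for building a first rough device inventory. The subnet is a placeholder, and a real deployment would use purpose-built discovery tools; this is only a sketch of the idea:

```python
import platform
import subprocess
from concurrent.futures import ThreadPoolExecutor

SUBNET = "192.168.1."  # hypothetical network; adjust for your environment

def is_up(ip: str) -> bool:
    """Send one ICMP echo request and report whether the host answered."""
    count_flag = "-n" if platform.system() == "Windows" else "-c"
    result = subprocess.run(
        ["ping", count_flag, "1", ip],
        stdout=subprocess.DEVNULL,
        stderr=subprocess.DEVNULL,
    )
    return result.returncode == 0

if __name__ == "__main__":
    ips = [SUBNET + str(host) for host in range(1, 255)]
    with ThreadPoolExecutor(max_workers=32) as pool:
        for ip, up in zip(ips, pool.map(is_up, ips)):
            if up:
                print(ip)  # candidate entry for the device inventory
```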

07. September 2009 · Comments Off on Court allows bank customer to sue bank for “negligent” security practices · Categories: Authentication, Breaches, Funds Transfer Fraud, Legal, Risk Management, Security Management, Vendor Liability

Computerworld reported last week that a judge in Illinois ruled that a couple who lost $26,500 when their bank account was breached can sue the bank for negligence for not implementing "state-of-the-art" security measures which would have prevented the breach.

While bank credit card issuers have been suing credit card processors and retailers regularly to recoup losses due to breaches, this is the first case I am aware of in which a judge has ruled that a customer can sue the bank for negligence.

The more detailed blog post by attorney David Johnson, upon which the Computerworld article is based, discusses some really interesting details of this case.

The plaintiffs sued Citizens Financial Bank for negligence because it had not implemented multifactor authentication. The timeline is important here. The Federal Financial Institutions Examination Council (FFIEC) issued multifactor authentication guidelines in 2005. By 2007, when the plaintiffs' breach occurred, the bank had still not implemented multifactor authentication. The judge, Rebecca Pallmeyer of the U.S. District Court for the Northern District of Illinois, found this two-year delay unacceptable.
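
For readers unfamiliar with what the FFIEC guidance was asking for, one common second factor today is the time-based one-time password (TOTP, RFC 6238). A minimal sketch, not tied to anything Citizens Financial or Fiserv actually offered:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32: str, period: int = 30, digits: int = 6) -> str:
    """RFC 6238: HMAC-SHA1 over the current 30-second time counter."""
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // period)
    mac = hmac.new(key, counter, hashlib.sha1).digest()
    offset = mac[-1] & 0x0F  # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Demo secret only; the same secret loaded into any authenticator app
# yields the same 6-digit code within each 30-second window.
print(totp("JBSWY3DPEHPK3PXP"))
```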

Two interesting complications – (1) The account from which the money was stolen was a home equity line of credit account, not a deposit or consumer asset account. (2) This credit account was linked to the plaintiffs' business checking account. I discussed the differences between consumer and business account liability here. Fortunately for the plaintiffs, the judge brushed these issues aside and focused on the lack of multifactor authentication.

One issue that was not addressed – where was Fiserv in all of this? They are the provider of the online banking software used by Citizens Financial Bank. Were they offering some type of multifactor authentication? I would assume yes, although I have not been able to confirm this.

In conclusion, attorney David Johnson makes clear that this ruling increases the risk to banks (and possibly other organizations responsible for protecting money and/or other assets of value) if they do not implement state-of-the-art security measures.

07. September 2009 · Comments Off on Older versions of WordPress are under attack – Welcome to the real world · Categories: Breaches, Risk Management

In the last week, vulnerabilities in older versions of WordPress software have been exploited, resulting in blog posts being deleted and the blog sites being used for malicious purposes. Welcome to the real world.

The shock that some people are expressing, like Robert Scoble on Scobleizer, is somewhat surprising. It's clear that WordPress knew about the vulnerabilities for some time and urged self-hosting customers to upgrade to WordPress version 2.8.4. Some of those that did not have paid the price. Here is some additional useful information.

People have been snickering for years about Microsoft's security travails. We have since learned that Microsoft does not have a monopoly on security vulnerabilities and exploits. All software products have vulnerabilities. The issue is that as a software product becomes popular, it attracts cyber criminals. Therefore, software companies, as they become successful, must increase their focus on security issues, which WordPress seems to have done.

And we as consumers of software have risk management responsibilities too:

  • Upgrading to current releases
  • Backing up regularly to increase resiliency, i.e. the ability to recover quickly from an attack.

Roger Grimes at InfoWorld's Security Central wrote a very good article about password management. I agree with everything he said, except that Roger did not go far enough. For several of Roger's attack types (password guessing, keystroke logging, and hash cracking), one of the mitigation techniques is strong (high-entropy) passwords.

True enough. However, I am convinced that it's simply not possible to memorize really strong (high-entropy) passwords.
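
To see why, a little arithmetic helps: the entropy of a uniformly random password is length times log2(alphabet size), so a truly strong password quickly exceeds what most people can memorize. A sketch using only Python's standard library:

```python
import math
import secrets
import string

ALPHABET = string.ascii_letters + string.digits + string.punctuation  # 94 symbols

def generate(length: int = 16) -> str:
    """Uniformly random password drawn with a cryptographic RNG."""
    return "".join(secrets.choice(ALPHABET) for _ in range(length))

def entropy_bits(length: int, alphabet_size: int = len(ALPHABET)) -> float:
    """Entropy of a uniformly random password: length * log2(alphabet size)."""
    return length * math.log2(alphabet_size)

pw = generate()
# 16 characters over 94 symbols is about 105 bits -- strong, but try memorizing it.
print(pw, f"~{entropy_bits(len(pw)):.0f} bits")
```

Hence the password manager: let software remember what no human reasonably can.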

I wrote about this earlier and included a link to a review of password managers.

I thought a post about Database Activity Monitoring was timely because one of the DAM vendors, Sentrigo, published a Microsoft SQL Server vulnerability today along with a utility that mitigates the risk. Also of note, Microsoft denies that this is a real vulnerability.

I generally don't like to write about a single new vulnerability because there are just so many of them. However, Adrian Lane, CTO and Analyst at Securosis, wrote a detailed post about this new vulnerability, Sentrigo's workaround, and Sentrigo's DAM product, Hedgehog. Therefore I wanted to put this in context.

Also of note, Sentrigo sponsored a SANS report called "Understanding and Selecting a Database Activity Monitoring Solution." I found this report to be fair and balanced, as I have found all of SANS' activities.

Database Activity Monitoring is becoming a key component of a defense-in-depth approach to protecting "competitive advantage" information like intellectual property and customer and financial data, and to meeting compliance requirements.

One of the biggest issues organizations face when selecting a Database Activity Monitoring solution is the method of activity collection, of which there are three – logging, network-based monitoring, and agent-based monitoring. Each has pros and cons:

  • Logging – This requires turning on the database product's native logging capability. The main advantage of this approach is that it is a standard feature included with every database. Also, some database vendors like Oracle have a complete, but separately priced, Database Activity Monitoring solution, which they claim will support other databases. Here are the issues with logging:
    • You need a log management or Security Information and Event Management (SIEM) system to normalize each vendor's log format into a standard format so you can correlate events across different databases and store the large volume of events that are generated. If you have already committed to a SIEM product this might not be an issue, assuming the SIEM vendor does a good job with database logs.
    • There can be significant performance overhead on the database associated with logging, possibly as high as 50%.
    • Database administrators can tamper with the logs. Also, if an external hacker gains control of the database server, he/she is likely to turn logging off or delete the logs.
    • Logging is not a good alternative if you want to block out-of-policy actions. Logging is after the fact and cannot be expected to block malicious activity. While SIEM vendors may have the ability to take actions, by the time the events are processed by the SIEM, seconds or minutes have passed, which means the exploit could already be completed.
  • Network-based – An appliance is connected to a tap or a span port on the switch that sits in front of the database servers. Traffic to and, in most cases, from the databases is captured and analyzed. Clearly this puts no performance burden on the database servers at all. It also provides a degree of isolation from the database administrators. Here are the issues:
    • Local database calls and stored procedures are not seen. Therefore you have an incomplete picture of database activity.
    • You must have the network infrastructure to support these appliances.
    • It can get expensive depending on how many databases you have and how geographically dispersed they are.
  • Host-based – An agent is installed directly on each database server. The overhead is much lower than with native database logging, as low as 1% to 5%, although you should test this for yourself. Also, the agent sees everything, including stored procedures. Database administrators will have a hard time interfering with the process without being noticed. Deployment is simple, i.e. neither the networking group nor the datacenter team need be involved. Finally, the installation process should not require a database restart. As for disadvantages, this is where Adrian Lane's analysis comes in (a toy sketch of the agent idea follows this list). Here are his concerns:
    • Building and maintaining the agent software is difficult and more time consuming for the vendor than the network approach. However, this is the vendor's issue, not the user's.
    • The analysis is performed by the agent right on the database. This could mean additional overhead, but has the advantage of being able to block a query that is not "in policy."
    • Under heavy load, transactions could be missed. But even if this is true, it's still better than the network-based approach, which surely misses local actions and stored procedures.
    • IT administrators could use the agent to snoop on database transactions to which they would not normally have access.
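
To make the agent-based model concrete, here is the toy sketch promised above. It is nothing like a real product such as Hedgehog, just an illustration of the idea: a wrapper that sees every statement before the database does, logs it, and blocks anything out of policy.

```python
import sqlite3

BLOCKED_VERBS = {"DROP", "TRUNCATE"}  # toy policy; real products use far richer rules

class MonitoredConnection:
    """Toy agent: intercepts every statement, audits it, enforces policy inline."""

    def __init__(self, conn: sqlite3.Connection):
        self._conn = conn

    def execute(self, sql: str, params: tuple = ()):
        verb = sql.strip().split(None, 1)[0].upper()
        print(f"[audit] {sql!r}")  # sees everything, including local calls
        if verb in BLOCKED_VERBS:
            raise PermissionError(f"blocked by policy: {verb}")
        return self._conn.execute(sql, params)

db = MonitoredConnection(sqlite3.connect(":memory:"))
db.execute("CREATE TABLE t (x INTEGER)")
db.execute("INSERT INTO t VALUES (?)", (1,))
# db.execute("DROP TABLE t")  # would raise PermissionError before reaching the DB
```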

Dan Sarel, Sentrigo's Vice President of Product, responded in the comments section of Adrian Lane's post. (Unfortunately there is no dedicated link to the response; you have to scroll down to it.) He addressed the "losing events under heavy load" issue by saying Sentrigo has customers processing heavy loads without losing transactions. He addressed the IT administrator snooping issue by saying that the Sentrigo sensors do not require database credentials, so database passwords are not available to IT administrators.

Controversy around the PCI DSS compliance program increased recently when Robert Carr, the CEO of Heartland Payment Systems, in an article in CSO Online, attacked his QSAs, saying, "The audits done by our QSAs (Qualified Security Assessors) were of no value whatsoever. To the extent that they were telling us we were secure beforehand, that we were PCI compliant, was a major problem."

Mike Rothman, Senior VP of eIQNetworks, responded to Mr. Carr's comments, not so much to defend PCI as to place PCI in perspective, i.e. compliance does not equal security. I discussed this myself in my post about the 8 Dirty Secrets of IT Security, specifically in my comments on Dirty Secret #6 – Compliance Threatens Security.

Eric Ogren, a security industry analyst, continued the attack on PCI in his article in SearchSecurity last week where he said, "The federal indictment this week of three men for their roles in the largest data security breach in U.S. history also serves as an indictment of sorts against the fraud conducted by PCI – placing the burden of security costs onto retailers and card processors when what is really needed is the payment card industry investing in a secure business process."

The federal indictment to which Eric Ogren referred was that of Albert Gonzalez and others for the breaches at Heartland Payment Systems, 7-Eleven, Hannaford, and two national retailers referred to as Company A and Company B. Actually, this is the second federal indictment of Albert Gonzalez that I am aware of. The first, filed in Massachusetts in August 2008, was for the breaches at BJ's Wholesale Club, DSW, OfficeMax, Boston Market, Barnes & Noble, Sports Authority, and TJX.

Bob Russo, the general manager of the PCI Security Standards Council, disagreed with Eric Ogren's characterizations of PCI, saying that retailers and credit card processors must take responsibility for protecting cardholder information.

Rich Mogull, CEO and Analyst at Securosis, responded to Bob Russo's article with recommendations to improve the PCI compliance program, which he characterized as an "overall positive development for the state of security." He went on to say, "In other words, as much as PCI is painful, flawed, and ineffective, it has also done more to improve security than any other regulation or industry initiative in the past 10 years. Yes, it's sometimes a distraction; and the checklist mentality reduces security in some environments, but overall I see it as a net positive."

Rich Mogull seems to agree with Eric Ogren that the credit card companies have the responsibility and the power to improve the technical foundations of credit card transactions. In addition, he calls the PCI Council to task for such issues as:

  • incomplete and/or weak compliance requirements
  • QSA shopping
  • the conflict of interest created by allowing QSAs to perform audits and then sell security services based on the findings of those audits.

Clearly organizations have no choice but to comply with mandatory regulations. But the compliance process must be part of an overall risk management process. In other words, the compliance process is not equal to the risk management process but a component of it.

Finally, and most importantly, the enterprise risk management process must be more agile and responsive to new security threats than a bureaucratic regulatory body can be. For example, it may be some time before the PCI standards are updated to specify that firewalls must be able to work at the application level so that all the Web 2.0 applications traversing the enterprise network can be controlled. This is an important issue today, as this has been a major vector for compromising systems that are then used for funds transfer fraud.

27. August 2009 · Comments Off on Estonian Internet Service Provider is a front for a cyber crime network · Categories: Risk Management, Security Management

TrendMicro's security research team released a white paper detailing their investigation of an Estonian Internet company that was actually a front for a cybercrime network. This white paper is important because it shows just how organized cyber criminals have become. I have pointed this out in an earlier post here.

Organizations in the U.S. and Western Europe may wonder how this is relevant to them:

"From its office in Tartu [Estonia], employees administer sites that host codec
Trojans and command and control (C&C) servers that steer armies of
infected computers. The criminal outfit uses a lot of daughter
companies that operate in Europe and in the United States. These
daughter companies’ names quickly get the heat when they become
involved in Internet abuse and other cybercrimes. They disappear after
getting bad publicity or when upstream providers terminate their
contracts."

The full white paper is well worth reading.