04. November 2013 · Comments Off on Response to Stiennon’s attack on NIST Cybersecurity Framework · Categories: blog

In late October, NIST issued its Preliminary Cybersecurity Framework based on President Obama’s Executive Order 13636, Improving Critical Infrastructure Cybersecurity.

The NIST Cybersecurity Framework is based on one of the most basic triads of information security – Prevention, Detection, Response. In other words, start by preventing as many threats as possible. But you also must recognize that 100% prevention is not possible, so you need to invest in Detection controls. And of course, there are going to be security incidents, therefore you must invest in Response.

The NIST Framework defines a “Core” that expands on this triad. It defines five basic “Functions” of cybersecurity – Identify, Protect, Detect, Respond, and Recover. Each Function is made up of related Categories and Subcategories.

Richard Stiennon, as always provocative, rails against the NIST Framework, calling it “fatally flawed,” because it’s “poisoned with Risk Management thinking.” He goes on to say:

The problem with frameworks in general is that they are so removed from actually defining what has to be done to solve a problem. The problem with critical infrastructure, which includes oil and gas pipelines, the power grid, and city utilities, is that they are poorly protected against network and computer attacks. Is publishing a turgid high-level framework going to address that problem? Will a nuclear power plant that perfectly adopts the framework be resilient to cyber attack? Are there explicit controls that can be tested to determine if the framework is in place? Sadly, no to all of the above.

He then says:

IT security Risk Management can be summarized briefly:

1. Identify Assets

2. Rank business value of each asset

3. Discover vulnerabilities

4. Reduce the risk to acceptable value by patching and deploying defenses around the most critical assets

He then summarizes the problems with this approach as follows:

1. It is impossible to identify all assets

2. It is impossible to rank the value of each asset

3. It is impossible to determine all vulnerabilities

4. Trying to combine three impossible tasks to manage risk is impossible

Mr. Stiennon’s solution is to focus on Threats.

How many ways has Stiennon gone wrong?

First, if your Risk Management process is as Stiennon outlines, then your process needs to be updated. Risk Management is surely not just about identifying assets and patching vulnerabilities. Threats are a critical component of Risk Management. Furthermore, while the NIST Framework surely includes identifying assets and patching vulnerabilities, they are only two Subcategories within the rich Identify and Protect Functions. The whole Detect Function is focused on detecting threats! Therefore Stiennon is completely off-base in his criticism. I wonder if he actually read the NIST document.

Second, all organizations perform Risk Management either implicitly or explicitly. No organization has enough money to implement every administrative and technical control that is available. And that surely goes for all of the controls recommended by the NIST Framework’s Categories and Subcategories. Even organizations that want to fully commit to the NIST Framework will still need to prioritize the order in which controls are implemented. Trade-offs have to be made. Is it better to make these trade-offs implicitly and unsystematically? Or is it better to have an explicit Risk Management process that can be improved over time?
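
To make “explicit” concrete, here is a minimal sketch, using hypothetical control names and made-up cost and risk-reduction figures, of how an organization might rank candidate controls by estimated risk reduction per dollar within a budget:

```python
# Illustrative only: hypothetical controls with made-up cost and
# risk-reduction estimates, ranked by risk reduction per dollar
# until a budget is exhausted.

def prioritize(controls, budget):
    """Greedy ordering of controls by estimated risk reduction per dollar."""
    ranked = sorted(controls, key=lambda c: c["risk_reduction"] / c["cost"], reverse=True)
    selected, spent = [], 0
    for control in ranked:
        if spent + control["cost"] <= budget:
            selected.append(control["name"])
            spent += control["cost"]
    return selected, spent

controls = [
    {"name": "Network segmentation", "cost": 80_000, "risk_reduction": 30},
    {"name": "Endpoint detection",   "cost": 50_000, "risk_reduction": 25},
    {"name": "Security awareness",   "cost": 10_000, "risk_reduction": 10},
    {"name": "Full-disk encryption", "cost": 20_000, "risk_reduction": 8},
]

print(prioritize(controls, budget=100_000))
# (['Security awareness', 'Endpoint detection', 'Full-disk encryption'], 80000)
```

The point is not the specific numbers but that the trade-offs are written down and can be revisited as the estimates improve.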

I am surely not saying that we have reached the promised land of cybersecurity risk management, just as we have not in virtually any other field to which risk management is applied. There is a lot of research going on to improve risk management and decision theory. One example is the use of Prospect Theory.

Third, if IT security teams are to communicate successfully with senior management and Boards of Directors, how else can they do it? IT security risks, which are technical in nature, have to be translated into business terms – that is, how a threat will impact core business processes. Is Richard saying that an organization cannot and should not expect to identify the IT assets related to a specific business process? I think not.

When we in IT security look for a model to follow, I believe it should be akin to the role of lawyers’ participation in negotiating a business transaction. At some point, the lawyers have done all the negotiating they can. They then have to explain to the business executives responsible for the transaction the risks involved in accepting a particular paragraph or sentence in the contract. In other words, lawyers advise and business executives decide.

In the same way, it is up to IT security folks to explain a particular IT security risk in business terms to the business executive, who will then decide to accept the risk or reduce it by allocating funds to implement the proposed administrative or technical control. And of course meaningful metrics that can show the value of the requested control must be included in the communication process.

Given the importance of information technology to the success of any business, cybersecurity decisions must be elevated to the business level. Risk Management is the language of business executives. While cybersecurity risk management is clearly a young field, we surely cannot give up. We have to work to improve it. I believe the NIST Cybersecurity Framework is a big step in the right direction.

 

20. April 2012 · Comments Off on A response to Stiennon’s analysis of Palo Alto Networks · Categories: blog

I was dismayed to read Richard Stiennon’s article in Forbes, Tearing away the veil of hype from Palo Alto Networks’ IPO. I will say my knowledge of network security and experience with Palo Alto Networks appear to be very different from Stiennon’s.

Full disclosure: my company has been a Palo Alto Networks partner for about four years. I noticed on Stiennon’s LinkedIn biography that he worked for one of PAN’s competitors, Fortinet. I don’t own shares in any of the individual companies mentioned in Stiennon’s article, although from time to time I own mutual funds that might. Finally, I am planning on buying PAN stock when they go public.

Let me first summarize my key concerns and then I will go into more detail:

  • Stiennon overstates the functionality of stateful inspection firewall technology and, IMHO, misleads the reader about it. While he seems to place value in it, he fails to mention what security risks it can actually mitigate in today’s environment.
  • He does not seem to understand the difference between UTMs and Next Generation Firewalls (NGFW). UTMs combine multiple functions on an appliance, with each function processed independently and sequentially, and each managed with a separate user interface. NGFWs integrate multiple functions which share information, execute in parallel, and are managed with a unified interface. These differences result in dramatically different risk mitigation capabilities.
  • He does not seem to understand Palo Alto Networks’ unique ability to reduce attack surfaces by enabling a positive control model (default deny) from the network layer up through the application layer.
  • He seems to have missed the fact that Palo Alto Networks NGFWs were designed from the ground up to deliver Next Generation Firewall capabilities, while other manufacturers have simply added features to their stateful inspection firewalls.
  • He erroneously states that Palo Alto Networks does not have stateful inspection capabilities. It does, and it is backward compatible with traditional stateful inspection firewalls to enable conversions.
  • He claims that Palo Alto Networks uses a lot of third party components when in fact there are only two that I am aware of. And he completely ignores several of Palo Alto Networks’ latest innovations, including Wildfire and GlobalProtect.
  • He missed the reason, clearly stated in the S-1, why Palo Alto Networks’ Jan 2012 quarter revenue was slightly lower than its Oct 2011 quarter revenue.

Here are my detailed comments.

Stateful inspection is a core functionality of firewalls introduced by Check Point Software over 15 years ago. It allows an inline gateway device to quickly determine, based on a set policy, if a particular connection is allowed or denied. Can someone in accounting connect to Facebook? Yes or no.

The bolded sentence is misleading and wrong in the context of stateful inspection. Stateful inspection has nothing to do with concepts like who is in accounting or whether the session is attempting to connect to Facebook. Stateful Inspection is purely a Layer 3/Layer 4 technology and defines security policies based on Source IP, Destination IP, Source Port, Destination Port, and network protocol, i.e. UDP or TCP.

If you wanted to implement a stateful inspection firewall policy that says Joe in accounting cannot connect to Facebook, you would first have to know the IP address of Joe’s device and the IP address of Facebook. Of course this presents huge administrative problems because somebody would have to keep track of this information and the policy would have to be modified if Joe changed locations. Not to mention the huge number of policy rules that would have to be written for all the possible sites Joe is allowed to visit. No organization I have ever known would attempt to control Joe’s access to Facebook using stateful inspection technology.

Since the early 2000s, hundreds and hundreds of applications have been written, including Facebook and its subcomponents, that no longer obey the “rules” that were in place in the mid-90s when stateful inspection was invented. At that time, when a new application was built, it would be assigned a specific port number that only that application would use. For example, email transport agents using SMTP were assigned Port 25. Therefore the stateful inspection firewall policy implementer could safely control access to the email transport service by defining policies using Port 25.
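
To illustrate the point, here is a minimal sketch (not any vendor’s implementation; the addresses and rule format are made up) of how a stateful inspection policy sees traffic purely as a Layer 3/Layer 4 five-tuple:

```python
# Minimal sketch, not any vendor's implementation: a stateful inspection
# rule matches only on Layer 3/4 fields, so "Joe" and "Facebook" have to be
# approximated by IP addresses and ports (all addresses here are made up).
from collections import namedtuple

Rule = namedtuple("Rule", "src_ip dst_ip protocol src_port dst_port action")

RULES = [
    # "Joe's PC may not reach this one Facebook address" -- brittle: it breaks
    # the moment Joe moves to another desk or Facebook changes IP addresses.
    Rule("10.1.5.23", "203.0.113.10", "tcp", "any", 443, "deny"),
    # Allow outbound HTTPS for everyone else.
    Rule("any", "any", "tcp", "any", 443, "allow"),
]

def evaluate(src_ip, dst_ip, protocol, src_port, dst_port):
    """Return the action of the first matching rule; default deny."""
    packet = (src_ip, dst_ip, protocol, src_port, dst_port)
    for rule in RULES:
        fields = (rule.src_ip, rule.dst_ip, rule.protocol, rule.src_port, rule.dst_port)
        if all(f in ("any", p) for f, p in zip(fields, packet)):
            return rule.action
    return "deny"

print(evaluate("10.1.5.23", "203.0.113.10", "tcp", 51515, 443))  # deny
print(evaluate("10.1.5.23", "203.0.113.99", "tcp", 51516, 443))  # allow: same user, same app, different IP
```

Nothing in such a policy knows who Joe is or what application the session carries; that information only exists at higher layers.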

At present, the usage of ports is totally chaotic and abused by malicious actors. Applications share ports. Applications hop from port to port looking for a way to bypass stateful inspection firewalls. Cyber predators use this weakness of stateful inspection for their gain and your loss. Of course the security industry understood this issue and many new types of network security device types were invented and added to the network as Stiennon acknowledges.

But, inspecting 100% of traffic to implement these advanced capabilities is extremely stressful to the appliance, all of them still use stateful inspection to keep track of those connections that have been denied. That way the traffic from those connections does not need to be inspected, it is just dropped, while approved connections can still be filtered by the enhanced capability of these Unified Threat Management (UTM) devices (sometimes called Next  Generation Firewalls (NGFW), a term coined by Palo Alto Networks).

The first bolded phrase is true when a manufacturer adds advanced capabilities like application identification to an existing appliance. Palo Alto Networks understood this and designed an appliance from the ground up specifically to implement these advanced functions under load with low latency.

In the second bolded phrase, Stiennon casually lumps together the terms UTMs and Next Generation Firewalls as if they are synonymous. They are not. While it is true that Palo Alto Networks coined the term Next Generation Firewall, it only became an industry-defined term when Gartner published a research paper in October 2009 (ID Number G00171540) and applied a rigorous definition.

The key point is that a next generation firewall provides fully integrated Application Awareness and Intrusion Prevention with stateful inspection. Fully integrated means that (1) the application identification occurs in the firewall, which enables positive traffic control from the network layer up through the application layer, (2) all intrusion prevention is applied to the resulting allowed traffic, (3) all this is accomplished in a single pass to minimize latency, and (4) there is a unified interface for creating firewall policies. Running multiple inspection processes sequentially, controlled by independently defined policies, results in increased latency and excessive use of security management resources, and thus does not qualify as a Next Generation Firewall.
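
As a purely conceptual sketch (no vendor internals; the engine, policy, and inspection objects are hypothetical stand-ins), the difference can be pictured as sequential, independently governed passes versus one classification feeding one unified decision:

```python
# Conceptual sketch only; the engine, policy, and inspection objects are
# hypothetical stand-ins, not any product's architecture.

def utm_style(packet, engines):
    """Each engine makes its own pass with its own policy
    (added latency, separately managed rule sets)."""
    for engine in engines:              # e.g. firewall -> IPS -> AV -> URL filter
        packet = engine.inspect(packet)
        if packet is None:              # dropped somewhere along the chain
            return None
    return packet

def ngfw_style(packet, classify_app, unified_policy, inspections):
    """Classify the application in the firewall, apply one unified policy,
    then run threat prevention over the allowed traffic in a single pass."""
    app = classify_app(packet)
    if not unified_policy.allows(packet, app):
        return None                     # positive control: not allowed means dropped
    for inspect in inspections:
        if not inspect(packet, app):
            return None
    return packet
```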

But PAN really has abandoned stateful inspection, at a tremendous cost to their ability to establish connections fast enough to address the needs of large enterprises and carriers.

This is simply false. Palo Alto Networks supports standard stateful inspection for two purposes. First, to ease the conversion process from a traditional stateful inspection firewall: most of our customers start by converting their existing stateful inspection firewall policy rules and then add the more advanced NGFW functions.

Second, the use of ports in policies can be very useful when combined with application identification. For example, you can build a policy that says (a) SMTP can run only on port 25 and (b) only SMTP can run on port 25. The first part (a) assures that if SMTP is detected on any of the other 65,534 ports it will be blocked. This means that no cyber predator can set up an email service on any of your non-SMTP servers. The second part (b) says that no other application besides SMTP can run on port 25. Therefore when you open a port for a specific application, you can assure it will be the only application running on that port. Palo Alto Networks can do this because its core functionality monitors all 65,535 ports for all applications all the time.
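
As a sketch of the two-part policy just described (hypothetical rule format, not Palo Alto Networks’ policy syntax), the decision reduces to whether the identified application and the port agree, with everything else denied:

```python
# Hypothetical rule format, not Palo Alto Networks' policy syntax: allow only
# the listed (application, port) pairs; everything else is denied by default.

ALLOWED_PAIRS = {("smtp", 25)}

def decide(application, port):
    """Positive control: the identified application and the port must both match."""
    return "allow" if (application, port) in ALLOWED_PAIRS else "deny"

print(decide("smtp", 25))        # allow
print(decide("smtp", 8080))      # deny: SMTP may run only on port 25
print(decide("bittorrent", 25))  # deny: only SMTP may run on port 25
```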

Stiennon then goes on to quote Bob Walder of NSS Labs and interprets his statement as follows:

In other words, an enterprise deploying PAN’s NGFW is getting full content inspection all the time with no ability to turn it off. That makes the device performance unacceptable as a drop-in replacement for Juniper, Cisco, Check Point, or Fortinet firewalls.

This statement has no basis in facts that I am aware of. Palo Alto Firewalls are used all the time to replace the above mentioned companies’ firewalls. Palo Alto has over 6,500 customers! Does full packet inspection take more resources than simple stateful inspection? Of course. But that misses the point. As I said above, stateful inspection is completely useless at providing an organization a Positive Enforcement Model, which after all is the sine qua non of a firewall. By Positive Enforcement Model, I mean the ability to define what is allowed and block everything else. This is also described as “default deny.”

Furthermore, based on my experience, in a bake-off situation where the criteria are a combination of real-world traffic, real-world security policy requirements designed to mitigate defined high risks, and total cost of ownership, Palo Alto Networks will always win. I’ll go a step further and say that in today’s world there is simply no significant risk mitigation value in traditional stateful inspection.

It’s the application awareness feature. This is where PAN’s R&D spending is going. All the other features made possible by their hardware acceleration and content inspection ability are supported by third parties who provide malware signatures and URL databases of malicious websites and categorization of websites by type. 

This is totally wrong. In fact, the URL filtering database and the endpoint-checking host software in GlobalProtect (explained further on) are the only third party components Palo Alto Networks uses that I am aware of. PAN built a completely new firewall engine capable of performing stateful inspection (for backward compatibility and for the highly granular policies described above), application control, anti-virus, anti-spyware, anti-malware, and URL Filtering in a single pass. PAN writes all of its own malware signatures and of course participates in security intelligence sharing arrangements with other companies.

Palo Alto Networks has further innovated with (1) Wildfire which provides the ability to analyze executables being downloaded from the Internet to detect zero-day attacks, and (2) GlobalProtect which enables remote and mobile users to stay under the control and protection of PAN NGFWs.

While anecdotal, the reports I get from enterprise IT professionals are that PAN is being deployed behindexisting (sic) firewalls. If that is the general case PAN is not the Next Generation Firewall, it is a stand alone technology that provides visibility into application usage.  Is that new? Not really. Flow monitoring technology has been available for over a decade from companies like Lancope and Arbor Networks that provides this visibility at a high level. Application fingerprinting was invented by SourceFire and is the basis of their RNA product.

Wow. Let me try to deconstruct this. First, it is true that some companies start by putting Palo Alto Networks behind existing firewalls. Why not? I see this as an advantage for PAN, as it gives organizations the ability to leverage PAN’s value without waiting until it’s time to do a firewall refresh. Also, PAN can replace a proxy to improve content filtering. I’ll save the proxy discussion for another time. I am surely not privy to PAN’s complete breakdown of installation architectures, but “anecdotally” I would say most organizations are doing straight firewall replacements.

Much more importantly, the idea of doing application identification in an IPS or in a flow product totally misses the point. Palo Alto Networks ships the only firewall that does it to enable positive control (default deny) from the network layer up through the application layer. I am surely not saying that there is no value in adding application awareness to IPSs or flow products. There is. But IPSs use a negative control model, i.e. define what should be blocked and allow everything else. Firewalls are supposed to provide attack surface reduction and cannot unless they are able to exert positive control.

While I will agree that application identification and the ability to enforce policies that control what applications can be used within the enterprise is important I contend that application awareness is ultimately a feature that belongs in a UTM appliance or stand alone device behind the firewall. Like other UTM features it must be disabled for high connection rate environments such as large corporate gateways, data centers, and within carrier networks.

This may be Stiennon’s opinion, but I would ask: what meaningful risks, besides not meeting the requirements of a compliance regime, does a stateful inspection firewall mitigate, considering the ease with which attackers can bypass them? I have nothing against compliance requirements per se, but our focus is on information security risk mitigation.

In the three months ending Jan. 31 2012 PAN’s revenue is off from the previous quarter. The fourth quarter is usually the best quarter for technology vendors. There may be some extraordinary situation that accounts for that, but it is not evident in the S-1

There is no denying that year-over-year PAN has been on a tear, almost doubling its revenue from Q4 2010 to Q4 2011. But the glaring fact is that PAN’s revenue growth has completely stalled out in what was a great quarter for the industry.

Perhaps my commenting on these last paragraphs does not belong in this blog post as they are not technical in nature, but IMHO Stiennon is wrong again. Stiennon glosses over the excellent quarter that preceded the last one where PAN grew its revenue from $40.22 million to $57.11 million. Thus the last quarter’s $56.68 million looks to Stiennon like a stall with no explanation. However, here is the exact quote from the S-1 explaining what happened, “For the three month period ended October 31, 2011, the increase in product revenue was driven by strong performance in our federal business, as a result of improved productivity from our expanded U.S. government sales force and increased U.S. government spending at the end of its September 30 fiscal year.” My translation from investment banker/lawyer speak to English is that PAN did so well with the Federal government that quarter that the following quarter suffered by comparison. I could be wrong.

In closing, let me say I fully understand that there is no single silver bullet in security. Our approach is about balancing resources among Prevention, Detection, and Incident Response controls. There is never enough budget to implement every technical control that mitigates some risk. The exercise is to prioritize the selection of controls within budget constraints to provide the maximum information security risk reduction based on an organization’s understanding of its risks. While these priorities vary widely among organizations, I can confidently say that based on my experience, Palo Alto Networks provides the best network-based, Prevention Control, risk mitigation available today. Its, yes, revolutionary technology is well worth investing time to understand.

 

 

 

29. January 2012 · Comments Off on Financial Cryptography: Why Threat Modelling fails in practice · Categories: blog

“…threat modelling will always fail in practice, because by definition, threat modelling stops before practice.”

via Financial Cryptography: Why Threat Modelling fails in practice.

Insightful post highlighting the difference between threat and risk.

Let us now turn that around and consider *threat modelling*. By its nature, threat modelling only deals with threats and not risks and it cannot therefore reach out to its users on a direct, harmful level. Threat modelling is by definition limited to theoretical, abstract concerns. It stops before it gets practical, real, personal.

Risks are where harm is done to users. Risk modelling therefore is the only standard of interest to users.

 

23. December 2010 · Comments Off on The Only Trust Models You’ll Ever Need « The New School of Information Security · Categories: blog

The Only Trust Models You’ll Ever Need « The New School of Information Security.

What is this “trust” meme all about? Easy – it’s the other side of the risk coin. “Yet another hypothetical construct.”

IF YOU USE QUALITATIVE RISK STATEMENTS

Trust = Opposite of Risk

So “Low Risk” becomes “High Trust”.

IF YOU USE RISK SCORING WITHOUT MEASUREMENT SCALES

Trust = 1/Risk

So the larger the risk score, the smaller the trust score.
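
A trivial illustration of those two framings (the scales here are arbitrary):

```python
# Arbitrary scales, purely to illustrate the two framings above.

QUALITATIVE = {"Low Risk": "High Trust", "Medium Risk": "Medium Trust", "High Risk": "Low Trust"}

def trust_score(risk_score):
    """Trust = 1/Risk: the larger the risk score, the smaller the trust score."""
    return 1 / risk_score

print(QUALITATIVE["Low Risk"])   # High Trust
print(trust_score(10))           # 0.1
print(trust_score(2))            # 0.5
```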


Roger Grimes at InfoWorld's Security Central wrote a very good article about password management. I agree with everything he said, except that Roger did not go far enough. For several of Roger's attack types – password guessing, keystroke logging, and hash cracking – one of the mitigation techniques is strong (high entropy) passwords.

True enough. However, I am convinced that it's simply not possible to memorize really strong (high entropy) passwords.
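
To put rough numbers on “high entropy” (a back-of-the-envelope estimate, assuming each character is chosen uniformly at random and independently):

```python
# Back-of-the-envelope entropy estimate, assuming each character is picked
# uniformly at random: bits = length * log2(character set size).
import math

def entropy_bits(length, charset_size):
    return length * math.log2(charset_size)

print(round(entropy_bits(8, 26)))    # ~38 bits: 8 random lowercase letters
print(round(entropy_bits(12, 94)))   # ~79 bits: 12 random printable ASCII characters
print(round(entropy_bits(20, 94)))   # ~131 bits: strong, but good luck memorizing it
```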

I wrote about this earlier and included a link to a review of password managers.

I thought a post about Database Activity Monitoring was timely because one of the DAM vendors, Sentrigo, published a Microsoft SQL Server vulnerability today along with a utility that mitigates the risk. Also of note, Microsoft denies that this is a real vulnerability.

I generally don't like to write about a single new vulnerability because there are just so many of them. However, Adrian Lane, CTO and Analyst at Securosis, wrote a detailed post about this new vulnerability, Sentrigo's workaround, and Sentrigo's DAM product, Hedgehog. Therefore I wanted to put this in context.

Also of note, Sentrigo sponsored a SANS Report called "Understanding and Selecting a Database Activity Monitoring Solution." I found this report to be fair and balanced as I have found all of SANS activities.

Database Activity Monitoring is becoming a key component in a defense-in-depth approach to protecting "competitive advantage" information like intellectual property and customer and financial information, and to meeting compliance requirements.

One of the biggest issues organizations face when selecting a Database Activity Monitoring solution is the method of activity collection, of which there are three – logging, network-based monitoring, and host-based (agent) monitoring. Each has pros and cons:

  • Logging – This requires turning on the database product's native logging capability. The main advantage of this approach is that it is a standard feature included with every database. Also some database vendors like Oracle have a complete, but separately priced Database Activity Monitoring solution, which they claim will support other databases. Here are the issues with logging:
    • You need a log management or Security Information and Event Management (SIEM) system to normalize each vendor's log format into a standard format so you can correlate events across different databases and store the large volume of events that are generated. If you already committed to a SIEM product this might not be an issue assuming the SIEM vendor does a good job with database logs.
    • There can be significant performance overhead on the database associated with logging, possibly as high as 50%.
    • Database administrators can tamper with the logs. Also if an external hacker gains control of the database server, he/she is likely to turn logging off or delete the logs. 
    • Logging is not a good alternative if you want to block out of policy actions. Logging is after the fact and cannot be expected to block malicious activity. While SIEM vendors may have the ability to take actions, by the time the events are processed by the SIEM, seconds or minutes have passed which means the exploit could already be completed.
  • Network based – An appliance is connected to a tap or a span port on the switch that sits in front of the database servers. Traffic to and, in most cases, from the databases is captured and analyzed. Clearly this puts no performance burden on the database servers at all. It also provides a degree of isolation from the database administrators. Here are the issues:
    • Local database calls and stored procedures are not seen. Therefore you have an incomplete picture of database activity.
    • You must have the network infrastructure to support these appliances.
    • It can get expensive depending on how many databases you have and how geographically dispersed they are.
  • Host based – An agent is installed directly on each database server. The overhead is much lower than with native database logging, as low as 1% to 5%, although you should test this for yourself. Also, the agent sees everything, including stored procedures. Database administrators will have a hard time interfering with the process without being noticed. Deployment is simple, i.e. neither the networking group nor the datacenter team need be involved. Finally, the installation process should not require a database restart. As for disadvantages, this is where Adrian Lane's analysis comes in. Here are his concerns:
    • Building and maintaining the agent software is difficult and more time consuming for the vendor than the network approach. However, this is the vendor's issue not the user's.
    • The analysis is performed by the agent right on the database. This could mean additional overhead, but has the advantage of being able to block a query that is not "in policy."
    • Under heavy load, transactions could be missed. But even if this is true, it's still better than the network based approach which surely misses local actions and stored procedures.
    • IT administrators could use the agent to snoop on database transactions to which they would not normally have access.

Dan Sarel, Sentrigo's Vice President of Product, responded in the comments section of Adrian Lane's post. (Unfortunately there is no dedicated link to the response; you just have to scroll down to it.) He addressed the "losing events under heavy load" issue by saying Sentrigo has customers processing heavy loads without losing transactions. He addressed the IT administrator snooping issue by saying that the Sentrigo sensors do not require database credentials; therefore database passwords are not available to IT administrators.

Controversy around the PCI DSS compliance program increased recently when Robert Carr, the CEO of Heartland Payment Systems, in an article in CSO Online, attacked his QSAs saying, "The audits done by our QSAs (Qualified Security Assessors) were of no value whatsoever. To the extent that they were telling us we were secure beforehand, that we were PCI compliant, was a major problem."

Mike Rothman, Senior VP of eIQNetworks, responded to Mr. Carr's comments, not so much to defend PCI but to place PCI in perspective, i.e. compliance does not equal security. I discussed this myself in my post about the 8 Dirty Secrets of IT Security, specifically in my comments on Dirty Secret #6 – Compliance Threatens Security.

Eric Ogren, a security industry analyst, continued the attack on PCI in his article in SearchSecurity last week where he said, "The federal indictment this week of three men for their roles in the largest data security breach in U.S. history also serves as an indictment of sorts against the fraud conducted by PCI – placing the burden of security costs onto retailers and card processors when what is really needed is the payment card industry investing in a secure business process."

The federal indictment to which Eric Ogren referred was that of Albert Gonzalez and others for the breaches at Heartland Payment Systems, 7-Eleven, Hannaford, and two national retailers referred to as Company A and Company B. Actually, this is the second federal indictment of Albert Gonzalez that I am aware of. The first, filed in Massachusetts in August 2008, was for the breaches at BJ's Wholesale Club, DSW, OfficeMax, Boston Market, Barnes & Noble, Sports Authority, and TJX.

Bob Russo, the general manager of the PCI Security Standards Council disagreed with Eric Ogren's characterizations of PCI, saying that retailers and credit card processors must take responsibility for protecting cardholder information.

Rich Mogull, CEO and Analyst at Securosis, responded to Bob Russo's article with recommendations to improve the PCI compliance program, which he characterized as an "overall positive development for the state of security." He went on to say, "In other words, as much as PCI is painful, flawed, and ineffective, it has also done more to improve security than any other regulation or industry initiative in the past 10 years. Yes, it's sometimes a distraction; and the checklist mentality reduces security in some environments, but overall I see it as a net positive."

Rich Mogull seems to agree with Eric Ogren that the credit card companies have the responsibility and the power to improve the technical foundations of credit card transactions. In addition, he calls the PCI Council to task for such issues as:

  • incomplete and/or weak compliance requirements
  • QSA shopping
  • the conflict of interest they created by allowing QSAs to perform audits and then sell security services based on the findings of those audits.

Clearly organizations have no choice but to comply with mandatory regulations. But the compliance process must be part of an overall risk management process. In other words, the compliance process is not equal to the risk management process but a component of it.

Finally, and most importantly, the enterprise risk management process must be more agile and responsive to new security threats than a bureaucratic regulatory body can be. For example, it may be some time before the PCI standards are updated to specify that firewalls must be able to work at the application level so that all the Web 2.0 applications traversing the enterprise network can be controlled. This is an important issue today, as this has been a major vector for compromising systems that are then used for funds transfer fraud.

27. August 2009 · Comments Off on Estonian Internet Service Provider is a front for a cyber crime network · Categories: Risk Management, Security Management

TrendMicro's security research team announced a white paper detailing their investigation of an Estonian Internet company that was actually a front for a cybercrime network. This white paper is important because it shows just how organized cyber criminals have become. I have pointed this out in an earlier post here.

Organizations in the U.S. and Western Europe may wonder how this is relevant to them:

"From its office in Tartu [Estonia], employees administer sites that host codec Trojans and command and control (C&C) servers that steer armies of infected computers. The criminal outfit uses a lot of daughter companies that operate in Europe and in the United States. These daughter companies’ names quickly get the heat when they become involved in Internet abuse and other cybercrimes. They disappear after getting bad publicity or when upstream providers terminate their contracts."

The full white paper is well worth reading.

The Washington Post reported yesterday that there is an increase in "funds transfer fraud" being perpetrated by organized crime groups from Eastern Europe against small and medium U.S. businesses. 

It's hard to know the extent of this type of crime because there is no breach notification requirement since no customer information is disclosed. However, many companies are reporting these crimes to the FBI and of course to their banks.

The risk of funds transfer fraud to businesses is much higher than to consumers for the following reasons:

  • Dollar amounts are higher.
  • Under the Uniform Commercial Code, businesses only have two days to dispute charges they feel are unauthorized. Consumers have 60 days from the time they receive their statements.
  • Because banks are liable for the consumer losses and less so for the business losses, they invest more resources in protecting consumers.

The complete article in the Washington Post is well worth reading.

In a previous post, I highlighted one of the techniques used by cyber criminals where they surreptitiously install the Clampi trojan on a PC in order to get the login credentials needed for online banking.

Recommended actions:

  • Install anti-virus/anti-malware agents on all workstations and keep them up-to-date
  • Use an end-point configuration management system to discover all workstations, to assure the above mentioned agents are installed and up-to-date, and to assure that unauthorized software is not installed
  • Implement firewall policies to (1) assure that only authorized people (i.e. people in authorized roles) using only authorized workstations can connect to financial institutions to perform funds transfer transactions, (2) assure that people not authorized cannot connect to financial institutions, and (3) generate alerts when there are attempts to violate these policies (a minimal sketch follows this list)
  • Implement a process where funds transfer transactions are reviewed on a daily basis by someone other than the person or people who perform the transactions
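
Here is a minimal sketch of the firewall policy idea in the third bullet above; the role names, workstation addresses, and bank destination are hypothetical placeholders, not any specific product's rule syntax:

```python
# Hypothetical placeholders throughout: the point is only that access to the
# funds-transfer destination is restricted by role and workstation, everything
# else to that destination is denied, and violation attempts raise an alert.

AUTHORIZED_ROLES = {"treasury"}
AUTHORIZED_WORKSTATIONS = {"10.2.0.11", "10.2.0.12"}
FUNDS_TRANSFER_DESTINATIONS = {"bank-funds-transfer.example.com"}

def check_connection(role, workstation_ip, destination):
    if destination not in FUNDS_TRANSFER_DESTINATIONS:
        return "not covered by this policy"
    if role in AUTHORIZED_ROLES and workstation_ip in AUTHORIZED_WORKSTATIONS:
        return "allow"
    print(f"ALERT: blocked funds-transfer attempt by role {role!r} from {workstation_ip}")
    return "deny"

print(check_connection("treasury", "10.2.0.11", "bank-funds-transfer.example.com"))  # allow
print(check_connection("sales", "10.3.4.7", "bank-funds-transfer.example.com"))      # ALERT + deny
```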

The Department of Health and Human Services this week published the regulations for the "breach notification" provision of the Health Information Technology for Economic and Clinical Health (HITECH) Act, part of the American Recovery and Reinvestment Act of 2009 (ARRA). In effect, this is an extension of HIPAA and further strengthens HIPAA's Privacy Rule and Security Rule.

The new breach notification regulations are in a 121-page document. HHS also issued a press release that summarizes the new regulations.

This type of breach notification regulation started in California with SB 1386, which went into effect on July 1, 2003. Since then, about 40 other states have passed similar laws.

In 2008, California went on to pass a specific health care information protection law, SB 541, which requires notification of breaches and financial penalties up to $250,000 per incident. Here is a Los Angeles law firm's presentation on it. Since SB 541 went into effect on January 1, 2009, there have been over 800 incidents reported.