Sunday, May 15, 2011

Multidisciplinary Practices the Only Way to Go

I made the following post yesterday to an American Bar Association message board about a DC firm implementing a sort of multidisciplinary practice (MDP).  I believe this is particularly relevant for those practicing information security law given its interdisciplinary nature.

The current law firm model is an anachronism.  Nearly all other professions have embraced the needs of customers and clients who want quality services at reasonable prices.  The legal profession, particularly Big Law, has used legal and social barriers to drive up costs for its clients.  The hiring process is so skewed in favor of pedigree (what other profession cares about class rank 20 years into someone's career?) that the labor pool for Big Law is artificially constrained, and salaries and billing rates are inevitably driven up as a result.  Lawyers, knowing the law better than anyone, have managed to maintain rules barring MDPs under the guise of ethics, even though nearly every other profession uses them and their members are frequently viewed as more ethical than lawyers.  Let's rid ourselves of the notion that lawyers who report to non-lawyers can't be ethical.  General counsel and many government lawyers do it every day.  Law firms do it when they say no to an important client who makes up the majority of their revenue.

We need to stop pretending that we're so different in order to justify the competitive restraints we've imposed on our clients.  The legal profession in the United States is one huge antitrust violation.  For the sake of our clients, who are frequently left making substandard decisions based on limited or no representation, we need to adopt MDPs and move into the 21st century like everyone else.

Tuesday, June 15, 2010

More Cybersecurity Bills, More of the Same

With the amount of cybersecurity legislation being proposed these days, one would think that someone would come up with an innovative and fresh perspective on the issue.  Unfortunately, we've largely been subjected to more of the same: attempts to give the issue more prominence by creating new bureaucracy and slightly more funding, and offering largely symbolic mechanisms to respond to incidents.  As an example, the bill entitled "Protecting Cyberspace as a National Asset Act of 2010," S. 3480, offers great promise but falls short when examined more closely.

Among other things, the bill would create “an Office of Cyberspace Policy in the Executive Office of the President run by a Senate-confirmed Director.”  And while I like the fact that the newly created office will have the power to review cyber security budgets of federal agencies, it is only an advisory power, which presumably DHS could do right now if they wanted to.  Moreover, the enforcement power seems a little weak:

"(4) if the policies or activities of a Federal agency are not in compliance with the responsibilities of the Federal agency under the National Strategy— (A) notify the Federal agency; (B) transmit a copy of each notification under subparagraph (A) to the President and the appropriate congressional committees; and (C) coordinate the efforts to bring the Federal agency into compliance;"

We already have inspectors general, as well as GAO, that are supposed to perform this function.  I'm not sure why adding another oversight body, which will presumably be underfunded anyway, will make much of a difference.  In the end, agencies will just end up punishing those who have been pleading for more funding when audit findings surface.  While we can hope that the Cyberspace Policy Office will be able to force more funding, I'm not optimistic given the current budget constraints.  Moreover, much of the language encourages CYA behavior by agencies, which may just lead to the generation of more useless paperwork in response to endless audits.  Additionally, more extensive monitoring, which is generally a good thing, will likely lead to the discovery of compromises that previously went unnoticed, leading to more finger pointing, more audits, and less time spent actually securing systems.  The better solution is more individual accountability, which is hard to achieve with CIOs and CISOs bouncing from one agency to another.

On the private sector side, there is even less to be excited about.  Like most previous bills, the Lieberman bill tries to thread the needle by giving the federal government more authority to act in an emergency without taking over private assets.  And according to National Journal, the bill is not intended to provide a "kill switch" for Internet connections but instead provides other mechanisms.  "Such actions might include ordering a private sector operator not to accept incoming traffic from a particular source, Lieberman said."  Of course, without a mechanism to force DHS and other agencies to declassify, de-SBU, or de-FOUO such information, one may wonder whether private sector organizations will ever get the instructions needed to block the source.  Moreover, private sector organizations have been clamoring for such actionable information for some time.  Few of them would need to be forced to block such sources of attack.  That's already being done regularly through informal relationships between government and private sector entities facilitated by groups such as the Forum of Incident Response and Security Teams (FIRST).  What we don't see is any evidence of private sector organizations refusing to take a specific action based on government recommendations.  What is missing, instead, are specific recommendations from government based on threats occurring in real time.  This legislation does nothing to force that to happen.  Moreover, by the time the government has decided what action to take, the private sector organization would probably have already taken the needed action, or it will be too late.

Finally, the language on liability protection seems a bit troubling.  According to CNET, "If there's an 'incident related to a cyber vulnerability' after the president has declared an emergency and the affected company has followed federal standards, plaintiffs' lawyers cannot collect damages for economic harm. And if the harm is caused by an emergency order from the Feds, not only does the possibility of damages virtually disappear, but the US Treasury will even pick up the private company's tab."  First of all, the government has little knowledge of the private sector's business processes, and while this defense falls within the common law principles of public necessity, this is hardly as simple as tearing down a house to keep an entire city from catching fire.   Private sector entities will likely use the emergency declaration as a fig leaf, implying that all their actions or failures to act were dictated by the declaration and that the government's failure to prescribe other actions was intentional and precluded them from taking such actions.  Do we really want private sector, or even public sector, organizations blindly accepting an order from DHS and not being held accountable for failing to tell the government about the potential consequences?  For example, what happens if the government asks an oil pipeline operator to block a TCP port that the operator uses to monitor pressure in the pipeline, and as a result the pipe bursts and millions of gallons of oil spill into an environmentally sensitive area?  Do we want the government to take all the heat for something like that when the pipeline operator knew about the potential consequences?

I hate to pour cold water on all these seemingly genuine, but misguided, efforts to improve cybersecurity, but I've yet to see any evidence that the federal government is capable of doing more than providing intelligence and thought leadership to the private sector and funding research and development into new security technologies.  The only other avenue is laws that demand accountability, but so far we don't seem to be very good at that either, if the financial sector reforms are any guide.  Moreover, we can't even definitively say what controls and funding are needed for a given organization to prevent a compromise with any degree of actuarial reliability.

Saturday, April 17, 2010

Is Privacy Destroying Security?

Reading a recent post entitled "The Smart Grid Privacy Smoke Screen," describing how relatively low-impact privacy concerns are masking some more significant security vulnerabilities, got me thinking that this issue is broader than just the Smart Grid.  It also makes me question privacy's role today, some thirty years after the OECD Privacy Guidelines were first released.  Back then, security was just a single reference (the Security Safeguards Principle) that simply noted security as an important element in making privacy successful.  After all, the adage that you can have security without privacy but you can't have privacy without security is as true as ever today.  Perhaps that's why so many in the privacy community have been champions of encryption at all costs, even if they don't completely understand how it works.

The truth is that privacy's purview is relatively narrow.  It really asks who should be given access and for what purpose.  Everything else is security.  And not surprisingly, because the answer to that question can vary significantly depending upon the organization, the subject of the information, and the type of information, discussions in that area become somewhat unsatisfying.  Instead, many become involved in high-level discussions of security issues.  As many security professionals will tell you, privacy professionals are often less technical, but because many are lawyers who have the ear of the CEO, such high-level technical guidance suddenly becomes the new mandate for the chief information security officer.  Moreover, the objectives of privacy are often somewhat squishy and personal.  People willingly give up their privacy on a daily basis in exchange for access to information or to save money on what they buy.  That makes defining misuse of private information much more difficult.  We all agree that stealing one's bank account information for the purpose of withdrawing funds is always a bad thing.  Selling magazine subscription information to marketing firms that send out junk mail, however, is just one of many consequences we've come to expect.

Let's not forget that compared with cyber attacks that put lives at risk or result in significant financial losses, the value of the dignitary right of privacy hardly compares.  The message to the privacy professionals out there: focus on the who and the for-what-purpose, and leave the security to the real experts.

Saturday, March 20, 2010

Compliance in the Cloud?

A few weeks ago, I was part of a panel at the RSA Security Conference called "Cloudy with a Chance of Litigation."  This panel of lawyers and security practitioners tried to anticipate the kinds of legal issues that would arise in litigation, from the liability of providers for security compromises to the dicey issues of e-discovery of something as amorphous as a cloud.  A general sense of trepidation about cyber security in the cloud permeated this and other sessions.  While most admitted that the technology of today's clouds isn't much different from the time-share-style computing mainframes have offered for 40 years, many highlighted possible pitfalls in moving quickly into technologies that require little upfront planning or expense and therefore often escape the radar of both cyber security professionals and legal counsel.  Moreover, seemingly innocuous uses of cloud computing can quickly evolve into "bet the business" style operations when the initial pilots seem to work without a hitch.

But amidst all the confusion, hype, and understandable worry, we may find a bit of a silver lining.  And that is a somewhat standardized platform that security service providers and software developers can target.  Because one of the biggest problems for security professionals is often defining and maintaining secure configurations in a heterogeneous environment, the cloud, by necessity, offers some solutions.  While cloud providers do offer some flexibility in their software-as-a-service, platform-as-a-service, and infrastructure-as-a-service capabilities, service providers need to offer consistency and manageability in their packages to make money and stay competitive.  And so while storage, processors, and memory can vary, certain virtualization technologies and management tools may be the same across all customers.  That makes it easier for security service provider solutions like McAfee's Cloud Secure Program to be effective.  By working closely with the cloud provider, Amazon in this case, McAfee can focus its energies on offering a secure and compliant service and less on addressing the interoperability and customization issues that plague far too many technology deployments.  Through economies of scale and competition, we have the possibility of innovative approaches that are relatively inexpensive and easy to deploy.  The best part is that it offers the best hope yet for automating compliance processes as well as simple security tasks, so security professionals can focus on evolving threats and issues that are more unique to their businesses.  If that can happen, maybe all this hype surrounding cloud won't be so bad.
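To make the automation point concrete, here is a minimal sketch of what compliance checking against a standardized cloud baseline might look like.  The control names, thresholds, and instance configurations are entirely hypothetical illustrations, not any provider's or vendor's actual API:

```python
# Sketch: automated compliance checking against a standardized baseline.
# All control names and sample configs below are invented for illustration.

BASELINE = {
    "encryption_at_rest": True,
    "mfa_required": True,
    "min_tls_version": 1.2,
    "audit_logging": True,
}

def check_compliance(instance_config, baseline=BASELINE):
    """Return a list of findings where an instance deviates from baseline."""
    findings = []
    for control, required in baseline.items():
        actual = instance_config.get(control)
        if isinstance(required, bool):
            if actual is not True:
                findings.append(f"{control}: required, but not enabled")
        elif actual is None or actual < required:
            findings.append(f"{control}: requires >= {required}, found {actual}")
    return findings

# Because every tenant runs on the same virtualization stack, one checker
# can sweep every customer instance identically -- the homogeneity payoff.
fleet = {
    "web-01": {"encryption_at_rest": True, "mfa_required": True,
               "min_tls_version": 1.2, "audit_logging": True},
    "db-01":  {"encryption_at_rest": False, "mfa_required": True,
               "min_tls_version": 1.0, "audit_logging": True},
}
report = {name: check_compliance(cfg) for name, cfg in fleet.items()}
# report["web-01"] is empty; report["db-01"] flags two deviations.
```

The design point is the one in the paragraph above: a single checker is only feasible because the platform is uniform; in a heterogeneous shop, every system would need its own baseline.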

Friday, March 19, 2010

Does Privacy Matter with the Smart Grid?

As we hear more stories about the Smart Grid and the media focuses attention on potential security risks, it is worth asking whether privacy has a role to play.  But first let me set the ground rules.  I do not subscribe to the notion that privacy is security for the legal/policy types who can't talk techie at a detailed level.  All too often I've seen privacy articles and discussions devolve into discussions about password length or the need for encryption.  That's not privacy; it's security, and it pains me to see two perfectly good disciplines get pillaged by folks who run out of material and decide to veer into a different discussion.  That's not to say that both can't be discussed at the same time, but people need to know the difference.  So for me, privacy is only about the process of determining who should get access to a piece of information and for what purpose.  In some sense, privacy is what makes security useful, as security is concerned with deploying access restrictions based on the criteria that privacy sets.

With that said, are we really facing privacy issues with the Smart Grid?  And I caveat that by saying that privacy here means how personally identifiable information about energy customers is used and shared.  In theory, privacy could cover trade secrets and other sensitive information, in the sense that such data is meant to stay private, but for purposes here we're just talking about customer information.  In an excellent article entitled "Smart Privacy for the Smart Grid:  Embedding Privacy into the Design of Electricity Conservation," the authors go to great lengths describing the various types of measurement that can be taken and how they could be used:

Whether individuals tend to cook microwavable meals or meals on the stove; whether they have breakfast; the time at which individuals are at home; whether a house has an alarm system and how often it is activated; when occupants usually shower; when the TV and/or computer is on; whether appliances are in good condition; the number of gadgets in the home; if the home has a washer and dryer and how often they are used; whether lights and appliances are used at odd hours, such as in the middle of the night; whether and how often exercise equipment such as a treadmill is used. Combined with other information, such as work location and hours, and whether one has children, one can see that assumptions may be derived from such information.  For example: the homeowner tends to arrive home shortly after the bars close; the individual is a restless sleeper and is sleep deprived; the occupant leaves late for work; the homeowner often leaves appliances on while at work; the occupant rarely washes his/her clothes; the person leaves their children home alone; the occupant exercises infrequently.
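To see how little it takes to draw such inferences from raw interval data, consider this toy sketch.  The readings, baseload figure, and threshold are entirely invented for illustration; real meters, tariffs, and household loads differ:

```python
# Toy illustration: inferring likely occupancy from interval meter data.
# The kWh readings and thresholds below are invented, not real meter data.

BASELOAD_KWH = 0.10  # assumed always-on draw per 15-min interval (fridge, standby)

def likely_home(readings, multiple=2.0):
    """Return indices of intervals whose usage is well above baseload,
    suggesting someone is actively using appliances."""
    return [i for i, kwh in enumerate(readings)
            if kwh > multiple * BASELOAD_KWH]

# A day fragment: flat overnight draw, then a morning spike.
day = [0.09, 0.10, 0.11, 0.10,   # overnight, asleep
       0.45, 0.60, 0.52, 0.11]   # morning: shower? breakfast?
active = likely_home(day)
# Intervals 4-6 stand out -- exactly the "when occupants usually
# shower" sort of inference the article describes.
```

A one-line threshold comparison is enough to separate "home and active" from "asleep or away"; combining it with billing records or work schedules, as the quoted passage notes, is what turns meter data into a behavioral profile.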

Practically speaking, I suppose that exposing this information could be a bit embarrassing, but much of it serves little purpose to someone with malicious intent unless the person is a celebrity, as I'm sure Al Gore can attest after his home's energy use was legitimately drawn into charges of hypocrisy.  For the average person, burglary would seem to be a more compelling motive, even though there are probably easier and more accurate ways to find out if someone isn't home.  There is, of course, the dignitary right to privacy that can't easily be tied to economic value.  We capitalist Americans tend to deride such sentiments as idle chatter standing in the way of progress.  Nonetheless, at some level we all succumb to the luxury of privacy for privacy's sake, whether out of embarrassment that a parent, spouse, co-worker, neighbor, or Greenpeace activist finds out that we like to crank up the AC on days with energy shortages or partake in that guilty pleasure of a 30-minute shower.

But aside from those issues, survey after survey has shown that people will gladly give up some of their privacy for something in return.  That's the dirty secret behind web news registrations, free samples, and loyalty cards.  While there are some die-hard privacy proponents out there, most people are just trying to get the best deal for what they have to give up.  Hence, the lesson for utilities: be responsible with the information you say you're going to protect by putting effective cyber security measures in place.  And if you want to do some marketing with all that data you're getting from your customers, make sure they know they're getting a discount for all this personal information they're giving up, information that never was available to sell before.  Isn't capitalism wonderful?

Wednesday, March 17, 2010

Our Users aren’t as Dumb as We Think

For system administrators and security professionals, one of the most common complaints is that end users don't care, don't take the right precautions, or don't listen to what we tell them.  Well, all that may be true to some extent, but maybe we're missing the obvious as well.  While complying with the law is not optional, the methods we use to secure our data often give us more flexibility than we think.  An alert reader on the Security Metrics list noted an article entitled "So Long, And No Thanks for the Externalities: The Rational Rejection of Security Advice by Users," which discusses the results of work done by Microsoft Research.  The article abstract notes:

"It is often suggested that users are hopelessly lazy and unmotivated on security questions. They choose weak passwords, ignore security warnings, and are oblivious to certificate errors. We argue that users' rejection of the security advice they receive is entirely rational from an economic perspective. The advice offers to shield them from the direct costs of attacks, but burdens them with far greater indirect costs in the form of effort.  Looking at various examples of security advice we find that the advice is complex and growing, but the benefit is largely speculative or moot. For example, much of the advice concerning passwords is outdated and does little to address actual threats, and fully 100% of certificate error warnings appear to be false positives. Further, if users spent even a minute a day reading URLs to avoid phishing, the cost (in terms of user time) would be two orders of magnitude greater than all phishing losses. Thus we find that most security advice simply offers a poor cost-benefit tradeoff to users and is rejected. Security advice is a daily burden, applied to the whole population, while an upper bound on the benefit is the harm suffered by the fraction that become victims annually. When that fraction is small, designing security advice that is beneficial is very hard. For example, it makes little sense to burden all users with a daily task to spare 0.01% of them a modest annual pain."

In our private lives, we may follow such security and safety precautions anyway, because not all activities are measured in dollars and cents.  If someone derives some utility from washing his hands every fifteen minutes on the off chance that some invisible deadly germ might suddenly appear, who are we to argue?  There is an infinitesimally small chance that failing to wash at that frequency could lead to death, but few for-profit businesses could survive if they required their employees to follow that regimen.  People often misperceive risks, but many perceive risks correctly and still engage in abnormally risky or cautious behavior.  To them, it's simply a matter of utility.  For example, the very small risk of dying in a plane crash can be enough for someone to forgo flying if the need or desire is minimal.  However, driving down a highway at 10 mph for fear of car crashes would be inappropriate even if it only impeded other drivers and didn't increase the risk of an accident (which very slow driving has in fact been shown to do).

When our actions work in tandem with others, some happy medium is needed.  Hence the risk/reward tradeoff we find in business.  When one chooses to work for or do business with a company, one necessarily accepts its risk model.  Consequently, a super-secret intelligence agency may find it worth the cost to carefully read every URL to avoid phishing.  For the rest of us, it's just not worth it.  And yet the tips keep coming, and organizations plug them into training materials without any thought to their risk profile (or bottom line).  We lock people out of their accounts after three unsuccessful complex password tries even though there's not a scintilla of evidence to suggest that a higher number would be too risky.  We simply take what we receive from some official-sounding source as gospel and implement it, because making a risk-based decision would mean someone would have to be accountable.  Someday people will realize that deferring all decisions to some unknown entity that knows little about your environment or risk tolerance is not only economically silly, it may be riskier.  I relish the lawsuit in which a company is successfully sued for having security policies so strict that any reasonable person in that environment would choose to ignore them.  Now, who was it that needed security training?
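The aggregate arithmetic behind the paper's URL-reading example is easy to reproduce.  Here is a back-of-envelope sketch; every input is my own illustrative assumption, not the paper's exact figures:

```python
# Back-of-envelope version of the cost-benefit argument quoted above.
# All four inputs are illustrative assumptions, not figures from the paper.

us_online_users = 180e6          # assumed online population
minutes_per_day = 1.0            # time spent scrutinizing URLs daily
hourly_value    = 15.0           # assumed value of a user's time, $/hour
annual_phishing_losses = 60e6    # assumed annual direct losses, $

# Aggregate annual cost of following the advice, in dollars:
# users * (minutes/60) hours/day * $/hour * 365 days
advice_cost = us_online_users * (minutes_per_day / 60) * hourly_value * 365

ratio = advice_cost / annual_phishing_losses
# With these inputs, the time cost runs a couple hundred times the losses --
# the "two orders of magnitude" gap that makes rejecting the advice rational.
```

Tweak the inputs however you like; the qualitative conclusion survives wide ranges of assumptions, which is what makes a daily burden on the whole population so hard to justify against harm suffered by a small fraction.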

Sunday, February 07, 2010

Another Cybersecurity Bill; Yawn

And so now yet another cybersecurity bill is seeing the light of day as H.R. 4061 has just cleared the House.  Based on my reading, the bill authorizes, but doesn't appropriate, funds for cyber security research; an agency-by-agency review of cyber security skills; scholarships; and more of NIST's guidance efforts, including an effort to normalize security standards internationally, which NIST has been working on for several years.  While more funding for cyber security is usually a good thing, that's about all it does.  Everything else is already being done.  And in fact I understand that the Senate is looking to wrap this in with one of its appropriations bills.  The most telling sign of the bill's underwhelming nature is that it passed 422-5.  That ranks up there with the naming of post offices in terms of lacking any controversy.

As a New York Times article notes, the Obama budget actually cuts the cyber security division in DHS where most of the cross government cyber security efforts are spearheaded.  That doesn't bode well for a bill whose only distinguishing characteristic is more funding.

The suggestion that the government needs to hire 1,000 more "cyber warriors" has been bandied about by various government officials with little idea of what those folks would do or how they would be paid for.  As has been frequently pointed out, most of our critical infrastructure, and much of what hackers are interested in, is owned and operated by the private sector.  The parts of the government most at risk, mainly the military and intelligence communities, are much further along in both protecting their infrastructure and providing appropriate staffing.  However, even then there is a mismatch.

Most of the top technical skills they're looking for, such as malware analysis, exploit development, and penetration testing, are held by individuals who command salaries above the government GS scale and would not likely want to sit in windowless rooms all day examining network traffic they can't talk about.  Contractors will inevitably fill some of this void, but they're also having a hard time keeping talent while still fitting under the often rigid rate structures the government demands.  Right now the going rate for strong penetration testers with about five years of experience in IT/security exceeds $100K in many markets.

I do believe that we will be able to implement appropriate security controls for the federal government and that a sufficient amount of staff will eventually be hired; many may simply need to grow into the needed skills on the job.  However, I do not believe the federal government will ever be able to provide operational support for cyber security matters to the private sector.  Guidance on control frameworks and funding for research are useful endeavors.  But the government is simply not structured to advise the private sector on evolving threats in a timely manner.  Even if it had timely information, it would not be able to share it.  It's hard enough sharing information among federal agencies, as the attempted December 25 Northwest bombing demonstrated, but providing actionable information to the private sector is next to impossible.  Moreover, it is highly unlikely that private sector organizations will share the needed information on incidents that affect them.  The fear of fines and unwanted regulator attention has all but precluded those activities.

Unlike physical security, where jurisdiction is straightforward and techniques are well understood and slower to change, I don't believe the government will ever be able to protect us from cyber security threats other than by making a few arrests in the more serious cases or where the hackers are idiots.  The FBI and Justice Department should be congratulated for the busts they have made.  However, things are only going to get harder.  The source of timely threat information and the development of defenses will need to reside predominantly in the private sector.  There simply is no other way, in my opinion.