Archive for April, 2008

Privacy is not just for advocates

It seems like every year I have to buy a new hard drive; each is almost double the size of the previous one, and yet my OS keeps warning me I am nearly out of space. For a time I feel irritated (I haven’t really got that much more information, have I?). Then I go and look at the contents of my old drives, and realise that it’s not just that the new OS is bigger. I really have been collecting much more information, and in much richer formats.

And it’s not just me. What about governments and corporations? The desire to collect and store more and more data seems unstoppable. Mostly the reasons are not just benign; they are as close to altruistic as these organisations get. Take eHealth for example; contrary to some fringe conspiracy theorists (aka my mates in the pub), eHealth is not a nefarious plan by a shadow government to mine our most intimate data (you know, the stuff we talk to our mates about in the pub) but rather a responsible reaction to the increased load on a vital service we all, at some point or other, rely on.

But just because something is benign and responsible doesn’t mean it isn’t impactful and corruptible. The desire to produce a system for a reasonable cost inevitably means that tradeoffs must be made. The reality is that when these tradeoffs occur, those not represented at the decision tend to be those most disappointed by the result. How many meetings have you sat in where those not present have ended up with the most "to do’s"?

Of all the tradeoffs, those surrounding consent and access to data are the most troubling. Again, it’s important to realise these tradeoffs are not malignant in nature, but are often the most logical solution to a complex problem. We (being society in general) want solutions to problems for a reasonable amount of money, and unfortunately managing privacy is a costly business.

Consider: if there are four million people in a jurisdiction who are treatable by a health service, that’s four million people who will have differing views on consent. Some will openly give consent to all healthcare professionals, as long as they know that the information will only be used for its intended purpose (in this case ensuring treatment is effective, timely, and safe). Others will consider that only directly associated healthcare professionals should have access, whilst another group will only want those people in immediate contact to have access. And of course there are others who would grant no one access, nor have any data related to them stored at all.

These are the broad strokes, but as any system designer or developer will tell you, the devil is in the detail. A common scenario in the design of these systems is that of a young woman (let’s call her Mary). Mary’s father is a physician, and as such has access to the eHealth system. Mary is concerned she might be pregnant and attends a clinic for investigation. Under no circumstances does she want her father to know, yet she cannot hide the entire record from him. How does the solution account for this?

Within eHealth solutions two broad models of consent are considered: implicit and explicit. Briefly, explicit consent means that all storage and release of information is allowed only with the patient’s express consent. Implicit consent works on the premise that the patient is deemed to have given consent by their presence in the health system; it is, for example, how most hospital systems work. The important piece, as far as privacy is concerned, is that under implicit consent the patient has the ability to withdraw consent for a particular record (e.g. that visit to a family planning clinic), from a particular provider (e.g. Mary’s father), or in general.
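To make the withdrawal mechanism concrete, here is a minimal sketch of an implicit-consent check. All names (ConsentRegistry, the patient and provider identifiers) are illustrative inventions for this post, not part of any real eHealth system; a production system would obviously need auditing, authentication, and persistence on top of this.

```python
# Hypothetical sketch only: implicit consent with three levels of
# withdrawal (general, per record, per provider), as described above.

class ConsentRegistry:
    """Access is allowed by default; the patient may withdraw consent
    in general, for a particular record, or from a particular provider."""

    def __init__(self):
        self.general_withdrawals = set()    # patient ids
        self.record_withdrawals = set()     # (patient, record) pairs
        self.provider_withdrawals = set()   # (patient, provider) pairs

    def withdraw_general(self, patient):
        self.general_withdrawals.add(patient)

    def withdraw_record(self, patient, record):
        self.record_withdrawals.add((patient, record))

    def withdraw_provider(self, patient, provider):
        self.provider_withdrawals.add((patient, provider))

    def may_access(self, provider, patient, record):
        if patient in self.general_withdrawals:
            return False
        if (patient, record) in self.record_withdrawals:
            return False
        if (patient, provider) in self.provider_withdrawals:
            return False
        return True  # consent is implied by default


# Mary's scenario: hide everything from her father, but no one else.
registry = ConsentRegistry()
registry.withdraw_provider("mary", "dr_father")
print(registry.may_access("dr_father", "mary", "clinic_visit"))  # False
print(registry.may_access("dr_other", "mary", "clinic_visit"))   # True
```

Note that the default answer is `True`: that permissive default is exactly what makes implicit consent cheap to implement, and exactly what makes the withdrawal controls so important.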

Typically we start from the laudable position of explicit consent, and rapidly move to the concept of implicit consent. This is not because of any inherent deviancy, but simply due to the inescapable fact that explicit consent is intensely complex and costly to implement. In fact, in many cases this one point alone would end the project; the very project that was initiated to bring benefit to the population in the first place.

So how do we proceed? There is a reasonable fear of exposure of personal information, either intended or unintended. Governments have a long history of using information for purposes other than the reason it was initially collected, and this tends to happen over time. System A is built for a ministry and works so well that another ministry wants access to this very valuable data source, often for benign purposes. As the information is released further and further away from the initial collection point, the protection of the data tends to diminish and the risk of misuse increases. Worse, the original owner of the data may be entirely unaware of the reuse, and any consent controls they originally had are bypassed.

In most western countries we have strong Freedom of Information and Privacy acts. They generally say the same thing: the data belongs to the individual, and the individual has rights over its collection, use and distribution. The danger is that in the push to move government services onto electronic systems, these Acts are weakened.

All of these systems require trust. In many societies there is an inherent lack of trust in government agencies as they are often "faceless" and not personally accountable. To build trust we need to have some adult conversations about how much data we are willing to provide to these systems, and how that information will be controlled. The benefits of the systems must be rationally discussed as well as the risks associated with data loss. All of these solutions must be developed with the end user (the citizen) in mind rather than the simplification of the system.

More than anything, though, we need to be careful that the rush to develop a beneficial solution does not result in too many tradeoffs being made. At some point we have to accept that if a solution is currently too complex to implement, we should not proceed until the complexity is manageable. The benefits of data collection (and the benefits can be huge) cannot blindly outweigh the risks of unintended release or reuse of intensely personal data.



Models of Explicit Consent

It may be a sign of advancing years, but my increasing interest in the way we interact with computers and systems is starting to outweigh my interest in the way those computers and systems work. Maybe this is how my Grandmother felt about her VCR?

Consent models (so similar to VCRs in so many ways) are of particular interest, especially given my work in healthcare.

An idea I have been playing around with is explicit consent: the ability for an end user, a patient, to direct how their information may be collected and used in an electronic health solution. It started, like many of these ideas, with a fairly simple premise: the patient should have absolute control over their data. Which is possibly unfair, in that patients don’t have absolute control over their data in a non-electronic environment (you know, paper – that stuff we’ve used for hundreds of years?).

That’s actually quite hard, especially if the patient isn’t there all the time. We could, for example, develop a system where patients have some form of physical key which must be physically present to "unlock" their records, but then they would always have to be there. What about a form of DRM? That’s obviously been so successful for the music corporations. Nobody has ever been able to access music without permission since that was implemented.

While I was pondering all this, I popped online to my bank to make sure I had enough money in my account to buy some clothes. While there, I paid some bills, checked out a couple of security warnings and decided that unless I stopped buying gear for my camera a holiday in Hawaii was never going to happen.

Wait a moment – actually the online banking model is pretty good. Obviously the banks are able to deal with a large number of users in a secure manner. It was my choice to use the service, I was responsible for signing up, and managing my security. I can choose which accounts can be managed online, and I can cancel the service any time I choose.

So how about a model for consent management in electronic health where:

  1. Every patient has to explicitly opt in by signing up for the system
  2. If a patient has not opted into the system, no data is uploaded to it
  3. A patient can opt out, and in doing so, all requests for information are subsequently denied by the solution
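The three rules above can be sketched in a few lines. Again, everything here (the class name, the method names, the patient identifiers) is hypothetical and purely illustrative; the point is only that the default, unlike implicit consent, is denial.

```python
# Hypothetical sketch only: explicit opt-in consent, modelled loosely
# on the online banking sign-up described above.

class OptInConsent:
    """Nothing is stored or released unless the patient has actively
    signed up; opting out denies all subsequent requests."""

    def __init__(self):
        self.enrolled = set()

    def opt_in(self, patient):      # rule 1: explicit sign-up
        self.enrolled.add(patient)

    def opt_out(self, patient):     # rule 3: later requests denied
        self.enrolled.discard(patient)

    def may_store(self, patient):   # rule 2: no upload without opt-in
        return patient in self.enrolled

    def may_release(self, patient):
        return patient in self.enrolled


consent = OptInConsent()
print(consent.may_store("alice"))    # False: never opted in
consent.opt_in("alice")
print(consent.may_store("alice"))    # True
consent.opt_out("alice")
print(consent.may_release("alice"))  # False after opting out
```

Compare the default return value with the implicit-consent model: here access fails unless the patient has acted, which is precisely what makes the model simple to state and expensive to operate at population scale.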

Initially it would appear that such a model is difficult to implement and control, and certainly implicit consent is easier. However, much could be learnt from, and reproduced out of, online banking in this case.