Archive for the 'Healthcare' Category

Privacy is not just for advocates

It seems like every year I have to buy a new hard drive; each almost doubles the size of the previous one, and yet my OS keeps warning me I am nearly out of space. For a time I feel irritated (I haven’t really got that much more information, have I?). Then I go and look at the contents of my old drives, and realise that it’s not just that the new OS is bigger. I really have been collecting much more information, and in much richer formats.

And it’s not just me. What about governments and corporations? The desire to collect and store more and more data seems unstoppable. Mostly the reasons are not just benign; they are as close to altruistic as these organisations get. Take eHealth for example: contrary to some fringe conspiracy theorists (aka my mates in the pub), eHealth is not a nefarious plan by a shadow government to mine our most intimate data (you know, the stuff we talk to our mates about in the pub) but rather a responsible reaction to the increased load on a vital service we all, at some point or other, rely on.

But just because something is benign and responsible doesn’t mean it isn’t impactful and corruptible. The desire to produce a system for a reasonable cost inevitably means that tradeoffs must be made. The reality is that when these tradeoffs occur, those not represented at the decision tend to be those most disappointed by the result. How many meetings have you sat in where those not present ended up with the most "to do’s"?

Of all the tradeoffs, those surrounding consent and access to data are the most troubling. Again, it’s important to realise these tradeoffs are not malignant in nature, but often the most logical solution to a complex problem. We (being society in general) want solutions to problems for a reasonable amount of money, and unfortunately managing privacy is a costly business.

Consider: if there are four million people in a jurisdiction who are treatable by a health service, that’s four million people who will have differing views on consent. Some will openly give consent to all healthcare professionals as long as they know that the information will only be used for its intended purpose (in this case ensuring treatment is effective, timely, and safe). Others will consider that only directly associated healthcare professionals should have access, whilst another group will only want those people in immediate contact to have access. Of course, there are others who would grant no one access, nor have any data related to them stored at all.
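
Even those four broad positions are enough to complicate a design. A minimal sketch, assuming a hypothetical per-patient consent level (the tiers and names are mine, purely for illustration):

```python
# A toy model of the four broad positions above; a real system would need
# far finer grain than a single level per patient.
from enum import Enum

class ConsentLevel(Enum):
    ALL_PROVIDERS = 1    # any healthcare professional, for care purposes only
    ASSOCIATED_ONLY = 2  # providers directly associated with the patient's care
    IMMEDIATE_ONLY = 3   # only those in immediate contact with the patient
    NONE = 4             # no access, and no data stored at all

def may_access(level, is_associated, is_immediate):
    """Coarse access check for the four broad positions."""
    if level is ConsentLevel.ALL_PROVIDERS:
        return True
    if level is ConsentLevel.ASSOCIATED_ONLY:
        return is_associated
    if level is ConsentLevel.IMMEDIATE_ONLY:
        return is_immediate
    return False  # ConsentLevel.NONE

print(may_access(ConsentLevel.ASSOCIATED_ONLY, True, False))  # True
```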

These are the broad strokes, but as any system designer or developer will tell you, the devil is in the detail. A common scenario in the design of these systems is that of a young woman (let’s call her Mary). Mary’s father is a physician, and as such has access to the eHealth system. Mary is concerned she might be pregnant and attends a clinic for investigation. Under no circumstances does she want her father to know, yet she cannot hide her entire record from him. How does the solution account for this?

Within eHealth solutions two broad models are considered for consent: implicit and explicit. Briefly, explicit consent means that all storage and release of information is allowed only with the patient’s express consent. Implicit consent works on the premise that the patient is deemed to have given consent by their presence in the health system; it is, for example, how most hospital systems work. The important piece, as far as privacy is concerned, is that under implicit consent the patient has the ability to withdraw consent for a particular record (e.g. that visit to a family planning clinic), from a particular provider (e.g. Mary’s father), or in general.
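
Withdrawal at those three grains is enough to handle Mary’s case. A minimal sketch, assuming hypothetical patient, record and provider identifiers (all the names here are made up for illustration):

```python
# Implicit consent with withdrawal: access is presumed unless a matching
# withdrawal exists. A withdrawal can name a record, a provider, or both;
# None acts as a wildcard, so withdraw("mary") blocks everything.
masked = set()  # entries of (patient, record, provider)

def withdraw(patient, record=None, provider=None):
    """Withdraw consent for a record, from a provider, or in general."""
    masked.add((patient, record, provider))

def may_view(patient, record, provider):
    """Allowed unless any withdrawal (specific or wildcard) matches."""
    return not any(
        (patient, r, v) in masked
        for r in (record, None)
        for v in (provider, None)
    )

# Mary masks the clinic visit from her father, and nothing else.
withdraw("mary", record="clinic_visit", provider="dr_father")
print(may_view("mary", "clinic_visit", "dr_father"))  # False: hidden from him
print(may_view("mary", "other_visit", "dr_father"))   # True: rest still visible
print(may_view("mary", "clinic_visit", "dr_oncall"))  # True: other providers fine
```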

Typically we start from the laudable position of explicit consent, and rapidly move to the concept of implicit consent. This is not because of any inherent deviancy, but simply due to the inescapable fact that explicit consent is intensely complex and costly to implement. In fact, in many cases this one point alone would mean the end of the project, the very project that was initiated to bring benefit to the population in the first place.

So how do we proceed? There is a reasonable fear of exposure of personal information, whether intended or unintended. Governments have a long history of using information for purposes other than the reason it was initially collected, and this tends to happen over time. System A is built for a ministry and works so well that another ministry wants access to this very valuable data source, often for benign purposes. As the information is released further and further away from the initial collection point, the protection of the data tends to diminish and the risk of misuse increases. Worse, the original owner of the data may be entirely unaware of the reuse, and any consent model they originally had control through is breached.

In most western countries we have strong Freedom of Information and Privacy acts. They generally say the same thing: the data belongs to the individual, who has rights over its collection, use and distribution. The danger is that, in the push to move government services onto electronic systems, these Acts are weakened.

All of these systems require trust. In many societies there is an inherent lack of trust in government agencies as they are often "faceless" and not personally accountable. To build trust we need to have some adult conversations about how much data we are willing to provide to these systems, and how that information will be controlled. The benefits of the systems must be rationally discussed as well as the risks associated with data loss. All of these solutions must be developed with the end user (the citizen) in mind rather than the simplification of the system.

More than anything, though, we need to be careful that the rush to develop a beneficial solution does not result in too many tradeoffs being made. At some point we have to accept that if a solution is currently too complex to implement, we should not proceed until the complexity is manageable. The benefits of data collection, and those benefits can be huge, cannot be allowed to blindly outweigh the risks of unintended release or reuse of intensely personal data.

 


Models of Explicit Consent

It may be a sign of advancing years, but I have discovered an increasing interest in the way we interact with computers and systems, one which is starting to outweigh my interest in the way those computers and systems work. Maybe this was how my Grandmother felt about her VCR?

Consent models (so similar to VCRs in so many ways) are of particular interest, especially given my work in healthcare.

An idea I have been playing around with is explicit consent: the ability for an end user, a patient, to direct how their information may be collected and used in an electronic health solution. It started, like many of these ideas, with a fairly simple premise: the patient should have absolute control over their data. Which is possibly unfair, in that patients don’t have absolute control over their data in a non-electronic environment either (you know, paper, that stuff we’ve used for hundreds of years).

Absolute control is actually quite hard to deliver, especially if the patient isn’t there all the time. We could, for example, develop a system where patients have some form of physical key which must be present to "unlock" their records, but then they would always have to be there. What about a form of DRM? That’s obviously been so successful for the music corporations. Nobody has ever been able to access music without permission since that was implemented.

While I was pondering all this, I popped online to my bank to make sure I had enough money in my account to buy some clothes. While there, I paid some bills, checked out a couple of security warnings and decided that unless I stopped buying gear for my camera a holiday in Hawaii was never going to happen.

Wait a moment – actually the online banking model is pretty good. Obviously the banks are able to deal with a large number of users in a secure manner. It was my choice to use the service, I was responsible for signing up, and managing my security. I can choose which accounts can be managed online, and I can cancel the service any time I choose.

So how about a model for consent management in electronic health where:

  1. Every patient has to explicitly opt in by signing up for the system
  2. If a patient has not opted into the system, no data is uploaded to it
  3. A patient can opt out, and in doing so, all requests for information are subsequently denied by the solution

Initially it would appear that such a model is difficult to implement and control, and certainly implicit consent is easier. However, much could be learnt from, and reproduced out of, online banking in this case.
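
To make the three rules concrete, here is a minimal sketch assuming a hypothetical in-memory enrolment registry (names are illustrative only):

```python
# Explicit opt-in consent in miniature: no enrolment, no data in, no data out.
enrolled = set()
store = {}

def opt_in(patient):
    enrolled.add(patient)            # rule 1: explicit sign-up

def opt_out(patient):
    enrolled.discard(patient)        # rule 3: cancellation takes effect at once

def upload(patient, record):
    if patient not in enrolled:
        return False                 # rule 2: no opt-in, no upload
    store.setdefault(patient, []).append(record)
    return True

def request(patient):
    if patient not in enrolled:
        return None                  # rule 3: all requests subsequently denied
    return store.get(patient, [])

opt_in("alice")
upload("alice", "lab result")
opt_out("alice")
print(request("alice"))              # None: the solution denies the request
```

Just as with online banking, the sign-up, the cancellation, and the scope of what is exposed all sit with the individual.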

 

FAST Search and Enterprise Data Management

Left over from my time at Microsoft are a few friends down in Seattle I try to keep in touch with. Last year, while I was down for the SOA conference, I met up with a friend who works for a newspaper there. Specifically, he works with FAST Search (http://www.fastsearch.com/) to catalogue and provide retrieval of all past articles from the paper. We had a long chat about its capabilities and its speed (a good thing for a product called FAST). After a while we got onto the idea that an organisation with a large amount of data in various formats and storage mechanisms, say a health organisation with clinical information systems, radiology, scanned paper records and so on, could utilise a platform like FAST to develop a complete view of their business without going through the usual construction of enterprise data models and repositories.

So in healthcare, could we use something like FAST to create an electronic health record system? I wonder. Actually, I think it would be entirely possible, and it might work well in an environment where the decision between indexing and a central repository is leaning towards an indexing solution. Instead of developing a publish mechanism where data providers are required to submit events and data pointers to a central index repository, and then developing search algorithms to retrieve the data pointers at run time, why not use an enterprise search engine to crawl the data sources, and use the native functionality of a search engine to index, catalogue and retrieve information for the users? Obviously there are some aspects that would have to be worked on, with privacy and security high on the list, but the ability of a search platform to extract metadata (search keys) from both structured and non-structured data is almost perfectly suited to this type of environment.
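
The crawl-and-index idea, reduced to a toy. This is a minimal sketch with made-up in-memory "sources"; a real deployment would use a platform like FAST with proper connectors into each clinical system:

```python
# Build one inverted index over heterogeneous sources, keeping a pointer
# (source, id) back to the system of record rather than copying the data.
from collections import defaultdict

SOURCES = {  # three systems, three different shapes, all invented
    "radiology": [{"id": "RAD-1", "patient": "P123", "text": "chest x-ray, no acute findings"}],
    "lab":       [{"id": "LAB-9", "patient": "P123", "text": "troponin mildly elevated"}],
    "notes":     [{"id": "DOC-4", "patient": "P456", "text": "follow-up for chest pain"}],
}

def crawl_and_index(sources):
    index = defaultdict(set)
    for source, records in sources.items():
        for rec in records:
            pointer = (source, rec["id"])
            index[rec["patient"].lower()].add(pointer)   # crude metadata key
            for token in rec["text"].lower().split():
                index[token.strip(",.")].add(pointer)    # crude full-text key
    return index

index = crawl_and_index(SOURCES)
print(index["chest"])  # pointers into radiology and notes, not copies of the data
```

The retrieval side then resolves the pointers against the source systems, which is exactly where the privacy and security work mentioned above would have to live.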

Development of electronic health record (EHR) solutions is complex and time consuming, particularly in the design phase where, justifiably, all stakeholders want to ensure the system provides everything they need in a secure way. The large groups of stakeholders and the long term requirements of an EHR often mean that projects take years to define, providing no benefit in the short term. Could an enterprise search engine based solution provide an interim solution for those jurisdictions determined to pursue a centralised data model?

 

Smart Tags, ICD Coding – General Pondering

As usually happens when I am writing documents, my mind started wandering off on a tangent! One of the end points was the research pane in Office, which led onto smart tags!

Ok, here’s the general ponderance: a large portion of my clients (by which I mean all) have at least Office 2000 installed, and the majority have Office 2003. I have noticed more and more just how much work is performed in the healthcare setting in Office, and how much more could be if IT or users were aware of the capabilities; case in point, there’s an Organisation Chart Wizard in Visio! Do people really need to dedicate a resource to actually creating Visio diagrams when they could easily create an Excel spreadsheet to hold the data and auto-generate the required diagrams?

That got me thinking about all the documents that are written in a healthcare setting using Word, and all the emails sent using Outlook. Medical coding is a pretty complex area where conditions are coded using a variety of systems (e.g. SNOMED and ICD-10). The systems are very verbose and detailed, and have a hierarchical nature; for example, there is a relationship between heart failure and chest pains. So what if a smart tag system was developed to identify terminology in documents as they are written, and insert the codes and formal names into the document? That could be pretty useful!

I figure the ideal solution would separate the coding vocabulary from the interpretation engine, so multiple coding mechanisms could be selected by the end user. A well rounded engine would of course have uses outside of coding, and could be used (I hope) outside of smart tags as well (e.g. in the research pane).
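
As a sketch of that separation, here is a toy tagger where each vocabulary is just data, so swapping coding systems never touches the engine. The terms below are only a hand-picked handful for illustration; a production vocabulary would be licensed and loaded from files:

```python
# Vocabulary = data (term -> code); engine = one function that works with any vocabulary.
import re

ICD10 = {"heart failure": "I50", "chest pain": "R07.4"}
SNOMED = {"heart failure": "84114007", "chest pain": "29857009"}

def tag_document(text, vocabulary):
    """Annotate recognised terms with their formal code, smart-tag style."""
    for term, code in vocabulary.items():
        pattern = re.compile(re.escape(term), re.IGNORECASE)
        text = pattern.sub(lambda m: f"{m.group(0)} [{code}]", text)
    return text

note = "Patient reports chest pain; history of heart failure."
print(tag_document(note, ICD10))
# Patient reports chest pain [R07.4]; history of heart failure [I50].
print(tag_document(note, SNOMED))
# Patient reports chest pain [29857009]; history of heart failure [84114007].
```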

A quick search of the web doesn’t show me anything particularly helpful, so I was wondering if anybody out there had experience in this area and/or a product or solution?

"Cobble-Tecture"

In the last few months I have been spending an increasing amount of time with clients talking about their needs, both on specific projects and on general supporting requirements. This has crossed both clinical requirements and supporting non-clinical requirements within healthcare. One trend has emerged more obviously in that time: most of the functionality they are looking for already exists in one shape or form in their organisation, yet they are generally unaware of it. In many cases the technology required to deliver the functionality is already on their desktops, or the organisation already holds licences for the software that would provide it.

One of the great promises of SOA (as far as I am concerned anyway) is the ability to surface discrete units of functionality (or services) from existing information systems, and to provide those units in an open way to allow compositional applications to be developed. In many ways this is one of the intents of service clouds and mash-ups. Yet mash-ups, particularly due to their name, do not appear to have the seriousness or business stability that would be required to deliver mission critical enterprise solutions to healthcare.

However, I firmly believe that in order for the IT industry, and consultants in particular, to deliver the value our customers expect, we need to do a better job of identifying existing capabilities within the client organisation and utilising those wherever possible, rather than developing and purchasing new systems.

So I have been playing around with a term for architecture that makes use of functionality and capabilities already existing in an organisation, yet with a view to a progressive overarching architecture. So far the term I have come up with is Cobble-tecture: the art of building what you need with what you have. Actually, I don’t think it inspires any more confidence than mash-ups now that I write it down, but I think the strategy is an important one, particularly in an industry where cost sensitivity is a huge issue. As I repeatedly put it to clients, every dollar spent on an IT system is one less spent on providing healthcare, which is, after all, what a health organisation is for.

Oh, and an addendum: one area I am increasingly interested in is the idea of a Search Driven Architecture. Instead of collating all the data into a central structure, or building an index of data spread out over a large number of stores, a search engine is used to index and catalogue both structured and non-structured data over a wide range of systems, and then provides the entry point for data search, aggregation and retrieval from the disparate data sources. More on that later…

Time for a little heresy?

Amongst many of the news articles I read today, one jumped out at me (“The ‘silver tsunami’ that threatens to overwhelm US social security system” Oct 18th 2007, Guardian – http://www.guardian.co.uk/usa/story/0,,2193443,00.html). The moment we have been talking about repeatedly has arrived. The baby boomers are retiring.

I have lost count of how many positioning presentations I have either given or sat through that called out this moment as the coming challenge and a serious source of concern for healthcare. Usually (and my presentations are included) the presentations are centered around the need to invest in technology to help reduce the burden on the system by consolidating information and reducing the “waste” within the system, such as the re-ordering of laboratory tests. Large scale technology projects will remove the problem and save our budgets.

Over the years I have been involved in technology in healthcare, and I have friends and colleagues involved in large scale projects in both Canada and Europe, in particular in the UK. There are some very complex and noble designs on the table which, when completed, stand to greatly improve the delivery of healthcare and reduce its associated costs. But here’s the rub: these are expensive and complex projects, taking many years to design and implement. Realistically a large scale jurisdictional Electronic Health Record project can take upwards of a decade and multiple millions (choose your currency) to deliver.

Now for the part that might get me burnt at the stake; is there a more efficient way to achieve the desired end-state? Might a better way to achieve an Electronic Health Record (EHR) be to use a series of small projects to provide clear incremental steps rather than a large centrally controlled project? What about enlisting the owners of the data within the system; the patients?

I have to admit to being suspicious of large organisations setting up “free” or even subscription based Personal Health Record (PHR) systems (see Google and Microsoft) but there may be something in it. True, a PHR doesn’t provide all the benefits of a full blown EHR (especially around Performance Management) but that ultimately is not a concern of the patient. 

So while we design, build and deploy large scale Electronic Health Record solutions, maybe Personal Health Record systems have, at a minimum, a stop gap role to play, even in a non-competitive market?

I hear the crackle of fire.. gotta run!



Part Two – Compiled vs. Config

I remember back when I first wrote an application professionally; it seemed highly complex and well thought out, and I thought I had all the customer requirements covered off. Then came deployment and, bang, my app hit the real world and a huge problem: all the details were hard coded and based on my development machine, not the production environment. Horrible. I quickly learnt (over many, many cups of coffee) that I should use configuration files, so that details such as database connections, usernames/passwords and so on could be altered.
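
The fix, in miniature. A minimal sketch using Python’s standard configparser; the file name, section and keys are invented for illustration (the config is inlined here so the snippet runs on its own):

```python
import configparser

# In real life this would be read from a settings.ini next to the app:
#   config.read("settings.ini")
SETTINGS = """
[database]
host = db.example.org
name = production
"""
config = configparser.ConfigParser()
config.read_string(SETTINGS)

db_host = config["database"]["host"]   # changes per environment, not per build
db_name = config["database"]["name"]
print(f"connecting to {db_name} on {db_host}")
```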

Later I learnt an important lesson in application design: things are NEVER static or fixed. A customer had a list of codes that defined various subdivisions of properties they owned. I suggested these should be in a database so that if they changed the application would not break. I was repeatedly assured they never change. Against my better judgement, I backed down and hard coded the values. The day of go-live, the codes changed. Fancy that. Lesson learnt.

Now you may be reading this and thinking, well, obviously these elements should be stored in a configurable way. You might even be thinking, idiot. But before you give up on me completely, I would like to point out that to this day almost every single solution I have come across hard codes volatile logic and business descriptions. In fact, it could be argued that the tendency to use object models and relational databases for every application leads to customer dissatisfaction, since customers are constantly stuck in a cycle of change management. A common complaint I hear is that the software they procure/write/inherit is incapable of keeping up with the speed of changes in their business.

So is there a better way?

I think so. A combination of a few emerging and existing technologies offers the ability to build applications that hard code only the supporting functions (think logging, security, etc.) whilst providing a flexible framework to host the business functions. Let’s start off with business logic.

Currently business logic tends to be expressed through detailed object models that expose methods and properties to other objects. These models can be extensive, but crucially they bury the core business processes in the model. When the business changes, as it usually does, these models must be updated. Often this is painful, and usually it is expensive. What if the business process was represented in a more decoupled and configurable way? What if a core workflow engine could consume business process descriptions and orchestrate the invocation of component functions in a way that did not require direct dependencies? What if business rules could be stored and expressed in a non-code manner that would enable businesses to directly alter their own rules without involving coding? Would that be a benefit?
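
To make the rules-as-data idea concrete, here is a minimal sketch assuming a hypothetical discount policy (the fields, rules and actions are all invented). The point is that the policy lives in data a business user could edit, while the engine never changes:

```python
# Rules are plain data; changing the business policy means editing RULES,
# not recompiling or redeploying the engine.
RULES = [
    {"field": "order_total", "op": ">=", "value": 1000, "action": "apply_discount"},
    {"field": "region",      "op": "==", "value": "EU", "action": "add_vat"},
]

OPS = {">=": lambda a, b: a >= b, "==": lambda a, b: a == b}

ACTIONS = {
    "apply_discount": lambda order: order.update(discount=0.1),
    "add_vat":        lambda order: order.update(vat=0.2),
}

def run_rules(order, rules):
    """Evaluate each data-driven rule and fire its action if it matches."""
    for rule in rules:
        if OPS[rule["op"]](order[rule["field"]], rule["value"]):
            ACTIONS[rule["action"]](order)
    return order

print(run_rules({"order_total": 1200, "region": "EU"}, RULES))
# {'order_total': 1200, 'region': 'EU', 'discount': 0.1, 'vat': 0.2}
```

In a production setting the RULES table would live in a database or a rules engine (BizTalk’s, for example) where the business could maintain it directly.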

Some of the components of this idea already exist. Business Rules Engines are available (BizTalk for example has one). Workflow solutions and engines exist. De-coupled invocation of business functions can already be achieved. So what’s required to bring the concept to reality? Actually, I don’t think a lot more is required. Just a change in the way we develop applications, and maybe a change in the way we see our roles supporting business. Although on the surface it could be argued that change equates to dollars for us, I would suggest that more flexible software would lead to more dollars through increased desire to engage.

So what about business data? Typically this is stored in a database within complex relational tables. Well and good, but changes to the business data lead to changes in the data model. And that gets complex. How do you account for legacy data in a new data model? What if the data needs to be retrievable in its original form? What happens with keys, and required fields that are missing?

For a while now I have been talking to a number of colleagues about the idea of using XML as a native data type in SQL. This is especially attractive for complex message schemas such as HL7 v3. The core advantage here is versioning: multiple versions of the model can co-exist in a single database, and legacy data can be accessed in its original form, often a requirement for medical data.
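
A minimal sketch of the versioning advantage, using sqlite3 and a plain text column for brevity (SQL Server’s native xml type is the real target; the table, schema versions and message shapes here are all invented):

```python
# Two schema versions of the "same" message co-exist in one table; reads
# are version-aware, and legacy rows stay untouched in their original form.
import sqlite3
import xml.etree.ElementTree as ET

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE messages (id INTEGER PRIMARY KEY, schema_version TEXT, body TEXT)")

db.execute("INSERT INTO messages (schema_version, body) VALUES (?, ?)",
           ("v2", "<patient><name>Mary</name></patient>"))
db.execute("INSERT INTO messages (schema_version, body) VALUES (?, ?)",
           ("v3", "<patient><name given='Mary' family='Smith'/></patient>"))

def patient_name(version, body):
    """Parse per schema version; no migration of old rows required."""
    root = ET.fromstring(body)
    if version == "v2":
        return root.findtext("name")
    return root.find("name").get("given")

for version, body in db.execute("SELECT schema_version, body FROM messages"):
    print(version, patient_name(version, body))
# v2 Mary
# v3 Mary
```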

So, just a few thoughts. But I am convinced that there are more examples and possibilities. Can a core app be built and then used to support multiple businesses? Is there really a need for hundreds of applications to be written every year to do the same job? Maybe not.

