Part One – A brief history of Healthcare Integration

I think, first off, it’s worth discussing healthcare integration in general and taking a quick look at its history, if only to see how the complexity of integration has increased over the years and how that shapes the design requirements of modern solutions.

Initially, systems were not required to integrate and often had no connection to the "outside world". These systems were monolithic and wholly responsible for executing their given functions. It’s worth noting that many of these systems have not gone away: many healthcare organisations still pursue a single-vendor strategy, with systems that neither expose message-based interfaces nor provide documented data models to enable integration.

Over time, however, specialised solutions were implemented, and these initially required rekeying of data. It has long been known that manual data entry is error-prone: the Food and Drug Administration (FDA) estimates that one entry in fifty contains an error. The impact of these errors, the additional administrative work associated with rekeying, and data sources drifting out of sync together constitute a serious problem. Quite aside from being a source of medical errors, the cost of the additional effort is high enough to warrant a solution. Health Level Seven (HL7) provided an early and increasingly ubiquitous series of protocols and standards for messaging between medical systems. The widely deployed 2.x versions of the standard focus on pipe-delimited messages that communicate "triggers" between systems and are organised into domains such as Admission and Order Entry. Triggers are analogous to events; for example, an ADT (Admit, Discharge and Transfer) A01 message represents an in-patient admit event.
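To make the pipe-delimited format concrete, here is a minimal Python sketch of what an ADT^A01 might look like and how its segments and fields break apart. Every value in the message below is invented for illustration, and a real parser would also need to honour the encoding characters declared in MSH-2, repeating fields and sub-components:

    # A toy HL7 v2.x ADT^A01 -- every value is invented for illustration.
    RAW_ADT_A01 = "\r".join([
        "MSH|^~\\&|SENDAPP|SENDFAC|RECVAPP|RECVFAC|200901211530||ADT^A01|MSG00001|P|2.4",
        "EVN|A01|200901211530",
        "PID|1||12345^^^HOSP^MR||DOE^JOHN||19700101|M",
        "PV1|1|I|WARD1^101^A",
    ])

    def parse_hl7(raw):
        """Split an HL7 v2.x message into {segment id: [field lists]}."""
        segments = {}
        for line in raw.split("\r"):
            fields = line.split("|")
            segments.setdefault(fields[0], []).append(fields)
        return segments

    msg = parse_hl7(RAW_ADT_A01)
    print(msg["MSH"][0][8])  # "ADT^A01", the trigger event
    print(msg["PID"][0][5])  # "DOE^JOHN", name components separated by "^"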

An inherent problem with the HL7 standards is that there is no solid conformance mechanism, enabling vendors to claim compliance even when their messages vary wildly from the published standard. This can obviously lead to huge difficulties for both clients and system integrators. How close to the standard is a given vendor’s implementation? Where does it deviate?

Initial use of the protocols enabled transmission of data between various clinical information systems (CISs). Typically this was done with port-to-port interfaces, where the originating system transmitted the message directly to a known port on a destination system. If multiple systems required a copy of the message, multiple interfaces were set up on the originating system. This was a costly mechanism, as each interface usually incurred a fee from the system vendor. It did, however, have the advantage of enabling a specific "flavour" of the message to be generated for each recipient.
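The usual transport for these port-to-port interfaces is MLLP, a thin framing around each message on a raw TCP connection. A rough sketch (the host, port and single-read ACK handling below are simplifying assumptions):

    import socket

    # MLLP framing: 0x0B before the message, 0x1C 0x0D after it.
    VT, FS, CR = b"\x0b", b"\x1c", b"\x0d"

    def send_hl7(host, port, message, timeout=10.0):
        """Send one HL7 message over MLLP and return the receiver's raw ACK."""
        framed = VT + message.encode("ascii") + FS + CR
        with socket.create_connection((host, port), timeout=timeout) as sock:
            sock.sendall(framed)
            ack = sock.recv(65536)  # naive: assumes the whole ACK arrives in one read
        return ack.strip(VT + FS + CR)

    # e.g. ack = send_hl7("lab.example", 2575, RAW_ADT_A01)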

As middleware engines began to develop, it became possible to have a single interface from a source system and use the engine to make multiple copies of the message and "push" them to downstream systems. This reduced the cost of maintaining multiple interfaces, but the initial lack of mapping capabilities meant that vendor solutions needed to be on the same version of the standard. Another issue was the single-threaded nature of early engines: a single message was transmitted, and the ACKs from the downstream systems had to be managed before the solution could accept another message from the originating system.
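That "dumb repeater" stage might look like the sketch below, reusing the send_hl7 helper from the previous sketch; the endpoint list and the crude ACK check are illustrative:

    # One inbound message is copied verbatim to every downstream system, and
    # each ACK is awaited in turn -- the single-threaded bottleneck.
    DOWNSTREAM = [("pas.example", 2575), ("lab.example", 2576), ("rad.example", 2577)]

    def repeat_single_threaded(message):
        for host, port in DOWNSTREAM:
            ack = send_hl7(host, port, message)
            if b"|AA|" not in ack:  # crude check for an Accept ACK in the MSA segment
                raise RuntimeError("%s:%s rejected the message: %r" % (host, port, ack))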

Mapping services were a large step forward for middleware engines. Moving the solution from a "dumb" repeater to an intelligent platform enabled systems on multiple versions of the standard to be integrated, with the issues of adherence to specific standards migrated to a single middleware platform. The arrival of mapping services also enabled the development of the Canonical Data Model. The Canonical Data Model essentially provides abstraction between the source system and the destination system by translating the original message to a super-set message, and then translating the super-set message to the outbound message. If a system is replaced, the model should insulate the remaining systems from the change. In practice this pattern requires the developers to "design the world": if elements are not included in the canonical data model, changes to the system at a later date can be more complicated than they would have been in a direct mapping scenario.
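As a sketch of the pattern, building again on the toy parse_hl7 above (the canonical field names are my own invention, not part of any standard):

    # Inbound: lift a parsed HL7 message into a shared, super-set record.
    def to_canonical(msg):
        pid = msg["PID"][0]
        return {
            "patient_id": pid[3].split("^")[0],
            "family_name": pid[5].split("^")[0],
            "given_name": pid[5].split("^")[1],
            "event": msg["MSH"][0][8],
        }

    # Outbound: render the canonical record in whatever shape a spoke expects.
    def to_simple_pid(canonical):
        return "PID|1||%(patient_id)s||%(family_name)s^%(given_name)s" % canonical

Neither function knows anything about the system on the other side; replace the lab system and only its own two translations need to change.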

The Canonical Data Model is key to a successful hub-and-spoke design in integration. Each spoke (an individual participant system) should attach to the hub (the middleware engine) through an adapter which deals with the specifics of both the transport mechanism and the message format. Within the core middleware engine, all messages are equal.
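A minimal sketch of that adapter idea, assuming the helpers from the earlier sketches (the class names and the read_next_mllp_frame listener are hypothetical):

    from abc import ABC, abstractmethod

    class SpokeAdapter(ABC):
        """Hides one system's transport and format quirks from the hub."""

        @abstractmethod
        def receive(self):
            """Return the next message from this system in canonical form."""

        @abstractmethod
        def send(self, canonical):
            """Translate a canonical record into this system's format and deliver it."""

    class LabAdapter(SpokeAdapter):
        def receive(self):
            raw = read_next_mllp_frame()  # hypothetical MLLP listener, not shown
            return to_canonical(parse_hl7(raw))

        def send(self, canonical):
            send_hl7("lab.example", 2575, to_simple_pid(canonical))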

The canonical data model is not without issues. As mentioned, it requires a large amount of up-front work and a very clear understanding of all systems involved. Although on paper this should be relatively painless, in practice the effort required to ensure the accuracy of the model is huge. Within medical integration there is an advantage: since most systems implement the HL7 model, the core HL7 model can form the basis of the data model.

Another key step in the development of middleware platforms, and in the implementation of integrated medical environments, was the arrival of orchestrations and the ability to use the platform to execute business processes. Since a large amount of business data flows through a common point in the enterprise, it makes sense to leverage the platform for business processes and real-time reporting. This, combined with multi-threaded platforms, enables large-scale composite applications to be developed, increasing the value of the middleware platform as a Business Intelligence enabler.
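As a final sketch, an orchestration is essentially a business process hung off the hub; the steps below are placeholders for whatever the process really does:

    # Because every admit flows through the hub, a process can react centrally.
    def assign_bed(rec): print("bed requested for", rec["patient_id"])
    def notify_pharmacy(rec): print("pharmacy notified for", rec["patient_id"])
    def record_census_metric(rec): print("census updated")  # real-time reporting feed

    def on_message(canonical):
        if canonical["event"] == "ADT^A01":  # in-patient admit
            assign_bed(canonical)
            notify_pharmacy(canonical)
            record_census_metric(canonical)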

Integration, then, has moved from point-to-point interfaces between two systems to enterprise business process platforms embedded at the centre of an enterprise. Modern platforms are able to manage and orchestrate high volumes of messages, and enable organisations to create large enterprise-wide composite applications, trade with external partners and mine real-time business/medical data. More and more organisations are aware of the possibilities inherent in a strong middleware strategy, and vendor middleware engines continue to develop into mission-critical platforms.

Next post: Compiled versus Configuration…


4 Responses to “Part One – A brief history of Healthcare Integration”


  1. Charlie, January 21, 2009 at 9:28 pm

    Has anyone found any software providers that are up to date on HL7 Standards?

    • simonchester, January 22, 2009 at 2:15 am

      Hi Charlie
      Lots of software providers are up to speed on versions of HL7. Can you be more specific? Which versions and messages are you interested in? What’s the issue you are looking to solve?

  2. Charlie, January 26, 2009 at 6:47 pm

    I am just trying to make sure that the HL7 standards I am following are correct. I think the software that I am using may be a little outdated; I have been reading articles online about newer versions that are quite up to date, but I was wondering how long I could go without updating the software.

    I’m sure that would be something that my software provider could tell me. I guess what I really want to know is who the best software provider is. I have CastIron, but I am unhappy with them. I have been looking at Pervasive Software and other providers to see if they are any better.

    What would be your opinion?

  3. simonchester, January 27, 2009 at 6:48 pm

    Hi Charlie
    The issue as to whether a software provider is up to date on their HL7 standards really comes down to what you are trying to do. For HL7 v2.x, v2.6 is the latest and greatest. However, HL7 is trying to move people onto v3.0, which is a major shift away from the pipe-delimited 2.x standard. Really, though, it all comes down to the systems you are integrating. For example, your lab information system might be using a v2.3.1 message spec while a clinical information system such as Cerner or Meditech may be using v2.4. In that case you need to transform the messages from one to the other, or pay the vendor to redevelop their message specs (very expensive!). Most information system vendors tend to have a stable spec, but also tend to tailor the actual message to client requirements.
    A messaging platform such as BizTalk from Microsoft helps you manage the integration points by centralising the transformation logic for your organisation in a single place, as well as providing an accelerator to kick-start the process.
    Perhaps if you provide more specifics I could help give you some more targeted advice?
    Cheers
    Simon

