Archive for August, 2007

Part One – A brief history of Healthcare Integration

I think, first off, it’s worth discussing healthcare integration in general and looking at a quick history, if only to see how the complexity of the integration has increased over the last few years and how that impacts the design requirements of modern solutions.

Initially, systems were not required to integrate and often had no connection to the "outside world". These systems were monolithic and wholly responsible for executing their given functions. It's worth noting that many of these systems have not gone away. Many healthcare organisations still pursue a single-vendor strategy, relying on systems that neither expose message-based interfaces nor provide documented data models to enable integration.

Over time, however, specialised solutions were implemented, and these initially required rekeying of data. It has long been known that manual data entry is a problem: the Food and Drug Administration (FDA) estimates that one entry in fifty contains an error. The impact of this, the additional administration work associated with rekeying, and out-of-sync data sources together constitute a serious problem. Quite aside from being a source of medical errors, the cost of the additional effort is high enough to warrant a solution. Health Level Seven (HL7) provided an early and increasingly ubiquitous series of protocols and standards for messaging between medical systems. Past versions of the standards focused on pipe-delimited messages that communicate "triggers" between the systems, organised into domains such as Admission and Order Entry. The triggers are analogous to events; an example is an ADT (Admit, Discharge and Transfer) A01 message representing an in-patient admit event.
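To make the pipe-delimited format concrete, here is a minimal sketch in Python of splitting an ADT^A01 message into its segments. The sample message is illustrative (invented application and patient names), and this is nowhere near a full HL7 parser — it ignores escape sequences, repetitions and component rules entirely.

```python
# Minimal sketch: splitting an HL7 v2.x pipe-delimited ADT^A01 message.
# Sample message is illustrative; segments end with carriage returns.

SAMPLE_A01 = "\r".join([
    "MSH|^~\\&|ADT_APP|HOSPITAL|LAB_APP|LAB|200708150830||ADT^A01|MSG00001|P|2.3",
    "PID|1||12345^^^HOSP^MR||DOE^JOHN||19600101|M",
    "PV1|1|I|WARD1^101^A",
])

def parse_segments(message: str) -> dict:
    """Index segments by their three-letter identifier (the first field)."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

segs = parse_segments(SAMPLE_A01)
# MSH-9 carries the message type and trigger event, e.g. ADT^A01 (in-patient admit).
msg_type, trigger = segs["MSH"][8].split("^")
print(msg_type, trigger)   # ADT A01
print(segs["PID"][5])      # DOE^JOHN  (PID-5, patient name)
```

The trigger (A01, A08, ...) is what downstream systems key their behaviour off, which is why the later discussion of routing and ordering revolves around it.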

An inherent problem with the HL7 standards is that there is no solid conformance mechanism, enabling vendors to claim compliance even when their messages vary wildly from the published standard. This can obviously lead to huge difficulties for both clients and system integrators. How close to the standard is a given vendor's implementation? Where does it deviate?

Initial use of the protocols enabled transmission of data between various clinical information systems (CISs). Typically this was done with port-to-port interfaces, where the originating system transmitted the message directly to a known port on a destination system. If multiple systems required a copy of the message, multiple interfaces were set up on the originating system. This was a costly mechanism, as each of the interfaces usually incurred a fee from the system vendor. It did, however, have the advantage of enabling specific "flavours" of the message to be generated for each recipient.

As middleware engines began to develop, it became possible to have a single interface from a source system and use the middleware engine to make multiple copies of the message and "push" them to downstream systems. This reduced the cost of maintaining multiple interfaces, but an initial lack of mapping capabilities meant that vendor solutions needed to be on the same version of the standards. Another issue was the single-threaded nature of the engines: a single message was transmitted, and ACKs from the downstream systems had to be managed before the solution could accept another message from the originating system.

Mapping services were a large step forward in middleware engines. Moving the solution from a "dumb" repeater to an intelligent platform enabled systems on multiple versions of the standards to be integrated, with the issues of specific adherence to standards migrated to a single middleware platform. The arrival of mapping services also enabled the development of the Canonical Data Model. The Canonical Data Model essentially provides abstraction between the source system and the destination system by translating the original message to a super-set message, and then translating the super-set message to the outbound message. If a system is replaced, the model should insulate the remaining systems from the changes. In practice this pattern requires the developers to "design the world": if elements are not included in the canonical data model, changes to the system at a later date can be more complicated than they would have been in direct mapping scenarios.
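The translation pair at the heart of the pattern can be sketched in a few lines. This is a toy illustration, not a real mapping engine: the field names, the two systems and the canonical shape are all invented for the example.

```python
# Sketch of the canonical data model pattern: each system talks to the hub
# through its own translator pair, never directly to another system.
# System names and field names are illustrative.

def from_system_a(msg: dict) -> dict:
    """Translate System A's native admit record into the canonical form."""
    return {
        "patient_id": msg["pid"],
        "surname": msg["name"].split(",")[0],
        "event": "ADMIT",
    }

def to_system_b(canonical: dict) -> dict:
    """Translate the canonical form into System B's expected shape."""
    return {
        "PatientIdentifier": canonical["patient_id"],
        "FamilyName": canonical["surname"],
        "EventCode": {"ADMIT": "A01"}[canonical["event"]],
    }

# Replacing System A only means rewriting from_system_a; to_system_b,
# and every other outbound translator, is untouched.
outbound = to_system_b(from_system_a({"pid": "12345", "name": "DOE,JOHN"}))
print(outbound)
```

The insulation property falls out of the structure: n systems need n translator pairs rather than n×(n−1) direct maps, but only if the canonical form already holds every element anyone will ever need — which is the "design the world" cost noted above.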

The Canonical Data Model is key to a successful Hub and Spoke design in integration. Each of the spokes (individual participant systems) should attach to the hub (the middleware engine) through an adaptation layer that deals with the specifics of both the transport mechanism and the message format. Within the core middleware engine, all messages are equal.

The canonical data model is not without issues. As mentioned, it requires a large amount of upfront work and a very clear understanding of all systems involved. Although on paper this should be relatively painless, in practice the effort required to ensure the accuracy of the model is huge. Within medical integration there is an advantage: since most systems implement the HL7 model, the core HL7 model can form the basis of the data model.

Another key step in the development of middleware platforms and the implementation of integrated medical environments was the arrival of orchestrations and the ability to use the platform to execute business processes. Since a large amount of business data flows through a common point in the enterprise, it makes sense to leverage the platform for business process execution and real-time reporting. This, combined with multi-threaded platforms, enables large-scale composite applications to be developed, increasing the value of the middleware platform as a Business Intelligence enabler.

Integration, then, has moved from point-to-point interfaces between two systems to enterprise business process platforms embedded at the centre of an enterprise. Modern platforms are able to manage and orchestrate high volumes of messages and enable organisations to create large enterprise-wide composite applications, trade with external partners and mine real-time business/medical data. More and more organisations are aware of the possibilities inherent in a strong middleware strategy, and vendor middleware engines continue to develop into mission-critical platforms.

Next post: Compiled versus Configuration…


Powered by Qumana


To build a better broker…

Back when I started working for a previous company, I remember sitting in meeting after meeting with clients explaining the benefits of integration and of having an enterprise-class integration platform. It seemed like every customer needed to be educated on its possibilities and on the level of effort required to build out an enterprise integration strategy. These days I sit in fewer of those meetings. People are more and more aware of the need for an integration strategy and of the possibilities that a good broker affords them; no longer constrained to simply routing messages, we can now execute complex business logic and mine for critical business data within these platforms.

So it should be pretty straightforward now to design, implement and deploy a stable, flexible and performant platform. In fact, I should be out of a job. But that's not the case, much to the relief of my mortgage company. Integration is still complex and bespoke. There are no really good one-size-fits-all approaches to integration, no single tool or solution that solves all challenges. Partly this is to be expected: most organisations have different requirements and different philosophies that affect which trade-offs they would make in any given scenario. Moreover, integration is primarily approached on a project-by-project basis. Rarely does an organisation have the foresight or budget to sit down and design a fully blown integration strategy, with a consistent framework for all integrations within the organisation. This leads to some challenges, not least of which are the complications driven by unchecked growth of the overall solution. A great example here is solutions built on top of the HL7 Accelerator for BizTalk. The accelerator makes life much easier when developing healthcare integration solutions, but it has a tendency to funnel development down a particular design pattern that works well for individual integrations yet poses serious challenges for enterprise-scale integration of full clinical environments.

For the last few years it's been my privilege to work with some very intelligent and insightful colleagues and partners, as well as customers. In that time we have all batted around the idea of a productised Clinical Broker. Now, I should add the caveat here that I am fundamentally a consultant, not a product person. That may well colour my view of what's required. In conjunction with a number of others, I have spent some time going over the challenges inherent in healthcare integration solutions and worked to design a number of different approaches to them. More and more these days I have come to the conclusion that a framework, together with a design pattern, is of most benefit to customers: it allows the flexibility that is required while satisfying the biggest requirement I repeatedly come across, which is that customers rarely want to build a full solution from the ground up.

So over the next few posts I would like to go through the following thoughts;

  1. History and Issues of Healthcare Integration
  2. Compiled versus Configuration
  3. Separation of Business Logic and Routing Logic
  4. Performance
  5. Microsoft Framework Offerings
  6. Guidance Automation Toolkit/Domain Specific Languages
  7. The Canonical Data Model

As a starting point (actually back to front, but there you go) there are a few core requirements I believe are crucial for a successful design pattern/framework:

  1. Separation of Business Logic and Routing Logic: This may well be one of the most crucial issues I have come across and the failure to achieve this appropriately is at the core of most management problems associated with large, multiple integration segment solutions I have been involved with or reviewed.
  2. Encapsulation of Business Logic as "Services": This requirement supports the previous one. Business logic must be represented as discrete services to support manageability of the platform.
  3. Consumable Service Repository: Once we accept the encapsulation of business logic as services, the need to register and consume these services is a given.
  4. Subscription Registration System: A flexible design would allow non-developer subscription to messages within the ecosystem. A registry of available messages and providers should be provided to enable downstream systems to choose which messages/events to subscribe to from which providers.
  5. Message Wrappering: A typical way of passing messages through the system is to fully represent the message within the broker engine. This, however, rapidly leads to a complex system. A different approach is to wrap the message in a standard header and promote the required business values into the header for in-engine processing.
  6. Ordered Delivery: This is an area that introduces a large amount of complexity. Briefly, HL7 messages can have a cumulative effect and often need to be received by downstream systems in the intended order. HL7 provides a protocol for ensuring order, but this is rarely implemented by vendor systems. It should be noted that the requirement here is for ordered messages per patient, NOT overall. This provides some interesting options for solving the issue.
  7. Independent Cessation of Message Delivery: An enterprise integration solution will have multiple downstream systems consuming the same messages. For example, if a Hospital Information System is responsible for admitting patients, it will often provide an ADT stream which multiple downstream systems (say a Laboratory Information System and a Radiology Information System) will require. The solution should enable any one of the downstream systems to temporarily shut down its inbound interfaces without forcing all downstream systems that share common messages to shut down.
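The message wrappering idea in point 5 is easiest to see in code. Below is a toy Python sketch of the envelope shape, assuming a JSON wrapper and invented header field names; in BizTalk terms the header values would be promoted context properties rather than JSON keys.

```python
# Sketch of "message wrappering": instead of fully modelling the message
# inside the engine, wrap the raw payload in a standard envelope and
# promote only the fields the engine needs for routing.
# Envelope shape and field names are illustrative.

import json

def wrap(raw_hl7: str) -> str:
    segments = {s.split("|")[0]: s.split("|") for s in raw_hl7.split("\r")}
    envelope = {
        "header": {  # promoted values: all the engine ever inspects
            "message_type": segments["MSH"][8],
            "patient_id": segments["PID"][3].split("^")[0],
        },
        "body": raw_hl7,  # opaque payload, passed through untouched
    }
    return json.dumps(envelope)

wrapped = json.loads(
    wrap("MSH|^~\\&|A|B|C|D|T||ADT^A01|1|P|2.3\rPID|1||12345^^^H^MR")
)
print(wrapped["header"])  # routing decisions look only at the header
```

The payoff is that the engine's subscription and routing rules operate over a small, stable header schema, while the full vendor-specific message travels through unparsed.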

This is by no means intended as an exhaustive list, but it's a start. In my next post I'll cover off some of the history of the way medical systems are integrated with HL7 and some of the issues inherent in past solutions.


BizTalk Server 2006 R2 – HL7 Accelerator

One of the most interesting presentations for me at MS-HUG was the update on the HL7 Accelerator for BizTalk Server 2006 R2 given by Stuart Landrum. There are a number of areas of interest here, in particular work that has been done to enable processing of large schema (such as HL7 v3.0 schema), ordered delivery and schema generation.

HL7 v3.0 schemas are very big. In the past, you simply could not effectively work with the BizTalk mapper to map these schemas. In fact, in BizTalk 2004 it was pretty much impossible to natively access these messages, primarily due to issues with the .NET Framework v1.1. Version 2.0 of the framework helped a lot, but the schemas still caused huge issues. Workarounds included object-model representations of the schema, something I personally think is a mistake due to the large number of elements within the schema that often carry no meaningful business data. I have written before on the issues with large XML schemas, HL7 v3.0 in particular. With the R2 release, BizTalk provides two strategies/alterations to help support these schemas: the GenerateDefaultFixedNodes flag, and the change to expansion of schema nodes. I am told these two changes effectively reduce the performance requirements of the system to the point where the schemas can be dealt with natively.

The GenerateDefaultFixedNodes flag changes the way the compiler deals with the linked nodes. In the past the compiler would recursively walk through all the links, consuming more and more resources. In many cases, depending on your system, you would simply run out of resources and the system would crash. I personally spent a very frustrating couple of weeks trying to put together a proof of concept in the past where my machine would spin out of control and eventually crash. Now the GenerateDefaultFixedNodes flag will prevent this behaviour by only accounting for linked nodes.

The expansion of the schema is another big step forward. In the previous versions, if you clicked on the expand all nodes in the mapper, you were asking for trouble. The mapper would try and expand all the nodes and would eventually crash out. The change in R2 alters the behaviour to only expand the first instance of the complex type, rather than opening all.

I’ll be trying these out as soon as I get a chance and will post on the results asap.

Ordered delivery is a big deal in healthcare and was difficult to achieve with previous versions of BizTalk. The underlying problem in BizTalk is its multi-threaded nature and the efficiency with which it processes messages. In older-style integration engines this simply was not a problem, since many were single-threaded and did not execute discrete steps of business logic in a decoupled manner. Subsequent attempts at global solutions often did not take into account the use of orchestrations, or the use of multiple message types within a message class (for example, ADT is a message class with multiple types of messages as represented by triggers – A01, A02 and so on. The HL7 Accelerator generates individual schemas for each of these, resulting in multiple pathways through the system).

The ordered delivery solution within R2 uses a ticketing system with a gatekeeper orchestration to re-sequence the messages. The solution uses a custom pipeline component to insert a sequence number into the context properties of the message in the pipeline. Normal processing of the message can then occur within the solution. Prior to transmission to the consumer, the gatekeeper orchestration essentially ensures the message is the next in the sequence.

The cool part about this approach is that it uses correlation sets to achieve the sequencing. This is cool because it will enable developers to use additional data elements for correlation. So, for example, you could ensure that all messages relating to an individual patient are in sequence with each other, but not with other patients. This is huge, since the business requirement is usually that individual patient messages are in sequence: an A08 cannot come before an A01, or, even more critically, an A08 cannot overtake a preceding A08. It is rarely a requirement that messages are in sequence between patients; it's not important for the downstream system to receive Patient A's messages before Patient B's. This has a large impact on performance. Fundamentally, any attempt to sequence messages will reduce the performance of the system, so the general rule should always be: if you don't need it, don't use it.
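The per-patient sequencing idea can be sketched independently of BizTalk. The following Python toy is a simplified stand-in for the sequence-number-plus-gatekeeper scheme, not the actual R2 implementation: each patient's stream is resequenced on its own counter, so one patient's out-of-order message never blocks another patient's traffic.

```python
# Sketch of per-patient resequencing: sequence numbers are compared only
# within one patient's stream. Class and method names are illustrative.

from collections import defaultdict

class PatientResequencer:
    def __init__(self):
        self.next_seq = defaultdict(lambda: 1)  # next expected seq per patient
        self.pending = defaultdict(dict)        # buffered out-of-order messages

    def receive(self, patient_id, seq, payload):
        """Buffer the message; return whatever is now releasable, in order."""
        self.pending[patient_id][seq] = payload
        released = []
        while self.next_seq[patient_id] in self.pending[patient_id]:
            released.append(
                self.pending[patient_id].pop(self.next_seq[patient_id])
            )
            self.next_seq[patient_id] += 1
        return released

rs = PatientResequencer()
print(rs.receive("A", 2, "A08"))  # []  (A's update waits for A's admit)
print(rs.receive("B", 1, "A01"))  # ['A01']  (B is not held up by A)
print(rs.receive("A", 1, "A01"))  # ['A01', 'A08']
```

Note how the buffering cost is paid only by the stream that is actually out of order, which is exactly why sequencing per patient rather than globally matters for throughput.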

Now there are a few caveats: the solution assumes that there is one provider and one consumer, which is often not the case. But I still think this is a good start, and it provides a good basis for further solution development. For example, I can see the possibility of using correlation sets to include consumer information in the solution as well. Also, the solution requires that the transport level does not alter the sequence. To put it another way, BizTalk can preserve the sequence in which it received the messages, but cannot re-order out-of-sequence messages with this solution.

I do have one concern with the mechanism. In the past we have had significant issues with the use of pipeline components in conjunction with the HL7 Accelerator. The issue lies with the way ACK messages are generated. When a message is received, the accelerator pipeline component will attempt to convert it to the appropriate XML representation. If for some reason this fails, a NACK is generated and held in memory until the pipeline completes. This NACK is then sent to the MessageBox and transmitted back to the message sender. If a map, for example, is included in the pipeline, and the message has failed transformation to the XML version, the map will error out and cause the pipeline to collapse, destroying the NACK in the process. The upstream system never receives the NACK and basically sulks, doing nothing and sending no more messages until an operator resets the interface. I wonder if the same behaviour will be exhibited with this solution?

Image taken from Stuart Landrum's MS-HUG presentation

The last VERY cool area was schema generation. In the past the accelerator basically hid the underlying schema description database from the developers. So if your application's HL7 v2.x schemas deviated from the standard (and whose don't?), you were forced to generate a native schema and then hand-tool the differences. When you are dealing with multiple systems and schemas this was highly time-consuming and frustrating, as well as a significant source of errors. The new model enables you to customise the underlying Access database that HL7 provides on their web site to match your applications, and then generate conformant messages directly. Localisation would be a good example of how this could be beneficial. One thought here: you have to be an HL7 member to get hold of this Access database. But there are plenty of benefits to membership, so it's not too much of a hardship!

So all in all I am very impressed with the changes, and I’m looking forward to completely rebuilding a couple of client solutions with these and other changes in the platform!


Microsoft Common User Interface

It's funny how things seem to line up at just the right time sometimes. I am working on a couple of projects that have a significant healthcare UI component. Now, my focus is primarily around Business Process Integration and Electronic Health Records, not user interfaces. So I was immediately attracted to the possibilities when I heard about the Common User Interface (CUI) project Microsoft and the UK NHS are involved in.

Briefly (and there are much better descriptions on the web site), the CUI project came about through a realisation of the impact of user interface design on medical errors. In the UK, as just about everywhere else in the world, medical errors are a serious issue that results in fatalities. The numbers are always pretty shocking: 900,000 medical errors in the UK per year, with roughly 8% leading to fatalities. That probably seems surprising, but when you look at the range and diversity of application interfaces that care providers are faced with in a working day, it's not really that amazing. Simple things, like misunderstanding dates, times and medication names, can lead to errors, and when the treatment of ill individuals is involved, the mistakes can be fatal.

The project intends to develop a series of design guidelines that will be used to standardise the interfaces for medically critical data areas, and over time will be used by the NHS to ensure compliance from the vendor community that supports it. The part that interests me is that the design guidelines being produced have input from care providers and risk assessors focused on providing effective interfaces for care providers. The guidelines stop at the wireframe level, providing flexibility in final design for the individual vendors.

Taking this one stage further, Microsoft is developing a series of common controls based on the framework. The controls are obviously in line with the guidelines being developed and are available on the MS CUI web site.

Now, I'll be the first to point out (actually not the first, but since this is my blog, I'm the first on this page) that the controls are not yet complete and there is a lot of further work to be done. But both Microsoft and the NHS are committed to the further development of the CUI over the next few years. Furthermore, the MS-CUI components have been published in a CodePlex library, and the controls can be embedded in projects under the standard community licensing. Importantly, they are aiming for a community. So get involved!

For myself, I'll be looking to use the common controls and design guidance to shorten the time needed to develop client solutions. Being able to tell clients that the design for the controls had clinician input is huge.


MS-HUG Technical Forum 2007

For various reasons I am down in Redmond on the Microsoft campus attending the MS-HUG technical forum. I finally had a chance to meet Roberto Ruggeri (see his blog) in person, and one of the things he pointed out was that I have been pretty lax in keeping my blog up to date! Fair enough, so I thought I'd better at least mention the conference!

The Microsoft Healthcare Users Group (MS-HUG) is unified with the Healthcare Information and Management Systems Society (HIMSS) and provides a forum for the healthcare industry to discuss and exchange ideas around the use of Microsoft technologies. The conference is a chance to get both vendors and healthcare professionals together to discuss the latest trends and technologies, as well as requirements for the industry.

Personally, I was interested in a few of the developer track presentations as well as the opportunity to meet up with some people I have dealt with on email and phone over the last few years.

As far as presentations go, the MS-CUI, HCE and XDS.b and the BizTalk Server 2006 R2 presentations were of the most interest on the first day.

I’ll post separately on all of these topics briefly…

Details for the conference can be found here.
