Archive Page 2

Time for a little heresy?

Amongst many of the news articles I read today, one jumped out at me (“The ‘silver tsunami’ that threatens to overwhelm US social security system” Oct 18th 2007, Guardian – http://www.guardian.co.uk/usa/story/0,,2193443,00.html). The moment we have been talking about repeatedly has arrived. The baby boomers are retiring.

I have lost count of how many positioning presentations I have either given or sat through that called out this moment as the coming challenge and a serious source of concern for healthcare. Usually (and my own presentations are included) these presentations centre on the need to invest in technology to reduce the burden on the system by consolidating information and cutting the “waste” within the system, such as the re-ordering of laboratory tests. Large-scale technology projects will remove the problem and save our budgets.

Over the years I have been involved in healthcare technology, and I have friends and colleagues working on large-scale projects in both Canada and Europe, in particular the UK. There are some very complex and noble designs on the table which, when completed, stand to greatly improve both the delivery of healthcare and its associated costs. But here’s the rub: these are expensive and complex projects, taking many years to design and implement. Realistically, a large-scale jurisdictional Electronic Health Record project can take upwards of a decade and many millions (choose your currency) to deliver.

Now for the part that might get me burnt at the stake: is there a more efficient way to achieve the desired end-state? Might a better way to achieve an Electronic Health Record (EHR) be to use a series of small projects that provide clear incremental steps, rather than a single large, centrally controlled project? What about enlisting the owners of the data within the system: the patients?

I have to admit to being suspicious of large organisations setting up “free” or even subscription-based Personal Health Record (PHR) systems (see Google and Microsoft), but there may be something in it. True, a PHR doesn’t provide all the benefits of a full-blown EHR (especially around Performance Management), but that ultimately is not a concern of the patient.

So while we design, build and deploy large scale Electronic Health Record solutions, maybe Personal Health Record systems have, at a minimum, a stop gap role to play, even in a non-competitive market?

I hear the crackle of fire.. gotta run!

Part Two – Compiled vs. Config

I remember the first application I wrote professionally; it seemed highly complex and well thought out, and I thought I had all the customer requirements covered. Then came deployment and, bang, my app hit the real world and a huge problem: all the details were hard coded and based on my development machine, not the production environment. Horrible. I quickly learnt (over many, many cups of coffee) to use configuration files so that details such as database connections, usernames/passwords and so on could be altered without touching the code.
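As a minimal sketch of that lesson (in Python rather than the .NET of the day, and with invented section and key names), the deployment-specific values live in a file that the code merely reads:

```python
from configparser import ConfigParser

# In a real deployment this would live in a settings.ini shipped per
# environment (dev/test/prod); inlined here so the sketch runs as-is.
SETTINGS = """
[database]
server = prod-sql01
name = orders
user = app_user
password = s3cret
"""

config = ConfigParser()
config.read_string(SETTINGS)          # in production: config.read("settings.ini")

db = config["database"]
connection_string = (
    f"Server={db['server']};Database={db['name']};"
    f"Uid={db['user']};Pwd={db['password']};"
)
print(connection_string)              # the code never changes; only the file does
```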

Later I learnt an important lesson in application design: things are NEVER static or fixed. A customer had a list of codes that defined various subdivisions of properties they owned. I suggested these should be stored in a database so that if they changed the application would not break. I was repeatedly assured they would never change. Against my better judgement, I backed down and hard coded the values. On the day of go-live, the codes changed. Fancy that. Lesson learnt.

Now you may be reading this and thinking, well, obviously these elements should be stored in a configurable way. You might even be thinking, idiot. But before you give up on me completely, I would point out that almost every solution I come across hard codes volatile logic and business descriptions. It could even be argued that the tendency to use object models and relational databases for every application leads to customer dissatisfaction, since customers are constantly stuck in a cycle of change management. A common complaint I hear is that the software they procure/write/inherit is incapable of keeping up with the speed of change in their business.

So is there a better way?

I think so. A combination of a few emerging and existing technologies makes it possible to build applications that hard code only the supporting functions (think logging, security and so on) whilst providing a flexible framework to host the business functions. Let’s start with business logic.

Currently this tends to be expressed through detailed object models that expose methods and properties to other objects. These models can be extensive, but crucially they bury the core business processes in the model. When the business changes, as it usually does, these models must be updated. Often this is painful, and usually it is expensive. What if the business process were represented in a more decoupled and configurable way? What if a core workflow engine could consume business process descriptions and orchestrate the invocation of component functions without requiring direct dependencies? What if business rules could be stored and expressed in a non-code form that enabled businesses to alter their own rules without involving developers? Would that be a benefit?
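As a rough sketch of the idea (the names and the JSON format here are my own invention, not any particular workflow or rules product), a small core engine reads a declarative process description and invokes registered functions without compiled-in dependencies between them:

```python
import json

# Hypothetical process description -- the business could edit this
# without anyone recompiling the application.
PROCESS = json.loads("""
{
  "name": "admit_patient",
  "steps": [
    {"function": "validate_demographics"},
    {"function": "assign_bed", "rule": "patient['age'] >= 18"},
    {"function": "notify_lab"}
  ]
}
""")

# Registry of component functions; the engine only knows them by name.
REGISTRY = {
    "validate_demographics": lambda p: print("validated", p["id"]),
    "assign_bed": lambda p: print("bed assigned to", p["id"]),
    "notify_lab": lambda p: print("lab notified about", p["id"]),
}

def run(process, patient):
    for step in process["steps"]:
        rule = step.get("rule")
        # A real rules engine would evaluate rules safely rather than via
        # eval(); this only illustrates rules expressed as data, not code.
        if rule and not eval(rule, {}, {"patient": patient}):
            continue
        REGISTRY[step["function"]](patient)

run(PROCESS, {"id": "12345", "age": 42})
```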

Some of the components of this idea already exist. Business rules engines are available (BizTalk, for example, has one). Workflow solutions and engines exist. De-coupled invocation of business functions can already be achieved. So what’s required to bring the concept to reality? Actually, I don’t think a lot more is required: just a change in the way we develop applications, and maybe a change in the way we see our roles supporting the business. Although on the surface it could be argued that change equates to dollars for us, I would suggest that more flexible software would lead to more dollars through an increased desire to engage.

So what about business data? Typically this is stored in a database within complex relational tables. Well and good, but changes to the business data lead to changes in the data model, and that gets complex. How do you account for legacy data in a new data model? What if the data needs to be retrievable in its original form? What happens to keys, and to required fields that are missing?

For a while now I have been talking to a number of colleagues about the idea of using XML as a native data type in SQL. This is especially attractive for complex message schemas such as HL7 v3.0. The core advantage here is versioning: multiple versions of the model can co-exist in a single database, and legacy data can be accessed in its original form, often a requirement for medical data.
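As a sketch of what that might look like (table, column and DSN names are my own invention; I am assuming SQL Server’s native XML column type and a pyodbc connection), messages of different schema versions sit side by side and can still be retrieved verbatim:

```python
import pyodbc

conn = pyodbc.connect("DSN=ClinicalStore")   # hypothetical ODBC data source
cur = conn.cursor()

# One table holds every version of the message model; the payload stays in
# its original XML form, which also satisfies the "retrievable as received"
# requirement common for medical data.
cur.execute("""
    CREATE TABLE ClinicalMessages (
        MessageId     INT IDENTITY PRIMARY KEY,
        SchemaVersion NVARCHAR(20) NOT NULL,
        ReceivedAt    DATETIME     NOT NULL DEFAULT GETDATE(),
        Payload       XML          NOT NULL
    )
""")

cur.execute(
    "INSERT INTO ClinicalMessages (SchemaVersion, Payload) VALUES (?, ?)",
    ("HL7v3-2006", "<PRPA_IN201301UV02>...</PRPA_IN201301UV02>"),
)

# Query into the XML when needed, without reshaping the relational model.
cur.execute("""
    SELECT Payload.value('(/*/@ITSVersion)[1]', 'NVARCHAR(10)')
    FROM ClinicalMessages
    WHERE SchemaVersion = ?
""", ("HL7v3-2006",))
conn.commit()
```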

So, just a few thoughts. But I am convinced that there are more examples and possibilities. Can a core app be built and then used to support multiple businesses? Is there really a need for hundreds of applications to be written every year to do the same job? Maybe not.

Getting addicted to the stats page – “Map fails in custom pipeline”

I know, it’s wrong. But oh so interesting!

The latest search term that reminded me of a problem was "map fails in custom pipeline". This was the bane of my life on a past project and resulted in an additional 100+ orchestrations in the eventual solution. When developing a solution with the HL7 Accelerator you cannot put a map in the pipeline. The basic issue is that if a message is received in the pipe-delimited HL7 format and cannot be disassembled by the HL7 pipeline component, it is never converted to XML. The map expects XML as its input, and when it receives the non-XML message it fails, collapsing the pipeline.
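Conceptually the failure looks like this (a Python sketch of the logic, not BizTalk code; the function names are invented): the transform assumes XML, so a message that is still pipe-delimited has to be caught before it ever reaches the map.

```python
import xml.etree.ElementTree as ET

def map_after_disassembly(raw_message: bytes):
    """Illustrates why mapping inside the pipeline is fragile."""
    try:
        doc = ET.fromstring(raw_message)   # the map expects XML as input...
    except ET.ParseError:
        # ...but a message the HL7 disassembler could not handle is still
        # pipe-delimited. Mapping it here would collapse the pipeline and
        # take the pending ACK/NACK down with it.
        return None                        # route to error handling instead
    return transform(doc)

def transform(doc):
    return doc                             # placeholder for the real map
```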

That is a bit of a pain, and more so with solutions based on the canonical model that require the ACK/NACK generated by the pipeline. Specifically, if the pipeline collapses prematurely, the ACK/NACK is never written to the MessageBox and therefore never returned to the upstream system, which then generally sulks and does nothing.

The solution is to place the map in an orchestration, after the pipeline has completed. Which led to the 100+ orchestrations in my solution: one for every HL7 trigger for every upstream system. Not fun.

So if there are any other issues that lead you to my blog, leave a comment, and I’ll see if I can help!

“biztalk mllp does not work”

I was looking through the stats on the blog (not sure if that counts as sad or just narcissistic) and was amused to see the search term "biztalk mllp does not work" as one of the links to a post. Now I feel a bit bad, as I don’t think I have written anything that would be of particular use to the searcher. So, if you happen to read this, post what the issue is. I’d definitely be interested. Sometimes picking up parts of this technology can be highly frustrating for no apparent reason!

As a matter of interest, I believe I have said those very words. There was definitely an issue in the released versions where the MLLP adapter would do "odd" things. The oddest would occur when the adapter had not received a message for a while. It would go to sleep and the first message it would receive would "wake" the port, but the message would be eaten without trace. Highly frustrating when you are developing/testing, and a problem not usually seen in a production environment where messages are received frequently enough to avoid this issue.

Check out the changes made to the MLLP adapter in BizTalk 2006 R2, specifically to the persistent connection management on MLLP send and receive. It’s now possible to set the connection to persist, i.e. never close. This may well solve the issue.

Part One – A brief history of Healthcare Integration

I think, first off, it’s worth discussing healthcare integration in general and looking at a quick history, if only to see how the complexity of the integration has increased over the last few years and how that impacts the design requirements of modern solutions.

Initially systems were not required to integrate, and often had no connection to the "outside world". These systems were monolithic and were wholly responsible for executing their given functions. It’s worth noting that many of these systems have not gone away: many healthcare organisations still pursue a single-vendor strategy with systems that neither expose message-based interfaces nor provide documented data models to enable integration.

Over time, however, specialised solutions were implemented, and initially these required rekeying of data. It has long been known that manual data entry is a problem; the Food and Drug Administration (FDA) estimates that one entry in fifty contains an error. The impact of this, the additional administrative work associated with rekeying, and out-of-sync data sources constitute a serious problem. Quite aside from being a source of medical errors, the cost associated with the additional effort is sufficiently high to warrant a solution. Health Level Seven (HL7) provided an early and increasingly ubiquitous series of protocols and standards for messaging between medical systems. Past versions of the standards focused on pipe-delimited messages that communicate "triggers" between systems and are organised into domains such as Admission and Order Entry. The triggers are analogous to events, an example being an ADT (Admit, Discharge and Transfer) A01 message representing an in-patient admit event.
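For anyone who hasn’t seen one, a v2.x message is just delimited text. A minimal sketch of pulling fields out of a fabricated ADT^A01 admit message (the content below is invented sample data, trimmed to three segments):

```python
RAW = (
    "MSH|^~\\&|ADT_SYS|HOSP|LAB_SYS|HOSP|200710180930||ADT^A01|MSG0001|P|2.3\r"
    "PID|1||123456^^^HOSP^MR||DOE^JOHN||19600101|M\r"
    "PV1|1|I|WARD1^101^A\r"
)

def parse_hl7_v2(raw: str) -> dict:
    """Split a pipe-delimited HL7 v2.x message into {segment_id: fields}."""
    segments = {}
    for line in raw.strip().split("\r"):
        fields = line.split("|")
        segments[fields[0]] = fields
    return segments

msg = parse_hl7_v2(RAW)
trigger = msg["MSH"][8]      # "ADT^A01" -- message type and trigger event
patient_id = msg["PID"][3]   # "123456^^^HOSP^MR" -- patient identifier
print(trigger, patient_id)
```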

An inherent problem with the HL7 standards is that there is no solid conformance mechanism, enabling vendors to claim compliance even when their messages vary wildly from the published standard. This can obviously lead to huge difficulties for both clients and system integrators. How close to the standard is a given vendor’s implementation? Where does it deviate?

Initial use of the protocols enabled transmission of data between various clinical information systems (CISs). Typically this was done with port-to-port interfaces, where the originating system directly transmitted the message to a known port on a destination system. If multiple systems required a copy of the message, multiple interfaces were set up on the originating system. This was a costly mechanism, as each of the interfaces usually incurred a fee from the system vendor. It did, however, have the advantage of enabling a specific "flavour" of the message to be generated for each recipient.

As middleware engines began to develop, it became possible to have a single interface from a source system and use the middleware engine to make multiple copies of the message and "push" them to downstream systems. This reduced the cost of maintaining multiple interfaces, but initially the lack of mapping capabilities meant that vendor solutions needed to be on the same version of the standard. Another issue was the single-threaded nature of these engines: a single message was transmitted, and the ACKs from the downstream systems had to be managed before the solution would accept another message from the originating system.

Mapping services were a large step forward for middleware engines. Moving the solution from a "dumb" repeater to an intelligent platform enabled systems on different versions of the standard to be integrated, with the issues of adherence to specific standards migrating to a single middleware platform. The arrival of mapping services also enabled the development of the Canonical Data Model. The Canonical Data Model essentially provides an abstraction between the source system and the destination system by translating the original message to a super-set message, and then translating the super-set message to the outbound message. If a system is replaced, the model should insulate the remaining systems from the changes. In practice this pattern requires the developers to "design the world": if elements are not included in the canonical data model, changes to the system at a later date can be more complicated than they would have been in direct mapping scenarios.
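A toy sketch of the pattern (field names and systems are invented): each system maps to and from the canonical shape, so no system ever maps directly to another.

```python
# Each spoke supplies two maps: inbound (native -> canonical) and
# outbound (canonical -> native). The hub only sees the canonical shape.

def lab_inbound(native: dict) -> dict:
    return {"patient_id": native["pid"], "event": native["evt"]}

def radiology_outbound(canonical: dict) -> dict:
    return {"PatientIdentifier": canonical["patient_id"],
            "EventCode": canonical["event"]}

def route(source_msg, inbound_map, outbound_maps):
    canonical = inbound_map(source_msg)
    # Replacing the lab system only means rewriting lab_inbound; every
    # downstream map keeps consuming the same canonical form.
    return [out(canonical) for out in outbound_maps]

print(route({"pid": "123456", "evt": "A01"}, lab_inbound, [radiology_outbound]))
```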

The Canonical Data Model is key to a successful hub-and-spoke design in integration. Each spoke’s (individual participant system’s) attachment to the hub (the middleware engine) should be through an adaptation piece that deals with the specifics of both the transport mechanism and the message format. Within the core middleware engine, all messages are equal.

The canonical data model is not without issues. As mentioned, it requires a large amount of upfront work and a very clear understanding of all the systems involved. Although on paper this should be relatively painless, in practice the effort required to ensure the accuracy of the model is huge. Within medical integration there is an advantage: since most systems implement the HL7 model, the core HL7 model can form the basis of the data model.

Another key step in the development of middleware platforms, and in the implementation of integrated medical environments, was the arrival of orchestrations and the ability to use the platform to execute business processes. Since a large amount of business data flows through a common point in the enterprise, it makes sense to leverage the platform to execute business processes and provide real-time reporting. This, combined with multi-threaded platforms, enables large-scale composite applications to be developed, increasing the value of the middleware platform as a Business Intelligence enabler.

Integration, then, has moved from point-to-point interfaces between two systems to enterprise business process platforms embedded at the centre of an enterprise. Modern platforms are able to manage and orchestrate high volumes of messages and enable organisations to create large enterprise-wide composite applications, trade with external partners and mine real-time business/medical data. More and more organisations are aware of the possibilities inherent in a strong middleware strategy, and vendor middleware engines continue to develop into mission-critical platforms.

Next post: Compiled versus Configuration…

To build a better broker…

Back when I started working for a previous company, I remember sitting in meeting after meeting with clients explaining the benefits of integration and of having an enterprise-class integration platform. It seemed like every customer needed to be educated on its possibilities and on the level of effort required to build out an enterprise integration strategy. These days I sit in fewer of those meetings. People are more and more aware of the need for an integration strategy and the possibilities that a good broker affords them; no longer constrained to simply route messages, we can now execute complex business logic and mine for critical business data within these platforms.

So it should be pretty straightforward now to design, implement and deploy a stable, flexible and performant platform. In fact I should be out of a job. But that’s not the case, much to the relief of my mortgage company. Integration is still complex and bespoke. There are no really good one-size-fits-all approaches to integration, no single tool or solution that solves all challenges. Partly this is to be expected: most organisations have different requirements and different philosophies that affect which trade-offs they would make in any given scenario. Moreover, integration is primarily approached on a project-by-project basis. Rarely does an organisation have the foresight or budget to sit down, design a full-blown strategy for integration, and lay out a consistent framework for all integrations within the organisation. This leads to some challenges, not least of which are the complications driven by unchecked growth of the overall solution. A great example here is solutions built on top of the HL7 Accelerator for BizTalk. The accelerator makes life much easier in developing healthcare integration solutions, but it has a tendency to funnel development down a particular design pattern that works well for individual integrations, yet poses serious challenges for enterprise-scale integration of full clinical environments.

For the last few years it’s been my privilege to work with some very intelligent and insightful colleagues and partners, as well as customers. In that time we have all batted around the idea of a productised Clinical Broker. Now, I should add the caveat here that I am fundamentally a consultant, not a product person; that may well colour my view of what’s required. In conjunction with a number of others, I have spent some time going over the challenges inherent in healthcare integration solutions and worked through a number of different approaches to them. More and more these days I have come to the conclusion that a framework and design pattern are of more benefit to customers than a product: they allow the flexibility that is required while satisfying the biggest requirement I repeatedly come across; customers rarely want to build a full solution from the ground up.

So over the next few posts I would like to go through the following thoughts:

  1. History and Issues of Healthcare Integration
  2. Compiled versus Configuration
  3. Separation of Business Logic and Routing Logic
  4. Performance
  5. Microsoft Framework Offerings
  6. Guidance Automation Toolkit/Domain Specific Languages
  7. The Canonical Data Model

As a starting point (actually back to front, but there you go) there are a few core requirements I believe are crucial for a successful design pattern/framework:

  1. Separation of Business Logic and Routing Logic: This may well be one of the most crucial issues I have come across, and the failure to achieve it appropriately is at the core of most of the management problems associated with the large, multi-segment integration solutions I have been involved with or reviewed.
  2. Encapsulation of Business Logic as "Services": This requirement supports the previous one. Business logic must be represented as discrete services to support manageability of the platform.
  3. Consumable Service Repository: Once we accept the encapsulation of business logic as services, the need to register and consume these services is a given.
  4. Subscription Registration System: A flexible design would allow non-developer subscription to messages within the ecosystem. A registry of available messages and providers should be provided to enable downstream systems to choose which messages/events to subscribe to from which providers.
  5. Message Wrappering: A typical way of passing messages through the system is to fully represent the message within the broker engine. This, however, quickly leads to a complex system. A different approach is to wrap the message in a standard header and promote the required business values into the header for in-engine processing (see the sketch after this list).
  6. Ordered Delivery: This is an area that introduces a large amount of complexity. Briefly, HL7 messages can have a cumulative effect and often need to be received by downstream systems in the intended order. HL7 provides a protocol for ensuring order, but this is rarely implemented by vendor systems. It should be noted that the requirement here is for ordered messages per patient, NOT overall. This provides some interesting options for solving the issue.
  7. Independent Cessation of Message Delivery: An enterprise integration solution will have multiple downstream systems consuming the same messages. For example, if a Hospital Information System is responsible for admitting patients, it will often provide an ADT stream which multiple downstream systems (say a Laboratory Information System and a Radiology Information System) will require. The solution should enable any one of the downstream systems to temporarily shut down its inbound interfaces without forcing all downstream systems that share common messages to shut down.
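As a rough illustration of points 4, 5 and 7 (a Python sketch with invented names, not a BizTalk artefact): the broker inspects only a small promoted header while the original payload travels through untouched, a subscription registry decides who receives a copy, and pausing one consumer does not stop the others.

```python
from dataclasses import dataclass

@dataclass
class Envelope:
    """Standard wrapper: promoted values for routing, payload left untouched."""
    message_type: str      # e.g. "ADT^A01"
    patient_id: str        # promoted from the payload for in-engine use
    source_system: str
    payload: str           # the original pipe-delimited HL7, verbatim

# Subscription registry: which downstream systems want which message types
# from which providers. Non-developers could maintain this as configuration.
SUBSCRIPTIONS = {
    ("HIS", "ADT^A01"): ["LIS", "RIS"],
    ("HIS", "ADT^A08"): ["LIS"],
}

# Per-system switch: pausing one consumer must not stop the others.
SUSPENDED = {"RIS"}

def route(env: Envelope):
    for consumer in SUBSCRIPTIONS.get((env.source_system, env.message_type), []):
        if consumer in SUSPENDED:
            park(consumer, env)      # hold for later redelivery
        else:
            deliver(consumer, env)

def deliver(consumer, env): print(f"deliver {env.message_type} to {consumer}")
def park(consumer, env): print(f"park {env.message_type} for {consumer}")

route(Envelope("ADT^A01", "123456", "HIS", "MSH|^~\\&|..."))
```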

This is by no means intended as an exhaustive list, but it’s a start. In my next post I’ll cover off some of the history of the way medical systems are integrated with HL7 and some of the issues inherent in past solutions.

BizTalk Server 2006 R2 – HL7 Accelerator

One of the most interesting presentations for me at MS-HUG was the update on the HL7 Accelerator for BizTalk Server 2006 R2, given by Stuart Landrum. There are a number of areas of interest here, in particular the work that has been done to enable processing of large schemas (such as the HL7 v3.0 schemas), ordered delivery, and schema generation.

HL7 v3.0 schemas are very big. In the past, you simply could not work effectively with the BizTalk mapper to map these schemas. In fact, in BizTalk 2004 it was pretty much impossible to natively access these messages, primarily due to issues with the .Net Framework v1.1. Version 2.0 of the framework helped a lot, but the schemas still caused huge issues. Workarounds included object model representations of the schema, something I personally think is a mistake given the large number of elements within the schema that carry no meaningful business data. I have written before on the issues with large XML schemas, HL7 v3.0 in particular. With the R2 release, BizTalk provides two strategies/alterations to help support these schemas: the GenerateDefaultFixedNodes flag, and a change to the expansion of schema nodes. I am told these two changes reduce the performance requirements of the system to the point where the schemas can be dealt with natively.

The GenerateDefaultFixedNodes flag changes the way the compiler deals with the linked nodes. In the past the compiler would recursively walk through all the links, consuming more and more resources. In many cases, depending on your system, you would simply run out of resources and the system would crash. I personally spent a very frustrating couple of weeks trying to put together a proof of concept in the past where my machine would spin out of control and eventually crash. Now the GenerateDefaultFixedNodes flag will prevent this behaviour by only accounting for linked nodes.

The expansion of the schema is another big step forward. In previous versions, if you clicked "expand all nodes" in the mapper, you were asking for trouble: the mapper would try to expand all the nodes and would eventually crash out. The change in R2 alters the behaviour to expand only the first instance of a complex type, rather than opening them all.

I’ll be trying these out as soon as I get a chance and will post on the results asap.

Ordered delivery is a big deal in healthcare and was difficult to achieve with previous versions of BizTalk. The underlying problem is BizTalk’s multi-threaded nature and the efficiency with which it processes messages. In older-style integration engines this simply was not a problem, since many were single threaded and did not execute discrete steps of business logic in a decoupled manner. Subsequent attempts at global solutions often did not take into account the use of orchestrations, or the use of multiple message types within a message class (for example, ADT is a message class with multiple types of messages as represented by triggers – A01, A02 and so on; the HL7 Accelerator generates an individual schema for each of these, resulting in multiple pathways through the system).

The ordered delivery solution within R2 uses a ticketing system with a gatekeeper orchestration to re-sequence the messages. The solution uses a custom pipeline component to insert a sequence number into the context properties of the message in the pipeline. Normal processing of the message can then occur throughout the solution. Prior to transmission to the consumer, the gatekeeper orchestration essentially ensures the message is the next in the sequence before releasing it.

The cool part about this approach is that it uses correlation sets to achieve the sequencing. This is cool because it will enable developers to use additional data elements for correlation. So, for example, you could ensure that all messages relating to an individual patient are in sequence with each other, but not with other patients’ messages. This is huge, since the business requirement is usually that an individual patient’s messages are in sequence – for example, an A08 cannot come before an A01, or, even more critically, one A08 must not overtake an earlier A08. It is rarely a requirement that messages are in sequence between patients; it’s not important for the downstream system to receive Patient A’s messages before Patient B’s. This has a large impact on performance. Fundamentally, any attempt to sequence messages will reduce the performance of the system, so the general rule should always be: if you don’t need it, don’t use it.
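A toy sketch of the per-patient idea (plain Python, not the R2 gatekeeper itself; sequence numbers are assumed to have been stamped at receive time): messages are released in order within a patient’s stream, while different patients never block each other.

```python
from collections import defaultdict

# Buffer of out-of-order messages and the next expected sequence number,
# both tracked per patient so patients do not block one another.
pending = defaultdict(dict)
next_seq = defaultdict(lambda: 1)

def on_message(patient_id: str, seq: int, payload: str):
    """Release this patient's messages strictly in sequence order."""
    pending[patient_id][seq] = payload
    while next_seq[patient_id] in pending[patient_id]:
        deliver(patient_id, pending[patient_id].pop(next_seq[patient_id]))
        next_seq[patient_id] += 1

def deliver(patient_id, payload):
    print(f"{patient_id}: {payload}")

# An A08 (update) arrives before the A01 (admit) for patient 123456, so it
# is held until the A01 has been delivered. Patient 789 is unaffected.
on_message("123456", 2, "ADT^A08")
on_message("789",    1, "ADT^A01")
on_message("123456", 1, "ADT^A01")
```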

Now there are a few caveats: the solution assumes that there is one provider and one consumer, which is often not the case. But I still think this is a good start, and it provides a good basis for further solution development. For example, I can see a possibility of using correlation sets to include consumer information in the solution as well. Also, the solution requires that the transport level does not alter the sequence. To put it another way, BizTalk can preserve the sequence in which it received the messages, but this solution cannot re-order messages that arrive out of sequence.

I do have one concern with the mechanism. In the past we have had significant issues with the use of pipeline components in conjunction with the HL7 Accelerator. The issue lies with the way ACK messages are generated. When a message is received, the Accelerator’s pipeline component will attempt to convert it to the appropriate XML representation. If for some reason this fails, a NACK is generated and held in memory until the pipeline completes. This NACK is then sent to the MessageBox and transmitted back to the message sender. If a map, for example, is included in the pipeline and the message has failed transformation to the XML version, the map will error out and cause the pipeline to collapse, destroying the NACK in the process. The upstream system never receives the NACK and basically sulks, doing nothing and sending no more messages until an operator resets the interface. I wonder whether the same behaviour will be exhibited with this solution?

Image taken from Stuart Landrum’s MS-HUG presentation

The last VERY cool area was schema generation. In the past the Accelerator basically hid the underlying schema description database from developers. So if your application’s HL7 v2.x schemas deviated from the standard (and what applications don’t?), you were forced to generate a native schema and then hand-tool the differences. When you are dealing with multiple systems and schemas this was highly time-consuming and frustrating, as well as a significant source of errors. The new model enables you to customise the underlying Access database that HL7 provides on its web site to match your applications, and then generate conformant schemas directly. Localisation would be a good example of where this could be beneficial. One thought here: you have to be an HL7 member to get hold of this Access database. But there are plenty of benefits to membership, so it’s not too much of a hardship!

So all in all I am very impressed with the changes, and I’m looking forward to completely rebuilding a couple of client solutions with these and other changes in the platform!
