Making Health Data Impossible to Misunderstand

By John D’Amore, President and Chief Strategy Officer, Diameter Health
Twitter: @DiameterHealth
Twitter: @jddamore

Back in the first century AD, the Roman orator Quintilian aptly characterized how health information should be communicated in the 21st century: “One should not aim at being possible to understand but at being impossible to misunderstand.” Most EHRs today, however, send data with the potential to be understood but with no guarantee. That guarantee is known as semantic interoperability: the expectation that data will be understood and usable by downstream recipients regardless of the source. What forces are leading the industry toward semantic interoperability, and toward unlocking the full potential of digital health data?

The federal government has consistently been the leading actor in advancing healthcare interoperability. Prior to the 1990s, medical billing was done almost entirely on paper. In 1996, Congress passed the Health Insurance Portability and Accountability Act (HIPAA), which included a provision requiring that billing claims for services be sent electronically. While HIPAA is best known for its privacy and security provisions, the “P” stands for portability, and HIPAA catapulted medical billing into the digital age.

The advancement of clinical data exchange follows a more recent path. The Office of the National Coordinator for Health Information Technology (ONC) was established in 2004, and through programs like Meaningful Use it is advancing clinical data exchange much as HIPAA did medical billing. This journey began with the adoption of fully electronic records. In 2008, ambulatory EHR adoption was only 10-20%, so little digital data existed outside of acute care environments. Meaningful Use directed over $37 billion toward the digitization of medical records and kick-started the healthcare IT industry, now valued at over $100 billion. EHR adoption subsequently skyrocketed; today virtually every hospital and 90% of ambulatory physicians record data digitally. That’s a faster adoption rate than just about any other technology, even faster than the Internet (which grew from 10% to 75% adoption over eighteen years).

The 21st Century Cures Act and the Trusted Exchange Framework and Common Agreement (TEFCA) represent the latest government initiatives to enable improved nationwide clinical data exchange. Rules enacted under this legislation aim to minimize information blocking and include new exchange standards like Fast Healthcare Interoperability Resources (FHIR). By mandating the use of FHIR, which will make clinical data more fluid across stakeholders and business processes, the government is opening the floodgates for innovation through Application Programming Interfaces (APIs). FHIR APIs let systems pinpoint and request specific types of clinical data, known as FHIR “resources.” This allows health IT to metaphorically pick up the phone and talk to other systems.
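
To make that concrete, here is a minimal sketch of what such requests might look like. The resource types (Patient, MedicationRequest) and search parameters are standard FHIR, but the server URL and patient id are hypothetical, chosen only for illustration.

```python
# A minimal sketch of requesting FHIR resources over REST, assuming a
# hypothetical server at fhir.example.org and an illustrative patient id.
import requests

BASE = "https://fhir.example.org/fhir"  # hypothetical FHIR base URL
HEADERS = {"Accept": "application/fhir+json"}

# Read a single Patient resource by its logical id
patient = requests.get(f"{BASE}/Patient/123", headers=HEADERS).json()
print(patient.get("resourceType"))  # "Patient"

# Search for that patient's active medication orders (MedicationRequest resources)
response = requests.get(
    f"{BASE}/MedicationRequest",
    params={"patient": "123", "status": "active"},
    headers=HEADERS,
)
bundle = response.json()  # search results arrive wrapped in a Bundle resource

for entry in bundle.get("entry", []):
    concept = entry["resource"].get("medicationCodeableConcept", {})
    print(concept.get("text", "unnamed medication"))
```

Each resource type has its own well-defined search parameters, which is what makes the “pick up the phone and ask” metaphor work: the request names exactly the slice of clinical data it wants.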

But we all need to speak the same language to guarantee semantic interoperability during clinical data exchange. Why is this difficult? First, over 4 million clinicians each had their own ways of recording information before the EHR adoption of the past decade; the recording of medical data historically has not been standardized. Second, the vocabularies for medical data are complex. While the average English speaker knows only about 40,000 words, medical dictionaries contain millions of concepts. Finally, there are over 100 EHRs in common use throughout the United States, each with its own way of recording clinical data.

In our experience, we’ve seen hundreds of ways a single lab test can be recorded. Many of these are possible for a clinician to understand, such as “HbA1c” or “HgbA1c” for a common measure of glucose control. To guarantee these data can be found using APIs, however, they all need to map to the LOINC standard “4548-4 (Hemoglobin A1c/Hemoglobin.total in Blood).” When you compound the variation in a single lab test across the broad spectrum of clinical data, the problem grows exponentially. That spectrum includes allergies, clinical narratives, diagnoses, immunizations, imaging studies, medications, plans of care, procedures, social history observations, status assessments and vital signs. Organizing and translating all of this data into a lingua franca is a formidable challenge.
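
The core of that mapping problem can be shown in a few lines. This is a toy illustration only: the synonym table below is hand-built, and production pipelines rely on curated terminology services rather than a small dictionary.

```python
# A toy illustration of lab-name normalization. The synonym entries are
# illustrative; real systems use curated terminology services.
LOINC_SYNONYMS = {
    "hba1c": "4548-4",
    "hgba1c": "4548-4",
    "hemoglobin a1c": "4548-4",
}

LOINC_DISPLAY = {
    "4548-4": "Hemoglobin A1c/Hemoglobin.total in Blood",
}

def normalize_lab(local_name):
    """Map a locally recorded lab name to its (LOINC code, display name)."""
    code = LOINC_SYNONYMS.get(local_name.strip().lower())
    return (code, LOINC_DISPLAY[code]) if code else None

print(normalize_lab("HgbA1c"))  # ('4548-4', 'Hemoglobin A1c/Hemoglobin.total in Blood')
print(normalize_lab("CBC"))     # None: unmapped names need terminology review
```

Multiply this one test by thousands of lab analytes, and then by every other clinical domain, and the scale of the normalization task becomes clear.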

This is why interoperability of healthcare information remains elusive as we enter the 2020s, and it affects how clinicians feel about the usability of health information technology. Half of office-based primary care physicians believe using an EHR diminishes their clinical effectiveness. A recent CMS report highlighted that physicians spend over 15 hours per week reconciling information related to care quality. Isaac Kohane, a computer scientist and chair of the department of biomedical informatics at Harvard Medical School, puts it bluntly: “Medical records suck.”

The good news is that healthcare is not the first industry to face the challenge of complex interoperability. Eric Schmidt, former CEO of Google (Alphabet), shared the secret with his audience at HIMSS in 2018: “The conclusion that you come to speaking as a computer scientist and not a doctor in the room is that you need a second tier of data.” Rather than replacing primary data stores such as EHR systems, he argued, the healthcare cloud should supplement rather than supplant them as “a second tier that looks a lot like an unstructured database.” He continued: “This is the key architectural point. This phenomena has been repeated for the 40 years I’ve been doing this in enterprise software.”

To deliver a “second tier” of healthcare data, software applications need to transform data recorded in EHRs. Curating data successfully into this second tier requires the use of NERDS (a minimal code sketch of one of these steps follows the list):

  • Normalization: standardization of coding, display names, units and content, with reference to national standards
  • Enrichment: automated ontology and category assignment, inference of missing medical concepts, and addition of metadata for streamlined analytics
  • Reorganization: grouping of inbound data into consistent, logical categories (e.g. vaccinations sent as procedure codes reclassified as immunizations)
  • Deduplication: creation of a longitudinal record that eliminates redundant information
  • Summarization: selection and display of the most important or relevant information within the original content
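
As promised above, here is a minimal sketch of the “D” in NERDS: collapsing records that describe the same clinical event, so the longitudinal record reports each event once. The record schema and sample values are illustrative, not a standard format.

```python
# A minimal sketch of deduplication: keep one record per clinical event,
# regardless of how many sources reported it. Field names are illustrative.
def deduplicate(records):
    """Keep one record per (code, date, value) triple, regardless of source."""
    seen, unique = set(), []
    for rec in records:
        key = (rec["code"], rec["date"], rec["value"])
        if key not in seen:
            seen.add(key)
            unique.append(rec)
    return unique

records = [
    {"code": "4548-4", "value": 6.8, "date": "2020-03-01", "source": "Hospital A"},
    {"code": "4548-4", "value": 6.8, "date": "2020-03-01", "source": "HIE feed"},
    {"code": "4548-4", "value": 7.1, "date": "2020-09-15", "source": "Clinic B"},
]

longitudinal = deduplicate(records)
print(len(longitudinal))  # 2: the same result reported by two sources collapses to one
```

In practice the matching logic is fuzzier than an exact key, since sources disagree on timestamps, units and display names, which is why deduplication depends on the normalization step running first.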

Healthcare interoperability has come a long way since 2008, when most clinical data simply did not exist in digital form. In the past few years, data exchange has accelerated, with hundreds of millions of transactions exchanged annually to improve care quality and efficiency. Now we can begin to fulfill the guarantee that data are universally usable to address healthcare’s biggest pain points. A second tier of “nerds” is required for semantic interoperability, making health data impossible to misunderstand.