Data Series: Context is Key


“Data = Noise

Data + Context = Information

Experimentation + Error = Experience

Information + Experience = Knowledge

Knowledge + Humility = Wisdom”

- Fergus Connolly

For data to have value, one of the basic rules of data science is that data requires “context.” That means you need to know the data was generated from a definable, whole process. In healthcare, an example of data without context is the “quality” dashboard given to a surgeon. The dashboard typically shows, for example, the rate of post-surgical wound infections. But that outcome measure covers all patients who had an operation by that specific surgeon. If the surgeon is a general surgeon, the wound infection rate might include patients who had breast procedures, colon resections, hernia repairs, and perhaps even operations on gunshot victims. Combining an outcome measure from all of these different contexts makes the data confusing, nearly worthless, and prone to inappropriate interpretations and responses.
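To make the aggregation problem concrete, here is a minimal Python sketch. All of the counts are invented for illustration; the point is the structure, not the numbers.

```python
# Hypothetical counts for one general surgeon; all numbers are
# invented for illustration. Each procedure type is a different
# context, i.e., a different whole, definable patient process.
cases = {
    "breast procedure": {"operations": 120, "infections": 1},
    "colon resection":  {"operations": 40,  "infections": 6},
    "hernia repair":    {"operations": 90,  "infections": 2},
    "gunshot trauma":   {"operations": 10,  "infections": 4},
}

# Per-context rates are interpretable against their own baselines.
for procedure, c in cases.items():
    print(f"{procedure:>18}: {c['infections'] / c['operations']:.1%}")

# The dashboard pools every context into one number.
total_ops = sum(c["operations"] for c in cases.values())
total_inf = sum(c["infections"] for c in cases.values())
print(f"{'pooled dashboard':>18}: {total_inf / total_ops:.1%}")  # 5.0%
```

The per-context rates here range from 0.8% to 40.0%, yet the dashboard shows a single pooled 5.0%. That pooled number moves whenever the case mix shifts, even if the care delivered for every procedure is unchanged, which is why a single number spanning contexts is closer to noise than information.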

It’s also common in healthcare for improvement efforts to be implemented at the level of a fragment of care without attention to the context. At best this leads to waste; at worst, it leads to unintended harm. Any patient, family member, nurse, or doctor can give you numerous examples of the frustrations, workarounds, waste, and even harm that occur regularly because of our fragmented and disconnected organizational structures in healthcare.

This fragmentation also invites the inappropriate application of process improvement tools such as Lean or Six Sigma. Because the tools are applied to a fragment of care (a subprocess) rather than in the context of a whole, definable patient care process, the effort may improve outcomes for the subprocess, but it is unlikely to improve outcomes for the whole patient process. Improving a subprocess without measuring the impact on the whole process is called suboptimization, which I discussed in a General Surgery News video series.

Most of the “quality” measures by which hospitals are judged (and which can affect their financial reimbursement) are examples of suboptimization: central line infection rate, urinary catheter infection rate, 30-day rehospitalization rate, and so on. To make matters worse, the data from many different types of patient care processes (different contexts) is lumped together at the hospital level, so the resulting numbers are effectively noise.

Let’s look at central line infection improvement efforts in more detail. There are many published reports of process improvement efforts achieving central line infection rates approaching, or even reaching, zero, which sounds wonderful. But this is not a good outcome if the effort is isolated to the subprocess without measuring the impact on the outcomes of each whole, definable patient care process of which the subprocess is a part. If it is not measured, you don’t know the outcomes for each type of patient process in the group of patients who received a central line. You don’t know whether they needed a central line in the first place. You don’t know whether the patients suffered other central line complications, such as a collapsed lung, bleeding, or blood clots. Some of these complications may be more dangerous than an infection, which can often be treated simply by removing the central line. By looking only at central line infections, you almost guarantee that other unintended consequences will occur and go unidentified: the problem of suboptimization.
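As a thought experiment, here is a small Python sketch, again with invented numbers, of how a policy can win on the one measured subprocess metric while the whole-process outcome gets worse:

```python
# Hypothetical complications per 1,000 central lines; all counts are
# invented. Policy B is tuned to the only metric being watched
# (infections) and wins on it while total line-related harm rises.
policies = {
    "A (baseline)":        {"infection": 20, "collapsed lung": 10,
                            "bleeding": 8, "blood clot": 12},
    "B (infection-first)": {"infection": 2, "collapsed lung": 18,
                            "bleeding": 14, "blood clot": 22},
}

for name, harms in policies.items():
    print(f"{name}: infections={harms['infection']}, "
          f"total harm={sum(harms.values())}")
# A (baseline):        infections=20, total harm=50
# B (infection-first): infections=2,  total harm=56
```

Only the whole-process measure (total harm, 50 versus 56) reveals that policy B is worse; tracking infections alone would declare it a success.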

When I was the Chief of General Surgery at the University of Missouri, one of my duties was to run the weekly Morbidity and Mortality (M&M) conference, at which the surgery residents present the complications and deaths from the various surgical services over the previous week. M&M is intended to be a learning experience but probably induces more trauma than knowledge in these young surgical trainees. I would review the list of patients and their complications as I walked to the conference room, deciding the order in which they should be presented. One day I saw a curious cluster of complications: four patients with penile necrosis (a part of their penis had died) on the Surgical Critical Care service. As I listened to the resident present these complications, it became clear that this was an unintended consequence of suboptimization.

Because hospitals were being judged more and more by certain quality measures, care teams were under increasing pressure to improve those outcomes and avoid financial penalties. As I mentioned, in addition to central line infections, urinary catheter infection rates were also being recorded, with financial penalties tied to rates deemed too high. Apparently, someone in the surgical intensive care unit had read a journal article promoting a technique for taping the catheter more rigidly to the leg to keep it from moving, the idea being that less movement of the catheter would mean less infection. Unfortunately, the taping also put constant pressure on the penis in male patients, and over a period of several hours that pressure led to dead tissue on that part of the penis. This is one example of the kind of unintended consequence that is predictable when suboptimization occurs.

A final example of missing context and the problem of suboptimization is the attention paid to sepsis, when a patient develops a possibly life-threatening infection. Sepsis as a diagnosis alone provides no context. Is it an 80-year-old nursing home patient with sepsis from a urinary tract infection, or a 20-year-old motorcycle trauma patient with an open pelvic fracture who has developed sepsis from a necrotizing wound infection? These are very different contexts. Developing a one-size-fits-all treatment for a subprocess like “sepsis” across different contexts can lead to variable outcomes and unintended waste and harm, especially when the outcome of the whole process is not measured. And this is what has happened with the effort to implement a sepsis treatment protocol, the “sepsis bundle,” at essentially every hospital. Although sepsis bundles have reduced the short-term (in-hospital) death rate, the unintended long-term harms documented in a published study include high rates of weakness, cognitive impairment, hospital readmission, and late death.

There are no shortcuts when applying data science to healthcare or any other industry. If the whole context is not defined, if data is analyzed without an understanding of the context from which it was obtained, or if process improvements are applied to a subprocess without attention to the whole, then the outcome will not be ideal. In another industry this might mean an unsatisfied customer, but when the process is a patient care process, the outcome can include unintended and potentially preventable human suffering. We need to learn to apply data science in the context of whole, definable patient processes if we want a sustainable global healthcare system. There is a common saying that “the devil is in the details,” but an understanding of data science requires us to complete the thought: “but an angel can be found in understanding the whole.”
