Many years ago, a compliance officer said to me, “I’m not interested in averages; I want to know about that one facility that’s going to get us into trouble.”

While that comment came from the compliance officer of a multifacility organization working diligently to avoid a corporate integrity agreement, it has broad relevance from the boardroom to the bedside and everywhere in between, regardless of your organization’s size or your job: averages, like Nazareth’s perspective on love, can be helpful and guiding, but they can also mislead, hide opportunity and ultimately hurt you.

This particular memory came to mind last week as I was preparing a presentation for the California Association of Health Facilities (CAHF) on using data and QAPI principles to improve processes and outcomes and reduce rehospitalizations. We examined a case study of a 14-SNF corporation with an overall rehospitalization rate of 12% and a very competitive percentile ranking within its markets. Bravo!

OK, hold the “Bravo” for just a moment. Two of the 14 SNFs had rates of 19% and 20%, not so good, particularly in their markets and in comparison to the corporate average.

Further scrutiny of these two facilities revealed that they were actually solid, doing a fine job with overall rehospitalization prevention, but when it came to diabetes management, they were dropping the ball. Their return-to-hospital rates for people with diabetes were in the low 20% range. However, when you factored those diabetic patients out, their rehospitalization rates were more in line with the corporate average of 12%. (Wouldn’t it be great if we could all just “factor those cases out” to make our numbers look better?) But wait, there’s more.

In both of those outlier SNFs, patients who returned to the hospital with a diagnosis of diabetes did so an average of 12 days after admission. Rehospitalization that occurs almost two weeks into a stay suggests problems that develop during the stay, and points toward one set of quality improvement activities and changes in practice. But here again, averages get us into trouble. Only three of the returning patients had a very high number of days in the SNF before going to the hospital. The rest of the diabetes cohort had far fewer SNF days before hospitalization, closer to two. The average once again pointed me in one direction, albeit the wrong one, obscuring the opportunity by leading me to focus on one set of remedies when in fact I should have focused more on the hospital-to-SNF handoff, including changes in diet and insulin dosing.
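
To see how a few long stays can drag the average toward the two-week mark, here is a minimal sketch with purely illustrative numbers; the cohort size and day values are assumptions for the example, not the actual case data:

    from statistics import mean

    # Hypothetical days in the SNF before rehospitalization for a small
    # diabetes cohort: seven early returns plus three long stays.
    # Values are illustrative only.
    days_before_return = [2, 2, 2, 2, 3, 3, 3, 34, 35, 36]

    print(mean(days_before_return))  # 12.2 days: "almost two weeks," even though
                                     # 7 of the 10 patients returned within 3 days

The average describes almost none of the actual patients, which is exactly how it can steer you toward the wrong remedy.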

In this case study, averages confused or hid the real opportunity. (The case study, incidentally, was based on real data, real facilities and a real corporation, all appropriately stripped of identification.) What’s more, in our new value-based world, where clinical and financial outcomes converge, this averaging has strong implications for many stakeholders. Echoing my earlier “boardroom to bedside” comment, a few outliers can really make or break your assessment of performance, regardless of whether your ultimate goal is clinical or financial in nature.

So, what can you do about this? Abandon averages? No way. Averages offer you a quick view of the entire organization’s, portfolio’s or patient’s/resident’s status. But a few additional statistics or data points can add significant value to your analysis and improvement plans. Here are a few guidelines to bring greater precision to your analysis:

  1. Do outcomes cluster around your average? In other words, is your average painting a picture in which most SNFs, patients/residents or staff behave in a “typical” manner, with little variation (that is, very close to the mean)? This implies a low standard deviation. Standard deviation is a number that tells you how spread out a group’s measurements are from the mean. If the standard deviation is low, you can feel more confident drawing conclusions from the mean. If it is high, you need additional data points.

  2. What happens to your average (the mean value) when you remove the outliers? If the story changes considerably, consider another measure of your typical performance: the median. The median is the middle value in a list of numbers; while the mean is sensitive to outliers, the median is much less so. (The short sketch after this list works through the standard deviation, median and outlier checks on a hypothetical set of facility rates.)

  3. If outliers are clinically meaningful, what do they represent? Are there characteristics they share? For example, you may determine that they are in the same survey district, receive patients from the same referral hospital, report to the same person, or use a formulary or vendor that differs from all the others.

  4. Know your data! What does it represent beyond the short name or label attached to it? Just think about any CMS Quality Measure’s “short name.” It’s not until you read the technical manual that you discover what the measure does and does not represent.
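
For those who like to see the arithmetic, here is a minimal sketch of guidelines 1 and 2 in Python, using hypothetical facility rates loosely modeled on the case study (two outliers at 19% and 20%, the rest near the corporate average); the specific numbers are illustrative assumptions, not the actual portfolio data:

    from statistics import mean, median, stdev

    # Hypothetical rehospitalization rates (%) for a 14-SNF portfolio:
    # most facilities near the corporate average, two outliers at 19% and 20%.
    # Illustrative numbers only.
    rates = [10, 10, 10, 10, 11, 11, 11, 11, 11, 11, 11, 12, 19, 20]

    print(f"mean:   {mean(rates):.1f}")    # 12.0 -- the corporate average
    print(f"stdev:  {stdev(rates):.1f}")   # 3.2  -- spread around the mean (guideline 1)
    print(f"median: {median(rates):.1f}")  # 11.0 -- middle value, less swayed by outliers

    # Guideline 2: set aside the two outliers and see how much the story changes.
    without_outliers = [r for r in rates if r < 19]
    print(f"mean without outliers: {mean(without_outliers):.1f}")  # 10.8

Even this small amount of extra arithmetic tells you whether a 12% average describes a tight cluster of similar facilities or papers over a couple of buildings that are going to get you into trouble.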

Averages can help or hinder your evaluation of performance. A few additional pieces of data can really support your efforts to see opportunity clearly and manage risk. When I think back to the sage comment that compliance officer made all those years ago, I wonder if she realized how formative it would be for me. Just like love, averages can sometimes hurt, but, also just like love, not always.

Steven Littlehale is a gerontological clinical nurse specialist, and executive vice president and chief clinical officer at PointRight Inc.