Multistate corporations, listen up!
It is not a numbers game. Well, I guess it kind of is. Let's face it: Monitoring your facilities' Five Star ratings is now common practice.
Whether we like it or not, Five Star remains a major driver of relationships with hospitals, accountable care organizations and managed care organizations, to name a few.
But that doesn't mean you have to fall into some of the Five-Star traps unique to multistate organizations that use the ratings for oversight and reward. Here are a few key points that inform our discussion:
- The Five Star Survey domain accounts for the majority of each facility's overall Five Star rating.
- Deficiencies received on the last three cycles of standard and complaint surveys are assigned a specific point value.
- Revisits act as a multiplier, meaning you are penalized further if you don't correct past and current deficiencies.
- Each state has specific survey cut points that correspond to a survey designation of 1, 2, 3, 4 or 5 stars.
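To make the mechanics above concrete, here is a minimal sketch of how survey points might accumulate across a cycle. The per-deficiency point values and revisit multipliers below are illustrative placeholders, not the official figures; the authoritative tables live in CMS's Five-Star Technical Users' Guide.

```python
# Illustrative sketch of Five Star survey-point accumulation.
# Point values and revisit multipliers are placeholders, NOT the
# official CMS figures (see the Five-Star Technical Users' Guide).

# Hypothetical points per deficiency, keyed by scope/severity letter.
DEFICIENCY_POINTS = {
    "D": 4, "E": 8, "F": 16,
    "G": 20, "H": 35, "I": 45,
    "J": 50, "K": 100, "L": 150,
}

# Hypothetical revisit multipliers: repeated revisits layer a growing
# percentage of the cycle's deficiency score on top of that score.
REVISIT_MULTIPLIER = {0: 0.0, 1: 0.0, 2: 0.5, 3: 0.7, 4: 0.85}

def cycle_score(deficiency_letters, revisits):
    """Survey points for one cycle: deficiency points plus a
    revisit penalty proportional to those points."""
    base = sum(DEFICIENCY_POINTS.get(letter, 0) for letter in deficiency_letters)
    penalty = base * REVISIT_MULTIPLIER.get(min(revisits, 4), 0.85)
    return base + penalty

# Ten D-level citations, corrected on the first revisit: 10 * 4 = 40.
print(cycle_score(["D"] * 10, revisits=1))  # 40.0

# Two G-level citations still open at a second revisit: 40 * 1.5 = 60.
print(cycle_score(["G", "G"], revisits=2))  # 60.0
```

The point to notice is the multiplier: the same deficiencies cost more every time a revisit is needed, which is why stale, uncorrected tags keep dragging a rating down.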
This last point is key to our discussion. While the survey and certification process follows federal standards, there are variations between the states. To “level the playing field,” the Centers for Medicare & Medicaid Services uses state-level comparisons to control for this variation. However, a facility in a “survey aggressive state” will, simply by virtue of its location, have a greater likelihood of accumulating more survey points from the deficiencies cited.
What characterizes a survey aggressive state? Simply one in which the mean number of deficiencies cited per skilled nursing facility exceeds the national average and other normative benchmarks.
For example, the nation ended 2013 with an average of 6.86 deficiencies per facility (standard survey and complaint deficiencies combined), or 56 “survey points,” while Rhode Island averaged 2.29 deficiencies (32 survey points) and Delaware 15.36 (95 survey points). Are facilities in Delaware seven times worse than those in Rhode Island? Having visited most of the SNFs in these states, I assure you that there are differences, but not that much!
How about this example:
PointRight identified a facility in Louisiana. We are not picking on this fine state; the same example can be made for any state. Following the Five Star algorithm for counting survey points, the SNF's cumulative survey points were 40. In Louisiana, that 40 equates to a 3-Star survey rating. However, the same number of survey points would earn 1 Star in North Carolina and 5 Stars in Puerto Rico.
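The same-points-different-stars effect can be sketched as a simple lookup. The cut points below are invented purely to reproduce the article's contrast (the real cut points are published by CMS and change over time); only the pattern matters: 40 points lands in a different star band in each state.

```python
# Hypothetical state cut points, invented only to illustrate the
# article's example; real cut points are set by CMS per state.
# Each tuple holds the upper survey-point bound for 5, 4, 3 and
# 2 stars; anything above the last bound is 1 star.
STATE_CUT_POINTS = {
    "Louisiana":      (8, 20, 45, 70),    # 40 falls in the 3-Star band
    "North Carolina": (4, 10, 22, 35),    # 40 exceeds every band: 1 Star
    "Puerto Rico":    (50, 80, 110, 140), # 40 sits in the 5-Star band
}

def survey_stars(state, points):
    """Map cumulative survey points to a 1-5 star survey rating
    using the hypothetical per-state cut points above."""
    for stars, upper in zip((5, 4, 3, 2), STATE_CUT_POINTS[state]):
        if points <= upper:
            return stars
    return 1

for state in STATE_CUT_POINTS:
    print(state, survey_stars(state, 40))
```

Because fewer points are better, a “survey aggressive” state pushes its cut points upward, and an identical facility record translates into more stars there than in a lenient state.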
My advice to my multistate SNF colleagues: Don't compare Five Star Survey Ratings across states, and don't simply count survey points. The ratings and survey points are best used to assess performance within the state in which the facility resides. Then go a little deeper: Know what is driving the final rating and point count.
Let's go back to the example to clarify.
If the facility earned 40 points, what comprised those points? Is it an old survey that was followed by better ones? Are the points coming from the multiple revisits required to achieve correction? Was it 10 “D-level,” two “G-level” or two substandard quality-of-care deficiencies? Did they all originate from complaints rather than the standard survey? In some of these cases, you'd likely move to congratulate, whereas in others, not so much.
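The arithmetic behind that question is worth spelling out. Using the same illustrative point values as before (D = 4, G = 20; again placeholders, not the official CMS figures), two very different survey histories can produce an identical total:

```python
# Illustrative per-deficiency values (NOT official CMS figures):
D_POINTS, G_POINTS = 4, 20

ten_d_level = 10 * D_POINTS  # many low-severity, no-harm citations
two_g_level = 2 * G_POINTS   # only two citations, but at actual-harm level

print(ten_d_level, two_g_level)  # 40 40
```

A 40 built from ten D-level tags tells a very different quality story than a 40 built from two actual-harm citations, which is exactly why the raw total is not enough.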
And finally, this facility was in Louisiana. Do you think its survey performance would have been identical in North Carolina or Puerto Rico? Would the same tags have been awarded?
In any case, having a deeper understanding about the Five-Star system helps you turn this data into a valuable tool for quality improvement.