Without a doubt, the MDS 3.0, like its predecessor MDS 2.0, is the driver for skilled nursing facilities. Some might say it’s driving us “to drink,” but truly it is the measure by which we are paid, monitored and evaluated by regulators and consumers. And as I like to remind people, when you’re holding the multi-paged assessment instrument, you are holding a person in your hands. Their strengths and weaknesses, needs and preferences, hopes and desires are all captured in the MDS.

In 2013, the Office of Inspector General made clear its intentions to “determine whether and the extent to which CMS and the States oversee the accuracy and completeness of MDS data.” This coincides with new Corporate Compliance requirements mandating MDS accuracy prior to federal submission. These realities follow a 2012 federal report asserting that nursing homes erroneously billed Medicare $1.5 billion. These developments all stem from the same source: concerns over MDS data quality.

We investigated this concern with an analysis of our MDS 3.0 database of approximately 11 million records and determined that 70% of submitted assessments had data integrity issues. Furthermore, assessments with at least one issue averaged 2.46 issues each.

The impact of these issues was felt not only in reimbursement but also in publicly reported CMS Quality Measures. Some measures triggered inappropriately, while others that should have triggered did not. Yes, it seems the concern about data quality is valid.

Here is the rub: When I share this information with providers, I usually get two responses: 1) I have the most amazing MDS coordinator, or 2) my MDS software scrubs the data. Therefore, the problem is not with me.

Both responses are shortsighted and perhaps a little naive. Yes, you do have the most amazing MDS coordinator. She is incredible — you’re lucky to have her. But just as it takes a village to raise a child, it takes a team and thoughtful systems to ensure that 100% of MDS assessments are accurate and corporate compliance requirements are met. (Seriously, have you seen the MDS assessment or tried to follow the scheduling rules?) Also, MDS software systems are not necessarily hitting home runs in terms of data quality.

We studied MDS data from 11 software vendors and two suppliers of MDS scrubbing services, after it passed through their scrubbing modules. The “best” had an MDS error rate of 62%; the “worst,” 79%. Another metric studied was Medicare dollars “at risk” and Medicare dollars “not realized.” “At risk” means that the MDS does not support the billed RUG, while “not realized” refers to Medicare monies potentially lost when acuity is not accurately captured. For all vendors studied, the Medicare dollars not realized were a fraction of those “at risk.” Revenue was captured, but not supported. Could these somewhat lopsided assessments be why there is so much attention to MDS data quality?

Open up the hood and look at the engine. Does the data scrubbing module focus only on standard CMS coding and consistency checks and the “RUG” items, or is it more robust, emphasizing clinical quality and risk management? Does the system coach you to find that extra ADL point, or is it fair and impartial, identifying both understated and overstated acuity and challenging claims of acuity? And beyond your software, when you attend MDS training, is it solely focused on the RUG items, or does it cover the entire MDS? These questions should give you a sense of whether you have the systems in place to ensure accurate assessment and avoid the pitfalls associated with inaccurate data.

Steven Littlehale is a gerontological clinical nurse specialist, and EVP and chief clinical officer at PointRight Inc.