A deeper understanding of MCID

Quick little update here, stimulated by a recent conversation with a clinician.  This is one of those things that maybe only really invested people will care about, but I think it's important for all clinicians to at least be aware of.  I'm talking about the concept of the 'minimum clinically important difference' (MCID) of a scale.  No doubt, the relatively recent introduction of this concept has helped with clinical decisions, and it's great to see more and more clinicians getting on board with it.  But I'm realizing that perhaps those of us reporting this statistic haven't been adequately clear about it.

Without going into depth on how it's calculated (in a nutshell: construct a Receiver Operating Characteristic curve, then choose the cut-score on that curve that most accurately discriminates between 'changed' and 'not changed' groups), people should recognize that what is really presented is the average MCID of the scale.  But so far all indications are that this is not a uniform value across the entire length of the scale.

As a concrete example: clinicians can appreciate the patient with neck pain who has rated 50/50 on the NDI (chosen the highest score on every item) over the past month, but then one day comes in and scores two items 4/5 rather than 5/5, for a total score of 48/50.  Conceptually, we all know this is a meaningful event, but it doesn't reach the threshold MCID that's been reported as 5 points out of 50.  So is it not actually important?  Of course it is.  The same can be said for the patient who's almost ready for discharge but has been stuck at 2/50 for the past couple of weeks, maybe due to some niggling headaches and a mild problem driving.  Then one day they come in and score 0.  Not meaningful because it's not 5 points, right?  Of course not; this is clearly meaningful.
And that’s to say nothing of the fact that, in this example, anyone starting at 4 or less out of 50 can’t possibly hit the MCID of 5 points, so can they not improve?
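For readers curious about the mechanics, the ROC-based derivation mentioned above can be sketched in a few lines of code.  This is a simplified illustration only, not the procedure used in any particular published study: it picks the change-score cut-point that maximizes the Youden index (sensitivity + specificity − 1), which is one common way of choosing the "most accurately discriminating" point on the curve, and the patient data below are entirely made up.

```python
def mcid_cutoff(change_scores, changed_flags):
    """Return the change-score threshold that best separates 'changed'
    from 'not changed' patients, using Youden's J (sens + spec - 1)."""
    n_changed = sum(changed_flags)
    n_stable = len(changed_flags) - n_changed
    best_cut, best_j = None, -1.0
    for cut in sorted(set(change_scores)):
        # True positives: 'changed' patients at or above the cut-point.
        tp = sum(1 for s, c in zip(change_scores, changed_flags) if c and s >= cut)
        # True negatives: 'not changed' patients below the cut-point.
        tn = sum(1 for s, c in zip(change_scores, changed_flags) if not c and s < cut)
        j = tp / n_changed + tn / n_stable - 1
        if j > best_j:
            best_j, best_cut = j, cut
    return best_cut

# Hypothetical change scores (points improved on a 0-50 scale) paired
# with each patient's own report of whether they meaningfully improved.
changes = [1, 2, 3, 4, 5, 6, 7, 8, 2, 6, 5, 9]
changed = [0, 0, 0, 0, 1, 1, 1, 1, 0, 1, 1, 1]
print(mcid_cutoff(changes, changed))  # -> 5
```

Note that this returns a single number for the whole sample, which is exactly the problem: the procedure averages over patients starting everywhere on the scale, so the resulting cut-score says nothing about whether change means more at the extremes than in the middle.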

The point of all this is that in every case of which I'm aware, 'meaningful change' is not uniform across the scale – small changes in overall score are much more meaningful when they occur at the extreme (very high or very low) ends of the scale than when they occur in the middle.  I realize this doesn't really help adoption of evidence-informed practice by clinicians, but I figure that since we're still at the beginning of this revolution, we might as well get it right from the start, before it's too entrenched.  This is one of the reasons we're trying to provide apps or spreadsheets for the scales we create or evaluate, in order to facilitate interpretation of scale scores (check the 'clinician resources' section of this website).

All for now, more to come shortly.