What Clinical Trials Can, and Cannot, Tell Us About Treatment

There seems to be a growing disconnect between clinical research and clinical practice. I continue to see policies being made and online battles being waged based on the results of randomized trials. 'This works, but that doesn't work', and here's the evidence, that sort of thing. Anyone who follows rehab trials in particular will recognize that it's much more common to find evidence that a particular intervention, be it manual therapy, specific exercise, acupuncture, education, or an electrotherapeutic modality, is either not effective or not very effective (overall small effect sizes). My experience from talking with clinicians is that these kinds of findings are at odds with clinical experience - many clinicians will know that if you find the right patient, a certain intervention approach can be quite effective despite the results of clinical trials stating otherwise. And this represents a serious problem - in many cases clinicians are forced to practice either with strict adherence to best evidence, or with strict adherence to patient-centred care, but not both. This is a tension I see many struggling to resolve, so let's dig into it a little further.

Let's start with a short description of the philosophy behind the randomized controlled trial. RCTs are the favoured design for those attempting to ascertain cause-and-effect relationships, meaning the design is driven from largely positivist or perhaps post-positivist epistemic positions. That's a bit of a mouthful so a brief explanation seems warranted: positivism is focused on understanding knowledge and reality by looking for universal constants or laws. So traditional physics, for example, would be rooted in positivistic thinking in that it assumes we can create laws that are universally applicable. Newton's laws of motion would be good examples - if you remember back to your days of learning physics you'll likely remember being given a problem and having to find a single correct solution by applying formulae to sort it out. In such a case there would be one correct answer and an infinite number of incorrect answers - drop an apple, the apple falls, and you can calculate the precise time it will hit the ground if you can quantify all the important variables (gravity, wind speed, mass, etc.). Post-positivism softens that position a little, representing a position where we can never really prove cause-and-effect, but if we disprove enough alternative hypotheses, then we get closer and closer to the right one. A classic example would be that of testing the hypothesis that all swans are white. Post-positivists would argue that it would be impossible to sample all swans in the world, so we usually recruit what we hope is a representative sample, then determine their colour. If our sample is all white, we have failed to falsify the hypothesis, and we state that we are confident, to within a particular level, that all swans are white (recognizing that we did not, in fact, sample all swans).
This is where the concept of probability (p) values and confidence limits comes from - a p value can never be exactly zero (we can never be 100% confident), but we generally accept that 95% confidence is pretty confident and are ready to reject the null hypothesis. Of course you can understand the fragility of this way of knowing - all it would take is a single black swan to disprove your hypothesis. So, it is much easier to disprove a causal relationship than it is to prove one in post-positivist thinking, but this is where most of our quantitative research comes from. I won't go into the other end of the epistemic spectrum where more interpretive or constructivist philosophies live, because it's more than we need right now. However, suffice it to say that p values and confidence intervals are not the only means of creating and understanding knowledge, and perhaps that's our first important takeaway as far as what RCTs can and cannot tell us.
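If you like to see the arithmetic behind that fragility, here's a toy calculation (invented numbers, Python) of how many all-white swans we would need to observe before concluding, at the conventional 95% level, that black swans make up less than 5% of the population:

```python
import math

def samples_needed(rare_rate=0.05, alpha=0.05):
    # Smallest n such that a run of n all-white swans would be
    # surprising (probability < alpha) if black swans actually
    # occurred at a frequency of rare_rate.
    return math.ceil(math.log(alpha) / math.log(1 - rare_rate))

print(samples_needed())  # 59 white swans in a row
```

And yet even after 59 straight white swans, observing a single black one falsifies the hypothesis outright - which is exactly the asymmetry between proving and disproving described above.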

There are many useful elements of a good quality blinded (or double- or triple-blind) RCT. If done well they reduce observer or subject bias, they should effectively eliminate several confounders, and they provide support for a number of the Bradford Hill criteria for cause-and-effect including strength of relationship, dose-response associations, and reversibility. It's the confounder piece that I want to focus on here because I think it's critical to understand. If a study is designed with a rigorous randomization protocol, then all the individual variables of humans that make them very messy, and could otherwise interfere with the researchers' ability to interpret their results, should be equally distributed between the two or more arms of the study. In doing so, researchers can usually safely ignore the potential confounding effects of individual variations in people that may influence their response to the intervention under study. Most good study descriptions that conform to the CONSORT reporting statement will include a table of participant characteristics comparing some key variables between the groups. These are usually age and sex, with others dependent on the nature of the study. If those aren't different, the researchers are generally happy that their results aren't being confounded too much by other person-level variables. But of course, it would be highly unlikely that age and sex are the only variables that affect outcome. Where confounders are known and can't be adequately randomized, another strategy is to simply exclude people who have them. Pregnant women, as an example, are commonly excluded from rehab trials, as are those with neuromuscular disorders or complex comorbid conditions, unless of course pregnancy, neurological or complex conditions are the ones under study.

Both randomization and exclusion should be of particular concern to clinicians. While good randomization (and keep in mind there is such a thing as bad randomization) should allow the researchers to effectively ignore those bothersome individual differences that would otherwise affect response to treatment, clinicians CANNOT ignore them! The results drawn from large group means represent only the average response from the average person in each group. Who is this average patient? Are you average? Am I average? What about those people who show more extreme treatment effects? Who are those on the very low end that didn't respond at all (or maybe even got worse), or on the very high end who showed a very strong response? It would be nice for clinicians to know more about those people, even if they only represent 10% of the sample combined, because that's 1 in 10 patients in front of them that may in fact be expected to react really well or really poorly to the treatment. Most research studies are of course not adequately powered to conduct such sub-analyses so we never really learn about those people. Exclusion criteria should also be closely reviewed, as most researchers will do their best to recruit a sample that is as 'clean' as possible, people who are otherwise completely healthy with the exception of the one condition under study. Again, clinicians will likely recognize that such people rarely come through their clinic door, so these criteria become critical to understanding to whom the results apply (and to whom they do not).
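To make the 'average patient' problem concrete, here's a deliberately invented example showing how a modest group mean can coexist with a subgroup of strong responders:

```python
# Invented change scores on a 0-100 scale for one trial arm:
non_responders = [0] * 80       # 80% get no benefit
strong_responders = [20] * 10   # 10% improve dramatically
worse_off = [-5] * 10           # 10% get slightly worse

arm = non_responders + strong_responders + worse_off
mean_effect = sum(arm) / len(arm)
print(mean_effect)  # 1.5 - a "small effect size" on average,
                    # yet 1 in 10 patients improved markedly
```

A trial reporting only that 1.5-point mean would conclude the treatment barely works, while the clinician who happens to find those strong responders would reasonably conclude the opposite.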

There are important considerations here, as the results of RCTs are often used to influence clinical practice. The recent opioid guidelines are a good example - while we now recognize that all of us can be grouped into different classes of opioid metabolism based on variants of certain cytochrome P450 genes, most RCTs of which I'm aware do not break results down by metabolizer type. Rather, all we get are group averages, and policy makers then draw lines that should meet the 'average' need but likely don't consider those at the extremes. The same can most certainly be said for rehab research, though here we're even more in the dark about the individual differences that affect response to treatment.

So this is where the nature of true evidence-based practice becomes critical - clinicians must know the current state of good evidence, then be able to recognize patterns in the patient in front of them to determine if the results apply (based on experience or an intimate knowledge of all research), and finally determine if the intended treatment is in alignment with this patient's personal values. This is a pretty big ask - it's no wonder many surveys reveal that clinicians don't often practice in strict accordance with practice guidelines.

So perhaps before you get into your next 'this works or that works or that doesn't work' online flame war, you'll take a moment to consider something I continue to believe: that nothing works for everyone, but everything will work for someone. Heck, even a good punch in the ribs might be just the tonic a particular person needs to get on the right path, though I doubt you'll find an RCT on that any time soon (and perhaps you shouldn't). The point is that practice needs to be focused on applying the right treatment for the right person at the right time, and that is a very tall order but starts with sound clinical reasoning.

For those who are interested, here's my little infographic on N of 1 studies, which are probably due to be elevated to their proper place in the pantheon of healthcare research methodologies. In fact, I foresee a future where we move in two directions from the RCT - both down towards the individual N-of-1 level, and then use those data to feed up to the big data neural networks that can in fact consider the messiness of human nature when fed enough data.

Vision of the Future of PT, Part 2 - reply to comments and thoughts on tech

Hi everyone, and thanks to all who have reached out with their thoughts and comments, whether here on this site, on YouTube, Twitter, or personally.  On balance the comments have been fruitful and constructive, and I’m quite pleased to see engagement from the broader community.  I just wanted to take a moment to reply to some of them here on this site as the ability to have meaningful conversation through Twitter or YouTube is limited.

For those who haven’t yet seen my YouTube follow-up post (linked above), please view it for a bit more context.  In short, in none of these predictions have I tried to make value judgments about whether they’re good or bad.  I’ll leave that to the broader professional community – these are predictions based on trends I’m seeing, the purpose of sharing them being to stimulate thought on where we’re going, whether we’re happy with where we’re going, and if not, what we can be doing now to control where we’re going.

Glenn Ruscoe raises an important point in his comment below – the future will not affect PT in isolation; every other professional group is, or ought to be, planning for a very different practice in the not-too-distant future.  Many futurists believe we are currently living in the midst of a third industrial revolution, a ‘digital revolution’ you might say, that probably dates back to the widespread accessibility of the internet (late 1980s to early 1990s), runs through to now, and will likely continue for another decade or two before progress flattens slightly for a while until the next one.  If that’s in fact true, then every profession will feel the impact.  Think of what the internet and automation have already done to librarians (I can’t remember the last time I actually went to the physical libraries on campus to get a book), accountants (thanks to products like TurboTax, I do my own taxes), auditors, paralegals and the like.  Lawyers, teachers, doctors, dentists, physios, engineers – all should be planning for a future where we are no longer the gatekeepers of a highly specialized and private pool of knowledge.  If I’m being quite frank, I actually think teachers and doctors are probably the next ones to be affected in a big way.  Like it or not, much of medical practice has been reduced to checklists and algorithms – the kind of routinized activities that machines are far better suited to perform.  Offloading those tasks should not only make things like medication errors or misdiagnoses less common, but should free up the professionals themselves to more completely live up to their mandates.  So, good call Glenn – let’s recognize that we’re just a small cog in a much larger machine, but by jumping ahead of the curve we may be able to manage challenges and seize opportunities before others are able to mobilize.

Another question from YouTube came from an ex-student named Praneeth Ellanti who asks “how do you think rise of machines and automation of labour jobs will affect our practice/caseload?” – which makes me realize that I probably didn’t do a great job of describing some of the tech that I see changing PT practice in the near future (will try to avoid getting too far into the distant future as that becomes a far more nebulous exercise):

1.  Big data, machine learning, and artificial / mixed intelligence: Probably the best examples of this that spring immediately to mind are the online data collection and interpretation platforms, with the good folks over at Focus on Therapeutic Outcomes (FOTO) leading the way in PT right now.  Using machine learning algorithms applied to a very large and growing database of clinical encounters, the FOTO platform is now able to predict things like an individual patient’s most likely outcome, how long they will take to achieve it, and how many visits will need to happen over that timeframe, with freakish accuracy.  Whether it be FOTO or another provider, think of what will happen when they get into things like screening and differential diagnosis – now the patient will answer a computer’s questions in the waiting room (or, more likely, at home) and before you even see the person you will have an indication of the most likely diagnosis, prognosis, duration and number of treatments, and eventually (as the database grows) treatment recommendations.  In fact, if you can buy this vision, then it shouldn’t be a stretch to imagine that consumers can also get this information, and make their own decisions about whether they even need to come see you.  Again, I’m avoiding value judgments on this; if you want to know my personal views on whether this sort of thing is good or bad you’ll have to contact me directly for now.

2.  Human-machine interfaces: We’re already seeing the rapid growth of a new field of ‘neuroprosthetics’, where artificial limbs are being connected directly to the brain.  Whether it be bionics, exoskeletons, or injectable nanobots that repair damaged nerves and tissues, the fact is that recipients of these technologies will require some form of physical rehabilitation.  It’s an interesting question, isn’t it?  Who are the stewards of rehabilitation for augmented humans?  There’s again a broader ethical question at play in many of these, but I’ll leave those for now.  Of interest, if you watch the video on artificial limbs that can also feel (linked above), you’ll see the patients using an adapted version of a Myo armband, which was developed in Waterloo, Ontario (45 mins down the road from me) and can be purchased for about $100 – I have one sitting on my desk.

3.  Virtual, Augmented, and Mixed reality:  There is a pile of potential with these technologies, in terms of improving accessibility to care providers, creating more engaging and more effective interventions, and optimizing patient evaluation and diagnosis.  I feel like a blog post on potential uses of VR in rehab is forthcoming.  For now though, get your creative juices flowing with this video on holoportation.  And then blow your mind further with this story: a couple of months back I spoke with a researcher from Alberta who has created a system whereby he wears a glove with haptic feedback, his target (‘patient’) wears a suit with similar haptics built in, and he can actually reach out and touch a patient who is thousands of miles away – and on the other end the patient feels the touch.  Don’t stop dreaming folks.

4.  3D printing: As a clinician I spent hours creating customized hand splints for patients with various upper limb disorders, usually by starting with a big sheet of thermoplastic, trimming, shaping, and re-trimming until I hit on a splint that felt comfortable for the patient to wear.  This is just one example of where 3D scanning and printing could make those processes not only easier (and eventually cheaper), but something that a consumer can do themselves, from home.  As 3D printing technology improves, we’ll see use of different materials (already people are printing metal, concrete, and food) that could allow consumers to create many of the custom orthotic-type products for which they currently rely on professionals.

5.  Gaming systems:  For several years now, I’ve had an unbelievably advanced piece of 3D full-body motion tracking hardware sitting on top of my living room television in the form of a Microsoft Kinect camera.  Prior to that, I had an accurate, wireless, battery-powered force platform connected to a Nintendo Wii.  Now, newer systems (Xbox One, PlayStation 4) are including virtual reality peripherals which, in addition to body-capture cameras, are turning the average living room into something that a lab-based researcher even 10 years ago would have been thrilled to have in a high-end laboratory.  As games become more realistic, more engaging, and connected to the ‘cloud’, it should be no stretch (it has already begun) to think about how rehab providers will be prescribing video games to their patients and receiving feedback on performance metrics in a way not previously seen.  This raises a rather interesting question again: Who among us is ready to prescribe games?  Do we know anything about the correct parameters to prescribe?  What game (and do we now have to include knowledge of available video games as part of our education)?  For how long?  At what intensity?  These are actually rather important questions – when the Nintendo Wii first came out it was seen as a boon for rehab providers, and I know of several clinics that quite excitedly bought one and had it connected to a TV in the exercise area.  And there it sat, because while the concept was right, it turned out none of us had any clue as to how to properly prescribe a video game for rehab.  This feels like a field ripe for exploration by an enterprising clinician or graduate student.

6.  The quantified human:  While the fitness-tracking wearables market seems to be cooling, and smartwatches have yet to really catch on, this has not stopped progress towards the development of new technologies to quantify all aspects of human health, including physical activity, cardiovascular health, blood sugar, diet and nutrition, and sleep patterns among others.  My take on this market is that it’s still trying to figure itself out, and that no one’s really hit on the right form factor yet (for what it’s worth, I’ve now gone through three Fitbits – lost one and washed two), but eventually we’ll get to a point where we can reliably and continuously track several aspects of our health, and then provide those data, when asked, to our healthcare providers.  This might even be in the form of injectable accelerometers or other sensors that live under our skin.  Regardless, think of the potential that this constant, real-time monitoring of health could have on your interactions with patients.  Have you been doing your exercise?  Oh yes, I see you have.  And your sleep has improved, and your resting heart rate is down.  But oh, what was this blip?

These are just simple examples, of course; the really interesting questions come into play when we think about connecting all of the technologies I’ve described (and several more yet to come): your injected health sensor sends information to a DeepMind-type machine learning algorithm, which provides you with real-time feedback about your health on the heads-up display of your AR contact lens, and when something seems amiss provides you with likely diagnoses, suggested actions to take, and maybe a nudge to get off the couch and play some games or go out for a walk.  All of this tech currently exists, albeit in various stages of development.  If Moore’s law continues, then 10-15 years is not a stretch for many of these.

One final note here – oftentimes when people hear about ‘the rise of the machines’, they (depending on their age) immediately conjure images of genocidal robots as shown in mass media – HAL 9000, Skynet, or Ultron.  I won’t say I’m not concerned at all about AI getting out of hand – and people far more clever than I am are currently working to put important boundaries on how it gets developed.  But I’m far more optimistic about a future where humans and machines work together to solve problems faster, better, and more effectively than we ever could before.  If I may nerd out for a moment and draw an analogy from one of my favourite movie franchises: I envision a future where I’m less concerned about Ultron and more excited about J.A.R.V.I.S.


A Vision of the Future of Physical Therapy

Have been toying with writing this post for some time now.  Even started writing it a few times, then deleted it, started over, and deleted that one (damn you maladaptive perfectionism!).  But this time I'm going in hot and will attempt to get these ideas down on the screen.  Full disclosure: this has been written in bits and pieces as the mood strikes me, but hopefully it flows well enough to be interesting enough that it sparks a little discussion.

So the question is this: what does physical therapy look like in 10-15 years?  Of course the answer is "no one knows".  I'm going to offer a set of perspectives that come from someone who thinks a lot about these kinds of things, ear to the ground tea-leaf reading type stuff really.  But something could happen tomorrow that renders many of these predictions completely moot.  Such is the life of someone who fancies himself a futurist.  An additional caveat - while I do my best to keep an eye to global trends, I can't possibly shake my Canada-centric biases, so some of these predictions may be applicable to a relatively small swath of the physical therapy profession (e.g. those in Ontario, Canada).

So, without further ado, here come my predictions for Physical Therapy over the next 10-15 years:

1.  Cloud computing, Artificial Intelligence, the Quantified Self, and other tech that's not even on our radar yet will be a huge thing.

So this one may be an easy prediction to make.  Technology will advance - bold prediction there Dave.  But here's where I'm not sure PTs are entirely prepared - in their book 'The Future of the Professions', Richard and Daniel Susskind make a very convincing argument that many of today's professions - PTs, doctors, lawyers, teachers, etc. - will have their work radically altered by the increase in ability and accessibility of adaptable and increasingly 'smart' machines.  Already roles like the paralegal are being taken over by computers that can sift through far more information far more quickly, for far less money, while making fewer errors than humans.  Clever programmers are harnessing services such as Google's Cloud Vision API to create smart systems that can analyze an x-ray or MRI and spot very minute problems that may have otherwise gone unnoticed by humans, then go out to the web and collate all the available information on what it may mean and what the person ought to do about it (radiologists should be scared).  Wearable devices now mean, quite literally for the first time in history, that clients will start coming into physical therapy visits with a quantified, objective record of their daily activity habits over the past X months and say "there's your outcome, right there - when I'm doing that again, I'll be satisfied".  While PT is likely to be a slower target of advanced computing than some of the other higher-profile professions, it would be naive to think we're immune.  Here at Western we're harnessing accelerometers, such as those embedded in almost every piece of tech we own, to identify small deviations from normal movement patterns that would go unnoticed by human observers, helping to identify and even diagnose joint pathology.
Open-source platforms like Kinovea allow people to video record themselves performing a movement, then go back and analyze it with very user-friendly and highly accurate biomechanical tools, for free, from their kitchen.  Then go ask WebMD what they should do about it.  Looking forward a few years, we should expect to see augmented reality systems where you as clinician put on a headset that overlays digitized holograms on the real world and conducts these kinds of biomechanical movement analyses in real time on a heads-up display.  And these are just examples - the point is that physical therapists need to be leading in this field: 3D printing; augmented, virtual, and mixed reality; artificial intelligence; cloud computing and big data; wearables and the quantified self.  You know who's making the really big innovations in rehab right now?  Computer scientists and biomedical engineers, many of whom have never touched a patient.  PT needs to start thinking now about where it fits into this technologically-focused future - are you ready to prescribe virtual-reality-based treatment?  How about prescriptions that include home gaming platforms?  Many futurists believe we will look back at the period we're currently in as the third industrial revolution - are PTs leading this revolution, or getting dragged along behind?
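To give a flavour of how simple the core of such an approach can be, here's a sketch (invented numbers, not our actual pipeline) of flagging a patient's movement feature against a normative sample:

```python
import statistics

# Hypothetical peak accelerations (m/s^2) from a normative sample
normative = [11.8, 12.1, 12.4, 11.9, 12.2, 12.0, 12.3, 11.7]

def z_score(value, sample):
    # How many standard deviations this value sits from the norm
    return (value - statistics.mean(sample)) / statistics.stdev(sample)

patient_value = 10.5  # invented reading from a patient's wearable
z = z_score(patient_value, normative)
flagged = abs(z) > 2.0  # a common, if arbitrary, cutoff
print(round(z, 2), flagged)
```

Real systems use far richer features and models than a single z-score, of course, but the principle is the same: the machine compares against norms tirelessly, and at a resolution no human observer can match.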

2.  The PT of the future will be working with augmented humans

Keeping with the tech theme: beyond augmented rehabilitation using peripheral devices, physios of the future will find themselves making rehab decisions for augmented people.  Advances in nanotechnology, smart 'skins', and brain-machine interfaces are already allowing people with amputations to regain a limb and plug it directly into their brain, and promise injectable nanobots that seek out and repair damaged cortical or spinal cord neurons, sight for the blind, and hearing for the deaf.  The potential seems endless - I can't tell right now where it will end.  Science is identifying the key genetic switches for ageing, and it's very likely that within the career lifetime of people reading this post, humans will be living to a mean age of 120+ years.  What does that mean for rehab?  What kinds of new challenges will this new cohort of, uh, 'ultra-geriatric?' people present?  For that matter - when you're 120, what kinds of things will you be expecting of your care providers?  And of course this is to say nothing of the potential of gene-splicing/hacking and genetic programming approaches like CRISPR/Cas9, or whatever the next iteration of that will be - what genetically-mediated conditions will be extinct in the future?  What new ones will pop up as a result?

3.  Your hours and dollars spent on mastering hands-on 'technical' skills for managing common neuromusculoskeletal disorders are going to lose their value.

I fully realize I'm stabbing directly at the very heart of what physical therapy has been for the past 100 years, and it's this section in particular that I've deleted and re-written several times.  But I can't ignore this trend, and every time I think I'm overreacting, I see something that reminds me once again that we are likely to start losing out to cheaper allied health professionals who are generally able to perform most of the same maneuvers that we do and, at least according to whatever empirical evidence exists (which is very little), get similar outcomes to ours.  I'm thinking about clinical kinesiologists (there's that Canadian bias), athletic therapists, physical therapy assistants, registered massage therapists and new allied professions that likely don't currently exist.  There are already training programs for clinical kinesiologists here in Canada that are teaching them to perform many of the manual therapy techniques that have traditionally been the exclusive domain of the physical therapist and perhaps the chiropractor.  And it's unlikely that anyone's going to find much funding support to conduct PT-versus-other-provider RCTs to determine whether one gets better outcomes than the other.  In the absence of evidence to support one over the other, and with simple dollars and cents pushing the decision, I don't see how we can ignore the fact that the days of PTs pushing on spines and watching people exercise are likely nearing an end.  So where does that leave PT?  Well, if we choose to resist this change, dig our heels in and stomp our feet, we may delay this shift for a while but eventually we'll lose.  Perhaps an alternative, more fruitful endeavour is to create the future that we want - what if PTs instead become the primary care providers, diagnosticians and gatekeepers to the rest of healthcare services for problems related to mobility?
What if we continue to possess an advanced level of technical skill for highly complex patient problems, but leave the routine stuff to those who can do it cheaper?  What would a future look like where you walk into your 'Family Health Clinic' and turn right to see your family physician if you've got a weird rash or the sniffles, and turn left to see your family physiotherapist if you've got a mobility problem?  There may well be other approaches - I'm just predicting the future, not telling anyone how to prepare for it - but I feel like we need to be strategically planning for this future now.

4.  Regardless of whether no. 2 comes to pass, we'll likely see fewer neck and low back problems in the future.

Not because they won't exist, but because I continue to see policy makers attempting to legislate these problems away.  There has long been a pervasive belief within the world of healthcare funders that things like chronic neck and low back pain are at least partly due to overly generous compensation paradigms, and they point to studies showing that, for example, chronic whiplash symptoms don't exist in places like Greece or Lithuania where compensation for WAD is neither available nor expected (I've been involved in several such conversations).  Perhaps the most grievous and explicit example of this thinking is a new set of WAD guidelines from the UK government that state, and I quote: "...measures to disincentive minor soft tissue injury claims & arrangement for personal injury...".  So the UK government appears to be quite openly stating that they feel the problem with chronic WAD is that the claims process is too easy and too generous.  A similar approach is about to be launched by Ontario's provincial insurance body, one that essentially amounts to a very narrow 'menu' of treatment options for acute WAD from which clinicians can select, and a very small funding pot - each of which will be better delivered by professionals other than physical therapists.  A few years back Ontario's Workplace Safety and Insurance Board (WSIB) introduced a series of Programs of Care (PoCs) for common workplace injuries including low back pain and upper extremity pain - while I find these PoCs more acceptable than the upcoming WAD guidelines, they continue to restrict the type and amount of treatment that patients can receive, and if we look at trends, there's no indication this will reverse any time soon.  It won't be long before it's simply not viable, at least in Ontario, for PTs to continue to treat WAD and LBP, which will end up going to other professionals who can provide care for less money.
Of course, much of this may be made moot by, for example, autonomous (self-driving) vehicles which, once everyone on the road is in one, should remove the human error component of driving and result in far fewer MVC-related traumas.  That said, it wouldn't surprise me at all if a different type of trauma increased (perhaps car-pedestrian?) in the short term, but eventually self-driving vehicles should make our roads safer and result in fewer whiplash problems in the first place.

5.  Training programs in Canada will have to radically change

I fully accept my Canadian bias on this one, and I am cognizant of painting all programs with the same broad brush.  However, the writing is quite clearly on the wall that training programs will need to be reformed.  The easiest prediction to make here is that the DPT, or some other doctoral-level training credential, will become the minimum standard for entry into practice, as it has south of the border.  10-15 years may be a little quick for that to change, knowing the pace of change at all levels of administration and legislation that would be required, but it will happen.  And this isn't only going to be a response to the US push - as an educator myself I'm acutely aware that our current 2-year MPT programs are not going to continue to be sufficient as the profession evolves and as new controlled acts (e.g. ordering diagnostic imaging, prescribing certain meds, providing a diagnosis) fall within the scope of PT practice.  Again my language here is Ontario-centric, but I know similar shifts are happening elsewhere.  Additional skills that may not be adequately addressed in existing training programs include interdisciplinary communication, advocacy, working with technology, and so on.  So, like it or not, prepare for 3-year DPT-type programs from Canadian universities (Canadian universities will not offer 3-year master's level programs, so the only option will be to transition to a doctoral level, or go back to bachelor's level, which is highly unlikely).  

6.  The process of critically appraising scientific literature will change dramatically once all the data are known

Think about this - most healthcare providers that I know view the world through a post-positivistic epistemic lens, which (in a very brief nutshell) says that we can never prove a hypothesis because it would be impossible to capture all of the data in the world to test it.  Rather, the best we can do is disprove competing hypotheses ('null' hypotheses) such that we become more and more confident that the data we have are accurate representations of all possible data on a question that exist.  We use p values, confidence intervals, standard error, that sort of thing, to provide a quantitative metric of how confident we are that our data would hold true under different conditions.  But what if we could capture LITERALLY ALL THE DATA on a question, everywhere?  There would be no further need for estimating population-level data - we'd have it.  Cloud computing, the internet of things, big data, machine learning, and the quantified self mean that, at least in health research, it's very likely that research in the next 10-15 years will no longer consist of RCTs or exploratory cohort studies of a selected subset of a population, but of the entire population itself.  Want to know the mean resting heart rate of every 20-25 year old in the US?  Here you go, no need to estimate from a smaller subset.  The world?  Might take a little longer, but it will get there.  And then what about knowledge syntheses (e.g. systematic reviews/meta-analyses)?  We'll have bots for that.  Program a bot (there are already plenty of open-source ways to do this) that trawls the internet for new evidence on a topic, applies a set of algorithms to determine the quality of the research (like it or not, much of what we do in an SR is algorithmic now), and then incorporates it into an existing knowledge pool, so you no longer have to wait for a new systematic review on a topic - it's right there, up to date, all the time.  10 years is probably too long on this one, I give it 5 tops.
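To make the bot idea concrete, here's a minimal sketch of what such a pipeline might look like.  To be clear, everything in it - the Study fields, the appraisal rules, the weights, the threshold - is hypothetical illustration, not any existing tool:

```python
from dataclasses import dataclass

@dataclass
class Study:
    title: str
    randomized: bool
    blinded: bool
    n: int            # sample size
    effect_size: float

def quality_score(study: Study) -> float:
    """Toy algorithmic appraisal: the criteria and weights are arbitrary
    placeholders for whatever rules a real appraisal algorithm would use."""
    score = 0.0
    if study.randomized:
        score += 0.4
    if study.blinded:
        score += 0.3
    if study.n >= 100:
        score += 0.3
    return score

def update_pool(pool: list, study: Study, threshold: float = 0.5) -> list:
    """Add a newly trawled study to the living evidence pool only if it
    passes the quality threshold."""
    if quality_score(study) >= threshold:
        pool.append(study)
    return pool

def pooled_effect(pool: list) -> float:
    """A 'living' pooled effect estimate, naively weighted by sample size."""
    total_n = sum(s.n for s in pool)
    return sum(s.effect_size * s.n for s in pool) / total_n
```

Each time the trawler finds a new study, update_pool runs and the 'review' is instantly current - no waiting for the next published systematic review.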

A Note on Reacting to Change

I fully realize that some of these predictions may seem undesirable at first blush, and in some cases may even cause an emotional reaction in some readers.  And, let's be clear, I fully realize that I may be dead wrong on every one of these.  But short of a nuclear or zombie apocalypse (or both) I'm fairly confident that most of these will come to pass in the next 10-15 years or so (granted, it will be several decades before we're all in self-driving cars).  

As someone who's always been interested in the future and the question of 'what next?', I've learned a few things about how people react when confronted with change.  In fact, in the spirit of Elizabeth Kubler-Ross and the 'Change Curve', I've fashioned my own stages of accepting change.  They are:

  • Fear it: An actual emotional aversive response where people fear and try to escape from the potential change.
  • Actively resist it: Less fear and more belligerence, or perhaps some form of tenacious nostalgia; people here actively speak out against the change, attempt to undo it, and discourage others from embracing it.
  • Ignore it: Neither fear nor resistance, this is more a passive 'head in the sand' approach - if I don't see it, it doesn't exist sort of thing.  Ambivalence would fit here.
  • Accept it: Here, either through force (e.g. policy change) or personal intrinsic motivation, people at least accept and embrace the change and are willing to implement it.
  • Lead it: At this stage people become 'agents of change', championing whatever the change or innovation is, encouraging their colleagues/coworkers/family members to embrace it through active and vocal support of the change.
  • Create it: The highest level; people at this stage are the ones creating the change or innovation.  Asking and then answering questions like 'what could I do to help myself and others make this a better, more effective, safer, or more efficient world?', these are the folks that roll up their sleeves and create the change that leaders and accepters can then implement.

What stage are you at?  Where would you like to be?  Don't like any of my predictions here?  Then create your own future.  Let's not just ride the curve that others are setting us on - let's create the curve!  Let's lead the charge into the future of healthcare such that we as PTs remain not only relevant but critical to the provision of effective healthcare.

And to be clear, I'm not personally super excited about all of these myself - part of what attracts futurists to looking for trends and predicting the future is a fear of what could happen if these (and other) advancements are allowed to continue to iterate and advance unchecked.  Do I want designer babies?  No.  Do I really think it's wise to tamper with natural processes and, say, reverse or halt ageing or eradicate all disease?  To essentially render Darwinian natural selection (and, hence, evolution and adaptation of the species to new demands) obsolete?  Not without at least some very serious thought.  So this isn't a call to blindly accept all new advancements; rather it's a snapshot of one man's thoughts and predictions and a call to become more involved in guiding the future rather than following it.

Have any comments?  Any thoughts of your own re: the future of PT?  Think I'm totally daft and off base here?  Comment below, it's only through open discussion that things move forward.

What makes a 'Good Physio'?

Perhaps it's a function of my personal and professional maturation (which can be hard to see sometimes) or perhaps it's due to my romantic passion for seeing this profession realize its full potential, but I've become quite interested in capturing different people's views on the topic of 'what is a good physio?'.  In fact, in collaboration with another colleague at Western, we've recently kicked off the 'good physio project', which is expected to be a 2 to 3 year series of related research studies and is currently in the information collection stage.  To be clear, we're not the first ones to be interested in this area; for example, Margaret Potter and colleagues published their findings on the qualities of a 'good' physiotherapist back in 2003, finding that good communication was the top-ranked quality of a good physio.  However it's fairly safe to say that this is an area that has received relatively little attention in comparison to the qualities of good practitioners in other fields such as nursing and psychotherapy.  It's an area that holds relevance to me not only as a passionate advocate of physical therapy, but also as an educator who works to develop the physios of tomorrow, and as a father/husband/son/brother/friend/consumer who has personal reasons for wanting to drive the value of quality physiotherapy forward.

This past Saturday I had the very good fortune to be one of the presenters of a 1-hour interactive session on Valuing the Softer Side held during the Ontario Physiotherapy Association's annual congress in Toronto, Ontario.  We had a great turnout - standing room only, in fact - for this session, which focused on an area that has not only received little attention but is also often dismissed as a sort of 'fluff': the soft interpersonal skills that can't be easily quantified.  It was heartening to see the turnout and also to realize that we were not the only session that included a component of soft skills in their talk.  I was particularly impressed with the session offered by McMaster University PhD student Folarin Babtunde, who presented his comprehensive exam work on therapeutic alliance in physiotherapy.

I decided to start the session by asking the 150 or so attendees to consider the following question: Think about a physiotherapist who you consider to be particularly 'good', this may be yourself or someone you look up to as a model or mentor.  Or if you can't think of a single person, think about the qualities that you think of when you think of a good physio.  I then somewhat dangerously provided my cell phone number and asked the participants to text me their answers.  Within 2 minutes or so I received 96 texts, to the point I had to take my phone out of my pocket due to the buzzing.  In the days since I've conducted a thematic analysis of the responses I received, and from those 96 texts extracted 186 qualities that the audience thought of when they thought of a good physio.  Below is the summarized pie chart showing the relative representation of the different meta-themes I identified within their responses.

UPDATE: I've done this several more times since then, we're now at over 400 texts describing qualities of a good physio.  The image below is the most up to date.

While this was not an ethically-approved study and did not conform to best practices for descriptive thematic analysis (read: I didn't have a second person independently conduct the same analysis then compare results), I've personally conducted this type of analysis several times in the past and like to think I observe above-average academic rigor when doing so.  One thing to make note of here is that, while the responses were overwhelmingly in favor of interpersonal skills, there was still representation for clinical or instrumental skills.  So it's not all about the therapeutic alliance - clinical skills do matter.  However, evidence indicates that the effect of many treatments may be roughly a 50/50 split between the skill or technique of the clinician and the quality of the therapeutic alliance.  This is in stark contrast to the preponderance of pre- and post-licensure professional development opportunities available to physiotherapists, which skew strongly towards perfecting instrumental or clinical skills.  There remains a conspicuous absence of opportunities to develop these critical interpersonal skills, despite their apparent importance - at least to the group of PTs attending our session on Saturday - as qualities of a good physiotherapist.
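As an aside, once each extracted quality has been hand-coded to a meta-theme, the tallying step behind a chart like this is mechanically simple.  Here's a sketch using hypothetical theme labels (not the actual study codes):

```python
from collections import Counter

# Hypothetical coded qualities; in the real analysis each of the extracted
# qualities would be assigned a meta-theme during manual thematic coding.
coded_qualities = [
    "interpersonal", "interpersonal", "clinical", "interpersonal",
    "professional", "clinical", "interpersonal",
]

counts = Counter(coded_qualities)
total = sum(counts.values())

# Relative representation of each meta-theme - what the pie chart displays.
proportions = {theme: n / total for theme, n in counts.items()}
```

The hard (and interesting) part is of course the coding itself, not the counting.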

If you're interested in developing these skills in yourself, I encourage you to actively seek professional development opportunities in this area, as they do exist.  I am running one with my colleagues Jim Millard PT and Jas Dhir PT in Mississauga this coming weekend, and it's not too late to register.  Click on the courses link above to find out more.

Yet more support for pain FREQUENCY as an important outcome

A few posts ago I reported the results of a quick and dirty survey we ran online a few months back that was intended to get at which property of a pain experience people indicated was more important to their sense of wellness or disability: the intensity of the pain or other symptoms, or the frequency with which those symptoms are experienced.  You can read the post about the results from the main Blog Central page, but in a nutshell the survey results appeared to lean towards frequency of experience being a more important influence than the intensity or severity of the experience.  This of course didn't come out of thin air - it's something we here at the PIRL have been thinking about and working on for some time now.  Back in 2013 we published the results of a series of interviews and focus groups in which we tried to explore the concept of recovery and what it would mean to people with neck pain.  Not surprisingly, 'absent or manageable symptoms' was a consistent theme from these discussions (recently independently reproduced by Carroll and colleagues).  But when the phrases were explored more deeply, we realized that what many of our respondents were getting at wasn't the intensity of pain, but the frequency with which they experienced it.  And it wasn't just pain - in fact we identified at least 10 different broad classes of symptoms that people with chronic neck pain were experiencing, from pain and stiffness through weakness, fogginess, and sensitivity to light or odor, right to fatigue and negative emotions.  It was also quite clear that the problem with intensity was more about what the symptoms prevented people from doing (interference).  So, recognizing that the frequency of, and interference resulting from, a number of different symptoms appears to be important to patients, we set out to find a PRO that would cover these things.  It will likely be surprising to no one that we found none.  
The year was 2012, and so we set out to create the 3rd in what would ultimately become a trilogy of new PROs for predicting and measuring chronic pain and disability following acute MSK injuries.

Being the newest of the 3 (the others being the Satisfaction and Recovery Index, which can be found here, and the Traumatic Injuries Distress Scale, currently under review for publication), the Multidimensional Symptom Inventory is not yet quite ready for prime time.  So far we've collected 96 responses on the new tool, which provides 3 primary indices (number of symptoms experienced /10, mean frequency of those symptoms /3, mean interference caused by those symptoms /4) and 2 secondary indices (the product of frequency x intensity for each symptom, and a regression-weighted Symptom Index that incorporates all 3 of the primary indices).  In the interest of keeping this post from getting overly long I'll cut to the chase: in our regression analyses where region-specific disability (e.g. NDI, RMQ, LEFS, UEFI) or health-related satisfaction (SRI) are the dependent variables, mean frequency is coming out as a much stronger explanatory variable for disability and satisfaction than is the intensity/interference parameter.  And in the 23 people for whom we've also collected 3-month data, same deal - we can predict outcome using the baseline frequency parameter far better than using the intensity parameter.  
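To make those indices concrete, here's a small sketch of how the primary and secondary scores might be computed for one respondent.  The symptom names, response values, and scoring assumptions (frequency scored out of 3, interference out of 4) are illustrative placeholders, not the actual MSI items or scoring rules:

```python
# One hypothetical respondent: for each endorsed symptom,
# frequency is scored 0-3 and interference 0-4.
responses = {
    "pain":      {"frequency": 3, "interference": 4},
    "stiffness": {"frequency": 2, "interference": 1},
    "fatigue":   {"frequency": 1, "interference": 2},
}

# Primary indices described in the post.
n_symptoms = len(responses)                                                # out of 10 possible
mean_freq = sum(r["frequency"] for r in responses.values()) / n_symptoms   # out of 3
mean_interf = sum(r["interference"] for r in responses.values()) / n_symptoms  # out of 4

# One secondary index: a per-symptom frequency x interference product.
products = {s: r["frequency"] * r["interference"] for s, r in responses.items()}
```

The regression-weighted Symptom Index would then combine the three primary indices using weights estimated from data, which is exactly the part still under development.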

We're continuing to collect data on the new MSI tool and will publish it once we feel it's ready, but for now I'd encourage clinicians to include some measure of symptom frequency in addition to the routine intensity NRS, and don't forget that body diagram for location!

Are you looking at my graph?   Nice little cross-sectional relationship between the regression-weighted symptom index and health-related satisfaction.  Don't get too excited, I intentionally picked the prettiest one.

Survey follow-up: Breakdown by region

Let me see - I'm fairly confident that nothing bad has ever come from comparing one country against another, right?  Good, then I feel completely comfortable reporting on this geographical comparison from the PD survey results, as some have asked me to.  I'll highlight only those comparisons in which there were significant differences in the mean rankings between the US (n = 67), Canada (n = 57) and the UK (n = 65).  There were not enough respondents from the other regions to allow meaningful statistical comparisons.  The results below are from one-way ANOVAs with 'region' as the between-groups variable and mean ranking (where a lower number means more preferable/important) as the dependent variable.  Significant main effects were further explored using Tukey's post-hoc test.  One caution though: with this many comparisons being done it is entirely possible some of these results were significant by chance (that's an alpha error, for those who remember their stats courses).  With that said...

The US and Canada respondents ranked 1-2 day in-person courses significantly more preferable than the UK respondents (US = 1.6/7, Canada = 1.3/7, UK = 2.5/7)

The UK (3.5/7) respondents indicated they were significantly more likely to participate in  'reading scientific literature' than the US (4.4/7) and Canada (4.9/7) groups.

The UK (3.9/7) respondents indicated they were significantly more likely to participate in  'reading summaries of scientific literature and answering comprehension questions' than the US respondents (4.6/7).

The US respondents (5.9/7) ranked an instructor's social media presence as significantly more valuable than either the Canada (6.7/7) or UK (6.4/7) groups.

The UK respondents ranked the instructor's global reputation as significantly more important than the Canada respondents (3.7 vs. 4.4/7).

The Canadian respondents ranked 'receiving a certificate upon course completion' as significantly more important than the UK respondents (7.1 vs. 7.8/10)

The UK respondents ranked 'the course materials clearly show the information is supported by scientific evidence' as significantly more important than the US respondents (3.9 vs. 4.6/10)

The UK respondents ranked 'distance you have to travel' as significantly more influential on their PD decisions than the US respondents (2.2 vs. 2.8/7).

The UK (6.1/7) ranked 'the amount of additional materials you have to purchase' as significantly less influential on their PD decisions than either of the Canada (5.1/7) or US (5.2/7) groups.
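For readers dusting off their stats: the one-way ANOVAs behind these comparisons boil down to a ratio of between-group to within-group variance.  Here's a bare-bones version, run on made-up rankings rather than the survey data (in practice you'd use a stats package, which also gives you the p value and Tukey's post-hoc comparisons):

```python
def one_way_anova_f(*groups):
    """Return the F statistic for a one-way ANOVA:
    F = (between-group mean square) / (within-group mean square)."""
    k = len(groups)
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total

    # Between-group sum of squares: how far each group mean sits from the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2 for g in groups)
    # Within-group sum of squares: spread of scores around their own group mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g) for g in groups)

    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

# Made-up ranking data for three regions (lower rank = more preferred).
f_stat = one_way_anova_f([1, 2, 3], [2, 3, 4], [5, 6, 7])
```

A large F means the regions' mean rankings differ by more than you'd expect from within-region noise alone.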

There you have it, those are the only items on which there were any statistical differences in mean importance/preference rank between the regions.  For the record, the item asking respondents to choose their preference between a 2-day live in-person course or equivalent 2-day online course at half the price of the in-person course did not show a significant difference in proportion of responses via chi square test (US: 82.1% favor in person, Canada 77.6% favor in person, UK 75.4% favor in person).
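The chi-square test on those in-person vs. online proportions can be sketched the same way.  Note the counts below are approximate back-calculations from the reported percentages and group sizes, not the raw survey data:

```python
def chi_square_stat(table):
    """Pearson chi-square statistic for an r x c contingency table."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, obs in enumerate(row):
            expected = row_totals[i] * col_totals[j] / grand
            stat += (obs - expected) ** 2 / expected
    return stat

# Approximate counts (rows: US, Canada, UK; columns: prefer in-person, prefer online).
table = [[55, 12], [44, 13], [49, 16]]
stat = chi_square_stat(table)
```

With 2 degrees of freedom the 5% critical value is about 5.99, and this statistic comes out well below that - consistent with the non-significant result reported above.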

I'm just glad we weren't comparing hockey or soccer superiority between regions.  That would have been professional suicide.

PD values of Physical Therapists - survey results are in

In June 2015, Dr. James (Jim) Elliott and I created and distributed a survey intended to try and capture the continuing professional development (CPD) preferences and values of physical therapists (PTs) across as much of the globe as we could.  This was not a formal research study, that is, you won't see it published in a scientific journal.  Rather it was for our own information and for anyone out there who provides CPD to PTs.  Those who participated will know we made the results openly available in real-time for anyone who wished to access them.  This is meant to be open and anonymous data.  

Questions included traits of the instructor, CPD content, logistics and delivery.  We pushed the link out through our respective social networks including Twitter (@uwo_dwalton and @elliottjim), Facebook, and LinkedIn.  We were fortunate that the link got picked up by people and groups far more popular than us who pushed them out to their respective followers.  A nice example of social media working well.  In the end we received replies from 207 physical therapists, mostly from Canada, the US, and the UK.  Not a huge response, but nonetheless enough to make us adequately comfortable in the accuracy of the results.  Of course, normal caveats apply (this is researcher Dave talking now) - the sample was biased to those who use social media, we have not taken the time to calculate confidence intervals or run any inferential statistics, meaning that when we say 40% is more than 25%, we're just talking in absolute values but we can't say they are (statistically) truly different.  And of course, there's only so much one can glean from these types of surveys: they are not exhaustive and there's no chance for follow-up clarifying questions.  Perhaps some day we'll run this formally, but for now here are the results.

Starting with the demographics of the respondents:

Interpretation:  First of all, it was hilarious to see the tweets of respondents appalled that they were being considered 'late career' at only 16 years.  It was an arbitrary cut-off, folks - you've no doubt got plenty of gas left in the tank.  Otherwise interpretation is fairly straightforward: twice as many females as males is generally in keeping with what we know of the broader demographics of PTs.  It is interesting that we got the smallest representation from early career therapists; not sure how to interpret that.

With the demographics in mind, here are the meaty questions:

Q1:  Rank the following modes of professional development in order from ‘most likely to participate’ (1) to ‘least likely to participate’ (7).  

Ordered from most likely to least likely to participate.

Interpretation:  There's a clear preference for the traditional in-person face-to-face CPD offering.  Interestingly the webcast format is lagging somewhat.  Hard to interpret in this single snapshot.  Will be interesting to see what this trend looks like in another 5 years.

Q2:  When considering your options for professional development, please rank the below qualities of the instructor that might influence your decision, from most important (1) to least important (7).

Ordered from most important to least important.

Interpretation:  Huh, guess we need to stop flaunting our PhDs and 'awesome' Twitter numbers.  An interesting follow-up question would be to ask how people determine whether an instructor is considered a knowledge leader in the field.  One for the next round...

Q3:  When considering your options for professional development, please rank the below qualities of the course that might influence your decision, from most important (1) to least important (10).

Ordered from most to least important.

Interpretation:  Well, guess I might as well cancel that cruise ship.  These responses are actually quite heartening and, if we read a little between the lines, it would appear that people still value useful, impactful, and accurate information that improves their practice/client outcomes over certificates for their wall/portfolio and marketing potential.  My physical therapy romanticism is being justified.

Q4:  When considering your options for professional development, please rank the below course logistics that influence your decision, from most important to least important.

Interpretation:  Perhaps no major surprise here - the cost of course registration is rated the most influential factor.  Of course, we could have asked something more nuanced, like 'course value', since surely some people will have been internally monologuing something to the effect of 'well, I'd be willing to pay more if I thought the course was worth it', but we intentionally did not include a value question since that would quite obviously have been number 1.  The notion of cost is a more concrete concept that fits better with the other elements here (travel, days off work), though those also factor into value.  Of interest is that reviews of the course are of mid- to low importance, but I wonder if that's because there are currently few options for leaving online reviews of courses.  An interesting thought for ongoing work.

Q5: 

Please consider the following two scenarios, and then choose your preference:

Option A: The course is offered live and in-person (class or lab-based) in your home town.  Registration cost is $X for the day, offers a catered lunch, requires you to be physically present, and offers full interactivity with the instructor.

 Option B: The course is offered live and online (via webcast).  Registration cost is half of $X for the day, allows you to participate from home, will be recorded and video archived for later viewing, but interactivity is limited to typed questions in a question box.

Interpretation:  We got right down to brass tacks on this one and asked the question fairly point blank: face-to-face or online?  The results are overwhelmingly in favor of the face-to-face option, even at twice the cost of the online offering.  Of course, economists will tell us that if we were to do this properly, we would offer several options of cost and eventually we'd hit a point where preferences were equal (if you drop the cost for online low enough, eventually more people will prefer that).  My take away from this is that if you're going to offer an online course, half the price of live in-person isn't enough of a discount to entice participants.  We also realize that many PT CPD offerings include practice of technical skills, which are hard to do from behind a computer screen.  So, another more nuanced exploration for future surveys would be to explore the types of material that would be preferred online and the type that would be preferred in person.

So to summarize, it would appear that the PTs in our sample (remember: recruited largely through social media) don't generally care how many Twitter followers you have, or how many letters are after your name, and certainly don't care if your course is held in a beautiful exotic locale.  There's a clear preference for the tried-and-true here: in-person, face-to-face weekenders that provide immediately applicable information for a reasonable price.  Perhaps no real surprise there, although we can't help but wonder what these rankings will look like in another 5 years as technology and teaching strategies continue to improve.

For those who wish it, we are able to slice these data any way you want based on demographics.  Want to know if preferences differ by stage of career?  Or geographic region?  Feel free to reach out to me at dwalton5@uwo.ca and we can discuss your needs.

Got any thoughts or comments?  Feel free to leave them below.

Survey says... Pain Intensity or Pain Frequency?

One of the things we like to do in the top secret PIRL lab is while away our time coming up with new and innovative ways to quantify the unquantifiable.  This is one of the reasons we love the field of pain measurement - not only is it clinically relevant and the number 1 reason that people come to see physiotherapists, but it is also an intensely personal and subjective experience.  Here at the PIRL we subscribe to the 'pain is what the patient says it is, and occurs when the patient says it does' doctrine, regardless of what is happening in their other physiological systems (although we poke around in those as well).  

About 5 years ago, as part of a project we launched exploring the concept of recovery from musculoskeletal injuries, we held a series of focus sessions, one-on-one interviews, and invited written reflections asking people in pain what 'recovery' would look like in their view.  This work has already led to a number of publications and the development and publication of the Satisfaction and Recovery Index (SRI).  However, those who look at the SRI will note that there is nothing in there specifically asking about symptoms.  'Absent or manageable' symptoms was one of the primary indicators of recovery in our data, so why is it not in the SRI?  Because I wanted the SRI to be more about those other domains of life that affected health-related satisfaction.  Symptoms appeared to be important enough to warrant a dedicated scale.  So we've also been working on the "MSK Symptom Index" (MSI), which is meant to be a complementary tool to the SRI; both will ultimately serve to identify adequate recovery.  The MSI is not out in the wild yet so don't bother looking for it, but with luck it will be by the end of this year.

The symptoms on the MSI have been informed directly by our patient informants, who identified several symptoms that were not captured on any symptom intensity scale we could find; these are presented below.  Interestingly, those informants frequently talked in language indicating that it's not necessarily the intensity of the symptoms that was the biggest factor in a sense of recovery, but how often they were experienced (frequency).  Drawing on my years as a clinician, I know that we routinely measure symptom intensity using a Numeric Rating Scale or similar tool, but rarely would we capture the frequency of each symptom.  And I'm not aware of a single tool that allows measurement of both the frequency and the intensity of a variety of symptoms on the same form.

So, that's what we've set out to create with the MSI.  It's been over 5 years in development now so I won't bore you, dear reader, with the finer details of its birth and refinement.  We have now been collecting data on it for several years and have some 150 response sets to work with.  As I started working on it last week, I realized that my original idea for scoring the tool wasn't working the way I expected it to.  Long story short, it's hard to know how to incorporate several constructs (number of symptoms, frequency of those symptoms, intensity/interference from those symptoms) into a single summative score.  In fact, those who know me know I prefer a unidimensional scale for clinical measurement.  But as an index, I wanted something more multidimensional for this purpose.  

I explored different ways of looking at this tool, and have now been working with a health economist to decide whether we can use the tool from a personal valuation perspective.  What does the population at large value for example, the symptom types, their frequency or their intensity?  Knowing this will help us to create unique weightings for the tool responses and create a more patient-centered scale score.  It's all rather complex and is pushing the boundaries of my knowledge (not hard).  A full standard gamble or time-trade-off study is likely in the cards, but for now I wanted to get a sense of whether it's even worth doing.  If everyone simply values symptom intensity as the primary construct, then why go any further?  With that in mind, I put a very simple online survey together last week and posted it on my Twitter account (@uwo_dwalton).  There were 4 questions, to which 20 people responded over the week.  While not a large (and by no means representative or definitive) sample, the results would suggest that there is something interesting to be explored here.  So, without further ado, here are the results of this short survey:

Question 1: Rank these symptoms from the worst (1) to the best (10):

Symptom ranks.jpg

These are sorted such that the symptom ranked the worst receives the lowest score, while the symptom ranked the best receives the highest score.  From this we can see that these rankers clearly put the two pain-related symptoms, localized sharp and general dull, in a tier unto themselves as the worst of the bunch.  Weakness or giving way was in the next tier, with the remainder forming a sort of third tier, and increased sensitivity to environmental stimuli possibly forming a 4th 'least bothersome' tier.

That's interesting, but without collecting any information on the rankers it's hard to say what it all means.  So the next questions got to the heart of the severity vs. frequency question.

Question 2: Please choose which of the following would be better: constant but mild (1-2/10) pain, or rare but severe (9-10/10) pain.

cons mild v rare severe

Complete wash on this one, perfect 50/50 split.  In this case the respondents were split on whether they value severity over frequency.  I had a sense this might happen, so question 3 was a bit more nuanced.  Here I reduced the gap between the intensities and the frequencies.

Question 3: Would you rather have: pain that is moderately intense (4-5/10) but often present (part of every day), OR pain that is quite intense (7-8/10) but rarely present (every few days)?

moderate vs quite intense.jpg

A clear winner here - these respondents appear to be indicating that frequency (rarely present) matters more to them than intensity.  Finally, I got right to the heart of the matter with question 4.

Question 4: In your mind, what is the more pressing matter when it comes to pain management: the frequency of the pain (how often you feel it) or the intensity of the pain (how severe it is when you do feel it)?

freq v intensity

A close race on this one, and plenty of caveats apply, but among these 20 mystery raters, frequency nudges out intensity 11 to 9 as the more pressing pain management issue.

At the end of the day, I don't know who these people were, this was a quick and dirty survey, and I can't confidently say that this wasn't the same person responding to the survey 20 times (though I highly doubt it).  But, assuming these were at least 20 different people, the take-home from this simple little survey is that while pain (or other symptom) intensity is still important, there appears to be a rationale for exploring pain frequency as a potentially important outcome for pain management treatment and research.  This lends support to the MSI tool we're developing, so stay tuned for updates on that.

Have something to say?  Comment below.

Of Kangaroos and Vegemite: An Ozzie visits the PIRL

We were thrilled this past week to welcome rising superstar Dr. Siobhan Schabrun from the University of Western Sydney to the PIRL lab and the Western University community.  This was one part of an IASP-funded travel grant program intended to foster international collaboration between pain researchers.  Siobhan brought her considerable experience in mapping the motor cortex of key body regions using transcranial magnetic stimulation (TMS)-based methods.  This elegant and largely pain-free technique provides superb spatial and temporal resolution for exploring things such as the size of motor body maps and cortical plastic change, and is especially useful for understanding the mechanisms of acute-to-chronic or painful-to-not-painful transitions.  We won't give away too many of her methods, but a new experimental pain model using a key signalling model that she's working on is also especially attractive for understanding pain mechanisms.

The visit included time spent with members of Western's world-class Brain and Mind Institute, the Robarts Research Institute, and members of the PIRL lab itself.  Siobhan was also included in 3 days' worth of workshops, the first being the Advances in Neck and Spinal Pain Workshop at which she delivered 2 hours' worth of discussion about advances in understanding cortical mechanisms of chronic pain, including some very exciting new information on alternative methods rehabilitation professionals can use to affect brain plasticity (videos will be coming soon, watch this site).  The second workshop was a 2-day invitation-only interdisciplinary priority- and consensus-generating workshop in trauma-related MSK pain and disability.  Watch this space for more information about that highly successful workshop.

And of course no visit to Canada would be complete without enjoying some traditional Canadian stereotypes.  Our extended cold March gave Siobhan her first chance to see snow falling from the sky (we're glad at least someone was excited about that), to watch deer and squirrels forage near her hotel, enjoy Canada's finest cuisine including poutine, and have pancakes with maple syrup fresh from the tree.  The only thing we didn't do was get her into a hockey fight, but I guess we need to save something for next time.

Siobhan enjoying some fine local cuisine, including maple syrup sucked directly from the trees outside at McLachlan Family Maple Syrup Farm in Komoka, Ontario.  The jug of syrup would unfortunately not fit in her carry-on luggage.

We were thrilled to have Siobhan share her considerable expertise with us here at Western, and already some exciting new research directions have come to light.  The reciprocal will come when Dave visits U Western Sydney in July.

Presentation tips

Today I delivered a short discussion with the Master's of Clinical Science (Manipulative Therapy) students here at Western on presentation tips.  Since I know many readers of this blog will at least occasionally have to present something to a group, whether that be a clinical inservice, a lecture for students, or a podium or breakout session at a conference, I figured I might as well share some of the tips I've learned over the years of doing this, in case it helps someone who's just starting out on this path.

With that all said, here are my tips:

  1. Find a presentation/teaching framework that you like, and get good at it.  I like the BOPPPS model from the Instructional Skills Workshop (ISW).  BOPPPS stands for:
    • Bridge: Start with something to engage your audience.  Why is what you're about to say important?  An emotional element is especially effective.
    • Objectives: Be clear about what it is you're expecting your audience to get from your talk.
    • Pre-test: Recognize that your audience may already have considerable background in your topic area, or this may be completely new.  There are many ways to gauge the general knowledge level of your audience.  This is important both to serve as a baseline, and to make sure you're not talking above or below your audience's level.
    • Participatory Learning: This can be hard in a conference setting, but not impossible.  Learning occurs more deeply when people are actively engaged in the activity.  Breakout groups, think-pair-share, dot parade - there are several 'active learning' strategies out there that you can use depending on your context.
    • Post-Test: Was your teaching effective?  Did your learners learn what you were hoping for? Were your objectives met?  The only way to know is to conduct some kind of post-test.  This can take many forms, from a formal quiz, to 'clickers' or web-based apps like poll everywhere, or even direct questioning.
    • Summary: Summarize the main points of your session.  For serial learning (e.g. university courses), the summary can set the stage for the next session.
  2. Less is more.  If you had to err on the side of either too much information or too little, I would suggest the latter.  There's nothing worse than listening to someone sprint through a presentation just to get all of the information out, speeding through crowded slides and not allowing anyone to really take it all in.  General guidelines are no more than 1 slide per minute of presentation, limit the text and number of bullets on a slide, and where possible use pictures and graphs rather than text to explain a concept.
  3. Avoid the pre-emptive knowledge strike.  I find this is common in novice speakers, especially younger graduate students.  I'm referring to the tendency to describe every nuance of your study in an attempt to completely head off any critical comments.  However, doing so also tends to preempt any conversation at all.  Provide the primary points, and then let those who are truly engaged (which tend to be a minority) to ask for greater clarification during the question period.  In this way you get the main points out to those who just want the quick and dirty 'what's it mean?', and also have the chance to engage in academic discourse with those few who are truly invested.
  4. Beware the laser.  I'm not sure what it is about having something in your hand that compels constant use of that thing, whether it's needed or not.  I always chuckle when I see someone with a laser pointer who underlines or circles every word they're reading off the slide, almost like a 'follow the bouncing ball' presentation.  It's unnecessary and distracting.  I prefer to use the mouse on the computer if I need to focus on something.
  5. Use your environment.  A presentation should be like a mini stage show.  You've got lighting (if even just what's emanating from the slide projector), props (lecterns, mics, tables, pointers) and an audience that you can use to full effect.  Want to emphasize a point?  Step into the light a little more.  Want to highlight something on the screen?  Step back into the shadows.  Without being overly dramatic, don't be afraid to pound on the lectern a bit.  There's nothing wrong with showing passion, people really respond to that.
  6. Engage your audience.  Respect that your audience are adults, and come with a lifetime of knowledge and experience rather than being empty vessels waiting to be filled with knowledge.  Don't be afraid to step out from behind the lectern, remove those perceptual barriers between you and your audience.  Why not ask them a question?  Those who have seen me talk at a conference will know it's not uncommon for me to ask the audience a question and engage them in a bit of discussion.  This shows them you respect their knowledge, and prevents them from falling asleep!
  7. Inject a bit of lightheartedness.  No one expects you to be Jim Gaffigan up on stage.  But you also don't need to be Stoneface McHardScience.  Comedy is in the eye of the beholder of course.  No matter how golden you think a little joke is, it will tank at some point.  Crickets will be chirping.  But you know who else that happens to?  Professional comedians.  At least you won't get heckled, and the chance that you may induce a small chuckle out of some of your audience may be enough to snap everyone back to your topic.  Use jokes sparingly and only when appropriate, but that's what will make you stand out from all the other presenters your audience has heard that day.
  8. Do like the comedians do.  Speaking of professional comedians, they are some of the best public speakers I have ever seen.  Next time you're watching a Gaffigan, Seinfeld, or Foxworthy, step back from the comedy a bit and just observe their behaviors.  Listen to the pace of their voice, the language they use, the amount they move around on stage.  Try to emulate these folks a bit, they've learned from the school of hard knocks so you don't need to.
  9. Drown your 'uh' and 'um' moments.  I will usually have a bottle of water with me during a talk.  This is so I can take strategically-placed drinks whenever I find myself starting to forget where I was going with a statement.  Rather than standing there going 'uh, uh, uh', grab a short drink of water.  No one will think twice about it, and you'll give yourself the chance to get back on track.
  10. Use effective pauses.  This one takes a bit of practice, but every time you want to say 'uh' or 'um', replace that with silence.  Slowing down and not filling every second with sound will give your audience the occasional chance to digest what you just said.  And nothing makes you look more novice than 'uh, um, uh,'.  Pause instead, you'll come across pensive and deep if nothing else.
  11. Address application. There ought to be a reason you're presenting what you're presenting, in some way you should have at least a semblance of how your information should fit into practice.  Even if it's not ready for prime time yet, your audience will be much more likely to engage in your material if they can see where it might fit into the real world.
  12. Wash, Rinse, Repeat.  Most adult learning theories indicate that learning is facilitated when it is applicable, respectful of their existing knowledge, and repeated.  Let me repeat that, repeated.  Yes, repetition appears to be useful.  Almost to the point that it seems a bit over the top, repeat the key messages and those will be the ones that your learners are likely to remember.

How Google and Apple may help with one of chronic pain's more vexing questions

At the recent Apple iOS 8 announcement, a new software tool referred to as ‘HealthKit’ was revealed, intended to be a central hub for consolidating all of your health-related data.  This appears to include not just your history of illness or disease, but also your health-promoting behaviours such as activity, sleep, and diet.  Not surprisingly, Google appears to be preparing its own health-information consolidation service, which from all appearances will do largely the same thing.  These services make sense on many levels, especially considering the explosion in health- and activity-tracking devices known largely as ‘wearables’, leading us closer to the notion of a quantified self.  In fact, my Samsung smartphone gives me a tally of the steps I’ve taken every day by virtue of the embedded accelerometer in the phone, just for carrying it around with me.  It’s by no means a completely accurate picture of my daily activity - it doesn’t discriminate walking from running or climbing, for example - and the toll on battery life means I often leave my office with the phone sitting on the charger.  However, assuming my behaviour in this regard is reasonably consistent - that is, assuming I tend to leave my phone on the charger for quick trips around the building but take it with me when I leave for any extended period of time - these types of activity trackers (whether built into your phone or a stand-alone wearable) should be able to identify significant changes in my behaviour over an extended period of time.

So how does this address the vexing problem in the title?  The problem I’m referring to is the influence of pre- and post-traumatic physical activity status on the subsequent onset and/or maintenance of chronic pain or disability.  Our systematic reviews would indicate that pre-accident activity doesn’t appear to have much of an effect on post-trauma trajectory in whiplash, and the balance of evidence is equally ambiguous in other conditions.  We have yet to see compelling and consistent evidence that inactivity following injury in fact plays an important role in the development of chronic problems, as the ‘vicious cycle’ or fear-avoidance models would suggest.  One of the big challenges in conducting this research is that, in all but the largest of population-based cohort studies, most researchers can’t go back in time and collect pre-injury physical activity or sleep data, necessarily relying instead on self-report of study participants.  Recall or desirability bias is of course a huge problem with such approaches, so I would say that we really don’t know the impact of either pre-trauma or post-trauma daily activity on recovery/chronicity.  However, what if we were able to tap into the potential wealth of activity and health data that iOS and Android promise to start collating for us in the near future?  While they may not offer accurate indicators of actual activity yet, there is certainly potential to use these tools to find relative changes in activity from the average 1-year activity pre-injury to the average daily activity post-injury.  
Of course, this also opens the possibility of abuse of such systems (perhaps, if I’m trying to prove how much my injury has affected my physical activity, I may leave my phone in my office a bit more often), but until people pick up on that game, there may be a window of opportunity for those of us ready to take advantage of these data that can lend some useful insights into pre-injury health and activity and the subsequent development of post-injury chronicity.  Something to keep an eye on perhaps.

New Comprehensive Pain Assessment Workshop

Hopefully you'll have noticed the link to the top right for the Comprehensive Pain Assessment Workshop.  This is an idea I've been toying with for probably 2 years now, but haven't pulled the trigger on to this point for a variety of reasons; some pragmatic (no time, not sure on the content I wanted to offer), some more philosophical (the overly romantic notion of the academic martyr), some technological.  Nonetheless, I've finally decided to jump into the Professional Development (PD) pool, hoping to make a rather large splash with a hybrid in-person / online livecast workshop open to any health care provider.  I'm trying a number of new things (at least for me) in an effort to see just how much we can use open-source technology to improve access to PD for those in otherwise underserviced areas, with low overhead and therefore low costs.

Broadly there are two options for participating - either live and in-person (the most interactive option that will offer the greatest learning opportunity) or by live webcast that still allows participation through use of the Google Moderator question engine.  The entire session will also be video recorded and made available for all registered participants (even web participants) afterwards, or sold for a small fee to those who weren't able to join the day.

The impetus for this event has come from several sources.  In terms of PD for health care providers, we recognize that access is a barrier to participation.  I am currently chairing a PD Task Force for the Canadian Physiotherapy Association, and access is one big barrier we've identified as a challenge.  It has also come from our recent set of ICON publications, whose survey data tell us that clinicians are aware of different pain assessment techniques, but few are using them properly.  Finally, they say you should teach what you know, and I know pain assessment.  'Quantifying the unmeasurable' is a mantra in our lab - how can we give clinicians a window into the experience of their patients, and how can they properly use triangulation to understand a) what the problem is, and b) how to address it?  I intend to cover all of these topics in an easily accessible way with this workshop.

So, this is either going to be a smashing success, or go over like a lead zeppelin.  Probably the biggest risk at this point is technology-related - will the mic work, will the camera work, will the bandwidth hold up?  I'm nervous but excited about the opportunity, and if it all goes adequately smoothly it may open a new, highly-accessible approach to PD that circumvents the problems of access.  Want to join me on this adventure?

The Satisfaction and Recovery Index (SRI) is out in the wild

After nearly 6 years in development, a new health-related satisfaction and recovery scale is now available for use.  The manuscript describing its development and preliminary validation studies is currently under review with the Health and Quality of Life Outcomes open-access journal.  That does mean the tool has yet to be formally scrutinized through peer review, so until the reviews come back, please consider the rest of what I’m about to say pilot work.

The SRI contains 9 condition- and region-agnostic (generic) items that ask respondents to provide two scores for each item: the first is an importance score, the second is a satisfaction score in relation to the interference due to the respondent’s health condition.  It sounds a little complex, but in practice it seems that patients are able to understand what they’re being asked to do.  It does mean that the scoring is a little more complex and is probably easiest with a calculator (but not overly difficult).  An importance-weighted satisfaction score is calculated for each item by multiplying the importance and satisfaction scores together, then dividing by 10.  So, for example, if a patient rates the item about future potential as being extremely important to them (10/10) and rates their current satisfaction in that area at a 5, the weighted score is 5 x 10 / 10 = 5, which is added to the other items’ weighted scores.  If a different patient rates the importance of that item at a 7, then the weighted score is 5 x 7 / 10 = 3.5 in the final summed score.  This is done for each item, so all 9 items get a weighted score.  To get the final SRI score, you then use the following formula:

SRI score = [(Sum of weighted scores)/(Sum of importance scores only)] x 100

What this means is that you end up with a percentage ‘importance-weighted health-related satisfaction’ score that is more heavily influenced by those items that the patient feels are important to their own sense of satisfaction, while those areas less important are weighted lower.  The added benefit here is that change in those items that are rated as important leads to a greater overall score change than does change in less important items.  Here’s a concrete example of that:

Let’s say on day 1, your patient provides the following scores for the SRI items:

SRIexample.jpg



The total SRI score would be the sum of the weighted scores (33.8) divided by the sum of the importance scores (79) x 100, or in this case, 42.8% health-related satisfaction.  Two weeks later that same patient completes the SRI again.  For the first example, let’s say that they improved by 2 points in the area of positive emotions (from 3 to 5).  This item being weighted a full ‘10’ on importance, the sum of the weighted scores is now 35.8, and the percent satisfaction is 45.3%, an increase of 2.5%.  To draw the contrast, let’s instead say that the same patient improved by 2 points in the area of intimate relationships (from 4 to 6), no doubt due to your excellent physiotherapeutic interventions.  The sum of the weighted scores is now 34.8, and the percent satisfaction is now 44.1%, an increase of only 1.3% for the same 2-point satisfaction increase.  You can play with this a little yourself and see how it pans out in practice.
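
To make the arithmetic concrete, here’s a minimal sketch of the SRI scoring in Python.  The item-by-item scores below are hypothetical (the real ones are in the figure above), but they were chosen so the totals match the worked example: importance sums to 79 and the weighted scores to 33.8.

```python
# Sketch of the SRI scoring described above.  Per-item scores are
# hypothetical, chosen only to reproduce the totals in the worked example.

def sri_score(importance, satisfaction):
    """Importance-weighted satisfaction, as a percentage."""
    weighted = [i * s / 10 for i, s in zip(importance, satisfaction)]
    return sum(weighted) / sum(importance) * 100

importance   = [10, 5, 10, 10, 10, 10, 8, 8, 8]  # sums to 79
satisfaction = [3, 4, 5, 5, 4, 2, 4, 5, 7]       # weighted scores sum to 33.8

print(round(sri_score(importance, satisfaction), 1))    # 42.8

# Improve item 1 ('positive emotions', importance 10) by 2 points:
satisfaction_a = [5, 4, 5, 5, 4, 2, 4, 5, 7]
print(round(sri_score(importance, satisfaction_a), 1))  # 45.3

# Improve item 2 ('intimate relationships', importance 5) by 2 points:
satisfaction_b = [3, 6, 5, 5, 4, 2, 4, 5, 7]
print(round(sri_score(importance, satisfaction_b), 1))  # 44.1
```

The same 2-point improvement moves the total score by 2.5% when the item is rated maximally important, but only 1.3% when rated a 5/10 on importance.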

It’s worth noting that each of the items represents an area of importance that was gleaned from focus groups with people with pain (mostly neck pain) and from existing theories on health, happiness, and human potential.  You can read the ‘companion’ paper to this one, from which many of the items were drawn, here for free: http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3793580/

Another interesting potential for this tool is in the area of response shift, or shifting priorities in the presence of chronic conditions.  Over time, some people will change in what is important to them, and the SRI allows for that change to occur, meaning that people can feel ‘satisfactorily recovered’, even if they haven’t completely achieved satisfaction in areas that used to be important to them.

So far our preliminary data indicates that a score over 80% is likely indicative of someone adequately satisfied to be considered recovered in most cases (likely to be back at work, not requiring further treatment), and a score over 90% is highly indicative of a recovered status. Keep in mind that these numbers come from a ‘normal’ adult population, elite athletes or other such people outside of the norm may have different expectations.  It also bears mentioning that the tool includes a single ‘validation’ check item, which asks people to indicate a ‘4’ in item 6.  Interestingly, nearly 10% of our respondents in the validation study did this incorrectly, so keep an eye open.

So, feel free to give this a try.  It appears to be as responsive as region-specific scales (like the NDI, Roland-Morris, UEFI or LEFS) and more responsive than other generic tools like the SF-12.  We’re still working on MCID, but so far it looks like 5-7% change indicates meaningful change.  If you do decide to give this a spin, I’d love to hear your thoughts and experiences.


A deeper exploration of pressure pain threshold in neck pain

In May of 2013 I pulled a conference double-date, moving between Montreal, Canada for the Canadian Physiotherapy Association’s Annual Congress and Toronto, Canada for the IASP Neuropathic Pain SIG International Congress in the same week.  It was hectic, but as usual I took plenty of good things away from both congresses.  While at the NeuPSIG meeting in Toronto, I had the good fortune of sitting in on a session entitled ‘Central sensitization in neuropathic and non-neuropathic pain conditions’ presented by people I consider pioneers in the field: Dr. Clifford Woolf, Dr. Per Hansson and Dr. Michael Sullivan.  The session was very interesting, but it was the Q & A at the end that was especially lively (incidentally, I wish PT conferences had more lively Q & A sessions as well; for some reason we all seem quite subdued when it comes time to talk to the presenters).  In a very small nutshell, the discussion that interested me most was around the idea of quantitative sensory testing (e.g. mechanical, thermal, electrical pain thresholds) and how, if at all, it could be used to a) identify the presence of centrally sensitized nociceptive pathways, and b) determine the mechanism behind the sensitization.  Those of you who are familiar with the presenters will not be surprised to learn that the mechanisms put forth ranged from the cellular (genetics) to the psychosocial (esp. catastrophizing).  It was this aspect of the Q & A that got me most engaged, with many posing different versions of the question ‘so, what is the mechanism behind QST sensitivity?’  As I sat in the audience (and after I had my time with the mic), I got to thinking that back in my office here in London, Ontario, I had a whole database of pressure pain threshold measures and concurrent pain-related cognitive scores that might at least shed some light on that aspect of the discussion.
Concurrently, I had also been getting increasing questions from clinicians around the application of PPT in the clinical context as a result of this series of 3 papers in JOSPT in 2011.  One of the common questions I kept getting was ‘do I have to do it 3 times in each location?’, which admittedly means the full test, if you were to do it across 4 sites (bilateral neck, bilateral tibialis anterior), takes nearly 15 minutes.  Finally, I was curious to know if PPT was actually a tool that could be used not only for prognostic purposes but also as an evaluative tool to track change over time.

So, for these reasons (that was quite the build-up, wasn’t it?) I set about working on a new paper on PPT in neck pain that explored these 3 questions: What are the measurement properties like between 1, 2 or 3 repetitions?  What is the responsiveness of PPT?  What are the associations between PPT and common pain-related cognitive or emotional scales?  What’s great for me about these papers is that I get the chance to offer authorship to some of the clinicians who pass through our Master’s of Clinical Science program here at Western, which is often their first publication and gives them a foot in the door of scientific writing.

To summarize, here are my interpretations of the responses to the 3 questions we set to answer:

1.  Do you have to do 3 repetitions at each site?  Answer: Yes, with a caveat.  We found that, on average (N = 206), using the first repetition only led to what appear to be inflated thresholds.  Disregarding the first and using only the final 2 offered the lowest thresholds (side note: while this looks like evidence of some kind of wind-up pain, in actuality the second repetition was on average the lowest of the 3).  Using all 3 offers a mid-range score that is adequately reliable and was the protocol we used for establishing prognosis in the 3rd of the 2011 papers.  That said, the ‘last 2 only’ approach also offers good reliability, but since it still requires you to do 3 repetitions, there’s no real time savings there.  The caveat I mentioned is this: if the first 2 tests are consistent, in that they don’t differ by more than the standard error reported in this paper (and calculated automatically for you on our PPT Android apps), then you should be able to get away with 2 repetitions and get the same general properties as the ‘all 3’ protocol.
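
For those who like to see the logic spelled out, here’s a small sketch of how one might implement that caveat.  The SEM value here is a placeholder, not the published figure - substitute the standard error of measurement reported in the paper (or computed by the apps) for your site and population.

```python
# Sketch of the 2-vs-3 repetition rule above.  SEM_KPA is a hypothetical
# placeholder for the standard error of measurement (kPa).

SEM_KPA = 45.0  # placeholder; use the published SEM for your site/population

def ppt_estimate(reps):
    """Return a PPT estimate (kPa): 2 reps when they agree within SEM, else 3."""
    r1, r2 = reps[0], reps[1]
    if abs(r1 - r2) <= SEM_KPA:          # first two consistent: stop at 2
        return (r1 + r2) / 2
    if len(reps) < 3:
        raise ValueError("first two reps inconsistent; take a third")
    return sum(reps[:3]) / 3             # otherwise average all 3

print(ppt_estimate([260, 240]))        # consistent pair -> 250.0
print(ppt_estimate([300, 220, 230]))   # inconsistent, average of 3 -> 250.0
```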

2.  Is PPT responsive to change?  This one gets a ‘sort of’.  For people with neck pain, PPT at the tibialis anterior, while potentially valuable from a prognostic or screening perspective, does not appear to be a very useful instrument for measuring change in neck pain (perhaps not surprisingly).  When measured over the angle of the upper trapezius, it appears to be more useful, but is much better at ruling change in than ruling it out.  In other words, when discriminating between those who self-reported clinically-meaningful change over 1 week (by virtue of the GROC score) and those reporting no change, a change of about 80-85 kPa was only 50% sensitive but offered up to 92% specificity.  What that means is that many people will have improved a clinically meaningful amount but not seen a PPT increase of at least 80 kPa; on the other hand, those who have changed at least 80 kPa are very likely to have also changed a meaningful amount.  If that doesn’t make sense, email me and I can explain it to you further.
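
If it helps, here’s a toy illustration of ‘better at ruling change in than ruling it out’.  The data are entirely invented: each pair is a PPT change in kPa and whether the person reported meaningful improvement on the GROC.  A cutoff like 80 kPa misses many true improvers (modest sensitivity) but rarely flags non-improvers (high specificity).

```python
# Invented (change_kPa, improved_on_GROC) pairs, for illustration only.
data = [(120, True), (95, True), (60, True), (40, True), (30, True), (10, True),
        (85, False), (50, False), (20, False), (5, False), (-10, False), (0, False)]

CUTOFF = 80  # kPa
tp = sum(1 for d, imp in data if imp and d >= CUTOFF)
fn = sum(1 for d, imp in data if imp and d < CUTOFF)
tn = sum(1 for d, imp in data if not imp and d < CUTOFF)
fp = sum(1 for d, imp in data if not imp and d >= CUTOFF)

sensitivity = tp / (tp + fn)   # many real improvers fall under the cutoff
specificity = tn / (tn + fp)   # few non-improvers exceed it
print(sensitivity, specificity)
```

In this toy set sensitivity is only 1/3 while specificity is 5/6: exceeding the cutoff strongly suggests real change, but falling short of it rules little out.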

3.  What is the association between PPT and pain catastrophizing, fear, anxiety, depression, and ‘kinesiophobia’?  My aversion to the word ‘kinesiophobia’ notwithstanding, the answer to this one is: weak, and only of real relevance for pain catastrophizing.  This surprised me, I won’t lie.  When it came to the debate regarding PPT as a true measure of nociceptive sensitivity vs. simply an indicator of psychological aversion to pain, I admit I fell on the latter side of the fence (psychological aversion).  However, at least the results that we found in our database would suggest that if negative psychological states are related to PPT, in this sample it would appear as though the association is very weak and of questionable clinical relevance.  I keep saying ‘in this sample’ because different results have been found in different populations using different QST measures.  So, had I had these results at the NeuPSIG meeting in May 2013, I would have had to state that, based on our data, we’re not looking at a strong psychological correlate here, but that it may well be that QST is providing us with a quantifiable indication of sensitivity of the nociceptive aspects of the nervous system.  This will no doubt continue to evolve with time.

Comments or questions?  Leave them below.

Will tech render WAD research obsolete soon?

Putting this out there for those of us in the field, would love to get a little back-and-forth discussion going here.

Self-driving cars from Google and others, advanced collision detection systems in many entry-level vehicles, and now the introduction of radio-frequency-based vehicle-to-vehicle communication all stand to dramatically reduce the number of motor vehicle collisions.  Which is great of course - I don't think anyone's going to argue against improved safety and reduced injuries.  What does this mean for whiplash research though?  Will whiplash-associated disorder become so infrequent as to no longer be a significant public health problem (and drop off the radar of major granting agencies)?  Is it time to see the writing on the wall and shift to other conditions, or will WAD research continue to be important 10 or 15 years from now?  Would love to have some conversation on this in the comments section below.

A deeper understanding of MCID

Quick little update here, stimulated by a recent conversation with a clinician.  This is one of those things that maybe only really invested people will care about, but I think it’s important for all clinicians to at least be aware of.  I’m talking about the concept of the ‘minimum clinically important difference’ (MCID) of a scale.  No doubt, the relatively recent introduction of this concept has helped with clinical decisions, and it’s great to see more and more clinicians getting on board with it.  But I’m realizing that perhaps those of us reporting this statistic haven’t been adequately clear about it.  Without going into depth on the way it’s calculated (in a nutshell: construct a Receiver Operating Characteristic curve, then choose the cut-score on that curve that most accurately discriminates between ‘changed’ and ‘not changed’ groups), people should recognize that what is really presented is the average MCID of the scale.  But so far all indications are that this is not a uniform value across the entire length of the scale.  As a concrete example: clinicians can appreciate the patient with neck pain who has rated 50/50 on the NDI (chosen the highest score on every item) over the past month but then one day comes in and scores two items a 4/5 rather than 5/5, for a total score of 48/50.  Conceptually, we all know this is a meaningful event, but it doesn’t reach the threshold MCID that’s been reported as 5 points out of 50.  So is it not actually important?  Of course it is.  The same can be said for the patient who’s almost ready for discharge but has been stuck at 2/50 for the past couple of weeks, maybe due to some niggling headaches and a mild problem driving.  Then one day they come in and score 0.  Not meaningful because it’s not 5 points, right?  Of course it is; this is clearly meaningful.  
And that’s to say nothing of the fact that, in this example, anyone starting at 4 or less out of 50 can’t possibly hit the MCID of 5 points, so can they not improve?
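For the curious, here is a minimal sketch of how an average MCID falls out of an ROC-type analysis.  The data and the Youden-index rule for picking the cut-score are my own illustration (real MCID studies use a validated external anchor, larger samples, and often other cut-score criteria), not the actual analysis behind any published MCID:

```python
# Toy illustration of an ROC-based MCID: for each candidate change-score
# cut-point, compute sensitivity/specificity against an external
# 'changed' (1) vs 'not changed' (0) anchor, then pick the cut-point that
# best discriminates (here, by maximizing the Youden index). All data are
# invented for illustration.

# (change_score, truly_changed) pairs for a made-up sample
data = [(1, 0), (2, 0), (3, 0), (4, 1), (5, 1), (6, 0),
        (7, 1), (8, 1), (9, 1), (10, 1), (2, 1), (6, 1)]

def roc_cut_point(data):
    changed = [s for s, c in data if c == 1]
    unchanged = [s for s, c in data if c == 0]
    best_cut, best_youden = None, -1.0
    for cut in sorted({s for s, _ in data}):
        sens = sum(s >= cut for s in changed) / len(changed)     # true positives
        spec = sum(s < cut for s in unchanged) / len(unchanged)  # true negatives
        youden = sens + spec - 1
        if youden > best_youden:
            best_cut, best_youden = cut, youden
    return best_cut

print(roc_cut_point(data))
```

The single number this returns is, by construction, the cut-point that works best on average across the whole sample, which is exactly why it can mislead at the extreme ends of the scale.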

The point of all this is that in every case of which I’m aware, ‘meaningful change’ is not uniform across the scale – small changes in overall score are much more meaningful if they occur at the extreme (very high or very low) ends of the scale than they are in the mid-points.  I realize this doesn’t really help adoption of evidence-informed practice by clinicians, but I figure that since we’re still at the beginning of this revolution, we might as well get it right from the start before it’s too entrenched.  This is one of the reasons we’re trying to provide apps or spreadsheets for the scales we create or evaluate, in order to facilitate interpretation of scale scores (check the ‘clinician resources’ section of this website).

All for now, more to come shortly.

The Clinical Diagnosis App

Thanks once again to high school co-op student Tyler Beattie for putting this simple but useful app together for Android-based smartphones.  The Clinical Diagnosis Helper app asks you to enter known information about the pre-test probability of a condition, the positive (Sn/(1-Sp)) or negative ((1-Sn)/Sp) likelihood ratio of the test you're using, and then indicate whether the result of the test was positive or negative.  Note that you'll need the corresponding PLR or NLR depending on whether the test was positive or negative, but you don't need both.  Hit the calculate button to get the post-test odds that the condition exists.  If you thought a nomogram was simple, this is even easier.  There are other app options out there that do this, but this one's free and couldn't be easier to use.  Hit the link above on your smartphone to download it.
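For anyone wanting to see the math the app is doing under the hood, here's my reconstruction of it in Python (the function name and example numbers are mine, not taken from the app itself):

```python
# Post-test odds from pre-test probability and a likelihood ratio.
# PLR = Sn / (1 - Sp); NLR = (1 - Sn) / Sp. Use the PLR for a positive
# test result and the NLR for a negative one.
def post_test(pre_test_prob, likelihood_ratio):
    """Convert pre-test probability to odds, apply the likelihood ratio,
    and return both post-test odds and post-test probability."""
    pre_odds = pre_test_prob / (1 - pre_test_prob)  # probability -> odds
    post_odds = pre_odds * likelihood_ratio          # Bayes, in odds form
    post_prob = post_odds / (1 + post_odds)          # odds -> probability
    return post_odds, post_prob

# Example: 30% pre-test probability, positive test with a PLR of 5
odds, prob = post_test(0.30, 5.0)
print(round(odds, 2), round(prob, 2))
```

So in that example, a positive test moves you from a 30% pre-test probability to roughly a two-in-three chance the condition exists; the same math a nomogram does graphically.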


NDI-5 Application

First of all, extra special thanks to high-school co-op student extraordinaire Tyler Beattie for putting this app together.  Once again, this is for Android smartphones only (sorry iOS users, Apple makes it really tough for us occasional and non-paid app developers to work with their platform).  This is for the NDI-5 as described above.  The app has two functions.  First, patients can complete the NDI-5 directly on the smartphone, after which the app will auto-calculate the total raw score, the linearly-transformed score, and the raw score needed for a clinically significant linear change to occur, which is probably a more accurate reflection of MCID than change in the raw score alone (subject to debate, I acknowledge).  The second function is for when the patient completes the NDI-5 on paper, and the clinician just wants to enter the total score and get the target raw score for clinically important change.


A longitudinal exploration of the short-term course of conservatively managed neck pain

Re: Walton, Eilon-Avigdor, Wonderham, Wilk. Exploring the clinical course of neck pain in physical therapy: A longitudinal study. Archives of Physical Medicine and Rehabilitation (online first)

Something to note off the bat about this paper: the file structure for this paper in my Windows Explorer goes like this: initial submission, revision, re-revision, re-re-revision.  Suffice it to say, it took a while to get published :)

It’s not going to light the world on fire, but for those of us in the field, it certainly holds relevance.  The paper describes a study in which we followed 50 patients with mechanical neck pain over the course of 1 month of conventional, outpatient physical therapy.  We captured outcomes at inception (baseline), and then weekly for the next 4 weeks, meaning a total of 5 data points over 1 month for each subject.  The goal of this project was really to establish trends in the data amongst these 50 people.  We had actually hoped for 100 subjects but as all too often happens in clinical research, recruitment didn’t go as planned and we had to eventually settle at 50 in order to complete the project in a reasonable timeframe.

To summarize the results before getting into the meat of things, the mean trend in NDI score was an improvement of 1.5 NDI points per week, and for pain intensity the mean trend was an improvement of 0.5 points per week.  That right there holds importance if we agree that the MCID on the NDI is about 5 points and on the NRS is about 2 points (reasonably well-established by this point).  Using these averages, on average you could estimate that a meaningful change will occur in the average patient after about 4 weeks, on average.  In case you hadn’t caught it, we’re speaking largely in averages here.  As a researcher that does give me some confidence in study design – a follow-up period of 4 weeks should be adequate to capture meaningful change in the average patient, and 6 weeks might make even more sense.  Great from a funding/ethical research design standpoint.
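The "about 4 weeks" estimate above is just the MCID divided by the mean weekly rate of improvement.  A two-line sketch, using the MCID values (5/50 for the NDI, 2/10 for the NRS) and the mean trends reported above:

```python
# Weeks until the average patient reaches a clinically meaningful change:
# simply the MCID divided by the mean rate of improvement per week.
def weeks_to_mcid(mcid, improvement_per_week):
    return mcid / improvement_per_week

print(weeks_to_mcid(5, 1.5))  # NDI: 5-point MCID at 1.5 points/week
print(weeks_to_mcid(2, 0.5))  # NRS: 2-point MCID at 0.5 points/week
```

Both work out to roughly 3 to 4 weeks, which is why a 4-week (or safer, 6-week) follow-up window makes sense for the average patient.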

But of course, an average simply means that the deviations above and below the line balance out.  It is entirely possible that no one actually falls right on the average.  This is where I personally wanted to learn more about longitudinal modeling and was very fortunate that at my own institution (Western University) we have the brilliant Piotr Wilk, who has forgotten more about longitudinal modeling than I currently know.  With Piotr we were able to explore latent growth curves within that mean trend.  That is, we were able to look for other trends within the overall group that appeared mathematically justifiable.  Within the NDI data we identified 3 trends.  One group (about 20%, or 1 in 5) improved rapidly, at a rate of 4.5 NDI points per week, which means you would expect to see meaningful change in those folks within a week, two at most.  Another group showed a slow but steady worsening of about 1 NDI point per week.  Fortunately this was the smallest group, about 15% of the sample (1 in every 7 or so).  The third group was the largest (about 66%, or 2 in 3) and improved at about the average trend, 1.1 NDI points per week give or take.  Interestingly, the NRS showed only 2 trends: about half the sample improved about 1 point per week, while the other half stayed largely stable over the 4 weeks.

Once again, we’re seeing some value here.  When patients ask how quickly they should expect to see improvement, these are starting to give you some clues as to how to answer.  And for research design purposes, I may for example decide that I don’t really want the rapid improvement group in my study, since chances are they’ll improve regardless of what you do.  But there’s still one piece missing, and that is to understand what’s different about these groups that might tell us what trajectory they’re most likely to follow.

This is where our small sample size somewhat hampered what we could do.  While I could only report in the paper what was prudent, in a blog post I have a bit more latitude to wax philosophical.  Normal caveat, however: with a small sample I can’t say that my opinions are necessarily compelling.  But if I look deeply at the data, there are some clear trends that may help us explain who is going to end up in which trajectory.  Try these on for size, see if you agree:

Rapid improvement group: only 1 out of the 10 had had their symptoms for > 6 months, only 2 of the 10 reported radiating arm pain, and none were currently taking pain medications.  Each of these proportions was the lowest of the 3 groups.  Their mean number of pain locations (on a body diagram) was the lowest of the 3, and interestingly their NDI was the highest (though not by much vs. the worsening group).

Slow improvement group: The only notable difference between this group and the others was that they started with the lowest NDI score (perhaps meaning less room for improvement?).

Worsening group: Highest proportion of chronic symptoms, and nearly ¾ described a traumatic cause of their symptoms.  Both were the highest proportions of the 3 groups.  Their mean TSK-11 score was also the highest of the 3, but only by about 4 points over the rapid improvement group.

So, there do appear to be some trends in the data that may be useful.  Those with localized acute or subacute symptoms without radiating pain for which pain meds aren’t necessary appear more likely to fall into the rapid improvement group.  Those with chronic symptoms especially of traumatic cause and a slightly elevated (though not necessarily pathologically so) TSK-11 score appear more likely to fall into the slowly worsening group.  I also have PPT and Fear of Pain Questionnaire scores for these folks as well, which I’ll dive into next.  Let me know if you think this is of use to you clinically.