Why Naturopathic Physicians Particularly Need to be Evidence-Based Medicine Rock Stars
Warning, this article is going to be about evidence-based medicine (EBM)… Check in with yourself. What are you feeling?
Joking aside, while EBM has become established over the past 20 years in conventional and integrative medicine environments, it is often still met with skepticism.1 Let’s explore that.
In the aptly titled British Medical Journal paper, “Evidence based medicine: what it is and what it isn’t,” David Sackett, arguably the father of EBM, defined it as the “conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.” Sackett further delineated EBM as a confluence of physician experience, patient values, and best evidence (Figure 1).2
The first two elements don’t ruffle too many feathers. Physician experience comes from just that: experience. And being respectful of and working with patient values is something I believe naturopaths specifically excel at, so no problem there. But the third element, “best evidence,” is the one with which I often see my students, both ND and MD, struggle. The reason for this struggle, I believe, comes down to two issues with medical research: there is too much of it, and most of it isn’t very good.
In this essay I would like to discuss these two issues, suggest what we as clinicians can do about them, and explain why this is particularly important for naturopathic physicians.
Let us begin with the glut of medical research. The body of research articles out there is vast and growing at an exponential rate; PubMed alone currently contains more than 26 million citations. Gone are the days when you could get your JAMA in the mail each week, flip through it on the weekend, and feel up to date with current medical thinking.
And what about the quality of the deluge? Unfortunately, we can take little solace here either. Most research is of low methodological quality and likely reaches false conclusions; even the papers that look good initially should be suspect.3 Why? According to EBM proponent John Ioannidis, one reason is that the classic EBM hierarchy, often depicted as a pyramid with systematic reviews and randomized controlled trials (RCTs) on top, has been “hijacked.”4 Ioannidis argues that vested interests have taken advantage of this hierarchy and often produce papers that appear to meet criteria for ‘top of the pyramid’ evidence but actually carry a significant risk of bias that takes some digging to uncover. Examples include publication bias (the selective publication of studies with advantageous results), weak comparator choice, selective outcome reporting, and spin in the abstract and discussion sections.
For these reasons, overworked and time-poor physicians have come to rely on pre-appraised evidence sources such as UpToDate, DynaMed, and clinical practice guidelines to guide decision-making in their practices.5 Although there are some issues with relying solely on pre-appraised evidence, and some have argued that doing so may actually undermine the anti-authoritarian nature of EBM,6 its time-saving use is likely a net positive for busy clinicians.
Evaluating the Evidence
Now comes the point where I tell you why naturopathic physicians, in particular, need to be EBM rock stars. While naturopathic researchers are toiling away every day establishing an evidence base for the profession, to date, naturopathic medicine has a relative dearth of high-quality clinical evidence. This thin evidence base makes evidence-based guidelines difficult to construct.
So what is an evidence-based naturopath to do? Well, pragmatically this means that we tend to go directly to whatever primary research exists. Unfortunately, these studies tend to be methodologically weaker and are rarely pre-appraised by a methodologist.
Therefore, I propose that we need to be our own methodologists. I know we went to naturopathic medical school to be clinicians, but I believe that modern medicine needs clinicians to also be part-time scientists. We need to be able to identify relevant evidence, ruthlessly assess it, and apply it in practice. But how do we do a good job at ferreting out bias and reflecting only the most trustworthy material in our work with patients?
I have taught critical evaluation of the medical literature to NDs, MDs, DOs, chiropractors, medical students, and others; I have taught it online and in person, lectured in front of large classes, guided small journal clubs, and taught day-long boot camps and multiple quarter-long training programs. So believe me when I say there are MANY tips and techniques you can learn to master evidence evaluation. You don’t need to be a master to start, but you do need to start somewhere.
Becoming an EBM Rock Star
Like our medical practices themselves, critical evaluation of the medical literature takes practice. While we left naturopathic medical school with competence, we certainly didn’t leave as experts. That’s what a life of practice is all about: honing and refining your work and becoming the best physician you can be. It is the same with research evaluation. Take a page out of Dr. Bastyr’s book and critically read an article every night, or at least once a week. Excellence will come with time and practice.
I would suggest that you start with the following three questions whenever you read a research paper:
- Can you trust it? This speaks to the study’s risk of bias. Some of these elements you likely already know, such as checking for blinding, but there are many more domains and in-depth tools you can use. I would recommend learning the Cochrane Risk of Bias tool7 if you want to do a deep dive, but the simplest thing to look for is conflict of interest. Who are the authors of the paper? Who was the sponsor? Try to understand their motivations. Be skeptical. A quick scan of the authors and their affiliated institutions can be a tip-off: Is one of the authors an employee of the company whose natural product is being investigated?
- How big is it? We often get caught up in p values. While certainly important, p values only tell you about statistical significance; they tell you nothing about clinical significance. We need to be better at understanding the effect sizes of interventions, which can be described as risk ratios, odds ratios, numbers needed to treat, standardized mean differences, and more. While these measures are often confusing, try to get a sense of what the actual effect size is. Ask yourself: Is this an important difference? Would it matter to my patient? For example, consider a hypothetical study in which one group, after treatment with supplement X, had a mean fasting blood glucose of 188 mg/dL, while the mean fasting glucose in the control group was 189 mg/dL. Would that statistically significant difference be meaningful to your patient?
- Does it apply to my patient? If the study passes the trust test and the clinical relevance check, we then need to ask whether it is applicable to our patient. I often advise my students to look at the baseline data table for the study participants. Are all the participants normotensive, normal weight, and of Asian descent, while your patient is hypertensive, overweight, and Caucasian? Is the analysis based on per-protocol compliance, while your patient at best does half the things you recommend? A research study will rarely match up perfectly with your patient, but at least see if you are in the same ballpark.
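To make the “How big is it?” question concrete, here is a minimal sketch, in Python, of how the effect-size measures named above are computed. All numbers are invented for illustration, including the event rates and the pooled standard deviation attached to the hypothetical glucose study.

```python
# Worked example with made-up numbers (not from any real trial).

# Dichotomous outcome: suppose 20% of controls and 15% of the treated
# group experienced the event of interest.
cer = 0.20                 # control event rate
eer = 0.15                 # experimental event rate

arr = cer - eer            # absolute risk reduction
rr = eer / cer             # risk ratio
nnt = 1 / arr              # number needed to treat

print(f"ARR = {arr:.2f}, RR = {rr:.2f}, NNT = {nnt:.0f}")

# Continuous outcome: the hypothetical glucose study, assuming a pooled
# standard deviation of 25 mg/dL (also invented for illustration).
mean_tx, mean_ctrl, pooled_sd = 188.0, 189.0, 25.0
cohens_d = (mean_tx - mean_ctrl) / pooled_sd   # standardized mean difference

print(f"Cohen's d = {cohens_d:.2f}")   # a trivially small effect
```

A 5-percentage-point absolute risk reduction means treating 20 patients to prevent one event, and a standardized mean difference of 0.04 standard deviations is clinically negligible regardless of its p value; this is the kind of back-of-the-envelope arithmetic worth doing for every trial you read.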
In summary, busy clinicians who strive to practice in an evidence-based manner face the conundrum of extracting the best evidence from a plethora of poor-quality research. To deal with this, clinicians have moved away from individual article review and toward pre-appraised evidence. Unfortunately, naturopathic physicians (and any other medical providers in a field with limited clinical evidence) are forced to source directly from the primary literature. This literature is often riddled with bias, much of it subtle and un-appraised. We must never accept the results of a paper at face value, but rather be our own methodologists, our own ferreters of bias, our own critical evaluation rock stars.
1. Trinder L, Reynolds S. Evidence-Based Practice: A Critical Appraisal. Oxford: Blackwell Science; 2000:249.
2. Sackett D, Rosenberg W, Gray J, et al. Evidence based medicine: what it is and what it isn’t. BMJ. 1996;312(7023):71–72.
3. Ioannidis JP. Why most published research findings are false. PLoS Med. 2005;2(8):e124.
4. Ioannidis JP. Evidence-based medicine has been hijacked: a report to David Sackett. J Clin Epidemiol. 2016;73:82–86.
5. Chapa D, Hartung M, Mayberry L, Pintz C. Using preappraised evidence sources to guide practice decisions. J Am Assoc Nurse Pract. 2013;25(5):234–243.
6. Goldenberg MJ, Borgerson K, Bluhm R. The nature of evidence in evidence-based medicine: guest editors’ introduction. Perspect Biol Med. 2009;52(2):164–167.
7. Higgins JP, Green S, eds. Cochrane Handbook for Systematic Reviews of Interventions. Version 5.1.0 [updated March 2011]. The Cochrane Collaboration; 2011. Available at: handbook.cochrane.org.
This article was originally published by Naturopathic Doctor News & Review.