There has been a push in recent years for people (e.g., front-line practitioners, managers, policymakers) working in all sectors (e.g., health, social, environment) to use evidence in their decision-making. For the most part, this “evidence” is understood to mean “research findings” published in peer-reviewed or professional journals. The quality of this research is then judged according to dimensions defined by the scientific method. For example, studies that can demonstrate cause and effect (e.g., randomized controlled trials) are considered the strongest type of research.

Generally speaking, there is an emerging dissatisfaction with approaches to evidence-informed decision-making that overemphasize research evidence at the expense of other types of knowledge. What if you need to make a decision and no research is available? What if the available research isn’t relevant or meaningful to your situation? Where do you turn? What information should you use to inform your decision? To reflect the idea that “evidence” is more than “research,” evidence-informed decision-making has come to be defined as “the systematic application of the best available evidence to the evaluation of options and to decision-making in clinical, management and policy settings”1.

So what does this mean to you – the decision maker? In the absence of research evidence, or to complement it, a front-line practitioner, manager or policymaker may look to non-research evidence that adds local perspective and context (e.g., an evaluation report, a strategic plan, or data from online statistical databases such as those found at Statistics Canada). Once you have identified evidence that will be useful, the next step is deciding whether that evidence is of good quality.

There are established criteria for judging the quality of evidence from a research study. However, no similar guidance exists for judging the quality of non-research evidence. How can you decide whether the evidence available to you is worth using? There is no easy answer to this question, but you might consider asking yourself some or all of the following questions2-3:

  • Is the evidence relevant given what you need to know?
  • Is the aim or purpose clearly stated?
  • Is a rationale or background provided? If so, is it reasonable?
  • Are the approach and methods used clearly described?
  • Do the data sufficiently support any interpretations or conclusions?
  • Is the analytical approach appropriate and adequately explained?
  • Does the evidence include enough information or argumentation to support claims or conclusions?
  • Is the evidence produced by a credible source?

Think carefully about the above questions and deliberate with others. If you judge the evidence to be questionable in terms of relevance or quality, think twice before relying on it.

Jennifer Boyko
Postdoctoral Fellow
School of Health Studies
University of Western Ontario
jboyko@uwo.ca

  1. Health Canada. (1997). Canada Health Action: Building on the Legacy: Synthesis reports and issues papers – Volume II. Health Canada.
  2. Dixon-Woods, M., Cavers, D., Agarwal, S., Annandale, E., Arthur, A., Harvey, J., et al. (2006). Conducting a critical interpretive synthesis of the literature on access to healthcare by vulnerable groups. BMC Medical Research Methodology, 6(35).
  3. Boyko, J.A., Lavis, J.N., Abelson, J., Dobbins, M., & Carter, N. (2012). Deliberative dialogues as a knowledge translation strategy: Development of a model using critical interpretive synthesis. Social Science and Medicine, 75(11), 1938-1945.