Not all research is created equal
12th June 2015
Jessica Vince of DFID reflects on their Assessing the Strength of Evidence framework for determining quality when engaging with evidence. The views expressed are her own and do not reflect official DFID policy.
Not all research is created equal. While this seems like an obvious point, it can be difficult to know which studies to give weight to and which to treat with a heavy dose of caution. A judgement of quality is based on an assessment of a number of factors, but even deciding which factors to consider can be difficult. There is an increasing emphasis on looking for the best evidence to inform decisions about investments, but there are myriad quality frameworks, indicators and methodological concepts, which can be a challenge to harmonise even for experienced researchers.
DFID’s How to Note Assessing the Strength of Evidence was developed to provide a consistent framework that can be used to assess quality. It is part of a bigger push to embed a consideration of quality into our engagement with evidence. It is not enough to say that particular methods are somehow inherently better than others. Quality is about the design and implementation of an approach that is methodologically suited to answer a given research question. The How to Note has been useful in distilling the variety of possible indicators and offering a clear steer about how to critically appraise the quality of a study on its own merit.
DFID staff use the note to carry out in-house quality assessments and, when commissioning evidence synthesis, to set standards and communicate quality expectations to suppliers. We are interested not only in the strengths and limitations of individual studies but also in the overall evidence base: a combination of the size of that evidence base, its consistency and the quality of the individual studies that comprise it. The note provides a common language for how we, and our suppliers, approach these considerations.
However, there are of course limitations to the note and its use. For example, there is debate about whether it is right to apply a single framework to different research designs. When assessing qualitative research, for instance, it is arguable that validity and reliability should be conceptualised differently than they are for quantitative studies. Applying the framework also requires some understanding of the concepts underpinning research quality and research methods. At DFID, we have complemented the note with online and in-person training, which has helped increase people's confidence and ability to apply it. Yet it can still be intimidating to engage with methodological or analytical nuance if you are unfamiliar with the approach being used. This highlights another challenge: although the note offers a consistent framework, deciding whether a particular study meets the criteria to a high standard remains a subjective judgement. It is important that users are open and frank about this. And, of course, it takes time to critically appraise the quality of research, which is the one thing most policy makers are short of.
The point of the note is to help people become more intelligent consumers of evidence; it isn't about getting them to rewrite the equations used in an analysis or to redesign a research project. It's about offering prompts that help people consider the uses and limitations of a particular piece of research, and enable us to do what we do, better.