Pearson M., Coomber R.
Addiction: 2010, 105(1), p. 136–145.
Observations by researchers who participated in the process suggest that the development of UK guidance on the prevention of substance misuse in young people was hampered by a focus on methodological purity rather than the real-world relevance of the studies included in the underlying review of evidence.
Summary
Aim To critically evaluate the methods used in a systematic review of studies conducted to inform the development of guidance on interventions to reduce substance misuse in young people. This analysis also extends to the deliberations of the committee responsible for developing the guidance.
Design Participant observation in the review process, semi-structured interviews with review team members and management and structured observation of the process of guidance development.
Setting An 'arm's-length' government body.
Participants Review team members, management and the committee responsible for producing evidence-based guidance for policy and practice.
Measurements Data from interviews and (participant-) observation were reflected upon critically in order to increase understanding of the systematic review process.
Findings The application of systematic review methods produced an evidence base that did not inform the development of guidance to the extent that it could have done:
• an emphasis upon internal research validity (the extent to which the research design is tight enough to enable conclusions to be drawn about whether the intervention caused the observed impacts; this depends on there being no relevant differences between treatment and control conditions other than the intervention, and on the elimination of other influences which might obscure the intervention's impact; achieving this degree of purity may reduce generalisability to real-world situations) produced an evidence base with an emphasis on short-term, discrete packages of interventions at the level of the individual, which insufficiently recognised the role played by the wider determinants of health;
• the criteria for establishing external validity (the degree to which an evaluation tests an intervention in conditions which permit us to assume that similar impacts will be observed under the intended 'real-life' conditions; this can be maximised either by limiting the claims made for the study's generalisability to real-world situations, or by employing more naturalistic research designs which more closely match research and real-world conditions) were undeveloped, resulting in ad hoc inferences being made at the stage of guidance development rather than relevant evidence being searched for, evaluated and synthesised in the course of the review; and
• no matter how rigorously the criteria for internal and external validity might be developed, the systematic review of evidence and development of guidance are strongly reliant upon the expert judgement of reviewers and committee members, whether or not this judgement is openly acknowledged.
Conclusions Prioritising internal validity in a systematic review risks producing an evidence base that is not adequately informed by the wider determinants of health and which does not give sufficient consideration to external validity. It is imperative to avoid adhering to narrow methodological criteria at the expense of exercising critical judgement or acknowledging its role. The use of appropriate methods requires that commissioners of systematic reviews are clear at the outset how the review is intended to be used. Review methods such as meta-ethnography and realist synthesis could help make the frameworks within which judgements are made more explicit.
Last revised 08 March 2011