Research analysis

This entry is our analysis of a study added to the Effectiveness Bank. The original study was not published by Findings; click the title to order a copy. Free reprints may be available from the authors via the prepared e-mail. The Summary conveys the findings and views expressed in the study.


The challenge of external validity in policy-relevant systematic reviews: a case study from the field of substance misuse.

Pearson M., Coomber R.
Addiction: 2010, 105(1), p. 136–145.
Unable to obtain a copy by clicking the title? Try asking the author for a reprint by adapting this prepared e-mail or by writing to Dr Pearson. You could also try this alternative source.

Observations by researchers who participated in the process suggest that the development of UK guidance on the prevention of substance misuse in young people was hampered by a focus on methodological purity rather than the real-world relevance of the studies included in the underlying review of evidence.

Summary Aim To critically evaluate the methods used in a systematic review of studies conducted to inform the development of guidance on interventions to reduce substance misuse in young people. The analysis also extends to the deliberations of the committee responsible for developing the guidance.

Design Participant observation in the review process, semi-structured interviews with review team members and management, and structured observation of the process of guidance development.

Setting An 'arm's-length' government body.

Participants Review team members, management and the committee responsible for producing evidence-based guidance for policy and practice.

Measurements Data from interviews and (participant-) observation were reflected upon critically in order to increase understanding of the systematic review process.

Findings The application of systematic review methods produced an evidence base that did not inform the development of guidance to the extent that it could have done:
• an emphasis upon internal research validity (Findings' explanation: the extent to which the research design is tight enough to enable conclusions to be drawn about whether the intervention caused the observed impacts; this depends on there being no relevant differences between treatment and control conditions other than the intervention, and on the elimination of other influences which might obscure the intervention's impact – achieving this degree of purity may reduce generalisability to real-world situations) produced an evidence base with an emphasis on short-term, discrete packages of interventions at the level of the individual, which insufficiently recognised the role played by the wider determinants of health;
• the criteria for establishing external validity (Findings' explanation: the degree to which an evaluation tests an intervention in conditions which permit us to assume that similar impacts will be observed under the intended 'real-life' conditions; this can be maximised either by limiting the claims made for the study's generalisability to real-world situations, or by employing more naturalistic research designs which more closely match research and real-world conditions) were undeveloped, resulting in ad hoc inferences being made at the stage of guidance development rather than relevant evidence being searched for, evaluated and synthesised in the course of the review; and
• no matter how rigorously the criteria for internal and external validity might be developed, the systematic review of evidence and development of guidance are strongly reliant upon the expert judgement of reviewers and committee members, whether or not this judgement is openly acknowledged.

Conclusions Prioritising internal validity in a systematic review risks producing an evidence base that is not adequately informed by the wider determinants of health and which does not give sufficient consideration to external validity. It is imperative to avoid adhering to narrow methodological criteria at the expense of exercising critical judgement or acknowledging its role. Using appropriate methods requires that commissioners of systematic reviews are clear at the outset about how the review is to be used. Review methods such as meta-ethnography and realist synthesis could help make the frameworks within which judgements are made more explicit.

Last revised 08 March 2011


Top 10 most closely related documents on this site.

DOCUMENT 2011 European drug prevention quality standards: a manual for prevention professionals

REVIEW 2011 Early intervention: the next steps. An independent report to Her Majesty's Government

STUDY 2010 Bridging the gap between evidence and practice: a multi-perspective examination of real-world drug education

STUDY 2003 Drug education: inspections show that tick box returns are no guarantee of quality

REVIEW 2009 School-based programmes that seem to work: Useful research on substance use prevention or suspicious stories of success?

STUDY 2001 Prevention is a two-way process

REVIEW 2010 Alcohol-use disorders: Preventing the development of hazardous and harmful drinking

REVIEW 2014 Interventions to reduce substance misuse among vulnerable young people

STUDY 2010 One-year follow-up evaluation of the Project Towards No Drug Abuse (TND) dissemination trial

STUDY 2003 Substances, adolescence (meta-analysis)