Mason T., Sutton M., Whittaker W. et al
Addiction: 2015, 110(7), p. 1120–1128.
A flagship drug treatment policy initiative appears to have backfired in England, where the government’s pilot payment-by-results schemes seem to have led to fewer successful completions of treatment and more prospective patients declining treatment.
Summary Typically pay-for-performance schemes have linked payments to health service providers to measures of clinical quality rather than outcomes, but recent schemes introduced by the UK government have focused more clearly on paying for ‘outcomes’. These funding arrangements are promoted as paying treatment services for the value they produce, and are intended to incentivise innovation rather than the adoption of pre-set procedures and techniques. However, they can also generate unintended effects and transfer financial risk from the funders to the providers of services. Among these initiatives has been a scheme for funding treatment of drug and alcohol problems, the subject of the featured study.
In 2012 the Department of Health in England introduced a pilot programme in eight areas under which drug and alcohol treatment services received payments based on their patients achieving ‘recovery-focused’ outcomes.
Compared to other areas, these payment-by-results schemes appear to have led to slightly fewer successful completions of treatment and more patients declining treatment.
These unwanted results may be reversed in the longer term, and it remains possible that performance on other yardsticks of success has been improved by the schemes, particularly post-treatment relapse.
Previously, commissioning of drug treatment services had focused on retention in treatment as the principal measure of effectiveness, and service providers were paid using ‘block’ or ‘activity’ contracts. A break from this came in April 2012, when the Department of Health in England introduced a pilot programme under which drug and alcohol treatment services received payments based on their producing ‘recovery-focused’ outcomes. This pilot in eight areas was a pillar of the 2010 drug strategy, which prioritised service users recovering from their dependence on drugs or alcohol and successfully completing and leaving treatment. Primarily against the yardstick of successful completion, the featured study compared the performance of the pilot areas in their first year against those of other areas in England, drawing on data routinely collected by the National Drug Treatment Monitoring System on patients being treated for problems with drugs other than alcohol.
Under the pilot scheme drug action teams responsible for organising treatment services in their areas linked provider payments to performance indicators specified (with limited scope for local variations) in a national outcomes framework. Outcomes were grouped under: progression towards abstinence from problem drug(s); reduction in offending; and improved health and wellbeing. The evaluation focused on the first domain, and specifically on successful treatment completion as the key proxy for recovery used by Public Health England. Together with not returning to treatment within the next year, this outcome attracted a large part of the payments to services in the pilot areas. Since the non-return outcome cannot be assessed until at least 12 months after discharge, the featured report was confined to the first phase – successful completion of treatment – which triggers the first payment to providers in the abstinence domain. The monitoring system records a successful completion when an individual completes their planned treatment having been judged by a clinician to be free of dependence on the drug in respect of which they were being treated, and also using neither heroin nor crack. The rate of successful completions is the proportion of all treatment episodes during the financial year which end in successful completion.
Because it seemed possible that treatment changes generated by the schemes might have weakened engagement with treatment services, the evaluation also assessed rates of treatment refusal – patients referred to a treatment service and who met face-to-face with service staff, but decided not to start treatment.
The key measure was not the absolute rates of completions and refusals, but how these changed from before (April to December of 2011) to after (April to December of 2012) the start of the pilot schemes. If, after adjusting for other influences, they changed to a significantly different degree in the pilot areas compared to other areas, this would be evidence that introducing the schemes had had an impact. To assess this, before-after trends in the eight pilot areas were benchmarked against trends in:
• all 140 other drug action team areas in England;
• the 42 areas most similar to the pilot areas in the percentages of the local population who used opiates or crack and the areas’ deprivation indices;
• the 90 areas in the same four government-office regions of England (East, North East, South West, West Midlands).
Some non-pilot areas had nevertheless implemented payment-by-results schemes on their own initiative, so similar comparisons were made between trends in successful completions and refusals across all payment-by-results areas versus areas with no such schemes as of July 2012.
Whether confined to the pilot areas or across England, and whatever the benchmark, in payment-by-results areas successful treatment completions fell once schemes started and treatment refusals rose relative to other areas. Details below.
Relative to other areas, after the schemes started patients in pilot areas became significantly less likely to successfully complete treatment. After adjusting for other influences, it was calculated that 1.3% fewer treatment episodes ended in successful completion than would have done had the pilot areas followed the same before-after trajectory as other areas. The raw figures were: in pilot areas, about 13.7% of episodes ended in successful completion in 2011, falling to 11.3% in 2012; in all other areas there was a smaller fall, from 12.5% to 12.2%. The extra drop in the completion rate in pilot areas was comparable when the benchmark was changed to similar areas or to areas in the same regions.
While successful completions appeared to fall as a result of the schemes opening, the proportion of prospective patients who declined treatment rose slightly but significantly relative to all other areas, by about 1%. The raw figures were: in pilot areas, 0.47% declined treatment in 2011, rising to 1.03% in 2012; in all other areas there was a slight fall, from 0.55% to 0.43%. This extra increase in refusals remained stable when the benchmark was changed to similar areas or to areas in the same regions.
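The unadjusted before-after contrast behind these figures can be checked with a simple difference-in-differences calculation. This is only a sketch using the raw percentages reported above; the study's own estimates additionally adjusted for other influences, so they differ somewhat from these raw gaps.

```python
def diff_in_diff(pilot_before, pilot_after, other_before, other_after):
    """Raw difference-in-differences: the extra change in pilot areas
    beyond the change seen over the same period in comparison areas
    (all values and the result in percentage points)."""
    return (pilot_after - pilot_before) - (other_after - other_before)

# Successful completions (% of treatment episodes), 2011 -> 2012
completions = diff_in_diff(13.7, 11.3, 12.5, 12.2)  # (-2.4) - (-0.3) = -2.1

# Treatment refusals (% of prospective patients), 2011 -> 2012
refusals = diff_in_diff(0.47, 1.03, 0.55, 0.43)     # 0.56 - (-0.12) = 0.68

print(f"Extra change in completions: {completions:+.1f} percentage points")
print(f"Extra change in refusals:    {refusals:+.2f} percentage points")
```

The raw gaps (a 2.1 percentage-point extra fall in completions, a 0.68-point extra rise in refusals) are larger than the adjusted estimates quoted above (1.3% and about 1% respectively) precisely because the study's regression analysis controlled for other influences on the trends.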
When all payment-by-results areas (ie, not just the pilot areas) were compared with those without such schemes, there remained an extra 1% decrease in the rate of successful completions. At 0.15% the extra increase in treatment refusals was smaller than in the pilot areas, but still statistically significant.
After the introduction of payment-by-results pilot schemes, patients in those areas were significantly less likely to complete treatment and significantly more likely to decline to commence treatment. It remains to be seen whether these apparently negative impacts would be matched by similar impacts in the other outcome domains rewarded by payment-by-results schemes and on other measures of treatment quality.
Possibly the decrease in successful completions was due to treatment services holding on longer to clients to try to ensure they were sufficiently recovered not to return to treatment in the following year; if a patient did return, the service would lose the single largest element of funding under the schemes. An increase in treatment refusals seems an unintended consequence of the schemes, one which can be expected to undermine recovery from addiction and increase the chances of the drug users concerned having to attend emergency departments and be admitted to hospital.
It should be remembered that these results derive from the first year of the pilot schemes. Rates of treatment completion may improve in the longer term as the considerable change in treatment systems necessitated by the schemes settles down. Also, this study was not able to randomly allocate areas to implement payment-by-results schemes. Despite adjusting for other influences, there may remain differences between the areas which volunteered for and were chosen as pilot areas (or those which chose themselves to implement schemes) and other areas which affected trends in treatment completions and refusals.
commentary This study, funded by the Department of Health, found a strikingly consistent picture of payment-by-results schemes having the opposite effect to that intended on successful completions, weakening performance on what the government and Public Health England see as a critical indicator of successful treatment. Whether, in terms of patient welfare and societal benefits, this really meant introducing the schemes was counterproductive could depend on whether fewer successful completions were counteracted by more patients staying in rather than dropping out of treatment, and whether in the longer term the schemes will be shown to have performed better than in their first year of operation. More below.
As the authors speculate, it is possible that services in the pilot areas held on slightly longer to patients who qualified for successful discharge in order to bolster the stability needed to avoid relapse and treatment return, and that in this they were successful. Though being retained in treatment is no longer favoured by government, it has been associated in England with a better crime-reduction record than successful completion, and is the most consistent indicator of desired outcomes for patients and society. If retention did improve, both patients and society more broadly may have gained from the pilot schemes. However, if fewer successful completions meant not more patients staying in treatment, but more dropping out, the consequences are likely to have been negative. Also as the authors suggest, it is possible that the results were due to a transient disruption consequent on introducing the schemes, and that later years will see improvements.
Another strand of the featured evaluation tried to find out what was happening in the pilot areas by interviewing or holding focus groups with stakeholders, service users and carers. According to an interim report, these sources appeared on balance to favour continuing with some element of payment by results in their local funding formulas, with adaptations based on the experience of the pilots, but this picture comes from within areas which had volunteered for the schemes. The clearest negatives seemed to be unwelcome pressure towards abstinence and treatment exit, particularly curtailing the prescribing of medications like methadone intended to alleviate the need for opiate-dependent patients to use illegal heroin.
An interesting aside in the featured report is the almost total lack of competition between services in the pilot areas, meaning patients had little or no choice of which treatment organisation to go to: “Across the eight pilot areas, the average number of providers in 2011–12 was 1.875. In theory, competition existed in only one of these eight areas, and this area had two providers offering distinct, stand-alone services.” This might be happening too in other areas as commissioners opt for the cost-savings and convenience of bundling all local services under one mega-service umbrella, and smaller organisations lose out in the expensive and time-consuming business of tendering for re-commissioned services.
Findings from areas which implemented payment-by-results schemes on their own initiative are complicated by the lack of clarity over when those schemes came into operation. What can be made of the small increase in non-commencement of treatment is unclear. The most worrying interpretation would be that services are becoming more choosy about who they take on in an attempt to cherry-pick those most likely to succeed rapidly, and that this results in more patients being persuaded that the service is not for them. From the featured report, it seems that patients deterred in this way would normally have had no alternative service to go to, and would appear on the records as having declined treatment. However, if services were doing this, the strategy would, at least in the first year, appear to have backfired. Other possibilities are that cherry-picking went the other way, services opting for higher-tariff but more difficult patients, and/or that patients reacted badly to the first step in the process being assessment of the tariff they would attract rather than assessment by a treatment service.
The government has produced its own assessment of the early performance of the pilot schemes based on the 6,582 patients being treated for drug rather than alcohol problems to February 2013. It found consistent gains (compared to what happened in the same areas before and to the national average) only in the proportion of patients who while in treatment said (via forms completed by staff) they had stopped using their problem substance(s). But in line with the featured study, against the same comparators the proportion exiting treatment free of dependence worsened in the pilot areas. Other measures such as housing status and quality of life were largely unaffected by the opening of the schemes. For the 3,081 patients whose problems were mainly with alcohol, things seemed somewhat worse. There was no indication that the pilots had elevated abstinence rates and the proportion exiting treatment free of dependence was lower than in the rest of England and lower than in the same areas before the pilots.
For more on payment-by-results in the UK and on treatment commissioning in general, see this Effectiveness Bank hot topic.
Thanks for their comments on this entry in draft to Russell Webster. Commentators bear no responsibility for the text including the interpretations and any remaining errors.
Last revised 23 June 2015. First uploaded 16 June 2015