Cost-effectiveness of the Adaptive Implementation of Effective Programs Trial (ADEPT): approaches to adopting implementation strategies

Abstract

Background

Theory-based methods to support the uptake of evidence-based practices (EBPs) are critical to improving mental health outcomes. Implementation strategy costs can be substantial, and few strategies have been rigorously evaluated economically. The purpose of this study is to conduct a cost-effectiveness analysis to identify the most cost-effective approach to deploying implementation strategies to enhance the uptake of Life Goals, a mental health EBP.

Methods

We used data from a previously conducted randomized trial to compare the cost-effectiveness of Replicating Effective Programs (REP) combined with external and/or internal facilitation among sites non-responsive to REP. REP is a low-level strategy that includes EBP packaging, training, and technical assistance. External facilitation (EF) involves external expert support, and internal facilitation (IF) augments EF with protected time for internal staff to support EBP implementation. We developed a decision tree to assess 1-year costs and outcomes for four implementation strategies: (1) REP only, (2) REP+EF, (3) REP+EF, add IF if needed, and (4) REP+EF/IF. The analysis used a 1-year time horizon and assumed a health payer perspective. The health outcome was quality-adjusted life years (QALYs); the economic outcome was the incremental cost-effectiveness ratio (ICER). We conducted deterministic and probabilistic sensitivity analyses (PSA).

Results

Our results indicate that REP+EF, add IF is the most cost-effective option, with an ICER of $593/QALY. The REP+EF/IF and REP+EF only conditions are dominated (i.e., more expensive and less effective than comparators). One-way sensitivity analyses indicate that results are sensitive to the utilities for REP+EF and REP+EF, add IF. The PSA results indicate that REP+EF, add IF is the optimal strategy in 30% of iterations at a threshold of $100,000/QALY.

Conclusions

Our results suggest that the most cost-effective implementation support begins with a less intensive, less costly strategy initially and increases as needed to enhance EBP uptake. Using this approach, implementation support resources can be judiciously allocated to those clinics that would most benefit. Our results were not robust to changes in the utility measure. Research is needed that incorporates robust and relevant utilities in implementation studies to determine the most cost-effective strategies. This study advances economic evaluation of implementation by assessing costs and utilities across multiple implementation strategy combinations.

Trial registration

ClinicalTrials.gov Identifier: NCT02151331, 05/30/2014.

Background

Evidence-based treatments for mental health conditions, including depression, are essential to improving the public’s health [1]. Mental health conditions frequently co-occur with substance use disorders and other conditions, triggering both short- and long-term consequences [2]. Mental health conditions also carry a significant financial toll: researchers estimated that the annual earnings loss attributable to serious mental illness in 2008 was $193.2 billion [3].

Collaborative care models (CCMs) have demonstrated effectiveness in improving outcomes among patients with mental disorders [4,5,6]. Life Goals is an evidence-based CCM designed to improve medical and psychiatric outcomes for persons with mood disorders through personal goal-setting aligned with wellness and symptom-coping strategies, supported through collaborative care; it focuses on three components recognized as central to effective CCMs: patient self-management, care management, and provider decision support [7, 8]. Several randomized trials have shown Life Goals to be effective in improving mental and physical health outcomes for patients with unipolar and bipolar depression [4,5,6, 9]. The Life Goals self-management component comprises six psychosocial sessions for patients, delivered in either individual or group format. All Life Goals patients complete core introduction and conclusion modules; patients and providers choose the four intermediary sessions from among several mental health and wellness subjects, including depression, mania, physical activity, and substance abuse. Life Goals also provides manualized support for care management and provider decision support, including templates for tracking patient progress and guides to common medications for patients with unipolar or bipolar depression.

Most individuals suffering from depression and other mental health conditions do not receive evidence-based practices (EBPs) such as Life Goals in community settings, resulting in poor and costly health outcomes and millions of research dollars wasted when EBPs fail to reach those most in need [10,11,12]. Researchers increasingly recognize that EBPs must be complemented by effective implementation strategies (i.e., implementation interventions) to achieve desired public health outcomes [13]. Replicating Effective Programs (REP) is an implementation strategy focused on maximizing flexibility and fidelity in EBP delivery [14]. REP is based on the CDC’s research-to-practice framework [15] and is guided by Social Learning [16] and Diffusion of Innovations [17] theories. Standard REP includes three primary components: program packaging, provider training, and facilitation. It is a low-intensity, minimal-cost strategy akin to standard implementation for many evidence-based programs and practices; it has improved uptake of brief HIV-focused interventions but has been less successful with more complex behavioral interventions [18]. Researchers have therefore developed enhanced REP for more complex clinical behavioral interventions, which adds customization for program packaging and training, plus implementation facilitation [19].
Implementation facilitation (i.e., facilitation) is a promising implementation strategy from the integrating Promoting Action on Research Implementation in Health Services (iPARIHS) framework that provides ongoing, individualized assistance for program delivery that can help enhance uptake of EBPs such as Life Goals in community clinics [19, 20]. Facilitation applies principles of interactive problem solving with practice-based knowledge to support providers as they engage in program delivery [21, 22]. Individuals within (internal facilitator, IF) and outside of (external facilitator, EF) the organization can provide ongoing support for EBP implementation [19]. External facilitators (EF) provide expertise, active guidance, and support for intervention delivery. Internal facilitators (IF) work in tandem with EFs to support providers in program delivery and communicate with organizational leadership and the external facilitator.

The costs associated with implementation strategies, especially multicomponent strategies such as REP+facilitation, can be substantial. Cost is a key consideration from an organizational or system perspective when implementing new innovations [11]. Understanding the resources needed to achieve desired behavioral outcomes (e.g., improved mental health) is essential to implementing and sustaining EBPs in communities [23]. Most economic evaluation of implementation, however, has focused on intervention costs and not the costs of implementation strategies required to deploy and sustain them [24]. Economic evaluation of implementation refers to the systematic evaluation of what outcomes a specific implementation strategy or set of competing strategies achieves and the costs of achieving them [25]. Economic evaluation provides key information for decision makers regarding implementation strategies to support and sustain EBP delivery. Organizations benefit from evidence that supports (or refutes) investment in specific strategies as an efficient use of resources, and this can help prioritize implementation efforts [11, 24, 26]. Despite this need for practical economic information that will provide decision makers with information on whether the cost of deploying an implementation strategy is worth the added cost (versus standard implementation or an alternative strategy), less than 10% of implementation studies include cost information, and even fewer conduct comparative economic analyses [25, 27]. Thus, additional research is needed to advance economic evaluation of implementation as this will be instrumental in demonstrating if investment in implementation strategies is worth the additional costs [28].

Many types of cost evaluation exist, but one well suited to implementation science is cost-effectiveness analysis. Cost-effectiveness analysis (CEA) assesses whether incremental benefits of one strategy versus another are sufficient to justify additional costs and has been used to support mental health treatment-focused EBPs for clinical settings [29]. CEA can inform decisions about resource allocation for program selection and delivery [30].
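
For reference, the standard CEA decision rule can be stated compactly. With \(C\) the per-patient cost and \(E\) the effectiveness (here, QALYs) of each strategy, and \(\lambda\) a willingness-to-pay threshold:

\[ \mathrm{ICER}_{A\,\text{vs}\,B} \;=\; \frac{C_A - C_B}{E_A - E_B}, \qquad \text{prefer } A \text{ over } B \text{ when } \mathrm{ICER}_{A\,\text{vs}\,B} < \lambda . \]

A strategy that is both more costly and less effective than a comparator is said to be dominated and is excluded from the ratio comparisons.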

The objective of this study is to estimate costs and conduct a CEA as part of an adaptive implementation trial comparing different implementation strategies. The goal of the Adaptive Implementation of Effective Programs Trial (ADEPT) is to use a sequential multiple assignment randomized trial (SMART) design to compare the effectiveness of different augmentations to REP, using EF or a combination of EF+IF, on mental health outcomes among patients diagnosed with depression or bipolar disorder in community-based practices; the ADEPT trial is described in detail elsewhere [19]. A secondary ADEPT aim was to assess the costs of different scenarios for combining REP+facilitation (see Fig. 1 and Fig. 4 in the Appendix) to identify the most cost-effective implementation strategy approach. We compare four implementation strategy combinations and evaluate their relative cost-effectiveness in achieving program goals: Strategy 0: REP only, Strategy 1: REP+EF, Strategy 2: REP+EF, add IF if needed, and Strategy 3: REP+EF/IF. Clinics that responded to their respective implementation strategy (e.g., > 10 patients receiving Life Goals) discontinued the strategy. Among clinics that did not respond during the second phase of the trial, in the final phase Strategy 1 continued with EF, Strategy 2 added IF, and Strategy 3 continued with EF/IF.

Fig. 1

Decision tree of the ADEPT trial. aSites were classified as responding to the implementation strategy after the initial 6 months of the Trial Phase if more than 10 patients received Life Goals and at least 50% of patients receiving Life Goals completed 3 or more sessions, the minimum dose for clinically significant results. Sites that responded to the implementation strategy discontinued the strategy during the second 6 months (Phase III) of the trial

Methods

This study uses a simulation modeling approach with data from a previously conducted clinical trial [19, 31]. Our results are reported using the Consolidated Health Economic Evaluation Reporting Standards (CHEERS) guidelines [32]. Implementation strategies included in the model reflect strategies that could be evaluated using data from the trial. In this study, we focus on the ADEPT community-based mental health or primary care clinics that were non-responsive after 6 months of Replicating Effective Programs (REP) and would receive additional implementation support (i.e., facilitation) to enhance uptake of Life Goals. Non-response to REP was defined as 10 or fewer patients receiving Life Goals, or fewer than 50% of patients receiving a clinically significant dose of Life Goals (at least 3 of 6 sessions), after 6 months [33,34,35]. Eligible sites had at least 100 unique patients diagnosed with depression and could designate at least 1 mental health provider to administer individual or group collaborative care sessions for patients. The study was approved by local institutional review boards (IRBs) and registered at ClinicalTrials.gov (identifier: NCT02151331).

Modeling approach

Using data from the ADEPT trial, we designed a cost-effectiveness study to evaluate strategies that could be implemented to support the uptake and clinical effectiveness of Life Goals. These strategies do not exactly match the arms in the clinical trial because our goal was to evaluate the optimal implementation strategy approach among non-responders. We developed a decision tree to assess 1-year costs and outcomes for different intervention strategies following 6 months of REP (baseline) among non-responsive sites (i.e., slow-adopter sites). Implementation strategies included in the model (see Fig. 1) were as follows: Strategy 0: REP only, Strategy 1: REP+EF, Strategy 2: REP+EF, ADD IF if needed, and Strategy 3: REP+EF/IF. The probability of response to the implementation strategies in the model was based on observed response rates in the study, which remained consistent across phases at approximately .09 (i.e., 9% of sites responded in each phase). Sites that responded to their assigned implementation strategy after 6 months of the trial (Phase II) discontinued the strategy. Sites that did not respond after 6 months proceeded to Phase III as follows: Strategy 1 continued REP+EF, Strategy 2 added IF, and Strategy 3 continued REP+EF/IF. The analysis uses a 1-year time horizon and assumes a health sector perspective. Parameter inputs were derived from primary ADEPT data.
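
To make the tree's expected-value calculation concrete, the sketch below (Python) rolls back one strategy branch: Phase II costs and QALYs accrue for everyone, then Phase III splits on the ~9% response probability. All cost and utility numbers are hypothetical placeholders, not the Table 1 inputs.

```python
# Minimal sketch of rolling back one branch of the decision tree,
# assuming the ~9% per-phase response probability observed in ADEPT.
# All cost and utility numbers are hypothetical placeholders.

P_RESPOND = 0.09  # observed probability a site responds in a phase

def rollback(phase2, phase3_if_responded, phase3_if_not):
    """Expected 1-year per-patient (cost, QALYs) for one strategy.

    Each argument is a (cost, qalys) pair for a 6-month phase.
    Responders discontinue the strategy in Phase III (no further
    strategy cost); non-responders continue or step up per the
    strategy's rule.
    """
    c2, q2 = phase2
    cr, qr = phase3_if_responded
    cn, qn = phase3_if_not
    expected_cost = c2 + P_RESPOND * cr + (1 - P_RESPOND) * cn
    expected_qalys = q2 + P_RESPOND * qr + (1 - P_RESPOND) * qn
    return expected_cost, expected_qalys

# Example: REP+EF, ADD IF -- EF in Phase II; non-responders add IF
# in Phase III (hypothetical per-patient costs and 6-month QALYs).
cost, q = rollback(
    phase2=(40.0, 0.28),
    phase3_if_responded=(0.0, 0.29),
    phase3_if_not=(90.0, 0.28),
)
print(f"expected cost ${cost:.2f}, expected QALYs {q:.3f}")
```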

Costs

Implementation strategy costs for baseline REP were the same for all participants and included costs for training program providers, training compensation (e.g., pay during non-work hours), time costs for assessing organizational needs, and pre-implementation meetings. Non-labor costs included the curriculum (manual and materials) and travel [24, 36]. Facilitation costs were based on the facilitation logs: the study EF and site IFs logged their tasks, categorizing the mode, personnel interaction, duration, and primary focus of each task. These tasks included coaching, developing an implementation plan, education, linking to outside resources, and consultation. We calculated costs as time spent multiplied by hourly wage plus fringe rates for facilitators. As there was one EF employed by the study team, we used the EF’s hourly wage plus fringe. For the IFs, training and background (and thus costs) varied; we based the IF salary and fringe rates on current rates for Licensed Master of Social Work (LMSW) professionals using Bureau of Labor Statistics data, as many of the IFs were LMSWs. Because we anticipated differences in uptake (i.e., the number of patients receiving Life Goals) by condition, we calculated the total site-level cost per strategy (the site being the level of randomization) and divided by the number of patients in that implementation strategy condition. The number of patients per condition was obtained from site-level records. Costs were collected in 2014 and adjusted to 2018 US dollars using the Consumer Price Index [37]. A summary of cost parameters is provided in Table 1. We report summary statistics for implementation costs with 95% confidence intervals. We estimated the costs of REP using the available cost data to obtain a comprehensive assessment of total implementation intervention costs, plus the costs of facilitation activities in each condition (EF and EF/IF).
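
The costing arithmetic reduces to a few steps: value logged hours at wage plus fringe, inflate 2014 costs to 2018 dollars via the CPI ratio, and divide site-level strategy costs by the number of patients reached. The sketch below illustrates this; the wage, fringe rate, hours, and patient count are hypothetical, and the CPI figures are approximate annual averages rather than the study's exact adjustment factors.

```python
# Hedged sketch of the facilitation costing approach. All inputs are
# illustrative; CPI-U annual averages are approximate.

CPI_2014 = 236.7  # approximate CPI-U annual average, 2014
CPI_2018 = 251.1  # approximate CPI-U annual average, 2018

def to_2018_dollars(cost_2014):
    """Inflate a 2014 cost to 2018 dollars using the CPI ratio."""
    return cost_2014 * CPI_2018 / CPI_2014

def facilitation_cost_per_patient(logged_hours, hourly_wage,
                                  fringe_rate, n_patients):
    """Site-level facilitation cost per patient, in 2018 dollars."""
    labor_cost_2014 = logged_hours * hourly_wage * (1 + fringe_rate)
    return to_2018_dollars(labor_cost_2014) / n_patients

# e.g., 120 logged EF hours at a hypothetical $35/h LMSW-level wage
# with 30% fringe, spread over 150 patients in the condition:
print(f"${facilitation_cost_per_patient(120, 35.0, 0.30, 150):.2f}")
```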

Table 1 Model inputs

Health outcomes

Quality-adjusted life years (QALYs)

To develop a preference-based health utility measure for the current study, we mapped the SF-12 (which was collected as part of the patient-level evaluation in the ADEPT trial) to the EQ-5D, a multi-attribute utility instrument, using an established algorithm developed by Franks and colleagues [38]. The EQ-5D yields interval-level scores ranging from 0 (dead) to 1 (perfect health). This mapping provides a health utility measure for each health state experienced by patients in the study and can be used to calculate quality-adjusted life years, the preferred measure for health benefits used in cost-effectiveness analysis.
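
The sketch below illustrates only the shape of this step. The study used the published Franks et al. regression to map SF-12 physical (PCS) and mental (MCS) component scores to EQ-5D utilities; the linear form and coefficients here are hypothetical placeholders, not the Franks algorithm.

```python
# Hypothetical sketch of the SF-12 -> utility -> QALY pipeline. The
# coefficients are placeholders, NOT the Franks et al. mapping.

def sf12_to_utility(pcs, mcs, intercept=-0.05, b_pcs=0.011, b_mcs=0.010):
    """Map SF-12 PCS/MCS scores to a single utility in [0, 1]."""
    u = intercept + b_pcs * pcs + b_mcs * mcs
    return max(0.0, min(1.0, u))  # clamp to 0 (dead) .. 1 (perfect)

def qalys(utility, years):
    """QALYs accrued at a constant utility over `years`."""
    return utility * years

u = sf12_to_utility(pcs=42.0, mcs=38.0)  # hypothetical patient scores
print(f"utility {u:.3f}; 6-month QALYs {qalys(u, 0.5):.3f}")
```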

Data analytic approach

We used a decision-tree model to compare cost-effectiveness across different scenarios for combining REP+facilitation for the Life Goals EBP (see Fig. 1 and Fig. 4 in the Appendix). The time horizon for this analysis was 12 months, the duration of the trial phase of the study. In this analysis, we adopted a health system/payer perspective. This narrower perspective stands in contrast to the full societal perspective, which incorporates all relevant costs and benefits and is recommended for most economic evaluations [39]. While the narrower perspective can omit costs or benefits that matter from the broad societal standpoint, it has the practical value of explicitly addressing payers’ budgetary concerns, and thus fits well with implementation science contexts, where financial factors are often central to whether programs and services are adopted and sustained [40].

We made assumptions about the psychometric properties of the outcome measures, the effectiveness of the Life Goals intervention, and the reliability of the facilitators’ time reporting. We tested these assumptions in sensitivity analyses by varying the costs and outcomes for each intervention condition between low and high values (95% confidence intervals). To address missing data on our utility (outcome) measures, we employed an inverse probability weighting (IPW) approach [41].
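
A minimal sketch of the IPW idea, under the assumption that observed covariates drive missingness: model the probability that each patient's utility is observed, then weight complete cases by the inverse of that probability so they represent the full sample. The data and covariates below are simulated placeholders, not the study's covariate set.

```python
# IPW sketch with simulated data; not the study's covariate set.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 500
age = rng.normal(45, 12, n)
baseline_u = rng.uniform(0.3, 0.9, n)
# Simulate follow-up utilities and an age-dependent missingness
# mechanism (purely illustrative).
follow_u = np.clip(baseline_u + rng.normal(0, 0.05, n), 0, 1)
p_true = 1 / (1 + np.exp(-(0.5 + 0.03 * (age - 45))))
observed = rng.random(n) < p_true

# Step 1: model Pr(observed | covariates) with logistic regression.
X = sm.add_constant(np.column_stack([age, baseline_u]))
p_hat = sm.Logit(observed.astype(float), X).fit(disp=0).predict(X)

# Step 2: reweight complete cases by 1 / p_hat.
w = 1.0 / p_hat[observed]
ipw_mean = np.average(follow_u[observed], weights=w)
print(f"IPW-weighted mean utility: {ipw_mean:.3f}")
```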

We estimated per-patient costs and QALYs for each implementation strategy sequence, calculating the per-patient cost by dividing the total costs per condition by the number of patients in that condition. To compare strategies, we divided incremental costs (the net increase in costs from REP+EF/IF versus REP+EF, for example) by incremental effectiveness (the net increase in QALYs for REP+EF/IF versus REP+EF, for example) to calculate the incremental cost-effectiveness ratio for patient-level outcomes across conditions. We conducted a one-way sensitivity analysis on all input parameters listed in Table 1 to create a tornado diagram using net monetary benefit (NMB). We used NMB because it facilitates multiple comparisons, as in the current study, whereas ICERs are less suitable with more than two comparators [42]. The sensitivity analysis evaluated how costs and incremental cost-effectiveness are affected by variations in key parameters [30]. When available, we based upper and lower bound estimates on the 95% confidence intervals. We also conducted a probabilistic sensitivity analysis (PSA). PSA characterizes uncertainty in all parameters simultaneously, reflecting the likelihood that each model parameter takes on a specific value, and provides information on overall decision uncertainty based on parameter uncertainty [43]. We ran 1000 model simulations to quantify the probability that each implementation strategy is cost-effective across a range of willingness-to-pay thresholds [44]. Finally, we conducted a scenario analysis to evaluate results over longer analytic time horizons, from 2 to 10 years; in this analysis, the effects of the intervention were assumed to remain constant over time, consistent with values estimated during the final phase of each condition of the trial.
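
To illustrate the PSA mechanics, the sketch below draws each strategy's per-patient cost and QALYs from a distribution, scores each draw by net monetary benefit (NMB = λ × QALYs − cost), and tallies how often each strategy wins across 1000 iterations. The distributions and values are hypothetical stand-ins for the Table 1 inputs, not the study's parameters.

```python
# PSA sketch with hypothetical inputs; the real analysis samples the
# Table 1 parameters.
import numpy as np

rng = np.random.default_rng(42)
WTP = 100_000   # willingness-to-pay threshold (lambda), $/QALY
N_SIM = 1000

# Per-patient (cost mean, cost sd, QALY mean, QALY sd) -- all
# hypothetical placeholders.
strategies = {
    "REP ONLY":       (0.0,   0.0,  0.560, 0.030),
    "REP+EF":         (100.0, 20.0, 0.565, 0.030),
    "REP+EF, ADD IF": (120.0, 25.0, 0.572, 0.030),
    "REP+EF/IF":      (200.0, 40.0, 0.558, 0.030),
}

wins = dict.fromkeys(strategies, 0)
for _ in range(N_SIM):
    # NMB = lambda * QALYs - cost for this parameter draw.
    nmb = {name: WTP * rng.normal(q, q_sd) - rng.normal(c, c_sd)
           for name, (c, c_sd, q, q_sd) in strategies.items()}
    wins[max(nmb, key=nmb.get)] += 1

for name, count in wins.items():
    print(f"{name}: optimal in {100 * count / N_SIM:.0f}% of draws")
```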

Results

Results of base case analysis

Base case results are presented in Table 2, and a plot of cost-effectiveness across implementation strategies is depicted in Fig. 2. In the base case, REP ONLY is the least expensive strategy. REP+EF, ADD IF has an ICER of $593/QALY. REP+EF yielded more QALYs than REP ONLY but fewer QALYs at a higher cost than REP+EF, ADD IF, and was therefore dominated. REP+EF/IF was also dominated, with higher costs and lower QALYs than REP ONLY.

Table 2 Base case analysis results
Fig. 2

Cost-effectiveness plane, organization/payer perspective

Sensitivity analysis

To test our model assumptions, we conducted sensitivity analyses on all parameters (Appendix Table 3). In the tornado diagram (see Fig. 3), the results were most sensitive to the following parameters (in order): the utility of individuals in the REP+EF, ADD IF arm at Phase III, the utility of individuals in the REP+EF arm at Phase II, the utility of individuals in the REP+EF arm at Phase III for responders, and the utility of individuals in the REP+EF only arm at Phase III. We then conducted threshold analyses for each of the most sensitive parameters. At utility values below .44 for REP+EF, ADD IF at Phase III, REP+EF, ADD IF is no longer cost-effective and REP+EF becomes the most cost-effective choice. Likewise, at utility values above .57 for REP+EF at Phase III, REP+EF, ADD IF is no longer the most cost-effective option and REP+EF becomes the most cost-effective choice.

Fig. 3

Tornado diagram showing one-way sensitivity analyses for the base case with the most sensitive parameters. All parameters were evaluated and data are provided in the appendix. Thick vertical black lines on the ends of the bars indicate values at which the initial preferred option is no longer cost-effective

In addition to the deterministic sensitivity analyses, we conducted a probabilistic sensitivity analysis. The results indicate that the strategy with the highest utility was preferred regardless of the willingness-to-pay threshold (except at a threshold of $0). REP+EF, ADD IF is the optimal strategy in about 30% of iterations, REP ONLY in 31% of iterations, and REP+EF/IF in 22% of iterations.

We also conducted a scenario analysis to explore an extended time horizon, extending the utilities observed during the final 6-month period of each condition from the 12-month trial horizon out to 10 years. If there are no additional benefits beyond 1 year, the ICER for REP+EF, ADD IF is $593.42/QALY. If benefits continue to 2 years, the ICER falls to $223.06/QALY; at 3 years, to $137.34/QALY; and at 10 years, patients gain 1.14 QALYs and the ratio is $33.71/QALY. Full results are provided in Appendix Table 4. REP+EF, ADD IF remained the most cost-effective option over the extended time horizon.
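
A stylized view of why the ratio falls (the model's actual accounting tracks responders and non-responders separately): if the incremental implementation cost \(\Delta C\) is incurred once, while an incremental utility gain \(\Delta q_t\) accrues in each year \(t\) of the horizon \(T\), then

\[ \mathrm{ICER}(T) \;=\; \frac{\Delta C}{\sum_{t=1}^{T} \Delta q_t}, \]

so a fixed numerator spread over accumulating QALYs drives the ratio down, consistent with the decline from $593/QALY at 1 year to $33.71/QALY at 10 years.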

Discussion

Effective implementation of EBPs for mental health treatment in communities is critical to improving the public’s health. Most individuals suffering from depression and other mental health conditions do not receive EBPs such as Life Goals in community settings, resulting in poor and costly health outcomes and millions of research dollars wasted when EBPs fail to reach those most in need [10,11,12]. Implementation strategies are key to improving uptake of EBPs in communities and achieving the public health objectives of evidence-based treatments such as Life Goals. Implementation strategies, however, vary in intensity and cost. More research is needed on applying these strategies with attention to their economic impact, given that community clinics often have limited, carefully allocated resources to promote EBP uptake [47]. This research is vital to bridging the research-to-practice gap, yet economic evaluation of implementation strategies remains understudied [47]. This study is one of the first to investigate the cost-effectiveness of implementation strategies as part of an adaptive trial. Adaptive trials are an effective way to accelerate research-to-practice translation by simultaneously evaluating multiple strategies, and combinations of strategies, based on clinics’ needs.

We found that, overall, REP+facilitation in its various permutations is a relatively low-cost implementation strategy. Identifying the costs and potential utilities of each REP+facilitation combination can help decision makers allocate implementation resources. Understanding the resources needed to achieve desired behavioral outcomes (e.g., improved mental health) is essential to implementing and sustaining EBPs [23]. We also found that REP+EF, ADD IF may be the most cost-effective implementation strategy, although this result remains uncertain given the highly variable quality-of-life assessments reported by participants. Although researchers have debated whether a step-up or step-down approach to evidence-based clinical treatment is most effective, the optimal approach for implementation strategies to enhance the uptake of these treatments is also unclear. Our results are consistent with other clinical research suggesting that a step-up strategy is the more cost-effective approach [48]. This information can support organizations in making informed decisions by providing evidence that supports (or refutes) investment in specific implementation strategies as an efficient use of resources, and thus can help prioritize implementation efforts.

We also found that immediately stepping up non-responsive sites to REP+EF/IF, the most intensive and costly strategy (at the site level), was not cost-effective. This may be for several reasons. First, EF alone may be sufficient for community clinics to effectively implement the Life Goals intervention, so in most cases IF may not be necessary [31]. Second, many sites had difficulty identifying an internal facilitator: analyses of the facilitation time data indicate that the mean time to identify an IF was 69 days, suggesting that many sites assigned to the IF condition did not have one for the first 2 months of the evaluation period. These results also indicate that community clinics may have limited capacity to identify and effectively utilize an IF. Finally, the REP+EF, ADD IF condition may have fared better because, during Phase II, the EF could immediately work with the clinic on its barriers to uptake and may have worked with several staff members rather than a single one.

Our results were highly dependent on the assessment of utility. The utilities were variable and uncertain across the different intervention arms, which strongly influenced the overall assessment of cost-effectiveness. Further research is needed that incorporates robust and relevant utilities in implementation research to identify the most cost-effective strategies. Although the trial evaluated patients for only 1 year, our conclusions did not change when we simulated a longer time horizon of benefits: extending benefits from the 12-month trial period to 10 years improved the cost-effectiveness of REP+EF, ADD IF to $33.71/QALY. As the clinical benefit from engaging in evidence-based practices for mental health treatment may extend beyond the time horizon of the trial itself, studies that observe outcomes over only a short time horizon may report artificially high cost-effectiveness ratios [49]; consistent with this, extending the time horizon reduced the ratio in our analysis.

Limitations

We adopted a health payer perspective, which may not account for costs relevant under a societal perspective, including indirect costs such as patient time, hospitalization or other treatment costs, and lost productivity. Still, this narrower perspective has the practical value of explicitly addressing payers’ budgetary concerns and fits well with implementation science contexts, where financial factors are often central to whether programs and services are adopted and sustained [40]. We did not have additional information on REP cost estimates with which to vary parameters, and these estimates relied primarily on research team recall; there may be additional costs not included in the estimates, with implications for the CEA results. Additional information with which to vary specific parameters would also help identify the parameters most influential on our estimates. In our CEA, however, all groups had REP costs incorporated into total costs, so this is unlikely to influence the results across the REP+facilitation permutations. We did not have a direct measure of QALYs, so our utility estimates may be especially susceptible to measurement error; however, a notable amount of research has been done on mapping the SF-12 components to the EQ-5D, reducing the likelihood of error from the mapping process. We also had a notable amount of missing patient-level survey data, increasing the likelihood of biased estimates, although we attempted to reduce this bias using inverse probability weighting.

The current study would benefit from a mixed methods approach, specifically a sequential design in which qualitative data are collected after the quantitative data to better understand challenges to utilizing IF/EF. Finally, our study has a limited time horizon. Sensitivity analyses using incremental QALYs gained, based on the survey results, to evaluate the potential effects of a longer time horizon showed that REP+EF, ADD IF remained highly cost-effective. However, a longer time horizon within the RCT would provide additional information on longer-term return on investment and could give more confidence about which adaptive implementation strategy is best.

Conclusions

Our study has several practice implications. First, our results support using a step-up strategy for implementation support for sites that are slow to implement as a cost-effective approach to enhancing uptake and clinical outcomes. Second, our results provide information for decision makers and community health clinic leadership on the costs and relative benefits of using various implementation strategies to improve clinical outcomes. Third, our results support the need for further cost-effectiveness research and incorporating robust utility assessments in community health clinics to provide evidence that will support or refute investments in specific strategies. Finally, our results point to the need for mid-program utility evaluation for both effectiveness and cost-effectiveness to make accurate determinations of the most efficient implementation strategy approach.

Availability of data and materials

Deidentified data are available on request.

Abbreviations

EBPs:

Evidence-based practices

REP:

Replicating Effective Programs

CDC:

Centers for Disease Control and Prevention

iPARIHS:

Integrating Promoting Action on Research Implementation in Health Services

IF:

Internal facilitation/facilitator

EF:

External facilitation/facilitator

VA:

Veterans Administration

CEA:

Cost-effectiveness analysis

ADEPT:

Adaptive Implementation of Effective Programs Trial

SMART:

Sequential multiple assignment randomized trial

LMSW:

Licensed Masters of Social Work

QALYs:

Quality-adjusted life years

IPW:

Inverse probability weighting

NMB:

Net monetary benefits

ICER:

Incremental cost-effectiveness ratio

PSA:

Probabilistic sensitivity analysis

References

  1. Onken L, Carroll K, Shoham V, Cuthbert B, Riddle M. Reenvisioning clinical science: unifying the discipline to improve the public health. Clin Psychol Sci. 2014;2(1):22–34.

  2. Bauer M, Altshuler L, Evans D, Beresford T, Williford W, Hauger R. Prevalence and distinct correlates of anxiety, substance, and combined comorbidity in a multi-site public sector sample with bipolar disorder. J Affect Disord. 2005 Apr;85(3):301–15.

  3. Kessler R, Heeringa S, Lakoma M, Petukhova M, Rupp AE, Schoenbaum M, et al. Individual and societal effects of mental disorders on earnings in the United States: results from the national comorbidity survey replication. Am J Psychiatry. 2008 Jun;165(6):703–11.

  4. Kilbourne AM, Li D, Lai Z, Waxmonsky J, Ketter T. Pilot randomized trial of a cross-diagnosis collaborative care program for patients with mood disorders. Depress Anxiety. 2013;30(2):116–22.

  5. Kilbourne A, Goodrich D, Nord K, Van Poppelen C, Kyle J, Bauer M, et al. Long-term clinical outcomes from a randomized controlled trial of two implementation strategies to promote collaborative care attendance in community practices. Adm Policy Ment Health. 2015;42(5):642–53.

  6. McBride BM, Williford W, Glick H, Kinosian B, Altshuler L, et al. Collaborative care for bipolar disorder: part II. Impact on Clinical Outcome, Function, and Costs. Psychiatr Serv. 2006;57(7):937–45.

  7. Woltmann E, Grogan-Kaylor A, Perron B, Georges H, Kilbourne A, Bauer M. Comparative effectiveness of collaborative chronic care models for mental health conditions across primary, specialty, and behavioral health care settings: systematic review and meta-analysis. Am J Psychiatry. 2012;169(8):790–804.

  8. Miller C, Grogan-Kaylor A, Perron B, Kilbourne A, Woltmann E, Bauer M. Collaborative chronic care models for mental health conditions: cumulative meta-analysis and metaregression to guide future research and implementation. Med Care. 2013;51(10):922–30.

  9. Kilbourne A, Goodrich D, Lai Z, Clogston J, Waxmonsky J, Bauer M. Life goals collaborative care for patients with bipolar disorder and cardiovascular disease risk. Psychiatr Serv. 2012;63(12):1234–8.

  10. Proctor E, Landsverk J, Aarons G, Chambers D, Glisson C, Mittman B. Implementation research in mental health services: an emerging science with conceptual, methodological, and training challenges. Adm Policy Ment Health Ment Health Serv Res. 2009;36(1):24–34.

  11. Proctor E, Silmere H, Raghavan R, Hovmand P, Aarons G, Bunger A, et al. Outcomes for implementation research: conceptual distinctions, measurement challenges, and research agenda. Adm Policy Ment Health Ment Health Serv Res. 2011;38(2):65–76.

  12. Institute of Medicine. Improving the quality of health care for mental and substance-use conditions [Internet]. Improving the quality of health care for mental and substance-use conditions: quality chasm series. Washington, DC: National Academies Press; 2006. Available from: http://0-www-ncbi-nlm-nih-gov.brum.beds.ac.uk/pubmed/20669433.

  13. Kirchner J, Waltz T, Powell B, Smith J, Proctor E. Implementation Strategies. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York, NY: Oxford University Press; 2018.

  14. Kilbourne A, Neumann M, Pincus H, Bauer M, Stall R. Implementing evidence-based interventions in health care: application of the replicating effective programs framework. Implement Sci IS. 2007;2:42.

  15. Neumann M, Sogolow E. Replicating effective programs: HIV/AIDS prevention technology transfer. AIDS Educ Prev Off Publ Int Soc AIDS Educ. 2000;12(5 Suppl):35–48.

  16. Bandura A. Social learning theory. Englewood Cliffs, N.J: Prentice Hall; 1977. viii, 247 p.

  17. Rogers E. Diffusion of innovations. New York: Free Press; 2003. xxi, 551 p.

  18. Tones K, Green J. Health promotion: planning and strategies. London ; Thousand Oaks, Calif: SAGE Publications; 2004. p. xv, 376.

  19. Kilbourne A, Almirall D, Eisenberg D, Waxmonsky J, Goodrich D, Fortney J, et al. Adaptive Implementation of Effective Programs Trial (ADEPT): Cluster randomized SMART trial comparing a standard versus enhanced implementation strategy to improve outcomes of a mood disorders program. Implement Sci. 2014;9(1):132.

  20. Harvey G, Kitson A. PARIHS revisited: from heuristic to integrated framework for the successful implementation of knowledge into practice. Implement Sci. 2016;11(1):1–13.

  21. Ritchie M, Dollar K, Miller C, Oliver K, Smith J, Lindsay J, et al. Using implementation facilitation to improve care in the Veterans Health Administration (Version 2). 2017.

  22. Ritchie M, Dollar K, Kearney L, Kirchner J. Research and services partnerships: responding to needs of clinical operations partners: transferring implementation facilitation knowledge and skills. Psychiatr Serv. 2014;65(2):141–3.

  23. Saldana L, Chamberlain P, Bradford W, Campbell M, Landsverk J. The cost of implementing new strategies (COINS): a method for mapping implementation resources using the stages of implementation completion. Child Youth Serv Rev. 2014;39:177–82.

  24. Raghavan R. The role of economic evaluation in dissemination and implementation research. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and implementation research in health: translating science to practice. Oxford; New York: Oxford University Press; 2012. p. xxiii, 536 p.

  25. Raghavan R. The role of economic evaluation in dissemination and implementation research. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and implementation research in health: translating science to practice. Oxford; New York: Oxford University Press; 2018. p. 89–106.

  26. Bauer M, Damschroder L, Hagedorn H, Smith J, Kilbourne A. An introduction to implementation science for the non-specialist. BMC Psychol. 2015;3(32):12.

  27. Vale L, Thomas R, MacLennan G, Grimshaw J. Systematic review of economic evaluations and cost analyses of guideline implementation strategies. Eur J Health Econ. 2007 Jun 1;8(2):111–21.

  28. Powell B, Fernandez M, Williams N, Aarons G, Beidas RS, Lewis C, et al. Enhancing the impact of implementation strategies in healthcare: a research agenda. Front Public Health [Internet]. 2019 [cited 2019 Feb 14];7. Available from: https://www.frontiersin.org/articles/10.3389/fpubh.2019.00003/full.

  29. Saldana L. The stages of implementation completion for evidence-based practice: protocol for a mixed methods study. Implement Sci IS. 2014;9(1):43.

  30. Drummond M, Sculpher M, Torrance G, O’Brien B, Stoddart G. Methods for the economic evaluation of health care programmes. Oxford ; New York: Oxford University Press; 2005. p. 379. (Oxford medical publications).

  31. Smith S, Almirall D, Prenovost K, Liebrecht C, Kyle J, Eisenberg D, et al. Change in patient outcomes after augmenting a low-level implementation strategy in community practices that are slow to adopt a collaborative chronic care model: a cluster randomized implementation trial. Med Care. 2019 Jul;57(7):503–11.

  32. Husereau D, Drummond M, Petrou S, Carswell C, Moher D, Greenberg D, et al. Consolidated Health Economic Evaluation Reporting Standards (CHEERS) statement. BMJ [Internet]. 2013 Mar 25 [cited 2020 Jul 2];346. Available from: https://0-www-bmj-com.brum.beds.ac.uk/content/346/bmj.f1049.

  33. Spitzer R, Kroenke K, Williams J, Löwe B. A brief measure for assessing generalized anxiety disorder: The GAD-7. Arch Intern Med. 2006;166(10):1092–7.

  34. Bauer M, Vojta C, Kinosian B, Altshuler L, Glick H. The Internal State Scale: replication of its discriminating abilities in a multisite, public sector sample. Bipolar Disord. 2000;2(4):340–6.

  35. Glick H, McBride L, Bauer M. A manic-depressive symptom self-report in optical scanable format. Bipolar Disord. 2003;5(5):366–9.

  36. Zarkin GA, Dunlap LJ, Homsi G. The substance abuse services cost analysis program (SASCAP): a new method for estimating drug treatment services costs. Eval Program Plann. 2004;27(1):35–43.

  37. U.S. Bureau of Labor Statistics. Consumer Price Index [Internet]. Consumer Price Index. 2019 [cited 2020 Jan 23]. Available from: https://www.bls.gov/cpi/.

  38. Franks P, Lubetkin E, Gold M, Tancredi D, Jia H. Mapping the SF-12 to the EuroQol EQ-5D Index in a National US Sample. Med Decis Making. 2004 Jun;24(3):247–54.

  39. Sanders G, Neumann P, Basu A, Brock D, Feeny D, Krahn M, et al. Recommendations for conduct, methodological practices, and reporting of cost-effectiveness analyses: second panel on cost-effectiveness in health and medicine. JAMA - J Am Med Assoc. 2016;316(10):1093–103.

  40. Humphreys K, Wagner T, Gage M. If substance use disorder treatment more than offsets its costs, why don’t more medical centers want to provide it? A budget impact analysis in the Veterans Health Administration. J Subst Abuse Treat. 2011 Oct;41(3):243–51.

  41. Li L, Shen C, Li X, Robins J. On weighting approaches for missing data. Stat Methods Med Res. 2013 Feb;22(1):14–30.

  42. Messori A, Trippoli S. Incremental cost-effectiveness ratio and net monetary benefit: promoting the application of value-based pricing to medical devices—a European perspective. Ther Innov Regul Sci. 2018 Nov 1;52(6):755–6.

  43. Neumann P, Sanders G, Russell L, Siegel J, Ganiats T. Cost-effectiveness in health and medicine. Second Edition, New to this Edition: Oxford. New York: Oxford University Press; 2016. p. 536.

  44. Krishnan A, Finkelstein E, Levine E, Foley P, Askew S, Steinberg D, et al. A digital behavioral weight gain prevention intervention in primary care practice: cost and cost-effectiveness analysis. J Med Internet Res. 2019;21(5):e12201.

  45. Neumann P, Cohen J, Weinstein M. Updating cost-effectiveness--the curious resilience of the $50,000-per-QALY threshold. N Engl J Med. 2014;371(9):796–7.

  46. Muennig P, Bounthavong M. Cost-effectiveness analyses in health: a practical approach. 3rd ed. San Francisco: Jossey-Bass; 2016. p. xvi, 266.

  47. Lee R, Gortmaker S. Health Dissemination and Implementation within Schools. In: Brownson R, Colditz G, Proctor E, editors. Dissemination and implementation research in health: translating science to practice. 2nd ed. New York, NY: Oxford University Press; 2018. p. 401–16.

  48. van Marrewijk C, Mujakovic S, Fransen G, Numans M, de Wit NJ, Muris JWM, et al. Effect and cost-effectiveness of step-up versus step-down treatment with antacids, H2-receptor antagonists, and proton pump inhibitors in patients with new onset dyspepsia (DIAMOND study): a primary-care-based randomised controlled trial. Lancet Lond Engl. 2009;373(9659):215–25.

  49. Cohen D, Reynolds M. Interpreting the results of cost-effectiveness studies. J Am Coll Cardiol. 2008;52(25):2119–26.

Funding

National Institute of Mental Health (R01 MH 099898) and National Institute on Drug Abuse (5K01DA044279-02).

Author information

Authors and Affiliations

Authors

Contributions

AE conceived of the study, conducted the analyses, and drafted the manuscript. DH and LP aided with the design and interpretation of the cost-effectiveness models and results and provided critical input on the full manuscript. SS oversaw data collection and cleaning, aided with the interpretation of models and results, drafted sections of the manuscript, and provided critical input on the full manuscript. AK led the data collection and project conception and provided input on study conception. All authors contributed to the interpretation of data and critically reviewed and approved the final manuscript.

Corresponding author

Correspondence to Andria B. Eisman.

Ethics declarations

Ethics approval and consent to participate

The study recruited primary care and community mental health sites across the state of Michigan. Before the first randomization, sites that agreed to participate were asked to provide the names of at least 20 patients suitable for Life Goals. These patients were then contacted by the study survey coordinator to confirm eligibility and obtain patient consent. The study was approved by local institutional review boards and registered under clinicaltrials.gov.

Trial registration: ClinicalTrials.gov Identifier: NCT02151331, registered 05/30/2014, https://clinicaltrials.gov/ct2/show/NCT02151331.

Consent for publication

Not applicable.

Competing interests

None.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Fig. 4

The original study design to evaluate effectiveness (a) and the decision tree model to evaluate cost-effectiveness (b). This cost-effectiveness analysis focuses on implementation strategies for sites not responding to REP alone (the “sites not responding to REP alone” portion of the tree in panel a). In the original study, baseline data were gathered prior to initiation of the trial phase (Phase I). In this study, we sought to determine the most cost-effective option for deploying an implementation strategy with multiple components across all its possible permutations (e.g., REP+EF/IF), compared with usual implementation (baseline REP). To accomplish this, we created the decision tree to represent all the decision options and their subsequent steps and to estimate their respective costs and consequences for comparison. This modeling approach represents the possible implementation strategy decision options for decision makers, quantifies the uncertainty, and allows for evaluation of alternatives. a In the original trial, non-responding sites were randomized following Phase I to REP+EF or REP+EF/IF. b Following Phase II, non-responding sites in the REP+EF condition were randomized again to either continue REP+EF or add IF (REP+EF/IF). Details of the trial are published elsewhere (see Kilbourne et al., 2014). c Sites were classified as responding to the implementation strategy after the initial 6 months of the Trial Phase if more than 10 patients received Life Goals and at least 50% of patients receiving Life Goals completed 3 or more sessions, the minimum dose for clinically significant results. Sites that responded to the implementation strategy discontinued the strategy during the second 6 months (Phase III) of the trial

Table 3 Tornado text report including results for all model input parameters
Table 4 One-way sensitivity analysis results for extended time horizon

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Eisman, A.B., Hutton, D.W., Prosser, L.A. et al. Cost-effectiveness of the Adaptive Implementation of Effective Programs Trial (ADEPT): approaches to adopting implementation strategies. Implementation Sci 15, 109 (2020). https://0-doi-org.brum.beds.ac.uk/10.1186/s13012-020-01069-w

