Measurement of sustainment of prevention programs and initiatives: the sustainment measurement system scale

Abstract

Background

Enhancing the sustainability of evidence-based prevention programs for mental and behavioral health requires tools for measuring both sustainability determinants and sustainment outcomes. The aim of this study was to develop the Sustainment Measurement System Scale (SMSS) and to assess its reliability and construct validity for measuring both determinants and outcomes of efforts to sustain prevention programs and initiatives.

Methods

A 42-item scale comprised of items identified from qualitative data collected from 45 representatives of 10 programs and 8 SAMHSA program officers was administered to 186 representatives of 145 programs funded by 7 SAMHSA prevention grant initiatives. Cronbach’s alphas were used to determine inter-item reliability. Convergent validity was assessed by comparisons of a global measure of sustainment with current SAMHSA-funding status and continued operation in the same form. Discriminant validity was assessed by comparisons of sustainability determinants with whether or not the program had undergone adaptations.

Results

Confirmatory factor analysis provided support for a 35-item model fit to the data. Cronbach’s alpha was .84 for the sustainment outcome construct and ranged from .70 to .93 for the sustainability determinant constructs. All of the determinant constructs were significantly associated with individual and global sustainment outcome measures for the entire sample (p < 0.01 to 0.001) and for community-based programs and programs with a substance abuse focus (p < 0.05 to 0.001). Convergent validity was supported by significant associations between the global sustainment measure and current SAMHSA funding status and continued operation in the same form (p < 0.001). Four of the sustainability determinant constructs (responsiveness to community needs; coalitions, partnerships, and networks; organizational staff capability; and evaluation, feedback, and program outcomes) were also significantly associated with current SAMHSA funding status (p < 0.05 to 0.01). With the exception of organizational staff capability, all sustainability determinants were unrelated to program adaptation, as predicted.

Conclusions

The SMSS demonstrated good reliability and convergent and discriminant validity in assessing likelihood of sustainment of SAMHSA funded prevention programs and initiatives. The measure demonstrates potential in identifying predictors of program sustainment and as a tool for enhancing the likelihood of successful sustainment through ongoing evaluation and feedback.

Background

Sustainment is considered to be the final stage of the process of implementation of evidence-based practices, policies, and programs (EBPs) [1, 2]. Defined as “the continued use of program components and activities for the continued achievement of desirable program and population outcomes,” [3] sustainment is considered to have occurred when, after a defined period of time, a program, clinical intervention, and/or implementation strategies continue to be delivered and/or individual behavior change (i.e., clinician, patient) is maintained, either as originally planned or with some degree of adaptation, while continuing to produce benefits for individuals/systems [4]. However, what is to be sustained differs from one program to the next [5, 6]. For instance, with respect to the community coalitions supporting drug and suicide prevention activities, some definitions of sustainment focus on the continued existence of the coalition itself while others focus on the activities and impacts of the coalition [7]. Moreover, sustainability, defined as the capacity to maintain EBP components [8], is increasingly being viewed as a dynamic process with shifting outcomes that represents increased likelihood that the program or infrastructure will continue, while sustainment refers to the continued operation of a program or initiative, often relating to the time after initial funding has ended [8, 9].

In recent years, there has been a proliferation of frameworks that focus specifically on sustainability [3, 4, 7,8,9]. However, despite the growing consensus as to how sustainability should be defined [3, 4, 9], the underdeveloped state of measurement of sustainment poses one of the most serious methodological challenges to understanding and facilitating sustainability of evidence-based practices and programs [9,10,11]. Some instruments, like the Stages of Implementation Completion (SIC) [12], have been developed to measure the implementation outcomes of a specific intervention [13]. The SIC is an 8-stage assessment tool developed as part of a large-scale randomized implementation trial that contrasted two methods of implementing an EBP for youth with serious behavioral problems in the juvenile justice and child welfare systems. The eight stages range from Engagement (stage 1) with the developers/purveyors in the implementation process to achievement of Competency in program delivery (stage 8) and span three phases of implementation: pre-implementation, implementation, and sustainability. The SIC was developed to measure a community or organization’s progress and milestones toward successful implementation of Treatment Foster Care Oregon (TFCO) regardless of the implementation strategy utilized. Within each of the eight stages, subactivities are operationalized and completion of activities is monitored, along with the length of time taken to complete these activities. Competency in program delivery (stage 8) is considered to be a measure of program sustainment [13], although continued use of the EBP once competency has been attained is not assessed. Data on the validity and reliability of the SIC for assessing sustainment outcomes are lacking. Other instruments, like the Program Sustainability Assessment Tool (PSAT) [14,15,16], Sustained Implementation Support Scale (SISS) [17], and Program Sustainability Index (PSI) [18], take a broader ecological approach to sustainability and are used primarily to assess capacity and to plan for sustainability by measuring determinants or factors that influence sustainability. The PSAT is a 40-item scale containing 8 sustainability domains (Environmental Support, Funding Stability, Partnerships, Organizational Capacity, Program Evaluation, Program Adaptation, Communications, and Strategic Planning), with 5 items per domain [15], designed to measure a public health program’s capacity for sustainability. The SISS is a 28-item scale composed of five subscales (Program Benefits, Program Burdens, Workplace Support, Workplace Cohesion, and Leadership Style) and designed to measure capacity to maintain implementation of components of an EBP for parent training. The PSI is a 53-item scale reflecting 7 sustainability elements (leadership competence, effective collaboration, understanding the community, demonstrating program results, strategic funding, staff involvement and integration, and program responsivity). All three instruments have demonstrated validity and reliability as measures of a program’s capacity for sustainability. However, none of these instruments assess both sustainability determinants and sustainment outcomes.

One of the reasons for developing a valid and reliable measure of sustainment is to provide a means of conducting an audit and providing feedback to organizations engaged in implementing innovative and evidence-based programs and practices. While monitoring and feedback are recognized as important for prevention [19, 20], much of the relevant science on feedback in health has involved improvement in clinical performance [21,22,23,24,25,26,27]. This includes clinical supervision and use of technology like electronic dashboards in measurement-based quality improvement (MBQI) strategies that monitor patient behavior and clinician activity [28,29,30], while prevention has a more limited history of using computational technologies for monitoring [31,32,33,34]. Such feedback offers the clinician a better understanding of whether they are on course to achieve a successful outcome or need to alter their treatment in order to improve the likelihood of a successful outcome. MBQI strategies also hold great promise for facilitating implementation of evidence-based practices [35]. Audit and feedback has demonstrated that it can be an effective strategy for implementation [36, 37], but there is little evidence to suggest that it is equally effective in achieving the sustainment stage of implementation [38].

Federal agencies responsible for wide-scale delivery of prevention programs, including the Substance Abuse and Mental Health Services Administration (SAMHSA), routinely collect information from their grantees to monitor progress toward completion of goals and objectives. SAMHSA supports a wide array of prevention grant programs targeting mental, emotional, and behavioral disorders, including illicit substance use, suicide, and antisocial behavior. Each of SAMHSA’s prevention initiatives has a specific set of goals and objectives, and each has a different prevention approach that the administration expects will be sustained once support from SAMHSA is no longer available. As part of their initial proposal for funding, all SAMHSA grantees are required to submit a plan for sustainment of the grantee’s activities once the federal funding has come to an end. SAMHSA programs currently rely on electronic data collection systems, including the Transformation Accountability (TRAC) data collection system for SAMHSA’s Center for Mental Health Services (CMHS) programs, and the Coalition Online Management and Evaluation Tool (COMET) and the Performance Management Reporting Tool (PMRT) used by SAMHSA’s Center for Substance Abuse Prevention (CSAP). This information is used to provide feedback to grantees when there is evidence of failure to achieve goals and objectives. However, there is no empirical evidence that such feedback leads to an improvement in performance or increases the likelihood of sustainment.

While SAMHSA remains deeply concerned about sustainment, its mission does not allow monitoring of its grantees after funding has ended. In building a partnership with SAMHSA, the National Institute on Drug Abuse (NIDA)-funded Center for Prevention Implementation Methodology (Ce-PIM, P30DA027828) responded to this need by undertaking the development of a sustainability measure applicable across diverse prevention programs and testing its predictive validity over time [39]. This measure was intended for use in a Sustainment Measurement System (SMS), a process designed to evaluate the likelihood of successful sustainment of SAMHSA-funded programs and initiatives and to provide feedback to grantees that would enhance that likelihood. The SMS combined the existing sources of information obtained from SAMHSA grantees described above with information gathered by an instrument that assessed determinants as well as outcomes of sustainment. The aim was to identify and support both the unique requirements for improving sustainment within individual funding initiatives and a generalizable framework of sustainment across diverse prevention approaches, thereby bringing precision to the monitoring of structures and processes for sustaining each prevention approach and improving the likelihood of achieving sustainment of any grantee’s prevention efforts, regardless of the source of funding [39].

The objective of the current study was to identify the dimensions of the Sustainment Measurement System Scale (SMSS), an instrument designed to assess the sustainment of prevention programs and initiatives. Quantitative items were generated from qualitative content domains of sustainment outcomes and sustainability determinants and subjected to confirmatory factor analysis to discern their factor structure. Our goal was to develop a scale focused on the determinants and outcomes of sustainment of prevention programs and initiatives and to examine its factor structure, reliability, and construct validity.

Methods

Background

Representatives from grantees supported by the following seven SAMHSA programs took part in the study: (1) the Sober Truth on Preventing Underage Drinking Act (STOP-Act) grants committed to prevent and reduce alcohol use among youth and young adults; (2) the Implementing Evidence-based Prevention Practices in Schools (PPS) grants designed to address the prevention of early childhood behavioral disorders; (3) the Strategic Prevention Framework State Incentive (SPF-SIG) grants to prevent the onset of substance use amongst youth; (4) the Garrett Lee Smith (GLS) State and Tribal Youth Suicide Prevention grants to address depression, suicidal attempts, and behavioral health problems linked to suicide; (5) the Substance Abuse and HIV Prevention Navigator Program for Racial/Ethnic Minorities (Prevention Navigator, PN) grants serving minority populations at-risk for substance use and HIV; (6) Minority Serving Institutions Partnerships with Community-Based Organizations (MSI-CBOs) grants serving communities at risk for substance use, HIV, and hepatitis-C infections; and (7) the Capacity Building Initiative for Substance Abuse (SA) and HIV Prevention Services for At-Risk Racial/Ethnic Minority Youth and Young Adults (HIV-CBI) grants to prevent and reduce the onset of SA and transmission of HIV/AIDS among at-risk populations ages 13–24.

Participants

Program officers representing the 7 SAMHSA funding initiatives provided names and email addresses of all project directors and key personnel for each organization currently or previously funded under these initiatives. Researchers identified other people to contact from information provided by project directors of all PPS grantees funded by RFA SM-10-017. Names and email addresses for two to three contacts (i.e., director, coordinator, local evaluator) per GLS grantee site were obtained from ICF International, the contracted evaluators of the GLS grant program, for grantees belonging to Cohorts 8 through 12, per recommendation of SAMHSA program officers. Invitations to participate in the study were sent to 528 representatives of 306 grantee sites supported by the 7 SAMHSA grants between November 2017 and March 2019. One hundred eighty-six representatives of 145 grantees agreed to participate, resulting in an individual representative response rate of 35.2% and a grantee response rate of 47.5%.

Procedure

The study was approved by the appropriate Institutional Review Boards prior to participant recruitment, and informed consent was obtained prior to administering surveys. Study data were collected and managed using REDCap electronic data capture tools hosted at Northwestern University. REDCap (Research Electronic Data Capture) is a secure, web-based application designed to support data capture for research studies, providing (1) an intuitive interface for validated data entry, (2) audit trails for tracking data manipulation and export procedures, (3) automated export procedures for seamless data downloads to common statistical packages, and (4) procedures for importing data from external sources [40].

Each participant was emailed an invitation to participate including a link to the web-based survey. Participants reviewed informed consent and after agreeing to participate were able to access the survey and proceed to the survey items. Once participants logged in to the online survey, they were able to answer questions and could pause and resume at any time. The online survey took approximately 15 to 20 min to complete.

Measures

The Sustainment Measurement System Scale (SMSS)

Item development for the SMSS is described in detail elsewhere [41]. Briefly, interviews were conducted with 45 representatives of 10 SAMHSA grantees and 9 SAMHSA government program officers. Each interview comprised three parts: (1) a series of semi-structured questions relating to experience with implementing and sustaining the grantee’s program; (2) a free-list exercise [42] asking participants to report whatever came to mind when asked what was meant by the term sustainment or sustainability, what components of their programs they most wanted to sustain, and what it would take to sustain those components; and (3) a template of Consolidated Framework for Implementation Research (CFIR) domains and components [43], in which participants were asked to rate each of the domains and elements as being unimportant (0), somewhat important (1), important (2), or very important (3) to the sustainability of their program and to explain the basis for their assessment of each component’s contribution to sustainability. The CFIR template was used to determine whether the components believed to be associated with successful implementation are also associated with successful sustainment. Templates of the PSAT and SIC sustainability measures informed the semi-structured interview questions and the analysis of the free-list exercise.

The data sets containing sustainability elements from each of the three components of the interview (semi-structured interview, free lists, and CFIR checklist) were then compared through a process of data triangulation (i.e., determining the consistency of findings obtained from different sources of data) to identify items that were elicited from more than one data set. Items were then placed into three groups: (1) those that appeared in only one of the three data sets, (2) those that appeared in two of the three data sets, and (3) those that appeared in all three data sets. Four sustainability elements were identified in all three data sets: (1) ongoing coalitions, collaborations, networks, and partnerships; (2) infrastructure and capacity to support sustainability; (3) community need for the program; and (4) ongoing evaluation of performance and outcomes. An additional 11 elements were identified in two of the three data sets: (1) availability of funding; (2) consistency with organizational culture; (3) evidence of positive outcomes; (4) development of a plan for implementation and sustainment; (5) presence of a champion; (6) institutionalization and integration of the program; (7) institutional support and commitment; (8) community buy-in and support; (9) program continuity; (10) supportive leadership; and (11) opportunities for staff training [41].

To construct this survey, the fifteen elements were condensed to create a 42-item scale divided into the following subscales:

  • Sustainment outcomes: 4 items assessing the continued operation of the program or initiative, including continuing to operate as described in the original application for funding, continuing to deliver evidence-based prevention services to the intended population, and periodically measuring service fidelity.

  • Financial stability: 6 items measuring funding from federal, state, or local governments and non-profit and non-governmental sources, combination of earmarked and discretionary funding, sustained funding, financial support from diverse community interests, and financial solvency.

  • Responsiveness to community needs and values: 7 items measuring meeting needs of intended target populations and behavioral health needs of communities/populations being served, adaptability to meet these needs, consistency with norms and values of participating organizations, fit with values of sustaining organizations and communities, shared perception of project importance by participating organizations, and unacceptability of public health problem addressed by project.

  • Coalitions, partnerships, and networks: 8 items measuring networking of the grantee organization with other organizations committed to program sustainability, community engagement in development of project goals, community access to knowledge and information about the project, project support by a coalition/partnership/network of community organizations, network expansion, commitment to continued operation of the project, level of networking, and communications within organizations responsible for sustaining the project.

  • Infrastructure and capacity to support sustainment: 9 items measuring available resources for project implementation and sustainment, integration into operations of the organization and partners, advanced development of plans for implementing and sustaining the project, execution of the project according to these plans, adequacy of staff to sustain program goals and activities, sufficiency of training available to staff and community members, staff knowledge and supportive beliefs, and staff self-efficacy to implement the project.

  • Implementation leadership: 5 items measuring active engagement of leaders in project implementation and sustainment, involvement of community leaders in the project, appointment of someone responsible for coordinating project implementation and sustainment, support from a program champion, and process in place for sustainment in the event the champion leaves.

  • Monitoring, evaluation, and program outcomes: 3 items measuring ongoing evaluation of progress made toward sustainment, sufficiency and timeliness of feedback about project delivery and quality improvement, and evidence of positive outcomes.

Study participants were asked to indicate level of agreement with a series of statements using a Likert scale ranging from 1 (not at all) to 5 (all the time) for the four items contained in one subscale measuring sustainment outcomes and seven different domains of determinants of sustainment; lower scores on all subscales indicate lower levels of agreement, while higher scores indicate higher levels of agreement with the respective statements. Each subscale score is represented as an average of the scores for each item included in the subscale. The average of the scores for each item included in the Sustainment Outcomes subscale was defined as Global Sustainment.
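
To make the scoring rule concrete, the sketch below shows how subscale scores and the Global Sustainment score could be computed as item means. This is a minimal illustration, not the authors' code; the item names and the two-subscale mapping are hypothetical stand-ins for the full 35-item structure.

```python
# Minimal sketch (hypothetical column names): SMSS subscale scores as item means,
# with Global Sustainment defined as the mean of the four Sustainment Outcomes items.
import pandas as pd

# Hypothetical item-to-subscale mapping; the real scale maps 35 items to 9 subscales.
SUBSCALES = {
    "sustainment_outcomes": ["out1", "out2", "out3", "out4"],
    "financial_stability": ["fin1", "fin2", "fin3", "fin4", "fin5"],
}

def score_smss(responses: pd.DataFrame) -> pd.DataFrame:
    """Return one row per respondent with a mean score per subscale (1-5 scale)."""
    scores = pd.DataFrame(index=responses.index)
    for subscale, items in SUBSCALES.items():
        # Missing item responses (e.g., N/A) are ignored when averaging.
        scores[subscale] = responses[items].mean(axis=1, skipna=True)
    # Global Sustainment is the mean of the Sustainment Outcomes items.
    scores["global_sustainment"] = scores["sustainment_outcomes"]
    return scores

# Example with two hypothetical respondents:
df = pd.DataFrame({
    "out1": [4, 5], "out2": [4, 4], "out3": [5, 3], "out4": [3, 4],
    "fin1": [2, 4], "fin2": [3, 4], "fin3": [2, 5], "fin4": [3, 3], "fin5": [4, 4],
})
print(score_smss(df))
```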

Program characteristics

Two sets of variables measuring characteristics of the funded programs and initiatives were included in the analysis to determine whether the association between sustainability determinants and sustainment outcome varied by one or more program characteristics and to assess the convergent and discriminant validity of the SMSS. The first set included a categorical variable describing program type (community or state) and a categorical variable describing program focus (mental health or substance use). The second set also included two variables, one describing whether the grantee is currently funded by one of the seven SAMHSA grant initiatives (yes or no), and if not, a variable describing the current status of the grantee’s program (the project no longer exists, the project continues to exist and it has been substantially adapted, and the project continues to exist in much the same form as it did when funded by the SAMHSA program).

Statistical analyses

To evaluate the psychometric properties of the SMSS, confirmatory factor analyses (CFA) with maximum likelihood estimation were conducted using EQS statistical software [44, 45]. The aim of the CFA was to determine if the data fit the hypothesized model of sustainability determinants and sustainment outcomes based on the qualitative research conducted in the earlier study described above [41, 45]. CFA is commonly used to confirm a hypothesized model based on theory or prior empirical research, as well as to evaluate the reliability and validity of measures employed in implementation research [15, 17, 18, 46,47,48]. Initially, confirmatory factor analysis was applied to all 42 items to identify poorly performing items and test our hypothesized sustainment domain structure, followed by subsequent modifications based on the resulting model modification indices and theoretical justification [49]. Poor items were those that had low reliability or poor fit (i.e., factor loadings below 0.300) with the intended latent factor (or subscale). Principal axis factoring (PAF) with Promax oblique rotation was also conducted to provide guidance on item reduction and construct reclassification. Principal axis factoring was selected for factor extraction because it allows for consideration of both systematic and random error [50], and Promax oblique rotation was utilized because we assumed that the derived factors would be correlated [51]. Three criteria were used to determine the number of factors to retain: (1) examination of the oblique rotated factor pattern matrix, (2) parallel analysis [52], and (3) interpretability of the factor structure as indicated in the rotated solution. Examination of the rotated factor structure included identification of eigenvalues above 1.0 and Scree test results, as well as absence of multicollinearity and presence of outliers [53]. We then used four measures of model fit to assess model adequacy for the initial and revised models: the chi-square statistic, the comparative fit index (CFI), the root mean square error of approximation (RMSEA), and the standardized root mean square residual (SRMR). CFI values greater than 0.95, RMSEA values less than 0.06, and SRMR values less than 0.08 indicate acceptable model fit [49].
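
The parallel-analysis criterion named above can be sketched in a few lines: eigenvalues of the observed item correlation matrix are compared against the mean eigenvalues obtained from random data of the same dimensions, and factors are retained while the observed eigenvalue exceeds the random benchmark. The code below is a generic illustration of Horn's parallel analysis under that assumption, not the authors' EQS analysis, and it assumes a complete numeric item matrix.

```python
# Minimal sketch of Horn's parallel analysis for factor retention.
import numpy as np

def parallel_analysis(X: np.ndarray, n_simulations: int = 500, seed: int = 0) -> int:
    """Count the leading factors whose observed eigenvalue exceeds the mean
    eigenvalue from correlation matrices of random normal data of the same size."""
    rng = np.random.default_rng(seed)
    n, p = X.shape
    observed = np.sort(np.linalg.eigvalsh(np.corrcoef(X, rowvar=False)))[::-1]
    random_eigs = np.empty((n_simulations, p))
    for i in range(n_simulations):
        sim = rng.standard_normal((n, p))
        random_eigs[i] = np.sort(np.linalg.eigvalsh(np.corrcoef(sim, rowvar=False)))[::-1]
    threshold = random_eigs.mean(axis=0)
    retain = 0
    for obs, thr in zip(observed, threshold):
        if obs > thr:
            retain += 1
        else:
            break
    return retain

# Example with simulated responses from 150 respondents on 35 items:
X = np.random.default_rng(1).standard_normal((150, 35))
print(parallel_analysis(X))  # near zero for pure noise; larger for structured data
```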

Reliability of the SMSS was assessed by examining Cronbach’s alpha internal consistency for each of the subscales and the total scale. Cronbach’s alpha of .70 and above is considered good internal consistency for a newly developed scale [54]. Corrected item total correlation tests were also conducted in order to check each item’s contribution to the total scale. An item to total correlation higher than .4 was considered acceptable [55]. Item analyses were also conducted, including an examination of inter-item correlations and alpha if the item is removed.
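
As a point of reference, the two reliability checks described above can be computed directly from the item responses. The sketch below is a generic illustration (not the authors' code) of Cronbach's alpha and corrected item-total correlations for a single subscale, assuming a complete respondent-by-item matrix.

```python
# Minimal sketch: Cronbach's alpha and corrected item-total correlations.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

def corrected_item_total(items: np.ndarray) -> np.ndarray:
    """Correlation of each item with the total of the remaining items
    (values above about .4 were treated as acceptable)."""
    n_items = items.shape[1]
    out = np.empty(n_items)
    for j in range(n_items):
        rest = np.delete(items, j, axis=1).sum(axis=1)
        out[j] = np.corrcoef(items[:, j], rest)[0, 1]
    return out

# Example with simulated Likert-type responses (1-5) from 100 respondents, 4 items:
rng = np.random.default_rng(0)
items = rng.integers(1, 6, size=(100, 4)).astype(float)
print(round(cronbach_alpha(items), 2), corrected_item_total(items).round(2))
```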

Construct validity of an instrument is typically assessed by comparing constructs or measurements that theoretically should be related to one another to determine if that is the case (convergent validity) and by comparing concepts or measurements that are not supposed to be related to determine if they are actually unrelated (discriminant validity) [56]. In this study, convergent and discriminant validity were assessed by computing Pearson product-moment correlations of SMSS determinant subscales with global sustainment outcome scores, and one-way analyses of variance of mean outcome and determinant subscale scores by current funding from the original SAMHSA initiative (yes/no) and by program status (no longer in operation, exists but has been extensively adapted, exists in much the same form), respectively. Only 5 programs were no longer in operation at the time the study was conducted, and they were therefore removed from the comparison, even though their mean values were lower than those for the other two types of programs. Current funding by the original SAMHSA initiative was chosen to assess the convergent validity of the SMSS because it was hypothesized that grantees currently funded would have higher mean scores on sustainment outcomes than grantees no longer funded. Program status was chosen to assess the discriminant validity of the SMSS because the determinants were hypothesized to be unrelated to whether the sustained program had been adapted or not, as long as it continued to exist.
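
A minimal sketch of these two analyses is given below, using simulated data: a Pearson correlation between a determinant subscale and the global sustainment score, and a one-way ANOVA of global sustainment by funding status. This is a generic illustration of the stated analyses, not the authors' code; all variable names and values are hypothetical.

```python
# Minimal sketch: convergent validity via Pearson correlation and a one-way ANOVA
# of global sustainment by current funding status (hypothetical simulated data).
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 150
global_sustainment = rng.normal(4.0, 0.6, n)
financial_stability = 0.4 * global_sustainment + rng.normal(0, 0.8, n)
currently_funded = rng.integers(0, 2, n)  # 1 = still funded, 0 = no longer funded

# Convergent validity: determinant subscale vs. global sustainment outcome.
r, p = stats.pearsonr(financial_stability, global_sustainment)
print(f"Pearson r = {r:.2f}, p = {p:.3f}")

# One-way ANOVA of global sustainment by funding status (with two groups this is
# equivalent to an independent-samples t-test).
f, p = stats.f_oneway(global_sustainment[currently_funded == 1],
                      global_sustainment[currently_funded == 0])
print(f"F = {f:.2f}, p = {p:.3f}")
```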

Results

There were no missing values for any of the 4 sustainment outcome variables; however, missing data for each of the determinant variables ranged from 15.8 to 17.9%. The same 29 to 31 individual grantee representatives did not provide responses to any of the determinant items and were removed from analyses of sustainability determinants by sustainment outcomes and of determinants by current funding status or program status. As a result, 113 grantees (77.9%) were represented by one individual, 26 grantees (17.9%) were represented by two individuals, and 6 grantees (4.1%) were represented by three individuals. A flow chart describing the numbers of participants from recruitment to data collection to data analysis is presented in Fig. 1. The absence of nesting effects of individuals within grantees was confirmed by the lack of any differences between the determinant-outcome associations computed at the grantee level and those computed at the level of individual participants.

Fig. 1 Flowchart of participants at each stage of the study

Details of participating representatives and programs used in the data analysis are provided in Table 1. Thirty-four of the grantees (23.4%) and 79 of the grantee representatives (42.5%) focused their efforts on mental health, while 111 grantees (76.6%) and 107 grantee representatives (57.5%) focused on substance use. Furthermore, 92 grantees (63.4%) and 107 grantee representatives (57.5%) addressed these issues at the community level, and 53 grantees (36.6%) and 79 grantee representatives (42.5%) addressed these issues at the state level. Individual representatives included 43 (23.1%) men and 143 (76.9%) women; racial/ethnic groups represented included 138 (74.2%) non-Hispanic whites, 21 (11.3%) African-Americans, 18 (9.7%) Latinx, 5 (2.7%) Asian/Pacific Islanders, and 4 (2.2%) Native Americans; and project roles represented included 15 (7.0%) principal investigators, 89 (47.9%) project directors, 54 (29.4%) coordinators, 19 (10.4%) evaluators, and 8 (4.3%) coalition members.

Table 1 Characteristics of participating programs in tests of the sustainment measurement system

Confirmatory factor analysis

Results of the CFA analyses are presented in Table 2. In the initial model, we gave no consideration to program type or program focus affecting either the measurement model (e.g., loadings and unique variances) or the structural model (e.g., means, variances, and covariances of the latent variables). Although the 6 sustainability determinant subscales and 1 sustainment outcome subscale exhibited good to excellent reliability, CFA analyses indicated poor fit of the 42-item nine-factor model (labeled Initial Model) across the four major indices (chi-square, CFI, RMSEA, SRMR). Elimination of items with factor loadings of less than .300 resulted in 36 items. Results of an exploratory factor analysis revealed 10 domains, including a segmentation of the responsiveness to community needs and values subscale into two subscales—responsiveness to community needs (2 items) and responsiveness to community values (3 items)—and a segmentation of the infrastructure and capacity subscale into two subscales—organizational capacity (4 items) and organizational staff capability (3 items). Only 1 item (community leadership) loaded onto the 10th factor, and this item was eliminated from further analysis. One of the items from the leadership subscale also exhibited a much higher loading on the coalitions, partnerships, and networks factor (.660) than on the leadership factor (.129). The resulting 35-item nine-factor model (labeled Final Model) provided better fit, with an absolute model fit of χ2(517) = 960.23, p < .001, and an RMSEA of .076 and an SRMR of .066 meeting criteria for acceptability; however, fit was not supported by a CFI of .95 or greater. The standardized factor loadings ranged from .56 to .88, and all were statistically significant (all p levels < 0.001). Separating the responsiveness to community needs and values subscale into two separate subscales also increased the Cronbach’s alpha coefficient from .60 to .70 for both subscales. The 35-item version of the SMSS is provided in the Appendix.

Table 2 Item-factor loadings for initial and final itemized subscales of the sustainment measurement system scale

Table 3 shows the correlations among the SMSS subscales. In general, the correlations between the responsiveness to community values subscale and the other eight subscales (average r = .19) were lower than the correlations among the other eight subscales (average r = .42). Of particular note, sustainment outcome was significantly associated with all 8 determinants (average r = .43), with significance levels < 0.001 except for financial stability (p < 0.01).

Table 3 Factor correlations of global sustainment outcomes and sustainment determinants

Convergent and discriminant validity

Table 4 shows the results of correlations between the global sustainment outcome score and the 8 determinant subscale scores stratified by program type and focus. Global sustainment was significantly associated with all 8 determinants in community-based programs and programs targeting substance use prevention. Global sustainment was associated with responsiveness to community needs and values and organizational capacity and staff capability in statewide prevention programs, and with six of the eight determinants (all but financial stability and responsiveness to community needs) in programs that focused on mental health.

Table 4 Correlations of global sustainment outcomes by sustainment determinants, program type, and program focus

A comparison of sustainment determinants and outcomes by current program funding status is provided in Table 5. Participants from programs no longer funded by the original SAMHSA initiative reported significantly lower global sustainment (F = 37.67, d.f. = 154, p < 0.001) as well as lower scores on the four individual forms of sustainment. They also reported significantly less responsiveness to community needs (F = 5.14, d.f. = 127, p = 0.025); coalitions, partnerships, and networks (F = 6.99, d.f. = 125, p = 0.009); organizational staff capability (F = 7.34, d.f. = 125, p = 0.008); and evaluation, feedback, and outcomes (F = 9.37, d.f. = 117, p = 0.003).

Table 5 Mean individual and global sustainment outcomes and sustainment determinants by current program funding

The discriminant validity of the SMSS was assessed in comparisons of total sustainment outcomes and determinants by program status. The results are presented in Table 6 below. The global sustainment outcome score and the individual measures of sustainment were significantly greater in programs that continued to exist in the same form. However, only one determinant was found to be significantly greater in such programs (organizational staff capability; F = 4.52, d.f. = 146, p = 0.035), suggesting strong support for the discriminant validity of the SMSS.

Table 6 Mean individual and global sustainment outcomes and sustainment determinants by current program status

Discussion

Although other measures exist that are designed to evaluate the sustainability of evidence-based programs and interventions from the point of view of either the determinants of sustainability or sustainment as an outcome, the Sustainment Measurement System Scale (SMSS) is the first instrument designed to assess both sustainability determinants and sustainment outcomes. Further, the SMSS is designed to assess prevention programs of different types and foci. As such, it contains elements that are specific to particular programs but enable comparisons across different types of programs. It also demonstrates potential as a tool for providing feedback to organizations, enabling them to monitor their trajectory toward achieving the final stage of implementation and to increase the likelihood of successfully doing so.

The SMSS draws from and shares several features of existing measures of sustainability. For instance, like the Stages of Implementation Completion Scale [12], the SMSS measures sustainment as an outcome and can be adapted for use as a tool for monitoring progress toward sustainment [13]. Unlike the SIC, it measures characteristics of programs and their inner and outer settings that predict these outcomes, and it excludes earlier phases of implementation. Like the Program Sustainability Assessment Tool [15], Sustained Implementation Support Scale [17], and Program Sustainability Index [18], the SMSS measures determinants of sustainability and exhibits similar internal consistency, model fit, and construct validity. With the possible exception of responsiveness to community values, the domains are highly correlated with one another. The SMSS has chi-square significance, RMSEA, and SRMR values similar to those of the PSAT. All of these instruments have CFI values below the desirable threshold of .95, suggesting room for improvement.

The SMSS is the product of a standardized process of eliciting determinants and outcomes of sustainability that are both specific to the program being sustained and generalizable to other types of programs and initiatives [41]. This approach is also more consistent with the Dynamic Sustainability Framework [5] and the growing consensus of sustainability as a dynamic process with changes in both determinants and outcomes over time. In this study, correlations between sustainability determinants and sustainment outcomes were stronger with community-based programs and programs targeting substance use than with state-based programs and programs targeting mental health. However, there was consistency across programs by type and focus with respect to certain determinants such as responsiveness to community values (which included adaptability), organizational capacity, and organizational staff capability.

The design and evaluation of the SMSS also provide some important insights into sustainment as the final stage of the implementation of evidence-based prevention programs and initiatives. The process of achieving sustainment is the product of eight domains of sustainability determinants: financial stability; responsiveness to community needs; responsiveness to community values; coalitions, partnerships, and networks; organizational capacity; organizational staff capability; implementation leadership; and evaluation, feedback, and positive program outcomes. Luke and colleagues [15] found similar associations between program manager and staff perceptions of the sustainability of their programs and the determinants of funding stability, partnerships, organizational capacity, and program evaluation. Hodge and colleagues [17] found similar associations between sustained implementation of the Triple P parent training program and the determinants of program benefits, workplace support, and leadership style. Mancini and Marek [18] found similar associations between meeting at-risk needs and the determinants of leadership competence, effective collaboration, demonstrating program results, strategic funding, staff involvement and integration, and program responsivity. In our study, these associations were consistent across community-level programs and programs focused on substance use prevention, and less so for state-level programs and programs focused on mental health. Many of these determinants decline once the funding that supported their implementation comes to an end, which is to be expected since many of them, such as coalitions, partnerships, and networks, staff capability (e.g., training), and the capacity for conducting evaluations and providing feedback, are resource-dependent. As predicted, however, it makes no difference whether the program continues to exist in the same form as originally proposed or has been adapted to improve fit, a finding also consistent with the Dynamic Sustainability Framework. This may be because many programs are not necessarily implementing a specific practice, and some are simply trying to sustain coalitions.

Nevertheless, a number of factors limit the findings of this study. The findings reflect the experience of specific prevention programs and initiatives that have been or are currently funded by SAMHSA; thus, their generalizability to prevention programs funded by other sources (e.g., state or local funds, research grants), to programs targeting treatment, or even to other SAMHSA-funded programs, is limited. The confirmatory factor analysis was constrained by the small number of programs and program representatives. Only two items loaded onto one of the identified constructs (responsiveness to community needs), while only three items loaded onto the construct of responsiveness to community values. Further research is required to determine whether or not they represent two distinct constructs.

With these limitations in mind, future steps include the following: (1) evaluation of the utility of the SMSS with other types of programs in other settings, (2) use of larger samples to confirm fit of the data to the model, (3) development of guidelines for providing feedback to organizations seeking to sustain programs and initiatives based on ongoing monitoring efforts using the SMSS, and (4) consistent with the SMS process, ongoing revision of the tool itself.

Conclusion

The SMSS is innovative in three specific respects. First, it draws upon the experience of evaluating sustainability in different types of prevention programs and initiatives with different aims and areas of emphasis. This enabled us to identify a set of common elements of sustainment that can be used to generate a model and set of testable hypotheses that apply to a broad array of substance use, mental disorder, and suicide prevention programs, practices, and initiatives, regardless of their objectives, outcomes, and infrastructure to achieve those outcomes. Second, although we did not assess the effectiveness of the various programs, practices, and initiatives participating in this study, a measure like the SMSS offers the potential to determine whether the extent to which a program or practice is evidence-based or evidence-informed affects whether it can be sustained. Such information would be invaluable in determining whether the program or practice merits initial or continued funding. Third, this measure can be used to monitor progress toward sustainment and to provide feedback to stakeholders on how to increase the likelihood of sustainment. The SMSS in particular and the SMS in general can be used as tools for program management as well as for research purposes. Although the SMSS was based on programs funded by SAMHSA, the instrument should have general applicability across diverse federal, statewide, and local prevention implementation initiatives.

Availability of data and materials

The datasets collected and analyzed during the current study will be available from the corresponding author on reasonable request.

References

  1. Aarons GA, Hurlburt M, Horwitz SM. Advancing a conceptual model of evidence-based practice implementation in public service sectors. Adm Policy Ment Health. 2011;38:4–23.

  2. Greenhalgh T, Robert G, Macfarlane F, Bate P, Kyriakidou O. Diffusion of innovations in service organizations: systematic review and recommendations. Milbank Q. 2004;82(4):581–629.

  3. Scheirer MA, Dearing JW. An agenda for research on the sustainability of public health programs. Am J Public Health. 2011;101:2059–67.

  4. Moore JE, Mascarenhas A, Bain J, Straus SE. Developing a comprehensive definition of sustainability. Implement Sci. 2017;12:110.

  5. Scheirer MA. Linking sustainability research to intervention types. Am J Public Health. 2013;103:e73–80.

  6. Mancini JA, Marek LI. Sustaining community-based programs for families: conceptualization and measurement. Fam Relat. 2004;53:339–47.

  7. Cooper BR, Bumbarger BK, Moore JE. Sustaining evidence-based prevention programs: correlates in a large-scale dissemination initiative. Prev Sci. 2015;16:145–57.

  8. Chambers DA, Glasgow RE, Stange KC. The dynamic sustainability framework: addressing the paradox of sustainability amid ongoing change. Implement Sci. 2013;8:117.

  9. Shelton RC, Cooper BR, Stirman SW. The sustainability of evidence-based interventions in public health and health care. Ann Rev Public Health. 2018;39:55–76.

  10. Proctor EK, Luke D, Calhoun A, McMillan C, Brownson R, McCrary S, et al. Sustainability of evidence-based healthcare: research agenda, methodological advances, and infrastructure support. Implement Sci. 2015;10:88.

  11. Walugembe DR, Sibbald S, Le Ber MJ, Kothari A. Sustainability of public health interventions: where are the gaps? Health Res Pol Syst. 2019;17:8.

  12. Chamberlain P, Brown CH, Saldana L. Observational measure of implementation progress in community based settings: the Stages of Implementation Completion (SIC). Implement Sci. 2011;6:116.

  13. Saldana L, Bennett I, Powers D, Vredevoogd M, Grover T, Schaper H, Campbell M. Scaling implementation of collaborative care for depression: adaptation of the stages of Implementation Completion (SIC). Adm Policy Ment Health. 2020;47(2):188–96.

  14. Schell SF, Luke DA, Schooley MW, Elliott MB, Herbers SH, Mueller NB, et al. Public health program capacity for sustainability: a new framework. Implement Sci. 2013;8:15.

  15. Luke D, Calhoun A, Robichaux CB, Elliott MB, Moreland-Russell S. The Program Sustainability Assessment Tool: a new instrument for public health programs. Prev Chronic Dis. 2014;11:130184.

  16. Calhoun A, Mainor A, Moreland-Russell S, Maier RC, Brossart L, Luke DA. Using the Program Sustainability Assessment Tool to assess and plan for sustainability. Prev Chronic Dis. 2014;11:130185.

  17. Hodge LM, Turner KM, Sanders MR, Filus A. Sustained implementation support scale: validation of a measure of program characteristics and workplace functioning for sustained program implementation. J Behav Health Serv Res. 2017;44:442–64.

  18. Mancini JA, Marek LI. Sustaining community-based programs for families: conceptualization and measurement. Fam Relat. 2004;53(4):339–47.

  19. Bonner C, Fajardo MA, Doust J, McCaffery K, Trevena L. Implementing cardiovascular disease prevention guidelines to translate evidence-based medicine and shared decision making into general practice: theory-based intervention development, qualitative piloting and quantitative feasibility. Implement Sci. 2019;14(1):86.

  20. Smiddy MP, Murphy OM, Savage E, Fitzgerald AP. O' Sullivan B, Murphy C, et al. Efficacy of observational hand hygiene audit with targeted feedback on doctors' hand hygiene compliance: a retrospective time series analysis. J Infect Prev. 2019;20(4):164–70.

  21. Sutherland R, Campbell E, Nathan N, Wolfenden L, Lubans DR, Morgan PJ, et al. A cluster randomised trial of an intervention to increase the implementation of physical activity practices in secondary schools: study protocol for scaling up the Physical Activity 4 Everyone (PA4E1) program. BMC Public Health. 2019;19(1):883.

  22. Asch SM, McGlynn EA, Hogan MM, Hayward RA, Shekelle P, Rubenstein L, et al. Comparison of quality of care for patients in the Veterans Health Administration and patients in a national sample. Ann Intern Med. 2004;141(12):938–45.

  23. Hysong SJ. Audit and feedback features impact effectiveness on care quality. Med Care. 2009;47:356–63. https://0-doi-org.brum.beds.ac.uk/10.1097/MLR.0b013e3181893f6b.

  24. Hysong SJ, Khan MM, Petersen LA. Passive monitoring versus active assessment of clinical performance: impact on measured quality of care. Med Care. 2011;49(10):883–90.

  25. Institute of Medicine. Committee on Quality of Health Care in America: Crossing the quality chasm: a new health care system for the 21st century. Washington: National Academies Press; 2001.

  26. Institute of Medicine. Committee on Quality of Health Care in America: Performance measurement: accelerating improvement. Washington DC: National Academies Press; 2006.

  27. Ivers N, Jamtvedt G, Flottorp S, Young JM, Odgaard-Jensen J, French SD, et al. Audit and feedback: effects on professional practice and healthcare outcomes. Cochrane Database Syst Rev. 2012;13(6):CD000259.

  28. Bearman SK, Weisz JR, Chorpita BF, Hoagwood K, Ward A, Ugueto AM, et al. More practice, less preach? The role of supervision processes and therapist characteristics in EBP implementation. Adm Policy Ment Health. 2013;40(6):518–29. https://0-doi-org.brum.beds.ac.uk/10.1007/s10488-013-0485-5.

  29. Persell SD, Kaiser D, Dolan NC, Andrews B, Levi S, Khandekar J, et al. Changes in performance after implementation of a multifaceted electronic-health-record-based quality improvement system. Med Care. 2011;49(2):117–25.

  30. Seidman E, Chorpita BF, Reay WE, Stelk W, Garland AF, Kutash K, et al. A framework for measurement feedback to improve decision-making in mental health. Adm Policy Ment Health. 2010;37(1-2):128–31.

  31. Brown CH, Mohr DC, Gallo CG, Mader C, Palinkas LA, Wingood G, et al. A computational future for preventing HIV in minority communities: how advanced technology can improve implementation of effective programs. JAIDS. 2013;63(Suppl 1):S72–84.

  32. Gallo C, Pantin H, Villamar J, Prado G, Tapia M, Ogihara M, et al. Blending qualitative and computational linguistics methods for fidelity assessment: experience with the Familias Unidas preventive intervention. Adm Policy Ment Health. 2015;42(5):574–85.

  33. Wang D, Ogihara M, Gallo C, Villamar JA, Smith JD, Vermeer W, et al. Automatic classification of communication logs into implementation stages via text analysis. Implement Sci. 2016;11(1):119.

  34. Berkel C, Gallo CG, Sandler IN, Mauricio AM, Smith JD, Brown CH. Redesigning implementation measurement for monitoring and quality improvement in community delivery settings. J Prim Prev. 2019;40(1):111–27.

  35. Hermann RC, Chan JA, Zazzali JL, Lerner D. Aligning measurement-based quality improvement with implementation of evidence-based practices. Adm Policy Ment Health. 2006;33(6):636–45.

  36. Foy R, Eccles M. Audit and feedback interventions. In: Straus S, Tetroe SJ, Graham ID, editors. Knowledge translation in health care: moving from evidence to practice. Oxford, England: Wiley-Blackwell; 2009. p. 126–31.

  37. Powell BJ, McMillen JC, Proctor EK, Carpenter CR, Griffey RT, Bunger AC, et al. A compilation of strategies for implementing clinical innovations in health and mental health. Med Care Res Rev. 2012;69(2):123–57.

  38. Ivers NM, Sales A, Colquhoun H, Michie S, Foy R, Francis JJ, et al. No more ‘business as usual’ with audit and feedback interventions: towards an agenda for a reinvigorated intervention. Implement Sci. 2014;9:14. https://0-doi-org.brum.beds.ac.uk/10.1186/1748-5908-9-14.

  39. Palinkas LA, Spear SE, Mendon SJ, Villamar J, Valente T, Chou CP, et al. Measuring sustainment of prevention programs and initiatives: a study protocol. Implement Sci. 2016;11:95.

  40. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde J. Research electronic data capture (REDCap) - a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377–81.

  41. Palinkas LA, Spear SW, Mendon SJ, Villamar J, Reynolds C, Green CD, et al. Conceptualizing and measuring sustainability of prevention programs, policies and practices. Translat Behav Med. 2020;10(1):136–45. https://0-doi-org.brum.beds.ac.uk/10.1093/tbm/ibz170.

  42. Weller SC, Romney AK. Systematic data collection. Newbury Park, CA: Sage; 1988.

  43. Damschroder LJ, Aron DC, Keith RE, Kirsh SR, Alexander JA, Lowery JC. Fostering implementation of health services research findings into practice: a consolidated framework for advancing implementation science. Implement Sci. 2009;4:50.

  44. Bentler PM. EQS 6 Structural Equations Program Manual. Encino, CA: Multivariate Software, Inc; 2006.

  45. Brown TA. Confirmatory factor analysis for applied research. New York: Guilford; 2006.

  46. Cook CR, Davis C, Brown EC, Locke J, Ehrhart MG, Aarons GA, et al. Confirmatory factor analysis of the Evidence-Based Practice Attitudes Scale with school-based behavioral health consultants. Implement Sci. 2018;13:116.

  47. Ehrhart MG, Torres EM, Hwang J, Sklar M, Aarons GA. Validation of the implementation Climate Scale (ICS) in substance use disorder treatment organizations. Subst Abuse Treat Prev Policy. 2019;14:35.

  48. Huijg JM, Gebhardt WA, Dusseldorp E, Verheijden MW, van der Zouwe N, Middelkoop BJC, Crone MR. Measuring determinants of implementation behavior: psychometric properties of a questionnaire based on the theoretical domains framework. Implement Sci. 2014;9:33.

  49. Hu LT, Bentler PM. Cutoff criteria for fit indices in covariance structure analysis: conventional criteria versus new alternatives. Struct Equ Model. 1999;6(1):1–55.

  50. Fabrigar LR, MacCallum RC, Wegener DT, Strahan EJ. Evaluating the use of exploratory factor analysis in psychological research. Psychol Methods. 1999;4:272–99.

  51. Russell DW. In search of underlying dimensions: the use (and abuse) of factor analysis in personality and social psychology bulletin. Pers Soc Psychol Bull. 2002;28:1629–46.

  52. Patil VH, Singh SN, Mishra S, Donovan T. Efficient theory development and factor retention criteria: a case for abandoning the ‘Eigenvalue Greater Than One’ criteria. J Bus Res. 2008;61:162–70.

  53. DeVellis R. Scale development: theory and applications: theory and application. Thousand Oaks, CA: Sage; 2003.

  54. Nunnally JC. Psychometric theory. 2nd ed. New York: McGraw Hill; 1978.

  55. Nunnally JC, Bernstein IH. Psychometric theory. 3rd ed. New York: McGraw Hill; 1994.

  56. Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychological Bull. 1959;56:81–105.

Acknowledgements

We thank our colleagues in the Substance Abuse and Mental Health Services Administration (SAMHSA) for their help in connecting with their grantees and for the grantees themselves for the time and information they provided for this project. The content of this article is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health or SAMHSA.

Funding

This work is supported by the National Institute on Drug Abuse (1 R34 DA037516-01A1 and P30 DA02878). REDCap is supported at the Feinberg School of Medicine by the Northwestern University Clinical and Translational Science (NUCATS) Institute. Research reported in this publication was supported, in part, by the National Institutes of Health's National Center for Advancing Translational Sciences, Grant Number UL1TR001422.

Author information

Contributions

Conceptualization and design: LAP and CHB. Data collection: JV. Data management and analyses: LAP and CPC. Manuscript development and revisions: all authors. The authors read and approved the final manuscript.

Corresponding author

Correspondence to Lawrence A. Palinkas.

Ethics declarations

Ethics approval and consent to participate

All procedures performed in studies involving human participants were in accordance with the ethical standards of the 1964 Helsinki Declaration and its later amendments or comparable ethical standards. The study was approved by the Institutional Review Boards at the University of Southern California and Northwestern University prior to participant recruitment.

Consent for publication

Not applicable

Competing interests

All authors declare that they have no conflicts of interest.

Additional information

Publisher’s Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Appendix

Sustainment Measurement System Scale

INSTRUCTIONS: Please respond to the following statements relating to your [ ]-funded project using a scale ranging from 1 = little or no extent to 5 = a great extent. If this project no longer exists or no longer provides services, answer based on your experience during the period when [ ] funded the project. If you feel the statement is not relevant to your program, answer N/A.

All items are rated on the response scale described above (1 2 3 4 5 N/A).

Sustainment indicators

1. The project continues to operate as described in the original application for funding.
2. The project continues to deliver prevention services to its intended population.
3. The project continues to deliver prevention services that are evidence-based.
4. This project periodically measures the fidelity of the prevention services that are delivered.

Funding and financial support

5. The project is funded through non-profit, private, and/or non-governmental sources.
6. The project has a combination of stable (i.e., earmarked) and flexible (i.e., discretionary) funding.
7. The project has sustained funding.
8. Diverse community organizations are financially invested in the success of the project.
9. The project is financially solvent.

Responsiveness to community needs

10. The project delivered meets the needs of the intended target populations.
11. The project addresses the behavioral health needs of the communities/populations being served.

Responsiveness to community values

12. The project can be adapted to meet the needs of the communities or populations being served.
13. The project is consistent with the norms, values and guiding principles of participating organizations.
14. The project fits well with the values of the organization(s) responsible for sustaining it and the communities where it is being sustained.

Coalitions, partnerships, and networks

15. The community members are passionately committed to sustaining the project.
16. The community is actively engaged in the development of project goals.
17. The community has access to knowledge and information about the project.
18. The project is supported by a coalition/partnership/network of community organizations.
19. Coalition/partnership/network members actively seek to expand the network of community organizations, leaders, and sources of support for this project.
20. The coalition/partnership/network is committed to the continued operation of this project.
21. There is a high level of networking and communication within the organizations responsible for sustaining the project.
22. Community leaders are actively involved in the project.

Infrastructure and capacity to support sustainment

23. The project exhibits sound fiscal management.
24. The project is well integrated into the operations of the organization and its partners.
25. Plans for implementing and sustaining the project are developed in advance.
26. The project is carried out or accomplished according to those plans.
27. The project offers sufficient training to agency staff and community members.
28. Staff possesses adequate knowledge and supportive beliefs about the project.
29. Staff feel themselves to be capable of implementing the project.

Implementation leadership

30. The project has a formally appointed person responsible for coordinating the process of implementing and sustaining the project.
31. The project is also supported by a champion who is actively engaged in the process of implementing and sustaining the project.
32. We have a process in place to sustain the project in the event our champion leaves.

Evaluation, feedback, and evidence of positive outcomes

33. There is ongoing evaluation of progress made towards sustainment.
34. There is sufficient and timely feedback about the project delivery to maintain or improve quality.
35. The project provides strong evidence of positive outcomes.

Rights and permissions

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/. The Creative Commons Public Domain Dedication waiver (http://creativecommons.org/publicdomain/zero/1.0/) applies to the data made available in this article, unless otherwise stated in a credit line to the data.

About this article

Cite this article

Palinkas, L.A., Chou, CP., Spear, S.E. et al. Measurement of sustainment of prevention programs and initiatives: the sustainment measurement system scale. Implementation Sci 15, 71 (2020). https://0-doi-org.brum.beds.ac.uk/10.1186/s13012-020-01030-x

  • DOI: https://0-doi-org.brum.beds.ac.uk/10.1186/s13012-020-01030-x

Keywords