There are three key components to the NCAEP review process: Rigor, Results, and Reach. These guide our work in identifying and disseminating evidence-based practices.
The first step is a systematic review of the literature. Our review includes peer-reviewed studies from 2012-2017 that:
What is a focused intervention practice?
A focused intervention practice is a procedure or set of procedures that service and care providers use to produce specific behavioral and developmental outcomes for infants, children, and youth with ASD (Odom, Collet-Klingenberg, Rogers, & Hatton, 2010). They are designed for use for a relatively brief period (e.g., several months) with the clear objective of changing targeted behaviors or skills (e.g., verbal language, attention to task). Focused interventions are explicit in that they specify practitioner and/or care provider behavior that can be described and measured (e.g., prompting, reinforcement, discrete trial teaching, or use of visual schedules).
A Comprehensive Treatment Model (CTM) is a set of practices that follow a conceptual framework and are implemented over a relatively lengthy period of time (e.g., 1-2 years or more). CTMs are designed to have a broad impact on the core features of ASD and/or associated learning needs by using multiple practices to target skills across multiple domains (e.g., social communication, repetitive behavior). They are also intense in their application (e.g., 25 hours per week). CTMs have been in existence for over 30 years, and new models continue to be created. Examples of historic CTMs are the UCLA Young Autism Project (now the Lovaas Institute), Treatment and Education of Autistic and Communication Handicapped Children (TEACCH), the Denver Model (now Early Start Denver Model), and the Princeton Child Development Institute (PCDI).
Currently, the Clearinghouse is conducting a review solely on focused interventions, with the intention of conducting a second review of CTMs in the next phase of this project. More information about the definition of focused intervention practices and comprehensive treatment models may be found in Hume and Odom (2011) and Odom, Boyd, Hall, & Hume (2014), respectively.
Hume, K., & Odom, S. (2011). Best practice, policy, and future directions: Behavioral and psychosocial interventions. In D. Amaral, G. Dawson, & D. Geschwind (Eds.), Autism spectrum disorders. New York, NY: Oxford University Press.
Odom, S. L., Boyd, B., Hall, L., & Hume, K. (2014). Comprehensive treatment models for children and youth with autism spectrum disorders. In F. Volkmar, S. Rogers, K. Pelphrey, & R. Paul (Eds.), Handbook of autism and pervasive developmental disorders, Vol. 2 (pp. 770-778). Hoboken, NJ: John Wiley & Sons.
Why Include Single Case Design Studies in the Review?
Single case research design is an experimental methodology (Shadish, Hedges, Horner, & Odom, 2015). It tests the causal relationship between an independent variable (e.g., focused intervention practices) and a dependent variable (e.g., outcomes for children and youth with autism). Like experimental group design, the methodology is arranged to rule out threats to internal validity (Campbell & Stanley, 1963). High quality single case design methodology includes detailed descriptions of participants, repeated and reliable measurement of the dependent variables, precise specification of the independent variable, and at least three demonstrations of the functional relationship between the independent and dependent variables (i.e., when an intervention is implemented, a change occurs in the child outcome) (Horner et al., 2005). Historically, data analysis has been conducted through visual inspection of graphed data by trained professionals (Kazdin, 2011). Statistical analyses have now been developed to aid in the analysis (Kratochwill & Levin, 2014). With single case design, external validity is built primarily through direct and systematic replications of intervention effects across multiple studies by different researchers and/or research groups. A detailed description of the logic and types of single case designs may be found in Horner and Odom (2014).
For NCAEP, we have made the decision to include single case design research as evidence for the efficacy of focused intervention practices. This decision is based on the rationale that the methodology is experimental in nature and, with sufficient replication by different research groups, single case design studies can provide evidence that is as substantial as experimental group designs. The NCAEP evidence-based criteria for a focused intervention practice having only single case design evidence are that there must be:
- At least five single case design studies verified as high quality through the review process;
- Studies conducted by at least three different research groups; and
- A cumulative total of at least 20 participants with ASD across studies.
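As a purely illustrative sketch (not part of the NCAEP materials), the three-part threshold above can be expressed as a simple check. The study records and field names here are hypothetical:

```python
# Illustrative check of the single case design (SCD) evidence threshold:
# at least 5 high-quality studies, from at least 3 research groups,
# with at least 20 total participants with ASD. Field names are hypothetical.

def meets_scd_evidence_criteria(studies):
    """Return True if the set of studies meets the SCD evidence threshold."""
    high_quality = [s for s in studies if s["high_quality"]]
    groups = {s["research_group"] for s in high_quality}
    participants = sum(s["n_participants"] for s in high_quality)
    return (len(high_quality) >= 5
            and len(groups) >= 3
            and participants >= 20)

# Invented example data: five high-quality studies, three groups, 20 participants
studies = [
    {"high_quality": True, "research_group": "A", "n_participants": 4},
    {"high_quality": True, "research_group": "B", "n_participants": 5},
    {"high_quality": True, "research_group": "B", "n_participants": 3},
    {"high_quality": True, "research_group": "C", "n_participants": 6},
    {"high_quality": True, "research_group": "C", "n_participants": 2},
]
print(meets_scd_evidence_criteria(studies))  # True
```

Note that all three conditions must hold simultaneously; dropping any one study in this example would fall below either the five-study or the 20-participant threshold.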
We realize that other systematic review projects have not included single case design as evidence for efficacy. As stated in our previous review of the focused intervention literature (Wong et al., 2014), “By excluding SCD studies, such reviews a) omit a vital experimental research methodology now being recognized as a valid scientific approach (Kratochwill et al., 2013) and b) eliminate the major body of research literature on interventions for children and youth with ASD.”
Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research. Chicago, IL: Rand McNally.
Chambless, D. L., Sanderson, W. C., Shoham, V., Bennett Johnson, S., Pope, K. S., Crits-Christoph, P., Baker, M., Johnson, B., Woody, S. R., Sue, S., Beutler, L., Williams, D. A., & McCurry, S. (1996). An update on empirically validated therapies. Clinical Psychologist, 49, 5-18.
Council for Exceptional Children. (2014). Council for Exceptional Children standards for evidence-based practices in special education. Arlington, VA: Author.
Horner, R., Carr, E., Halle, J., McGee, G., Odom, S., & Wolery, M. (2005). The use of single subject research to identify evidence-based practice in special education. Exceptional Children, 71, 165-180.
Horner, R. H., & Odom, S. L. (2014). Constructing single-case research designs: Logic and options. In T. R. Kratochwill (Ed.), Single-case intervention research: Methodological and data-analysis advances (pp. 27-52). Washington, DC: American Psychological Association.
Kazdin, A. E. (2011). Single-case research designs: Methods for clinical and applied settings (2nd ed.). New York, NY: Oxford University Press.
Kratochwill, T. R., Hitchcock, J., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2010). Single-case design technical documentation, Version 1.0. Washington, DC: Institute of Education Sciences. Retrieved from https://ies.ed.gov/ncee/wwc/Docs/ReferenceResources/wwc_scd.pdf
Kratochwill, T. R., Hitchcock, J. H., Horner, R. H., Levin, J. R., Odom, S. L., Rindskopf, D. M., & Shadish, W. R. (2013). Single-case intervention research design standards. Remedial and Special Education, 34, 26–38.
Kratochwill, T., & Levin, J. (Eds.). (2014). Single-case intervention research: Methodological and data-analysis advances. Washington, DC: American Psychological Association.
Kratochwill, T. R., & Stoiber, K.C. (2002). Evidence-based interventions in school psychology: Conceptual foundations of the Procedural and Coding Manual of Division 16 and the Society for the Study of School Psychology Task Force. School Psychology Quarterly, 17, 341–389.
Shadish, W. R., Hedges, L. V., Horner, R. H., & Odom, S. L. (2015). The role of between-case effect size in conducting, interpreting, and summarizing single-case research. Paper commissioned by the Institute of Education Sciences. Retrieved from https://ies.ed.gov/pubsearch/pubsinfo.asp?pubid=NCSER2015002
Wong, C., Odom, S. L., Hume, K., Cox, A. W., Fettig, A., Kucharczyk, S., … Schultz, T. R. (2014). Evidence-based practices for children, youth, and young adults with Autism Spectrum Disorder. Chapel Hill: The University of North Carolina, Frank Porter Graham Child Development Institute, Autism Evidence-Based Practice Review Group. Retrieved from https://autismpdc.fpg.unc.edu/sites/autismpdc.fpg.unc.edu/files/imce/doc...
Criteria for Evidence
The contemporary origins of the identification of evidence-based practices can be traced to Cochrane's work in the 1960s, which aimed to produce evaluative summaries of medical research that could inform practice. These summaries included only randomized group designs as acceptable evidence.
In the 1990s, the American Psychological Association was charged with identifying effective psychosocial intervention practices. They broadened the type of evidence to be included and specified the amount of evidence needed for an intervention to be determined efficacious. Specifically, they determined that the level of evidence for efficacious interventions could include two randomized controlled trials (RCTs), quasi-experimental designs, or a series of single case designs that meet well-established treatment criteria.
For children and youth with disabilities, a task force initiated by the Division for Research of the Council for Exceptional Children further articulated the levels of evidence needed and also specified the methodological standards to be included in such considerations of evidence. They specified that to be considered evidence-based, a practice had to be supported by two high quality RCTs or quasi-experimental design studies, or by five high quality single case design studies conducted by at least two different research groups and collectively including at least 20 participants. The Council for Exceptional Children has since established these criteria as acceptable levels of evidence (Council for Exceptional Children, 2014).
Studies are determined to be of high quality if specified criteria are met; only if these criteria are met will the findings from the study be examined for effects and, if positive, included in the evidence base for a specific practice. Quality standards for both group and single case designs are described below:
Group Design:
- Were appropriate procedures used to increase the likelihood that relevant characteristics of participants in the sample were comparable across conditions? To meet this standard, one of the following criteria must be met:
  - Were participants randomly assigned across study conditions?
  - OR were participants matched on key demographic variables?
  - OR did researchers statistically control for the effects of differing key variables to ensure equivalence of groups?
- Were outcomes for capturing the intervention's effect measured at appropriate times (at least pre- and post-test)?
- Was there evidence of adequate reliability and validity for the key outcome measures? And/or, when relevant, was inter-observer reliability assessed and reported to be at an acceptable level?
- Was the intervention described and specified clearly enough that it could be replicated by another interventionist?
- Was the control/comparison condition(s) described?
- Were data analysis techniques appropriately linked to key research questions and hypotheses?
- Was attrition shown not to be a significant threat to internal validity?
- Could the measured effects be attributed to the intervention (i.e., no obvious unaccounted-for confounding factors)?
Single Case Design:
- Do the results demonstrate changes in the dependent variable when the independent variable is manipulated by the experimenter at three different points in time or across three phase repetitions?
- Does the dependent variable align with the research question or purpose of the study?
- Was the dependent variable clearly defined such that another person could identify an occurrence or nonoccurrence of the response?
- Does the measurement system align with the dependent variable and produce a quantifiable index?
- Did a secondary observer collect data on the dependent variable for at least 20% of sessions across conditions?
- Was mean interobserver agreement (IOA) 80% or greater OR kappa of .60 or greater?
- Is the independent variable described with enough information to allow a clear understanding of the critical differences between the baseline and intervention conditions, or, where the description alone does not allow this, are references to other published material provided?
- Was the baseline described in a manner that allows for a clear understanding of the differences between the baseline and intervention conditions?
- Are the results displayed in a graphical format showing repeated measures for a single case (e.g., behavior, participant, group) across time?
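As an illustrative sketch (not drawn from the NCAEP materials), the interobserver agreement criterion above — mean IOA of 80% or greater, or kappa of .60 or greater — could be computed for two observers' interval-by-interval records as follows. The data are invented for demonstration:

```python
# Illustrative computation of point-by-point interobserver agreement (IOA)
# and Cohen's kappa for two observers' binary interval records.
# Data below are invented for demonstration.

def percent_agreement(obs1, obs2):
    """Point-by-point agreement: proportion of intervals scored identically."""
    agreements = sum(a == b for a, b in zip(obs1, obs2))
    return agreements / len(obs1)

def cohens_kappa(obs1, obs2):
    """Chance-corrected agreement for two binary (0/1) records."""
    n = len(obs1)
    p_observed = percent_agreement(obs1, obs2)
    p1_yes = sum(obs1) / n
    p2_yes = sum(obs2) / n
    # Expected agreement by chance: both say yes, or both say no
    p_expected = p1_yes * p2_yes + (1 - p1_yes) * (1 - p2_yes)
    return (p_observed - p_expected) / (1 - p_expected)

# 1 = behavior occurred in the interval, 0 = did not occur
obs1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
obs2 = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]
print(f"IOA:   {percent_agreement(obs1, obs2):.0%}")   # IOA:   80%
print(f"kappa: {cohens_kappa(obs1, obs2):.2f}")        # kappa: 0.60
```

In this invented example both thresholds are just met: the observers agree on 8 of 10 intervals (IOA = 80%) and kappa, which corrects that figure for chance agreement, equals .60.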
Next, the data from studies that make it through the rigorous quality review process are examined to determine whether there are positive effects on outcomes for learners with ASD. Focused interventions are identified as an evidence-based practice only when the following criteria are met in studies with positive effects:
Our final step is unique to our review process: the focus on broad dissemination, beyond traditional peer-reviewed venues. This ensures that practitioners and families have access to easily digested and referenced information that details the level of evidence for each practice, as well as the step-by-step process of planning for, using, and monitoring each EBP. The online training is called Autism Focused Intervention Resources and Modules (AFIRM), and these modules have helped more than 72,000 users around the world.