Welcome to part II of our Q&A with Neil Naftzger, American Institutes for Research (AIR), about his evaluation work related to 21st CCLC programs specifically and the afterschool field broadly. Below are answers to one of the questions we asked, with our emphasis added in bold, which establish that there is in fact clear evidence demonstrating that 21st CCLC works for students. To read part I, click here.
What changes would you like to see in terms of 21st CCLC data collection and evaluation?
This is a big question. First, I think we need to be clear about the purposes we’re trying to support through data collection and evaluation. Normally, we think about this work as falling within three primary categories:

1. Using quality data to improve program practice
2. Monitoring the participation and progress of enrolled youth to inform targeted refinements to programming
3. Assessing program impact
States have done an amazing job over the past decade developing quality improvement systems predicated on using quality data to improve practice (purpose #1). Effective afterschool quality improvement systems start with a shared definition of quality. In recent years, state 21st CCLC systems have come to rely upon formal assessment tools like the Youth Program Quality Assessment (YPQA) and the Assessment of Program Practices Tool (APT-O) to provide that definition, allowing 21st CCLC grantees to assess how well they are meeting these criteria and craft action plans to intentionally improve the quality of programming. Use of these tools typically involves assigning a score to various program practices in order to quantify the program’s performance and establish a baseline against which to evaluate growth. A recent report completed by AIR indicates approximately 70 percent of states have adopted a quality assessment tool for use by their 21st CCLC grantees. Our sense is that these systems have been critical to enhancing the quality of 21st CCLC programs, and any efforts to modify the 21st CCLC data collection landscape should ensure program staff have the support and time necessary to participate in these important processes.
Secondly, additional work needs to be done to define key indicators for the program to support efforts to monitor the participation and progress of enrolled youth and inform efforts to make targeted refinements to programming to enhance quality and effectiveness (purpose #2). For example, our sense is that indicators could be crafted to answer the following three questions:
We also would advocate for the collection and use of these data using a quality improvement, as opposed to an accountability, framework. The focus here should be on using data to enhance program implementation, as opposed to making summative judgments about efficacy or impact.
Assessing program impact represents the final way of looking at 21st CCLC data (purpose #3). In this instance, the focus should be on using rigorous quasi-experimental designs to support making causal inferences about how participation in 21st CCLC programming may be having a positive impact on participating youth. Here, we would like to see improvement in two areas: (1) enhancing the rigor of evaluation efforts and (2) improving the measurement of outcomes that are especially likely to be impacted through 21st CCLC participation.
In terms of rigor, if we are trying to make causal inferences on the impact of 21st CCLC, we need to make sure we are using research designs that will support these inferences. In all the statewide 21st CCLC evaluations completed by AIR, each of the analyses undertaken to assess the relationship between regular program participation and youth outcomes compared youth participating in programming for 60 days or more during the school year with a similar group of youth from the same schools who did not participate in programming. In order to ensure the participating and non-participating groups were as similar as possible, we used an approach called propensity score matching to create the non-participant comparison group. Given that youth were not randomly assigned to participate in programming or not, there is always the concern that youth who did opt to participate differed in important ways from youth who did not enroll in programming. That is, potential program effects could be driven more by existing differences between the groups at baseline than a true relationship between program participation and youth outcomes. The goal in using propensity score matching was to mitigate this selection bias when estimating potential program effects by accounting for preexisting differences between youth who attended the program regularly and those who did not. As a result, these analyses helped us to isolate the potential effect participation in 21st CCLC had on youth outcomes.
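The matching approach described above can be illustrated with a small sketch. This is not AIR's actual model — the data, covariates, and matching choices below are all invented for illustration — but it shows the basic mechanics of 1:1 nearest-neighbor propensity score matching: model each youth's probability of participating from baseline characteristics, pair each participant with the non-participant whose estimated probability is closest, and then compare outcomes across the matched groups.

```python
# Illustrative sketch of propensity score matching with synthetic data.
# The covariates, selection mechanism, and outcome model are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(0)

# Synthetic baseline covariates (stand-ins for, e.g., prior achievement
# and school-day attendance).
n = 400
X = rng.normal(size=(n, 2))

# Selection bias: participation is more likely for some baseline profiles.
p = 1 / (1 + np.exp(X[:, 0] - 0.5 * X[:, 1]))
treated = rng.random(n) < p

# Step 1: model the probability of participation (the propensity score).
model = LogisticRegression().fit(X, treated)
scores = model.predict_proba(X)[:, 1]

# Step 2: pair each participant with the non-participant whose
# propensity score is closest (1:1 nearest-neighbor matching).
t_idx = np.where(treated)[0]
c_idx = np.where(~treated)[0]
nn = NearestNeighbors(n_neighbors=1).fit(scores[c_idx].reshape(-1, 1))
_, match = nn.kneighbors(scores[t_idx].reshape(-1, 1))
matched_c = c_idx[match.ravel()]

# Step 3: compare a (synthetic) outcome across the matched groups.
outcome = X[:, 0] + 0.3 * treated + rng.normal(scale=0.5, size=n)
effect = outcome[t_idx].mean() - outcome[matched_c].mean()
print(f"estimated effect on matched sample: {effect:.2f}")
```

In practice, an analysis like AIR's would also check covariate balance after matching and adjust the model if the matched groups still differ at baseline; the sketch omits those diagnostics for brevity.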
Efforts to evaluate the impact of 21st CCLC should increasingly try to employ these types of designs, particularly at the state level, where additional evaluation resources are available to support these types of analyses. In addition, very little work has been done to understand the longitudinal effects of 21st CCLC participation on the educational and career trajectories of participating youth. More work needs to be done in this area as well.
Finally, there is a great opportunity to improve upon how we are measuring youth outcomes supported by 21st CCLC participation. When we talk with center coordinators about how youth benefit from participating in the program, quite often their responses can be classified as falling in one of the following categories:
Unfortunately, many of these areas are currently going unmeasured, leading to a gaping hole in our understanding of how 21st CCLC programming is truly impacting participating youth. Our sense is that if we really want to understand how 21st CCLC may be impacting youth, we need to dedicate some additional effort to examining these types of outcomes. However, before states and grantees widely pursue efforts to measure these types of program outcomes, they should wait for the research community to provide more concrete recommendations regarding which skills, beliefs, and attitudes can be reliably measured (and under what conditions), and what protections need to be in place to ensure youth are not adversely impacted by participating in these measurement efforts.