
While practitioners and policymakers often seek to make research-based decisions, it can be difficult to meaningfully interpret studies’ results. For example, if a school-wide program designed to promote positive social interactions between teachers and students is effective in a group of second grade classrooms, will it have similar results for all K-5 students? Will the program be more effective than what schools are currently using? Sometimes studies that examine these kinds of questions report that a program has ‘no effect’. But what does this mean, and how might it affect decisions for practice and policy?

The Institute of Education Sciences (IES) just released a brief by Neil Seftor at Mathematica Policy Research on how practitioners and policymakers can interpret a no effect result.

What does ‘no effect’ mean?

As outlined in the brief, a program is found to have no statistically significant effect when statistical analysis suggests there is a reasonable chance (often defined as 5 times out of 100 or greater) that the observed effects could have occurred due to random chance alone, rather than because of the program itself.
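To make that threshold concrete, here is a minimal sketch (not from the brief) of the kind of significance test such a study might run; the test-score data are entirely hypothetical:

```python
# Minimal sketch of a two-group significance test; the data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical outcomes: 30 students in the program, 30 in a comparison group.
program = rng.normal(loc=72, scale=10, size=30)
comparison = rng.normal(loc=70, scale=10, size=30)

t_stat, p_value = stats.ttest_ind(program, comparison)

# If the p-value is 0.05 or greater, the observed difference could plausibly
# have arisen from random chance alone, so the study would report no
# statistically significant effect.
if p_value >= 0.05:
    print(f"No statistically significant effect (p = {p_value:.3f})")
else:
    print(f"Statistically significant effect (p = {p_value:.3f})")
```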

It does not mean the program being studied has ‘zero’ effect on the outcomes of interest. When a study yields a no effect finding, it cannot be determined whether the program is significantly better or worse than the condition to which it is being compared, whether a placebo control condition or, in many cases, simply the status quo without the program.

It is also important to remember that an effect that is statistically significant may not be large enough to be practically meaningful. And an effect that is not statistically significant may still be important to practitioners or policymakers. 
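To illustrate that distinction, the following sketch (again with invented numbers) shows how a very large sample can make a tiny effect statistically significant even when a standard effect size measure suggests it has little practical importance:

```python
# Illustrative only: a tiny effect becomes statistically significant
# when the sample is large enough.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n = 100_000  # an unusually large hypothetical sample
program = rng.normal(loc=70.2, scale=10, size=n)   # tiny true advantage
comparison = rng.normal(loc=70.0, scale=10, size=n)

t_stat, p_value = stats.ttest_ind(program, comparison)

# Cohen's d: the difference in means in standard-deviation units,
# a common gauge of practical importance.
pooled_sd = np.sqrt((program.var(ddof=1) + comparison.var(ddof=1)) / 2)
d = (program.mean() - comparison.mean()) / pooled_sd

print(f"p = {p_value:.4f}, Cohen's d = {d:.3f}")
# Expected result: p well below 0.05, yet d near 0.02, i.e. statistically
# significant but arguably too small to matter in practice.
```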

Why do no effect results occur?

The brief outlines three factors that could lead to a ‘no effect’ finding:

  • Failure of implementation: Did the participants adequately follow all components of the protocol outlined by the research team?
  • Failure of research design: How were the effects of the study measured?
  • Failure of theory: Was there a flaw in the theory behind the design of the program?

The following table from the brief provides more details on how implementation and research design could lead to a no effect finding.

[Charts from the brief]

What are the implications of a ‘no effect’ result for policy and practice?

Not all studies are designed in ways that can detect statistically significant and practically meaningful effects. For example, studies with small sample sizes lack the statistical power to detect an effect unless it is very large. Programs may also benefit certain subgroups in the sample, but such effects can be masked in an analysis that estimates a single effect for the entire sample.
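A rough sketch of the sample-size point, using standard power calculations (the effect size and group sizes below are illustrative assumptions, not figures from the brief):

```python
# Statistical power: the probability of detecting a real effect of a given
# size. The numbers here are illustrative assumptions.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Chance of detecting a modest effect (Cohen's d = 0.3) at alpha = 0.05:
for n_per_group in (20, 100, 500):
    power = analysis.solve_power(effect_size=0.3, nobs1=n_per_group, alpha=0.05)
    print(f"n = {n_per_group:>3} per group -> power = {power:.2f}")

# With 20 students per group, power is only about 0.15: a real effect of this
# size would be missed most of the time, producing a 'no effect' result.
```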

While it is important to use evidence-based practices in classrooms and to inform policy, no single study can encapsulate how a program will work across all contexts. Ideally, multiple studies should be conducted in multiple settings to get a clearer picture not only of whether a program works but of how it creates change and for whom. The What Works Clearinghouse is one helpful tool that looks across multiple studies to assess the effectiveness of programs across contexts and environments. The Mindset Scholars Network’s first two flagship initiatives, the National Study of Learning Mindsets and the College Transition Collaborative, are designed to determine the effectiveness of mindset programs in numerous settings.

When making decisions about programs, policymakers and practitioners have many factors to consider, from the cost of program implementation to the unintended consequences of replacing an existing program. Understanding and interpreting educational research offers one more tool for decision makers to use when deciding what will work best for the populations they serve.
