Special Issue of Educational Researcher Examines the Nature and Consequences of Null Findings in Education Research

December 2019

A newly released special issue of Educational Researcher, titled “Randomized Controlled Trials Meet the Real World: The Nature and Consequences of Null Findings,” focuses on important questions raised by the prevalence of null findings—the absence of expected or measurable results—particularly in randomized controlled trials. In the issue, leading researchers address what it means when an evaluation produces null findings, why null findings are so prevalent, and how they can be used to advance knowledge. Educational Researcher is a peer-reviewed journal of the American Educational Research Association.

In their introduction, special issue editors Carolyn D. Herrington (Florida State University) and Rebecca Maynard (University of Pennsylvania) write that the growing emphasis on evaluating the effectiveness of education programs, policies, and practices, along with the expanded use of randomized controlled trials for those evaluations, “has contributed to growing angst among some in the research, policy, and funder communities that so many experimental evaluations are producing null findings.”

Herrington and Maynard note that “much of this angst arises from confusion over the meaning of a null finding,” especially since “commonly, a null finding is interpreted as evidence that the tested strategy did not work or the study design was flawed.” However, as examined in the special issue, null results are actually an expected and valuable product of evaluation research.

“This special issue goes a long way in clarifying how to understand and interpret null results, especially in the context of education interventions,” said AERA Executive Director Felice J. Levine. “AERA is encouraging our journal editors to publish studies that have null results and to explore other ways to encourage researchers to share the knowledge that comes from them. This ultimately will enhance our collective understanding of what programs work, why they do, and how they can be implemented elsewhere to improve education outcomes.”

In addition to the editors’ introduction, the special issue—which is provided open access on the AERA website—includes the following research articles and commentaries.

“A Framework for Learning from Null Results,” Robin T. Jacob (University of Michigan), Fred Doolittle (MDRC), James Kemple (New York University), and Marie-Andree Somers (MDRC)

In this article, the authors propose a framework for defining null results and interpreting them. They also propose a method for systematically examining a set of potential reasons for a study’s null findings that would provide a more nuanced and useful way to understand them. The authors also argue that if studies were designed in preparation for weak, null, or even negative findings, they would be better situated to add useful information to the field.

“Using Implementation Fidelity to Aid in Interpreting Program Impacts: A Brief Review,” Heather C. Hill (Harvard University) and Anna Erickson (University of Michigan)

Hill and Erickson examine the relationship between the quality of program implementation (“fidelity”) and null results in trials of educational interventions. As expected, better implementation fidelity correlates with better program outcomes; they also find that the presence of new curriculum materials positively predicts fidelity level. However, their results suggest that poor implementation is a partial but not complete explanation for null results.

“Making Every Study Count: Learning from Replication Failure to Improve Intervention Research,” James S. Kim (Harvard University)

Kim draws on case studies of a particular intervention whose impact findings varied across studies—null in one and confirmatory in another—to illustrate how such findings can be used to understand the role of context in determining impact. He advocates for greater attention to context by evaluators in their designs and reporting, and urges researchers to see replication failure as an opportunity to improve intervention research and explore new research questions.

“Commentary on the Null Results Special Issue,” Carolyn J. Hill (MDRC)

In her commentary on the research articles in the special issue, Hill notes that null effects remain relatively under-examined, yet often contain important knowledge. She writes, “I share the authors’ general view that the presence of null results is not in itself reason for despair.” Rather than downplaying null results, she argues, “We have an obligation to anticipate their occurrence, interrogate their presence, and support continued, healthy attention to them.”

“Expecting and Learning From Null Results,” Jeffrey Valentine (University of Louisville)

In his commentary, Valentine argues that (1) conversations about replication efforts must begin with an agreed-upon definition of what it means to say that a study did or did not replicate the results of another study; (2) once a replication failure has been identified, using the surface similarity of the studies to reverse-engineer an explanation is unlikely to help; and (3) researchers and consumers should expect small and differing effects, a fact that points to the need to think across broad bodies of research evidence.
