Education Week’s Sarah Sparks Discusses Covering Education Research

August 2018

The following Q&A is one in an occasional series of conversations with policy and opinion leaders with an interest in and commitment to education research.

Sarah Dockery Sparks is an assistant editor and research reporter at Education Week, where she has covered education research and the science of learning for more than a decade. She also writes the blog “Inside School Research.” Sparks has been covering education, both national and local, for several news agencies since 2002.

Q. What factors do you consider when determining whether a new research study is worth covering?

A. I’m looking for three basic things: Does the study address a problem or issue that teachers, school or district staff, or policymakers care about? Does it advance the conversation in the education or research fields, either by moving in a new direction or by making us question old assumptions? Does it come to reasonable conclusions based on its findings? The last is a necessity, but the first two balance each other; I might write about a study that researchers consider old news if it offers clear, practical conclusions that teachers could use in the classroom. Conversely, I might write about a study that serves as an entry point for discussing the evolution of a whole field of research.

Finally, there’s an old concept in newspaper journalism called the “Hey, Mabel”—the kind of surprising or quirky tidbit in a story that would make a guy reading at the breakfast table lean over to read it to his wife. Today we might ask if there’s something tweetable. I’m always looking for findings that surprised their authors, or that made me think about a problem in a new way.

Q. How do you decide if research is of high enough quality to cover?

A. It’s tricky; it sometimes seems like every other day we’re finding that a well-regarded researcher fudged data, or that a well-known study with seemingly solid methodology was the result of p-hacking.

I look for signs the study was done with care. Has the study been registered in advance? Is the methodology clearly laid out? Does it have a strong sample—acknowledging that sample sizes can look very different in, say, neuroscience versus behavioral intervention evaluations? Do the researchers point to potential criticisms or limits of their findings? All of those are good signs.

I’ve been covering education research for about 15 years, but I don’t have a degree in research, so I also rely on trusted researchers both in my office and in the field to help me think through studies that feel complex or just a little fishy.

Q. What areas of current education research are ripe for expanded coverage?

A. The sheer quantity and variety of education data emerging in the last decade has been a boon for education research, and it’s been great to cover emerging insights about, for example, early warning signs that a student is struggling. But there hasn’t been enough coverage of research into how to change systems rather than just identify problems. And there hasn’t been enough coverage of how people use big data or of how biases can affect the conclusions people draw from data.

Q. What education issues are especially newsworthy right now? Which ones do you see emerging over the next six months or year? What questions related to those issues lend themselves to education research?

A. The momentum in education accountability has shifted from the federal level to the states, and researchers can offer a lot of insight by comparing how critical education issues play out in different states. I already see this happening in areas like school organization, charter accountability, and the use of social-emotional and school climate data.

But there are other areas that haven’t been explored as much. For example, as states roll out new school data websites for parents (as required by the Every Student Succeeds Act), it would be fascinating to see how different groups of parents are accessing this information and how it changes their decision-making.

I’m also interested in how the skyrocketing popularity of virtual classes affects how students learn and develop socially, and how virtual schools will be addressed in new accountability systems.

Q. What advice do you have for researchers and other science communicators to make research findings more accessible to reporters and other non-researchers?

A. Explain your findings as you would to your neighbor or your grandmother. In addition to relaying effect sizes, talk about what they mean in practical terms. Also, consider registering your study with a database like the one by the Society for Research on Educational Effectiveness, to make it easier for your findings to be replicated.

Q. What are some best practices researchers and science communicators should consider when pitching research?

A. Be clear about your findings, but don’t oversell them. If you are asking for coverage, make sure the journalist has access to the full study (not just a press release), and is able to talk to the study authors. If you have charts or photos for the study (or participants who are willing and able to talk about an intervention), be sure to mention that. And when you talk to reporters, don’t shy away from asking them if they are following your explanations.
