Understanding the data: In the battle over the conclusions of a major research reproducibility study, a VCU professor weighs in

Jennifer Joy-Gaba, Ph.D.

There is renewed buzz over the findings of the Reproducibility Project: Psychology (RPP), published last year in Science. To recap, researchers from all over the world, working as part of the Open Science Collaboration, set out to replicate 100 psychology experiments published in three journals in 2008. As it turned out, RPP researchers were able to replicate only about 40 percent of the studies. They could not replicate 30 percent, and findings from another 30 percent were inconclusive.

The results caused a stir in the research community, pulling the rug out from under some widely accepted research findings. Now the chatter has grown louder again with this month’s criticism of RPP by researchers from Harvard University and the University of Virginia. They contend that, because of statistical errors, there is insufficient support for RPP’s conclusions.

Questions are indeed swirling. For insight, VCU News spoke with Virginia Commonwealth University’s Jennifer Joy-Gaba, Ph.D., assistant professor in the Department of Psychology in the College of Humanities and Sciences. She is part of the Open Science Collaboration and a co-author of the RPP study.

Why even go through this process of replicating studies?

For many years there have been conversations about what scientists refer to as the “file-drawer problem”: studies that did not produce an effect, or that could not be replicated, and so often go unpublished. However, few research projects have examined this idea scientifically. This project was a first step toward testing replicability in a systematic, large-scale manner.

Given all the variables involved with time and place and so much more, is it truly possible to replicate a study?

One of the main goals of science is for studies to be publicly verifiable. That is, the study methods should be written in such a way that another researcher can read them, understand them, and conduct the study in the same way. The hope is that researchers will converge on similar results, perhaps with a smaller or larger effect size. Of course, error variance is always important to consider. A few examples might include the temperature outside, the demographics of the sample or even current news events. Any one of these might change participants' responses at different assessment times.

The idea isn't that a researcher obtains the exact same finding, but that conducting the same methods multiple times should lead results to converge in the same direction. So in this way, absolutely: studies should be able to be replicated, and with good reason. If industry relies on science to determine which new drugs are safe or what makes a vehicle user-friendly, shouldn't we be confident in science's findings?
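As a rough illustration of that convergence (not part of the RPP itself), the short Python simulation below runs five hypothetical replications of the same two-group design and computes a standardized effect size, Cohen's d, for each; the true effect of 0.4 and the group size of 100 are arbitrary assumptions chosen for the example.

```python
# A rough illustration (not from the RPP): simulate repeated "replications"
# of the same true effect and show that, while no two runs produce identical
# numbers, the estimated effects cluster in the same direction.
import numpy as np

rng = np.random.default_rng(seed=1)
true_effect = 0.4     # assumed standardized mean difference (Cohen's d)
n_per_group = 100     # assumed sample size per group in each replication

def cohens_d(treatment, control):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = np.sqrt((treatment.var(ddof=1) + control.var(ddof=1)) / 2)
    return (treatment.mean() - control.mean()) / pooled_sd

estimates = []
for _ in range(5):  # five hypothetical replications of the same design
    control = rng.normal(0.0, 1.0, n_per_group)
    treatment = rng.normal(true_effect, 1.0, n_per_group)
    estimates.append(cohens_d(treatment, control))

print([round(d, 2) for d in estimates])
```

Each run yields a slightly different estimate, but all of them point in the same direction around the assumed true effect, which is the sense of "converge" described above.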

Absolutely. RPP researchers were only able to replicate a third of the studies — and the study seemed to open the door to more questions. What are your thoughts on the recent criticism?

In general, I believe that criticism in science is a good thing. It is what helps science continue to move forward and provides new questions to test. I won't get into the criticisms, as many others have discussed them in great detail. I will say that the majority, if not all, of the replication studies conducted as part of the RPP used materials obtained directly from the original researchers. For each study, the replication team wrote an initial proposal that was reviewed by other colleagues and/or the original authors. Power analyses were used to ensure there would be a large enough sample to detect an effect. Lastly, a final report was written and vetted. All materials, analysis scripts, data, and reports are publicly available. In addition, the paper, critique, and rebuttal are public.
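To make the power-analysis step concrete, here is a minimal sketch (not the project's actual script) using the statsmodels library to estimate how many participants per group a two-sample design would need in order to detect an assumed effect size of d = 0.5 with 90 percent power.

```python
# A minimal sketch of an a priori power analysis, not the RPP's own script:
# given an effect size assumed from an original study, solve for the
# per-group sample size needed to detect it with the desired power.
from statsmodels.stats.power import TTestIndPower

effect_size = 0.5   # assumed Cohen's d from the original study
alpha = 0.05        # conventional significance threshold
power = 0.90        # desired probability of detecting the effect

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=effect_size,
                                   alpha=alpha,
                                   power=power,
                                   alternative='two-sided')
print(f"Participants needed per group: {round(n_per_group)}")
# Smaller assumed effects (e.g., d = 0.2) require much larger samples,
# which is one reason replication teams often recruit more participants
# than the original studies did.
```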

In other words, RPP researchers were rigorous and thorough. What’s happening now to understand the results better?  

The great thing about the RPP is that the data and methods are open to anyone. Any follow-up questions can be answered simply by downloading the data and running the analyses. There have been replies to the original critique from several of the researchers involved in the project, as well as outside commentary from scientists who were not involved. And, of course, I wouldn't discount follow-up studies.

More studies, and changes, too. It seems like the results are provoking shifts in the research process: more transparency through pre-registration; larger, more diverse samples; and collaboration across institutions. These all seem like positive solutions and changes. Is there resistance to this in the research community?

I'm not sure there is resistance per se, but I do think these are new practices that people are adjusting to. Other fields, like physics and medicine, already have some of these requirements in place for their own research programs. Transparency in science is a wonderful thing, and I would hope other researchers see this new process as a positive step.