
Open Science

Recent News August 2015: Read this New York Times article for results from the Reproducibility Project, which is dedicated to seeing which published studies will in fact reproduce (Carey, 2015, Aug).


    There is a current crisis of faith in the scientific process and in the results of RCT designs. Ioannidis' (2005) provocatively named paper "Why Most Published Research Findings Are False" (as well as others) has set off a firestorm of debate. He argues persuasively that publication bias (null results tend not to be reported), as well as researchers' well-meaning but selective picking of which samples to analyze, can wreak havoc on the library of published results, making people lose faith in something as fundamental as an RCT. This has resulted in a number of articles in the mainstream press such as the New York Times, The Economist, NPR, and the Wall Street Journal (Johnson, 2014 Jan; Johnson, 2014 Mar; Broad, 2014 Mar; Lombrozo, 2014 June; The Economist, 2013 Oct). Amgen Inc. tried to replicate 53 "landmark" cancer studies but could only succeed with six of them (Begley & Ellis, 2012). Bayer Inc. made a similar attempt for drugs: of the 67 results deemed important enough to be replicated, it could only replicate the findings in one quarter of the studies (Prinz, Schlange, & Asadullah, 2011). Failure to replicate a finding does not mean it is false, but with large numbers one should have the power to detect true effects. That is what makes a recent attempt in psychology to replicate 13 landmark studies so unique: dozens of labs all ran the same experiments, creating tremendous statistical power, and showed that two of the 13 seminal findings were simply false (Yong, 2013, November). But given that hundreds of RCTs are published each year, and that the cost of multi-lab replication is very high, we need a better way to ensure science can proceed.
    The openness of the scientific community is often seen as one of its strengths: over time, false results should prompt other researchers to run studies showing that someone else is wrong. This, in theory, should help a great deal, but it turns out that researchers are not as open with their data and materials as we would like them to be. Ioannidis and colleagues report that in the top 50 leading journals with data-sharing policies, only 143 of the 351 papers they sampled complied with those policies (Alsheikh-Ali et al., 2011). We think this proposal will help combat the problem: the terms of use require researchers, before they can construct a study, to commit to our open data policies. Specifically, one year after a study is published, all of its materials and data will become available. To avoid the file drawer problem of null results going unpublished, data will also be made public one year after completion of the study regardless of whether it is published (one year is our current best estimate of a reasonable length of time for writing up a study, and may be subject to revision). You can request an extension if you can show you are actively in the peer-review cycle.

    More recently, The Economist reported that only about 10% of drug companies and researchers were reporting the results of their clinical trials within the required deadline, and that only 50% had reported even five years after that deadline ("Spilling the Beans," 2015). The Economist points out that delays in publishing results cause financial harm:

"Even when no medical harm is done, financial harm can be. Since 2006, the British government has spent £424m ($660m) stockpiling Tamiflu, an antiviral drug, in order to anticipate an influenza pandemic. At the time the decision was taken, 60% of the trial data about this drug remained unpublished. Those data have now been analyzed, and that analysis has raised questions about Tamiflu’s effectiveness in reducing hospital admissions, and thus about whether creating the stockpile was money well spent."

    It is for these reasons that we ask for your permission to publish (in fact, we require you to agree to publish) your data and materials before you submit a study. Fundamentally, "registries" (e.g., https://clinicaltrials.gov, or the US Dept. of Ed equivalent) that require after-the-fact action by researchers will tend to be slower and more susceptible to these problems than our system, which gets researchers to agree to the terms upfront. With our system, researchers need to do nothing further for open science to progress.

    Our terms of use will require that all publications acknowledge the precise data dump page that we will send you when your study starts to produce data. That page will also allow any researcher to obtain the raw data and the materials. Ioannidis also points out that "Selective or distorted reporting is a typical form of ... bias." For instance, a researcher might choose to selectively report on the one trial that worked. Because the data dump web page will link to the complete history of studies you have run, it will allow others to ask, "Did that researcher [i.e., you] really try 17 different times to get an effect and then report only the last one that worked?"
    It seems like a tall order, but we think the ASSISTmentsTestBed will help learning scientists and education researchers take steps toward solving these problems and producing more accurate scientific results. It is for all these reasons that we ask you to agree to our terms of use ahead of time.

References 
  1. Alsheikh-Ali, A., Qureshi, W., Al-Mallah, M., & Ioannidis, J. (2011). Public Availability of Published Research Data in High-Impact Journals. PLoS ONE, 6(9), e24357. Retrieved March 10, 2014, from http://www.plosone.org/article/info%3Adoi%2F10.1371%2Fjournal.pone.0024357
  2. Broad, W.J. (2014, Mar 15). Billionaires With Big Ideas Are Privatizing American Science. New York Times Science. Retrieved on June 3, 2014, from http://www.nytimes.com/2014/03/16/science/billionaires-with-big-ideas-are-privatizing-american-science.html.
  3. Carey, B. (2015, Aug 27). Many Psychology Findings Not as Strong as Claimed, Study Says. New York Times. Retrieved August 31, 2015 from http://www.nytimes.com/2015/08/28/science/many-social-science-findings-not-as-strong-as-claimed-study-says.html?_r=1.
  4. Ioannidis, J. P. A. (2005). Why Most Published Research Findings Are False. PLoS Med, 2(8), e124. doi:10.1371/journal.pmed.0020124. Retrieved on March 17, 2014, from http://www.plosmedicine.org/article/info%3Adoi%2F10.1371%2Fjournal.pmed.0020124.
  5. Johnson, G. (2014, Jan 20). New Truths That Only One Can See. New York Times Science, Retrieved on June 3, 2014, from http://www.nytimes.com/2014/01/21/science/new-truths-that-only-one-can-see.html.
  6. Johnson, G. (2014, Mar 7). When Studies Go Wrong: A Coda. New York Times Science, Retrieved on June 3, 2014, from http://www.nytimes.com/2014/03/07/science/when-studies-are-wrong-a-coda.html.
  7. Lombrozo, T. (2014, June 2). Science, Trust And Psychology In Crisis. National Public Radio. Retrieved on June 3, 2014, from http://www.npr.org/blogs/13.7/2014/06/02/318212713/science-trust-and-psychology-in-crisis.
  8. Prinz, F., Schlange, T. & Asadullah, K. (2011). Believe it or not: how much can we rely on published data on potential drug targets? Nature Rev. Drug Discov. 10(9), 712.
  9. Spilling the Beans (2015, July 25). The Economist. Retrieved on August 31, 2015, from http://www.economist.com/news/science-and-technology/21659703-failure-publish-results-all-clinical-trials-skewing-medical.