
Credibility Multipliers: Simple Yet Effective Tactics for Practicing Open Science Principles

Christopher M. Castille, Nicholls State University; Fred Oswald, Rice University; Sebastian Marin, University of Minnesota–Twin Cities; and Tanja Bipp, Heidelberg University

Before diving into the main topic for this installment of Opening Up, we’d like to point out how some important advances and lessons in open science have been motivated by the devastating crisis of the coronavirus pandemic. Since the beginning of the year, research to understand the coronavirus and the disease of COVID-19 has been conducted in earnest, with over 7,000 papers produced in the past 3 months (The Economist, 2020), across a wide range of disciplines (e.g., virology, epidemiology, healthcare). Considering how slow and steady the scientific publishing process tends to be (ranging from a couple of weeks to over a year), what do we make of this vast output? What lessons might we take away in thinking further about open science?

First, note that a large volume of this research has been made publicly available in the form of preprints—manuscript drafts that have not yet undergone peer review—posted to online archives (e.g., bioRxiv and medRxiv are key repositories for this research; PsyArXiv is a similar repository for psychological research). Preprints allow scholars to share their work more rapidly and widely than journals do, helping researchers get quick and wide-ranging feedback on their work outside the more formal peer review process. Thus, preprints open up the potential for scholarly work as a whole to advance more rapidly. The word potential is key here. Submissions, particularly to bioRxiv and medRxiv, may be given cursory checks to weed out nonscientific work (e.g., opinion pieces), but the scientific quality of the work is expected (hoped, in fact) to be heavily, appropriately, and quickly examined and critiqued by the scholarly community. For instance, one paper falsely suggested that the coronavirus was created in a lab, a claim that was quickly picked up by the press but nearly as quickly dismantled by the scholarly community, who pointed to genetic evidence and animal research that strongly supported more natural origins. Another preprint, shared via news outlets as of this writing and discussed on The Daily Show with Trevor Noah (May 21, 2020), suggested that marijuana may have coronavirus-fighting benefits. It turns out this preprint was supported by a CBD company. Thus, the speed at which preprints are produced for scientific, media, and public consumption carries both real benefits and serious drawbacks. However, press representation of scientific work has always been a problem, even in traditional publishing, so there needs to be continued examination of the tradeoffs of providing large volumes of scientific work that have not been fully vetted.

As you might have guessed by now, this leads us to preprints as one of the “simple yet effective” tactics we wish to highlight for opening up I-O psychology research.1 The Economist surmised that the practice of posting preprints will only become more widespread, and, thus, we should become more active participants in the preprint community, getting into the fray of open science, as it were, so that we can connect with other disciplines and help shape how preprints can best serve our discipline. As a recent and pertinent example, consider a 42-author perspective piece by Van Bavel et al. (2020), which was posted as a preprint to PsyArXiv and then published as a perspective piece in Nature Human Behaviour. The authors, a collection of esteemed scholars in a variety of behavioral and social science areas, including SIOP’s own Michele Gelfand, highlight psychological insights from several key areas of empirical work (e.g., leadership, threat perception, stress, and coping) that may be relevant for guiding policy during the pandemic. A quick response to this preprint can be found in another preprint (IJzerman et al., 2020), which details why psychological science may prove helpful in handling the pandemic but is generally not “crisis ready” when life or death is at stake.

The point of these examples is that preprints present real opportunities for us as a field, and we should start discovering how to use them. In a similar vein, the Journal of Applied Psychology has recently made a call for rapid research into COVID-19, and other psychology and management journals are making similar calls. Many of these journals plan to make their pandemic-relevant articles freely accessible to the scientific community and to the public, at least for a limited time. Like many others, we cannot help but think, “This is great! ...Can we do more open access so that I-O psychology can have an even bigger impact on the scientific community, policy makers, and the public?” This is not a simple question, actually. Certainly publishers seek to protect their assets, but they also promote professions, researchers, and the science we produce, and they assure a level of peer and editorial review that is not found in preprints. Furthermore, open science issues intersect with a wide range of scientific, professional, legal, and ethical concerns and stakeholders that require continued discussion (Grand et al., 2018). In other words, and in brief, open science is not simply a matter of flipping a switch.

Even with all the aforementioned issues, and even as open science itself evolves, we are confident that almost every I-O psychologist can engage in a set of what we will call credibility multipliers (named as such in hopes that you will use them!). Credibility multipliers refer to a set of simple-yet-effective open science activities that almost always improve the process and outcomes of research and practice. Taking this approach, our goal is to inspire simple behaviors that stand to improve not only I-O research and practice but also our professional reputations over time. Sometimes a little extra detail goes a long way toward enhancing the credibility of one’s work, and we take this stance so that I-O psychology will enter the stage, joining the growing community of open science in greater numbers and with capability and enthusiasm.

Fortunately, as beta testers of open science for I-O psychology, we are hardly starting from scratch. Scholars from a wide range of disciplines have offered a number of open science tools and points of guidance for shifting our culture toward greater openness and transparency in how we conduct and share our work (e.g., Kramer & Bosman, 2018; Nosek et al., 2015; Nuijten, 2019; Wicherts et al., 2016). One visual that captures the cornucopia of available practices across disciplines is the open science rainbow (Kramer & Bosman, 2018; see Figure 1). Some of these tactics are relatively easy to put into practice almost immediately; for example, creating and sharing reference libraries via Zotero allows scholars and practitioners to share articles rather seamlessly. Other practices require considerably more training to implement, such as using Jupyter notebooks, a practice that might speak best to specialized audiences (e.g., data scientists in organizations).

Figure 1
The Rainbow of Open Science Practices (Kramer & Bosman, 2018).

From the rainbow of open science, we call specific attention to (a) minimally, preregistering a study on the Open Science Framework (OSF) and then at least considering the registered report publishing format; (b) ensuring analytic and computational reproducibility; (c) using checklists to avoid questionable research practices; and (d) sharing work via preprints.2

Preregistration and Registered Reports

Preregistration is similar to a project or thesis proposal; it is the process of writing out and committing to your research questions and analysis plan prior to conducting the study and observing its outcomes. As Richard Feynman put it, “The first principle is that you must not fool yourself—and you are the easiest person to fool.” Simply put, the intent is to distinguish which aspects of research were confirmatory (as planned beforehand) versus exploratory (as discovered in follow-up), thus helping to guard against confirmation bias, the tendency to construct theories or beliefs from findings retrospectively (Nosek et al., 2018). Preregistration can be as simple or as detailed as one would like. On the simple side, the website www.aspredicted.org provides a form asking nine questions (e.g., specifying the research question, dependent variable, and sample size), and this form can be kept to oneself or shared publicly. We hope most I-O psychologists will use this sort of form because it is easy and a nice way to begin building the habit of preregistration and planning. I-O psychologists should also review other preregistration forms that are publicly available; although they are more intensive, they might prove even more useful (see templates provided by the OSF: https://osf.io/zab38/wiki/home/). Depending on how detailed you wish to be, it can take anywhere from 30–60 minutes to preregister a study (Aguinis et al., 2020), which makes it easy to place into your workflow. Preregistration also allows your lab to collaborate in jointly discussing, forming, and committing to a well-formed research plan.

What should be made extremely clear is that preregistration does not lock a researcher into any particular approach to the work; in fact, the preregistered plan itself might contain aspects of the work that are known a priori to involve exploratory or qualitative work of the sort featured in inductive research (e.g., Dirnagl, 2020; Haven & Van Grootel, 2019; see also the Special Issue on Inductive Research in the Journal of Business and Psychology edited by Paul Spector, 2013). Essentially, preregistration front-loads a project: Investing greater time in planning on the front end generally makes execution on the back end (e.g., analysis and writing up results) easier. As authors select and fill out their preregistration forms, they would benefit from consulting the American Psychological Association’s Journal Article Reporting Standards (JARS; see https://apastyle.apa.org/jars), which are provided for quantitative, qualitative, and mixed methods studies. Research might be viewed as more credible when preregistration details are specified a priori (e.g., data-cleaning procedures, outlier detection tools) because committing to a research process reassures readers that authors have not adjusted their procedures post hoc to achieve a set of desired results. Of course, the more detail that is included, the more time this will take. However, many individuals are sharing their preregistration procedures online (e.g., program scripts for screening outliers or for screening inattentive responding), making it easier to include such methodological details in a preregistration.
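
For instance, here is a minimal sketch in R of an outlier-screening rule of the kind that could be committed to in a preregistration and shared as a script; the variable name, cutoff, and simulated data are hypothetical illustrations, not a prescription:

```r
# Minimal sketch (hypothetical): a preregistered outlier rule stating that
# cases with a standardized score beyond |3| on the focal variable are excluded.
flag_outliers <- function(x, cutoff = 3) {
  z <- (x - mean(x, na.rm = TRUE)) / sd(x, na.rm = TRUE)
  abs(z) > cutoff
}

# Simulated stand-in for the study data; in practice this would be the
# preregistered focal variable (e.g., job satisfaction ratings).
set.seed(1)
dat <- data.frame(job_satisfaction = c(rnorm(98, mean = 4, sd = 1), 9.5, -2.8))

dat_screened <- dat[!flag_outliers(dat$job_satisfaction), , drop = FALSE]
nrow(dat_screened)  # number of cases retained under the preregistered rule
```

Because the rule and its cutoff are written down and shared before the data are analyzed, readers can verify that exclusions were not tuned to the results.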

A registered report is a detailed preregistration plan submitted to a journal for review, where submitting scholars can gain constructive feedback and insights into the conceptual and methodological limitations of their study while ideally earning an agreement to have their work accepted for publication, conditional on completing the study as planned (study design, data analysis, etc.). I-O psychology journals that currently accept registered reports include The Leadership Quarterly; International Journal of Selection and Assessment; Journal of Personnel Psychology; Stress & Health; Work, Aging and Retirement; and the Journal of Occupational and Organizational Psychology. We are big fans of working with journals and editorial boards more collaboratively to develop great research before the work is executed, given that an ounce of prevention in the form of a well-designed study is worth a pound of cure in the form of applying statistical corrections and hoping for the best (Oswald et al., 2015) or waiting for additional data in a replication or meta-analysis (Aguinis & Vandenberg, 2014). Registered reports can help remedy the sort of psychological research that has fallen prone to endogeneity bias (Antonakis, 2017), low statistical power (Aguinis et al., 2020; Fanelli, 2010, 2012; Fraley & Vazire, 2014; Murphy & Russell, 2017), and the misuse of control variables (Bernerth & Aguinis, 2015; Spector & Brannick, 2011). Thus, the preregistration process can improve research and the training of researchers; prevent the escalation of commitment to poorer research practices; and place important research criticisms up front, before substantial investments of time, reputation, and ego are made.

Furthermore, should a preregistration or registered report call for a larger sample than you originally anticipated, do not let that deter you. Sometimes it might make sense to consider simpler models that require less power, but in other cases, we suggest becoming inspired to gather vastly more data via a multisite collaboration. In fact, one of the authors (Chris) is currently part of a large-scale collaboration involving over 100 labs around the world that is investigating the universality of moral intuitions. This multisite project is sponsored under the umbrella of the Psychological Science Accelerator (PSA; see https://psysciacc.org/).3 The lead authors required a sample of almost 10,000 participants, well beyond what a single lab could easily collect. By submitting the project to the PSA, they were able not only to recruit labs from around the world to collect the necessary data but also to collect the data in parallel (and thus faster) while ensuring that the sample was more internationally diverse than any individual lab could obtain. In addition to data collection, scholars are also collaborating on the manuscript itself, producing a paper that has been conditionally accepted by the journal Nature Human Behaviour as a registered report, with data collection ongoing (see Bago et al., 2019).

Notably, in the previous entries to Opening Up, a few other such multisite collaborations were highlighted (i.e., Many Labs, the Open Science Collaboration). Multisite collaborations have become a powerful vehicle for leveraging our collective expertise to make robust contributions. They are even currently occurring at the undergraduate level (see Wagge et al., 2019). What if they happened at the graduate level in I-O psychology? At the doctoral level? What if I-O psych programs leveraged their resources (e.g., access to organizations) to study phenomena (e.g., psychological reactions to COVID-19) that made a practical impact (e.g., studying how employers navigate COVID-19)?

Ensuring Analytic and/or Computational Reproducibility

Analytic and computational reproducibility refers to “the ability of other researchers to obtain the same results when they reanalyze the same data” (Kepes et al., 2014, p. 456). Achieving it involves sharing data in a public repository (e.g., OSF, GitHub) while clearly indicating the variables, how the data were cleaned, and so on. Ideally, authors will also publicly share code/syntax, thereby facilitating independent reproducibility. Research suggests that studies with publicly available data are cited more frequently and are characterized by fewer statistical errors and more robust results (Piwowar et al., 2007; Piwowar & Vision, 2013; Wicherts et al., 2011). Of course, in some cases sharing data and code may prove insufficient. For example, perhaps data and code were shared, but a bootstrapping analysis cannot be reproduced exactly because no random number seed was set (the solution is evident here: set and report random number seeds, or, for machine learning in R, use caret [Kuhn, 2008] or similar packages to manage modeling efforts). Or perhaps there is “code rot,” where software becomes unusable because it is outdated or the operating system on which it depended has been upgraded and is now incompatible. Fortunately, the open science community is concerned about this issue and has created ways to preserve the functionality of past code and analyses (e.g., containerization via Docker; Peikert & Brandmaier, 2019). The interested reader is encouraged to pursue these and other resources for ensuring analytic and computational reproducibility. If the data themselves cannot be shared, perhaps summary data that enter into analyses can be shared (discussed further below). Assuming you are allowed to do so as a researcher or practitioner, sharing data and code can be done relatively easily using “quick files” within the Open Science Framework (which we routinely use). A complementary or alternative option involves creating an online mailing list (e.g., a Google Group) for a paper, allowing interested readers to discuss the paper in a public forum even in the absence of the original author (Masuzzo & Martens, 2017).
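
To make the random-seed point concrete, here is a minimal sketch in R of a reproducible bootstrap; the simulated engagement scores stand in for whatever data would actually be shared, and the seed value itself is arbitrary (what matters is that it is fixed and reported):

```r
# Minimal sketch (hypothetical data): fixing and reporting the random number
# seed makes a bootstrapped estimate exactly reproducible by other analysts.
set.seed(2020)                                   # committed to and reported

engagement <- rnorm(200, mean = 3.5, sd = 0.8)   # stand-in for the shared data

# 1,000 bootstrap resamples of the mean
boot_means <- replicate(1000, mean(sample(engagement, replace = TRUE)))

# 95% percentile bootstrap confidence interval; identical on every rerun
quantile(boot_means, probs = c(.025, .975))
```

Omit the set.seed() call and every rerun of the shared code yields slightly different interval endpoints, leaving independent analysts unable to verify the reported values exactly.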

Data sharing does not come without drawbacks. Organizations are rightfully concerned about proprietary issues and loss of competitive advantage when disclosing any of their research efforts; we hope that, through continued academic–practice partnerships and negotiations (e.g., embargo periods, de-identification approaches), forward-thinking organizations will come to see the competitive advantage of thought leadership through open science. Furthermore, there is the legitimate concern that research participants could be re-identified, even after data are appropriately de-identified (Culnane et al., 2019). As such, we recommend reporting descriptive statistics that are adequate for reproducing an effect, meaning that the reported descriptives allow for independent reproducibility. For example, Bergh and colleagues’ (2017) advice to authors includes (a) disclosing variable values in all empirical models (coefficient estimates, standard errors, p values in decimals); (b) reporting a correlation matrix that includes means, standard deviations, correlations, and sample sizes for all variables in all models (including product terms, squared terms, and transformed variables) and for all subgroups if appropriate; (c) describing all data-related decisions, including how missing values and outliers were handled; and (d) attesting to the accuracy of the data and that the analytical findings and conclusions are based only on the reported data. Indeed, most statistical packages (e.g., IBM SPSS, SAS, R) can analyze such descriptives and reproduce models and results that rely on them (e.g., regression, structural equation modeling).
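
As a minimal sketch of that last point, the R code below (using the lavaan package) refits a regression model from nothing more than a reported correlation matrix, standard deviations, and sample size; the variable names and all numeric values are hypothetical stand-ins for what a paper would report:

```r
# Minimal sketch (hypothetical values): reproducing a regression from reported
# descriptives alone, i.e., correlations, standard deviations, and N.
library(lavaan)

vars <- c("performance", "conscientiousness", "engagement")
R <- matrix(c(1.00, 0.30, 0.25,
              0.30, 1.00, 0.20,
              0.25, 0.20, 1.00),
            nrow = 3, dimnames = list(vars, vars))
sds <- c(0.90, 0.75, 1.10)

# Rescale the correlation matrix to a covariance matrix
S <- diag(sds) %*% R %*% diag(sds)
dimnames(S) <- list(vars, vars)

# Fit the reported model directly from the summary statistics
fit <- sem("performance ~ conscientiousness + engagement",
           sample.cov = S, sample.nobs = 250)
summary(fit, standardized = TRUE)
```

Anyone with the published table can rerun this kind of check, which is exactly the independent reproducibility that Bergh and colleagues’ reporting advice makes possible.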

A Checklist for Avoiding Questionable Research Practices (QRPs)

One of the most pervasive issues highlighted by research into questionable research practices concerns hidden analytic flexibility. In other words, the analysis reported in a manuscript is occasionally not the one that was planned in advance (Crede & Harms, 2019; Wicherts et al., 2016), and this can render analyses and reported p values suspect or meaningless. Sometimes these behaviors are unintentional, enacted out of naïveté, or even well intentioned. For instance, imagine researchers who follow an initial analysis plan but find that the results are not what they hoped for, prompting greater scrutiny than if the results had aligned with their hypotheses. This scrutiny reveals that the analysis approach was flawed, and a subsequent, seemingly superior, approach is adopted. Although seemingly innocuous (and perhaps even recommended in some circles), such behavior renders the results subject to confirmation bias (Chambers, 2017): Had the results been supportive, the flawed original analysis would never have been questioned.

Fortunately, Wicherts and colleagues have provided a checklist that can be used to avoid post hoc reasoning on the basis of statistical results, also known as p-hacking (Wicherts et al., 2016). Their checklist includes tactics such as (a) specifying an outlier exclusion protocol in advance, (b) reporting all measures of independent and dependent variables (rather than administering many measures and focusing only on the statistically significant relationships), and (c) proposing well-specified hypotheses and being open about other hypotheses that are exploratory and less closely specified.4 In addition to Wicherts et al.’s checklist, we again encourage authors to use the JARS to guide preregistration and the conduct of research (https://apastyle.apa.org/jars).

The Search for Credibility Multipliers Continues

One might think of a continuum of researcher and practitioner guidelines to which open science might contribute. One end of the continuum is prescriptive: things we definitely should do (e.g., report sample sizes and descriptive statistics) and things we definitely should not do (e.g., p-hacking, such as peeking at data until findings are statistically significant, or misreporting results to appear more favorable). The other end of the continuum is descriptive, where we can learn from the decisions and behaviors of researchers and practitioners “in the wild” to see how (a) new types of prescriptive behaviors emerge (e.g., habitually reporting multiplicative terms in correlation tables so that interactions can be reproduced; Bergh et al., 2017) and (b) a more qualitative understanding and appreciation of judgment calls develops (e.g., when data privacy must overrule data sharing, or when aggregated data and statistics can be shared as a productive compromise). Building an open science community will take time, but together, to the extent we all participate in open science, we can improve our science and our practice. By adopting even a few of the practices we have shared here, the small changes we make can compound over time for each of us as researchers and practitioners.

We are sharing and revising our knowledge about how to conduct research; we are contributing to our own “lifelong learning” efforts; we are mentoring others and cultivating our relationships with other scientists and practitioners. Is this not what SIOP conferences are all about? Is this not why we meet annually at SIOP? Well, I-Open science can do that! Yes, there are challenges to open science. These include (a) restrictions on flexibility (e.g., HARKing is off the table, and we have to be open about mistakes), (b) the time cost (e.g., it takes time to plan the hypotheses, research design, and analytic details of a study), and (c) an open science culture and external incentive structures that are not yet firmly in place (Allen & Mehler, 2019). Tips for overcoming these challenges include (a) using registered reports and preregistration to clarify what is and is not planned, (b) identifying outlets that value preregistration or asking editors at major journals whether they would be willing to accept such a proposal, and (c) being strategic with the open science practices that you adopt (e.g., focusing on the quality of inferences rather than their quantity; see Allen & Mehler, 2019). In spite of these challenges, there are many benefits, including (a) greater trust in the claims that we make (and therefore enhanced credibility for the scholar making a claim), (b) use of novel systems that promote collaboration in the spirit of open science (e.g., storing code in the OSF), and (c) investment in our collective future (e.g., career advances, securing funding through open-science-backed resources). We should emphasize that not all deviations from a preregistration are bad. The point is to be open about what was committed to beforehand, what was exploratory, what deviations were made, why data could or could not be shared (the latter is okay, but disclose your rationale), and so on.

Next Time on Opening Up

We examine ways to improve how we review empirical work and highlight tools that are helpful for more critically evaluating empirical claims. We will take a step back and consider different perspectives—those of practitioners and those of academics—and illustrate how to ensure the reproducibility of common claims. We will also dive more deeply into peer review as a key area of inquiry in the open science movement. A variety of tools have become publicly available for helping scholars more carefully scrutinize empirical work, and we highlight a few here to whet your appetite. These include tools that scan documents for errors in reported statistics, such as statcheck (http://statcheck.io/; see Nuijten et al., 2016), the granularity-related inconsistency of means (GRIM) test (see Brown & Heathers, 2017), sample parameter reconstruction via iterative techniques (SPRITE; see Heathers et al., 2018), and the descriptive binary test (DEBIT; see Heathers & Brown, 2019). Such tools have been used to detect questionable work in the literature (see Chawla, 2019).
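
As a small preview, here is a minimal sketch in R of the arithmetic behind the GRIM test (Brown & Heathers, 2017); the reported means and sample sizes below are hypothetical. Because the sum of N integer-valued responses must itself be an integer, a reported mean is only possible if it rounds to a multiple of 1/N:

```r
# Minimal sketch (hypothetical values): is a mean reported to two decimals
# arithmetically possible for n integer-valued responses (e.g., Likert items)?
grim_consistent <- function(reported_mean, n, decimals = 2) {
  closest_possible <- round(reported_mean * n) / n   # nearest achievable mean
  abs(round(closest_possible, decimals) - round(reported_mean, decimals)) < 1e-8
}

grim_consistent(5.19, 28)  # FALSE: no integer sum of 28 responses yields 5.19
grim_consistent(5.18, 28)  # TRUE: a sum of 145 gives 145 / 28 = 5.18 (rounded)
```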

Notes

1 It is worth pointing out that the APA is supportive of preprints. If you share a preprint via PsyArXiv, the preprint can later be submitted to an APA journal with a few button clicks.

2 One other practice we lacked the space to discuss is Curate Science (https://curatescience.org/app/home), which helps researchers identify quality research via transparency and credibility metrics. This tool may serve as a useful vehicle for accelerating the development of cumulative knowledge in I-O psychology.

3 It is worth noting that the PSA also started a collaboration to identify ways in which psychological science can be brought to bear on the COVID-19 pandemic.

4 This can be much easier said than done. Landy et al. (2020) found that hypotheses in psychology are often verbally ambiguous enough to influence design choices, making it difficult to pin down a test of an effect. Similar issues have been identified for models that are commonly tested in our literature (i.e., models involving mediation and moderation; see Holland et al., 2017). Sometimes we need a good reminder of the basics. To this point, Daniël Lakens offers an excellent free course on statistical inference that features the application of open science principles to hypothesis testing (https://www.coursera.org/learn/statistical-inferences).

References

Aguinis, H., & Vandenberg, R. J. (2014). An ounce of prevention is worth a pound of cure: Improving research quality before data collection. Annual Review of Organizational Psychology and Organizational Behavior, 1(1), 569–595. https://doi.org/10.1146/annurev-orgpsych-031413-091231

Aguinis, H., Banks, G. C., Rogelberg, S. G., & Cascio, W. F. (2020). Actionable recommendations for narrowing the science-practice gap in open science. Organizational Behavior and Human Decision Processes, 158, 27–35. https://doi.org/10.1016/j.obhdp.2020.02.007

Allen, C., & Mehler, D. M. A. (2019). Open science challenges, benefits and tips in early career and beyond. PLOS Biology, 17(5), e3000246. https://doi.org/10.1371/journal.pbio.3000246

Antonakis, J. (2017). On doing better science: From thrill of discovery to policy implications. Leadership Quarterly, 28(1), 5–21. https://doi.org/10.1016/j.leaqua.2017.01.006

Bago, B., Aczel, B., Kekecs, Z., Protzko, J., Kovacs, M., Nagy, T., Hoekstra, R., Li, M., Musser, E. D., Arvanitis, A., Iones, M. T., Bayrak, F., Papadatou-Pastou, M., Belaus, A., Storage, D., Thomas, A. G., Buchanan, E. M., Becker, B., Baskin, E., … Chartier, C. R. (2019). Moral thinking across the world: Exploring the influence of personal force and intention in moral dilemma judgments [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/9uaqm

Bergh, D. D., Sharp, B. M., & Li, M. (2017). Tests for identifying “red flags” in empirical findings: Demonstration and recommendations for authors, reviewers, and editors. Academy of Management Learning & Education, 16(1), 110–124. https://doi.org/10.5465/amle.2015.0406

 Bernerth, J. B., & Aguinis, H. (2015). A critical review and best-practice recommendations for control variable usage. Personnel Psychology, 69(1), 229–283. https://doi.org/10.1111/peps.12103

Brown, N. J. L., & Heathers, J. A. J. (2017). The GRIM test: A simple technique detects numerous anomalies in the reporting of results in psychology. Social Psychological and Personality Science, 8(4), 363–369. https://doi.org/10.1177/1948550616673876

Chambers, C. (2017). The seven deadly sins of psychology: A manifesto for reforming the culture of scientific practice. Princeton University Press.

Chawla, D. S. (2019, November 26). Quintet of study retractions rocks criminology community. Science. AAAS. https://www.sciencemag.org/news/2019/11/quintet-study-retractions-rocks-criminology-community

Crede, M., & Harms, P. (2019). Questionable research practices when using confirmatory factor analysis. Journal of Managerial Psychology, 34(1), 18–30. https://doi.org/10.1108/JMP-06-2018-0272

Culnane, C., Rubinstein, B. I. P., & Teague, V. (2019). Stop the open data bus, we want to get off [Preprint]. arXiv. arXiv:1908.05004. https://arxiv.org/abs/1908.05004

Cummings, J. A., Zagrodney, J. M., & Day, T. E. (2015). Impact of open data policies on consent to participate in human subjects research: Discrepancies between participant action and reported concerns. PLoS ONE, 10(5), e0125208. https://doi.org/10.1371/journal.pone.0125208

Dirnagl, U. (2020). Preregistration of exploratory research: Learning from the golden age of discovery. PLoS Biology, 18(3), e3000690. https://doi.org/10.1371/journal.pbio.3000690

Fanelli, D. (2010). “Positive” results increase down the hierarchy of the sciences. PLoS ONE, 5(4), e10068. https://doi.org/10.1371/journal.pone.0010068

Fanelli, D. (2012). Negative results are disappearing from most disciplines and countries. Scientometrics, 90(3), 891–904. https://doi.org/10.1007/s11192-011-0494-7

Fraley, R. C., & Vazire, S. (2014). The N-Pact Factor: Evaluating the quality of empirical journals with respect to sample size and statistical power. PLoS ONE, 9(10), e109019. https://doi.org/10.1371/journal.pone.0109019

Grand, J. A., Rogelberg, S. G., Allen, T. D., Landis, R. S., Reynolds, D. H., Scott, J. C., Tonidandel, S., & Truxillo, D. M. (2018). A systems-based approach to fostering robust science in industrial-organizational psychology. Industrial and Organizational Psychology, 11(01), 4–42. https://doi.org/10.1017/iop.2017.55

Haven, T. L., & Van Grootel, L. (2019). Preregistering qualitative research. Accountability in Research, 26(3), 229–244. https://doi.org/10.1080/08989621.2019.1580147

Heathers, J. A., Anaya, J., van der Zee, T., & Brown, N. J. (2018). Recovering data from summary statistics: Sample Parameter Reconstruction via Iterative TEchniques (SPRITE) [Preprint]. PeerJ Preprints. https://doi.org/10.7287/peerj.preprints.26968v1

Heathers, J. A. J., & Brown, N. J. L. (2019). DEBIT: A simple consistency test for binary data. OSF. https://t.co/uritVVlKxA?amp=1

Holland, S. J., Shore, D. B., & Cortina, J. M. (2017). Review and recommendations for integrating mediation and moderation. Organizational Research Methods, 20(4), 686–720. https://doi.org/10.1177/1094428116658958

IJzerman, H., Lewis, N. A., Weinstein, N., DeBruine, L. M., Ritchie, S. J., Vazire, S., Forscher, P. S., Morey, R. D., Ivory, J. D., Anvari, F., & Przybylski, A. K. (2020). Psychological science is not yet a crisis-ready discipline [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/whds4

Kepes, S., Bennett, A., & McDaniel, M. (2014). Evidence-based management and the trustworthiness of cumulative scientific knowledge: Implications for teaching, research and practice. Academy of Management Learning & Education, 13, 446–466.

Kramer, B., & Bosman, J. (2018, January). Rainbow of open science practices. Zenodo. http://doi.org/10.5281/zenodo.1147025

Kuhn, M. (2008). Building predictive models in R using the caret package. Journal of Statistical Software, 28(5), 1–26. https://doi.org/10.18637/jss.v028.i05

Landy, J. F., Jia, M. (Liam), Ding, I. L., Viganola, D., Tierney, W., Dreber, A., Johannesson, M., Pfeiffer, T., Ebersole, C. R., Gronau, Q. F., Ly, A., van den Bergh, D., Marsman, M., Derks, K., Wagenmakers, E.-J., Proctor, A., Bartels, D. M., Bauman, C. W., Brady, W. J., … Uhlmann, E. L. (2020). Crowdsourcing hypothesis tests: Making transparent how design choices shape research results. Psychological Bulletin, 146(5), 451–479. https://doi.org/10.1037/bul0000220

Masuzzo, P., & Martens, L. (2017). Do you speak open science? Resources and tips to learn the language [Preprint]. PeerJ Preprints. https://doi.org/10.7287/peerj.preprints.2689v1

Murphy, K. R., & Russell, C. J. (2017). Mend it or end it: Redirecting the search for interactions in the organizational sciences. Organizational Research Methods, 20(4), 549–573. https://doi.org/10.1177/1094428115625322

Nosek, B. A., Alter, G., Banks, G. C., Borsboom, D., Bowman, S. D., Breckler, S. J., Buck, S., Chambers, C. D., Chin, G., Christensen, G., Contestabile, M., Dafoe, A., Eich, E., Freese, J., Glennerster, R., Goroff, D., Green, D. P., Hesse, B., Humphreys, M., … Yarkoni, T. (2015). Promoting an open research culture. Science, 348(6242), 1422–1425. https://doi.org/10.1126/science.aab2374

Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. Proceedings of the National Academy of Sciences, 115(11), 2600–2606. https://doi.org/10.1073/pnas.1708274114

Nuijten, M. B. (2019). Practical tools and strategies for researchers to increase replicability. Developmental Medicine & Child Neurology, 61(5), 535–539. https://doi.org/10.1111/dmcn.14054

Nuijten, M. B., Hartgerink, C. H. J., van Assen, M. A. L. M., Epskamp, S., & Wicherts, J. M. (2016). The prevalence of statistical reporting errors in psychology (1985–2013). Behavior Research Methods, 48(4), 1205–1226. https://doi.org/10.3758/s13428-015-0664-2

Oswald, F. L., Ercan, S., McAbee, S. T., Ock, J., & Shaw, A. (2015). Imperfect corrections or correct imperfections? Psychometric corrections in meta-analysis. Industrial and Organizational Psychology, 8(2), e1-e4. https://doi.org/10.1017/iop.2015.17

Peikert, A., & Brandmaier, A. M. (2019). A reproducible data analysis workflow with R Markdown, Git, Make, and Docker [Preprint]. PsyArXiv. https://doi.org/10.31234/osf.io/8xzqy

Piwowar, H. A., Day, R. S., & Fridsma, D. B. (2007). Sharing detailed research data is associated with increased citation rate. PLoS ONE, 2(3), e308. https://doi.org/10.1371/journal.pone.0000308

Piwowar, H. A., & Vision, T. J. (2013). Data reuse and the open data citation advantage. PeerJ, 1, e175. https://doi.org/10.7717/peerj.175

Spector, P. E., & Brannick, M. T. (2011). Methodological urban legends: The misuse of statistical control variables. Organizational Research Methods, 14(2), 287–305. https://doi.org/10.1177/1094428110369842

Spector??? 2014

The Daily Show with Trevor Noah—May 21, 2020—Taraji P. Henson. (2020, May 21). Comedy Central. http://www.cc.com/episodes/kegsvm/the-daily-show-with-trevor-noah-may-21--2020---taraji-p--henson-season-25-ep-25111

The Economist. (2020, May 7). Scientific research on the coronavirus is being released in a torrent. https://www.economist.com/science-and-technology/2020/05/07/scientific-research-on-the-coronavirus-is-being-released-in-a-torrent

Van Bavel, J. J., Baicker, K., Boggio, P., Capraro, V., Cichocka, A., Cikara, M., Crockett, M., Crum, A., Douglas, K., Druckman, J., Drury, J., Dube, O., Ellemers, N., Finkel, E. J., Fowler, J., Gelfand, M., Han, S., Haslam, S. A., Jetten, J., … Willer, R. (2020). Using social and behavioural science to support COVID-19 pandemic response [Preprint]. PsyArXiv.  

Wagge, J. R., Brandt, M. J., Lazarevic, L. B., Legate, N., Christopherson, C., Wiggins, B., & Grahe, J. E. (2019). Publishing research with undergraduate students via replication work: The Collaborative Replications and Education Project. Frontiers in Psychology, 10, 247. https://doi.org/10.3389/fpsyg.2019.00247

Wicherts, J. M., Bakker, M., & Molenaar, D. (2011). Willingness to share research data is related to the strength of the evidence and the quality of reporting of statistical results. PLoS ONE, 6(11), e26828. https://doi.org/10.1371/journal.pone.0026828

Wicherts, J. M., Veldkamp, C. L. S., Augusteijn, H. E. M., Bakker, M., van Aert, R. C. M., & van Assen, M. A. L. M. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7. https://doi.org/10.3389/fpsyg.2016.01832

