
Opening Up: Are Large-Scale Open Science Collaborations a Viable Vehicle for Building a More Cumulative Science in I-O Psychology?

Christopher M. Castille, Nicholls State University

As I’m writing this entry in Opening Up, SIOP’s column devoted to all things open science, I’m also attending the annual conference of the Academy of Management. My attendance is virtual only because, on the morning of my flight to the conference, I was unfortunate enough to test positive for COVID-19. Although my conference plans were derailed, there was one positive development worth noting: professional societies such as SIOP and AOM have normalized virtual options for developing professionally while socially distancing. The huge inequities revealed by the COVID-19 pandemic have brought about sweeping changes that, at this moment, I’m grateful for and benefit from. Specifically, testing positive for COVID-19 shifted me toward attending only the virtual sessions I could access, sessions I honestly had no plans to attend before receiving my test result. Despite being COVID positive, I have been fortunate enough to make small contributions, such as participating in these virtual sessions and helping a coauthor manage a presentation that I was supposed to lead.

As luck would have it, there were several virtual sessions on open science that I could attend. One session on metascience and technology featured tools for conducting semiautomated meta-analyses with metaBUS (see Bosco et al., 2017) and for integrating redundant theories in the social sciences with TheoryOn (see Li et al., 2016). Another session covered using open science to publish in management journals (e.g., how to use tools such as the Open Science Framework to preregister a study). It highlighted the many journals that have adopted the two-stage review process known as registered reports, in which theory and methods undergo rigorous peer review before a sponsoring journal grants a conditional acceptance. Both sessions featured excellent speakers showcasing fascinating tools for creating a more cumulative knowledge base, testing and integrating theories, and building a more robust science. The content and presenters did a fantastic job of reminding audiences of the need for more widespread use of open science in our field and for building a community to support this work. Such community building matters because, although scholars across the social sciences have increasingly adopted open science practices (Christensen et al., 2019), a common observation in management and organization studies is that these practices are still used infrequently (Aguinis et al., 2018).

How can we capitalize on this enthusiasm for open science and encourage more widespread adoption of open science practices throughout I-O psychology and related fields? Journals obviously play one role by rewarding the use of open science practices (e.g., registered reports, required disclosure of preregistration), and such top-down influences certainly benefit our science. Still, are there bottom-up or grassroots influences that might also be valuable for promoting the uptake of open science practices?

With this backdrop, in this entry of Opening Up I would like to pose a broad question for critique by TIP’s readers, along with several narrower related questions: Would a large-scale open science collaboration among I-O psychologists be a viable vehicle for building a more cumulative and robust science for I-O psychology? Examples of such big-team science efforts, often termed multisite collaborations, are plentiful and have become popular in several sciences. They include the now well-known Open Science Collaboration (2015), which sought to replicate 97 effects from top psychology journals and found that 36% replicated; the Many Labs studies led by Nosek and colleagues (Many Labs 2–5; see Williamson, 2022); the Reproducibility Project: Cancer Biology, which set out to replicate 50 highly influential studies in oncology (see Davis et al., 2018); the Psychological Science Accelerator, which provides the infrastructure to execute multisite collaborations (see Moshontz et al., 2018); and the Collaborative Replications and Education Project (CREP), an initiative to leverage replication in teaching students research methods (see Grahe et al., 2013). If this small sample of fascinating initiatives sounds intriguing to you, then I strongly recommend reading Uhlmann et al. (2019), who discuss crowdsourcing research as a means of spurring multisite collaborations.

Why not attempt something like these initiatives within the I-O psychology content area? What if we pooled our resources (e.g., access to participants, our skill sets) to conduct more highly powered tests of key effects that are broadly relevant to our field? Could this be a valuable supplement to current undergraduate, master’s, and doctoral training? For instance, following the CREP initiative, what if our students had to conduct a replication (preferably direct or constructive; see Köhler & Cortina, 2021) as part of attaining their degree, or at least help collect or analyze data for one? Such initiatives could build in not only replication but also independent verification of findings via verification reports (i.e., reports in which the findings of a manuscript are independently reproduced). Although conducting such replications and verifications may be difficult for any single team, several teams pooling limited resources could make replication and verification research far more widespread. Making this a standard practice throughout our field could help students gain a deeper appreciation for the methods that define our discipline, nicely supplementing the education occurring within academic institutions and potentially generating new research ideas in the process.

Why might these collaborations be so important for our field to execute? Multisite collaborations have emerged as a pragmatic, if demanding, solution to key methodological challenges, including (a) achieving sufficiently high statistical power for testing hypotheses and generating precise estimates of effects, (b) assessing the generalizability (i.e., boundary conditions) and replicability of effects, (c) promoting the uptake of open science practices, and (d) promoting inclusion and diversity within the research community (Moshontz et al., 2018; Uhlmann et al., 2019). It is the third point, promoting greater uptake of open science practices, that I find so intriguing. Conducting high-powered multisite replications requires sharing research materials (e.g., measures, code). Not only would such collaborations encourage a broad sharing of skill sets, but scholars contributing to these initiatives could also learn open science tactics that they can then take into their own research areas. Such collaborations may also help scholars at institutions with minimal resources (e.g., small, regionally focused, or teaching institutions) make small but meaningful contributions to our discipline, and they may include scholars from other countries, whose contributions are essential to probing the generalizability of claims in our field (see Moshontz et al., 2018). This inclusive element of multisite replications in I-O psychology is hard to overlook.
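To make the statistical power point concrete, here is a minimal illustrative sketch in Python using the statsmodels package. The effect size, alpha, power target, and per-lab sample size below are assumptions chosen for illustration only; they are not figures reported in this column or its sources.

# Illustrative sketch only: how many participants per group would a two-group
# study need to detect a small effect (Cohen's d = 0.20) with 80% power at
# alpha = .05, and how much power does a single lab with 100 per group have?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Required sample size per group for the assumed small effect
n_per_group = analysis.solve_power(effect_size=0.20, alpha=0.05, power=0.80,
                                    alternative='two-sided')
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 394

# Power achieved by a hypothetical single lab collecting 100 per group
achieved_power = analysis.power(effect_size=0.20, nobs1=100, alpha=0.05,
                                 ratio=1.0, alternative='two-sided')
print(f"Power with 100 per group: {achieved_power:.2f}")  # roughly .29

Under these assumptions, one lab falls well short of the target, whereas a handful of collaborating sites pooling comparable samples would clear it, which is the basic arithmetic behind multisite collaboration.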

Please Tell Us What You Think: Why Not Start a Large-Scale, Open Science Collaboration in I-O Psychology?

What kinds of challenges may arise in executing such a large-scale, multisite replication initiative, and how have they been overcome elsewhere? Is there a way to include field settings in such an initiative without compromising competitive advantages or breaking privacy or employment law (e.g., violating the General Data Protection Regulation in the European Union)? We do have at least one positive case of multisite collaboration within our field, albeit in lab settings (see The Leadership Quarterly, which has published a few collaborations occurring across sites; e.g., Ernst et al., 2021). What would it take to see more of this work in both lab and field settings in content domains relevant to I-O psychology? What are the key limitations of executing multisite replication initiatives in I-O psychology research? Could such an initiative forge even stronger collaborations between academics and practitioners in our field? Or might it weaken our relevance to practice (see Guzzo et al., in press)? What kinds of problems does this alternative research mechanism solve for I-O psychology? Please feel free to send your thoughts to me, Chris Castille, at christopher.castille@nicholls.edu.

References

Aguinis, H., Ramani, R. S., & Alabduljader, N. (2018). What you see is what you get? Enhancing methodological transparency in management research. Academy of Management Annals, 12(1), 83–110. https://doi.org/10.5465/annals.2016.0011

Bosco, F. A., Uggerslev, K. L., & Steel, P. (2017). MetaBUS as a vehicle for facilitating meta-analysis. Human Resource Management Review, 27(1), 237–254. https://doi.org/10.1016/j.hrmr.2016.09.013

Christensen, G., Wang, Z., Paluck, E. L., Swanson, N., Birke, D. J., Miguel, E., & Littman, R. (2019). Open science practices are on the rise: The State of Social Science (3S) Survey [Preprint]. MetaArXiv. https://doi.org/10.31222/osf.io/5rksu

Davis, R., Espinosa, J., Glass, C., Green, M. R., Massague, J., Pan, D., & Dang, C. V. (2018). Reproducibility project: Cancer biology. eLife. https://elifesciences.org/collections/9b1e83d1/reproducibility-project-cancer-biology

Ernst, B. A., Banks, G. C., Loignon, A. C., Frear, K. A., Williams, C. E., Arciniega, L. M., Gupta, R. K., Kodydek, G., & Subramanian, D. (2021). Virtual charismatic leadership and signaling theory: A prospective meta-analysis in five countries. The Leadership Quarterly, 101541. https://doi.org/10.1016/j.leaqua.2021.101541

Grahe, J. E., Brandt, M. J., Wagge, J., Legate, N., Wiggins, B. J., Christopherson, C. D., Weisberg, Y., Corker, K. S., Chartier, C. R., Fallon, M., Hildebrandt, L., Hurst, M. A., Lazarevic, L., Levitan, C., McFall, J., McLaughlin, H., Pazda, A., IJzerman, H., Nosek, B. A., … France, H. (2013). Collaborative replications and education project (CREP). OSF. https://doi.org/10.17605/OSF.IO/WFC6U

Guzzo, R., Schneider, B., & Nalbantian, H. (in press). Open science, closed doors: The perils and potential of open science for research in practice. Industrial and Organizational Psychology: Perspectives on Science and Practice.

Köhler, T., & Cortina, J. M. (2021). Play it again, Sam! An analysis of constructive replication in the organizational sciences. Journal of Management, 47(2), 488–518. https://doi.org/10.1177/0149206319843985

Li, J., Larsen, K. R., & Abbasi, A. (2016). TheoryOn: Designing a construct-based search engine to reduce information overload for behavioral science research. In J. Parsons, T. Tuunanen, J. Venable, B. Donnellan, M. Helfert, & J. Kenneally (Eds.), Tackling society’s grand challenges with design science (Vol. 9661, pp. 212–216). Springer International Publishing. https://doi.org/10.1007/978-3-319-39294-3_17

Moshontz, H., Campbell, L., Ebersole, C. R., IJzerman, H., Urry, H. L., Forscher, P. S., Grahe, J. E., McCarthy, R. J., Musser, E. D., Antfolk, J., Castille, C. M., Evans, T. R., Fiedler, S., Flake, J. K., Forero, D. A., Janssen, S. M. J., Keene, J. R., Protzko, J., Aczel, B., … Chartier, C. R. (2018). The Psychological Science Accelerator: Advancing psychology through a distributed collaborative network. Advances in Methods and Practices in Psychological Science, 1(4), 501–515. https://doi.org/10.1177/2515245918797607

Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716. https://doi.org/10.1126/science.aac4716

Uhlmann, E. L., Ebersole, C., Chartier, C., Errington, T., Kidwell, M., Lai, C. K., McCarthy, R. J., Riegelman, A., Silberzahn, R., & Nosek, B. A. (2019). Scientific utopia III: Crowdsourcing science. Perspectives on Psychological Science, 14(5), 711–733. https://doi.org/10.1177/1745691619850561

Williamson, E. (2022, May 23). After 10 Years, “Many Labs” comes to an end—but its success is replicable. UVA Today. https://news.virginia.edu/content/after-10-years-many-labs-comes-end-its-success-replicable
