
Success Stories Implementing Open Science Practices Into Scholarly Activities: A Virtual Q&A

Christopher M. Castille, Nicholls State University

Author Note: Thanks to Cort Rudolph, Don Zhang, and Jonas Lang for sharing their experiences adopting open science practices!

If you are interested in contributing to Opening Up, TIP’s column for all things open science, please contact christopher.castille@nicholls.edu. We are considering topics such as diversity and inclusivity, teaching open science, and areas where there may be value in spurring different kinds of replication projects (registered reports vs. registered replication reports).

Before we get into the republication, I would like to make a brief public service announcement. Andreas Schwab and I are starting a big team science replication initiative that serves the broader management profession. The initiative is termed the Advancement of Replications Initiative in Management (https://www.arimweb.org/). ARIM promotes and supports replication research, particularly among early-stage doctoral students working in management and adjacent areas (e.g., I-O psychology). We believe that embedding replication research in doctoral student training holds much promise for our science (see also Schwab et al., 2023). We are targeting our first set of publications for summer of 2024 and hope to pursue a publication at a top-tier journal shortly thereafter. Larry Williams, director of the Consortium for the Advancement of Research Methods and Analysis (CARMA), will provide the infrastructure to support our initiative. If you, too, would like to support ARIM (e.g., by reviewing proposals or by gathering data, training, and mentoring doctoral students), then please fill out this survey and help us out: https://tinyurl.com/22jnkwf7. We are actively looking for more collaborators!

Now, on to the Republication!

In this entry of Opening Up, I decided to republish this article highlighting how scholars in our field have put open science principles into practice in their work, particularly in the classroom. For me, this entry stands out because we have clear examples of scholars who rose to the challenge and advocated for making our science stronger and better. We highlight work by Cort Rudolph, Don Zhang, and Jonas Lang, who have been kind enough to share how they have incorporated open science practices into their scholarly activities. We will sample a body of the work they have opened up, take a look at some advice for adopting open science practices, point out interesting challenges, and ask whether adopting open science practices has caused them to rethink any assumptions about I-O psychology. Our virtual discussion was fascinating, and I hope you enjoy it!

Introducing Our Virtual Panel

Let’s start with brief introductions. First up is Cort Rudolph. He is an associate professor of industrial-organizational psychology at Saint Louis University,1 where he studies the aging workforce, including applications of lifespan development theories, well-being and work longevity, and ageism. He is also consulting editor for Work, Aging and Retirement and serves on the editorial review boards of the Journal of Managerial Psychology, the Journal of Occupational and Organizational Psychology, the Journal of Vocational Behavior, Consulting Psychology Journal: Practice and Research, and the Journal of Organizational Behavior. He is committed to open science because he believes that making psychological science more transparent and accessible will maximize its impact on society.

Next is Don Zhang. He is an assistant professor of psychology at Louisiana State University (LSU),2 who studies judgment and decision making, risk taking at work, and how to better communicate research findings to consumers of applied psychology (e.g., managers, policymakers, executives). He serves on the editorial boards of the Journal of Behavioral Decision Making, the Journal of Business and Psychology, and the International Journal of Selection and Assessment. He is particularly interested in the role of open science in the classroom and ways to ease students into open science practices.

Last, we have Jonas Lang. He is an associate professor in the Faculty of Psychology and Educational Sciences at Ghent University and a research professor in the Department of Management at the University of Exeter, where he studies adaptability, cognitive abilities, personnel selection, and the influence of motivation on performance. He currently serves as an associate editor for the Journal of Applied Psychology; he is also the editor of the Journal of Personnel Psychology and is on the editorial boards of Psychological Assessment and Human Performance.

A Virtual Q&A for Implementing Open Science Practices

As it pertains to open science, what body of your work would you like to highlight and what are you proud to say about it? Also, is there anything that drove you to implement open science practices with this particular body of work?

Cort Rudolph: I am proud of successes with open science on a couple of fronts: One of my proudest pieces of work is a meta-analysis published in the Journal of Organizational Behavior on “thriving” at work (Kleine et al., 2019). We preregistered our hypotheses and analysis plan, and as far as I know (at least at the time), this was the first preregistered project that JOB had published. Teaching-wise, I try to push students to consider open science practices in various ways in all of my statistics and research methods courses (i.e., univariate, multivariate, SEM, meta-analysis). To this end, we talk a lot about “forking paths” in analysis workflows and the need to make these decisions explicit. I try to drive home the point that our culture prioritizes telling clean and compelling narratives over transparently communicating how an insight was generated. I also want them to see how we are still exploring new terrain rather than rigorously testing theory, and even here, preregistration is valuable (see Rudolph, 2021). Even in small ways I try to normalize the language of open science when teaching, too (e.g., instead of saying, “when making a hypothesis,” say “when preregistering a hypothesis”).

Regarding your follow-up question: As a meta-analyst, I always tell my collaborators that being a meta-analyst is like being a detective who investigates methodological and statistical shortcomings in the literature. It’s always interesting to see “what you find” when coding studies for a meta-analysis. In grumbling about this, it finally occurred to me, “Why don’t meta-analysts hold themselves to higher standards, too?”

Don Zhang: For two semesters, I incorporated open science into the lab component of my research methods course, where undergraduate students worked in groups to conduct a real experiment (data collection and all!). To streamline the process, I gave them one of two papers to replicate as their in-class project. Thanks to the extreme efficiency of LSU’s IRB (at one point, I was PI on over 20 IRB applications simultaneously) and a team of hard-working TAs, the majority of the students were able to recreate the experiment, preregister it on OSF, obtain IRB approval, and collect and analyze real data. By the time I was done, I had over 10 groups that had all conducted preregistered direct replications of what turned out to be a pretty influential pre-replication-crisis Psychological Science paper (Balcetis & Dunning, 2010), in which the authors found that visual perception is influenced by top-down processes (e.g., motivation).

As luck would have it, the phenomenon we studied turns out to be quite controversial in cognitive science (Firestone & Scholl, 2016). Being an opportunist and an amateur cognitive psychologist, I saw a great opportunity for an actual paper, so I enlisted a star undergraduate student to help collate and meta-analyze the student replication data. We then conducted another high-powered replication study in my lab and wrote up the results. The resulting paper is currently under review at Cognition and Emotion. It is one of my favorite projects to date, even though it has nothing to do with I-O psychology. I think the students also benefited tremendously in the process by seeing that even published research may not replicate!

Jonas Lang: One success story was a piece on modeling group processes like emergence and group extremity with multilevel models that we recently published in the APS journal Advances in Methods and Practices in Psychological Science (see Lang et al., 2019). My coauthors were able to convince the organization to allow us to post the data on the OSF (https://osf.io/849kq/). Because the paper is focused on teaching people a new technique, the availability of the data (and also the code) was really important for making the work understandable and usable for other researchers (they can run the analyses on the data themselves). Regarding your second question: I have been involved in methodological work for some time, and especially as an AE at Organizational Research Methods (a role I held before I switched to the Journal of Applied Psychology), I noticed that papers that shared materials tended to be more popular with readers as well as reviewers.

Were there any resources that helped you to implement these practices in this body of work?

Cort: I echo the Open Science Framework (osf.io) as a wonderful resource to facilitate open science (especially data and material sharing), but also PsyArXiv (https://psyarxiv.com) for posting preprints that are linked directly to OSF projects. I also use GitHub (github.com) for collaboration and hosting websites. I also want to mention that using open-source statistical software, so that data and code can be reproduced by anyone, is a key engine behind open science. Thus, R and RStudio are key resources for open science work. To this end, the ideas of “open” and “reproducible” science are, to me, inextricable.

Don: OSF is a great resource for my students and me. I also drew inspiration from a couple of great papers on open science and pedagogy (Chopik et al., 2018; Hawkins et al., 2018). One of my (and my students’) favorite lectures drew heavily from Bill Chopik’s work. Most students have not been exposed to open science or the replication crisis. I think the lecture worked particularly well because college students still have a strong anti-establishment way of thinking, so stories about “bringing down the establishment” are naturally appealing to them. The Hawkins et al. paper outlined some great ways to involve undergraduate students in open science, and it was great knowing I’m not alone in recognizing the value of pedagogy in the open science movement.

Jonas: The OSF and related websites are certainly useful. I also tend to learn a lot about these initiatives at European psychology conferences and from colleagues in other fields of psychology (especially personality and clinical).

Were there any challenges that you had to overcome to implement open science in this body of work and, if so, what helped you overcome these challenges?

Cort: I think this is still a pretty new space for a lot of people, and especially so in I-O, so my challenges so far have largely been in educating reviewers (and editors) about “why we are doing what we are doing” open science-wise (e.g., the value of preregistration, open data and code, etc.). Still, and this is a bit discouraging if I am being honest, what I have seen so far (especially with preregistration and sharing data/code) is that reviewers and editors often do not comment on this!

Don: I don’t think what I did would be possible without an extremely efficient IRB system. At one point, the IRB administrator was reviewing over a dozen IRB applications and turning each around within 24 hours. My team of TAs and I had to manage over 20 IRB applications and try to obtain approval within 2 weeks just so the students had enough time to collect data. It was very hectic and took a lot of coordination. Looking back now, I’m surprised it worked out so well!

Jonas: It is generally not easy to convince organizations to share their data. This was not European data, but I am normally based in Europe, and I have observed that sharing data or making it available is challenging, particularly in Europe, where the General Data Protection Regulation has recently made people very cautious about sharing their data.

Did implementing open science practices cause you to rethink any assumptions in our field?

Cort: To some extent, yes. I see this as the future that our field is headed in. A lot of this open science “movement” has bubbled up from the credibility crisis in social psychology. I think at some level we all know that I-O is equally susceptible to such a crisis, and I would rather be out in front of this thing than lagging. I think as a field we would benefit greatly from being a bit more self-critical about what we know and how we know it.

Do you have any wishes regarding the adoption of open science in our field?

Cort: I think it’s really important to recognize that open science is not a uniform prescription; it’s not “one thing.” Everybody can participate at some level in open science, and each incremental contribution increases the credibility of our field as a whole. Moreover, there is not a one-size-fits-all approach to open science for each project; researchers can (and should) adopt principles of open science to the extent that they are practical and make sense for the goals of their work.

Don: I wish editors and reviewers would all get on the same page about what “good” papers under this model look like and be more accepting of transparently flawed papers. I remember a story on Twitter where an author’s paper was rejected because they had preregistered their hypotheses. The reviewer noted that had the study not been preregistered, there was a way to reframe the paper to allow it to be published. I think this type of story makes it hard for early-career authors to commit to open science. It is too risky for early-career authors to play Russian roulette and hope to get the right editor/reviewers who will sympathize with open science, especially when getting the wrong reviewer means your paper may be punished instead. The incentive structure needs to be changed.

Jonas: I think our field in the past tended to sometimes (but not always) put the editorial shotgun on the author’s chest and request, “For the revision to be successful, you need to show that there is a theoretical mechanism linking your predictor A to your outcome B.” What should the poor authors do in a situation like this? I think there is real change, and the Journal of Applied Psychology, for instance, has a practice of not asking for new analyses per se but requiring some assurance that the particular finding is there. With open science policies in general, I think it is important that people can decide depending on their research context and research question and that we do not uniformly require everybody to do the same. In some research areas, it would not be wise to share everything. It would be quite unfortunate, for instance, if we forced assessment firms to share the items for their employee selection instruments. Forcing open science into practice in this manner might backfire for our field; we may not see research from these organizations anymore. However, in many cases, something can be shared (e.g., covariance matrices) that can improve the science, so long as we all take care that people get credit for their work.

What advice do you have to offer to scholars who are either on the fence about adopting open science practices or are just getting started?

Cort: Just dive in. There are simple steps that any researcher can take to implement open science into their work. For example, posting preprints ensures open access and takes less than 5 minutes. Another example is developing a simple preregistration. The templates provided by OSF make this fairly easy, but it’s still a time commitment. Even easier, an https://aspredicted.org/ preregistration can be completed in about half an hour.

Don: I would love to see senior scholars model open science practices and advocate for open science in search committees, tenure evaluations, and publications. I would like all scholars to be more open-minded about what a “good” paper looks like when they review papers and to recognize that a lot more papers will not “look” as good in the superficial sense (clean story, supported hypotheses) if open science is successful, and not to punish authors for it. I think reviewers sometimes fall into the “like goes with like” fallacy, where “good” papers look like other papers published in the same journals. But with preregistration, papers don’t always tie themselves into a perfect bow, and reviewers need to recognize this. I think when incentives are aligned to promote and reward open science as much as they do novelty and “theoretical contribution,” junior scholars will respond accordingly. As any self-respecting armchair economist will tell you, people respond to incentives.

Jonas: I think there are clear benefits because it makes one’s work more accessible. One concern that people tend to have is that some discussions around open science on social media and at some conferences have been somewhat difficult for younger scholars. I think it is important that the community is critical about findings and data but always supportive of people, and that the field develops a culture where it is fine to sometimes not be right or to have different opinions.

A Closing Reflection

A round of virtual applause for our panelists! In reflecting on their responses to my questions, two themes stand out: the importance of educating others (e.g., editors, reviewers, students) about the value of thinking critically about our science, and the way implementing open science practices goes above and beyond more conventional scholarly contributions in our field. Those with a penchant for leading by example may, I think, see quite a bit of value in incorporating open science practices into their work. Starting small and making incremental changes (e.g., preregistering a hypothesis) will make our science more transparent and accessible, enhancing the already excellent impact we are having on society.

Notes

1 At the time of this republication, Cort is a professor at Wayne State University.

2 Don is now an associate professor at LSU.

References

Balcetis, E., & Dunning, D. (2010). Wishful seeing: More desired objects are seen as closer. Psychological Science, 21(1), 147–152. https://doi.org/10.1177/0956797609356283

Chopik, W. J., Bremner, R. H., Defever, A. M., & Keller, V. N. (2018). How (and whether) to teach undergraduates about the replication crisis in psychological science. Teaching of Psychology, 45(2), 158–163. https://doi.org/10.1177/0098628318762900

Firestone, C., & Scholl, B. J. (2016). Cognition does not affect perception: Evaluating the evidence for “top-down” effects. Behavioral and Brain Sciences, 39. https://doi.org/10.1017/S0140525X15000965

Hawkins, R. X. D., Smith, E. N., Au, C., Arias, J. M., Catapano, R., Hermann, E., Keil, M., Lampinen, A., Raposo, S., Reynolds, J., Salehi, S., Salloum, J., Tan, J., & Frank, M. C. (2018). Improving the replicability of psychological science through pedagogy. Advances in Methods and Practices in Psychological Science, 1(1), 7–18. https://doi.org/10.1177/2515245917740427

Kleine, A. K., Rudolph, C. W., & Zacher, H. (2019). Thriving at work: A meta‐analysis. Journal of Organizational Behavior, 40(9–10), 973–999.

Lang, J. W. B., Bliese, P. D., & Adler, A. B. (2019). Opening the black box: A multilevel framework for studying group processes. Advances in Methods and Practices in Psychological Science, 2(3), 271–287. https://doi.org/10.1177/2515245918823722

Rudolph, C. W. (2021). Improving careers science: Ten recommendations to enhance the credibility of vocational behavior research. Journal of Vocational Behavior, 126, 103560. https://doi.org/10.1016/j.jvb.2021.103560

Schwab, A., Aguinis, H., Bamberger, P., Hodgkinson, G. P., Shapiro, D. L., Starbuck, W. H., & Tsui, A. S. (2023). How replication studies can improve doctoral student education. Journal of Management Scientific Reports, 1(1), 18–41. https://doi.org/10.1177/27550311231156880
