

Volume 54, Number 1, July 2016. Editor: Tara Behrend


Cultivating a Future of Meaningful, Impactful, and Transparent Research

Jessica M. Nicklin, Jennifer L. Gibson, and James Grand

We live in an ever-changing world where technology, globalization, the economy, and the way in which we work are constantly evolving. Our research practices, while slower to change, are no exception. With advances in methodology, statistical programs, analytic techniques, and theoretical developments, our field is continuously moving forward. To help SIOP members meet the demands of the future, the Scientific Affairs Committee organized two panels addressing a variety of issues concerning research in industrial-organizational psychology. Jennifer Gibson facilitated a session entitled "Impact of Research Reproducibility and Study Registration on I/O Psychology," with the following esteemed panelists: Frank Bosco, Jose Cortina, Ronald Landis, and Gilad Chen. The primary goal of this panel was to provide a platform for leaders in the field to discuss trends in study registration and research reproducibility, publication bias, and the accumulation of scientific knowledge.

Jessica Nicklin facilitated a parallel session entitled "The Future of the Publication Process in I/O Psychology" with the following SIOP leaders: John Antonakis, Janet Barnes-Farrell, Gilad Chen, James LeBreton, and Steven Rogelberg. This panel sought to foster a fruitful discussion of common issues organizational researchers face when publishing their research and to provide clarity and direction for the future. Although the two panels originally had distinct goals and foci, we found that similar themes were woven throughout both. Even more importantly, the themes were consistent with outgoing President Steve Kozlowski's opening plenary message: We need to disrupt equilibrium, create meaningful change, and enhance the impact of our research. He called for us to "get out of our usual place and have an impact." We seek to share our SIOP conference experience with the larger I/O community in hopes that it will generate impactful and meaningful research agendas for the future.

Panel Highlights

Should We Replicate? Not surprisingly, the topic of replication was central to the "Impact of Research Reproducibility and Study Registration on I/O Psychology" panel. A substantial portion of the discussion focused on the trustworthiness of the research literature and the role that reproducibility and replication play in establishing it. The panelists began by distinguishing two approaches to verifying research findings. In the first approach, a researcher redoes a published study using as similar an approach as possible. This is often referred to as replication, although there are many forms of replication (e.g., direct, constructive, theoretical). In the second approach, a researcher redoes the analyses from a study using the raw data and statistical code. This is often referred to as study reproduction, or establishing reproducibility. The panelists noted that the latter is a relatively low bar for verification of scientific findings, yet there is evidence that social science research often does not meet even that standard. For example, the American Journal of Political Science announced in 2015 that it attempted to reproduce the results of 15 studies and all 15 failed. There was also discussion of what constitutes verification, such as studies finding statistically significant effects, studies finding effects of the same magnitude, or some other criterion. Panelists raised the issue of quality of evidence and how the qualifications of the researchers conducting replications affect the quality of replication studies. Although conventions exist for rating the quality of research evidence (e.g., GRADE criteria for clinical research), quality is not always integrated into empirical and narrative reviews or other aggregated research.

In "The Future of the Publication Process in I/O Psychology," panelists also discussed replication, specifically whether there is a place for replication studies when our journals typically seek out new and interesting findings that push the field forward. The panelists generally agreed that replication studies can indeed add value to science. Similar to the other panel, they noted that different types of replications exist and that some studies claiming to be replications are not true replications. They emphasized that there needs to be a compelling question and rationale for a replication, not just "it hasn't been done" or "it should be done." They also discussed how researchers can replicate previous findings without complete overlap, by combining a replication with new findings in the same study, which can add substantial value. Regardless of whether a study is a replication, the panelists placed significant emphasis on its quality; thus, they urged carefully formulated hypotheses and methodology for replications and original studies alike.

Where Do Null Findings Belong? Relatedly, both panels discussed the quandary of what to do with null findings. Historically, the publication process has been biased toward findings of statistical significance (e.g., Dickersin & Min, 1993); however, non-significant results can also be useful for guiding future research and practice (e.g., Mills & Woo, 2012). Panelists on "The Future of the Publication Process in I/O Psychology," all of whom serve or have served as editors of respected journals, indicated that they would publish null findings based on the merit of the introduction and method sections. Although researchers are frequently concerned with reaching statistical significance in order to get published, the panelists consistently emphasized the need to ask compelling questions and conduct rigorous research.

If results actually matter less than the quality of the question and the methodological rigor employed, this further highlights the appeal of exploring alternative models for the publication process. When findings do not work out as anticipated, researchers are more likely to discard non-significant findings or engage in other questionable research practices (p-hacking, etc.). However, there are challenges even when findings do work out as anticipated. Steven Rogelberg noted during one discussion that because we are so quick to celebrate significant findings that support our hypotheses, there is little incentive to probe our data further; that is, we frequently halt potentially productive discussions with collaborators and thereby thwart creative discoveries. Failing to support an a priori hypothesis, by contrast, prompts us to ask questions, have discussions, and seek alternative explanations for what went "wrong." Such efforts certainly have the potential to lead to questionable research practices, but the point also demonstrates an asymmetry in researchers' motivation that is attributable to the incentive structure of the publication process.

The conversations in both panels converged toward the conclusion that it may be time for our field to encourage the sharing of null results rather than placing them in the file drawer. For instance, data-sharing initiatives such as the Harvard Dataverse allow researchers to contribute their findings, even with null results. The Harvard Dataverse Network is an "open repository which provides a framework to publish, preserve, cite, and get credit for your data, and allow others to replicate and verify your social science research work" (library.harvard.edu/gdc). As crowdsourced research efforts have shown (e.g., Silberzahn & Uhlmann, 2015), different research teams can learn different things from the same data, including but not limited to null findings. This approach would require us, as a field, to be more collaborative, transparent, and trusting of our colleagues.

Another approach discussed by both panels was the "Hybrid Registered Report," which requires that authors submit for review the introduction, method, measurement information, and analysis plan of a completed study (much like a dissertation proposal). For instance, the Journal of Business and Psychology is engaged in a special initiative (led by Ronald Landis and Steven Rogelberg, among others) to accept Hybrid Registered Reports. Their goal is to "encourage authors to propose conceptually sound, interesting, and methodologically rigorous research without concern for whether the results will be statistically significant." Other similar efforts have accepted registrations for replication studies. Perspectives on Psychological Science, for example, has implemented this format to offer collections of independently conducted, direct replications of an original study. Although interest in these alternative publication models has been slow to build, they offer a way of producing "honest" research, where the emphasis is on learning something new through quality research rather than on publishing significant results.

Questionable Research Practices. Lastly, a central theme in both panel discussions was the extent to which questionable research practices (QRPs; the gray area of practices prevalent within the research community) shape the direction of our research. Examples of QRPs include failing to report a dependent variable, collecting more data to reach a desired p-value, HARKing (hypothesizing after results are known), and intentionally fabricating data. Research shows that 31% of organizational psychologists admit to having engaged in at least one QRP (John, Loewenstein, & Prelec, 2012). O'Boyle et al. (2015) reported that in the late 1990s, 42% of first-order interaction hypotheses in top journals were supported; that value is significantly higher today (72%) with similar sample sizes and statistical power. The panelists in "Impact of Research Reproducibility and Study Registration on I/O Psychology" discussed how this finding could be attributable to the process by which research makes its way to publication in top journals, though other causes, such as the technological ease of testing for interactions, were also offered.
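To make one of these practices concrete, the brief simulation below (a hypothetical illustration of ours, not something presented by the panelists; the function names and parameters are invented for the example) shows why "collecting more data to reach a desired p-value" is problematic: repeatedly testing as participants trickle in and stopping as soon as p < .05 inflates the false-positive rate well above the nominal 5%, even when no true effect exists.

```python
# Hypothetical sketch of the QRP of "collecting more data to reach a desired
# p-value" (optional stopping). Both groups are drawn from the SAME
# distribution, so any "significant" difference is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(2016)

def one_study(n_start=20, n_max=100, batch=10, alpha=0.05):
    """Add participants in batches, testing after each batch; return True
    if the study ever reaches p < alpha (a false positive)."""
    g1 = list(rng.normal(size=n_start))
    g2 = list(rng.normal(size=n_start))
    while True:
        if stats.ttest_ind(g1, g2).pvalue < alpha:
            return True                    # "significant": stop and write it up
        if len(g1) >= n_max:
            return False                   # give up: the file drawer
        g1.extend(rng.normal(size=batch))  # collect a few more participants
        g2.extend(rng.normal(size=batch))

n_sims = 2000
rate = sum(one_study() for _ in range(n_sims)) / n_sims
print(f"False-positive rate with optional stopping: {rate:.1%}")
# Prints a rate well above the 5% that a single, pre-planned test would allow.
```

The exact inflation depends on how often one peeks and how long one is willing to keep collecting data, but the direction is always upward, which is why preregistered analysis plans and registered reports remove the temptation.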

One might venture a guess that QRPs occur because of the "publish or perish" culture persistent in academia (e.g., Fanelli, 2010), the demands of publishing in top-tier journals, or the expectations and requests of reviewers. We cannot help but draw attention to the irony that even we, as organizational scholars, fall victim to the classic folly of "rewarding A while hoping for B." Our journals have historically rewarded novel findings (Nosek, Spies, & Motyl, 2012), interesting and elaborate theories (e.g., Mathieu, in press), and statistically significant results. In placing these objectives on such a high pedestal, we may be inadvertently promoting the use of QRPs to obtain these rewards rather than rewarding honest, accurate, and impactful research. In sum, QRPs may be an inevitable outcome of a flawed and fouled-up reward system.

A Vision for the Future

The theme of our opening plenary at this year's SIOP conference was to disrupt unhealthy equilibria in our field and enhance the impact of our research. We believe the discussions started by the panelists who participated in our sessions are indicative of the challenging path we face in achieving this goal, and we want to thank our panelists for sharing their experiences and recommendations in this domain. But we must ensure these conversations continue: in our academic institutions, in our professional societies, or even just around the proverbial water cooler. We encourage all members of the SIOP community to consider what we want our field to look like moving forward. We offer a starting point by focusing on several priorities highlighted in our panels and throughout this article:

  1. Collaboration, Trustworthiness, and Transparency: Where and how should we position ourselves along the continuum between recognition based on the open exchange of ideas, thoughts, and data and recognition based on personal, independent achievement and scholarship? How should we balance the value placed on verifying research and aggregating knowledge (i.e., minimizing Type I errors) against the value of novelty and potential discovery (i.e., minimizing Type II errors)? Are we willing and able to participate in data sharing and crowdsourcing, or is this incompatible with practical constraints within our field (e.g., intellectual property concerns with sharing organizational data)? If we favor the former option in each of these questions, then we have a long way to go in making these practices a reality in I/O psychology. This might include being more receptive to publishing replication studies and null results, making data publicly available, and altering deeply entrenched values and reward structures.
  2. Research Quality: We don't believe anyone wishes to see mediocre studies muddying up our top-tier journals. However, are we ready as a field to place more emphasis on the process than on the end result? What steps can we take to fairly and accurately evaluate the quality of a research question and approach? How must our research practices change to improve our comfort and rigor in publishing non-significant findings? To this end, we strongly encourage members of the I/O community to consider submitting their introduction and method sections as a Hybrid Registered Report for consideration. We echo the sentiments of our panelists that compelling and meaningful questions coupled with rigorous and appropriate methodology, not statistical significance, should be everyone's top priority.
  3. Conduct Research That Has Impact: Traditionally, we have evaluated impact through citation counts and journal impact factors. But these criteria reflect only a very narrow slice of what it means to be influential through our research. Furthermore, many avenues for achieving impact are seldom appreciated, recognized, or rewarded within our field. For example, to what extent are we willing to conduct "translational"/basic research in collaboration with scholars outside of our discipline? Are we willing to recognize and reward publishing translational research outside of our major I/O journals? To what extent are we willing to treat "non-traditional" indicators of impact (e.g., using blogs and other social media outlets to convey evidence-based principles to the public, engaging in outreach and advocacy of our science to non-academic outlets and groups) as comparable to traditional indicators? The direction conveyed by our outgoing SIOP president's call for impactful research points toward a road less traveled by our field. If we are serious about making an impact, we must be willing to change our views on how and where our resources and forms of recognition are channeled.

Conclusion

In the immortal words of the late, great Yogi Berra, "It's tough to make predictions, especially about the future." However, one prediction is certain: The future we face tomorrow will be shaped by the actions we take today. We would like to encourage every one of our fellow SIOP members to envision the future they hope to see for I/O psychology. What do you value as a scientist or practitioner of I/O psychology? How would you like your impact and that of your colleagues to be recognized? The fact that such questions, as well as the constructive discussions witnessed during our panels, are being actively encouraged and considered suggests that our field is approaching a critical summit. Whether the push maintains its current momentum and carries us over the top or rolls back to where we began is an outcome to which we must each contribute and for which we must each take personal responsibility.

 

 

References

Dickersin, K., & Min, Y. (1993). Publication bias: The problem that won’t go away. Annals of the New York Academy of Sciences, 703, 135–148.

Fanelli, D. (2010). Do pressures to publish increase scientists’ bias? An empirical support from US States data. PLoS ONE, 5, 1-7. doi: 10.1371/journal.pone.0010271

John, L. K., Loewenstein, G. & Prelec, D. (2012). Measuring the prevalence of questionable research practices with incentives for truth telling. Psychological Science, 23, 524-532.  doi: 10.1177/0956797611430953

Mills, M. J., & Woo, V. A. (2012). It's not insignificant: I/O psychologists' dilemma of nonsignificance. The Industrial-Organizational Psychologist, 49, 48-54.

Nosek, B. A., Spies, J. R., & Motyl, M. (2012). Scientific utopia: II. Restructuring incentives and practices to promote truth over publishability. Perspectives on Psychological Science, 7, 615-631. doi: 10.1177/1745691612459058

O’Boyle, E. H., Banks, G. C., Walter, S., Kameron, C., & Weisenberger, K. (2015, January). What moderates moderators?  A meta-analysis of interactions in management research. In Academy of Management Proceedings (Vol. 2015, No. 1, p. 14779).  Academy of Management.

Silberzahn, R., & Uhlmann, E. L. (2015, October 7). Crowdsourced research: Many hands make tight work. Nature, 526(7572), 189-191. Retrieved from http://www.nature.com/news/crowdsourced-research-many-hands-make-tight-work-1.18508
