The Results Are In! Updated Alternative I-O Graduate Program Rankings
Nicholas P. Salter, Joseph A. Allen, Allison S. Gabriel, Loren Naidoo, and David Sowinski
In the summer of 2016, we issued a Call for Proposals for unique and innovative methodologies to rank I-O graduate programs, and many projects were proposed in response. After much hard work (and the broader SIOP community's help), the five selected projects have been completed. Each of these five papers is included in the current issue of TIP. We believe each will be an important contribution to our field and will guide individuals in the future – as well as generate much thought and discussion about the state of our field and the programs that educate the future of I-O psychology. In particular, we hope that these rankings will lead graduate programs to examine themselves and think about the ways they excel, as well as the areas where they could improve. Additionally, we hope that future undergraduate students applying to I-O programs will use these rankings not to determine which programs are the "best," but which programs are the best fit for them.
Project Description and Findings
The original goal of this project was to highlight alternative ways of ranking (and, more generally, examining) I-O graduate programs. Previous rankings focused primarily (though not exclusively) on research productivity. Although research productivity is an important index of graduate program success, it is not the only marker, and other measures of program strength are difficult to operationalize and execute. To this end, the five current projects answered this call by identifying nontraditional ways of determining the strengths of graduate programs and comparing schools to one another. We saw this project as an opportunity to widen the scope of how we as a field define "success" in graduate programs, and to celebrate the various strengths different graduate programs offer.
The process through which these projects were completed was designed to strengthen each project and to ensure that the rankings represented the included graduate programs as accurately as possible. First, authors submitted their proposals to the reviewer committee (the authors of this paper). The committee was carefully composed to ensure diversity: it included representatives of different program types, both practitioners and academics, and members with graduate program education experience. Reviewer feedback was incorporated into the projects, and the authors were then given approximately one year to collect their data and write their manuscripts. The manuscripts were again sent to the reviewers for feedback, and revisions were made. Throughout this process, multiple people were involved to ensure that the quality of the projects was as high as possible.
The first two papers examine master's programs, an area understudied by previous graduate rankings. Vodanovich et al. (2018) examined objective, quantifiable indicators of master's program success, such as how much applied experience students in the program receive, how involved faculty are in applied work, job and internship placement rates, and the number of courses students take in various topics. To gather this information, the authors surveyed master's program directors. The second project, conducted by Acikgoz et al. (2018), also examined master's programs but instead focused on student and alumni perceptions of programs. The inclusion of the alumni perspective made this project especially unique; master's program directors were asked to forward the survey to alumni who had graduated within the past 5 years. These respondents offered a different view of the programs than other respondents, thus adding a more nuanced understanding of program quality.
The third paper, authored by Landers et al. (2018), examined research productivity through a different lens than has been used previously. Instead of looking solely at the quantity of publications and/or conference presentations, this paper examined the interdisciplinarity of research output coming from graduate programs. Specifically, it looked at how often faculty from graduate programs publish outside the traditional core I-O journals, which provides an interesting way of thinking about scholarly output, especially alongside previously published graduate rankings that focus on the quantity of publications and presentations.
The next paper, by Howald et al. (2018), compared student perceptions of particular aspects of their programs with how subject matter experts in the field rated those aspects. The aspects they examined related to applied, teaching, and research developmental opportunities. The subject matter experts surveyed were the broader SIOP community; SIOP members rated the importance of various developmental opportunities graduate students may receive, and these ratings were compared with whether current graduate students perceived those developmental opportunities to be available to them. The final paper, by Roman et al. (2018), similarly examined student perceptions of graduate programs, looking broadly at multiple aspects of perceived program quality, including funding opportunities, class offerings, and general program culture.
In sum, the papers included in this volume cover a wide range of rankings: master's and doctoral programs examined from a variety of angles, from students' perceptions to more objective indicators (e.g., interdisciplinary research productivity). Our hope is that, in addition to traditional rankings, these perspectives provide more insight into the strengths and growth areas of the educational environment of I-O psychology across the field.
Caveats and Conclusions
Overall, we are pleased with these projects and are excited to share the results with the SIOP community. A few caveats and considerations should be mentioned, though. First, response rates limited many of these projects; programs could not be included if there were not enough data on them. For some projects (i.e., those that averaged the ratings of multiple people representing a program), a low response rate meant that the program was omitted from the ranking in an effort to protect the confidentiality of those who did respond. A casual reader might see that a school was not included and infer that the school was poorly ranked, but this is not the case. Please keep this in mind as you read and interpret the results.
Also, it is important to remind the reader that although we believe these rankings provide unique and interesting ways of operationalizing and examining various program strengths, they are not the only methods of doing so. The premise of this endeavor was that there are multiple ways of ranking graduate programs beyond research productivity alone. Therefore, it would be disingenuous for us to claim that we have created a definitive list of all the ways graduate programs can excel. We acknowledge that there are many more ways graduate programs could have been analyzed and ranked, and we hope that these five papers will spark discussions among I-O psychologists about other program aspects that should be emphasized.
In fact, we hope that these papers spark multiple discussions among individuals in our field, not only about what else could have been examined but also, more generally, about what people think of these rankings. What do you agree with? What do you disagree with? If you would like to join the discussion, we encourage you to attend our session at the 2018 SIOP conference, entitled "Where Do We Stand? Alternative Methods of Ranking I-O Graduate Programs." The session will be held on Friday, April 20, from 11:30 to 12:50 in the Gold Coast room. The project authors will give a brief overview of their findings, the reviewer committee will discuss them, and the audience will then be invited to ask questions and offer their thoughts. We hope to see you there!