

Volume 55, Number 4, Spring 2018. Editor: Tara Behrend


The Interdisciplinarity of I-O Psychology PhD Programs and Faculty

Richard N. Landers, Michael B. Armstrong, Adrian B. Helms, and Alexis N. Epps, Old Dominion University

In this article, we propose a new criterion for use in evaluating and ranking graduate training programs in I-O psychology: their “interdisciplinarity,” which we conceptually define as the degree to which they contribute to and influence disciplines beyond I-O psychology, operationalized as publications in those disciplines and citations by those disciplines, respectively. We also present rankings of programs by their interdisciplinarity using several specific operationalizations and provide a listing of the secondary field of focus for all current I-O psychology PhD programs. If you want to skip the details, an alternative view of the summary rankings and tables below is available in the web app at https://tntlab.shinyapps.io/io_programs/, which provides the same data reported here plus summaries of individual faculty members’ I-O and interdisciplinary publication productivity, with the ability to filter, sort, and search.

Why Study Program Interdisciplinarity?

A scientific discipline can be defined as a group of researchers that (a) agree upon central problems to be solved, (b) rely upon at least some agreed-upon facts relevant to solving those problems, and (c) produce explanations, goals, and theories to address those problems (Wagner et al., 2011). Interdisciplinary research integrates information, data, techniques, tools, perspectives, concepts, and/or theories from two or more disciplines to advance knowledge or to solve problems whose solutions are beyond the scope of a single research discipline (National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, 2005). Thus, the “interdisciplinarity” of a graduate program in I-O psychology is the degree to which that program emphasizes the importance of interdisciplinary research via publication in non-I-O outlets (i.e., interdisciplinary publication) and/or fosters citations within the research of non-I-O disciplines (i.e., interdisciplinary impact).

An interdisciplinary approach to education may not be the first feature that comes to mind when considering which graduate program to apply to or choose, but we contend that interdisciplinary education is important for a variety of reasons and worth consideration by graduate school applicants. Interdisciplinary education and research can benefit both students’ development of valuable KSAOs and the progress of a given scientific field such as I-O psychology.

With interdisciplinary education, students gain a comprehensive and holistic understanding of science and research (Spelt, Biemans, Tobi, Luning, & Mulder, 2009). With expanded and integrated knowledge structures, students can develop better critical thinking and creative problem-solving skills (Jones, 2009; Spelt et al., 2009). They learn how to communicate across disciplines, building better networks of collaboration (Jones, 2009; Kleinberg, 2008). They also develop a broader skill set that can be applied beyond any single discipline, including lifelong learning, the ability to change perspectives, and the ability to cope with complex issues (Jones, 2009; Spelt et al., 2009).

Interdisciplinary research is becoming an integral feature of modern science (National Academy of Sciences, National Academy of Engineering, and Institute of Medicine, 2005). The inherent complexity of nature, society, and technology necessitates a broader, more comprehensive knowledge base to fully understand the phenomena at play than any single discipline can supply. Through interdisciplinary research, scientists can build from the work of other disciplines, reducing duplication of effort and streamlining the construction of new knowledge to address modern problems. For example, I-O psychology has recently begun to investigate the use of natural language processing to assess psychological constructs (Campion, Campion, Campion, & Reider, 2016). I-Os do not need to reinvent the wheel here: Linguists, cognitive scientists, and computer scientists have already developed natural language processing techniques that can be applied to our areas of interest, correlating language categories with personality, emotions, and other psychological characteristics (Tausczik & Pennebaker, 2010), as well as automatically scoring written essays (Deane, 2013).

More practically, research funding organizations have recognized the shifts in nature, society, and technology that require interdisciplinary approaches to science, with many now giving special preference to work that cannot be accomplished with expertise from a single discipline alone. Despite this increasing recognition, I-O psychology and other domains have no convenient means of identifying the interdisciplinarity of a given academic or research-based program. The rankings presented here address this challenge.

How to Measure Interdisciplinarity

Interdisciplinarity as a research topic has been conceptualized broadly to include the integrations and interconnections of various research perspectives, concepts, and theories by individuals, facilities, or even countries (Porter, Cohen, Roessner, & Perreault, 2007). The practical question that remains is how to measure interdisciplinarity within these contexts in such a way that the interdisciplinarity of entities can be meaningfully compared. This question is studied within the field of bibliometrics (Wagner et al., 2011), defined as the study of patterns of publication using statistical analysis (Narin, 1976). Techniques such as citation analysis and evaluation of spatial distances have helped broaden our understanding of interdisciplinarity by measuring aspects of scholarly productivity such as citation counts and the breadth of encompassed research areas.

Perhaps the simplest type of bibliometric research, citation analysis involves studying the relationships between a part or the whole of a cited document and a part or the whole of the citing document (Smith, 1981) to identify in what specific ways the authors of a work were influenced by others. It also considers the relationship between individual papers and the works those papers have cited (McBurney & Novak, 2002). In contrast, a study by Nichols (2014) created a latent topic model of text contained within the National Science Foundation’s award and proposal database to identify disciplinary influences; it revealed that disciplines with little interdisciplinary interaction typically use more specialized language, whereas more interdisciplinary fields rely on more general language. These approaches illustrate the varying perspectives taken to understand interdisciplinarity; as a result, there is no single best-in-class approach widely accepted in this research literature. However, citation analysis emerged for us as the most interpretable for those without expertise in bibliometrics and thus became the focus of our rankings.

Present Ranking Methodology

To rank programs in I-O psychology, we took a citation analysis approach that involved nine steps:

1.       We created by hand a list of all I-O psychology programs that currently offer PhDs, as recorded in SIOP’s online graduate program search tool. This includes any I-O program from around the world that has signed up to be part of the list, but given SIOP’s headquarters and primary membership, the list consists primarily of US programs. This initially yielded a list of 73 programs.

2.       We searched by hand the websites of each of those programs to determine (a) whether they really were traditional I-O programs and (b) who their faculty were. This was a somewhat subjective judgment, but ultimately we included any program explicitly labeled “I-O psychology” and considered everything else to be an explicitly interdisciplinary or “I-O adjacent” program, resulting in a final list of 53 I-O programs, 14 adjacent programs, and 6 non-I-O programs. We considered adjacent programs less interesting for the rankings, because a graduate student applying to a program in (for example) “Applied Organizational Sciences” should already know that the program is going to be interdisciplinary. We still analyzed these programs, but they are not included in the tabular rankings. Additionally, if faculty were listed as having a primary home outside of the I-O psychology program (most common with affiliated management faculty), those faculty were not included. This step was completed in early spring 2017, so these placements are based on the database and website content at that point.

3.       We linked by hand each faculty member to their ID numbers in Elsevier’s Scopus database, a broad, cross-disciplinary database of journal publications and published conference proceedings from all academic fields. Some faculty had as many as three Scopus IDs due to variations in their names at time of publication; these IDs were combined by hand. Research has supported Scopus as having greater coverage than Web of Science and superior accuracy in comparison to Google Scholar, its primary competitors in bibliographic search (Falagas, Pitsouni, Malietzis, & Pappas, 2007). It is generally regarded as one of the most comprehensive databases of scholarly publications currently available, including journals, books, book chapters, and conference proceedings, but due to the complexities of academic publishing, its accuracy should not be considered absolute (Mongeon & Paul-Hus, 2016).

4.       Using API scraping techniques (Landers, Brusso, Cavanaugh, & Collmus, 2016) and an institutional membership to the Scopus database, we algorithmically downloaded from Scopus, using R and the rscopus library, a list of every publication by every faculty member on our list (a code sketch illustrating this step and the next appears after this list). We also downloaded Scopus’ categorization by discipline of each journal these publications appeared in. This resulted in a list of 15,554 publications, although publications co-authored by multiple faculty on our list were counted once per faculty member rather than once overall. Although this introduced nonindependence into the dataset, as 2,087 publications (13.4%) have more than one I-O faculty author, we preferred this to alternative strategies, such as considering only first-authored publications or developing some sort of “contribution” index. Because authors on interdisciplinary publications are frequently not first in author order, we feared such strategies would cut down our list of interdisciplinary publications dramatically and artificially.

5.       We algorithmically downloaded a list of every article citing any one of those articles, also collecting Scopus’ disciplinary categorization of these citing articles. This dataset was harvested in fall 2017 and resulted in a list of 577,120 citations, with the same nonindependence caveat described above.

6.       Using these data, filtered to include only journal articles and to exclude errata, we created a list of unique publications by I-O PhD faculty in reverse order of popularity, which appears in Table 1. We also created something that looks vaguely like a scree plot using these data, which appears in Figure 1. Using this plot and table, we had hoped to identify a meaningful break for contrasting “core I-O” with “interdisciplinary” outlets but were unable to find a clear division point except when comparing Journal of Applied Psychology with everything else. At the suggestion of two reviewers, we consulted the websites of each of the top 40 journals to determine the training of the editor-in-chief and, in cases of ambiguity, of the associate editors. In other words, we looked to see whether I-Os made editorial decisions for each of these journals. Decisions based upon this standard were clear in all cases except for the Journal of Vocational Behavior (JVB), which appears ranked 4th in Table 1. At the time of our check, the editor-in-chief of this journal held a degree in counseling psychology, and only 4 of the 8 associate editors held degrees in I-O, with one of those editors currently holding a position in a business school. Given (a) JVB’s associate editor training and employment lean and (b) its current mission statement (i.e., from its webpage: “publishes original empirical and theoretical articles that contribute novel insights to the fields of career choice, career development, and work adjustment across the lifespan and which are also valuable for applications in counseling and career development programs”), we made the decision to classify JVB, at least in its current form, as “interdisciplinary.” Journals classified as core I-O using this system appear in italics in Table 1.

Figure 1.

7.       Using this categorization, we algorithmically labeled each publication and citation as either interdisciplinary or not and then tabulated these counts in several distinct ways; these tabulations appear in Tables 2-8 (see the second code sketch following this list). If a publication had multiple I-O authors, it was counted once per author in these lists, and authorship order was ignored. Programs we previously categorized as “I-O adjacent” appear in Table 9.

8.       Next, we restructured the data to represent counts across disciplines within each program and reverse-ordered them by count, excluding the categories “Applied Psychology” and “Psychology (all)” because these appeared first and second for almost every program in our list. These results appear in Table 10, summarizing the most common interdisciplinary fields published in by each program.

9.       Finally, we created an online app that summarizes all this information, which is available at https://tntlab.shinyapps.io/io_programs/
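Below is a minimal sketch of the harvesting described in steps 4 and 5, assuming a valid institutional Scopus API key and the rscopus package. The author ID is hypothetical, returned column names can vary across rscopus versions, and our use of the Search API’s REFEID() field to retrieve citing documents is an assumption that should be confirmed against current Scopus documentation; this is not a reproduction of our actual pipeline.

```r
# Minimal sketch of steps 4-5: harvest publications and citing documents
# from Scopus via rscopus. The author ID below is hypothetical.
library(rscopus)

set_api_key("YOUR-SCOPUS-API-KEY")  # requires institutional Scopus access

# Step 4: download every publication indexed for one faculty Scopus author ID.
# Column names (e.g., the EID column) depend on the rscopus version.
pubs <- author_df(au_id = "00000000000", verbose = FALSE)

# Step 5: download documents citing one of those publications, identified by
# its Scopus EID. We assume the Search API's REFEID() field returns citing
# documents; confirm against the current Scopus search documentation.
res <- scopus_search(query = paste0("REFEID(", pubs$eid[1], ")"),
                     max_count = 1000, verbose = FALSE)
citing <- gen_entries_to_df(res$entries)$df
```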
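Similarly, here is a minimal sketch of the classification and counting logic in steps 6 and 7. The data frame, its columns, and the two-journal core-I-O list are illustrative stand-ins; Table 1 defines the actual journal set we used.

```r
# Minimal sketch of steps 6-7: label each publication as core I-O or
# interdisciplinary by its outlet, then tabulate counts per program.
library(dplyr)

# Toy stand-in data; the real data frame held one row per
# faculty-publication pairing harvested from Scopus.
all_pubs <- data.frame(
  program = c("Program A", "Program A", "Program B"),
  journal = c("Journal of Applied Psychology", "Human Factors",
              "Personnel Psychology"),
  stringsAsFactors = FALSE
)

core_io_journals <- c("Journal of Applied Psychology",
                      "Personnel Psychology")  # abbreviated; see Table 1

labeled <- all_pubs %>%
  mutate(interdisciplinary = !journal %in% core_io_journals)

program_counts <- labeled %>%
  group_by(program) %>%
  summarize(io_pubs        = sum(!interdisciplinary),
            interdisc_pubs = sum(interdisciplinary),
            prop_interdisc = mean(interdisciplinary))
```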

A correlation matrix of all study variables appears in Table 11. We can summarize the ranking tables as follows:

1.       Table 2. A list of all programs and all summary statistics. This information is also presented in Tables 3-8 but sorted and filtered by relevance to particular questions.

2.       Tables 3-4. Rankings of programs by their interdisciplinarity, either as raw counts (Table 3) or as a proportion of their total output (Table 4); see the sketch following this list. Higher ranks in Table 3 indicate greater interdisciplinary publication output, whereas higher ranks in Table 4 indicate greater interdisciplinary focus relative to all work being done there.

3.       Table 5. Rankings of programs by their I-O centrality, that is, the degree to which they are not interdisciplinary, shown as raw counts and as a proportion of total output. Higher ranks in Table 5 indicate greater I-O publication output.

4.       Tables 6-7. Rankings of programs by the raw interdisciplinary citations they have received (Table 6) or the proportion of their citations that are interdisciplinary (Table 7). Higher ranks in Table 6 indicate greater overall interdisciplinary impact, whereas higher ranks in Table 7 indicate greater interdisciplinary impact as a proportion of all impact.

5.       Table 8. Rankings of programs by their I-O influence, that is, the degree to which they are cited by I-O publications.  Higher ranks indicate greater influence within I-O.
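To make the difference between the raw and proportional rankings concrete, here is a minimal sketch continuing from the illustrative program_counts data frame built in the tabulation sketch above; the citation-based tables follow the same pattern with citation counts in place of publication counts.

```r
# Ranking by raw interdisciplinary output (cf. Table 3) versus by the
# proportion of output that is interdisciplinary (cf. Table 4). Assumes
# the illustrative `program_counts` data frame from the earlier sketch.
library(dplyr)

ranked_raw  <- arrange(program_counts, desc(interdisc_pubs))
ranked_prop <- arrange(program_counts, desc(prop_interdisc))
```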

Importantly, all decisions were based upon Scopus’ internal database, and the reliability of that database is not perfect. For example, some conference proceedings have been misclassified as journals (and vice versa), and it is unclear how the subject area categories Scopus uses were derived. We believe the quality to be sufficiently high and consistent that the rankings and classifications we developed would not be substantially different with perfect reliability; nevertheless, this remains an important caveat. Small count and/or ranking differences should be interpreted with this limitation in mind. Additionally, it caused the category Catalysis to appear in Table 10 as the number-one non-I-O discipline for Virginia Tech, because Scott Geller has published 19 articles in Bulletin of the Psychonomic Society, a journal that has not published an issue since 1993 and that, for some reason, Scopus places partly in the category Catalysis. From a manual review of disciplinary assignments, cases like these appear to be relatively unusual, but this also illustrates why we ultimately used a popularity metric to determine whether a publication was I-O rather than relying on Scopus’ internal categorization scheme.

Results

Although it may seem obvious with the data in front of you, we were surprised at how overwhelmingly popular the Journal of Applied Psychology is as an outlet for I-O psychologists in PhD programs, as shown in Table 1. Nearly 1 out of every 10 publications by I-O faculty in PhD programs is in JAP, and this rate dwarfs all other outlets. Plateaus in Figure 1 reveal additional tiers. Among core I-O publications, the second tier of popularity includes Journal of Business and Psychology, Journal of Organizational Behavior, and Journal of Vocational Behavior. The third tier is larger, running from Human Performance through Organizational Behavior and Human Decision Processes. The fourth tier runs from Organizational Research Methods through Educational and Psychological Measurement. Beyond that, an extremely long tail appears. Although it is not shown in Figure 1 due to space considerations, I-Os teaching in PhD programs have published in 1,288 distinct journals over their careers. It is a very long tail indeed: 568 of those journals contain only a single publication across all I-O psychology faculty. For example, only one I-O psychologist has ever published in Humor, and only one in Zygon.[1] Missed opportunities for our field, clearly.

More broadly, perhaps most striking to us was that there appears to be a general productivity factor that crosses disciplinary borders. Faculty and programs who publish more I-O work tend to also publish more interdisciplinary work (r = .64). Nevertheless, substantial differences in program focus did emerge. For example, the University of Minnesota, which traditionally is ranked near the top of all I-O programs in terms of overall productivity, scored at the bottom of Table 4, suggesting those faculty publish outside of core I-O psychology journals (32% of their output) far more rarely than within them, although this is relative to their generally elevated level of productivity (i.e., #1 in I-O publications per faculty member). In fact, most traditionally highly ranked programs appear toward the middle or end of that table. Nevertheless, many of these programs are highly ranked in terms of raw interdisciplinary citation impact (Table 6), suggesting that highly ranked I-O programs tend to publish I-O research that fields outside of I-O psychology find useful, despite conducting relatively less interdisciplinary research themselves. Thus, high-quality research within I-O does appear to be recognized as generalizable to other fields, at least to a degree.

In relation to individual programs, the most interesting interpretive results for us came from examining Tables 4 and 7 and tracking high-interdisciplinary-impact programs through the other tables. For example, 91% of the citations Clemson receives come from interdisciplinary publications, with an interdisciplinary impact ratio of 13022:1229 (from Table 7), and publications coming from that program are 79% interdisciplinary, with a publication ratio of 314:83 (from Table 4). This suggests high impact in literatures outside of I-O psychology that may not be evident at first glance in traditional rankings, most specifically in the areas of social psychology, psychiatry and mental health, and public health (from Table 10). In terms of overall publication rates, Clemson appears lower ranked than would be predicted by its citation rate. This contrast highlights cross-disciplinary differences in impact that are traditionally difficult to interpret.
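As a quick arithmetic check, the percentages above follow directly from the reported ratios:

```r
# Reproducing the Clemson percentages from the ratios reported in the text.
13022 / (13022 + 1229)  # ~0.91: 91% of citations received are interdisciplinary
314 / (314 + 83)        # ~0.79: 79% of publications are interdisciplinary
```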

Discussion and Takeaways

There were many interesting takeaways for us within these tables. We do not have the space to describe them all, but considering the sheer number of tables and the amount of information we still needed to leave out of them, we thought it useful to provide three highlights. First, an emergent finding for our team that is not evident in the tables is that programs with relatively lower interdisciplinarity rarely have that interdisciplinary identity spread across faculty. Instead, individual faculty members often account for most or all of a program’s interdisciplinary focus. For example, the second most common interdisciplinary publication area for Old Dominion University, where the lead author is currently employed, is “Computer Science Applications.” However, the lead author is also the only person at Old Dominion consistently publishing in this area. Thus, the individual faculty rankings and details available in the web app we created (https://tntlab.shinyapps.io/io_programs/) may be more useful to a would-be graduate student trying to identify an advisor with a certain interdisciplinary focus than the program listing, if that program is not overwhelmingly interdisciplinary.

Second, we were surprised at how high the interdisciplinary citation counts were (see Table 6) in relation to I-O citation counts. All 53 I-O programs listed in the SIOP database have a greater impact on fields outside of I-O psychology than on I-O psychology itself in terms of number of publications influenced. The smallest ratio belonged to the University of Tulsa, with 2.56 interdisciplinary citations per I-O citation. The largest belonged to Alliant University at San Diego, with 33.88 interdisciplinary citations per I-O citation. In general, the small size of I-O psychology means that any one publication we produce has, on average, a much greater impact outside of our field than within it. Thus, if a prospective graduate student is interested in “impact” in an absolute sense (minds changed, research influenced), it is even more important to apply to programs with faculty whose work has implications beyond I-O.

Third, we were also surprised to learn how much the productivity and citation rates of “star faculty” tend to influence program rankings. For example, the University of South Florida is cited within I-O psychology dramatically more than any other program, with a raw count of 6,050 citations from the core I-O literature, about 1,000 more than either of the next two on the list, Michigan State and the University of Minnesota. However, upon investigation of the person-level data (available in the shiny app), we discovered that they owe a lot of thanks for that to Paul Spector, who brought in 3,077 of those citations by himself. This observation led us to investigate the relative impact of “prolific” academics within programs more broadly. Although the University of South Florida has several prolific and highly cited faculty, this varies widely by institution. Using the shiny app, we can easily identify the academics fieldwide most highly cited within I-O psychology articles; in descending order, the top 10 are Paul Spector, Deniz Ones, Paul Sackett, John Meyer, Chockalingam Viswesvaran, Michael Mumford, Robert Eisenberger, Eduardo Salas, Ann Marie Ryan, and Steve Motowidlo. If any of these people were to leave their programs, the ranking of their program according to I-O citations would immediately and noticeably change, although by varying amounts. The same pattern holds for any metric of choice, whether considering programs citation-wise or publication-wise, within I-O or in general. This suggests some volatility in these rankings in relation to individual faculty members; when one prolific or highly cited person leaves a program, we would expect that program’s ranking to dip despite what would likely be a similar quality of graduate education, and the largest such drops would occur in programs with the greatest disparity between the least and most prolific or cited faculty members. In addition to highlighting the high stakes of hiring or replacing faculty, this reveals the riskiness of using program-level publication-based rankings as the sole decision-making tool for undergraduates considering graduate school; thus, we again recommend applicants use all rankings as pieces of information that contribute to their application decisions, not as focal criteria, and consider both individual faculty profiles (in the shiny app) and program-level metrics.[2]

In terms of general, practical recommendations for those applying to graduate school, we recommend undergraduates follow a multifaceted decision-making process that weighs the various pieces of evidence for each program individually. This is a considerably more complex strategy than the standard “apply as highly ranked as your GREs will support” advice that is often given. Instead, much as in needs assessment and job analysis, we recommend undergraduates articulate their career goals and how graduate school figures into them before looking at any rankings. With those goals written down, rankings should be consulted that help reach those goals. The mentors who will help a student have maximal impact within I-O psychology are likely quite different from those who will help them have an interdisciplinary impact. Considering the general shift of all academic disciplines toward interdisciplinary perspectives, this will only become more important.

 
For those seeking an interdisciplinary perspective in their I-O psychology graduate school experience as part of those goals, we recommend a specific process. First, either consult Table 10 for programs that contain the interdisciplinary focus of interest or use the online app to find individual faculty with that interest. Next, consult the other tables to determine how the programs identified rank on other metrics of interest. Finally, consult other non-citation-based rankings, both in this issue of TIP and elsewhere, and integrate that information into a holistic picture of each program. In which rankings are your target programs strong, in which are they weak, and which balances are attractive to you? Is there more than one faculty member with the type of interdisciplinary expertise you’re looking for at any particular program? These questions should all be answered before applying; frankly, a student able to articulate in a personal statement a specific rationale for applying to a particular school, with such information considered, would be impressive indeed.

 

Overall, we hope these rankings provide evidence to people applying to graduate school that “impact” is multifaceted. There is no clear “best” program or set of programs on all dimensions of productivity, and this does not even consider other dimensions of the graduate school experience, such as reactions, specific skills gained, employability, cultural fit, and daily workload. Undergraduates should decide what is most important to them and consider all programs available to them on those terms. We also hope this message extends to those evaluating grant applications or tenure and promotion cases. When interdisciplinarity is considered explicitly, the meaning of “impact” changes. Although within-discipline and interdisciplinary output and impact are correlated, they are measurably distinct. With this article, its rankings, and our online app, we provide resources for people, particularly those applying to graduate school, to determine a program’s impact on other disciplines in a way that was not previously possible. We believe it is only through interdisciplinary efforts that I-O psychology will find itself in a position of influence within the broader scientific community, a goal our field has been struggling toward for decades. The present study serves as the first systematic assessment of interdisciplinarity in I-O, and we hope it is the first of many such efforts as I-O continues to evolve to better meet the needs of both I-O psychologists and society more broadly.

Notes

[1] Comment from RNL: I really wish I could track how many people Googled “Zygon” as a result of reading this sentence. Perhaps I could get a pub in Humor out of it.

[2] This paragraph was expanded from the originally published version of this article; we thank a reader for calling our attention to some ambiguous language.

 

References

Campion, M. C., Campion, M. A., Campion, E. D., & Reider, M. H. (2016). Initial investigation into computer scoring of candidate essays for personnel selection. Journal of Applied Psychology, 101(7), 958-975. doi:10.1037/apl0000108

Deane, P. (2013). On the relation between automated essay scoring and modern views of the writing construct. Assessing Writing, 18, 7-24. doi:10.1016/j.asw.2012.10.002

Falagas, M. E., Pitsouni, E. I., Malietzis, G. A., & Pappas, G. (2007). Comparison of PubMed, Scopus, Web of Science, and Google Scholar: Strengths and weaknesses. The FASEB Journal, 22, 338-342.

Jones, C. (2009). Interdisciplinary approach – advantages, disadvantages, and the future benefits of interdisciplinary studies. ESSAI, 7(26), 1-6. Retrieved from: http://dc.cod.edu/essai/vol7/iss1/26

Kleinberg, E. (2008). Interdisciplinary studies at a crossroads. Liberal Education, 94(1), 6-11. Retrieved from https://www.aacu.org/publications-research/periodicals/interdisciplinary-studies-crossroads

Landers, R. N., Brusso, R. B., Cavanaugh, K. J., & Collmus, A. B. (2016). A primer on theory-driven web scraping: Automatic extraction of big data from the internet for use in psychological research. Psychological Methods, 21, 475-492.

McBurney, M. K., & Novak, P. L. (2002). What is bibliometrics and why should you care? IPCC 2002 Professional Communication Conference. doi:10.1109/IPCC.2002.1049094

Mongeon, P., & Paul-Hus, A. (2016). The journal coverage of Web of Science and Scopus: A comparative analysis. Scientometrics, 106, 213-228.

Narin, F. (1976). Evaluative bibliometrics: The use of publication and citation analysis in the evaluation of scientific activity (pp. 206-219). Washington, DC: Computer Horizons.

National Academy of Sciences, National Academy of Engineering, and Institute of Medicine. (2005). Facilitating interdisciplinary research. Washington, DC: The National Academies Press. https://doi.org/10.17226/11153.

Nichols, L. G. (2014). A topic model approach to measuring interdisciplinarity at the National Science Foundation. Scientometrics, 100(3), 741-754.

Porter, A. L., Cohen, A. S., Roessner, J. D., & Perreault, M. (2007). Measuring researcher interdisciplinarity. Scientometrics, 72(1), 117-147.

Smith, L. C. (1981). Citation analysis. Library Trends, 30(1), 83-106.

Spelt, E. J. H., Biemans, H. J. A., Tobi, H., Luning, P. A., & Mulder, M. (2009). Teaching and learning in interdisciplinary higher education: A systematic review. Educational Psychology Review, 21, 365-378. doi:10.1007/s10648-009-9113-z

Tausczik, Y. R., & Pennebaker, J. W. (2010). The psychological meaning of words: LIWC and computerized text analysis methods. Journal of Language and Social Psychology, 29(1), 24-54. doi:10.1177/0261927X09351676

Wagner, C. S., Roessner, J. D., Bobb, K., Klein, J. T., Boyack, K. W., Keyton, J., . . . Börner, K. (2011). Approaches to understanding and measuring interdisciplinary scientific research (IDR): A review of the literature. Journal of Informetrics, 5(1), 14-26.

 
