

Volume 55, Number 2, October 2017. Editor: Tara Behrend


I-O Graduate Programs Rankings Based on Student Perceptions

Jenna-Lyn R. Roman, Baruch College, CUNY; Christina N. Barnett, University of South Florida; and Erin M. Eatough, Baruch College & The Graduate Center, CUNY

Graduate program rankings occur frequently within the industrial-organizational psychology community. These rankings have been calculated in myriad ways: research productivity of faculty (Levine, 1990; Winter, Healy, & Svyantek, 1995), number of faculty serving on editorial boards (Jones & Klimoski, 1991), number of student conference presentations (Payne, Succa, Maxey, & Bolton, 2001; Surrette, 1989, 2002), expert opinions such as those in U.S. News & World Report (1995, 2001), and, finally, student perceptions (Kraiger & Abalos, 2004). Although all of these methods provide valuable information to incoming students and current faculty alike, we suggest that student perceptions offer a unique insight that can add value to the ranking of graduate programs, particularly for prospective I-O graduate students.

 

Kraiger and Abalos (2004) suggested that the perspective of current graduate students is both important and necessary. They used graduate student opinions to create a set of criteria, ratings of programs based on those criteria, and, in turn, rankings. In doing so, Kraiger and Abalos set several important precedents in industrial-organizational psychology graduate program ratings. First, they used current graduate students as subject matter experts (SMEs) to determine which criteria students value most, a novel step in I-O program rankings. Second, they measured the perspectives of students from both doctoral and master’s programs. This is important because SIOP lists 157 MA and MS programs from which potential students can choose, yet little ranking information is available to guide these students beyond what is provided on each program’s website or in the searchable I-O program database on the SIOP website. An available and regularly updated source for program comparison, based on information provided by students currently attending the programs applicants hope to join, would be invaluable to prospective MA/MS students.

 

Following their lead, we aimed to conduct student program ratings using similar methodology to Kraiger and Abalos (2004) with the goal of updating their previous rankings and offering a unique (student) perspective to the broader effort on program rankings.

 

Support and Criticism for Program Rankings

 

Program directors, faculty members, and students hold a range of stances on program rankings, from eager to cautious to completely distrustful of the results of any ranking method. In defense of rankings, however, it is unwise for any organization to ignore the perspective of its customers (“Why customer satisfaction is important,” n.d.). In the case of graduate programs, the customers are the students. Prospective students, much like consumers choosing a new product or home, look for product information to make consequential decisions. A school’s ranking can be one such piece of information gathered in that assessment process, information that prospective students, the consumers, may want during their decision-making process.

 

However, it has been noted extensively that program rankings are not a flawless means of measuring program quality. Kraiger and Abalos (2004) provided examples of ranking flaws: “rankings based on program reputation may be unrelated to current faculty productivity given halo (general reputation of the university), turnover, or raters who do not fully understand the discipline or activities of individual institutions” (p. 28), demonstrating serious issues that must be considered when interpreting rankings. Still, we contend that although rankings may be flawed, they are an inescapable fact of a competitive academic market. Indeed, the very approach we take here, rankings based on graduate student opinion, is not impervious to error. For example, it is natural for graduate students to believe that their program is a top program and to rate it accordingly. Confirmation bias makes students more likely to interpret and recall information suggesting their program is elite, because this confirms their preexisting belief that they chose a top program and made a good decision (Plous, 1993). Furthermore, students typically have experience with only one program. Unlike a consumer who may have experience with many brands, students have only their own experience with a single program, which gives them little basis for comparative evaluation. Despite these considerations, we believe the information provided by such rankings can offer value, as it represents the lived experience of the very population graduate programs serve.

 

The Utility of Student Perceptions for Program Rankings

 

Using student perceptions in graduate rankings can be particularly valuable both for prospective students and for the graduate programs that serve them. For prospective students, as mentioned, gathering information about programs before making a significant life commitment is natural, yet little comparative information is available that reflects the student experience within programs. Kraiger and Abalos (2004) produced a program ranking using scores collected from current graduate students on criteria that current graduate students had previously deemed important. This method produced rankings on a diverse set of criteria, such as student perceptions of faculty support and accessibility, instruction quality, balance between applied and academic emphases, and cost of living. These are all factors prospective students may weigh when choosing a program for the next stage of their academic careers. Students must make decisions that can influence the rest of their lives, and information about programs (both positive and negative) may aid them in this process.

 

However, rankings based on student perceptions are not only potentially valuable to prospective students shopping for programs; they are also important for the graduate programs themselves. Because these rankings are based on criteria current students deem important, programs can use them to identify areas to develop in order to better attract prospective students. For example, programs can leverage the findings to build a case to administration for program development, such as requesting more internally funded research or teaching positions for graduate students if these areas are rated particularly low relative to other programs (Kraiger & Abalos, 2004). In addition, if programs can use the rankings to secure better funding opportunities for students, recruiting future applicants should become easier.

 

Method

 

The authors of this article followed a methodology similar to that of Kraiger and Abalos (2004) to (a) determine the criteria graduate students deem important, (b) weight those criteria by importance, and (c) compute a total score for each program. This information was then used to determine the overall ranking of programs.

 

This project was conducted in two phases: a criterion development phase and an importance and rating phase. Both phases of data collection were run in conjunction with another project on student perceptions at Bowling Green State University. In the initial criterion development phase, current I-O graduate students (N = 46) listed the criteria they used to evaluate or choose a graduate program (e.g., research interests of faculty, location, availability of funding). This phase drew responses from students at three different universities and colleges, with approximately two-thirds of students enrolled in PhD programs and the remaining one-third enrolled in MS/MA programs. Students responded to the open-ended question, “List any and all criteria used when selecting your graduate program or that you use when recommending a program to another person.” The resulting responses were consolidated into 25 criteria, and definitions of the criteria were written. See Table 1 for the final list of 25 variables.

 

Table 1
Graduate Student Criteria

Criteria | Definitions
Application process | Admission requirements
Alumni network | Success of alumni and connection of alumni to the program
Class offerings | Topics of interest offered and class times offered
Cost | Tuition, fees, and program-related expenses incurred due to program attendance
Facilities available | Labs, office space, technology options, statistical packages, journal access
Faculty quality/expertise | Quality of class instruction, salience of research advice, depth and breadth of faculty knowledge
Faculty productivity | Quality and quantity of graduate faculty journal publications and conference presentations
Faculty research interests | Major professor and other professors with research interests similar to yours
Teaching opportunities | Availability for students to student teach, lecture classes, serve as a teaching assistant
Funding resources | Financial package available to student, relationship between stipend amount and cost of living
Graduation requirements | Requirements are reasonable and match the student's goals (e.g., having internship requirements when a student is interested in going applied)
Internship opportunities | Availability of suitable internships to the program's students
Job/Internship placements | Successful placements of current students and alumni in appropriate internships and jobs
Location | Geographic qualities around the campus, access to nearby job/internship opportunities, cost of living
Opportunities for applied projects | Availability to do consulting projects and other types of applied work as part of the program
Program culture | Atmosphere of the program, norms, collaborative vs. competitive
Program flexibility | Opportunity for students to arrange their schedule to fit other facets of their life, take a semester off for life events
Program ranking | Knowledge of a program's current ranking (e.g., U.S. News & World Report)
Program reputation | Knowledge of a program's reputation in I-O
Quality of life/fit and social relationships between grad students | Social fit with and relationships between students within the program
Research opportunities | Availability for students to engage in research that relates to their topics of interest within the program
Student productivity | Quality and quantity of graduate student journal publications and conference presentations
Student support by faculty/department | Mentoring availability and formal and social interactions between the student and the faculty/department; faculty accessibility to students
Teaching model used | Balance between applied and academic focus
Learn practical skills | Relevant skills are learned by students that will be useful in I-O internships and jobs

 

The second phase, the main data collection in which importance scores and ratings were gathered, used a widely distributed survey. Data collection for this phase began in September 2017. The survey was administered to current graduate students from I-O MA/MS and PhD programs across the country. The researchers distributed a Qualtrics survey to all affiliates of SIOP through the SIOP listserv. To augment the number of student participants, particularly MA/MS students, SIOP also distributed the study link to I-O program directors so that they could email it directly to their students.

 

Respondents were told that the purpose of the study was “to collect perceptions of the quality of the graduate programs from the perspective of their customers—the graduate students.” Participants were first asked to rate the importance of the 25 criteria, in general, from their perspective as current graduate students. For example, for class offerings, participants rated the importance of “topics of interest offered and/or the class times that courses are offered” when choosing a graduate school. Ratings were collected on a 4-point Likert-type scale (1 = not at all important to 4 = very important). The highest possible score for a program was 100 (25 criteria with a maximum rating of 4 each).

 

Students were then asked to rate the quality of their own program on each of the 25 criteria. A sample item read, “Please provide your perceptions on how the following variables relate to the quality of the graduate program in which you are currently enrolled. Only answer this section if you are currently enrolled as a student in a graduate program (e.g., Class Offerings).” Ratings for that variable were collected on a 4-point Likert-type scale (1 = extremely poor class offerings to 4 = extremely good class offerings). With the exception of two items (application process and graduation requirements), which used 3-point Likert-type scales, all items were rated on 4-point Likert-type scales with anchors tailored to the specific item. Students also provided demographic information. All surveys were completed anonymously.

 

Notably, students from both MA/MS and PhD programs provided importance ratings and quality ratings on the criteria. We expected that some criteria might be differentially important to master’s versus doctoral students and allowed the importance ratings to reveal that information: Respondents from each type of program indicated, through their importance ratings, which items were key drivers for them. See Tables 2 and 3 for the variables included in the rankings for PhD and MA/MS programs, respectively.

 

Table 2
Criteria for Calculating PhD Rankings

Criteria included in PhD rankings | Importance weight (standardized) | Criteria not included in PhD rankings
Alumni network | -0.050 | Application process
Class offerings | 0.284 | Facilities available
Cost | 0.395 | Graduation requirements
Faculty quality/expertise | 1.917 | Internship opportunities
Faculty productivity | 0.061 | Location
Faculty research interests | 1.101 | Program flexibility
Funding resources | 1.026 | Program ranking
Job/Internship placements | 0.692 | Student productivity
Learn practical skills | 1.212 | Teaching opportunities
Opportunities for applied projects | 0.061 |
Program culture | 1.917 |
Program reputation | -0.347 |
Quality of life/fit and social relationships between grad students | 0.098 |
Research opportunities | 0.915 |
Student support by faculty/department | 1.397 |
Teaching model used | 0.098 |

Table 3
Criteria for Calculating MA/MS Rankings

Criteria included in MA/MS rankings | Importance weight (standardized) | Criteria not included in MA/MS rankings
Application process | -0.753 | Faculty productivity
Alumni network | 0.330 | Program ranking
Class offerings | 1.002 | Student productivity
Cost | 0.591 | Teaching opportunities
Facilities available | -0.902 |
Faculty quality/expertise | 1.861 |
Faculty research interests | -0.454 |
Funding resources | 0.031 |
Job/internship placements | 1.413 |
Learn practical skills | 2.010 |
Location | -0.044 |
Opportunities for applied projects | 0.703 |
Program culture | 1.076 |
Program flexibility | -0.940 |
Program reputation | 0.218 |
Quality of life/fit and social relationships between grad students | -0.156 |
Research opportunities | -0.977 |
Student support by faculty/department | 0.964 |
Teaching model used | 0.666 |

Importance ratings, together with students’ ratings of their programs on the criteria, were then used to rank programs according to what graduate students deemed important (giving more heavily weighted criteria more influence) and how students rated their programs. To calculate the overall rankings, we separated PhD and master’s programs and developed separate weights for each program type. We included all criteria whose importance rating fell no more than one standard deviation below the mean importance rating; criteria rated lower than that were excluded (9 criteria for PhD students and 4 criteria for MA/MS students were excluded from the ranking calculations). See Tables 2 and 3 for the variables included for PhD and MA/MS programs, respectively. After obtaining the mean importance rating for each item, we calculated a weight for each criterion: the criterion’s mean importance rating minus the mean of the total criteria set, divided by the standard deviation of the importance ratings (i.e., a standardized score). Each criterion’s weight was then multiplied by the mean rating on that criterion from each program, and the sum of these weighted criteria produced the overall score on which each program was ranked.
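For concreteness, the weighting and scoring procedure can be sketched in a few lines of Python. This is a minimal sketch based on our description of the method; the criterion names and numbers are purely illustrative, not the study’s data.

```python
# Illustrative sketch of the ranking procedure (hypothetical numbers,
# not the study's data). Weights are standardized importance ratings.
from statistics import mean, stdev

# Mean importance ratings (1-4 scale) from a hypothetical student sample
importance = {
    "faculty_quality": 3.83,
    "program_culture": 3.70,
    "learn_practical_skills": 3.64,
    "program_ranking": 2.10,  # rated low, so likely excluded
}

mu = mean(importance.values())
sigma = stdev(importance.values())

# Weight = (criterion importance - mean importance) / SD of importances
weights = {c: (r - mu) / sigma for c, r in importance.items()}

# Keep criteria rated no more than one SD below the mean (z > -1)
included = {c: w for c, w in weights.items() if w > -1}

# One hypothetical program's mean quality ratings on the included criteria
program_ratings = {
    "faculty_quality": 3.9,
    "program_culture": 3.6,
    "learn_practical_skills": 3.5,
}

# Program score = sum of weighted mean ratings; programs are ranked on this
score = sum(w * program_ratings[c] for c, w in included.items())
```

Sorting programs by this score, separately within the PhD and MA/MS samples, would then yield the overall rankings.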

 

This study received institutional review board approval as well as approval from the Institutional Research Committee at SIOP, which was instrumental in facilitating data collection. To protect the identity of participants, the Institutional Research Committee required that programs with fewer than four respondents not be analyzed for importance ratings or rankings. Therefore, programs whose students did not wish to participate, or whose program directors did not distribute the study information, were not included in these analyses. We can make no assertions about the quality or hypothetical ranking of such programs; a program’s absence from our list does not imply that it is not a quality program.

 

Results

 

Ratings were obtained from 1,049 current PhD and MA/MS students. Data meeting our inclusion criterion of N ≥ 4 came from 44 PhD programs and 48 MA/MS programs. The SIOP website states that there are 78 psychology PhD programs and 125 PhD programs in total (including programs housed in psychology departments, business departments, etc.), as well as 157 MA/MS programs. We received no data from some programs, which could be due to various reasons (e.g., students are not current SIOP members and did not receive the study notification, or program directors chose not to forward the recruitment information to their students), and from a few programs we did not receive enough data to include them in the rankings. We chose to report the top 20 PhD and MA/MS programs, respectively, for two reasons: (a) to showcase truly exceptional programs and (b) to shield programs that had strong participation in this study but were scored less favorably than other programs by their students.

 

MA/MS student respondents (N = 583) were primarily female (n = 340), with n = 190 male and n = 9 selecting prefer not to answer or not responding to this item. The average age of these respondents was 26.26 years (SD = 6.02). Of the MA/MS students surveyed, 66% were Caucasian, 9% were Asian, 8% selected Other, 8% were Hispanic/Latino(a), 5% were Black or African American, and 2 respondents were American Indian or Alaskan Native. On average, the MA/MS students in our study had a start year of 2016 (SD = .70), had been in their current program for approximately one and a half years (M = 1.53, SD = .46), and estimated their degree completion in 2018 (SD = .59).

Similar to the MA/MS sample, of the 466 responding doctoral students, n = 276 were female, n = 161 were male, and n = 18 selected prefer not to answer or did not respond to this item. The average age of these respondents was approximately 29 years (M = 28.85, SD = 7.41). The racial makeup of the PhD students was also analogous to that of the MA/MS students: 68% of PhD students surveyed were Caucasian, 9% were Asian, 5% selected Other, 5% were Hispanic/Latino(a), 4% were Black or African American, and one respondent was American Indian or Alaskan Native. The PhD students reported an average start year of 2014 (SD = 1.98), had been in their current program for more than 3 years (M = 3.32, SD = 1.75), and estimated degree completion in 2019 (SD = 1.42).

 

Table 4
Top 20 PhD Programs Overall Based on Student Rankings

Rank | Program | N | Raw score | Z-score
1. | Portland State University | 5 | 38.348 | 1.721
2. | Pennsylvania State University | 10 | 38.064 | 1.615
3. | Michigan State University | 10 | 37.629 | 1.453
4. | Texas A&M University | 8 | 37.582 | 1.435
5. | Old Dominion University | 9 | 36.766 | 1.169
6. | University of South Florida | 4 | 36.759 | 1.128
7. | Rice University | 14 | 36.263 | .943
8. | University of Georgia | 12 | 36.052 | .864
9. | Teachers College, Columbia University | 9 | 35.830 | .781
10. | George Mason University | 22 | 35.772 | .760
11. | Louisiana Tech University | 18 | 35.484 | .652
12. | Wayne State University | 6 | 35.452 | .640
13. | University of Minnesota | 7 | 35.406 | .623
14. | Northern Illinois University | 5 | 35.018 | .478
15. | University of Houston | 10 | 35.012 | .476
16. | Seattle Pacific University | 10 | 35.002 | .472
17. | University of Missouri–St. Louis | 11 | 34.989 | .467
18. | University of Oklahoma | 9 | 34.820 | .404
19. | Florida International University | 7 | 34.819 | .404
20. | University of Akron | 15 | 34.739 | .374

 

Table 5
Top 20 MA/MS Programs Overall Based on Student Rankings

Rank | Program | N | Raw score | Z-score
1. | Xavier University | 19 | 32.107 | 1.865
2. | University of Tennessee at Chattanooga | 18 | 31.764 | 1.706
3. | Appalachian State University | 13 | 31.058 | 1.379
4. | New York University | 4 | 30.619 | 1.176
5. | Middle Tennessee State University | 25 | 30.597 | 1.166
6-tie. | San Diego State University | 5 | 30.329 | 1.041
6-tie. | University of Maryland, College Park | 12 | 30.329 | 1.041
8. | George Mason University | 15 | 30.027 | .902
9. | Missouri State University | 21 | 29.874 | .831
10. | Teachers College, Columbia University | 21 | 29.785 | .789
11. | University of Akron | 8 | 29.778 | .786
12. | Radford University | 19 | 29.737 | .767
13. | Hofstra University | 11 | 29.614 | .710
14. | Florida Institute of Technology | 6 | 29.526 | .670
15. | Minnesota State University–Mankato | 17 | 29.489 | .652
16. | Chicago School of Professional Psychology | 17 | 29.448 | .633
17. | University of Guelph | 8 | 29.058 | .453
18. | University of Georgia | 18 | 28.947 | .401
19. | Wayne State University | 5 | 28.756 | .313
20. | Indiana University-Purdue University Indianapolis | 6 | 28.717 | .295

 

To highlight the specific criteria that are most important to I-O graduate students and to provide more information on program characteristics that may be of interest to faculty and students, we report that information here. The five criteria PhD students rated as most important were faculty quality/expertise (M = 3.83), program culture (M = 3.70), student support by faculty/department (M = 3.69), learn practical skills (M = 3.64), and faculty research interests (M = 3.61). The five criteria MA/MS students rated as most important were the opportunity to learn practical skills (M = 3.85), followed by faculty quality/expertise (M = 3.81), job/internship placements (M = 3.69), program culture (M = 3.60), and class offerings (M = 3.58). The tables below present program rankings based on the top three criteria rated as highly important by both PhD and MA/MS students.
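In these criterion-level tables, programs are ordered by their mean student rating on the criterion, with tied programs sharing a rank (e.g., “3-tie”). A minimal sketch of this tie-aware (“competition”) ranking scheme, using made-up program names and scores, could look like this:

```python
# Hypothetical mean student ratings on one criterion (illustrative only)
mean_scores = {
    "Program A": 3.89,
    "Program B": 3.83,
    "Program C": 3.83,
    "Program D": 3.80,
}

# Competition ranking: a program's rank is 1 plus the number of programs
# with a strictly higher score, so tied programs share a rank and the
# next distinct score skips ahead (1, 2-tie, 2-tie, 4).
scores = list(mean_scores.values())
ranks = {p: 1 + sum(other > s for other in scores) for p, s in mean_scores.items()}
```

Under this scheme, Programs B and C above would both be reported as “2-tie” and Program D as rank 4.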

 

Table 6
Rankings of Program Culture

Rank | PhD programs | Score | Rank | MS/MA programs | Score
1. | Old Dominion University | 3.89 | 1-tie. | Indiana University-Purdue University Indianapolis | 4.00
2. | University of Akron | 3.87 | 1-tie. | Keiser University | 4.00
3-tie. | Wayne State University | 3.83 | 1-tie. | University of Guelph | 4.00
3-tie. | Clemson University | 3.83 | 4. | Xavier University | 3.95
5-tie. | Pennsylvania State University | 3.80 | 5. | George Mason University | 3.88
5-tie. | Portland State University | 3.80 | 6. | Appalachian State University | 3.85
7. | University of Oklahoma | 3.78 | 7. | Florida Institute of Technology | 3.83
8. | Texas A&M University | 3.75 | 8-tie. | Missouri State University | 3.78
9-tie. | Florida International University | 3.71 | 8-tie. | University of Tennessee at Chattanooga | 3.78
9-tie. | Rice University | 3.71 | 10. | Carlos Albizu University | 3.77
11. | Seattle Pacific University | 3.70 | 11-tie. | California State University, Long Beach | 3.75
12. | University of Georgia | 3.67 | 11-tie. | Elmhurst College | 3.75
13-tie. | George Mason University | 3.64 | 11-tie. | University of Nebraska at Omaha | 3.75
13-tie. | University of Missouri–St. Louis | 3.64 | 14-tie. | University of West Florida | 3.71
15. | Teachers College, Columbia University | 3.63 | 14-tie. | University of Wisconsin-Stout | 3.71
16-tie. | Michigan State University | 3.60 | 16. | University of Maryland, College Park | 3.69
16-tie. | Northern Illinois University | 3.60 | 17-tie. | Minnesota State University–Mankato | 3.67
16-tie. | Hofstra University | 3.60 | 17-tie. | Missouri University of Science and Technology | 3.67
16-tie. | Chicago School of Professional Psychology | 3.60 | 19-tie. | Hofstra University | 3.64
20. | Florida Institute of Technology | 3.58 | 19-tie. | Middle Tennessee State University | 3.64

 

Table 7
Learn Practical Skills

Rank | PhD programs | Score | Rank | MS/MA programs | Score
1. | Louisiana Tech University | 3.94 | 1-tie. | East Carolina University | 4.00
2. | Michigan State University | 3.89 | 1-tie. | Illinois Institute of Technology | 4.00
3. | Alliant International University | 3.75 | 1-tie. | Keiser University | 4.00
4-tie. | Pennsylvania State University | 3.70 | 1-tie. | San Diego State University | 4.00
4-tie. | Seattle Pacific University | 3.70 | 1-tie. | Seattle Pacific University | 4.00
6. | Teachers College, Columbia University | 3.63 | 1-tie. | University of Georgia | 4.00
7-tie. | Keiser University | 3.60 | 1-tie. | University of Nebraska at Omaha | 4.00
7-tie. | Portland State University | 3.60 | 1-tie. | University of Texas at Arlington | 4.00
9. | Florida Institute of Technology | 3.55 | 1-tie. | University of West Florida | 4.00
10-tie. | University of Connecticut | 3.50 | 1-tie. | Wayne State University | 4.00
10-tie. | University of Houston | 3.50 | 1-tie. | West Chester University | 4.00
10-tie. | University of Tulsa | 3.50 | 12. | University of Tennessee at Chattanooga | 3.94
13. | University of Akron | 3.47 | 13. | Carlos Albizu University | 3.93
14-tie. | Illinois Institute of Technology | 3.44 | 14-tie. | California State University, Long Beach | 3.92
14-tie. | Old Dominion University | 3.44 | 14-tie. | University of Maryland, College Park | 3.92
16-tie. | Roosevelt University | 3.40 | 16-tie. | Hofstra University | 3.91
16-tie. | Chicago School of Professional Psychology | 3.40 | 16-tie. | University of Baltimore | 3.91
18. | Central Michigan University | 3.38 | 18. | University of Maryland, Baltimore County | 3.90
19. | Wayne State University | 3.33 | 19-tie. | Minnesota State University–Mankato | 3.89
20-tie. | Texas A&M University | 3.29 | 19-tie. | Xavier University | 3.89
20-tie. | University of Guelph | 3.29 | | |

 

 

 

 

Table 8
Faculty Quality

Rank | PhD programs | Score | Rank | MS/MA programs | Score
1-tie. | Portland State University | 4.00 | 1. | Keiser University | 4.00
1-tie. | University of South Florida | 4.00 | 2-tie. | Indiana University-Purdue University Indianapolis | 3.83
3. | Michigan State University | 3.90 | 2-tie. | University of Tennessee at Chattanooga | 3.83
4. | Texas A&M University | 3.88 | 4-tie. | George Mason University | 3.80
5-tie. | Rice University | 3.86 | 4-tie. | San Diego State University | 3.80
5-tie. | University of Minnesota | 3.86 | 4-tie. | Wayne State University | 3.80
7. | University of Georgia | 3.83 | 7-tie. | University of Akron | 3.75
8. | Pennsylvania State University | 3.80 | 7-tie. | University of Minnesota Duluth | 3.75
9. | Teachers College, Columbia University | 3.75 | 9. | Appalachian State University | 3.69
10. | University of Missouri–St. Louis | 3.73 | 10. | University of Maryland, College Park | 3.67
11. | Florida International University | 3.71 | 11. | University of Baltimore | 3.64
12. | Clemson University | 3.67 | 12. | Missouri State University | 3.62
13. | George Mason University | 3.64 | 13-tie. | Middle Tennessee State University | 3.60
14. | Louisiana Tech University | 3.61 | 13-tie. | New York University | 3.60
15-tie. | Hofstra University | 3.60 | 15. | Teachers College, Columbia University | 3.56
15-tie. | Northern Illinois University | 3.60 | 16-tie. | Florida Institute of Technology | 3.50
15-tie. | University of Illinois at Urbana-Champaign | 3.60 | 16-tie. | Keiser University | 3.50
18. | Bowling Green State University | 3.56 | 16-tie. | Missouri University of Science and Technology | 3.50
19-tie. | University of Central Florida | 3.50 | 16-tie. | Touro College | 3.50
19-tie. | University of Phoenix | 3.50 | 16-tie. | University at Albany, SUNY | 3.50

 

 

Discussion

 

The overarching goal of this research was to provide valuable information about I-O graduate programs to both incoming students and current faculty. We believe that using the perceptions of current students provides unique insight for ranking graduate programs. Regardless of the method used to choose a program, prospective students are driven to attend the best program possible, and as this research and the other I-O program rankings noted above show, there are many ways to evaluate programs. A secondary aim of this research was to provide rankings for MA/MS programs in addition to the more common PhD program rankings. Even though it can be more challenging to find methods for ranking master’s programs, students who wish to attend those programs should have a resource similar to that available to students evaluating PhD programs.

 

That said, we must acknowledge that in the course of this research we observed that “program ranking” was not an important criterion for students; it was indeed among the lowest rated considerations for students from both PhD and master’s programs. This may indicate that some of our presumptions about the use or practical value of rankings, at least for prospective students, were incorrect. It may be that more fine-grained information, such as ratings on specific criteria, is more useful (e.g., one student may care most about funding packages, whereas another may care most about research opportunities).

 

Furthermore, we also want to note that the samples collected from each institution may be biased. Students who were either very happy or very disgruntled may have been the most motivated to respond to our inquiry. In addition, students who are research focused may be overrepresented here and those interested in teaching or applied work underrepresented, as “teaching opportunities” was rated low in importance in these samples, as was “internship opportunities” among PhD students.

 

We want to underscore that programs not included in the top 20 PhD or MA/MS lists may also be of high quality, and any different method of ranking programs will likely produce different results. We recognize that this type of research has its drawbacks (e.g., potential rater bias, as respondents are already in their chosen programs) and that not all academics support the use of program rankings. There are a number of reasons a given program may have been omitted from these rankings, and low quality is not necessarily among them. Nevertheless, we hope this work provides a useful update to previous attempts at student-driven rankings of I-O graduate programs.

 

Ms. Roman and Ms. Barnett wish to thank Dr. Erin Eatough and Dr. Charles Scherbaum for serving as faculty advisors for this project as their advice and guidance were crucial.  We also wish to acknowledge Ms. Stefanie Gisler and Ms. Sabrina Yu for their valuable contributions to the project.  

 

References

 

America’s best graduate schools. (1995, March 20). U.S. News & World Report.

America’s best graduate schools. (2001, April 9). U.S. News & World Report.

Jones, R. G., & Klimoski, R. J. (1991). Excellence of academic institutions as reflected by backgrounds of editorial board members. The Industrial-Organizational Psychologist, 28(3), 57-63.

Kraiger, K., & Abalos, A. (2004). Rankings of graduate programs in I-O psychology based on student ratings of quality. The Industrial-Organizational Psychologist, 42(1), 28-43.

Levine, E. L. (1990). Institutional and individual research productivity in I-O psychology during the 1980s. The Industrial-Organizational Psychologist, 27(3), 27-29.

Payne, S. C., Succa, C. A., Maxey, T. D., & Bolton, K. R. (2001). Institutional representation in the SIOP conference program: 1986-2000. The Industrial-Organizational Psychologist, 39(1), 53-60.

Plous, S. (1993). The psychology of judgment and decision making (p. 233). New York, NY: McGraw-Hill.

Surrette, M. A. (1989). Ranking I-O graduate programs on the basis of student research presentations. The Industrial-Organizational Psychologist, 26(3), 41-44.

Surrette, M. A. (2002). Ranking I-O graduate programs on the basis of student research presentations at IOOB: An update. The Industrial-Organizational Psychologist, 40(1), 113-116.

Why customer satisfaction is important (and how to focus on it). (n.d.). Retrieved January 7, 2018, from https://www.surveymonkey.com/mp/customer-satisfaction-important-focus/

Winter, J. L., Healy, M. C., & Svyantek, D. J. (1995). North America’s top I-O psychology doctoral programs: U.S. News and World Report revisited. The Industrial-Organizational Psychologist, 33(1), 54-58.

 
