Is Cognitive Ability the Best Predictor of Job Performance? New Research Says It’s Time to Think Again

Patrick Gavan O’Shea and Adrienne Fox Luscombe, Human Resources Research Organization (HumRRO)

Meta-analyses have overestimated both the primacy of cognitive ability and the validity of a wide range of predictors within the personnel selection arena, according to groundbreaking research conducted by Paul R. Sackett and Charlene Zhang, University of Minnesota; Christopher Berry, Indiana University; and Filip Lievens, Singapore Management University.

In a world that craves simplicity and certainty, researchers often take great pains to emphasize the tentative nature of their conclusions, a tendency well captured by the phrase “Statistics means never having to say you’re certain” and the shopworn joke that psychologists respond to every question with “It depends.”

Even so, there must be some things researchers assert confidently, right? Some unassailable, unimpeachable principles strong enough to build decades of research on?

Within the field of industrial-organizational (I-O) psychology, there has been at least one such fundamental truth: cognitive ability is the best predictor of work performance. Rooted in numerous meta-analyses (e.g., Schmidt & Hunter, 1998) and confidently proclaimed for over half a century, this principle has shaped decades of research as well as hiring and promotion practices.

It would therefore take a Herculean effort to thoughtfully and rigorously revisit the statistical corrections that lie at the heart of meta-analytic methods and challenge 50 years of research. Sackett et al. (2022) undertook exactly that effort, and among other intriguing findings, their work revealed that structured interviews, not cognitive ability, may in fact be the strongest predictor of job performance.

“I view this as the most important paper of my career,” Sackett said, noting that it offers a “course correction” to the I-O field’s cumulative knowledge about the validity of personnel selection assessments. This consequential paper, “Revisiting Meta-Analytic Estimates of Validity in Personnel Selection: Addressing Systematic Overcorrection for Restriction of Range,” was recently published in the Journal of Applied Psychology (Sackett et al., 2022).

Correcting the Corrections

The critique levied by Sackett and his coauthors (2022) aims directly at the “nuts and bolts” of meta-analytic methodology, so a brief review of those methods helps one fully appreciate the nature and importance of their contributions. As the most common approach to synthesizing research findings across studies, meta-analyses typically involve the following steps:

  1. Specifying the research domain, which in Sackett et al.’s case involved reviewing the predictive validity evidence for a wide variety of personnel selection assessments, including cognitive ability tests, structured and unstructured interviews, job knowledge tests, and personality and interest inventories.
  2. Identifying studies that have previously explored these relations quantitatively, including those used in earlier meta-analyses and in new primary studies. The metrics synthesized through this process are generally correlation coefficients.
  3. Using statistical adjustments to correct the correlations identified during Step 2 for limitations in the primary studies. Although mathematically complex, these adjustments rest on a straightforward premise: the correlations observed in primary studies consistently underestimate the true relations between personnel selection assessments and job performance, so correcting or “fixing” them yields a more accurate picture of the “true” correlations. Although a variety of corrections can be employed at this step, corrections for range restriction in the assessment scores and for criterion unreliability are the two most common (a minimal sketch of these two corrections appears after this list).
  4. Statistically summarizing the corrected correlations emerging from Step 3 to arrive at more stable and accurate estimates of the relations among specific personnel selection assessments and the outcome of interest.
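
To make Step 3 concrete, here is a minimal Python sketch of the two most common corrections: disattenuation for criterion unreliability and the classic Thorndike Case II correction for direct range restriction. The input values are hypothetical, and the sketch deliberately simplifies the procedures actually used in large-scale meta-analyses such as Sackett et al. (2022).

```python
from math import sqrt

def correct_for_criterion_unreliability(r_obs, r_yy):
    """Disattenuate an observed validity for unreliability in the
    criterion (e.g., supervisory ratings of job performance)."""
    return r_obs / sqrt(r_yy)

def correct_for_range_restriction(r, u):
    """Thorndike Case II correction for direct range restriction,
    where u = restricted SD / unrestricted SD of the predictor scores."""
    return (r / u) / sqrt(1 - r**2 + (r**2 / u**2))

# Hypothetical inputs: observed validity .25, criterion reliability .60,
# and a range restriction ratio of .80.
r_obs, r_yy, u = 0.25, 0.60, 0.80
r_disattenuated = correct_for_criterion_unreliability(r_obs, r_yy)
r_corrected = correct_for_range_restriction(r_disattenuated, u)
print(round(r_disattenuated, 3), round(r_corrected, 3))  # ~0.323, ~0.392
```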

Focusing on Step 3, Sackett and his colleagues (2022) argue that commonly used corrections systematically inflate relations among personnel selection assessments and job performance. They are particularly critical of one widespread practice: using range restriction estimates generated from predictive validation studies to correct the full set of studies in a meta-analysis, even though many of those studies used concurrent designs.

The two should not be treated alike. Predictive validation designs involve actual job applicants who are hired in part on the basis of the assessment, whereas concurrent validation designs administer the same assessment to current employees. Because current employees were not selected on that assessment, their scores are far less range restricted, and Sackett and his colleagues (2022) convincingly argue that applying “across the board” corrections therefore inflates validity estimates, sometimes to a substantial degree.
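
A small numeric illustration, again with hypothetical values, shows how much this matters: applying a restriction ratio borrowed from predictive studies to a concurrent study in which little restriction actually occurred inflates the corrected validity considerably.

```python
from math import sqrt

def correct_for_range_restriction(r, u):
    """Thorndike Case II correction; u = restricted SD / unrestricted SD."""
    return (r / u) / sqrt(1 - r**2 + (r**2 / u**2))

r_concurrent = 0.30  # hypothetical observed validity from a concurrent study

# If incumbents were not hired on this assessment, restriction may be mild...
print(round(correct_for_range_restriction(r_concurrent, 0.95), 3))  # ~0.314

# ...but a u value borrowed from predictive studies (heavy restriction)
# yields a much larger "corrected" validity for the same observed r.
print(round(correct_for_range_restriction(r_concurrent, 0.60), 3))  # ~0.464
```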

Future Research Implications

The nuanced and thoughtful critiques of meta-analytic corrections shared by Sackett and his colleagues (2022) extend beyond the example noted above, yet they all reflect a set of guiding principles that future meta-analytic work would be wise to follow:

  • Critically evaluate your assumptions. At the very least, Sackett and his colleagues advocate “an end to the practice of simply assuming a degree of restriction with no empirical basis.” However, this critical approach should be extended more broadly whenever meta-analysts evaluate sources of information, such as assessment score norms, that could potentially serve as the basis for range restriction corrections yet may not be relevant to personnel selection contexts.
  • Be conservative. This principle could also be expressed as “when in doubt, don’t correct.” If, after some critical thought, you conclude that you don’t have a credible estimate of range restriction or unreliability for a given study, it is better to be conservative and not correct than to apply an inaccurate correction.
  • Think locally. Rather than base corrections on general rules of thumb (for example, “the criterion reliability is .52”), ask yourself whether more “local” sources of information would likely provide a more accurate estimate (for example, reliability estimates for a specific type of criterion such as task versus contextual performance). A sketch of this decision logic follows this list.
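
One way to read these principles is as a simple decision rule: correct only when a credible, study-specific artifact estimate is available. The sketch below, with hypothetical inputs, is one possible way to encode that rule; it is not the procedure from the paper.

```python
from math import sqrt
from typing import Optional

def operational_validity(r_obs: float,
                         r_yy: Optional[float] = None,
                         u: Optional[float] = None) -> float:
    """Correct an observed validity only when credible, local artifact
    estimates are supplied; otherwise leave it as observed
    ("when in doubt, don't correct")."""
    r = r_obs
    if r_yy is not None:  # credible local criterion reliability estimate
        r = r / sqrt(r_yy)
    if u is not None:     # empirically based range restriction ratio
        r = (r / u) / sqrt(1 - r**2 + (r**2 / u**2))
    return r

# Hypothetical studies: only the first reports usable artifact estimates.
print(round(operational_validity(0.28, r_yy=0.70, u=0.85), 3))  # ~0.386
print(round(operational_validity(0.28), 3))                     # 0.28, uncorrected
```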

Practical Takeaways

Using these principles as a guide, Sackett and his colleagues (2022) re-analyzed studies included in earlier meta-analyses along with more recently conducted research, and the outcomes of their work hold many lessons for I-O researchers and practitioners alike:

  • Structured interviews emerged as the strongest predictors of job performance. Sackett and his colleagues offer that this finding “suggests a reframing: Although Schmidt and Hunter (1998) positioned cognitive ability as the focal predictor, with others evaluated in terms of their incremental validity over cognitive ability, one might propose structured interviews as the focal predictor against which others are evaluated.”
  • Structured interview validities are somewhat variable. While structured interviews had the highest mean operational validity (r = .42), they also showed a relatively high degree of spread around that mean. Particularly given the wide range of constructs targeted by structured interviews, not to mention the advent of digital interviewing and AI-based interview scoring, this finding is a compelling call for researchers to identify the factors responsible for this variation and the approaches to developing, administering, and scoring structured interviews that foster strong validities.
  • Job-specific assessments fared quite well. Along with structured interviews, several other job-specific assessments—including job knowledge tests, empirically keyed biodata, and work sample tests—appeared among the top five strongest predictors of job performance (with validities of .40, .38, and .33, respectively). Cognitive ability rounded out this list with a validity estimate of .31.
  • Interests should be measured in terms of the fit between a person’s interests and the interest profile of a specific job. Compared to Schmidt and Hunter’s (1998) work, the operational validity of interests increased from .10 to .24, a boost that stems from Sackett and his colleagues defining interests in a fit-based way (i.e., the match between personal interests and unique job demands) rather than a general way (i.e., the relation between a general type of interest, such as artistic or investigative, and overall job performance).
  • Tailoring personality items to the job context increases their predictive validity. In fact, the validities were so much stronger for contextualized personality assessments (i.e., adding “at work” to each item or asking applicants to respond in terms of how they behave at work) that Sackett and his colleagues suggest viewing them as essentially a different type of assessment relative to more general personality inventories.

This work will certainly have a lasting impact within I-O psychology’s research and practice domains, with clear promise to ignite fruitful collaborations between them. The findings are also consistent with the experiences of many I-O practitioners that well-crafted structured interviews, grounded in detailed job analytic data and conducted by well-trained interviewers, are one of the best personnel selection tools we have to offer our clients.

Note: An earlier version of this article appeared as a blog on HumRRO’s website. We wish to thank Cheryl Paullin and Paul Sackett for reviewing previous drafts.

References

Sackett, P. R., Zhang, C., Berry, C. M., & Lievens, F. (2022). Revisiting meta-analytic estimates of validity in personnel selection: Addressing systematic overcorrection for restriction of range. Journal of Applied Psychology, 107, 2040–2068. https://doi.org/10.1037/apl0000994

Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262–274. https://doi.org/10.1037/0033-2909.124.2.262
