
Beyond “Moneyball”

Dan J. Putka & David Dorsey, Human Resources Research Organization (HumRRO)

I-O Psychology and the Maturation of AI/ML Technology in HR

When people think of data and HR, one of the first images that may still come to mind is Moneyball. For those of you who don’t follow movies or baseball, Moneyball was a 2011 movie that chronicled how Major League Baseball’s Oakland Athletics, in 2002, used copious amounts of data to develop objective methods for quantifying human behavior and improving job performance.

Moneyball portrayed the subjective assessment of talent (scouts’ intuition) as far inferior to the use of data and analytic methods, something that Industrial-Organizational (I-O) psychologists have recognized for more than 100 years. The idea of “evidence-based decision making” evangelized in Moneyball opened the eyes of many HR and business practitioners to methods that have been at the core of the I-O field since its inception.

Today, we are well beyond Moneyball. Organizations are dealing with an avalanche of new HR-related data and technologies. Data are accruing faster, they’re getting bigger, and they’re arriving in myriad forms (i.e., “Big Data”). Amidst this onslaught of data, technology that promises to make use of such data is emerging, but it is evolving faster than organizations can assimilate it, and certainly faster than science can rigorously evaluate it. Organizational leaders have been racing to figure out how to capitalize on this newfound wealth of data and technology, but in an environment where advances are happening so quickly, it is easy to feel overwhelmed.

Artificial Intelligence (AI) and Machine Learning (ML) are two major drivers of the advancement of big data and technology. Part of the challenge in understanding, evaluating, and leveraging these technologies is that, unlike other areas of HR, they are inherently multidisciplinary. Indeed, look at nearly any AI/ML-related start-up with a nexus to HR and you’ll find teams dominated by engineers, computer scientists, developers, “data scientists,” and other tech- and math-savvy specialists. As the market for AI/ML applications in HR has continued to grow, the focus continues to be on the technology—a trend that we don’t see abating anytime soon. Moreover, in such a technology-dominated environment, it is easy for a relatively small field like I-O psychology to get lost in the shuffle and lose sight of the critical role “I-Os” can play.

To appreciate the differentiating value of I-O psychology, executives need to look beyond the hype surrounding AI/ML HR technology and consider tough downstream questions. It is in these questions that I-O psychology, and the research and standards our field has established over the past century, shine through. In short, I-Os can be of great value not only in helping executives separate the wheat from the chaff when evaluating existing AI/ML technology for HR, but also in creating more robust AI/ML HR technology for their organization in the first place. In the remainder of this piece we pose and answer five questions that illustrate the value of I-O in both evaluating and creating AI/ML technologies for HR.

  • How does the AI/ML technology ensure the quality of the data it ingests to inform predictions or forecasts? The old adage “garbage in, garbage out” doesn’t disappear in the Big Data age; the Big Data age just makes the potential landfill much larger and the process of sifting through it more challenging. At the end of the day, it has to be a person’s or team’s responsibility to ensure data quality, not a machine’s. Along these lines, I-O psychology offers a depth and breadth of research and experience that eclipses many other disciplines for objectively evaluating the quality of “people data” and the inferences made with those data (a simple illustration of such checks follows this list).
     
  • What evidence can the AI/ML technology developer provide for the quality of the output it produces? Here, “evidence” must withstand judgment in light of professional principles and standards that have existed for decades. Yes, such standards do exist, they apply to big and small data alike, and they are regularly updated and refined (e.g., SIOP, 2003; AERA, APA, & NCME, 2014; the ITC Guidelines). These principles and standards draw heavily on research and practice in the field of I-O psychology and related scientific disciplines that concern themselves with the measurement, prediction, and explanation of people’s psychological attributes, behavior, and attitudes.
     
  • What evidence can the application developers provide that it will have a demonstrable positive impact on the organization? “Adopting this technology will reduce turnover among new hires by 20% and save your organization tens of thousands of dollars a year, and we have a study to back it up!” Sound familiar? Claims regarding what any given piece of AI/ML can do vary wildly in terms of the quality of the evidence on which they are based. Not all studies are equal, and not all are executed with the same level of rigor or attention to professional principles and standards. Evaluating the quality of the studies and data used to gauge the efficacy of AI/ML-related HR tech is something that I-O psychologists are well trained to do.
     
  • What potential is there for the implementation of the technology to have undesirable consequences? Popular treatments of possible AI/ML misuse or unintentional bias have become big sellers (e.g., Weapons of Math Destruction). Even if an AI/ML HR app lives up to its hype (e.g., adoption leads to a significant reduction in turnover, faster hiring, a more engaged workforce, increased efficiency), it may come at a hidden cost that your organization is unwilling to accept (e.g., a reduction in workforce diversity, violation of employment law, infringement on employee privacy). With regard to these concerns, we in the HR/I-O community are not alone. In the healthcare industry, for example, researchers and practitioners alike are contemplating the next killer AI app that revolutionizes medical decision making but in the process violates HIPAA (the Health Insurance Portability and Accountability Act of 1996). In the employment arena, I-O psychologists are well attuned to the tradeoffs and consequences associated with various types of assessment and decision-making strategies; one classic screen, the four-fifths rule for adverse impact, is sketched after this list. These unintended consequences can be very hard to see without going beyond the technology, getting into the substance of why the technology “works,” and having subject matter expertise in the content involved. This brings us to the last question…
     
  • Why does the technology work? On one level, you might wonder why we should care (if it works, it works). Why should we seek to open the black box? The answer lies in the fact that employment decision making does not happen in a vacuum. It happens in an increasingly complex regulatory environment (e.g., employment laws, data privacy laws), which becomes even more complex if one is working across countries (e.g., see the unfolding consequences of Europe’s GDPR legislation). These issues are not typically on the radar of engineers and computer scientists, but the field of I-O psychology has been immersed in them for as long as legal requirements have governed workforce decisions. This is a bread-and-butter issue for the I-O field. Understanding why the technology produces the solutions it does is critical to evaluating its defensibility from a regulatory perspective.
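
To make the first and fourth questions above more concrete, here is a minimal sketch, in Python with pandas, of the kinds of checks an I-O psychologist might run before trusting an AI/ML tool’s output: a basic data-quality audit of the data the tool ingests, and a four-fifths-rule screen for adverse impact in the selections it recommends. Everything here—the column names, the synthetic data, and the plausibility thresholds—is an illustrative assumption, not a depiction of any particular product.

```python
# Illustrative sketch only: a minimal data-quality audit and a
# four-fifths-rule adverse-impact screen. Column names, synthetic
# data, and thresholds are assumptions made for this example.
import numpy as np
import pandas as pd

# Stand-in for the applicant data an AI/ML hiring tool might ingest.
rng = np.random.default_rng(0)
n = 400
df = pd.DataFrame({
    "age": rng.integers(18, 66, size=n),
    "tenure_months": rng.integers(0, 240, size=n),
    "group": rng.choice(["A", "B"], size=n, p=[0.7, 0.3]),
    "selected": rng.integers(0, 2, size=n),  # 1 = recommended for hire
})

# --- Question 1: "garbage in, garbage out" data-quality audit ---
quality_report = {
    "rows": len(df),
    "duplicate_rows": int(df.duplicated().sum()),
    "missing_by_column": df.isna().mean().round(3).to_dict(),
    "out_of_range_age": int((~df["age"].between(16, 90)).sum()),
    "negative_tenure": int((df["tenure_months"] < 0).sum()),
}
print(quality_report)

# --- Question 4: four-fifths (80%) rule screen for adverse impact ---
rates = df.groupby("group")["selected"].mean()   # selection rate per group
impact_ratios = rates / rates.max()              # each group vs. highest rate
flagged = impact_ratios[impact_ratios < 0.80]    # groups below the threshold
print("Selection rates:\n", rates.round(3))
print("Groups below the four-fifths threshold:\n", flagged.round(3))
```

The four-fifths rule is only a first screen, not by itself evidence of unlawful discrimination; in practice, I-O psychologists follow such a screen with more rigorous statistical and practical-significance analyses, which is exactly the kind of judgment the questions above call for.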

AI/ML HR tech doesn’t just have potential legal implications for the organization. To the extent the technology affects people’s lives, they often want answers to “why” questions. Trust is a major facet of technology adoption that is often overlooked. Consider the employee who receives career-altering recommendations for training or a career path from a machine, or the established manager tasked with making promotion decisions who receives machine-augmented advice. They must have access to the “why” behind the recommendations offered. Fortunately, “explainable AI” is an active research area, but such lines of inquiry can only benefit from subject matter expertise and the use of established theory. Given I-O psychologists’ training in assessment and the underlying theories, we are well positioned to help explain and drive what’s happening “under the hood.”
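
As one simplified illustration of what “opening the black box” can look like, the sketch below computes permutation importance: it measures how much a model’s held-out accuracy drops when each predictor is shuffled, giving a rough, model-agnostic answer to “which inputs is this tool actually relying on?” The model, the synthetic data, and the feature names are placeholders we made up for the example, not any vendor’s method.

```python
# Illustrative sketch only: a model-agnostic "why" check via permutation
# importance. Data, model choice, and feature names are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.inspection import permutation_importance

# Hypothetical predictors a hiring tool might use (names are placeholders).
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=500) > 0).astype(int)
feature_names = ["structured_interview", "work_sample", "resume_keywords"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# How much does held-out accuracy drop when each predictor is shuffled?
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for name, mean_drop in zip(feature_names, result.importances_mean):
    print(f"{name}: {mean_drop:.3f}")
```

A readout like this is far from a full account of explainable AI, but it poses the question I-O theory is well suited to answer: are the inputs driving the recommendation actually job related, and can the “why” be defended to the people affected by it?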

Facing the questions above may not be as sexy or popular as jaw-dropping AI/ML tech, but if the end goal is results, impact, and long-term success, such questions must be answered. From a technology perspective, handling Big Data and building AI/ML apps still requires a tech- and math-savvy team, but the answers to the questions above fall squarely in the domain of I-O psychology, and our field brings a long history of helping organizations address them.

This leads us to one additional question: are I-O psychologists driving AI/ML technology change, attempting to be “fast followers,” or merely standing on the sidelines hoping to influence the conversation down the road? Certainly, some I-Os are part of tech startups, conducting interesting research using AI/ML, and inventing great uses of these technologies, but we (the authors) see a more prominent role. We see I-O psychologists helping to answer the harder questions posed above. In this role, I-Os can not only help shape the great promise of AI/ML tech adoption, but also serve the larger purpose inherent in the mission of the I-O field: to enhance human well-being and ensure long-term organizational performance and flourishing.
