
An I-O Perspective on Machine Learning in HR

Richard N. Landers answers a few questions about the impact of AI and machine learning on I-O psychology in support of SIOP’s Smarter Workplace Awareness Month.

Dr. Landers is an associate professor of psychology at the University of Minnesota and holds the John P. Campbell Distinguished Professorship of Industrial-Organizational Psychology. His research concerns the use of innovative technologies in psychometric assessment, employee selection, adult learning, and research methods, with a recent focus on game-based assessment, gamification, artificial intelligence, unproctored and mobile Internet-based testing, virtual reality, and social media. His work has been published in the Journal of Applied Psychology, Computers in Human Behavior, and Psychological Methods, among others, and has been featured in popular outlets such as Forbes, Business Insider, and Popular Science.

  • Let’s start with a rather lengthy question: In the first chapter of the Cambridge Handbook of Technology and Employee Behavior,1 you say that I-O psychology as a field is “poised to plunge headfirst into [its] own obsolescence” and that this threat is further amplified by the accelerating pace of change in technology. What role do artificial intelligence and machine learning play in this issue?

Artificial intelligence is a pretty general term and really refers to computers mimicking human behavior. In that sense, the kind of predictions that I-O psychologists have made for a long time are a sort of artificial intelligence. As soon as you tell a computer to create a predicted score on the basis of a regression line, you have essentially engaged in a sort of artificial intelligence.
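To make that concrete, here is a minimal sketch of the kind of regression-based prediction he is describing. The numbers and variable names are made up purely for illustration; the point is that once the line is fit, the computer, not a person, produces the predicted score for a new applicant.

```python
import numpy as np

# Hypothetical example: predict job performance ratings from a
# cognitive ability test score using an ordinary least-squares line.
test_scores = np.array([52, 61, 70, 74, 85, 90], dtype=float)   # predictor
performance = np.array([2.9, 3.1, 3.6, 3.5, 4.2, 4.4])          # criterion

# Fit the regression line (slope and intercept) from the sample.
slope, intercept = np.polyfit(test_scores, performance, deg=1)

# The "artificial intelligence" step: the computer generates a
# predicted score for an applicant it has never seen before.
new_applicant_score = 78
predicted_performance = slope * new_applicant_score + intercept
print(f"Predicted performance: {predicted_performance:.2f}")
```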

A subset of that is machine learning, which is where predictive modeling has moved from its origins in the earlier days of artificial intelligence. When you talk about AI 30 years ago, in practice, the kinds of things that people wanted to do—the kind of advanced prediction they were trying to get—were not possible given the computing systems they had at the time, so the AI that I-O was using then was totally sufficient, because it was state-of-the-art, cutting-edge, machine-based prediction.

What has happened in the last 5 to 10 years is that computing power has grown extraordinarily; the rate of new technologies related to predictive power has increased, and suddenly we are able to do the kinds of things that the AI and statistics communities were pointing toward 30 years ago. I-O, which used to be cutting edge in terms of prediction, is now falling behind. We need to take the same attitude that we took then—an attitude shared across the whole field of psychology—that we should reinvent statistics for ourselves (as we did in creating the entire field of psychometrics over half a century ago) and say, “There is unique, interesting stuff coming out of these new, modern prediction methods, and we need to figure out how to make it our own.” If we don’t do that, then the data science and computer science communities will do it instead; they’re not going to have our values, and they’re going to run away with the farm, so to speak.

  • How much of the rhetoric behind the integration of artificial intelligence and machine learning into organizational practices would you say misrepresents the reality? I think your distinction between “Valley hype” and “real research” at the Microsoft Research Faculty Summit2 kind of gets at this idea.

It reflects a really big difference between the culture of the pop-science communities that are pushing machine learning and the culture of I-O. I-O historically has been, and still is, extremely conservative in terms of integrating new ideas, whereas you have the opposite problem with the Silicon Valley community, which openly embraces and perhaps overpromises what new technologies and new advancements can do. The real value is going to be somewhere between the two, but it does make for a very confusing landscape for HR practitioners in this space; they see what I-O is doing and say, “Well, that seems slow and boring; you’re telling me I can’t ask whatever questions I want in interviews, that you’re just going to plug some numbers into a formula, and that people are going to have to sit down and take five surveys…that all seems very boring.”

Then, the computer science people are saying, “We can use games, we can magically draw conclusions about people from them, we can look at social media profiles and get all kinds of crazy information that you wouldn’t otherwise have access to” and that just seems much more exciting, but the reality is that although there are some insights to be gleaned from those kinds of data, we don’t yet have much confidence in what kinds of insights those are.

We don’t really know where the real value is added, so we’re right now in a sort of hype war where I-O is saying “Slow down, you need to be careful, watch what you’re doing, know what you’re measuring, rely on tried-and-true practices,” and on the other side, you have Silicon Valley HR startups saying “We’ll magically solve all of your HR problems,” and they are able to get somewhat more impressive results in terms of appearances than we are, because they at least can say “Here, play these games, we’ll grab stuff from social media, we’ll do all these amazing technological things that you couldn’t do before,” and from a naïve perspective, it seems great.

So we have to be out in front of that, I think, to say, “Let’s blend I-O psychology’s tried-and-true practices, where we know what we’re measuring and we’re very confident in the kinds of recommendations we’re giving, and let’s figure out where the intersections are with some of the new stuff coming out, to figure out what is truly new and useful and what is just a faddish waste of time.”

  • What impact does this type of rhetoric have on HR practices?

I-O psychology has historically not been great at marketing itself. In my experience, the firms where I-O psychology has a presence do tend to be a little bit more resistant to some of the newer stuff coming out; the big challenge, then, just becomes exposure. If you go into your average company in the United States and ask HR “What is I-O psychology, and what is data science?” the penetration rate of data science is way higher, despite data science being a much more recent and more ambiguous term.

The biggest impact on HR now is that same kind of “flash over substance” problem, where there is some substance there, it’s just hard to tell what it is, and the salesmanship of the data science community is just phenomenally better than anything I-O has ever done. The direct effect on HR is that some HR practitioners, when they adopt these tools, are going to be using great new innovations that really help, whereas some are going to be pure snake oil, and there’s not really any way for an HR practitioner to know the difference at this point. It requires an expertise that isn’t common because to know what’s wrong with both, you have to have expertise in both, and there aren’t many people who can speak the language of data science and the language of I-O psychology, and having both of those is what’s critical here.

  • To what extent do you believe that the fact that organizational advances in artificial intelligence and machine learning are proprietary, or protected by copyright or other means of secrecy, prevents progress in our understanding of them?

It’s a funny sort of area; the power typically lies less in the algorithms, in terms of how the modeling is done, than in the data, and the data itself is what is proprietary. When you look at some of the major companies that are doing AI research (like Google and Facebook), those companies very frequently release free versions of some of the AI tools they’ve been using—TensorFlow being a good example of that.

The reason they can do that is that they know that, without the massive datasets they have, those modeling techniques are not going to produce the kind and quality of conclusions that the Googles and Facebooks of the world can get with them. So it’s not so much about proprietary advances in AI algorithms per se, because those algorithms are used to build predictive models, and predictive models require data, and that’s where the real value lies.

There are not very many industry advances in AI modeling I can point to that haven’t been released publicly and that are really game changers. It’s not as if many companies have fundamentally found a way to increase generalizable R-squared by .2 in any model you throw at it, such that it’s a case of “oh, we would all use that if only it weren’t proprietary.” Instead, it’s these large companies that are able to create really valuable datasets, and if the broader community had access to them, maybe there would be some bigger advances, but I don’t see the release of that kind of data being very likely.

There have been a number of quotes in the last few years about data being the new economy or currency of tech, and that’s what it all comes down to. With regard to the actual modeling techniques, that’s like saying “if only ordinary least-squares regression had been copyrighted, we could earn so much money on our copyright on regression.” Those advances mean something, but they tend to be pretty incremental. With the movement from neural network modeling to convolutional neural network modeling and other types of neural network modeling, you’re talking about very small increases in accuracy but without the need for gigantic datasets to support them. So I don’t know that the algorithms themselves hold us back a lot, but the holding on to data perhaps does.

  • You’ve tweeted that “Pretty much every major tech innovation and threat can be explained very simply if you actually understand it.”3 Is it fair to say, then, that much of the buzz and ambiguity that often accompanies the discussion of artificial intelligence or machine learning comes from a misunderstanding of it?

Oh, yeah! A great deal of the confusion comes from that. Part of it is that you do need a broad understanding of statistics to understand machine learning, but once you understand statistics (the traditional, classical statistics that I-Os are typically trained in), the jump from that to machine learning is not very big.

Machine learning is fundamentally just about iterating over a dataset to get better- and better-fitting models. So, if you understand the concept of model fit, that the best-fitting line in regression minimizes the sum of squared residuals, then machine learning is a very small step from that.

In the data science community, minimizing the sum of squared residuals would be called minimizing a cost function, and the core of machine learning is that there are different cost functions. Instead of the sum of squared residuals, you’re minimizing something else, and that something else is chosen based on your priorities. For example, do you want to minimize certain types of errors, or have models with fewer or more predictors in them? There are a number of secondary decisions one must make to choose a cost function. It’s a different way of thinking than we traditionally use in statistics, where it’s more like “I have five continuous predictors and a criterion I need to predict, so I use regression”; it’s not so plug-and-play in that way, and you instead have to reason through why the models work the way they do, but it’s not such a transformative change that it should be unreachable to your average I-O.
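As a concrete sketch of that idea (using simulated data and scikit-learn, both of which are my own illustrative choices rather than anything specified in the interview), the snippet below contrasts two cost functions: ordinary least squares, which minimizes only the sum of squared residuals, and a lasso model, whose cost function adds a penalty on coefficient size and therefore tends to keep fewer predictors.

```python
import numpy as np
from sklearn.linear_model import LinearRegression, Lasso

# Simulated, purely illustrative data: 200 people, 5 continuous predictors,
# but only the first two actually relate to the criterion.
rng = np.random.default_rng(42)
X = rng.normal(size=(200, 5))
y = 0.6 * X[:, 0] + 0.3 * X[:, 1] + rng.normal(scale=0.5, size=200)

# Cost function 1: minimize the sum of squared residuals (classic OLS).
ols = LinearRegression().fit(X, y)

# Cost function 2: squared residuals PLUS a penalty on coefficient size,
# which pushes unhelpful predictors toward zero (a sparser model).
lasso = Lasso(alpha=0.1).fit(X, y)

print("OLS coefficients:  ", np.round(ols.coef_, 2))
print("Lasso coefficients:", np.round(lasso.coef_, 2))
# The two cost functions encode different priorities: raw fit versus
# fit plus simplicity (fewer predictors retained in the model).
```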

Where it gets scary is with people who don’t have any statistics background at all, like HR practitioners; that’s when AI seems to become magic. As soon as you have people who really know nothing about statistics talking about AI, it seems like AI can do literally anything, and that’s dangerous. That doesn’t reflect reality at all, so any time I hear about a new AI-based product coming out, especially in the HR tech space, I try to look at it and ask, “What kind of data are they predicting from? What kind of criterion data have they collected that they’re trying to build models on? What kind of datasets have they collected?” These are the same questions we ask in everyday I-O psychology research, so if I can’t answer them, that’s when the hype meter starts to rise. If they say, “We’ve found new types of people and we’ve classified them and we can tell this and that,” I’m skeptical of that claim because I don’t know what kind of data they’re basing their conclusions on, and that basic external/internal validity question is not all that different from what we do every day in I-O psychology.

  • But it’s not all doom and gloom. In your chapter of the Cambridge Handbook, you outline some of your own proposals for action. Could you talk about these, particularly as they relate to the integration of I-O psychology with human resources?

There are a lot of dimensions to it, and that would probably take a while! But the main area where we would see the most improvement is if we were to have true integration across traditional silos.

Right now, we have HR specialists who dig deep into HR, and they’re great at it. You have I-O specialists who dig deep into the research and really understand how to interpret and apply research findings, and you have the data scientists and computer engineers who are actually creating new predictive solutions and new approaches to dealing with data.

Traditionally, what happens in a multidisciplinary environment is that everybody brings their own expertise to the table, they chat about it, and then they all go back to their own silos and try to figure out how to apply what they’ve learned by themselves. That works to a point, but it creates problems when you don’t have parity between the different groups.

Earlier, I described how data scientists are really masters of the hype train. That creates an imbalance because suddenly the data science group becomes far more convincing to the C-suite and other stakeholders, so they end up having a louder voice. In a multidisciplinary environment, that really means their silo gets prioritized; that’s bad. Instead, we need interdisciplinary environments where those teams aren’t just talking to each other but literally working together to create something new.

Every time a new HR algorithm is deployed in a company, every time data scientists have a new product, those would then have the voices of I-O psychologists and HR personnel in them to say, “Here is where this will work and won’t work, here’s what we know and what we don’t know, here’s where we’re at the cutting edge, and here’s where we’re doing something we’ve known for 50 years,” and it’s only with that combination of expertise working together to create something better that we’re going to get out of this hole we’re finding ourselves in now.

It’s always easier to stick to your silo, especially for PhDs who have been out in the field for a while; if you’ve been doing the same thing for 20 years and it’s worked just fine until now, there’s very little obvious motivation to try to stretch beyond what you know, but that’s the risk. That’s what’s going to eventually have us walking off the cliff. The solution to that is to really work together; not just beside each other, but together.


This interview was contributed by SIOP Student Affiliate Colin Omori. Colin is a third-year doctoral student at Louisiana Tech University and holds an MA in I-O Psychology from Minnesota State University-Mankato.

September is Smarter Workplace Awareness Month! Smarter Workplace Awareness Month is all about celebrating and promoting the science and practice of I-O psychology and how I-O psychology can help to make workplaces better. This year, we are focusing on the Top Ten Workplace Trends for 2019.

References

1. Landers, R. N. (2019). The existential threats to I-O psychology highlighted by rapid technological change. In The Cambridge handbook of technology and employee behavior (pp. 3-21).

2. https://twitter.com/rnlanders/status/1151523052561162242

3. https://twitter.com/rnlanders/status/1152269622298763266
