
Volume 53, Number 4, April 2016. Editor: Morrie Mullins


We Feel a Change Comin’ On: I-O’s Rôle in the Future of Work

Olivia Reinecke and Steven Toaddy

We in I-O are fairly sporting when it comes to discussing the ambiguities and contradictions and inconsistencies associated with the nuances of human behavior in the workplace—cheers to us. We seem to falter, though, when it comes to talking about the future: the future of work, of organizations, of SIOP, of our own jobs. Our narratives become jumbled; we start talking past each other, focusing on different criteria, making different assumptions. Our background in science doesn’t prepare us to have meaningful conversations about speculation, prophecy, conjecture. This may be a point to our credit on most days, but it will not serve us if and when the world changes and we are caught off guard and unprepared.

Hence the focus for this edition of the I-Opener: Where is the world of work going and where will we fit in it? The discussion below is imperfect: It represents a single narrative among many possible narratives, a few perspectives among a myriad, many questionable assumptions. We simplified and filtered the prophecies; we asked leading and targeted questions; we, to some extent, knew what we were going to write before we began interviewing experts.

But this serves our purpose adequately. We want to start SIOP’s membership down this path of thought—and the more varied the conclusions at which members arrive, the better. We want to reveal the changes that are being anticipated. Instead of simply wondering at the forward march of technology, let’s start thinking (and talking) about what this means for us, not in the narrow sense of job security and personal leisure time but in terms of how I-O psychology will adapt to continue to serve humanity in the coming decades.[1]

What: The (Possible) Brave New World

A continual influx of new technology has become rather commonplace these days, and most of us are comfortable with and even dependent upon the rôle technology has assumed in our lives, but what about its rôle in our work? How and to what extent is technology improving the human work experience? How and at what point will technology become dangerous? Dangerous to whom or to what? Questions such as these are at the forefront of our field’s development, and the answers will transform I-O psychology as we know it.

Upon reading the preceding paragraph, one is likely to consider one of a few categories of technologies: telework, collaborative cloud services, and automation. “Telework” captures a variety of (in this case electronic) technologies that allow humans to better coördinate with each other in their work activities—and has siblings in the cloud in the form of electronic workflow-management suites, collaborative-document services, shared calendars. These technologies have their benefits and pitfalls and are—especially telework—the subject of scrutiny by our field.[2] Important, but not the focus of this column at present; let’s look at automation instead.

Sigh. This, uh, this is not an easy topic to tackle. The narrative that has grown around it has elements of Luddism and postscarcity economics and (perhaps not unfounded) fear tied up in it. Again, we’re capturing the path of a single flake in a blizzard; a Google search will get the interested reader into more discussion on this topic than can be reasonably taken in. Our first taste was a short YouTube documentary by C.G.P. Grey (2014) entitled Humans Need Not Apply.[3] As its title suggests, the documentary asserts that automation poses a very real threat to the need for human work. According to Grey (2014), while automation may not pose an immediate risk to all humans, it will occur “in large enough numbers and soon enough that it’s going to be a huge problem if we’re not prepared. And we’re not prepared.”

Humans Need Not Apply certainly sends a powerful message, but it left us with more questions than answers. Just how unprepared are we? If automation really is a threat to human work, what exactly are we up against? More deeply, is “human work” something that we should defend, or is it a necessary evil that we have tolerated to this point? Automation has already demonstrated its power to significantly alter how (or if) humans work—look to Google’s self-driving car[4] and IBM’s Watson[5]—so this is not just some fanciful far-future discussion. As I-O psychologists, we need to start considering how it might transform our field, both ideologically and in practice.

In an attempt to cut through the overabundance of automation information available online, we reached out to Marshall Brain. Best known as the founder of How Stuff Works and more recently for his Robotic Nation essay series, Brain is well versed in the development of artificial intelligence, what he calls the “second intelligent species.” Echoing Humans Need Not Apply, Brain explained that, although humans are currently the only “math-wielding, language-using, space-traveling intelligences,” we won’t be alone for much longer. The second intelligent species is well on its way and is no longer merely a figment of a mad scientist’s futuristic imagination. IBM’s Watson is an example of this type of species, and it is just a primitive form. So what’s the big deal? This second intelligent species has (and will continue to develop) the capacity to compete with the human species, especially in the context of work; and in Brain’s view, “humans, generally speaking, are not up to the challenge.”

After this conversation, we were no longer interested in debating whether the predictions offered by Grey (2014) and Brain were plausible. For the sake of the article’s overarching purpose—a pursuit of answers—we made a deliberate decision to assume that the “threat” automation poses to human employment is real. This assumption will be implicit throughout the remainder of this article.

Why: The (Debatable) Broader Purpose of I-O Psychology

So, automation is coming. Now what? We learned from Grey (2014) and Brain that automation could be bad news for the employed population, but would it really be so awful if no one had to work? According to Dr. David L. Blustein, who specializes in the psychology of working and vocational psychology, yes!

Blustein was quick to point out that, so far, technology has largely enhanced our work lives; our Skype interview, for example, wouldn’t have been possible without technology. But when technology replaces the need for human work, the human species is in trouble. Why? Simply put, humans need work. As Blustein explained, work satisfies our “fundamental need to contribute, collaborate, and create.” What happens when we can’t satisfy this need? Recent meta-analytic findings indicate that those who are unemployed, especially long-term, experience lower levels of mental health (i.e., higher levels of anxiety, depression, distress, and psychosomatic symptoms and lower levels of subjective well-being and self-esteem). Even worse, these negative effects have remained stable for the last 30 years, suggesting that society has yet to adapt to high rates of unemployment (Paul & Moser, 2009[6]). In Blustein’s words, “Work is essential for mental health. Work is essential for the welfare of our communities.”

If we take into account Blustein’s perspective (and the extensive research upon which it is founded) and if we make the assumption that we are in this game for the good[7] of humanity, it becomes clear that we must be mindful of how we integrate technology into our work. Blustein emphasized the need “to develop an active, engaged, compassionate approach to the discussion of the future of work in people’s lives.” Reacting to new technology as it comes (i.e., purchasing the next big thing because it’s more efficient and cool) with no consideration for its impact on human work—and subsequently on human well-being—will hurt us in the end. As we continue to explore this topic, the need for our species to take a proactive approach regarding automation in the workplace becomes more and more apparent.

Ah, but this is all the pedestrian discussion that you’ve likely heard before: Beware technology, oh no the robots are coming, hide your kids, hide your jobs. But of course we are not pedestrian; we are SIOP. We have a job to do. So given that we seldom pull the strings regarding the integration of technology into the world of work, the policies that our governments may put into place to protect work[8] and the social-media campaigns intended to take down the artificial intelligences are not for us. Instead, let’s start with our assumption about the onward march of automation and simulate where that will take us in I-O in the next, oh, quarter century or so.

How: The (Possible) Road Ahead

With much gratitude to Brain and Blustein, we turned our eye inward. What will we be doing in the early-middle 21st century? It’s possible that our major I-side tools such as work analysis, selection, and training may become obsolete. First, bots[9] will be able to perform these tasks better and faster than I-Os. Second, when the second intelligent species is doing most of the work, there won’t be a need for anyone to select and train them. They will build and train themselves, not as a species but as individuals, as they already do.[10] In the short run, we will be providing services in a different context; in the long run, we may be serving a humanity with a great deal of time on its hands. So how, precisely, will I-O operate?

We interviewed Dr. Anthony S. Boyce (consultant and leader of Research and Innovation for the Assessment and Leadership-Development practice at Aon Hewitt) with precisely these questions in mind. We framed our discussion around two points in time: within the next 5 to 10 years, and 15 to 20 years in the future. Boyce thinks we’ll still be hiring humans in the next 5 to 10 years but that our selection tools will look very different. Rather than revolving around assessment alone, Boyce envisions selection as a more integrated process, pulling in big data from applicants’ social media activity and other online behavior (with the aid of—you guessed it—our digital progeny).

With these big data, organizations may become less concerned with exactly what is being measured (and why) and more concerned with predictive power. If computer scientists can create algorithms that predict performance without causing adverse impact but also without theory or explanation behind them (i.e., a “black box” selection instrument), I-Os may fall behind. Boyce thinks I-Os can work backwards, though: figuring out what these black boxes are measuring and how we can apply those constructs to onboarding, professional development, and other postselection areas. While our “I-side” tool belts may become less relevant in the next 5 to 10 years, Boyce thinks our “O-side” skills will remain vital to organizational success. People will still be making decisions and leading teams, and maybe we have a thing or two to teach bots about running successful organizations[11].
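To make Boyce’s “work backwards” idea a bit more concrete, here is a minimal Python sketch (ours, not Boyce’s) of what that probing might look like: check a hypothetical black-box score for adverse impact via the familiar four-fifths rule, then see which constructs we already understand the score happens to track. The simulated data, column names, and model choice are all illustrative assumptions, not a description of any real selection system.

```python
# A minimal, hypothetical sketch: probing a "black box" selection score for
# adverse impact (four-fifths rule) and for which familiar constructs it tracks.
# All variable names and the simulated data are illustrative assumptions.

import numpy as np
import pandas as pd
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 1000

# Simulated applicant data: a demographic group, two constructs I-Os know well,
# some scraped "online behavior" features, and a performance criterion.
df = pd.DataFrame({
    "group": rng.choice(["A", "B"], size=n),
    "conscientiousness": rng.normal(size=n),
    "gma": rng.normal(size=n),
    "online_activity_1": rng.normal(size=n),
    "online_activity_2": rng.normal(size=n),
})
df["performance"] = (
    0.4 * df["conscientiousness"]
    + 0.3 * df["gma"]
    + 0.2 * df["online_activity_1"]
    + rng.normal(scale=0.5, size=n)
)

# The "black box": an atheoretical model trained only to predict performance.
features = ["conscientiousness", "gma", "online_activity_1", "online_activity_2"]
black_box = GradientBoostingRegressor(random_state=0).fit(df[features], df["performance"])
df["score"] = black_box.predict(df[features])

# (1) Adverse impact: selection rates by group at a top-30% cutoff, then the
#     four-fifths ratio (lowest group rate divided by highest group rate).
cutoff = df["score"].quantile(0.70)
rates = df.assign(selected=df["score"] >= cutoff).groupby("group")["selected"].mean()
print("Selection rates by group:\n", rates)
print("Four-fifths ratio:", round(rates.min() / rates.max(), 2))

# (2) Working backwards: which familiar constructs do the black-box scores track?
print(df[["score", "conscientiousness", "gma"]].corr().round(2)["score"])
```

The particular model is beside the point; the point is that the diagnostic habits we already apply to traditional predictors (selection-rate ratios, construct-level correlations) still apply when the predictor is an algorithm someone else built.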

In the more distant future, where perhaps human work is no longer needed, Boyce suggests that I-O psychology could be leveraged to aid humans in finding the leisure activities that will be most fulfilling (Brain and Blustein spoke to this as well); rather than advising on job satisfaction and work engagement, I-O psychologists could use their expertise to promote life satisfaction and engagement with leisure activities.[12]

Who: Our (Debatable) Responsibility

Boyce weaves a compelling narrative for the future of our field. We don’t know how accurate it is (though some of us will find out, I suppose), but it certainly paves the way for what is next for each of us individually. We’re not asking you to fight anything or anyone[13]. We are asking you to do exactly three things:

  • Develop your own mental model of where the world of work is going in the next 5, 10, 20 years (the Internet is probably your best resource here).
  • Simulate how you think I-O is going to fit into that model (SIOP is probably your best resource here; work with others, discuss, collaborate).
  • Adjust your skillset to proactively accommodate the changing responsibilities that you’ll experience in the future (attend and generate content for SIOP’s annual conference, take classes, practice).

There is a wave coming. We can dig in, let it wash over us and move on without us, and be left obsolete. We can let it catch us unawares and dash us on the rocks. Instead, let’s make sure we’re ready to ride it.


-------------------------------------------- 

[1] This may not be the responsibility of I-O psychology. We know. Calm down.

[2] And others; see http://www.siop.org/tip/july14/pdfs/opener.pdf for a discussion of telework.

[3] https://www.youtube.com/watch?v=7Pq-S557XQU

[4] http://www.google.com/selfdrivingcar/

[5] http://www.ibm.com/smarterplanet/us/en/ibmwatson/

[6] Paul, K. I., & Moser, K. (2009). Unemployment impairs mental health: Meta-analyses. Journal of Vocational Behavior, 74(3), 264–282. doi:10.1016/j.jvb.2009.01.001; there’s a rich theoretically and empirically grounded conversation going on regarding boundary conditions on the impact of unemployment on well-being—SES, time, market sector, and so on—and we encourage the interested reader to refer to this work for an introduction to this conversation.

[7] Whatever the hell “good” means.

[8] That feels odd to type. It’s like writing “save the smallpox” or “end conservation.”

[9] The human factors/ergonomics people have much more to say about this, but as you envision the future, try not to think of automation in terms of bipedal ambulatory robots. Think of automated factories and invisible algorithms. Autopilots don’t look like they did in the movie Airplane and neither will the drivers of autonomous vehicles. Of course, there are bipedal ambulatory robots, but they are somewhat beside the point here. (shrug)

[10] Here we’re referring to machine learning. Have fun with that search string.

[11] Stop it. No, of course bots will not be sitting in boardrooms in business attire. Bots are cool. They’re going to be in casual clothing.

[12] In short, things may get much more huggy feely and O-side people, such as the second author, will finally win our shadow war against our I-side oppressors.

[13] What he said: http://news.discovery.com/tech/i-for-one-welcome-our-new-computer-overlords.htm
