

Volume 54, Number 4, April 2017. Editor: Tara Behrend


From the Editor: Player Piano

Tara Behrend

I’ve been thinking about automation and artificial intelligence lately. In the news this week, a marketing company called Cambridge Analytica is being credited with affecting the outcome of the presidential election by using AI to target Facebook users based on their personalities and expose them to personalized advertising. Also in the news, swaths of high-skill professional jobs, such as radiology technician, are being replaced by robots. Meanwhile, legislators are suggesting that universities are biased, that professors tell students what to think, and that professors should therefore be replaced by Ken Burns documentaries. All of this makes me wonder: How long until we I-O psychologists are automated out of our jobs? Is it really so crazy to think that day might not be so far away?

In one of Kurt Vonnegut’s first novels, Player Piano (1953), only two jobs remain in society: engineers (to build and fix the robots) and managers (to manage the engineers). The remaining members of the human race are viewed as “useless.” This dystopian view of the future of automation was already on people’s minds in 1953, and the scenario it describes now seems not just possible but highly probable. Where do we think I-O psychologists will end up in this automated future, and are we okay with that answer?

Consider some of the core skills that I-O psychologists have and what we offer to organizations, universities, and the scientific community:

  • Selection: “Hiring with algorithms” is how businesspeople describe it. The question is not whether to use algorithms (of course we should; a regression-based approach to predicting performance consistently outperforms unaided human judgment; see the sketch after this list) but, rather, whether a person needs to weigh in on what the inputs to that algorithm should be. Currently, we still need human I-Os to design and validate assessments and communicate results to decision makers. How long will that be true?
  • Feedback and coaching: In educational settings, personalized learning and automated tutors are ubiquitous. In organizational settings, productivity apps can track every aspect of your behavior and deliver feedback immediately. Currently, human experts are still needed to evaluate some aspects of the quality of work output. But this seems fairly easy to automate in the future. Artificial therapists have been used with some success to treat PTSD, for example. This issue of TIP has an article from Richard Landers about the capabilities of natural language processing. This could be the start of iCoach, the automated executive coach.
  • Scientific paper writing: Although automatic paper generators have been around for some time, they do not seem to be producing high-quality work quite yet. I can certainly imagine a future, however, in which a correlation matrix and some variable names are given as inputs, and a program finds the “most interesting” results and chooses a “theoretical framing” from a list (a toy illustration of this appears after this list). This is not how good research is done. But it is how some research currently gets written up, unfortunately. If this can happen, could the program also decide what should be included in the correlation matrix in the first place, based on a huge number of variables too unwieldy for a human? Could it then use natural language processing to find out which of the millions of existing scholarly articles are most relevant to understanding those variables? At some point, would we even need scientific papers anymore, given that their primary purpose is to communicate to other humans? Could all that data be uploaded into a vast central database for anyone to access at will when a need arose? So, what is uniquely human about the process of doing research? Humans are still needed to (a) have ideas and (b) give meaning to data. But will we always be better at those things than a well-designed AI program?
  • Teaching: I’m convinced that some of my colleagues are already robots. It seems likely that automation will continue to advance in this area. After recording some lectures for an online course last year, I realized that my university retained the rights to them. Theoretically, it could continue to offer “my” course without me, and students would get a learning experience very similar to the one they got when I taught the course myself. Robot TAs have been tested and, in some cases, found indistinguishable from human TAs. So, why not robot professors?
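
To make the selection point concrete, here is a minimal sketch of the regression-based approach, in Python. Everything in it is hypothetical: the predictor names, the data, and the resulting weights are invented for illustration, not drawn from any validated selection system.

```python
# Minimal sketch of regression-based selection ("hiring with algorithms").
# All predictor names, data, and weights below are hypothetical.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

# Hypothetical validation sample: standardized scores for 200 past hires on
# a cognitive ability test, a structured interview, and a conscientiousness
# scale, plus their later job-performance ratings.
X = rng.normal(size=(200, 3))
y = 0.5 * X[:, 0] + 0.3 * X[:, 1] + 0.2 * X[:, 2] + rng.normal(scale=0.8, size=200)

model = LinearRegression().fit(X, y)  # derive the regression weights

# Score a new applicant pool and rank applicants mechanically.
applicants = rng.normal(size=(5, 3))
predicted = model.predict(applicants)
ranked = np.argsort(predicted)[::-1]
print("Applicants, best to worst:", ranked)
```

Note where the human work still sits: choosing and validating the predictors, and explaining the results to decision makers.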
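
And here is a toy version of the automated paper writer imagined above. Given a correlation matrix and variable names (both invented here), it surfaces the “most interesting” results, meaning simply the largest correlations. This is, of course, exactly the fishing expedition that bullet warns about.

```python
# Toy "automated paper writer": given a correlation matrix and variable
# names (both invented here), report the "most interesting" results,
# i.e., the largest absolute correlations.
import numpy as np

variables = ["engagement", "turnover_intent", "autonomy", "tenure"]
rng = np.random.default_rng(1)
data = rng.normal(size=(300, len(variables)))   # stand-in for a real dataset
corr = np.corrcoef(data, rowvar=False)          # the correlation-matrix "input"

# Rank unique variable pairs by |r| and print the top three "findings."
pairs = [(i, j) for i in range(len(variables)) for j in range(i + 1, len(variables))]
pairs.sort(key=lambda p: abs(corr[p[0], p[1]]), reverse=True)
for i, j in pairs[:3]:
    print(f"{variables[i]} x {variables[j]}: r = {corr[i, j]:.2f}")
```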

What is uniquely human about our skill set? What do we learn in graduate school that is truly precious? A feature article from Tilman Sheets and Bharati Belwalkar in this issue argues for more attention to technology training in graduate school. Should we instead focus on that which cannot be automated?

Maybe this is hyperbole. After all, claims about Cambridge Analytica were so grossly exaggerated that it seems better to call them plainly false. For all the usual attacks on higher education, human professors won’t be going away any time soon. The feature article from Nathan Gerard in this issue offers a historical perspective on how Lewin’s legacy may be relevant to today’s automation-related challenges. A long view may be necessary to discuss these ideas productively. I welcome your ideas—you can email me at behrend@gwu.edu or tweet them to me @TaraBehrend—either I or my robot assistant will get back to you.
