
The Bridge: Connecting Science and Practice

Kimberly Adams, Independent Consultant, & Stephanie Zajac, UT MD Anderson Cancer Center

“The Bridge: Connecting Science and Practice” is a TIP column that seeks to facilitate learning and knowledge transfer to encourage sound, evidence-based practice. It provides academics with an opportunity to discuss the potential and/or realized practical implications of their research as well as to learn about cutting-edge practice issues or questions that could inform new research programs or studies. For practitioners, it provides opportunities to learn about the latest research findings that could prompt new techniques, solutions, or services that would benefit the external client community. It also gives practitioners an opportunity to highlight key practice issues, challenges, and trends that may benefit from additional research. In this issue, Dr. Michael Keeney describes blended reality training, the technologies being leveraged, best practices, challenges, and areas where industrial-organizational psychologists can contribute.

 

Blended-Reality Training: A New Approach to Simulations

 Michael J. Keeney

 

Author and Company Profile

Michael J. Keeney is a principal scientist at Aptima, Inc. He applies methods of task and job analysis to understand work processes, identify and describe the expertise needed for successful job performance, develop and assess the success of training, and measure worker performance. Dr. Keeney has developed and applied cognitive work analysis methodology in a variety of occupational settings. He has described the tasks, knowledge, skills, equipment, and physical factors involved in underground coal mine emergencies and developed a description of miner decision making during these events. He has identified training needs and developed adaptive training methodology to aid intelligence analysts in locating and predicting adversary activities and to help software analysts locate security vulnerabilities in computer software. He created a model that integrates culture, attitudes, and personality to build representations of local populations, used to train soldiers to operate effectively on missions in which positive personal interactions with the local population are critical. Dr. Keeney has applied the award-winning Mission Essential Competencies (MEC) process to identify the training needs of over 30 military and civilian organizations. He is currently creating content and display requirements for a blended reality training system that can provide training situations at reduced cost and increased availability compared to live training approaches. Dr. Keeney received his PhD in Industrial-Organizational Psychology from the University of Akron. During military service, he was certified as a U.S. Air Force master technical training instructor, Career Development Course technical writer, and U.S. Navy master training specialist.

Aptima’s mission is to optimize the performance of humans learning and operating in technology-intensive, mission-critical settings including defense, intelligence, aviation, law enforcement, and healthcare. They apply deep expertise in how humans think, learn, and behave to the goal of advancing readiness. By combining measurement with learning data analytics and personalized adaptive training, Aptima’s tools provide a successive cycle to measure, analyze, and improve human performance. The result is accelerated learning and enhanced human–machine teaming in preparation for the challenges that lie ahead—for individuals, teams, and the entire workforce. For more information, please visit www.aptima.com.

What Is Blended Reality?

Blended reality (BR) is an emerging training method that uses wearable technology to overlay visual and auditory stimuli onto what the trainee sees and hears from the real world (Bort, 2014). By seamlessly integrating real and simulated stimuli, the vision is that BR systems will provide a venue to deliver optimal training within otherwise limited environments. The goal is for BR to deliver the benefits of live, virtual, and constructive (LVC) training methods (Department of Defense [DoD] Modeling and Simulation Coordination Office, 2018; Hays, 1989; Magnuson, 2019). BR should close the gap between live and virtual training, reduce costs, expand the availability of training opportunities, reduce constraints from factors such as weather and travel, and eliminate safety-imposed limits on training activities. However, before BR training can deliver these advantages, engineers and training researchers must overcome a number of challenges. This column will offer a vision for BR training technology, review anticipated benefits and potential applications, discuss technological challenges that will require a blending of engineering and training design expertise, and offer suggestions for how industrial-organizational (I-O) psychologists can contribute to future research.

Vision for BR Technology

To appreciate the potential benefits of BR, it is critical to first understand the three constructs currently used to describe training delivery systems. Live simulations (the L in LVC) use real people operating real-world systems within physical environments that mimic real-world operations. One example of a live training event involves the actual ignition of flammable liquid in a metal pan. Trainees then use an operational fire extinguisher to put out the fire. Although this training provides experience with an actual fire and fire extinguisher, it presents real danger from the fire, limiting where and when the training can be provided. It also consumes the flammable liquid and requires recharging of the extinguisher after each training session. Live simulations are typically the most expensive of the training approaches and present actual risks to life and property.

The other two training delivery constructs seek to overcome issues in live simulations that can limit availability and require altered procedures to mitigate risks. Virtual simulations (the V in LVC) combine live people operating simulation systems that replace part or all of the operational equipment and environment. Our virtual alternative would replace the actual fire and operational extinguisher with a replica fire displayed on a sensor-equipped video screen (similar to a large television). The screen is paired with a life-sized fire extinguisher simulator, which looks, feels, and operates like its live counterpart except that instead of spraying agent, it emits a coded beam. When the trainee directs the nozzle of the extinguisher simulator at the screen and uses it in a manner that would extinguish a real fire, the fire displayed on the screen goes out. This virtual trainer can be used anywhere (because it presents no actual hazards), it requires no fuel or recharging, and, as a further benefit, it can provide objective data describing trainee performance. Such data are not available from the live training fire. Finally, constructive simulations (the C in LVC) replace everything other than the trainees (other people, equipment, and environments) with simulations, typically in a computer display. A constructive fire extinguisher trainer would present the fire and extinguisher through computer screen images and sound, and trainees would fight the fire through keyboard and mouse inputs.
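
For readers who want a concrete picture, the sketch below is a minimal illustration, with hypothetical names and scoring rules rather than any vendor’s actual software, of how such a virtual trainer might detect whether the coded beam is on target, shrink the simulated fire while it is, and log each frame as objective performance data.

```python
from dataclasses import dataclass, field
import time


@dataclass
class VirtualFireTrainer:
    fire_x: float               # horizontal center of the simulated fire on the screen (pixels)
    fire_radius: float          # radius of the fire's base (pixels)
    fire_intensity: float = 1.0
    events: list = field(default_factory=list)

    def beam_hit(self, beam_x: float, dt: float) -> None:
        """Called each frame with the screen position where the coded beam was detected."""
        on_target = abs(beam_x - self.fire_x) <= self.fire_radius
        if on_target:
            self.fire_intensity = max(0.0, self.fire_intensity - 0.5 * dt)
        self.events.append({"time": time.time(), "beam_x": beam_x,
                            "on_target": on_target, "intensity": self.fire_intensity})

    @property
    def extinguished(self) -> bool:
        return self.fire_intensity == 0.0


trainer = VirtualFireTrainer(fire_x=960.0, fire_radius=150.0)
trainer.beam_hit(beam_x=1010.0, dt=0.033)   # one on-target frame at roughly 30 Hz
print(trainer.extinguished, trainer.events[-1]["intensity"])
```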

The wearable BR technology uses three major components: (a) spectacles, goggles, or a monocle eyepiece, plus earpieces, that present imagery and sound; (b) sensors that track the user’s location and orientation; and (c) a computer and power supply that process the location and orientation data and generate the inputs for vision and hearing.
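
As a rough illustration of how these three components fit together, the following Python sketch models the sensor-to-compute-to-display loop; the class and method names are assumptions made for illustration, not the interface of any actual BR system.

```python
from dataclasses import dataclass


@dataclass
class Pose:
    x: float
    y: float
    z: float            # trainee position in the training area (meters)
    heading_deg: float  # direction the head is pointed


class TrackingSensors:
    """Component (b): location and orientation tracking."""
    def read_pose(self) -> Pose:
        # A real system would fuse inertial, GPS, or optical tracking data here.
        return Pose(0.0, 0.0, 1.7, 90.0)


class HeadsetDisplay:
    """Component (a): eyepiece and earpieces."""
    def render(self, imagery: str, audio: str) -> None:
        print(f"draw: {imagery} | play: {audio}")


class ComputeUnit:
    """Component (c): processes pose data and generates visual and auditory inputs."""
    def update(self, sensors: TrackingSensors, display: HeadsetDisplay) -> None:
        pose = sensors.read_pose()
        # A real system would query the training scenario for entities visible from this pose.
        display.render(f"entities visible from heading {pose.heading_deg}", "helicopter rotor noise")


ComputeUnit().update(TrackingSensors(), HeadsetDisplay())
```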

One wearable technology under development is currently in operational use as an augmented reality (AR) system. An AR system displays computer-generated information, typically as symbols or icons, into transparent spectacles or a monocle worn by the user (Flavian, Ibanez-Sanchez & Orus, 2019). The eyepiece screen is clear so the user can see through to the real world beyond. To illustrate how an AR system could be used, imagine it as a guide for visitors to a large museum. The system would track the user’s location and head orientation, and when the user’s head is pointed in the direction of a particular display, the system generates symbols and icons in the spectacles that alert the user to the display, highlights its location, and could provide an optional text or an audio description.
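
The core of such a guide is a simple geometric test. The sketch below is an illustrative example, with an assumed 60-degree field of view and hypothetical function names, of deciding whether a display falls within the user’s view given the user’s position and head heading.

```python
import math


def bearing_deg(user_xy, display_xy) -> float:
    """Bearing from the user's position to the display, in degrees."""
    dx, dy = display_xy[0] - user_xy[0], display_xy[1] - user_xy[1]
    return math.degrees(math.atan2(dy, dx)) % 360.0


def in_view(user_xy, heading_deg, display_xy, fov_deg=60.0) -> bool:
    """True if the display lies within the assumed field of view around the heading."""
    diff = (bearing_deg(user_xy, display_xy) - heading_deg + 180.0) % 360.0 - 180.0
    return abs(diff) <= fov_deg / 2.0


# A visitor at (0, 0) facing along the 0-degree axis; a display at (10, 2) sits
# about 11 degrees off axis, so the system would cue it and offer a description.
if in_view((0.0, 0.0), 0.0, (10.0, 2.0)):
    print("Highlight the display and offer a text or audio description")
```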

One approach to BR is to build upon and progress beyond AR technology by replacing the icons and symbols with imagery and sound that replicate the appearance of entities and events (Flavian, Ibanez-Sanchez, & Orus, 2019). Specifically, BR entities would be moving images and sounds that replicate the visual and auditory appearance and behaviors of their real-world counterparts. Examples of entities and activities that BR could replicate include aircraft; ground vehicles; human role players; the appearance of wounds, injuries, and other medical and mental conditions, as well as the behaviors of patients with these conditions; and damage resulting from mishaps or combat. See Flavian, Ibanez-Sanchez, and Orus (2019) for a discussion of nuanced perspectives on the similarities and differences between BR and AR.

Consider an example in which the trainee is a first responder and the training area is a small field within a residential area. The training objective is to develop skills for performing helicopter evacuation of a severely injured patient, and the training task is to guide the helicopter to land and load the patient. Live training would require an expensive and noisy aircraft, its crew, and clearance for the helicopter to operate in the neighborhood, as well as either a human role player or a mannequin to portray the injured person. Weather or equipment problems could limit or suddenly cancel flight operations. To deal with these constraints, current alternatives are typically low fidelity, such as placing a sign on a truck declaring it to be a helicopter. Trainers might apply moulages (simulated wounds or illnesses created from casts or molds) to replicate the visual appearance of the injuries, and a human role player would have to understand how to behave in accordance with those injuries.

Suppose instead that the trainee is wearing a BR training system. The patient appears in the first responder’s spectacles and displays appropriate injuries and behaviors. As the trainee looks skyward, the goggles and headphones present appropriate imagery and sound of an arriving and landing helicopter. When the trainee contacts the BR helicopter to provide landing instructions, the system responds appropriately, and if the trainee provides the correct information, the helicopter lands successfully and its crew prepares to load the patient. An important issue for training developers will be to identify which training requirements can efficiently be met using BR approaches and which will require other training methods. In this scenario, BR is likely to provide an efficient method to train the procedures for interacting with the helicopter to land it and for visually assessing injuries. However, skills that require tactile feedback and physical actions, such as performing first-aid procedures, will continue to require alternative training methods to supplement the BR technology.

Challenges for Delivering Anticipated Benefits and Research Opportunities for I-O Psychologists

As this technology emerges in development, it is becoming possible to estimate where and how it will likely add training value. To date, the focus of research has been on overcoming engineering challenges to create wearable, portable, cost-effective hardware with sufficient processing power and battery life. The training technology needs to be sufficiently rugged to withstand field handling and exposure to dust, dampness, and weather; to be light enough not to generate undue user fatigue; and to not interfere with other equipment the user wears or carries while training. Engineering is a process of trade-offs, and solving one problem can often create another (Vaughan, 1996). For example, adding to data-processing demands requires more electrical power and hardware capacity, which could mean that a unit with a power supply sufficient to meet processing requirements becomes too heavy for users to wear. It currently appears that a first-generation technology that overcomes many of these challenges will soon become available. At least one software and hardware system has successfully demonstrated entity and operator tracking in a variety of environments.

The system will need to sense not only where the trainee is located in physical space but also the direction in which the trainee is looking. The system will need to use this information in real time to manage and adjust entity presentations to account for several geometric factors, such as the trainee’s movement through the real-world environment, the simulated location of the entity within this space, and the visibility of the entity within the trainee’s field of view. Although producing the correct imagery is largely an engineering challenge, the task for training designers is to provide estimates of fidelity, that is, how accurately entities must mimic their real-world counterparts to provide a simulation adequate to meet training needs.
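
To make that geometric bookkeeping concrete, the following simplified two-dimensional sketch (hypothetical names, not a rendering engine) transforms a simulated entity’s world position into the trainee’s head-centered frame so a renderer would know where, and at what apparent size, to draw it.

```python
import math


def entity_in_view_frame(trainee_xy, heading_deg, entity_xy, entity_width_m):
    """Return (azimuth_deg, distance_m, apparent_size_deg) relative to the trainee."""
    dx = entity_xy[0] - trainee_xy[0]
    dy = entity_xy[1] - trainee_xy[1]
    distance = math.hypot(dx, dy)
    # Azimuth relative to where the trainee is looking, wrapped to [-180, 180).
    azimuth = (math.degrees(math.atan2(dy, dx)) - heading_deg + 180.0) % 360.0 - 180.0
    # Angular size shrinks with distance; the renderer would scale the image accordingly.
    apparent_size = 2.0 * math.degrees(math.atan2(entity_width_m / 2.0, distance))
    return azimuth, distance, apparent_size


# A 15 m wide simulated helicopter 80 m directly ahead of the trainee: as the
# trainee walks or turns, these values change and the display must update every frame.
print(entity_in_view_frame((0.0, 0.0), 90.0, (0.0, 80.0), 15.0))
```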

The tracking systems will need to integrate the user’s location and orientation in real-world space to adjust the appearance of stimuli in real time and consider how users will interact with the constructed entities. As an example of this issue, the targets in a carnival arcade shooting gallery react when fired on, but until this occurs, they wait patiently (or so one might hope). In contrast, real-world opponents would seek to detect and locate the trainee user, and escape, hide, or defend themselves. The training system will have to manage not only locations but also whether the virtual entities and real-world trainee users should be able to see and engage each other through obstacles.
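
One piece of that management is a line-of-sight test. The sketch below is a deliberately crude two-dimensional illustration, using an assumed obstacle map and simple sampling, of checking whether a real-world obstacle lies between the trainee and a virtual opponent; a deployed system would need a far richer model of the actual training site.

```python
# Rectangular obstacle footprints (xmin, ymin, xmax, ymax) in meters; a fielded
# system would need a surveyed or sensed model of the real training site.
OBSTACLES = [(5.0, 5.0, 8.0, 9.0), (12.0, 0.0, 14.0, 6.0)]


def blocked_by(obstacle, a_xy, b_xy, steps=200) -> bool:
    """Sample points along the sight line; True if any falls inside the rectangle."""
    xmin, ymin, xmax, ymax = obstacle
    for i in range(steps + 1):
        t = i / steps
        x = a_xy[0] + t * (b_xy[0] - a_xy[0])
        y = a_xy[1] + t * (b_xy[1] - a_xy[1])
        if xmin <= x <= xmax and ymin <= y <= ymax:
            return True
    return False


def has_line_of_sight(a_xy, b_xy) -> bool:
    return not any(blocked_by(ob, a_xy, b_xy) for ob in OBSTACLES)


print(has_line_of_sight((0.0, 7.0), (10.0, 7.0)))    # False: the first building blocks the view
print(has_line_of_sight((0.0, 11.0), (10.0, 11.0)))  # True: the sight line clears both buildings
```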

Sound presents another significant engineering challenge. Many of the anticipated visual entities, such as a helicopter operating nearby, produce substantial sound, which would be presented to the trainee through headsets. Whether sound is needed, under what training requirements, and to what degree of accuracy are currently unanswered empirical questions for training researchers.
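
Even a toy model makes the question concrete. The sketch below scales loudness with distance and pans sound between the ears by bearing, using an assumed source level; whether such a crude approximation is accurate enough for a given training objective is exactly the kind of empirical question noted above.

```python
import math


def simple_spatial_audio(distance_m: float, azimuth_deg: float, ref_db: float = 110.0):
    """Return (level in dB at the listener, left-ear gain, right-ear gain)."""
    # Inverse-square spreading loss relative to an assumed source level at 1 m.
    level_db = ref_db - 20.0 * math.log10(max(distance_m, 1.0))
    # Constant-power pan: azimuth 0 = straight ahead, +90 degrees = hard right.
    pan = max(-1.0, min(1.0, azimuth_deg / 90.0))
    left = math.cos((pan + 1.0) * math.pi / 4.0)
    right = math.sin((pan + 1.0) * math.pi / 4.0)
    return level_db, left, right


# A simulated helicopter 200 m away and 30 degrees to the trainee's right.
print(simple_spatial_audio(200.0, 30.0))
```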

Equipment and software developers will provide sample training events with BR systems for testing and evaluation, but a long-term source to create and maintain a library of training presentations is also needed. The most feasible long-term approach to creating the presentations that trainees will see and hear through BR training systems appears to be relying on experienced job incumbents who serve as instructors. The BR system should provide a method for these people to select entities and program their behavior so that trainees can meet training requirements. These instructors are well positioned to create their own presentations because they are likely to understand the work involved as well as the relevant training methods, capabilities, and shortfalls. Ideally, BR systems will include a training-authoring capability, consisting of both the technology to build training and a procedural, concept-of-operations methodology to perform this work. This authoring capability would enable instructors to create the entities that the BR training system will present to trainees; program the behavior of these entities to execute training scenarios; copy and alter existing scenarios and scenario entities to adapt them to different settings; and create a single scenario applicable to multiple settings. Research is needed to inform optimal approaches for enabling these instructor-users to build their own training in BR systems.
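
As one hedged illustration of what such an authoring capability might store, the sketch below defines hypothetical scenario and entity structures and an operation for copying a scenario and adapting it to a new site; it is an assumption about a possible data model, not a description of any existing BR authoring tool.

```python
from dataclasses import dataclass, field, replace
from typing import List, Tuple


@dataclass(frozen=True)
class Entity:
    name: str                      # e.g., "medevac helicopter", "injured patient"
    start_xy: Tuple[float, float]  # position in the training area (meters)
    behaviors: Tuple[str, ...]     # ordered behavior script, e.g., ("approach", "land")


@dataclass
class Scenario:
    title: str
    site: str
    entities: List[Entity] = field(default_factory=list)

    def copy_for_site(self, new_site: str, offset=(0.0, 0.0)) -> "Scenario":
        """Clone the scenario and shift entity positions to fit a different site."""
        moved = [replace(e, start_xy=(e.start_xy[0] + offset[0],
                                      e.start_xy[1] + offset[1]))
                 for e in self.entities]
        return Scenario(self.title, new_site, moved)


medevac = Scenario("Helicopter evacuation", "Field A", [
    Entity("injured patient", (10.0, 12.0), ("lie supine", "respond to questions")),
    Entity("medevac helicopter", (0.0, 300.0), ("approach", "land", "load patient")),
])
medevac_site_b = medevac.copy_for_site("Field B", offset=(40.0, -15.0))
```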

Feedback is one of the biggest differences between expertise gained through real-world experience and expertise gained through training. Well-designed training provides feedback to correct mistakes and reinforce correct behaviors. The BR system will require a method to display what happened during the training and to integrate these displays into post-training debriefings. The optimal way to do this is another open question and an opportunity for research.
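
A record-and-replay capability could be as simple in concept as the sketch below: time-stamped events captured during the scenario that an instructor steps through during the debrief, with errors surfaced first. The event types and structure shown are illustrative assumptions.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TrainingEvent:
    t_sec: float        # time since scenario start
    actor: str          # "trainee" or the name of a simulated entity
    description: str
    correct: bool       # judgment used to drive feedback


def debrief_items(log: List[TrainingEvent]) -> List[str]:
    """List events for the debrief, surfacing errors first."""
    ordered = [e for e in log if not e.correct] + [e for e in log if e.correct]
    return [f"{e.t_sec:6.1f}s  {e.actor}: {e.description}" for e in ordered]


log = [
    TrainingEvent(42.0, "trainee", "reported wind direction to the helicopter", True),
    TrainingEvent(55.5, "trainee", "selected a landing point inside the obstacle clearance", False),
    TrainingEvent(98.0, "helicopter", "waved off the approach because of the landing point", True),
]
for line in debrief_items(log):
    print(line)
```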

Until prototypes of BR systems become available, assessment of the benefits they can deliver for given levels of fidelity may have to rest on current literature about simulator-delivered training. However, informative expertise appears to exist outside of sources typically used in mainstream I-O psychology. For example, the visual-arts community knows about perspective and creating illusions in visual media (Seckel, 2004). For BR applications, movement and perspective may be as important as visual features.

It is not yet clear how to integrate BR systems optimally with other part-task trainers. Consider our earlier example of the first responder and the helicopter evacuation of a patient. Because the patient and the helicopter exist only as images and sound, there is no physical body to move and no physical helicopter into which to load the patient. This problem could be addressed through two physical part-task trainers, one replicating the patient’s shape and form and the other replicating the door and fuselage of a helicopter. The BR system could then overlay the details of appearance onto the physical patient shape and overlay the physical helicopter trainer with the sounds and movement of an actual helicopter. The methodology, and indeed the full set of issues raised by even this simple application, has yet to be identified, let alone resolved. I-O psychologists can contribute by identifying which types of activities and work tasks are appropriate for BR applications. One method to do this would be to examine training-requirement documentation and consult with subject matter experts to identify currently unmet training needs.

Determining what constitutes the needed fidelity is another important challenge that I-O psychologists can help address. Presenting only the level of fidelity that is needed for training effectiveness is critical to controlling development costs for training systems because excessive fidelity adds costs without commensurate training value (Hays, 1989). Methods are available to determine what level of detail in replicating a live entity is needed, and some aspects of psychological and physical fidelity are clearly more important than others (Stacy, Walwanis, Wiggins, & Bolton, 2013; Whetzel, McDaniel, & Pollack, 2012).

Once BR systems become available for research, it will be increasingly possible to evaluate costs of presenting the training against benefits from the training. Phillips and Stone (2000) offer one method to perform this assessment. Hung (2010) and Jasson and Govender (2017) provide innovative models for training evaluation that could enable researchers to estimate the value to be obtained from investments in BR technology.
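
The arithmetic at the heart of such an assessment is straightforward, as the sketch below illustrates with invented dollar figures: a benefit-cost ratio and an ROI percentage computed from monetized benefits and fully loaded program costs, in the spirit of the comparison Phillips and Stone (2000) describe.

```python
def training_roi(monetized_benefits: float, program_costs: float):
    """Return (benefit-cost ratio, ROI as a percentage of program costs)."""
    bcr = monetized_benefits / program_costs
    roi_pct = (monetized_benefits - program_costs) / program_costs * 100.0
    return bcr, roi_pct


# Hypothetical figures only: $750k in avoided live-exercise costs against
# $500k to field and sustain a BR training capability.
print(training_roi(750_000, 500_000))   # (1.5, 50.0)
```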

In summary, BR training systems present both an exciting prospect for enhancing training and a number of technological challenges. I-O psychologists have opportunities to partner in developing these systems and in helping users obtain the benefits they promise.

References

Bort, J. (2014). “Blended reality” is the next tech buzzword and HP’s plans for it are really spectacular. Business Insider. Retrieved from https://www.businessinsider.com/blended-reality-is-the-next-big-thing-2014-10

Department of Defense Modeling and Simulation Coordination Office. (2018, September). Modeling and simulation glossary. Retrieved from https://www.msco.mil/MSReferences/Glossary/MSGlossary.aspx

Flavian, C., Ibanez-Sanchez, S., & Orus, C. (2019). The impact of virtual, augmented, and mixed reality technologies on the customer experience. Journal of Business Research, 100, 547–560.

Hays, R. T. (1989). Simulation fidelity in training system design: Bridging the gap between reality and training. Recent Research in Psychology series [Kindle Edition]. New York, NY: Springer.

Hung, T. K. (2010). An empirical study of the training evaluation decision-making model to measure training outcome. Social Behavior & Personality: An International Journal, 38, 87–101.

Jasson, C. C., & Govender, C. M. (2017). Measuring return on investment and risk in training—A business training evaluation model for managers and leaders. Acta Commercii, 17, 1–9.

Magnuson, S. (2019, January 2). Services declare breakthrough in LVC training. National Defense: The Business and Technology Magazine of NDIA, 103(782), 12. Retrieved from http://www.nationaldefensemagazine.org/articles/2019/1/2/services-declare-breakthrough-in-lvc-training

Phillips, J. J., & Stone, R. D. (2000). How to measure training results: A practical guide to tracking the six key indicators [Kindle Edition]. New York, NY: McGraw-Hill.

Seckel, A. (2004). Masters of deception: Escher, Dali, and the artists of optical illusion. New York, NY: Sterling.

Stacy, W., Walwanis, M., Wiggins, S., & Bolton, A. (2013, December). Layered fidelity: An approach to characterizing training environments. In Proceedings of the 2013 Interservice/Industry Training, Simulation, and Education Conference. Orlando, FL.

Vaughan, D. (1996). The Challenger launch decision: Risky technology, culture, and deviance at NASA. Chicago, IL: University of Chicago Press.

Whetzel, D. L., McDaniel, M. A., & Pollack, J. M. (2012). Work simulations. In M. A. Wilson, W. Bennett, S. G. Gibson, & G. M. Alliger (Eds.), The handbook of work analysis: Methods, systems, applications and science of work measurement in organizations [Kindle Edition]. New York, NY: Taylor & Francis.

 
