Equipping Emergency Responders for
a Tech-Infused Future of Work

Some 4.6 million people serve as career or volunteer firefighters, police officers, emergency medical technicians, and paramedics in the United States. Emergency responders perform extremely demanding tasks in complex, stressful, and hazardous environments. One responder is lost in the line of duty every two days.

A team of researchers from Virginia Tech’s College of Engineering, Texas A&M University, and the University of Florida is working to improve the safety of these responders while enhancing their performance and quality of life with human augmentation technology.

The team is developing a learning platform called Learning Environments with Augmentation and Robotics for Next-gen Emergency Responders (LEARNER). The platform is a mixed reality, cloud-based system that will train emergency responders to use human augmentation technologies, including augmented reality and exoskeletons, in the field.

With this training, emergency responders will be able to boost their cognitive and physical capabilities while decreasing their risk of injury or loss in the line of duty.

LEARNER could transform the future of the emergency responder workforce, said Joe Gabbard, a Virginia Tech associate professor in the Grado Department of Industrial and Systems Engineering. To Gabbard, creating a scalable and accessible learning platform to help emergency responders navigate new technologies will safeguard their health, increase their career longevity, and ensure our nation’s preparedness to prevent and respond to emergencies.

“We want to train people how to do what they are already doing, but with these enhanced capabilities to make it easier, better, and safer,” Gabbard said.

Gabbard specializes in user-centered design of virtual and augmented reality and serves as the project’s principal investigator at Virginia Tech. For the learning platform, his team is designing and integrating a module on the use of augmented reality to assist emergency responders with wayfinding and visual communication. Alongside Gabbard, Divya Srinivasan, an associate professor in industrial and systems engineering, and Alexander Leonessa, a professor in mechanical engineering, are bringing learning on exoskeletons to the platform.

The project, a collaboration between academics, industry, and public service organizations, is part of a two-year, $5 million award from the “Future of Work at the Human-Technology Frontier” track of the National Science Foundation’s Convergence Accelerator program. The program supports use-inspired, team-based, multidisciplinary efforts that address societal challenges of national importance. The team was awarded $1 million for Phase I, during which it created the concept of the LEARNER platform. To advance to Phase II, the team took part in a pitch competition and a proposal evaluation outlining its plans to transition the research into practice before the committee selected the project.

The LEARNER platform offers increasing levels of immersion. A local emergency medical technician, for instance, could start with the system’s module for learning the basics of an augmented reality headset – turning on the device, adjusting it, setting preferences – by accessing the platform on a laptop or desktop. Next, they would access the system through a virtual reality headset, wearing it to simulate using augmented reality technology in the field.

For another of the system’s learning modules, Leonessa and Srinivasan are working to integrate the use of upper body powered exoskeletons that can augment a user’s physical capacity with machine-like power, helping users minimize fatigue while preserving their decision-making capabilities. They are working on the module’s design and integration into the system in collaboration with industry sponsor SARCOS Robotics.

Gabbard envisions that once emergency responders have familiarized themselves with these new technologies in the system, leaders such as fire chiefs and emergency medical services captains will one day be comfortable purchasing the tools for their units.

His team is also developing a module on the use of augmented reality in the context of a mass casualty incident, to help emergency responders rapidly triage patients and prepare them for transport more quickly and efficiently.

Gabbard’s team is conducting use case studies to determine the best way to deliver the training. “We have a pretty good idea of how to train people to use two-dimensional standard interfaces,” he said. “But understanding how to train somebody in an immersive space, to use an immersive technology, is new, and we’re still learning how to do that.”

Kyle Tanous, a research associate, and Cassidy Nelson, an industrial and systems engineering doctoral student, are leading a team of four undergraduate researchers developing core augmented reality components for the learning system. Nelson is responsible for applying human factors and human-centered design approaches that account for human strengths and limitations in cognitive processing and perception.

The undergraduate researchers who are working under Nelson’s direction are Macey Cohn, majoring in computer science; Joe Conwell, double majoring in mathematics and computer science; Taylor Davis, majoring in industrial design; and Sam Lally, majoring in computer science.

“What excites me most about this project is the idea that it might literally save lives,” said Nelson. “My hope is that the training tools we develop will not only increase effectiveness and efficiency in mass casualty triage, thus saving more civilians, but also help protect the emergency responders themselves. The faster a responder can safely search, triage, and get out of a hazardous area, the better.”

Gabbard wants the training to be accessible at many different scales. “We want even the smallest volunteer units in rural America to be able to use a desktop-based version of this platform,” Gabbard said. “If someone happens to have a virtual reality headset because they bought it for their home gaming system, or they received a grant from their local municipality, then they can actually go into a virtual environment and learn how to use augmented reality in the field.”

Virginia Tech alumna Ranjana Mehta, an associate professor at Texas A&M University, is the lead principal investigator for Phase II of the project and is overseeing emergency responder use case studies, which involve testing responders’ use of the learning platform with human augmentation technologies at facilities of the Texas A&M Engineering Extension Service.

Mehta brings expertise in adaptive learning, human factors, and human systems integration to the learning platform. Jing “Eric” Du, an associate professor from the University of Florida, is drawing on his expertise in virtual reality to help the team integrate virtual reality with exoskeletons.