Courtesy of Army Research Laboratory, Human Research and Engineering Directorate, Simulation and Training Technology Center
Military medical training presents trainees with a range of casualty scenarios and engagements involving hostile forces. Engagements may require Soldiers to suppress the opposing force and prevent additional casualties while treating the wounded. Medical training scenarios reinforce essential skills, including proper tactics such as maintaining a perimeter and properly using cover and concealment. The addition of a casualty requires trainees to adhere to the rules of Tactical Combat Casualty Care (TC3). The first phase of TC3 is care under fire, in which the Soldier must return fire, apply a tourniquet if necessary, and quickly move the casualty to cover. With the casualty in a safer location, tactical field care begins and additional treatments are allowed. Tactical field care treatments are meant to address life-threatening conditions; once the casualty is stabilized, he or she is evacuated. Throughout these phases, all squad members, including the Medic and Combat Life Saver, must remain vigilant to ensure their safety and the safety of the patient. In this fast-paced and often chaotic environment, it is difficult for instructors to evaluate performance effectively.
Current medical lane training requires several personnel, including instructors to assess performance, actors to portray opposing forces, and controllers for the simulation assets. The medical simulation research branch at the Army Research Laboratory's Simulation and Training Technology Center (STTC) is working to unburden the instructors by automating control of the simulation assets using Light Detection and Ranging (LiDAR) and sensor fusion. Automating the simulators and effects will free instructors to provide a comprehensive after-action review (AAR). LiDAR automation may not only reduce instructor load but also eliminate the need for operators at each simulator.
The STTC is currently researching the use of LiDAR to control video cameras, smoke generators, and other simulation assets. The LiDAR will serve as a "simulation director": as trainees approach areas of interest, smoke generators and manikins turn on automatically. Additionally, the LiDAR is able to determine the location of trainees and focus pan-tilt-zoom cameras on them.
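To illustrate the "simulation director" behavior described above, the following is a minimal sketch of trigger-zone logic in Python. The zone names, coordinates, radii, and single-shot latching are assumptions for illustration only, not details of the STTC system.

```python
import math
from dataclasses import dataclass

@dataclass
class AssetZone:
    """A hypothetical area of interest tied to a simulation asset."""
    name: str        # e.g. "smoke_generator_1" (illustrative name)
    x: float         # zone center in the LiDAR's local frame (meters)
    y: float
    radius: float    # trigger radius (meters)
    triggered: bool = False

def update_zones(trainee_positions, zones):
    """Return the names of assets whose zones a trainee has just entered."""
    fired = []
    for zone in zones:
        if zone.triggered:
            continue  # latch: each effect fires only once per scenario
        for (tx, ty) in trainee_positions:
            if math.hypot(tx - zone.x, ty - zone.y) <= zone.radius:
                zone.triggered = True
                fired.append(zone.name)
                break
    return fired

zones = [AssetZone("smoke_generator_1", 10.0, 5.0, 3.0),
         AssetZone("manikin_1", 25.0, -4.0, 2.5)]

# A trainee detected at (9.0, 6.5) is within 3 m of the smoke generator.
print(update_zones([(9.0, 6.5)], zones))   # -> ['smoke_generator_1']
```

In a real system the trainee positions would come from the LiDAR tracker each scan, and the zone activations would be dispatched to the smoke generators and manikins over the simulation network.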
Future work will investigate sensor fusion and networking of multiple LiDAR sensors to extend range and augment data resolution. Fusion with a GPS sensor would allow the system to geo-locate trainees in the field without the need to instrument each Soldier.
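The GPS-fusion idea can be sketched as a simple coordinate conversion: given a GPS fix at the sensor's own location, a trainee tracked in the LiDAR's local frame (meters east and north of the sensor) can be mapped to latitude and longitude. The flat-earth approximation below is an assumption that holds well over a training lane a few hundred meters across; the coordinates used are purely illustrative.

```python
import math

EARTH_RADIUS_M = 6371000.0  # mean Earth radius

def geolocate(origin_lat, origin_lon, east_m, north_m):
    """Convert a local offset (meters east/north of the sensor) to lat/lon
    using a flat-earth (equirectangular) approximation."""
    dlat = math.degrees(north_m / EARTH_RADIUS_M)
    dlon = math.degrees(east_m /
                        (EARTH_RADIUS_M * math.cos(math.radians(origin_lat))))
    return origin_lat + dlat, origin_lon + dlon

# Illustrative example: a trainee tracked 100 m east of a sensor whose GPS
# fix is (35.139 N, 79.006 W) shifts roughly 0.0011 degrees of longitude.
lat, lon = geolocate(35.139, -79.006, 100.0, 0.0)
```

Networked LiDAR units would each apply this conversion from their own surveyed origin, so overlapping tracks can be fused in a common geographic frame.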
Additionally, LiDAR sensors add significant capabilities to AARs by providing analysis of troop formations and lines of sight, offering a unique teaching tool for instructors and a learning opportunity for trainees. The system will be easily portable and rapidly configurable, allowing sites to move lanes or lend the system to other units.
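The line-of-sight analysis mentioned above can be reduced to a simple geometric test: two tracked positions see each other if no obstacle segment crosses the straight line between them. The following sketch uses a standard 2-D segment-intersection check; the wall coordinates are illustrative assumptions, not data from the actual lane.

```python
def segments_intersect(p1, p2, p3, p4):
    """True if segment p1-p2 properly crosses segment p3-p4 (2-D points)."""
    def ccw(a, b, c):
        # Counter-clockwise orientation test for three points.
        return (c[1] - a[1]) * (b[0] - a[0]) > (b[1] - a[1]) * (c[0] - a[0])
    return (ccw(p1, p3, p4) != ccw(p2, p3, p4)) and \
           (ccw(p1, p2, p3) != ccw(p1, p2, p4))

def has_line_of_sight(a, b, walls):
    """True if no wall segment blocks the straight line between a and b."""
    return not any(segments_intersect(a, b, w0, w1) for (w0, w1) in walls)

# Illustrative obstacle: a wall running from (5, -1) to (5, 1) meters.
walls = [((5.0, -1.0), (5.0, 1.0))]
print(has_line_of_sight((0.0, 0.0), (10.0, 0.0), walls))  # -> False (blocked)
print(has_line_of_sight((0.0, 0.0), (10.0, 5.0), walls))  # -> True (clear)
```

Run over recorded LiDAR tracks and a map of lane obstacles, a check like this could flag during the AAR which squad members could actually observe the casualty or the opposing force at any moment.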
Feasibility tests of the LiDAR sensor for tracking personnel in a training environment have been conducted at Fort Bragg, N.C. The initial results have been very positive, and the research team is moving on to the next phase, which will include significant hardware integration, software development, and an evaluation of the system with instructor feedback on the usability and functionality of the AAR component.