For each attack scenario, multiple technical reconfiguration solutions can be derived, each accompanied by a set of possible HMT solutions. To understand the importance of selecting the most robust HMT solutions and to characterize them, this project will develop a methodology for organizing and evaluating the results of human-in-the-loop experiments. The initial version of the methodology will be based on the results of a case study focused on a mission whose performance could be adversely affected by non-kinetic enemy attacks. This first use case will be evaluated through high-fidelity experiments from the viewpoint of combined operator/autonomous system HMT performance.
Consider a team of autonomous aerial vehicles assigned to conduct a surveillance mission for purposes such as battle damage assessment in support of required medical responses. Further, assume that one person oversees the team of aerial vehicles and controls individual members when necessary. In addition, assume that the mission's objective is to collect information on where to send responders and how to route them most safely to those in need of help. As demonstrated in prior SERC research efforts (System Aware Cybersecurity), an adversary can, for example, execute an undetected cyberattack that hampers the ability to provide surveillance in selected areas that would be meaningful to our forces. Similarly, decoys or corrupted surveillance information could be used as a means to misdirect our forces.
For the desired use case, a hardware-in-the-loop experimental capability will be developed, using actual ground stations and simulated vehicle inputs. The HMT system will be developed to support the desired experiments. Experiments will be conducted at Wright-Patterson Air Force Base (WPAFB), and Institutional Review Board (IRB) approvals will be required. Given the nature of the intended tests and their similarity to the earlier evaluations related to cyberattacks, expeditious IRB approvals are expected.
Experiments will include scenarios in which the user displays are augmented by trust metrics calculated on the basis of the consistency of sensor information. The relevant algorithms for trust metrics have already been developed in general form at UVA; tuning them to the specifics of the experimental scenarios is anticipated to be a minor task. The trust metrics will serve to augment human intelligence for the task of assessing the trustworthiness of sensor information. This assessment task can be difficult for unaided humans when it involves evaluating the correlations and consistency of networks of data sources, as is the case in many operational scenarios. The trust metrics are themselves subject to operator confidence: experiments will be conducted to understand the relationships between operator confidence in the metrics, operator experience with them, and ultimate system outcomes.
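To illustrate the kind of consistency-based scoring involved (not the specific UVA algorithms, which are not detailed here), the following is a minimal sketch: each sensor is scored by how closely its readings track the per-timestep consensus (median) of all sensors, so a spoofed or corrupted source drifts toward a low trust score. The function name, the median-consensus rule, and the `scale` parameter are illustrative assumptions.

```python
from statistics import median

def trust_scores(readings: dict[str, list[float]], scale: float = 1.0) -> dict[str, float]:
    """Illustrative consistency metric: score each sensor in (0, 1] by its
    agreement with the per-timestep median across all sensors."""
    n = len(next(iter(readings.values())))
    # Consensus value at each timestep: median across sensors.
    consensus = [median(series[t] for series in readings.values()) for t in range(n)]
    scores = {}
    for name, series in readings.items():
        # Mean absolute deviation from consensus, mapped to (0, 1].
        mad = sum(abs(x - c) for x, c in zip(series, consensus)) / n
        scores[name] = 1.0 / (1.0 + mad / scale)
    return scores

# Hypothetical readings from three surveillance vehicles; "uav3" is
# inconsistent with the others, as a corrupted feed might be.
sensors = {
    "uav1": [10.0, 10.2, 9.9],
    "uav2": [10.1, 10.0, 10.1],
    "uav3": [25.0, 24.8, 25.3],
}
scores = trust_scores(sensors)
```

In this sketch the inconsistent source receives a markedly lower score than the two mutually consistent ones; an operator display would surface such scores so that a human need not inspect raw cross-sensor correlations directly.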
Results from the evaluations will be mapped into a first version of the desired methodology. In addition, based on what is learned regarding the need for a broader set of experimental use cases, needs for follow-on research will be identified.