Technical Report
Performance Measures for Multi-Agent Systems of Autonomous Intelligent Agents in Satellite Systems
-
Systems Engineering and Systems Management Transformation
Report Number: SERC-2020-TR-015
Publication Date: 2020-09-30
Project:
PEAS Framework for Test and Evaluation of Multi-Agent Systems of Autonomous Intelligent Agents
Principal Investigators:
Dr. Laura Freeman
Co-Principal Investigators:
Dr. Jitesh Panchal
Ivan Hernandez
Multi-agent systems of autonomous intelligent agents (MASoAIAs) are a promising design framework for addressing complex problems in dynamic operating environments. These MASoAIAs are composed of individual, possibly heterogeneous cognitive agents that must coordinate to address mission-level objectives over time. This multi-agent architecture allows for emergent system behavior, which can offer increased operational flexibility and lead to superior mission-level performance, provided proper design and testing are carried out. However, the complexity of these systems and the evolving nature of the agents and environment make straightforward application of existing test and evaluation methods impractical.
In this project, we focus on the development of quantitative performance measures as the first step in developing a data-driven test strategy. We propose methods for abstracting MASoAIAs and relate these abstractions to relevant agent- and mission-level performance measures. Specifically, we explore the test and evaluation of exemplar MASoAIAs: networks of satellites designed to detect and track fire plumes in the United States. These MASoAIAs are studied at different levels of abstraction, where each lower level of abstraction provides more architectural, agent, and implementation detail. As we move down levels of abstraction, we may use this additional information -- along with insights from the machine learning, complex systems, and cognition literature -- to identify increasingly refined performance measures for the system and its individual agents. Importantly, this hierarchical approach allows system developers to (a) generate actionable performance measures using available knowledge of the functional and physical architecture of the system and its component agents, and (b) sequentially add performance measures as agent- and physical implementation-level details become known. Additional performance measures added at lower levels of abstraction can provide a better understanding of system-level performance even under dynamic environmental conditions.
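To illustrate the hierarchical approach in concrete terms, the following is a minimal sketch (the names, telemetry fields, and measures are hypothetical; the report does not prescribe an implementation). Measures are registered per abstraction level, and evaluating the system at a given level applies every measure known at that level and above:

```python
from dataclasses import dataclass, field

# Hypothetical sketch: level 0 = functional (black-box) architecture,
# level 1 = grey-box agents. Lower levels add measures without removing
# those defined at higher levels of abstraction.

@dataclass
class MeasureRegistry:
    measures: dict = field(default_factory=dict)  # level -> {name: fn}

    def add(self, level, name, fn):
        self.measures.setdefault(level, {})[name] = fn

    def evaluate(self, max_level, telemetry):
        """Apply all measures available down to the given abstraction level."""
        results = {}
        for level in range(max_level + 1):
            for name, fn in self.measures.get(level, {}).items():
                results[name] = fn(telemetry)
        return results

# Mission-level measure already definable from the black-box architecture:
def detection_rate(t):
    return t["plumes_detected"] / t["plumes_present"]

# Agent-level measure that only becomes available at the grey-box level:
def mean_track_latency(t):
    return sum(t["track_latencies"]) / len(t["track_latencies"])

registry = MeasureRegistry()
registry.add(0, "detection_rate", detection_rate)
registry.add(1, "mean_track_latency", mean_track_latency)

telemetry = {"plumes_detected": 9, "plumes_present": 10,
             "track_latencies": [4.0, 6.0]}
print(registry.evaluate(1, telemetry))  # both measures evaluated
```

Under this sketch, moving from level 0 to level 1 refines the evaluation without invalidating the mission-level measures already in place.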
We present analyses of MASoAIAs at two such levels of abstraction. First, we study the system as represented by a functional architecture of black-box cognitive agents with set inputs, actuators, objectives, and inter-satellite and ground station communication channels. Second, we study the system as represented by an architecture of more explicit grey-box cognitive agents with clearly defined inter-satellite communication channels and possible teaming strategies. In this second analysis, agents are more explicitly defined through block diagrams specifying cognitive agent-level tasks, potential agent-level software implementations, and the data to be collected during agent operation that could inform performance measures or other diagnostics. This analysis allows a more refined formulation of agent-level performance measures to supplement the mission-level measures.
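The distinction between the two abstraction levels can be sketched as follows (a hypothetical illustration; the agent interfaces, task stages, and threshold are assumptions, not the report's design). A black-box agent exposes only its inputs, actuator outputs, and communication channel, while a grey-box agent additionally exposes internal task stages whose intermediate outputs can be logged and fed into agent-level performance measures:

```python
from abc import ABC, abstractmethod

class BlackBoxAgent(ABC):
    """Level-one abstraction: only inputs, actuators, and messaging are visible."""
    def __init__(self, agent_id):
        self.agent_id = agent_id
        self.outbox = []          # inter-satellite / ground-station messages

    @abstractmethod
    def step(self, observation):
        """Map sensor inputs to an actuator command."""

    def send(self, message):
        self.outbox.append(message)

class GreyBoxAgent(BlackBoxAgent):
    """Level-two abstraction: cognitive task stages are explicit and logged."""
    def __init__(self, agent_id):
        super().__init__(agent_id)
        self.trace = []           # data collected during agent operation

    def step(self, observation):
        # Explicit cognitive stages, each logged for diagnostics/measures.
        detected = self._detect(observation)
        self.trace.append(("detect", detected))
        command = self._plan(detected)
        self.trace.append(("plan", command))
        return command

    def _detect(self, observation):
        # Hypothetical detection rule: thresholded infrared intensity.
        return observation.get("ir_intensity", 0.0) > 0.5

    def _plan(self, detected):
        return "track" if detected else "scan"

agent = GreyBoxAgent("sat-1")
print(agent.step({"ir_intensity": 0.8}))  # -> track
```

The grey-box agent's `trace` is the kind of per-stage operational data that, per the analysis above, enables agent-level measures (e.g., detection accuracy per stage) beyond what the black-box interface alone supports.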
This report stops at grey-box abstractions of systems. Future work will consider lower levels of abstraction, at which performance measures may be generated to evaluate individual agents for specific cognitive abilities, and these abilities may then be mapped back to mission-level performance. This level of analysis will not only allow decision makers to understand the cognitive capability of current agents, but also close the design-test-evaluate-design loop by analyzing the contributions that increasing levels of agent cognition provide to the overall system.