Trusted Artificial Intelligence Systems Engineering Challenge
Report Number: SERC-2024-ITR-001_V2
Publication Date: 2024-09-30
Project: Trusted Artificial Intelligence Systems Engineering Challenge
The Trusted Artificial Intelligence Challenge for Armaments Systems Engineering is a novel approach to improving the performance of AI-enabled systems. Sponsored by the Office of the Under Secretary of Defense for Research and Engineering (OUSD(R&E)) and the U.S. Army DEVCOM Armaments Center Systems Engineering Directorate, the challenge tasks student teams with developing engineering methods that account for system reliability and trustworthiness in the design and architecture of AI-enabled systems, particularly those used in life-critical situations.
“We [select student teams] and set a scenario where AI makes important decisions in a system but doesn’t have outstanding accuracy in all situations,” stated Principal Investigator Dr. Peter Beling (Virginia Tech) during a presentation at this year’s AI4SE & SE4AI Research and Application Workshop. “Student teams could do anything to manage the circumstances except improve the AI.” This shifts students’ focus away from improving AI models and toward system architecture: developing SE methods to build and operate systems that deliver trustworthy behavior even when built from less-trustworthy components.
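The idea of obtaining trustworthy behavior from less-trustworthy components can be illustrated with a minimal sketch (not taken from the challenge materials): if several imperfect detectors fail independently, a simple majority vote at the system level can be far more accurate than any single component. The detector accuracy and vote count below are illustrative assumptions only.

```python
import random

random.seed(0)

def detector(true_label, accuracy):
    """A single imperfect binary detector: returns the true label with
    probability `accuracy`, otherwise the wrong label."""
    return true_label if random.random() < accuracy else 1 - true_label

def voted_detection(true_label, accuracy, n_detectors=5):
    """System-level decision: majority vote over independent detectors.
    An odd `n_detectors` avoids ties."""
    votes = [detector(true_label, accuracy) for _ in range(n_detectors)]
    return int(sum(votes) > n_detectors / 2)

def measured_accuracy(decide, trials=100_000):
    """Empirical accuracy of a decision function over random 0/1 labels."""
    correct = sum(decide(label) == label
                  for label in (random.randint(0, 1) for _ in range(trials)))
    return correct / trials

single = measured_accuracy(lambda y: detector(y, 0.8))
voted = measured_accuracy(lambda y: voted_detection(y, 0.8, n_detectors=5))
print(f"single detector: {single:.3f}, 5-vote system: {voted:.3f}")
```

With five independent detectors that are each 80% accurate, the voted system is correct roughly 94% of the time, showing how architecture alone, without any improvement to the underlying AI, can raise system-level reliability. The independence assumption is doing real work here; correlated failures would erode the benefit.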
Seven student teams were selected for the challenge, representing the University of Virginia, Virginia Tech, Purdue University, The George Washington University, the University of Arizona, Stevens Institute of Technology, and Old Dominion University. The challenge mission involved creating a safe passage through a minefield using autonomous ground and aerial vehicles, remote sensing, and AI detection models whose accuracy is influenced by factors including ground conditions, terrain, and human operator performance. Over the course of three university semesters, the student teams will work toward the objectives for each of three heats:
- Heat 1 (Summer 2024): Get acquainted with the challenge and develop an analytic approach for future stages.
- Heat 2 (Fall 2024): Provide an initial computational environment to experiment with designs and control algorithms and conduct initial runs for analysis.
- Heat 3 (Spring 2025): Simulate an operational mission with additional complexity factors.
The interim technical report summarizes Heat 1 activities and deliverables, including the approaches each student team used to address the challenge. Activities to date demonstrate the benefits of improving both the design and the operation of AI-enabled systems, and of developing simulation approaches that evaluate overall measures of effectiveness across a variety of architectural options.
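A simulation approach of the kind described might be sketched as a Monte Carlo comparison of architectural options against an overall measure of effectiveness. The sketch below is purely illustrative and not drawn from the report: the terrain-dependent detection accuracies and the two architecture options (a single-pass sweep versus a dual-pass sweep with an independent confirming vehicle) are hypothetical assumptions.

```python
import random

random.seed(1)

# Hypothetical detection accuracies by ground condition (illustrative only).
TERRAIN_ACCURACY = {"clear": 0.95, "vegetated": 0.80, "rocky": 0.70}

def run_mission(architecture, n_mines=20):
    """Simulate one mission: fraction of mines detected along the lane."""
    detected = 0
    for _ in range(n_mines):
        terrain = random.choice(list(TERRAIN_ACCURACY))
        p = TERRAIN_ACCURACY[terrain]
        if architecture == "dual_pass":
            # Second, independent pass by a confirming vehicle.
            hit = random.random() < p or random.random() < p
        else:  # "single_pass"
            hit = random.random() < p
        detected += hit
    return detected / n_mines

def moe(architecture, runs=10_000):
    """Measure of effectiveness: mean fraction of mines detected."""
    return sum(run_mission(architecture) for _ in range(runs)) / runs

for arch in ("single_pass", "dual_pass"):
    print(f"{arch}: {moe(arch):.3f}")
```

Even this toy comparison shows the pattern the report points to: the measure of effectiveness is a property of the architecture as much as of the AI component, and simulation lets teams quantify that trade before committing to a design.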
Mr. Scott Lucero (Virginia Tech), a SERC researcher also involved with the project, noted an added benefit: “student teams got feedback in real-time operation and the opportunity to make changes to operational algorithms…everyone gets to participate in reviewing each other’s work so we’re all learning as we go through this.”
In Heats 2 and 3, the student teams will explicitly address the three key sponsor concerns, which are to identify:
- the SE activities and artifacts best suited to build trust in AI-enabled systems;
- the infrastructure needed to validate trust of such systems; and
- the key workforce skills and abilities required for an integrated product team to successfully develop and manage these systems.
Follow SERC on LinkedIn for updates on projects such as the Trusted AI Challenge and other systems engineering research.