Technical Report
Quantitative Risk – Phase 4

Report Number: SERC-2016-TR-117
Publication Date: 2016-04-15
Project:
Quantitative Technical Risk
Principal Investigators:
Dr. Gary Witus
Co-Principal Investigators:
This report documents the findings and recommendations from the fourth phase of the SERC project on Quantitative Risk Assessment for DoD development and acquisition programs. In this phase, we conducted empirical research to assess the practicality, specificity, and relevance of prospective risk leading indicators (RLI) via application to an ACAT I acquisition Program beginning at initiation of the Engineering and Manufacturing Development (EMD) phase. The objectives of this research were to:
- Refine and adapt the provisional RLI for reporting data and program processes in EMD
- Formulate and demonstrate additional, complementary risk leading indicators for EMD-phase programs
- Produce recommendations for how to incorporate risk leading indicators into EMD-phase risk management practices and program reporting
- Identify key areas for further applied research and development for RLI
RLI are evidence-based metrics that indicate the presence of, proximity to, or trend towards conditions conducive to or likely to cause future delays, overruns, or other adverse acquisition outcomes. RLI indicate risk exposure. A staffing shortfall relative to the plan in a particular engineering Integrated Product Team (IPT) is a risk exposure condition. Large deviations between planned and actual task durations within a Work Breakdown Structure (WBS) element indicate unpredictability – a condition of risk exposure. A small margin, or a large gap, between a Technical Performance Measure (TPM) value and its target value is a risk exposure condition. Trends toward conditions of greater risk exposure are risk leading indicators.
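For illustration only, the following Python sketch shows how metrics of the kind listed above might be computed; the function names, field values, and thresholds are hypothetical and are not drawn from the program reviewed in this study.

```python
# Illustrative sketch of simple risk-exposure metrics of the kind described
# above; names, data, and thresholds are hypothetical.

def staffing_shortfall(planned_heads: float, actual_heads: float) -> float:
    """Fractional staffing shortfall of an IPT relative to its plan."""
    return max(0.0, (planned_heads - actual_heads) / planned_heads)

def duration_deviation(planned_days: float, actual_days: float) -> float:
    """Relative deviation of actual task duration from plan for a WBS element."""
    return abs(actual_days - planned_days) / planned_days

def tpm_margin(current_value: float, target_value: float) -> float:
    """TPM margin relative to target (positive = margin, negative = gap)."""
    return (current_value - target_value) / abs(target_value)

if __name__ == "__main__":
    print(staffing_shortfall(12, 9))    # 0.25 -> 25% under plan
    print(duration_deviation(20, 31))   # 0.55 -> large unpredictability
    print(tpm_margin(0.92, 1.00))       # -0.08 -> TPM gap versus target
```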
RLI direct management attention to areas of risk exposure that warrant closer examination. They do not identify specific risk events, but instead detect risk conditions. RLI are a complementary addition to the methods, procedures, and tools used to identify and manage specific technology risks as described in the DoD Risk Management Guide [1]. Specific technology risks are identified events that might or might not happen, whose progress can be tracked, and for which the likelihood and the time and cost consequences of recovery, should they occur or fail to occur, can be estimated.
The prospective RLI began with the System Development Leading Indicators (SDLI), developed by the National Defense Industrial Association (NDIA) in cooperation with the Practical Software and Systems Measurement (PSM) organization and the International Council on Systems Engineering (INCOSE) [2, 3, 4, 5]. The SDLI were developed through a series of subject matter expert assessments, practitioner surveys, and interviews with Government and Industry stakeholders to identify information areas and metrics relevant to foreseeing issues in system development. While not originally intended to be risk leading indicators, the initial hypothesis was that the system development leading indicators could be a source of risk leading indicators.
In this phase, we conducted an empirical review of the SDLI as prospective RLI for EMD-phase programs. We addressed four major research questions:
- Are they practical – are the input data available in standard Contract Data Requirements List (CDRL) items, accessible as data, and reported in time to be useful for leading indicators? What changes in reporting or metrics would improve the situation?
- Are they informative – do they identify specific areas of risk exposure by IPT, WBS, and type or source of risk exposure for Risk Management investigation and action? How can this be improved?
- Are they relevant – are they related to cited causes and evidence of overruns and delay, systemic shortfalls in program progress, and progress unpredictability?
- Are they comprehensive – are there significant sources or types of risk exposure in EMD programs that they miss? Are there data available in EMD-phase programs that indicate and specify risk exposure but that the SDLI do not exploit? What refinements will resolve this? What additional metrics and analyses are practical, informative, and relevant to address other causes?
We found that common CDRL reporting requirements provided input data too late for use as leading indicators, and too infrequently to assess trends. When key data are reported at major technical reviews, the data may be too late to establish trends or detect early evidence indicating risk exposure. We found that while most of the needed input data were delivered in some form, they were scattered across many different CDRL items, often on different schedules. We found that some of the information was presented as diagrams, pictures, and tables that did not lend themselves to evaluating quantitative metrics. The evidence was available but not accessible. We found that the contractor had the underlying data, but had not been asked to organize and deliver it on a schedule and in a format suitable for RLI assessment.
We found that the SDLI could be disaggregated to exploit information available during EMD and thereby become more informative and relevant – more selective and sensitive. Averaging obscures risk. Averaging hides the least capable element and prevents its identification, yet the least capable elements of a program are the sources of greatest risk. To improve sensitivity to risk exposure and specificity as to its source, using the SDLI as RLI requires disaggregating them and modifying them to focus on the least capable elements rather than on overall status and progress.
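The following short sketch illustrates the disaggregation point with hypothetical data: a program-level average can appear acceptable while a single IPT drives the risk exposure.

```python
# Hypothetical example: program-level averaging hides the least capable IPT.
requirements_closure = {          # fraction of requirements verified, by IPT
    "Airframe IPT": 0.95,
    "Propulsion IPT": 0.91,
    "Mission Systems IPT": 0.52,  # the least capable element
    "Support Equipment IPT": 0.88,
}

program_average = sum(requirements_closure.values()) / len(requirements_closure)
worst_ipt, worst_value = min(requirements_closure.items(), key=lambda kv: kv[1])

print(f"Program average closure: {program_average:.2f}")          # ~0.82, looks acceptable
print(f"Least capable element: {worst_ipt} at {worst_value:.2f}")  # the actual risk driver
```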
We found evidence that the SDLI are generally relevant to program progress, although some were more practical, informative, and relevant than others.
We found that there are aspects of risk exposure, and sources of data, in an EMD program that were not addressed in the SDLI. We examined these in detail and formulated additional, complementary RLI for EMD-phase programs.
We found that there are EMD risk dynamics not addressed in the SDLI, and program progress evidence sources not exploited. We found that bias and uncertainty in the task duration estimates underlying the Integrated Master Schedule (IMS) are a significant source of risk exposure. Bias is the result of systematically underestimating task difficulty and/or over-optimistic expectations in planning. Uncertainty is related to the unpredictability of task difficulty and duration. We formulated methods to evaluate planning estimate bias and uncertainty as risk leading indicators. We developed methods to use these results in probabilistic schedule risk analysis (SRA) for evidence-based SRA, and to extend SRA methods to identify tasks with a high likelihood of significant impact on internal milestone delay, i.e., having a significant effect on the probable critical path. We identified additional risk metrics closely related to program progress and uncertainty based on standard EMD documentation.
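As a minimal sketch of evidence-based SRA of the general kind described above – not the specific method developed in this project – the following Python example estimates planning bias and uncertainty from completed tasks and propagates them through the remaining tasks on a path by Monte Carlo simulation. The log-normal assumption on duration ratios and all data values are illustrative assumptions.

```python
# Minimal evidence-based SRA sketch: estimate planning bias and uncertainty
# from completed-task duration ratios, then simulate remaining-path duration.
# Distributional assumption (log-normal ratios) and all data are hypothetical.
import math
import random
import statistics

# Actual/planned duration ratios for completed IMS tasks (hypothetical).
completed_ratios = [1.10, 1.35, 0.95, 1.60, 1.20, 1.45, 1.05, 1.30]

log_ratios = [math.log(r) for r in completed_ratios]
bias = statistics.mean(log_ratios)          # > 0 indicates systematic underestimation
uncertainty = statistics.stdev(log_ratios)  # unpredictability of task duration

# Remaining tasks on a serial path to an internal milestone (planned days).
remaining_planned = [30, 45, 20, 60]
milestone_budget = 170  # planned days to the milestone

def simulate_path_duration() -> float:
    """Sample one realization of the remaining path duration."""
    return sum(p * math.exp(random.gauss(bias, uncertainty))
               for p in remaining_planned)

trials = 10_000
late = sum(simulate_path_duration() > milestone_budget for _ in range(trials))
print(f"Bias (mean log-ratio): {bias:.2f}, uncertainty (std): {uncertainty:.2f}")
print(f"Estimated probability of missing the milestone: {late / trials:.2%}")
```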
We found that a key R&D element of EMD programs – “Interfaces and Architectures” – is not sufficiently well defined for RLI reporting. Further work is needed to define the information domain. Cyber-physical systems have multiple, interdependent architectures (physical, signal, electronic, data, thermal, software, etc.). Size (the number of architecture elements and interfaces) and EMD progress (the numbers changed, designed, tested, and returned for changes) are potentially valuable first-order RLI. Each architecture element and interface that has not been specified, designed, integrated, or tested is an opportunity for error and exposes the program to risk. Further definition of what constitutes an architecture element and interface is needed, as are technical means to count the items from source data.
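To make the first-order counting idea concrete, the sketch below assumes each interface is tracked with a simple lifecycle status; the statuses and records are hypothetical, and defining what counts as an element or interface remains the open issue noted above.

```python
# Hypothetical first-order "Interfaces and Architectures" counts by status.
from collections import Counter

interfaces = [
    {"id": "IF-001", "status": "tested"},
    {"id": "IF-002", "status": "designed"},
    {"id": "IF-003", "status": "specified"},
    {"id": "IF-004", "status": "returned_for_change"},
    {"id": "IF-005", "status": "unspecified"},
]

counts = Counter(i["status"] for i in interfaces)
not_yet_tested = sum(v for k, v in counts.items() if k != "tested")

print(dict(counts))
print(f"Interfaces still exposing the program to integration risk: {not_yet_tested}")
```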
We found that, in EMD programs, integration readiness and integration risk are critical but not well defined. DoD has developed a framework for “Integration Readiness Level” (IRL) assessment. This framework may need further testing and adjustment. Integration testing and approval is a complex issue. The open question is: for what subsystems, and at what level of integration, should IRL be evaluated? This is related to the challenge of specifying “Interfaces and Architectures” progress indicators. The IRL categories address the level of specification, design, and testing of an integration path; they do not say which integrations to address. As a point of reference, Technology Readiness Levels (TRL) are evaluated for specific technologies that subject matter experts have designated as “critical” or “of interest” for the particular program.
We found that delays in program data reporting reduce the value of risk leading indicators based on those data – data are reported too late for input to leading indicators, and too infrequently to estimate trends (a key component of leading indicators). We found that the needed input data tend to be scattered across diverse reports, and are often presented as diagrams, pictures, and other “image” formats not conducive to data processing. We recommend that a consolidated Risk Leading Indicator Report be included as a “Best Practice” in the DoD Risk Management Guide, and that the RLI Report be delivered quarterly. The input data for the RLI Report are not different from those already commonly produced in various reports (with the possible exception of “Interfaces and Architectures” metrics), but would be provided in a consolidated format and updated with sufficient frequency for use as leading indicators and to evaluate trends.
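As a hypothetical sketch of what such a consolidated, quarterly RLI Report could contain – not a proposed CDRL format – the structure below groups the input data already discussed into a single record.

```python
# Hypothetical structure for a consolidated quarterly RLI Report; field names
# are illustrative only.
from dataclasses import dataclass, field
from typing import Dict, List

@dataclass
class RLIReport:
    reporting_period: str                       # e.g. "2016-Q1"
    staffing_vs_plan_by_ipt: Dict[str, float]   # actual / planned heads
    duration_ratio_by_wbs: Dict[str, float]     # actual / planned task durations
    tpm_margins: Dict[str, float]               # (value - target) / |target|
    interface_status_counts: Dict[str, int]     # specified / designed / tested / ...
    watch_items: List[str] = field(default_factory=list)

report = RLIReport(
    reporting_period="2016-Q1",
    staffing_vs_plan_by_ipt={"Mission Systems IPT": 0.75},
    duration_ratio_by_wbs={"WBS 1.2.3": 1.55},
    tpm_margins={"Range (km)": -0.08},
    interface_status_counts={"tested": 40, "designed": 25, "unspecified": 12},
)
print(report.reporting_period, len(report.watch_items))
```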