Submitted by Paulo Costa
The evaluation of how uncertainty is dealt with within a given Information Fusion (IF) system is distinct from, although closely related to, the evaluation of the overall performance of the system. Metrics for evaluating the overall performance of IF systems are broader in scope than those focused on uncertainty handling: system-level metrics include the effects of the uncertainty representation, but they also capture other aspects of the fusion system that affect its performance.
For example, fusion-system-level metrics include timeliness (how quickly the system can come to a conclusion within a specified precision level), accuracy (where an object can be found, for a specified localization level), and confidence (what level of probability match is achieved for a defined recall level). Clearly, different choices in uncertainty representation approaches will affect the achievable timeliness, accuracy, and confidence of a system, and therefore must be considered when evaluating both the system’s performance as a whole and the specific impact of the uncertainty handling approach. Yet, when evaluating timeliness (or any other system-level metric), one will likely find factors not directly related to the handling of uncertainty itself[1], such as object tracking and identification report updates (i.e., Level 1 fusion), situation and threat assessment relative to scenario constraints (i.e., Level 2/3 fusion), overall system architecture (e.g., centralized, distributed, etc.), data management processes and feedback/input control processes (i.e., Level 4 fusion considerations), and user-machine coordination based on operating systems (i.e., Level 5 fusion), among others.
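As a rough illustration of these three system-level metrics — the class name, field names, and thresholds below are hypothetical, chosen only for this sketch and not drawn from the URREF itself — one could compute them over a batch of fusion runs like this:

```python
from dataclasses import dataclass
from statistics import mean

@dataclass
class FusionRun:
    latency_s: float     # time until the system reached a conclusion
    loc_error_m: float   # localization error of the reported object
    match_prob: float    # probability the system assigned to the true hypothesis
    recalled: bool       # whether the true object was reported at all

def timeliness(runs, max_loc_error_m=50.0):
    """Mean latency over runs that met the specified precision level."""
    precise = [r for r in runs if r.loc_error_m <= max_loc_error_m]
    return mean(r.latency_s for r in precise)

def accuracy(runs):
    """Mean localization error: where can an object be found."""
    return mean(r.loc_error_m for r in runs)

def confidence(runs):
    """Mean probability of a match among runs where the object was recalled."""
    return mean(r.match_prob for r in runs if r.recalled)

runs = [
    FusionRun(2.1, 30.0, 0.92, True),
    FusionRun(3.4, 80.0, 0.55, True),
    FusionRun(1.8, 20.0, 0.88, True),
]
```

Note that none of these functions mentions the uncertainty representation directly, even though the values they summarize depend on it — which is exactly the entanglement the paragraph above describes.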
In an ideal situation, evaluating how the management of uncertainty affects the overall performance of a fusion system would be just a matter of isolating the directly related aspects. Unfortunately, real-life information fusion systems are usually too complex to allow for such a clear-cut separation, and most of the aspects considered in system-wide performance are entangled with or influenced by uncertainty representation considerations to some degree. Isolating the impact of uncertainty handling in an IF system is thus a matter of understanding how intertwined the choice of an uncertainty representation and reasoning approach is with the major performance metrics used to evaluate the IF system itself.
The basic premise of the ETURWG is that achieving a level of understanding that supports an unbiased evaluation of the impact of uncertainty in an IF system is a task by itself complex enough to warrant status as an open research area. Understanding the distinctions between fusion system-level performance criteria and fusion uncertainty representation performance criteria is the focus of the group. This page presents one of the initial results of the ETURWG, the URREF ontology, and addresses how it is currently being used to support the development of an unbiased uncertainty reasoning evaluation framework.
This page is an initial attempt at determining the distinctions between evaluating the performance of an IF system and evaluating the impact of uncertainty on it. This would then allow us to establish an evaluation framework capable of supporting unbiased assessment of how the choice of uncertainty representation and reasoning impacts the performance of an IF system. The basic idea behind the framework is to analyze an abstract fusion system and define its input data and output products. In a prototypical information fusion system of the future, the uncertainty representation approach would be “plug-and-playable.” That is, one could run it with a Bayesian approach, then swap the Bayesian approach for a Dempster-Shafer approach, then for a Fuzzy Random Set approach, or for a combination of uncertainty reasoning methods. The input data are the same in each case, as are the output products (but not necessarily the values in the output products). Figure 1 below depicts the boundaries of the uncertainty representation and reasoning evaluation framework (URREF).
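A minimal sketch of that plug-and-play idea, assuming a shared two-hypothesis frame and hypothetical class names (none of this comes from the URREF itself): the same pair of input reports is fed to a Bayesian combiner and to Dempster's rule, and the output products have the same shape but different values.

```python
from abc import ABC, abstractmethod

class UncertaintyReasoner(ABC):
    """Swappable uncertainty representation and reasoning component."""
    @abstractmethod
    def fuse(self, a, b):
        ...

class BayesianReasoner(UncertaintyReasoner):
    def fuse(self, a, b):
        # Treat the two reports as independent likelihoods and renormalize.
        unnorm = {h: a[h] * b[h] for h in a}
        z = sum(unnorm.values())
        return {h: v / z for h, v in unnorm.items()}

class DempsterShaferReasoner(UncertaintyReasoner):
    def fuse(self, a, b):
        # Dempster's rule of combination: masses are keyed by frozensets
        # of hypotheses; mass on non-intersecting focal elements is conflict.
        fused, conflict = {}, 0.0
        for fa, wa in a.items():
            for fb, wb in b.items():
                inter = fa & fb
                if inter:
                    fused[inter] = fused.get(inter, 0.0) + wa * wb
                else:
                    conflict += wa * wb
        return {f: w / (1.0 - conflict) for f, w in fused.items()}

H, N = "hostile", "neutral"
THETA = frozenset({H, N})

# Same scenario, two interchangeable reasoners:
bayes = BayesianReasoner().fuse({H: 0.6, N: 0.4}, {H: 0.7, N: 0.3})
ds = DempsterShaferReasoner().fuse(
    {frozenset({H}): 0.6, THETA: 0.4},
    {frozenset({H}): 0.7, THETA: 0.3},
)
```

The Bayesian output commits all probability mass to the singleton hypotheses, while the Dempster-Shafer output retains mass on the whole frame — the kind of value-level difference (with identical input and output structure) the evaluation framework has to account for.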
Figure 1 - Boundaries of the Uncertainty Representation and Reasoning Evaluation Framework
There are two elements in the picture that are exogenous to the evaluation framework, named in the picture as “World being sensed” and “World being reported.” Between these two external elements, the boundary of the evaluation framework encompasses the way uncertainty is handled when data is input to the system, during the processes that occur within it, as well as when the final product is delivered to the IF system’s users.
The first external element refers to the events of interest to the IF system that happen in the world and are perceived by the system sources. Note that the implicit definition of sources in this case encompasses anything that can capture information and send it to the system. That is, both hard sources (e.g. imaging, radar, video, etc.) and soft sources (HUMINT reports, software alerts, etc.) are considered external to the evaluation system with respect to their associated sensing capabilities, while the way they convey their information is within the scope of the system [1, 2, 3].
This is an important distinction between the evaluation of an IF system, which usually encompasses the sensitivity of its sensors, and the evaluation of its handling of uncertainty, which focuses only on how the uncertainty embedded in the sensors’ information is captured. The latter comprises what the picture labels the Input step, which involves assessing the system’s ability to represent uncertainty as an intrinsic part of the information being captured. As an example, information regarding trust in the input from a given sensor is an important item when evaluating how the overall system handles uncertainty, although it might not be as critical for its overall performance. A key question for evaluating uncertainty representation is what the uncertainty characteristics of the input data are, and how they affect the use of different uncertainty schemes.
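One standard way to fold source trust into the Input step — shown here only as an illustrative sketch, using Shafer's discounting operation on belief masses; the function and variable names are hypothetical — is to keep a fraction of each focal mass equal to the trust factor and move the remainder to the whole frame, i.e., to total ignorance:

```python
def discount(masses, trust, frame):
    """Shafer discounting: retain a fraction `trust` of each focal mass and
    assign the remaining 1 - trust to the full frame ("don't know")."""
    theta = frozenset(frame)
    out = {focal: trust * w for focal, w in masses.items()}
    out[theta] = out.get(theta, 0.0) + (1.0 - trust)
    return out

frame = {"hostile", "neutral"}
# A confident report from a sensor we only half trust:
report = {frozenset({"hostile"}): 0.9, frozenset(frame): 0.1}
weak = discount(report, trust=0.5, frame=frame)
```

The discounted report still has the same structure as the original, so it flows through the rest of the fusion pipeline unchanged — only the uncertainty content is adjusted, which is precisely the aspect the URREF Input criteria aim to assess.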
In the ideal system model, having the appropriate data characteristics is critical. If the characteristics do not span the range of uncertainty techniques, then the model may not give meaningful results about the operationally significant differences between the techniques. Correctly identifying the desired input data characteristics will shape the future development of use cases and modeling data sets for those use cases.
Once information is in the IF system, it will be processed to generate the system’s deliverables, which require uncertainty characterization and reporting in the Output step. This process involves fusion techniques and algorithms that are directly affected by the uncertainty handling technique being used and by its impact on the system’s inferential process. In this case, the URREF criteria focus on aspects that are specific to the way uncertainty is considered and handled within the fusion process. This is not an evaluation of the system’s performance as a whole. We want to understand how the uncertainty representation affects system performance, and whether different uncertainty representation schemes are more or less robust to variations in the remaining parts of the IF system architecture. But we want to focus specifically on the uncertainty representation aspects, and attempt, as best as possible, to separate those aspects from overall system performance and architecture issues.
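One way to probe that robustness question empirically — a sketch only, with the perturbation scheme, magnitudes, and function names chosen arbitrarily for illustration — is to jitter the input reports slightly and measure how far a given fusion rule's output moves; the same harness could then be run against any other uncertainty representation:

```python
import random

def normalize(d):
    z = sum(d.values())
    return {k: v / z for k, v in d.items()}

def bayes_fuse(a, b):
    # Independent-likelihood product over a shared hypothesis set, renormalized.
    return normalize({h: a[h] * b[h] for h in a})

def output_sensitivity(fuse, a, b, eps=0.05, trials=200, seed=0):
    """Max absolute change in any output value when inputs are jittered by up to eps."""
    rng = random.Random(seed)
    base = fuse(a, b)
    worst = 0.0
    for _ in range(trials):
        ja = normalize({h: max(1e-9, v + rng.uniform(-eps, eps)) for h, v in a.items()})
        jb = normalize({h: max(1e-9, v + rng.uniform(-eps, eps)) for h, v in b.items()})
        out = fuse(ja, jb)
        worst = max(worst, max(abs(out[h] - base[h]) for h in base))
    return worst

sens = output_sensitivity(
    bayes_fuse,
    {"hostile": 0.6, "neutral": 0.4},
    {"hostile": 0.7, "neutral": 0.3},
)
```

Because the harness only touches the fusion rule through its inputs and outputs, it isolates the uncertainty-handling component from the rest of the architecture — the kind of separation, however imperfect, that the evaluation framework is after.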
After the information is fused and properly treated, then it is conveyed to the system’s users. In the figure, these are represented by an image depicting decision-makers who would likely be supported by the IF system in their daily tasks. The URREF output step involves the assessment of how information on uncertainty is presented to the users and, therefore, how it impacts the quality of their decision-making process.
References:
- P.C.G. Costa, K.C. Chang, K.B. Laskey, T. Levitt, and W. Sun, “High-Level Fusion: Issues in Developing a Formal Theory,” Int. Conf. on Information Fusion, 2010.
- P.C.G. Costa, R.N. Carvalho, K.B. Laskey, and C.Y. Park, “Evaluating Uncertainty Representation and Reasoning in HLF Systems,” Int. Conf. on Information Fusion, 2011.
- A.-L. Jousselme, P. Maupin, and E. Bossé, “Quantitative Approaches,” Ch. 8 in Concepts, Models and Tools for Information Fusion, Artech House, 2006.
1 This holds at least at a high level of abstraction. There might be interactions between the uncertainty representation approach and these system factors, but we begin with a presumption that they are not significant.