Fusion 2012 Special Session


Fusion 2012 Special Session on
Evaluation of Technologies for Uncertainty Reasoning




The ETUR Session is intended to leverage the latest results of ISIF's ETURWG, which aims to bring together advances and developments in the evaluation of uncertainty representation. The session will be attended by ETURWG participants as well as other researchers and practitioners interested in uncertainty evaluation, and will summarize the state of the art in uncertainty analysis, representation, and evaluation. By holding a special session, the community can collectively address a common need of the ISIF community, coordinate with researchers in the area, and jointly assess perspectives on the various techniques for evaluating uncertainty assessment and, key to fusion, uncertainty reduction.


One of the main goals of information fusion is uncertainty reduction. Quantification of uncertainty reduction depends on how uncertainty is represented. Uncertainty representation differs across the various Levels of Information Fusion (as defined by the JDL/DFIG models). For Level 1 Fusion, standard measures of uncertainty reduction are widely accepted in the community. For tracking, uncertainty reduction corresponds to reducing spatial (distance) and temporal (time) errors; for identification, the goal is to increase the probability of detection and reduce the probability of false alarms.
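As a concrete illustration of the Level 1 measures mentioned above, the sketch below computes spatial RMSE for a track and the probability of detection and false alarm for an identifier. All numbers and function names are invented for the example; real systems would use their own error definitions.

```python
import math

def rmse(errors):
    """Root-mean-square of position (or time) residuals for a track."""
    return math.sqrt(sum(e * e for e in errors) / len(errors))

def detection_rates(true_detections, false_alarms, num_targets, num_clutter):
    """Probability of detection (Pd) and false alarm (Pfa) from raw counts."""
    return true_detections / num_targets, false_alarms / num_clutter

# Single-sensor vs. fused track errors (metres): fusion shrinks spatial error.
single_sensor = [4.1, 3.8, 5.0, 4.4]
fused_track = [2.0, 1.9, 2.4, 2.1]
print(rmse(fused_track) < rmse(single_sensor))  # True: uncertainty reduced

pd, pfa = detection_rates(8, 2, 10, 100)
print(pd, pfa)  # 0.8 0.02
```

Under this view, "uncertainty reduction" is simply the comparison of such metrics before and after fusion.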
Information fusion of hard and soft information from diverse sensor types still depends heavily on human cognition. This results in a scalability conundrum that current technologies are incapable of solving. Although there is widespread acknowledgement that an Information Fusion Evaluation Framework must support automated knowledge representation and reasoning with uncertainty, there is no consensus on the requirements such a framework must meet, on the most appropriate technologies to satisfy those requirements, or on how to evaluate how well they are being met. A clearly defined, scientifically rigorous evaluation framework and metrics are needed to help information fusion researchers assess the suitability of various approaches and tools to their applications. The ETUR special session will be devoted to fostering discussion of this evaluation framework in the context of the most recent results in the ETURWG proceedings.

Anticipated Impact

The impact on the ISIF community would be an organized session presenting a series of uncertainty representation methods coordinated with their evaluation. The techniques discussed and the ensuing questions and answers would be valuable for researchers in the ISIF community; the bigger impact, however, would be for the customers of information fusion systems, who need to determine how to measure, evaluate, and approve systems that assess situations beyond Level 1 fusion. These customers would gain guidelines for drafting requirements documentation, for quantifying the gain of fusion systems over current techniques, and for identifying issues that are important in information fusion system design.

Panel 1: Uncertainty Evaluation: Current Status and Major Challenges


One of the basic applications of information fusion is to reduce uncertainty. Position accuracy from sensor covariance reduction, confidence improvement from false-alarm rejection in multimodal collections, and data filtering to limit cognitive overload are key elements of information fusion techniques for reducing uncertainty. With the advent of the many applications of information fusion, uncertainty arises in many forms: from source characterization (i.e., pedigree), from limited testing for robust operations, and from association of data across wide gaps in spectral, temporal, or spatial collections. This panel discussion seeks to motivate and highlight the challenges of uncertainty evaluation in the information age. We envision a discussion that utilizes and extends techniques from low-level information fusion to the higher levels of information fusion. The panel is part of the ETURWG and thus has its roots in the development and support of the ISIF ETURWG. To obtain qualified and diverse viewpoints, we are inviting you to be a member of the panel.
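The "covariance reduction" effect mentioned above can be made concrete with a minimal one-dimensional sketch of the standard inverse-variance (product-of-Gaussians) fusion rule. The function name and numbers are illustrative, not from the session materials.

```python
def fuse_gaussians(mean1, var1, mean2, var2):
    """Fuse two independent Gaussian estimates of the same quantity.
    The fused variance 1/(1/var1 + 1/var2) never exceeds either input
    variance -- the covariance-reduction effect of fusing sensors."""
    fused_var = 1.0 / (1.0 / var1 + 1.0 / var2)
    fused_mean = fused_var * (mean1 / var1 + mean2 / var2)
    return fused_mean, fused_var

# Two sensors report position 10.0 and 12.0, each with variance 4.0.
print(fuse_gaussians(10.0, 4.0, 12.0, 4.0))  # (11.0, 2.0)
```

Two equally uncertain sensors halve the variance; the same principle, extended to covariance matrices, underlies track fusion at Level 1.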


Paulo Costa - George Mason University


  • Kathryn Laskey (GMU) – Evaluating Bayesian methods
  • Erik Blasch (AFRL) – Progress and future of evaluation techniques
  • Sten F. Andler (University of Skövde) – Evaluation of Uncertainty Management Methods: Precision vs. Imprecision
  • Jean Dezert (ONERA) - A Fundamental Contradiction in Dempster-Shafer Theory
  • Anne-Laure Jousselme (DRDC, Valcartier) - What's in an uncertainty representation?
  • Gavin Powell (EADS) - Uncertainty Evaluation: Current Status and Major Challenges


  1. Can an unbiased, general evaluation framework be achieved?
  2. What methods are appropriate for the evaluation of UR?
  3. What are the main criteria for evaluating UR in fusion systems?
  4. What are examples of successful ETUR?
  5. Are generated data sets useful, available, and necessary?
  6. What is the future of ETUR methods, and what are the top unsolved challenges?

Panel Paper


Panel 2: Issues of Uncertainty Analysis in High-Level Information Fusion


High-Level Information Fusion (HLIF) utilizes techniques from Low-Level Information Fusion (LLIF) to support situation/impact assessment, user involvement, and mission and resource management (SUM). Given the unbounded analysis of situations, events, users, resources, and missions, uncertainty is inherent in the nature of the application requirements. In this panel, we seek discussions of methods and techniques that bound the problem of HLIF uncertainty analysis without resorting to high-performance statistical computational solutions (e.g., particle filters), strong mathematical assumptions (e.g., optimal Bayesian approaches with maximum-likelihood solutions), or rigorous modeling and problem scoping (e.g., expert systems), which lead to time delays, brittleness, and rigidity, respectively. Given the various methods of LLIF and the complexity of HLIF, it is of interest to the ISIF community to utilize diverse methods (such as those from other communities) that bridge the LLIF-HLIF gap in uncertainty analysis. The panel is part of the ETURWG and thus has its roots in the development and support of the ISIF ETURWG. To obtain qualified and diverse viewpoints, we are inviting you to be a member of the panel.


Erik Blasch - Air Force Research Laboratory


  • Paulo Costa (GMU) - HLIF Analysis with Bayesian Approaches
  • Johan Schubert (FOI) – Information fusion management
  • Dafni Stampoulos, represented by Gavin Powell (EADS) – Evidential methods in HLIF
  • Ng Gee Wah – Computational Cognitive System for High Level Information Fusion
  • Pierre Valin (DRDC, Valcartier) – Situation Awareness and Uncertainty
  • Rakesh Nagi (University at Buffalo) - HLIF Uncertainty Issues: Hard+Soft Fusion and Uncertainty


  1. Are methods for dealing with uncertainty (e.g., learning, reasoning, using parametric distributions) equally applicable to all HLIF situations?
  2. Are there preferable UR techniques for LLIF and HLIF? How can the gap between LLIF and HLIF uncertainty evaluation be bridged?
  3. Is there a way of generalizing when one method is preferable? Is there a theoretical justification for such a choice (e.g., based on axioms, assumptions, or requirements)?
  4. Can multiple methods coexist and act synergistically?
  5. How should cases be handled in which the observations/sensors cannot directly provide propositions about the hypotheses the user is interested in?

Panel Paper


Special Session Papers

Top 10 trends in High Level Information Fusion
Erik Blasch, D. A. Lambert, and E. Bosse

Towards Unbiased Evaluation of Uncertainty Reasoning: The URREF Ontology
Paulo Costa, Kathryn Laskey, Erik Blasch, Anne-Laure Jousselme

Uncertainty Representations for a Vehicle-Borne IED Surveillance Problem
Anne-Laure Jousselme and Patrick Maupin

Shallow semantic analysis to estimate HUMINT correlation
Valentina Dragos

A Generic Bayesian Network For Identification and Assessment of Objects in Maritime Surveillance
Juergen Ziegler, Max Krueger, and Kathryn Heller

