IEEE Access (Jan 2018)

Common Metrics to Benchmark Human-Machine Teams (HMT): A Review

  • Praveen Damacharla,
  • Ahmad Y. Javaid,
  • Jennie J. Gallimore,
  • Vijay K. Devabhaktuni

DOI
https://doi.org/10.1109/ACCESS.2018.2853560
Journal volume & issue
Vol. 6, pp. 38637–38655

Abstract


A significant amount of work is invested in human-machine teaming (HMT) across multiple fields. Accurately and effectively measuring the system performance of an HMT is crucial for advancing the design of these systems. Metrics are the enabling tools for devising a benchmark in any system and serve as an evaluation platform for assessing its performance, as well as for its verification and validation. Currently, there is no agreed-upon set of benchmark metrics for developing HMT systems; therefore, identifying and classifying common metrics is imperative to establish a benchmark in the HMT field. The key focus of this review is a detailed survey aimed at identifying the metrics employed in different segments of HMT and determining the common metrics that can be used in the future to benchmark HMTs. The review is organized as follows: identification of the metrics used in HMTs to date, followed by their classification based on functionality and measurement technique. We also analyze all the identified metrics in detail, classifying them as theoretical, applied, real-time, non-real-time, measurable, and observable. We conclude with a detailed analysis of the identified common metrics and their use in benchmarking HMTs.
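As an illustration only, the short Python sketch below shows one way the classification axes named in the abstract (theoretical vs. applied, real-time vs. non-real-time, measurable vs. observable) could be encoded as a metric taxonomy. All class, field, and metric names here are hypothetical and are not taken from the paper.

from dataclasses import dataclass
from enum import Enum

class Basis(Enum):
    THEORETICAL = "theoretical"
    APPLIED = "applied"

class Timing(Enum):
    REAL_TIME = "real-time"
    NON_REAL_TIME = "non-real-time"

class Capture(Enum):
    MEASURABLE = "measurable"    # recorded directly from instrumentation or logs
    OBSERVABLE = "observable"    # inferred by a human observer or rater

@dataclass(frozen=True)
class HMTMetric:
    """One HMT metric tagged along the three classification axes."""
    name: str
    basis: Basis
    timing: Timing
    capture: Capture

# Hypothetical entries, used only to show how the taxonomy could be applied.
catalog = [
    HMTMetric("task completion time", Basis.APPLIED, Timing.REAL_TIME, Capture.MEASURABLE),
    HMTMetric("operator trust rating", Basis.THEORETICAL, Timing.NON_REAL_TIME, Capture.OBSERVABLE),
]

# Example query: metrics that could be logged automatically during a mission.
real_time_measurable = [m.name for m in catalog
                        if m.timing is Timing.REAL_TIME and m.capture is Capture.MEASURABLE]
print(real_time_measurable)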

Keywords