IEEE Access (Jan 2022)

An Optimized Multi-Task Learning Model for Disaster Classification and Victim Detection in Federated Learning Environments

  • Yi Jie Wong,
  • Mau-Luen Tham,
  • Ban-Hoe Kwan,
  • Ezra Morris Abraham Gnanamuthu,
  • Yasunori Owada

DOI
https://doi.org/10.1109/ACCESS.2022.3218655
Journal volume & issue
Vol. 10
pp. 115930 – 115944

Abstract

Disaster classification and victim detection are two important tasks in enabling efficient rescue operations. In this paper, we propose a multi-task learning (MTL) model that accomplishes these two tasks simultaneously. The idea is to attach a pruned head model to a backbone network. We mathematically pinpoint the optimal branching location and the depth of the pruned head model. Apart from the decoupled task training capability, the MTL model offers lower memory requirements (a 12.8 MB saving) and better disaster classification accuracy (a 1-2% gain), while preserving the same detection performance (an average precision (AP) of 0.694), compared to the traditional method. Such advantages of flexibility, speed, and accuracy facilitate the large-scale deployment of Internet of Things (IoT) applications, where we explore the potential of federated learning (FL) and active learning (AL). Given the high ambiguity within disaster images, a modified AL-based technique is introduced. For realistic implementation, the production-ready OpenFL and OpenVINO tools are adopted to update the global FL model and to optimize the trained model, respectively. Experimental results are promising: the FL-based techniques are comparable to or better than their centralized learning (CL) counterparts. Moreover, application portability is demonstrated on different hardware such as a CPU and a Raspberry Pi.
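To make the architectural idea in the abstract concrete, the following is a minimal NumPy sketch of a multi-task model with a shared trunk, a pruned (shallow) classification head attached at a branching location, and a detection branch continuing through the rest of the backbone. All layer counts, dimensions, and the branch index here are illustrative assumptions, not the paper's actual network or its mathematically derived optimum.

```python
import numpy as np

rng = np.random.default_rng(0)

def layer(x, w):
    # One hypothetical backbone stage: linear map followed by ReLU.
    return np.maximum(x @ w, 0.0)

# Illustrative dimensions only; the paper's backbone and head sizes differ.
backbone_ws = [rng.standard_normal((16, 16)) * 0.1 for _ in range(4)]
branch_at = 2  # hypothetical branching location along the backbone
head_ws = [rng.standard_normal((16, 16)) * 0.1]        # pruned head: a single shallow stage
cls_w = rng.standard_normal((16, 3)) * 0.1             # e.g. 3 disaster classes (assumed)
det_w = rng.standard_normal((16, 5)) * 0.1             # e.g. box coords + score (assumed)

def mtl_forward(x):
    # Shared trunk: backbone stages up to the branching location.
    for w in backbone_ws[:branch_at]:
        x = layer(x, w)
    # Classification branch: pruned head attached at the branch point.
    h = x
    for w in head_ws:
        h = layer(h, w)
    cls_logits = h @ cls_w
    # Detection branch: remainder of the backbone, then the detection output.
    for w in backbone_ws[branch_at:]:
        x = layer(x, w)
    det_out = x @ det_w
    return cls_logits, det_out

cls_logits, det_out = mtl_forward(rng.standard_normal((1, 16)))
print(cls_logits.shape, det_out.shape)  # (1, 3) (1, 5)
```

Because the two heads only share the trunk below the branch point, each head can in principle be trained or updated without touching the other branch's weights, which is the "decoupled task training" property the abstract refers to.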

Keywords