IEEE Access (Jan 2020)
Fusion Learning Model for Mobile Face Safe Detection and Facial Gesture Analysis
Abstract
Face pose analysis has broad application prospects in fields such as public safety monitoring and human-computer interaction. Traditional deep learning methods are mostly trained on public datasets, and their robustness is poor in specific application scenarios. In addition, most models need to crop the facial region before analysis, which is not only slow but also discards facial context in natural environments. In response to these problems, this paper proposes a joint learning network model for Mobile Face Safe Detection and pose analysis. The method first introduces a cloud-service-assisted semi-automated image annotation approach, which labels driver pose images from road traffic monitoring scenes and provides additional training data for subsequent joint learning. Second, a cascaded multi-task network resolves the dependency of face pose analysis on Mobile Face Safe Detection. At the same time, a fusion loss function, categorized training data, and an Online Hard Example Mining (OHEM) training strategy are used to improve the model's robustness in complex environments. Finally, the FDDB, AFLW, and Prima datasets are used to verify the superiority of our model through comparison with other algorithms.
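Since the abstract references an Online Hard Example Mining (OHEM) training strategy, the following is a minimal sketch of how OHEM can be applied to a classification loss, assuming a PyTorch training setup; the function name and the keep ratio are illustrative assumptions, not the paper's actual implementation.

```python
# Minimal OHEM sketch (illustrative; not the paper's implementation).
import torch
import torch.nn.functional as F

def ohem_cross_entropy(logits, targets, keep_ratio=0.7):
    """Keep only the hardest examples (highest per-sample loss) in the batch."""
    per_sample_loss = F.cross_entropy(logits, targets, reduction="none")
    num_keep = max(1, int(keep_ratio * per_sample_loss.numel()))
    hard_losses, _ = torch.topk(per_sample_loss, num_keep)  # select hardest samples
    return hard_losses.mean()

# Usage: replace the standard loss in the training loop, e.g.
# loss = ohem_cross_entropy(model(images), labels, keep_ratio=0.7)
```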
Keywords