IEEE Access (Jan 2020)

DeepFood: Food Image Analysis and Dietary Assessment via Deep Model

  • Landu Jiang,
  • Bojia Qiu,
  • Xue Liu,
  • Chenxi Huang,
  • Kunhui Lin

Journal volume & issue
Vol. 8
pp. 47477 – 47489


Food is essential for human life and has long been a concern of many healthcare initiatives. New dietary assessment and nutrition analysis tools now give people more opportunities to understand their daily eating habits, explore nutrition patterns, and maintain a healthy diet. In this paper, we develop a deep-model-based food recognition and dietary assessment system to study and analyze food items from daily meal images (e.g., captured by smartphone). Specifically, we propose a three-step algorithm that recognizes multi-item (food) images by detecting candidate regions and classifying them with a deep convolutional neural network (CNN). The system first generates multiple region proposals on the input image by applying the Region Proposal Network (RPN) derived from the Faster R-CNN model. It then identifies each region proposal by mapping it into the feature maps, classifies it into a food category, and locates it in the original image. Finally, the system analyzes the nutritional ingredients based on the recognition results and generates a dietary assessment report by calculating the amounts of calories, fat, carbohydrate, and protein. In the evaluation, we conduct extensive experiments on two popular food image datasets, UEC-FOOD100 and UEC-FOOD256. We also build a new dataset of food items with bounding-box annotations based on FOOD101. The model is evaluated with several evaluation metrics. The experimental results show that our system recognizes food items accurately and generates dietary assessment reports efficiently, giving users clear insight into healthy eating and guiding their daily recipes to improve health and wellness.
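The final step of the pipeline described above, turning recognized food labels into a nutrient report, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: the food names and per-serving nutrition values in `NUTRITION_DB` are placeholder assumptions, and the real system would look up detection results against an actual nutrition database.

```python
# Hypothetical sketch of DeepFood's dietary-assessment step: given the food
# labels predicted by the detector for one meal image, look up per-serving
# nutrition facts and total them into a report. All numbers below are
# illustrative placeholders, not the paper's actual database.

# food label -> (calories kcal, fat g, carbohydrate g, protein g)
NUTRITION_DB = {
    "rice": (205, 0.4, 45.0, 4.3),
    "miso_soup": (84, 3.4, 8.0, 6.0),
    "grilled_salmon": (233, 14.0, 0.0, 25.0),
}

def dietary_report(detected_items):
    """Sum calories, fat, carbohydrate, and protein over detected items."""
    totals = {"calories": 0.0, "fat": 0.0, "carbohydrate": 0.0, "protein": 0.0}
    for item in detected_items:
        cal, fat, carb, protein = NUTRITION_DB[item]
        totals["calories"] += cal
        totals["fat"] += fat
        totals["carbohydrate"] += carb
        totals["protein"] += protein
    return totals

# Example: two items recognized in one meal image.
report = dietary_report(["rice", "grilled_salmon"])
```

In the full system, `detected_items` would come from the RPN + CNN classification stages, with each bounding box contributing one label to the report.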