IET Cyber-Physical Systems (Nov 2016)

ApesNet: a pixel-wise efficient segmentation network for embedded devices

  • Chunpeng Wu,
  • Hsin-Pai Cheng,
  • Sicheng Li,
  • Hai (Helen) Li,
  • Yiran Chen

DOI
https://doi.org/10.1049/iet-cps.2016.0027

Abstract


Road scene understanding and semantic segmentation are ongoing challenges in computer vision. A precise segmentation can help a machine learning model understand the real world more accurately. In addition, a well-designed efficient model can be used on resource-limited devices. The authors aim to implement an efficient, high-level scene understanding model on an embedded device with finite power and resources. Toward this goal, the authors propose ApesNet, an efficient pixel-wise segmentation network that understands road scenes in near real time and achieves promising accuracy. The key findings of the authors’ experiments are a significantly lower classification time and a high accuracy compared with other conventional segmentation methods. The model is characterised by efficient training and sufficiently fast testing. Experimentally, the authors use two road scene benchmarks, CamVid and Cityscapes, to show the advantages of ApesNet. The authors compare the proposed architecture’s accuracy and time performance with SegNet-Basic, a deep convolutional encoder–decoder architecture. ApesNet is 37% smaller than SegNet-Basic in terms of model size. With this advantage, the combined encoding and decoding time for each image is 2.5 times faster than that of SegNet-Basic.

Keywords