IEEE Access (Jan 2021)

An Adaptive Threshold for the Canny Algorithm With Deep Reinforcement Learning

  • Keong-Hun Choi,
  • Jong-Eun Ha

DOI
https://doi.org/10.1109/ACCESS.2021.3130132
Journal volume & issue
Vol. 9
pp. 156846–156856

Abstract

The Canny algorithm is widely used for edge detection, but it requires parameter adjustment to obtain a high-quality edge image. Several methods can select these parameters automatically, but they cannot cope with the diverse variations found in images. The Canny algorithm requires setting three parameters: one related to the smoothing window size, and the low and high thresholds. In this paper, we assume the smoothing window size is fixed to a predefined value. This paper proposes a method that provides adaptive thresholds for the Canny algorithm and operates well on images acquired under varying conditions. We select optimal values of the two thresholds adaptively using an algorithm based on the Deep Q-Network (DQN). We introduce a state model, a policy model, and a reward model to formulate the given problem in deep reinforcement learning. Unlike existing supervised approaches, the proposed method can adapt to a new environment using only images without labels. We show the feasibility of the proposed algorithm through diverse experimental results.
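To make the role of the two thresholds concrete, the following is a minimal, pure-Python sketch of the Canny double-threshold (hysteresis) step, the stage whose low/high parameter pair the paper's DQN agent selects adaptively. The toy gradient-magnitude map and the particular threshold values are illustrative assumptions, not taken from the paper.

```python
def hysteresis(mag, low, high):
    """Return the set of (row, col) edge pixels after double thresholding.

    Pixels with magnitude >= high are seeded as strong edges; pixels with
    low <= magnitude < high are kept only if 8-connected (transitively)
    to a strong pixel; everything below low is suppressed.
    """
    rows, cols = len(mag), len(mag[0])
    strong = [(r, c) for r in range(rows) for c in range(cols)
              if mag[r][c] >= high]
    edges = set(strong)
    stack = list(strong)
    while stack:  # flood-fill weak pixels reachable from strong seeds
        r, c = stack.pop()
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                nr, nc = r + dr, c + dc
                if (0 <= nr < rows and 0 <= nc < cols
                        and (nr, nc) not in edges
                        and mag[nr][nc] >= low):
                    edges.add((nr, nc))
                    stack.append((nr, nc))
    return edges

# Toy 3x4 gradient-magnitude map: the strong pixel (200) pulls in the
# adjacent weak pixel (80); the isolated weak pixel (90) is discarded.
mag = [
    [10, 80, 200, 10],
    [10, 10,  10, 10],
    [90, 10,  10, 10],
]
print(sorted(hysteresis(mag, low=50, high=100)))  # -> [(0, 1), (0, 2)]
```

This illustrates why the choice of (low, high) is image-dependent: raising `low` above 80 would drop the weak pixel at (0, 1), while lowering `high` below 90 would admit the isolated pixel at (2, 0) as an edge, which is exactly the sensitivity the proposed adaptive-threshold method addresses.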

Keywords