Mathematics (Jan 2022)

SinLU: Sinu-Sigmoidal Linear Unit

  • Ashis Paul,
  • Rajarshi Bandyopadhyay,
  • Jin Hee Yoon,
  • Zong Woo Geem,
  • Ram Sarkar

DOI
https://doi.org/10.3390/math10030337
Journal volume & issue
Vol. 10, no. 3
p. 337

Abstract

Non-linear activation functions are integral parts of deep neural architectures. For the large and complex datasets that neural networks process, both the computational complexity and the approximation capability of a network can differ significantly depending on the activation function used. Parameterizing an activation function by introducing learnable parameters generally improves performance. Herein, a novel activation function called the Sinu-sigmoidal Linear Unit (SinLU) is proposed. SinLU is formulated as SinLU(x) = (x + a·sin(bx)) · σ(x), where σ(x) is the sigmoid function. The proposed function incorporates a sine wave, enabling new functionalities beyond traditional linear-unit activations. Two trainable parameters control the participation of the sinusoidal component in the function and help to achieve an easily trainable, fast-converging function. The performance of the proposed SinLU is compared against widely used activation functions such as ReLU, GELU and SiLU. We demonstrate the robustness of the proposed activation function by conducting experiments in a wide array of domains, using multiple types of neural network-based models on standard datasets. The use of a sine wave with trainable parameters results in better performance of SinLU than that of commonly used activation functions.
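
The following is a minimal sketch of SinLU as a PyTorch module, following the formula given above. The initialization of the trainable parameters a and b (both set to 1.0 here) is an assumption made for illustration, not a detail taken from the paper.

    # Minimal sketch of SinLU(x) = (x + a*sin(b*x)) * sigmoid(x)
    # with trainable parameters a and b.
    import torch
    import torch.nn as nn

    class SinLU(nn.Module):
        def __init__(self, a_init: float = 1.0, b_init: float = 1.0):
            super().__init__()
            # Trainable scalars controlling the sinusoidal contribution.
            # Initial values are assumptions for this illustration.
            self.a = nn.Parameter(torch.tensor(a_init))
            self.b = nn.Parameter(torch.tensor(b_init))

        def forward(self, x: torch.Tensor) -> torch.Tensor:
            return (x + self.a * torch.sin(self.b * x)) * torch.sigmoid(x)

    # Usage example: apply SinLU to a random tensor.
    if __name__ == "__main__":
        act = SinLU()
        x = torch.randn(4, 8)
        y = act(x)
        print(y.shape)  # torch.Size([4, 8])

Because a and b are registered as nn.Parameter objects, they are updated by backpropagation along with the rest of the network, which is how the sinusoidal participation becomes learnable during training.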

Keywords