Journal of Inequalities and Applications (Sep 2024)
Interpolation for neural-network operators activated with a generalized logistic-type function
Abstract
This paper defines a family of neural-network interpolation operators. The first derivative of a generalized logistic-type function is used as the density function. Using a first-order uniform approximation theorem for continuous functions defined on finite intervals, the interpolation properties of these operators are established. A Kantorovich-type variant of the operators $F_{n}^{a,\varepsilon}$ is also introduced, and the approximation of these Kantorovich-type operators in $L^{p}$ spaces with $1 \leq p \leq \infty$ is studied. Further, different combinations of the parameters of the generalized logistic-type activation function $\theta_{s,a}$ are examined to determine which parameter values yield a more efficient activation function. By choosing suitable parameters for the operator $F_{n}^{a,\varepsilon}$ and its Kantorovich variant, the approximation of various example functions is illustrated.
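The abstract's construction rests on the fact that the first derivative of a logistic-type function is a bell-shaped bump, which can serve as a density kernel for interpolation operators. The exact form of $\theta_{s,a}$ is not given in the abstract, so the sketch below uses the ordinary logistic sigmoid as a stand-in (an assumption for illustration only, not the paper's definition):

```python
import math

def logistic(x):
    """Ordinary logistic function, a placeholder for theta_{s,a}
    (the paper's generalized form is not specified in the abstract)."""
    return 1.0 / (1.0 + math.exp(-x))

def density(x):
    """First derivative of the logistic function.
    Since logistic'(x) = logistic(x) * (1 - logistic(x)),
    this is a symmetric, bell-shaped density peaking at x = 0."""
    f = logistic(x)
    return f * (1.0 - f)

# Bell-shape properties expected of a density kernel:
print(density(0.0))                  # peak value 0.25
print(density(2.0) == density(-2.0)) # symmetric about 0
```

Any generalized logistic-type function with the same qualitative shape would give a kernel with the same peak-at-the-node, decay-away-from-it behavior that interpolation operators of this kind exploit.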