International Journal of Computational Intelligence Systems (Feb 2024)

Interval Type-2 Mutual Subsethood Cauchy Fuzzy Neural Inference System (IT2MSCFuNIS)

  • Hesham A. Hefny,
  • Nelly S. Amer

DOI
https://doi.org/10.1007/s44196-024-00405-y
Journal volume & issue
Vol. 17, no. 1
pp. 1 – 23

Abstract


An interval type-2 (IT2) mutual subsethood Cauchy fuzzy neural inference system is proposed in this paper. The network architecture consists of three layers, with all connection weights being IT2 Cauchy fuzzy membership functions (CMFs). The crisp inputs to the system are fuzzified into IT2CMFs with fixed centers and uncertain spreads. The hidden layer represents the rule-based knowledge. The firing degree of the antecedent part of each rule at the hidden layer is computed by aggregating, via product, the mutual subsethood similarity measures between the inputs and the connection weights. Volume defuzzification is used to compute the numeric output, and a gradient-descent back-propagation algorithm is used to train the model. The novelty of the proposed model is threefold. First, it enriches the theory of mutual subsethood fuzzy neural models by adopting the Cauchy membership function (CMF) as another powerful fuzzy basis function (FBF), rather than the classical choice of Gaussian membership functions (GMFs). Second, the mutual subsethood similarity measure between IT2CMFs and all of the model's parameter-update equations are derived as analytic closed-form formulas, not computed numerically or approximately. Third, the type-1 (T1) mutual subsethood Cauchy fuzzy neural inference system (T1MSCFuNIS), with all its analytic closed-form formulas, can be extracted directly as a special case of the general IT2MSCFuNIS formulas. These contributions make the proposed model a concrete and effective development of the theory of mutual subsethood fuzzy neural models. Both the IT2MSCFuNIS and T1MSCFuNIS models have been tested on examples from the domains of function approximation, classification, and prediction. The results confirm the efficacy of both models compared with other models reported in the literature.
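
To make the abstract's building blocks concrete, the sketch below illustrates, under stated assumptions, a type-1 Cauchy membership function, an IT2 Cauchy set obtained from a fixed center with an uncertain spread, and a grid-based approximation of the mutual subsethood similarity measure with product aggregation of rule firing. All function names, the example parameters, and the numerical discretisation are illustrative choices by the editor; the paper itself derives these quantities in analytic closed form for IT2 sets rather than approximating them on a grid.

```python
import numpy as np

def cauchy_mf(x, center, spread):
    """Type-1 Cauchy membership function: mu(x) = 1 / (1 + ((x - c) / s)^2)."""
    return 1.0 / (1.0 + ((x - center) / spread) ** 2)

def it2_cauchy_mf(x, center, spread_lower, spread_upper):
    """IT2 Cauchy MF with a fixed center and an uncertain spread:
    the footprint of uncertainty is bounded by two type-1 CMFs."""
    lower = cauchy_mf(x, center, spread_lower)   # smaller spread -> lower bound
    upper = cauchy_mf(x, center, spread_upper)   # larger spread  -> upper bound
    return lower, upper

def mutual_subsethood(mu_a, mu_b):
    """Mutual subsethood E(A, B) = |A intersect B| / |A union B|,
    approximated here on a discretised grid (illustrative only; the
    paper computes it analytically)."""
    intersection = np.minimum(mu_a, mu_b).sum()
    union = np.maximum(mu_a, mu_b).sum()
    return intersection / union

# Illustrative firing strength of one rule with two antecedents:
# the product of the mutual subsethoods between each fuzzified input
# and its corresponding antecedent connection weight.
x_grid = np.linspace(-5.0, 5.0, 1001)
inputs  = [cauchy_mf(x_grid, 0.0, 1.0), cauchy_mf(x_grid, 1.0, 0.8)]
weights = [cauchy_mf(x_grid, 0.5, 1.2), cauchy_mf(x_grid, 0.9, 1.0)]
firing = np.prod([mutual_subsethood(a, w) for a, w in zip(inputs, weights)])
print(f"rule firing strength (approx.): {firing:.4f}")
```

The example uses type-1 CMFs for the firing computation for brevity; in the IT2 case each similarity is an interval bounded by the lower and upper membership functions, and a defuzzification step (volume defuzzification in the paper) maps the fired rules to a crisp output.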

Keywords