Mathematics (Oct 2021)
Optimized Task Group Aggregation-Based Overflow Handling on Fog Computing Environment Using Neural Computing
Abstract
Scheduling resources and jobs on a fog computing network so as to increase device efficiency and throughput, reduce response time, and keep the system load-balanced is a non-deterministic challenge. Using machine learning as a component of neural computing, we developed an improved Task Group Aggregation (TGA)-based overflow handling system for fog computing environments. By using TGA in conjunction with an Artificial Neural Network (ANN), the system assesses QoS characteristics to detect an overloaded server and then migrates its workload to other virtual machines (VMs). Overloaded and underloaded virtual machines are balanced according to parameters such as CPU, memory, and bandwidth to control overflow in the fog computing environment with the help of the ANN and machine learning concepts. Additionally, the Artificial Bee Colony (ABC) algorithm, a swarm-intelligence optimization technique, is employed to group services and users according to their individual characteristics. The newly proposed optimized ANN-based TGA algorithm improves both response time and success rate. Compared to existing work, the overall improvement in average success rate is about 3.6189 percent, and resource scheduling efficiency improves by 3.9832 percent. Virtual machine resource scheduling efficiency, average success rate, average task completion success rate, and virtual machine response time are all improved. The proposed TGA-based overflow handling in the fog computing domain achieves better response times than current approaches, demonstrating how artificial-intelligence-based techniques can make systems such as fog computing more efficient.
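For illustration, the following is a minimal sketch of the overflow-handling idea described above: classifying VM load from CPU, memory, and bandwidth utilization and moving tasks from overloaded to underloaded machines. It is not the authors' implementation; the threshold-based classifier stands in for the paper's ANN, and all names, thresholds, and the migration rule are assumptions made for the example.

```python
# Illustrative sketch only: a simple threshold-based stand-in for the
# ANN-based overload detection described in the abstract. Names,
# thresholds, and the migration rule are assumptions, not the paper's method.
from dataclasses import dataclass, field
from typing import List


@dataclass
class VM:
    name: str
    cpu: float        # CPU utilization, 0..1
    memory: float     # memory utilization, 0..1
    bandwidth: float  # bandwidth utilization, 0..1
    tasks: List[str] = field(default_factory=list)


def load_score(vm: VM) -> float:
    # Aggregate the QoS parameters into a single load score
    # (the paper uses an ANN for this step).
    return (vm.cpu + vm.memory + vm.bandwidth) / 3.0


def classify(vm: VM, high: float = 0.8, low: float = 0.3) -> str:
    # Label the VM as overloaded, underloaded, or balanced.
    s = load_score(vm)
    return "overloaded" if s > high else "underloaded" if s < low else "balanced"


def migrate(vms: List[VM]) -> None:
    # Move one task from each overloaded VM to the least-loaded underloaded VM.
    overloaded = [v for v in vms if classify(v) == "overloaded" and v.tasks]
    underloaded = [v for v in vms if classify(v) == "underloaded"]
    for src in overloaded:
        if not underloaded:
            break
        dst = min(underloaded, key=load_score)
        dst.tasks.append(src.tasks.pop())


if __name__ == "__main__":
    vms = [
        VM("vm1", 0.95, 0.90, 0.85, tasks=["t1", "t2", "t3"]),
        VM("vm2", 0.10, 0.20, 0.15, tasks=["t4"]),
    ]
    migrate(vms)
    for vm in vms:
        print(vm.name, classify(vm), vm.tasks)
```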
Keywords