IEEE Access (Jan 2024)
A Hybrid Task Scheduling Technique in Fog Computing Using Fuzzy Logic and Deep Reinforcement Learning
Abstract
This work presents an innovative method for scheduling tasks in fog computing environments by combining fuzzy logic with deep reinforcement learning. With the growth of the Internet of Things (IoT), the amount of data produced by diverse devices has risen significantly, creating a need for more effective methods of processing and managing this data. Conventional cloud computing often fails to meet the demands of IoT applications for high bandwidth, low makespan, and real-time processing. Fog computing offers a viable solution by placing processing resources near the data source, but efficient task scheduling remains a major obstacle. We propose a Hybrid Task Scheduling technique in Fog computing using Fuzzy logic and Deep Reinforcement Learning (HTSFFDRL) that couples a deep reinforcement learning agent with a Takagi-Sugeno fuzzy inference system. By continuously interacting with the environment, this hybrid technique allows for dynamic prioritization of tasks and real-time adjustment of scheduling rules. The technique seeks to optimize several key performance measures, including makespan, energy consumption, cost, and fault tolerance. Extensive simulations validate the proposed strategy, demonstrating significant improvements over existing approaches such as LSTM, DQN, and A2C. The results indicate that combining fuzzy logic with reinforcement learning can greatly improve the effectiveness and dependability of task scheduling in fog computing, opening up possibilities for more resilient IoT applications.
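To make the abstract's pairing of a Takagi-Sugeno fuzzy system with a learning-based scheduler concrete, the following is a minimal illustrative sketch, not the authors' HTSFFDRL implementation: a zero-order Takagi-Sugeno system scores task priority from two assumed inputs (deadline slack and task size), and that score conditions a simple epsilon-greedy value learner that picks a fog node. The membership functions, rule constants, reward shape, and all identifiers are assumptions for illustration only.

```python
import random


def tri(x, a, b, c):
    """Triangular membership function with peak at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)


def ts_priority(slack, size):
    """Zero-order Takagi-Sugeno inference: weighted average of rule constants."""
    rules = [
        # (firing strength, consequent constant)
        (min(tri(slack, -0.5, 0.0, 0.5), tri(size, 0.5, 1.0, 1.5)), 0.9),  # tight slack, large task -> high
        (min(tri(slack, 0.0, 0.5, 1.0), tri(size, 0.0, 0.5, 1.0)), 0.5),   # moderate slack and size -> medium
        (min(tri(slack, 0.5, 1.0, 1.5), tri(size, -0.5, 0.0, 0.5)), 0.1),  # loose slack, small task -> low
    ]
    num = sum(w * c for w, c in rules)
    den = sum(w for w, _ in rules) or 1e-9
    return num / den


class NodeSelector:
    """Tabular epsilon-greedy learner over (priority bucket, fog node) pairs."""

    def __init__(self, n_nodes, eps=0.1, lr=0.2):
        self.q = {}  # (bucket, node) -> estimated value
        self.n_nodes = n_nodes
        self.eps, self.lr = eps, lr

    def act(self, priority):
        bucket = round(priority, 1)
        if random.random() < self.eps:
            return bucket, random.randrange(self.n_nodes)
        values = [self.q.get((bucket, a), 0.0) for a in range(self.n_nodes)]
        return bucket, values.index(max(values))

    def update(self, bucket, node, reward):
        key = (bucket, node)
        self.q[key] = self.q.get(key, 0.0) + self.lr * (reward - self.q.get(key, 0.0))


if __name__ == "__main__":
    sel = NodeSelector(n_nodes=3)
    for _ in range(2000):
        slack, size = random.random(), random.random()
        p = ts_priority(slack, size)
        bucket, node = sel.act(p)
        # Toy reward: lower-index nodes are "faster" and favoured for high-priority tasks.
        reward = p * (1.0 - node / 3) + random.uniform(-0.05, 0.05)
        sel.update(bucket, node, reward)

    # Report the preferred node learned for each observed priority bucket.
    best = {}
    for (bucket, node), v in sel.q.items():
        if v > best.get(bucket, (None, float("-inf")))[1]:
            best[bucket] = (node, v)
    print({b: n for b, (n, _) in sorted(best.items())})
```

The design choice mirrored here is that the fuzzy stage compresses raw task attributes into an interpretable priority signal, while the learning stage adapts the node-assignment policy from interaction; the paper's method replaces the tabular learner with a deep reinforcement learning agent and richer state, action, and reward definitions.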
Keywords