IEEE Access (Jan 2024)

A Novel Offloading Mechanism Leveraging Fuzzy Logic and Deep Reinforcement Learning to Improve IoT Application Performance in a Three-Layer Architecture Within the Fog-Cloud Environment

  • Dezheen H. Abdulazeez,
  • Shavan K. Askar

DOI
https://doi.org/10.1109/ACCESS.2024.3376670
Journal volume & issue
Vol. 12
pp. 39936–39952

Abstract


This paper presents a novel offloading technique designed to enhance the efficiency of Internet of Things (IoT) applications within a three-layer architecture situated in a fog computing environment. The IoT layer contains various intelligent IoT devices that generate a large number of tasks, each characterized by distinct specifications such as size, computational demand, communication requirements, and latency constraints. Owing to the limited storage and computing capacity of resource-constrained IoT devices, these tasks must be offloaded to other layers to ensure effective processing while satisfying the required Quality of Service (QoS) goals. To address this challenge, a fuzzy logic-based task scheduler is employed to make informed offloading decisions, considering task attributes and determining the most suitable processing layer—locally at the IoT layer, on collaborative fog nodes, or in the cloud. Furthermore, the study leverages the Deep Q Network (DQN) method, a form of deep reinforcement learning, to identify the optimal fog node for offloading tasks and to maintain a balanced workload distribution across collaborative fog nodes. The experimental findings demonstrate that the proposed scheme outperforms the Non-offload, First-Fit, GASDEO, and NAFITO-FLA methods in terms of latency, power consumption, network usage, throughput, and offloading rate.
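To illustrate the kind of fuzzy-logic layer selection the abstract describes, the sketch below fuzzifies two task attributes (size and latency budget) and applies a small rule base to pick a processing layer. The membership-function breakpoints, rule weights, and layer labels are illustrative assumptions, not the authors' actual scheduler design.

```python
def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def choose_layer(task_size_mb, latency_budget_ms):
    """Fuzzy offloading decision: return 'iot', 'fog', or 'cloud'."""
    # Fuzzify task size (MB) into small / medium / large.
    small  = triangular(task_size_mb, -1, 0, 5)
    medium = triangular(task_size_mb, 2, 7, 12)
    large  = triangular(task_size_mb, 8, 20, 1000)

    # Fuzzify latency budget (ms) into tight / relaxed.
    tight   = triangular(latency_budget_ms, -1, 0, 50)
    relaxed = triangular(latency_budget_ms, 20, 200, 10000)

    # Rule base: min() for AND within a rule, max() to aggregate per layer.
    scores = {
        "iot":   min(small, tight),             # small & urgent -> run locally
        "fog":   max(min(medium, tight),        # medium & urgent -> fog
                     min(small, relaxed)),      # small & tolerant -> fog
        "cloud": min(large, relaxed),           # large & tolerant -> cloud
    }
    return max(scores, key=scores.get)
```

In the paper's full scheme, a decision of "fog" would then be refined by the DQN agent, which selects a specific fog node so as to balance load across the collaborative nodes; that learning step is omitted here.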

Keywords