Applied Computational Intelligence and Soft Computing (Jan 2022)

Mayfly Taylor Optimisation-Based Scheduling Algorithm with Deep Reinforcement Learning for Dynamic Scheduling in Fog-Cloud Computing

  • G. Shruthi,
  • Monica R. Mundada,
  • B. J. Sowmya,
  • S. Supreeth

DOI
https://doi.org/10.1155/2022/2131699
Journal volume & issue
Vol. 2022

Abstract


The fog computing domain plays a prominent role in supporting time-sensitive applications associated with smart Internet of Things (IoT) services, such as smart healthcare and smart cities. Although cloud computing is a capable paradigm for IoT data processing, the high latency of the cloud makes it incapable of satisfying the needs of time-sensitive applications. Resource provisioning and allocation in the fog-cloud structure must account for dynamic variations in user requirements, and the limited resources available on fog devices make this even more challenging. The global adoption of IoT-driven applications has led to the rise of the fog computing structure, which enables seamless connectivity between mobile edge and cloud resources. Effective scheduling of application tasks in fog environments is challenging because of resource heterogeneity, stochastic behaviours, network hierarchy, limited resource capabilities, and mobility factors in IoT. Meeting deadlines is the most significant challenge in the fog computing structure owing to dynamic variations in user requirement parameters. In this paper, the Mayfly Taylor Optimisation Algorithm (MTOA) is developed for dynamic scheduling in the fog-cloud computing model. The developed MTOA-based Deep Q-Network (DQN) showed better performance, with energy consumption, service level agreement (SLA), and computation cost of 0.0162, 0.0114, and 0.0855, respectively.
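
To give a concrete feel for the kind of scheduling problem the abstract describes, the sketch below is a minimal, heavily simplified population-search example in Python. It is not the authors' MTOA: the task and node parameters, the cost weights, and the mayfly-style update rule are hypothetical assumptions, and the Taylor-series term and DQN component of the paper are not reproduced here.

```python
# Minimal illustrative sketch (not the paper's implementation): a mayfly-style
# population search over task-to-node assignments in a toy fog-cloud model.
# All node/task parameters, update constants, and cost weights are assumed.
import numpy as np

rng = np.random.default_rng(0)

N_TASKS, N_NODES = 20, 5
task_load = rng.uniform(1.0, 5.0, N_TASKS)     # task size (assumed units)
node_speed = rng.uniform(2.0, 10.0, N_NODES)   # node processing capacity (assumed)
node_energy = rng.uniform(0.5, 2.0, N_NODES)   # energy per unit load (assumed)
deadline = rng.uniform(1.0, 3.0, N_TASKS)      # per-task deadline (assumed)

def cost(assign):
    """Weighted cost combining energy, deadline (SLA) violations, and makespan."""
    finish = np.zeros(N_NODES)
    energy, sla_viol = 0.0, 0
    for t, n in enumerate(assign):
        finish[n] += task_load[t] / node_speed[n]
        energy += task_load[t] * node_energy[n]
        if finish[n] > deadline[t]:
            sla_viol += 1
    # Equal weights are an arbitrary choice for illustration only.
    return energy + sla_viol + finish.max()

def mayfly_search(pop_size=30, iters=100):
    """Keep continuous 'positions' per task, decode to node indices, move toward the best."""
    pos = rng.uniform(0, N_NODES, (pop_size, N_TASKS))
    vel = np.zeros_like(pos)
    best, best_cost = None, np.inf
    for _ in range(iters):
        for i in range(pop_size):
            assign = np.clip(pos[i].astype(int), 0, N_NODES - 1)
            c = cost(assign)
            if c < best_cost:
                best, best_cost = assign.copy(), c
        # Simplified attraction toward the global best (mayfly-style update, assumed form).
        g = best + rng.uniform(-0.5, 0.5, N_TASKS)
        vel = 0.7 * vel + 1.5 * rng.random((pop_size, N_TASKS)) * (g - pos)
        pos = np.clip(pos + vel, 0, N_NODES - 1e-9)
    return best, best_cost

assignment, c = mayfly_search()
print("best assignment:", assignment, "cost:", round(c, 3))
```

The design choice of decoding continuous positions into discrete node indices is a common way to apply swarm-style optimisers to assignment problems; the paper's actual scheduler additionally uses a learned Q-network to guide decisions, which is beyond the scope of this toy example.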