IEEE Access (Jan 2024)

Multi Objective Prioritized Workflow Scheduling Using Deep Reinforcement Based Learning in Cloud Computing

  • Sudheer Mangalampalli,
  • Syed Shakeel Hashmi,
  • Amit Gupta,
  • Ganesh Reddy Karri,
  • K. Varada Rajkumar,
  • Tulika Chakrabarti,
  • Prasun Chakrabarti,
  • Martin Margala

DOI
https://doi.org/10.1109/ACCESS.2024.3350741
Journal volume & issue
Vol. 12
pp. 5373 – 5392

Abstract


Workflow scheduling is a major challenge in the cloud paradigm because large numbers of workflows are generated dynamically from heterogeneous resources, and the task dependencies of each workflow differ from those of the others. If a workflow with many dependencies is scheduled onto an unsuitable Virtual Machine (VM), i.e., one with low processing capacity, workflow execution is delayed, which increases makespan, cost, and energy consumption. To schedule such complex workflows effectively, we propose a novel multi-objective workflow scheduling algorithm based on deep reinforcement learning. First, the priority of each workflow is calculated from its task dependencies; then VM priorities are calculated from the electricity cost at the datacenters, so that workflows can be mapped onto suitable VMs. These priorities are fed to a scheduler that uses a Deep Q-Network (DQN) model to schedule tasks dynamically by considering both task and VM priorities, as illustrated by the sketch below. Extensive simulations were carried out on WorkflowSim using real-world scientific workflows (Montage, CyberShake, Epigenomics, LIGO). The proposed MOPWSDRL was compared against existing state-of-the-art approaches, namely Heterogeneous Earliest Finish Time (HEFT), Cat Swarm Optimization, and Ant Colony Optimization. The results reveal that MOPWSDRL outperforms these algorithms by minimizing makespan and energy consumption.
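The abstract describes the core scheduling loop: task and VM priorities form the state of a DQN agent that selects a VM for each prioritized task. Below is a minimal, hypothetical sketch of such a DQN scheduler in PyTorch; the state features (task priority plus per-VM speed and electricity-cost priority), the reward weights, and the network sizes are illustrative assumptions, not the authors' implementation.

    # Minimal sketch (not the authors' implementation): a DQN agent that maps a
    # prioritized workflow task to one of several VMs. Feature layout, reward
    # weights, and network sizes are illustrative assumptions.
    import random
    import torch
    import torch.nn as nn
    import torch.optim as optim

    NUM_VMS = 4                      # action space: choose one VM per task
    STATE_DIM = 1 + 2 * NUM_VMS      # [task priority] + per-VM [speed, cost priority]

    class QNetwork(nn.Module):
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(STATE_DIM, 64), nn.ReLU(),
                nn.Linear(64, 64), nn.ReLU(),
                nn.Linear(64, NUM_VMS),
            )

        def forward(self, x):
            return self.net(x)

    q_net = QNetwork()
    optimizer = optim.Adam(q_net.parameters(), lr=1e-3)
    gamma, epsilon = 0.99, 0.1

    def select_vm(state):
        """Epsilon-greedy choice of the VM with the highest predicted Q-value."""
        if random.random() < epsilon:
            return random.randrange(NUM_VMS)
        with torch.no_grad():
            return int(q_net(state).argmax())

    def reward(exec_time, energy, w_time=0.5, w_energy=0.5):
        """Illustrative reward: penalize long execution time and high energy use."""
        return -(w_time * exec_time + w_energy * energy)

    def train_step(state, action, r, next_state, done):
        """One temporal-difference update of the Q-network."""
        q_value = q_net(state)[action]
        with torch.no_grad():
            target = torch.tensor(r, dtype=torch.float32)
            if not done:
                target = target + gamma * q_net(next_state).max()
        loss = nn.functional.mse_loss(q_value, target)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # Example: pick a VM for one task (dummy feature values for illustration)
    state = torch.rand(STATE_DIM)
    vm = select_vm(state)

In a complete DQN scheduler, transitions would also be stored in a replay buffer and a periodically updated target network would stabilize training; those standard components are omitted here for brevity.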

Keywords