IEEE Access (Jan 2019)
A Novel Task Provisioning Approach Fusing Reinforcement Learning for Big Data
Abstract
Large-scale task processing for big data using cloud computing has become a hot research topic. Most previous work on task processing directly customizes existing methods, which may lead to longer system response times, high algorithmic complexity, and wasted resources. Motivated by these limitations, and aiming to achieve overall load balancing, bandwidth cost minimization, and energy conservation while satisfying resource requirements, we develop a novel large-scale task processing approach called TOPE (Two-phase Optimization for Parallel Execution). A deep reinforcement learning model is designed for virtual link mapping decisions: we treat the whole network as a multi-agent system and formalize the process of selecting each node's next-hop node as a Markov decision process. The learning agent is trained with a deep neural network that approximates the value function, so that only the parameters of the deep network model need to be stored rather than a vast table of state-action values. Virtual node mapping is achieved by a distributed multi-objective swarm intelligence algorithm, completing our two-phase optimization for task allocation on a Fat-tree topology. Experiments show TOPE's ability to analyze task requests and the infrastructure network, and comparisons with state-of-the-art approaches in a cloud environment convincingly demonstrate its superiority for large-scale task processing.
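To make the next-hop formulation concrete, the sketch below illustrates the general idea described in the abstract: each agent (node) chooses its next hop via a value function approximated by a small neural network, so only the network parameters are stored rather than a full state-action table. The abstract does not specify the state encoding, reward, or architecture, so all names and dimensions here (QNetwork, select_next_hop, td_update, the 8-dimensional state, the 4 candidate hops) are hypothetical illustrations, not the paper's actual design.

```python
import numpy as np

rng = np.random.default_rng(0)

class QNetwork:
    """Tiny one-hidden-layer value approximator Q(s, .) -- illustrative only."""
    def __init__(self, state_dim, n_actions, hidden=32, lr=1e-2):
        self.W1 = rng.normal(0, 0.1, (state_dim, hidden))
        self.b1 = np.zeros(hidden)
        self.W2 = rng.normal(0, 0.1, (hidden, n_actions))
        self.b2 = np.zeros(n_actions)
        self.lr = lr

    def forward(self, s):
        h = np.maximum(0.0, s @ self.W1 + self.b1)   # ReLU hidden layer
        return h, h @ self.W2 + self.b2               # Q-value per candidate next hop

    def td_update(self, s, a, target):
        """One gradient step on the squared TD error for the chosen action a."""
        h, q = self.forward(s)
        err = q[a] - target
        grad_q = np.zeros_like(q); grad_q[a] = err
        dW2 = np.outer(h, grad_q); db2 = grad_q
        dh = (self.W2 @ grad_q) * (h > 0)             # backprop through ReLU
        dW1 = np.outer(s, dh); db1 = dh
        for p, g in ((self.W2, dW2), (self.b2, db2), (self.W1, dW1), (self.b1, db1)):
            p -= self.lr * g

def select_next_hop(qnet, state, epsilon=0.1):
    """Epsilon-greedy choice among candidate next-hop nodes."""
    if rng.random() < epsilon:
        return int(rng.integers(qnet.W2.shape[1]))
    return int(np.argmax(qnet.forward(state)[1]))

# Toy usage: an assumed 8-dimensional link/load state and 4 candidate next hops.
qnet = QNetwork(state_dim=8, n_actions=4)
s = rng.normal(size=8)
a = select_next_hop(qnet, s)
reward, gamma = 1.0, 0.95                             # illustrative reward for a low-cost hop
s_next = rng.normal(size=8)
target = reward + gamma * np.max(qnet.forward(s_next)[1])
qnet.td_update(s, a, target)
```

In this reading, the table of state-action values is replaced by the parameters W1, b1, W2, b2, which is the storage saving the abstract alludes to; the multi-agent aspect would correspond to one such learner per network node.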
Keywords