Journal of Engineering Science and Technology (May 2017)

EFFICIENT SCHEDULING OF DYNAMIC PROGRAMMING ALGORITHMS ON MULTICORE ARCHITECTURES

  • TAUSIF DIWAN
  • S. R. SATHE

Journal volume & issue
Vol. 12, no. 5
pp. 1253 – 1264

Abstract


Dynamic programming is one of the Berkeley 13 dwarfs and is widely used for solving combinatorial and optimization problems, including matrix chain multiplication, longest common subsequence, and the binary (0/1) knapsack problem. Because of the non-uniform inherent dependences in dynamic programming algorithms, the subproblems must be scheduled to the processing cores effectively for optimal utilization of multicore technology. The computational matrix of dynamic programming is divided into three parts: a growing region, a stable region and a shrinking region, depending on whether the number of subproblems increases, remains stable or decreases uniformly from phase to phase. We realize parallel implementations of matrix chain multiplication, longest common subsequence and 0/1 knapsack on the Intel Xeon X5650 and E5-2695 using OpenMP with different scheduling policies and adequate chunk sizes. We conclude that, for the growing or shrinking regions of the dynamic programming parallelization adopted in this article, the guided schedule outperforms the other scheduling schemes, whereas a static or dynamic schedule is better for the stable region. For dynamic programming problems in which all three regions are present, more speedup is achieved by applying a mixed scheduling approach than by applying a single scheduling technique to the entire computation. For LCS, approximately 20% more speedup is achieved with the mixed scheduling technique over the conventional single-scheduling approach on the Intel Xeon E5-2695.
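
To make the mixed-scheduling idea concrete, the following is a minimal OpenMP/C sketch (not the authors' implementation) of an anti-diagonal LCS wavefront that switches the runtime schedule between the three regions: guided while the diagonals grow or shrink, static while their length is stable. The toy input strings, table layout, and the use of omp_set_schedule() together with schedule(runtime) are illustrative assumptions only; the article does not publish code here.

/*
 * Sketch: mixed scheduling for anti-diagonal (wavefront) LCS.
 * Guided schedule in the growing and shrinking regions,
 * static schedule in the stable region.
 */
#include <omp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX2(a, b) ((a) > (b) ? (a) : (b))

/* Compute one anti-diagonal d of the LCS table C, sized (m+1) x (n+1). */
static void lcs_diagonal(int *C, const char *X, const char *Y,
                         int m, int n, int d)
{
    int lo = MAX2(1, d - n);          /* first row index on this diagonal */
    int hi = (d - 1 < m) ? d - 1 : m; /* last row index on this diagonal  */

    /* The schedule is selected by the caller via omp_set_schedule(). */
    #pragma omp parallel for schedule(runtime)
    for (int i = lo; i <= hi; i++) {
        int j = d - i;
        if (X[i - 1] == Y[j - 1])
            C[i * (n + 1) + j] = C[(i - 1) * (n + 1) + (j - 1)] + 1;
        else
            C[i * (n + 1) + j] = MAX2(C[(i - 1) * (n + 1) + j],
                                      C[i * (n + 1) + (j - 1)]);
    }
}

int main(void)
{
    const char *X = "ABCBDAB", *Y = "BDCABA";   /* toy inputs (assumed) */
    int m = (int)strlen(X), n = (int)strlen(Y);
    int *C = calloc((size_t)(m + 1) * (n + 1), sizeof *C);
    int small = (m < n) ? m : n, large = (m < n) ? n : m;
    int d;

    /* Growing region: diagonals get longer, so use guided scheduling. */
    omp_set_schedule(omp_sched_guided, 0);
    for (d = 2; d <= small; d++)
        lcs_diagonal(C, X, Y, m, n, d);

    /* Stable region: diagonal length is constant, so use static scheduling. */
    omp_set_schedule(omp_sched_static, 0);
    for (d = small + 1; d <= large + 1; d++)
        lcs_diagonal(C, X, Y, m, n, d);

    /* Shrinking region: diagonals get shorter again, back to guided. */
    omp_set_schedule(omp_sched_guided, 0);
    for (d = large + 2; d <= m + n; d++)
        lcs_diagonal(C, X, Y, m, n, d);

    printf("LCS length = %d\n", C[m * (n + 1) + n]);
    free(C);
    return 0;
}

Here each anti-diagonal is an independent phase whose cells can be computed in parallel; the schedule(runtime) clause lets one loop body serve all three regions while the chosen policy is switched between phases with omp_set_schedule(). The chunk size argument of 0 requests the implementation default and would be tuned in practice, as the abstract notes.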

Keywords