IEEE Access (Jan 2023)

Parallel FPGA Routers With Lagrange Relaxation

  • Rohit Agrawal,
  • Kapil Ahuja,
  • Dhaarna Maheshwari,
  • Mohd Ubaid Shaikh,
  • Mohamed Bouaziz,
  • Akash Kumar

DOI
https://doi.org/10.1109/ACCESS.2023.3328769
Journal volume & issue
Vol. 11
pp. 121786–121799

Abstract

Routing of nets is one of the most time-consuming steps in the Field Programmable Gate Array (FPGA) design flow. Although Versatile Place and Route (VPR), a commonly used tool for this purpose, routes effectively, it is slow in execution. One way to accelerate this design flow is parallelization. Since VPR is intrinsically sequential, a set of parallel algorithms (ParaLaR and ParaLarPD) has recently been proposed for this purpose. These algorithms formulate the routing process as a Linear Program (LP) and solve it using Lagrange relaxation, an adapted sub-gradient method, and a Steiner tree algorithm. When tested on the MCNC benchmark circuits, with VPR 7.0 used for the underlying packing and placement, both ParaLaR and ParaLarPD outperformed VPR 7.0 for routing, with ParaLarPD being the better of the two. We make three main contributions here. In 2020, a new version of VPR, VPR 8.0, was released. Hence, first, we make ParaLarPD compatible with VPR 8.0 and test it on the MCNC benchmark circuits. Second, we adapt ParaLarPD to the VTR benchmark circuits, which are larger than the MCNC ones, using both VPR 7.0 and VPR 8.0, and perform a thorough evaluation. Third, we improve ParaLarPD further: we design a family of Lagrange heuristics that improve its Lagrange relaxation process. We term the new algorithm ParaLarH and test it on both benchmark suites (MCNC and VTR) with both VPR versions (7.0 and 8.0). In these tests, ParaLarH and ParaLarPD both outperform VPR (7.0 and 8.0), with the following average gains: the minimum channel width improves by 22% and 12%, respectively; the total wire length improves by 45% for both; and the critical path delay improvements for both are almost the same (37% and 35%, respectively).
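The pipeline the abstract describes — relax the routing constraints with Lagrange multipliers, route each net independently under penalized edge costs, then update the multipliers with a sub-gradient step — can be illustrated with a minimal sketch. This is not the paper's implementation: it assumes 2-pin nets (so the Steiner-tree step reduces to a shortest path), unit edge capacities, and a tiny hypothetical routing graph and netlist chosen only for illustration.

```python
import heapq
from collections import defaultdict

def dijkstra(adj, cost, src, dst):
    """Shortest path under the current Lagrangian edge costs.
    Stands in for the Steiner-tree step, which reduces to a
    shortest path when a net has only two pins."""
    dist, prev = {src: 0.0}, {}
    pq = [(0.0, src)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in adj[u]:
            e = frozenset((u, v))
            nd = d + cost[e]
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    path, u = [], dst          # walk predecessors back to the source
    while u != src:
        path.append(frozenset((u, prev[u])))
        u = prev[u]
    return path

def route(edges, nets, capacity=1, step=0.3, iters=20):
    """Lagrangian relaxation of edge-capacity constraints with a
    sub-gradient multiplier update (a sketch of the general scheme)."""
    adj, base = defaultdict(set), {}
    for u, v, w in edges:
        adj[u].add(v); adj[v].add(u)
        base[frozenset((u, v))] = w
    lam = defaultdict(float)       # one multiplier per relaxed constraint
    routes = {}
    for _ in range(iters):
        cost = {e: w + lam[e] for e, w in base.items()}
        usage = defaultdict(int)
        for net in nets:           # per-net subproblems are independent,
            routes[net] = dijkstra(adj, cost, *net)   # hence parallelizable
            for e in routes[net]:
                usage[e] += 1
        for e in base:             # sub-gradient step on the violation,
            lam[e] = max(0.0, lam[e] + step * (usage[e] - capacity))
    return routes, usage           # multipliers stay non-negative

# Toy example: two nets compete for the shared edge B-D (capacity 1).
edges = [("A", "B", 1), ("B", "D", 1), ("A", "C", 1.5),
         ("C", "D", 1.5), ("B", "C", 1)]
nets = [("A", "D"), ("B", "D")]
routes, usage = route(edges, nets)
```

As the multiplier on the congested edge B-D grows, the A-to-D net is priced off it onto the detour through C, after which every edge respects its capacity and the multipliers stop changing.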

Keywords