IEEE Access (Jan 2023)
Learning to Solve Optimization Problems With Hard Linear Constraints
Abstract
Constrained optimization problems appear in a wide variety of challenging real-world applications, where the constraints often capture the physics of the underlying system. Classic methods for solving these problems rely on iterative algorithms that explore the feasible domain in search of the best solution. These iterative methods are often the computational bottleneck in decision-making and adversely impact time-sensitive applications. Recently, neural approximators have shown promise as a replacement for iterative solvers: they can output a near-optimal solution in a single feed-forward pass, providing rapid solutions to optimization problems. However, enforcing constraints through neural networks remains an open challenge. In this paper, we develop a neural approximator that maps the inputs of an optimization problem with hard linear constraints to a feasible solution that is nearly optimal. Our proposed approach consists of five main steps: 1) reducing the original problem to an optimization over a set of independent variables, 2) finding a gauge function that maps the $\ell_{\infty}$-norm unit ball to the feasible set of the reduced problem, 3) learning a neural approximator that maps the optimization inputs to a virtual optimal point in the $\ell_{\infty}$-norm unit ball, 4) applying the gauge mapping to project this virtual optimal point onto the feasible set, and 5) recovering the dependent variables from the independent variables to obtain the solution to the original problem. This sequence of steps guarantees hard feasibility. Unlike current learning-assisted approaches, our method is free of parameter tuning (in contrast to penalty-based methods) and removes iterations altogether. We demonstrate the performance of the proposed method on quadratic programming in the context of optimal power dispatch (critical to the resiliency of the electric grid) and on constrained non-convex optimization in the context of image registration. Our results support our theoretical findings and show superior performance in terms of computational time, optimality, and feasibility of the solution compared to existing approaches.
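As a rough illustration of the gauge-mapping step described above (step 4), the sketch below maps a point from the $\ell_{\infty}$-norm unit ball onto a polytope $\{x : Ax \le b\}$ using the Minkowski gauge of the set shifted by a known interior point. This is a minimal, hedged example, not the authors' implementation: the function name `gauge_map`, the interior point `x0`, and the toy box constraints are illustrative assumptions.

```python
# Hedged sketch of a gauge mapping from the l_inf unit ball onto {x : A x <= b},
# assuming a strictly interior point x0 is available (so b - A @ x0 > 0).
import numpy as np

def gauge_map(z, A, b, x0):
    """Map z (with ||z||_inf <= 1) onto the polytope {x : A x <= b}.

    With C = {v : A v <= b - A x0}, the Minkowski gauge of z is
    gamma_C(z) = max_i (A z)_i / (b - A x0)_i, and the gauge map is
    x = x0 + (||z||_inf / gamma_C(z)) * z, which lands inside C by construction.
    """
    slack = b - A @ x0                      # positive because x0 is interior
    gamma = np.max((A @ z) / slack)         # Minkowski gauge of z w.r.t. C
    gamma = max(gamma, 1e-12)               # guard against z == 0
    scale = np.linalg.norm(z, np.inf) / gamma
    return x0 + scale * z

# Toy usage on the box -1 <= x_i <= 2, with a tanh output standing in for
# the neural approximator's "virtual optimal point" in the l_inf unit ball.
A = np.vstack([np.eye(2), -np.eye(2)])
b = np.array([2.0, 2.0, 1.0, 1.0])
x0 = np.zeros(2)                            # interior point of the box
z = np.tanh(np.array([0.7, -0.3]))          # stand-in for the network output
x = gauge_map(z, A, b, x0)
assert np.all(A @ x <= b + 1e-9)            # hard feasibility holds by construction
```

Because the scaled point has gauge value $\|z\|_{\infty} \le 1$, the mapped point always satisfies the linear constraints, which is the property the paper relies on to guarantee feasibility without penalties or iterations.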
Keywords