IEEE Access (Jan 2024)
Leveraging Transfer Learning in Deep Reinforcement Learning for Solving Combinatorial Optimization Problems Under Uncertainty
Abstract
In recent years, addressing the inherent uncertainties within Combinatorial Optimization Problems (COPs) has revealed the limitations of traditional optimization methods. Although these methods are often effective in deterministic settings, they may lack the flexibility and adaptability needed to navigate the uncertain nature of real-world COPs. Deep Reinforcement Learning (DRL) has emerged as a promising approach for dynamic decision-making within these complex environments. Yet the application of DRL to COPs exposes a key limitation: models generalize poorly across problem instances, requiring extensive retraining and customization for each new variant and incurring notable computational costs and inefficiencies. To address these challenges, this paper introduces a novel framework that combines the adaptability and learning capabilities of DRL with the efficiency of Transfer Learning (TL) and Neural Architecture Search (NAS). The framework leverages knowledge gained from solving one COP to enhance the solving of different but related COPs, eliminating the need to retrain models from scratch for each new problem variant. The framework was evaluated on over 1,500 benchmark instances across 10 stochastic and deterministic variants of the vehicle routing problem. Across extensive experiments, the approach consistently improves solution quality and computational efficiency: on average, it achieves at least a 5% improvement in solution quality and a 20% reduction in CPU time compared to state-of-the-art methods, with some variants showing even more substantial gains. For large-scale instances with over 200 customers, the TL process requires only 10-15% of the time needed to train models from scratch while maintaining solution quality, laying the groundwork for future research in this area.
Keywords