Complex & Intelligent Systems (Nov 2024)
Influence maximization under imbalanced heterogeneous networks via lightweight reinforcement learning with prior knowledge
Abstract
Influence Maximization (IM) is a central problem in complex network analysis, with the primary objective of identifying an optimal seed set of a predetermined size that maximizes the reach of influence propagation. Over time, numerous methodologies have been proposed to address the IM problem. However, one particular class of networks, referred to as Imbalanced Heterogeneous Networks (IHNs), which are widely found in social settings, urban and rural areas, and merchandising, presents challenges in achieving high-quality solutions. In this work, we introduce the Lightweight Reinforcement Learning algorithm with Prior knowledge (LRLP), which leverages the Struc2Vec graph embedding technique, capturing the structural similarity of nodes, to generate vector representations for the nodes in the network. Specifically, LRLP incorporates prior knowledge, based on a group of centrality measures, into the initial experience pool, which accelerates reinforcement learning training and yields better solutions. The node embedding vectors are then fed into a Deep Q-Network (DQN) to carry out the lightweight training process. Experimental evaluations on both synthetic and real networks demonstrate the effectiveness of the LRLP algorithm; notably, the improvement is more pronounced as the scale of the network grows. We also analyze the effect of different graph embedding algorithms and forms of prior knowledge on the algorithm's results. Moreover, we examine several parameters, including the number of seed set selections T, the embedding dimension d, and the network update frequency C. Notably, reducing the number of seed set selections T not only preserves solution quality but also lowers the algorithm's computational cost.
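The sketch below is a minimal, illustrative rendering of the pipeline described in the abstract, not the authors' implementation: node embeddings, a centrality-based prior used to warm-start learning, and a small DQN that ranks candidate seed nodes. The function names (`embed_nodes`, `prior_experience`, `select_seeds`) are hypothetical, a simple centrality-feature embedding stands in for Struc2Vec, and the prior is injected as a supervised warm-start target rather than a full experience pool.

```python
# Hedged sketch of an LRLP-style seed selection loop (assumptions noted above).
import networkx as nx
import numpy as np
import torch
import torch.nn as nn

def embed_nodes(G, d=8):
    """Stand-in structural embedding: per-node centralities tiled to d dims.
    (The paper uses Struc2Vec; this placeholder only keeps the interface.)"""
    deg = nx.degree_centrality(G)
    clo = nx.closeness_centrality(G)
    feats = np.array([[deg[v], clo[v]] for v in G.nodes()], dtype=np.float32)
    reps = int(np.ceil(d / feats.shape[1]))
    return torch.tensor(np.tile(feats, reps)[:, :d])

def prior_experience(G, k):
    """Prior knowledge: top-k nodes under a group of centrality measures."""
    scores = {}
    for cent in (nx.degree_centrality, nx.betweenness_centrality):
        for v, s in cent(G).items():
            scores[v] = scores.get(v, 0.0) + s
    return sorted(scores, key=scores.get, reverse=True)[:k]

class DQN(nn.Module):
    """Tiny Q-network scoring each candidate node from its embedding."""
    def __init__(self, d):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d, 32), nn.ReLU(), nn.Linear(32, 1))
    def forward(self, x):
        return self.net(x).squeeze(-1)

def select_seeds(G, k=5, d=8):
    emb = embed_nodes(G, d)
    q = DQN(d)
    # Warm start: centrality-chosen nodes receive a high pseudo-reward,
    # a simplified proxy for seeding the initial experience pool.
    prior = set(prior_experience(G, k))
    target = torch.tensor([1.0 if v in prior else 0.0 for v in G.nodes()])
    opt = torch.optim.Adam(q.parameters(), lr=1e-2)
    for _ in range(200):  # lightweight training loop
        loss = nn.functional.mse_loss(q(emb), target)
        opt.zero_grad(); loss.backward(); opt.step()
    with torch.no_grad():
        order = torch.argsort(q(emb), descending=True)
    nodes = list(G.nodes())
    return [nodes[i] for i in order[:k].tolist()]

if __name__ == "__main__":
    G = nx.barabasi_albert_graph(200, 3, seed=1)  # synthetic stand-in network
    print("Selected seed set:", select_seeds(G, k=5))
```

In the full method, the warm-start target would be replaced by influence-spread rewards gathered through environment interaction, with the prior-knowledge transitions placed directly in the replay buffer.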
Keywords