IEEE Access (Jan 2023)

Constrained Clustering: General Pairwise and Cardinality Constraints

  • Adel Bibi,
  • Ali Alqahtani,
  • Bernard Ghanem

DOI
https://doi.org/10.1109/ACCESS.2023.3236608
Journal volume & issue
Vol. 11
pp. 5824–5836

Abstract

In this work, we study constrained clustering, where constraints are utilized to guide the clustering process. In existing works, two categories of constraints have been widely explored, namely pairwise and cardinality constraints. Pairwise constraints enforce the cluster labels of two instances to be the same (must-link constraints) or different (cannot-link constraints). Cardinality constraints encourage cluster sizes to satisfy a user-specified distribution. However, most existing constrained clustering models can only utilize one category of constraints at a time. In this paper, we incorporate both categories into a unified clustering model, starting from the integer program formulation of standard K-means. As these two categories provide useful information at different levels, utilizing both of them is expected to allow for better clustering performance. However, the optimization is difficult due to the binary and quadratic constraints in the proposed unified formulation. To alleviate this difficulty, we utilize two techniques: one is to equivalently replace the binary constraints with the intersection of two continuous constraints; the other is to transform the quadratic constraints into bilinear constraints by introducing extra variables. We then derive an equivalent continuous reformulation with simple constraints, which can be efficiently solved by the Alternating Direction Method of Multipliers (ADMM). Extensive experiments on both synthetic and real data demonstrate that: 1) when utilizing a single category of constraint, the proposed model is superior to or competitive with state-of-the-art constrained clustering models, and 2) when utilizing both categories of constraints jointly, the proposed model performs better than with a single category alone. The experimental results show that the proposed method exploits the constraints to improve clustering quality by 2–5% in classical clustering metrics, e.g., the Adjusted Rand Index (ARI), Mirkin's Index (MI), and Huber's Index (HI), outperforming all compared methods across the board. Moreover, we show that our method is robust to initialization.
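To make the kind of unified formulation described in the abstract concrete, the following is a minimal sketch in standard K-means notation; the symbols (data points x_i, centroids c_j, binary assignment matrix Y, target cluster sizes s, must-link set M, cannot-link set C) are illustrative assumptions and may differ from the paper's exact formulation.

\begin{align*}
\min_{Y,\,C}\quad & \sum_{i=1}^{n}\sum_{j=1}^{k} y_{ij}\,\lVert x_i - c_j\rVert_2^2 \\
\text{s.t.}\quad & Y\mathbf{1}_k = \mathbf{1}_n,\quad Y \in \{0,1\}^{n\times k} && \text{(each point gets exactly one label)} \\
& Y^{\top}\mathbf{1}_n = s && \text{(cardinality constraints)} \\
& y_{i:} = y_{j:}\;\;\forall (i,j)\in\mathcal{M} && \text{(must-link)} \\
& y_{i:}^{\top} y_{j:} = 0\;\;\forall (i,j)\in\mathcal{C} && \text{(cannot-link, quadratic in } Y\text{)}
\end{align*}

The cannot-link rows illustrate why the abstract refers to quadratic constraints. One common way to realize "binary constraints as the intersection of two continuous constraints" is the box-sphere identity \(\{0,1\}^{n\times k} = [0,1]^{n\times k} \cap \{Y : \lVert Y - \tfrac{1}{2}\mathbf{1}\rVert_F^2 = \tfrac{nk}{4}\}\), though the specific pair of continuous sets used in the paper may differ.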

Keywords