IEEE Access (Jan 2023)

Multi-Label Classification With Hyperdimensional Representations

  • Rishikanth Chandrasekaran,
  • Fatemeh Asgareinjad,
  • Justin Morris,
  • Tajana Rosing

DOI
https://doi.org/10.1109/ACCESS.2023.3299881
Journal volume & issue
Vol. 11
pp. 108458–108474

Abstract

Hyperdimensional computing (HDC) is a computational paradigm that leverages the mathematical properties of high-dimensional vector spaces to manipulate data as symbolic entities using a set of neurally plausible operations. Although HDC has demonstrated remarkable success in cognitive tasks, its potential in complex applications such as multi-label classification has yet to be explored. In this paper, we introduce three approaches to multi-label classification that strike a balance between computational efficiency and accuracy, chosen according to the complexity of the problem. The first approach, Power Set HD, is a transformation method suited to small-scale multi-label classification with label cardinality less than four and label set size less than ten. The second approach, One-vs-All HD, is another transformation method suitable for slightly more complex tasks with higher label cardinality, providing a better efficiency-accuracy trade-off than Power Set HD. However, because One-vs-All HD scales linearly with the label set and becomes expensive at scale, we propose a novel neural approach, TinyXML HD, for extreme-scale tasks. This method learns hyperdimensional representations by decomposing the learning problem into multiple sub-problems that are solved neurally through gradient-based optimization. Importantly, TinyXML HD fixes the output size of the model to the dimensionality of the hypervector regardless of the label set size, so the model grows only by a small constant when evaluated on datasets with extremely large label spaces. We show that our methods provide a 16-60x speedup on state-of-the-art datasets while maintaining comparable accuracy, and yield models that are 56x smaller on medium-scale tasks and up to 836x smaller on extreme-scale datasets.
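To make the Power Set HD transformation concrete, the sketch below illustrates the idea (this is an illustrative reconstruction, not the authors' implementation): each label combination observed during training is treated as a single class, each class is represented by a hypervector built by bundling (summing) the encoded samples carrying that combination, and prediction returns the label set whose class hypervector is most similar to the encoded query. The random-projection encoder and all names (`PowerSetHD`, `encode`, the dimensionality `D`) are assumptions for illustration only.

```python
# Illustrative sketch of a Power Set HD classifier (assumed design, not the
# paper's code): one class hypervector per observed label combination.
import numpy as np

D = 10_000  # hypervector dimensionality (assumed)

class PowerSetHD:
    def __init__(self, n_features, dim=D, seed=0):
        rng = np.random.default_rng(seed)
        # Random bipolar projection matrix used to encode inputs.
        self.proj = rng.choice([-1.0, 1.0], size=(n_features, dim))
        self.class_hvs = {}  # frozenset(labels) -> bundled hypervector

    def encode(self, x):
        # Random projection followed by sign quantization -> bipolar hypervector.
        return np.sign(x @ self.proj)

    def fit(self, X, label_sets):
        for x, labels in zip(X, label_sets):
            key = frozenset(labels)  # one power-set class per label combination
            hv = self.encode(x)
            # Bundle samples of the same combination by elementwise addition.
            self.class_hvs[key] = self.class_hvs.get(key, 0) + hv

    def predict(self, x):
        hv = self.encode(x)
        def cos(a, b):
            return (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
        # Return the label set whose class hypervector is most similar.
        return max(self.class_hvs, key=lambda k: cos(self.class_hvs[k], hv))

# Usage: 3 labels, a handful of 20-feature samples.
X = np.random.randn(6, 20)
Y = [{0}, {0, 1}, {2}, {0, 1}, {2}, {0}]
clf = PowerSetHD(n_features=20)
clf.fit(X, Y)
print(clf.predict(X[0]))  # e.g. frozenset({0})
```

The sketch also shows why the abstract restricts Power Set HD to small label sets: the number of power-set classes can grow combinatorially with the label set. One-vs-All HD instead keeps one class hypervector per label (linear in the label set), and TinyXML HD avoids even that by fixing the model's output to the hypervector dimensionality.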

Keywords