Scientific Reports (Aug 2024)

Enhancing source code classification effectiveness via prompt learning incorporating knowledge features

  • Yong Ma,
  • Senlin Luo,
  • Yu-Ming Shang,
  • Yifei Zhang,
  • Zhengjun Li

DOI
https://doi.org/10.1038/s41598-024-69402-7
Journal volume & issue
Vol. 14, no. 1
pp. 1–23

Abstract

Researchers have investigated the potential of leveraging pre-trained language models, such as CodeBERT, to enhance source code-related tasks. Previous methodologies have relied on CodeBERT's '[CLS]' token as the embedding representation of the input sequence, necessitating additional neural network layers to strengthen the feature representation and thereby increasing computational cost. These approaches have also failed to fully exploit the comprehensive knowledge inherent in the source code and its associated text, potentially limiting classification efficacy. We propose CodeClassPrompt, a text classification technique that harnesses prompt learning to extract rich knowledge associated with input sequences from pre-trained models, eliminating the need for additional layers and lowering computational costs. By applying an attention mechanism, we synthesize multi-layered knowledge into task-specific features, enhancing classification accuracy. Comprehensive experiments across four distinct source code-related tasks show that CodeClassPrompt achieves competitive performance while significantly reducing computational overhead.
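
The abstract describes two mechanisms: a prompt that elicits knowledge from the pre-trained model in place of the '[CLS]' embedding, and an attention step that fuses per-layer representations into a single task feature. The sketch below illustrates that pipeline with Hugging Face Transformers and PyTorch; the cloze-style template wording, the learnable-query attention, and the use of the mask-token position are illustrative assumptions, not the paper's exact implementation.

```python
import torch
import torch.nn as nn
from transformers import RobertaModel, RobertaTokenizer

# Illustrative sketch of the CodeClassPrompt idea, not the authors' code.
tokenizer = RobertaTokenizer.from_pretrained("microsoft/codebert-base")
encoder = RobertaModel.from_pretrained(
    "microsoft/codebert-base", output_hidden_states=True
)

class LayerAttentionPool(nn.Module):
    """Fuse per-layer features with a learnable attention query (assumed form)."""

    def __init__(self, hidden_size: int = 768):
        super().__init__()
        self.query = nn.Parameter(torch.randn(hidden_size))

    def forward(self, layer_feats: torch.Tensor) -> torch.Tensor:
        # layer_feats: [num_layers, hidden_size]
        scores = torch.softmax(
            layer_feats @ self.query / layer_feats.size(-1) ** 0.5, dim=0
        )
        # Weighted sum over layers -> one task-specific feature vector
        return (scores.unsqueeze(-1) * layer_feats).sum(dim=0)

pool = LayerAttentionPool()

def task_feature(code_snippet: str) -> torch.Tensor:
    # Hypothetical cloze-style template; the paper's template may differ.
    prompt = f"The category of this code is {tokenizer.mask_token} . {code_snippet}"
    inputs = tokenizer(prompt, return_tensors="pt", truncation=True, max_length=512)
    with torch.no_grad():
        out = encoder(**inputs)
    # hidden_states: embedding output plus one tensor per layer, each [1, seq_len, 768]
    states = torch.stack(out.hidden_states, dim=0)  # [L+1, 1, seq_len, 768]
    # Read the representation at the mask position from every layer ...
    mask_pos = (inputs.input_ids == tokenizer.mask_token_id).nonzero()[0, 1]
    layer_feats = states[:, 0, mask_pos, :]  # [L+1, 768]
    # ... and attention-pool them, avoiding extra task layers on top of the encoder.
    return pool(layer_feats)

print(task_feature("def add(a, b): return a + b").shape)  # torch.Size([768])
```

In training, the pooled vector would presumably be scored against label representations (verbalizer-style) or a minimal output projection; the point of the sketch is that attention over the encoder's own layers stands in for the additional task-specific layers the abstract says prior approaches required.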