IEEE Access (Jan 2024)

Discrete Prompt Compression With Reinforcement Learning

  • Hoyoun Jung,
  • Kyung-Joong Kim

DOI
https://doi.org/10.1109/ACCESS.2024.3403426
Journal volume & issue
Vol. 12
pp. 72578–72587

Abstract

Compressed prompts help instruction-tuned language models (LMs) overcome context window limitations and reduce computational costs. Existing methods, which are primarily based on training embeddings, face challenges related to interpretability, a fixed number of embedding tokens, limited reusability across different LMs, and inapplicability when interacting with black-box APIs. This study proposes prompt compression with reinforcement learning (PCRL), a discrete prompt compression method that addresses these issues. PCRL uses a computationally efficient policy network that edits prompts directly. The training approach can be applied flexibly to various types of LMs, including both decoder-only and encoder-decoder architectures, and it requires neither gradient access to the LMs nor labeled data. PCRL achieves an average token-count reduction of 24.6% across various instruction prompts while maintaining sufficient performance. In addition, we demonstrate that the learned policy can be transferred to larger LMs, and through a comprehensive analysis, we explore the importance of individual tokens within the prompts. The source code is available at https://github.com/nenomigami/PromptCompressor.
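To make the idea concrete, the sketch below shows a minimal REINFORCE-style keep/drop policy over prompt tokens, where the reward combines a black-box faithfulness score with the achieved compression ratio. The network architecture, the weighting term alpha, and the helper reward_fn are illustrative assumptions, not the authors' implementation; only the overall scheme (a policy that edits prompts directly, trained without gradient access to the target LM) follows the abstract.

import torch
import torch.nn as nn

class KeepDropPolicy(nn.Module):
    """Scores each prompt token with keep/drop logits.
    Illustrative stand-in for the paper's policy network."""
    def __init__(self, vocab_size: int, d_model: int = 128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, d_model)
        self.encoder = nn.GRU(d_model, d_model, batch_first=True,
                              bidirectional=True)
        self.head = nn.Linear(2 * d_model, 2)  # logits for {drop, keep}

    def forward(self, token_ids: torch.Tensor) -> torch.Tensor:
        h, _ = self.encoder(self.embed(token_ids))
        return self.head(h)  # (batch, seq_len, 2)

def reinforce_step(policy, optimizer, token_ids, reward_fn, alpha=1.0):
    """One REINFORCE update: sample a keep/drop mask per token,
    score the compressed prompt with a black-box reward, and
    ascend the policy gradient. No gradients flow through the LM."""
    logits = policy(token_ids)
    dist = torch.distributions.Categorical(logits=logits)
    actions = dist.sample()                       # 1 = keep, 0 = drop
    log_prob = dist.log_prob(actions).sum(dim=-1)  # (batch,)
    rewards = []
    for ids, acts in zip(token_ids, actions):
        kept = ids[acts.bool()]
        compression = 1.0 - kept.numel() / ids.numel()
        # reward_fn queries the target LM as a black box and returns a
        # faithfulness score (hypothetical helper, e.g. output agreement)
        faithfulness = reward_fn(ids, kept)
        rewards.append(faithfulness + alpha * compression)
    reward = torch.tensor(rewards)
    loss = -(reward * log_prob).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return reward.mean().item()

In practice, reward_fn might compare the target LM's outputs on the original and compressed prompts (for example, via exact match or ROUGE), which is why no gradient access to the LM is needed and why the same trained policy can be reused across different LMs.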

Keywords