Entropy (Apr 2022)

MTS-Stega: Linguistic Steganography Based on Multi-Time-Step

  • Long Yu,
  • Yuliang Lu,
  • Xuehu Yan,
  • Yongqiang Yu

DOI
https://doi.org/10.3390/e24050585
Journal volume & issue
Vol. 24, no. 5, p. 585

Abstract

Generative linguistic steganography encodes candidate words with their conditional probabilities while a language model generates text, then selects the candidate words to output according to the confidential message to be embedded, thereby producing steganographic text. The encoding techniques currently used in generative text steganography fall into two categories: fixed-length coding and variable-length coding. Owing to its simple encoding and decoding and its low computational overhead, fixed-length coding is better suited to resource-constrained environments. However, conventional text steganography selects and outputs a single word at each time step, so the choice is dominated by the confidential information and may fall on words that do not match the statistical distribution of the training text, reducing the quality and concealment of the generated text. In this paper, we retain the decoding advantages of fixed-length coding while addressing these shortcomings of existing steganography methods, and we propose a multi-time-step steganography method that integrates multiple time steps to select words that both carry the secret information and fit the statistical distribution, thereby effectively improving text quality. In the experiments, we use the GPT-2 language model to generate text, and both theoretical analysis and experimental results demonstrate the effectiveness of the proposed scheme.
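
To make the fixed-length-coding baseline concrete, the Python sketch below shows how k secret bits can be embedded per time step by using each k-bit block as an index into the top-2^k most probable next tokens. It assumes the HuggingFace transformers GPT-2 interface; the function name, block size k, and prompt handling are illustrative assumptions, and it implements only the conventional single-time-step baseline, not the multi-time-step selection proposed in the paper.

    # Minimal sketch of fixed-length (block) coding for generative
    # linguistic steganography, assuming HuggingFace transformers GPT-2.
    # Illustrative only; not the paper's MTS-Stega algorithm.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    def embed_bits(secret_bits, prompt, k=2, max_steps=50):
        """Embed k secret bits per time step by using each k-bit block
        as an index into the 2^k most probable next tokens."""
        tok = GPT2Tokenizer.from_pretrained("gpt2")
        model = GPT2LMHeadModel.from_pretrained("gpt2")
        model.eval()
        ids = tok.encode(prompt, return_tensors="pt")
        pos = 0
        with torch.no_grad():
            for _ in range(max_steps):
                if pos >= len(secret_bits):
                    break
                logits = model(ids).logits[0, -1]         # next-token scores
                top = torch.topk(logits, 2 ** k).indices  # 2^k candidate words
                block = secret_bits[pos:pos + k]          # next k message bits
                idx = int("".join(map(str, block)), 2)    # bits -> candidate index
                ids = torch.cat([ids, top[idx].view(1, 1)], dim=1)
                pos += k
        return tok.decode(ids[0])

A receiver holding the same model and prompt recomputes the top-2^k candidates at each step and recovers the bits from the rank of each observed token. The forced single-step choice visible here is exactly what the multi-time-step scheme relaxes: when the bit-indexed candidate fits the distribution poorly, the word is locked in anyway, which is the quality problem the paper targets.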

Keywords