IEEE Access (Jan 2023)

On Designing Low-Risk Honeypots Using Generative Pre-Trained Transformer Models With Curated Inputs

  • Jarrod Ragsdale,
  • Rajendra V. Boppana

DOI
https://doi.org/10.1109/ACCESS.2023.3326104
Journal volume & issue
Vol. 11
pp. 117528–117545

Abstract

Honeypots are deployed as defensive tools within a monitored environment to engage attackers and gather artifacts for developing indicators of compromise. Once deployed, however, these honeypots are rarely updated, so they become obsolete and easier to fingerprint over time. Furthermore, using fully functional computing and networking devices as honeypots presents the risk of an attacker breaking out of the controlled environment. Large-scale text-generation models, commonly referred to as large language models (LLMs), are widely implemented using generative pre-trained transformer (GPT) architectures; they have surged in popularity and have been tuned for a variety of use cases. This paper investigates the use of these models to simulate honeypots that adapt to threat engagement without the risk of unintended breakouts. The investigation finds that the way these models generate output has limitations that can reveal the deception to a dedicated attacker over extended sessions. To overcome this challenge, this paper presents a method of managing a model's inputs and outputs that reduces non-deterministic output and token usage while the model generates text simulating a terminal. An example honeypot is evaluated against Cowrie, a traditional low-risk honeypot, and achieves greater similarity to an actual machine for single commands. Furthermore, in several multi-step attack scenarios, the proposed architecture reduces token usage by up to 77% compared with a baseline that does not manage the model's inputs and outputs. A discussion of the use of LLMs for cyber deception, and of the limitations hindering their broader adoption, indicates that LLMs show promise for cyber deception but require further research before widespread implementation.
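
The core idea sketched in the abstract, curating what reaches the model and replaying prior outputs, can be illustrated with a minimal Python sketch. This is not the paper's implementation: the query_model callable, the prompt wording, and the caching strategy are illustrative assumptions only.

    # Minimal sketch of the curated input/output idea from the abstract:
    # intercept shell commands, replay cached responses for repeated
    # commands (keeping the simulated machine consistent and spending no
    # extra tokens), and forward only novel commands to a text-generation
    # model. `query_model` is a hypothetical placeholder, not a real API.
    from typing import Callable, Dict

    def make_terminal_honeypot(query_model: Callable[[str], str]) -> Callable[[str], str]:
        cache: Dict[str, str] = {}  # command -> previously generated output

        def handle(command: str) -> str:
            command = command.strip()
            if command in cache:
                return cache[command]  # deterministic replay, zero new tokens
            prompt = f"Respond only as a Linux terminal would to: {command}"
            output = query_model(prompt)  # model is queried for novel input only
            cache[command] = output
            return output

        return handle

    # Usage with a stub model; a real deployment would supply an LLM client.
    if __name__ == "__main__":
        shell = make_terminal_honeypot(lambda p: "uid=0(root) gid=0(root) groups=0(root)")
        print(shell("id"))  # generated once
        print(shell("id"))  # identical replay from the cache

Caching per command is one plausible way to curb both non-determinism and token spend within a session; the paper's actual architecture for managing inputs and outputs may differ.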

Keywords