IEEE Access (Jan 2023)

The Development of Intelligent Agents: A Case-Based Reasoning Approach to Achieve Human-Like Peculiarities via Playback of Human Traces

  • Naveed Anwer Butt,
  • Zafar Mahmood,
  • Ghani Ur Rehman,
  • Moustafa M. Nasralla,
  • Muhammad Zubair,
  • Haleem Farman,
  • Sohaib Bin Altaf Khattak

DOI
https://doi.org/10.1109/ACCESS.2023.3274740
Journal volume & issue
Vol. 11
pp. 78693 – 78712

Abstract

Recent advances in the digital gaming industry have provided impressive demonstrations of highly skilful artificial intelligence agents capable of performing complex intelligent behaviours. Additionally, there is a significant increase in demand for intelligent agents that can imitate video game characters and human players to increase the perceived value of engagement, entertainment, and satisfaction. The believability of an artificial agent’s behaviour is usually measured only by its ability in a specific task, yet recent research has shown that ability alone is not enough to identify human-like behaviour. In this work, we propose a case-based reasoning (CBR) approach to develop human-like agents from human gameplay traces, reducing model-based programming effort. The proposed framework builds on case storage, retrieval, and solution methods for demonstrated cases, emphasizing the impact of seven different similarity measures. The goal of this framework is to allow agents to learn from a small number of demonstrations of a given task and immediately generalize to new scenarios of the same task without task-specific development. The performance of the proposed method is evaluated using instrumental measures of accuracy and similarity with multiple loss functions, e.g. by comparing traces left by agents and players. We also developed an automated process to generate a corpus for a simulation case study of the Pac-Man game to validate the proposed model. We provide empirical evidence that CBR systems recognize human player behaviour more accurately than trained models, with an average accuracy of 75%, and are easy to deploy. The believability of play styles between human players and AI agents was measured using two automated methods to validate the results. We show that the high p-values produced by these two methods confirm the believability of our trained agents.
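The case storage and retrieval idea described in the abstract can be sketched in a few lines. This is a minimal illustration only: the state encoding, the `CaseBase` class, and the Euclidean similarity measure are assumptions for demonstration, not the paper's exact design (the paper compares seven similarity measures).

```python
from math import dist

class CaseBase:
    """Stores (state_features, action) pairs extracted from human gameplay traces."""
    def __init__(self):
        self.cases = []  # list of (features, action) tuples

    def add(self, features, action):
        self.cases.append((tuple(features), action))

    def retrieve(self, query, similarity):
        # Return the action of the most similar stored case (nearest-neighbour retrieval).
        best = max(self.cases, key=lambda case: similarity(case[0], query))
        return best[1]

def euclidean_similarity(a, b):
    # One of many possible similarity measures; higher means more similar.
    return 1.0 / (1.0 + dist(a, b))

# Toy Pac-Man-like states: (distance to nearest ghost, distance to nearest pellet)
cb = CaseBase()
cb.add([1.0, 5.0], "flee")  # ghost is close: the human fled
cb.add([9.0, 1.0], "eat")   # pellet is close: the human ate it
print(cb.retrieve([8.5, 1.5], euclidean_similarity))  # prints "eat"
```

Swapping in a different similarity function (cosine, Manhattan, weighted features, etc.) requires no other change, which is why such frameworks can compare several measures on the same case base.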

Keywords