IEEE Access (Jan 2024)

Assessing Performance of Cloud-Based Heterogeneous Chatbot Systems and A Case Study

  • Ganesh Reddy Gunnam
  • Devasena Inupakutika
  • Rahul Mundlamuri
  • Sahak Kaghyan
  • David Akopian

DOI
https://doi.org/10.1109/ACCESS.2024.3397053
Journal volume & issue
Vol. 12
pp. 81631–81645

Abstract

Human-machine digital assistants have recently gained popularity and are now common in question-and-answer applications and similar consumer-support domains. A class of more sophisticated digital assistants (chatbots) that sustain extended dialogs follows this trend. Deploying chatbots in the cloud has become standard practice because of benefits including flexibility, scalability, reliability, security, support for remote work, cost efficiency, and resilience to power outages. However, measuring the performance of cloud-based chatbot systems is challenging because human-machine information exchanges traverse heterogeneous environments such as cloud hosting platforms, information processing units, and several machine-to-machine and human-machine communication channels. This paper investigates methodologies for assessing the performance of such heterogeneous deployments and identifies performance metrics for evaluating cloud-based chatbot deployments. The study measures chatbot performance with both real (human) users and automated (simulated) users. The experimental results cover communication metrics such as response time, throughput, and load-testing behavior (connection loss) through a performance assessment of a case-study deployment that uses an automated-protocol chatbot development framework. The findings presented in this paper can help practitioners better understand the performance characteristics of a cloud-based chatbot and make informed decisions about chatbot development and deployment options.

Keywords