Machine Learning with Applications (Mar 2024)

ChatReview: A ChatGPT-enabled natural language processing framework to study domain-specific user reviews

  • Brittany Ho,
  • Ta’Rhonda Mayberry,
  • Khanh Linh Nguyen,
  • Manohar Dhulipala,
  • Vivek Krishnamani Pallipuram

Journal volume & issue
Vol. 15, p. 100522

Abstract


Intelligent search engines including generative pre-trained transformers (GPT) have revolutionized the user search experience. Several fields including e-commerce, education, and hospitality are increasingly exploring GPT tools to study user reviews and gain critical insights to improve their service quality. However, massive user-review data and imprecise prompt engineering lead to biased, irrelevant, and impersonal search results. In addition, exposing user data to these search engines may pose privacy issues. Motivated by these factors, we present ChatReview, a ChatGPT-enabled natural language processing (NLP) framework that effectively studies domain-specific user reviews to offer relevant and personalized search results at multiple levels of granularity. The framework accomplishes this task using four phases: data collection, tokenization, query construction, and response generation. The data collection phase involves gathering domain-specific user reviews from public and private repositories. In the tokenization phase, ChatReview applies sentiment analysis to extract keywords and categorize them into various sentiment classes. This process creates a token repository that best describes the user sentiments for a given user-review dataset. In the query construction phase, the framework uses the token repository and domain knowledge to construct three types of ChatGPT prompts: explicit, implicit, and creative. In the response generation phase, ChatReview pipelines these prompts into ChatGPT to generate search results at varying levels of granularity. We analyze our framework using three real-world domains: education, local restaurants, and hospitality. We assert that our framework simplifies prompt engineering for general users to produce effective results while minimizing the exposure of sensitive user data to search engines. We also present a one-of-a-kind Large Language Model (LLM) peer assessment of the ChatReview framework. Specifically, we employ Google’s Bard to objectively and qualitatively analyze the various ChatReview outputs. Our Bard-based analyses yield over 90% satisfaction, establishing ChatReview as a viable survey analysis tool.
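
The sketch below illustrates, in broad strokes, the four-phase pipeline the abstract describes. It is not the authors' implementation: it assumes NLTK's VADER analyzer for the sentiment-based tokenization phase and the OpenAI Python client for response generation, and all function names, prompt wording, and the toy reviews are illustrative placeholders.

```python
# Minimal sketch of the four-phase ChatReview-style pipeline (illustrative only).
# Assumptions (not from the paper): NLTK VADER for sentiment analysis, the OpenAI
# Python client for response generation; function and prompt names are hypothetical.
from collections import defaultdict

from nltk.sentiment import SentimentIntensityAnalyzer  # requires nltk.download('vader_lexicon')
from openai import OpenAI


def tokenize_reviews(reviews):
    """Phase 2: extract keywords and bucket them into sentiment classes."""
    analyzer = SentimentIntensityAnalyzer()
    token_repository = defaultdict(set)
    for review in reviews:
        for word in review.split():
            score = analyzer.polarity_scores(word)["compound"]
            if score >= 0.05:
                token_repository["positive"].add(word.lower())
            elif score <= -0.05:
                token_repository["negative"].add(word.lower())
            else:
                token_repository["neutral"].add(word.lower())
    return token_repository


def build_explicit_prompt(domain, token_repository):
    """Phase 3: fold the token repository and domain knowledge into a prompt."""
    return (
        f"You are analyzing {domain} reviews. "
        f"Positive themes: {', '.join(sorted(token_repository['positive']))}. "
        f"Negative themes: {', '.join(sorted(token_repository['negative']))}. "
        "Summarize what customers like and dislike, and suggest improvements."
    )


def generate_response(prompt):
    """Phase 4: pipeline the constructed prompt into the chat model."""
    client = OpenAI()  # reads OPENAI_API_KEY from the environment
    completion = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content


if __name__ == "__main__":
    # Phase 1 (data collection) is stubbed with two toy restaurant reviews.
    reviews = [
        "Delicious food but terrible parking and slow service.",
        "Friendly staff, great ambiance, slightly overpriced menu.",
    ]
    tokens = tokenize_reviews(reviews)
    print(generate_response(build_explicit_prompt("local restaurant", tokens)))
```

Note that only the keyword summary, not the raw reviews, reaches the external model in this sketch, mirroring the abstract's claim of minimizing exposure of sensitive user data.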

Keywords