Journal of Pharmacy and Bioallied Sciences (Apr 2024)

Utilizing Artificial Intelligence Application for Diagnosis of Oral Lesions and Assisting Young Oral Histopathologist in Deriving Diagnosis from Provided Features – A Pilot study

  • Islam Atikul,
  • Banerjee Abhishek,
  • Banerjee Sumita,
  • Shrivastava Deepti,
  • Srivastava Kumar Chandan

DOI
https://doi.org/10.4103/jpbs.jpbs_1287_23
Journal volume & issue
Vol. 16, no. 6
pp. 1136 – 1139

Abstract


Background: The use of AI in healthcare services is advancing every day, with a growing focus on higher cognitive capabilities. Higher cognitive functions in AI entail intricate processes such as decision-making, problem-solving, perception, and reasoning. This advanced cognition goes beyond basic data handling, encompassing the ability to grasp ideas, understand and apply information contextually, and derive novel insights from previous experiences and acquired knowledge. ChatGPT, a natural language processing model, exemplifies this evolution by engaging in conversations with humans and furnishing responses to inquiries.

Objective: We aimed to assess the capability of ChatGPT in resolving doubts pertaining to symptoms and histological features within the subject of oral pathology. The study's objective was to evaluate ChatGPT's effectiveness in answering questions pertaining to diagnoses.

Methods: This cross-sectional study used the AI-based ChatGPT application, which provides free service for research and learning purposes. The then-current version, ChatGPT 3.5, was used to obtain responses to a total of 25 queries. These randomly posed questions covered basic queries from the patient's perspective and from early-career oral histopathologists. The responses were obtained and stored for further processing, then evaluated by five experienced pathologists on a four-point Likert scale. The scores were subsequently used to derive kappa values for reliability.

Results and Statistical Analysis: All 25 queries were answered by the program in the shortest possible time. The sensitivity and specificity of the method and the responses were represented using frequencies and percentages. The responses were analysed and found to be statistically significant based on the kappa values.

Conclusion: ChatGPT's proficiency in handling intricate reasoning queries within pathology demonstrated a noteworthy level of relational accuracy; its text output created coherent links between elements, producing meaningful responses. This suggests that scholars and students can rely on the program to address reasoning-based inquiries. Nevertheless, considering the continual advancements in the program's development, further research is essential to determine the accuracy of future versions.
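
The abstract does not specify which kappa statistic was used or how it was computed. As a minimal illustrative sketch (not the authors' actual analysis), agreement among five raters scoring 25 responses on a four-point Likert scale could be quantified with Fleiss' kappa, assuming a hypothetical ratings matrix:

import numpy as np
from statsmodels.stats.inter_rater import aggregate_raters, fleiss_kappa

# Hypothetical data: rows = 25 ChatGPT responses, columns = 5 pathologists,
# each cell a Likert score in {1, 2, 3, 4}. Replace with the real ratings.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 5, size=(25, 5))

# Convert raw ratings to a (subjects x categories) count table, then compute
# Fleiss' kappa, which measures chance-corrected agreement among more than two raters.
counts, _ = aggregate_raters(ratings)
print(f"Fleiss' kappa: {fleiss_kappa(counts, method='fleiss'):.3f}")

Values near 1 would indicate strong agreement among the pathologists; values near 0 would indicate agreement no better than chance.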
