IEEE Access (Jan 2024)

Leveraging Local LLMs for Secure In-System Task Automation With Prompt-Based Agent Classification

  • Suthir Sriram,
  • C. H. Karthikeya,
  • K. P. Kishore Kumar,
  • Nivethitha Vijayaraj,
  • Thangavel Murugan

DOI
https://doi.org/10.1109/ACCESS.2024.3505298
Journal volume & issue
Vol. 12
pp. 177038–177049

Abstract

Recent progress in artificial intelligence has led to the creation of powerful large language models (LLMs). While these models show promise for improving personal computing experiences, concerns about data privacy and security have hindered their integration with sensitive personal information. In this study, a new framework is proposed that merges LLMs with personal file systems, enabling intelligent data interaction while maintaining strict privacy safeguards. The methodology uses an LLM-based classification agent that applies designated tags to incoming tasks before routing them to specific LLM modules. Each module serves a dedicated function, such as file search, document summarization, code interpretation, or general tasks, and all processing happens locally on the user’s device. Findings indicate high accuracy across agents: the classification agent achieved an accuracy of 86%, and document summarization reached a BERTScore of 0.9243. A key feature of this framework is its modular design, which allows future extension by integrating new task-specific modules as required. The findings suggest that integrating local LLMs can significantly improve interactions with file systems without compromising data privacy.
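The abstract describes a tag-and-route architecture: a classification agent labels each incoming request, and the label selects a task-specific local module. A minimal sketch of that idea is shown below; it is not the authors' implementation, and the tag set (SEARCH, SUMMARIZE, CODE, GENERAL) and the query_local_llm placeholder are illustrative assumptions standing in for whatever prompts and local LLM runtime the paper actually uses.

    # Illustrative sketch of prompt-based agent classification and routing.
    # Tag names and query_local_llm() are hypothetical placeholders; they do not
    # reproduce the paper's actual prompts, models, or module implementations.

    TAGS = ("SEARCH", "SUMMARIZE", "CODE", "GENERAL")

    CLASSIFY_PROMPT = (
        "Classify the user request into exactly one tag from "
        f"{', '.join(TAGS)}. Reply with the tag only.\n\nRequest: {{request}}"
    )

    def query_local_llm(prompt: str) -> str:
        """Placeholder for a call to an LLM running entirely on the local device."""
        raise NotImplementedError("Connect this to your local LLM runtime.")

    def handle_search(request: str) -> str:
        return query_local_llm(f"Find files relevant to: {request}")

    def handle_summarize(request: str) -> str:
        return query_local_llm(f"Summarize the requested document: {request}")

    def handle_code(request: str) -> str:
        return query_local_llm(f"Interpret or explain this code task: {request}")

    def handle_general(request: str) -> str:
        return query_local_llm(request)

    # The classification agent's tag decides which task-specific module runs;
    # every step stays on the user's machine, so no data leaves the device.
    MODULES = {
        "SEARCH": handle_search,
        "SUMMARIZE": handle_summarize,
        "CODE": handle_code,
        "GENERAL": handle_general,
    }

    def route(request: str) -> str:
        tag = query_local_llm(CLASSIFY_PROMPT.format(request=request)).strip().upper()
        handler = MODULES.get(tag, handle_general)  # fall back to the general module
        return handler(request)

Under this sketch, adding a new capability amounts to registering one more handler in MODULES and extending the classification prompt's tag list, which mirrors the modular extensibility claimed in the abstract.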

Keywords