Paladyn (Apr 2024)
Deep trained features extraction and dense layer classification of sensitive and normal documents for robotic vision-based segregation
Abstract
The digitization of important documents and their automatic segregation can save considerable time, giving individuals easier access to the documents they need for routine tasks and longer-term endeavours. In recent years, research into the application of deep networks in robotic systems has grown, driven by the advances made in classification algorithms over the past few decades. Robotic vision automation for separating sensitive from non-sensitive documents is needed to address a range of security concerns. The methodology of this article first focuses on identifying an effective computer vision-based technique for classifying sensitive documents against non-sensitive ones. The authors first measured the standard metrics of reliability, loss, precision, and recall using deep learning techniques such as convolutional neural networks and transfer learning (TL) algorithms. Feature extraction based on pre-trained deep learning models has been reported in numerous publications; following this line of work, we applied several such techniques to extract features from the document images. These features were then classified with machine learning and ensemble learning models. The combination of pre-trained-model feature extraction and machine learning classification performed better than the end-to-end deep learning and TL procedures. The better-performing technique was then applied as the brain behind the vision of a robotic structure that automates the segregation of sensitive documents from non-sensitive ones. The proposed robotic structure could be applied wherever a specific or classified document has to be found in a haystack of unsorted documents.
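As a rough illustration of the pipeline summarized above, the sketch below extracts fixed features from a pre-trained convolutional backbone and classifies them with a classical machine learning model. The specific choices here (a VGG16 backbone, an SVM classifier, and the helper names extract_features and train_and_report) are assumptions for illustration only; the article itself compares several pre-trained models and machine/ensemble learners.

```python
# Minimal sketch of the described pipeline: deep features from a pre-trained CNN
# are fed to a classical machine learning classifier. VGG16 and SVM are illustrative
# assumptions, not the specific models evaluated in the article.
import numpy as np
from tensorflow.keras.applications.vgg16 import VGG16, preprocess_input
from tensorflow.keras.preprocessing import image
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

# Pre-trained backbone used as a fixed feature extractor (no fine-tuning).
backbone = VGG16(weights="imagenet", include_top=False, pooling="avg")

def extract_features(image_paths):
    """Return one deep feature vector per document image."""
    batch = np.stack([
        image.img_to_array(image.load_img(p, target_size=(224, 224)))
        for p in image_paths
    ])
    return backbone.predict(preprocess_input(batch), verbose=0)

def train_and_report(image_paths, labels):
    """Train an SVM on the extracted deep features and print precision/recall."""
    X = extract_features(image_paths)
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, labels, test_size=0.2, stratify=labels, random_state=0
    )
    clf = SVC(kernel="rbf").fit(X_tr, y_tr)
    print(classification_report(
        y_te, clf.predict(X_te), target_names=["non-sensitive", "sensitive"]
    ))
```

In a sketch of this kind, the backbone is frozen and only the lightweight classifier is trained, which is what distinguishes the feature-extraction approach from the end-to-end deep learning and TL baselines mentioned in the abstract.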
Keywords