Applied Sciences (Apr 2024)

Enhancing Security in Industrial Application Development: Case Study on Self-Generating Artificial Intelligence Tools

  • Tomás de J. Mateo Sanguino

DOI
https://doi.org/10.3390/app14093780
Journal volume & issue
Vol. 14, no. 9
p. 3780

Abstract


The emergence of security vulnerabilities and risks in software development assisted by self-generating tools, particularly the generation of code that lacks due consideration of security measures, could have significant consequences for industry and its organizations. This manuscript aims to demonstrate, through a case study, how such self-generative vulnerabilities manifest in software programming. To this end, this work follows a methodology that illustrates a practical example of a vulnerability in code generated by an AI model such as ChatGPT, covering the creation of a web application database, SQL queries, and server-side PHP code. At the same time, the experimentation details a step-by-step SQL injection attack, highlighting the attacker's actions to exploit the vulnerability in the website's database structure through iterative testing and execution of SQL commands to gain access to sensitive data. Recommendations on effective prevention strategies include training programs, error analysis, a responsible attitude, the integration of tools and audits into software development, and collaboration with third parties. As a result, this manuscript discusses compliance with regulatory frameworks such as GDPR and HIPAA, along with the adoption of standards such as ISO/IEC 27002 or ISA/IEC 62443, for industrial applications. Such measures lead to the conclusion that incorporating secure coding standards and guidelines from organizations such as OWASP and CERT, along with training programs, further strengthens defenses against vulnerabilities introduced by AI-generated code and novice programming errors, ultimately improving overall security and regulatory compliance.
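For illustration only (not the paper's actual case-study code), the following minimal PHP sketch shows the class of flaw the abstract describes: a login query assembled by string concatenation, which input such as ' OR '1'='1 can subvert, next to the parameterized alternative recommended by OWASP. The table and column names (users, username, password) and the connection details are assumptions.

```php
<?php
// Illustrative sketch; database name, credentials, and schema are hypothetical.
$pdo = new PDO('mysql:host=localhost;dbname=appdb', 'appuser', 'secret');

// VULNERABLE: user input is concatenated directly into the SQL string,
// so crafted input such as  ' OR '1'='1  rewrites the query's logic.
function loginInsecure(PDO $pdo, string $user, string $pass): bool {
    $sql = "SELECT id FROM users WHERE username = '$user' AND password = '$pass'";
    return (bool) $pdo->query($sql)->fetch();
}

// SAFER: a prepared statement keeps data separate from the SQL structure,
// the basic SQL injection mitigation promoted by OWASP guidelines.
function loginSecure(PDO $pdo, string $user, string $pass): bool {
    $stmt = $pdo->prepare('SELECT id FROM users WHERE username = ? AND password = ?');
    $stmt->execute([$user, $pass]);
    return (bool) $stmt->fetch();
}
```

The contrast between the two functions captures the paper's central concern: AI-generated code often resembles the first version unless security requirements are stated explicitly and the output is audited.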

Keywords