Journal of Advanced Artificial Intelligence
Foundation of Computer Science (FCS), NY, USA
Volume 2 - Number 2
Year of Publication: 2025
Authors: Rajeshkumar Golani, Bhooshan Ravikumar Gadkari
Rajeshkumar Golani and Bhooshan Ravikumar Gadkari. Securing LLM-Integrated Critical Infrastructure: A Technical Framework for Industrial Control Systems and IoT. Journal of Advanced Artificial Intelligence 2, 2 (Sep 2025), 11-17. DOI=10.5120/jaai202445
The integration of Large Language Models into critical infrastructure systems creates unprecedented security challenges that extend beyond traditional cybersecurity paradigms. Contemporary industrial environments face emerging threats in which linguistic manipulation can directly trigger physical consequences through prompt-to-physical attack vectors. The convergence of Information Technology, Operational Technology, and Artificial Intelligence creates complex attack surfaces against which conventional security frameworks prove inadequate. Hallucination-induced failures and data poisoning attacks represent particularly insidious threats that can compromise industrial operations through gradual behavioral modification. The probabilistic nature of LLM outputs introduces fundamental uncertainty into deterministic control systems, necessitating specialized defensive architectures. AI-aware segmentation strategies provide essential isolation boundaries while maintaining operational connectivity through controlled communication channels. Human-in-the-loop governance mechanisms serve as critical safety barriers, requiring explicit validation before AI-generated commands affecting physical systems are executed. Comprehensive output verification systems employ formal methods to validate AI recommendations against predetermined safety constraints. Independent redundant safety systems operate without AI dependencies, ensuring continued operation during system failures or compromises. Digital twin environments enable safe evaluation of defensive mechanisms without exposing operational infrastructure to potential harm. Contemporary risk assessment frameworks require specialized metrics that capture AI-specific failure modes, including attack success rates and safety violation frequencies.
The article presents a comprehensive framework addressing the unique vulnerabilities of LLM-enabled industrial systems while proposing resilient architectures for safe AI deployment in critical infrastructure environments.
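To make the combination of output verification and human-in-the-loop governance described in the abstract concrete, the following is a minimal illustrative sketch of such a validation gate. All names, parameters, and limits (e.g. `SafetyConstraint`, `boiler_temp_c`) are hypothetical and do not appear in the article; the article's framework is not reproduced here.

```python
from dataclasses import dataclass

# Hypothetical safety envelope for one controlled parameter; the
# names and limits below are illustrative, not from the article.
@dataclass
class SafetyConstraint:
    parameter: str
    min_value: float
    max_value: float

def validate_llm_command(parameter: str, value: float,
                         constraints: dict,
                         operator_approved: bool) -> bool:
    """Gate an AI-generated setpoint command.

    Rejects anything outside the predetermined safety envelope,
    and additionally requires explicit human sign-off
    (human-in-the-loop) before a physical action is allowed.
    """
    constraint = constraints.get(parameter)
    if constraint is None:
        return False  # unknown parameter: fail closed
    if not (constraint.min_value <= value <= constraint.max_value):
        return False  # value violates the safety envelope
    return operator_approved  # no human approval, no execution

# Example envelope: boiler temperature must stay within 20-90 C.
CONSTRAINTS = {
    "boiler_temp_c": SafetyConstraint("boiler_temp_c", 20.0, 90.0),
}
```

The gate fails closed: an unknown parameter, an out-of-range value, or a missing operator approval each independently blocks execution, mirroring the layered defenses the abstract outlines.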