From Detection to Explanation: Using LLMs for Adversarial Scenario Analysis in Vehicles

Published in The 3rd USENIX Symposium on Vehicle Security and Privacy (VehicleSec '25), 2025

We propose a framework that leverages Large Language Models (LLMs) for adversarial scenario analysis in autonomous vehicles (AVs), generating interpretable explanations for anomalies and bridging the gap between detection and semantic understanding.

To address the limitations of traditional deep neural networks (DNNs) in robustness and interpretability, we introduce a zero-shot chain-of-thought (CoT) reasoning system that uses a domain-specific language (DSL) and incorporates formal traffic knowledge from the Manual on Uniform Traffic Control Devices (MUTCD).
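As a rough illustration of this style of prompting, the sketch below assembles a zero-shot CoT prompt from a DSL-encoded scene and a traffic rule. The field names, example values, and prompt wording are illustrative assumptions, not the paper's actual DSL or prompt template.

```python
# Hypothetical sketch of zero-shot chain-of-thought (CoT) prompting for
# adversarial scenario analysis. The DSL fields and rule text here are
# illustrative assumptions, not the paper's actual domain-specific language.

# A driving scenario encoded in a toy DSL-like structure.
scene = {
    "ego_speed_mph": 35,
    "detected_objects": ["stop_sign (confidence=0.41)", "pedestrian"],
    "anomaly_flag": "low-confidence stop sign detection",
}

def build_cot_prompt(scene: dict) -> str:
    """Assemble a zero-shot CoT prompt: DSL scene plus formal rule context."""
    dsl_lines = "\n".join(f"{k}: {v}" for k, v in scene.items())
    return (
        "You are an autonomous-vehicle safety analyst.\n"
        "Scenario (DSL):\n"
        f"{dsl_lines}\n"
        "Relevant rule (MUTCD): vehicles must come to a complete stop "
        "at a stop sign.\n"
        "Think step by step: is this scenario benign or adversarial, and why?"
    )

prompt = build_cot_prompt(scene)
```

Because the prompt is zero-shot, no worked examples are included; the model is steered only by the structured scene description, the rule context, and the step-by-step instruction.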

We introduce AutoSec-X, a dataset of 40 driving scenarios (benign and adversarial); evaluate zero-shot CoT prompting with LLMs (e.g., Gemini, LLaMA, Qwen); and benchmark performance using BLEU, ROUGE, and SBERT. Our results show that Gemini 1.5 Pro delivers the strongest semantic reasoning and rule-based interpretation of adversarial conditions among the models evaluated.
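To make the benchmarking concrete, the sketch below implements a minimal sentence-level BLEU-style score from scratch, comparing a generated explanation against a reference. It is a simplified stand-in for the standard metric implementations, and the example sentences are made up for illustration, not drawn from AutoSec-X.

```python
# Minimal BLEU-style n-gram overlap score (uniform weights, brevity penalty)
# for comparing an LLM-generated explanation to a reference explanation.
# Simplified sketch; real evaluations would use standard BLEU/ROUGE/SBERT
# implementations. Example sentences are illustrative only.
from collections import Counter
import math

def ngrams(tokens, n):
    """Count the n-grams of a token list."""
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(candidate: str, reference: str, max_n: int = 2) -> float:
    """Sentence-level BLEU with uniform n-gram weights up to max_n."""
    cand, ref = candidate.lower().split(), reference.lower().split()
    log_precisions = []
    for n in range(1, max_n + 1):
        cand_ng, ref_ng = ngrams(cand, n), ngrams(ref, n)
        overlap = sum((cand_ng & ref_ng).values())  # clipped n-gram matches
        total = max(sum(cand_ng.values()), 1)
        log_precisions.append(math.log(max(overlap, 1e-9) / total))
    # Brevity penalty discourages overly short candidates.
    bp = 1.0 if len(cand) > len(ref) else math.exp(1 - len(ref) / max(len(cand), 1))
    return bp * math.exp(sum(log_precisions) / max_n)

reference = "the stop sign was occluded by an adversarial patch"
candidate = "the stop sign was covered by an adversarial sticker"
score = bleu(candidate, reference)
```

Surface-overlap metrics like this reward exact wording, which is why SBERT embedding similarity is also reported: it credits explanations that are semantically equivalent but phrased differently.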

👉 Read the full paper (PDF)