As Controller Area Network (CAN) Intrusion Detection Systems (IDS) become increasingly
sophisticated and incorporate deep learning architectures, adversarial attacks have evolved correspondingly
over the past decade. To assess the vulnerability of deep learning-based CAN IDS to evasion attacks, we
developed a Double Deep Q-Network (DDQN) agent that crafts adversarial examples from two critical
attack types: Denial of Service (DoS) and Fuzzing attacks. Our methodology employs reinforcement
learning to systematically discover perturbation strategies that can evade detection while maintaining
attack functionality.
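
To make the DDQN component concrete, the sketch below shows a generic Double DQN update loop in PyTorch. The CAN-specific elements are illustrative assumptions rather than the paper's actual design: the state encoding of CAN traffic (STATE_DIM), the discrete set of perturbation actions (N_ACTIONS), and a reward that is positive only when the perturbed attack frame evades the target IDS while remaining functional are all hypothetical placeholders.

```python
# Minimal Double DQN sketch (PyTorch). CAN-specific state encoding, action set,
# and reward shaping are hypothetical placeholders, not the paper's design.
import random
from collections import deque

import torch
import torch.nn as nn

STATE_DIM = 16   # assumed length of the encoded CAN-frame window
N_ACTIONS = 8    # assumed number of discrete perturbation actions (ID, timing, payload bytes, ...)


class QNet(nn.Module):
    """Small MLP mapping an encoded CAN state to Q-values over perturbation actions."""

    def __init__(self, state_dim: int, n_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 128), nn.ReLU(),
            nn.Linear(128, 128), nn.ReLU(),
            nn.Linear(128, n_actions),
        )

    def forward(self, x):
        return self.net(x)


online, target = QNet(STATE_DIM, N_ACTIONS), QNet(STATE_DIM, N_ACTIONS)
target.load_state_dict(online.state_dict())   # target net periodically synced with online net
opt = torch.optim.Adam(online.parameters(), lr=1e-3)
replay = deque(maxlen=50_000)                 # replay buffer of (s, a, r, s', done) tuples
gamma, eps = 0.99, 0.1


def act(state: torch.Tensor) -> int:
    """Epsilon-greedy choice of a perturbation action for the current state."""
    if random.random() < eps:
        return random.randrange(N_ACTIONS)
    with torch.no_grad():
        return online(state.unsqueeze(0)).argmax(dim=1).item()


def train_step(batch_size: int = 64) -> None:
    """One Double-DQN update: the online net selects the next action,
    the target net evaluates it, decoupling selection from evaluation."""
    if len(replay) < batch_size:
        return
    batch = random.sample(replay, batch_size)
    s = torch.stack([b[0] for b in batch])
    a = torch.tensor([b[1] for b in batch])
    r = torch.tensor([b[2] for b in batch], dtype=torch.float32)
    s2 = torch.stack([b[3] for b in batch])
    done = torch.tensor([b[4] for b in batch], dtype=torch.float32)

    q = online(s).gather(1, a.unsqueeze(1)).squeeze(1)
    with torch.no_grad():
        best_next = online(s2).argmax(dim=1, keepdim=True)    # action selection: online net
        q_next = target(s2).gather(1, best_next).squeeze(1)   # action evaluation: target net
        y = r + gamma * (1.0 - done) * q_next

    loss = nn.functional.smooth_l1_loss(q, y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In this sketch the reward signal would come from querying the deep learning-based IDS on the perturbed frame, and an episode would end once the frame is misclassified as benign or a perturbation budget is exhausted; both choices are stated assumptions for illustration only.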