Title: Advanced Defense Framework against Physical Adversarial Camouflage via Continual Adversarial Training
Abstract:
Physical adversarial camouflage has emerged as a significant threat to computer vision models: full-surface adversarial patterns applied to target objects can deceive object detectors from any viewpoint. Despite the urgency, effective countermeasures have yet to be proposed. This dissertation introduces a new method, termed continual adversarial training, tailored to defending against physical adversarial camouflage. Traditional adversarial training retrains the model on adversarial examples so that it learns to identify them. However, because adversarial camouflage typically targets specific classes, such as vehicles, conducting adversarial training exclusively on data from the attacked classes can lead to catastrophic forgetting, wherein the model loses previously learned knowledge about other classes. To mitigate this, our method combines knowledge distillation-based continual learning with adversarial training, addressing catastrophic forgetting while enhancing robustness against adversarial camouflage. The framework further supports selective adversarial training on specific classes, making it particularly effective against adversarial camouflage. We additionally improve performance by optimizing the loss term in continual adversarial training and employing an iterative, dynamic adversarial training framework. Extensive experiments demonstrate robust applicability across diverse object detection models.
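
To make the core idea concrete, the sketch below shows one training step that couples class-selective adversarial training with a knowledge-distillation term against a frozen copy of the original model, so robustness is gained on the attacked classes without forgetting the others. It is a minimal illustration under several stated assumptions: a classification-style head stands in for a full object detector, PGD is used as a generic adversarial-example generator in place of physical camouflage, and the names pgd_attack, continual_adv_step, targeted_mask, temp, and lambda_kd are hypothetical, not taken from the dissertation.

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
        # Standard L-infinity PGD; a stand-in for physical camouflage,
        # which in the dissertation is a full-surface pattern instead.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = (x + (x_adv - x).clamp(-eps, eps)).clamp(0, 1).detach()
        return x_adv

    def continual_adv_step(student, teacher, x, y, targeted_mask,
                           temp=2.0, lambda_kd=1.0):
        # Craft adversarial examples only for samples of the attacked
        # classes (e.g., vehicles); the remaining samples stay clean.
        x_train = x.clone()
        if targeted_mask.any():
            x_train[targeted_mask] = pgd_attack(
                student, x[targeted_mask], y[targeted_mask])

        logits = student(x_train)
        adv_loss = F.cross_entropy(logits, y)  # robustness term

        # Distillation term: match the frozen teacher's softened outputs
        # so the student retains knowledge of classes that are absent
        # from the adversarial portion of training.
        with torch.no_grad():
            teacher_logits = teacher(x_train)
        kd_loss = F.kl_div(F.log_softmax(logits / temp, dim=1),
                           F.softmax(teacher_logits / temp, dim=1),
                           reduction="batchmean") * temp ** 2

        return adv_loss + lambda_kd * kd_loss

In an actual detector setting, the cross-entropy would be replaced by the detector's own detection loss and the teacher would be the pre-defense detector; the iterative, dynamic framework mentioned in the abstract would repeat this step while regenerating adversarial examples against the evolving student.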