VLM-C4L: Continual Core Dataset Learning with Corner Case Optimization via Vision-Language Models for Autonomous Driving

Anonymous authors

Continual Update of the AD Model under the VLM-C4L Framework


Continual Update of Corner Case Dataset under the VLM-C4L Framework

Abstract

With the widespread adoption and deployment of autonomous driving, handling complex environments has become an unavoidable challenge. Due to the scarcity and diversity of extreme scenario datasets, current autonomous driving models struggle to handle corner cases effectively. This limitation poses a significant safety risk: according to the National Highway Traffic Safety Administration (NHTSA), autonomous vehicle systems have been involved in hundreds of reported crashes annually in the United States, some of which occurred in corner cases such as sun glare and fog, including a few fatal accidents [tesla2024]. Furthermore, to maintain a robust and reliable autonomous driving system over time, models must not only perform well on routine scenarios but also adapt to newly emerging ones, especially corner cases that deviate from the norm. This requires a learning mechanism that incrementally integrates new knowledge without degrading previously acquired capabilities. However, to the best of our knowledge, no existing continual learning method ensures consistent and scalable corner case learning in autonomous driving. To address these limitations, we propose VLM-C4L, a continual learning framework that introduces Vision-Language Models (VLMs) to dynamically optimize and enhance corner case datasets. VLM-C4L combines VLM-guided high-quality data extraction with a core data replay strategy, enabling the model to incrementally learn from diverse corner cases while preserving performance on previously learned routine scenarios, thereby ensuring long-term stability and adaptability in real-world autonomous driving. We evaluate VLM-C4L on large-scale real-world autonomous driving datasets, including Waymo and the corner case dataset CODA. To assess the effectiveness of our approach, we employ Sparse R-CNN, the strongest model on the CODA benchmark, and Cascade-DETR, a widely recognized model.
Experimental results demonstrate that VLM-C4L significantly enhances object detection performance in complex traffic scenarios, such as light pollution and foggy conditions, with AP and AR scores nearly matching those in regular scenarios.
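The update cycle described above can be sketched as follows. This is a minimal, hypothetical illustration of the two mechanisms named in the abstract (VLM-guided extraction of high-quality corner cases, plus replay of a small core set), assuming a scalar VLM quality score per frame; the function names, threshold, and data layout are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch of one VLM-C4L-style continual update round:
# a VLM scores candidate corner-case frames, high-quality ones are
# extracted, and a core replay set from earlier stages is mixed into
# fine-tuning to limit forgetting of routine scenarios.

def vlm_score(frame):
    # Stand-in for a VLM rating how clearly a frame depicts a corner
    # case (e.g. fog, sun glare); returns a score in [0, 1].
    return frame["quality"]

def continual_update(model, new_frames, core_set, tau=0.4, core_keep=2):
    # 1. VLM-guided extraction: keep only high-quality corner cases.
    selected = [f for f in new_frames if vlm_score(f) >= tau]
    # 2. Replay: train on the new corner cases mixed with the core set.
    train_batch = selected + core_set
    model["seen"].extend(f["id"] for f in train_batch)
    # 3. Refresh the core set with the best new samples.
    best = sorted(selected, key=vlm_score, reverse=True)[:core_keep]
    return core_set + best

model = {"seen": []}
core = [{"id": "waymo_001", "quality": 0.9}]
fog_frames = [
    {"id": "fog_01", "quality": 0.8},
    {"id": "fog_02", "quality": 0.2},  # low quality, filtered out
]
core = continual_update(model, fog_frames, core)
print(len(core))  # -> 2: the one high-quality fog frame joins the core set
```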

Datasets

Foggy Datasets via VLM

Light Pollution Datasets via VLM

Result

Comparison of Object Detection Performance with VLM-C4L (AP, AR in %)

Methods                          WAYMO       Light Pollution                      Foggy
                                 AP    AR    AP    AP50  AP75  AR1   AR10  AR100  AP    AP50  AP75  AR1   AR10  AR100
Sparse R-CNN                     36.4  45.9  14.6  25.5  15.1  12.0  26.9  30.2   16.7  28.0  17.8  11.9  27.3  29.8
Sparse R-CNN + VLM-C4L (ours)    34.7  45.1  20.2  42.4  31.4  19.7  52.3  30.7   36.4  46.5  33.9  18.0  41.3  33.5
Cascade-DETR                     35.5  48.1  18.8  33.0  18.0  13.0  41.0  29.4   35.0  38.0  18.4  12.2  28.7  33.5
Cascade-DETR + VLM-C4L (ours)    33.3  46.5  30.4  48.3  31.2  20.7  49.6  31.1   24.9  40.2  25.4  16.9  39.5  40.8
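A quick sanity check of the margins in the table above, using values copied from it: for Sparse R-CNN, Light Pollution AP75 rises from 15.1 to 31.4 with VLM-C4L, and Foggy AR10 from 27.3 to 41.3.

```python
# Gains of Sparse R-CNN + VLM-C4L over the Sparse R-CNN baseline,
# on two metrics taken directly from the table above.
baseline = {"lp_ap75": 15.1, "fog_ar10": 27.3}
ours     = {"lp_ap75": 31.4, "fog_ar10": 41.3}
gains = {k: round(ours[k] - baseline[k], 1) for k in baseline}
print(gains)  # -> {'lp_ap75': 16.3, 'fog_ar10': 14.0}
```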



Comparison of Object Detection Performance with Different Corner Case Data Subsets (AP, AR in %)

Methods                           WAYMO       Light Pollution                      Foggy
                                  AP    AR    AP    AP50  AP75  AR1   AR10  AR100  AP    AP50  AP75  AR1   AR10  AR100
Sparse R-CNN + Random             34.6  43.8  20.4  41.0  15.1  10.0  35.9  39.2   24.7  25.1  14.1  10.2  30.4  31.4
Sparse R-CNN + VLM-DB             34.9  44.9  22.9  39.9  24.1  14.3  45.6  35.5   33.6  33.7  26.2  14.8  36.0  36.1
Sparse R-CNN + Random + Data Aug  35.1  45.2  22.6  42.2  19.8  13.5  42.9  32.0   31.2  35.7  17.6  14.1  31.8  35.3
Sparse R-CNN + VLM-C4L (ours)     33.7  46.1  31.5  52.4  34.1  20.1  54.1  37.4   29.5  41.1  32.2  17.6  41.3  41.5
Cascade-DETR + Random             34.5  47.5  17.5  30.4  14.2  9.7   33.6  33.2   27.7  29.6  18.4  10.0  30.1  34.4
Cascade-DETR + VLM-DB             35.0  48.1  21.7  40.2  17.5  12.7  42.9  36.7   27.2  32.2  18.5  12.2  33.3  35.8
Cascade-DETR + Random + Data Aug  35.1  48.4  21.8  38.2  17.0  12.0  40.5  33.6   28.0  30.5  21.4  12.3  35.3  33.9
Cascade-DETR + VLM-C4L (ours)     33.3  46.5  30.4  48.3  31.2  20.7  49.6  54.1   24.9  40.2  25.4  16.9  39.5  40.8



Comparison of Object Detection Performance with Different Core Confidence Ratios (AP, AR in %)

Conf. Ratio  WAYMO       Light Pollution                      Fog
             AP    AR    AP    AP50  AP75  AR1   AR10  AR100  AP    AP50  AP75  AR1   AR10  AR100
τ = 0.2      34.6  44.0  24.5  41.3  26.6  18.1  40.0  43.8   23.8  37.3  26.5  14.9  34.0  36.7
τ = 0.4      34.9  44.9  23.9  39.9  24.4  17.2  39.8  43.3   23.8  37.3  26.2  14.8  34.0  36.4
τ = 0.6      34.2  43.4  23.5  39.5  24.6  16.3  38.4  41.8   23.0  35.9  25.4  14.7  32.2  35.4
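The confidence ratio τ ablated above can be illustrated with a minimal sketch, assuming (hypothetically) that each candidate sample carries a detector confidence and that samples at or above τ are retained as core data. This shows only how the ablation axis behaves, not the paper's exact selection procedure.

```python
# Illustrative core-data selection by confidence threshold tau:
# raising tau keeps fewer, higher-confidence samples.
def select_core(samples, tau):
    return [s for s in samples if s["conf"] >= tau]

samples = [{"id": i, "conf": c} for i, c in enumerate([0.1, 0.3, 0.5, 0.7])]
for tau in (0.2, 0.4, 0.6):
    kept = select_core(samples, tau)
    print(tau, [s["id"] for s in kept])
# -> 0.2 keeps ids [1, 2, 3]; 0.4 keeps [2, 3]; 0.6 keeps [3]
```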



Comparison of Object Detection Performance with Different Numbers of Core Data (AP, AR in %)

Core Data           WAYMO       Light Pollution                      Fog
                    AP    AR    AP    AP50  AP75  AR1   AR10  AR100  AP    AP50  AP75  AR1   AR10  AR100
Dcore(1st) = 3000   34.6  43.8  26.8  43.9  29.1  18.3  41.8  45.4   24.3  37.8  26.8  15.1  35.3  38.0
Dcore(1st) = 10000  34.9  44.9  22.9  39.9  24.4  17.2  39.8  43.3   23.8  37.4  26.2  14.8  34.0  36.4
Dcore(1st) = 20000  35.4  44.9  19.1  32.9  19.7  15.8  33.8  36.4   20.7  33.8  22.5  13.8  31.4  33.5


BibTeX

BibTex Code Here