Sensor-Drop: Improving Robustness of Cooperative Perception under V2X Message Loss
Keywords:
V2X, cooperative perception, robustness, intermediate fusion, data augmentation

Abstract
Cooperative perception (CP) allows autonomous vehicles to share sensor data via V2X communication, expanding each vehicle's field of view and improving 3D object detection. However, real-world V2X links are often unreliable, with packet loss rates of up to 30% in some scenarios and occasional high latencies [1]. This leads to significant performance degradation (e.g., 7–10 percentage-point drops in 3D detection mAP) for current "full-fusion" CP models when communication fails [1]. In this paper, we propose SensorDrop, a lightweight training-time data augmentation strategy that randomly drops collaborative sensor features during training to simulate message loss. SensorDrop requires no additional network modules or inference overhead and does not alter the V2X protocol, yet substantially improves the robustness of cooperative perception models under high packet loss. We describe the design of SensorDrop and position it relative to prior methods addressing V2X unreliability. A comprehensive evaluation plan is outlined, including experiments on the real-world DAIR-V2X dataset under independent and bursty packet loss conditions. Our results suggest that training with randomized sensor feature masking can significantly enhance the robustness of V2X perception systems against real-world communication imperfections.
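The core idea described above — randomly zeroing out collaborators' intermediate features during training to simulate message loss — can be sketched as follows. This is a hypothetical illustration, not the paper's implementation; the function name, tensor layout `(N_agents, C, H, W)`, and the convention that index 0 is the ego vehicle are all assumptions.

```python
import numpy as np

def sensor_drop(agent_feats, p_drop=0.3, training=True, rng=None):
    """Simulate V2X message loss by masking collaborators' feature maps.

    Hypothetical sketch of the SensorDrop augmentation:
    agent_feats -- array of shape (N_agents, C, H, W); index 0 is the
                   ego vehicle, whose features are never dropped.
    p_drop      -- per-agent probability that a collaborator's message
                   is lost (assumed independent loss model).
    """
    if not training:
        # No augmentation at inference time: SensorDrop is train-only.
        return agent_feats
    rng = rng or np.random.default_rng()
    # Bernoulli keep-mask per agent; a dropped agent's features become zero,
    # as if its V2X message never arrived.
    keep = (rng.random(agent_feats.shape[0]) >= p_drop).astype(agent_feats.dtype)
    keep[0] = 1.0  # ego features always survive
    return agent_feats * keep.reshape(-1, 1, 1, 1)
```

Because the masking is applied only at training time and leaves the fusion network and V2X protocol untouched, it adds no inference overhead, matching the design constraints stated in the abstract.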
Published: 2025-10-31
Section: Articles
License

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.