By Onn Haran
Several years ago, Autonomous Vehicles (AVs) were promised to be commonplace by now, yet they can rarely be spotted. Mercedes has only just introduced Level 3 vehicles[1]. Several hundred Level 4 robotaxis are operating globally. Recently, several manufacturers have scaled back their self-driving plans.
No V2X supplier is a stranger to delayed mass deployment. But while V2X suffered from regulatory uncertainty, AV introduction is delayed mostly by the inability to assure correct operation under all conditions. Covering all corner cases has turned out to be a much greater challenge than anticipated.
AVs are trained on a myriad of scenarios. Still, failing to train on a specific object in a specific scenario may lead to a fault.
V2X is a unique vehicular sensor: it is the only sensor that doesn't require training. V2X information explicitly describes an object's properties, and because the information doesn't depend on any external factor, it is applicable in any scenario.
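To make that concrete, the sketch below shows the kind of explicit state a V2X safety message carries. The field set is simplified and the names are illustrative, not the actual SAE J2735 encoding; the point is that every property is read directly from the message rather than inferred by a trained model.

```python
# Minimal sketch of the explicit state a V2X safety message carries.
# Field names are illustrative; the actual SAE J2735 BSM encoding differs.
from dataclasses import dataclass

@dataclass
class V2XObjectState:
    station_id: int       # temporary identifier of the sender
    latitude_deg: float   # WGS-84 position, reported by the sender itself
    longitude_deg: float
    speed_mps: float      # speed over ground
    heading_deg: float    # 0 = north, increasing clockwise
    length_m: float       # self-declared vehicle dimensions
    width_m: float

def is_oncoming(msg: V2XObjectState, own_heading_deg: float,
                tolerance_deg: float = 20.0) -> bool:
    """Toy relevance check: is the sender driving roughly toward us?

    Every property is read straight from the message; no detector or
    classifier is involved, so there is nothing to train.
    """
    # Normalize the heading difference into [0, 180] degrees.
    diff = abs((msg.heading_deg - own_heading_deg + 180.0) % 360.0 - 180.0)
    return diff >= 180.0 - tolerance_deg
```

Because none of these fields is inferred, there is no training set that could have missed the object: a properly formed message is interpreted the same way in rain, darkness, or a never-before-seen scene.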
The corner cases for visual perception were categorized in a research paper[2]. All failure cases are irrelevant to V2X except the object-level anomaly, in which an unknown object appears on the road. V2X operation is deterministic in any scenario, lighting, weather, or presence of other objects. The table below lists the main corner-case categories, sorted by ascending risk level:
| Case level | Failure | Relevant V2X property |
| --- | --- | --- |
| Pixel | Outlier | Agnostic to dirt or lighting conditions |
| Domain | Appearance shift | Agnostic to weather conditions |
| Object | Contextual anomaly (the object is in an unusual location) or collective anomaly (the object appears in an unseen quantity) | Agnostic to the location and quantity of similar nearby objects |
| Scenario | Novel or anomalous scenario not observed during training | Agnostic to scenario |
AV developers continue to expand their training sets to cover as many corner cases as possible. History has taught us that this is a complex task. Can 100% certainty be achieved? And given the declining investment in AVs, can that effort be funded?
V2X also solves the interaction of AVs with first responders, namely police forces, ambulances, and firefighters, and in some cases even with regular vehicles. Other sensors are limited and rely on assumptions about the nature and shape of those interactions.
V2X is very valuable to AVs: it can accelerate their maturity. But then a question typically emerges: "How can an AV rely on V2X if it isn't in 100% of vehicles?" The answer: "V2X will be aware of 100% of vehicles sooner than you might think. Thanks to cooperative perception, that can be achieved with V2X deployed in only 10% of vehicles."
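Why 10% penetration can go so far becomes tangible with a back-of-the-envelope simulation. The sketch below is a minimal 1-D Monte Carlo; the road length, traffic density, and perception range are illustrative assumptions, not field data or a model of any specific deployment.

```python
# Back-of-the-envelope Monte Carlo for cooperative-perception coverage.
# Every parameter below is an illustrative assumption, not field data.
import random

ROAD_LENGTH_M = 2000.0   # straight 1-D road segment
VEHICLE_COUNT = 200      # dense traffic, ~10 m average spacing
PENETRATION = 0.10       # share of vehicles equipped with V2X
SENSOR_RANGE_M = 150.0   # assumed perception range of an equipped vehicle
TRIALS = 500

def visible_share() -> float:
    """Fraction of vehicles an AV hears about over V2X in one random draw."""
    positions = [random.uniform(0.0, ROAD_LENGTH_M) for _ in range(VEHICLE_COUNT)]
    equipped = [random.random() < PENETRATION for _ in range(VEHICLE_COUNT)]
    visible = 0
    for i, pos in enumerate(positions):
        if equipped[i]:
            visible += 1  # announces itself directly
        elif any(equipped[j] and abs(positions[j] - pos) <= SENSOR_RANGE_M
                 for j in range(VEHICLE_COUNT) if j != i):
            visible += 1  # detected and relayed by an equipped neighbor
    return visible / VEHICLE_COUNT

average = sum(visible_share() for _ in range(TRIALS)) / TRIALS
print(f"Vehicles visible via V2X at {PENETRATION:.0%} penetration: {average:.1%}")
```

Under these assumed parameters, on the order of 95% of vehicles become visible to an AV even though only 10% of them transmit, because each equipped vehicle also relays what its own sensors detect. The exact figure moves with density and sensor range, but this amplification effect is the heart of the cooperative-perception argument.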
The timelines of AVs and V2X seem to match anyhow. If OEMs make V2X a key element of the AV architecture, they will have additional motivation to accelerate V2X deployment, thereby elevating AV reliability, accelerating AV deployment, and eventually making AVs affordable.
[1] https://group.mercedes-benz.com/innovation/product-innovation/autonomous-driving/drive-pilot-nevada.html
[2] J. Breitenstein, J.-A. Termöhlen, D. Lipinski and T. Fingscheidt, "Systematization of Corner Cases for Visual Perception in Automated Driving," 2020 IEEE Intelligent Vehicles Symposium (IV).