NVIDIA showcases accelerated computing and generative AI breakthroughs for autonomous vehicle development at the Computer Vision and Pattern Recognition (CVPR) conference.
Making moves to accelerate self-driving car development, NVIDIA was today named an Autonomous Grand Challenge winner at the Computer Vision and Pattern Recognition (CVPR) conference. Building on last year's win in 3D Occupancy Prediction, NVIDIA researchers this year topped the End-to-End Driving at Scale track with their Hydra-MDP model.
This milestone shows the importance of generative AI in building applications for physical AI deployments in autonomous vehicle (AV) development. The technology can also be applied to industrial environments, healthcare, robotics and other areas.
The winning submission also received CVPR's Innovation Award, recognizing NVIDIA's approach to improving "any end-to-end driving model using learned open-loop proxy metrics."
In addition, NVIDIA announced NVIDIA Omniverse Cloud Sensor RTX, a set of microservices that enable physically accurate sensor simulation to accelerate the development of fully autonomous machines of every kind.
How End-to-End Driving Works
The race to develop self-driving cars isn't a sprint but more a never-ending triathlon, with three distinct yet crucial parts operating simultaneously: AI training, simulation and autonomous driving. Each requires its own accelerated computing platform, and together, the full-stack systems purpose-built for these steps form a powerful triad that enables continuous development cycles, always improving in performance and safety.
To accomplish this, a model is first trained on an AI supercomputer such as NVIDIA DGX. It's then tested and validated in simulation, using the NVIDIA Omniverse platform running on an NVIDIA OVX system, before entering the vehicle, where, lastly, the NVIDIA DRIVE AGX platform processes sensor data through the model in real time.
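The continuous development cycle described above can be sketched as a simple loop. This is an illustrative toy, not an NVIDIA API: the stage functions here are stubs standing in for training on DGX, closed-loop validation in Omniverse on OVX, and in-vehicle deployment on DRIVE AGX.

```python
from dataclasses import dataclass

@dataclass
class SimResult:
    safety_score: float

    def meets_safety_bar(self, threshold: float = 0.9) -> bool:
        return self.safety_score >= threshold

# Stub stages; in practice each runs on its own accelerated platform.
def train(model, logs):   # DGX: update the model on logged driving data
    return model + len(logs)  # toy "weight update" for illustration only
def simulate(model):      # Omniverse on OVX: closed-loop validation
    return SimResult(safety_score=0.95)
def drive(model):         # DRIVE AGX: run in the vehicle, collect new logs
    return ["new_log"]

def development_cycle(model, logs):
    """One iteration of the train -> simulate -> drive triad."""
    model = train(model, logs)                  # 1. AI training
    if not simulate(model).meets_safety_bar():  # 2. simulation gate
        return model, logs                      #    iterate before deploying
    return model, logs + drive(model)           # 3. autonomous driving
```

Each pass through the loop feeds newly collected data back into training, which is why the article describes the process as never-ending rather than a one-shot pipeline.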
Building an autonomous system to navigate safely in the complex physical world is extremely challenging. The system needs to perceive and understand its surrounding environment holistically, then make correct, safe decisions in a fraction of a second. This requires human-like situational awareness to handle potentially dangerous or rare scenarios.
AV software development has traditionally been based on a modular approach, with separate components for object detection and tracking, trajectory prediction, and path planning and control.
End-to-end autonomous driving systems streamline this process using a unified model to take in sensor input and produce vehicle trajectories, helping avoid overcomplicated pipelines and providing a more holistic, data-driven approach to handle real-world scenarios.
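To make the contrast with the modular pipeline concrete, here is a minimal sketch of the end-to-end interface: one model maps raw sensor features directly to a planned trajectory. A real end-to-end planner such as Hydra-MDP is a deep network; the single linear layer, feature size, and waypoint count below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights of a single linear "planner" layer mapping a
# 2048-dim fused sensor feature to 3 future (x, y) waypoints.
W = rng.standard_normal((6, 2048)) * 0.01

def end_to_end_drive(sensor_features: np.ndarray) -> np.ndarray:
    """One unified model: sensor input in, vehicle trajectory out.

    No separate detection, tracking, prediction, or planning modules.
    """
    flat = W @ sensor_features
    return flat.reshape(3, 2)  # (num_waypoints, xy)

trajectory = end_to_end_drive(rng.standard_normal(2048))
```

The design point is the signature, not the weights: because perception and planning share one differentiable model, the whole system can be trained from data rather than hand-tuned module by module.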
Watch a video about the Hydra-MDP model, winner of the CVPR Autonomous Grand Challenge for End-to-End Driving.
Navigating the Grand Challenge
This year's CVPR challenge asked participants to develop an end-to-end AV model, trained using the nuPlan dataset, to generate driving trajectories based on sensor data.
The models were submitted for testing inside the open-source NAVSIM simulator and were tasked with navigating thousands of scenarios they hadn't experienced yet. Model performance was scored based on metrics for safety, passenger comfort and deviation from the original recorded trajectory.
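The scoring scheme described above can be sketched as follows. This is a simplified illustration in the spirit of NAVSIM-style evaluation, not the official challenge formula: hard safety checks gate the score, and the soft metrics are combined with illustrative equal weights.

```python
def drive_score(collision_free: bool, in_drivable_area: bool,
                comfort: float, trajectory_fidelity: float) -> float:
    """Toy driving score: safety failures zero the score outright;
    otherwise average comfort and closeness to the recorded trajectory.

    comfort and trajectory_fidelity are assumed normalized to [0, 1].
    Weights are illustrative, not the challenge's actual metric.
    """
    if not (collision_free and in_drivable_area):
        return 0.0
    return 0.5 * comfort + 0.5 * trajectory_fidelity
```

Gating on safety before averaging the softer metrics reflects the priority order in the article: a comfortable ride counts for nothing if the model causes a collision.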
The workflow NVIDIA researchers used to win the competition can be replicated in high-fidelity simulated environments with NVIDIA Omniverse. This means AV simulation developers can recreate the workflow in a physically accurate environment before testing their AVs in the real world. NVIDIA Omniverse Cloud Sensor RTX microservices will be available later this year. Sign up for early access.
In addition, NVIDIA ranked second for its submission to the CVPR Autonomous Grand Challenge for Driving with Language. NVIDIA's approach connects vision language models and autonomous driving systems, integrating the power of large language models to help make decisions and achieve generalizable, explainable driving behavior.
Learn More at CVPR
More than 50 NVIDIA papers were accepted to this year's CVPR, on topics spanning automotive, healthcare, robotics and more. Over a dozen papers will cover NVIDIA automotive-related research, including:
Hydra-MDP: End-to-End Multimodal Planning With Multi-Target Hydra-Distillation
Winner of CVPR's End-to-End Driving at Scale challenge
Read the NVIDIA technical blog
Producing and Leveraging Online Map Uncertainty in Trajectory Prediction
CVPR best paper award finalist
Driving Everywhere With Large Language Model Policy Adaptation
Is Ego Status All You Need for Open-Loop End-to-End Autonomous Driving?
Improving Distant 3D Object Detection Using 2D Box Supervision
Dynamic LiDAR Resimulation Using Compositional Neural Fields
BEVNeXt: Reviving Dense BEV Frameworks for 3D Object Detection
PARA-Drive: Parallelized Architecture for Real-Time Autonomous Driving
Learn more about NVIDIA Research at CVPR.
© 2024 Electronic News Publishing.