The autonomous vehicle industry reached a major inflection point this week when NVIDIA and Uber announced an expansive partnership to deploy 100,000 SAE Level 4-capable robotaxis and delivery vehicles beginning in 2027.
The collaboration, unveiled by NVIDIA founder and CEO Jensen Huang at NVIDIA GTC 2025 in Washington, D.C., represents the most ambitious autonomous vehicle deployment plan announced to date and signals a fundamental shift in how the world will experience mobility.
At the heart of this transformation lies NVIDIA’s new DRIVE AGX Hyperion 10 platform, a reference production architecture that the company claims can make any vehicle Level 4-ready.
The system features two DRIVE AGX Thor system-on-chips built on NVIDIA’s Blackwell architecture, each delivering more than 2,000 FP4 teraflops of real-time compute.
That is enough processing power to fuse data from 14 high-definition cameras, nine radars, one lidar, and 12 ultrasonic sensors simultaneously.
To put this in perspective, consider the sensor configurations in 2025 vehicles currently on the road. Tesla’s Hardware 4 system, which powers the company’s Full Self-Driving capability, uses up to 12 cameras and is expected to include a new Phoenix radar unit, though most Tesla vehicles still operate without radar.
Mercedes-Benz’s DRIVE PILOT, the first SAE Level 3 conditionally automated driving system certified in the United States, employs a comprehensive but smaller sensor suite including multiple cameras, radar units, lidar from Luminar Technologies, ultrasound sensors, and GPS antenna arrays.
The system operates only under specific conditions, on pre-mapped freeways during daytime with clear weather and moderate to heavy traffic moving under 40 mph, precisely because its sensor processing capabilities remain limited compared to what Level 4 autonomy demands.
General Motors’ Super Cruise, which MotorTrend named the best hands-free driving technology available in 2025, relies on cameras, GPS, precision lidar map data, and radar sensors. The system uses a driver-facing camera mounted atop the steering column to monitor attention, but it functions only on pre-mapped highways and still requires the driver to remain alert and ready to take control.
Current GM vehicles equipped with Super Cruise typically feature one forward-facing camera and multiple radar units, far fewer sensors than the Hyperion 10 platform.
Ford’s BlueCruise operates with a similarly constrained sensor setup: one forward-facing Mobileye camera mounted near the rearview mirror, five radar units (one mid-range and four short-range), a driver-monitoring camera, and GPS.
The forward camera provides a 52-degree horizontal field of view with operational ranges up to 200 meters for vehicles and 70 meters for pedestrians, while the mid-range radar covers approximately 200 meters. The system handles steering, braking, and speed control on pre-mapped highways, but the limited sensor count restricts its operational domain.
Even advanced research platforms fall short of Hyperion 10’s specifications. Mobileye’s Surround ADAS, designed for premium hands-off, eyes-on driving features, manages up to 11 sensors in total, multiple cameras and radars running on a single EyeQ6 High system-on-chip, which is still three sensors fewer than Hyperion 10’s camera count alone.
Fusing data from all 36 sensors simultaneously is what demands the more than 2,000 teraflops of compute that NVIDIA’s dual Thor processors provide: the system must analyze high-resolution video streams from 14 cameras, track objects and velocities from nine radars at different ranges, process three-dimensional point clouds from lidar, and integrate proximity data from 12 ultrasonic sensors, all at once.
This computational capacity enables the system to build a complete 360-degree environmental model in real time, predict the behavior of surrounding vehicles and pedestrians, plan safe trajectories, and execute driving decisions with the redundancy and reliability that Level 4 autonomy requires.
Current production vehicles typically process sensor data sequentially or with limited fusion capabilities due to computational constraints.
NVIDIA’s architecture enables true parallel processing and sophisticated AI model inference across all sensor inputs simultaneously, which explains why automakers view the platform as essential for achieving the reliability standards necessary for driverless operation.
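To make that fusion step concrete, the following is a minimal, purely illustrative Python sketch of one parallel fusion cycle across 14 cameras, nine radars, one lidar, and 12 ultrasonic sensors. Every class and function name here is a hypothetical stand-in rather than NVIDIA DriveOS code, and production systems run detection networks on dedicated accelerators, not Python threads.

```python
# Illustrative sketch only: a toy parallel sensor-fusion cycle in the spirit of the
# pipeline described above. All names are hypothetical; this is not NVIDIA software.
from concurrent.futures import ThreadPoolExecutor
from dataclasses import dataclass
import random

@dataclass
class Detection:
    sensor: str          # which sensor produced this detection
    bearing_deg: float   # direction of the detected object, in degrees
    range_m: float       # distance to the detected object, in meters

def read_camera(cam_id: int) -> list[Detection]:
    """Stand-in for decoding one camera frame and running object detection."""
    return [Detection(f"camera_{cam_id}", random.uniform(0, 360), random.uniform(5, 150))]

def read_radar(radar_id: int) -> list[Detection]:
    """Stand-in for one radar scan with range and velocity tracking."""
    return [Detection(f"radar_{radar_id}", random.uniform(0, 360), random.uniform(1, 250))]

def read_lidar() -> list[Detection]:
    """Stand-in for clustering a lidar point cloud into objects."""
    return [Detection("lidar_0", random.uniform(0, 360), random.uniform(1, 200))]

def read_ultrasonic(u_id: int) -> list[Detection]:
    """Stand-in for a short-range ultrasonic proximity reading."""
    return [Detection(f"ultrasonic_{u_id}", random.uniform(0, 360), random.uniform(0.2, 5))]

def fuse(detections: list[Detection]) -> dict[int, float]:
    """Merge all detections into a coarse 360-degree model: nearest object per 10-degree sector."""
    model: dict[int, float] = {}
    for d in detections:
        sector = int(d.bearing_deg // 10) % 36
        model[sector] = min(model.get(sector, float("inf")), d.range_m)
    return model

# Poll all 36 sensors for one fusion cycle in parallel rather than sequentially.
tasks = (
    [lambda i=i: read_camera(i) for i in range(14)]
    + [lambda i=i: read_radar(i) for i in range(9)]
    + [read_lidar]
    + [lambda i=i: read_ultrasonic(i) for i in range(12)]
)
with ThreadPoolExecutor(max_workers=36) as pool:
    results = pool.map(lambda task: task(), tasks)
detections = [d for sensor_output in results for d in sensor_output]
print(f"Fused {len(detections)} detections into {len(fuse(detections))} occupied sectors")
```

The point of the toy example is the structure: every sensor is polled within the same cycle and merged into a single 360-degree model, rather than being processed sequentially.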
“Robotaxis mark the beginning of a global transformation in mobility, making transportation safer, cleaner and more efficient,” Huang said during his keynote address. “Together with Uber, we’re creating a framework for the entire industry to deploy autonomous fleets at scale, powered by NVIDIA AI infrastructure. What was once science fiction is fast becoming an everyday reality.”
Among the first automakers to commit to the initiative, Stellantis announced it will manufacture at least 5,000 Level 4 autonomous vehicles specifically for Uber’s platform, with production scheduled to begin in 2028.
The vehicles will be based on Stellantis’ AV-Ready Platforms, including the K0 Medium Size Van and STLA Small architecture, both designed with the flexibility to accommodate multiple passenger and commercial mobility use cases.
Stellantis will collaborate with Foxconn on hardware and systems integration, leveraging the Taiwanese electronics giant’s expertise in manufacturing and component assembly.
The partnership positions Stellantis as a significant player in the autonomous vehicle manufacturing space, building on the company’s experience supplying vehicles to Waymo, Motional, and AutoX since first providing Chrysler Pacifica minivans to Waymo in 2016.
“By combining Stellantis’s global scale with Nvidia Drive and Foxconn’s system integration, we’re creating a new class of purpose-built robotaxi vehicles,” said Antonio Filosa, CEO of Stellantis.
The collaboration follows Stellantis’ recent announcement of a separate agreement with Pony.ai to test autonomous vehicles in Europe, indicating the automaker’s commitment to a multi-partner approach in developing its autonomous vehicle strategy.
Initial deployment will focus on select U.S. cities, with Uber planning to expand the Stellantis-built fleet globally over time.
The non-binding memorandum of understanding establishes a framework for technology development, licensing, production, and procurement while allowing each company the flexibility to pursue additional opportunities in autonomous driving.
While Stellantis focuses on fleet operations, electric vehicle manufacturer Lucid is pursuing a different strategy: bringing Level 4 autonomy directly to consumers.
The company announced it will integrate NVIDIA’s full-stack autonomous driving software and the DRIVE Hyperion platform into its next-generation passenger vehicles, including its upcoming midsize SUV expected to launch at the end of 2026.
Lucid interim CEO Marc Winterhoff emphasized the company’s focus on true hands-off, eyes-off autonomy for private vehicle owners. “After a night out somewhere, you want to be driven home, you don’t want to have to take over at some point,” Winterhoff explained in an interview. “We think with today’s technology and with these kinds of partnerships we can make it work.”
The announcement builds on Lucid’s July 2025 partnership with Uber and self-driving technology company Nuro to develop a separate robotaxi fleet.
That agreement calls for deploying at least 20,000 Lucid Gravity SUVs equipped with Nuro’s Level 4 autonomy system exclusively for Uber’s platform over six years, with the first launch expected in the San Francisco Bay Area in late 2026.
Lucid’s dual-track approach in pursuing both private consumer vehicles and a dedicated robotaxi fleet reflects the company’s confidence in its “fully redundant zonal architecture” and software-defined vehicle platform as ideal foundations for autonomous operations.
The company recently delivered engineering test vehicles to Nuro, with plans to field more than 100 robotaxis in the test fleet within the coming months.
Luxury automaker Mercedes-Benz is also exploring Level 4 capabilities, evaluating future collaborations with industry partners powered by its proprietary MB.OS operating system and the NVIDIA DRIVE AGX Hyperion architecture.
The German manufacturer plans to offer an exceptional chauffeured Level 4 experience in its new S-Class, combining luxury, safety, and cutting-edge autonomy.
Mercedes-Benz’s participation represents a strategic bet on premium autonomous mobility, where customers willing to pay for luxury vehicles may be early adopters of fully autonomous features.
The automaker’s approach focuses on delivering what it describes as a “chauffeured” experience rather than simply self-driving capability, a subtle distinction that emphasizes the premium nature of the offering.
SAE Level 4, defined as “high driving automation,” represents a significant leap from the advanced driver assistance systems currently available in consumer vehicles.
At this level, the automated driving system can perform all driving tasks and monitor the driving environment for an entire trip without requiring human intervention, but only within specific operational design domains: defined areas where factors such as location, road type, speed range, and weather conditions are controlled.
The distinction between Level 4 and the aspirational Level 5 (full automation under all conditions) is crucial.
Level 4 vehicles can operate without a human driver within their designated service areas, but they must be able to safely stop if conditions prevent autonomous operation or if they approach the boundaries of their operational domain.
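As a rough illustration of that boundary behavior, the sketch below gates autonomous driving on a handful of hypothetical operational-design-domain checks. The thresholds and field names are invented for illustration and are not drawn from any production system.

```python
# Toy sketch of Level 4 operational-design-domain (ODD) gating: hypothetical
# thresholds and field names, for illustration only.
from dataclasses import dataclass

@dataclass
class Conditions:
    inside_geofence: bool   # vehicle is within the mapped service area
    weather: str            # e.g. "clear", "rain", "snow"
    speed_limit_mph: int    # posted limit on the current road segment

def drive_decision(c: Conditions) -> str:
    """Return what the automated system does under the current conditions."""
    in_odd = c.inside_geofence and c.weather in {"clear", "rain"} and c.speed_limit_mph <= 65
    if in_odd:
        return "continue autonomous driving"
    # Outside its ODD, a Level 4 system does not hand control to a human mid-trip;
    # it performs a minimal-risk maneuver such as pulling over and stopping safely.
    return "execute minimal-risk maneuver: pull over and stop"

print(drive_decision(Conditions(inside_geofence=True, weather="clear", speed_limit_mph=45)))
print(drive_decision(Conditions(inside_geofence=False, weather="snow", speed_limit_mph=45)))
```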
NVIDIA’s DRIVE AGX Hyperion 10 platform provides the computational foundation for this capability.
The system includes the safety-certified NVIDIA DriveOS operating system, which enables real-time processing, security, and system monitoring to meet functional safety requirements.
The modular and customizable architecture allows manufacturers and autonomous vehicle developers to tailor the platform to their specific requirements while maintaining access to NVIDIA’s rigorous development expertise and investments in automotive engineering and safety.
At the core of the platform sit two DRIVE AGX Thor processors, each optimized for transformer-based AI models, vision language action models, and generative AI workloads.
This processing power enables the system to interpret what NVIDIA calls “nuanced and unpredictable real-world conditions, such as sudden changes in traffic flow, unstructured intersections and unpredictable human behavior in real time.”
NVIDIA’s autonomous driving approach increasingly relies on foundation AI models, large language models, and generative AI trained on trillions of real and synthetic driving miles.
These advanced models enable self-driving systems to solve highly complex urban driving situations with human-like reasoning and adaptability.
Central to this approach is NVIDIA Cosmos, a platform purpose-built for physical AI that features state-of-the-art generative world foundation models, guardrails, and an accelerated data processing and curation pipeline.
NVIDIA and Uber are working together to develop a joint AI data factory built on the Cosmos platform to curate and process the massive amounts of data needed for autonomous vehicle development.
The Cosmos platform includes three model types that can be customized through post-training (a conceptual sketch of how they fit together appears after the list):
Cosmos Predict generates future frames based on inputs to build datasets predicting various edge cases.
Cosmos Reason uses chain-of-thought reasoning to evaluate synthetic visuals and reward outcomes.
Cosmos Transfer amplifies structured video across various environments and lighting conditions.
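To show how those three roles could chain together in a data factory of the kind NVIDIA and Uber describe, here is a purely conceptual Python outline. The function names are hypothetical placeholders standing in for the three model types; this is not the Cosmos API, and real pipelines operate on video and sensor tensors rather than strings.

```python
# Conceptual outline of how the three Cosmos model roles could chain in a synthetic-data
# pipeline. Function names are hypothetical placeholders, not the Cosmos API.
def predict_future_frames(seed_clip: str, num_variants: int) -> list[str]:
    """Role of a Predict-style world model: generate plausible continuations of a scene."""
    return [f"{seed_clip}_variant_{i}" for i in range(num_variants)]

def score_clip(clip: str) -> float:
    """Role of a Reason-style model: judge whether a generated clip is physically plausible."""
    return 1.0 if "variant" in clip else 0.0  # placeholder heuristic

def transfer_conditions(clip: str, condition: str) -> str:
    """Role of a Transfer-style model: re-render the same scene under new weather or lighting."""
    return f"{clip}__{condition}"

# Chain the three roles: expand a rare edge case, keep only plausible results,
# then multiply the survivors across environments for training and validation data.
seed = "unprotected_left_turn_near_miss"
candidates = predict_future_frames(seed, num_variants=4)
kept = [c for c in candidates if score_clip(c) > 0.5]
dataset = [transfer_conditions(c, cond) for c in kept for cond in ("night", "rain", "fog")]
print(f"{len(dataset)} curated synthetic clips from one seed scenario")
```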
To support industry development, NVIDIA is releasing what it describes as the world’s largest multimodal autonomous vehicle dataset, comprising 1,700 hours of real-world camera, radar, and lidar data across 25 countries.
The dataset is designed to bolster development, post-training, and validation of foundation models for autonomous driving.
Underlying the entire autonomous vehicle ecosystem is NVIDIA Halos, which the company describes as a comprehensive full-stack safety system that unifies vehicle architecture, AI models, chips, software, tools, and services to ensure the safe development and deployment of autonomous vehicles from cloud to car.
The Halos system delivers state-of-the-art safety guardrails using three powerful computers:
NVIDIA DGX for AI training with safety-focused data curation workflows.
NVIDIA Omniverse and Cosmos for building physically accurate driving simulations and validation.
NVIDIA DRIVE AGX for runtime safety across layers of the hardware and software stack.
A key component of the safety framework is the NVIDIA Halos AI Systems Inspection Lab, the industry’s first program to be accredited by the ANSI National Accreditation Board.
The lab performs independent evaluations and oversees the new Halos Certified Program, helping ensure products and systems meet rigorous criteria for trusted physical AI deployments.
Companies including AUMOVIO, Bosch, Nuro, and Wayve are among the inaugural members of the inspection lab, which aims to accelerate the safe, large-scale deployment of Level 4 automated driving and other AI-powered systems.
The certification program unifies functional safety, cybersecurity, and AI compliance into a single framework, streamlining technical validations and empowering the automotive ecosystem to deploy safer, more reliable AI-driven technologies.
While passenger mobility dominates the headlines, NVIDIA and Uber’s autonomous vehicle ecosystem extends to long-haul freight operations.
Aurora, Volvo Autonomous Solutions, and Waabi are developing Level 4 autonomous trucks powered by the NVIDIA DRIVE platform.
Waabi, backed by both Uber and NVIDIA, unveiled its production-ready Volvo VNL Autonomous truck at TechCrunch Disrupt this week, marking a significant milestone in autonomous trucking.
CEO Raquel Urtasun positioned the company to be the first to operate truly driverless commercial trucking operations, a pointed contrast to competitor Aurora, which launched commercial service earlier this year with human observers on board.
“The future of autonomous trucking hinges on three critical areas: autonomous technology that is safe, scalable, and can deliver on customer needs; hardware that is purpose-built for autonomous operations from the ground up; and a commercial deployment model that solves problems in the supply chain without added friction,” Urtasun said.
The Waabi Driver, an end-to-end interpretable AI model, enables the truck to safely navigate both highways and general surface streets, facilitating commercial operations that work within existing logistics frameworks.
Volvo Autonomous Solutions built the VNL Autonomous with redundant systems for safety-critical functions including braking, steering, and communication.
Production will take place at Volvo’s flagship New River Valley assembly plant, with the truck based on Volvo’s autonomous technology platform designed to support diverse operational needs and truck brands.
The ambitious deployment plans announced by NVIDIA, Uber, and their partners come as the robotaxi market is experiencing explosive growth.
Industry analysts project the global robotaxi market will leap from $400 million in 2023 to $45.7 billion by 2030, representing a compound annual growth rate of 91.8 percent.
Waymo, owned by Google parent Alphabet, currently leads the commercial robotaxi market with operations in Phoenix, San Francisco, Los Angeles, Atlanta, and Austin.
The company reportedly manages over 250,000 rides weekly, though that represents just a fraction of the scale Uber operates at globally with its human driver network.
Recent market data from San Francisco shows Waymo capturing approximately 25 percent market share, with Uber at 55 percent and Lyft at 20 percent.
However, Waymo rides cost an average of 31 to 41 percent more than comparable Uber or Lyft rides, according to studies comparing pricing across the same routes and times.
Despite the premium pricing, approximately 70 percent of survey respondents in California and Phoenix expressed preference for the Waymo experience over traditional rideshare with a human driver.
Tesla launched its own robotaxi service in Austin in June 2025, operating initially in a limited invitation-only format before opening to the public in September.
The company has since expanded its operational area to approximately 170 square miles, though analysts note more time is required to assess whether the technology is mature enough to scale.
In China, companies including Baidu’s Apollo Go, Pony.ai, WeRide, and DiDi are rapidly expanding autonomous operations, with the country aiming to deploy one million robotaxis by 2030.
The aggressive timeline reflects both government support and massive capital investment in autonomous vehicle technology across the region.
NVIDIA CEO Huang emphasized that automotive currently represents less than two percent of the company’s revenue, but the company sees major growth potential with the rise of what it calls “physical AI” to power robots, autonomous vehicles, and automated factories.
The partnerships announced this week position NVIDIA as the computational backbone for a substantial portion of the autonomous vehicle industry’s future deployments.
For Uber, the transition to autonomous vehicles represents both opportunity and necessity. As the world’s largest ride-hailing service operating in 15,000 cities across more than 70 countries, the company has systematically partnered with autonomous vehicle developers rather than building its own technology following the fatal 2018 crash involving one of its test vehicles.
“NVIDIA is the backbone of the AI era, and is now fully harnessing that innovation to unleash L4 autonomy at enormous scale, while making it easier for NVIDIA-empowered AVs to be deployed on Uber,” said Dara Khosrowshahi, CEO of Uber.
The company currently operates autonomous rides through partnerships with Waymo in Austin and Atlanta, and WeRide in Abu Dhabi and Saudi Arabia, with fleet sizes measured in the hundreds—tiny compared to the millions of human drivers on the platform.
The 100,000-vehicle deployment target starting in 2027 would represent a quantum leap in scale, though it remains modest compared to Uber’s existing human driver network. The timeline is aggressive: Stellantis won’t begin production until 2028, and other automakers have yet to announce specific production commitments.
With automotive industry giants, the world’s leading AI computing company, and the largest ride-hailing platform now aligned behind a common technological foundation, the autonomous vehicle future that has long been promised may finally be approaching reality.
Sources:
https://nvidianews.nvidia.com/news/nvidia-uber-robotaxi
https://nvidianews.nvidia.com/news/nvidia-us-manufacturing-robotics-physical-ai
https://media.stellantisnorthamerica.com/newsrelease.do?id=27171&mid=1
Article Last Updated: October 29, 2025.