
Nvidia's 'ChatGPT moment' for self-driving cars, and other key AI announcements at GTC 2026

Screenshot by Radhika Rajkumar/ZDNET



ZDNET’s key takeaways

  • Nvidia released new models for autonomous robots, cars, and more. 
  • Uber will add Nvidia-powered robotaxis to cities as early as 2027. 
  • More lifelike robotics could mean robotic characters at Disney World.

To close out his Nvidia GTC keynote on Monday, CEO Jensen Huang brought out an unexpected guest: a walking, talking robot version of Olaf, the animated snowman from Disney’s Frozen movie. Huang explained to robo-Olaf that he runs on Nvidia’s Jetson platform and learned to walk inside the company’s Omniverse simulator.

Olaf’s responses didn’t always make sense and the conversation was awkward, but the idea was clear: in the future, robotic characters could wander around Disneyland using Nvidia’s tech.

Also: Nvidia wants to own your AI data center from end to end

Physical AI — AI systems embedded in machines like robots or cars that navigate real-world environments, as opposed to models stuck in the cloud or on your phone — has been gaining steam over the last year, and was all over CES this past January. At GTC, Nvidia made several investments in the technology, ranging from new models to support for the data that makes or breaks physical AI systems. 

Here’s what’s new. 

New models for physical AI

Nvidia released several new foundation models geared towards improving how robots and vehicles function in the real world. They include Cosmos 3, which generates synthetic worlds to help physical AI navigate complex environments; Isaac GR00T N1.7, an “open reasoning vision language action (VLA) model” built for humanoid robots, which the company says is “commercially viable for real-world deployment”; and Alpamayo 1.5, another reasoning VLA model that gives self-driving vehicles better navigation guidance and prompt specification. 

Also: Nvidia bets on OpenClaw, but adds a security layer – how NemoClaw works

Nvidia called Alpamayo 1.5 “a major upgrade” within its existing autonomous vehicle model family, noting it “takes driving video, ego-motion history, navigation guidance and natural language prompts as inputs.” It turns those inputs into driving trajectories that let developers closely track a vehicle’s behavior and create safety guardrails through prompts. Nvidia said Alpamayo 1.5 can help take autonomous driving to the next level by making it easier to learn from unpredictable road events, weather conditions, or pedestrian activity. 
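
To make that input-to-trajectory flow concrete, here is a minimal Python sketch of the idea. Every name in it (DrivingInputs, Trajectory, plan_trajectory, model.generate) is hypothetical and illustrative only; it is not Nvidia's actual Alpamayo API, just a way to picture a model that takes video, ego-motion history, navigation guidance, and a prompt, and returns an auditable trajectory.

```python
from dataclasses import dataclass
from typing import List, Tuple

import numpy as np


@dataclass
class DrivingInputs:
    """Hypothetical container mirroring the inputs the article lists."""
    video_frames: List[np.ndarray]                         # recent camera frames
    ego_motion_history: List[Tuple[float, float, float]]   # past (x, y, heading) states
    navigation_guidance: str                                # e.g. "take the next left"
    language_prompt: str                                    # guardrail text, e.g. "yield to pedestrians"


@dataclass
class Trajectory:
    """Hypothetical output: waypoints a developer can inspect and constrain."""
    waypoints: List[Tuple[float, float]]  # planned future (x, y) positions
    rationale: str                        # reasoning trace for auditing behavior


def plan_trajectory(model, inputs: DrivingInputs) -> Trajectory:
    # 'model.generate' is a stand-in for whatever inference call the real system exposes.
    output = model.generate(inputs)
    return Trajectory(waypoints=output["waypoints"], rationale=output["rationale"])
```

Because the output is a trajectory plus a rationale rather than raw steering commands, developers can track the vehicle's intended behavior and attach safety checks to it through prompts.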

Currently, Nvidia said, its customers are using Cosmos 3 to train physical AI systems and GR00T N1.7 to “scale humanoid robot deployment.” 

Autonomous vehicles 

With an image of 110 different robots behind him, Nvidia CEO Jensen Huang declared that the “ChatGPT moment of self-driving cars has arrived.” 

Nvidia is broadening its partnership with Uber, saying it will “launch a fleet of autonomous vehicles” powered entirely by Nvidia’s Drive AV software in 28 cities across four continents by 2028, with Los Angeles and San Francisco starting earlier in 2027. Presumably, that means users will be able to book self-driving cars in the Uber app on a much larger scale. 

Also: Why encrypted backups may fail in an AI-driven ransomware era

“This DRIVE Hyperion-powered fleet will tap into NVIDIA Alpamayo open models and the NVIDIA Halos operating system to accelerate the development and deployment of safe, scalable robotaxi services worldwide,” the company said in the release. 

The company is also adding several automakers, including BYD, Hyundai, Nissan, and Geely, to its robotaxi initiative, which already includes GM, Mercedes, and Toyota. Several of the new additions will continue to use Nvidia’s Drive Hyperion platform, alongside its Alpamayo models, to scale training for “level 4” vehicles, meaning highly automated cars that drive themselves with essentially no direction from human passengers within defined operating conditions.

Edge AI and space computing

Nvidia is also working with T-Mobile and Nokia to speed up physical AI using AI radio access network (AI-RAN) infrastructure in remote locations. The company says this could help real-world data collection for physical AI reach unconnected, isolated, or overcrowded zones over 5G connectivity without disrupting it. 

“By turning the 5G network into a distributed AI computer with T-Mobile and Nokia, we’re creating a scalable blueprint for the world’s edge AI infrastructure,” Huang said in the announcement. 

The benefit of edge AI is low latency: Local hubs allow information to move more quickly than when it has to cross the entire internet. Nvidia’s partnership uses T-Mobile’s existing infrastructure to provide that low-latency compute for the development of physical AI. The company said utility and operations companies are already using physical AI agents, systems, and digital twins across this infrastructure for use cases like optimizing traffic light timing or fixing transmission lines. 

In another announcement, Nvidia also nodded to space computing. The company said its new platforms, including Vera Rubin, are “unlocking a new era of space innovation, bringing AI compute to orbital data centers (ODCs), geospatial intelligence and autonomous space operations.”

Also: What’s the deal with physical AI? Why the next frontier of tech is already all around you

What that means in practice: Nvidia is working toward AI applications that can operate between Earth and space, as well as from one point in space to another. Nvidia said its IGX Thor and Jetson Orin platforms offer the energy-efficient inference and data processing required to operate in orbit, which is edge AI functioning as a local hub in space, outside the cloud. 

“As we deploy satellite constellations and explore deeper into space, intelligence must live wherever data is generated,” Huang said in the release. 

But orbital data centers are still theoretical — not impossible, but not yet a full reality. While Nvidia’s IGX Thor and Jetson Orin platforms are available today, the Vera Rubin Space-1 component of the company’s space initiative, announced today, will be “available at a later date.” 

A new ‘factory’ for physical AI data 

Physical AI lives in robotics, autonomous vehicles, and other real-world applications, which can mean higher stakes if something goes mechanically or computationally wrong. That problem is best avoided with high-quality training data that prepares physical AI systems for as many situations as possible to ensure they take safer, more predictable, and more effective action. 

To accompany its focus on physical AI, Nvidia also announced its Physical AI Data Factory Blueprint, an “open reference architecture that unifies and automates how training data is generated, augmented and evaluated, reducing the costs, time and complexity of training physical AI systems at scale.”

Also: Why buying into Moltbook and OpenClaw may be Big Tech’s most dangerous bet yet

Set to be available next month on GitHub, Blueprint lets companies use Nvidia’s Cosmos family of world foundation models to process real-world data and generate synthetic data at scale to train physical AI systems. It also supports reinforcement learning and testing processes for autonomous vehicles and other physical AI systems. According to Nvidia, Blueprint ensures datasets are diverse by including synthetic examples of edge cases and other infrequent scenarios that are harder or more expensive to document in the real world. 
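
As a rough mental model of what such a data factory does (this is an illustrative sketch, not Nvidia's actual Blueprint code; every name below is hypothetical), the pipeline mixes real fleet logs with generated edge-case scenes and then checks how well rare scenarios are covered:

```python
import random

# Hypothetical sketch of a physical-AI data pipeline: real logs are augmented with
# synthetic edge cases so rare scenarios are well represented in training data.

EDGE_CASES = ["pedestrian_jaywalking", "sudden_hail", "debris_on_road"]


def augment_with_synthetic(real_scenes: list, synthetic_per_case: int = 100) -> list:
    """Combine collected real-world scenes with generated synthetic variants."""
    synthetic = [
        {"scenario": case, "source": "synthetic", "seed": random.random()}
        for case in EDGE_CASES
        for _ in range(synthetic_per_case)
    ]
    return real_scenes + synthetic


def evaluate_coverage(dataset: list) -> dict:
    """Count how often each rare scenario appears, a crude proxy for dataset diversity."""
    counts = {case: 0 for case in EDGE_CASES}
    for scene in dataset:
        if scene.get("scenario") in counts:
            counts[scene["scenario"]] += 1
    return counts


if __name__ == "__main__":
    real = [{"scenario": "normal_highway", "source": "fleet_log"}] * 1000
    dataset = augment_with_synthetic(real)
    print(evaluate_coverage(dataset))
```

In practice the synthetic scenes would come from a world model like Cosmos rather than a random-number placeholder, but the shape of the workflow (collect, augment, evaluate coverage) is what the Blueprint is meant to standardize.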

While it won’t be available widely until April, Nvidia said Uber is already using Blueprint to develop autonomous vehicles, and Skild AI is using it for general-purpose robotics. 

The big picture

Advancements in physical AI have consumer applications, like Waymo cars and the viral house chore robots you’ve likely come across, but are most immediately relevant to industrial engineering. More capable, autonomous robots will have the biggest impact on our public and industrial landscapes: on roads, in factories, and, evidently, walking across theme parks. 
