When comparing China's fledgling humanoid robot startups with Tesla, many people's first reaction is probably: "Hey, who do you think you are to even make the comparison?" To be honest, these startups are indeed no match for Tesla in technological accumulation, real-world application scenarios, or the ability to invest heavily. What many people may not know, however, is that behind many humanoid robot companies stands an "invisible giant" lending support: NVIDIA. NVIDIA does not manufacture robot casings or provide mass-production contract manufacturing; instead, by offering a complete hardware and software solution spanning computing power, software, simulation, and data, it has significantly lowered the industry's entry barriers.

Currently, NVIDIA has built a highly comprehensive and mature hardware and software stack for humanoid robots: development modules + foundation models + dedicated chips + simulation platforms. It provides end-to-end services from training and simulation to deployment, with many functions ready for immediate use. For those fledgling humanoid robot companies, this not only saves a great deal of development costs but also allows them to get up and running quickly.

Why is NVIDIA so "dedicated" to this? Deepu Talla, NVIDIA's Vice President in charge of robotics, stated that NVIDIA itself does not manufacture robots; instead, it supports enterprises in the robotics ecosystem to build robots on its computing platforms. To put it bluntly, NVIDIA aims to replicate the "shovel-selling" model that has served it so well in the AI large-model field (where it sells GPUs): it does not dig for gold itself; it sells the shovels and sieves, and teaches others how to use them. By collaborating with all players in the robotics industry, NVIDIA seeks to profit from the overall growth of the sector.

AEON, the humanoid robot co-developed by Hexagon and NVIDIA

It is reported that to date, NVIDIA’s robotics ecosystem has attracted over 2 million developers, more than 7,000 customers, and over 1,000 partners engaged in hardware, software, and sensor development. Chinese companies such as UBtech, Unitree Robotics, Fourier Intelligence, Joint Engine Robotics, XPeng Robotics, Robot Era, Galaxy General, Accelerated Evolution, Beijing Humanoid Robot, and DeepRobotics have all joined NVIDIA’s ecosystem.

NVIDIA provides a “three-tier computing platform” (DGX, OVX, AGX) for humanoid robots and broader embodied intelligence, covering the entire workflow from training and simulation to deployment. To put it simply:

DGX is used for training large models and serves as NVIDIA’s supercomputer cluster;

OVX is for simulation/virtual testing, used to run digital twin/simulation applications such as Omniverse + Isaac Sim, generating synthetic data and conducting virtual tests;

AGX is designed to run inference and control tasks on physical robots, i.e., real-machine deployment.

This classification is somewhat abstract, and components like DGX, used for training, were not originally developed exclusively for humanoid robots. So I will take a different angle here and discuss NVIDIA's humanoid robot layout in four aspects: hardware, software, models, and data.

I. Hardware Chapter: The "Most Powerful Heart" of Robots

When it comes to NVIDIA's hardware for robots, we are essentially talking about chips. NVIDIA does not manufacture robot bodies; that is simply not its business. Currently, humanoid robots mainly use two types of NVIDIA chips: Jetson Orin and Jetson Thor.

The Jetson Orin series was launched in 2022. In fact, it was not originally designed exclusively for humanoid robots; it is a general-purpose AI chip primarily used in scenarios such as autonomous driving, AMRs (Autonomous Mobile Robots), and industrial vision. The AI computing power of Orin can reach 275 TOPS (roughly equivalent to that of a mainstream gaming PC in 2021), which was quite impressive at the time. Moreover, it integrates well with NVIDIA’s CUDA and Isaac ecosystems—boasting a mature ecosystem and low development migration costs. As a result, humanoid robot companies like Boston Dynamics, Joint Engine Robotics, and UBtech have adopted it as their “off-the-shelf main brain.”

On August 25, 2025, NVIDIA launched Jetson Thor, this time a chip designed specifically for robots. Jensen Huang stated that Thor is "the ultimate supercomputer driving the era of physical AI and general-purpose robots." Equipped with 128GB of ultra-large memory, it delivers 7.5 times the AI computing power of Orin. Specifically, Thor offers up to 2070 FP4 TFLOPS of AI compute in a package drawing only 130 watts (roughly the AI computing power of "2.5 RTX 4090 graphics cards combined"), essentially achieving "server-level performance."
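As a quick sanity check on those headline numbers (bearing in mind that Orin's 275 TOPS is quoted at INT8 precision while Thor's 2070 TFLOPS is quoted at FP4, so this compares marketing headline figures rather than like-for-like benchmarks):

```python
# Sanity-check the "7.5x Orin" claim from the raw spec numbers.
# Note: Orin's 275 TOPS is quoted at INT8 while Thor's 2070 TFLOPS is
# quoted at FP4, so this compares headline figures, not identical workloads.
ORIN_TOPS_INT8 = 275      # Jetson AGX Orin, peak sparse INT8
THOR_TFLOPS_FP4 = 2070    # Jetson Thor, peak sparse FP4
THOR_WATTS = 130          # quoted power envelope

ratio = THOR_TFLOPS_FP4 / ORIN_TOPS_INT8
print(f"Headline compute ratio: {ratio:.1f}x")                  # ~7.5x
print(f"Compute per watt: {THOR_TFLOPS_FP4 / THOR_WATTS:.1f} TFLOPS/W")
```

The ratio of the two headline figures does come out to about 7.5, which is where the article's multiplier comes from.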

Jetson Thor can be widely applied to robots in various fields

To put it simply, Thor packs "server-grade" computing power into the robot itself, enabling more decisions to be made on-board in real time instead of sending data back to the cloud every time. Tasks like computer vision and planning that previously required a server can now be handled directly on the robot. Thor allows humanoid robots to process real-time data from 20 cameras simultaneously. It also supports the latest transformer architecture (currently the most popular AI model architecture) and multimodal AI models (such as GR00T N1.5), and can run multiple AI models at once to achieve real-time perception, reasoning, and control.

In the United States, Agility Robotics has already equipped the fifth generation of its Digit robot with Jetson, while Boston Dynamics plans to install Thor in its Atlas robot. In China, Galaxy General has taken the lead in adopting the Thor chip. Its founder, Wang He, stated that robots equipped with this chip have seen a significant speed improvement and have been recognized as “the fastest humanoid robots.” Companies including UBtech, Unitree, Joint Engine Robotics, and DeepRobotics are also reportedly in the process of deploying Thor.

II. Software Chapter: A Complete Robot Toolbox

In terms of software, NVIDIA has developed the Isaac platform, which contains a variety of ready-to-use modules required for robot development. It’s similar to building with LEGO—you don’t have to “reinvent the wheel” from scratch; instead, you can directly assemble pre-made building blocks. The core concept of the Isaac platform is straightforward: break down complex robot development into individual functional modules. Each module can be used independently and run on NVIDIA GPUs for faster computation, greatly boosting development efficiency.

Specifically, the key modules include:

Isaac ROS: An enhanced version developed by NVIDIA based on ROS 2. It leverages GPU acceleration to make computationally intensive tasks (such as perception, navigation, and path planning) run faster and more stably.

Isaac Manipulator: A dedicated tool library for robotic arms. It encapsulates many common operational functions (e.g., grabbing, placing, obstacle avoidance). Developers only need to call the interfaces to enable robotic arms to perform complex movements, without having to write low-level algorithms themselves.

Isaac Perceptor: A perception and navigation tool library designed specifically for Autonomous Mobile Robots (AMRs). For example, warehouse transport robots can use it to achieve LiDAR and camera data fusion, obstacle recognition, map construction, and path planning.

Other software modules of the Isaac platform include Isaac Cortex (for high-level decision-making and behavior planning, enabling robots to execute complex tasks) and Isaac Mission Dispatch (for multi-robot scheduling and task assignment). It’s worth noting that Isaac Sim and Isaac Lab (covered later in the data section) and the GR00T model (discussed in the model section) also technically fall under the Isaac platform and can be categorized into the software layer. However, I will distinguish their layers and explain them separately here.
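The "plug-and-play module" idea behind Isaac can be sketched in a few lines. The classes, module names, and state format below are hypothetical stand-ins for illustration, not the actual Isaac ROS or Isaac Manipulator APIs:

```python
# Illustrative sketch of composing independent robot-function modules into
# one pipeline. All names here are hypothetical, not real Isaac APIs.
from typing import Callable, List

class Module:
    """A pipeline stage: takes a state dict, returns an updated one."""
    def __init__(self, name: str, fn: Callable[[dict], dict]):
        self.name, self.fn = name, fn

    def __call__(self, state: dict) -> dict:
        return self.fn(state)

def build_pipeline(modules: List[Module]) -> Callable[[dict], dict]:
    """Compose modules into one perception -> planning -> action flow."""
    def run(state: dict) -> dict:
        for m in modules:
            state = m(state)
        return state
    return run

# Hypothetical stand-ins for Isaac-style perception / planning / grasping
perceive = Module("perceptor",   lambda s: {**s, "obstacles": ["box"]})
plan     = Module("planner",     lambda s: {**s, "path": ["A", "B"]})
grasp    = Module("manipulator", lambda s: {**s, "grasped": True})

robot_step = build_pipeline([perceive, plan, grasp])
print(robot_step({"camera": "frame_0"}))
```

Because each stage only reads and writes a shared state, any module can be swapped out or accelerated independently, which is exactly the appeal of the modular design the article describes.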

NVIDIA offers free robot development courses on its official website.

All the software on the Isaac platform is modular, which is equivalent to turning “common robot functions” into plug-and-play modules. Developers can combine and use them like building with LEGO—saving time while ensuring performance. Therefore, NVIDIA claims in its own promotions that millions of developers use the Isaac platform, enabling them to more quickly turn robot concepts into actual products. This statement is indeed not an exaggeration.

III. Model Chapter: Equipping Robots with a "Universal Brain"

At the top layer of its software stack, NVIDIA has launched Isaac GR00T, a foundation model and development platform designed for humanoid robots. Drawing inspiration from the “fast thinking/slow thinking” division of labor in human decision-making, GR00T features a “dual-system architecture”:

System 1 (Fast): Focused on motion control, it is responsible for converting plans into smooth, continuous motor commands—these are the reflex-like actions that are executed immediately.

System 2 (Slow): In charge of visual and language understanding as well as reasoning, it handles decision-making, interprets complex instructions, and formulates multi-step plans.

Jetson Thor

Jetson Thor and GR00T can accept various inputs such as text and images, enable general capabilities, and perform actions like grabbing objects, moving items, and completing complex tasks

By adopting the “dual-system architecture,” robots have evolved from being capable of only specific tasks to possessing general reasoning abilities, enabling them to handle new tasks in complex environments—and this is precisely the core of general-purpose robots.
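The fast/slow division of labor can be sketched as two loops running at different rates: a slow planner that replans occasionally, and a fast controller that emits a motor command every tick. The function names, plan format, and rates below are illustrative assumptions, not NVIDIA's actual GR00T design:

```python
# Minimal sketch of a "dual-system" control loop: System 2 (slow) produces
# a multi-step plan at a low rate; System 1 (fast) turns the current plan
# step into a motor command every control tick. Names/rates are illustrative.

def system2_plan(instruction: str) -> list:
    """Slow: language/vision reasoning -> multi-step plan (runs rarely)."""
    return [f"reach({instruction})", f"grasp({instruction})", "lift()"]

def system1_control(plan_step: str, tick: int) -> str:
    """Fast: convert the current plan step into a motor command."""
    return f"motor_cmd[{tick}]: execute {plan_step}"

PLAN_EVERY = 5          # System 2 replans every 5 control ticks (assumed)
plan, step_idx = [], 0
for tick in range(10):
    if tick % PLAN_EVERY == 0:          # slow loop: replan occasionally
        plan, step_idx = system2_plan("cup"), 0
    # fast loop: execute the current step, holding the last one if done
    cmd = system1_control(plan[min(step_idx, len(plan) - 1)], tick)
    step_idx += 1
    print(cmd)
```

The key property the sketch shows is that the fast loop never waits on the slow one: motor commands keep flowing even while the next plan is pending.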

Reportedly, the launch of GR00T N1.5 in 2025 brought an unprecedented jump in training efficiency. Traditionally, manual data collection often required a cycle of nearly three months; the new model, by integrating the GR00T-Dreams tool, can automatically generate synthetic data, shortening the development cycle to an astonishing 36 hours and effectively breaking the data bottleneck in humanoid robot development.

It should be clarified here that it is not entirely accurate to regard GR00T as “a single independent small model”—it is more like a set of models combined with a training/fine-tuning pipeline, whose goal is to form a collection of general embodied intelligence capabilities.

IV. Data Chapter: Building a "Virtual Factory" for Robot Training

In robot development, data collection and model training are the most costly and time-consuming processes. To address this major challenge, NVIDIA has made simulation and synthetic data generation key strategic priorities. Isaac Sim and Isaac Lab provide developers with a “physically accurate” virtual environment for large-scale robot training and validation. Through this platform, developers can train multiple robots simultaneously in a digital twin environment and conduct thorough testing before real-world deployment.

Isaac Sim: A high-fidelity physical simulator based on Omniverse. It can simulate lighting, friction, physical collisions, and other real-world physical properties, making synthetic data more consistent with real-world scenarios. It also supports “digital twin” functionality—allowing developers to replicate real factories or warehouses in a virtual world for testing purposes.

By leveraging AI neural reconstruction and rendering technologies, realistic 3D scenes can be generated

It enables large-scale parallel training in simulation—for instance, training hundreds or thousands of virtual robots simultaneously (for reinforcement learning, imitation learning, etc.)—with a speed far faster than real-world data collection.
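The batching pattern behind this kind of parallel training can be conveyed with a toy example. Isaac Lab does this on the GPU with thousands of physics-accurate environments; the pure-Python sketch below only illustrates the idea of stepping many environments per wall-clock step:

```python
# Toy illustration of batched simulation: every wall-clock step advances
# ALL environments at once, so one rollout yields NUM_ENVS trajectories.
# (Isaac Lab does this on the GPU with real physics; this is just the pattern.)
import random

NUM_ENVS = 1000                 # thousands in parallel is the typical scale
positions = [0.0] * NUM_ENVS    # one scalar "state" per toy environment

def step_all(positions, actions):
    """Advance every environment by one physics step in a single batch."""
    return [p + a for p, a in zip(positions, actions)]

random.seed(0)
for _ in range(100):            # 100 simulation steps
    actions = [random.uniform(-1, 1) for _ in range(NUM_ENVS)]
    positions = step_all(positions, actions)

# One rollout of wall-clock time produced NUM_ENVS parallel trajectories
print(f"collected {NUM_ENVS * 100} env-steps of training data")
```

This is why simulation can outpace real-world data collection by orders of magnitude: adding environments multiplies the data per unit of wall-clock time, while a physical robot collects only one trajectory at a time.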

NVIDIA has also launched synthetic data pipelines such as GR00T-Mimic and GR00T-Dreams, which have completely transformed the way robots learn. These pipelines can convert a small amount of human operation demonstrations (e.g., data collected via Apple Vision Pro) into massive volumes of diverse synthetic training data. Reports indicate that this set of pipelines can generate 750,000 synthetic trajectories in just 11 hours—equivalent to 6,500 hours, or 9 months, of continuous human demonstration data! When this synthetic data is combined with real-world data, the performance of the GR00T N1 model can be improved by 40%.
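The article's synthetic-data numbers are easy to cross-check against each other:

```python
# Cross-check the claimed synthetic-data figures against one another.
TRAJECTORIES = 750_000
GEN_HOURS = 11            # time for the pipeline to generate them
HUMAN_HOURS = 6_500       # claimed equivalent human demonstration time

# 6,500 hours of *continuous* (24/7) demonstration, expressed in months:
months = HUMAN_HOURS / 24 / 30
print(f"~{months:.1f} months of round-the-clock demos")        # ~9 months

# Implied speedup of synthetic generation over human collection:
print(f"~{HUMAN_HOURS / GEN_HOURS:.0f}x faster than human demos")

# Implied average length of one demonstrated trajectory:
print(f"~{HUMAN_HOURS * 3600 / TRAJECTORIES:.0f} s per trajectory")
```

The 6,500 hours do indeed work out to roughly nine months of round-the-clock demonstration, so the article's figures are internally consistent.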

In summary: By integrating hardware (computing power) + simulation (data) + software (tools) + models (the “brain”) into a single system, NVIDIA has created a “data flywheel”—the more partners join the ecosystem, the stronger the platform’s capabilities become, and the greater its appeal grows.

V. NVIDIA vs. Tesla: A Showdown of Two Approaches

Currently, there are two main approaches to the development of humanoid robots.

One is Tesla’s vertical integration model. It develops everything in-house—from hardware design (the Optimus robot body) and software algorithms (FSD vision algorithms) to computing infrastructure (Dojo supercomputer)—achieving a high level of integration. This makes Tesla analogous to Apple in the smartphone industry, but in the field of humanoid robots.

The other is the platform model, represented by NVIDIA and Google DeepMind. Google provides humanoid robots with powerful reasoning and action capabilities through its Gemini Robotics model, and also collaborates with companies like Agility Robotics and Boston Dynamics to empower robots via this model. Unlike Google, NVIDIA has built an end-to-end ecosystem spanning from underlying chips to top-tier applications, rather than just offering a single model.

By leveraging Jetson Thor for robust computing power, the Isaac platform and GR00T model for universal development tools and an AI “brain,” and Isaac Sim plus synthetic data pipelines to solve training data challenges, NVIDIA has become both an “infrastructure provider” and a “shovel seller” in the era of physical AI.