2-minute read
Last week, Jensen Huang kicked off his keynote at CES by setting the stage for tech trends beyond 2025: we're now entering the era of "Agentic AI" and "Physical AI."
We've previously discussed Agentic AI, which essentially refers to AI agents that can make decisions on your behalf, aligned with your interests and intentions. The concept traces back to Huang's earlier demos of AI-driven NPCs and is now a direction pursued by the tech giants. Physical AI, on the other hand, covers self-driving cars and humanoid robots. Currently, Tesla and NVIDIA (or brands using NVIDIA's services) are the main players, with a new contender, OpenAI, joining the fray, leveraging its expertise in RLHF (reinforcement learning from human feedback).
The next big thing to watch: Agentic AI and Physical AI making their way into reality. (Source: NVIDIA)
Last Friday, Caitlin Kalinowski, who recently moved from Meta to OpenAI, posted on her X account about job openings in robotics, signaling OpenAI's intention to enter the humanoid robot race. OpenAI has dabbled in robotics before, showcasing a robotic hand that could solve a Rubik's Cube in 2019. In 2021, however, OpenAI disbanded its robotics team to focus on RLHF, work that led to the impressive launch of ChatGPT in 2022.
Caitlin Kalinowski, who joined OpenAI late last year, was previously responsible for smart glasses at Meta. Before that, she was a product design engineer at Apple, working on the MacBook. It seems she's here to help OpenAI re-enter the "Physical AI" scene with humanoid robots, potentially utilizing NVIDIA's Omniverse platform.
A robotic hand that can solve a Rubik's Cube. (Source: OpenAI)
According to NVIDIA's definition, Physical AI enables autonomous machines like robots and self-driving cars to perceive, understand, and perform complex tasks in the real world. Because it extends generative AI with the ability to perceive and act in the physical world, it's also known as "Generative Physical AI."
This concept is closely related to the "World Model" we've discussed. Today, AI models learn about the real world and its physical laws largely from online videos. Models trained this way, like Sora, claim to be "World Models." But while these AI-generated videos are fun to watch, they routinely violate physics, which makes them unreliable as training data for robots and autonomous driving.
NVIDIA's advantage lies in its digital twin/metaverse collaboration platform, Omniverse. NVIDIA's newly announced Cosmos world model can generate highly realistic simulation scenarios based on precise physics engines, providing a reliable training environment for robots and self-driving cars. From simulation to generation, NVIDIA is perfecting the "Physical AI" training ground.
"Our Robotics team is focused on unlocking general-purpose robotics and pushing towards AGI-level intelligence in dynamic, real-world settings," states OpenAI's recruitment information. Does OpenAI have an Ace up its sleeve as it re-enters the race? Or is this related to Huang's strategy? Also, don't forget about the robotics company Figure, which is connected to both NVIDIA and OpenAI. We're genuinely excited about AI developments in 2025!