Nvidia’s Jensen Huang Reveals Rubin Chips at GTC 2025

Nvidia’s founder and CEO, Jensen Huang, opened the company’s AI developer conference with a keynote to thousands of attendees on the rapid transformation underway in artificial intelligence. Speaking at GTC 2025, billed as the “Super Bowl of AI,” Huang highlighted Nvidia’s strides in AI technology and predicted how the industry will shift in the coming years. He pointed to escalating demand for GPUs from the world’s leading cloud service providers, projecting that Nvidia’s data center infrastructure revenue could reach $1 trillion by 2028.

An eagerly awaited segment of Huang’s keynote covered Nvidia’s upcoming GPU architectures: Blackwell Ultra and Vera Rubin, the latter named for the renowned astronomer. Blackwell Ultra is expected to debut in the second half of 2025, followed by the Rubin AI chip in late 2026 and Rubin Ultra in 2027.

Huang’s address, which ran over two hours, emphasized the “remarkable advancements” in AI. He traced AI’s evolution over the past decade, from perception and computer vision to generative and, now, agentic AI, which is capable of reasoning. “AI comprehends context and our inquiries, generating responses and fundamentally reshaping computation,” Huang explained.

The next phase of AI development, according to Huang, is robotics. These robots, powered by “physical AI,” grasp concepts such as friction, inertia, cause and effect, and object permanence. “Every phase or wave unveils new market possibilities,” he stated.

A crucial theme in Huang’s announcements was synthetic data generation, the use of AI-generated data to train models. Because AI can learn from digital experience, the approach reduces the need for human involvement in the training loop. “There’s a limit to our data and human demonstrations,” Huang remarked. “A significant breakthrough in recent years is reinforcement learning.”

Nvidia’s technology, Huang asserted, lets AI learn by working through challenges step by step. To advance the area, he unveiled Isaac GR00T N1, an open-source foundation model for developing humanoid robots, alongside an updated Cosmos AI model that produces simulated training data for robots.

Benjamin Lee, an engineering professor at the University of Pennsylvania, noted that robotics training faces challenges in data collection due to the time and cost involved in real-world training. Simulated environments are standard in reinforcement learning, allowing researchers to test model efficacy. “It’s really exciting,” Lee commented. “An open-source platform will broaden access to reinforcement learning, benefiting both industry and academia.”
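Lee’s point about simulated environments can be illustrated with a minimal sketch. The toy corridor world and tabular Q-learning agent below are illustrative assumptions, not Nvidia’s or Penn’s code; they simply show why a simulator lets an agent accumulate hundreds of trial-and-error episodes at no physical cost or risk.

```python
import random

# Toy simulated environment: a 1-D corridor of 6 cells. The agent starts at
# cell 0 and earns a reward of +1 for reaching cell 5. This stands in for the
# far richer physics simulators the article describes.
N_STATES = 6
ACTIONS = [-1, +1]  # step left, step right

def step(state, action):
    """Advance the simulation one tick and report (next_state, reward, done)."""
    next_state = min(max(state + action, 0), N_STATES - 1)
    done = next_state == N_STATES - 1
    reward = 1.0 if done else 0.0
    return next_state, reward, done

# Tabular Q-learning: because the environment is simulated, running 500
# episodes costs milliseconds rather than hours of real-world robot time.
Q = [[0.0, 0.0] for _ in range(N_STATES)]
alpha, gamma, epsilon = 0.5, 0.9, 0.1  # learning rate, discount, exploration

random.seed(0)
for _ in range(500):
    state, done = 0, False
    while not done:
        # Epsilon-greedy action selection: mostly exploit, sometimes explore.
        if random.random() < epsilon:
            a = random.randrange(2)
        else:
            a = max((0, 1), key=lambda i: Q[state][i])
        nxt, r, done = step(state, ACTIONS[a])
        # Standard Q-learning update toward the bootstrapped target.
        Q[state][a] += alpha * (r + gamma * max(Q[nxt]) - Q[state][a])
        state = nxt

# The learned greedy policy should head right (action index 1) toward the goal.
policy = [max((0, 1), key=lambda i: Q[s][i]) for s in range(N_STATES)]
print(policy)
```

The same loop structure scales up to the simulators mentioned in the article: swap the corridor for a physics engine and the Q-table for a neural network, and the economic argument for training in simulation is unchanged.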

Earlier this year, at CES, Huang introduced the Cosmos AI model series, which can generate economical, photo-realistic video for training robots and automated services.

The open-source model works with Nvidia’s Omniverse, a physics simulation tool, to create realistic video affordably, in contrast to traditional data-collection methods such as recording road footage or having humans demonstrate repetitive tasks. General Motors plans to incorporate Nvidia’s technology into its autonomous car fleet, and the collaboration aims to build custom AI systems using Omniverse and Cosmos to train AI models for manufacturing.

Huang also unveiled Halos, an AI system focused on vehicle safety for autonomous driving, boasting, “We’re likely the first to have comprehensive safety code assessments.”

In the conference’s closing moments, Huang announced Newton, an open-source physics engine for robotic simulation developed in partnership with Google DeepMind and Disney Research. A small robot named Blue emerged from a hatch on stage and followed Huang’s commands, symbolizing the dawn of generalist robotics.