UPDATED 10:51 EDT / JUNE 02 2024


AI, AI and more AI: A deep dive into Nvidia’s announcements at Computex 2024

Nvidia Corp. today unveiled a series of advancements in artificial intelligence technology at Computex 2024, where it’s showcasing its latest innovations in consumer computing, accelerated computing, networking, enterprise computing, and industrial digitalization.

From new AI laptops to the powerful Blackwell platform and the Spectrum-X Ethernet network, Nvidia’s announcements at Computex offer a comprehensive view of the future of AI and computing. Here’s a deep dive into all the news:

Consumer innovation: AI on PCs and gaming

Nvidia has pioneered AI on PCs since 2018, when it introduced the first consumer graphics processing unit created for AI, the GeForce RTX. This innovation included deep learning super sampling, or DLSS, that significantly boosted gaming performance by using AI to generate pixels and frames. Since then, Nvidia has expanded its AI applications across various fields, such as content creation, videoconferencing, live streaming, streaming video consumption and productivity.

Today, Nvidia leads in emerging generative AI use cases, with more than 500 AI-powered apps and games accelerated by RTX GPUs. At Computex, Nvidia announced new GeForce RTX AI laptops from major manufacturers like ASUS and MSI. These laptops will ship with advanced AI hardware and Copilot features, supporting AI-enabled apps and games.

Leveraging the latest Nvidia AI libraries and SDKs, these laptops are optimized for AI inferencing, training, gaming, 3D rendering and video processing. They offer up to seven times faster Stable Diffusion performance and up to 10 times faster large language model inference than Macs, marking a significant leap in performance and efficiency.

Nvidia is also rolling out the RTX AI Toolkit to aid developers in customizing, optimizing and deploying AI capabilities in Windows applications. It includes tools for fine-tuning pretrained models, optimizing them for various hardware and deploying them for local and cloud inference. According to Jesse Clayton, director of product for AI PC at Nvidia, the AI inference manager will allow developers to integrate hybrid AI capabilities into apps, making decisions between local and cloud inference based on system configuration.
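Clayton's description of hybrid AI can be pictured with a short sketch. The function below is a hypothetical heuristic, not the actual RTX AI Toolkit or AI inference manager API: it routes a request to on-device inference when the model fits in free GPU memory and falls back to a cloud endpoint otherwise. All names and thresholds are illustrative assumptions.

```python
# Hypothetical sketch of hybrid local/cloud inference routing.
# Function name and thresholds are illustrative, not Nvidia's API.

def choose_backend(model_vram_gb: float, free_vram_gb: float,
                   network_ok: bool = True) -> str:
    """Pick an inference backend based on system configuration."""
    if free_vram_gb >= model_vram_gb:
        return "local"   # model fits on the RTX GPU: run on-device
    if network_ok:
        return "cloud"   # too large for local VRAM: use a hosted endpoint
    raise RuntimeError("model too large for GPU and no network available")

# An 8 GB model on a laptop with 12 GB of free VRAM stays local:
print(choose_backend(8, 12))    # local
# A model needing 40 GB is routed to the cloud:
print(choose_backend(40, 12))   # cloud
```

The point of the sketch is the decision itself: the application asks one question per request, and the answer can change per machine and per model, which is what makes the capability "hybrid."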

“We consider AI on PCs to be one of the most important developments in the history of technology,” Clayton said. “AI is being incorporated into every major application, and it will impact nearly every PC user.”

In the gaming sector, AI is extensively used to render environments with DLSS and enhance non-player characters, or NPCs. The introduction of Project G-Assist will help developers create AI assistants for games. G-Assist enhances gamer experiences by providing context-aware responses to in-game queries, tracking system performance during gameplay, and optimizing system settings.

Also, Microsoft and Nvidia are working together to enable developers to create gen AI-capable Windows native and web applications. Available in developer preview later this year, the collaboration will provide API access to GPU-accelerated small language models, or SLMs, and retrieval-augmented generation, or RAG, capabilities that run on-device as part of Windows Copilot Runtime. SLMs open the door for Windows developers working on content summarization, content generation, task automation and more.

Accelerated computing: Blackwell platform and AI factories

Gen AI is driving a new industrial revolution, transforming data centers from cost centers into AI factories. These data centers leverage accelerated computing to process large datasets and develop AI models and apps. Nvidia announced the Blackwell platform earlier this year, taking a significant leap forward in AI performance and efficiency. Blackwell delivers 1,000 times more performance than the Pascal platform released eight years ago.
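As a back-of-envelope check on what "1,000 times in eight years" implies, the quick calculation below works out the equivalent doubling cadence. This is arithmetic on the article's own figures, not an Nvidia benchmark.

```python
import math

# 1,000x performance gain over eight years implies a doubling cadence of:
years, gain = 8, 1000
doubling_period = years / math.log2(gain)   # years per doubling
print(round(doubling_period, 2))            # → 0.8
```

In other words, the claimed Pascal-to-Blackwell trajectory amounts to a doubling of AI performance roughly every ten months, well ahead of the classic two-year Moore's-law cadence.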

Nvidia’s AI capabilities extend beyond chip-level advancements. Nvidia continues to innovate across all layers of the data center to improve AI factories. Nvidia’s MGX platform provides a modular reference design for accelerated computing across multiple use cases, from remote visualization to edge supercomputing.

The platform supports multiple generations of hardware, ensuring compatibility with new acceleration architectures like Blackwell. According to Dion Harris, Nvidia’s director of accelerated computing, the MGX platform has seen a significant increase in adoption, with the number of partners growing from six to 25.

Nvidia announced the GB200 NVL2 platform, designed to bring generative AI capabilities to every data center. This platform offers 40 petaflops of AI performance, 144 Arm Neoverse CPU cores and 1.3 terabytes of fast memory in a single node. It significantly improves performance for LLMs and data processing tasks compared with traditional CPU systems. GB200 NVL2 delivers up to 18 times faster data processing and up to nine times better performance for vector database search queries.

“Our platform is truly an AI factory that comprises compute, networking, and software infrastructure,” Harris said. “It significantly improves performance for LLMs and data processing tasks compared with traditional CPU systems.”

Networking: Spectrum-X Ethernet for AI

Nvidia’s Spectrum-X is an Ethernet network explicitly designed for AI factories. Traditional Ethernet networks, optimized for hyperscale cloud data centers, fall short for AI because they assume minimal server-to-server communication and tolerate high jitter. In contrast, AI factories require robust, low-jitter support for distributed computing across GPUs.

Spectrum-X is Nvidia’s end-to-end Ethernet solution that optimizes GPU-to-GPU connectivity by leveraging network interface cards, or NICs, and switches. According to Amit Katz, vice president of networking products at Nvidia, it significantly enhances performance, providing higher bandwidth, increased all-to-all bandwidth, better load balancing and 1,000 times faster telemetry for real-time AI application optimization.

“The network defines the data center,” Katz said. “Spectrum-X is key for Ethernet-based AI factories, and this is just the beginning. An amazing one-year roadmap for Spectrum-X is coming soon.”

Spectrum-X is already in production, with a growing ecosystem of partners deploying it in AI factories and generative AI clouds worldwide. Partners are incorporating the BlueField-3 SuperNIC into their systems and offering Spectrum-X as part of Nvidia’s reference architecture. At Computex, Nvidia will demonstrate Spectrum-X’s ability to support AI infrastructure and operations across industries.

Enterprise computing: NIM inference performance

Nvidia NIM is a set of microservices designed to run gen AI models, specifically for inference tasks. NIM is now generally available to all developers, from existing enterprise developers to newcomers to AI.

Developers can access NIM — delivered as a standard Docker container — from Nvidia’s portal or popular model repositories. Nvidia plans to provide developers free access to NIM for research, development and testing through its Nvidia Developer Program. NIM integrates with different tools and frameworks, making it accessible regardless of the development environment.
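To make the "standard Docker container" delivery concrete, the sketch below assembles a chat request for a NIM microservice running locally. NIM endpoints follow an OpenAI-compatible API; the host port and model name here are illustrative assumptions, not values from the article.

```python
import json

# Sketch of a client request to a locally deployed NIM microservice.
# NIM containers expose an OpenAI-compatible HTTP API; the port (8000)
# and model name below are illustrative assumptions.

def build_chat_request(prompt: str,
                       model: str = "meta/llama3-8b-instruct",
                       host: str = "http://localhost:8000"):
    """Assemble the URL, headers and JSON body for a chat completion call."""
    url = f"{host}/v1/chat/completions"
    headers = {"Content-Type": "application/json"}
    body = {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
        "max_tokens": 128,
    }
    return url, headers, json.dumps(body)

url, headers, body = build_chat_request("Summarize Computex 2024 in one line.")
print(url)  # → http://localhost:8000/v1/chat/completions
```

Because the container carries the model, runtime and optimizations, the developer's side of the integration reduces to standard HTTP calls like this one, which is the "plumbing" Das refers to below.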

“This means gen AI now does not have to be limited to people who understand data science – now it’s democratized,” said Manuvir Das, vice president of enterprise computing at Nvidia. “Any AI developer, any new enterprise developer, can focus on building the application code. NIM takes care of all the plumbing underneath.”

NIM addresses two main challenges: optimizing model performance and simplifying developer deployment. It houses Nvidia’s runtime libraries to maximize model execution speed, offering up to three times the throughput compared to off-the-shelf models. This improvement allows more efficient use of AI infrastructure, generating more output in less time. According to Das, NIM simplifies deployment by reducing the time needed to set up and optimize models from weeks to minutes.

NIM’s availability marks a significant milestone, with more than 150 partners integrating it into their tools. Nvidia is offering free access to NIM through its developer program. Nvidia AI Enterprise licenses are $4,500 per GPU per year for enterprise production deployments.

Industrial digitalization: Omniverse and robotics

Manufacturing is experiencing industrial digitalization due to advancements in gen AI and digital twins. Over the past decade, Nvidia has developed the Omniverse platform, which the company claims has reached a tipping point in industrial digitalization. Omniverse connects with key industrial platforms from companies like Siemens and Rockwell, offering capabilities such as simulation, synthetic data generation for AI training, and high-fidelity visualization.

Nvidia employs a three-computer solution in AI and robotics: an AI supercomputer for creating models in the data center, a runtime computer for real-time sensor processing in robots, and the Omniverse computer for digital twin simulation. This setup allows extensive virtual testing and optimization of AI models and robotic systems.

“Training and testing robots in the physical world is an extremely challenging problem,” said Deepu Talla, vice president of robotics and edge computing at Nvidia. “It’s expensive, not safe, and time-consuming. We can have hundreds of thousands, or even millions, of robots simulating in the digital twin. A lot of this AI capability can be tested in simulation before it’s brought into the real world.”

At Computex, Nvidia demonstrated how leading electronics manufacturers like Foxconn, Delta, Pegatron and Wistron have adopted AI and Omniverse for factory digitalization. These companies use digital twins to optimize factory layouts, test robots and enhance worker safety. Nvidia is also showcasing how Siemens, Intrinsic and Vention, among other companies, use its Isaac platform for building robots.

Lastly, Nvidia is announcing the availability of its Holoscan sensor processing platform on the Nvidia IGX platform, which is used by leading medical instrument customers such as Medtronic and Johnson & Johnson. New enterprise support for IGX with Holoscan is expected to accelerate real-time AI at the industrial edge.

“When you think about all these applications like robotics with Isaac and Holoscan for sensor processing and medical instruments,” Talla said, “just in the last 12 months, the pace and trajectory of how companies can deploy this is mind-boggling.”

Summary

Historically, Nvidia has used events such as Computex for significant announcements. The breadth and depth of innovation across all aspects of the AI ecosystem show how big a lead the company currently has in AI. That breadth stretches Nvidia’s lead over rival Intel Corp. and highlights the importance of building engineered systems.

Nvidia is often described as a GPU manufacturer, but the silicon is a small component of the company’s success. The scope of innovation at Computex involves hardware, software, developer tools, and applications and will help the industry bring AI to life.

Zeus Kerravala is a principal analyst at ZK Research, a division of Kerravala Consulting. He wrote this article for SiliconANGLE.

Image: Nvidia
