We at sysGen will be there to report on the highlights and what's coming next. Stay tuned and don't miss any important information about the use of artificial intelligence (AI), accelerated computing and much more.
![](https://www.sysgen.de/media/image/a9/66/11/Webaufnahme_21-3-2023_84822_www.nvidia.com.jpeg)
GTC Developer Conference
Keynote March 18
NVIDIA GTC 2024 - HIGHLIGHTS
NVIDIA Blackwell
NVIDIA's latest unveiling, Blackwell, promises to be a groundbreaking development in the world of artificial intelligence. Packing 208 billion transistors into a single chip, Blackwell marks a significant advance over previous generations of GPUs. This "AI superchip" sets a new benchmark for performance and efficiency and will undoubtedly play a key role in the coming industrial revolution. With rising demand for more powerful and scalable models for complex AI applications, the announcement of Blackwell comes at just the right time to push the boundaries of the technology further.
Blackwell also offers secure AI, with a 100% in-system self-testing RAS service and full-performance encryption: data is protected not only in transit, but also at rest and during computation.
NVIDIA
Blackwell is specifically designed for generative AI models with trillions of parameters, so it is no surprise that it outperforms Hopper in inference performance, with up to 30x greater output. These systems will train massive GPT-like models, according to Huang, with thousands of GPUs coming together for enormous computing power at comparatively low power consumption.
When it comes to inference (i.e. generation with LLMs), Blackwell-powered systems can also reduce computational costs and power requirements, according to Huang.
Blackwell Innovations to Fuel Accelerated Computing and Generative AI
Blackwell's six revolutionary technologies, which together enable AI training and real-time LLM inference for models scaling up to 10 trillion parameters, include:
- World's Most Powerful Chip - Packed with 208 billion transistors, Blackwell-architecture GPUs are manufactured using a custom-built 4NP TSMC process, with two reticle-limit GPU dies connected by a 10 TB/s chip-to-chip link into a single, unified GPU.
- Second-Generation Transformer Engine - Fueled by new micro-tensor scaling support and NVIDIA's advanced dynamic range management algorithms integrated into NVIDIA TensorRT™-LLM and NeMo Megatron frameworks, Blackwell will support double the compute and model sizes with new 4-bit floating point AI inference capabilities.
- Fifth-Generation NVLink - To accelerate performance for multitrillion-parameter and mixture-of-experts AI models, the latest iteration of NVIDIA NVLink® delivers groundbreaking 1.8TB/s bidirectional throughput per GPU, ensuring seamless high-speed communication among up to 576 GPUs for the most complex LLMs.
- RAS Engine - Blackwell-powered GPUs include a dedicated engine for reliability, availability and serviceability. Additionally, the Blackwell architecture adds capabilities at the chip level to utilize AI-based preventative maintenance to run diagnostics and forecast reliability issues. This maximizes system uptime and improves resiliency for massive-scale AI deployments to run uninterrupted for weeks or even months at a time and to reduce operating costs.
- Secure AI - Advanced confidential computing capabilities protect AI models and customer data without compromising performance, with support for new native interface encryption protocols, which are critical for privacy-sensitive industries like healthcare and financial services.
- Decompression Engine - A dedicated decompression engine supports the latest formats, accelerating database queries to deliver the highest performance in data analytics and data science. In the coming years, data processing, on which companies spend tens of billions of dollars annually, will be increasingly GPU-accelerated.
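The "micro-tensor scaling" behind the second-generation Transformer Engine can be illustrated with a toy sketch: split a tensor into small blocks and quantize each block to a low-bit grid with its own scale factor, which preserves dynamic range far better than a single scale for the whole tensor. This is plain NumPy for illustration only, not NVIDIA's actual FP4 format or scaling algorithm.

```python
import numpy as np

def quantize_blockwise(x, block=32, max_code=7):
    """Toy block-wise ("micro-tensor") quantization: one scale per block.

    max_code=7 mimics a symmetric low-bit integer grid; the real FP4
    formats and dynamic-range algorithms in the Transformer Engine differ.
    """
    xb = x.reshape(-1, block)
    scales = np.abs(xb).max(axis=1, keepdims=True) / max_code
    scales[scales == 0] = 1.0                    # avoid division by zero
    codes = np.round(xb / scales).astype(np.int8)
    return codes, scales

def dequantize_blockwise(codes, scales):
    return (codes * scales).ravel()

# Per-block scales keep error small even when magnitudes vary widely
# across the tensor (two regimes here: tiny and large values).
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(scale=0.01, size=512),
                    rng.normal(scale=10.0, size=512)])
codes, scales = quantize_blockwise(x)
x_hat = dequantize_blockwise(codes, scales)
```

With one global scale, the tiny values in the first half would quantize to zero; per-block scales keep them resolvable.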
NVIDIA B100
![](https://www.sysgen.de/media/image/3c/25/72/nvidia-blackwell-architecture-image_640x640.jpg)
A Massive Superchip: NVIDIA GB200
The NVIDIA GB200 Grace Blackwell Superchip connects two NVIDIA B200 Tensor Core GPUs to the NVIDIA Grace CPU over a 900GB/s ultra-low-power NVLink chip-to-chip interconnect.
For the highest AI performance, GB200-powered systems can be connected with the NVIDIA Quantum-X800 InfiniBand and Spectrum™-X800 Ethernet platforms, also announced today, which deliver advanced networking at speeds up to 800Gb/s.
The GB200 is a key component of the NVIDIA GB200 NVL72, a multi-node, liquid-cooled, rack-scale system for the most compute-intensive workloads. It combines 36 Grace Blackwell Superchips, which include 72 Blackwell GPUs and 36 Grace CPUs interconnected by fifth-generation NVLink. Additionally, GB200 NVL72 includes NVIDIA BlueField®-3 data processing units to enable cloud network acceleration, composable storage, zero-trust security and GPU compute elasticity in hyperscale AI clouds. The GB200 NVL72 provides up to a 30x performance increase compared to the same number of NVIDIA H100 Tensor Core GPUs for LLM inference workloads, and reduces cost and energy consumption by up to 25x.
The platform acts as a single GPU with 1.4 exaflops of AI performance and 30TB of fast memory, and is a building block for the newest DGX SuperPOD.
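The rack-level composition quoted above follows from quick arithmetic; a minimal sketch using the figures in this article (announced spec values, not measurements):

```python
# Composition of one GB200 NVL72 rack, from the figures above.
superchips = 36
gpus_per_superchip = 2   # two B200 GPUs per Grace Blackwell Superchip
cpus_per_superchip = 1   # one Grace CPU per superchip

total_gpus = superchips * gpus_per_superchip
total_cpus = superchips * cpus_per_superchip
print(total_gpus, total_cpus)   # 72 36
```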
NVIDIA offers the HGX B200, a server board that links eight B200 GPUs through NVLink to support x86-based generative AI platforms. HGX B200 supports networking speeds up to 400Gb/s through the NVIDIA Quantum-2 InfiniBand and Spectrum-X Ethernet networking platforms.
![](https://www.sysgen.de/media/image/c9/5a/69/NVIDIA-GB200-Grace-Blackwell-Superchip_640x640.jpg)
![](https://www.sysgen.de/media/image/2d/f7/18/screenshot-www-youtube-com-2024-03-18-21_44_02_640x640.png)
NVLink Switch Chip
Another impressive introduction is the new NVLink Switch chip, which enables all GPUs to communicate with each other simultaneously.
It is housed in the new DGX GB200 NVL72, which is essentially one giant GPU with some staggering numbers, including 720 petaflops in training and 1.44 exaflops in inference. This performance is designed for extremely demanding tasks and boasts an aggregate bandwidth of 130 TB/s, more than the entire bandwidth of the internet.
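The 130 TB/s figure is consistent with the per-GPU NVLink numbers quoted earlier; a quick sanity check (a sketch using the announced spec values, not measured data):

```python
gpus = 72                   # Blackwell GPUs in one GB200 NVL72
nvlink_tbps_per_gpu = 1.8   # fifth-generation NVLink, bidirectional

aggregate_tbps = round(gpus * nvlink_tbps_per_gpu, 1)
print(aggregate_tbps)       # 129.6, i.e. roughly the quoted 130 TB/s
```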
NVIDIA GB200 NVL72
![](https://www.sysgen.de/media/image/d6/a4/93/NVIDIA-GB200-NVL72_640x640.jpg)
![](https://www.sysgen.de/media/image/5e/96/b4/screenshot-www-youtube-com-2024-03-18-22_37_53_640x640.png)
"Chat GTP" moment
The "chat GPT" moment for robotics could potentially be just around the corner, according to Huang, and Nvidia is striving to be on the cutting edge and ready when that time comes.
"We need a simulation engine that digitally represents the world for a robot," he explains - and that's exactly what the Omniverse offers.
Now we turn to robotics, or as Huang calls it - "physical AI".
Robotics, along with AI and work on the Omniverse/Digital Twin, is seen as a central pillar for Nvidia, all working together to realize the full potential of the company's systems, Huang says.
It's a Wrap
Huang highlights that we are in the midst of a new industrial revolution, powered by Blackwell, NIMs and the Omniverse.
For his grand finale, Huang is accompanied by an impressive array of Project GR00T robots - and then even two famous Star Wars characters!
He explains that the model will learn through both human examples and a digital "library" - a perspective that is quite intriguing.
Regarding robotics, Huang emphasizes that Nvidia has over 1,000 developers in this field.
A new SDK called Isaac Perceptor is specifically aimed at autonomous mobile robots and vehicles, giving these devices advanced perception and intelligence.
Nvidia is committed to the advancement of humanoid robots with the introduction of Project GR00T, a general-purpose foundation model for humanoid robots, along with accompanying APIs.
![](https://www.sysgen.de/media/image/59/75/0b/screenshot-www-youtube-com-2024-03-18-22_57_11_640x640.png)
![](https://www.sysgen.de/media/image/79/fc/82/screenshot-www-youtube-com-2024-03-18-23_02_35_1280x1280.png)