Lawrence Zenga
May 23, 2025 02:10
NVIDIA pairs Blackwell GPUs with Llama 4 Maverick to achieve a world-record inference speed of 1,000 TPS/user, setting a new standard for AI model performance.
NVIDIA has set a new benchmark for AI performance, pairing the Llama 4 Maverick model with Blackwell GPUs to break the 1,000 tokens per second (TPS) per user barrier. The achievement was independently verified by the AI benchmarking service Artificial Analysis and marks a significant milestone in large language model (LLM) inference speed.
Technological Breakthrough
The breakthrough was achieved on a single NVIDIA DGX B200 node equipped with eight NVIDIA Blackwell GPUs, which sustained more than 1,000 TPS per user on Llama 4 Maverick, a 400-billion-parameter model. This performance makes Blackwell the optimal hardware for deploying Llama 4, whether the goal is to maximize throughput or minimize latency.
Optimization
NVIDIA extracted the full performance of the Blackwell GPUs through extensive software optimization with TensorRT-LLM. The company also trained a speculative-decoding draft model using the EAGLE-3 technique, yielding a fourfold speedup over the previous baseline. These improvements boost performance while preserving response accuracy, using FP8 data types for GEMM, Mixture of Experts (MoE), and Attention operations to deliver accuracy comparable to the BF16 baseline.
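The FP8 idea above can be illustrated with a small sketch of per-tensor scaled quantization. Only the per-tensor scale factor and the E4M3 finite range are real details; the mantissa-rounding approximation, function names, and sample values are illustrative assumptions, not TensorRT-LLM's implementation.

```python
import numpy as np

# Hedged sketch: FP8 (E4M3)-style quantization with per-tensor scaling.
# Approximates rounding to a 3-bit mantissa within the E4M3 finite range
# (subnormals ignored for brevity). Illustrative only.

E4M3_MAX = 448.0  # largest finite value representable in FP8 E4M3

def quantize_fp8_like(x: np.ndarray):
    # Scale the tensor so its largest magnitude maps to the E4M3 maximum.
    scale = np.abs(x).max() / E4M3_MAX
    y = x / scale
    # Round each value to 3 mantissa bits at its own binary exponent.
    exp = np.floor(np.log2(np.maximum(np.abs(y), 1e-30)))
    quantum = 2.0 ** (exp - 3)
    q = np.clip(np.round(y / quantum) * quantum, -E4M3_MAX, E4M3_MAX)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    return q * scale

# Hypothetical weight values for demonstration.
w = np.array([0.1, -1.5, 3.25, 448.0])
q, s = quantize_fp8_like(w)
w_hat = dequantize(q, s)
```

Values like -1.5 and 3.25 survive exactly (they fit in 3 mantissa bits), while 0.1 is rounded to the nearest representable value; the per-tensor scale keeps the relative error small, which is why such schemes can approach BF16 accuracy.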
The Importance of Low Latency
Generative AI applications must balance throughput against latency. For critical applications that require rapid decision-making, NVIDIA's Blackwell GPUs excel at minimizing latency, as the TPS/user record demonstrates. The hardware's ability to deliver both high throughput and low latency makes it ideal for a wide range of AI tasks.
CUDA Kernels and Speculative Decoding
NVIDIA optimized CUDA kernels for GEMM, MoE, and Attention operations, using spatial partitioning and efficient memory-load patterns to maximize performance. Speculative decoding was used to accelerate LLM inference: a smaller, faster draft model proposes tokens that are then verified by the larger target LLM. This approach yields significant speedups, especially when the draft model's predictions are accurate.
Programmatic Dependent Launch
To further improve performance, NVIDIA used Programmatic Dependent Launch (PDL) to reduce GPU idle time between consecutive CUDA kernels. This technique allows a dependent kernel to begin launching before its predecessor completes, improving GPU utilization and eliminating performance gaps.
NVIDIA's achievement underscores its leadership in AI infrastructure and data-center technology, setting a new standard for the speed and efficiency of AI model deployment. Innovations in the Blackwell architecture and its software stack continue to push the boundaries of AI performance, enabling real-time user experiences and powerful AI applications.
For more information, visit the NVIDIA official blog.
Image Source: Shutterstock