Jog
April 11, 2025 10:56
AMD unveils Pensando AI NICs to meet the needs of AI workloads, promising scalable AI infrastructure with high performance and flexibility.
AMD has announced the launch of the Pensando Pollara 400 AI NIC, designed to meet the growing demands of AI and machine learning workloads, in a significant move to strengthen its AI infrastructure offering. According to AMD, the new AI network interface card (NIC) promises a scalable solution that meets the performance demands of AI clusters while maintaining flexibility.
Solving the AI infrastructure problem
As demand for AI and large language models grows, so does the pressure on parallel computing infrastructure to handle high performance requirements effectively. A key challenge has been network bottlenecks that leave GPUs underutilized. AMD's new AI NICs aim to overcome this by optimizing GPU-to-GPU communication between nodes in the data center, improving data transfer rates and overall network efficiency.
Features of the Pensando AI NICs
The Pensando Pollara 400 is described as the industry's first fully programmable AI NIC. It is built to the emerging Ultra Ethernet Consortium (UEC) standards and gives customers access to AMD's P4 architecture to program the hardware pipeline. This allows new features and custom transport protocols to be added to accelerate AI workloads without waiting for a new hardware generation.
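To make the idea of a programmable pipeline concrete, here is a minimal sketch of the match-action concept that P4-style programmability is built on. The class, rule contents, and field names are invented for illustration and do not reflect AMD's actual P4 implementation.

```python
# Toy illustration of a programmable match-action pipeline, the concept
# behind P4-style NIC programmability. All names here are hypothetical.

class MatchActionTable:
    def __init__(self):
        self.rules = []  # ordered (match_fn, action_fn) pairs

    def add_rule(self, match_fn, action_fn):
        self.rules.append((match_fn, action_fn))

    def process(self, packet):
        # Apply the first rule whose match predicate fires.
        for match, action in self.rules:
            if match(packet):
                return action(packet)
        return packet  # default: pass through unchanged

# A new transport behavior is added as a table rule, not new silicon.
table = MatchActionTable()
table.add_rule(lambda p: p["proto"] == "UEC",
               lambda p: {**p, "queue": "ai_fast"})
result = table.process({"proto": "UEC", "queue": "default"})
print(result["queue"])  # → ai_fast
```

The point of the sketch is the separation of concerns: the pipeline machinery stays fixed while the rules (the "program") can be swapped out, which is what lets a programmable NIC adopt new protocols in software.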
Key features include:
- Transport protocol options: supports RoCEv2, UEC RDMA, or any Ethernet protocol.
- Intelligent packet spraying: advanced packet management improves network bandwidth utilization.
- Out-of-order packet handling: efficiently manages packets that arrive out of order, reducing buffering time.
- Selective retransmission: resends only lost or corrupted packets, improving network performance.
- Path-aware congestion control: optimizes load balancing to maintain performance during congestion.
- Rapid fault detection: minimizes GPU idle time with fast failover mechanisms.
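Two of the features above, out-of-order handling and selective retransmission, can be illustrated with a small sketch. This is a simplified model under assumed behavior, not AMD's actual transport logic; the sequence numbers and function names are invented.

```python
# Hypothetical sketch: buffer packets in any arrival order and request
# retransmission only of the missing ones (no head-of-line blocking).

def receive(packets, expected_count):
    """Accept packets in any order; report only the sequence gaps."""
    received = {}
    for seq, payload in packets:
        received[seq] = payload  # buffer immediately, no reordering stall
    missing = [seq for seq in range(expected_count) if seq not in received]
    return received, missing

# Packets 0..4 were sent; packet 2 was lost, and 3 and 4 arrived early.
arrived = [(0, "a"), (3, "d"), (4, "e"), (1, "b")]
buffered, to_retransmit = receive(arrived, 5)
print(to_retransmit)  # → [2]  (only the lost packet is re-sent)
```

Contrast this with a go-back-N style transport, which would discard packets 3 and 4 on the gap and re-send everything from packet 2 onward, wasting bandwidth that the selective approach preserves.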
Open ecosystem and scalability
AMD emphasizes the advantages of an open ecosystem, which lets organizations scale easily and program for future demands. This approach offers a cost-effective path for cloud service providers and enterprises, reducing capital expenditure as well as reliance on expensive switching fabrics.
The Pensando Pollara 400 AI NIC has been validated in some of the world's largest scale-out data centers. Its programmability, high bandwidth, low latency, and extensive feature set have made it a preferred choice for cloud service providers looking to enhance their AI infrastructure.
Image source: Shutterstock