ASUS Unveils AI Factory and Next-Gen Servers with NVIDIA HGX B300 Systems at OCP 2025!
- Christopher Chuah
- Oct 15
- 2 min read

ASUS announced its participation at the 2025 OCP Global Summit, taking place from October 13–16 at the San Jose Convention Center (booth #C15). During the event, ASUS introduced the XA NB3I-E12 series AI servers, powered by the NVIDIA® HGX B300 system with ConnectX-8 InfiniBand SuperNICs, delivering exceptional performance and stability for enterprises and cloud providers handling intensive AI workloads.
ASUS also confirmed that shipments have begun for its ASUS AI POD, built on the NVIDIA GB300 NVL72 platform, and the XA NB3I-E12 servers, giving early adopters access to next-generation AI computing capabilities.
Driving AI transformation with ASUS AI Factory
ASUS is showcasing its AI Factory powered by NVIDIA Blackwell architecture, featuring the ASUS AI POD with the NVIDIA GB300 NVL72 platform and XA NB3I-E12 servers with the NVIDIA HGX B300 system — essential components of enterprise AI infrastructure. This comprehensive ecosystem integrates advanced hardware, optimized software, and expert services to simplify AI deployment from edge to large-scale environments, supporting diverse applications such as generative AI and predictive analytics.
At the event, ASUS also highlights the compact Ascent GX10, powered by the NVIDIA GB10 Grace Blackwell Superchip and launching on October 15. Delivering up to 1 petaFLOP of AI performance, it brings inferencing for models of up to 200 billion parameters to the desktop.
Optimizing AI workloads with AMD EPYC 9005 processors
ASUS also unveiled a range of AMD EPYC™ 9005-powered server solutions designed for AI-driven and mission-critical data center workloads. The ASUS ESC8000A-E13X accelerates generative AI and LLM performance with full compatibility with the NVIDIA RTX PRO 6000 Blackwell Server Edition and features an integrated NVIDIA ConnectX-8 SuperNIC supporting 400G InfiniBand/Ethernet per QSFP port for ultra-low-latency, high-bandwidth networking.
Meanwhile, the RS520QA-E13 series delivers high-performance multi-node computing for HPC, EDA, and cloud applications, supporting up to 20 DIMM slots per node with advanced CXL memory expansion, PCIe 5.0, and OCP 3.0 for maximum efficiency in demanding workloads.
Join the ASUS 2025 OCP Global Summit session
Don’t miss the 15-minute ASUS session, “Infrastructure for Every Scale—from Edge to Trillion-Token AI,” at the Expo Hall Stage on October 15 from 16:25–16:40. During this presentation, we will share how ASUS helps customers build future-ready AI data centers. Learn how our servers, rack-scale ASUS AI PODs with NVIDIA GB200/GB300 NVL72, and high-serviceability designs address diverse AI workloads and deployment challenges.