AI Universal Base Board (UBB) Fabrication for Accelerator Interconnect

In AI servers, the UBB mainboard serves as the carrier for the GPU platform, enabling efficient data transmission and exchange. With its high performance, stability, and scalability, it has become the core supporting component of AI computing infrastructure.

Description

AI UBB Board Definition

The AI UBB (Universal Base Board) is a high-end backplane or motherboard designed to provide power, signal, management, and high-speed interconnect channels for AI accelerator cards such as OAM modules, GPUs, and FPGAs. It typically serves as the core interconnection platform in AI servers or AI accelerator chassis, such as NVIDIA HGX, Baidu Kunlun, and Alibaba Hanguang platforms, making it a key hardware foundation for modern AI computing centers.

Main Functions of AI UBB Board

  • Enables high-speed interconnection between multiple AI accelerator cards and the host CPU, as well as between accelerator cards, supporting protocols like PCIe, NVLink, and CXL.
  • Provides unified power distribution, cooling interfaces, management signal distribution, and health monitoring functions.
  • Supports various types of AI accelerator modules, offering high compatibility and flexible scalability.
  • Ensures high-bandwidth, low-latency data paths for efficient data exchange and transmission.
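To put the bandwidth of the interconnect protocols above in concrete terms, the sketch below estimates per-direction PCIe throughput from publicly known link rates (16 GT/s per lane for Gen4, 32 GT/s for Gen5, 128b/130b encoding). The function name and figures are illustrative, not part of any UBB specification:

```python
def pcie_bandwidth_gbs(gt_per_s: float, lanes: int, encoding: float = 128 / 130) -> float:
    """Per-direction bandwidth in GB/s: transfer rate x lanes x encoding efficiency / 8 bits."""
    return gt_per_s * lanes * encoding / 8

# PCIe Gen4: 16 GT/s per lane; PCIe Gen5: 32 GT/s per lane (both 128b/130b encoded)
print(round(pcie_bandwidth_gbs(16, 16), 1))  # Gen4 x16 -> 31.5 GB/s per direction
print(round(pcie_bandwidth_gbs(32, 16), 1))  # Gen5 x16 -> 63.0 GB/s per direction
```

Sustaining links of this rate across multiple accelerator slots is what drives the low-loss materials and tight impedance control described below.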

Main Features of AI UBB PCB Fabrication

  • Ultra-high layer count (≥20 layers), large size (generally above 400 × 500 mm), and board thickness ≥3 mm.
  • Uses ultra-low loss, high-end materials to meet high-frequency signal transmission requirements.
  • Employs advanced processes such as back drilling, resin-filled vias, and POFV (plated-over filled vias), with a minimum drill diameter of 0.2 mm and an aspect ratio ≥15.
  • Fine trace width/spacing down to 0.09/0.09 mm, with impedance control precision reaching ±8% on some designs.
  • Supports high-power delivery and hot-swappable modules, with strong cooling and power management capabilities.
  • Highly customized design with outstanding reliability and stability, suitable for large-scale AI cluster deployment.
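Two of the figures above are related: the via aspect ratio is simply board thickness divided by drill diameter, and the listed 3 mm thickness with a 0.2 mm minimum drill is exactly the ≥15 capability. The sketch below (with hypothetical helper names) checks that relationship along with the ±8% impedance tolerance:

```python
def via_aspect_ratio(board_thickness_mm: float, drill_diameter_mm: float) -> float:
    """Aspect ratio = board thickness / drill diameter."""
    return board_thickness_mm / drill_diameter_mm

def within_impedance_tolerance(measured_ohm: float, target_ohm: float,
                               tolerance: float = 0.08) -> bool:
    """True if a measured impedance falls within +/- tolerance of target (default 8%)."""
    return abs(measured_ohm - target_ohm) <= tolerance * target_ohm

# Figures from the feature list: 3 mm board, 0.2 mm minimum drill
print(via_aspect_ratio(3.0, 0.2))            # 15.0, the stated >= 15 aspect ratio
# Hypothetical 85-ohm differential target with a 90-ohm measurement
print(within_impedance_tolerance(90.0, 85.0))  # True: within +/- 8%
```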

Main Applications of AI UBB Board

  • OAM (Open Accelerator Module) architecture AI servers, serving as the bridge between OAM modules and the motherboard/CPU.
  • Core backplane for AI server platforms such as NVIDIA HGX, providing interconnection for multiple GPUs/AI accelerator cards.
  • High-performance AI servers with liquid or air cooling, enabling modular expansion and management of high-density, high-computing-power AI clusters.
  • Used in supercomputing centers, data centers, and large-scale AI cloud computing platforms for high-end AI applications.

Differences from Traditional Backplanes

  • UBB boards are specifically designed for the AI accelerator ecosystem, supporting higher bandwidth (such as PCIe Gen4/Gen5, NVLink, CXL) and higher power requirements.
  • Greater focus on flexible expansion and diverse compatibility between AI modules, accommodating rapid evolution of AI technology and hardware.
  • Provides higher signal integrity and system reliability, making it an indispensable component for large-scale AI computing platforms.