**Nvidia Unveils Open Server Platform for AI Integration**
Nvidia will open its server platform so customers can integrate their own CPUs and AI chips, including future racks that pair Nvidia AI chips with Qualcomm CPUs. CEO Jensen Huang announced the change at the Computex trade show in Taiwan, marking a significant shift in the data center landscape.
As demand for data centers and AI models surges, Nvidia has emerged as a leader, achieving remarkable triple-digit annual sales growth and reaching a market valuation of $3 trillion. Traditionally, Nvidia has provided a comprehensive AI solution that encompasses CPUs, networking equipment, and GPUs within its custom-designed server racks, facilitating the training and deployment of advanced AI models.
The newly introduced NVLink Fusion system represents a pivotal move towards an open platform, enabling large cloud providers to incorporate their custom chips alongside Nvidia’s renowned technology. This flexibility is expected to enhance Nvidia’s market presence and revenue potential, as it allows for a broader adoption of its server platform.
Initial AI chip partners for NVLink Fusion include MediaTek, Marvell, and Alchip, while Fujitsu and Qualcomm have signed on as CPU partners. Qualcomm CEO Cristiano Amon emphasized the importance of the collaboration, saying it advances the company’s vision of high-performance, energy-efficient computing in data centers.
The open system also has a notable omission: Broadcom, a key player in the custom AI chip market, was absent from the initial partner list. Nvidia has indicated that more partners may be added in the future.
In summary, Nvidia’s strategic move to open its server platform is poised to reshape the AI infrastructure landscape, fostering innovation and collaboration among tech companies.
**FAQ**
**What is NVLink Fusion?**
NVLink Fusion is Nvidia’s new system that allows customers to integrate their own CPUs and AI chips into Nvidia’s server racks, promoting a more flexible and open AI infrastructure.
