www.industry-asia-pacific.com

AMD Powers Korea AI Infrastructure with NAVER Cloud Collaboration

Strategic partnership integrates EPYC processors and Instinct GPUs to support scalable, sovereign AI infrastructure and optimized cloud-based AI services in Korea.

Cloud computing, hyperscale data centers, and national AI infrastructure initiatives are driving demand for scalable, high-performance compute platforms. In response, AMD is expanding its collaboration with NAVER Cloud to deploy advanced CPU and GPU technologies for AI training and inference workloads in Korea.

The partnership focuses on strengthening sovereign AI infrastructure, where data control, system performance, and scalability are managed within national or regional ecosystems. By integrating AMD’s compute platforms into NAVER Cloud’s infrastructure, the collaboration addresses increasing requirements for localized AI processing in sectors such as public services, enterprise AI, and digital platforms.

Deployment of next-generation CPUs and GPUs for AI workloads
As part of the agreement, NAVER Cloud is expanding its deployment of AMD EPYC processors, including the upcoming 6th-generation EPYC “Venice” architecture. These processors are designed to handle high-density, compute-intensive workloads typical of AI model training, large-scale data analytics, and cloud-native applications.

In parallel, AMD is providing early access to its next-generation Instinct MI455X GPUs, enabling NAVER Cloud to enhance both development and production environments for AI services. These accelerators are expected to support high-throughput parallel processing required for deep learning and inference tasks, particularly in large language models and real-time AI applications.

Software optimization across AI and cloud platforms
Beyond hardware deployment, the collaboration includes joint optimization of NAVER Cloud’s AI services and software stack on AMD platforms, including the ROCm software ecosystem. This integration is intended to improve workload efficiency, reduce latency, and enable better utilization of compute resources across cloud environments.

Such software-hardware co-optimization is increasingly critical in AI infrastructure, where performance gains often come from tighter integration between compute architectures and AI frameworks rather than from hardware improvements alone.

Supporting sovereign AI infrastructure development
The concept of sovereign AI infrastructure underpins the collaboration, emphasizing the ability of organizations and governments to build and operate AI systems with control over data governance, security, and system architecture. This is particularly relevant in regions prioritizing data residency and independent AI capabilities.

By combining NAVER Cloud’s regional cloud infrastructure with AMD’s open compute platforms, the partnership aims to provide scalable AI environments that can be adapted to national requirements while maintaining compatibility with global AI development ecosystems.

Positioning within AI data center ecosystems
The collaboration reflects a broader industry trend toward open, heterogeneous computing environments, where CPUs, GPUs, and software frameworks are designed to work together across large-scale, distributed systems. Compared to closed or proprietary ecosystems, this approach can offer greater flexibility for cloud providers and enterprise users when deploying AI workloads.

Through the integration of EPYC processors, Instinct GPUs, and optimized software stacks, the partnership contributes to the development of AI-ready data center infrastructure capable of supporting complex workloads across cloud services, enterprise platforms, and public-sector applications.

Edited by Industrial Journalist, Natania Lyngdoh — Adapted by AI.

www.amd.com
