New Processors Help Modernize the Data Center for AI

Enterprise and cloud data centers are using new compute and networking processors to run and speed up today’s more complex AI workloads.

Meeting the demands of artificial intelligence (AI) workloads is setting the data center AI processor market on fire. In fact, revenue from GPUs and other processors designed for AI computational tasks has bolstered the much broader semiconductor market, which had been slumping since Q1 2022.

To that point, total semiconductor industry revenue grew 8.4% from 2Q23 to 3Q23, reaching $139 billion, according to Omdia's Competitive Landscape Tracker, released late last year. Most importantly, Omdia found that after declining for five consecutive quarters, the industry had now grown for two consecutive quarters.

While other semiconductor market segments also grew, Omdia Principal Analyst Cliff Leimbach noted (in an Omdia blog) that two AI-related companies, NVIDIA and SK Hynix, recorded large revenue increases.

Most people are familiar with NVIDIA, whose GPUs are widely used to accelerate data-intensive AI compute tasks. SK Hynix may not be as well known outside the high-performance computing (HPC) world; it supplies the high bandwidth memory (HBM) used in HPC and AI applications.

Omdia noted that NVIDIA's third-quarter semiconductor revenues rose to $7.3 billion, up from $4.6 billion in 2022. SK Hynix's semiconductor revenues increased 26% to $6.7 billion.
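
For context, the year-over-year jump implied by those Omdia figures is easy to work out. The quick calculation below is purely illustrative and assumes the $4.6 billion figure refers to the year-earlier quarter:

    # Year-over-year growth implied by the Omdia figures above (illustrative only)
    nvidia_prior = 4.6   # billions of dollars (assumed to be the year-earlier quarter)
    nvidia_3q23 = 7.3    # billions of dollars
    growth = (nvidia_3q23 - nvidia_prior) / nvidia_prior
    print(f"NVIDIA year-over-year growth: ~{growth:.0%}")  # roughly 59%

That dwarfs even the healthy 26% increase SK Hynix posted.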

Mainstreaming of AI causes a data center disruption

AI use was already gaining momentum as the world emerged from the pandemic. However, the release of ChatGPT in November 2022 led to explosive adoption of AI in general, and generative AI in particular, by businesses of all sizes in every industry. A quick look at the projections of AMD, another leading AI processor vendor, puts the AI-driven growth into perspective.

In particular, several industry experts and publications have noted that future market expectations were high at the end of 2022. As an article on The Next Platform stated:

“AMD was projecting the total addressable market for data center AI accelerators was on the order of $30 billion in 2023 and would grow at around a 50 percent compound annual growth rate through the end of 2027 to more than $150 billion.”

By the end of 2023, actual revenue was about 50% higher than that projection, and growth rate predictions had skewed significantly higher, too.
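
As a rough sanity check on the compounding in that projection, a few lines of Python (illustrative only, using just the figures quoted above) show how a $30 billion market growing at roughly 50 percent a year lands above $150 billion by 2027:

    # Compound a $30B 2023 TAM at ~50% per year through 2027 (illustrative only)
    tam = 30.0   # AMD's projected 2023 TAM, in billions of dollars
    cagr = 0.50  # roughly 50 percent compound annual growth rate
    for year in range(2024, 2028):
        tam *= 1 + cagr
        print(f"{year}: ~${tam:.0f}B")
    # 2027 comes out to roughly $152B, consistent with "more than $150 billion"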

Industry analysts noted a strong 2023 for the entire market and predicted a very strong market going forward. For example, in December 2023, IDC released an updated forecast stating that enterprises would spend $19.4 billion worldwide on GenAI solutions in 2023. (That number includes all infrastructure hardware, software, and IT services.) The report also expected that figure to more than double in 2024 and to reach $151.1 billion in 2027, a compound annual growth rate (CAGR) of 86.1% between 2023 and 2027.

Data center networking needs acceleration, too

Network Computing has dedicated a lot of coverage over the last year to the need for data centers to change to accommodate AI workloads. (See our roundup on the issue here, and download our guide here.) Naturally, given the site's focus, most of that coverage has centered on ways to improve data center networking for AI workloads.

Specifically, we've looked at various technologies that accelerate AI jobs running in enterprise or cloud data centers. Some of the critical technologies include workload accelerators such as Infrastructure Processing Units (IPUs) and Data Processing Units (DPUs), as well as Compute Express Link (CXL) technology.

Another industry effort, the Ultra Ethernet Consortium, is working to build a complete Ethernet-based communication stack architecture for AI and HPC workloads. The effort is a Joint Development Foundation project hosted by The Linux Foundation. Its founding members include AMD, Arista, Broadcom, Cisco, Eviden (an Atos Business), HPE, Intel, Meta, and Microsoft.

The bottom line is that enterprise and cloud data center compute and networking are undergoing rapid and radical changes to meet businesses' ever-growing performance and time-to-results needs when running AI workloads.

About the Author

Salvatore Salamone, Managing Editor, Network Computing

Salvatore Salamone is the managing editor of Network Computing. He has worked as a writer and editor covering business, technology, and science. He has written three business technology books and served as an editor at IT industry publications including Network World, Byte, Bio-IT World, Data Communications, LAN Times, and InternetWeek.
