
Intel vs. Nvidia: who wins this fight?

Intel has announced the availability of its latest line of x86 processors, which — along with its previously announced Gaudi line — the company hopes can help it grab AI market share from industry leader Nvidia.

Intel Xeon 6 E-core chips.

Intel is shoehorning its upcoming next-generation x86 processors into artificial intelligence (AI) tasks, even though the chips won't run the AI workloads themselves; in AI systems, they serve as the head node alongside dedicated accelerators.

This week at Computex, Intel announced its Xeon 6 processor line, touting what it calls Efficient-cores (E-cores), which it said will deliver up to 4.2 times the performance of the previous 5th Gen Xeon processors.

The first Xeon 6 CPU is Sierra Forest (the 6700 series), an efficiency-oriented line built on E-cores. Granite Rapids, with Performance-cores (P-cores, the 6900 series), will arrive next quarter.

The new Xeon processors make 3:1 data center rack consolidation possible at equivalent performance, with up to 2.6 times the performance per watt of the previous generation, Intel claimed.
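As a back-of-the-envelope illustration of what those two claims imply together, the short Python sketch below applies them to a hypothetical fleet; the rack count and per-rack power draw are invented for the example and are not Intel figures.

# Back-of-the-envelope math for Intel's consolidation claims.
# The fleet size and per-rack power below are hypothetical, not Intel figures.

old_racks = 30                # racks of previous-gen Xeon servers (assumed)
consolidation_ratio = 3       # Intel's claimed 3:1 rack consolidation
perf_per_watt_gain = 2.6      # Intel's claimed performance-per-watt gain

new_racks = old_racks / consolidation_ratio
print(f"Racks needed for equivalent performance: {new_racks:.0f}")  # 10

# If each old rack drew, say, 15 kW (assumed), delivering the same work
# at 2.6x the performance per watt implies roughly this much total power:
old_power_kw = old_racks * 15
new_power_kw = old_power_kw / perf_per_watt_gain
print(f"Approximate power for the same work: {new_power_kw:.0f} kW "
      f"(down from {old_power_kw} kW)")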

"The data center AI market is hyper-focused on the impact of AI power consumption with increasing concerns around the environmental impact and impact on the power grid," said Reece Hayden, a principal analyst for ABI Research. "Gaudi-powered AI systems will utilize the Intel Xeon Scalable processors for the CPU head node.

"Increased performance per watt and density will lower the overall power consumption of AI systems, which is a positive for the energy footprint of AI overall." Improved rack density also allows for data center consolidation by freeing up room for the deployment of AI-focused hardware to support training or inferencing, Hayden said.

The company also took the wraps off its Lunar Lake line of client processors, aimed at the AI PC market. Intel said the x86 chips draw up to 40% less SoC power than the previous generation.

Available in the third quarter of this year, the Lunar Lake Core Ultra chips also have NPUs on board; Intel says they deliver more than 100 platform TOPS (trillions of operations per second) overall and more than 45 TOPS from the NPU alone.

Their target will be a new generation of PCs enabled for genAI tasks. Last week, Intel laid out its chip strategy, planning processor lines that run AI workloads from the data center to the edge. IDC expects all enterprise PC purchases to be AI computers within two years.

"Intel is one of the only companies in the world innovating across the full spectrum of the AI market opportunity — from semiconductor manufacturing to PC, network, edge, and data center systems," said Intel CEO Pat Gelsinger in a statement from Computex Conference in Taiwan this week.

Intel also revealed pricing for its Gaudi 2 and Gaudi 3 AI accelerator kits: deep learning accelerators designed to support the training and inference of large language models (LLMs).

The Gaudi 3 accelerator kit comes with eight AI chips and sells for about $125,000; the earlier-generation Gaudi 2 kit will list for $65,000. Accelerator chips serve two main functions in genAI: training and inference.
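Simple arithmetic on those list prices gives a rough per-chip figure, assuming the Gaudi 2 kit likewise bundles eight accelerators; the Python below also ignores the value of the baseboard and other components included in a kit.

# Rough per-accelerator cost implied by Intel's kit pricing.
# Assumes eight chips per kit for both generations (stated for Gaudi 3,
# assumed here for Gaudi 2) and ignores non-chip kit components.
gaudi3_kit_price = 125_000   # announced list price, USD
gaudi2_kit_price = 65_000    # announced list price, USD
chips_per_kit = 8

print(f"Gaudi 3: ~${gaudi3_kit_price / chips_per_kit:,.0f} per chip")  # ~$15,625
print(f"Gaudi 2: ~${gaudi2_kit_price / chips_per_kit:,.0f} per chip")  # ~$8,125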

Chips that handle AI training churn through massive amounts of data to train neural network algorithms, which are then supposed to make correct predictions.

For example, the prediction could be the next word or phrase in a sentence if the input is text, or the next image in a sequence; at inference time, chips have to work out the answer to a prompt (query) quite quickly.
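To make the training/inference split concrete, here is a minimal, self-contained Python sketch of next-word prediction using a toy bigram model; it illustrates the concept only and says nothing about how Gaudi or any other AI hardware works.

from collections import Counter, defaultdict

# --- Training: count which word follows which in a tiny corpus. ---
corpus = "the cat sat on the mat the cat ate the food".split()
follows = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    follows[current][nxt] += 1

# --- Inference: given a prompt word, predict the likeliest next word. ---
def predict_next(word: str) -> str:
    candidates = follows.get(word)
    if not candidates:
        return "<unknown>"
    return candidates.most_common(1)[0][0]

print(predict_next("the"))   # 'cat' (seen twice after 'the')
print(predict_next("cat"))   # 'sat' ('sat' and 'ate' tie; first seen wins)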

However, LLMs need to be trained before they can make valuable inferences about a query. Some of the most popular LLMs generate answers using massive data sets ingested from the internet.

Still, they can occasionally be wrong, or utterly bizarre, in their answers, a genAI failure known as hallucination.

Shane Rau, IDC research vice president for computing semiconductors, said Intel's introduction of Xeon 6 with P-cores and E-cores acknowledges that end-user workloads continue to diversify; depending on the workload, an end user may need mainly performance (P-cores) or a balance of performance and power consumption (E-cores).

"For example, workloads run primarily in a core data center, where there are fewer power constraints and more need for raw performance, can use more P-cores," said Rau. "In contrast, workloads run primarily in edge systems, like edge servers, need to work within more constrained environments where power consumption and heat output must be limited," and therefore benefit from E-cores. "

"If you think of AI as sort of doing what humans do, and humans do a lot of different tasks that require different combinations of capabilities, then it makes sense that the AI will need different capabilities depending on the task," Rau added. Moreover, not every type of operation requires maximum performance and, hence, maximum acceleration (from server GPUs, for example).

Many types of operations can run on microprocessors alone or on other kinds of specialized accelerators. In this way, AI, like any new market, is segmenting as it matures.
