Intel launches Xeon 6 and Gaudi 3 AI chips to boost AI and HPC performance


Intel is launching new Xeon 6 processors with performance cores as well as Gaudi 3 AI accelerators to stay competitive in the AI wars.

The new Xeon 6 processors have performance cores (P-cores) that can double AI vision performance and the Gaudi 3 AI accelerators have 20% more throughput.

As AI continues to revolutionize industries, enterprises are increasingly in need of infrastructure that is both cost-effective and available for rapid development and deployment. To meet this demand head-on, Intel today launched Xeon 6 with Performance-cores (P-cores) and Gaudi 3 AI accelerators, bolstering the company’s commitment to deliver powerful AI systems with optimal performance per watt and lower total cost of ownership (TCO).

“Demand for AI is leading to a massive transformation in the data center, and the industry is asking for choice in hardware, software, and developer tools,” said Justin Hotard, Intel executive vice president and general manager of the data center and AI group at Intel, in a statement. “With our launch of Xeon 6 with P-cores and Gaudi 3 AI accelerators, Intel is enabling an open ecosystem that allows our customers to implement all of their workloads with greater performance, efficiency, and security.”


Introducing Intel Xeon 6 with P-cores and Gaudi 3 AI accelerators

Intel Gaudi 3

Intel’s latest advancements in AI infrastructure include two major updates to its data center portfolio. The first is Intel Xeon 6 with P-cores. Designed to handle compute-intensive workloads with exceptional efficiency, Xeon 6 delivers twice the performance of its predecessor.

It features a higher core count, double the memory bandwidth, and AI acceleration capabilities embedded in every core. The processor is engineered to meet the performance demands of AI from the edge to data center and cloud environments.
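Intel does not spell out how that per-core acceleration is exercised, but in practice it typically comes down to low-precision math such as bfloat16. As a rough, generic sketch (not Intel's own sample code), a PyTorch model can be run on a Xeon host with CPU autocast, which lets the framework dispatch matrix operations to whatever acceleration the processor exposes:

import torch
import torch.nn as nn

# Toy model standing in for a real vision or language workload.
model = nn.Sequential(nn.Linear(1024, 4096), nn.ReLU(), nn.Linear(4096, 1000)).eval()
batch = torch.randn(32, 1024)

# bfloat16 autocast on CPU: PyTorch routes the matmuls to the fastest
# kernels the host CPU supports (e.g. built-in matrix acceleration on recent Xeons).
with torch.inference_mode(), torch.autocast(device_type="cpu", dtype=torch.bfloat16):
    logits = model(batch)
print(logits.shape)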

The Intel Gaudi 3 AI accelerator is optimized specifically for large-scale generative AI. It boasts 64 tensor processor cores (TPCs) and eight matrix multiplication engines (MMEs) to accelerate deep neural network computations.

It includes 128 gigabytes (GB) of HBM2e memory for training and inference, and 24 ports of 200-gigabit (Gb) Ethernet for scalable networking. Gaudi 3 also offers seamless compatibility with the PyTorch framework and advanced Hugging Face transformer and diffuser models.

Intel recently announced a collaboration with IBM to deploy Intel Gaudi 3 AI accelerators as a service on IBM Cloud. Through this collaboration, Intel and IBM aim to lower the total cost of ownership of leveraging and scaling AI, while enhancing performance.
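Intel has not published example code alongside the announcement, but the PyTorch compatibility mentioned above is exposed through the Gaudi software stack, which presents the accelerator to PyTorch as an "hpu" device. A minimal, illustrative sketch of what running a small Hugging Face model on Gaudi might look like (the import and device name follow the public Gaudi PyTorch bridge; treat the specifics as assumptions rather than Intel's reference code):

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
import habana_frameworks.torch.core as htcore  # Gaudi PyTorch bridge

tokenizer = AutoTokenizer.from_pretrained("gpt2")  # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2").to("hpu").eval()

inputs = tokenizer("Intel Gaudi 3 is built for", return_tensors="pt").to("hpu")
with torch.no_grad():
    output = model.generate(**inputs, max_new_tokens=20)
    htcore.mark_step()  # flush the lazily accumulated graph to the accelerator
print(tokenizer.decode(output[0]))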

Enhancing AI systems with TCO benefits

Intel’s Xeon 6 and Gaudi 3 are getting enhancements.

Deploying AI at scale involves considerations such as flexible deployment options, competitive price-performance ratios and accessible AI technologies. Intel’s robust x86 infrastructure and extensive open ecosystem position it to support enterprises in building high-value AI systems with an optimal TCO and performance per watt. Notably, 73% of GPU-accelerated servers use Intel Xeon as the host CPU.

Intel has partnered with leading original equipment manufacturers (OEMs) including Dell Technologies, Hewlett Packard Enterprise, and Supermicro to develop co-engineered systems tailored to specific customer needs for effective AI deployments. Dell Technologies is currently co-engineering RAG-based solutions leveraging Gaudi 3 and Xeon 6.

Transitioning generative AI (Gen AI) solutions from prototypes to production-ready systems presents challenges in real-time monitoring, error handling, logging, security and scalability. Intel addresses these challenges through co-engineering efforts with OEMs and partners to deliver production-ready retrieval-augmented generation (RAG) solutions.

These solutions, built on the Open Platform for Enterprise AI (OPEA), integrate OPEA-based microservices into a scalable RAG system optimized for Xeon and Gaudi AI systems, and are designed to let customers easily deploy applications with Kubernetes and Red Hat OpenShift.
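OPEA packages these pieces as containerized microservices, so the following is only a schematic, framework-agnostic sketch of the retrieve-then-generate loop such a RAG system implements; the model name and helper functions are illustrative, not OPEA's actual APIs:

from sentence_transformers import SentenceTransformer, util

# Toy in-memory "vector store"; a production RAG deployment would use a real
# vector database and an LLM serving endpoint instead of these stand-ins.
documents = [
    "Xeon 6 with P-cores doubles the performance of the previous generation.",
    "Gaudi 3 has 128 GB of HBM2e memory and 24 ports of 200 Gb Ethernet.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vectors = embedder.encode(documents, convert_to_tensor=True)

def retrieve(question: str, k: int = 1) -> list[str]:
    # Embed the query and return the k most similar documents.
    q_vec = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_vec, doc_vectors, top_k=k)[0]
    return [documents[hit["corpus_id"]] for hit in hits]

def build_prompt(question: str) -> str:
    # Retrieved context is prepended so the generator can ground its answer.
    context = "\n".join(retrieve(question))
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("How much memory does Gaudi 3 have?"))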

Expanding access to enterprise AI applications

Intel Xeon 6 is getting enhanced with performance cores.

Intel’s Tiber portfolio offers business solutions to tackle challenges such as access, cost, complexity, security, efficiency and scalability across AI, cloud and edge environments. The Intel Tiber Developer Cloud now provides preview systems of Intel Xeon 6 for technical evaluation and testing.

Additionally, select customers will gain early access to Intel Gaudi 3 for validating AI model deployments, with Gaudi 3 clusters to begin rolling out next quarter for large-scale production deployments.

New service offerings include SeekrFlow, an end-to-end AI platform from Seekr for developing trusted AI applications. The latest updates feature Intel Gaudi software’s newest release and Jupyter notebooks loaded with PyTorch 2.4 and Intel oneAPI and AI tools 2024.2, which include new AI acceleration capabilities and support for Xeon 6 processors.
