FILE PHOTO: An Intel logo appears in this illustration taken August 25, 2025. REUTERS/Dado Ruvic/Illustration/File Photo

By Max A. Cherney

(Reuters) - Intel announced on Tuesday a new artificial intelligence chip for the data center that it plans to launch next year, in a renewed push to break into the AI chip market.

The new chip, a graphics processing unit (GPU), will be optimized for energy efficiency and support a wide range of uses such as running AI applications, a task known as inference, Intel Chief Technology Officer Sachin Katti said at the Open Compute Summit on Tuesday.

"It emphasizes that focus that I talked about earlier, inference, optimized for AI, optimized, optimized for delivering the best token economics out there, the best performance per dollar out there," Katti said.

The new chip, called Crescent Island, is the struggling U.S. chipmaker's latest attempt to capitalize on the frenzy in AI spending that has generated billions in revenue for AMD and Nvidia.

The company's plans trail those of its competitors and underscore the significant challenge Intel's executives and engineers face in capturing a meaningful portion of the market for AI chips and systems.

Intel CEO Lip-Bu Tan has vowed to restart the company's stalled AI efforts after it effectively mothballed projects such as the Gaudi line of chips and the Falcon Shores processor.

Crescent Island will feature 160 gigabytes of a slower form of memory than the high bandwidth memory (HBM) that is found on AMD and Nvidia's data center AI chips. The chip will be based on a design that Intel has used for its consumer GPUs.

Intel did not disclose which manufacturing process Crescent Island would use. The company did not immediately respond to a request for comment.

MIX AND MATCH

Since the generative AI boom began with the launch of OpenAI's ChatGPT in November 2022, startups and large cloud operators have rushed to grab GPUs that help run AI workloads on data center servers.

The demand explosion has led to a supply crunch and sky-high prices for chips designed or suited for AI applications.

Katti said at the San Jose trade show that the company would release new data center AI chips every year, which would match the annual cadence set by AMD, Nvidia and several of the cloud computing companies that make their own chips.

Nvidia has dominated the market for chips used to build large AI models such as the one behind ChatGPT. Tan has said Intel plans to focus its design efforts on building chips useful for running those AI models - the work that happens behind the scenes to make AI software operate.

"Instead of trying to build for every workload out there, our focus is increasingly going to be on inference," Katti said.

Intel has taken an open and modular approach in which customers can mix and match chips from different vendors, Katti said.

Nvidia said last month it would invest $5 billion in Intel, taking a roughly 4% stake and becoming one of its largest shareholders as part of a partnership to co-develop future PC and data center chips.

The deal is part of Intel's effort to ensure that its central processing units (CPUs) are installed in every AI system sold, Katti said.

(Reporting by Akash Sriram in Bengaluru and Max Cherney in San Francisco; Editing by Shilpi Majumdar and Stephen Coates)