Cerebras had effectively one anchor customer before this (G42), and the deal is specifically for low-latency ChatGPT inference. It sits alongside Samsung HBM4 for Project Titan, AMD accelerators, and Nvidia training clusters. This is multi-silicon diversification, not consolidation.
What the deal does is validate Cerebras as a credible alternative to Nvidia GPUs for inference. It gives rivals another door to knock on.
Note also that Broadcom is now the implementation layer for both OpenAI's Titan ASIC and the 3.5GW Anthropic-Google TPU expansion announced April 7. Two of the three US frontier labs are routing custom silicon through the same vendor on the same TSMC N3 process. Watch item: TSMC N3 capacity disclosures and any Broadcom ASIC backlog commentary in the next earnings call. That's where the real scarcity signal shows up.