The memory pricing forecast is the thing we'd watch most carefully today. A 130% surge in DRAM and SSD prices by end of 2026, if it holds, reshapes the cost stack for anyone running memory-intensive inference at scale. The Memory Tightness Index is already at 69 and drifting higher. Mid-tier AI operators with no long-term supply agreements are the most exposed.
The Meta-AMD GPU deal is worth sitting with for a moment. Six gigawatts of Instinct capacity going to one hyperscaler tightens allocation for everyone else, and our internal theme work has been tracking a strengthening hyperscaler-hoarding narrative for some time. That dynamic now has a concrete anchor.
Xiaomi putting a humanoid on an EV assembly line for autonomous nut installation is a quieter signal than the BMW-Figure deployment, but the pattern is consistent: two separate OEMs, two separate robot vendors, both deployments in high-value manufacturing within the same week. The deployment cadence is accelerating faster than most enterprise adoption timelines assumed.