Akamai plans to deploy thousands of NVIDIA Blackwell GPUs across its distributed cloud network, embedding AI inference capacity into more than 4,400 edge locations. The strategy shifts compute closer to users, targeting latency-sensitive applications and data sovereignty requirements. As enterprises prioritize production-scale inference over model training, Akamai is positioning edge-based inference as a structural alternative to centralized hyperscale infrastructure.
