Akamai is disrupting the cloud market yet again, this time with the introduction of Akamai Cloud Inference, a solution that moves AI inference workloads closer to users. With this move, the company positions itself as a central player in the emerging AI infrastructure space, challenging conventional cloud providers that struggle with latency and distributed processing.
The new platform arrives at a timely juncture, as enterprises increasingly demand faster, more efficient AI inferencing. While massive hyperscale data centers remain the province of large-scale model training, real-time processing of AI-driven insights is better served at the edge, close to the users and data it acts on.
Akamai, with a network of more than 4,300 points of presence spanning over 700 cities, is uniquely positioned to orchestrate this shift. Using its existing infrastructure, the company says it can cut latency by as much as 2.5x and triple throughput, gains that could transform how companies deploy AI applications.
The innovation does not stop there: Akamai is also adding video processing units (VPUs) to its cloud service, becoming the first cloud vendor to offer this purpose-built hardware in a cloud platform. While GPUs handle a wide variety of compute-intensive workloads, VPUs specialize in video transcoding and provide a more power-efficient option for streaming high-quality content. For media companies, that could be transformative, allowing them to scale streaming operations without a proportional rise in infrastructure costs.
By combining AI inferencing with media acceleration in a single cloud platform, Akamai is doing something unique. The approach not only strengthens its capabilities but also challenges the basic design of cloud computing: rather than concentrating compute in centralized data centers, Akamai is betting that distributing it across the edge could unlock the full potential of AI.