From News Desk

Akamai Technologies has launched Akamai Inference Cloud, a platform that claims to redefine where and how AI is used by expanding inference from core datacentres to the edge of the internet.
Akamai Inference Cloud enables intelligent, agentic AI inference at the edge, close to users and devices. Unlike traditional systems, the platform is purpose-built to provide low-latency, real-time edge AI processing on a global scale. It combines Akamai’s expertise in globally distributed architectures with NVIDIA Blackwell AI infrastructure to rethink and extend the accelerated computing needed to unlock AI’s potential.
The next generation of AI applications, from personalised digital experiences and smart agents to real-time decision systems, demands that AI inference be pushed closer to the user, providing instant engagement where users interact and making smart decisions about where to route requests. Agentic workloads increasingly require low-latency inference, local context and the ability to scale globally in an instant. Said to be built to power this transformation, Akamai Inference Cloud is a distributed, generative edge platform that places the NVIDIA AI stack closer to where data is created and decisions need to be made.
“The next wave of AI requires the same proximity to users that allowed the internet to scale to become the pervasive global platform that it is today,” said Dr Tom Leighton, Akamai CEO and Co-Founder. “Akamai solved this challenge before – and we’re doing it again. Powered by NVIDIA AI infrastructure, Akamai Inference Cloud will meet the intensifying demand to scale AI inference capacity and performance by putting AI’s decision-making in thousands of locations around the world, enabling faster, smarter, and more secure responses.”
“Inference has become the most compute-intensive phase of AI — demanding real-time reasoning at planetary scale,” said Jensen Huang, founder and CEO, NVIDIA. “Together, NVIDIA and Akamai are moving inference closer to users everywhere, delivering faster, more scalable generative AI and unlocking the next generation of intelligent applications.”