Nvidia’s latest SEC filing reveals a sweeping move that could reshape how the company builds and delivers its AI services. The company committed to spending 26 billion dollars on rented servers from cloud providers over the next six years, and the scale of that commitment hints at more than routine resource planning. It reflects a long-term strategy that ties Nvidia even closer to the same companies that rely heavily on its GPUs.
According to the filing, Nvidia plans to pay one billion dollars in the final quarter of fiscal 2026, six billion in each of fiscal 2027 and 2028, five billion in 2029, and four billion in each of the two following fiscal years. Because the counterparties are cloud giants that themselves run heavily on Nvidia GPUs, the spending schedule indirectly reveals how the AI infrastructure market is becoming a web of deep interdependencies.
Reports already link several of these agreements to providers such as Lambda and CoreWeave. Both companies built their growth around Nvidia hardware, and now Nvidia intends to rent large portions of their capacity. This circular relationship raises questions about how cloud providers and GPU manufacturers influence each other as AI computing demand keeps accelerating.
The filing also highlights the concentration of Nvidia’s customer base. Four customers account for 22 percent, 17 percent, 14 percent, and 12 percent of its accounts receivable. The company did not name them, but the figures invite speculation that the major cloud vendors represent the largest shares.
Some of these agreements can still be reduced or terminated, which gives Nvidia a degree of flexibility. Even so, the contracts show how the company aims to support its research projects and its DGX Cloud ecosystem. Despite reports of a possible pullback, Nvidia leadership publicly rejected that claim, insisting that DGX Cloud remains heavily used and is still expanding.
Nvidia’s revenue climbed to 57 billion dollars in the most recent quarter, and CEO Jensen Huang described demand for Blackwell GPUs as overwhelming. Because cloud providers have already sold out their GPU capacity, Nvidia may avoid future bottlenecks most effectively by securing long term rented capacity.
