Google and SpaceX in talks to build orbital data centers for AI compute
Moving AI compute to orbit could sidestep terrestrial power and cooling constraints by tapping near-continuous solar energy and radiating waste heat into space. Current launch costs make this economically unviable today, but Starship's promised payload economics could eventually shift the breakeven point. The talks signal a long-term architectural pivot toward space-based infrastructure for next-generation gigawatt AI clusters.
What Happened
Google and SpaceX are reportedly in exploratory talks to deploy data centers in low Earth orbit (LEO). The initiative pitches the vacuum of space as the ultimate frontier for housing massive AI compute clusters, bypassing the severe land, power, and water constraints currently throttling terrestrial data center expansion.

Technical Details
From an engineering perspective, orbital compute presents a fascinating trade-off matrix. Terrestrial AI clusters are bottlenecked primarily by power generation and thermal management. In orbit, a data center could harness near-continuous solar energy (depending on its orbital plane) and reject waste heat radiatively to deep space, since vacuum permits no convective cooling, drastically reducing Power Usage Effectiveness (PUE) overhead.

The technical hurdles, however, are steep. Cosmic radiation demands heavily shielded or radiation-hardened silicon, which traditionally lags several generations behind state-of-the-art terrestrial GPUs and TPUs. Data transmission, meanwhile, requires high-bandwidth, low-latency optical inter-satellite links (laser communications) to move massive datasets between Earth and orbit. SpaceX has proven this technology with Starlink, but it has not yet been deployed at the petabit scale that distributed AI training workloads would require.
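To give a sense of the thermal problem, the sketch below sizes a radiator using the Stefan-Boltzmann law. The emissivity, radiator temperature, and 1 MW module size are illustrative assumptions for back-of-envelope purposes, not figures from the reported talks.

```python
# Back-of-envelope sizing of a heat-rejection radiator in vacuum.
# With no convection in orbit, all waste heat must be radiated away
# per the Stefan-Boltzmann law: P = eps * sigma * A * T^4.
# Emissivity and temperature are illustrative assumptions.

SIGMA = 5.67e-8        # Stefan-Boltzmann constant, W / (m^2 K^4)
EMISSIVITY = 0.90      # assumed high-emissivity radiator coating
RADIATOR_TEMP_K = 300  # assumed radiator surface temperature (~27 C)

def radiator_area_m2(waste_heat_w: float) -> float:
    """Single-sided radiator area needed to reject waste_heat_w watts."""
    flux = EMISSIVITY * SIGMA * RADIATOR_TEMP_K ** 4  # W per m^2
    return waste_heat_w / flux

# A modest 1 MW compute module (a gigawatt cluster is 1000x this):
print(f"{radiator_area_m2(1e6):.0f} m^2")  # roughly 2400 m^2, single-sided
```

Even under these generous assumptions, a 1 MW module needs radiator area on the order of a few thousand square meters, which is why thermal design dominates any serious orbital data center concept.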
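The bandwidth gap can be made concrete with similar arithmetic. The 100 Gbps per-link figure below is an assumption in the range publicly reported for Starlink's optical inter-satellite links, not a confirmed specification, and the 1 PB dataset size is likewise illustrative.

```python
# Illustrative link-budget arithmetic for moving training data to orbit.
# Per-link throughput and dataset size are assumptions, not specs.

LINK_GBPS = 100        # assumed per-link optical throughput
DATASET_PETABYTES = 1  # assumed dataset size to transfer

bits = DATASET_PETABYTES * 1e15 * 8
seconds = bits / (LINK_GBPS * 1e9)
print(f"{seconds / 3600:.1f} hours per link")  # ~22.2 hours for 1 PB

# Aggregate petabit-per-second throughput at 100 Gbps per link
# would require on the order of ten thousand links in parallel.
links_for_petabit = 1e15 / (LINK_GBPS * 1e9)
print(f"{links_for_petabit:.0f} links for 1 Pbit/s")
```

The point of the sketch is the gap in scale: single laser links are proven, but petabit-class aggregate throughput implies thousands of them operating concurrently.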