Starcloud, a Washington-based orbital computing startup, has officially submitted an application to the Federal Communications Commission (FCC) for a massive constellation of 88,000 satellites designed to function as high-performance data centers in Low Earth Orbit (LEO). By relocating intensive computational tasks such as Artificial Intelligence (AI) training to space, the company intends to exploit the vacuum of space for passive radiative cooling and near-continuous solar power, bypassing the physical and environmental constraints currently hampering terrestrial data centers.
How does Starcloud's orbital data center work?
Starcloud’s orbital data centers integrate high-density GPU clusters, persistent storage, and proprietary thermal management systems into satellites operating in sun-synchronous orbits, ensuring continuous power and radiative cooling. The satellites are networked via optical inter-satellite links, allowing them to process high-volume data in real time for both orbital and terrestrial users. This infrastructure bypasses traditional downlink bottlenecks by providing secure, scalable cloud computing directly from orbit.
Starcloud, formerly known as Lumen Orbit, is designing its architecture to accommodate the exponential growth of AI-driven compute demands. According to the company's FCC filing on March 13, 2026, these orbital facilities will operate in narrow shells at altitudes between 600 and 850 kilometers. By maintaining a dusk-dawn sun-synchronous orientation, the satellites can achieve near-continuous power generation, which is essential for the high-energy requirements of modern Large Language Models (LLMs) and GPU-intensive workloads.
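That orbital geometry has a direct consequence for mission design: at these altitudes a satellite circles the Earth in well under two hours, and a dusk-dawn sun-synchronous plane keeps it in sunlight for nearly all of that time. A quick back-of-envelope check with Kepler's third law, using only standard physical constants (nothing here comes from Starcloud's filing):

```python
import math

MU_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH = 6_371_000.0      # mean Earth radius, m

def orbital_period_minutes(altitude_km: float) -> float:
    """Circular-orbit period from Kepler's third law: T = 2*pi*sqrt(a^3/mu)."""
    a = R_EARTH + altitude_km * 1_000.0  # orbit radius, m
    return 2 * math.pi * math.sqrt(a**3 / MU_EARTH) / 60.0

for alt in (600, 850):
    print(f"{alt} km altitude: one orbit every "
          f"{orbital_period_minutes(alt):.1f} minutes")
```

The ~97 to ~102 minute periods mean each satellite completes roughly 14 orbits per day, which is why the dusk-dawn orientation matters: without it, a substantial fraction of every orbit would be spent in Earth's shadow running on batteries.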
The technical framework relies heavily on broadband integration with existing providers. While the satellites will handle the heavy lifting of computation, they will utilize laser links to communicate with established networks such as SpaceX’s Starlink, Amazon’s Project Kuiper, and Blue Origin’s Tera Wave. This hybrid approach allows Starcloud to focus on the hardware of computation—such as the Nvidia H100 processor—while leveraging the global connectivity of existing mega-constellations for data delivery.
Why is space better for data centers than Earth?
Space offers an effectively unlimited heat sink for radiative cooling and near-constant access to solar energy, eliminating the massive water consumption and battery storage that ground facilities require. The company estimates that this environment allows a 10x reduction in operational energy costs while avoiding terrestrial limitations such as land scarcity, power grid instability, and carbon emissions. Consequently, orbital data centers could scale to gigawatt levels much faster than ground-based facilities.
The primary driver for moving compute to LEO is the passive thermal management provided by the vacuum of space. Terrestrial data centers currently face severe roadblocks due to the immense heat generated by AI chips, requiring millions of gallons of water and complex HVAC systems. Starcloud argues that space-based deployment is the most cost-effective way to deliver compute this decade, as it removes the infrastructure constraints that typically delay ground-based data center expansion by years.
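The physics behind passive thermal management can be sketched with the Stefan-Boltzmann law: a radiator sheds heat in proportion to the fourth power of its temperature. The radiator temperature, emissivity, and heat load below are illustrative assumptions, not Starcloud figures, and the sketch ignores solar and Earth infrared loading on the radiator:

```python
SIGMA = 5.670374419e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

def radiator_area_m2(heat_load_w: float, radiator_temp_k: float = 300.0,
                     emissivity: float = 0.9, sink_temp_k: float = 3.0) -> float:
    """One-sided radiator area needed to reject heat_load_w to deep space."""
    flux = emissivity * SIGMA * (radiator_temp_k**4 - sink_temp_k**4)  # W/m^2
    return heat_load_w / flux

# Assumed 1 MW of waste heat from a GPU cluster, radiator at 300 K
print(f"{radiator_area_m2(1e6):,.0f} m^2 of radiator to reject 1 MW")
```

The result (on the order of a few thousand square meters per megawatt at room-temperature radiator panels) illustrates the trade: cooling in space is "free" in the sense of needing no water or chillers, but it demands large deployable radiator surfaces.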
Furthermore, the Starcloud constellation aims to provide a "sovereign cloud" that exists outside traditional national boundaries, offering unique security and accessibility benefits. The company's roadmap includes the deployment of Starcloud-4, which features massive satellites launched via SpaceX Starship. These future units are envisioned to have solar arrays spanning four kilometers on each side, supporting a staggering five-gigawatt data center capacity per craft—a scale virtually impossible to achieve in a single terrestrial location today.
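As a rough sanity check on the five-gigawatt figure, assume the described arrays form a square roughly 4 km on a side, collecting sunlight at the solar constant with an assumed 25% end-to-end conversion efficiency (both the square geometry and the efficiency are our assumptions, not Starcloud specifications):

```python
SOLAR_CONSTANT = 1361.0  # W/m^2 of sunlight at Earth's distance from the Sun

side_m = 4_000.0        # stated array span per side
area_m2 = side_m ** 2   # assumed square 4 km x 4 km collecting area
efficiency = 0.25       # assumed cell + power-system efficiency

power_gw = area_m2 * SOLAR_CONSTANT * efficiency / 1e9
print(f"~{power_gw:.1f} GW from a {side_m / 1000:.0f} km square array")
```

Under those assumptions the array yields roughly 5.4 GW, so the claimed capacity is at least dimensionally plausible, though the real figure depends heavily on cell efficiency, packing, and degradation.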
What happened to the Nvidia H100 GPU on Starcloud's first satellite?
The Nvidia H100 GPU on Starcloud-1 successfully performed the first orbital training of a Large Language Model and ran Google’s Gemini AI model following its November 2025 launch. This 60-kilogram testbed demonstrated that commercial-off-the-shelf (COTS) high-performance hardware could survive the launch and operate effectively in LEO. The mission confirmed that the H100 provides 100x more processing power than any previous GPU deployed in space.
The success of the Starcloud-1 mission, which was part of a SpaceX rideshare, served as a vital proof-of-concept for the company’s future fleet. Despite the harsh radiation environment of space, the Nvidia H100 remained operational, allowing the team to run complex inference tasks on Synthetic Aperture Radar (SAR) data. This capability suggests that future satellites could process satellite imagery locally, only sending the critical insights back to Earth rather than massive, raw data files.
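The bandwidth argument for onboard inference is easy to quantify. The scene size, insight size, and link rate below are illustrative assumptions rather than mission figures:

```python
def downlink_seconds(data_bytes: float, link_bps: float) -> float:
    """Time to transfer a payload over a link of the given bit rate."""
    return data_bytes * 8 / link_bps

RAW_SCENE_B = 10e9   # assumed raw SAR scene, ~10 GB
INSIGHTS_B = 1e6     # assumed extracted detections/summary, ~1 MB
LINK_BPS = 500e6     # assumed 500 Mbps downlink

raw_t = downlink_seconds(RAW_SCENE_B, LINK_BPS)
proc_t = downlink_seconds(INSIGHTS_B, LINK_BPS)
print(f"raw scene: {raw_t:.0f} s, insights only: {proc_t:.3f} s "
      f"({RAW_SCENE_B / INSIGHTS_B:,.0f}x less data)")
```

Even with generous link assumptions, shipping raw imagery down takes minutes per scene, while a processed summary clears the link almost instantly; that is the core economic case for running inference where the data is collected.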
Building on these results, the company is preparing for Starcloud-2, its first fully commercial spacecraft scheduled for a 2027 launch. This next iteration will feature a cluster of processors paired with proprietary thermal and power systems in a smallsat form factor. The goal is to refine the hardware's radiation hardening and power efficiency before scaling up to the full 88,000-satellite constellation authorized in the recent FCC filing.
Regulatory and Sustainability Challenges for Mega-Constellations
The scale of the Starcloud proposal places it among the largest satellite filings in history, surpassed only by SpaceX’s recent application for a one-million-satellite constellation. Managing a fleet of 88,000 objects requires rigorous adherence to orbital safety and space traffic management. Starcloud has committed to several best practices to mitigate these risks, including:
- Full Demisability: Satellites are designed to burn up completely upon atmospheric reentry, leaving no debris.
- Brightness Mitigation: Coordination with the astronomy community to minimize light pollution and protect essential observations.
- Initial Checkout Orbits: Deploying satellites at lower altitudes to ensure functionality before raising them to operational orbits.
- Non-Interference Basis: Using Ka-band spectrum for telemetry and control in a way that does not disrupt existing communications.
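The checkout-orbit practice in the list above implies a small orbit-raising maneuver once a satellite passes its tests. A minimal Hohmann-transfer estimate, using an assumed 350 km checkout altitude (the filing's checkout altitude is not stated here) and the 600 km lower bound of the operational shells:

```python
import math

MU = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
RE = 6_371_000.0     # mean Earth radius, m

def hohmann_dv(alt1_km: float, alt2_km: float) -> float:
    """Total delta-v (m/s) for a Hohmann transfer between circular orbits."""
    r1, r2 = RE + alt1_km * 1e3, RE + alt2_km * 1e3
    dv1 = math.sqrt(MU / r1) * (math.sqrt(2 * r2 / (r1 + r2)) - 1)  # raise apogee
    dv2 = math.sqrt(MU / r2) * (1 - math.sqrt(2 * r1 / (r1 + r2)))  # circularize
    return dv1 + dv2

print(f"~{hohmann_dv(350, 600):.0f} m/s to raise a 350 km checkout "
      f"orbit to 600 km")
```

The maneuver costs on the order of 140 m/s, a modest propellant budget, which is why deploying low and raising only verified satellites is a cheap insurance policy against stranding dead hardware in long-lived orbits.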
As Jeff Foust reported for SpaceNews, the FCC's acceptance of the filing is only the first step in a long regulatory journey. The Redmond-based company must prove that such a dense constellation will not exacerbate the problem of orbital debris. By favoring sun-synchronous orbits and ensuring that malfunctioning units reenter the atmosphere quickly, Starcloud aims to establish itself as a responsible steward of the "soon-to-be highly utilized orbits" between 600 and 850 kilometers.
The transition from communication-focused satellites to computation-focused orbital infrastructure represents a paradigm shift in space utilization. If Starcloud successfully deploys its 88,000-satellite network, it will not only provide a massive boost to global AI capabilities but also redefine the relationship between cloud computing and environmental sustainability. For now, the industry watches as the company prepares its 2027 mission, which will serve as the definitive test for the feasibility of gigawatt-scale orbital supercomputing.