
The energy crisis, AI's insane compute appetite, and a wave of bold startups are pushing digital infrastructure beyond Earth's atmosphere.

Let's be honest — nobody was expecting 2026 to be the year we started talking about data centers in space as if they were a real thing. But here we are.
A handful of companies have already launched actual compute hardware into low-Earth orbit. An NVIDIA H100 GPU is running AI workloads somewhere above us right now. Axiom Space is running cloud computing experiments on the International Space Station. And SpaceX has filed patents for what some analysts are describing as a vision for a million AI-data-center satellites.
So what's actually going on? Is this real, or just a very expensive PR stunt?
Turns out — it's genuinely real, and the reasons behind it make a lot of sense once you understand the pressure the data center industry is under right now.
"By 2030, AI data centers are projected to consume close to 1,000 terawatt-hours of electricity per year — rivaling the power consumption of entire countries."
Here's the thing about AI: it's incredibly power-hungry. Training large models, running inference at scale, storing and moving petabytes of data — all of this requires enormous amounts of electricity and cooling.
On Earth, that's becoming a massive problem. Data centers already account for roughly 1-2% of global electricity consumption. As AI workloads scale up through the late 2020s, that number is expected to explode.
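The scale of that jump is easy to sanity-check with rough numbers. The figures below (global generation, today's data center share) are ballpark public estimates, used only to put the 1,000 TWh projection in context:

```python
# Back-of-envelope check on the data center electricity figures.
# All inputs are rough ballpark numbers, not measurements.

GLOBAL_ELECTRICITY_TWH = 27_000      # approx. global annual generation (assumed)
DC_SHARE_TODAY = 0.015               # the ~1-2% attributed to data centers
AI_DC_PROJECTION_TWH = 1_000         # projected AI data center demand by 2030

dc_today_twh = GLOBAL_ELECTRICITY_TWH * DC_SHARE_TODAY
growth_factor = AI_DC_PROJECTION_TWH / dc_today_twh

print(f"Data centers today: ~{dc_today_twh:.0f} TWh/yr")
print(f"Projected AI demand alone: {AI_DC_PROJECTION_TWH} TWh/yr "
      f"(~{growth_factor:.1f}x today's entire data center footprint)")
```

In other words, AI alone is projected to need roughly two and a half times the electricity of every data center running today.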
The bottlenecks are getting painful:
Power: Many regions simply don't have spare grid capacity for the next wave of data centers.
Cooling: Traditional water-cooling systems are expensive, water-intensive, and increasingly restricted in drought-prone areas.
Land: Prime real estate near fiber networks and power infrastructure is getting scarce and expensive.
Regulation: Carbon and water usage regulations are tightening in the EU, US, and elsewhere.
So when engineers started looking at space as an alternative, it wasn't really out of nowhere. Space offers something Earth can't — a nearly unlimited power source (the sun, unfiltered by atmosphere), passive cooling (radiating waste heat to the cold of deep space), and zero land constraints.
An orbital data center is exactly what it sounds like — servers, storage, and compute hardware mounted inside a satellite or modular orbital platform, operating in low-Earth orbit (LEO), typically 300–600 km above ground.
Instead of sending raw satellite imagery back to Earth for processing (which eats up expensive bandwidth), you run the AI model directly on the satellite and send back only the results. Think: a crop-health alert instead of 50GB of raw imagery. A weather prediction, not a terabyte of sensor data.
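That bandwidth argument is worth quantifying. The sketch below compares downlinking raw imagery against downlinking only an inference result; the link rate, pass duration, and payload sizes are illustrative assumptions, not real mission figures:

```python
# Illustrative downlink comparison: raw imagery vs. on-orbit inference results.
# The link rate, pass duration, and payload sizes are assumptions for the sketch.

RAW_IMAGERY_GB = 50.0        # one pass of raw multispectral imagery (from the text)
RESULT_KB = 200.0            # compact crop-health alert (assumed size)
DOWNLINK_MBPS = 300.0        # assumed ground-station link rate
PASS_SECONDS = 600.0         # assumed LEO ground-station contact window

def downlink_seconds(size_gb: float, rate_mbps: float) -> float:
    """Time to transmit `size_gb` gigabytes over a `rate_mbps` link."""
    return size_gb * 8_000 / rate_mbps   # GB -> megabits, then divide by rate

raw_time = downlink_seconds(RAW_IMAGERY_GB, DOWNLINK_MBPS)
result_time = downlink_seconds(RESULT_KB / 1e6, DOWNLINK_MBPS)

print(f"Raw imagery: {raw_time:.0f} s (~{raw_time / PASS_SECONDS:.1f} ground passes)")
print(f"Processed result: {result_time * 1000:.1f} ms")
```

Under these assumptions, the raw imagery needs multiple ground-station passes to come down, while the processed alert fits in a few milliseconds of link time.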
The architecture is basically a hybrid cloud edge node — except the 'edge' is in orbit.
Here's roughly how it works:
Radiation-tolerant hardware is packaged into satellites or orbital modules.
Large solar arrays generate power — more consistently than anything on Earth.
Radiator panels reject waste heat to the cold of deep space, handling thermal dissipation without water or chillers.
AI models run on-orbit, and results are beamed back via laser or radio links.
Terrestrial cloud systems integrate with orbital nodes as edge extensions.
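The flow above can be sketched as a minimal edge loop: capture, infer on-orbit, downlink only the result. Every class and function name here is hypothetical, invented purely to illustrate the shape of the pipeline:

```python
# Minimal sketch of the orbital edge-node loop described above.
# All names (Observation, run_model, downlink) are hypothetical.
from dataclasses import dataclass

@dataclass
class Observation:
    region: str
    raw_bytes: int          # size of the raw sensor capture

@dataclass
class Result:
    region: str
    summary: str            # compact, downlink-friendly output

def run_model(obs: Observation) -> Result:
    """Stand-in for on-orbit AI inference on radiation-tolerant hardware."""
    return Result(region=obs.region, summary="crop-stress: low")

def downlink(result: Result) -> str:
    """Stand-in for beaming results to ground via laser/radio link."""
    return f"[{result.region}] {result.summary}"

# One pass over a (fake) sensor queue: infer in orbit, transmit only results.
queue = [Observation("field-17", raw_bytes=50 * 10**9)]
transmitted = [downlink(run_model(obs)) for obs in queue]
print(transmitted)
```

The key design point is in the last two lines: the 50 GB capture never leaves the satellite, only the short summary string does.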
A few years ago, orbital data centers were a concept paper topic. In 2026, they're operational. Here's what's actually happened:
Starcloud — probably the most talked-about name in this space right now — launched a satellite with an NVIDIA H100 GPU onboard. That's a high-end AI accelerator, the same kind powering ChatGPT servers, running AI workloads in orbit. They're positioning this as the world's first AI data center in space, and technically, it's hard to argue otherwise.
Kepler Communications launched their Tranche 1 constellation in March 2026. These aren't just comms satellites — they have distributed compute nodes baked in, so customers can actually run AI models in orbit instead of waiting for data to hit the ground.
Axiom Space took a different approach. They deployed what they're calling AxDCU-1-style nodes on the ISS, running a mix of cloud computing, AI/ML workloads, and space-cybersecurity experiments. They've partnered with Kepler, Skyloom, and Microchip Technology to build out the ecosystem.
And then there's SpaceX. Elon Musk's vision — which has leaked through filings and executive statements — describes a potential future where SpaceX-linked systems operate up to a million AI data center satellites. That's more ambition than operational reality right now, but it signals where the industry thinks this could go.
"Analysts estimate the orbital data center market could grow from ~$1.77B around 2029 to roughly $39B by 2035 — a 67% compound annual growth rate."
Once you get past the coolness factor of 'servers in space,' the advantages are surprisingly practical:
Solar energy with minimal interruption — In LEO, eclipse periods are brief (often well under half of a roughly 90-minute orbit), and dawn-dusk sun-synchronous orbits can keep panels sunlit almost continuously. No clouds, no weather, and solar irradiance about 40% stronger than what hits a rooftop panel on Earth. For always-on compute workloads, that's a meaningful efficiency gain.
Cooling without water — Thermal management is one of the biggest costs in data centers. In orbit, heat is rejected passively by radiating it to deep space. No cooling towers, no water consumption, no expensive HVAC — the trade-off being large radiator panels, since a vacuum carries no heat away by convection.
Latency advantages for the right use cases — For Earth observation, satellite analytics, and defense applications, doing processing in orbit rather than routing data to the ground and back is genuinely faster.
Resilience — Orbital infrastructure isn't affected by floods, power grid failures, or the physical vulnerabilities that ground-based data centers carry.
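The solar advantage in the list above is easy to quantify. The irradiance figures below are standard physical values; the duty-cycle assumptions (how often each array is actually generating) are illustrative:

```python
# Rough comparison of orbital vs. terrestrial solar input for a fixed array.
# Irradiance values are standard; the duty-cycle figures are assumptions.

SPACE_IRRADIANCE = 1361.0    # W/m^2, the solar constant above the atmosphere
GROUND_IRRADIANCE = 1000.0   # W/m^2, clear-sky peak at the surface
SPACE_DUTY = 0.95            # sunlit fraction, dawn-dusk-style orbit (assumed)
GROUND_DUTY = 0.20           # typical capacity factor for fixed panels (assumed)

space_avg = SPACE_IRRADIANCE * SPACE_DUTY    # average W/m^2 delivered
ground_avg = GROUND_IRRADIANCE * GROUND_DUTY

print(f"Peak advantage: {SPACE_IRRADIANCE / GROUND_IRRADIANCE - 1:.0%}")
print(f"Average advantage: ~{space_avg / ground_avg:.1f}x")
```

Peak irradiance alone is ~36% higher, but the bigger win is availability: averaged over a day, the orbital array delivers several times the energy of the same panel on a rooftop.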
There are real reasons to be skeptical, and the honest answer is that most of this is still in early-pilot territory.
The cost problem is significant. Launching hardware into orbit is still expensive — even with reusable rockets. And if something breaks, you can't send a technician. Hardware has to be designed to last years without any physical maintenance, which means expensive radiation-hardened components and overbuilt redundancy.
The economics are rough right now. Some analyses suggest orbital compute currently costs around 3x more per watt than terrestrial equivalents. That gap needs to close dramatically for this to make sense at scale.
Latency can actually get worse depending on the use case. For enterprise workloads that aren't space-native (think: a SaaS company running their database in orbit), you're adding network hops that could hurt performance.
And space debris is a real, growing concern. More satellites means more collision risk, more coordination requirements, and more regulatory scrutiny from bodies like the FCC and ITU.
The most likely near-term outcome isn't 'data centers move to space.' It's that specific, space-native workloads — Earth observation, satellite analytics, defense and intelligence processing — start being handled in orbit rather than on the ground.
The longer-term implications are bigger though. If launch costs keep falling (SpaceX's Starship is targeting sub-$100/kg to orbit eventually), and orbital hardware gets cheaper and more reliable, you start to see a world where:
- AI inference for satellite imagery and sensor data happens entirely off-Earth.
- A new category of 'space-edge computing as a service' becomes a real product line.
- Some of the most energy-intensive AI workloads get offloaded to orbit, reducing pressure on terrestrial grids.
- The cloud becomes literally that — something in the sky.
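The launch-cost sensitivity behind that scenario is straightforward to sketch. The module mass below is an assumption, and the Falcon-9-era price is a rough ballpark; only the sub-$100/kg Starship target comes from the text:

```python
# How launch price changes the economics of putting compute in orbit.
# The module mass and Falcon 9 price are illustrative assumptions.

MODULE_MASS_KG = 2_000.0     # assumed mass of one compute module incl. radiators

def launch_cost(price_per_kg: float) -> float:
    """Launch cost for one module at a given $/kg to orbit."""
    return MODULE_MASS_KG * price_per_kg

for label, price in [("Falcon 9 era (~$2,500/kg)", 2_500),
                     ("Starship target (<$100/kg)", 100)]:
    print(f"{label}: ${launch_cost(price):,.0f} per module")
```

Under these assumptions, the same module drops from millions of dollars to launch to a few hundred thousand, which is the kind of shift that turns a pilot program into a product line.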
For now, 2026 is a testing year. The pilots are live, the market is forming, and a few bold companies are figuring out what actually works. The full-scale rollout, if it happens, is a 2028–2032 story.
Orbital data centers aren't replacing your AWS region anytime soon. But they're not science fiction anymore either.
The fundamental driver here is simple: AI needs more compute than Earth's infrastructure can sustainably provide, and space is one of the few places with effectively unlimited solar energy, free cooling, and room to scale. The engineering is hard, the economics are still rough, but the direction of travel is clear.
If you're building AI infrastructure — or building products that rely on satellite data, Earth observation, or space-native data pipelines — this is a space worth watching closely. The companies getting into orbit now are laying the groundwork for what the cloud looks like in 2030 and beyond.