
Venturing Into Orbital Data Centers: VC Insights for Deep Tech Startups | Deep Tech Catalyst

A chat with Jasper Wigley, Investment Associate at Systemiq Capital

Welcome to the 111th edition of Deep Tech Catalyst, the educational channel from The Scenarionist where science meets venture!

This week, we step into one of the most provocative infrastructure questions of the AI era: what happens when the energy demands of computation outgrow the Earth?

I sat down with Jasper Wigley, Investment Associate at Systemiq Capital, to unpack how an investor thinks about the shifting shape of compute demand, why space is being considered as an energy and constraint workaround, and where startups can position themselves in the stack to build traction—and capture the upside.

Key takeaways from the episode:

🚀 Compute Demand Has Split Into Two Economies
Training is optimized for power and performance, while inference is increasingly optimized for cost per token and energy efficiency. As models move from text into multimodal, physical-world workloads, the compute requirement expands again.

🛰️ Edge Compute Is Real Today—Orbital Data Centers Are the Bigger Bet
Processing data on satellites to avoid downlink bottlenecks is already happening. “AWS in orbit” is a different category: larger, more ambitious, and still in an early testing phase.

☀️ The Core Space Argument Is Energy
Sun-synchronous solar can offer continuous baseload generation and higher harvesting efficiency without atmospheric filtering. Space also avoids terrestrial permitting and local opposition, while adding a potential sovereignty angle for compute and data.

🧊 Cooling Sounds Easy in Space—Until You Size the Radiators
In a vacuum, heat rejection relies on radiation, which can require enormous radiator surface area. Thermal design becomes a gating constraint once you connect it to the economics of launch mass.

🧱 Investability Comes From Stage-Gating and Early Customers
A credible plan de-risks in steps—lab proof, then subscale flight—while anchoring early commercial pull through specific LOIs and a small set of concrete use cases, often starting with defense, satellite operators, and eventually hyperscalers exploring off-grid compute.


Join The Scenarionist Premium

Whether you’re an experienced investor leading an established fund, an emerging manager stepping into the field, an angel investor exploring new opportunities, or a founder eager to see the industry from a fresh perspective, The Scenarionist Premium is built for you.

You’ll have access to:

  • Startup case studies that have been successfully deployed in real industrial settings.

  • In-depth due diligence and execution frameworks designed to win.

  • Curated, independent analysis of weekly Deep Tech inflection points, from scaling signals to incumbent moves and policy shifts.

Join Premium


BEYOND THE CONVERSATION — STRATEGIC INSIGHTS FROM THE EPISODE

How Compute Demand Is Shifting, and Why It Keeps Accelerating

The clearest way to understand why the idea of orbit-based computing is even being discussed is to start with what’s happening on Earth. Compute demand didn’t simply “grow.” It changed shape. And once the workload changes, everything downstream—power, infrastructure, and the investment logic—changes with it.

Training vs. Inference as Two Different Compute Economies

Before the 2020s, a lot of the growth story in data centers was driven by cloud computing. That era rewarded an infrastructure stack built to optimize for a particular kind of workload and a particular set of constraints.

The dominant jobs were not the same as the high-performance compute workloads now defining the AI moment, and the system design decisions followed accordingly.

Then the “ChatGPT moment” arrived, and large language models moved from a research theme into something broadly present in society. That shift didn’t just increase demand; it introduced a new split in how to think about compute.

  1. One track is training. Training is fundamentally optimized for power and performance—getting the most capability out of the system, pushing model quality forward, and competing on who can produce the strongest results. Cost and energy still matter, but in the race among the biggest hyperscale players, training is often treated as the performance frontier first.

  2. The second track is inference, where the center of gravity moves. Inference is increasingly optimized around the cost per token and energy efficiency. Once a model exists, the question becomes how cheaply and reliably it can be deployed in the real world, at scale, under constraints that are far more operational than aspirational. The economics are different, and the engineering priorities follow.

This distinction matters because it makes “compute demand” a misleading single number. Demand is not only rising; it is bifurcating into different economic regimes that reward different architectures and different infrastructure choices.
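To make the inference economics concrete, here is a back-of-envelope sketch in Python. All of the figures (node power draw, token throughput, electricity price) are hypothetical assumptions chosen for illustration, not numbers from the episode; the point is only that cost per token is dominated by energy once the model exists.

```python
# Back-of-envelope: electricity cost per million tokens served.
# Every number below is an illustrative assumption, not a measurement.

def energy_cost_per_million_tokens(power_kw, tokens_per_second, price_per_kwh):
    """Cost of the energy needed to serve 1M tokens on hardware drawing
    `power_kw` while sustaining `tokens_per_second` aggregate throughput."""
    seconds = 1_000_000 / tokens_per_second
    kwh = power_kw * seconds / 3600
    return kwh * price_per_kwh

# Hypothetical: an 8-GPU node drawing ~10 kW, serving 5,000 tokens/s,
# on $0.10/kWh grid power.
cost = energy_cost_per_million_tokens(power_kw=10,
                                      tokens_per_second=5000,
                                      price_per_kwh=0.10)
print(f"~${cost:.3f} per million tokens (energy only)")
```

Under these assumptions the energy bill is a few cents per million tokens, which is why small improvements in energy efficiency or electricity price compound enormously at hyperscale volumes.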

From Text to the Physical World: Multimodal Models and New Workloads

There is also a deeper reason the trajectory feels so steep: the workloads are expanding beyond text.

Text is one thing. The leap comes when AI has to understand and interact with the physical world—when it needs to work with data across many modalities.

Vision is an obvious example, but it extends well beyond that: sound, temperature, and hundreds of other types of signals that show up in real systems. As the model’s relationship with the physical world becomes more direct, the complexity increases dramatically.

This is part of why attention is shifting toward foundation models in areas like biology and vision, and toward multimodal systems that can integrate multiple streams of physical-world data. The point is not that these are fashionable categories. The point is that they’re harder.

And because they’re harder, they require more compute.

It also helps explain the renewed energy around robotics and adjacent domains that once felt perpetually “almost ready.” When the cost of compute comes down and the performance of the infrastructure improves, it becomes possible to train and run workloads that were previously unrealistic.

A set of applications that used to be constrained by compute economics begins to open up—not because the ambition changed, but because the underlying capability finally caught up.

From this perspective, the compute story is not a temporary surge. It is a structural shift in what AI is being asked to do, and therefore in the scale and type of compute required to do it.



Two Distinct Space Pathways

Any serious discussion of “data centers in space” gets confusing quickly if everything is treated as one category. In practice, there are two different tracks that share a similar setting—orbit—but operate with different assumptions, different levels of ambition, and very different timelines.

Space Edge Computing

The first track is space edge computing, and it is already happening.

The idea is straightforward: process data directly on satellites or on space stations instead of sending everything back down to Earth. You can think of it as putting a kind of “brain” on the spacecraft.

That brain performs an initial stage of analysis in orbit, then only the important parts of the data—or the insights extracted from it—are transmitted back to Earth. This matters because bandwidth is not infinite, downlink capacity is constrained, and raw data volumes from space systems can be overwhelming.

So, if the spacecraft can triage its own data—identify what is relevant, compress what matters, discard what doesn’t—then the entire system becomes more efficient.

  • Technically, this tends to involve custom silicon—often purpose-built ASICs—and hardware strategies designed to operate reliably in the space environment.

  • Commercially, it is still a relatively small market compared to terrestrial compute, but it is real, growing, and already producing companies that are doing well.
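The bandwidth argument behind onboard triage can be sketched with simple arithmetic. The sensor output, downlink rate, and contact window below are hypothetical mission figures chosen only to show the shape of the problem:

```python
# Illustrative sizing: why satellites must triage their own data.
# All figures are assumptions for the sketch, not real mission numbers.

raw_gb_per_orbit = 500   # hypothetical imager output generated per orbit
downlink_mbps = 1200     # hypothetical downlink rate during a ground pass
pass_minutes = 8         # hypothetical contact window per orbit

# Mbps -> MB/s (/8), MB -> GB (/1000), over the whole pass:
downlink_gb_per_pass = downlink_mbps * pass_minutes * 60 / 8 / 1000
print(f"Can downlink ~{downlink_gb_per_pass:.0f} GB per pass "
      f"vs {raw_gb_per_orbit} GB generated")

# Required onboard reduction just to fit the pipe:
reduction = raw_gb_per_orbit / downlink_gb_per_pass
print(f"Onboard triage must shrink the data ~{reduction:.1f}x")
```

Under these assumptions, the spacecraft generates roughly seven times more data than it can send down, so triage in orbit is not an optimization but a requirement.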

Orbital Data Centers (ODCs)

The second track is orbital data centers, which is a different proposition entirely.

This is the more ambitious idea people often mean when they talk about “AWS in space” or “Azure in orbit.” Instead of edge nodes designed to support satellite operations, the concept is to launch bespoke platforms capable of delivering full-stack compute in orbit—something that resembles a true data center capability rather than an embedded onboard processor.

That distinction matters because the maturity is not the same.

Orbital data centers are, at best, in a testing phase. They are not yet a widely established reality. The system implications are bigger, the engineering and economics are harder, and the path to scale is still being worked out.

The demand narrative behind this track is also different. It’s tied to the broader AI compute hunger and the fact that, today, compute is supply constrained. That constraint is visible everywhere: data centers racing to come online, multi-year grid queues, unconventional power solutions being pulled back into service.

In that environment, the question becomes whether orbit offers a path to unlock energy and compute capacity that is difficult to access on Earth.

So it’s important to keep the two tracks separate.

  • Space edge computing is about processing space-generated data more intelligently and efficiently, right now.

  • Orbital data centers are about building a new compute frontier in orbit, driven by the escalating demand for large-scale AI workloads—and that frontier is still emerging.



Why Space Is on the Table: Energy, Constraints, and Control

The underlying argument for putting compute in orbit isn’t that space is glamorous.

It’s that, when you strip the concept down to first principles, the bottleneck on Earth is increasingly energy—and space changes the energy equation in ways that are hard to replicate in any other environment.

Solar in Orbit: 24/7 Baseload and Higher Harvesting Efficiency

A useful starting point is simply the sun, and the difference between harnessing solar energy in space versus on Earth.

In orbit, a platform can fly in a sun-synchronous configuration—such as a dawn-dusk orbit—that keeps its solar arrays in near-continuous sunlight. That means solar generation can be available twenty-four hours a day, seven days a week. There’s no night cycle in the same sense, no daily drop-off that forces the system to compensate with grid dependence or oversized storage.

The second factor is efficiency.

On Earth, even in strong solar regions, sunlight is filtered through the atmosphere. That filtering matters. A panel on Earth is not receiving the same energy input as a panel in space, because the atmosphere changes what reaches the surface.

Once you remove that filtering effect, the same solar technology can, in principle, harvest far more effectively.

The implication is not only higher raw efficiency, but a different marginal cost profile.

If the system is designed so that energy harvesting is both continuous and more efficient, then the marginal cost of energy can look meaningfully lower. That doesn’t eliminate the question of upfront costs—those are real and heavy—but it does explain why the energy story is the center of the case.
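A rough comparison makes the scale of the advantage visible. The solar constant outside the atmosphere is about 1361 W/m², versus the ~1000 W/m² peak commonly used for terrestrial sites; the panel efficiencies and capacity factors below are hedged assumptions, not figures from the episode:

```python
# Rough annual energy yield per m^2 of panel: orbit vs ground.
# 1361 W/m^2 is the solar constant (AM0); ~1000 W/m^2 is a common
# peak terrestrial irradiance. Efficiencies and capacity factors
# are illustrative assumptions.

HOURS_PER_YEAR = 8760

def annual_kwh_per_m2(irradiance_w, panel_eff, capacity_factor):
    return irradiance_w * panel_eff * capacity_factor * HOURS_PER_YEAR / 1000

# Dawn-dusk sun-synchronous orbit: near-continuous sun, high-eff cells.
orbit = annual_kwh_per_m2(1361, panel_eff=0.30, capacity_factor=0.99)
# Good terrestrial solar site: day/night plus weather losses.
ground = annual_kwh_per_m2(1000, panel_eff=0.22, capacity_factor=0.22)

print(f"orbit:  ~{orbit:.0f} kWh/m^2/yr")
print(f"ground: ~{ground:.0f} kWh/m^2/yr")
print(f"ratio:  ~{orbit/ground:.1f}x")
```

Under these assumptions a square meter of panel in orbit harvests several times the annual energy of the same area on the ground, which is the core of the marginal-cost argument.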

Escaping Terrestrial Friction

The second set of reasons has less to do with physics and more to do with constraints.

On Earth, new data center capacity tends to collide with jurisdiction. You have land issues, local opposition, permitting regimes, and practical limitations on where power can be generated and delivered.

As data centers become higher density, the infrastructure footprint expands, and it’s not surprising that communities push back—especially when the resource demands are tangible, like water or local power capacity.

In space, those constraints change. There is no local opposition in the same way, no permitting path to navigate, and no land acquisition. The friction that comes from operating inside terrestrial jurisdictions largely disappears.

There is also a sovereignty angle embedded in this.

A spacecraft operating in orbit, outside a traditional jurisdictional footprint, can be framed as a more sovereign environment for storing or processing data. That perspective may become more relevant as the geopolitical importance of compute grows.

Cooling as a Potential Advantage When Thermodynamics Cooperate

Cooling is often the point that sounds counterintuitive at first, and it needs to be framed carefully.

On Earth, cooling can represent a major part of the energy bill for high-density AI data centers. You’re paying not only to run the compute, but to keep the system within operating limits, often using water and climate-dependent approaches.

In space, the environment is very different.

You are sitting in a vacuum, radiating toward a deep-space background only a few degrees above absolute zero. In that sense, the thermodynamics are on your side. The theoretical promise is that you are not constrained by climate, and you don’t need to move vast amounts of water through the system to manage heat.

That doesn’t mean cooling is “easy.”

The physics are hard in practical engineering terms, and solving them at a meaningful scale becomes one of the defining challenges. But the point is that the basic thermodynamic context is not working against you in the way it does on Earth.

Why Space Over Other Harsh Environments Is Still an Open Question

It’s also important not to treat this as a settled conclusion.

People have explored other harsh environments as a way to manage cooling and infrastructure constraints—underwater deployments, mines, and polar regions.

But those approaches don’t remove the same bottlenecks.

Even if cooling is more efficient underwater, you still have to generate energy, connect to grid infrastructure, and operate inside permitting and jurisdictional regimes. Generating power in extreme terrestrial environments—like the Arctic—is not trivial, and you still inherit the terrestrial constraints that space side-steps.

So even if some arguments overlap—cooling being one of them—the full rationale for space is not simply “space cools better.” The deeper logic comes back to energy availability, continuous generation, and escaping the terrestrial constraints that increasingly slow down the deployment of new compute capacity.



The Binding Constraints: Thermal Design, Radiation, and Launch Economics

The vision of serious compute capacity in orbit tends to trigger an intuitive reaction: the hardware works on Earth, so why wouldn’t it work in space?

The reality is that the bottlenecks are not about whether compute can function at all. They’re about whether the system can function at a meaningful scale, for meaningful workloads, within constraints that are brutally physical and brutally economic.

Cooling Dense Compute Requires Radiators at Uncomfortable Scale

The thermal problem is the first place to start, even though it can sound counterintuitive after hearing that space is “cold.”

Cooling a chip in space is not inherently impossible.

The issue is what it takes to do it effectively when the compute density becomes serious. On Earth, heat is moved away from chips using air and water—whether that’s traditional air cooling, immersion, or various approaches to on-chip cooling.

The common feature is that you bring something to the chip that can absorb heat and transport it away.

In orbit, you don’t have that. There is no air and no water moving heat for you. In a vacuum, you are largely left with radiation as the mechanism to get heat out of the system. And once you accept that, the engineering consequence is simple: you need radiating surface area, and a lot of it.

At the scale implied by high-performance AI compute, that surface area becomes enormous.

The cooling solution begins to look like a giant radiator infrastructure—potentially measured in square kilometers. It’s not a showstopper in a pure physics sense. It is a hard design space, but it is still a design space.

The problem is what those radiators imply once you connect them to the launch and deployment reality. Size becomes mass. Mass becomes cost. And cost becomes the gating factor.
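The sizing logic follows directly from the Stefan–Boltzmann law: radiated power per unit area is εσT⁴. The sketch below deliberately simplifies (it ignores absorbed sunlight and Earth infrared, and assumes a single-sided radiator); the 100 MW load and 300 K radiator temperature are hypothetical inputs:

```python
# Stefan-Boltzmann radiator sizing sketch: area needed to reject heat
# by radiation alone. Simplified: ignores absorbed sunlight / Earth IR
# and assumes one radiating face; real orbital designs differ.

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W/m^2/K^4

def radiator_area_m2(heat_w, temp_k, emissivity=0.9):
    """Area required to radiate `heat_w` watts at surface temp `temp_k`."""
    return heat_w / (emissivity * SIGMA * temp_k**4)

# Hypothetical: reject 100 MW (a large AI campus) at a 300 K radiator.
area = radiator_area_m2(100e6, temp_k=300)
print(f"~{area:,.0f} m^2 (~{area / 1e6:.2f} km^2)")
```

Even this simplified case lands near a quarter of a square kilometer per 100 MW, and the T⁴ term cuts both ways: running radiators hotter shrinks the area dramatically, but the compute then has to tolerate higher operating temperatures.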

Radiation Hardening Shifts Cost and Performance Trade-Offs

The second constraint sits inside the hardware itself: radiation.

There are already examples of compute operating in orbit, and a major part of what makes that possible is how chips are packaged and protected. Radiation tolerance is not a “nice-to-have.” It determines whether the system can operate reliably in the environment at all.

But radiation hardening isn’t free.

The more you engineer hardware to survive the space environment, the more expensive the compute module becomes—and, in practice, the more you risk sacrificing performance relative to the best commercial terrestrial parts.

That creates a tension: the most powerful, most cost-effective compute available on Earth is not designed for orbit, and the compute designed for orbit carries a different cost and capability profile.

So the constraint is not only technical. It’s economic. The moment you change the hardware requirements, you change the unit economics of deploying compute at scale in space.

Access to Orbit Is Improving, but the Mass Problem Still Dominates

The third constraint is the one everyone reaches for quickly: launch.

Launch costs have fallen dramatically over the last few decades—from many thousands of dollars per kilogram down to the low hundreds in some contexts—and they are still coming down.

And yet, even under optimistic assumptions, launch remains expensive when the system you’re trying to deploy is physically huge.

If the orbital data center requires multiple square kilometers of solar arrays and radiator surface area, then even “cheap” launch becomes expensive in absolute terms. The math is unforgiving: if mass is large, cost is large.

That is why these constraints are tightly linked.

Thermal management pushes you toward massive radiator structures. Power generation pushes you toward large solar arrays and storage. Radiation pushes you toward more specialized—and often more expensive—compute packaging.

And then launch economics forces all of it into a single question: can the system be designed so that the mass per unit of useful compute becomes viable?

In the end, none of these issues are abstract. They are the binding constraints that decide whether “compute in space” remains a compelling story, or becomes a practical system that can be built, funded, and scaled.



Where a Founder Can Play: The Component Stack and Its Bottlenecks

Once the constraints are clear, the opportunity landscape comes into focus. The most investable entry points are not necessarily the most ambitious “build the whole orbital cloud” visions. They are the places in the stack where a technical breakthrough can materially change the feasibility curve—where solving a bottleneck reduces mass, cost, or risk in a way that the entire system depends on.

Thermal Management

Thermal management stands out because it is both fundamental and leverageable.

The cooling challenge in space isn’t about inventing cooling from scratch. It’s about the scale of radiator infrastructure required to reject heat through radiation alone.

That scale drives mass. Mass drives launch cost. And launch cost drives whether the entire concept can ever be economical.

So if a founder can improve the thermal layer—specifically by reducing radiator mass per kilowatt of heat rejection—that becomes a platform-level contribution. It makes everything above it more viable.

There are several ways this could be approached.

  1. One route is materials: high-emissivity coatings that more effectively convert heat into radiated energy in the vacuum of space.

  2. Another is structure: deployable radiator architectures that can fold tightly for launch and then unfold into massive surface areas once in orbit—ultralight panels that behave more like infrastructure than like traditional spacecraft components.

  3. There is also the internal physics of the radiator system itself: heat pumps that remain effective under radiation constraints, thermal storage materials that can buffer heat and dump it in sync with varying load, and other engineering strategies that improve the overall heat rejection performance without ballooning mass.

The logic is simple. Any company that meaningfully improves the mass-per-kilowatt requirement changes the economics of what can be launched, and that position can translate either into direct commercial pull or into strategic value as a target for integration.
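The leverage is linear and therefore easy to see. Assuming a $200/kg launch price and a range of hypothetical radiator specific masses (the 25 kg/kW "legacy" and 5 kg/kW "ultralight" figures are illustrative, not sourced), launch cost per megawatt of heat rejection falls in direct proportion:

```python
# Why radiator kg/kW is the lever: launch cost scales linearly with it.
# Specific masses and launch price are illustrative assumptions.

def launch_cost_per_mw(kg_per_kw, price_per_kg):
    """Launch cost to fly the radiators that reject 1 MW of heat."""
    return kg_per_kw * 1000 * price_per_kg

baseline = launch_cost_per_mw(kg_per_kw=25, price_per_kg=200)  # heavy panels
improved = launch_cost_per_mw(kg_per_kw=5, price_per_kg=200)   # ultralight design
print(f"baseline: ${baseline / 1e6:.1f}M/MW  vs  improved: ${improved / 1e6:.1f}M/MW")
```

A 5x improvement in specific mass is a 5x improvement in launch cost per megawatt rejected, which is why a coatings, structures, or thermal-storage startup can hold platform-level value without ever building the platform.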

Power Systems, Storage, and the Nuclear “Backup” Conversation

Power is the other foundational layer.

If the appeal of orbit is continuous solar generation and improved harvesting efficiency, then the supporting infrastructure—solar arrays, energy storage, power electronics—becomes critical.

The system is only as good as its ability to deliver stable, reliable power to compute modules over long periods.

This is where design choices matter: deployable solar arrays large enough to support meaningful workloads, batteries that can handle storage and load dynamics, and architectures that are compatible with how orbital platforms are actually assembled and maintained.

And in parallel, there is a more speculative but important thread: nuclear. The idea isn’t to replace solar as the primary driver, but to consider what a backup reactor capability could enable.

NASA has already funded fission studies focused on the Moon, and the broader concept of nuclear in space continues to surface because it could offer a different reliability and baseload profile.

Whether that becomes practical or not, it’s part of the emerging conversation around how orbit-based infrastructure could be powered robustly.

Optical Links, Orbital Assembly, and Maintenance as Enablers

Even if power and thermal problems are solved, an orbital data center is not useful if it can’t communicate effectively.

High-bandwidth links between satellites and Earth become an enabling requirement.

As the ecosystem shifts from radio frequency communication toward optical communications, there are problems to solve—but also clear opportunity.

Optical downlink is not just a technical feature; it’s the bridge that turns orbit-based compute into something that can integrate with terrestrial networks and deliver value at scale.

Then there is the question of how these systems are built and sustained.

If the platform requires massive radiator and power structures, the assembly method becomes central.

That’s where modularity and robotic assembly enter the picture: launching components separately, assembling them in orbit, and designing for repair, replacement, and maintenance.

A founder can play in that layer too—orbital assembly robotics, servicing capabilities, and even “maintenance as a service” models that support long-lived infrastructure rather than one-off spacecraft deployments.

Taken together, these are the practical wedges into a market that isn’t fully formed yet. The system-level vision may be new, but the component challenges are concrete.

And for founders coming from university-driven technology push—whether they’re working on coatings, batteries, structures, optics, or robotics—the critical step is to frame their work against the bottlenecks that determine whether orbital compute can ever graduate from a demo into an industry.



Turning a Future Vision Into an Investable Plan Today

A future market can be a powerful story, but it can’t be the entire story. In the early stages, fundraising is always partly about vision. The mistake is to treat vision as a substitute for evidence.

A claim like “we’ll build a data center in space, and someone will find a use case later” doesn’t clear the bar. The investable version of the narrative starts by forcing specificity: what is the path to capability, and who is prepared to pay for it along the way?

Platform vs. Prime Model

One way to ground the business plan is to acknowledge that there are different models for how an orbital data center ecosystem could be built.

  1. At one end is the traditional space prime approach. A single company acts as the prime contractor: it designs the platform, integrates subsystems, launches the system, and sells directly to customers. It sources components from suppliers—chips, solar arrays, subsystems—but it owns the system architecture and the customer relationship. The organizing principle is integration.

  2. At the other end is a cloud ecosystem model. The parallel here is to providers like AWS, which do not manufacture every chip or server themselves. Instead, they rely on a broad ecosystem—chip companies, OEMs, power vendors—working against shared standards. The organizing principle is orchestration across a stack built by many specialized contributors.

For a founder, the choice of model isn’t a branding exercise. It determines what you need to control, what you need to buy, and what kind of company you’re actually building. It also shapes how you stage development and what milestones you can credibly hit before you need serious capital.

Early Commercial Proof

The harder part in Deep Tech, especially when the market is still emerging, is commercial proof. Even on Earth, getting firm commitments before a product exists is difficult. In orbit, it can feel even more abstract.

The discipline here is to start with customers anyway. As early as possible, the goal is to line up a small number of concrete use cases—three to five is a useful target—where someone can credibly say what they will buy if you deliver a specific capability by a specific date.

In the earliest stage, this doesn’t have to be a paid contract.

Letters of intent can do real work if they are written commitments tied to clear milestones: if you demonstrate X capability by Y date, we will buy Z amount of capacity.

That structure forces the company to translate ambition into a deliverable, and it gives investors a basis for believing demand can exist beyond curiosity.

From the conversation, three customer categories stand out as plausible early anchors.

  1. The first is defense and government. These buyers care about sovereign and resilient compute, and there is a game theory logic to why countries might want to be first to do this at scale. If a team can navigate the nuances of selling into that world, it can be a meaningful route to early traction.

  2. The second category is satellite constellation operators. These players are already drowning in raw data. Across thermal, SAR, optical, and soon LIDAR, satellite systems generate enormous volumes—petabytes of data. If machine learning and increasingly multimodal models are applied to that data, the value of processing more in orbit becomes clearer. Early prototypes can be tied directly to this reality by demonstrating onboard or nearby-in-orbit processing for satellites in LEO or GEO, with different constellation strategies influencing what “in-orbit compute” looks like in practice.

  3. The third category is hyperscalers and AI companies thinking about sustainable or off-grid compute. Engagement here may start as exploratory, but the signal is that the theme is entering mainstream discussion. Public comments from major technology leaders suggest that orbit-based compute is becoming a topic that serious organizations are at least evaluating.

An investable startup plan, then, is not a promise that the market will appear in five to ten years. It’s a staged strategy: de-risk the technical system through progressively harder demonstrations, and de-risk the commercial system by anchoring early commitments to specific capabilities and timelines. That combination is what turns a futuristic category into something an investor can underwrite.



Disclaimer
Please be aware: the information provided in this publication is for educational purposes only and should not be construed as financial or legal advice or a solicitation to buy or sell any assets or to make any financial decisions. Moreover, this content does not constitute legal or regulatory advice. Nothing contained herein constitutes an offer to sell, or a solicitation of an offer to buy, any securities or investment products, nor should it be construed as such. Furthermore, we want to emphasize that the views and opinions expressed by guests on The Scenarionist do not necessarily reflect the opinions or positions of our platform. Each guest contributes their unique viewpoint, and these opinions are solely their own. We remain committed to providing an inclusive and diverse environment for discussion, encouraging a variety of opinions and ideas. It is essential to consult directly with a qualified legal or financial professional to navigate the landscape effectively.
