The Scenarionist - Where Deep Tech Meets Capital

Underwriting Advanced Materials for AI Data Center Cooling

An analysis combining five case studies, thermoeconomics, and practical reasoning to navigate risk and build durable advantage

Feb 26, 2026

Every major technology transition in computing history has, at some point, hit a physical wall. In the 1990s it was clock speed. In the 2000s it was power density in single cores. Today, in the era of exascale AI training and trillion-parameter inference, the wall is thermal. A single rack of NVIDIA H100s can demand 100 kilowatts of cooling. The next generation is heading toward 300. The question of who removes that heat — and with what materials — is not an engineering footnote. It is, increasingly, the question on which the economics of artificial intelligence rest.

For the better part of two decades, the dominant narrative in data center infrastructure was about software-defined everything. The hardware beneath the workload — the racks, the power distribution, the cooling plant — was treated as a commodity cost center, a tax on the business rather than a source of value. Cooling in particular was viewed as solved: air handlers, computer room air conditioners, cold aisles, hot aisles. Good enough. Move on.

That assumption worked when a standard 1U server drew 300 watts and a rack held twenty of them. At 6 kilowatts per rack, air cooling was adequate and inexpensive. The thermal engineering required no particular brilliance — fans, airflow modeling, basic thermodynamics. Anyone could buy it. Nobody had a moat in cold air.

The AI buildout of 2022–2025 ended that assumption so decisively that it is now almost embarrassing to recall how confidently it was held. A single NVIDIA DGX H100 system draws approximately 10.2 kilowatts. A full rack of H100s, properly configured for dense AI training, runs between 60 and 100 kilowatts.

The Blackwell B200 GPU, announced in 2024 and ramping from early deployments into broader volumes through 2025, pushed thermal design power per chip to around 1,000 watts — roughly doubling the per-GPU power envelope in a short window. By early 2026, hyperscalers are actively planning rack deployments at 200 to 300 kilowatts, and the projections for the generation after Blackwell approach 500 kilowatts per rack.

Air cooling — the conventional, commodity answer — fails at roughly 30 to 40 kilowatts per rack under real-world data center conditions. It is not a matter of engineering creativity; it is a matter of physics.

Air has a specific heat capacity of roughly 1.005 joules per gram per kelvin; water’s is approximately 4.18. The gap per unit volume is far larger still, because water is roughly 800 times denser than air. The dielectric fluids used in immersion cooling systems typically sit between 1.0 and 1.3 joules per gram per kelvin, but their density and wetting behavior allow heat removal rates that air cannot approach regardless of fan speed or airflow velocity.
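A back-of-the-envelope sketch makes the disparity concrete. The 100-kilowatt load and 15-kelvin coolant temperature rise below are illustrative assumptions, not figures from any vendor specification:

```python
def coolant_flow_l_per_s(load_w, cp_j_per_g_k, density_g_per_l, delta_t_k):
    """Volumetric flow (L/s) needed to carry `load_w` watts away at a
    coolant temperature rise of `delta_t_k` kelvin: Q = m_dot * cp * dT."""
    mass_flow_g_per_s = load_w / (cp_j_per_g_k * delta_t_k)
    return mass_flow_g_per_s / density_g_per_l

# Illustrative scenario: a 100 kW rack, 15 K allowable temperature rise.
air = coolant_flow_l_per_s(100_000, 1.005, 1.2, 15)     # air at ~20 C, sea level
water = coolant_flow_l_per_s(100_000, 4.18, 997.0, 15)
print(f"air:   {air:,.0f} L/s")    # thousands of litres per second
print(f"water: {water:,.1f} L/s")  # under two litres per second
```

Pushing more than five cubic metres of air through a single rack every second is what makes the 30-to-40-kilowatt ceiling a physics problem rather than a design problem.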

When the thermal load per rack doubles and then doubles again, the medium doing the cooling must change. And when the medium changes, the materials that comprise and interact with that medium become suddenly, enormously important.

The physics of AI compute have thus elevated what was previously an operational question into a strategic one.

What materials sit at the thermal interface between a chip and the external world? What fluids carry that heat away? What refrigerants move it to the outside air or a secondary process? Who owns the intellectual property on those materials? Who has spent years qualifying them with the major OEMs? And what happens to a data center operator — or a colocation provider, or a hyperscaler — when a key cooling material becomes unavailable, as 3M’s 2022 announcement that it would phase out its PFAS-based Novec dielectric fluids demonstrated so vividly?

The deeper question is not whether liquid cooling will replace air cooling — that transition is already underway, faster than most 2022 projections anticipated. The deeper question is which of the many companies claiming to hold critical positions in this thermal stack actually possess the kind of durable, defensible, compounding advantage that justifies the capital being deployed into the sector today.

This analysis is organized in the following parts:

1. The thermoeconomic context: the actual cost and capacity math that makes thermal management a first-order business problem rather than an engineering curiosity.

2. The thermal stack, dissected layer by layer, identifying where proprietary materials have the greatest leverage.

3. Five detailed case studies of companies — ranging from well-capitalized spinoffs to EPFL-born startups — whose competitive positions hinge on the thermal stack, from pure materials moats (sorbents, TIMs) to system and qualification moats tightly coupled to fluid and materials choices.

4. The connection between materials advantage and cash flow, including a frank examination of how those advantages erode.

5. A practical evaluative framework for assessing thermal moat claims systematically.

6. A closing set of key takeaways on what is compounding.

Let’s Start!




1. The Thermoeconomics of AI Data Centers in 2026

The business case for premium cooling materials cannot be understood without first grasping the underlying economics of the AI infrastructure buildout.

The numbers are large enough that even fractional improvements in efficiency generate returns that justify substantial materials costs. Conversely, the scale of capital deployment means that incremental cost increases for cooling infrastructure, if they enable higher rack density and better chip utilization, can be absorbed readily — provided they actually deliver the performance claimed.

Consider the economics at the margin.

A hyperscale data center campus of 100 megawatts of IT load — not unusual in the current buildout, with several companies announcing gigawatt-scale commitments — represents roughly $1 billion to $1.5 billion in capital expenditure for facility construction alone, before servers.

Of that capex, mechanical and electrical infrastructure — including cooling — typically represents 40 to 50 percent of the total.
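Stringing those figures together, with midpoints of the cited ranges as illustrative assumptions rather than quotes from any operator’s books:

```python
# Back-of-the-envelope capex split for a 100 MW IT-load campus,
# using midpoints of the ranges cited above (illustrative only).
facility_capex_usd = 1.25e9   # midpoint of the $1.0B-$1.5B facility build
mep_share = 0.45              # mechanical + electrical at 40-50% of total

mep_capex_usd = facility_capex_usd * mep_share
print(f"mechanical/electrical capex: ${mep_capex_usd / 1e9:.3f}B")
print(f"per MW of IT load:           ${mep_capex_usd / 100 / 1e6:.2f}M")
```

On those midpoint assumptions, each megawatt of IT load carries several million dollars of mechanical and electrical spend — the pool from which premium cooling materials must justify themselves.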

Power Usage Effectiveness (PUE), the ratio of total facility power to IT power, determines how much of that mechanical spend is purely thermal overhead.

A PUE of 1.4, often associated with a well-run air-cooled facility, implies that about 29 cents of every dollar of electricity purchased is consumed by overhead rather than by IT compute.

A PUE of 1.03 to 1.06 would imply an overhead of roughly 3 to 6 percent.
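Both figures follow directly from the definition: since PUE is total facility power divided by IT power, the overhead share of total power is 1 − 1/PUE. A quick check:

```python
def overhead_fraction(pue):
    """Share of total facility power consumed by non-IT overhead.
    PUE = total / IT, so overhead / total = 1 - 1/PUE."""
    return 1.0 - 1.0 / pue

for pue in (1.40, 1.06, 1.03):
    print(f"PUE {pue:.2f} -> {overhead_fraction(pue):.1%} overhead")
```

This recovers the roughly 29 cents per dollar at a PUE of 1.4, and the 3-to-6-percent band for PUEs of 1.03 to 1.06.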

This post is for paid subscribers

Already a paid subscriber? Sign in
© 2026 The Scenarionist · Privacy ∙ Terms ∙ Collection notice
Start your SubstackGet the app
Substack is the home for great culture