Hardware for AI: The Data Center Math Investors Care About | Deep Tech Catalyst

A chat with Daniel Franke, Executive Director at M Ventures

Welcome to the 93rd edition of Deep Tech Catalyst, the educational channel from The Scenarionist where science meets venture!

If you’ve been tracking the AI buildout and wondering where the real economics will concentrate (chips, power, cooling, interconnects, or the software that binds them), this episode maps the terrain.

Today, we’re joined by Daniel Franke, Executive Director at M Ventures!

From capex-heavy growth to an efficiency-centric cycle, we’ll discuss how hyperscalers are shifting to obsessive, system-level design, and what that means for founders turning hard science into investable companies.

In this edition, we explore:

  • Why AI demand is real but bending toward profitability, and how that reframes hardware priorities

  • The constraints that govern scale: power, cooling, memory, bandwidth, and networking

  • Where the money goes (chips and fabric) vs. where it’s saved (energy, conversion, and thermal)

  • How tiny efficiency gains—like a 2% bump in power conversion—become multibillion-dollar outcomes

  • What great early customer conversations yield, and how the qualification process shapes the financing ladder

Whether you’re building AI infrastructure, investing in Deep Tech, or translating advanced physics into practical systems, this conversation offers a clear playbook for where to focus.

Let’s dive in! ⚡️


✨ For more, see Membership | Deep Tech Briefing | Insights | VC Guides


BEYOND THE CONVERSATION — STRATEGIC INSIGHTS FROM THE EPISODE

Framing AI Demand: Bigger Models, Synthetic Data, and Compounding Compute Needs

Over roughly the last eighteen months, AI has crossed the line from early promise to routine utility. In software engineering, tools like intelligent coding assistants have become part of the normal workflow rather than an experiment.

The same pattern is spreading across office work more broadly, where teams now lean on AI to draft, refine, and accelerate everyday tasks. The revenue signals are real and visible.

The dominant lesson from successive flagship releases is straightforward: scale still wins. With narrow caveats, bigger models perform better, and more data makes those models stronger.

The public internet has largely been scraped, which shifts the bottleneck. To keep advancing quality, organizations are generating synthetic data—effectively adding another heavy layer of compute before training even begins.

That compounds demand.

The implication is not just more GPUs; it’s larger clusters, denser interconnects, and broader, better-connected data centers.

The spending has already reached macroeconomic relevance.

In the United States, AI infrastructure capex has represented an outsized share of recent investment dynamics, with headline figures pointing to tens of billions deployed in a single quarter.

This has happened even while many businesses remain unprofitable—an unusual combination that won’t persist indefinitely. Early signs of a shift are emerging. Some hyperscalers are slowing or pausing footprint expansion.

On the model side, efficiency is coming to the fore, especially in inference. Pricing moves in the latest releases underscore a new commercial mindset in which performance must be delivered at materially lower cost.

Expansion remains massive because demand is real.

But the narrative is bending toward efficiency and unit economics, and that change will propagate down into hardware decisions just as surely as it has begun to shape software.

Constraints That Actually Matter: Power, Cooling, Data, and System Design

The basics come first: power, water, and cooling

No power, no data center. It sounds obvious, but it’s easy to forget when the conversation jumps straight to chips and fancy optics.

The reality on the ground is that electricity supply, access to water, and dependable cooling have moved from “assumed” to “make-or-break.” Sites that can’t secure enough power or keep machines within safe temperatures simply can’t grow.

That’s why the less glamorous parts of the stack—how power is brought in, conditioned, and turned into useful work; how heat is removed quickly and safely—suddenly matter so much.

Improvements here reduce the bills that arrive every month. They show up immediately in operating costs, which is why investors and customers pay attention.

Why synthetic data is now the data story

Most of the public internet has already been used to train models. Scraping a bit more won’t change much. The next step forward is making better training data on purpose.

That’s what “synthetic data” means in practice: creating new examples that help models learn more accurately.

The bottleneck isn’t running out of raw material; it’s having the tools and know-how to generate, filter, and blend this data so it actually improves results. That work needs extra compute before training even starts, which is why the demand on infrastructure keeps rising.

Memory, movement, and the network in between

There isn’t a single fix that unlocks everything. Memory remains costly and not fast enough for where AI wants to go.

Even if you make memory better, you still have to move information quickly and efficiently between memory and the chips that do the work. That’s where the links (electrical and optical) come in.

And above that sits the network: how servers talk to each other within a rack and across racks, how traffic is routed, and how the whole system stays stable under load. Each layer can become the limiting factor. Small gains in any one of them, when repeated across an entire cluster, add up to big wins.
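
To make that compounding concrete, here is a minimal back-of-the-envelope sketch in Python. Every figure in it (links per rack, rack count, per-link power, the size of the gain, and the electricity tariff) is an illustrative assumption rather than measured data; the point is only how a modest per-link saving multiplies across a full cluster.

```python
# Back-of-the-envelope: how a small per-link saving scales across a cluster.
# All figures below are illustrative assumptions, not vendor or operator data.

LINKS_PER_RACK = 64        # assumed interconnect links per rack
RACKS = 1_000              # assumed racks in the cluster
WATTS_PER_LINK = 15.0      # assumed power draw per link (W)
SAVING_FRACTION = 0.05     # a hypothetical 5% per-link efficiency gain
PRICE_PER_KWH = 0.08       # assumed industrial electricity price ($/kWh)
HOURS_PER_YEAR = 24 * 365

total_links = LINKS_PER_RACK * RACKS
watts_saved = total_links * WATTS_PER_LINK * SAVING_FRACTION
kwh_saved_per_year = watts_saved / 1_000 * HOURS_PER_YEAR

print(f"Links in the cluster:   {total_links:,}")
print(f"Continuous power saved: {watts_saved / 1_000:.0f} kW")
print(f"Annual energy saved:    {kwh_saved_per_year:,.0f} kWh")
print(f"Annual value:           ${kwh_saved_per_year * PRICE_PER_KWH:,.0f} per cluster")
```

Scaled across a fleet, and before counting the cooling load avoided for every watt not dissipated, this is the arithmetic behind "small gains add up to big wins."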

The new mindset: design the system, not the part

The biggest shift is how buyers think. Hyperscalers no longer believe that a single, better component will fix their problems. They design from the system level down.

That means choosing the rack layout, the switching fabric, the internal connections, the cooling approach, and the power delivery path as one coordinated plan, and then fitting components into that plan.

Their teams evaluate the details with real rigor, comparing options for the exact lasers, sensors, and modules that will live in their environment.

For startups, this sets a higher bar. A neat demo isn’t enough. The product has to slot into a real data center, work with the surrounding equipment, and move a metric that matters at the facility level, whether that’s energy use, reliability, speed, or cost to run. If it does, it gets attention. If it doesn’t, it stays in the lab.


Where the Money Goes—and Where It’s Saved: The Unit Economics of AI Infrastructure

Chips and networking as dominant capex

In the buildout phase, the cost structure is unmistakable. The lion’s share of capital goes into the compute itself and the fabric that connects it.

Racks of accelerators and the associated networking from the major vendors routinely account for the majority of upfront spend. It is not unusual for that combination to represent well over half of the budget.

That’s where checks get written, and it explains why roadmap awareness on chips and switches sets the tone for everything else.

Opex realities: energy, cooling, and conversion losses

Once the lights are on, the economics flip. Operating costs are dominated by energy, and energy’s impact shows up in multiple places at once.

The accelerators consume a massive share, but the cooling infrastructure is a headline item on its own, from HVAC down to chip-level thermal management.

Power conversion becomes a recurring cost that quietly compounds across the entire facility. These are not peripheral concerns. They are the difference between an attractive cost-to-serve and a painful one, especially at the cluster scale.

Why 2% matters: the 400V→50V→1V cascade and the multibillion-dollar delta

The pathway that power takes illustrates why small percentage gains matter. Incoming supply might enter the site around four hundred volts, step down to roughly fifty volts at the rack, and eventually arrive at about a volt at the chip.

Each conversion stage already operates at around 95% efficiency. Those last few percentage points are where meaningful money hides.

Moving from, say, 95 to 97 percent sounds incremental on paper, but applied to the electricity footprint of modern clusters and multiplied across fleets of data centers, it translates into savings large enough to support an entire startup thesis.
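
As a rough illustration of that arithmetic, the sketch below runs the cascade for a hypothetical site. The 100 MW IT load, the $80/MWh tariff, and the assumption that all three conversion stages improve equally are placeholders rather than disclosed figures; what matters is the structure of the calculation.

```python
# Worked example of the 400V -> 50V -> 1V conversion cascade.
# Site size, tariff, and stage count are illustrative assumptions only.

IT_LOAD_MW = 100.0       # assumed power actually delivered to the chips (MW)
PRICE_PER_MWH = 80.0     # assumed electricity price ($/MWh)
HOURS_PER_YEAR = 24 * 365
STAGES = 3               # facility -> rack -> chip conversion steps

def grid_draw_mw(it_load_mw: float, stage_efficiency: float) -> float:
    """Power pulled from the grid to deliver it_load_mw through all stages."""
    return it_load_mw / (stage_efficiency ** STAGES)

baseline = grid_draw_mw(IT_LOAD_MW, 0.95)   # every stage at 95%
improved = grid_draw_mw(IT_LOAD_MW, 0.97)   # every stage at 97%
mwh_saved_per_year = (baseline - improved) * HOURS_PER_YEAR

print(f"Grid draw at 95% per stage: {baseline:.1f} MW")
print(f"Grid draw at 97% per stage: {improved:.1f} MW")
print(f"Annual saving per site:     ${mwh_saved_per_year * PRICE_PER_MWH:,.0f}")
```

Under these assumptions, the two-point improvement is worth roughly seven megawatts of continuous draw, on the order of five million dollars a year for a single 100 MW site. Multiplied across a fleet of data centers, that is the multibillion-dollar delta the heading refers to.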

That is why founders who can credibly shift these percentages (whether through better topologies, components, or control) draw serious attention. The scale has become so great that even tiny improvements clear the bar for compelling returns.




Crafting an Investable Hardware Narrative: Enabling the New vs. Beating the Old

There are really two clean ways to matter in this market. One is to enable something customers cannot do today. The other is to do what they already do, but materially better on cost or energy.

When a technology unlocks a new capability—higher speeds, longer reaches, novel fabrics, tighter integration—early conversations tend to be less price sensitive.

The value is self-evident if it advances the roadmap.

On the efficiency side, the standard is harsher. It isn’t enough to be interesting; you have to be measurably superior to the incumbent on the metric that governs operating expense.

In both cases, technical credibility is the entry ticket. Either you demonstrate that a new path is viable, or you prove you outperform what’s shipping.

Reading roadmaps and credibly outrunning them

Customers are not operating in the dark. Major vendors publish enough forward guidance that the bar is visible. Switch families, optical fabrics, and accelerator generations telegraph where performance will be in the next refresh cycle.

The practical job is to study those trajectories and present a believable path to beating them.

That credibility comes from specifics: the device physics that underpin the advantage, the manufacturability that scales it, and the test data that shows it survives real integration constraints. If a customer can see how your curve stays above their roadmap, they will lean in.

Sampling, qualification, and the financing ladder

The commercial arc follows a familiar pattern. Early prototypes open doors to high-level evaluations. If the fit is promising, the relationship formalizes into a joint development agreement.

Sampling begins, integration wrinkles surface, and the qualification grind starts. Each step demands more capital than the last, which is why financing rounds naturally scale with progress.

Seed and Series A support the first devices and samples; larger raises fund the qualification work and the path to revenue. Throughout, customers remain focused on whether the technology lands inside their system constraints and advances a data center-level objective.

Pricing sensitivity where performance is the product

For technologies that extend the frontier, pricing tends to follow performance rather than lead it.

The conversation focuses on what becomes possible, with the working assumption that a fair price will be negotiated if the advantage holds. Where the proposition is efficiency—lower energy per bit, reduced cooling load, improved conversion—it flips.

You need a model that shows the savings in the customer’s terms and a technical basis that explains why the gains persist at scale.

The good news is that sophisticated buyers will do the math with you. If the performance stands up and the economics translate cleanly to their environment, price becomes a function of documented value.

Quantifying Value with Real Benchmarks: How to Price When Data Isn’t Public

It’s tempting to search for a neat, public ledger of operating costs and assume that it will anchor pricing. In practice, the most meaningful numbers live behind closed doors.

Energy tariffs by country and region provide a rough baseline, and large infrastructure suppliers occasionally publish materials that help triangulate. But the real thresholds—the conversion losses tolerated at each stage, the target cost per bit on specific fabrics, the acceptable thermal envelope for the next rack design—are set internally by hyperscalers and are rarely disclosed.

The strongest position comes from direct conversations with potential buyers who will tell you, under NDA and with specificity, where the bar sits today and where it needs to be for the next generation.

What great early conversations yield—and why they matter

Early technical dialogues are not about selling a price; they are about learning the yardstick you must beat.

The ideal outcome is a crisp statement from the customer that defines present performance and aspirational targets.

When a team hears, “This is our current line; for us to consider you, you need to be here,” it compresses months of guessing into a workable spec. That clarity doesn’t just guide engineering. It shapes trial design, sampling priorities, and how you present results.

It also builds credibility, because it shows you are solving the customer’s stated problem rather than optimizing in a vacuum.

Turning technical credibility into a pricing logic that customers can validate

When the value proposition is efficiency, pricing follows from a concrete savings model anchored to the customer’s environment.

Start with demonstrated technical deltas—lower energy per conversion, fewer watts per bit moved, higher reliability that reduces downtime—and translate them into operating impact at cluster scale.

Expect the buyer to rerun the math; sophisticated teams always do. Your job is to make that easy: present the assumptions you believe are true, show why the gains persist at fleet scale, and be ready to negotiate within a band that preserves reasonable margins.
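
A minimal sketch of what making that math easy can look like, assuming the proposition is an efficiency gain. Every input is a hypothetical placeholder meant to be replaced with customer-validated numbers, and the value-share fraction is a negotiating choice, not a rule.

```python
# Sketch of a savings-to-price model for an efficiency proposition.
# All inputs are hypothetical placeholders pending customer-validated data.

IT_POWER_SAVED_KW = 500.0   # assumed direct power saving at the IT load
PUE = 1.3                   # assumed facility overhead (cooling, conversion)
PRICE_PER_KWH = 0.08        # assumed electricity price ($/kWh)
FLEET_SITES = 20            # assumed number of comparable sites
VALUE_SHARE = 0.3           # fraction of savings offered as price (a choice)
HOURS_PER_YEAR = 24 * 365

facility_kw_saved = IT_POWER_SAVED_KW * PUE
annual_savings = facility_kw_saved * HOURS_PER_YEAR * PRICE_PER_KWH * FLEET_SITES
indicative_price_ceiling = annual_savings * VALUE_SHARE

print(f"Fleet-level annual savings:  ${annual_savings:,.0f}")
print(f"Indicative annual price cap: ${indicative_price_ceiling:,.0f} "
      f"({VALUE_SHARE:.0%} of documented value)")
```

Presenting the model this way lets the buyer swap in their own tariffs, overhead factors, and fleet size and rerun it, which is exactly the validation step described above.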

Where the proposition unlocks new capability, the path is different.

Technical proof that you stay ahead of published roadmaps is often enough to progress to a joint development agreement, with pricing sensitivity deferred until the advantage is validated in situ.

In both cases, the sequence is the same: earn trust with data, anchor value in the customer’s terms, and let price emerge from evidence rather than conjecture.





Disclaimer
Please be aware: the information provided in this publication is for educational purposes only and should not be construed as financial or legal advice or a solicitation to buy or sell any assets or to make any financial decisions. Moreover, this content does not constitute legal or regulatory advice. Nothing contained herein constitutes an offer to sell, or a solicitation of an offer to buy, any securities or investment products, nor should it be construed as such. Furthermore, we want to emphasize that the views and opinions expressed by guests on The Scenarionist do not necessarily reflect the opinions or positions of our platform. Each guest contributes their unique viewpoint, and these opinions are solely their own. We remain committed to providing an inclusive and diverse environment for discussion, encouraging a variety of opinions and ideas. It is essential to consult directly with a qualified legal or financial professional to navigate the landscape effectively.