The Scenarionist - Where Deep Tech Meets Capital

DeepTech Briefing

The First-Token Premium. Why “How Fast It Feels” Is Repricing the AI Stack | DTB 97

Plus: fusion moves to real projects, AI on factory floors, space and carbon rules, critical materials, and more.

Giulia Spano, PhD
Feb 08, 2026
∙ Paid

Welcome back to Deep Tech Briefing, the weekly intelligence for people who build, fund, and shape Deep Tech.

Every week we analyze Deep Tech as a chain of linked moves: startups hitting key milestones, established players adjusting around them, and public programs opening or closing paths for value to be created.

In this edition of Deep Tech Briefing

  • Interesting Reading – your Deep Tech reading list, pre-filtered.

    On late VC for science-heavy startups, deep-tech secondaries, agtech’s reset, blurred ARR definitions, staff liquidity, space data centers, AI “have-nots”, and mega-funds moving faster.

  • The Big Idea – where we slow down and connect one big shift.

    The First-Token Premium. Why “How Fast It Feels” Is Repricing the AI Stack

  • Deep Tech Key Moves – a structured read-out of what shifted deep tech this week.

    Fusion framed as future infrastructure, AI stepping onto factory floors and ranches, commodity hardware tested in orbit, storage anchored in caverns and garages, carbon-removal platforms consolidating, minerals recovered from tailings, and aircraft redesigned around outsized equipment.

  • Deep Tech Power Play – where policy and public money meet Deep Tech.

    Advanced nuclear, critical minerals, carbon removals, robotics, and farm innovation programs that will quietly tilt the playing field.


🔶 Interesting Reading

  • Why deep-tech startups get VC too late (and why public money has to move faster)

    Science|Business

    The familiar trap, quantified: science-based ventures are slow to win private capital, so early public support matters—but even flagship programs can bottleneck (e.g., long disbursement timelines) right when time-to-proof is everything.

  • Deep tech + secondaries: the new liquidity stack for capital-intensive innovation

    Crunchbase

    Celesta’s Viswanathan frames secondaries as rivaling M&A as an exit valve, bringing in deeper-pocketed buyers, while warning that “core tech” won’t be enough as markets again demand revenue growth and margin narratives.

  • Agtech isn’t dead—just sobering up (and that’s when real positions get built)

    AgFunderNews

    A useful “down-cycle” field memo: consolidation is harder when everyone’s weak, SPAC memories still haunt the category, and patient CVC can decide which tech survives long enough to matter.

  • a16z wants founders to chill about “insane ARR”—because a lot of it isn’t ARR

    TechCrunch

    The clean distinction that’s getting lost in the hype: contracted recurring revenue vs “run-rate math.” The subtext: durability, retention, and business quality are coming back as the real scoreboard—even for AI.

  • Secondary sales are shifting from founder flex to employee retention tooling

    TechCrunch

    Tender offers are becoming a compensation primitive: Clay/Linear/ElevenLabs-style liquidity is less “cash-out” and more “keep the team through the next comp cycle,” especially when IPO windows stay fickle.

  • SpaceX asks the FCC to greenlight a million-satellite “orbital data center” constellation

    Bloomberg

    Compute is turning into a space-and-spectrum logistics problem: solar-powered processing in orbit to feed AI demand, plus the unsexy constraints (debris/Kessler risk, astronomy, multi-shell altitudes, and whether “tow-truck satellites” become mandatory infrastructure).

  • Palantir’s Alex Karp says Europe and Canada are slipping into “AI have-nots”

    Fortune

    A blunt read on adoption as destiny: the gap isn’t model quality, it’s willingness to re-architect institutions around production AI—turning procurement speed, regulation, and “platform commitment” into geopolitical variables.

  • Abu Dhabi’s quiet $160B fund starts acting less “sovereign” and more “endowment”

    Bloomberg

    ADIC stepping into secondaries, upping Bitcoin exposure, and leaning into insurance is a signal: mega pools of capital are building new liquidity plumbing and moving faster up the risk curve—right as late-stage markets stay selective.

  • The AI models “rattling markets” are really a moat-repricing event

    BusinessDesk

    The selloff (hundreds of billions in a day; nearly a trillion across sessions) reads like investors finally underwriting “LLMs eat the app layer”—with legal/workflow automation as the narrative accelerant and SaaS multiples as the immediate casualty.


Inside the Premium Intelligence Layer

The free tier of Deep Tech Briefing offers a carefully curated set of Deep Tech readings each week.
The Full Edition, reserved for Premium Members, goes further: it reconstructs what shifted in deep tech that week, who drove those shifts, and what the implications are across technology, capital, and policy.

Premium Membership to The Scenarionist also unlocks the full ecosystem: all weekly series, in-depth analyses, guides, and playbooks designed for people who build, fund, and shape Deep Tech.



🔶 The Big Idea

The First-Token Premium.

Why “How Fast It Feels” Is Repricing the AI Stack

For years, the AI race was priced in training: bigger models, more FLOPs, more GPUs, more parameters, more everything.

This week’s Cerebras Series H—$1B raised at roughly a $23B post-money valuation—is a clean signal that the market is starting to price a different choke point: not how big intelligence can get, but how quickly it can arrive.

Outside the datacenter, “compute” is invisible.

What’s visible is the empty beat after you hit enter.

If the first words show up fast, the system feels present—something you can work with in a tight back-and-forth. If they don’t, the whole experience becomes waiting, and the interaction starts to feel like you’re submitting a task instead of collaborating.

This is why the conversation is moving away from “tokens per second” and toward time-to-first-token (TTFT).

TTFT is simply how long it takes before you see the first token of the reply. And it’s not just about how fast the model can think. TTFT includes how long you sit in line (queueing), the work the system does to read and prepare your prompt (prefill), and network delay.

It also usually gets worse with longer prompts because the system has to process more text before it can start generating. NVIDIA’s own documentation treats TTFT as a key metric because it describes what happens in real life—real prompts, real traffic, real bottlenecks—not a perfect lab benchmark.
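That decomposition—queueing, then prompt-length-dependent prefill, then network delay, all before the first visible token—can be made concrete with a small sketch. The code below is illustrative only: `fake_stream` and its delay parameters are hypothetical stand-ins for a real inference backend, and the measurement wrapper would work the same way over any streaming token iterator.

```python
import time

def time_to_first_token(stream):
    """Measure TTFT over any iterable of tokens.

    The backend is assumed to yield tokens as they are generated, so
    everything that happens before the first yield (queueing, prefill,
    network) is captured in the elapsed time.
    """
    start = time.perf_counter()
    first = next(stream)  # blocks through queueing + prefill + network
    return time.perf_counter() - start, first

# Hypothetical backend: a fixed queue wait plus prefill time that grows
# with prompt length, mimicking why longer prompts worsen TTFT.
def fake_stream(prompt_tokens, queue_s=0.02, prefill_per_token_s=0.0001):
    time.sleep(queue_s + prompt_tokens * prefill_per_token_s)  # pre-first-token cost
    for tok in ["Hello", ",", " world"]:
        yield tok

ttft, tok = time_to_first_token(fake_stream(prompt_tokens=500))
print(f"TTFT ~ {ttft * 1000:.0f} ms, first token: {tok!r}")
```

Doubling `prompt_tokens` in this toy model roughly doubles the prefill term while queueing and network stay fixed, which is the behavior the metric is designed to surface.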

OpenAI leaned on the same framing in its Cerebras partnership announcement.

© 2026 The Scenarionist · Privacy ∙ Terms ∙ Collection notice