The Optical Interconnect Rush: Powering the New AI Network Stack | Rumors
Six Startups, One Rumor. A new generation of photonic technologies is rewiring data‑center networks for the AI era—at light speed and with radical efficiency.
Welcome to Rumors.
✨ Rumors is a pattern-recognition layer for investors, founders, and operators who care about where the frontier is heading—before it becomes consensus, before the rest of the market connects the dots.
Discover. Compare. Assess. Stay ahead.
✨ Inside This Rumor
Optical Circuit Switching in AI Networks
Introduction – Why the explosion of AI is forcing a rethink of data center networks, from copper to optical.
Macro Frame: Why Now? – Five converging forces—from surging GPU clusters to breakthroughs in silicon photonics—are accelerating the shift toward optical circuit switching (OCS).
Framing the Problems – A breakdown of bottlenecks in today’s networks: bandwidth ceilings, energy inefficiencies, rigid topologies, and spiraling costs.
Market Metrics That Matter – Key indicators of momentum, from network power consumption and costs in large AI clusters to venture capital bets on photonic interconnects.
Player Mapping: Startups Shaping the Rumor – A close look at six startups unbundling the data center network, one beam of light at a time.
The Competitive Edge – A complementary ecosystem: each team removes a bottleneck electrons can’t.
Strategic Lens – What this means for hyperscalers, semiconductor makers, and the broader AI infrastructure ecosystem.
Risk Assessment Framework – A grounded look at what could break: from reliability and integration challenges to incumbent responses.
5–10 Years Out – Looking ahead: the vision of an AI data center built on light.
Conclusion – Built on Light: the new baseline for the AI network stack.
1. Introduction
Modern artificial intelligence workloads are placing unprecedented demands on data‑centre interconnects. As training runs scale into the tens of thousands of GPUs and accelerators, network bandwidth, latency, energy consumption and topology flexibility have become gating factors for innovation. Today’s hyperscale clusters routinely wire together 24,000 or even 30,000+ GPUs to train state‑of‑the‑art large language models and other deep‑learning architectures. During operations like all‑reduce or gradient shuffling, these accelerators collectively move petabytes of data, yet the underlying network architecture still resembles a multi‑tier Clos of electrical packet switches connected via copper or pluggable optics. Every hop through that fabric requires optical‑to‑electrical‑to‑optical (O‑E‑O) conversion, buffering, queuing and forwarding, adding microseconds of latency and burning hundreds of watts per device.
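To make the per-hop penalty concrete, here is a back-of-envelope latency sketch. All figures are illustrative round numbers of my own choosing (roughly 1 µs per electrical switch hop for O‑E‑O conversion plus queuing, ~5 ns/m light propagation in fibre, a 100 m path), not measured values from any vendor:

```python
# Illustrative latency comparison: multi-hop electrical fabric vs a
# direct optical circuit. All constants are assumed round figures.

HOP_DELAY_US = 1.0            # assumed per-switch O-E-O + queuing delay
FIBER_DELAY_US_PER_M = 0.005  # ~5 ns/m propagation in fibre
PATH_M = 100                  # assumed end-to-end cable length

def clos_latency_us(hops: int) -> float:
    """Latency through an electrical packet fabric with N switch hops."""
    return hops * HOP_DELAY_US + PATH_M * FIBER_DELAY_US_PER_M

def ocs_latency_us() -> float:
    """A pre-established optical circuit adds only propagation delay."""
    return PATH_M * FIBER_DELAY_US_PER_M

print(f"3-tier Clos (5 hops): {clos_latency_us(5):.2f} us")  # 5.50 us
print(f"Optical circuit:      {ocs_latency_us():.2f} us")    # 0.50 us
```

Under these assumptions the switching hops, not the fibre itself, dominate end-to-end latency, which is precisely the overhead an optical circuit removes.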
The result is a situation where compute keeps getting faster while the network “streets” remain congested. Analysts have likened this to a fleet of race cars stuck in traffic. In many superpods the network alone can consume multiple megawatts and cost $150–$300 million to build. Studies by MIT and Meta estimated that a full‑bandwidth Clos network for ~30,000 GPUs would draw about 4.7 MW, and that scaling to 65,000 GPUs pushes network power toward 6 MW. At that scale, pluggable optical transceivers can account for 10% of the total GPU power budget. NVIDIA has noted that a hypothetical 400,000‑GPU supercomputer would need roughly 24 MW just to power the lasers feeding its optics. Such numbers drive home the reality that yesterday’s copper‑centric interconnects are unsustainable for tomorrow’s AI ambitions.
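The cited figures can be turned into a quick arithmetic check on how large the network’s slice of the power pie actually is. The per-GPU draw of ~700 W below is my own assumption (roughly an H100-class accelerator); the 4.7 MW network figure is the MIT/Meta estimate quoted above:

```python
# Back-of-envelope share of cluster power consumed by the network,
# using the figures cited in the text. GPU_POWER_W is an assumption.

GPUS = 30_000
GPU_POWER_W = 700             # assumed per-accelerator draw (H100-class)
NETWORK_POWER_W = 4_700_000   # cited estimate for a full-bandwidth Clos

compute_power_w = GPUS * GPU_POWER_W
network_share = NETWORK_POWER_W / (compute_power_w + NETWORK_POWER_W)

print(f"Compute power: {compute_power_w / 1e6:.1f} MW")   # 21.0 MW
print(f"Network power: {NETWORK_POWER_W / 1e6:.1f} MW")   # 4.7 MW
print(f"Network share of total: {network_share:.1%}")     # 18.3%
```

Even under these rough assumptions, nearly a fifth of the facility’s power goes to moving bits rather than computing on them, which is why eliminating O‑E‑O stages is such an attractive lever.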
Given this pressure, a new class of technologies is entering the conversation: optical circuit switching (OCS). Instead of forwarding packets hop‑by‑hop through electrical devices, an optical circuit switch dynamically establishes end‑to‑end circuits using light. Photons travel through fibre with minimal latency and loss; there is no need for intermediate O‑E‑O conversions or store‑and‑forward buffering. In effect, an OCS acts like a gigantic, reconfigurable optical patch panel that can be programmed in microseconds or even nanoseconds to connect any two endpoints.
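The “reconfigurable optical patch panel” mental model can be sketched in a few lines of code. The class and its API below are entirely hypothetical, purely to illustrate the semantics: each input port carries exactly one circuit, and reconfiguration means remapping ports rather than forwarding packets:

```python
# Minimal illustrative model of an optical circuit switch as a
# reconfigurable patch panel. The class and API are hypothetical;
# real OCS hardware exposes vendor-specific control planes.

class OpticalCircuitSwitch:
    """Maps input ports to output ports as point-to-point light paths."""

    def __init__(self, num_ports: int):
        self.num_ports = num_ports
        self.circuits: dict[int, int] = {}  # in_port -> out_port

    def connect(self, in_port: int, out_port: int) -> None:
        # Each port carries exactly one circuit, so tearing down any
        # conflicting path is part of the (microsecond-scale) reconfig.
        self.circuits = {i: o for i, o in self.circuits.items()
                         if i != in_port and o != out_port}
        self.circuits[in_port] = out_port

    def route(self, in_port: int):
        # Light entering in_port exits out_port; no O-E-O conversion,
        # no buffering, no store-and-forward along the way.
        return self.circuits.get(in_port)

ocs = OpticalCircuitSwitch(num_ports=128)
ocs.connect(0, 7)     # e.g. GPU rack A <-> GPU rack B
print(ocs.route(0))   # 7
ocs.connect(0, 42)    # reconfigure: same input, new destination
print(ocs.route(0))   # 42
```

The key contrast with a packet switch is that there is no per-packet decision at all: once a circuit is programmed, the path is fixed until the next reconfiguration, trading flexibility per packet for near-zero in-flight overhead.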
Hyperscalers such as Google have already built custom OCS systems for internal TPU supercomputers, demonstrating performance and power advantages. Now a growing cohort of startups is racing to productise photonic switching for the broader market. Advances in silicon photonics, micro‑electromechanical systems (MEMS) mirrors and optical packaging have reduced costs and improved switching speeds by orders of magnitude. A wave of venture capital is following, with more than $300 million invested in OCS‑oriented companies since early 2023.
✨This report investigates why optical circuit switching is emerging now, what macro forces drive adoption, which problems it hopes to solve, and who the leading players are. It analyses market metrics, compares the approaches of six notable startups, examines competitive dynamics, follows where capital is flowing, and offers a strategic lens on how these shifts affect hyperscalers, semiconductor vendors and the wider AI infrastructure ecosystem. It also considers risks and limitations, then looks ahead to how data centres might evolve over the next decade if optical fabrics become mainstream.
✨ Before You Go Ahead:
Rumors is just one part of The Scenarionist experience. To enjoy the full experience, become a Premium Member!
The Scenarionist Premium is designed to make you a better Deep Tech Founder, Investor, and Operator. Premium members gain exclusive access to unique insights, analysis, weekly intelligence, and VC Guides distilling the wisdom of the world’s leading Deep Tech thought leaders, and more.
2. Macro Frame: Why Now?
Optical circuit switching is not a new idea; telecom operators have used large optical cross‑connects for decades. However, data centres and high‑performance computing systems historically stuck with electrical Ethernet or InfiniBand networks because optical switches were expensive, slow and complex to operate.
Five converging forces have changed that calculus: