Neuromorphic Computing: Market, Bottlenecks, and Use Cases | Deep Tech Catalyst

A chat with Peter Olcott, Principal @ First Spark Ventures

Welcome to the 113th edition of Deep Tech Catalyst, the educational channel from The Scenarionist where science meets venture!

This week, we explore one of the most intriguing compute questions beyond the current AI stack: what happens when the dominant architecture is no longer enough for systems that need to learn continuously, respond in real time, and operate under severe power constraints?

I sat down with Peter Olcott, Principal at First Spark Ventures, to unpack how a VC investor thinks about neuromorphic computing, why biology remains such a powerful reference point for the future of intelligence, and where founders might find the first investable openings in a field that sits between scientific ambition and practical system design.

Key takeaways from the episode:

🧠 Neuromorphic Computing Starts from a Different Model of Intelligence
Rather than optimizing the current transformer paradigm, neuromorphic computing revisits a deeper question: why biological intelligence can generalize, learn continuously, and operate in real time with extraordinary power efficiency.

🤖 Embodied AI May Be the First Real Commercial Wedge
Humanoid robots, drones, and autonomous machines need low-latency, low-power compute that can operate safely in the physical world. That makes embodied AI one of the clearest early markets for neuromorphic architectures.

⚛️ This Is Not Quantum Computing
Quantum and neuromorphic computing may both sit outside the mainstream stack, but they solve very different problems. One is designed for highly centralized, exceptionally hard scientific computation; the other aims to bring intelligence into distributed, real-world systems.

🧩 The Biggest Bottleneck Is Training, Not Just Hardware
The core challenge is not simply building brain-inspired chips. It is figuring out how to train these dynamic, spike-based systems in a stable and scalable way—something the field still has not fully solved.

📈 Investability Depends on a Staged, Full-Stack Strategy
The strongest companies in this space are unlikely to be isolated chip plays. They will need to own the broader system—training, inference, and silicon—and enter through emerging markets where traditional architectures remain structurally weak.



BEYOND THE CONVERSATION — STRATEGIC INSIGHTS FROM THE EPISODE

Neuromorphic Computing Begins with a Different Model of Intelligence

Neuromorphic computing is often presented as a futuristic departure from mainstream computing, but the idea is older than the current AI wave.

In many ways, it reaches back to the earliest stages of AI, when researchers were already trying to understand intelligence by recreating some of its underlying principles.

The field has always been tied to biology, not as a metaphor, but as a source of architectural insight.

Even some of the earliest neural-network concepts were rooted in biological observation. The original intuition was that if intelligence in nature emerged from networks of neurons, then perhaps artificial systems could be designed along similar lines.

What changed over time was not the disappearance of that idea, but the direction taken by modern AI. As large language models and transformer-based systems became dominant, AI in silicon moved further away from biological inspiration.

That divergence is part of what has brought neuromorphic computing back into focus. It is not trying to incrementally improve the dominant architecture.

It is trying to revisit a more fundamental question:

“What makes biological intelligence different from the digital intelligence we currently build?”

Biological intelligence and digital intelligence are not the same thing

That question matters because the contrast is not subtle. Today’s AI systems can appear remarkably capable. They can generate language, solve complex tasks, and often give the impression of reasoning.

But they do so through an implementation that is very different from the one found in the brain.

Biological intelligence combines several properties that current digital systems still struggle to achieve at the same time.

  • It operates in real time, with very low latency.

  • It reasons, but it also acts fluidly in the world.

  • It can drive a car, play sports, react to unexpected changes, and move continuously between fast perception and deeper thought without switching architectures.

That continuity remains difficult for current AI systems, which are powerful but still constrained by latency and by the amount of compute required to operate in real time.

Another difference is learning itself.

Human beings do not stop learning at some fixed point the way a model does once training ends.

Learning is continuous, cumulative, and inseparable from lived experience. Biological systems adapt from birth to death.

By contrast, most of today’s AI systems are effectively fixed once trained. They may be updated, fine-tuned, or retrained, but they do not learn in the ordinary course of use the way a person does.

That gap is not just philosophical. It points to a practical limitation in how current systems are deployed in the real world, especially in environments that require continuous local adaptation rather than periodic centralized improvement.

Generalization and power efficiency

Generalization is another defining distinction. Biological intelligence is extraordinarily efficient at transferring knowledge across contexts.

For instance, a human can learn to drive with a relatively modest amount of experience.

That ability to adapt from limited exposure stands in sharp contrast to the enormous quantity of training data and compute required by today’s AI systems to approach comparable real-world performance.

The issue is not that current models are ineffective. It is that they often require orders of magnitude more training data and compute than a person needs to reach a narrower form of competence.

Neuromorphic computing is, in part, an attempt to understand why that is.

Power consumption brings the contrast into even sharper focus.

The brain operates on roughly 20 watts. That figure becomes striking when compared with the computational resources required to sustain state-of-the-art AI systems.

Even highly capable digital models consume dramatically more energy while still delivering only a subset of what biological intelligence can do.

A human brain does not just produce language. It coordinates movement, perception, memory, sensory integration, and real-time interaction all at once.
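
As a rough, illustrative comparison, a few lines of arithmetic show the scale of that gap. The accelerator figures below are assumptions for the sake of the sketch, not numbers cited in the episode.

```python
# Back-of-envelope power comparison: human brain vs. a data-center AI node.
# Accelerator figures are illustrative assumptions, not measured values.

BRAIN_WATTS = 20                # commonly cited estimate for the human brain
ACCELERATOR_WATTS = 700         # rough power draw of one modern data-center GPU
GPUS_PER_SERVER = 8             # assumed size of a single inference node

server_watts = ACCELERATOR_WATTS * GPUS_PER_SERVER
print(f"One assumed inference server: ~{server_watts} W")
print(f"Ratio vs. the brain: ~{server_watts / BRAIN_WATTS:.0f}x")
# And that server handles language alone, while the brain's ~20 W also covers
# movement, perception, memory, sensory integration, and continuous learning.
```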

So when neuromorphic computing looks to biology, it is doing so because biology appears to solve an intelligence problem with a level of efficiency, adaptability, and responsiveness that current digital architectures have not matched.



Why Building the Brain at Scale Isn’t Simple

Once the field is framed in those terms, the difficulty becomes obvious. Neuromorphic computing is compelling precisely because biological intelligence appears to do things current architectures do not.

But the moment one tries to reproduce that capability in hardware, the scale and complexity of the brain become impossible to ignore.

The challenge is not just to build something inspired by neurons. It is to understand what must actually be replicated for brain-like behavior to emerge at scale.

And there is a simple but not obvious point: copying the brain at a small scale does not necessarily produce the outcomes people imagine.

There have been serious efforts to build silicon neurons directly, using analog circuits to mimic the electrical behavior of biological neurons as closely as possible.

The intuition is understandable: if one can recreate the neuron faithfully and wire enough of them together, perhaps intelligence will follow.

But that view runs into a brutal scaling problem.

The human brain contains roughly 86 billion neurons, and the greater challenge is not just the number of neurons themselves, but the density and diversity of their connections.

On average, each neuron connects to thousands of others.

The result is an immense web of connectivity whose complexity is difficult even to represent, let alone reproduce in silicon.

Connectivity is the real source of complexity

That connectivity problem is central.

Neurons do not simply talk to their immediate neighbors in a neat local pattern. They connect across regions, across functions, and across long structural pathways.

The brain is not a flat network. It is an extraordinarily dense three-dimensional system.

Even storing the information that describes which neurons connect to which others, along with the strength of those connections, requires terabytes of storage.

This means that the challenge is not limited to computational logic. It immediately becomes a problem of architecture, density, and physical organization.
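
A back-of-envelope estimate makes the storage claim above concrete. The per-connection figures are illustrative assumptions; the point is the order of magnitude, not the exact number.

```python
# Rough size of a table that merely lists which neurons connect to which,
# plus a weight for each connection. Per-connection costs are assumptions.

NEURONS = 86e9                # ~86 billion neurons
SYNAPSES_PER_NEURON = 7_000   # "thousands" of connections each
BYTES_PER_SYNAPSE = 8         # assume ~4 bytes for a target index + ~4 for a weight

total_bytes = NEURONS * SYNAPSES_PER_NEURON * BYTES_PER_SYNAPSE
print(f"~{total_bytes / 1e12:,.0f} TB just to describe connectivity and weights")
# Even far more conservative assumptions land in the hundreds of terabytes,
# before any dynamics, state, or structural detail is represented at all.
```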



Where Neuromorphic Computing Could Win First

If the long-term ambition is to build systems that bring intelligence closer to the flexibility, responsiveness, and efficiency of biological systems, the most credible early wins are unlikely to come from trying to beat existing architectures on their own terms.

They are more likely to come from environments where today’s architectures are structurally disadvantaged—where power, latency, and local adaptation matter more than raw centralized compute.

That is why the most promising early use cases sit in the physical world.

Neuromorphic computing is particularly well matched to situations where intelligence must operate directly inside machines, in real time, under strict energy constraints.

These are not edge cases. They are the conditions that define a growing class of important systems.

Embodied AI as the most natural entry point

One of the clearest application domains is embodied AI.

As intelligence moves out of the cloud and into robots, autonomous systems, and physical devices, the requirements begin to change.

A text model can tolerate latency in ways a robot cannot.

If a prompt response arrives half a second late, it may be frustrating. If the same delay occurs in a machine interacting with the real world, the result can be unsafe.

That difference is crucial.

Physical systems do not just need intelligence. They need intelligence that can act continuously, respond instantly, and do so without carrying the energy burden of a data center.

This is where neuromorphic computing starts to look like a potentially well-suited architecture for the next generation of machine intelligence.

Humanoid robotics makes this especially visible.

These systems need compact, power-efficient, low-latency compute to coordinate sensing, balance, motion, and interaction in real time.

Yet the compute requirements are high, and the power budget is limited.

Adding more traditional compute means more energy consumption, more battery weight, and often more mechanical stiffness. In other words, the computational architecture directly affects the physical design and safety profile of the robot.

From that perspective, the most compelling target is not simply “AI for robots,” but the brain of the robot itself.

A neuromorphic system could become the core compute layer that allows a humanoid platform to behave more dynamically and more safely under real-world constraints.

That does not mean everything must become neuromorphic at once. A more realistic view is that the first successful systems may be hybrid.

Different parts of a machine can be optimized for different computational tasks, and neuromorphic chips may enter first where the performance gap is most acute.

In robotics, that could mean low-level control.

Actuator control, balance, inverse kinematics, and similar functions require fast, efficient feedback loops.

These are domains where latency matters immediately and where there is value in specialized compute that can operate at much higher update rates, potentially closer to kilohertz than to the low hertz rates still common in many embedded systems.

Running physical systems at very low update rates creates obvious risks. In fast-moving machines, a delayed response is not a minor inconvenience; it is a core limitation.
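
A minimal sketch of a fixed-rate control loop shows why the update rate matters: at 1 kHz, the entire sense-compute-actuate cycle must fit inside a one-millisecond budget. The rates, the trivial proportional controller, and the placeholder I/O functions are illustrative assumptions, not a description of a real robot stack.

```python
import time

CONTROL_RATE_HZ = 1_000           # assumed 1 kHz loop: a 1 ms budget per cycle
PERIOD_S = 1.0 / CONTROL_RATE_HZ

def read_joint_angle() -> float:
    return 0.0                    # placeholder for a real encoder read

def send_motor_command(cmd: float) -> None:
    pass                          # placeholder for a real actuator write

target_angle, k_p = 0.5, 2.0      # desired angle (rad) and proportional gain

next_tick = time.monotonic()
for _ in range(CONTROL_RATE_HZ):  # run one simulated second of control
    error = target_angle - read_joint_angle()
    send_motor_command(k_p * error)   # simple proportional correction
    next_tick += PERIOD_S
    time.sleep(max(0.0, next_tick - time.monotonic()))
```

Drop the rate to 10 Hz and the same machine can only correct itself every 100 ms, which is exactly the kind of delayed response described above.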

Neuromorphic chips could also play an important role in sensor processing.

Vision, audio, and other incoming streams may be preprocessed locally in ways that reduce latency and improve responsiveness before being passed into more traditional architectures.

In that sense, the opportunity is not only to build a monolithic neuromorphic machine, but to distribute intelligence across a device in a way that mirrors biology more closely.

The human nervous system already works through specialized regions and layered processing. A machine architecture that adopts a similar logic may be more achievable in the near term than a single all-encompassing neuromorphic platform.

Another reason embodied systems are such a strong fit is that they expose one of the weaknesses of current AI deployment models.

Today’s systems typically depend on centralized training pipelines.

Data is collected, labeled, sent back, retrained, and then redistributed as model updates.

That process can work at scale, but it is poorly suited to environments where each instance accumulates useful local knowledge that may never make it back into a global model in a meaningful way.

Neuromorphic computing becomes interesting here because continual learning is not a secondary feature. It is part of the promise.

A machine operating in the field could adapt from repeated exposure to its own environment rather than waiting for centralized retraining.

A local system can accumulate narrow but valuable knowledge tied to its own operating context—repeated routes, recurring obstacles, or environment-specific signals—that may never be meaningfully propagated back through a centralized training loop.

In a conventional system, it is far from guaranteed that such experience would ever be translated into a model update that materially improves that individual unit’s behavior.
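
As a loose illustration of the contrast, local adaptation can be as simple as a system updating its own parameters in place as observations stream in, instead of shipping data away and waiting for a centralized retraining cycle. The example below is a generic online-update sketch, not something described in the episode.

```python
# On-device continual adaptation, in miniature: a detection threshold that
# drifts toward the unit's own operating conditions with every observation.

class LocallyAdaptiveThreshold:
    def __init__(self, initial_m: float, learning_rate: float = 0.05):
        self.value = initial_m            # current threshold, in metres
        self.learning_rate = learning_rate

    def update(self, observed_m: float) -> None:
        # Each local observation nudges the threshold; no central loop needed.
        self.value += self.learning_rate * (observed_m - self.value)

detector = LocallyAdaptiveThreshold(initial_m=1.5)
for reading in [1.4, 1.6, 0.9, 1.2, 1.1]:   # streaming obstacle-distance readings
    detector.update(reading)
print(f"Locally adapted threshold: {detector.value:.2f} m")
```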

This is what makes edge autonomy such a natural application category.

Drones, delivery robots, and other untethered autonomous systems all operate under pressure from power limits, intermittent connectivity, and the need for immediate response.

A different path for human-computer interaction

Beyond robotics and autonomy, another compelling frontier is human-computer interaction.

This is a very different use case, but it draws on the same underlying advantages. If the goal is to create digital systems that interact more naturally with people, the challenge is not only to generate text or images. It is to sustain fluid, emotionally responsive, low-latency interaction across multiple modalities at once.

A more human form of interface would need to process tone, timing, expression, visual cues, and conversational dynamics in real time.

It would need to react not just accurately, but naturally. That kind of interaction is difficult to achieve efficiently with today’s dominant architectures, especially when the system must generate and interpret audio, video, and affective signals simultaneously while adapting through use.

This is where neuromorphic computing opens a different possibility.

The architecture is not attractive merely because it might be cheaper or smaller. It is attractive because it may be better aligned with the kind of continuous, context-sensitive, real-time processing that natural interaction requires.

If that capability becomes technically viable, it could evolve into a universal interface layer embedded across devices—from phones and vehicles to household systems and everyday consumer products.

That market may not fully exist today, but the logic is already visible.

Whenever a capability becomes broadly useful and repeatedly needed across many devices, there is a strong incentive to create custom compute optimized for that task.

Natural human-computer interaction has the characteristics of precisely that kind of opportunity.

So the near-term promise of neuromorphic computing is not that it will replace everything current AI does. It is that it may solve classes of problems current AI handles inefficiently.

The strongest early markets are those where intelligence must leave the cloud, live inside physical systems, adapt locally, and operate under real-time constraints.

In those settings, neuromorphic computing does not look like a fringe alternative. It looks like a candidate architecture for making AI truly native to the real world.



Neuromorphic Computing vs. Quantum Computing

As interest in alternative computing architectures grows, how should we think about neuromorphic computing in relation to quantum computing?

Both sit outside the mainstream digital stack, both carry a strong sense of future potential, and both are often framed as breakthrough technologies rather than incremental improvements.

However, the two paradigms solve fundamentally different problems, operate under radically different assumptions, and address different bottlenecks across the computing landscape.

Two architectures built for different kinds of intelligence

Quantum computing is rooted in quantum effects at the most fundamental level of physics. To make those effects usable, the system has to be kept under extremely controlled conditions.

That usually means highly sensitive hardware, deep isolation from the surrounding environment, and specialized infrastructure that is inherently difficult to distribute. In practical terms, quantum computers tend to be centralized systems.

They are designed to tackle extraordinarily hard computational problems that would be intractable for conventional machines.

That makes them powerful in a very specific way.

A quantum computer is most compelling when the task itself is exceptional: discovering new materials, solving unusually complex scientific problems, or unlocking categories of computation that cannot be addressed efficiently with classical methods.

These are important use cases, but they are not the kinds of tasks most people or most everyday machines perform continuously.

Neuromorphic computing sits at the opposite end of that spectrum.

Its ambition is not to isolate compute from the world in order to solve impossibly hard abstract problems. It is to bring intelligence more effectively into the world itself.

It is concerned with real-time behavior, low-power operation, responsiveness, and adaptation in physical systems.

Where quantum computing is about solving rare but extremely demanding problems, neuromorphic computing is about making distributed systems more naturally intelligent in everyday operation.

Centralized breakthrough compute versus distributed physical-world intelligence

That distinction matters because it shapes the entire economic and technical logic of each field.

Quantum computing is naturally aligned with centralized scientific infrastructure. A company, laboratory, or institution may use it to solve a breakthrough problem once, and that result can then support years of downstream value creation.

The computer itself does not need to be embedded everywhere. Its value comes from solving a narrow class of extremely important problems at the frontier of science and engineering.

Neuromorphic computing points toward the reverse model.

If it succeeds, its impact would come not from a few centralized machines, but from widespread deployment across devices and systems that need intelligence at the edge.

It is meant for robots, autonomous machines, sensor-rich environments, and human-facing systems that must operate continuously under power and latency constraints.

In that sense, it is less about concentrated computational supremacy and more about making small things (or, one day, big things) smart.



The Real Bottleneck Is Not Just Hardware

For all the attention neuromorphic computing receives as a hardware story, the deepest constraint may sit elsewhere.

The instinctive assumption is that the challenge is mainly about fabricating better chips, denser architectures, or more biologically inspired circuits.

Those are real issues, but they are not necessarily the ones that matter first. The more fundamental obstacle is training.

The hardest question is how to train the system at all

This is the point where enthusiasm often gives way to scientific reality. If neuromorphic architectures are modeled on biological systems, then the question is not only how to build spiking neurons or brain-like connectivity in silicon. It is how to train such a system in a stable and scalable way.

That remains unresolved.

The core difficulty is that biological neural systems do not behave like the architectures that underpin mainstream AI today.

They are highly dynamic, oscillatory, and spike-based. That makes them appealing from an intellectual standpoint, but much harder to train in practice.
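
A tiny sketch of a leaky integrate-and-fire neuron, the textbook building block of spiking networks, shows where part of the difficulty comes from: the spike is a hard threshold with a discontinuous reset, so the gradient-based machinery behind today's training pipelines does not apply directly. This is a generic illustration, not a description of any particular company's approach.

```python
import numpy as np

def lif_run(input_current, decay=0.9, threshold=1.0):
    """Simulate one leaky integrate-and-fire neuron over an input sequence."""
    v, spikes = 0.0, []
    for i in input_current:
        v = decay * v + i                       # leak, then integrate the input
        spike = 1.0 if v >= threshold else 0.0  # hard step: no useful gradient
        if spike:
            v = 0.0                             # discontinuous reset after firing
        spikes.append(spike)
    return np.array(spikes)

rng = np.random.default_rng(0)
print(lif_run(rng.uniform(0.0, 0.5, size=20)))  # binary spike train output
```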

When attempts are made to extract information, assign weights, and produce stable learning behavior, the result can be unstable networks that collapse rather than converge.

This is not a secondary technical detail. It is one of the central scientific bottlenecks that determines whether a neuromorphic architecture can evolve from an interesting demonstration into a viable computing platform.

A startup may have beautiful hardware and compelling biological inspiration, but without a convincing answer to the training problem, the rest of the proposition remains incomplete.

From an investment standpoint, this becomes a gating issue. The architecture must show not only that it can run, but that it can be trained reliably.

The training issue becomes even more significant because the broader AI ecosystem has already built massive infrastructure around a very different paradigm.

Today’s leading models are trained through centralized data pipelines, large research teams, benchmark-driven workflows, heavy annotation, and tooling stacks refined over years of transformer development.

That ecosystem cannot simply be lifted and transferred onto neuromorphic systems.

This matters because every time the architecture changes fundamentally, the surrounding infrastructure must change with it.

The challenge is no longer limited to the chip.

It extends to data handling, training methods, software tooling, research practices, and the people capable of operating the system.

A neuromorphic company is not just introducing a new processor. It is proposing a different training regime, and with that comes a change in organizational capability as well.

That is one reason current neuromorphic systems cannot simply take existing large models, download them, and map them onto a new chip.

The accumulated value embedded in today’s LLMs and related architectures does not transfer in any straightforward way. The field is not inheriting the current AI stack. It is, in a meaningful sense, starting over.

Memory density is the second structural constraint

Even if the training problem is eventually addressed, another structural issue remains: memory.

Neuromorphic systems need to store parameters, weights, and state. But once the architecture combines memory and compute in a tightly integrated way, a new trade-off emerges.

The density of that memory becomes critically important.

This is where conventional digital architectures still retain a powerful advantage. Modern transformer systems rely on specialized memory technologies that are highly optimized for storing enormous amounts of information in compact spaces and moving it quickly.

High-bandwidth memory is not just an accessory to those systems. It is one of the reasons they scale.
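
A quick calculation suggests why that density advantage is so hard to ignore. The model size and precision below are illustrative assumptions.

```python
# Why weight-storage density matters: even a mid-sized modern model is an
# enormous amount of state to keep close to the compute. Figures are assumptions.

PARAMS = 70e9            # assume a 70-billion-parameter model
BYTES_PER_PARAM = 2      # 16-bit weights

weight_gb = PARAMS * BYTES_PER_PARAM / 1e9
print(f"Weights alone: ~{weight_gb:.0f} GB")
# ~140 GB is roughly the combined high-bandwidth memory of two modern
# accelerators. A design whose per-weight storage is far less dense must
# either shrink the models it can hold or grow well beyond practical chip area.
```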

Neuromorphic architectures often lack that same density. If the neuron-like components and their associated weights occupy too much space, then the system runs into scaling limits very quickly.

The aspiration may be to create a compute-and-memory architecture that is more brain-like, but if the density is too low, the result is a platform that remains confined to relatively simple applications.

That does not make it useless. In fact, there may be many commercially relevant simpler applications. But it does impose limits on the claim that these systems are a near-term route to very advanced general intelligence.

What this means for startup founders and teams

These bottlenecks also reshape what technical credibility looks like in the field. A founder does not need to manufacture full silicon at the outset to prove whether a concept is viable.

Many of the most important questions can and should be explored in simulation before large amounts of capital are spent on hardware.

That changes how a serious neuromorphic company should be judged. The strongest teams are likely to be those that have been working on these foundational problems for a long time and understand where the real points of failure are.

This also means that team composition is unusually important.

The challenge is not simply to gather chip designers. It is to bring together people who understand the interaction between architecture, learning dynamics, memory constraints, and system-level behavior.

The field is difficult precisely because no single layer can be treated in isolation.

So the bottleneck in neuromorphic computing is not just that hardware is hard. It is that the architecture, the training method, the memory model, and the software stack all have to evolve together.



What an Investable Company Looks Like

Once the technical ambition is brought back down to company-building reality, the picture becomes clearer.

Neuromorphic computing may be a frontier field, but that does not automatically make every company in the space investable.

The path to a fundable business depends on whether the team can translate a scientific and architectural breakthrough into a staged development plan, a credible product scope, and a go-to-market strategy that does not ask the impossible on day one.

The timeline is long, but still within venture logic

This is not a near-term software cycle. It belongs in Deep Tech venture timelines.

A realistic horizon is closer to 7 to 10 years than to a short product sprint, but that still leaves it within the bounds of what can be financed if the roadmap is constructed properly.

What makes it fundable is not the idea that everything must be solved upfront.

A credible neuromorphic company would need to advance in stages, beginning with narrower problems where the advantages of the architecture can already matter.

Full-stack matters more than isolated brilliance

Moreover, neuromorphic computing is not just a chip problem. It is a chip, a software stack, an inference system, and a training system that all have to work together.

A company that only brings one piece of that puzzle, without control over the rest, risks becoming disconnected from the actual value creation.

That is why the more compelling company model is vertically integrated.

The business has to deliver the full solution, not just silicon. It needs to show how the model is trained, how inference is performed, and how the hardware and software interact in a usable end-to-end system.

Otherwise, the customer is left with an impressive technical component but no practical way to deploy it.

In a field where the surrounding infrastructure does not yet exist in mature form, integration becomes a strategic necessity. It is not enough to claim that someone else will build the tooling, the model pipeline, or the deployment layer later.

The company has to behave as though those layers are part of its own responsibility.

The best markets may be the ones that do not exist yet

The other defining characteristic of an investable neuromorphic company is market choice. The instinct to attack large, established markets can be misleading here.

Trying to enter the data center and compete directly with incumbent architectures on their own ground is a poor fit for a new technology that still has major scientific and technical hurdles to overcome.

A more coherent strategy is to go after new or underserved environments where traditional architectures are structurally weak.

The right markets are likely to be those where centralized GPU compute cannot go easily, where energy is constrained, where latency is mission-critical, and where intelligence must live inside the system rather than be reached through the cloud.

That may sound niche at first, but it does not have to remain small.

New compute paradigms often become viable by solving problems existing systems handle badly, not by trying to beat them everywhere at once.

In neuromorphic computing, the opportunity may come from entering spaces that are still emerging rather than displacing incumbents in mature ones.

So an investable company in this field is the one that pairs a real technical breakthrough with a staged roadmap, builds the full stack rather than a detached component, and chooses markets where the architecture’s strengths are genuinely native to the problem.

That is the version of the story that can move from scientific fascination to venture-backed company building.


Whether you’re an experienced investor leading an established fund, an emerging manager stepping into the field, an angel investor exploring new opportunities, or a founder eager to see the industry from a fresh perspective, The Scenarionist Premium is built for you.

You’ll have access to:

  • Startup case studies that have been successfully deployed in real industrial settings.

  • In-depth due diligence and execution frameworks designed to win.

  • Curated, independent analysis of weekly Deep Tech inflection points, from scaling signals to incumbent moves and policy shifts.

Join Premium


Disclaimer
Please be aware: the information provided in this publication is for educational purposes only and should not be construed as financial or legal advice or a solicitation to buy or sell any assets or to make any financial decisions. Moreover, this content does not constitute legal or regulatory advice. Nothing contained herein constitutes an offer to sell, or a solicitation of an offer to buy, any securities or investment products, nor should it be construed as such. Furthermore, we want to emphasize that the views and opinions expressed by guests on The Scenarionist do not necessarily reflect the opinions or positions of our platform. Each guest contributes their unique viewpoint, and these opinions are solely their own. We remain committed to providing an inclusive and diverse environment for discussion, encouraging a variety of opinions and ideas. It is essential to consult directly with a qualified legal or financial professional to navigate the landscape effectively.
