The Autonomous Driving Stack: Opportunities, Bottlenecks, and Business Models | Deep Tech Catalyst

A chat with Ivy Nguyen, Investor @ GFT Ventures

Welcome to the 104th edition of Deep Tech Catalyst, the educational channel from The Scenarionist where science meets venture!

Autonomous driving is getting real attention again—not as a sci-fi layer on passenger cars, but as a blunt, practical tool for sectors that can’t find enough skilled operators to keep tractors, forklifts, and trucks moving.

Behind the robotaxi noise sits a less visible but critical set of questions: where autonomy actually makes economic sense first, how much data it really takes to reach “good enough,” and which business models can survive the long, expensive slog to OEM integration.

Look under the hood, and a few structural challenges show up.

  • Early traction tends to appear off-road in constrained, industrial settings long before fully open roads are viable.

  • Simulation can speed things up, but autonomy is still a “data, data, data” business that ultimately hinges on real-world experience.

  • And the most resilient teams are moving away from full-stack hardware dreams toward software-first, OEM-aligned platforms—while carrying years of retrofit costs just to be taken seriously.

To unpack how autonomy becomes an actual business, I spoke with Ivy Nguyen, investor at GFT Ventures!

Key takeaways from the episode:

🤖 Autonomy Solves Labor Scarcity Before It Cuts Labor Cost
Early adopters are less focused on removing drivers from the P&L and more on filling roles they simply cannot hire for, especially in remote or hard-to-staff locations.

🏭 Off-Road Domains Are Where Commercial Autonomy Starts
Controlled environments like warehouses, yards, farms, and factory floors offer bounded complexity and clearer liability, making them more viable than open roads for early deployments.

📊 Data Is Still the Hardest Bottleneck
New model architectures and simulation help, but real-world data remains essential; winning teams are those that can acquire and generate it quickly and economically.

🧩 Software-First, OEM-Integrated Is the New Default
Instead of full-stack vehicle startups, the emerging pattern is autonomy as a software layer integrated into legacy OEM platforms and distributed through their dealer networks.

🏗️ OEM Partnerships Take Years, Retrofits, and Meaningful Capital
Reaching scalable distribution via OEMs often requires four to five years, multiple parallel OEM conversations, and $10M+ of retrofit and field work to prove both technical performance and customer demand.


Deep Tech Monthly in Review - December 2025

Field notes from last month in Deep Tech startups and private markets — a strategic recap for Builders and Backers.


BEYOND THE CONVERSATION — STRATEGIC INSIGHTS FROM THE EPISODE

The Logic of Autonomous Driving

In practical terms, autonomous driving is the ability of a machine to move from one point to another without relying on a human operator to control it in real time. The operator is not sitting in the cabin of the vehicle, and may not even be physically present on site.

The core requirement is that the system itself can handle the act of driving.

That definition applies across a wide range of platforms. It can be a car, a truck, a forklift, a tractor, or any other machine that needs to move through an environment in a controlled way.

What changes from case to case is not the fundamental concept of autonomy, but the environment in which that autonomy has to work and the constraints it must respect.

Understanding the Environment, Not Just Following a Route

For a system to qualify as genuinely autonomous, it is not enough to replay a fixed route. The vehicle must be able to understand what is around it and make decisions continuously as conditions change.

That includes recognizing obstacles in front, to the side, and behind; distinguishing between static structures and moving actors; and updating its plan as new information arrives.

The machine has to chart a path from point A to point B based on the rules of the environment it is operating in.

  • On a public road, that means following traffic laws and adapting to how human drivers actually behave on highways and city streets.

  • In a warehouse or factory, it means respecting internal safety rules, right-of-way conventions, and the layout of aisles, shelves, and loading areas.
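As a toy illustration of “chart a path from point A to point B” under the rules of a bounded environment, the sketch below runs a breadth-first search over a small occupancy grid. The grid, the start and goal cells, and the four-direction movement model are all invented for the example; a real planner would work in continuous space with vehicle dynamics.

```python
# Toy path planning: breadth-first search over an occupancy grid.
# 0 = free cell, 1 = obstacle. All values here are illustrative.

from collections import deque

def shortest_path(grid, start, goal):
    """Return the number of N/S/E/W steps from start to goal,
    treating 1-cells as obstacles; None if the goal is unreachable."""
    rows, cols = len(grid), len(grid[0])
    frontier = deque([(start, 0)])
    seen = {start}
    while frontier:
        (r, c), dist = frontier.popleft()
        if (r, c) == goal:
            return dist
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and grid[nr][nc] == 0 and (nr, nc) not in seen:
                seen.add((nr, nc))
                frontier.append(((nr, nc), dist + 1))
    return None  # boxed in: no route respects the environment's constraints

warehouse = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],   # a shelf row with a single gap
    [0, 0, 0, 0],
]
print(shortest_path(warehouse, (0, 0), (2, 0)))  # -> 6: the route detours through the gap
```

The point of the toy is the structure, not the algorithm: the “rules of the environment” show up as constraints on which moves are legal, and the planner searches only within them.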

The bar for safety is high.

An autonomous system must avoid damaging itself, avoid damaging the infrastructure around it, and, above all, avoid harming people, animals, or anything else that might enter its path.

The entire autonomy stack — from sensing, to perception, to planning and control — exists to make that kind of safe navigation possible without a person constantly correcting the machine.
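Those stages can be sketched, in heavily simplified form, as a single sense-perceive-plan-control tick. Everything below is a hypothetical stand-in: the detection format, the straight-corridor planner, the proportional controller, and every number are invented for illustration, not taken from any real stack.

```python
# A one-tick caricature of the autonomy stack: sense -> perceive -> plan -> control.
# All classes, thresholds, and units are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Detection:
    """A perceived object: position in metres (vehicle frame) and whether it moves."""
    x: float
    y: float
    moving: bool

def perceive(raw_readings):
    """Perception: turn raw sensor returns into labeled detections.
    (Stand-in for real detectors; each reading here is already (x, y, moving).)"""
    return [Detection(x, y, moving) for x, y, moving in raw_readings]

def plan(detections, goal, safety_radius=2.0):
    """Planning: check whether the straight corridor ahead is clear.
    Returns a target speed in m/s; zero if anything sits inside the corridor."""
    for d in detections:
        if 0.0 < d.x < goal and abs(d.y) < safety_radius:
            return 0.0  # obstacle in the corridor: stop
    return 5.0          # corridor clear: proceed

def control(target_speed, current_speed, gain=0.5):
    """Control: a proportional step nudging actual speed toward the plan."""
    return current_speed + gain * (target_speed - current_speed)

# One tick of the loop with a single object 12 m ahead, near the centreline.
readings = [(12.0, 0.5, True)]
speed = control(plan(perceive(readings), goal=10.0), current_speed=4.0)
print(speed)  # -> 4.5: object is beyond the 10 m corridor, so accelerate toward 5 m/s
```

In a real system each stage is vastly more complex, but the shape is the same: the loop runs continuously, and no human has to sit in it correcting the machine.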

Teleoperation as Part of the Autonomy Stack

Autonomous driving is not an all-or-nothing state. In many real deployments, there is still a human in the loop, but that human is no longer physically inside the vehicle.

Instead, they might be nearby or in a remote operations center, ready to step in when the system reaches the limits of what it can handle.

Seen this way, teleoperation is not a contradiction of autonomy but an integral part of how autonomy is made practical in early stages.

The system does as much as it can on its own, and when it encounters a scenario it has not yet learned to resolve, control can be handed off to a remote operator who manages the exception and then returns the vehicle to autonomous mode.
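That handoff logic is essentially a small state machine. The sketch below is a hypothetical illustration of it; the two states, the confidence signal, and the 0.7 threshold are invented for the example rather than drawn from any real teleoperation product.

```python
# Hypothetical sketch of the autonomy/teleoperation handoff described above.
# States, the confidence signal, and the threshold are illustrative assumptions.

AUTONOMOUS, TELEOP = "autonomous", "teleoperated"

class HandoffController:
    """Run autonomously until planner confidence drops below a threshold,
    then hold in teleoperation until the remote operator releases control."""

    def __init__(self, min_confidence=0.7):
        self.state = AUTONOMOUS
        self.min_confidence = min_confidence

    def step(self, planner_confidence, operator_releases=False):
        if self.state == AUTONOMOUS and planner_confidence < self.min_confidence:
            self.state = TELEOP        # exception: hand off to the remote operator
        elif self.state == TELEOP and operator_releases:
            self.state = AUTONOMOUS    # operator resolved the scenario; resume
        return self.state

ctrl = HandoffController()
print(ctrl.step(0.95))                          # autonomous
print(ctrl.step(0.40))                          # teleoperated: confidence too low
print(ctrl.step(0.95))                          # still teleoperated: operator holds control
print(ctrl.step(0.95, operator_releases=True))  # autonomous again
```

The design choice worth noticing is that recovery is explicit: confidence returning on its own does not flip the vehicle back, only a deliberate release by the operator does.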



Why Autonomy Matters Today

When the Issue Is Not Labor Cost, but Labor Availability

Autonomous driving is often framed as a way to remove labor costs from the bottom line. The assumption is that if a company can take the driver out of the vehicle, it can immediately improve margins.

In practice, that is not what is driving many of the most serious pilots and deployments today.

For a large number of businesses experimenting with autonomous machinery, the problem is that labor is missing.

Companies cannot find enough people with the right skills to operate the equipment they depend on, whether that is a tractor in a field or a forklift in a logistics operation. The constraint is the availability of skilled operators in specific markets, not the hourly wage.

That simple but not obvious shift changes the logic of adoption.

When a business cannot staff its operations reliably, autonomy stops being a theoretical efficiency gain and becomes a practical requirement to keep the operation running at all.

Infrastructure as an Unexpected Barrier to Deployment

Operating in remote regions introduces another layer of complexity that does not show up in a simple autonomy business case.

The areas that are most eager to adopt autonomous vehicles because they lack people are often the same areas that lack basic infrastructure.

Electrical and power infrastructure may be limited or unreliable, which complicates everything from charging vehicles to running on-board compute and communications systems.

Telecommunications infrastructure can be thin or absent, making it difficult to support connectivity, over-the-air updates, or teleoperation. What looks straightforward in a connected, urban environment becomes much harder when the supporting systems are not in place.

These gaps shape how and where autonomous vehicles can be deployed commercially.

They introduce additional engineering and operational challenges that may not be obvious from the outside. The autonomy stack itself might be ready to handle the task, but the local environment is not ready to support it.

As a result, real-world deployments are full of surprises that sit outside the pure software and hardware discussion.

The Less Obvious ROI: Reduced Damage and Consistent Performance

Alongside labor dynamics and infrastructure constraints, there is a quieter set of benefits that can make the return on investment for autonomy more compelling than it appears at first glance.

These benefits come from the way autonomous systems perceive their surroundings and the way they standardize performance across a fleet.

When a vehicle is covered in sensors and driven by an algorithm-based system:

  • It can maintain a comprehensive, 360-degree view of its environment.

  • It can process multiple streams of data at once and react based on that full picture.

In practical terms, that can mean fewer incidents.

Human performance, meanwhile, varies widely. Some operators are twenty-year veterans who know exactly how to handle the equipment in every situation; others are on their second or third day and are still learning.

That variation shows up in damage rates, near-misses, and overall efficiency.

By contrast, a computer-driven fleet tends to deliver a more consistent level of performance. Once the autonomy stack reaches a certain standard, that standard applies across every vehicle running that software.

When companies analyze the economics of autonomy in settings where it works well, these less obvious factors can be significant.



Where Commercial Viability Begins: Off-Road vs Open-Road Applications

A useful way to think about autonomous driving is to divide the world into off-road and open-road environments. That distinction is not just semantic; it is one of the main reasons some applications of autonomy move toward commercial viability faster than others.

On the technical side, autonomy behaves a lot like robotics in general.

The more control you have over the environment in which the system operates, the easier it is to make that system work reliably.

When the space is constrained, when the types of objects are limited, and when the patterns of behavior are relatively predictable, it becomes far more realistic to engineer a system that can handle most edge cases without constant human rescue.

Off-road autonomy benefits from exactly this kind of constraint.

The vehicle is operating in a defined footprint rather than on an open network of roads. The number of different actors it must interact with is smaller.

The set of motions, maneuvers, and scenarios it needs to handle is narrower. In some cases, it is even possible to carve out a dedicated lane or a protected corridor where only autonomous vehicles are allowed to operate, which further simplifies the problem.

In that context, the autonomy stack is being asked to master a contained universe. It still has to perform robust perception, planning, and control, but it does so within a territory measured in square meters or square feet that does not change very much over time.

The vehicle is not being moved arbitrarily from one environment to another with completely different rules. That boundedness is a major advantage when the goal is to get a system from prototype to something that can be deployed and left to run for extended periods without constant intervention.

Off-Road Applications: Where the Path to Deployment Is Shorter

These characteristics explain why many of the most promising autonomous driving applications are off-road. Factory floors, warehouses, rail yards, industrial plants, yards full of vehicles, and large parking environments all share the same structure: a limited geographical footprint, a defined set of tasks, and a smaller variety of interacting agents.

In these settings, the system does not need to be prepared for “infinite miles” of novel conditions. It needs to be deeply competent within a known layout and a known set of operating patterns.

The number of scenarios it must recognize and resolve is still large, but it is no longer unbounded. As a result, the amount of data required to reach a commercially acceptable level of autonomy is more manageable, and the engineering time required to reach that level is shorter.

That difference shows up in practical terms. A system designed for off-road autonomy can be tuned to the specifics of a single rail yard or a family of similar warehouses.

It can be deployed knowing that the environment is not going to surprise it with an entirely new class of obstacle or behavior every day.

The company building it can reach meaningful performance faster, and the customers testing it can see reliable operation sooner, which matters when both sides are working under real economic and time constraints.

Open-Road Autonomy: Infinite Edge Cases and Expensive Data

Once a vehicle leaves this kind of controlled domain and enters the open road, the problem changes dramatically. The autonomy system has to cope with the full complexity of public driving environments, and that complexity is vast.

Highways have their own patterns: merging traffic, lane changes at high speed, long-distance visibility, and a particular mix of vehicle types.

City streets behave differently.

There are more intersections, more pedestrians, more unpredictable stops and starts. Vehicles range from compact cars to delivery vans to heavy-duty trucks. Motorcycles and bicycles cut through gaps that other vehicles cannot. Pedestrians may jaywalk.

Debris appears unexpectedly in the road. Animals can cross at any time. Weather and lighting conditions change hour to hour.

Each of these elements expands the universe of edge cases the system must be ready to handle. Training models to recognize, anticipate, and respond safely in all of these situations requires large amounts of highly varied data.

Collecting that data is expensive and time-consuming. It involves driving vast distances, capturing rare events, and then painstakingly labeling and incorporating that information into the training pipeline.

That is why open-road autonomy has absorbed so much capital over the past decade. It is not just the complexity of the algorithms. It is the scale of the data collection and model training effort required to make the system safe and reliable across such a diverse, unpredictable environment.



Data as the Central Bottleneck: The Real Requirements Behind Training Autonomous Systems

Changing How Autonomous Systems Are Trained

Over the past few years, there have been real shifts in how autonomous driving systems are trained. New approaches have made it possible to get to “mostly autonomous” operation with less data than would have been required in earlier generations of the technology.

Architectures that move beyond the traditional modular pipeline, including end-to-end models that learn driving behavior directly from data, are now being applied to self-driving systems, from passenger vehicles to trucks in logistics corridors.

Some systems still lean heavily on high-definition maps and classic perception stacks. Others, like Tesla’s self-driving effort or newer entrants training autonomous trucks for specific markets, are using model architectures that can learn patterns of behavior directly from large volumes of driving data.

These newer approaches can compress some of the data requirements and make better use of the information that is already available.

But the fundamental reality has not changed as much as people might hope. Regardless of the exact architecture, these systems still end up needing more data than most founders expect at the outset.

The early impression that a system is close to ready — because it handles a demo route smoothly or performs well in a limited pilot — often hides how much additional experience the model will need before it can be trusted to operate with minimal human intervention across a broader range of conditions.

Synthetic Data

To address this, a growing number of companies are turning to synthetic data and simulation. The logic is straightforward. Instead of waiting to encounter every relevant edge case on real roads or in live industrial environments, it is tempting to imagine those edge cases, construct them in a virtual setting, and generate the data needed to train the model on how to respond.

In simulation, a team can create rare but critical scenarios on demand.

They can stress-test their autonomy stack against unusual combinations of events that might take an impractically long time to observe in real operations.

They can iterate quickly, adjust parameters, and see how the system reacts without putting physical equipment or people at risk.

This toolkit can dramatically accelerate the training process.

It reduces the number of physical miles that have to be driven solely to “collect edge cases,” and it gives teams a way to explore the limits of their system more systematically. Used well, it becomes a powerful amplifier on top of the real-world data the company already has.
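The mechanics of that amplifier can be sketched as parameterized scenario sampling. The scenario fields, value ranges, and the edge-case rule below are all invented for the example; a real simulator would model dynamics, sensors, and agents, not just a dictionary of parameters.

```python
# Illustrative sketch of how simulation can mass-produce rare scenarios.
# Scenario fields, ranges, and the edge-case rule are invented assumptions.

import random

def generate_scenario(rng):
    """Sample one synthetic driving scenario from a parameter space that
    includes conditions rarely observed on real roads."""
    return {
        "visibility_m": rng.uniform(10, 200),          # dense fog to clear sky
        "surface": rng.choice(["dry", "wet", "icy"]),
        "intruder": rng.choice([None, "pedestrian", "animal", "debris"]),
        "intruder_distance_m": rng.uniform(5, 50),
    }

def is_edge_case(s):
    """Label the scenarios a team might want to over-sample for training:
    something in the path, combined with poor visibility or an icy surface."""
    return s["intruder"] is not None and (s["visibility_m"] < 50 or s["surface"] == "icy")

rng = random.Random(0)  # fixed seed so the batch is reproducible
batch = [generate_scenario(rng) for _ in range(10_000)]
edge_cases = [s for s in batch if is_edge_case(s)]
print(f"{len(edge_cases)} edge cases out of {len(batch)} synthetic scenarios")
```

A batch like this takes seconds to generate; observing the same combinations on real roads could take months, which is exactly the acceleration the text describes.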

However, simulated data is not a full substitute for real-world experience.

The scenarios constructed in software are based on assumptions about how the world behaves. They are abstractions, even when they are detailed. The gap between what happens in a carefully modeled virtual environment and what happens on an actual road, or in an actual yard or warehouse, can be subtle but important.

It Is Still “Data, Data, Data”

That is why data remains the central bottleneck, even as tools and architectures evolve. Whether a company takes a more traditional mapping-centric approach or leans into transformer-based models and heavy use of simulation, the constraint shows up in the same place: the quantity, quality, and relevance of the data used to train and improve the system.

From an investor’s perspective, this is an area where founders often underestimate what will be required. It is easy to focus on the elegance of the model or the novelty of the approach and treat data collection as a secondary consideration.

In real deployments, the opposite is true.

The autonomy stack advances as quickly as the organization can access or create the data it needs, at a cost and speed that the business can sustain.

The companies that ultimately pull ahead will be those that treat this as a core strategic question rather than a downstream operational detail. They will be the ones who design their programs, partnerships, and early customer engagements around systematic ways to gather and generate the right kinds of driving data. They will use simulation to accelerate, not to replace, the hard work of learning from the real world.



Business Models in Transition: From Full-Stack Autonomy to OEM-Integrated Software

The First Wave: Full-Stack Autonomy

The early generation of autonomous driving companies grew up in a very different financial environment from today. Interest rates were held low for a prolonged period, capital was abundant, and the venture market was willing to back highly capital-intensive plays with long horizons. In that context, the dominant pattern was full-stack autonomy.

These companies built almost everything themselves.

They developed the autonomy software stack and also took responsibility for the hardware platform. In practice, that meant buying vehicles off the shelf—trucks, cars, and specialized equipment—and then modifying them heavily.

They added sensors, wired up compute, and converted the vehicles into drive-by-wire platforms that their software could control.

The result was a business that had to carry both the cost structure of a software company and many of the commitments of a hardware manufacturer. Significant teams were required on both sides. Engineering capacity was split between algorithms, perception, planning, and control on the one hand, and mechanical integration, vehicle systems, and physical reliability on the other.

It was an ambitious model, and it was only sustainable because raising hundreds of millions of dollars for a first commercial prototype was considered realistic in that era.

Today, the backdrop has changed.

Liquidity in venture as an asset class is more limited. There is less tolerance for indefinitely funding highly capital-intensive models without a clear line of sight to commercial traction.

The combination of slower exit markets and a higher cost of capital has made that first-wave playbook much harder to repeat.

The Shift to Software-Centric, OEM-Aligned Autonomy

Against this new economic reality, the business model is evolving. Instead of trying to be vehicle companies in disguise, newer autonomy startups are deliberately focusing on the software layer and aligning themselves with existing original equipment manufacturers.

Legacy OEMs—whether they build trucks, cars, tractors, garbage trucks, or other specialized machinery—are now designing next-generation platforms with autonomy in mind.

These are vehicles that have been rethought so they can be driven entirely by software. They expose the interfaces needed for drive-by-wire, and they are architected to accommodate sensors, compute, and connectivity from day one.

The emerging division of labor is clear. The OEM provides the physical machine and the industrial capabilities around it.

The startup provides the autonomy stack and the data that make the vehicle operate without a human driver. The two sides integrate their systems so that, to the end customer, the result appears as a coherent, next-generation product rather than a collection of separate components.

This approach is not just a matter of capital efficiency, although that is part of the story. It also reflects a more realistic view of what different players are good at.

Startups excel at building software and data systems quickly, but they are not set up to replicate decades of accumulated knowledge in vehicle manufacturing and support.

OEMs understand hardware and field service deeply, but they rarely move at the pace required to build cutting-edge autonomy software on their own.

What End Customers Actually Buy

From the customer’s perspective, the decision-making lens is much simpler. A farmer considering an autonomous tractor, or a warehouse operator evaluating autonomous forklifts, is not primarily focused on architectural elegance or the details of who owns which part of the stack. The first questions are straightforward:

  • Does this machine do the job it is supposed to do?

  • And if it stops working, how quickly can it be brought back online?

Reliability and service dominate the buying criteria.

Farmers cannot afford to lose two weeks of harvest because a niche autonomy startup cannot overnight a replacement part or dispatch a technician within hours. Warehouse operators cannot halt critical operations while they wait for a single vendor engineer to fly in and diagnose a problem.

These customers are accustomed to dealing with established brands that have dense dealer networks, trained service technicians, and proven spare parts logistics.

That is the real advantage legacy manufacturers bring to the table.

They already operate the networks that can reach end customers quickly with the right parts and the right expertise. Their brands carry the accumulated trust of decades spent delivering and maintaining equipment in demanding environments.

For a new company, recreating that footprint from scratch is not simply a matter of money; it is a matter of time and institutional depth.

When autonomy is delivered as a software layer inside an OEM’s product, the customer can experience it as an evolution of something familiar, backed by a support structure they already know.

The purchase becomes less of a leap into the unknown and more of a decision to adopt a new capability from a partner they already rely on.

Scaling Through OEM Distribution

This is also why the OEM-integrated model is more promising as a path to scale. Selling directly as a startup can work, but typically only for a limited number of early adopters who are willing to accept imperfections and provide intense feedback.

Those customers will tolerate systems that require teleoperation, frequent updates, and close involvement from the engineering team. That stage is important, but it is not enough to build a large, durable business.

To move beyond a few hundred deployed units and reach thousands of vehicles across multiple geographies, a different distribution mechanism is required. That mechanism already exists in the form of OEMs’ global sales and dealer networks.

They know how to place products into markets, finance equipment for customers, and provide ongoing service. Once autonomy is integrated into the machines they sell, it can ride on top of that infrastructure.

For the autonomy startup, this model shifts the emphasis.

Instead of dedicating most of its capital to owning and modifying vehicles, it invests in the software stack, data collection, and integration work needed to make its system compatible with one or more OEM platforms.

It focuses on building something that an OEM can adopt and distribute, rather than trying to build a parallel distribution system of its own.

In that sense, the evolution of business models in autonomous driving is a response to both financial reality and operational truth.

The capital environment no longer supports the first generation’s full-stack ambitions at the same scale, and the market signal from end customers is clear: they want machines that work, supported by service networks they trust.



The Reality of OEM Partnerships: Timelines, Capital Needs, and the Path to Scale

The Multi-Year Journey to a Meaningful OEM Partnership

From the outside, partnering with a major OEM can look like a discrete milestone: a negotiation, a contract, and an integration plan. In reality, reaching that point is the outcome of a multi-year journey that is far longer than a typical venture funding cycle.

Among the teams discussed in this conversation, those that have secured deep, strategic partnerships with equipment manufacturers have not done it in six or twelve months; it has taken four or five years.

That span includes the time needed to build credible technology, to demonstrate it in the field with real customers, and to convince multiple layers inside a legacy organization that the partnership is both technically viable and commercially attractive.

This timeline sits in direct tension with the way venture capital usually operates. Most rounds are designed to provide twelve to eighteen months of runway, perhaps stretching to twenty-four.

Founders are therefore caught between the slow, deliberate pace at which OEMs move and the much shorter time horizon imposed by their capital structure.

The practical question becomes how to progress far enough toward OEM readiness, within those constraints, to keep financing the company without giving away the entire cap table in the process.

Despite the difficulty, OEM partnerships remain central for any autonomy company that aspires to scale beyond niche deployments. They are the mechanism for accessing global distribution and service networks, and for embedding autonomy into the mainstream equipment that customers already buy.

The challenge is not whether to pursue them, but how to survive long enough and progress far enough to make those partnerships realistic.

Why You Cannot Rely on a Single OEM

One of the hard lessons that emerges from this experience is that a startup cannot afford to depend on a single OEM relationship. Large manufacturers are decades-old institutions with incentives aligned around sustaining and incrementally improving their existing business.

They work on three-, four-, or five-year planning horizons, not on the twelve- to eighteen-month cadence that defines a startup’s runway.

Inside those organizations, the people interacting with a young autonomy company are navigating their own constraints. They are typically not rewarded for taking disruptive risks that could destabilize the core business.

They are rewarded for keeping existing customers happy and existing products on track. As a result, even enthusiastic champions inside an OEM can find it difficult to move quickly.

A reliable way to introduce urgency into this system is through competitive tension. When one OEM sees that a promising autonomy provider is also in serious discussions with a major rival, the fear of being left behind begins to influence behavior.

Suddenly, moving first has strategic value.

Timelines that would normally stretch over several years can compress because the alternative is ceding a potential advantage to a competitor.

For the startup, this means engaging with multiple OEMs in parallel rather than waiting for a single partner to move. It is a demanding strategy, because it requires building several relationships at once and managing confidential conversations carefully. But without that dynamic, the startup risks being pulled entirely onto the OEM’s timeline, with no leverage to bring decisions in line with its own survival needs.

Taking on Retrofit Costs to Prove Technical and Commercial Readiness

Even with competitive tension in place, OEMs insist on a high standard of proof before committing to serious integration. They want evidence that the technology works reliably and evidence that their customers actually want it.

Autonomy has to be more than a lab demo. It has to be a product that performs in the field and a user experience that operators are willing to adopt.

To reach that bar, successful startups have had to absorb the cost of retrofit themselves in the early stages. They buy their own tractors, forklifts, or other target machines—sometimes secondhand, sometimes new—and modify them to be driven by their autonomy stack.

They then take these retrofitted vehicles to early adopter customers who are willing to experiment.

Those first customers accept a level of imperfection that would not be tolerable at scale.

Teams send engineers into the field, sometimes literally camping on-site during critical periods such as harvest, to ensure the system keeps running. They fine-tune behavior in real time, resolve issues as they arise, and capture the data needed to improve the stack.

This phase is expensive and operationally intense, but it is precisely what convinces OEMs that the autonomy solution is more than a theoretical add-on. It demonstrates technical viability under realistic conditions and surfaces the kinds of user interface, workflow, and branding considerations that matter to end customers.

When an OEM sees working machines in real customers’ hands—and hears those customers affirm that they would buy such a product through their usual equipment supplier—it changes the tone of the conversation.

The Capital Threshold

The capital required to reach that level of proof is substantial. In many cases, it takes at least ten million dollars to fund the equipment purchases, retrofits, and field operations necessary to make a credible case to OEMs.

The exact figure depends heavily on the category of equipment involved. Retrofitting tractors, for example, can be comparatively inexpensive, especially when secondhand units are an option.

Because of this, there is a clear narrowing in the funnel of autonomy companies as they mature.

Many teams can reach a compelling prototype using simulation and limited real-world testing. Fewer can raise the capital required to run a serious retrofit program with early adopters. Fewer still manage to convert that effort into one or more meaningful OEM partnerships that open the door to large-scale distribution.

Compared with sectors such as enterprise AI software, where the primary investments are in people and cloud infrastructure, the bar here is higher and more unforgiving.

Autonomy companies must fund physical assets, carry the operational burden of real-world deployments, and stay alive long enough to convince conservative industrial partners to commit.

At the same time, there are reasons for cautious optimism.

As sensors, compute, and supporting technologies continue to mature, the cost of achieving key milestones is gradually decreasing. Talented teams are finding more efficient ways to structure their retrofit programs, collect data, and negotiate with OEMs. But the underlying structure of the challenge remains.


Beyond Dilution: Venture Debt & Revenue Sharing for Deep Tech Ventures | The Scenarionist
October 17, 2025

How New Models pairing equity with venture debt and revenue sharing bridge the lab-to-market gap for capital-intensive Deep Tech.


Disclaimer
Please be aware: the information provided in this publication is for educational purposes only and should not be construed as financial or legal advice or a solicitation to buy or sell any assets or to make any financial decisions. Moreover, this content does not constitute legal or regulatory advice. Nothing contained herein constitutes an offer to sell, or a solicitation of an offer to buy, any securities or investment products, nor should it be construed as such. Furthermore, we want to emphasize that the views and opinions expressed by guests on The Scenarionist do not necessarily reflect the opinions or positions of our platform. Each guest contributes their unique viewpoint, and these opinions are solely their own. We remain committed to providing an inclusive and diverse environment for discussion, encouraging a variety of opinions and ideas. It is essential to consult directly with a qualified legal or financial professional to navigate the landscape effectively.
