Welcome to the 116th edition of Deep Tech Catalyst, the educational channel from The Scenarionist where science meets venture!
In this week’s episode, I sat down with Eric Donsky, a three-time exited Deep Tech entrepreneur and now CEO of Atomic13.
In our conversation, we unpacked TearLab’s journey, a venture at the intersection of Deep Tech and lab-on-a-chip diagnostics: from identifying an overlooked gap in eye care to building a clinically usable biomarker platform, navigating long development timelines, structuring capital around technical uncertainty, and ultimately scaling the company through clinical validation, market adoption, and the public markets.
Key takeaways from the episode:
🎯 The best opportunities often begin where demand is real but not yet explicit
Some of the most valuable Deep Tech companies are built not around markets loudly demanding innovation, but around operational or customer friction that is widespread and poorly solved.
🧩 A product must work for every user in the workflow, not just the buyer
In healthcare especially, adoption depends on solving for multiple stakeholders at once. A technology may be clinically powerful, but it still needs to fit seamlessly into the daily routines, incentives, and constraints of the people expected to use it.
🧪 The invention is only the starting point; the real challenge is building the full system
The scientific insight may define the opportunity, but commercialization depends on solving the interface between science, product architecture, manufacturability, and reliability in real-world use.
💸 In Deep Tech, capital strategy has to reflect delay, iteration, and technical risk
When timelines stretch and technical bottlenecks emerge, undercapitalization becomes one of the fastest ways to destroy optionality. The right capital often comes from investors who understand the problem deeply enough to strengthen more than just the balance sheet.
📈 Technical success does not scale on its own
Clinical evidence, trusted validators, reimbursement logic, and a credible market narrative all matter. In regulated sectors, scaling a company requires much more than proving that the technology works.
BEYOND THE CONVERSATION — STRATEGIC INSIGHTS FROM THE EPISODE
Study the Friction Before the Market Names It
Some businesses are built by responding to an already visible demand signal. Others are built by recognizing that an existing workflow is underperforming, even if customers are not yet asking for a radically different product.
TearLab belonged to the second category.
The starting point was the belief that analytical functions normally performed in centralized laboratories might be miniaturized and brought much closer to the patient.
At that stage, however, this was still a technological possibility, not yet a business.
The challenge, then, was not simply to advance the technology. It was to identify a clinical setting in which miniaturization could create enough practical value to justify building a company around it.
The question was not just, “Where could this technology work?” but “Where could it materially improve an existing workflow, decision process, or care pathway?”
That search led to eye care.
What made eye care interesting was not explicit market demand for a point-of-care biomarker platform. What made it interesting was the apparent mismatch between the clinical problem and the tools available to manage it.
The early thesis was that, if biomarker analysis could be performed directly in the doctor’s office, it might not only improve diagnostic quality, but also change the speed and economics of decision-making in routine practice.
This is where the business logic started to become more concrete.
The company was not entering a market with clearly articulated demand; it was identifying a setting in which a real problem existed, but the category of solution had not yet fully formed.
That meant the opportunity could not be defined in technical terms alone. It also had to be defined in workflow terms.
A point-of-care diagnostic in eye care would matter only if it could fit inside the patient visit, reduce ambiguity during diagnosis, and provide information at the moment treatment decisions were being made.
By that point, the concept was taking shape around three linked elements: a technical capability, a specific clinical bottleneck, and a care setting in which time-to-information had practical value.
The ambition, however, was broader than a single test.
From early on, the idea pointed toward a more general platform logic: miniaturize part of the reference laboratory and make it usable at the point of care. That made the first application important not only as a standalone product, but as a beachhead for a broader diagnostic architecture.
The first wedge therefore had to do two things at once: it had to be narrow enough to solve a real and immediate problem, yet structured in a way that could support expansion into additional biomarkers over time.
Seen this way, the company did not begin with a fully formed business model. It began with a sequence of design choices:
First, identifying a technical capability that could change how diagnostics were delivered;
Second, finding a clinical environment in which the status quo was weak enough that better information would matter;
Third, defining the initial product not as a generic platform, but as a tool that could fit into a real workflow and improve a real decision.
The venture became credible only once those elements were connected.
A Product Wins Adoption Only When It Solves for Every User in the Room
One of the important commercial realities in this case was that adoption did not depend on a single user. The product had to work for at least two distinct customer profiles inside the same clinical setting, each with different priorities.
The first was the office technician or lab technician responsible for running the test as part of the patient workup. This person was not the ultimate clinical decision-maker, but was essential to the success of the product in everyday practice.
If the system was difficult to use, slow to operate, or disruptive to workflow, adoption would be limited regardless of its scientific merits.
That made ease of use a central design requirement.
The test had to fit into the normal rhythm of a practice, be manageable by someone without specialized laboratory training, and generate a result without introducing unnecessary complexity.
This is a useful lesson for Deep Tech founders.
In many cases, especially in healthcare, the person who physically uses the product is not the same person who benefits most from the result or authorizes the purchase.
A product may solve an important problem in principle, but still fail if it is too complex for the person expected to run it day after day. In practice, ease of use becomes part of the value proposition.
The case also shows that workflow considerations can be commercially decisive.
Better decisions, better economics, and reimbursement
The physician represented a second and distinct customer logic. What mattered here was not primarily operational convenience, but whether the diagnostic information improved care in a meaningful way and whether the economics of using the test made sense.
For the doctor, the central question was how the result would influence clinical decision-making. A new test was valuable only if it helped improve diagnosis, clarify ambiguity, or support better treatment choices.
In the eye care setting described in the interview, this was particularly relevant because several front-of-eye conditions could present with similar symptoms. A more informative test therefore had value not simply as an additional data point, but as a tool for differential diagnosis.
At the same time, clinical value alone was not enough.
The physician also had to understand the economic implications.
Would the test be reimbursed?
Would it improve the financial performance of the practice?
Would it make the clinical process more efficient?
These questions were a key part of the adoption decision, and this point becomes especially clear in the discussion of practice economics.
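To make the practice-economics question concrete, here is a minimal sketch of the kind of arithmetic a physician might run before adopting a point-of-care test. Every figure below is a hypothetical assumption for illustration, not a number from the interview.

```python
# Hypothetical per-test economics for a point-of-care diagnostic.
# All figures are illustrative assumptions, not TearLab's actual numbers.

reimbursement_per_test = 22.50   # assumed payer reimbursement (USD)
consumable_cost = 7.00           # assumed single-use chip cost (USD)
tests_per_month = 120            # assumed test volume in one practice
reader_cost = 9_000.00           # assumed upfront instrument cost (USD)

margin_per_test = reimbursement_per_test - consumable_cost
monthly_contribution = margin_per_test * tests_per_month
payback_months = reader_cost / monthly_contribution

print(f"Margin per test:      ${margin_per_test:.2f}")
print(f"Monthly contribution: ${monthly_contribution:.2f}")
print(f"Instrument payback:   {payback_months:.1f} months")
```

The point of the sketch is the structure, not the values: reimbursement per test, consumable cost, and test volume jointly determine whether the instrument pays for itself within a horizon the practice cares about.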
A broader point emerges from this. In Deep Tech, it is often not enough to “know the customer” in the singular. The more useful discipline is to map the full set of stakeholders involved in use, decision-making, and economic approval.
In this case, the technician and the doctor each required a different narrative. One cared about usability and process reliability. The other cared about clinical utility, reimbursement, and revenue logic. The product had to satisfy both at once.
That dual-customer structure shaped the go-to-market logic.
In that respect, the commercial challenge was not separate from the product challenge. They were tightly connected from the beginning.
Key Challenges in Developing a Deep Tech Product
The broader vision was compelling, but turning that vision into a usable system required solving a much more specific and difficult problem.
In particular, one key challenge was tear collection.
In order to generate a meaningful diagnostic signal, the system needed to work with an extremely small sample volume, roughly 50 nL of tear fluid.
But collection at that scale was not straightforward.
Too much manipulation of the eye would trigger reflex tearing and dilute the sample, and a diluted sample meant an unreliable signal.
The same was true for evaporation. Tears could not simply be collected and sent to an external laboratory because sample loss during handling would distort the result.
In practical terms, this meant that the core challenge was not only to detect a biomarker. It was to collect a very small tear sample in a way that preserved its integrity, then move that sample into the analytical system without introducing clinically unacceptable variability. That requirement shaped the entire product.
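A back-of-the-envelope sketch shows why dilution at this scale is so damaging. The osmolarity values below are assumptions chosen only to illustrate the mixing effect; the episode establishes only that dilution distorts the signal.

```python
# Illustration of how reflex tearing can distort an osmolarity reading.
# All numbers are hypothetical, chosen only to show the effect of mixing.

basal_osmolarity = 320.0    # assumed "true" osmolarity of a dry-eye sample (mOsm/L)
reflex_osmolarity = 300.0   # assumed osmolarity of reflex tears (more dilute)
sample_volume_nl = 50.0     # target collection volume, per the episode

for reflex_volume_nl in (0.0, 25.0, 50.0, 100.0):
    total = sample_volume_nl + reflex_volume_nl
    # The measured value is the volume-weighted average of the two fluids.
    measured = (basal_osmolarity * sample_volume_nl
                + reflex_osmolarity * reflex_volume_nl) / total
    print(f"reflex volume {reflex_volume_nl:5.1f} nL -> "
          f"measured {measured:6.1f} mOsm/L "
          f"(error {measured - basal_osmolarity:+.1f})")
```

Under these assumed numbers, even a modest amount of reflex fluid pulls the reading several mOsm/L toward normal, exactly the kind of variability a diagnostic threshold cannot tolerate.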
This is an instructive pattern for Deep Tech more broadly.
A company may begin with a central scientific thesis, but the hardest part of commercialization often emerges in the interface between the science and the real-world operating environment.
In this case, the key bottleneck was not the abstract idea of biomarker analysis. It was the highly constrained physical act of collecting a stable sample from the eye in a routine clinical workflow.
The three-part architecture behind the platform
Once the nature of the problem became clearer, the product logic also became more concrete. The platform was not a single device, but a coordinated system made up of three interdependent components.
The first component was the disposable chip. This was the central consumable and the economic core of the business model. The company was structured in what was effectively a razor-and-blade model: the chip was single-use, and this was where recurring revenue would come from. But commercially attractive recurring revenue only mattered if the chip could perform consistently and be produced at scale.
The second component was the handheld device into which the disposable chip would snap for each test. This device acted as the collection and signal-processing interface. It was not simply a holder. It had to manage the practical interaction with the eye, support sample transfer, and process the resulting signal in a way that reduced noise and stabilized the output.
The third component was the reader. Once the handheld device was docked, the reader translated the processed signal into a result that could be displayed.
What matters here is that the company was not solving for one isolated technical feature. It was designing an integrated architecture in which collection, signal processing, and readout had to work together seamlessly.
A failure in any one part would compromise the usefulness of the whole platform.
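As a rough way to visualize that interdependence, here is a minimal, purely illustrative sketch of the chip-handheld-reader chain. The class names, interfaces, and the stand-in signal calculation are hypothetical; they model the dependency structure described above, not TearLab's actual design.

```python
# Purely illustrative sketch of the three-component architecture:
# disposable chip -> handheld -> docked reader. All names are hypothetical;
# this models the dependency chain, not real firmware.
from dataclasses import dataclass

@dataclass
class DisposableChip:
    """Single-use consumable: collects and holds the ~50 nL tear sample."""
    sample_nl: float = 0.0

    def collect(self, volume_nl: float) -> None:
        self.sample_nl = volume_nl

class Handheld:
    """Collection and signal-processing interface the chip snaps into."""
    def __init__(self, chip: DisposableChip):
        self.chip = chip

    def processed_signal(self) -> float:
        if self.chip.sample_nl <= 0:
            raise ValueError("No sample collected; cannot produce a signal.")
        # Stand-in for the real measurement: signal proportional to sample.
        return self.chip.sample_nl * 6.2

class Reader:
    """Dock that translates the processed signal into a displayed result."""
    def display(self, handheld: Handheld) -> str:
        return f"Result: {handheld.processed_signal():.0f} mOsm/L"

# A failure at any stage (no sample, bad snap-in, no dock) breaks the chain.
chip = DisposableChip()
chip.collect(50.0)
print(Reader().display(Handheld(chip)))
```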
This system-level view also explains why product development timelines became longer and more complex than a simpler diagnostic concept might suggest. Each component introduced its own design constraints, but all three also had to align with a future regulatory path.
The device had to be developed not just to work technically, but to be compatible with a point-of-care use case where reliability, usability, and eventually regulatory clearance would all matter.
Turning a scientific concept into a repeatable clinical tool
The most substantial technical work centered on the chip itself. The team’s objective was to build a chip architecture that could collect a very small tear sample, move it through a nanofluidic channel, and interrogate that sample for the relevant marker with high precision.
That alone would have been difficult. What made it more challenging was the requirement that this be done using a low-temperature plastic substrate rather than the higher-temperature silicon approaches more typical in microfluidics at the time.
The reason was practical.
The company wanted the platform to be suitable not only for osmolarity, but also for future protein analysis. That meant the chip architecture had to be compatible with attachment chemistries and biological components that would not tolerate high-temperature fabrication processes.
No obvious development path existed for this. The state of the art in the field was not yet aligned with what the company needed to build.
This is where partner selection became central. Early interactions with technically strong organizations were valuable, but they did not lead to the required solution. The more suitable partner was eventually found in Melbourne: a company willing to work on a longer time horizon and able to innovate around laser ablation in plastic substrates.
Together, they developed a polycarbonate-based platform with a hydrophilic pressure-sensitive adhesive that could support capillary collection and movement of the tear sample through the nanofluidic structure.
This step moved the company closer to something commercially usable. A clinical product could not depend on occasional performance under ideal conditions. It had to deliver repeatable results with a tight coefficient of variation.
Diagnostic accuracy depended on chips performing the same way every time, at volume, and at a cost structure that could support adoption.
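In plain terms, the coefficient of variation (CV) is the standard deviation of replicate measurements divided by their mean. Here is a minimal sketch of how a lot-release repeatability check might apply it; the replicate readings and the 5% acceptance threshold are illustrative assumptions, not the company's actual specification.

```python
# Minimal sketch of a lot-release repeatability check using the coefficient
# of variation (CV = standard deviation / mean). The replicate readings and
# the 5% acceptance threshold are illustrative assumptions.
import statistics

def coefficient_of_variation(readings: list[float]) -> float:
    return statistics.stdev(readings) / statistics.mean(readings)

# Hypothetical replicate osmolarity readings (mOsm/L) from two chip lots.
lot_a = [308, 311, 307, 310, 309]   # tight spread
lot_b = [290, 325, 301, 334, 298]   # loose spread

CV_LIMIT = 0.05  # assumed acceptance threshold: CV must stay under 5%

for name, readings in (("lot A", lot_a), ("lot B", lot_b)):
    cv = coefficient_of_variation(readings)
    verdict = "release" if cv < CV_LIMIT else "reject"
    print(f"{name}: CV = {cv:.3%} -> {verdict}")
```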
This is where the difference between proof of concept and product became most visible. The company was no longer just demonstrating that a biomarker could be measured. It was trying to build a manufacturing-compatible, clinically reliable system that could eventually be used in routine practice.
That transition required advances in materials, fabrication, fluid handling, ergonomics, electronics, and quality control at the same time.
Deep Tech Timelines Break Simple Plans
Because the company was developing a system rather than a single component, partner selection became a strategic decision rather than a procurement exercise.
Different parts of the platform required different external capabilities.
The chip, the handheld, and the reader all had to be designed and built with an eventual regulatory path in mind, especially because the commercial goal depended heavily on obtaining a CLIA waiver.
That meant the product could not merely function in a technical sense. It had to be robust enough that an untrained operator would not introduce errors leading to misleading results or patient risk.
The implications for design were substantial. Product architecture, usability, manufacturing tolerances, and signal reliability all had to support that eventual outcome.
This raised the standard for external partners.
The company did not just need vendors with technical skills. It needed development partners capable of understanding why the constraints mattered, how the system would eventually be used, and how the work being done at that moment would affect later manufacturability and regulatory feasibility.
Why founders should raise more capital than they think they need
One of the clearest founder lessons stated in the interview is that early capital planning is often too optimistic for the realities of Deep Tech execution.
A company may begin with a strong vision, but it is unlikely to know in advance exactly how every technical problem will be solved.
In practice, development paths shift, timelines slip, and costs rise. For that reason, capital should not be raised only against the ideal version of the plan. It should be raised with the expectation that setbacks will occur.
This point is especially relevant in a case like this one, where the company faced multiple layers of uncertainty at the same time. There was technical risk in the chip architecture, product integration risk across the three-part system, manufacturing risk in achieving repeatability and cost, and regulatory risk linked to future CLIA-waived use.
Each of these could extend timelines. Together, they made undercapitalization particularly dangerous.
The lesson here is not simply “raise more money” in a generic sense. It is more specific than that.
Founders should assume that development will take longer and cost more than early models suggest, particularly when they are building against technical requirements that have not yet been solved in a standardized way.
Dilution, in that context, should be weighed against the much more serious risk of running out of time before the company reaches a meaningful value-inflection point.
This case also shows why milestone planning in Deep Tech needs to be connected to the actual structure of uncertainty. It is not enough to define milestones as if the path were linear.
Milestones need to reflect what has truly been de-risked, what still depends on external capability, and what setbacks are plausible at each stage.
What emerges from this part of the story is a more focused view of execution. Deep Tech timelines are not difficult only because the science is hard. They are difficult because technical, manufacturing, regulatory, and partner-related uncertainties often interact.
That interaction is what can make simple plans unreliable, and what makes capital resilience such a central part of company building.
Capital Works Best When It Comes from Belief, Relevance, and Timing
One of the most interesting parts of this case is the way early capital was sourced.
The company did not begin by relying on traditional venture capital. Its first meaningful financing came from people inside the market who already understood the clinical problem and could see why the proposed solution mattered.
The first roughly $6 million came from key opinion leaders in eye care.
They were clinicians with enough technical and clinical understanding to recognize the platform’s potential value at an early stage.
That made the capital especially useful, because it was tied not only to funding, but also to domain credibility, influence over the market narrative, and access to parts of the ecosystem that would otherwise have remained difficult to reach.
How opinion leaders became investors, validators, and market-makers
The role of these early supporters extended well beyond capital. Their influence also had practical consequences for development and validation.
When the company later needed to run a large multi-site clinical study, these same opinion leaders made their clinical sites available at cost.
According to the interview, this reduced the cost of generating that data significantly relative to what a conventional outsourced path would have required. In other words, the value of these relationships compounded over time.
This part of the story shows how some forms of capital are unusually efficient because they arrive bundled with trust, access, and market-making power.
For a Deep Tech company operating in a highly regulated environment, that combination can be especially important.
Using milestone-based capital to keep technical progress aligned with scale
As the company moved beyond proof of concept and began to see product traction, the financing strategy evolved.
At that point, the question was no longer only how to fund technical development. It was how to fund manufacturing scale-up, clinical validation, commercial infrastructure, and broader market expansion.
This is where timing became decisive.
The company was operating in a favorable public-market environment, while also beginning to see stronger revenue growth and adoption. Rather than continue on the same financing path, it chose to partner with a public eye care company as a way to access larger pools of capital.
The logic reflected a growing recognition that the company would need substantially more capital than originally expected in order to build manufacturing capacity, expand sales, and complete the clinical work required to support broader adoption.
The deal structure was milestone-based. This created a staged path to full consolidation into the public company.
As discussed in the interview, that approach proved especially important when circumstances changed and the founder had to help redirect the public company narrative.
What this illustrates is that capital strategy in Deep Tech is rarely static.
The right source of funding depends on what the company is trying to accomplish at a given stage, what the external market environment looks like, and how much uncertainty still remains.
Early capital in this case came from actors with deep domain relevance.
Later capital came through a structure capable of funding larger-scale execution, but still organized around milestones because the path remained developmental rather than fully mature.
The broader lesson is that effective capital formation depends on fit. The best funding is the one that matches the company’s stage, reinforces its path to validation, and arrives at a time when it can meaningfully expand what the company is able to do next.
Scaling the Company Required More Than Technical Success
To recap, the company’s path to success required a combination of clinical validation, market education, and a credible adoption narrative that could be understood and repeated by the right actors in the industry.
Clinical data played a central role in that transition. The company did not rely only on the technical logic of the platform or on the novelty of measuring osmolarity at the point of care.
It also invested in a large multi-site clinical study to strengthen the evidence base around the product. That data had more than one function. It supported the clinical legitimacy of the test, but it also gave influential physicians a stronger foundation from which to explain why the platform mattered.
This is important because adoption in clinical markets is rarely driven by the product alone. It is often driven by a combination of evidence and interpretation.
Even when a technology works, the market still needs trusted intermediaries who can explain how it fits into clinical practice, why it improves decision-making, and why it deserves to become part of routine care.
In this case, the company benefited from having leading figures in the eye care field help shape and communicate that narrative.
The interview also makes clear that product traction was connected to broader market-shaping activity. The company was not only selling a diagnostic tool. It was also participating in a shift in how dry eye disease was understood, with osmolarity becoming part of the disease definition itself.
That mattered because it helped align the product with an emerging clinical framework rather than leaving it as an isolated innovation looking for relevance.
What worked here was the combination of several reinforcing elements: a product that solved a real diagnostic problem, data that supported its use, and respected clinical voices that could help translate its value to the wider market.
The lesson is that for a Deep Tech healthcare company, scaling often depends on building this full structure around the product. Scientific validity is necessary, but it does not automatically create adoption.
Key takeaways for Deep Tech founders
The closing reflections in the interview are useful because they move from the specifics of one company to a more general set of operating lessons.
The first is the importance of understanding technology readiness level in a precise way. For a pre-revenue Deep Tech company, TRL is not just a technical classification. It is a way of explaining where the company truly is, what risks remain, what must happen next, and how capital should be matched to progress. In this view, founders need to be able to communicate not only what they are building, but what it will take to move from one stage of technical maturity to the next, including timelines, risks, and resource needs.
The second lesson concerns techno-economics. A company can solve meaningful technical problems and still fail commercially if the economics do not work at scale. This is one of the more sobering points in the interview. The claim is not that technical success guarantees business success if execution is disciplined. It is that many Deep Tech companies still fail after raising substantial capital because they do not ultimately meet the cost and economic requirements of the market they are trying to serve. For that reason, founders need to understand their future economics early and continuously, not only the technical feasibility of the product.
The third lesson is that customer understanding has to extend beyond the present moment. It is not enough to know who the customer is today or what the current market looks like. Founders also need to think about what the competitive and commercial environment will look like by the time the product actually launches. In Deep Tech, long development cycles create a gap between early assumptions and eventual market entry. A value proposition that appears differentiated at the start may be less differentiated several years later if the market evolves in the meantime.
JOIN THE SCENARIONIST PREMIUM!
Whether you’re an experienced investor leading an established fund, an emerging manager stepping into the field, an angel investor exploring new opportunities, or a founder eager to see the industry from a fresh perspective, The Scenarionist Premium is built for you.
You’ll have access to:
Case studies of startup technologies that have been successfully deployed in real industrial settings.
In-depth due diligence and execution frameworks designed to win.
Curated, independent analysis of weekly Deep Tech inflection points, from scaling signals to incumbent moves and policy shifts.