The Bioverge Podcast: Enlisting Liquid Biopsies in the Fight Against Chronic Diseases

Nathan Hunkapiller, Chief Scientific Officer and Co-Founder of Curve Biosciences, sits down with Neil to discuss liquid biopsies, the company’s efforts to expand the use of these blood-based tests to improve the treatment of chronic diseases, and how it is enabling earlier intervention.

Summary

On the latest episode of The Bioverge Podcast, Nathan Hunkapiller, Chief Scientific Officer and Co-Founder of Curve Biosciences, sits down with Neil Littman to discuss liquid biopsies, the company’s efforts to expand the use of these blood-based tests to improve the treatment of chronic diseases, and how it is enabling earlier intervention.

We dive into Curve’s Chronic Disease Tissue Atlas (CDTA), which uniquely allows the company to discover precise, disease-specific molecular changes using data from over 250,000 tissue samples across more than 1,000 clinical studies.

Curve is focused on first commercializing its best-in-class blood tests for the $30 billion market of chronic liver diseases while using CDTA to find biomarkers and targets for other conditions.

Listen today wherever you get your podcasts (links below):

Watch on YouTube.
Listen on Soundcloud.
Listen on iTunes.
Listen on Spotify.

Transcript

00:00
Danny Levine (Producer)
The Bioverge Podcast with Neil Littman. 


00:29

Danny Levine (Producer)
Now we've got Nathan Hunkapiller on the show today. For listeners not familiar with Nathan, who is he? 


00:35

Neil Littman (Host)
So, Danny, Nathan is chief scientific officer and co-founder of Curve Biosciences. They are developing blood testing technologies for patients with chronic liver diseases. Prior to co-founding Curve, Nathan was senior vice president of R&D at Grail and a member of their exec team, where he led lab research and development efforts for the company's multicancer early detection product all the way from inception to commercialization, through the company's $8 billion acquisition by Illumina. So he's got a rich history there in the liquid biopsy market. Prior to Grail, Nathan was a VP of R&D at Natera, where he led research and product development for the company's reproductive health and oncology products. So Nathan brings a vast degree of experience to the liquid biopsy market and what they're doing at Curve. He's applying what he's done at Natera and Grail to the field of chronic diseases at Curve. 


01:30

Neil Littman (Host)
So I'm excited to dive into what they're doing differently at Curve and what they're building and how liquid biopsies can be adapted from prenatal testing and oncology to chronic diseases. One more thing, Danny. I should mention for full disclosure that Bioverge is an investor in Curve Biosciences. 


01:49

Danny Levine (Producer)
There's been a lot of buzz around liquid biopsies for many years, but why haven't we seen greater use of these? Is it cost? Is it science? Is it validation? 


02:01

Neil Littman (Host)
It's a great question, Danny. I am excited to get Nathan's perspective on that. I think it might be a little of all of the above. I think we've seen some early success in the field of oncology, but I'm not sure the technology has been as readily adopted and has appealed to sort of the mass market as one would have thought. And I think there are some important nuances to understand around the sensitivity and specificity of these tests as they scale to very large patient populations. So I want to dive into some of the weeds around those details with Nathan. And Danny, if the name Hunkapiller sounds familiar, Nathan's father, Michael Hunkapiller, was one of the early pioneers in DNA sequencing during his time at Applied Biosystems and Pacific Biosciences. And so I'm sure Nathan had some very interesting dinner conversations over the years with his father and his uncle Tim, who is also in the field and is pretty well established. 


02:57

Danny Levine (Producer)
You talked about broadening the use of liquid biopsies. I think people still think about this technology as it relates to cancer. What's the potential to really expand its use? 


03:12

Neil Littman (Host)
I think there's huge potential to expand its use, and that's a lot of what we're going to get into today with Nathan. The whole premise of Curve is applying liquid biopsy technology to chronic diseases. Curve specifically started with chronic liver disease, and we'll get into some of the nuances around why that is their lead indication. But I think there's a lot of potential moving from oncology to other chronic diseases and to more niche indications. And so in terms of adoption, I think we're still in the relatively early innings of liquid biopsy technology and it being widely available for a whole variety of different diseases within oncology and outside of oncology. So I think there's a long way to go, and I definitely want to dive into Nathan's perspective on the evolution of the technology, how it's evolved from cancer to what they're doing at Curve to where it could go in the future. 


04:06

Danny Levine (Producer)
Well, if you're all set, let's do it. 


04:09

Neil Littman (Host)
Danny, Nathan, thank you for joining me on the show today. I'm incredibly excited to welcome you. 


04:18

Nathan Hunkapiller (Guest)
I'm excited to be here as well. Thank you for having me. 


04:21

Neil Littman (Host)
Great. So, Nathan, today we are going to talk about liquid biopsies, Curve Biosciences, and your efforts to expand the use of this technology to monitor patients with a range of chronic conditions. There's long been excitement over the potential for liquid biopsies, and of course, we've seen the first ones come to market in the field of oncology. Nathan, this is an area that you know a lot about given your prior experience. Prior to being a co-founder of Curve, you were senior vice president of R&D at Grail, which of course was one of the first developers of commercially available liquid biopsy tests. So, Nathan, you've seen the maturation of this technology from a very immature stage to now being a commercially viable and marketed product. So I'd love to start with the 30,000-foot view and get your perspective on the general premise of liquid biopsies, just to lay the foundation. 


05:19

Nathan Hunkapiller (Guest)
Sure. Yeah. So the idea of a liquid biopsy describes the use of relatively easy-to-sample biological fluids, like blood, saliva, or urine, to replace an invasive tissue biopsy. So rather than studying the biopsied tissue, you use these materials as a surrogate. And those might be things like capturing circulating cells in the blood, cell particles or exosomes, or common molecules like DNA, RNA, and protein that are present cell-free in body fluids. And I think the concept has been around for a while. I think originally it came to the field as a term defined for oncology, but it's been around longer in other tissue settings. Some examples of that are NIPT, non-invasive prenatal testing, kind of assessing the chromosomal status of a child, and organ transplant testing, for example, looking for rejection. And then I think the real explosion came in the setting of oncology, where it was learned maybe quite a while ago that mutations could be found in cell-free DNA. 


06:30

Nathan Hunkapiller (Guest)
And this has since expanded to other analytes that you can use to study the tumor. But the basic premise is that by analyzing sort of the molecular characteristics of the DNA that's found in the circulation, you can understand the mutational profile of a tumor and then use that information in place of a biopsy to make informed treatment decisions. And those applications expanded to others, including surveillance, screening, molecular characterization for therapy selection, minimal residual disease testing, a real number of great applications in that setting. And lastly, the term today is used much more fluidly, and it describes this broader set of applications beyond oncology, and we're, of course, looking at chronic diseases as well. 


07:16

Neil Littman (Host)
Nathan, let's start with your time at Grail, because I think most people are more familiar with liquid biopsies in the oncology space. Could you talk a little bit about how that technology evolved, how it works today, and the premise of using liquid biopsy technology from a clinical standpoint? 


07:36

Nathan Hunkapiller (Guest)
Initially, it was discovered that you could profile mutations in blood. And I think Guardant Health was one of the companies that really pioneered this, first trying to understand how you could inform treatment decisions based on tissue data. But then they moved into cell-free DNA and learned that you could uncover a tumor's specific mutational profile by learning those patterns in blood, if you're able to sequence deeply enough, down to the very low fractional DNA contributions to the blood that you would see. And that, with sensitive technologies, was highly specific to cancer in the setting of genomic alterations. And that's because cancers have very unusual genomic profiles, driver genes that, when mutated, there really aren't a lot of other things they could be. And so that just really worked well in that context. So that set off this whole field of understanding the tumor biology based on a non-invasive sampling solution. 


08:36

Nathan Hunkapiller (Guest)
Fast forward to many years later. There was a discovery made in the setting of another lab, called Verinata, where they were exploring the first non-invasive prenatal test. They uncovered this quite exciting, well, quite unfortunate observation, too, that some of the women they had enrolled in their clinical trials had these very abnormal genomes. And those tests were looking at copy number variations. And one of the scientists, the lab director, Meredith Halks-Miller, discovered and reported to the management of the company that there was only one explanation for the signals she was seeing: those women must have cancer. They followed those individuals up, looked into their medical records, and learned that almost every single woman who had that profile, and I think there were about ten of them in that study, had a late-stage cancer. So that opened people's eyes to the possibility that not only in the setting of a known cancer could you find some actionable information from cell-free DNA, but that you could potentially also screen individuals for cancer. 


09:50

Nathan Hunkapiller (Guest)
And Grail set down that path to go find the right technology to do that. They spent an incredible amount of money learning what the right approach was and then exploring a pretty vast biomarker space across a lot of different analytes, but selected down to what we felt was the most highly performing and most potentially improvable assay, with DNA methylation. 


10:17

Neil Littman (Host)
It's fascinating. And Nathan, I want to get into what you're doing at Curve here in a minute, but I want to start with potentially a problem statement. And is there anything in your view that's holding us back today from broader use of the technology? 


10:32

Nathan Hunkapiller (Guest)
Yeah, so it's not a lack of clinical unmet need or patients and physicians wanting solutions. I actually remember a former leader at Grail once saying, the patients are waiting, which means we've got to help them. There aren't solutions available for them, and they need them. But the challenges, I'd say, are twofold. One is this historical challenge of trying to identify accurate biomarkers in blood: it's just incredibly expensive, it takes a lot of time, and the signals are quite diluted in there. The second problem is in finding a viable and fast insurance and coverage path. So those two things are quite difficult. Maybe I could elaborate on them a bit. So, finding biomarkers in blood is quite challenging for a couple of reasons. One of them is most of the material that's in blood is not of your tissue of interest. It's derived from other blood cells. 


11:31

Nathan Hunkapiller (Guest)
That's just what is most exposed to that particular fluid. And there's a couple of problems with that. One of them is the signals of the tissues that you might be studying and trying to find biomarkers for are just highly diluted and hard to resolve, so they take expensive, deep sequencing to profile. You need a lot of samples to do it. The other problem is there are other tissues represented in that material. All other body tissues shed some amount of material into that sample type, and those are sources of potentially confounding background noise. And it's hard to profile them also, because they're at relatively low signal levels and low resolution. So it's just been a problem trying to approach that relatively economically as a biomarker discovery route. And there are companies like Grail who did that, but they did it by raising an incredible amount of money, and I think there aren't going to be a lot of other investors in our current market environment who want to take that kind of brute force approach. 


12:28

Nathan Hunkapiller (Guest)
We just need something simpler. Yeah. 


12:30

Neil Littman (Host)
Nathan, so that's a really good segue into the basis for Curve's technology and what you're doing differently. So let's talk about what you're doing. I want to dive into how your test works and what you're measuring. Why don't we start with those two points? 


12:47

Nathan Hunkapiller (Guest)
Yeah, maybe I'll back up and tell you a little bit about Curve and what we are doing. So we're a diagnostics company, and our mission is to bring precision to a new area of liquid biopsy, which is chronic disease care. And so to do that, we're building blood tests, and we think they will serve important unmet needs for patients with chronic disease. And initially we're focused on bringing advancements to patients with chronic liver diseases, where we see applications in detecting liver fibrosis and in detecting liver cancer, both in high-risk populations who have advanced forms of liver disease. And the first product, and the one that we've generated the most data for, is a blood test for liver cancer surveillance. Surveillance is a little different from the idea of screening because it's a much higher-risk population that needs to be closely monitored. And these individuals are patients with advanced liver disease, like cirrhosis. 


13:45

Nathan Hunkapiller (Guest)
And we're also, as a company, interested in applying our technology to other chronic diseases like endometriosis and inflammatory bowel disease, but we're thinking about that in a future product setting. So what makes our technology unique is that we have this very different approach to biomarker discovery. And I kind of described some of the problems that we see in blood-based discovery, with diluted signals and unresolvable sources of noise. So we realized that tissue-based discovery overcomes a lot of those problems. For example, it allows you to see disease signals and sources of tissue noise, if you have the right sample types represented, at a few orders of magnitude better resolution than their diluted forms in blood. And while tissue data is unfortunately hard to come by, because nobody likes a biopsy, we realized that and spent quite a lot of time, about six years, building this enormous chronic disease tissue atlas that we use for biomarker discovery. 


14:48

Nathan Hunkapiller (Guest)
And that atlas has methylation assay data from a variety of technology platforms. And it has corresponding clinical patient data for about a quarter million tissue samples that we've acquired from the public domain. And those collectively, I would say, comprise about 100 distinct tissue and disease states. And we have very high numbers of samples covering our disease areas of interest. And so to get into how we use that: we take this disease tissue data set that we have, and we've sought to really deeply understand the progression of chronic disease states across their natural history. And you need to do that if you want to discover biomarkers that are specific to one of those states along the continuum, because they're quite closely related. And then, secondly, we downselect these candidate biomarkers from that first analysis based on extremely stringent filtering criteria using all of the other tissue and disease data that we have. 


15:49

Nathan Hunkapiller (Guest)
And that process reduces the possibility for noise across any other confounding tissue or disease state, because cell-free DNA is kind of an amalgamation of everything that the body produces. If you want to build really low-noise markers, you have to profile every other tissue and state in the body. And that's what we do. We subtract those from our sets. 
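
To make that two-step discovery logic concrete, here is a minimal sketch of the filter-then-subtract idea, assuming hypothetical per-region methylation levels keyed by tissue and disease state (the data structure, thresholds, and region names are illustrative, not Curve's actual pipeline):

```python
def discover_markers(atlas, target_state, background_states,
                     min_delta=0.3, max_background=0.1):
    """Toy two-step biomarker discovery over a tissue methylation atlas.

    atlas: dict mapping state name -> {region: methylation fraction, 0..1}
    target_state: the disease state we want markers for (e.g. "cirrhosis")
    background_states: every other tissue/disease state in the atlas
    """
    target = atlas[target_state]
    candidates = []
    for region, level in target.items():
        # Step 1: the region must stand out against the adjacent states
        # on the same disease continuum (here, the other liver states).
        adjacent = [atlas[s].get(region, 0.0) for s in background_states
                    if s.startswith("liver")]
        if not adjacent or level - max(adjacent) < min_delta:
            continue
        # Step 2: stringent subtraction -- drop the region if ANY other
        # tissue or disease state shows appreciable signal there, since
        # cell-free DNA is an amalgam of everything the body sheds.
        if max(atlas[s].get(region, 0.0) for s in background_states) <= max_background:
            candidates.append(region)
    return candidates

# Hypothetical toy atlas: two candidate regions, a handful of states.
atlas = {
    "cirrhosis":    {"chr1:100": 0.85, "chr2:200": 0.90},
    "liver_fatty":  {"chr1:100": 0.05, "chr2:200": 0.15},
    "blood_cells":  {"chr1:100": 0.04, "chr2:200": 0.60},  # confounder for chr2:200
    "colon_normal": {"chr1:100": 0.02, "chr2:200": 0.05},
}
background = [s for s in atlas if s != "cirrhosis"]
print(discover_markers(atlas, "cirrhosis", background))  # -> ['chr1:100']
```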


16:14

Neil Littman (Host)
So I guess a simplistic way to think about this is that it's really a data moat that you have, based on your access to this large tissue sample data set, from which you've been able to derive these different stages of disease progression by looking at the state of DNA methylation. So let's talk a little bit about that. I'm curious what DNA methylation can tell you about the progression of a disease, and are there other biomarkers that you're looking for within the blood sample? Or is it really DNA methylation that can break down the progression of the disease, in the case of liver disease, into various stages, from early stage all the way through to cirrhosis? 


16:57

Nathan Hunkapiller (Guest)
Yeah. So DNA methylation has emerged in, I would say, the past seven or eight years as quite an interesting and compelling cancer biomarker, and we're learning that value extends to chronic diseases as well. But you asked what we use in our product: it's predominantly DNA methylation. I would say almost all of the performance that we derive from our atlas comes from the use of DNA methylation data. Beyond the biomarkers that we've selected, in the setting of liver cancer surveillance there is a protein that is used in the current standard of care that has some predictive value. It's called alpha-fetoprotein, or AFP. And we've included that in our first clinical study, and we found that it had some incremental value. It's not a particularly sensitive or specific marker as it's used today, but if you use it quite stringently, in other words, you only call it when it's at extremely high levels, then it does boost the sensitivity of our test. 


17:58

Nathan Hunkapiller (Guest)
We found in a first clinical study that it got us kind of one incremental detected cancer. So we're not sure if we're going to use it, but it is used today and it's kind of accepted by clinical practice, so we may or may not use it.
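
As a rough illustration of what "using AFP stringently" could look like, here is a minimal sketch in which AFP can only trigger a positive at an extremely high cutoff, layered on top of a methylation-based call (the score, threshold, and cutoff are hypothetical, not Curve's actual decision rule):

```python
def surveillance_call(methylation_score: float, afp_ng_ml: float,
                      methylation_threshold: float = 0.8,
                      stringent_afp_cutoff: float = 400.0) -> bool:
    """Toy decision rule: positive if the methylation classifier fires,
    OR if AFP is extremely elevated. Keeping the AFP cutoff stringent is
    what lets the protein add a little sensitivity without giving back
    much specificity."""
    return (methylation_score >= methylation_threshold
            or afp_ng_ml >= stringent_afp_cutoff)

print(surveillance_call(0.55, 650.0))  # True: the one "incremental" AFP-driven detection
print(surveillance_call(0.55, 12.0))   # False: neither signal clears its bar
```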


18:12

Neil Littman (Host)
Nathan, you mentioned just now this idea of sensitivity, which I want to come back to. You mentioned in a previous answer that you had been building this data set over the past half dozen years or so. Where did your technology originate? Was this all done in-house? Was it licensed from a university? Can you talk about the genesis, where all this data originated, and where this work stemmed from? 


18:35

Nathan Hunkapiller (Guest)
Yes. So the tissue atlas and some of the data analysis tools that we've used to build it and to generate biomarkers from it, those were built by our CEO, who's the technical founder, Riddish Pat Nike. And he did a lot of this work during his PhD program at Stanford in data sciences. He was working with Professor Sean Wong, his advisor, who's an endowed professor there. And Sean actually took a whole year off on sabbatical to help found the company with Riddish. But the work that he did during his PhD was licensed to Curve, and we've been expanding on the tissue atlas and the discovery capabilities since, including since I've joined as well. 


19:19

Neil Littman (Host)
I always like hearing the origin story of how companies got started. So, okay, Nathan, I want to circle back to this idea of specificity and sensitivity, and I want to get into the weeds a little bit, because I think this is super important for everyone to understand when it comes to a diagnostic test. So first, Nathan, can you talk about and define what those terms mean and why they are important when it comes to a diagnostic test? 


19:41

Nathan Hunkapiller (Guest)
Of course. Yeah. So sensitivity you can think of as the proportion of individuals who have a disease that a clinical test is able to detect. So if you have a cancer surveillance test that's 95% sensitive, you'll find 95 out of 100 cancers in a population. And we want that number to be as high as possible so we don't miss any disease. And that's important clinically in the case of cancer, for example, because if you miss it in your test, you might not find it until later, until it's grown, where it's potentially harder to treat, where worse outcomes may occur, and where greater costs may accrue to the system as a result of that late-stage care. And then specificity is the proportion of individuals who don't have a disease that a clinical test is able to correctly identify as not having that disease. And so a 95% specific cancer test would correctly identify 95 out of 100 people who don't have disease. 


20:42

Nathan Hunkapiller (Guest)
And unfortunately there's a flip side to this: the false positive rate is one minus the specificity. In a 95% specific test, 5% of people who take it and don't have the disease are going to get a false positive result. And that can cause a lot of problems downstream. Namely, it scares them. They may have to undergo follow-up testing procedures that can be harmful and are medically unnecessary. They bog down the medical system. They cost the patient and the insurance system money. So you want that specificity number to be as high as you can get it. 
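
For readers who want those definitions in one place, here is a minimal sketch of the arithmetic, using the hypothetical 95%/95% test from the conversation:

```python
def sensitivity(true_pos, false_neg):
    # Fraction of people WITH the disease that the test flags as positive.
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Fraction of people WITHOUT the disease correctly called negative.
    return true_neg / (true_neg + false_pos)

sens = sensitivity(true_pos=95, false_neg=5)   # 0.95 -> finds 95 of 100 cancers
spec = specificity(true_neg=95, false_pos=5)   # 0.95
false_positive_rate = 1 - spec                 # 0.05 -> 5 false alarms per 100 disease-free people
print(sens, spec, false_positive_rate)
```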


21:18

Neil Littman (Host)
Nathan, there's a really important nuance here that I want to dive into, because there's a big difference when it comes to evaluating the sensitivity and specificity of a test that will be used on a relatively small population of people versus a test that may be used on millions of people. And there are trade-offs between sensitivity and specificity, as you just said. So just for example, a test with 99% specificity means, by definition, that there's a 1% false positive rate, which, when testing a population of 200 people, means two people would be expected to receive a false positive. Maybe not a big deal. But if you're then scaling that technology across a population of, let's say, 100 million people, that means 1 million people would be expected to receive a false positive. That is problematic, and it could not only add tremendous cost to the system in general but, as you mentioned before, create a tremendous amount of anxiety and all of those issues for the people who receive that false positive. 
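
Neil's scaling point, worked through in a couple of lines. The population sizes are the ones he quotes; the 0.5% prevalence is an added assumption so the disease-free fraction is explicit:

```python
def expected_false_positives(population, prevalence, specificity):
    # People without the disease, times the false positive rate (1 - specificity).
    return population * (1 - prevalence) * (1 - specificity)

print(expected_false_positives(200, 0.005, 0.99))          # ~2 people
print(expected_false_positives(100_000_000, 0.005, 0.99))  # ~995,000 people
```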


22:21

Neil Littman (Host)
Can you talk about how, at Curve, you think about the trade-off between specificity and sensitivity, and how that may have informed the indications that you're going after? 


22:32

Nathan Hunkapiller (Guest)
Yeah. Okay, so you've hit the nail on the head. There is a trade-off that you have to make between sensitivity and specificity when you're building a diagnostic test, and you can't push in both directions simultaneously unless you have a fundamentally amazing technology improvement. So you've got to make trade-offs. You can set your threshold for detection low and try to pull in more of the disease that you capture, but you're trading off your false positive rate. And how best to optimize that really depends on how the test is going to be used, in other words, what's most clinically important in the patient population. There's the clinical side of that, which is what does the patient need. And then, as you hinted at, there's the health economic side of that, which is what happens with all the false positives you generate, and do you bog the system down with them. 


23:23

Nathan Hunkapiller (Guest)
So we have to think about how it'll be used clinically. We need to think about the underlying rates of the disease in the population, because if the rate of the disease is really high and you're reasonably good at detecting it, then you don't have to push your sensitivity quite so high, and you won't have a huge penalty in your false positive population. And then, lastly, what is that sensitivity-specificity trade-off, and is there an optimal point on it that gets you the best balance? So I liked your example. Maybe think through one that's common in the multicancer space, where we have this enormous potential testing population: it could be all adults over 50, everybody who goes and gets health care. So your false positive rate, whether it's 1% or 2%, that's a huge difference. And here's an example of one trade-off decision we had to make. 


24:25

Nathan Hunkapiller (Guest)
The trade-off curve was actually quite steep for sensitivity versus specificity there. In other words, one point on that receiver operating curve might have been, say, 50% sensitivity and 99% specificity, which I think is pretty close to the product profile today for the multicancer Galleri test. Another point on that curve, if you really wanted to push to have higher sensitivity, might have been 90% sensitivity, but with an enormous drop in specificity, say down to 90% or lower. And so the prevalence of disease becomes really important when you're managing this decision. Take, for example, this 90% sensitive, 90% specific case. So we push the sensitivity high. We'll catch nine out of ten of the cancers, but we have a 10% false positive rate. And those patients who get a negative result, they're going to feel really good, reassured that they don't have much risk of cancer after the test. 


25:28

Nathan Hunkapiller (Guest)
But they also didn't have much to begin with. They had pre-test odds of 99% of being cancer-free and post-test odds of 99.9%. And that's good, but that's not much of a difference. On the other hand, with the specificity being 90% and the rate of cancers in that population, if you were to give a patient a positive result, nine out of ten times or so it's going to be a false positive result. You can't send nine false positive patients down the road of diagnostic resolution for every true positive; you're going to overwhelm the health care system. So in that setting, we had to decide, let's give up some of that sensitivity so that we don't bog down the system. We're still telling people, in terms of pre- and post-test odds of a negative result, your risk is pretty low; it's between 99.5% and 99.9%. 


26:26

Nathan Hunkapiller (Guest)
And I don't think that's all that different; I wouldn't be worried in either case in those two product profiles. But a positive result with this lower sensitivity but much lower false positive rate means that about one in three people who are positive actually have cancer, and that's not an overwhelming number for the healthcare system. So considering that in diagnostics you've got multiple customers, one being your patient, another being your provider, but another being your payer, you have to balance the needs of all of them when you're building a test, because you don't want to build the best possible test for patients in a way that isn't coverable. So that's how we think about the trade-off. And in the setting of Curve's test, we know that both of those factors are important. And fortunately, we've been able to build a test that's very high in performance for both sensitivity and specificity. 
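
Nathan's pre- and post-test reasoning, worked through with Bayes' rule. The two operating points are the ones he quotes; the 1% cancer prevalence matches his "99% pre-test odds of being cancer-free" framing and is otherwise an assumption:

```python
def ppv_npv(sens, spec, prevalence):
    """Positive and negative predictive value from Bayes' rule."""
    tp = sens * prevalence                # true positives
    fp = (1 - spec) * (1 - prevalence)    # false positives
    fn = (1 - sens) * prevalence          # missed cancers
    tn = spec * (1 - prevalence)          # true negatives
    return tp / (tp + fp), tn / (tn + fn)

prevalence = 0.01
for sens, spec in [(0.90, 0.90), (0.50, 0.99)]:
    ppv, npv = ppv_npv(sens, spec, prevalence)
    print(f"sens={sens:.0%} spec={spec:.0%}: PPV={ppv:.0%}, NPV={npv:.1%}")
# sens=90% spec=90%: PPV=8%,  NPV=99.9% -> roughly 9 in 10 positives are false alarms
# sens=50% spec=99%: PPV=34%, NPV=99.5% -> roughly 1 in 3 positives is a real cancer
```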


27:21

Nathan Hunkapiller (Guest)
So we got lucky. We didn't have to make that trade off too carefully. 


27:24

Neil Littman (Host)
So let's talk about that. Nathan, let's talk about chronic liver disease. How did you choose liver disease as your lead indication? 


27:32

Nathan Hunkapiller (Guest)
Oh, great question. So in general, chronic liver diseases have a number of features of the diagnostic testing and development pathway that make them attractive, or ultra compelling. So I came from this cancer screening environment where you have quite a lot of work to do to get the technology into practice: a complicated regulatory path, an unknown, kind of trailblazing path to get reimbursement. You have these huge markets, but you also have low prevalence in your population, meaning your clinical studies have to be huge, and the health economics arguments are a little harder to make when the prevalence isn't there. And liver disease had kind of the opposite of a lot of these factors. Besides the markets also being large, there were a number of things that our leadership learned as we worked at these other companies that we just realized would make the chronic disease space, and in particular liver diseases, more compelling for these reasons. 


28:37

Nathan Hunkapiller (Guest)
It's a widespread disease population, so there are large markets. There are poor non-invasive testing solutions, which means there's opportunity to improve things for patients. The testing outcomes are clinically actionable. There are existing surveillance guidelines put out by clinical societies. There are existing insurance coverage policies that follow the current workup, which we think we can navigate into and replace. Let's see, what else? There are strong healthcare economic arguments because the prevalence is high. The prevalence being high also means the clinical trials can be small. These patients are managed by specialty physicians rather than primary care physicians, so your commercial spend is lower. It's kind of amazing across many aspects, but liver disease in particular was compelling because we saw this nice fit in all these dimensions, but also a strong synergy between our first and second products, which are a liver cancer surveillance product and a liver fibrosis detection product. 


29:43

Nathan Hunkapiller (Guest)
And we saw a lot of opportunity there for that to be streamlined as well. And we have good data in liver cancer surveillance that gives us a sense we can be technically successful. 


29:54

Neil Littman (Host)
Nathan, you commented on a little bit of this, but how big is the need for better testing of patients and earlier detection of patients with liver disease? 


30:04

Nathan Hunkapiller (Guest)
So, chronic liver diseases are a big problem in the United States, and I'll back up and just describe a little of the natural history of chronic liver diseases. So probably about a third of individuals in the United States have some form of abnormal liver condition, most of them fatty liver; that's almost all diabetics and anyone who's obese. You add that up, and it's about 100 million people in the US who have some form of fatty liver. And that's the top of the funnel for liver disease. With additional time with a fatty liver, with inflammation, with other stressors like alcohol use, viral infection, or certain drugs that damage the liver, the liver can get progressively more damaged, and that results, over time, in deposition of fibrosis, death of liver cells, and loss of liver function. And unfortunately, in very diseased livers, that creates this kind of environment that seeds 


31:10

Nathan Hunkapiller (Guest)
cancer development at a very high rate, about two and a half percent odds per year in people with cirrhosis, which is an incredibly high risk factor for that disease. So the need is large, and there are other problems: there aren't good non-invasive testing solutions, most people are kind of silently progressing as a result of this, and with this very large patient population, most people are diagnosed late, and that results in enormous downstream health care costs. So I'd say the opportunity is large. 


31:44

Neil Littman (Host)
And Nathan, the expectation is that the Curve test will have value both in advanced liver disease patients as well as in earlier liver disease patients. You mentioned a little bit of that continuum from fatty liver disease all the way through to cirrhosis, for example. How would the test be used in these different patient populations? 


32:05

Nathan Hunkapiller (Guest)
Yeah, well, so there's great potential to impact patient outcomes in two parts of this spectrum, both for liver fibrosis detection and for surveillance. Let me start with the surveillance story, because we've spent a lot of our energy developing that thought process. So I think to understand this question and the opportunity, we have to break down what's done today, that existing paradigm, and how unfortunately ineffective it is. There's this very high risk of developing cancer in this population, like I said. As a result of that, today they're recommended to be surveilled by ultrasound imaging and this AFP protein test twice a year, every six months. And that ultrasound and AFP combination has very low performance. You can think of it as sort of a low-cost first-line screen. It's also low performance, so it doesn't catch too many cancers, but it doesn't cause too much burden to the system. 


33:01

Nathan Hunkapiller (Guest)
And then, if there's a positive test by either imaging or protein, that's followed up by a higher-cost but less accessible diagnostic workup, which is usually MRI or CT. And that ultrasound and AFP first-line surveillance only picks up about 64% of early-stage cancers, and it has a high false positive rate, 16%. So we've done a clinical study, in partnership with Stanford, and we've shown that our test has 96% early-stage detection sensitivity and only a 4% false positive rate. So we're catching almost all of the cancers early, and we're doing it with a fourfold lower false positive rate. And that's going to really change how care is administered to these patients. For example, consider that most people who are going to get their surveillance have to go to their hepatologist, and they get separate blood draws and ultrasound imaging ordered for them; most of them will get the blood draw in office. 


34:06

Nathan Hunkapiller (Guest)
But the ultrasound imaging causes a lot of people to drop out of the surveillance paradigm, and that compliance is about 50%. Couple that with the low detection sensitivity, and what it means is that only about a third of people are being caught with early-stage cancer, which leaves this enormous clinical improvement opportunity. The other issue with that testing is that, because of its high false positive rate, most results that passed this first-line screen didn't need to be diagnostically worked up with MRI or CT; they were just unnecessary tests as a result. And then, lastly, the healthcare costs associated with both the unnecessary diagnostic workup and with late-stage cancer care, which is much higher, cause the system to be overwhelmed with both costs and late-stage treatment. It's just not working well. 


35:05

Nathan Hunkapiller (Guest)
So Curve's test comes in and simplifies this system completely. We replace the ultrasound with our blood test, which means we're going to boost compliance, we think up to about 80% instead of 50%, because it's all going to be done in office. We have a higher sensitivity for early-stage detection and a lower false positive rate, so most of the cancers are going to be caught. High compliance times high sensitivity means that instead of catching one third of the early-stage cancers, which is where ultrasound is, we think we're going to catch more than three quarters. And imagine the kind of mortality benefit of that. Also, we're reducing the number of false positives. So instead of having, say, nine out of ten of those diagnostic CTs and MRIs being unnecessary, it's probably going to be more like one in three. So there are huge benefits to this population, in terms of better health economics, better healthcare utilization, better detection rates, and better outcomes. 
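
The compliance-times-sensitivity arithmetic behind "one third versus more than three quarters", using the figures quoted above (a back-of-the-envelope sketch, not a formal model):

```python
def early_cancers_caught(compliance, early_stage_sensitivity):
    # Fraction of early-stage cancers actually caught across the surveilled population.
    return compliance * early_stage_sensitivity

ultrasound_afp = early_cancers_caught(compliance=0.50, early_stage_sensitivity=0.64)
blood_test     = early_cancers_caught(compliance=0.80, early_stage_sensitivity=0.96)
print(f"ultrasound + AFP: {ultrasound_afp:.0%}")    # ~32%, about a third
print(f"blood-test surveillance: {blood_test:.0%}")  # ~77%, more than three quarters
```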


36:07

Nathan Hunkapiller (Guest)
We're just excited about it all. 


36:09

Neil Littman (Host)
So, Nathan, I want to jump into the point around cost, and specifically, you mentioned this at the top of the show, this idea of reimbursement. And we all know that if the test isn't reimbursed by CMS, then it won't be adopted, won't be used. So given your experience at places like Natera and Grail, you've seen how these tests are reimbursed. Have you had any discussions with payers at Curve yet? Is it too early, or what is your sense of how payers view the type of test that you're developing at Curve? 


36:46

Nathan Hunkapiller (Guest)
Yeah, we have begun discussions with payers. We've talked with people who used to work in the MolDX program, so payers at both the government level but also on the commercial side as well. We have advisors in that realm, and we've hired coverage and reimbursement consultants to get their feedback, and that's positive. We also have some internal experience from our chief medical officer, Sam Asgarian, who's worked on both sides of this payer landscape. He was the CMO of Aetna, and he was the CMO of Thrive Earlier Detection and also Scipher Medicine, which just got a local coverage determination in their favor quite recently. And based on what we're hearing and what our management thinks, we're seeing a good landscape from which to make this case: pretty sound healthcare economics, a good precedent in the surveillance technology existing today, but where we think we can improve heavily on it. That's just kind of a clean, easy argument to make. 


37:48

Nathan Hunkapiller (Guest)
There are existing coverage guidelines. We've talked to physicians. We've done a lot of market research with physicians, and they're very strongly supportive of this kind of test. Those are all the factors that you want in your favor when you're going to make these arguments to payers, but in the end it's a process, and we have to go through the right motions to establish the clinical evidence that we need and submit that package for review. 


38:11

Neil Littman (Host)
And actually, let's talk about that for a minute. What do you feel like you need as the clinical package to get reimbursement for the test? 


38:20

Nathan Hunkapiller (Guest)
Yeah, so in terms of the things that they look for, you can't have a package that doesn't have some form of analytical validity and clinical validity, and that's just the analytical and clinical performance of your test, but also a clinical utility study that demonstrates how patients use it and what the impacts of using it are. And in our field, the advisors have told us that we need to do a head-to-head comparison against the standard of care, ultrasound and AFP. And you can do a head-to-head study if you have kind of a gold standard to go with it, which in this case is diagnostic MRI. That's how we plan to approach this, and we think we can show very clearly the superiority of the performance of our test over ultrasound. We think we need about 2,000 patients to do this, which is actually quite nice. 


39:08

Nathan Hunkapiller (Guest)
That's not a large study. We found that because the prevalence is high in this population, in order to power the study effectively and get the event rate of cancers that you need to demonstrate the benefit, we only need a relatively small study size. So once we've done that study and published it in a peer-reviewed setting, then we can submit an application to MolDX, which administers kind of the Medicare coverage decisions. 
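
A rough sense of why roughly 2,000 patients can be enough when prevalence is high: at the two-and-a-half-percent annual rate Nathan cites for cirrhosis, that cohort yields on the order of 50 incident cancers in a year of follow-up, which is the event count that powers a head-to-head comparison (a back-of-the-envelope sketch, not the actual study design or power calculation):

```python
def expected_events(n_patients, annual_incidence, years=1.0):
    # Expected number of incident cancers observed during follow-up.
    return n_patients * annual_incidence * years

print(expected_events(n_patients=2000, annual_incidence=0.025))  # ~50 events
```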


39:34

Neil Littman (Host)
Nathan, chronic liver disease could be the first of many indications that you intend to pursue. How do you think about prioritizing other indications and what other indications are you currently focusing on, if any? 


39:47

Nathan Hunkapiller (Guest)
Well, I described all those factors earlier that made this liver disease pathway appealing to us. What's the total level of spend we're going to have? How clear is the reimbursement pathway? How strong are the health economics? How much evidence do we need to generate? And I think, fortunately for us, most of the chronic disease realm fits those criteria; we found that there were many compelling cases there. We're particularly interested in a few other diseases in our product roadmap, and we also have some data for these within our tissue atlas, so that makes them compelling as well. Those are endometriosis, which is a disease of kind of abnormal endometrial tissue behavior that causes pain and infertility in young women, and then we're also interested in inflammatory bowel diseases, which cause uncontrollable flare-ups and a lot of time in the hospital for people with that particular disease. 


40:46

Nathan Hunkapiller (Guest)
So those are compelling to us. But we're excited and very focused on liver in the next few years as sort of our product one-two punch, just because of the synergies I described. And so we're laser focused there, but we're going to generate some data that, hopefully in our next financing round, will begin to show the ability of our technology platform to address multiple chronic diseases. And I think once you establish yourself as a platform company, there are a lot of opportunities in partnering, and there are a lot of opportunities to demonstrate your value to the investment community. And so we'd like to see what else we can do. But we'll be laser focused, and liver makes sense. 


41:26

Neil Littman (Host)
And actually, as we're thinking about expanding indications and talking about that topic, I'd love to get your perspective on expanding the use of liquid biopsies. And so this may be tangential, but I know you're a scientific advisor at Droplet Biosciences and they're exploring sort of a new domain of liquid biopsies and using it in the realm of lymphatic fluid, if I'm not mistaken. So I'd love to get your perspective on how the liquid biopsy technology is evolving and could evolve going forward. 


41:58

Nathan Hunkapiller (Guest)
Yeah, this is a great question. So there's a number of trends we're kind of seeing in the liquid biopsy space. The early applications of liquid biopsy, I think they haven't even totally matured yet. I'd say they're in somewhat of a teenage phase of growth. There's a lot of labs that have identified and matured some of these low hanging fruit applications like non invasive prenatal testing and transplant rejection monitoring and some of these cancer indications. Cancer screening is certainly still growing and still has some maturing to do over the next ten years. But there's a few interesting trends. One of them is labs are innovating on their technologies and their analytes and coming up with more efficient chemistries. Every month or so you see a great new article about a company that's approaching one of these applications differently with a different molecule that they're looking at or a different way of analyzing it, and those are exciting to see and they're moving up the technology S curve for those reasons. 


42:59

Nathan Hunkapiller (Guest)
And this other trend is in how the field is starting to appreciate the value of pairing an appropriate sampling strategy with the disease of interest. And blood, which has been the default for many years for several reasons, has been used widely. It's considered to be pretty non-invasive; patients don't have too much issue giving blood, so compliance in using it is high. The collection devices and the processes of drawing and shipping and storing that blood are really standardized and stable, and it's a good kind of generic capture system for most tissues of the body. So I think it's a good all-around material to use. But there are some areas where blood has limitations, where it's not the most enriched source of material for some tissue types and some diseases, like if you were to look at the urinary tract or certain reproductive health organs. 


43:57

Nathan Hunkapiller (Guest)
Urine is a better example of a biofluid that's more proximal to those specific tissues, like the kidney and bladder and prostate. And we're seeing other examples of this. Cerebrospinal fluid is a good example of how you access the nervous system tissues that are protected by the blood-brain barrier. Or in the lower GI system, there's not a lot of vascularization of the epithelial tissues, so stool is a better sample in that setting, and companies like Exact Sciences and others are leveraging that for colorectal cancer screening; when those cells die, they slough off into the GI system, not into the blood. And Droplet Biosciences, I think, is another great example of this, where they are exploring a new area of liquid biopsy, which is what can be found in lymphatic fluid. Maybe I'll give some examples of this. So they have a kind of first-of-its-kind lymphatic fluid liquid biopsy. 


44:58

Nathan Hunkapiller (Guest)
They have a sampling method and a technology that they are intending to use in the setting of collecting post-surgical drain fluid. These are drains that are placed right after a surgery, typically resection surgeries for specific types of cancer. They're localized, which is, I think, maybe the most important part of this. The vascular system of blood is kind of a generic system, and it's peripheral. But the lymphatic system, via the lymph nodes and the vessels that connect it to other tissues, drains sites of the body in very specific locations. And by sampling in those specific locations, you get a very proximal, enriched signal of what you're after. And they're interested in doing this where they think they'll have an advantage over blood, which in the setting of, say, minimal residual disease is more distal to the site where a tumor was excised and you want to see if anything's still left there. 


45:56

Nathan Hunkapiller (Guest)
So there's a big problem in the setting of head and neck and lung and breast and kidney and prostate and bladder cancers, where the blood just doesn't get a lot of this material. These are not tissues that shed a lot. They've historically been challenging in multicancer early detection, and it's kind of biological: the kidney and the prostate and the bladder are just not as adjacent to the blood, and there are other, better ways of sampling them. Lung cancers don't shed particularly well. So when you're trying to find signals of those in the blood, it's just not the most enriched source, and lymphatic fluid seems to overcome that. And also, lymphatic drains are placed in most of those surgical procedures. So I think that creates an opportunity to improve performance by finding enriched signals of cancer in those fluids, but also to improve clinical performance because there are more molecules available to be sampled and the technologies don't have to sequence as deeply to find them. 


47:03

Neil Littman (Host)
That's fascinating how the technology has evolved and will certainly continue to evolve. Nathan, I want to be cognizant of your time. We could probably talk for another couple of days about some of these fascinating topics, so I just want to wrap up with one last question for you, and that's advice for other entrepreneurs. You've shared a wealth of knowledge in the liquid biopsy space. You've worked at Natera; you've seen a lot from that company developing and maturing. The same goes with Grail. And in fact, you come from a line of entrepreneurs as well. I mean, your dad, Michael Hunkapiller, his name may sound familiar; he was the CEO of Pacific Biosciences and a prominent figure in the world of DNA sequencing, as was your Uncle Tim. I can only imagine some of the conversations you had around the dinner table. But any advice for entrepreneurs that are out there building businesses, raising capital, or folks that are thinking about jumping into the entrepreneurial journey? 


48:02

Nathan Hunkapiller (Guest)
Yeah, I mean, I'm more of a scientist than an entrepreneur. That's what I do on a day-to-day basis. But I've enjoyed learning across the few different companies that I've worked at, Natera and Grail and now Curve. And through advising companies as well, I get a flavor of how the business is evolving. We've also talked to a lot of investors lately, so we're learning what they're looking for. And I would say one piece of advice for founders who are attempting to raise in this current environment is don't just think about your product. Think like an investor; they're generally brutally scrutinizing every aspect of your plan to make sure it's absolutely bulletproof. They're trying to find reasons to say no rather than reasons to say yes. It's a rule-out process. So I would say, if you're thinking ahead and can find things that don't quite make sense yet or that you don't have a solution for, you'd better get on that right away, because it's eventually going to cause a problem. 


48:59

Nathan Hunkapiller (Guest)
And as entrepreneurs who are spending an incredible amount of energy to bring a company to light, you're going to be at it for years. Your time is valuable, and you could be spending it in other ways. So if something isn't going to work, think hard about it and move on, or change your strategy. Think about something different. Be flexible. Another thing is know your customer. In the diagnostics world, I said we have three: we have patients and providers and payers, and they all have different needs. And I've seen instances where, if there's a conflict between any one of them and any of the others, your product is probably going to die on the vine. There are a few good examples of this where people tried to build a technology that met one need but not all of the others, and that led to those products not getting very far. 


49:51

Neil Littman (Host)
Yeah, Nathan, I think that's all great advice. So with that, I think we better wrap up. And I'd like to say a big thank you for joining me on the show today. 


49:59

Nathan Hunkapiller (Guest)
Yes, thank you for having me. It was great speaking with you. 


50:03

Danny Levine (Producer)
Well, Neil, what did you think? 


50:05

Neil Littman (Host)
I thought that was a great conversation with Nathan. He brings vast expertise in the field of liquid biopsies from his time at Natera and Grail, and now what they're building at Curve Biosciences, and you can see sort of the lineage of what he's learned and how they're applying that to Curve in the field of chronic liver disease. And it was really interesting to get his perspective on the trade offs between sensitivity and specificity for the tests. I think in the case of chronic liver disease, which is their lead indication, one of the things that's really interesting is there is a built in patient population for this test, so they're not really looking to expand to a much larger pool of patients like a lot of the oncology tests are doing. They have a built in patient population. They're looking to replace the existing standard of care with a test that performs much more accurately than what is done today. 


50:53

Neil Littman (Host)
So when I think about what they're doing from a commercial standpoint, they're looking to sort of fit seamlessly into the existing workflow with a test that is cheaper and performs better than what is currently available. And so that's not only better for patients, obviously, but that actually can pull costs out of the healthcare system. So I thought that was really fascinating that they chose that sort of defined patient population within chronic liver disease as their lead indication. And you heard about some of the trade offs between specificity and sensitivity. Well, they're not really scaling to a huge patient population, and so I think it makes a lot of sense where they're targeting their sort of beachhead for their initial commercial launch. 


51:32

Danny Levine (Producer)
I thought that was actually a particularly interesting point of the conversation. How difficult do you think it is to strike that balance between specificity and sensitivity? 


51:45

Neil Littman (Host)
Well, you heard Nathan comment on this a couple of times. I think it really depends on the nature of how the test is being used, number one, and the patient population that you're going after. Right. So you design a test very differently depending on how it's being used, earlier versus later stage disease, for example, and also on the size of the patient population. 


52:04

Nathan Hunkapiller (Guest)
Right. 


52:04

Neil Littman (Host)
It's one thing if you're going after a relatively small patient population. It's a whole nother thing. And we sort of learned a lot about this during the COVID days. When you're going after a population of tens of millions or hundreds of millions of people, then that false positive rate, even if it's 1% or 0.5%, has a massive effect. And so there are different trade offs. And so I think it's a really fascinating concept for people who are developing diagnostic tests to make sure that they get right up front. How is it being used, what is the patient population? And if you think about those two things, then you design the sensitivity and specificity to meet those two criteria. 


52:41

Danny Levine (Producer)
Related to that is this atlas of a large number of tissue samples they've been able to put together and what they've had to do to kind of remove noise from the process. Do you think that's going to be a point of differentiation for the company? 


52:57

Neil Littman (Host)
I think that's the single biggest point of differentiation for the company, Danny. You heard him talk about the large tissue bank. And so the way that I think about it is that tissue bank is a data moat that surrounds the company. They have access to this large tissue sample base from which they've been able to derive different methylation states that indicate disease progression, ranging from fatty liver disease to fibrosis to cirrhosis. And so that's all predicated on the tissue samples and the bank that they have. And if you think about, in this day and age, how you train new generative AI models, it's all about the data. It's all about the input. Right? So it's those proprietary data sets that drive a ton of value for these companies. It's the same thing, I think, with what they're doing here. 


53:56

Neil Littman (Host)
It's the data set that they have that's really driving value for the company and they have exclusive access to that. And that's where they drive this algorithm that they have for measuring DNA methylation across the different states of chronic liver disease. And that's how they design their test. 


54:14

Danny Levine (Producer)
As you think about cost and outcomes, with costs being the highest and outcomes being worse when patients are diagnosed late, what do you think the potential here is to really impact health care? 


54:29

Neil Littman (Host)
I think it's huge, right? I mean, we all know that earlier diagnosis leads to better outcomes most of the time. So I think if you could diagnose patients earlier, you can intervene earlier, and they're absolutely going to have better outcomes. And you heard Nathan talk a little bit about how their tests could be used, ranging from advanced liver disease patients to patients with earlier-stage liver disease. And I think if you can cover patients across that spectrum, there's no doubt you can have a huge impact on patient outcomes. 


55:01

Danny Levine (Producer)
You did ask him about talks with Payers. How receptive do you think payers are to this type of technology and how much convincing and data do you think it will take to demonstrate its value? 


55:13

Neil Littman (Host)
Well, Danny, you heard me ask Nathan that question specifically, and it sounds like they need to conduct one additional study, in the range of about 2,000 patients, in order to secure reimbursement. It sounds like that's going to be a head-to-head study against the standard of care. Obviously, if that comes out positive, it sounds like that should be enough to secure reimbursement for the test. They've already done a validation study, and reimbursement for these types of tests is obviously critical. So it sounds like they have one more, larger study to complete to get that reimbursement. 


55:44

Danny Levine (Producer)
Well, until next time, okay? 


55:46

Neil Littman (Host)
Thanks, Danny. 


55:50

Danny Levine (Producer)
Thanks for listening. 


55:51

Danny Levine (Producer)
The Bioverge Podcast is a product of Bioverge, Inc., an investment platform that funds visionary entrepreneurs with the aim of transforming healthcare. Bioverge provides access and enables everyone to invest in highly vetted healthcare startups on the cutting edge of innovation, from family offices and registered investment advisors to accredited and non-accredited individuals. To learn more, go to bioverge.com. This podcast is produced for Bioverge by the Levine Media Group. Music for this podcast is provided courtesy of the Jonah Levine Collective. All opinions expressed in this podcast by participants are solely their opinions and do not reflect the opinion of Bioverge, Inc. or its affiliates. The participants' opinions are based upon information they consider reliable. Neither Bioverge nor its affiliates warrants its completeness or accuracy, and it should not be relied upon as such. Nothing contained in or accompanying this podcast shall be construed as an offer to sell, a solicitation of an offer to buy, or a recommendation to purchase 


56:58

Danny Levine (Producer)
any security by Bioverge, its portfolio companies, or any third party. Past performance is not indicative of future results.