The idea that colchicine—a poison drawn from the seeds and stem of the lovely autumn crocus—could be used to prevent complications in patients undergoing heart surgery was an intoxicating one. Known since the Middle Ages, colchicine was one of those ancient discoveries that found new life in a host of scientific and medical applications during the modern era. Some six decades ago, it was used in the study of human chromosomes: the chemical, which can stop mitosis in its tracks, made it easier to spy the dividing chromosomal strands in metaphase, where they could be clearly viewed under a light microscope. Then colchicine, which has unusual anti-inflammatory properties, proved effective as a treatment for gout, and eventually became a second-line therapy for pericarditis.

At the turn of the most recent millennium, when one large international clinical trial began recruiting patients to test whether colchicine could prevent post-pericardiotomy syndrome—a common complication following cardiac surgery—something marvelous was discovered: colchicine seemed to prevent post-operative atrial fibrillation (AF) as well. In a follow-up trial, the results of which were published in 2011, patients given the drug had nearly half the incidence of AF of those not receiving it. But when the same lead investigators repeated the trial with 360 new heart-surgery patients at 11 hospitals across Italy, testing the age-old poison against a placebo, "there were no significant differences between the colchicine and placebo groups" when it came to AF.

Both studies were randomized and double-blinded. Both, naturally, crossed all their t's and dotted all their i's. And both came to entirely different conclusions.

So began a cycle that is all too familiar in the clinical trials arena today: a round-robin of studies that never quite concludes and rarely emerges with a clear victor. In 2016, when researchers gathered up data from 10 randomized controlled trials (aggregating the results from 1,981 patients in all), they found that "colchicine therapy was not associated with a significantly lower risk of post-operative AF." A year later, another so-called meta-analysis—this one rounding up six randomized controlled trials with 1,257 patients—found the opposite: "Colchicine significantly reduced the odds" of post-operative a-fib.

You can find the same types of long-running disagreements across the therapeutic board in clinical trials, as I wrote about years ago in an essay for the New York Times entitled "Do Clinical Trials Work?" The striking example I focused on then was the cancer drug Avastin, developed by Genentech (now Roche), which at the time had been studied in at least 400 completed human clinical trials for various cancers. In spite of all that testing—at a financial cost of untold billions of dollars and an immeasurable time cost for the patients who volunteered—no one was able to say with any certitude whether the drug would work in any one patient or not. As a Genentech spokesperson told me at the time: "Despite looking at hundreds of potential predictive biomarkers, we do not currently have a way to predict who is most likely to respond to Avastin and who is not." (FYI, there are now 767 completed interventional clinical trials studying Avastin. And we still don't know the answer.)

The question is, Why? Why do clinical trials so often tell us so little? Well, there's a long answer for that—and for those interested, I humbly refer you to here and here. (Yes, the latter link is a shameless book plug.)
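An aside for the quantitatively inclined: a meta-analysis of this kind typically boils down to inverse-variance pooling of each trial's odds ratio. Here is a minimal sketch of that arithmetic in Python; the trial counts below are invented purely for illustration and have nothing to do with the actual colchicine studies.

```python
# A minimal sketch of inverse-variance (fixed-effect) meta-analysis of odds
# ratios. All trial numbers here are HYPOTHETICAL, made up to illustrate why
# pooling different subsets of trials can flip a conclusion.
import math

# Each hypothetical trial: (events_drug, total_drug, events_placebo, total_placebo)
trials = [
    (12, 100, 24, 100),
    (30, 180, 33, 180),
    (22,  90, 20,  90),
]

def log_or_and_var(a, n1, c, n2):
    """Log odds ratio and its large-sample variance from a 2x2 table."""
    b, d = n1 - a, n2 - c           # non-events in each arm
    log_or = math.log((a * d) / (b * c))
    var = 1/a + 1/b + 1/c + 1/d     # standard variance approximation
    return log_or, var

# Inverse-variance weighting: each trial's weight is 1 / variance, so
# bigger, more precise trials pull the pooled estimate harder.
num = den = 0.0
for a, n1, c, n2 in trials:
    lor, var = log_or_and_var(a, n1, c, n2)
    num += lor / var
    den += 1 / var

pooled = num / den
se = math.sqrt(1 / den)
lo, hi = pooled - 1.96 * se, pooled + 1.96 * se
print(f"pooled OR = {math.exp(pooled):.2f} "
      f"(95% CI {math.exp(lo):.2f} to {math.exp(hi):.2f})")
```

If the pooled confidence interval crosses 1.0, the effect is reported as "not significant." Add or drop a trial or two from the set being pooled and that interval can slide to either side of the line, which is one mechanical reason two honest meta-analyses of overlapping literatures can reach opposite conclusions.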
But the short answer, for anyone who doesn't want to buy a book, is that the standard old-fashioned clinical trial is designed to answer a single question about a population whose members differ in countless specific ways. (Thanks to differences in physiology and pharmacogenetics, a single drug can work very differently in two healthy people to begin with. Throw in the complexities and heterogeneities of an evolving, mutation-driven disease such as cancer, and the drug response from one person to the next can be even more variable.) And so often—too often, really—the message that emerges from trials is a jumble.

That doesn't mean we should throw out clinical trials. Nothing of the sort. What it does mean is that we should reinvent them. We should design clinical studies that actually teach us something every time.

Which brings me, at last, to my fourth prediction for 2018. Faithful readers will recall that I've written about three developing trends for the new year and beyond: self-aggregation, de-hospitalization, and (yesterday) the bet on RNA-based therapies. In addition, I think we're going to see—or begin to see—a radical rethinking of the clinical trials process. The urgency for this change will be particularly acute when it comes to testing investigational drugs (and cocktails of drugs) in disease settings where there's a great deal of variability between patients (as in most cancers) and where the patient populations are small (as in rare diseases).

There are lots of good ideas on this front—including trials that are designed to adapt, which I've written about earlier. But this year, I believe we will have a new tool to explore in the clinical research setting: artificial intelligence. In a realm of swirling, incomprehensible big biological data—which is, perhaps, another way of thinking about the human body—the opportunity to use computer learning to better anticipate which drugs will work well (and not so well) in any one person is one we shouldn't pass up.

As one of the world's premier experts in clinical trials, Don Berry, told me, "The standard clinical trial is pretty much the only thing in medicine that hasn't changed in the last 70 years." Well, a change is gonna come.
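To make that idea concrete, here is a toy sketch of what "computer learning" to anticipate drug response might look like: a simple classifier trained on patient features to estimate each person's probability of responding. Everything in it is hypothetical; the patients are synthetic, the features invented, the response rule made up. It shows the shape of the approach, not a validated model.

```python
# A toy sketch of per-patient drug-response prediction. All data here is
# SYNTHETIC and all features HYPOTHETICAL -- this illustrates the approach,
# not any real trial or biomarker.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 500

# Invented patient features: age, a biomarker level, a binary mutation status.
X = np.column_stack([
    rng.normal(60, 10, n),      # age (years)
    rng.normal(1.0, 0.3, n),    # hypothetical biomarker level
    rng.integers(0, 2, n),      # hypothetical mutation: present / absent
])

# Synthetic ground truth: response driven by the biomarker and the mutation.
logit = -2.0 + 1.5 * X[:, 1] + 1.0 * X[:, 2]
responded = rng.random(n) < 1 / (1 + np.exp(-logit))

X_train, X_test, y_train, y_test = train_test_split(X, responded, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Predicted probability of response for each held-out patient -- the sort of
# per-person estimate a "works on average" trial readout can't give you.
probs = model.predict_proba(X_test)[:, 1]
print(f"held-out accuracy: {model.score(X_test, y_test):.2f}")
print(f"first five predicted response probabilities: {np.round(probs[:5], 2)}")
```

The per-patient probabilities in the last line are the point: a conventional trial readout tells you whether a drug works on average across a population, while a model of this kind, trained on real trial data with real biomarkers, aims to say something about the particular patient in front of you.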