Milton Packer, MD, calls the system for testing and evaluating heart failure drugs dysfunctional
"The system is completely broken," writes heart failure specialist Milton Packer, MD, in an editorial in Circulation: Heart Failure.
The results of important clinical trials are not being incorporated into guidelines quickly, intelligently, or consistently, Packer argued. The most important reason is that academic leaders are more interested in performing new trials than in translating new research into clinical practice.
After a major trial has been published and reviewed by regulators, Packer asked, “what do leaders in cardiology do? Often, they sit in silence. What reason do they offer for their caution? They refer to the need for replication, the need for two trials, the need to bring the level of uncertainty down to an imperceptible level.”
“When we know something does not work, we need to abandon it; when we know that something saves lives, we need to encourage patients to use it,” he wrote. “If we do not fulfill our responsibilities as physicians and as academic leaders, sponsors will stop doing clinical trials, because they will have figured out that our purpose for doing them is not to find answers but to have more work to do. Is that what we really want?”
“Whether our trials succeed or fail, we want sponsors to support more trials,” he continued. “We will always find a sliver of hope in a totally neutral trial that has failed to meet any of its expectations, and we will always find something lacking in a trial with overwhelmingly robust results. There is no trial that is too negative, and there is no trial that is good enough. Regrettably, if you ask a clinical trialist for the solution to a problem, the answer will always be — a new clinical trial.”
Packer pointed out that at last year’s Scientific Sessions of the American Heart Association every single major randomized trial in heart failure was negative, but all the investigators sought to spin the results to suggest that more trials were warranted. On those increasingly rare occasions where there are positive trials, “experts relish in identifying perceived weaknesses in the study design or analysis, even if such deficiencies are slight and fail to meaningfully alter the interpretation of results. Undue emphasis is often placed on subgroups that can differ by the play of chance alone.” Packer also cited geographic variation, the failure to measure surrogate endpoints, and the early termination of trials as reasons that are used to discount the results of favorable trials.
Packer explained that he is “not suggesting that the heart failure community embrace every trial that reports some nominally significant P value for every promising post hoc analysis. I am certainly not proposing that trials that seem to meet their primary end point should be viewed as providing undeniable truths that should be immediately trumpeted to physicians throughout the world.”
Packer cited several major drugs and drug classes that have either been ignored by guidelines despite important new evidence or have had dramatic changes in recommendations despite a complete absence of new evidence. Two older heart failure treatments, digoxin and the combination of isosorbide dinitrate and hydralazine, have received major upgrades and downgrades in the guidelines without any new data. What, he asked, is the basis for these changes?
By contrast, two major new drugs, ivabradine (Corlanor) and sacubitril/valsartan (Entresto), have been available on the U.S. market since last spring and summer but have yet to be addressed in the guidelines. In an interview, Packer praised the Canadian guidelines, which gave sacubitril/valsartan a strong recommendation, on the basis of the positive PARADIGM-HF clinical trial, even before the drug was approved for use in that country.
PARADIGM and the Guidelines
Packer, of course, was the co-principal investigator of PARADIGM. Several outside experts said privately that the editorial appeared to be at least partially motivated by Packer's interest in the trial. Packer doesn't claim to be fully objective about his own trial, but he pushed back against this criticism. His complaint about guideline delays, he argued, does not depend on guideline authors accepting his view of his own trial.
“Take any evidence you like and be critical and be skeptical. Satisfy yourself that the data has integrity, and then say something,” Packer said.
“We are now 19 months after publication of the trial and full FDA review and approval of the drug,” noted Packer. “Tell me, what are they waiting for?” he asked. “Every question has been answered — there are no new questions being asked and there haven’t been any new questions asked in a long time.”
For Packer, like many others — though not all — the results of PARADIGM are clear. But, he argued, “even if there were problems, is silence the right response? If there is reason for caution then they ought” to put those uncertainties into the guidelines.
One of the first critics of PARADIGM was Vinay Prasad, MD, MPH, of Oregon Health & Science University in Portland. He agreed with many of Packer's larger points but disagreed with his interpretation of PARADIGM. (Prasad and Packer debated the trial at length following its publication.) Prasad said he thinks that, because of serious limitations in the trial's design, guideline committees would not be justified in issuing a broad first-line recommendation in favor of sacubitril/valsartan in the absence of supporting evidence from a second clinical trial. It is extremely unlikely that such a trial would ever be performed, however, and in his editorial Packer wrote that "it is difficult to understand how this could be ethically accomplished."
I asked Packer whether the delays in guideline committee changes perhaps reflected the inability of the committee to reach a broad consensus. He agreed that this likely played a role, but went on to argue that “if the guideline process is consensus-based and not evidence-based then they should say that.” Evidence-based medicine should not be dependent on the experts’ “level of comfort.”