COMMENTARY

REGULATORY | January 28, 2009

One Size Doesn’t Fit All Well

The current approach to measuring the cost-effectiveness of various therapies fails to capture the genetic variations that explain differences in response to medicines by different patients.

PETER J. PITTS

“While it may provide transitory savings, current strategies result in a lower quality of care that leads to higher healthcare costs over time.”

Congress is calling for the establishment of a “Federal Coordinating Council for Comparative Effectiveness Research.” What does this mean? Is comparative effectiveness the same thing as cost effectiveness? 
 
No. There’s a big difference. 
 
Cost effectiveness is what NICE (the United Kingdom’s National Institute for Clinical Excellence) does, based on, among other things, the infamous measure of roughly $50,000 per quality-adjusted life year (QALY). Cost effectiveness assumes an additional year of life is worth about $50,000, the average price of a fully loaded Land Rover.
 
For example, NICE’s preliminary decision was that four new drugs to treat kidney cancer that has spread—temsirolimus (Torisel), bevacizumab (Avastin), sorafenib (Nexavar), and sunitinib (Sutent)—should not be reimbursed by the National Health Service because, despite clinical evidence that these drugs can actually help, they weren’t “cost effective.” In essence, NICE doesn’t think these four drugs are a good value for the NHS.
 
Currently, the only available treatment for metastatic renal cell cancer is immunotherapy. This halts the disease’s progress for just four months on average. But if people are unsuitable for immunotherapy, or it doesn’t work, that’s it. There’s no other treatment option.
 
NICE agreed that patients tended to live longer when they were given these drugs. But when it put the trial data into its QALY-driven computer models, it found that the drugs’ cost, £20,000 to £35,000 ($39,000 to $68,000) per patient per year, was too high relative to the benefit they brought patients for it to recommend that the NHS prescribe them. The result? The government saves money and patients receive an expedited death sentence. That’s not hyperbole, that’s cost effectiveness.
 
Comparative effectiveness is different. The key word is “comparative.”
 
Comparative effectiveness strives to show which medicines are most effective for any given disease state. Is there a “more effective” statin? A “more effective” treatment for depression? Most of the world refers to comparative effectiveness as Healthcare Technology Assessment.
 
But how do you compare two molecules (or three or more) that have different mechanisms of action for patients who respond differently to different medicines based on their personal genetic make-up?
 
Comparative effectiveness in its current form leads to a “one-size-fits-all” approach to healthcare, which means that it doesn’t fit anyone all that well. The concept is good, but the tools are wrong. Comparative effectiveness relies heavily on findings from randomized clinical trials. While these trials are essential to demonstrating the safety and efficacy of new medical products, the results are based on large population averages that rarely, if ever, will tell us which treatments are “best” for which patients. This is why it is so important for physicians to maintain the ability to combine study findings with their expertise and knowledge of the individual in order to make optimal treatment decisions.
 
Government-sponsored studies that conduct head-to-head comparisons of drugs in “real world” clinical settings are regarded as a valuable source of information for coverage and reimbursement decisions—if not for making clinical decisions. Two such “practice-based” clinical trials, the Clinical Antipsychotic Trials of Intervention Effectiveness (CATIE) and the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (ALLHAT), were sponsored in part by the National Institutes of Health to determine whether older (cheaper) medicines were as effective in achieving certain clinical outcomes as newer (more expensive) ones.
 
The findings of both CATIE and ALLHAT were highly controversial, but one thing is not: even well-funded comparative effectiveness trials are swiftly superseded by trial designs based on better mechanistic understanding of disease pathways and pharmacogenomics. And, since most comparative effectiveness studies are underpowered, they don’t capture the genetic variations that explain differences in response to medicines by different patients.
 
But it’s important to move beyond criticizing comparative effectiveness in its current form, and instead focus on creating a policy roadmap for integrating more patient-centric technologies and science into comparative effectiveness thinking.
 
Much like the U.S. Food and Drug Administration created something called the Critical Path Initiative to apply 21st-century science to accelerate the development of personalized medicine, another national goal should be to create a Critical Path Initiative to apply new approaches to data analysis and clinical insights to promote patient-centric healthcare.
 
Why? Because comparative effectiveness should reflect and measure individual response to treatment based on the combination of genetic, clinical, and demographic factors that indicate what keeps people healthy, improves their health, or prevents disease. First steps have been taken. For example, the Department of Health and Human Services has invested in electronic patient records and genomics. Encouraging the Centers for Medicare & Medicaid Services to adopt the use of data that takes patient needs into account would complement such efforts.
 
We need to develop proposals that modernize the information used in the evaluation of the value of treatments. Just as the key scientific insights guiding the FDA critical path program are genetic variations and biomedical informatics that predict and inform individual responses to treatment, we must establish a science-based process that incorporates the knowledge and tools of personalized medicine in reimbursement decisions: true evidence-based, patient-centric medicine.
 
For instance, the FDA, in cooperation with many interested parties, has developed a Critical Path opportunities list that provides 76 concrete examples of how new scientific discoveries in fields such as genomics and proteomics, imaging, and bioinformatics could be applied during medical product development to improve the accuracy of the tests used to predict the safety and efficacy of investigational medical products.
 
We need a Critical Path for Comparative Effectiveness to begin the process of developing a similar list of ways new discoveries and tools (such as electronic patient records) can be used to improve the predictive and prospective nature of comparative effectiveness.
 
It’s a complicated proposition, but such a body’s goal is as simple as it is essential: cost must never be allowed to trump care, and short-term savings must not be allowed to trump long-term outcomes. Just as we need new and better tools for drug development, so too do we need them for comparative effectiveness measurements.
 
Today, comparative effectiveness is a short-term, short-sighted, politically driven policy. While it may provide transitory savings, current strategies result in a lower quality of care that leads to higher healthcare costs over time.
 
Restrictive formularies and healthcare systems that deny patients access to the right medicine in the right dose at the right time but pay for more invasive and expensive procedures later on have their priorities upside down. Attention must be paid. It’s time for a deep dive beyond simplistic and self-serving “comparative effectiveness.”
 
A health technology assessment model for the 21st century should reflect and measure individual response to treatment based on the combination of genetic, clinical, and demographic factors that indicate what keeps people healthy, improves their health, and prevents disease.
 
In an era of personalized medicine, one-size-fits-all treatments and reimbursement strategies are dangerously outdated. We are early in this debate, but at least we can all agree that this is not, and must not be exclusively, a debate about saving money. It must be about patient care.


Peter J. Pitts is President of the Center for Medicine in the Public Interest and a former FDA Associate Commissioner.