Estimates of what it takes to deliver a compound to market are more than an academic exercise — such data has an increasingly important on-the-ground impact on industry revenues, because if you cannot justify your costs, how do you expect to prevail on price? Fundamental to the debate on the “productivity lag” in drug R&D is the assertion that the cost to bring a new compound to market is high — and going higher. Critics of the industry are concentrating their fire on this issue, contending that average cost estimates are excessive and tend to distort the increasingly important calculation of “value” for payers and policy-makers in pricing new medicines. The divide extends even to industry itself, as evidenced by GSK CEO Andrew Witty’s recent assertion that a better consensus is needed to measure drug development costs, based on the principles of “frugal science.”
The chief target for those who contest the cost figures cited by industry is the work conducted over three decades by the Tufts Center for the Study of Drug Development. The most recent in its series of profiles – based on its own interpretation of data drawn from the leading US-based “big pharma” companies – tagged the average cost of bringing a new compound to market at more than $800 million. Last year, two prominent industry critics, Donald Light and Rebecca Warburton, published a harsh critique of the Tufts study methodology, or, as the authors put it, “mythology.” Specifically, their paper contends that a key element in the Tufts work – which weighs the expense of investing funds in research against alternative uses of that capital yielding an equal or higher return – is poor grounds for fixing costs; eliminating this “opportunity cost” cuts the average cost nearly in half, to $403 million. Their own anecdotal calculations render that figure even lower. In addition, the 11 per cent interest rate used by Tufts to estimate the value of that alternative use of funds is deemed excessively high; Light and Warburton claimed that three per cent would be more appropriate.
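The capitalization dispute is, at bottom, a compounding calculation: each year's out-of-pocket outlay is carried forward at the chosen rate to the year of market approval. A minimal sketch, using hypothetical round figures rather than the actual Tufts or Light/Warburton data, shows how the choice between an 11 per cent and a three per cent rate moves the headline number:

```python
# Hypothetical illustration of capitalizing R&D outlays to the approval
# year; the figures below are NOT the actual Tufts or Light/Warburton data.
def capitalized_cost(annual_outlays, rate):
    """Compound each year's outlay forward to the final (approval) year."""
    n = len(annual_outlays)
    return sum(x * (1 + rate) ** (n - 1 - i) for i, x in enumerate(annual_outlays))

# Assume $40M spent out of pocket each year over a 10-year development
# program: $400M in total, close to the uncapitalized figure in dispute.
outlays = [40.0] * 10
print(f"Capitalized at 11%: ${capitalized_cost(outlays, 0.11):.0f}M")
print(f"Capitalized at  3%: ${capitalized_cost(outlays, 0.03):.0f}M")
```

Under these assumptions the 11 per cent rate inflates the $400M outlay to roughly $670M, while three per cent adds comparatively little — which is why the choice of rate, rather than the raw spending data, drives much of the gap between the two camps' estimates.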
So who is right here? Tufts is criticized for relying too heavily on company data with little scope for independent disclosure, while Light and Warburton are well-known for their adversarial stance on virtually every policy issue relevant to biopharmaceuticals. Enter an objective third party, in the form of a new study just published by F. M. Scherer, Professor Emeritus at the Kennedy School of Government at Harvard. Scherer is well-known for his earlier work on drug innovation and pricing, which was balanced – if sometimes skeptical – in weighing industry claims.
In R&D Costs and Productivity in Biopharmaceuticals, Scherer makes the following points:
• Average R&D costs have grown substantially over the past 30 years. Spending by the industry on R&D rose by an average 7.4 per cent annually between 1970 and 2007, whereas the number of approved new drugs increased by only 2.1 per cent annually over the same period – in other words, with more money spent to obtain a much lower rate of increase in new drug approvals, it is inevitable that the average cost of bringing those medicines to market has tended to rise.
• Pre-clinical costs for industry have been fairly steady over the period reviewed, largely because of the higher profile and resources of the National Institutes of Health [NIH] in subsidizing basic research. Industry progress in creating tools for “rational drug design” is another positive factor. The real growth in costs has taken place at the clinical stage, where industry obligations have soared due to tighter regulatory controls and the complexity of trials. Trials are bigger, testing requirements on enrollees have become more extensive and complex, and teaching hospitals and other trial sites are charging sponsors more to participate, treating their development support work as “profit centers.”
• The opportunity forgone by committing R&D funds rather than investing them elsewhere is a legitimate element in estimating average drug development costs, given the long lag before a medicine reaches market and earns profits – a lag more pronounced than in other sectors. Scherer notes that the US government evaluated the merits of this approach and endorsed it as far back as 1993, when a federal Office of Technology Assessment report stated that “the practice of capitalizing costs to their present value in the year of market approval is a valid approach to measuring R&D costs.”
• The argument that estimates of cost should incorporate the implicit value derived by companies from the tax deductibility of R&D outlays is overridden by the difficulty of singling out qualifying activity on both a functional and geographic basis, a calculation that the corporate tax regime is not set up to do.
• Scherer also dismisses the Light and Warburton contention that three per cent is a more valid rate of interest for estimating the investment potential of alternative uses of R&D outlays, calling it “clearly wrong.” The Tufts study’s 11 per cent rate is well in line with the private sector’s underlying cost of capital over the study period, and is actually “quite conservative,” since the cost of capital for R&D itself likely runs three or more percentage points higher, reflecting the inherent risks of investing in unproven science over a long period of time.
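The arithmetic behind Scherer's first point can be sketched directly: when spending compounds faster than approvals, the average cost per approval compounds at roughly the ratio of the two growth rates. A toy calculation, using only the growth rates quoted above and nothing else from the study:

```python
# Toy calculation using only the growth rates quoted above: 7.4% annual
# growth in R&D spending vs 2.1% annual growth in new-drug approvals.
years = 2007 - 1970          # the 37-year period Scherer reviews
spend_g, approval_g = 1.074, 1.021

# Cost per approval grows at the ratio of the two rates each year.
annual_cost_growth = spend_g / approval_g - 1
cost_multiple = (spend_g / approval_g) ** years

print(f"Implied cost-per-approval growth: {annual_cost_growth:.1%} per year")
print(f"Multiple over {years} years: ~{cost_multiple:.1f}x")
```

On these inputs the implied cost per approval grows about 5 per cent a year, multiplying several-fold over the period — a purely mechanical consequence of the two growth rates, before any argument about capitalization even begins.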
There are also some implicit recommendations on industry positioning worth gleaning from the Scherer paper. First, he admits that methodologies for calculating the cost of drug development pose inherent challenges. More progress could be made, with support from industry, in overcoming them. Stakeholders should work together with Tufts to address misconceptions and enhance public confidence in the survey. To that end, industry associations like PhRMA might well expand and improve methods of collecting member R&D data, particularly for R&D activity outside the US; while BIO, PhRMA’s biotech partner, might also upgrade its commitment to quantify member R&D spending to support the work of Tufts and other academic institutions – big pharma must not be the sole source.
Second, companies that currently provide the data to these institutions might consider reevaluating the confidentiality standards that bar efforts to openly evaluate and communicate that data to other stakeholders.
Third, journalists and other communicators need, as a “matter of good practice,” to flag the opportunity cost element when reporting the numbers from the Tufts studies. The capital charge should no longer come as a “surprise” that activists can wield to discredit the body of evidence as a whole.
Finally, the regulatory community must better understand how its practices drive development costs, and should work more closely with industry to agree on basic standards for defining, monitoring and, where appropriate, ameliorating them.
The stakes here are high: the three decades of Tufts surveys arguably constitute the most important body of policy research bearing on the cost of supporting good science. If there is no agreed line of defense around the basic issue of costs incurred in bringing a medicine to market, then industry is the ultimate loser when it comes to obtaining access and defining a price for that medicine, especially now that payer expectations around “evidence” are becoming more insistent and precise.
The Tufts Center is undeterred by the criticism and regards the Scherer paper as a welcome addition to the debate. Center Director Ken Kaitin told PE that a new, equally robust iteration of its research is underway. “We are presently collecting fresh data from companies. The interpretation of this data is exclusively our own, as has always been the case,” he said. He noted that the Center is committed to communicating with any interested party on ways to extend the integrity and relevance of this important line of research.