PharmExec Blog

Paring the Fat in Clinical Trials

Is too much irrelevant data slowing the pace to registration?

Conclusions of a Pharm Exec May 15 Roundtable on finding ways to build efficiencies into an increasingly costly and complex clinical trials process have been bolstered by new research from the Tufts Center for the Study of Drug Development. Based on extensive survey data drawn from a working group of 15 major drug makers, Tufts’ latest work provides fresh answers to a basic strategic question: given the growing awareness of how complex most trial protocols have become, what exactly is contributing to the run-up in costs, and where and how can the biggest savings be made without impairing the integrity of the process for regulators and patients?

The Tufts research renders a startling conclusion. Trial sponsors, in attempting to anticipate or meet the high expectations of all stakeholders in the process, are accumulating vast amounts of protocol data that are either irrelevant to the results or end up not being used by regulators. Tufts researchers reviewed data compiled for 140 protocols submitted by the 15 participating companies, involving in total more than 25,000 separate procedures. All the protocols covered phase II(a) through phase III(b) for compounds completed since 2009; the largest therapeutic segment covered was oncology, with 31 protocols. The study tied each protocol procedure to a primary, secondary, tertiary, or exploratory endpoint, using the clinical study report and analytical plan. Reviewers then determined the proportion of protocol procedures not associated with the primary or key secondary endpoints. Next, an economic assessment was made to quantify the cost of performing these “non-core” procedures.

“What we found was that nearly one out of four procedures can be classified as non-core, costing sponsors approximately 20 per cent of every study budget for the collection of data that is not material or may never be used,” study coordinator and Tufts senior research fellow Ken Getz told Pharm Exec. In monetary terms, that 20 per cent amounts to an average of $1.1 million for each protocol budget, a figure that does not include all the staff time spent in collecting, managing and analyzing this non-essential clinical data. Extending this finding to the total number of active protocols performed each year, as reported by the FDA, the cost to the industry overall in performing these non-core procedures amounts to a staggering $3–5 billion per year.
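The extrapolation above can be checked with back-of-the-envelope arithmetic. A minimal sketch, using only the figures quoted in the article; the annual active-protocol counts are hypothetical round numbers chosen to show how the reported $3–5 billion range could arise:

```python
# Back-of-envelope check of the cost figures quoted above.
NON_CORE_SHARE = 0.20                 # ~20% of each study budget (reported)
NON_CORE_COST_PER_PROTOCOL = 1.1e6    # $1.1 million average per protocol (reported)

# Implied average total budget per protocol: $1.1M / 0.20 = $5.5M
avg_budget = NON_CORE_COST_PER_PROTOCOL / NON_CORE_SHARE

# Hypothetical annual active-protocol counts bracketing the reported range.
for active_protocols in (3_000, 4_500):
    industry_cost = active_protocols * NON_CORE_COST_PER_PROTOCOL
    print(f"{active_protocols} protocols -> ${industry_cost / 1e9:.2f} billion/year")
```

At roughly 3,000 to 4,500 active protocols a year, $1.1 million of non-core spending per protocol lands squarely in the $3–5 billion range the study reports.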

What are the most frequent procedures that can be defined as non-core?  The list includes dietary evaluations/nutrition monitoring, hematology scans, TSH testing, and unmasked physician reviews including physical examinations. Again, oncology studies had the highest proportion of non-core activities among the therapeutic areas included.

The Tufts research indicates that industry is beginning to get serious about bending the cost curve for this mission-critical aspect of the new drug development process, with many of the 15 sponsors noting their high level of cooperation in furnishing data that could be viewed as commercially sensitive. For its part, Tufts hopes the work will help companies boost their in-house trial management efficiencies. There is also a rich new vein of evidence from the work that can be used to launch a dialogue with regulators and other stakeholders, on two fronts: (1) improved clarity from regulators on what they really need to render the best clinical judgment around registration; and (2) cooperation within industry in finding better and potentially novel medical or clinical applications for all the data that has been collected but sits unused.

This entry was posted in Op-Ed and R&D.

3 Comments

  1. Posted July 20, 2012 at 9:00 am | Permalink

    Unfortunately, I don’t see this news as a “startling conclusion”. Clinical trials have long over-collected information as part of the trial process. It certainly helps to quantify the wasted effort, and perhaps that will spur some changes. It is, in fact, a relatively simple matter to identify — at the outset of a trial — which data fields are necessary per the protocol. In fact, developing the statistical analysis plan *at the same time* as the protocol, and then mapping data collection fields to their role in analysis results, is an excellent way of clearly identifying ‘unused’ fields.

    As stated in the article, the problem lies with tertiary and exploratory endpoints, and the desire to collect data which “may be needed later” without a clear understanding of how that data will be used. It’s as if NASA planned an exploratory mission, and kept adding instruments to look for things with diminishing potential value. Each added instrument increases the cost, complexity and likelihood of unexpected problems, and has the potential to undermine the success of the primary mission.

  2. Posted July 24, 2012 at 7:42 am | Permalink

    I appreciate the work that the Tufts Center performs in the analysis of costs of drug development, and I have no doubt that this report captures the problems in clinical trials for the major firms that were interviewed. However, my experience with smaller firms has been that the high cost of studies for these companies can often be attributed to poor planning, general waste of resources, and failure to perform adequate audits (beyond monitoring) while studies are being conducted. Major pharma and small start-up firms are very different in how they approach clinicals. I would like to see more attention given to the problems and costs of development for the smaller entities that are often under stakeholder pressure to conduct studies long before they are ready.

  3. Lilli Abdullahi
    Posted July 26, 2012 at 8:19 am | Permalink

    I agree with Carol. Is there any study for smaller start-ups? With a background in QA/RA, I wonder if management methodologies like Six Sigma/Lean could be applied, including dealing with stakeholder power to remove non-critical items. My experience is that too little attention is paid to the quality of the study protocol and the format for collecting the data, causing confusion at the study site. Monitoring is left to the CRO without proper oversight by the sponsor, resulting in wasted resources when data is returned incomplete and inconclusive. The push for data to obtain funding forces smaller start-ups to rush the CMC and study process. Maybe it is time for another business model?
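The field-to-endpoint mapping suggested in the first comment can be sketched in a few lines: record, for each data-collection field, which endpoint(s) in the analysis plan it feeds, then flag fields with no primary or key secondary role. All field and endpoint names below are invented for illustration:

```python
# Sketch: map each data-collection field to the endpoint tier(s) it
# supports per the statistical analysis plan, then flag fields with no
# primary/key-secondary role as candidates for removal.
protocol_fields = {
    "vital_signs":       ["primary"],
    "tumor_measurement": ["primary", "secondary"],
    "qol_questionnaire": ["secondary"],
    "dietary_log":       ["exploratory"],
    "tsh_level":         [],            # collected but never analyzed
}

CORE_ENDPOINTS = {"primary", "secondary"}

def non_core_fields(fields):
    """Return fields that support no primary or key secondary endpoint."""
    return sorted(f for f, roles in fields.items()
                  if not CORE_ENDPOINTS.intersection(roles))

print(non_core_fields(protocol_fields))  # → ['dietary_log', 'tsh_level']
```

Building this map while the protocol and analysis plan are drafted together, as the comment suggests, surfaces “non-core” fields before any site ever collects them.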

