Article | October 10, 2013

Clinical Trial Performance Measures You Can Use (…And Believe)

Source: Clinical Leader
Michael Howley

Are you getting your money's worth from your clinical trials? As clinical trial costs expand and margins thin, this question becomes increasingly important. But it is currently hard to know whether your clinical study team is performing well. It's an ironic dilemma: even though life sciences is a data-intensive industry, measuring the performance of clinical trials is still an immature science.

Our ongoing research, the Clinical Trials Outsourcing Performance (C-TOP) project, is an academic-industry collaboration studying how to improve the measurement of clinical trial performance. The goal is to develop scientific measures that complement existing operational and financial metrics.

Over-Reliance On Isolated Operational Metrics

Two limitations of current clinical trial performance measurement motivate the C-TOP research. First, there is an over-reliance on isolated operational metrics. While operational metrics have the benefit of being easy to measure (i.e., they are reliable), it is hard to know what they mean (i.e., they have limited validity). As an example, suppose you were evaluating trial startup and found that it took 70 days to recruit subjects. Is this good? The answer, of course, is that 'it depends' on a host of factors like the phase, the specialty, the country of the trial, or even when the trial took place. When the meaning of a measure changes with the context, the measure lacks validity. When a measure is both reliable and valid, we call it scientific.
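To make the point concrete, here is a minimal sketch (in Python, using entirely hypothetical expected durations rather than C-TOP data) of how the same 70-day figure can read as fast or slow depending on the trial's context:

```python
# Hypothetical context-specific expectations -- illustrative numbers only.
HYPOTHETICAL_EXPECTED_DAYS = {
    ("Phase I", "oncology"): 55,
    ("Phase III", "cardiology"): 95,
}

def interpret_recruitment(days_observed, phase, specialty):
    """Compare an observed recruitment time to a context-specific expectation."""
    expected = HYPOTHETICAL_EXPECTED_DAYS.get((phase, specialty))
    if expected is None:
        return "no context-specific expectation available"
    return "ahead of expectation" if days_observed < expected else "behind expectation"

# The same 70 days looks slow in one context and fast in another.
print(interpret_recruitment(70, "Phase I", "oncology"))      # behind expectation
print(interpret_recruitment(70, "Phase III", "cardiology"))  # ahead of expectation
```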

The Metrics Trap

The second limitation we identified is an emphasis on benchmarking, which can lead to the 'Metrics Trap.' Benchmarks are typically industry averages, and have the advantage of allowing you to compare yourself to the competition. You can fall into the Metrics Trap, though, when the industry is performing poorly. In the example above, if you found that the industry average is 90 days, you would be tempted to think that you had a high-performing trial. But what if high performance only happens when subjects are recruited in 50 days? In this case, you fell into the Metrics Trap.
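The arithmetic of the trap can be written out in a few lines; the 50-day high-performance threshold below is purely illustrative:

```python
# The Metrics Trap, using the numbers from the example above.
observed_days = 70
industry_average_days = 90
high_performance_days = 50  # hypothetical level at which startup truly excels

beats_benchmark = observed_days < industry_average_days       # True
is_high_performing = observed_days <= high_performance_days   # False

if beats_benchmark and not is_high_performing:
    print("Metrics Trap: better than the industry average, "
          "but still short of high performance.")
```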

The Value of a Predictive Model

To avoid the Metrics Trap, you need to depend on a predictive model. Only then can you understand the relative contribution of each aspect of performance; with benchmarks, all variables carry equal importance. We can illustrate this advantage of predictive models with a finding from our research. Our subjects identified the ability to identify qualified investigators as important in study startup. In our predictive model, however, that ability had a negative effect on study startup performance. Our subjects were not wrong, but when you look at the unique contribution of each variable to performance, the ability to identify qualified investigators actually detracts from it. Looking at these performance factors in isolation is inadequate. A predictive model allows you to identify all of the drivers of performance and to weigh each driver based on its contribution to performance.
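As a hedged sketch of what such a model might look like (the drivers, data, and coefficients below are simulated for illustration and are not C-TOP results), a simple regression assigns each driver its own weight, and that weight can turn out negative even for a driver that respondents rate as important:

```python
# Illustrative sketch, not the C-TOP model: a linear regression on simulated
# data that weighs each driver by its unique contribution to performance,
# rather than treating all drivers as equally important (as a benchmark does).
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Hypothetical driver scores (e.g., ratings of a study team).
site_activation = rng.normal(size=n)
investigator_id = rng.normal(size=n)   # "ability to identify qualified investigators"
protocol_quality = rng.normal(size=n)

# Simulated startup performance in which investigator identification
# contributes negatively once the other drivers are accounted for.
performance = (0.6 * site_activation - 0.3 * investigator_id
               + 0.4 * protocol_quality + rng.normal(scale=0.5, size=n))

X = np.column_stack([site_activation, investigator_id, protocol_quality])
weights, *_ = np.linalg.lstsq(X, performance, rcond=None)

for name, w in zip(["site_activation", "investigator_id", "protocol_quality"], weights):
    print(f"{name}: {w:+.2f}")  # relative contribution of each driver
```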

Getting Your Money’s Worth

So how do you know if you are getting your money's worth from your clinical trials? Based on the results of the C-TOP collaboration, you should base your judgment on comparative data from a scientific, predictive model. Such an approach gives you the significant drivers of performance along with their relative weights. You can then plug in your scores to see how your clinical study team is performing, identify areas for improvement, and determine whether you are getting your money's worth out of your clinical trials.
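In practice, 'plugging in your scores' amounts to weighting each driver score by its estimated contribution; the weights and scores below are hypothetical placeholders, not C-TOP output:

```python
# Hypothetical driver weights from a predictive model, applied to one team's scores.
hypothetical_weights = {"site_activation": 0.6, "investigator_id": -0.3, "protocol_quality": 0.4}
team_scores = {"site_activation": 4.2, "investigator_id": 3.8, "protocol_quality": 4.5}

predicted_performance = sum(hypothetical_weights[d] * team_scores[d] for d in hypothetical_weights)
print(f"Predicted startup performance score: {predicted_performance:.2f}")

# Drivers with large weights but low scores are the first candidates for improvement.
```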

About The Authors:

Michael J. Howley, PA-C, PhD, is an Associate Clinical Professor at the LeBow College of Business, Drexel University. Prior to becoming a business professor, Mike worked for two decades as a clinical physician assistant (PA). mikehowley@drexel.edu

Peter B. Malamis, MBA, is the CEO of CRO Analytics. The company helps sponsors and CROs improve clinical research performance. Peter has served as a board member and advisor to biopharma services companies and is a regular lecturer at Drexel University and industry conferences. pmalamis@croanalytics.com