Time. Cost. Quality. Three important parameters of any clinical trial. Ideally, you’d want the highest quality data, collected at the lowest possible cost, delivered in the shortest period of time. Unfortunately, in some studies, one or more of these parameters is sorely stressed. The severity of the stress, together with your response to it, will determine the ultimate success of your trial as well as the long-term viability of the test article in the marketplace.
If time is the issue, you could add staff or open more study sites. If cost is the constraint, you could request more funding, cut staff, reduce the number of monitoring visits, or eliminate some non-critical activities. But what if data quality is called into question? Then, the choices are not so obvious.
For example, let’s say that you’re running a clinical trial and everything is going your way. You found a group of enthusiastic investigators. Training went off without a hitch. Enrollment was fairly quick. There were very few protocol deviations, adverse events, or other problems. Now, with the few remaining queries being resolved, you are about to lock the database. It looks like the final report will be delivered about one month ahead of schedule. In short, you’re sitting pretty.
Then, you get a call from the biostatistician who says that some of the data “don’t look quite right.” He explains that one site, the one whose investigator has been doing studies for more than ten years, has two cases which show the exact same body temperature readings at each of the six study visits and two more which have the exact same body weights at each of the six visits. “The FDA will go to town on these,” he warns.
So, you call the investigator to discuss these four curious cases. “How likely is it,” you ask, “that an animal would have the same temperature and weight, to the first decimal place, at each visit over the six-month study period?” Seeming to ignore your question, the investigator instead boasts that her clinic enrolled more subjects than any other study site. “On some days,” she continues, “we saw as many as six study subjects in an hour.” You probe further until the investigator, obviously irritated, says, “Look, it may be unlikely, but it’s not impossible. That’s what we recorded for those cases.”
Got the picture? Were the data (or, worse, the cases themselves) fabricated? If not, how do you defend these highly unlikely readings? If so, how many other cases are tainted?
You could ask the investigator to write an explanatory note, then exclude the questionable cases from the efficacy analysis but include them in the safety data analysis. Alternatively, you might disqualify the site, exclude its cases from any analysis, and hope the FDA does not audit the site. Or, you could close the site, exclude all of its cases from any analyses, and include the incident in the final study report. Any of these steps, certainly the latter two, could push your study from nicely ahead of schedule to woefully behind. What do you do?
If the questionable cases are few enough, some might argue that they won’t materially affect the analysis. Therefore, they would include the data, trusting that the investigator is both honest and GCP compliant. Others might not want to risk an entire study for a few cases, or even a whole site. They would exclude any questionable data and document their reasons for doing so, even if that meant enrolling more cases to meet the protocol’s requirements.
This is where the choice becomes sticky because there are obvious economic implications and perhaps not-so-obvious ethical considerations. Discarding the cases and resuming enrollment will extend your timeline, with all of the attendant costs. That delay could result in a far greater cost if a competitor beats you to market with a similar product. On the other hand, if you proceed, you might elude scrutiny of the questionable data and get your new product approved. Tough call.
After more than thirty years in the clinical trials arena, on both the human and veterinary sides of the aisle, I am happy to report that most sponsors take the high road – regardless of cost. The few that don’t eventually get their comeuppance. Nonetheless, it is the job of clinical trial managers like me to make sure that no sponsor ever has to face the question.