Bake a Perfect Batch

Featured Blog by Denni O. Day

The principal purpose of most clinical trials is the collection of data to test the study hypothesis. Whether the data are collected on paper case report forms or entered directly into an electronic data capture (EDC) system, every data point must be checked for accuracy, consistency, and legibility. With paper data forms, this review is time-consuming and labor-intensive. It is also conducted retrospectively, i.e., after the study visit has occurred. This can mean that data forms are reviewed later the same day, days or weeks later, during the next monitoring visit, or as individual cases are closed out at the end of the study. Whenever the review occurs, each form is inspected, either on-site by a study monitor or remotely by the sponsor’s project manager.

A major advantage of EDC systems is their ability to identify data errors and other entries requiring further documentation before the investigator has submitted the data form…and usually while the study subject is still present in the clinic. This real-time data cleaning saves most of the time lost in paper data form review and error correction. It also permits transmission of data directly to the study database. This allows the biostatisticians to set up their tables, listings, and figures and the medical writers to draft the report shell as cases move through their study visits, rather than waiting until after the last subject’s last visit. We have found that EDC shaves two to four weeks from these downstream activities.
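To make the idea of real-time data cleaning concrete, here is a minimal sketch of the kinds of edit checks an EDC system might fire at the moment of entry. The field names, ranges, and query messages are illustrative assumptions, not taken from any particular system.

```python
from datetime import date

# Hypothetical edit checks of the kind an EDC system might run at data entry.
# Field names ("systolic_bp", "visit_date", etc.) are illustrative only.

def check_visit_record(record: dict) -> list[str]:
    """Return a list of query messages for a single visit record."""
    queries = []

    # Range check: flag implausible vital signs for confirmation.
    systolic = record.get("systolic_bp")
    if systolic is not None and not (60 <= systolic <= 250):
        queries.append(f"Systolic BP {systolic} mmHg is outside the expected range; please confirm.")

    # Completeness check: required fields must not be blank.
    for field in ("subject_id", "visit_date", "visit_number"):
        if not record.get(field):
            queries.append(f"Required field '{field}' is missing.")

    # Logic check: a visit date cannot be in the future.
    visit_date = record.get("visit_date")
    if visit_date is not None and visit_date > date.today():
        queries.append("Visit date is in the future; please correct.")

    return queries


if __name__ == "__main__":
    entry = {"subject_id": "001-014", "visit_number": 3,
             "visit_date": date(2024, 5, 2), "systolic_bp": 300}
    for q in check_visit_record(entry):
        print("QUERY:", q)
```

Because these queries are raised while the subject is still in the clinic, the investigator can resolve them on the spot instead of weeks later during a monitoring visit.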

Every error, regardless of how or when it was discovered, must be corrected by the investigator on the original data forms, using the corresponding error codes and other required notations. Once the data forms are ‘clean’, the quality assurance (QA) auditing process begins. In some situations, QA audits occur simultaneously with the basic data cleaning process, which can result in changes to that process due to the discovery of additional errors.

Of the myriad error types, which ones need to be corrected? The answer depends on which variables the sponsor has identified as analytically critical to the clinical dataset. Examples of critical data are the study subject’s identification number, date of birth, medical history, study visit dates, adverse events, and date of study completion or early termination.
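As a rough illustration of how a sponsor-defined critical-variable list might drive the query process, the sketch below separates findings on critical fields from everything else. The field names and the classify_discrepancies helper are hypothetical, intended only to mirror the examples in the preceding paragraph.

```python
# Illustrative only: a sponsor-defined set of analytically critical fields,
# mirroring the examples in the text. Field names are assumptions.
CRITICAL_FIELDS = {
    "subject_id",
    "date_of_birth",
    "medical_history",
    "visit_date",
    "adverse_events",
    "completion_or_termination_date",
}

def classify_discrepancies(discrepancies: dict[str, str]) -> tuple[list, list]:
    """Split field-level discrepancies into critical and non-critical queries."""
    critical, minor = [], []
    for field, issue in discrepancies.items():
        (critical if field in CRITICAL_FIELDS else minor).append((field, issue))
    return critical, minor

# Example: a missing date of birth is escalated; a formatting note is not.
crit, minor = classify_discrepancies({
    "date_of_birth": "field left blank",
    "site_comment": "free text exceeds 200 characters",
})
print("Critical queries:", crit)
print("Other findings:", minor)
```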

Some typical QA audit report findings are:

(1) Vital signs recorded during the last assessment are the same as those recorded one hour later on the discharge form. Were identical readings observed or were the readings transcribed from one form to the other?

(2) An adverse event report for “temperature reading outside normal range” includes observations of hypothermia and hyperthermia. Each of these occurrences should have been recorded as a separate adverse event.

(3) A protocol deviation report references a missing parameter related to Visit #6, but the deviation occurred during Visit #5. This needs to be reconciled.

(4) An adverse event for elevated liver enzymes was recorded as occurring during Visits #1 and #4. These readings were not observed during consecutive visits; they should be recorded as separate adverse events.
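Several of the findings above also lend themselves to automated consistency checks. The sketch below shows how findings (1) and (3) might be caught programmatically; the record structures and field names are assumptions, not drawn from any real audit system.

```python
# A rough sketch of automated consistency checks for two of the findings above.
# Record structures and field names are hypothetical.

def duplicate_vitals_check(last_assessment: dict, discharge: dict) -> list[str]:
    """Finding (1): identical vital signs on two forms taken an hour apart."""
    shared = {"heart_rate", "systolic_bp", "diastolic_bp", "temperature"}
    if all(last_assessment.get(k) == discharge.get(k) for k in shared):
        return ["Vital signs on the discharge form duplicate the last assessment; "
                "confirm whether readings were re-taken or transcribed."]
    return []

def deviation_visit_check(deviation: dict, visit_log: dict) -> list[str]:
    """Finding (3): deviation references a visit other than the one it occurred in."""
    referenced = deviation.get("referenced_visit")
    actual = visit_log.get(deviation.get("deviation_id"))
    if actual is not None and referenced != actual:
        return [f"Protocol deviation references Visit #{referenced} "
                f"but occurred during Visit #{actual}; please reconcile."]
    return []

# Example: identical vitals on both forms trigger a query.
vitals = {"heart_rate": 72, "systolic_bp": 118, "diastolic_bp": 76, "temperature": 38.2}
print(duplicate_vitals_check(vitals, dict(vitals)))
# Example: a deviation logged against the wrong visit triggers a query.
print(deviation_visit_check({"deviation_id": "PD-07", "referenced_visit": 6}, {"PD-07": 5}))
```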

Some errors might seem relatively minor, and they may be. However, they also could be symptomatic of a lax attitude toward the data that could jeopardize the success of the trial. If investigators are not diligent in their observance of Good Clinical Practice (GCP) guidelines, data may be recorded incorrectly or significant information (e.g., concomitant medications) may be overlooked…all to the detriment of the database.

Section 3.2.28 of GCPs charges each investigator with collecting and recording data “…in accordance with the study protocol and applicable regulatory requirements in an unbiased manner that accurately and completely reflects the observations of the study.” The key words are “unbiased,” “accurately,” and “completely.”

“Unbiased” means the investigator has not been unmasked to treatment assignment. “Accurately” means all data are recorded in the proper format and exactly as observed. It also means data are not assumed, interpolated, or fabricated. “Completely” means all data fields are filled in.

Some investigators contend that “minor” errors should be tolerated. They argue that reviewers should focus on whether “important” data support the study hypothesis rather than writing queries because an owner signed the consent form in pencil instead of pen. To me, an error is an error, regardless of gradation. I also think exact protocol compliance should be the standard for every study. Otherwise, we struggle in the nebulous world of “good enough.” And, as Debbi Fields (founder of Mrs. Fields Cookies) said, after discarding a franchisee’s tray of ever-so-slightly over-baked cookies, “good enough never is!”
