
Capturing clinical source data electronically can hit a home run for trial efficiency and safety — but access control is key. Here's an expert guide.
By Paul Bleicher

April 15, 2003 | When clinical professionals are asked for ways to improve clinical development, they consistently rank electronic solutions as a key strategy. They cite electronic tools, such as e-sourcing, as essential to improving data quality, data access, cycle time, and safety.

"E-sourcing" means collecting clinical data into electronic format without an original paper record. In other words, the first or "primary" recording of source data is achieved electronically. Many see this as unleashing the true value of electronic data capture (EDC), enabling implementation of electronic case report forms (eCRFs).

E-source technology can ensure cleaner clinical data by eliminating repetitive rounds of data entry and the need for continuous reconciliation between paper source documents and their electronic equivalents. Despite these benefits, however, e-sourcing raises considerable regulatory concern. Indeed, regulatory focus on e-sourcing has increased as a result of 21 CFR Part 11, the criteria under which the FDA considers electronic records to be equivalent to paper ones. Part 11 applies to electronic records that are created, modified, maintained, archived, retrieved, or transmitted.

Consider, for example, source document retention. According to Computerized Systems Used in Clinical Trials, the FDA's guidance on applying Part 11 in the clinical setting, source documents must be retained to enable reconstruction and evaluation of the trial, and clinical investigators are to retain either the original or a certified copy of all source documents sent to a sponsor or contract research organization (CRO). This guidance dovetails with §11.10(e) of the Final Rule, which states that audit trail documentation must be kept for a period at least as long as that required for the underlying electronic records.
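In practice, the retention requirement means every change to an e-source record must itself be preserved alongside the record. A minimal sketch of an append-only audit trail in the spirit of §11.10(e) (the field names and classes here are invented for illustration, not drawn from the regulation or any vendor's system):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class AuditEntry:
    """One immutable change record, retained as long as the data itself."""
    record_id: str   # which eCRF field was touched
    old_value: str
    new_value: str
    changed_by: str  # attributable: the responsible individual
    reason: str      # why the value was changed
    timestamp: str   # when the change was made (UTC)

class AuditTrail:
    """Append-only log: entries can be added and read, never edited or deleted."""
    def __init__(self):
        self._entries = []

    def append(self, entry: AuditEntry) -> None:
        self._entries.append(entry)

    def history(self, record_id: str) -> list:
        """All changes ever made to one record, in order."""
        return [e for e in self._entries if e.record_id == record_id]

trail = AuditTrail()
trail.append(AuditEntry("subj042.sbp", "120", "132", "dr_smith",
                        "transcription error", "2003-04-15T14:02:00Z"))
print(len(trail.history("subj042.sbp")))  # 1
```

The frozen dataclass and the absence of any edit or delete method are the point: changes accumulate, and the full history of a record can be reconstructed at any time.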

Good clinical practice (GCP) requirements in 21 CFR 312.62(b) and (c) direct the clinical investigator to prepare, maintain, and retain records, electronic or otherwise, for two years following the date a marketing application is approved for the drug for the indication under investigation. Similarly, International Conference on Harmonisation (ICH) GCP guidelines state that the investigator or institution should retain records for two years.

Who's Got the Data?
These regulations and guidelines have implications for how e-sourcing is implemented. To be sure, the easiest way to comply is for the clinical investigator to make the first data recording on paper, in a medical record, or in a hospital- or clinic-based electronic medical record system. Source data thus remain under the investigator's control and can be audited later to verify their veracity. Any e-source solution must do the same.

There are two basic choices for where to keep clinical data collected electronically: on-site or off-site. If source data reside locally — on individual computers, devices, or on a local server under the control of the investigator or with ready investigator access — it's possible to comply with the archival control requirement that data be ALCOA: attributable, legible, contemporaneous, original, and accurate. Attributable, for example, refers to the fact that data can be traced to individuals responsible for observing and recording them.
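The ALCOA attributes become concrete when viewed as metadata that must accompany every observation at the moment of capture. A hypothetical sketch (the field names and the completeness check are illustrative, not a regulatory definition):

```python
from dataclasses import dataclass

@dataclass
class SourceObservation:
    value: str        # legible: stored as plain, readable text
    recorded_by: str  # attributable: traceable to a responsible individual
    observed_at: str  # contemporaneous: when the observation occurred
    recorded_at: str  # contemporaneous: when it was entered into the system
    is_original: bool # original: a first capture, not a transcription

def is_alcoa_complete(obs: SourceObservation) -> bool:
    """A record is only usable as source data if every attribute is present."""
    return all([obs.value, obs.recorded_by, obs.observed_at,
                obs.recorded_at, obs.is_original])

obs = SourceObservation("BP 120/80", "nurse_jones",
                        "2003-04-15T09:30:00Z", "2003-04-15T09:31:00Z", True)
print(is_alcoa_complete(obs))  # True
```

A system that captures this metadata automatically at entry time, rather than relying on later annotation, is what makes "contemporaneous" and "attributable" enforceable rather than aspirational.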

At first glance, housing source data locally seems the logical way to be compliant. But things aren't that simple. First, local data storage brings significant logistic difficulties, notably the risk of data loss or corruption. In addition, the local investigator must effectively become a systems administrator, responsible for computer security, backup, validation, and access control.

Investigative sites that store data locally must maintain a vast array of standard operating procedures (SOPs) to safeguard the data, including SOPs for the required rigorous testing of all computer system components. Furthermore, sites housing local data must comply with a software validation plan; ensure that clinical data accurately reflect source data; restrict and monitor access to the data at the clinical site; carry out disaster-mitigation plans; manage the local application or network; and more. This is a tall order for a typical investigative site.

By comparison, data not stored locally travel over the Internet to a central server maintained by a party that is not an agent of the investigator (the sponsor, for example). Data maintained this way aren't under investigator control, so some regulators might view the arrangement as noncompliant with GCP guidelines for record maintenance and retention. The data could, for example, be modified without the investigator's knowledge, and he or she would have no archival copies to prove otherwise.

A possible solution is to use "thin clients" at the site — personal computers that operate without any software application other than a Web browser, and do not store data locally. Consequently, far fewer SOPs are needed, as source data don't reside on the computer or even at a local server. However, the "gotcha" with thin-client architecture is that delivering e-source data directly into a sponsor's or CRO's remote server may not meet the GCP requirement for maintaining data by the investigator who generated it.

In Third Parties We Trust 
There is a third way, however, that could bridge the gap between the compliant-but-awkward locally stored data model and the sleek-but-possibly-noncompliant thin-client approach. In this scenario, the eCRFs reside on a central server controlled by a trusted third party (TTP). The TTP administers the system, maintains the database and trial software, and prevents the sponsor from having direct database access. Because it houses the only copy of the source data, the TTP must employ very stringent controls and procedures.

But if the sponsor pays the TTP, that suggests the TTP is functioning as an agent of the sponsor, not the site. To avoid this apparent conflict of interest, the investigative site and the TTP must develop strict standard operating procedures, and probably a contract, that establish three conditions beyond question:

  • That the TTP is acting as an agent for the site, as it pertains to the source data
  • That the TTP is not to assume any of the sponsor's regulatory responsibility
  • That the sponsor will not have direct access to add, delete, or modify data in the clinical database
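The third condition, no direct sponsor write access, is ultimately an access-control rule the TTP must enforce in software as well as on paper. A hypothetical sketch of such a check (the role names and permission sets are invented for illustration):

```python
# Role-based permissions a TTP might enforce over the clinical database:
# the investigative site owns its source data; the sponsor may read but
# never add, modify, or delete; the TTP administers the system, not the data.
PERMISSIONS = {
    "site": {"read", "add", "modify", "delete"},
    "sponsor": {"read"},
    "ttp_admin": {"read"},
}

def authorize(role: str, action: str) -> bool:
    """Return True only if the role explicitly holds the requested permission."""
    return action in PERMISSIONS.get(role, set())

print(authorize("sponsor", "read"))    # True
print(authorize("sponsor", "modify"))  # False
```

The deny-by-default lookup (an unknown role gets an empty permission set) mirrors the contractual posture: any access not explicitly granted to the sponsor is refused.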

Unfortunately, there are no clear-cut regulatory guidelines yet for defining the relationship between the third party, the sponsor, and the investigative site. Just how the TTP will function in a GCP manner will likely emerge over time, though, as various sponsors broach this issue with the FDA.

 Paul Bleicher, M.D., Ph.D., is founder and chief scientific officer of Phase Forward, a provider of clinical solutions for drug development. He may be reached at 


For reprints and/or copyright permission, please contact  Jay Mulhern, (781) 972-1359,