
Use of ClinicalTrials.gov to Estimate Condition-Specific Nocebo Effects and Other Factors Affecting Outcomes of Analgesic Trials

Open Access | Published: March 04, 2013 | DOI: https://doi.org/10.1016/j.jpain.2012.12.011

      Abstract

ClinicalTrials.gov is a registry and results database of federally and privately supported clinical trials conducted worldwide. We sought to determine the characteristics of pain trials, how frequently these trials are stopped and why, the magnitude of attrition due to lack of efficacy or adverse events, and whether withdrawal rates depend on pain syndrome. To facilitate this and subsequent studies, we developed a system called Sherlock that automatically downloads data from ClinicalTrials.gov into a relational database. We included pain interventional trials. To evaluate attrition, we restricted consideration to prospective randomized, parallel, double-blind, placebo-controlled trials. Of the 8,867 trials, 6% reported results and 5.6% terminated before the planned number of subjects was accrued. Of these early terminations, 38% were due to enrollment difficulties. In the placebo arms, 3.8% of participants withdrew due to lack of efficacy and 4.9% due to adverse events, with proportions differing among pain conditions. Compared with migraine trials, in fibromyalgia trials 5.1% more participants withdrew due to lack of efficacy (95% confidence interval [CI], 2.5–7.8%), and 6.4% more withdrew due to adverse events (95% CI, 4.3–8.6%). Nonsteroidal anti-inflammatory drugs were the treatment class with the fewest withdrawals due to adverse events. Recruitment challenges account for the largest proportion of noncompleted trials. Attrition rates differ across pain conditions. Migraine studies had the lowest withdrawal rate. Tools like Sherlock facilitate conducting research in the ClinicalTrials.gov registry.

      Perspective

The ClinicalTrials.gov registry enables researchers to obtain a snapshot of a specific field and observe changes over time in trial design, including the numbers of subjects accrued, and it can inform clinical trial design. We learned that recruitment challenges account for the largest proportion of noncompleted trials, that attrition rates differ across pain conditions, and that migraine studies had the lowest withdrawal rate.


      Clinical trials play a critical role in the progression of medicine and the improvement of human health. A registry of trials permits researchers and the public to be aware of the existence of such trials and to act on the information. Registries of trial protocols promote awareness of ongoing trials that can facilitate enrollment. Registries of trial results permit evaluation of efficacy and safety of treatment options and can assist in planning of future trials.
In 1997, the US Food and Drug Administration Modernization Act established a U.S.-based trial registry, and in 2000, the ClinicalTrials.gov registry was launched (Dickersin and Rennie, 2012).
In 2007, the U.S. Food and Drug Administration Amendments Act (FDAAA) required the posting of basic results of trials in ClinicalTrials.gov no later than 1 year after the primary completion date of the trial. Trials that qualify under FDAAA are those evaluating a drug, biologic, or device manufactured in the U.S., those conducted under an investigational new drug application, and those with at least 1 site in the U.S. (Miller, 2010; Zarin et al, 2011).
In 2004, the International Committee of Medical Journal Editors announced that their journals would only publish the reports of trials that were registered in a public clinical trial registry before recruitment started. This decision was an important motivator for the registration of clinical trials worldwide (Dickersin and Rennie, 2012).
By May 2012, ClinicalTrials.gov had registered 125,747 trials with locations in 179 countries (US National Institutes of Health, 2012).
In the ClinicalTrials.gov registry, trial results are reported in a standard, tabular format and consist of participant flow, baseline characteristics, outcome measures and statistical analyses, adverse events information, and administrative information. Although not all studies are required to be registered in the ClinicalTrials.gov registry and only basic results have to be reported (Tse et al, 2009), the breadth of information present in the ClinicalTrials.gov registry has great potential value.
One example of the utility of the ClinicalTrials.gov registry is a study that assessed the overall characteristics of all interventional clinical trials registered in ClinicalTrials.gov (Califf et al, 2012). The authors challenged the capacity of clinical trials to supply sufficient high-quality evidence to ensure confidence in treatment guideline recommendations because of the degree of heterogeneity in study designs and the small sample sizes of the trials (ie, 62% enrolled 100 or fewer participants) (Califf et al, 2012; Science News, 2012).
      Searching for basic information in the ClinicalTrials.gov registry is relatively straightforward. The web interface provided by the National Library of Medicine permits the user to find studies based on a wide variety of study characteristics, including conditions being treated, characteristics of participants, or availability of trial results. A search in the ClinicalTrials.gov registry returns a list of studies that meet the specified search criteria, and the available trial information can be downloaded, but not in a format ready for quantitative analyses, which is an obstacle to performing meta-analyses. To address this limitation, we developed Sherlock, a system that automatically downloads data from ClinicalTrials.gov, parses and organizes the information, and creates a database with numeric fields that is ready for statistical analyses.
      In this study, we used the Sherlock system to illustrate the value of the data available in ClinicalTrials.gov for designing clinical trials for pain treatments. Specifically, we addressed the following questions: 1) What are the study design characteristics of pain clinical trials and have these changed in the last 4 years? 2) How frequently are pain clinical trials stopped before the planned number of subjects is accrued and why? 3) What is the magnitude of attrition due to lack of efficacy or adverse events in the placebo and active arms of pain trials? and 4) Does the withdrawal rate depend on pain syndrome or drug tested?

      Methods

      The search of ClinicalTrials.gov was conducted through May 22, 2012, using the Sherlock system.

      Sherlock

Sherlock includes all historical data, and daily updates are routinely downloaded from ClinicalTrials.gov in XML format. The data are automatically processed and loaded into a relational database (Microsoft SQL Server 2008 R2; Microsoft Corp, Redmond, WA). Data processing steps include 1) conversion of the XML text representing numbers into corresponding numeric fields such as participant flow counts and outcome measure values; 2) linking between comparison groups and treatment arms; 3) mapping of the studied conditions to Systematized Nomenclature of Medicine Clinical Terms (SNOMED-CT) and the Medical Dictionary for Regulatory Activities (MedDRA) terminology using the Unified Medical Language System (UMLS); 4) mapping of the investigated interventions to the mechanism of action information extracted from the Pharmaprojects Pipeline database, a commercial database supplied by Citeline; and 5) manual validation of the mapping algorithms to ensure accuracy.
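As a minimal sketch of step 1 of this pipeline, the fragment below parses a previously downloaded registry record and stores a few fields as typed columns. The XML element names and the SQLite database are illustrative stand-ins for the actual registry schema and the SQL Server database used by Sherlock.

```python
import sqlite3
import xml.etree.ElementTree as ET

def load_study(xml_path, conn):
    """Parse one downloaded registry XML record and store selected fields as typed columns."""
    root = ET.parse(xml_path).getroot()

    def text(path):
        node = root.find(path)
        return node.text.strip() if node is not None and node.text else None

    enrollment = text("enrollment")  # XML text -> numeric field (step 1)
    conn.execute(
        "INSERT OR REPLACE INTO studies (nct_id, title, status, enrollment) "
        "VALUES (?, ?, ?, ?)",
        (
            text("id_info/nct_id"),
            text("brief_title"),
            text("overall_status"),
            int(enrollment) if enrollment and enrollment.isdigit() else None,
        ),
    )

conn = sqlite3.connect("sherlock_demo.db")
conn.execute(
    "CREATE TABLE IF NOT EXISTS studies "
    "(nct_id TEXT PRIMARY KEY, title TEXT, status TEXT, enrollment INTEGER)"
)
# load_study("NCT00000000.xml", conn)  # hypothetical, previously downloaded record
conn.commit()
```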
      The conversion of the XML text into numeric fields is a straightforward conversion from text to numbers. Linking between comparison groups and treatment arms required the development of a heuristic algorithm that performed fuzzy text matching (ie, use of term normalization, string matching, word counting) in various data fields such as group title, group description, arm title, arm label, arm type, and arm intervention. For the mapping of conditions to reference vocabularies, we first matched the conditions to SNOMED-CT terms extracted from UMLS using a string-matching algorithm that took into account word stems, synonyms, and matching word counts. The condition-SNOMED term association was then used to identify a corresponding MedDRA term using the linkage between SNOMED-CT and MedDRA terms provided by UMLS. To associate a mechanism of action with the trial interventions, a similar string-matching algorithm was used to map the intervention name with the drug name known to Pharmaprojects Pipeline database. The manual validation included random spot-checking by technical and clinical experts.
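The fragment below conveys the flavor of this kind of fuzzy matching using only token normalization and word-overlap counting; it is a simplified illustration, not the Sherlock algorithm itself, which also uses stems, synonyms, and several additional data fields.

```python
import re

def normalize(text):
    """Lowercase, strip punctuation, and split a free-text label into word tokens."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def overlap_score(a, b):
    """Fraction of shared words between two labels (0 to 1)."""
    ta, tb = normalize(a), normalize(b)
    if not ta or not tb:
        return 0.0
    return len(ta & tb) / min(len(ta), len(tb))

def link_group_to_arm(group_title, arm_titles, threshold=0.5):
    """Pick the protocol arm whose title best matches a results group title."""
    best = max(arm_titles, key=lambda arm: overlap_score(group_title, arm))
    return best if overlap_score(group_title, best) >= threshold else None

# Example: a results group "Placebo oral capsule" links to the protocol arm "Placebo".
print(link_group_to_arm("Placebo oral capsule", ["Placebo", "Drug X 10 mg"]))
```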
      Sherlock provides a query interface that facilitates trial search by any database field including condition, intervention, trial design characteristic, and text keywords. Identified trial information, including results if available, can be extracted into a data file with a selected set of fields and organized by study, arm, or a reported endpoint/outcome. The generated data file represents an analysis-ready data set that can be further investigated using any statistical software package.
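As an illustration of the kind of analysis-ready extract such a query interface can produce, the sketch below pulls per-arm withdrawal counts into a CSV file; the table and column names are hypothetical, since the Sherlock schema is not published.

```python
import csv
import sqlite3

# Hypothetical schema: one row per arm, with withdrawal counts joined to its study.
QUERY = """
SELECT s.nct_id, s.condition, a.arm_type, a.participants,
       a.withdrew_lack_of_efficacy, a.withdrew_adverse_events
FROM studies s
JOIN arms a ON a.nct_id = s.nct_id
WHERE s.condition LIKE '%pain%'
"""

def export_pain_arms(db_path, out_path="pain_arms.csv"):
    """Write one CSV row per trial arm, ready for a statistics package."""
    conn = sqlite3.connect(db_path)
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["nct_id", "condition", "arm_type", "n",
                         "withdrew_loe", "withdrew_ae"])
        writer.writerows(conn.execute(QUERY))

# export_pain_arms("sherlock_demo.db")  # hypothetical database built by the pipeline
```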

      Studies Included

We included interventional clinical trials that assessed pain-related conditions. ClinicalTrials.gov defines interventional studies as “studies in human beings in which individuals are assigned by an investigator based on a protocol to receive specific interventions. The assignment of the intervention may or may not be random” (ClinicalTrials.gov, 2011).
To identify all pain-related clinical trials, we conducted a text search using the term “pain” in the following data fields: study title, summary, description, outcome title, condition, condition Medical Subject Heading (MeSH) term, arm title, arm description, and study design fields such as study eligibility and inclusion criteria. This search would identify trials that included subjects with pain or that evaluated pain intensity or pain relief. However, it would omit trials that, despite enrolling subjects with painful conditions such as osteoarthritis, did not have the word “pain” in any of the searched fields. The algorithm would also miss studies in which the outcomes evaluated were not related to pain, such as imagery, or in which the scales used may have assessed pain, such as the Western Ontario and McMaster Universities Arthritis Index (WOMAC), but in which the pain subscale was not specifically mentioned.
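A minimal sketch of this multi-field keyword search, again against a hypothetical Sherlock-style table with one text column per searched field:

```python
# Fields searched for the keyword "pain"; the names are illustrative, not the real schema.
SEARCHED_FIELDS = [
    "title", "summary", "description", "outcome_title", "condition",
    "condition_mesh", "arm_title", "arm_description", "eligibility_criteria",
]

def keyword_filter_sql(keyword="pain"):
    """Build a WHERE clause that matches the keyword in any of the searched fields."""
    clauses = [f"LOWER({field}) LIKE '%' || ? || '%'" for field in SEARCHED_FIELDS]
    sql = "SELECT nct_id FROM trial_text WHERE " + " OR ".join(clauses)
    params = [keyword.lower()] * len(SEARCHED_FIELDS)
    return sql, params

sql, params = keyword_filter_sql()
# rows = conn.execute(sql, params).fetchall()  # conn: an open sqlite3 connection
```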

      Classification of Trials

      Two time periods were considered, “before 2008” and “2008 and after.” The year 2008 was chosen as a cutoff because it was the year in which it became mandatory in the U.S. to report study results for trials under the FDAAA scope.
The various pain conditions reported in the registry were mapped to MedDRA high-level terms and then grouped further. For example, diabetic neuropathy and postherpetic neuralgia were categorized as neuropathic pain. Drug interventions were grouped by mechanism of action extracted from the Pharmaprojects Pipeline database. For medications with more than one mechanism of action, clinical judgment was used to select one; for example, tapentadol, an opioid agonist and an adrenergic transmitter uptake inhibitor, was classified as an opioid.

      Analysis

      To assess how frequently pain clinical trials stopped before the planned number of subjects was reached, we grouped the reasons for stopping provided by the sponsor into 14 categories that ranged from difficulty in enrollment, administrative problems, and funding problems, to safety reasons and lack of effect.
To conduct attrition assessments, we restricted the analysis to randomized, parallel, double-blind, placebo-controlled trials to ensure comparability and permit generalization of the findings (Clinical Trials Transformation Initiative, 2012).
      A trial was considered placebo-controlled if one of the arms was designated as “placebo comparator” by the sponsor. The designation of the trial arm type is one of the elements required by ClinicalTrials.gov for interventional trials. To determine withdrawal rates, we used the information provided in the “Reason not completed” field, which is required by ClinicalTrials.gov. We focused the analysis on withdrawal counts reported by the sponsor as due to lack of efficacy or adverse events.
To calculate the attrition rates in the placebo and active arms, the number of subjects who withdrew because of lack of efficacy or adverse events was divided by the total number of participants in each arm and then multiplied by 100. These analyses were limited to settings with at least 10 studies (when analyzing rates in the placebo arms) or 10 arms (when analyzing rates in the active arms), since more precise and reproducible estimates were expected when at least 10 studies or arms were included.
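A small sketch of this calculation and of the 10-study/10-arm restriction, assuming per-arm counts such as those in the extract above; the field names are illustrative.

```python
def withdrawal_rate(withdrew, total):
    """Percent of participants in an arm who withdrew for a given reason."""
    return 100.0 * withdrew / total

def rates_for_setting(arms, min_count=10):
    """Return per-arm withdrawal rates only when at least 10 studies/arms are available."""
    if len(arms) < min_count:
        return None  # too few observations for a precise, reproducible estimate
    return [withdrawal_rate(a["withdrew_ae"], a["n"]) for a in arms]

# Example: 3 of 60 placebo participants withdrew because of adverse events -> 5.0%.
print(withdrawal_rate(3, 60))
```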
Results were stratified by pain condition and treatment class. To generate a pooled estimate, we calculated a weighted average using the DerSimonian-Laird random-effects meta-analytic model. The weights in this model are the inverse of the sum of the within-study and among-study variance estimates. A constant of .5 was added to allow analyses of studies with no withdrawals and to calculate approximate standard errors; this is similar to adding a continuity correction to studies with no events in 1 arm when calculating odds ratios or relative risks in a meta-analysis.
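The sketch below shows this pooling step for withdrawal proportions, adding 0.5 to the numerator and 1 to the denominator so that studies with no withdrawals still contribute an approximate variance (one common form of the correction; the exact variant used in the paper is not spelled out).

```python
def dersimonian_laird(counts_and_ns):
    """Pool withdrawal proportions across studies with DerSimonian-Laird weights."""
    # Per-study proportion and within-study variance, with a 0.5 correction so that
    # studies with zero withdrawals are not dropped (assumed variant of the correction).
    p, v = [], []
    for x, n in counts_and_ns:
        pi = (x + 0.5) / (n + 1.0)
        p.append(pi)
        v.append(pi * (1.0 - pi) / n)

    w = [1.0 / vi for vi in v]                        # fixed-effect (inverse-variance) weights
    p_fixed = sum(wi * pi for wi, pi in zip(w, p)) / sum(w)
    q = sum(wi * (pi - p_fixed) ** 2 for wi, pi in zip(w, p))
    k = len(p)
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                # among-study variance estimate

    w_re = [1.0 / (vi + tau2) for vi in v]            # random-effects weights
    pooled = sum(wi * pi for wi, pi in zip(w_re, p)) / sum(w_re)
    se = (1.0 / sum(w_re)) ** 0.5
    return pooled, se

# Example: three hypothetical placebo arms given as (withdrawals, arm size).
pooled, se = dersimonian_laird([(0, 40), (3, 60), (5, 120)])
print(f"pooled rate: {100 * pooled:.1f}%, 95% CI half-width: {100 * 1.96 * se:.1f}%")
```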
      To assess the association between the pain syndrome and the rate of withdrawals due to adverse events or lack of efficacy, we used a random-effects meta-regression model, treating withdrawal rates as a continuous variable, and 95% confidence intervals (CIs) were calculated. All analyses were conducted with Stata/SE version 10.1 (StataCorp, College Station, TX).
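A simplified sketch of such a comparison, assuming per-study withdrawal rates and within-study variances are already available; for brevity it treats the among-study variance (tau-squared) as fixed, whereas Stata's random-effects meta-regression estimates it from the data. All values below are hypothetical.

```python
import numpy as np
import statsmodels.api as sm

# Hypothetical per-study data: withdrawal rate (%), within-study variance,
# and an indicator for fibromyalgia (1) vs migraine (0) trials.
rates = np.array([1.0, 2.1, 0.5, 6.3, 7.0, 5.2])
variances = np.array([0.8, 1.1, 0.6, 1.5, 1.9, 1.2])
is_fibromyalgia = np.array([0, 0, 0, 1, 1, 1])

tau2 = 0.5                              # assumed (not estimated) among-study variance
weights = 1.0 / (variances + tau2)      # inverse-variance weights

X = sm.add_constant(is_fibromyalgia)    # intercept = migraine reference group
fit = sm.WLS(rates, X, weights=weights).fit()

# Slope = how many percentage points more withdrawals in fibromyalgia trials,
# with its 95% confidence interval.
print(fit.params[1], fit.conf_int()[1])
```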

      Results

      The search of ClinicalTrials.gov produced 11,811 interventional trials identified using the “pain” keyword search; 2,944 trials were excluded because they did not assess pain, or assessed conditions such as heart disease, coronary disease, myocardial ischemia, cardiomyopathy, or gastroesophageal reflux disease. A total of 8,867 interventional studies were included in the reported analysis.
      Almost 44% of the trials started before 2008. In both time periods, 80% of the trials were randomized and almost 50% were double blinded. The parallel design was the most common design in both periods. The types of interventions evaluated were also similar in both time periods. Approximately 6% of the pain interventional studies evaluated behavioral therapies. In both time periods, the principal sponsors were university organizations followed by industry, with university organizations sponsoring a higher proportion of registered trials in the last 4 years (see Table 1).
Table 1. Characteristics of Interventional Pain Trials

Characteristic                               Before 2008        2008 and After
Number of trials                             3,866 (43.6)       5,001 (56.4)
Number of randomized trials                  3,120 (80.7)       4,113 (82.2)
Phase
 Phase 1                                     197 (5.1)          276 (5.5)
 Phase 1–2                                   125 (3.2)          196 (3.9)
 Phase 2                                     802 (20.7)         789 (15.8)
 Phase 2–3                                   129 (3.3)          127 (2.5)
 Phase 3                                     998 (25.8)         927 (18.5)
 Phase 4                                     602 (15.6)         882 (17.6)
 Phase “0” or missing                        1,013 (26.2)       1,804 (36.1)
Number of arms                               2.07 ± 1.0         2.15 ± 1.0
Studies that include both genders            3,273 (84.7)       4,208 (84.1)
Studies that include only women              417 (10.8)         612 (12.2)
Number of subjects enrolled                  218.4 ± 835.0      168.1 ± 606.0
Number of subjects enrolled in Phase 3       399.4 ± 1,341.8    334.1 ± 927.8
Masking
 Double blinded                              1,914 (49.5)       2,460 (49.2)
 Single blinded                              512 (13.2)         863 (17.2)
 Open label                                  1,248 (32.3)       1,615 (32.3)
 Missing                                     192 (5.0)          63 (1.3)
Type of design
 Crossover                                   302 (7.8)          401 (8.2)
 Factorial                                   93 (2.4)           109 (2.2)
 Parallel                                    2,436 (63.0)       3,475 (69.5)
 Single group                                755 (19.5)         918 (18.4)
 Missing                                     280 (7.2)          98 (2.0)
Type of intervention
 Behavioral                                  242 (6.3)          239 (4.8)
 Biological                                  50 (1.3)           100 (2.0)
 Device                                      376 (9.7)          505 (10.1)
 Drug                                        2,107 (54.5)       2,295 (45.9)
 Procedure                                   544 (14.1)         713 (14.3)
 Other and combination                       546 (14.1)         1,147 (22.9)
Type of sponsor
 Clinical Research Network                   58 (1.5)           56 (1.1)
 Government, excluding U.S. Federal          81 (2.1)           149 (3.0)
 Industry                                    1,396 (36.1)       1,446 (28.9)
 National Institutes of Health               252 (6.5)          43 (0.9)
 U.S. Federal Agency, excluding NIH          79 (2.0)           63 (1.3)
 University/Organization                     1,922 (49.7)       3,104 (62.1)
 Missing                                     78 (2.0)           140 (2.8)

NOTE. Values are no. (%) or mean ± SD.
      Of the 8,867 studies, 495 trials (5.6%) terminated before the planned number of subjects was accrued. The most common reason for termination was difficulty with enrollment (38% of the stopped trials). Table 2 describes the reasons for terminating the trials before the planned number of subjects was accrued.
Table 2. Reasons for Unplanned Termination of the Trials

Reason for Termination                                       Number of Trials (%)
Difficulty with enrollment                                   188 (38.0)
Principal investigator/logistics/administrative problems     62 (12.5)
Business decision                                            46 (9.3)
Safety concerns                                              42 (8.5)
Lack of efficacy                                             37 (7.5)
Funding problems                                             36 (7.3)
Study not pertinent                                          21 (4.2)
Drug/device availability                                     15 (3.0)
Never started                                                9 (1.8)
No actual reason provided                                    9 (1.8)
Outcome achieved                                             7 (1.4)
Interim review/recommendation                                6 (1.2)
IRB-driven decision                                          4 (0.8)
Other                                                        13 (2.6)
Total number of trials                                       495 (100)

Abbreviation: IRB, institutional review board.
      Around 6% of the trials (521) reported results. In these trials, the median (25th–75th percentiles) number of subjects per arm was 62 (30–139) and the median number of arms was 2 (2–3). Around 80% of these trials were randomized controlled trials (432) and 46.5% (247) had a placebo arm.
      In terms of attrition in the placebo arm, about 3.8% of the participants withdrew because of lack of efficacy (range, 0–28%), and 4.9% withdrew because of adverse events (range, 0–50%) (unweighted averages).
      Migraine, postoperative pain, rheumatoid arthritis, osteoarthritis, neuropathic pain, and fibromyalgia were the conditions with at least 10 placebo-controlled double-blind parallel studies with results. The median (25th–75th percentiles) duration of follow-up in these trials was 1 day (.5–1) for migraine, 2 days (1–2) for postoperative pain, 126 days (49–266) for rheumatoid arthritis, 56 days (28–91) for osteoarthritis, 84 days (45–91) for neuropathic pain, and 84 days (56–84) for fibromyalgia.
      The likelihood of dropping out because of lack of efficacy in the placebo arms differed among the pain conditions. Compared with trials that evaluated migraine, the percent of withdrawals due to lack of efficacy was 5.1% larger in trials evaluating fibromyalgia (95% CI, 2.5–7.8%) and 5.0% larger in trials evaluating rheumatoid arthritis (95% CI, 2.0–7.9%) (see Fig 1).
Figure 1. Percent of withdrawals due to lack of efficacy by pain condition in the placebo arms of interventional pain trials.
      Similarly, the likelihood of participants dropping out because of adverse events in the placebo arms differed among the pain syndromes. Compared with trials that evaluated migraine, the percent of withdrawals due to adverse events was 6.4% larger in trials evaluating fibromyalgia (95% CI, 4.3–8.6%), 3.8% larger in trials evaluating neuropathic pain (95% CI, 1.8–5.6%), and 2.7% larger in trials evaluating osteoarthritis (95% CI, 1.0–4%) (see Fig 2).
Figure 2. Percent of withdrawals due to adverse events in the placebo arms (nocebo effect) of interventional pain trials by pain condition.
      In terms of attrition in the active arms, 2.6% of participants in placebo-controlled trials withdrew because of lack of efficacy (range, 0–28%) and 7.3% withdrew because of adverse events (range, 0–66%) (unweighted averages).
      Nonsteroidal anti-inflammatory drugs (NSAIDs), calcium channel blockers, antidepressants, and opioids were the treatment classes with at least 10 studies or 10 arms reporting results. Postoperative pain, osteoarthritis, and migraine were the conditions for which at least 2 of these treatment classes were assessed. NSAIDs were the treatment class associated with the fewest dropouts because of adverse events. The proportion of withdrawals due to adverse events varied with the pain condition even within the same treatment class. For example, the dropout rate was ≤1% in studies that evaluated NSAIDs and postoperative pain, and 4% in studies that evaluated NSAIDs and migraine. Similarly, the dropout rate was 3% in studies that evaluated calcium channel blockers and postoperative pain versus 11% in studies that evaluated calcium channel blockers and migraine (see Fig 3).
Figure 3. Percent of withdrawals due to adverse events by treatment class and pain condition in the active arms of interventional pain trials.

      Discussion

      This study confirms the value of ClinicalTrials.gov registry as a rich source of clinical trial information. ClinicalTrials.gov contains information for tens of thousands of clinical trials conducted worldwide during the past 12 years and provides an opportunity for researchers to obtain and analyze a snapshot of a specific clinical field, observe changes over time, and inform clinical trial design.
      By using the ClinicalTrials.gov registry, we learned that no substantial changes in the characteristics of study designs in pain studies have occurred in the last decade.
In terms of informing researchers about the design of new clinical trials, our study shows that around 5% of the interventional studies in the pain field stopped before the planned number of subjects was accrued, and that slow recruitment was the principal reason for stopping. Hence, researchers involved in designing and conducting clinical trials should not underestimate the challenges in recruiting participants.
On average, 9% of the subjects in the placebo arms withdrew from the trials because of lack of efficacy or adverse events. This empirically based estimate can help trialists adjust sample size estimates when the aim of the trial is to have a specific number of subjects complete the study, or estimate minimum expected event rates when the event of interest is a composite outcome that counts dropouts due to lack of efficacy or adverse events as treatment failures.
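For example, a trialist who needs a given number of completers per arm can inflate enrollment for the expected attrition; a back-of-the-envelope sketch using the roughly 9% placebo-arm withdrawal figure reported here:

```python
import math

def inflate_for_attrition(completers_needed, expected_dropout=0.09):
    """Number to enroll so that the expected number of completers is reached."""
    return math.ceil(completers_needed / (1.0 - expected_dropout))

# To end up with 100 completers per arm at ~9% attrition, enroll about 110.
print(inflate_for_attrition(100))
```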
      Interestingly, the pain condition plays a role in participant attrition rates due to adverse events and lack of efficacy. The difference in attrition rates appears to be more than just a consequence of the duration of follow-up of the trials. We found that withdrawal rates were similar in short follow-up studies that assessed migraine (1 day) and longer follow-up studies that assessed osteoarthritis (months), but different in longer follow-up studies that evaluated fibromyalgia (months).
Although we did not study placebo response per se, our findings on dropout rates due to lack of efficacy in the placebo arm support published research suggesting that pain condition can influence study results. In previous research, trials that evaluated human immunodeficiency virus neuropathic pain had higher placebo response rates than trials that evaluated central pain (Cepeda et al, 2012).
Studies that evaluated fibromyalgia appeared to have lower placebo response rates than trials that evaluated neuropathic pain (Hauser et al, 2011).
      Our analyses of the registry data showed that the withdrawal rate due to lack of efficacy in the placebo arms of trials studying fibromyalgia was higher than that in trials that studied neuropathic pain. These findings seem to support previous assertions that fibromyalgia subjects exhibit a low placebo response. We also found that studies evaluating migraine had the lowest withdrawal rate due to lack of efficacy, which could indicate that subjects with migraine exhibit a higher placebo response.
The dropout rate because of adverse events in the placebo arm has been termed the “nocebo effect.” The nocebo effect is often overlooked, but it has started to receive more attention recently (Hauser et al, Clin J Pain 2012; Hauser et al, Dtsch Arztebl Int 2012; Mitsikostas et al, 2011).
The nocebo rates found in the present study are similar to those reported in meta-analyses of randomized trials for the treatment of symptomatic migraine (Mitsikostas et al, 2011), fibromyalgia, and diabetic neuropathy (Hauser et al, Clin J Pain 2012).
In the current study, we assessed the nocebo effect in these and several other painful conditions. Our findings reveal the influence of pain syndromes on the nocebo effect and highlight the complexity of participants’ suggestions and expectations. One might expect that trials of migraine, which exhibited the lowest dropout rates because of lack of efficacy (and are therefore likely to have the highest placebo response rates), would also have the highest nocebo effect rates, but this does not appear to be the case for migraine or for other pain syndromes. One explanation could be that subjects who are experiencing pain relief are more tolerant of adverse events. This is a pain research area that will benefit from a better understanding of the impact of participants’ expectations on health outcomes, which in turn will help the design and execution of clinical trials for pain treatments and inform the communication between health care providers and patients.
There are some limitations to our study. The findings are based on a nonprobabilistic sample (Clinical Trials Transformation Initiative, 2012).
To the extent that trial results were posted in ClinicalTrials.gov solely because of mandatory regulatory requirements, the generalizability of our findings may be limited to the kinds of studies that are highly likely to be registered. However, because it is mandatory to post results of all trials evaluating medical products manufactured in the U.S., even nonsignificant results are registered, which reduces the potential for publication selection bias (Dickersin and Rennie, 2012).
Research has shown that a third of protocols registered in ClinicalTrials.gov remained unpublished in peer-reviewed biomedical journals even 30 months after trial completion (Ross et al, 2012).
Due to the relative novelty of the results submission requirement, compliance with the mandatory reporting of results is still low (Prayle et al, 2012), and only a relatively small number of pain studies have results posted in the registry. We addressed this problem by focusing the analyses on circumstances in which there were at least 10 studies or 10 arms evaluating a treatment class for a given type of condition. However, this meant that we could not evaluate individual conditions or drugs. Assuming that the number of trial results registered continues to increase rapidly, it should be possible to assess individual treatments or conditions in the near future.
We developed the Sherlock system to create an analysis-ready database. We believe that tools such as Sherlock are essential to facilitate conducting research based on ClinicalTrials.gov data. However, there are also steps that the National Institutes of Health, the sponsor of ClinicalTrials.gov, could implement to facilitate easier analysis of the submitted trial data. Currently, a system such as Sherlock has to map trial results to the corresponding trial arms. We recommend modifying the results submission process to ensure that the sponsor of the trial provides this mapping. To facilitate identification of relevant trials and to enable data analysis, Sherlock maps the conditions and interventions evaluated in the trials to standardized ontologies and controlled vocabularies of medical conditions and drug names, such as MedDRA and Pipeline. The ClinicalTrials.gov web page interface uses similar approaches, but unfortunately this information is not included in the download options. We suggest making it available for download. We also suggest that, in addition to being provided in an XML format, the data be made available as a frequently updated database dump. Having the data available in an analysis-ready form will significantly increase the value of the ClinicalTrials.gov registry, will facilitate data utilization by a larger number of researchers, and could also encourage better compliance with the registration mandate.
There are initiatives that, like Sherlock, promote the use of clinical trial data to inform and optimize future clinical trials. One is the Clinical Trials Transformation Initiative (CTTI), which also relies on data from the ClinicalTrials.gov registry. A stated aim of this public-private partnership is to make the acquisition and analysis of the data from ClinicalTrials.gov more user-friendly (Duke University, 2012). CTTI has created a downloadable database of ClinicalTrials.gov data based on an annual snapshot. Unfortunately, this database does not yet include the results portion of the ClinicalTrials.gov registry. Another initiative, ACTTION (Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Network), specifically targets the pain field. Key objectives of this public-private partnership are to expedite the discovery and development of improved analgesics and to assess the effect of research methods on study assay sensitivity and efficiency (Dworkin and Turk, 2012).
      In summary, our study contributes to a growing body of research showing the value of data in the registry. We hope that this study will promote the posting of trial results and that a widespread use of the ClinicalTrials.gov results database will generate further improvements in the download capabilities of the registry. We learned that recruitment challenges are the most common cause of trial termination, attrition rates vary across pain conditions, and migraine studies have the lowest withdrawal rates.

      Acknowledgment

      Bradford Challis provided editorial support.

      References

Califf RM, Zarin DA, Kramer JM, Sherman RE, Aberle LH, Tasneem A: Characteristics of clinical trials registered in ClinicalTrials.gov, 2007-2010. JAMA 307:1838-1847, 2012
Cepeda MS, Berlin JA, Gao CY, Wiegand F, Wada DR: Placebo response changes depending on the neuropathic pain syndrome: Results of a systematic review and meta-analysis. Pain Med 13:575-595, 2012
Clinical Trials Transformation Initiative: Using AACT for statistical analysis of data. Available at: https://www.ctti-clinicaltrials.org/project-topics/clinical-trials.gov/using-aact-for-statistical-analysis-of-data. 2012. Accessed August 11, 2012
ClinicalTrials.gov: Protocol data element definitions. Available at: http://prsinfo.clinicaltrials.gov/definitions.html. 2011. Accessed May 11, 2012
Dickersin K, Rennie D: The evolution of trial registries and their use to assess the clinical trial enterprise. JAMA 307:1861-1864, 2012
Duke University: Clinical Trials Transformation Initiative. Available at: https://www.ctti-clinicaltrials.org/about-us_main. 2012
Dworkin RH, Turk D: Analgesic, Anesthetic, and Addiction Clinical Trial Translations, Innovations, Opportunities, and Network (ACTTION). Available at: http://www.acttion.org/. 2012
Hauser W, Bartram C, Bartram-Wunn E, Tolle T: Adverse events attributable to nocebo in randomized controlled drug trials in fibromyalgia syndrome and painful diabetic peripheral neuropathy: Systematic review. Clin J Pain 28:437-451, 2012
Hauser W, Bartram-Wunn E, Bartram C, Reinecke H, Tolle T: Systematic review: Placebo response in drug trials of fibromyalgia syndrome and painful peripheral diabetic neuropathy: Magnitude and patient-related predictors. Pain 152:1709-1717, 2011
Hauser W, Hansen E, Enck P: Nocebo phenomena in medicine: Their relevance in everyday clinical practice. Dtsch Arztebl Int 109:459-465, 2012
Miller JD: Registering clinical trial results: The next step. JAMA 303:773-774, 2010
Mitsikostas DD, Mantonakis LI, Chalarakis NG: Nocebo is the enemy, not placebo. A meta-analysis of reported side effects after placebo treatment in headaches. Cephalalgia 31:550-561, 2011
Prayle AP, Hurley MN, Smyth AR: Compliance with mandatory reporting of clinical trial results on ClinicalTrials.gov: Cross sectional study. BMJ 344:d7373, 2012
Ross JS, Tse T, Zarin DA, Xu H, Zhou L, Krumholz HM: Publication of NIH funded trials registered in ClinicalTrials.gov: Cross sectional analysis. BMJ 344:d7292, 2012
Science News: Large-scale analysis finds majority of clinical trials don't provide meaningful evidence. Available at: http://www.sciencedaily.com/releases/2012/05/120501162702.htm. 2012
Tse T, Williams RJ, Zarin DA: Reporting “Basic Results” in ClinicalTrials.gov. Chest 136:295-303, 2009
US National Institutes of Health: ClinicalTrials.gov. Available at: http://clinicaltrials.gov/. 2012. Accessed May 15, 2012
Zarin DA, Tse T, Williams RJ, Califf RM, Ide NC: The ClinicalTrials.gov results database: Update and key issues. N Engl J Med 364:852-860, 2011