Also summarized in Chapter 6 are discussions of a stakeholder panel charged with moving the conversation about data utility toward an action agenda by offering practical ideas on the strategies and incentives that could advance the development of an improved data utility. As session chair David Blumenthal observed, the environment for clinical data is more distributed than ever, a phenomenon that overrides the traditional instinct of policy makers to develop solutions by assigning roles and responsibilities to local, state, and federal governments.
In a distributed environment, such an approach is too narrowly framed. For example, a conversation that engages consumers directly and focuses on the personal health record takes place in a very different policy environment from one that could be addressed through a centralized authority.
At the same time, the federal government is a big stakeholder and player in the collection of health-related data. However, the environment surrounding data differs from one part of the government to another: the NIH, for example, has the capacity to focus on promoting sharing of data and has a broad mandate for data collection and sharing, whereas Medicare operates in a much more restrictive environment. With these observations as context, panelists offered comments on decisions and actions that could best enable access to and use of clinical data as a means of advancing learning and improving the value delivered in health care.
Government-sponsored clinical and claims data. Steve E. described Medicare's data collection programs. Medicare currently collects data in each of the four parts of the program: A, B, C, and D. Collected data are used primarily as the basis for paying claims, but data are also collected to help improve the quality of health care and to develop pay-for-performance quality information.
Another set of data collection programs is in Medicare demonstration projects, which look at a variety of issues and, generally, examine how different payment systems may affect outcomes versus clinical issues. Data are also collected in the interest of evidence development. Given the limits of its authority, Medicare has had to be somewhat innovative. One example is linking the collection of certain clinical data to coverage of particular technologies.
One carrot Medicare has developed is requiring the delivery of clinical data beyond the typical claims data as a condition of payment for certain services; a few years ago, for example, the system required additional clinical information for the insertion of implantable defibrillators. Such an approach has the potential to provide significant amounts of information if we can learn what to do with the data that have been collected and how to merge them with other sources so that data collection can inform clinical practice.
Government-sponsored research data. The molecular biology revolution was founded on the commonality of DNA and the genetic code among living things.
Discoveries at the molecular level provide unprecedented insight into the mechanisms of human disease. This understanding has developed into an expectation of wide data sharing in molecular biology and molecular genetics. Now that powerful genomewide molecular methods are being applied to populations of individuals, the necessity of broad data sharing is being brought to clinical and large cohort studies.
James M. observed that, in the course of collecting and distributing terabytes of data, the branch has wrestled with questions concerning which data are worth centralizing and which should be kept distributed. Although technical and policy requirements sometimes dictate the answers, the nature of the data itself sometimes directs information engineers to pursue certain tactics.
For example, the commonality of molecular data might drive the desire to have all related information in one data pool, so that a researcher could search all the data comprehensively, perhaps not even with a specific goal in mind. This could lead to the kind of serendipitous connection that is fundamental to the nature of discovery. At the same time, however, a balance must be struck, centralizing only those pieces of data that make sense in a universal way.
The NIH has required researchers to pool data collected under NIH grants so that other investigators might benefit from those data. NIH created dbGaP to archive and distribute the results of studies that have investigated the interaction of genotype and phenotype. Such studies include genome-wide association studies, medical sequencing, and molecular diagnostic assays, as well as association between genotype and nonclinical traits. The advent of high-throughput, cost-effective methods for genotyping and sequencing has provided powerful tools that allow for the generation of the massive amounts of genotypic data required to make these analyses possible.
Dozens of studies are now in the database, and by the end of , the database was expected to hold data on more than , individuals and tens of thousands of measured attributes. Hundreds of researchers have already begun using the resource.
There is also a movement on the part of the major scientific and medical journals to require deposition accession numbers when they publish the types of studies alluded to above, the same as is required for DNA sequence data. To further encourage secondary use of data, additional accession numbers are assigned when researchers take data out of a database, reanalyze them, and publish their analysis.
Professional organization-sponsored data. Guidelines and performance measures in cardiology developed by the American College of Cardiology (ACC), often in association with the American Heart Association, typically are adopted worldwide.
ACC Chief Executive Officer Jack Lewin described ongoing efforts to ensure that ACC guidelines, performance measures, and technology appropriateness criteria are adopted in clinical care, where they can benefit individual patients. Although most guidelines are currently available on paper, the vision is to have clinical decision support integrated into EHRs.
The ACC's National Cardiovascular Data Registry (NCDR) strives to standardize data; to provide data that are relevant, credible, timely, and actionable; and to represent real-life outcomes that help providers improve care and help participants meet consumer, payer, and regulator demands for quality care.
Other NCDR registries collect data on acute coronary syndrome, percutaneous coronary interventions, implantable cardioverter defibrillators, and carotid artery revascularizations. The ACC is currently working to standardize registry data to be able to measure gaps in performance and adherence to guidelines, with an ultimate goal of being able to teach how to fill those gaps and thus create a cycle of continuous quality improvement.
Mandates from Medicare and states have pushed hospitals to use the ACC registries, but there is room for wider adoption. The ACC is working to alleviate barriers such as the need for standardization, the expense of collecting needed data, and the lack of clinical decision support processes built into EHRs.
The ACC believes wider adoption of data sharing via registries is within reach, should be encouraged, and would ultimately result in better health care overall, but strategies need to be developed and implemented that foster systems of care rather than data collection mechanisms specific to a single hospital.
To develop the business strategies needed for clinical decision support capacity, standardization, and interoperability, the ACC wants to collaborate with other medical specialties, EHR vendors, the government, insurers, employers, and other interested parties.
Going forward, the ACC supports investing in rigorous measurement programs, advocating for government endorsement of a limited number of data collection programs, allowing professional societies to help providers meet mandated reporting requirements, and implementing systematic change designed to engage physicians and track meaningful measures.
Product development and testing data. The pharmaceutical industry collects and shares a great deal of clinical data. Because the industry is heavily regulated, the data it collects are voluminous and made available publicly under strict regulations that, it is hoped, ensure their accuracy and the accuracy of their interpretations.
Eve Slater, senior vice president for worldwide policy at Pfizer, noted that the pharmaceutical industry is interested in ensuring the widespread availability of data to support research at the point of patient care and care at the point of research. In the pursuit of that goal, the industry is interested in pursuing the alignment of data quality, accessibility, integrity, and comprehensiveness.
An influx of regulations and an acknowledged need for transparency are prompting the appearance of product development and testing data in the public domain. Nonetheless, attention is needed to ensure data standards, integrity, and appropriate, individualized interpretation.
Although significant amounts of product development data are required by law to be in the public domain, roadblocks prevent the effective sharing of clinical data. In the area of clinical trials posted on www.clinicaltrials.gov, for example, data summaries must be offered in a form the public can use, and the information also needs to be translated into language that patients can understand.
The lack of an acceptable format for providing data summaries to the public is linked to concerns about disseminating data in the absence of independent scientific oversight; once data are in the public domain, it becomes difficult to control quality assurance and the accuracy with which the information is translated for patients.
Policies to address some of these issues lag behind the actual availability of data. These issues argue in support of the data-sharing and standardization principles that the IOM has articulated. Regulatory policies to promote sharing. Although large repositories now exist for controlled clinical trial data, including primary data, Janet Woodcock, deputy commissioner and chief medical officer at the FDA, observed that much of that information unfortunately resides on paper in various archives, not in an electronic form that would readily enable sharing.
To enable sharing in this area, the FDA adopted a standard format for digital electrocardiograms (ECGs) and asked companies engaged in cardiac safety trials to use that standard. Today the ECG Warehouse holds more than , digital ECGs along with the clinical data, and the FDA is collaborating with the academic community to analyze those data to generate new knowledge that would not have been accessible before the development of a standardized dataset.
The FDA is constructing quantitative disease models from clinical trials data, building electronic models that incorporate the natural history of the disease, the performance of different biomarkers over time, and results from interventions. Given multiple interventions, the approach allows researchers to model and compare intervention effects quantitatively. The FDA expects more of these models to evolve in the future.
Within the Critical Path Initiative, the FDA worked with various pharmaceutical companies to pool all their animal data for different drug-induced toxicities, before the drugs are given to people. The first dataset, on drug-induced kidney toxicity in animals, has been submitted to the FDA and is under review.
Similar approaches could be undertaken with humans; pooling those data from various sources could lead to new knowledge. The FDA also plans to build a distributed network for pharmacovigilance.
The Sentinel Network seeks to integrate, collect, analyze, and disseminate medical product safety information. One approach is to build a secure distributed network in which data stay with the data owners but are accessible to others. Legislative change to allow sharing. The Center for Medical Consumers, a nonprofit advocacy organization, was founded in to provide access to accurate, science-based information so that consumers could participate more meaningfully in medical decisions that often have profound effects on their health.
Legislatively, most of the action concerning data sharing is currently in the states. Levin noted that we may face a scenario similar to that with managed care legislation, where, in the absence of federal action, states moved ahead on their own, for better or worse. Currently, states are moving ahead rapidly with health information technology (HIT) and health information exchange.
Issues of privacy and confidentiality are very much in the forefront and driving state legislation. In terms of legislation covering data sharing, we need to make sure that whatever policy is developed moves things in an agreed-upon direction that does not create new obstacles and barriers.
A first step will be to develop a much better understanding of what barriers exist in the states and federal government to aggregating data for research, quality improvement, and similar goals.
Another issue is that data sharing is, in essence, a social contract between individuals and researchers who want to use their data. Patients are told there will be some payoff from sharing data, but perhaps patients do not hear enough about how that is supposed to happen. Where does the payoff come? How does the other side of that contract deliver? What are the deliverables? Is there a time line for those deliverables?
Is there accountability for those deliverables? As part of the social contract, there should be a burden on those collecting data: a requirement that the collector do something specific with the data being collected. Privacy and confidentiality rules and remedies can be legislated; however, trust must be built.
All who believe that data represent a public good—and that data sharing is a public responsibility to advance the public interest in improving healthcare quality, safety, and efficacy—also understand that such a message may not resonate so readily with the public. The public has not yet been brought up to that level, and more is needed to engage consumers in this enterprise.
The session further considered what technical, communication, and demonstration-of-value advances might help address the concerns of healthcare consumers. As summarized in Chapter 7 , participants provided an overview of public knowledge, issues, concerns, and discussion of strategies on public understanding, engagement, and support for the changes necessary to create the next-generation public data utility. Also discussed were the design and implementation of tools that would be enhanced by wider availability of clinical data—such as those that help improve patient access and use of information from, about, and by those who are dealing with similar circumstances.
Finally, the nature and potential use of personal health records, safeguards for data access and entry, and possible influence on public perceptions about privacy and data use were considered. In many respects, the greatest challenge associated with establishing a medical care data system to serve the public interest lies in the fact that such data largely reside in the private sector, where commercial interests and other factors inhibit sharing.
This paradigm has benefited discrete entities, but it has failed to serve the public health interests of the broader U.S. population. Though the public should have considerable interest in this information, the limitations of the data system as currently structured severely inhibit demonstration of the value proposition for consumers, both individually and collectively.
Alison Rein, senior manager at AcademyHealth, identified key issues to be addressed to develop public awareness and perception of medical care data use for public good applications. Progress will require public education, outreach, and the demonstration of value in the use of health data. Generating interest in electronic access to personal health information might help overcome market obstacles related to sequestering data for proprietary interests.
However, Rein suggested that until greater regulation is put in place to compel providers and healthcare institutions to share data appropriately, use of clinical data for the public good will remain constrained.
Efforts should also be made to align public and research interests toward pursuing common goals and helping the public develop a deeper appreciation for research as a public good.
Public demonstration of the value of data sharing might help in this regard—showing, for example, the potential impact of clinical data on personal lifestyle, the bottom line, or other endpoints of interest to the public. Possible approaches to demonstrating the value of research as a public good included expanded reporting of limited, but meaningful, clinical health data to public health entities; the enhancement and expansion of clinical data registries; and the development of a nationwide health tracking network that could yield information of value to researchers, the public health community, providers, policy makers, and consumers.
Both the public and private sectors are struggling to navigate this logistically challenging landscape to gain medical insights and occasionally to monetize these insights. Patient-focused clinical trial information services created in the past decade provide a unique view of how patients feel about healthcare research at both the individual and the population level.
Courtney Hudson, chief executive officer and founder of EmergingMed, provided an overview of EmergingMed, a company that helps cancer patients gain access to clinical trials and search for treatment options.
Patients in this country support mining clinical databases for the good of public health and for learning, and they believe overwhelmingly that it already happens. Patients seek information to inform treatment decisions, and Hudson indicated it would be unconscionable to not provide as much information as we have available in the public domain to possibly help each patient. As ways to use and aggregate public datasets are developed, it would be extremely difficult ethically to justify any decision to withhold information from patients.
Similarly, Hudson highlighted the value of promoting evidence-based medicine and garnering public approval and cooperation by emphasizing the potential benefit to the public, rather than relying on public understanding of research.
Transparency and trust were also emphasized. Regarding the informed consent process, a basic ethical concern is that the clinical trials system as it stands today has a narrow definition of informed consent. Hudson encouraged workshop participants to consider ways to provide context, full disclosure, or transparency to patients or to inform them about the larger process. Dramatic increases in medical information, and in consumer access to information via the Internet, are making health care one of the most significant hot spots for technology innovation today.
Currently the practice of medicine suffers from an information management problem. Control will eventually shift, moving the current top-down doctor-patient relationship to one that is characterized by mutual control.
For physicians, the issue is about aggregating data within and across provider organizations, and for consumers it is about aggregating health data across all of their sources. Ultimately, these views will connect to enable informed health decisions and better clinical outcomes. Today, we have more personal health data than ever; however, the data are dispersed over a variety of facilities, providers, and even our own monitoring devices and home computers.
As described by Jim Karkanias, partner and senior director of applied research and technology at Microsoft Corporation, Microsoft is working to address gaps in the healthcare data management system, both from an enterprise and a consumer standpoint, to enable a more connected, informed, and collaborative healthcare ecosystem. Microsoft HealthVault, a consumer health platform with specialized health search capabilities, delivers a platform that puts users in control of their information so they can access, store, and recall it on demand.
Karkanias indicated that such a level of access and control contributes to the ability to make good decisions.
The platform is built on the premise that the consumer is at the center of health care, so patients are the logical aggregators of this information. Chronic conditions and more serious illnesses could be handled proactively. The availability of timely and reliable evidence to guide healthcare decisions depends substantially on the quality and accessibility of the data used to produce the evidence.
Important information about the results of different diagnostic and treatment interventions is collected in multiple forms by many institutions for different reasons and audiences—providers, patients, insurers, manufacturers, health researchers, and public agencies. Medical care data represent a vital resource for improving insight and action for more effective treatment. With the increasing potential of technical capacity for aggregation and sharing of data while ensuring confidentiality, the prospects are at hand for powerful and unprecedented tools to determine the circumstances under which medical interventions work best, and for whom.
However, these data are usually held in a proprietary manner instead of being considered a public good that can be pooled and mined for new research and, ultimately, better patient care and outcomes. There are a number of challenges to the use of such data—coding discrepancies, platform incompatibilities, patient protection tools—yet practical approaches are and can be developed to contend with these issues. The most significant challenge may be the barriers and restrictions to data access inherent in treating clinical outcome data as a proprietary commodity.
Chapter 8 summarizes the themes emerging from workshop discussion and opportunities for follow-up action by the Roundtable. Key issues discussed include clarifying basic principles of data stewardship; creating next-generation data utilities and models; creating next-generation data policy; and engaging the public.
Clarity on the basic principles of clinical data stewardship.
The starting point for expanded access and use of clinical data for knowledge development is agreement on some of the fundamental notions to guide the activities for all individuals and organizations with responsibility for managing clinical data. Workshop participants repeatedly mentioned the need for consensus on approaches to such issues as data structure, standards, reporting requirements, quality assurance, timeliness, deidentification or security measures, and access and use procedures—all of which will determine the pace and nature of evidence development.
Incentives for real-time use of clinical data in evidence development. Current barriers to the real-time use of clinical data for new knowledge discussed at the workshop ranged from regulatory and commercial issues to cost and quality issues.
Participants suggested the need for a dedicated program of activities, incentives, and strategies to improve the methods and approaches, their testing and demonstration, the cooperative decision making on priorities and programs, and the collective approach to regulatory barriers. Transparency to the patient when data are applied for research.
Patient acceptance is key to use of clinical data for knowledge development, and patient engagement and control are key to acceptance. In this respect, clarity to individual patients on the structure, risks, and benefits of access to data for knowledge development was noted by participants as particularly important. Patient confidence and system accountability may be enhanced through transparent notification and audit processes in which patients are informed of when and by whom their information has been accessed for knowledge development.
Addressing the market failure for expanding electronic health records. Currently, market incentives are not enough to bring about the expansion of electronic health record use necessary to make the point of care a locus for developing, sharing, and applying knowledge about what works best for individual patients. Shortfalls noted by participants included provider and patient demand insufficient to offset the expense to small organizations; competing platforms and asynchronous reporting requirements that undercut the records' utility for broad quality and outcome determinations; and the fact that even the larger payers (apart from government) do not possess the critical mass necessary to drive broader-scale applicability and complementarity.
A deeper, more directed, and coordinated strategy involving Medicare leadership will likely be needed to foster such changes. Personal records and portals that center patients in the learning process. Patient demand could be instrumental in spreading the availability of electronic health records for improving patient care and knowledge development. Such demand will depend on much greater patient access to, comfort with, and regular use of programs that allow either the maintenance of personal electronic health records or access through a dedicated portal to their provider-maintained electronic medical record.
As noted during the workshop, many consumer-oriented products under development give patients and consumers more active roles in managing personal clinical information. These may help demonstrate value in the speed and ease of personal access to information, better accommodate patient preferences in care, and foster a partnership spirit conducive to the broader application of electronic health records (EHRs). Coordinated EHR user organization evidence development work. The development of a vehicle to enhance collaboration among larger EHR users across different vendors was raised during the workshop as a means to accelerate more standardized agreements and approaches to integrating and sharing data across multiple platforms, common query strategies, virtual data warehousing rules and strategies, relational standards, and ways to reduce misperceptions about regulatory compliance.
The business case for expanded data sharing in a distributed network. Demonstrating the net benefits of data sharing could promote its use. Benefits suggested by participants included cost savings or avoidance from facilitated feedback to providers on quality and outcomes; quick, continuous improvement information; and improved management, coordination, and assessment of patient care. Assuring publicly funded data for the public benefit.
Federal and state funds that support medical care and support insights into medical care through clinical research grant funding are the source of substantial clinical data, yet many participants observed that these resources are not yet effectively applied to the generation of new knowledge for the common good.
Broader semantic strategies for data mining. Platform incompatibilities for clinical data substantially limit the spread of electronic health records and their use for knowledge development.
Yet discussion identified strategies using alternative semantic approaches for mining clinical data for health insights, which may warrant dedicated cooperative efforts to develop and apply them. Public engagement in evidence development strategies. Generating a base of support for and shared emphasis on developing a healthcare ecosystem in which all stakeholders play a contributory role was noted by many participants as important for progress. Ultimately, the public will determine the broad acceptance and applicability of clinical data for knowledge development, underscoring the importance of keeping the public closely involved and informed on all relevant activities to use clinical data to generate new knowledge.
Clinical Data as the Basic Staple of Health Learning. Clinical data consist of information ranging from determinants of health and measures of health and health status to documentation of care delivery.
Healthcare Data Today: Current State of Play. The first set of workshop sessions provided an overview of existing healthcare data: the sources, types, accessibility, and uses in the United States.
Current Healthcare Data Profile. When discussing elements associated with evidence-based medicine or when defining the data or the taxonomies regarding health and health care, the healthcare community does not always consider all of the potential effects on health.
Composite endpoints may also result in higher power, and thus smaller sample sizes in event-driven trials, since more events will be observed (assuming that the effect size is unchanged).
Composite endpoints may also reduce the bias due to competing risks and informative censoring: one type of event can censor another, so if data were analyzed on a single component alone, informative censoring could occur.
Composite endpoints may also help avoid the multiplicity issue of evaluating many endpoints individually. Composite endpoints have several limitations. Firstly, significance of the composite does not necessarily imply significance of the components nor does significance of the components necessarily imply significance of the composite. For example one intervention could be better on one component but worse on another and thus result in a non-significant composite.
Another concern with composite endpoints is that the interpretation can be challenging particularly when the relative importance of the components differs and the intervention effects on the components also differ. For example, how do we interpret a study in which the overall event rate in one arm is lower but the types of events occurring in that arm are more serious? Higher event rates and larger effects for less important components could lead to a misinterpretation of intervention impact.
It is also possible for intervention effects on different components to go in different directions. Power can be reduced if there is little effect on some of the components (i.e., null components dilute the overall effect). When designing trials with composite endpoints, it is advisable to consider including the more severe events (e.g., death).
It is also advisable to collect data and evaluate each of the components as secondary analyses. This means that study participants should continue to be followed for other components after experiencing a component event. When utilizing a composite endpoint, there are several considerations, including: (i) whether the components are of similar importance, (ii) whether the components occur with similar frequency, and (iii) whether the treatment effect is similar across the components.
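To make the power argument above concrete, here is a small simulation sketch (my own illustration with hypothetical event rates, not an example from the article). It compares a rare single component against a more frequent composite under the same relative risk reduction, using a simple two-proportion z-test.

    import numpy as np
    from scipy.stats import norm

    rng = np.random.default_rng(0)

    def simulated_power(p_control, p_treat, n=500, alpha=0.05, sims=2000):
        """Estimate power of a two-sided two-proportion z-test by simulation."""
        z_crit = norm.ppf(1 - alpha / 2)
        rejections = 0
        for _ in range(sims):
            x_c = rng.binomial(n, p_control)
            x_t = rng.binomial(n, p_treat)
            p_c, p_t = x_c / n, x_t / n
            p_pool = (x_c + x_t) / (2 * n)
            se = np.sqrt(2 * p_pool * (1 - p_pool) / n)
            if se > 0 and abs(p_c - p_t) / se > z_crit:
                rejections += 1
        return rejections / sims

    # Hypothetical rates with the same 20% relative risk reduction:
    print(simulated_power(0.05, 0.04))  # rare single component: low power
    print(simulated_power(0.20, 0.16))  # more frequent composite: higher power

With the same relative effect, the composite's higher event rate yields noticeably greater power at the same sample size, which is exactly the trade-off described above.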
In the treatment of some diseases, it may take a very long time to observe the definitive clinical endpoint. A surrogate endpoint is a measure that is predictive of the clinical event but takes a shorter time to observe.
The definitive endpoint often measures clinical benefit, whereas the surrogate endpoint tracks the progress or extent of disease. Surrogate endpoints may also be used when the clinical endpoint is too expensive or difficult to measure, or not ethical to measure. Surrogate markers must be validated: ideally, evaluation of the surrogate endpoint would lead to the same conclusions as if the definitive endpoint had been used.
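The two prose criteria given just below are often formalized as the Prentice criteria. The following rendering is standard in the surrogate-endpoint literature rather than taken from this article; T denotes the clinical endpoint, S the surrogate, and Z the treatment indicator:

    % Prentice criteria (standard formulation, not from this article)
    % (1) the surrogate is prognostic for the clinical endpoint:
    f(T \mid S) \neq f(T)
    % (2) the intervention effect acts entirely through the surrogate:
    f(T \mid S, Z) = f(T \mid S)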
The criteria for a surrogate marker are: (1) the marker is predictive of the clinical event, and (2) the intervention effect on the clinical outcome manifests itself entirely through its effect on the marker. It is important to note that significant correlation does not necessarily imply that a marker will be an acceptable surrogate.
Missing data are one of the biggest threats to the integrity of a clinical trial and can create biased estimates of treatment effects.
Thus it is important when designing a trial to consider methods that can prevent missing data. Researchers can help prevent missing data by keeping trial designs simple.
Similarly, it is important to consider adherence to the protocol. Envision a trial comparing two treatments in which the participants in both groups do not adhere to the assigned intervention. When the trial endpoints are evaluated, the two interventions will appear to have similar effects regardless of any differences in their biological effects. Note, however, that the fact that participants in neither arm adhere to therapy may indicate that the two interventions do not differ with respect to the strategy of applying the intervention (i.e., the intervention as it would actually be used in practice).
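A simplified numerical sketch of this dilution effect (my own illustration, with hypothetical numbers; it assumes nonadherers effectively experience the other arm's outcome):

    def observed_itt_difference(true_diff, nonadherence_treat, nonadherence_control):
        """Crude intention-to-treat dilution: nonadherence in either arm
        shrinks the observed between-group difference proportionally."""
        return true_diff * (1 - nonadherence_treat - nonadherence_control)

    # Hypothetical: a true 10-point benefit with 15% nonadherence in each arm
    print(observed_itt_difference(10, 0.15, 0.15))  # 7.0; the effect appears diluted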
Researchers need to be careful about artificially influencing participant adherence, since the goal of the trial may be to evaluate the strategy of how the interventions will work in practice, which may not include incentives to motivate patients like those used in the trial.
Sample size is an important element of trial design because too large of a sample size is wasteful of resources but too small of a sample size could result in inconclusive results. Calculation of the sample size requires a clearly defined objective. The analyses to address the objective must then be envisioned via a hypothesis to be tested or a quantity to be estimated. The sample size is then based on the planned analyses.
A typical conceptual strategy based on hypothesis testing is as follows. First, formulate null and alternative hypotheses. Second, select the Type I error rate. The Type I error is the probability of incorrectly rejecting the null hypothesis when the null hypothesis is true. In the example above, a Type I error often implies incorrectly concluding that an intervention is effective, since the alternative hypothesis is that the response rate in the intervention arm is greater than in the placebo arm.
For example, when evaluating a new intervention, an investigator may consider using a smaller Type I error rate in some settings; alternatively, a larger Type I error rate may be acceptable in others. Third, select the Type II error rate. The Type II error is the probability of incorrectly failing to reject the null hypothesis when the null hypothesis should be rejected.
The implication of a Type II error in the example above is that an effective intervention is not identified as effective. Type II error and power are not generally regulated and thus investigators can evaluate the Type II error that is acceptable.
For example, when evaluating a new intervention for a serious disease that has no effective treatment, the investigator may opt for a lower Type II error rate. Fourth, obtain estimates of quantities needed for the calculation (e.g., the variability of the response); this may require searching the literature for prior data or running pilot studies. Finally, select the minimum sample size such that two conditions hold: (1) if the null hypothesis is true, the probability of incorrectly rejecting it is no more than the selected Type I error rate; and (2) if the alternative hypothesis is true, the probability of incorrectly failing to reject it is no more than the selected Type II error rate (equivalently, the probability of correctly rejecting the null hypothesis is the selected power).
Since assumptions are made when sizing the trial (e.g., about response rates or variability), interim analyses can be used to evaluate the accuracy of these assumptions and potentially to adjust the sample size should they not hold. Sample size calculations may also need to be adjusted for the possibility of nonadherence or participant drop-out. In general, the following increase the required sample size: a lower Type I error, a lower Type II error, larger variation, and the desire to detect a smaller effect size or to achieve greater precision.
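As a minimal sketch of such a calculation for the two-proportion setting sketched above, using the standard normal-approximation formula (the response rates, error rates, and dropout figure below are hypothetical):

    import math
    from scipy.stats import norm

    def n_per_group(p_control, p_treat, alpha=0.05, power=0.90):
        """Per-group sample size for a two-sided two-proportion comparison
        (normal approximation, equal allocation)."""
        z_alpha = norm.ppf(1 - alpha / 2)
        z_beta = norm.ppf(power)
        variance = p_control * (1 - p_control) + p_treat * (1 - p_treat)
        return math.ceil((z_alpha + z_beta) ** 2 * variance / (p_control - p_treat) ** 2)

    def inflate_for_dropout(n, dropout_rate):
        """Inflate the sample size to offset anticipated participant drop-out."""
        return math.ceil(n / (1 - dropout_rate))

    n = n_per_group(0.30, 0.45)          # hypothetical 30% vs. 45% response rates
    print(n)                              # about 214 per group
    print(inflate_for_dropout(n, 0.10))  # about 238 per group allowing 10% drop-out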
An alternative method for calculating the sample size is to identify a primary quantity to be estimated and then estimate it with acceptable precision. For example, the quantity to be estimated may be the between-group difference in the mean response.
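A companion sketch for this estimation-based approach: it sizes each group so that the confidence interval for the between-group difference in means has a desired half-width, assuming a known common standard deviation (all numbers hypothetical):

    import math
    from scipy.stats import norm

    def n_per_group_for_precision(sigma, half_width, alpha=0.05):
        """Per-group n so the (1 - alpha) CI for a difference in two means
        has approximately the requested half-width (common SD sigma)."""
        z = norm.ppf(1 - alpha / 2)
        return math.ceil(2 * (z * sigma / half_width) ** 2)

    # Hypothetical: SD of 10 points, target 95% CI half-width of 2.5 points
    print(n_per_group_for_precision(sigma=10, half_width=2.5))  # 123 per group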
A sample size is then calculated to ensure a high probability that this quantity is estimated with acceptable precision, as measured by, say, the width of the confidence interval for the between-group difference in means. Interim analysis should be considered during trial design, since it can affect the sample size and planning of the trial. When trials are very large or long in duration, when the interventions have associated serious safety concerns, or when the disease being studied is very serious, interim data monitoring should be considered.
Typically, a group of independent experts (i.e., a data and safety monitoring board, or DSMB) is convened. The DSMB meets regularly to review data from the trial to ensure participant safety and evaluate efficacy, to confirm that trial objectives can be met, to assess trial design assumptions, and to assess the overall risk-benefit of the intervention. The project team typically remains blinded to these data, if applicable. The DSMB then makes recommendations to the trial sponsor regarding whether the trial should continue as planned or whether modifications to the trial design are needed.
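Repeated interim looks at accumulating data inflate the overall Type I error rate unless the interim significance thresholds are adjusted. One standard remedy from the group-sequential literature (a common technique, not something this article specifically prescribes) is an alpha-spending function; a minimal sketch of the O'Brien-Fleming-type Lan-DeMets version:

    from scipy.stats import norm

    def obf_alpha_spent(t, alpha=0.05):
        """Cumulative two-sided alpha spent by information fraction t (0 < t <= 1)
        under the O'Brien-Fleming-type Lan-DeMets spending function."""
        z = norm.ppf(1 - alpha / 2)
        return 2 * (1 - norm.cdf(z / t ** 0.5))

    # Very little alpha is spent at early looks, preserving most for the final analysis.
    for t in (0.25, 0.50, 0.75, 1.00):
        print(f"information fraction {t:.2f}: cumulative alpha {obf_alpha_spent(t):.4f}")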
Careful planning of interim analyses is prudent in trial design. Care must be taken to avoid inflation of statistical error rates associated with multiple testing, to avoid other biases that can arise from examining data prior to trial completion, and to maintain the trial blind. Many structural designs can be considered when planning a clinical trial.
Common clinical trial designs include single-arm trials, placebo-controlled trials, crossover trials, factorial trials, noninferiority trials, and designs for validating a diagnostic device.
The choice of the structural design depends on the specific research questions of interest, characteristics of the disease and therapy, the endpoints, the availability of a control group, and on the availability of funding.
Structural designs are discussed in an accompanying article in this special issue. This manuscript summarizes and discusses fundamental issues in clinical trial design. A clear understanding of the research question is a most important first step in designing a clinical trial. Minimizing variation in trial design will help to elucidate treatment effects. Randomization helps to eliminate bias associated with treatment selection.
Stratified randomization can be used to help ensure that treatment groups are balanced with respect to potentially confounding variables. Blinding participants and trial investigators helps to prevent and reduce bias. Placebos are utilized so that blinding can be accomplished. Control groups help to discriminate between intervention effects and natural history. The selection of a control group depends on the research question, ethical constraints, the feasibility of blinding, the availability of quality data, and the ability to recruit participants.
The selection of entry criteria is guided by the desire to generalize the results, concerns for participant safety, and minimizing bias associated with confounding conditions. Endpoints are selected to address the objectives of the trial and should be clinically relevant, interpretable, sensitive to the effects of an intervention, practical and affordable to obtain, and measured in an unbiased manner.
Composite endpoints combine a number of component endpoints into a single measure. Surrogate endpoints are measures that are predictive of a clinical event but take a shorter time to observe than the clinical endpoint of interest. Interim analyses should be considered for larger trials of long duration or trials of serious disease or trials that evaluate potentially harmful interventions.
Sample size should be considered carefully so as not to be wasteful of resources and to ensure that a trial reaches conclusive results. There are many issues to consider during the design of a clinical trial. Researchers should understand these issues when designing clinical trials. The author would like to thank Dr. Justin McArthur and Dr. The author thanks the students and faculty in the course for their helpful feedback.