Edited by George Jelinek and David M Taylor

One important strategy in clinical research is to compare groups of people. These might be different groups or the same group pre- and post-intervention. The methods used are mainly non-experimental, that is, observational. They are based on what we can observe and compare in groups of people within populations. By comparing the characteristics (such as behaviours and exposures) and the health experiences of these groups of people, it is possible to identify associations that might be responsible for the cause of a disease.

The research question forms the basis of every research study and is the reason that it is undertaken. It is the scientific, clinical, practical or hypothetical question that, when answered, will allow the researcher to apply newly found knowledge for some useful purpose. The research question may be generated from many sources, including questions raised by clinical observations, the published medical literature, scientific conferences, seminars and discussions, or the effectiveness of currently used or new treatments.

A hypothesis is a bold statement of what we think the answer to the research question is. Essentially, it is our best guess of what the underlying reality is. As such, it has a pivotal role in any study. The purpose of a research study is to weigh the evidence for and against the study hypothesis. Accordingly, the hypothesis is directly related to the research question. In expressing a hypothesis, the researcher needs to be very specific about who or what is to be observed and under what conditions. A failure to define clearly the study groups and the study endpoints often leads to sloppy research.

The aims of a study are a description of what the researcher hopes to do in order to weigh up the evidence for and against the study hypothesis. Just as the research question begs the hypothesis, the hypothesis begs the study aims.
The examples above demonstrate clearly the natural progression from research question through to the study aims. This is a simple, yet important, process and time spent defining these components will greatly assist in clarifying the study’s objectives. These concepts are discussed more fully elsewhere [1,2].

Most research projects are undertaken as collaborative efforts, with the co-investigators each contributing in their area of expertise. Co-investigators should meet the criteria for co-authorship of the publication reporting the study’s findings [3]. Usually, the person who has developed the research question takes the role of principal investigator (team leader) for the project. Among the first tasks is to assemble the research team. Ideally, the principal investigator determines the areas of expertise required for successful completion of the project (e.g. biostatistics) and invites appropriately skilled personnel to join the team [1].

It is advisable to keep the numbers within the team to a minimum. In most cases, three or four people are adequate to provide a range of expertise without the team becoming cumbersome. It is recommended that nursing staff be invited to join the team, if this is appropriate. This may foster research interest among these staff, improve departmental morale and greatly assist data collection and patient enrolment.

All co-investigators are expected to contribute time and effort to the project, although the extent of this contribution will vary. The temptation to include very senior staff or department heads simply to bolster the profile of the project should be avoided if possible. It is recommended that personality and track record for ‘pulling one’s weight’ be considered when assembling the team. There is little more frustrating than having poor contributors impede the progress of a study. Assigning specific responsibilities, in writing, to each member of the team is a useful tactic in preventing this potential problem.
However, care should be taken to ensure that the timelines for assignment completion are reasonable. The importance of good communication within the research team cannot be overemphasized. This is usually the responsibility of the principal investigator and may involve regular meetings or reports. To avoid flooding each co-investigator with excessive or trivial communications (e.g. e-mail), only selected important communications should be forwarded as they appear, for instance notification of ethics committee approval and updates on enrolment.

The protocol is the blueprint or recipe of a research study. It is a document drawn up prior to commencement of data collection that is a complete description of the study to be undertaken [4]. Every member of the study team should be in possession of an up-to-date copy. Furthermore, an outside researcher should be able to pick up the protocol and successfully undertake the study without additional instruction. Research protocols are required: The protocol should be structured largely in the style of a journal article’s Introduction and Methods sections [4]. Hence, the general structure is as follows:

- Study setting and period – a description of where and when the study will take place.
- Data-collection instruments, e.g. questionnaires, proformas, equipment.
- Data-collection procedures, including quality-control procedures to ensure the integrity of data.
- Ethical issues – subject confidentiality, safety, security and access to data.

This general plan should be followed in the preparation of any study protocol. However, the final protocol will vary from study to study.

Study design, in its broadest sense, is the method used to obtain data to weigh up the evidence for and against the study hypothesis. Many factors influence the decision to use a particular study design and each design has advantages and disadvantages. For a more extensive discussion on study design the reader is referred elsewhere [1,5].
In general, research studies examine the relationship between an exposure or risk factor (e.g. smoking, obesity, vaccination) and an outcome of interest (e.g. lung cancer, cardiac disease, protection from infection). In observational (non-experimental) studies, the principal challenge is to find a naturally occurring experiment, i.e. a comparison of two or more populations that enables the investigator to address a hypothesis about the outcome of interest.

Cross-sectional studies examine the present association between two variables. For example, within a population you could take a single random sample of all persons, measure some variable of interest (e.g. lung function) and then correlate that variable with the presence or absence of lung cancer. Data are often collected in surveys and the information on exposure and outcome of interest is collected from each subject at one point in time. The main outcome measure obtained from a cross-sectional study is prevalence.

Ecological studies relate the rate of an outcome of interest to an average level of exposure that is presumed to apply to all persons in the population or group under investigation. So, for example, we could determine the association between the average amount smoked per capita in different countries and the incidence of lung cancer in each country.

In a cohort study, a group of individuals, in whom the personal exposures to a risk factor have been documented, is followed over time. The rate of disease that subsequently occurs is examined in relation to the individuals’ exposure levels. For example, within a population you could take a sample (cohort) of healthy individuals, document their personal past and ongoing smoking history, and relate that to the subsequent occurrence of lung cancer in that same sample. Although not as powerful a study design as clinical trials (see below), cohort studies are able to provide valuable data relating to the causation of disease.
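The prevalence obtained from a cross-sectional sample is simply the proportion of subjects found to have the outcome, and its precision can be quantified with a confidence interval. The following is a minimal sketch, not from the text: the function name and counts are illustrative, and the normal-approximation (Wald) interval is one common choice.

```python
import math

def prevalence_with_ci(cases: int, n: int, z: float = 1.96):
    """Point prevalence from a cross-sectional sample, with a
    normal-approximation (Wald) 95% confidence interval."""
    p = cases / n
    se = math.sqrt(p * (1 - p) / n)  # standard error of a proportion
    return p, (p - z * se, p + z * se)

# Illustrative: 120 of 1000 sampled subjects have the outcome of interest
p, (low, high) = prevalence_with_ci(120, 1000)
```

Note how the interval width shrinks as `n` grows, which is the statistical reason larger cross-sectional samples give more precise prevalence estimates.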
Case-control studies involve a comparison between a representative sample of people with an outcome of interest (cases) and another sample of people without the outcome (controls). The frequencies of past exposures to risk factors of interest are compared in each group. If an antecedent feature (exposure) is found to be more common in the cases than the controls, this suggests an association between that exposure and the development of the outcome. Case-control studies provide only medium-level evidence of an association between exposure and outcome of interest.

Case reports and case series are often employed in emergency medicine research. In these, the clinical details (history, management, outcome) of interesting or similar patients are described. This study design provides weak evidence for an association between exposure and outcome of interest and is best employed for hypothesis generation. For example, a series of patients who all developed skin necrosis after being bitten by a certain spider would reasonably lead to the hypothesis that the venom of the spider of interest contained a particular tissue necrosis factor. However, this hypothesis would need to be proven by the isolation of the factor and experimental demonstration of its effects. Data for case reports/series are often extracted from medical record reviews or existing databases. This is one reason for the weakness of this study design, insofar as the data were most likely collected for purposes other than the research study. Accordingly, such data are often of low quality and may suffer from inaccuracies, incompleteness and measurement bias.

In an experimental study, the researcher is more than a mere observer and actively manipulates study subjects’ exposure to a factor of interest (risk) and measures the effects (outcomes) of this manipulation.
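In a case-control study, the usual measure of association is the odds ratio from the 2×2 table of exposure by case/control status. As a minimal sketch (the counts below are invented for illustration, and the Woolf log-method confidence interval is one standard choice, not something specified in the text):

```python
import math

def odds_ratio(a: int, b: int, c: int, d: int):
    """Odds ratio with Woolf 95% CI from a 2x2 table:
    a = exposed cases,    b = unexposed cases,
    c = exposed controls, d = unexposed controls."""
    or_ = (a * d) / (b * c)
    se_log = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)  # SE of log odds ratio
    ci = (math.exp(math.log(or_) - 1.96 * se_log),
          math.exp(math.log(or_) + 1.96 * se_log))
    return or_, ci

# Hypothetical counts: exposure is more common among cases than controls
or_, (low, high) = odds_ratio(40, 60, 20, 80)
```

An odds ratio whose confidence interval excludes 1 suggests an association between the antecedent exposure and the outcome, subject to the bias and confounding caveats discussed later.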
The preferred form of experimental study is currently the randomized, controlled trial, in which the intervention is randomly assigned at the level of the individual study subject. Although this is the most scientifically rigorous design, other study designs must often be used for a number of reasons, including ethical ones: we cannot easily use experimental studies to examine factors that are thought to increase the risk of disease in humans. For example, you could not do a study in which you ask half of the group to smoke for 10 years and half of the group to remain non-smokers.

Randomization is a process by which patients are allocated to one of two or more study groups, purely by chance. Randomization prevents any manipulation by the investigators or treating doctors in the creation of the treatment groups. This prevents a situation whereby a doctor can, for example, allocate the sicker (or not so sick) patients to a new treatment. Randomization also helps to produce study groups comparable to one another with respect to known, as well as unknown, confounding variables (e.g. risk factors). The most convenient methods of randomizing patients are random number tables in statistical textbooks or computerized random-number-generating programs. A fundamental aspect of randomization is that it must only take place after the commitment to participate has been made (enrolment has taken place). Another important principle is that randomized patients are irrevocably committed to follow up and must not be excluded from, or lost to, follow up, regardless of their subsequent compliance or progress (‘intention-to-treat analysis’).

Blinding is the most effective method of minimizing systematic error (bias) in clinical trials. In single-blinded studies, patients participating in the trial are unaware which treatment they are receiving but the investigators do know. In double-blind studies, neither the subjects nor the investigators know which patient is receiving which treatment.
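The randomization procedure described above can be scripted with a computerized random-number generator. A minimal sketch, with illustrative function and arm names (assuming an even number of subjects and two arms), in which allocation happens only after enrolment, as the text requires:

```python
import random

def randomize(enrolled_ids, seed=None):
    """Allocate already-enrolled subjects to two arms purely by chance,
    with equal numbers in each arm (a simple permuted allocation).
    Assumes an even number of enrolled subjects."""
    rng = random.Random(seed)
    arms = ['treatment', 'control'] * (len(enrolled_ids) // 2)
    rng.shuffle(arms)  # chance alone decides each subject's arm
    return dict(zip(enrolled_ids, arms))

# Randomize 20 subjects who have already committed to participate
allocation = randomize([f'subj{i:02d}' for i in range(1, 21)], seed=7)
```

Because the allocation list is shuffled rather than chosen by anyone, neither the investigators nor the treating doctors can influence which arm a given patient joins.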
This type of study is usually only feasible with drug studies where it is possible to provide identically appearing medication. This is often achieved using the double-dummy approach, in which patients receive two medications, one active and the other placebo. The alternative treatment involves a swap-over of the active and placebo medications. Even in apparently blinded studies, there may be various indicators that allow the patient or investigator to determine which treatment they are receiving. In this circumstance, additional methods of bias control may be needed.

It is essential that the study uses valid and repeatable methods, that is, measurements that measure what they purport to measure. Ideally, the validity of each of the measurements used in any study should be tested, during the design stage of the study, against another method of measuring the same thing that is known to be valid. Two types of validity are described:

Non-response is a problem for many types of observational study. Often people who participate in a study (responders) have different characteristics from those who do not (non-responders). This can introduce substantial selection bias into the prevalence estimates of a cross-sectional study. In order to minimize this bias, as large a sample as possible is required. To this end, investigators undertaking cross-sectional surveys aim for at least 70% of invited participants actually to respond. Unfortunately, a target response rate of 70% is often not met and low response rates are likely to impact significantly upon the bias and validity of the study.

A variable is a property or parameter that may vary from patient to patient. The independent variable provides the framework for the study hypothesis. This variable is often the factor that is thought to affect the measurable endpoints, or dependent variables, in the study. For example, cigarette smoking causes lung cancer.
In this example, cigarette smoking is the independent variable and lung cancer is the dependent variable, as its incidence and nature depend upon cigarette smoking.

Study endpoints are variables that are impacted upon by the factors under investigation. It is the extent to which the endpoints are affected, as measured statistically, that will allow us to weigh up the evidence for and against the hypothesis. For example, a researcher wishes to examine the effects of a new anti-hypertensive drug. It is known that this drug has minor side effects of impotence and nightmares. A study of this new drug would have a primary endpoint of blood pressure drop and secondary endpoints of the incidence of the known side effects.

Essentially, all forms of investigation involve counting or measuring to quantify the study endpoints. In doing so, there is always the opportunity for error, either in the measurement itself or in the observer who makes the measurement. Such errors (measurement bias) can invalidate the study findings and render the conclusions worthless.

There are several important principles in sampling study subjects. A sampling frame is a list of all members (for instance persons, households, businesses) of the target population that can be used as a basis for selecting a sample. For example, a sampling frame might be the electoral roll, the membership list of a club or a register of schools. It is important to ensure that the sampling frame is complete, that all known deficiencies are identified and that flaws have been considered (omissions, duplications, incorrect entries).

When every member of the population has some known probability of inclusion in the sample, we have probability sampling. There are several varieties. Simple random sampling: in this type of sampling, every element has an equal chance of being selected and every possible sample has an equal chance of being selected.
This technique is simple and easy to apply when small numbers are involved, but requires a complete list of members of the target population.

Systematic sampling: this employs a fixed interval to select members from a sampling frame. For example, every twentieth member can be chosen from the sampling frame. It is often used as an alternative to simple random sampling, as it is easier to apply and less prone to mistakes. Furthermore, it costs less, its process can be easily checked and it can increase the accuracy and decrease the standard errors of the estimate.

A stratified sample is obtained by separating the population into non-overlapping groups or strata (e.g. males and females) and then selecting a single random (or systematic) sample from each stratum. This may be done to obtain separate estimates for each stratum or to accommodate different sampling plans in different strata (e.g. over-sampling). However, the strata should be designed so that they collectively include all members of the target population, each member must appear in only one stratum and the definitions or boundaries of the strata should be precise and unambiguous.

Convenience sampling is an example of non-probability sampling. This technique is used when patients are sampled during periods convenient for the investigators. For example, patients presenting to an emergency department after midnight are much less likely to be sampled if research staff are not present. This technique is less preferred than probability sampling, as there is less confidence that a non-probability sample will be representative of the population of interest or can be generalized to it. However, it does have its uses, such as in in-depth interviews for groups difficult to find and for pilot studies.

Surveys are one of the most commonly used means of obtaining research data. While seemingly simple in concept, the execution of a well-designed, questionnaire-based survey can be difficult.
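The probability-sampling schemes above can be illustrated in a few lines. A minimal sketch using an invented 200-person sampling frame (the field names and sample sizes are illustrative only):

```python
import random

rng = random.Random(42)
# Invented sampling frame: 200 members, alternating sex
frame = [{'id': i, 'sex': 'F' if i % 2 else 'M'} for i in range(1, 201)]

# Simple random sampling: every member has an equal chance of selection.
srs = rng.sample(frame, 20)

# Systematic sampling: every k-th member, starting from a random offset.
k = len(frame) // 20
systematic = frame[rng.randrange(k)::k]

# Stratified sampling: a separate random sample drawn within each stratum.
strata = {'M': [m for m in frame if m['sex'] == 'M'],
          'F': [m for m in frame if m['sex'] == 'F']}
stratified = rng.sample(strata['M'], 10) + rng.sample(strata['F'], 10)
```

Note that the stratified draw guarantees the sex balance of the sample, whereas simple random sampling only achieves it on average; that guarantee is exactly what stratification buys.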
From a practical point of view, the following points are suggested:

- Cross-check all the data again.
- Perform the main data analysis.
- Perform any other exploratory data analysis.

If possible, incorporate commonly asked questions into your questionnaire. One good source of such questions is standard surveys (such as those of the Australian Bureau of Statistics). There are many other sources of pre-validated questions (for instance measures of quality of life, functional ability and disease-specific symptoms). The scientific literature, accessible through MEDLINE and other databases, is a good start. This is particularly important if you want to compare the sample with other surveys or, in general, if you want to be able to compare the sample’s responses to previously completed work. Also, previously used questionnaires for similar topics are very helpful and often can be used directly. The advantage of doing this is that these questionnaires’ reliability and validity are established.

The wording of a question can affect its interpretation. Attitude questions with slightly different wordings can elicit differing responses, so several questions on the same topic may be helpful to be certain that the ‘true attitude’ of the respondent is obtained. This technique can enhance internal validity and consistency.

Pre-testing of a questionnaire is most important. Consider the following points:

- Assess face validity of all questions.
- Do different people have similar interpretations of questions?
- Do closed questions have appropriate possible answers?
- Does the questionnaire give a positive impression?

It is always worth checking with your colleagues to determine whether the questionnaire will answer the study question. Also, test the questionnaire on a cross-section of potential respondents of differing reading levels and backgrounds. There can be a few surprises and several revisions may be required before the final questionnaire is determined.
These documents, also called case report forms, are generally used to record individual case data that are later transferred to electronic databases. These data may be obtained from the patient directly (e.g. vital sign measurements) or extracted from the medical records or a similar source. While simple in concept, careful design of a data-collection proforma should be undertaken. First, a list of the data required should be drafted and translated into data fields on the proforma. These fields should be clearly laid out and well separated. Prior to data collection, the proforma should be trialled on a small selection of subjects. In such an exercise, it is commonly found that the data fields are not adequate for the collection of the required data. Hence, revision of the proforma is often required.

Consideration should be given to the ease of data entry and extraction from the proforma. Data entry should progress logically from the top to the bottom of the document without interruption. This is particularly important for data extraction from medical records. Data extracted from the front of the record should be entered at the top of the proforma, and so on. Consideration should also be given to later translation of the data to an electronic database. This should follow the same principles as described above. If possible, design a proforma that will allow data to be scanned directly into an electronic database.

In any study design, errors may occur. This is particularly so for observational studies. When interpreting findings from an observational study, it is essential to consider how much of the association between the exposure (risk factor) and the outcome may have resulted from errors in the design, conduct or analysis of the study [5].
The following questions should be addressed when considering the association between an exposure and outcome: Bias resulting from the way a study is designed or carried out can result in an incorrect conclusion about the relationship between an exposure (risk factor) and an outcome (such as a disease) of interest [5]. Small degrees of systematic error may result in high degrees of inaccuracy. Many types of bias can be identified.

Confounding is not the same as bias. A confounding factor can be described as one that is associated with the exposure under study and independently affects the risk of developing the outcome [5]. Thus, it may offer an alternative explanation for an association that is found and, as such, must be taken into account when collecting and analysing the study results. Confounding may be a very important problem in all study designs. Confounding factors themselves affect the risk of disease and, if they are unequally distributed between the groups of people being compared, a wrong conclusion about an association between a risk factor and a disease may be made. A lot of the effort put into designing non-experimental studies goes into addressing potential bias and confounding.

For example, in an often-cited case-control study on the relationship between coffee drinking and pancreatic cancer, the association between exposure and disease was found to be confounded by smoking. Smoking is a risk factor for pancreatic cancer; it is also known that coffee drinkers are more likely to smoke than non-coffee drinkers. These two points create a situation in which the proportion of smokers will be higher in those who drink coffee than in those who do not. The uneven distribution of smokers then creates the impression that coffee drinking is associated with an increased rate of pancreatic cancer, when it is smoking (related to both coffee drinking and pancreatic cancer) that underlies the apparent association.
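The coffee–smoking example can be made concrete with a stratified analysis: compute the odds ratio within each smoking stratum and compare it with the crude (unstratified) odds ratio. The counts below are invented purely to illustrate the arithmetic; they are not data from the cited study.

```python
def odds_ratio(a, b, c, d):
    """OR from a 2x2 table: a/b = exposed/unexposed cases,
    c/d = exposed/unexposed controls."""
    return (a * d) / (b * c)

# Hypothetical counts (coffee = exposure, pancreatic cancer = outcome),
# constructed so that coffee has no effect within either smoking stratum:
# (coffee cases, other cases, coffee controls, other controls)
smokers = (40, 10, 20, 5)
non_smokers = (4, 16, 20, 80)

# Crude table: collapse the strata, i.e. ignore smoking altogether
crude = tuple(s + n for s, n in zip(smokers, non_smokers))

or_smokers = odds_ratio(*smokers)          # 1.0: no association in stratum
or_non_smokers = odds_ratio(*non_smokers)  # 1.0: no association in stratum
or_crude = odds_ratio(*crude)              # > 1: spurious, from confounding
```

The crude analysis suggests coffee raises risk, yet within each smoking stratum the odds ratio is exactly 1; the apparent association comes entirely from smokers being over-represented among coffee drinkers.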
Common confounders that need to be considered in almost every study include age, gender, ethnicity and socioeconomic status. Age is associated with increased rates of many diseases. If the age distribution in the exposure groups differs (such as where the exposed group is older than the non-exposed group), then the exposed group will appear to be at increased risk for the disease. However, this relationship would be confounded by age. Age would be the factor that underlies the apparent, observed, association between the exposure and disease. Although age is a common confounder, it is the biological and perhaps social changes that occur with age that may be the true causes of the increased rate of disease.

There are several ways to control for the effect of confounding. To control for confounding during the design of the study, there are several possible alternatives: In the analysis phase of a study, one can use:

The sample must be sufficiently large to give adequate precision in the prevalence estimates obtained by the study for the purposes required. The most common mistake made by inexperienced researchers is to underestimate the sample size required. As a result, the sample may be too small and unrepresentative of the population it is meant to represent. This usually leads to outcome measures that have very wide 95% confidence intervals and, hence, statistically significant differences between study groups may not be found. To ensure that a study has an adequate sample size to show statistically significant differences, if they exist, sample sizes should be calculated prior to the study’s commencement. In reality, sample size is often determined by logistic and financial considerations, that is to say a trade-off between sample size and costs.

The power of a study is the chance of correctly identifying, as statistically significant, an effect that truly exists. If we increase the sample size, we increase the power.
As a general rule, the closer the power of a study is to 1.0, the better. This means that the type II error will be small, that is, there will be only a small chance of not finding a statistical difference when there really is one. Usually, a power of 0.8 or more is sufficient.

To determine statistical significance, we can obtain a P value, relative risk or some other statistical parameter that is indicative of a difference between study groups. However, a statistical difference (e.g. P<0.05) between groups may be found if the study is highly powered (many subjects), even though the absolute difference between the groups is very small and not a clinically significant (meaningful) difference. The distinction between statistical and clinical significance is important for two reasons. First, it forms the basis of sample size calculations. These calculations include consideration of what is thought to be a clinically significant difference between study groups. The resulting sample sizes adequately power the study to demonstrate a statistically and clinically significant difference between the study groups, if one exists. Second, when reviewing a research report, the absolute differences between the study groups should be compared. Whether or not these differences are statistically significant is of little importance if the difference is not clinically relevant. For example, a study might find an absolute difference in blood pressure between two groups of 3 mmHg. This difference may be statistically significant, but too small to be clinically relevant.

The fundamental objective of any research project is to collect information (data) to analyse statistically and, eventually, produce a result. Data can come in many forms (laboratory results, personal details) and are the raw material from which information is generated. Therefore, how data are managed is an essential part of any research project [4]. Many a study has foundered because the wrong data were collected or important data were not collected.
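The interplay between the clinically significant difference, power and sample size discussed above can be sketched with the standard normal-approximation formula for comparing two means. The function name and blood-pressure numbers are illustrative; z = 1.96 corresponds to two-sided alpha = 0.05 and z = 0.84 to a power of 0.8.

```python
import math

def n_per_group(delta, sigma, z_alpha=1.96, z_beta=0.84):
    """Subjects needed per group to detect a true difference in means of
    `delta`, given within-group SD `sigma`, at two-sided alpha 0.05 and
    power 0.80 (normal-approximation formula)."""
    return math.ceil(2 * (z_alpha + z_beta) ** 2 * sigma ** 2 / delta ** 2)

# Detecting a clinically meaningful 5 mmHg blood-pressure difference
# (SD 10 mmHg) needs far fewer subjects than chasing a 3 mmHg difference
# that may be statistically detectable but clinically trivial.
n_meaningful = n_per_group(5, 10)
n_trivial = n_per_group(3, 10)
```

The inverse-square dependence on `delta` is why choosing the clinically significant difference before the study starts matters so much: halving the difference to be detected roughly quadruples the required sample size.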
Generally, data fall into the following groups:

- identification data: personal information needed to link to an individual patient
- administrative data: initials of the data collector, the study centre if a multicentred trial.

Collect only the research data that are essential to answer the study question. Collection of data that will not be of use is time-consuming, expensive and may detract from the quality of the remaining data. However, there will usually be a minimum of data that must be collected. If these data are not collected, then the remaining data may not be analysed adequately. This relates particularly to data on confounding factors.

A database is a specific collection of data that is organized in a structured fashion. In other words, database software provides us with a way of organizing the data we collect from a research project in a systematic way.

Data entry refers to the entry of data into the electronic database, e.g. Access, Excel. Even if the study design and the data collection have been well done, the final data set may contain inaccurate data if the data-entry process is inadequate. This relates particularly to manually entered data, where mistakes are bound to happen. Data entry can be achieved in many ways: Data checking is, effectively, a quality assurance process that confirms the accuracy of the data and can be done in the following ways:

Participation in a clinical trial may involve a sacrifice by the participant of some of the privileges of normal medical care for the benefit of other individuals with the same illness. The privileges forgone might include: Participation may also require the discomfort and inconvenience associated with additional investigations and the potential incursion on privacy. Without the willingness of some individuals to make these sacrifices, progress in clinical medicine would be greatly impaired.
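The data-checking step described earlier, quality assurance of manually entered data, can be partly mechanized by double data entry: two people key the same proformas independently and a script reports every disagreement for resolution against the source document. A minimal sketch, with invented record layout and field names:

```python
def entry_discrepancies(entry_a, entry_b):
    """Compare two independent entries of the same records (dicts keyed by
    subject id) and list (subject_id, field) pairs that disagree."""
    problems = []
    for sid in sorted(set(entry_a) | set(entry_b)):
        ra, rb = entry_a.get(sid), entry_b.get(sid)
        if ra is None or rb is None:
            problems.append((sid, '<record missing in one entry>'))
            continue
        for field in sorted(set(ra) | set(rb)):
            if ra.get(field) != rb.get(field):
                problems.append((sid, field))
    return problems

# Hypothetical double entry of two subjects' vital signs
first = {'p01': {'sbp': 142, 'hr': 88}, 'p02': {'sbp': 118, 'hr': 72}}
second = {'p01': {'sbp': 142, 'hr': 88}, 'p02': {'sbp': 181, 'hr': 72}}
mismatches = entry_discrepancies(first, second)  # p02's sbp was mis-keyed
```

Every reported pair is then checked against the original proforma, catching transposition errors (such as the 118/181 above) that a single keying pass would let through.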
Most individuals who now expect to receive safe and effective medical care are benefiting from the sacrifices previously made by other individuals. Some have argued, in contrast, that enrolment in clinical trials ensures the best care currently available, with greater involvement and scrutiny by attending healthcare teams. If one accepts that clinical trials are morally appropriate, then the ethical challenge is to ensure a proper balance between the degree of individual sacrifice and the extent of the community benefit. However, it is a widely accepted community standard that no individual should be asked to undergo any significant degree of risk, regardless of the community benefit involved; that is, the balance of risks and benefits must be firmly biased towards the individual participant.

Because of the trade-offs required, and because of the spectrum of views about the degree of personal sacrifice that might be justified by a given community benefit, it is accepted that all clinical trials should be reviewed by an ethics committee that has, as a minimum:

- sufficient technical expertise to quantify the risks and benefits involved
- adequate community representation, so that any decisions are in keeping with community standards.

It is unethical to request individuals to undergo the risks and inconvenience of a study that is unlikely to provide a scientifically worthwhile result. It is also unethical to request sacrifices from volunteers that are out of keeping with the value of the research being undertaken. In keeping with this principle, studies that suffer from substantial design errors or are susceptible to serious bias should not be approved until these deficiencies are remedied. It is unethical to allow scientifically invalid studies to proceed. Sample-size calculations should be scrutinized because of the ethical undesirability of including too few subjects to provide an answer, or many more than are needed to provide a convincing answer.
Another safeguard to ensure that the research will be valuable is that the investigator should be qualified, experienced and competent, with a good knowledge of the area of study, and have adequate resources to ensure its completion.

It is unethical to require any patient to forgo proven effective treatment during the course of a trial. It follows that clinical trials should only be undertaken when each of the treatments being compared is equally likely to have the more favourable outcome. Very commonly, however, there is an expectation before a trial is commenced that one or other treatment is the more beneficial. This may be based on the results of uncontrolled studies or even on biochemical or physiological expectations. The large number of times such expectations have been proven wrong can still provide strong justification for a trial. If such an expectation of benefit is held strongly by an individual, it is probably not ethical for that individual to participate in a study. Furthermore, it is the responsibility of an ethics committee to assess the strength of the presumptive evidence favouring one or other treatment and consider whether any substantial imbalance in likely outcome exists. This must be considered in relation to the importance of the question being addressed.

Participants in clinical trials have a fundamental right to be fully informed about the nature of a clinical trial and to be free to choose whether or not to take part. Ethical principles also dictate that prospective participants be provided with a full explanation of the discomforts and inconvenience associated with the study and a description of all risks that may reasonably be considered likely to influence the decision whether or not to participate [4]. It is usual practice to provide prospective participants with a Participant Information and Consent Form that gives a simple, easy-to-understand account of the purposes, risks and benefits associated with participation in the study.
Ethics committees are required to review these statements and confirm that they provide a reasonable account. In practice, the procedures involved in obtaining informed consent are often problematic. Given the dependence of sick patients on the health system, their anxiety and their desire to cooperate with their physicians, it is doubtful whether informed consent is ever freely given. Where ethics committees identify situations in which this is likely to be a particular problem, involving an independent, uninvolved person to explain the study may be useful.

Anne-Maree Kelly

Sharing knowledge and experience through publication is an important way of improving clinical practice. In addition, researchers have an ethical obligation to publish their findings. Communication may be by way of an original research publication, brief report, case report or letter to the editor. Each of these has different requirements in terms of content, format and length, and these requirements may vary between journals.

It is useful to choose the intended journal for publication early. While impact factor may be a consideration in this choice, most authors are more concerned with publishing in a journal that reaches the appropriate target audience for the subject matter of the paper. It is important to check the Instructions for Authors of the chosen journal to ensure that your submission matches that journal's requirements; failure to do so reduces the chances of acceptance considerably. Although journals differ in format and style, all prefer clear and concise communication. In particular, it is important for the material to be arranged logically, so that clear relationships can be seen between the objective of the study or communication, the evidence and any conclusions drawn.

Authorship can be a contentious issue; however, there are defined requirements for qualification as an author.
The International Committee of Medical Journal Editors state that authorship credit should be based on:
Academic Emergency Medicine
24.1 Research methodology
Introduction
Initiating the research project
The research question
The study hypothesis
The study aims
Assembling the research team
Development of the study protocol
Purpose of the study protocol
Protocol structure
Introduction
Methods
Study design
Observational studies
Cross-sectional studies
Ecological studies
Cohort studies
Case-control studies
Case reports and case series
Experimental studies
Main types of clinical trials
Key features of clinical trials
Randomization
Blinding
Concepts of methodology
Validity and repeatability of the study methods
Response rate
Study variables
Study endpoints
Sampling study subjects
Sampling frame
Sampling methodology
Probability sampling
Stratified sampling
Non-probability sampling
Data-collection instruments
Surveys
Designing a survey
Before a survey
During the survey
After the survey
Data-collection proformas
Bias and confounding
Study design errors
Systematic error (bias)
Confounding
Common confounders
Principles of clinical research statistics
Sample size
Study power
Statistical versus clinical significance
Databases and principles of data management
Defining data to be collected
Database design
Data entry
Data validation
Research ethics
Scientific value
Benefits forgone
Informed consent
24.2 Writing for publication
Introduction
Important principles
Authorship, acknowledgement and competing interests