Even under ideal conditions, clinical research is threatened by common flaws that may seriously affect the validity of study findings. It is crucial for both investigators and readers to be aware of these threats and their adverse effects on data interpretation and the generalisability of findings. The two most important flaws are bias and confounding. While both may creep in subtly, they have fundamentally different effects on study validity. Therefore, correct interpretation of reported findings requires a clear understanding of these flaws. The first part of this edutorial introduces the principal types of bias that typically affect clinical research. The second part will deal with the concept of confounding.
Bias can be defined as ‘any process at any stage of inference that tends to produce results or conclusions that differ systematically from the truth’.
For instance, a meta-analysis that includes only studies reporting positive treatment effects (while ignoring studies with negative findings) is intrinsically in no position to evaluate the ‘real’ treatment effect and is thus biased. Importantly, bias must not be confused with random error: random error is unsystematic, scatters results around the truth and diminishes with increasing sample size, whereas bias shifts results systematically away from the truth and cannot be remedied by a larger sample.
Selection bias is probably the best known type of bias affecting generalisability. It refers to any selective difference between study groups, or between the study population and the ‘real world’. Although differences between study groups may very well be addressed by randomised controlled trials (RCTs), their restrictive inclusion and exclusion criteria often limit how well the trial represents the ‘real world’. Therefore, it is always important to consider whether the study population is truly representative and whether the study design may have introduced an unfair imbalance between groups. For example, an RCT investigating abdominal aneurysm repair exclusively in men will lead to management recommendations that cannot simply be extrapolated to women.
Information bias refers to an inadequate registration of data (e.g., patient characteristics or outcome events) and has also been called measurement or classification bias. Although data recording may always be affected by imprecision, systematic differences, for instance due to unblinded (potentially prejudiced) outcome evaluation in favour of the intervention compared with the control group, may lead to flawed findings. Blinding (wherever possible) and standardised assessment protocols are the most effective guard against information bias.
Recall bias is a specific form of information bias and refers to systematic differences in patients' abilities to recall certain events or symptoms. For instance, in case-control studies, affected subjects who were exposed to the factor of interest may be more likely to remember specific circumstances of their medical history than controls who were never exposed. Another source of recall bias may arise from variable intensity of surveillance programmes: if, for any reason, follow-up intervals are shorter in one group, their recollection of events may be more accurate than in the control group with longer follow-up intervals.
Attrition bias is probably the least known (and thus most underestimated) form of bias. It refers to uneven loss of follow-up information due to differences in follow-up completeness. Obviously, unaccounted follow-up periods (e.g., patients not returning for follow-up) will lead to underestimated outcome rates.
Therefore, if a group with poor outcomes is more often lost to follow-up, its treatment may appear better than it really is. Nonetheless, follow-up completeness is rarely measured or reported, leaving the risk of attrition bias obscure. Of note, attrition bias is introduced only after study inclusion or randomisation; therefore, it may affect any study design, not only retrospective observational studies. A predefined study end date with cross-sectional (and complete) follow-up provides the best protection against attrition bias. At the least, completeness of follow-up should always be reported per study group (e.g., using summary follow-up indices).
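The distorting effect of attrition bias described above can be made concrete with a short, purely illustrative simulation (not part of the original edutorial; all numbers are hypothetical). Both arms have the same true event rate, but in one arm patients who suffer the event are disproportionately lost to follow-up, so its observed event rate is spuriously low:

```python
import random

random.seed(42)
N = 10_000               # hypothetical patients per arm
TRUE_EVENT_RATE = 0.20   # identical true outcome rate in both arms

def observed_rate(loss_if_event, loss_if_no_event):
    """Simulate one study arm: each patient may be lost to follow-up,
    and the chance of loss can depend on whether the event occurred."""
    events = followed = 0
    for _ in range(N):
        had_event = random.random() < TRUE_EVENT_RATE
        loss_prob = loss_if_event if had_event else loss_if_no_event
        if random.random() >= loss_prob:   # patient completed follow-up
            followed += 1
            events += had_event
    return events / followed

# Control arm: 10% loss to follow-up, unrelated to outcome (non-differential).
control = observed_rate(0.10, 0.10)
# Treatment arm: patients with the event are lost far more often (30% vs 10%).
treatment = observed_rate(0.30, 0.10)

print(f"control arm observed event rate:   {control:.3f}")
print(f"treatment arm observed event rate: {treatment:.3f}")
```

Despite identical true outcomes, the treatment arm's observed rate falls to roughly 0.14/0.86 ≈ 0.16 (events retained at 70% of 20%, non-events at 90% of 80%), while the control arm stays near 0.20; the treatment appears beneficial purely through differential loss to follow-up.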
Finally, publication bias refers to the risk associated with the publication process, whereby authors are more likely to submit (and journals more likely to accept and publish) positive rather than negative findings. As a result, the apparent benefit of a treatment may be unfairly inflated in biased meta-analyses (see above).
In conclusion, reported differences between treatment groups must always be weighed against four potential scenarios: (i) Is it just random variation within the sample (i.e., a chance finding, quantified by the p value and affected by study power)?
(ii) Is there a (hidden) systematic flaw in patient selection or data collection that may limit the applicability of the finding (i.e., any type of bias)? (iii) Is there an unaccounted confounding factor that may, in reality, cause the effect (rather than the ‘treatment’)? (Misinterpretation due to confounding is discussed in the second part of this Edutorial.) (iv) Or is the treatment effect real? Therefore, the methods section of any article should always be scrutinised for (and should ideally provide) all critical information needed for a fair assessment of bias and confounding. More in-depth information may be found in specialised literature such as Epidemiological Studies: A Practical Guide by Silman and Macfarlane (2002).