The influences around us – be wary of writers’ biases: McCormack and Jewesson

To maintain professional competence in the ever-changing world of health care, we need to become life-long learners, which obligates us to continuously review, critique and integrate new information.  The sources of this new information are numerous.  A day never goes by without us being exposed to (and probably influenced by) what we have read in the lay press and medical journals; what we have heard on television and radio; what has been presented to us at seminars and continuing education events; and what we are told by our colleagues and patients.

Unfortunately, every source of information has its inherent biases.  Bias is unavoidable because the information being disseminated represents the evidence as interpreted by one or more individuals.  Let’s face it: there is virtually no way of presenting or interpreting research data without introducing bias. We are inundated daily with relatively easy-to-identify examples of the media and the government blatantly falsifying information, and with less obvious examples of the drug industry withholding or distorting data.  In general, though, the biases we are describing here are more subtle and, consequently, more pervasive, more difficult to detect and potentially even more dangerous.

When we talk about bias, we are referring to both the intentional and unintentional bias in the delivery of information that leads authors and readers to inaccurate interpretations and conclusions about the data. Whether these preconceptions are innocent or intentional is unimportant.  What is critical is that the recipient of this information be acutely aware that these biases exist and that, with a dose of healthy skepticism and a little work, their impact can, at least to some degree, be lessened.

The following are just a few examples of interpretation and presentation biases:

A recent media analysis of 207 stories (television, newspaper) about pravastatin, alendronate, and aspirin revealed that 40% of these stories did not report the quantitative benefits of therapy.  Of those that did, 83% reported relative benefits only, 2% reported absolute benefits only, and 15% reported both.  Over one-half (53%, to be exact) did not mention the potential harm to patients, and 70% of the stories failed to mention the costs of therapy.  Of the 207 stories, 170 cited an expert or a scientific study, and 50% cited at least one study or expert with a financial tie to the manufacturer; this conflict of interest was disclosed in only 39% of cases.  It has been shown that reporting information as relative benefits only increases the likelihood that clinicians and patients will choose to use drug therapy.
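The difference between relative and absolute framing is simple arithmetic, yet it changes how impressive a result sounds. A minimal sketch, using hypothetical event rates (not figures from the analysis above), of how the two measures are computed from the same trial:

```python
# Hypothetical event rates, for illustration only (not from any cited trial).
control_rate = 0.04    # 4% of control patients have the event
treatment_rate = 0.03  # 3% of treated patients have the event

# Relative risk reduction: the proportional drop in risk.
rrr = (control_rate - treatment_rate) / control_rate   # 0.25 -> "25% reduction"

# Absolute risk reduction: the actual drop in risk.
arr = control_rate - treatment_rate                    # 0.01 -> "1% reduction"

# Number needed to treat: patients treated to prevent one event.
nnt = 1 / arr                                          # ~100

print(f"RRR {rrr:.0%}, ARR {arr:.1%}, NNT {nnt:.0f}")
```

The same hypothetical drug can be sold as "cuts risk by 25%" or described as "helps one patient in a hundred"; both statements are arithmetically true, but only the absolute figures let a reader judge clinical importance.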

Reporters have stated that “we have to almost overstate, we have to come as close as we can within the boundaries of truth to a dramatic, compelling statement”. This was recently illustrated by television media reports of a study published in the New England Journal of Medicine stating that a 16-fold increase in the risk of hemorrhagic stroke was observed in patients who had taken cough/cold preparations and dietary aids containing phenylpropanolamine (PPA). These reports were accompanied by dramatic pictures of drug store employees removing these products from the shelves in the interest of public safety.  It was quite the sensation, especially when you consider the data behind the hype.  First, the investigators responsible for the study reported that there was no statistically significant association between hemorrhagic stroke and the use of cough and cold remedies containing PPA.  Second, the investigators characterized the risk as “one woman may have a stroke due to PPA for every 107,000 to 3,268,000 women who use PPA as an appetite suppressant within a 3-day window”.  While any increased risk is certainly of concern, it is likely that if the media had presented the risk in this objective context, the story would have received different, and probably significantly less, press coverage.

Consider the impact of such a media report on the public.  Patients often come to us as health professionals and ask our opinion about health care stories reported by the media.  One author (JM) has discussed the PPA issue with several parents who called asking whether or not it was safe to use cough and cold preparations containing PPA in their children.  While their concerns were defused once they were apprised of the actual facts and numbers, they were also surprised to learn that there are actually no published trials demonstrating that these agents have an effect on symptoms in children.  In fact, two trials have shown that cough and cold products containing this agent are no more effective than placebo at relieving symptoms in children under the age of 6. In addition, cough and cold preparations are a common cause of drug overdose in children.

There is also considerable evidence of biased reporting in the medical literature. In 1986, Davidson reported on the results of 107 drug trials in leading medical journals.  Of the company-funded trials, 89% favoured the new drug, whereas only 61% of the non-company-funded trials did.  This may also reflect a publication bias.

Interpretation bias is also prevalent in the medical literature. In 1989, Gotzsche reported that of 196 NSAID trials, 76% contained doubtful or invalid statements in their conclusions or abstracts. After an assessment of 56 published NSAID trials, Rochon et al found that 29% reported superior efficacy of one NSAID over another, while the remaining 71% reported comparable efficacy.  Among those trials claiming superiority, all reported the sponsor’s agent as being more efficacious.  Of the 22 trials identifying one drug as being less toxic than the other, the manufacturer-associated drug was reported to have a superior safety profile in 86.4% of cases.  Only half of the trials reporting less toxicity included any statistical measure to back this claim.

More recently, Stelfox and colleagues reported that of 70 published articles focusing on safety issues of calcium channel blockers, 96% of authors who were supportive of these drugs had financial ties to manufacturers of these agents, whereas only 37% of authors who were critical of the calcium channel blockers had financial ties.

This interpretation bias is not confined to clinical evaluations. In the area of economic analyses, Friedberg et al showed that of 44 economic analyses of oncology drugs, only 5% were unfavourable when funded by the drug industry whereas 38% were unfavourable if the study was not funded by the drug industry.

Recently, one of the authors (JM) reported on the biases associated with the dissemination of the United Kingdom Prospective Diabetes Study (UKPDS) results. The results of this clinical trial showed, first, that intensive blood glucose control with sulphonylureas or insulin in type 2 diabetics over 10 years had no effect on macrovascular endpoints, diabetes-related deaths or all-cause mortality. Second, the results demonstrated that the use of metformin in overweight type 2 diabetics was associated with a 10% absolute reduction in all micro- and macrovascular complications combined, and a reduction in the incidence of individual macrovascular events. Amazingly, however, an editorial accompanying the article stated that the UKPDS showed that “clear and consistent evidence now exists that hyperglycemia in diabetes is a continuous, modifiable risk factor for clinically important outcomes and that reduction in glucose is the key to improving outcomes”. A fascinating perspective, considering the data provided little if any support for this conclusion.

A couple of articles published in reputable journals over the last few months make it seem as though the conclusions were written before the study was complete, because the interpretation of the study results simply did not match the researchers’ own data. In the Archives of Internal Medicine, Bohadana and colleagues describe a trial aimed at determining whether a combination of a nicotine inhaler and a nicotine patch was more effective than a nicotine inhaler alone for smoking cessation. The authors concluded that “treatment with the nicotine inhaler plus nicotine patch resulted in significantly higher cessation rates than inhaler plus placebo patch”. However, their data showed that while there were statistically significant differences between treatment arms at 1 and 3 months, there were no differences between treatments at 6 months and 1 year.

In an article published in the British Medical Journal, researchers compared the combination of candesartan and lisinopril to treatment with either agent alone in patients with diabetes and hypertension. The investigators state in their abstract that “reduction in urinary albumin: creatinine ratio with combination [candesartan plus lisinopril] was greater than with candesartan and lisinopril [as single agents]” and present p-values. However, the p-values in the abstract are for each group’s change from baseline, not for the comparison between the groups. In the body of the article, the data reveal that the difference in the urinary albumin:creatinine ratio between lisinopril and the combination was not significant (p>0.20). In other words, the authors did not show that the combination was more effective than lisinopril alone. The p-value for the difference between candesartan and the combination was 0.04; because numerous statistical tests were done and no correction for multiple testing was applied, a p-value of 0.04 is of questionable significance.
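Why multiple testing matters: at a conventional 0.05 threshold, the chance of at least one false-positive result grows with every additional comparison, so a lone p-value of 0.04 among many tests is weak evidence. A sketch, using hypothetical p-values (not the actual values from the study discussed above), of the simple Bonferroni correction:

```python
# Hypothetical p-values from five comparisons within one trial
# (illustrative only; not taken from the study discussed above).
p_values = [0.04, 0.21, 0.008, 0.35, 0.12]
alpha = 0.05

# If all null hypotheses were true, the chance of at least one
# "significant" result at the uncorrected threshold is already ~23%:
family_wise_error = 1 - (1 - alpha) ** len(p_values)

# Bonferroni correction: judge each test against alpha / number of tests.
threshold = alpha / len(p_values)  # 0.01

for p in p_values:
    verdict = "significant" if p < threshold else "not significant"
    print(f"p = {p:<5} -> {verdict} at corrected threshold {threshold:.2f}")
```

Note that the hypothetical p = 0.04 clears the naive 0.05 cutoff but not the corrected 0.01 threshold, which is exactly the concern with the 0.04 result reported in the trial above.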

So what is the take-home message?  The examples we have discussed are intended simply to illustrate that biases are well entrenched in published medical information and that, as health care professionals, we need to be aware of them and be skeptical consumers of health care information.  If you are thinking of changing your practice based on something you have read or heard, don’t dive in until you know how deep the water is.  Don’t rely on others to interpret the data for you.  Take the time to read the original research report yourself.  You’ll be surprised how often your interpretation of the clinical importance of the results differs from that of the investigators, the editorialists and even your colleagues.

Always remember that everyone with an interest in health care has a perspective and everyone with a perspective has an inherent bias.  That includes the government, the media, the drug industry, the universities, and even the authors of this editorial.  Look at the results for yourself and form your own conclusions.

James McCormack, PharmD, Peter J. Jewesson, PhD FCSHP
