Statistics - Using the Truth to Mislead

Dr. Pullen examines the importance of statistics in medicine, how their meaning can often be hard to interpret, and the need, perhaps, for more statistics courses in medical school.

This article originally appeared online at Dr.Pullen.com

My daughter is an accountant. She took a statistics class in high school and another as a requirement for her major. My son, an English Literature major, has also taken a statistics course. I was a chemistry major in college, have an MD, and have never taken a statistics course. I don’t even recall a lecture on statistics in medical school. Mark Twain quoted Disraeli as saying, “There are three kinds of lies: lies, damned lies and statistics.” Reading medical journal articles that report benefits, or the lack of them, in statistical terms can be really challenging. Reading a summary of these studies, or worse, listening to an interested party like a sales rep or sponsored speaker talk about a study, requires being a skeptic. Here are some examples of how true statistics can be worse than a lie, and how what would seem to be common sense does not pay off.

Relative risk reduction vs. absolute risk reduction: The benefit of an intervention, say a medication or a procedure, is usually reported as the desired outcome seen with the intervention compared to a baseline seen with a placebo or control group. It can be reported as relative risk reduction or absolute risk reduction. Relative risk reduction can seem very impressive, but it is key to look at absolute risk reduction as well. An example is the reported reduction in the risk of hip fractures with a bisphosphonate called Fosamax. A report in JAMA showed a reduction in hip fractures over 4 years of use in women with osteoporosis from 2.2% to 1.0%. This is an absolute reduction of 1.2% over 4 years, or 0.3% annually. The makers of Fosamax report this, accurately, as a 56% reduction in relative risk, which is true but misleading. A more helpful way to look at this is the NNT (number needed to treat), calculated as 100 divided by the absolute percent reduction. In this case 100/1.2 ≈ 83, so to prevent 1 hip fracture it would be necessary to treat about 83 women for 4 years. Sounds less impressive than a 56% reduction to me.
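To make these definitions concrete, here is a minimal sketch in Python (the function and variable names are my own, not from the JAMA report) that computes absolute risk reduction, relative risk reduction, and NNT from the fracture rates quoted above:

```python
def risk_reduction_stats(control_rate, treated_rate):
    """Compute ARR, RRR, and NNT from event rates given as fractions (0.022 = 2.2%)."""
    arr = control_rate - treated_rate  # absolute risk reduction
    rrr = arr / control_rate           # relative risk reduction
    nnt = 1 / arr                      # number needed to treat to prevent one event
    return arr, rrr, nnt

# Hip fracture rates over 4 years, per the JAMA report discussed above
arr, rrr, nnt = risk_reduction_stats(control_rate=0.022, treated_rate=0.010)

print(f"Absolute risk reduction: {arr:.1%}")   # 1.2% over 4 years
print(f"Relative risk reduction: {rrr:.0%}")   # ~55%, close to the marketed figure
print(f"Number needed to treat:  {nnt:.0f}")   # ~83 women treated for 4 years per fracture prevented
```

Note that both numbers come from the very same trial data; only the choice of denominator changes, which is exactly why a sales pitch quotes the relative figure.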

Graphs can also exaggerate benefits or risks: Many people are visual thinkers, and people who make graphs can manipulate them to prove their point. By changing the scale on the X or Y axis of a graph, the extent of a trend can be visually exaggerated or minimized. A good article discussing this is Does Graph Design Matter To CPAs And Financial Statement Readers? I couldn’t find a good reference discussing this in medicine. Apparently misleading readers about their money is more newsworthy than misleading them about their health.
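As an illustration of the axis trick (the numbers here are invented for demonstration, not taken from any study), consider how truncating the Y axis changes the apparent size of a difference. The fraction of the plot's height a bar occupies is (value − axis_min) / (axis_max − axis_min):

```python
def bar_height_fraction(value, axis_min, axis_max):
    """Fraction of the plot's vertical span a bar of this value fills."""
    return (value - axis_min) / (axis_max - axis_min)

# Two hypothetical outcome rates: 96% vs. 98%
honest = bar_height_fraction(98, 0, 100) / bar_height_fraction(96, 0, 100)
truncated = bar_height_fraction(98, 95, 100) / bar_height_fraction(96, 95, 100)

print(f"Full axis (0-100): taller bar is {honest:.2f}x the shorter")       # ~1.02x
print(f"Truncated axis (95-100): taller bar is {truncated:.2f}x the shorter")  # 3.00x
```

The underlying difference is identical in both cases; truncating the axis to 95–100 makes one bar look three times the other, turning a 2-point gap into a dramatic picture.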

Extrapolation of Data: Another problem with using evidence from studies is the lure of taking proven benefits in one circumstance and assuming the data must also apply to a somewhat different circumstance. An example: since treating a person who has had a stroke with a daily aspirin has been proven to significantly reduce the risk of another stroke, then taking an aspirin before ever having a stroke must be even better and prevent the first stroke. As logical as this seems, the data does not support this corollary to the proven theorem. See my prior post Aspirin — Should you take one a day? for a discussion of primary vs. secondary prevention, and the real and significant differences in where the data leads you.

For a nice series of articles using similar discussions of statistical lies and aberrations see Bittersweet Medicine by Dr Lemmon (love the blog site name given the author’s name) and his series on the most overrated treatments.

Maybe medical schools should require pre-med students to take a course in statistics and require one less course in the physical sciences.

Ed Pullen, MD, is a board-certified family physician practicing in Puyallup, WA. He blogs at DrPullen.com — A Medical Blog for the Informed Patient.