I recently wrote about the frequent failures of the "peer-review" system for vetting scientific articles before their publication. The way it works is that researchers submit a manuscript to a journal, and the editors then send it to (usually unpaid) reviewers, or "peers," in the research community who anonymously offer an opinion on whether the article is of sufficiently high quality to be published.
But far too often, this system fails. Many articles that pass through it to publication should not have been accepted because they are methodologically flawed, contain fraudulently manipulated data, or make obviously absurd claims. An important subset of these is outright cheating with statistics, such as "data mining" – looking for correlations that are meaningless statistical artifacts and reporting them as though they were real.
I'm reminded of such statistical sleight of hand when I see TV ads for supplements that supposedly improve memory or sleep or relieve pain. Even if the manufacturers bother to do actual studies instead of just collecting individual anecdotes from a handful of compensated retired politicians or athletes, obtaining misleading "positive" results from the "testing" of a worthless nostrum is surprisingly easy.
First, you might enroll a hundred people in your study, and every day for a month, half of them take the Worthless Product ("WP" for short) and the other half get a placebo, an identical but empty (and equally worthless) capsule. At the end of the study period, you administer a memory test to the subjects and ask them about their pain, sleep habits, or whatever, depending on what the product is supposed to do.
If WP is, indeed, inactive and worthless, you should get essentially the same results from the treated and placebo groups. But because of the nature of statistics, the results may appear otherwise: if you keep repeating the hundred-subject trial, chance alone will sometimes produce a lopsided result in one direction or the other. It's like flipping a coin 1,000 times: Although a single flip has an equal chance of coming up heads or tails, and the totals will approach 500 each by the end, along the way there will be some long runs of heads or tails.
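You can see this effect in a quick simulation. The sketch below is illustrative, not from any actual trial: it assumes both groups "improve" with the same 50 percent probability (because WP does nothing) and arbitrarily calls a trial "lopsided" when the WP group beats the placebo group by at least 10 subjects.

```python
import random

def run_trial(rng, n_per_group=50, p_improve=0.5):
    """Simulate one trial of an inert product ("WP") vs. placebo.

    Both groups improve with the same probability, because the
    product does nothing. Returns (wp_improvers, placebo_improvers).
    """
    wp = sum(rng.random() < p_improve for _ in range(n_per_group))
    placebo = sum(rng.random() < p_improve for _ in range(n_per_group))
    return wp, placebo

def lopsided(wp, placebo, margin=10):
    """Call a trial 'lopsided' if WP beats placebo by `margin` or more subjects."""
    return wp - placebo >= margin

rng = random.Random(42)
n_trials = 10_000
hits = sum(lopsided(*run_trial(rng)) for _ in range(n_trials))
print(f"{hits} of {n_trials} trials of a do-nothing product look 'positive' by chance")
```

Roughly 2 to 3 percent of these trials come out "positive" purely by luck, so a marketer who quietly runs the study a few dozen times can all but count on getting at least one impressive-looking result.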
So, once you have a trial with a lopsided result in favor of the subjects who took the daily dose of WP, you've reached your goal: You can show that result and claim that WP has been "proven in clinical trials."
Not so fast. Legitimate clinical trials require statistical corrections for the number of iterations you do, but of course, the companies selling worthless products don't make those corrections or reveal their methodology.
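One standard (if conservative) version of the correction the author alludes to is the Bonferroni rule: if you run k trials, demand a per-trial significance threshold of alpha/k rather than alpha. The arithmetic below, assuming independent trials, shows why the uncorrected approach is so easy to abuse.

```python
# Family-wise false-positive risk when an inert product is tested repeatedly,
# assuming independent trials and a naive per-trial threshold of 5%.
alpha = 0.05
for k in (1, 5, 20):
    p_any_naive = 1 - (1 - alpha) ** k        # chance of >= 1 "positive" trial
    p_any_bonf = 1 - (1 - alpha / k) ** k     # Bonferroni: test each trial at alpha/k
    print(f"{k:>2} trials: naive {p_any_naive:.1%} vs. corrected {p_any_bonf:.1%}")
```

With 20 uncorrected trials, the odds of at least one spurious "positive" exceed 60 percent; with the correction, they stay near the advertised 5 percent. That is the gap the sellers of worthless products quietly exploit.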
There are a few lessons here.
First, don't believe drug (or other) endorsements from random, supposedly satisfied customers or, especially, compensated celebrities. There is an old saying in the medical community that the plural of anecdote is not data.
Second, consult your doctor about any supplements or other treatments you would like to take.
Third, government regulatory agencies, such as the Food and Drug Administration, Federal Trade Commission, and state consumer protection agencies, should crack down on companies making fraudulent claims about medical treatments.
Henry I. Miller, a physician and molecular biologist, is the Glenn Swogger Distinguished Fellow at the American Council on Science and Health. He was the founding director of the FDA's Office of Biotechnology.