Friday, May 18, 2012

(Almost) all studies are B.S.

Since I've written here so many times about bad math and bad science and how people use it to confuse the broader public (and often themselves), I felt the need to share this piece from Science Daily that backs up a lot of what I've been saying.
The largest comprehensive analysis finds that clinical trials are falling short of producing the high-quality evidence needed to guide medical decision-making.
The analysis, published May 1 in the Journal of the American Medical Association, found that the majority of clinical trials are small, and that there are significant differences among methodological approaches, including randomization, blinding and the use of data monitoring committees.
"Our analysis raises questions about the best methods for generating evidence, as well as the capacity of the clinical trials enterprise to supply sufficient amounts of high quality evidence to ensure confidence in guideline recommendations," said Robert Califf, MD, first author of the paper, vice chancellor for clinical research at Duke University Medical Center, and director of the Duke Translational Medicine Institute.
People (and companies, and governments) these days are increasingly desperate for "data", but they generally don't have a clue how to interpret it--or how to handle the fact that most studies are, by their nature, "inconclusive". This Freakonomics post similarly pointed out another troubling trend: the increasing rate of research retractions (due to improper methodology, improper interpretation, or both).

Feeding off of that dynamic, journalistic outlets often find ways to massage or outright manipulate the "data" from these studies to suit their storytelling goals. There are, after all, a lot of ways to frame statistics so that they mislead--there's the basic crime of omission, where we leave out a lurking variable that actually renders our "conclusions" meaningless, but there are countless other ways to make study results seem more meaningful than they are, or to obscure the actual truth of the matter (I'll save a deeper discussion for another day).

As consumers of media of various types, we all need to be incredibly careful about how we read study results when they are published. We need to educate ourselves about how the studies were performed, determine what the statistics are actually telling us, and decide how we should interpret those statistics, rather than allowing a biased party to mislead us with a headline interpretation. When you read an article that says (for example) that drinking coffee triples the risk of dying from a lightning strike, ask yourself whether that makes any sense at all, or whether there could've been something else going on that explained the study results.
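The lurking-variable trap is easy to see in a toy simulation. In the sketch below (every name and number is hypothetical, invented purely for illustration), a hidden factor--say, time spent outdoors--drives both coffee consumption and lightning-strike risk. Coffee has no causal effect on lightning risk at all in this model, yet the two variables still come out strongly correlated, which is exactly the kind of result a careless headline would trumpet:

```python
import random

random.seed(0)

# Hypothetical lurking variable: time spent outdoors.
# It drives BOTH of the observed variables below.
n = 10_000
outdoors = [random.random() for _ in range(n)]

# Coffee intake and lightning-strike risk each depend only on
# "outdoors" plus independent noise -- no causal link between them.
coffee  = [o + random.gauss(0, 0.3) for o in outdoors]
strikes = [o + random.gauss(0, 0.3) for o in outdoors]

def corr(xs, ys):
    """Pearson correlation coefficient, computed from scratch."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# A naive analysis sees a strong positive correlation and might
# conclude "coffee causes lightning strikes" -- the omitted
# confounder is what actually explains the pattern.
print(f"correlation(coffee, strikes) = {corr(coffee, strikes):.2f}")
```

Controlling for the confounder (e.g., comparing only people with similar outdoor time) would make the coffee-lightning correlation largely vanish--which is precisely the check a headline writer has no incentive to perform.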

Please, people, don't take "studies" at face value. There are just too many ways to manipulate the results, and too many people with too great an incentive to manipulate them. Remember, scientists who continually return "inconclusive" results from their studies generally won't keep getting grants to perform further studies. It's to their benefit (and can often bring great fame) to report incredibly surprising and unexpected study results--but that doesn't mean their "results" are in any way correct.

Be careful out there. Defrauding the public can be an incredibly profitable business, if you allow it to be. Educate yourselves, and don't buy into the scams. That way you'll be that much more capable of identifying the things you really have to worry about in the world. Like the penguins.

[Science Daily]
