It is extremely common for me to run into a person who has a “study” that says X or Y. Proposition X or Y can be dubious for any number of reasons, or even be a priori impossible, yet we see government paying people to study the topic and publish the results (one laughable example is the claim that raising the minimum wage does not create unemployment – and in fact may increase employment. Yes, they actually paid people to come up with a blatant contradiction of the reality of scarcity, but I digress…).
This post’s topic is BEWARE(!) of believing what is in studies. All of us know of at least some research that has been fraudulently conducted. The infamous hockey stick graph used to prove global warming, along with the Climategate emails that followed, shows the self-affirming bias that riddles scientific study today. Stephen Jay Gould has a book, The Mismeasure of Man, that broaches this very topic (and the thesis of the book is quite leftist to begin with, which is notable, as most studies you will find today are leftist/pro-interventionism…).
There is actually a famous paper by John Ioannidis called “Why Most Published Research Findings Are False” which examines the phenomenon of bias and error in studies, and argues that more than half of all published findings are likely wrong. I think this is one study we can take at its word…
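To make Ioannidis’s core argument concrete: the chance that a “statistically significant” finding is actually true depends on the prior odds that the tested hypothesis was true at all, not just on the significance threshold. A rough sketch of his positive-predictive-value formula (the numbers below are illustrative, not taken from the paper):

```python
def ppv(prior_odds, alpha=0.05, power=0.8):
    """Positive predictive value of a significant result.

    prior_odds -- R, the ratio of true to false hypotheses being tested
    alpha      -- significance level (type I error rate)
    power      -- 1 - beta (chance of detecting a true effect)
    PPV = (1 - beta) * R / ((1 - beta) * R + alpha)
    """
    return (power * prior_odds) / (power * prior_odds + alpha)

# If only 1 in 11 tested hypotheses is true (R = 0.1), a "significant"
# result is correct only about 62% of the time:
print(round(ppv(0.1), 3))   # 0.615
# In a speculative field where R = 0.01, it drops to ~14%:
print(round(ppv(0.01), 3))  # 0.138
```

In other words, in fields where researchers test long-shot hypotheses, most published positives can be false even when everyone follows the statistics honestly.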
The first major theme of why most research findings are erroneous or false is statistics. Even statisticians commonly make errors concerning statistical analysis and chance in their daily lives, so why would we expect a psychology Ph.D. who has taken one or two statistics classes to be able to analyze and fully interpret wide sets of data? Further, in “scientific” studies statistical data is often misread, erroneously input, or selectively edited to correspond with the biases of the researchers involved. How you frame statistical data can very often lead to radically different interpretations of the exact same underlying numbers. Further, the statistical model being used may not even be appropriate for the study at hand – and the researcher would be none the wiser (for an interesting discussion of this in relation to Gaussian bell curves versus Mandelbrotian, or fractal, randomness, pick up the book The Black Swan). All of the above also assumes that you have honest researchers who aren’t manipulating the data to reach a conclusion they believe is warranted.
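To see how easily chance alone manufactures “findings”: if a researcher tests many hypotheses against the same data at the usual 5% significance level, the odds that at least one comes up “significant” by pure luck grow quickly. A minimal calculation (assuming independent tests, which is a simplification):

```python
def family_wise_error(num_tests, alpha=0.05):
    """Chance of at least one false positive among num_tests
    independent null-hypothesis tests at significance level alpha."""
    return 1 - (1 - alpha) ** num_tests

print(round(family_wise_error(1), 4))   # 0.05   -- one test: 5% chance
print(round(family_wise_error(20), 4))  # 0.6415 -- twenty tests: ~64% chance
```

Run twenty comparisons and you are more likely than not to find a publishable “effect” in pure noise – which is why selective reporting is so corrosive.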
Which brings us to bias. In a climate where the more outrageous or “boundary-pushing” a study is, the more money the program is given, of course there will be theses and conclusions tailored to an agenda (and the agenda is generally interventionism of some kind). This is well-documented, and a great example of how this “push-the-limits” nature of modern research flourishes is the Sokal Affair, in which the physicist Alan Sokal submitted a deliberately nonsensical paper (without having done any research) to the cultural-studies journal Social Text, betting that the editors would accept it merely because its conclusions flattered their ideological leanings. He was right, and the implications are resounding for sociological and psychological studies (cross-sectional & longitudinal sociological studies, especially). Universities are the research engines of today, and where politically leftist research is given much more money and credence in the public and intellectual sphere, the game may be rigged at the start. Even the question the researcher poses may beg to be answered a certain way.
Cognitive biases, fallacies (non sequiturs being the most common), and perception errors are themselves often fatal to a researcher’s “well-founded” theories, especially where grant money is involved. U.S. researchers are also among the most frequently implicated in faked research, given the high financial stakes of federal funding. Upton Sinclair’s saying that “it is difficult to get a man to understand something when his salary depends upon his not understanding it” is completely true. Try to tell an organic farmer that GMO crops are better for humanity and see where that gets you…
But again, an expert need not be lying to have erroneous or biased conclusions. The Rosenhan experiment, conducted by David Rosenhan in the early 1970s, tested his suspicion that after feigning symptoms to gain admittance to a psychiatric ward, reverting to normal behavior would not convince the ward’s staff that a pseudopatient was sane. He was right. Experts (this is generally true of “experts” in the social sciences – economics, sociology, psychology, political science, etc.; claims in the hard sciences are generally beyond most people’s ability to evaluate) are often worse than the average person at prediction and sometimes explanation. This research has been repeated several times, with the same results – the experts are not as good as they think they are. Often, the variables the researcher wishes to test cannot be isolated, and the causation that is inferred is mere correlation or even statistical anomaly. Very few measurable factors cause, or are caused by, other factors on a 1-to-1 basis. The world is far too complicated for people to consistently be correct when such assumptions are made. See also Tversky and Kahneman’s “Judgment under Uncertainty: Heuristics and Biases,” or “why we suck at perception outside of our own biases and slants.”
The moral of the story? The “experts” who craft politically biased studies are not correct as often as you might believe (and when they are, it may be accidental). Examine any claim backed by a study very carefully, because the conclusions may contradict reality or be entirely tailored to an agenda.