Understanding how science uses certain key concepts—selection bias, impact factor, statistical significance, etc.—can make the difference between parroting pure speculation and taking an evidence-based approach. From climate change to gun crime to office productivity, scientific studies are the stock-in-trade of modern truth.
Beware WEIRD college students.
Selection bias concerns who has been chosen as the subjects of an experiment. Polling organizations like Pew and Gallup use survey samples that reflect the nation’s population along important lines like age, gender, and ethnicity. But smaller organizations may recruit the most convenient participants available, e.g. undergraduate college students attending large research universities. Do you think you behave like a nineteen-year-old sophomore?
Similar objections have been raised about scientific findings from Western countries in general, resulting in the creation of the acronym W.E.I.R.D., which stands for Western, Educated, Industrialized, Rich, and Democratic.
An impressive-sounding journal title doesn’t mean a study is necessarily valid. With the growth of the scientific publishing industry, a cottage industry of for-profit journals has emerged that takes money in exchange for publication (see “conflict of interest”). Scientists have therefore created the impact-factor metric, which counts the number of times a journal’s papers have been cited in other papers, relative to the journal’s own volume of article output.
The tool, however, is not uncontroversial. One problem is that the impact factor can be skewed by a few major studies that are cited over and over again. Once a journal collects a few such studies, its importance can become exaggerated.
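The arithmetic behind the metric is simple. A minimal sketch, assuming the standard two-year citation window (the numbers below are hypothetical):

```python
def impact_factor(citations_this_year, articles_prev_two_years):
    """Citations received this year to articles the journal published
    in the previous two years, divided by the number of those articles."""
    return citations_this_year / articles_prev_two_years

# A journal that published 250 articles over the last two years and
# drew 1,000 citations to them this year:
print(impact_factor(1000, 250))  # → 4.0
```

Notice how a single blockbuster paper with hundreds of citations can inflate the numerator, which is exactly the skew described above.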
“Statistically significant” is a phrase often associated with a study overcoming the correlation/causation barrier, but the two ideas are not directly related. Statistical significance is measured by the p-value: the probability that results at least as extreme as a study’s would arise by chance alone.
“What’s considered a good p-value is arbitrary and can vary somewhat between scientific fields.”
Some p-values are smaller than others: generally, the lower the p-value, the less likely it is that the experiment’s results occurred randomly, i.e. without a specific and identifiable cause. One criticism of the p-value found that fish could be shown, in statistically significant fashion, to read the minds of humans.
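To make the idea concrete, here is a toy simulation with hypothetical numbers: suppose we flip a coin 100 times and see 60 heads. The p-value asks how often a fair coin, by chance alone, produces a result at least that extreme.

```python
import random

def p_value(observed_heads, flips=100, trials=20_000, seed=0):
    """Estimate the chance that a fair coin yields at least
    `observed_heads` heads in `flips` flips, by simulation."""
    rng = random.Random(seed)
    at_least_as_extreme = 0
    for _ in range(trials):
        heads = sum(rng.random() < 0.5 for _ in range(flips))
        if heads >= observed_heads:
            at_least_as_extreme += 1
    return at_least_as_extreme / trials

p = p_value(60)
print(p)  # roughly 0.03 — under the conventional 0.05 cutoff, so "significant"
```

Note what the low p-value does and does not say: chance alone rarely produces 60 heads, but the number tells you nothing about *why* the coin favors heads — which is exactly why significance is not causation.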
Science v. Religion.
As Sylvester James Gates, theoretical physicist at the University of Maryland, explains, science and religion do not speak to each other very well. At its best, science posits theories, purposefully never approaching the claims to absolute truth that dominate religion.
Read more at Vox