In today's media-saturated world, how can you tell whether what you're reading or seeing is good science or bad science?
The vast majority of people get their science news from online news articles, and rarely delve into the research those articles are based on.
Use these 12 simple hacks – which are closely tied to logical fallacies – to help you spot bad science quickly.
- Sensationalized Headlines:
Article headlines are commonly designed to entice readers into clicking through and reading on. At best, they over-simplify the findings of research. At worst, they sensationalize and misrepresent them.
- Misinterpreted Results:
News articles sometimes distort or misinterpret the findings of research for the sake of a good story, intentionally or otherwise. If possible, try to read the original research, rather than relying on the article based on it for information.
- Conflicts of Interest:
Many companies employ scientists to carry out and publish research – whilst this does not necessarily invalidate the research, it should be analyzed with this in mind. Research can also be misrepresented for personal or financial gain.
- Correlation and Causation:
Be wary of confusing correlation with causation. Correlation between two variables doesn't automatically mean one causes the other. Global temperatures have risen since the 1800s while pirate numbers have decreased, but the lack of pirates doesn't cause global warming.
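The pirate example can be made concrete with a quick simulation – a hypothetical sketch (not from the original article) in which two made-up series both follow the same underlying trend and so correlate strongly, despite neither causing the other:

```python
# Hypothetical sketch: two unrelated series that share a common trend
# (here, time) correlate strongly with no causal link between them.
import random

random.seed(42)
years = range(100)
# Both invented series follow the same upward trend plus independent noise.
temperature = [0.1 * t + random.gauss(0, 1) for t in years]
internet_users = [5.0 * t + random.gauss(0, 20) for t in years]

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

r = pearson(temperature, internet_users)
print(f"correlation: {r:.2f}")  # strong, yet neither series causes the other
```

The shared trend (the confounder) produces the correlation; a causal claim would need more than this.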
- Speculative Language:
Speculations from research are just that – speculation. Be on the lookout for words such as “may”, “could”, “might”, and others, as it is unlikely the research provides hard evidence for any conclusions they precede.
- Sample Size Too Small:
In trials, the smaller the sample size, the lower the confidence in the results from that sample. Conclusions drawn should be considered with this in mind, though in some cases small samples are unavoidable. It may be cause for suspicion if a large sample was possible but avoided.
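Why small samples inspire less confidence can be seen in how the margin of error of a sample mean shrinks with the square root of the sample size. A hypothetical sketch (the figures are illustrative, not from the article):

```python
# Hypothetical sketch: the ~95% margin of error of a sample mean
# scales with 1/sqrt(n), so small samples give wide, uncertain estimates.
import math

def margin_of_error(stdev, n, z=1.96):
    """Approximate 95% margin of error for a sample mean."""
    return z * stdev / math.sqrt(n)

# Illustrative: a measurement with standard deviation 15.
for n in (10, 100, 1000):
    print(f"n={n:4d}  margin of error: ±{margin_of_error(15, n):.2f}")
```

Going from 10 to 1000 subjects narrows the margin roughly tenfold – which is why conclusions from a handful of subjects deserve extra caution.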
- Unrepresentative Samples:
In human trials, researchers will try to select individuals that are representative of a larger population. If the sample is different from the population as a whole, then the conclusions may well also be different.
- No Control Group Is Used:
In clinical trials, results from test subjects should be compared to a “control group” not given the substance being tested. Groups should also be allocated randomly. In general experiments, a control test should be used where all variables are controlled.
- No Blind Testing Used:
To prevent any bias, subjects should not know if they are in the test or the control group. In double-blind testing, even the researchers don't know which group subjects are in until after testing. Note that blind testing is not always feasible, or ethical.
- “Cherry-Picked” Results:
This involves selecting data from experiments that support the conclusion of the research, whilst ignoring data that do not. If a research paper draws conclusions from a selection of its results, rather than all of them, it may be cherry-picking.
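Cherry-picking can manufacture an effect out of pure noise. In this hypothetical sketch (invented data, not from the article), a "treatment" with no real effect looks effective once only the favourable trials are reported:

```python
# Hypothetical sketch: reporting only favourable trials inflates the
# apparent effect of a treatment that, by construction, does nothing.
import random

random.seed(0)
# 20 simulated trial outcomes of a treatment with zero true effect.
trial_effects = [random.gauss(0, 1) for _ in range(20)]

all_mean = sum(trial_effects) / len(trial_effects)
cherry = [e for e in trial_effects if e > 0]  # keep only "positive" trials
cherry_mean = sum(cherry) / len(cherry)

print(f"mean over all trials:          {all_mean:+.2f}")
print(f"mean over cherry-picked trials: {cherry_mean:+.2f}")
```

The honest average hovers near zero, while the cherry-picked average suggests a benefit that does not exist.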
- Unreplicable Results:
Results should be replicable by independent research, and tested over a wide range of conditions (where possible) to ensure they are generalizable. Extraordinary claims require extraordinary evidence – that is, much more than one independent study!
- Journals and Citations:
Research published in major journals will have undergone a review process, but it can still be flawed, so it should still be evaluated with these points in mind. Similarly, large numbers of citations do not always indicate that research is highly regarded. After all, the citations themselves could point to bad science.
Citation: Interest, C. (2014). A Rough Guide to Spotting Bad Science. [online] Retrieved from: http://www.compoundchem.com/2014/04/02/a-rough-guide-to-spotting-bad-science/ [Accessed: 3 Apr 2014].