
How To Tell When Science Reporting Is Bullshit

Article written by Will Herrick
In the era of Facebook and the ‘Food Babe,’ scientific misinformation is all around us, and the unfortunate reality is that most non-scientists simply don’t have the knowledge or experience to adequately judge what the media reports about science and health. Add in a heavy dose of skepticism of the pharma and Ag industries and what we’ve got is a world where a former Playboy model, without any scientific credentials is put up against scientists and doctors as if she’s equally qualified – a dangerous symptom of false equivalency in reporting. We see this in the ongoing debates over GMOs, vaccines and autism, global climate change, and diet and health, and in the long run it’s going to be bad for all of us.

The good news is that you can learn how to tell if a science story is bullshit with a little bit of guidance and skepticism. What follows is a short guide to evaluating that next popular article you come across in your social media feeds. I’m not saying you’ll be able to know that an article is completely true, but it’s often pretty easy to tell if it’s total garbage. To be clear, this is not a guide specifically for determining if the actual science is good (there’s some of that too), but rather whether or not the reporting is sensationalized or inaccurate. I hope it will help you weed out a lot of the stupid shit out there.

#1: ALWAYS BE SKEPTICAL.

This is the most important thing, because people really like to believe whatever fits their preconceived notions – cognitive dissonance makes anyone uncomfortable. Skepticism matters because it's pretty rare for a single study to make some radical breakthrough or discovery, so if you see something that seems really sensational, it's probably been sensationalized, and the reality is much more mundane. Plus, there are indications that most published biomedical research findings are wrong. Of course, you probably won't find these flaws yourself when peer review so often fails spectacularly. You can, however, apply a few 'sniff tests' to decide whether what you just read is as significant as claimed:

[Image: '2 smart to be wrong' meme]

Where are you reading it? If the article was on Facebook and seems really insane, search for the study elsewhere and see what others are saying about it. It's possible that the entire thing is a hoax – especially on Facebook. If you're interested in finding objective information on a topic, don't consult websites with an obvious agenda, like the anti-vaccine crank blog "Age of Autism."

Does the source directly link to the study being written about, or provide a citation? If you suspect the source is biased (or unqualified) and it links to another questionable, non-primary source, get skeptical. For example, this article links to a source that is another lifestyle and entertainment 'news' site written by an unqualified 'lifestyle' writer, which in turn links to yet another news article before we ever get a citation for the actual study. Credible science reporting will always provide a direct link or citation.

Are the claims too good (or too horrible) to be true? The last article is a good example of something that is too good to be true (because it isn't), and you may also see articles that are too terrible to be true. For example, there's an infamous study from 2012 that reported that rats fed GMO corn developed numerous tumors. The supposed effects were so severe that they should have set off some BS detectors (we've been eating GMO crops since 1996 without a stark increase in cancers), and indeed the study was eventually discredited and retracted. The gist is that the authors used too few rats in each group to draw any conclusions, and the strain of rat used is already so prone to tumors that 80% are riddled with them by age 2 no matter what they eat. Worse, the authors prevented journalists from getting opinions from other researchers when publicizing their results pre-publication. Naturally, it is still cited by anti-GMO activists and has even been republished.
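The small-group problem is easy to see with a quick simulation. Here's a minimal sketch with invented parameters: the ~80% background tumor rate comes from the discussion above, while the group size of 10 and the 3-rat threshold are assumptions chosen purely for illustration.

```python
import random

random.seed(42)

BACKGROUND_TUMOR_RATE = 0.8   # strain's assumed background rate (see text)
RATS_PER_GROUP = 10           # assumed group size for illustration
TRIALS = 10_000

big_gaps = 0
for _ in range(TRIALS):
    # Both "control" and "treated" rats get tumors at the same background rate,
    # i.e. there is no real effect of treatment at all.
    control = sum(random.random() < BACKGROUND_TUMOR_RATE for _ in range(RATS_PER_GROUP))
    treated = sum(random.random() < BACKGROUND_TUMOR_RATE for _ in range(RATS_PER_GROUP))
    if abs(treated - control) >= 3:   # a gap big enough to look alarming
        big_gaps += 1

print(f"Groups differed by 3+ tumor-bearing rats in {100 * big_gaps / TRIALS:.0f}% "
      "of trials, despite zero real effect.")
```

With numbers like these, chance alone regularly produces group differences that would look dramatic in a headline – which is exactly why small, tumor-prone cohorts can't support strong conclusions.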


#2: CORRELATION ≠ CAUSATION

One of the worst mistakes made in science reporting is the conflation of correlation and causation. A correlation is merely an observation that when one phenomenon changes, another also changes, whereas causation is evidence of a specific cause that directly explains why the correlation exists. Establishing causation generally requires a randomized clinical trial, and unfortunately, sometimes a correlation is the best evidence we can get. This is really important, and even scientists still make this mistake all the time. A good scientist will always qualify their results, i.e. "this is only a correlation and further research is needed to prove causation." I've even seen research articles that were written with appropriate caution, but whose investigators acted like they'd proven causation when talking to the press.

A classic example: in the late 1940s, before a vaccine was available, researchers noticed an association between ice cream sales and the rate of new polio cases, and ice cream was recommended against as part of an 'anti-polio diet.' The truth is it was pure coincidence – ice cream sales and new polio cases both increased during the summer for unrelated reasons. Anti-vaxxers love to use the correlation-equals-causation fallacy to justify their unfounded, dangerous beliefs – don't fall for their shit.

[Image: 'catch the polios' meme]
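You can manufacture this kind of spurious correlation in a few lines of code. In this sketch every number is invented: a hidden 'summer' variable drives both series, and they end up strongly correlated even though neither affects the other.

```python
import math
import random

random.seed(0)

months = range(24)  # two years of monthly data
summer = [math.sin(2 * math.pi * m / 12) for m in months]  # hidden seasonal driver

ice_cream = [50 + 30 * s + random.gauss(0, 5) for s in summer]  # sales
polio = [10 + 8 * s + random.gauss(0, 2) for s in summer]       # new cases

def pearson(x, y):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"correlation = {pearson(ice_cream, polio):.2f}")  # roughly 0.9, yet no causation
```

A correlation of ~0.9 looks impressive, but the only honest conclusion is "something seasonal is going on" – which is exactly what the polio researchers eventually figured out.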

This has also plagued health and nutrition research, where epidemiology is used to find correlations between certain behaviors or foods and health outcomes. Epidemiology was originally developed to study infectious diseases, i.e. those caused by microbes and viruses. In my opinion, it is less than ideal that epidemiology is so heavily relied upon to study diseases of aging (cancer, heart disease), because those have no single specific cause. These studies also rely upon the truthfulness and memory of tens or hundreds of thousands of people, and people are notoriously shitty at recalling what they ate just last week. Make no mistake, I do believe these studies are important, but the results are too often misinterpreted or overblown by the media, and the public is left misinformed.


#3: UNDERSTAND EFFECT SIZE AND ‘SIGNIFICANCE’

There are two other big reasons that studies of the sort just mentioned are overblown in the media: the overemphasis on effect size, and misunderstanding of the statistical term 'significance.' Consider this story about a connection between breast cancer risk and eating red meat. The researchers found that eating 1.5 servings of red meat per day increased the risk of breast cancer by 22%, and by an additional 13% for every extra daily serving. This is one way of stating an effect size. However, over the 20-year study, 2,830 out of 89,000 women got breast cancer, which is ~3.2%, and a 22% increase in this rate is… 3.9% (or about 600 more cases if everyone ate 1.5 servings/day). If someone ate 2.5 servings per day, the increase in risk is 35%, which is an overall rate of 4.3% (~1,000 more cases than the base rate). I'm absolutely not saying a 4.3% rate of breast cancer is something to scoff at, but it definitely paints a very different picture than the effect size – and that's only if you believe it's possible for anyone to accurately recall what they ate for the last 4 years. There's another thing that, unfortunately, you cannot glean from the paper abstract: the number of breast cancer cases per 'person-year' is nearly identical in all red meat consumption groups. It might help to visualize the data:

[Chart: breast cancer cases per person-year by red meat consumption group]
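To make the arithmetic concrete, here's a minimal sketch of the relative-to-absolute risk conversion using the figures above (treating the extra 13% per serving as additive on top of the 22%, per my reading of the abstract):

```python
COHORT = 89_000        # women in the study
BASE_CASES = 2_830     # breast cancer cases over 20 years

base_rate = BASE_CASES / COHORT  # ~3.2%

# (servings/day, relative risk increase); +13% per extra serving, additive
for servings, rel_increase in [(1.5, 0.22), (2.5, 0.22 + 0.13)]:
    rate = base_rate * (1 + rel_increase)
    extra_cases = (rate - base_rate) * COHORT
    print(f"{servings} servings/day: +{rel_increase:.0%} relative risk -> "
          f"{rate:.1%} absolute rate (~{extra_cases:.0f} extra cases)")
```

Running this reproduces the numbers in the paragraph above: a scary-sounding "+22%" relative risk is a move from a 3.2% absolute rate to 3.9%.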

I would prefer to have more of the data for a more accurate analysis, but overall this is wholly unconvincing to me.

Another issue is the misunderstanding of the word 'significant' in research. When researchers find that a certain dietary factor 'significantly' affects the risk of cancer or another disease, they're referring to statistical significance. This has nothing to do with clinical significance, but the public sees 'significant' and assumes the effect must be large. Statistical significance is about the probability of finding the measured (or greater) difference between groups when there is actually no difference, usually expressed as a 'P-value.' If this is very low we can say "there is a low probability that the difference is due to chance, so it may be a real effect," and the result is 'significant.' However, this is all based on statistics that require many assumptions and depend on the quality of the data and the sample size, and even with perfect data a single study can't prove much. The standard in research is a P-value less than 0.05, or less than 5% – in other words, a difference that large would still show up about 1 time in 20 even if there were no real effect at all. And even then it's still only a correlation, and without a plausible mechanism to explain causation I would remain cautiously skeptical.
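A quick simulation shows what that 1-in-20 means in practice. This is a sketch, not any particular study: both groups are drawn from the same population, so every 'significant' result is a false positive, and by construction they turn up about 5% of the time.

```python
import math
import random

random.seed(1)

def two_sample_p(a, b):
    """Two-sided p-value for a difference in means (z-test approximation)."""
    na, nb = len(a), len(b)
    ma, mb = sum(a) / na, sum(b) / nb
    va = sum((x - ma) ** 2 for x in a) / (na - 1)
    vb = sum((x - mb) ** 2 for x in b) / (nb - 1)
    z = (ma - mb) / math.sqrt(va / na + vb / nb)
    # Phi(z) = 0.5 * (1 + erf(z / sqrt(2))); two-sided tail probability
    return 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))

TRIALS = 2_000
false_positives = 0
for _ in range(TRIALS):
    group_a = [random.gauss(0, 1) for _ in range(50)]
    group_b = [random.gauss(0, 1) for _ in range(50)]  # same population: no effect
    if two_sample_p(group_a, group_b) < 0.05:
        false_positives += 1

print(f"'Significant' (p < 0.05) in {100 * false_positives / TRIALS:.1f}% of trials "
      "despite zero real effect")
```

Now imagine thousands of labs each running one study like this: the 'significant' ones get press releases, and the null results quietly disappear.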


#4: DOSE SIZE MATTERS

This is one that really bugs me. It's such an obvious flaw in many animal studies that it's amazing these studies keep getting published. I'm referring to the use of extraordinarily high doses of certain compounds to 'prove' that they are toxic, or that they provide some health benefit. For instance, the bustle.com article in #1 describes a study which concluded that drinking a glass of red wine has health benefits equivalent to an hour of exercise at the gym… except the study says no such thing, and to get the dose of resveratrol (an antioxidant in red wine) given to the rats in the study, you would have to drink at least 850 glasses of red wine.

For another example, consider the artificial sweetener aspartame, which has been studied for decades and has never been consistently linked to any negative health effect in humans, yet is constantly targeted by conspiracy theorists who insist it is poisonous and causes brain cancer or multiple sclerosis. One study I've seen cited many times as evidence of aspartame's toxicity dosed rats with aspartame-laced water and found that it induced 'significant' liver toxicity compared to controls. This sounds scary, but you only need to read the abstract to realize it's rubbish: "[the] first group was given aspartame dissolved in water in a dose of 500 mg/kg b.wt.; the second group was given a dose of 1000 mg/kg b.wt…" – that is, 500 or 1,000 mg of aspartame per kilogram of body weight per day. That works out to a 'Human Equivalent Dose' (HED) of ~81 and ~162 mg/kg body weight. The acceptable daily intake (ADI) of aspartame is 50 mg/kg body weight, meaning they used 160% and 324% of the ADI, and the effects were still modest! A can of diet cola has ~110 mg of aspartame, so a 150 lb person would have to drink a ridiculous 30 cans of diet cola in a day just to reach the ADI, and 50-100 cans (or 9-18 two-liter bottles!) to reach the doses used in this study. For 180 days straight. How the hell is this supposed to be relevant to human health? Despite the absurdity of the doses used, the same group published yet another study with the same dosages just last year. That is some crazy bullshit.
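For the curious, here's the dose arithmetic as a short sketch. The ~6.2 rat-to-human conversion is the standard body-surface-area scaling factor for rats; the ADI and per-can figures come from the discussion above.

```python
LB_TO_KG = 0.4536
RAT_TO_HED = 6.2          # standard body-surface-area factor for rats
ADI_MG_PER_KG = 50        # acceptable daily intake of aspartame (mg/kg)
CAN_MG = 110              # aspartame per can of diet cola (mg)
BODY_KG = 150 * LB_TO_KG  # a 150 lb (~68 kg) person

for rat_dose in (500, 1000):
    hed = rat_dose / RAT_TO_HED      # human-equivalent dose, mg/kg/day
    cans = hed * BODY_KG / CAN_MG    # cans of diet cola per day to match it
    print(f"rat dose {rat_dose} mg/kg -> HED {hed:.0f} mg/kg "
          f"({hed / ADI_MG_PER_KG:.0%} of ADI, ~{cans:.0f} cans/day)")
```

Any time an animal study makes headlines, running this kind of back-of-the-envelope conversion is the fastest way to tell whether the dose has anything to do with real human exposure.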

[Image: aspartame dose equivalents]

CONCLUSION

You may read everything above and come away with the impression that I'm being overly cynical, or that we just shouldn't trust research at all. While the former may be partially true, I do actually believe that even flawed research can be important – as long as we are willing to acknowledge flaws and be careful not to draw unfounded conclusions. Despite the vocal squawking of anti-science fanatics, I do think the average person has trust in science. However, we must be careful not to make the opposite mistake and put too much faith in it – we should especially be skeptical of the way science is reported, and we must always be skeptical of unsubstantiated claims. We may never totally rid ourselves of homeopathy and other pseudoscientific nonsense, but if more people have even a basic grasp of what makes good science and push back against exaggeration, paranoia, and fear, the public as a whole will benefit. So go eat some delicious red meat; it [probably] won't kill you!

Author: Will H.

Bio: Will H. has a BS in Chemical & Biomolecular Engineering and a PhD in Chemical Engineering and has authored scientific works in Biomacromolecules and Cellular and Molecular Bioengineering.

10 thoughts on "How To Tell When Science Reporting Is Bullshit"

  1. I have a friend who is a biostatistician – reading and evaluating medical studies is her area of interest. I’m lucky to be able to run stuff by her if I have any questions about it!

  2. It's really all bullshit. I took a course in health research, and the GOLD standard in research is the randomized controlled trial. They sound so legit on the surface, but when you look under the hood, they are all mostly crap. There is bias in them from the get go. For one, in order to get your sample, you immediately are selective about who you select for your study. HELLO this is not randomized at all. You are only randomizing from the non-randomized group of people you selected. BULLSHIT.

    1. No, it is not ALL bullshit, that sounds like something an anti-vaxxer would say. Is it perfect? No. But it's the best we have right now, and is always trying to improve upon itself. One school course doesn't counter all of research.

    2. Randomized: I don’t think that word means what you think it means. Researchers have to be selective of the subjects included in a study because they are trying to control for other variables that may affect the outcome.

    3. You wouldn’t do a randomized clinical trial for a new cancer treatment on patients without cancer. You have to select for patients with cancer first.

    4. I took a course in electrical engineering when I was in high school (I was bored), and yet I don't know squat about how power stations are designed. I also did a Bachelor's, Master's and PhD in medical research, and I feel I must politely inform you that you don't know squat about randomised controlled trials.

  3. Typo: “heavily replied upon to study diseases of aging (cancer, heart disease) because there is no specific cause.”
    I think that should read “relied upon”?
    Great article

  4. Great post! Most of this applies to politics and the media as well. Nice work.

  5. My prof once said that under the most carefully controlled scientific conditions a Holstein cow will do whatever she pleases. People too, I guess. Great article. Many thanks.

  6. Perhaps a little oversimplified but would probably be enough to revolutionise the media’s interpretations of isolated scientific literature i.e. one paper’s findings does not equate to a fact(s). There’s a reason why a single thesis with a single research question still cites hundreds of papers just to draw together an acceptable answer 😉
