How do I know it’s OK? Swimming through the science communications minefield

How do I know it’s OK? This often goes through my mind when I’m writing about science. The mere fact that I’m a scientist doesn’t give me authority to write about science: my own research field was unbelievably narrow, and my PhD represents only a tiny fraction of even that field. As with all PhDs, for a short time, yes, I was the world expert in my tiny piece of the science kingdom, but no, that still does not make me an authority on science in general.

What it does mean is that 99.99% of the science writing I do needs research. Researching during my PhD was relatively easy: I knew the subject matter and had unlimited academic access to primary research journals. Nowadays my access is more limited, since most peer-reviewed publications sit behind a paywall. I rely on open-access information readily available on the Internet and the journals that my college library holds (yes, lifelong learning—that’s me!). In addition, since the topic is usually outside my area of expertise, I am often hunting through resources outside my science comfort zone.

Which brings me to my major dilemma: how do I know if what I’m reading is reliable? How can I tell if the science is good enough to share on Talk Science to Me’s social media channels? Is the experimental design robust? Are the inferences supported? Does the news come from a genuine source? Am I propagating rubbish?

There are a lot of clickbait-worthy health and science headlines floating around out there, easily spread in just a few clicks. Everyone wants to know the secret to curing cancer or prolonging life, or whether it’s all just down to bad luck—such news is viral. But why?

In December, researchers published an investigation into the source of clickbait: are scientists themselves promising the Moon in research papers? Or are overzealous academic public relations departments writing up fantastical press releases? Or maybe journalists themselves are to blame, rushed for deadlines and churning out eye-catching headlines?

Their conclusion? Compared to the primary sources, more than a third of university-issued press releases made exaggerated claims regarding the science they reported. The releases gave explicit advice, implied causation from correlational studies and over-inferred that results from animal studies applied to humans. And with the hype come the clickbait headlines and the social media whirlwind. (Note: it’s not always the press release that’s at fault…)

In a perfect world, anyone creating or sharing stories about science would go back to the primary source and investigate for themselves. Tips and tools like Carl Sagan’s Baloney Detection Kit (discussed here by Maria Popova on BrainPickings) or some of the resources mentioned in earlier posts here and here also help.

For me, the answer is to read more, read critically and read with an eye to quality in science writing. Checking in with trusted sources, throwing out the occasional “What does this mean?” on social media, and reading as much commentary as I can find slowly builds confidence. But it does take more time than the few seconds it takes to RT a juicy tweet.

Hmmm—maybe I should be tuning in to the UK’s NHS Behind the Headlines Twitter account for some truly excellent takedowns of clickbait headlines before I retweet.
