Issues Magazine

Scrutinising Science in the Media

By Edward Sykes

Journalists need to be as critical of science as scientists are.

Scientific issues permeate every aspect of our lives, yet we are often left bewildered by the way they are portrayed in the news, with climate scientists accused of alarming people for their own ends and medical experts extolling the dangers of alcohol one day and its benefits the next.

In the present economic climate, the journalists who report these stories are under more pressure than ever before to produce more articles with fewer resources. The changing face of journalism, a far cry from the investigative ideal, is a deeply worrying development where the voice that gets heard is not the most authoritative but simply the one that shouts the loudest.

Scientists may be in broad agreement about a topic, but they are always arguing the finer points and poring over the evidence. Public scrutiny goes with the territory in science, and it is time for journalists to apply the same scrutiny.

The majority of people absorb their scientific knowledge from the media, and judging by the headlines they could be forgiven for thinking that every week brings yet another scientific breakthrough, even though the reality is quite different. Science is a continually oozing lava-flow of progress, and it is actually quite rare to have an eruption of knowledge that completely alters the landscape. The slow pace of scientific discoveries, which can easily take 10 or 20 years to unfold, is sharply at odds with the instantaneous speed of the media.

The difference in speeds creates problems for scientists, journalists and most definitely the public, who are left confused by tales of the latest cures and causes of cancer. In reality, each piece of research is normally only a snapshot of what one scientist is working on rather than the end of the story for that field of science. I am certainly not going to defend all the sensationalist pieces that you see in the news (I throw my remote control at the TV with the best of them!) and I will show why we need to be critical of news stories, but first I want to highlight why journalists need to be more critical of science.

Most people understand that a story about Kate Middleton’s wedding dress receives more media coverage than a change to tax law not because celebrity fashion choices are more important to the country but because journalists are trying to grab your attention and the best way to do that is by telling people about things that already interest them. Editors choose a story because it has a picture of an attractive person or a cute animal; headline writers use provocative and emotive words to make you click on their article; and advertisers pay thousands of dollars to slap their slogans by the side in the hope you’ll remember their products.

I doubt any of this will come as a shock, but it might surprise you that science, the highly respected bastion of knowledge, has its own problems of hype, spin and vested interests too – so if you really want to know how important a story is then you need to be as critical as the scientists are themselves.

Every researcher strives to be published in a scientific journal, and every journal competes to be the most well-read and influential voice, just as every newspaper does in the mainstream media. There are thousands of scientific journals and they cover every topic you can think of, ranging from the Journal of Shellfish Research and Solid Waste and Power to the worryingly named Journal of Trauma. Just as with broadsheets and tabloids, journals generally make their money through sales or from selling advertising space, and they range from the highly professional and well-respected to the … less impressive.

The journals need scientists to provide them with the best and most exciting research, while the scientists need the journals to publish their research. The entire process relies upon a form of self-policing where scientists comment on each other’s work in order to determine whether it’s worthy of being published. The system is called “peer review”, but it’s certainly far from perfect.

Peer review gives scientific research a seal of approval from the scientific community – an independent assessment of the quality of the science on display – but it is still a heavily flawed system. Although most researchers say the main issue is that the peer review process sometimes rejects deserving science, I want to talk about how it can incorrectly deem research worthy enough to publish.

Over the years there have been examples of sloppy, and even fraudulent, peer-reviewed research being published, which is later followed by a series of retractions that never make as much news as the original research did in the first place. So what could be going wrong?

First, the editors will invite independent reviewers to consider the research, but on occasion a reviewer may not be fully qualified, particularly in the necessary statistics, to spot a mistake. I myself reviewed a number of articles while still a PhD student, and I certainly wasn’t ever a very good statistician! What’s more, even suitably qualified scientists are not paid for their time and so, with busy schedules, may not devote all the time required.

Second, it is very rare for the reviewer or editor to see the raw data that the science is based on. Instead, the scientists submitting the work will have chosen what data to put forward and how to present it, meaning that mistakes can slip through the net. A thorough reviewer can ask for extra data to be presented, and outright fraud is thankfully rare, but it often takes a keen eye to spot something that undermines the findings.

Third, the editor has the final say on what goes into the journal. Even if the reviewers dismiss the research, the editor can still publish it. If it is from a prominent scientist or on a sexy topic that may shift a few extra copies of the journal, there is a financial pressure to include the research regardless.

The top journals are now part of a huge industry, so once publishing is finalised it is time to spread the word about the research in an effort to get people to buy the journal.

Journalists are often accused of hyping their stories, but they are not the only ones who do it. In fact, journalists have in turn accused journals and universities of hyping research themselves. The job of writing media releases normally falls to a media manager, many of whom do not have a scientific background and are under extreme pressure to get their organisation into the limelight. As ever, there are some excellent ones who take every care not to hype or sensationalise, but there are others who use misleading titles, inflated claims and inaccurate statements. How is a journalist to know which media managers understand the science they are writing about?

Journalists’ inboxes are flooded with media releases every day; the releases are a very successful way of getting attention. In fact, a huge proportion of stories that appear in the news are re-hashed media releases, or even verbatim copies reproduced by time-stressed journalists who have to fill pages in the papers along with blogs and even audio and video content to go online. There is even a name for it – “churnalism”, coined by the journalist Nick Davies in his book Flat Earth News.

So how can journalists sift through the dross that’s fired at them each day? The following are rough rules of thumb that anyone can use to assess the veracity and importance of a piece of research.

Journal quality is a major issue. There are always exceptions, but generally the more prestigious the journal, the more robust and more significant the results, the more qualified the reviewer and the less hyped the press release. There is no golden rule to assessing journal quality, but a good starting point is to look at impact factor, a value devised by the Institute for Scientific Information (ISI) and given to each journal, which gives an idea of how often its articles are cited by other scientists.
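
There is a simple formula behind that headline number. As a worked illustration, the sketch below computes a standard two-year impact factor from invented figures – no real journal is being described – just to show the arithmetic of citations divided by citable items.

    # A minimal sketch of the standard two-year impact factor calculation,
    # using invented numbers rather than any real journal's figures.
    citations_in_2023 = 1200    # citations in 2023 to items published in 2021-22
    citable_items = 400         # articles and reviews published in 2021-22

    impact_factor = citations_in_2023 / citable_items
    print(f"2023 impact factor: {impact_factor:.1f}")  # -> 3.0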

Check that the published paper backs the claims of the media release. A quick scan of the abstract is a good start; then check whether the data in the results table match the figures quoted in the abstract.

How big was the study? Generally, the more things that are counted or tested, the more confident you can be of the results. A controversial study on the MMR (measles, mumps, rubella) vaccine published in the prestigious medical journal The Lancet led to a huge fall in vaccine uptake and an ensuing rise in measles, but this should never have received as much attention as it did, or arguably even been published, as it only studied 12 children.
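
The statistical reason is that the margin of error shrinks as the sample grows. The following is a minimal sketch, assuming the textbook normal approximation for a proportion; the sample sizes are purely illustrative.

    import math

    # Rough 95% margin of error for an observed proportion p in a sample of n,
    # using the normal approximation (an illustration, not a full analysis).
    def margin_of_error(p: float, n: int) -> float:
        return 1.96 * math.sqrt(p * (1 - p) / n)

    for n in (12, 120, 1200):
        print(f"n = {n:4d}: +/- {margin_of_error(0.5, n):.0%}")
    # n = 12 gives roughly +/- 28%; n = 1200 gives roughly +/- 3%.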

Check the statistics. It may sound daunting, but a quick glance at the main result can often tell you a lot. Scientists use something called a p-value to express how likely it is that a result as striking as theirs would crop up by chance alone. The bigger the number (up to 1), the more likely the result is down to chance. The cut-off is normally 0.05, which means there was a one in 20 chance (5%) of the result arising purely by chance. Many studies report p-values of less than 0.001.
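
For the curious, the calculation itself is routine. This is a minimal sketch, assuming two invented groups of blood pressure readings and using SciPy’s standard two-sample t-test; the numbers are made up for illustration.

    from scipy import stats

    # Hypothetical readings for a treated group and a control group.
    treated = [118, 121, 119, 115, 122, 117, 120, 116]
    control = [125, 128, 124, 130, 127, 126, 129, 123]

    t_stat, p_value = stats.ttest_ind(treated, control)
    print(f"p-value: {p_value:.4f}")
    # A p-value below the usual 0.05 cut-off says a difference this large would
    # rarely arise by chance alone; it does not by itself prove the treatment works.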

Distinguish between correlation and causation. Obese people may spend more time using Facebook (correlation), but that doesn’t mean that using Facebook makes you overweight (causation). Lots of studies use one measurement as a stand-in for something else because it is easier to establish, such as finger length being used as an indicator of the level of testosterone in the womb when you were an embryo. High testosterone, converted in the body to another hormone, can lead to baldness. It doesn’t mean that if you cut off your fingers your hair would grow back.
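
A hidden third factor, or confounder, is often the culprit, and a toy simulation makes the point. In the sketch below – pure invention, no real data – a sedentary lifestyle drives both Facebook hours and weight gain, so the two correlate strongly even though neither causes the other.

    import random

    # Toy confounder simulation: "sedentary" drives both measured variables.
    random.seed(1)
    sedentary = [random.random() for _ in range(1000)]
    facebook_hours = [3 * s + random.gauss(0, 0.3) for s in sedentary]
    weight_gain = [5 * s + random.gauss(0, 0.5) for s in sedentary]

    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = sum((a - mx) ** 2 for a in x) ** 0.5
        sy = sum((b - my) ** 2 for b in y) ** 0.5
        return cov / (sx * sy)

    # Strong correlation, yet neither variable causes the other.
    print(round(pearson(facebook_hours, weight_gain), 2))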

Is it worth the risk? If I told you that eating a bacon sandwich each day gave you a 20% greater relative risk of getting bowel cancer then you might be a little concerned and avoid it. If I told you that the chance of getting bowel cancer was originally 5% and eating the sandwiches every day had raised it by 20% to 6% then perhaps you might decide that the absolute risk was worth it.
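
The arithmetic is worth spelling out, and the sketch below simply re-runs the bacon sandwich numbers from the paragraph above.

    # Relative vs absolute risk, using the figures from the text.
    baseline_risk = 0.05        # 5% chance of bowel cancer to begin with
    relative_increase = 0.20    # the "20% greater risk" in the headline

    new_risk = baseline_risk * (1 + relative_increase)
    print(f"absolute risk rises from {baseline_risk:.0%} to {new_risk:.0%}")
    # -> from 5% to 6%: one extra case per hundred people, not twenty.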

Similarly, risk should always be viewed in context. People often worry about the risk of flying, but really they have much more to worry about just driving to the airport.

Questionnaires can be a very useful way for scientists to get a broad picture of past events, but they can also be incredibly subjective and misleading. Trying to remember what you fed the kids last week can be hard enough, let alone estimating the number of apples you gave them to eat a few years ago.

Once a journalist has decided to run with a story it is often tempting to follow the maxim of “balance” by getting two opposing views. However, although the “he said/she said” technique might work for politics, it is just not subtle enough for science. After a debate in parliament, a political journalist will interview two opposing politicians; after a football match, a sports reporter will interview the managers of both teams. Yet after a piece of research is announced, or a scientific issue is being discussed, it is rare to hear from two scientists who can discuss where the uncertainties of climate change or the benefits of stem cell technology lie. Instead we get an interview between a scientist who accepts the evidence for climate change or supports stem cell research and a non-scientist who does not think climate change is a problem or that stem cells are morally acceptable. General news journalists often tell scientists that science as a news story should be treated no differently from anything else, yet in practice it often is treated differently.

Every news story has to be treated with some scepticism as the viewer wonders what information has been omitted and what the journalists have not been told. Science is no different, and anyone should be able to follow the paper trail back to the original piece of research to see how well the piece stands up. But with the ocean of news that bombards us every day there is a greater reliance on trusted detectives to sift through the dross.

The misinformation that is spun around scientific issues is not only confusing but can have serious consequences, as people are dissuaded from vaccinating their children or climate change is dismissed as hot air. There is no guaranteed way of checking a paper and its media release unless you are an expert in the field, but journalists, or any non-scientist, can make a very good attempt.

Science is not a special case – the critical eye that appraises every potential story needs to be focused not just on the science in the headlines, but also on the media releases sent by universities and journals and even on the research articles themselves.

No one is ever more critical of a piece of science than another scientist, but we can all certainly try.

The opinions expressed here are the personal opinions of the author.