What eight years of writing the Bad Science column have taught me | Ben Goldacre
Pulling bad science apart is the best teaching gimmick I know for explaining how good science works
Alternative therapists don't kill many people, but they do make a great teaching tool for the basics of evidence-based medicine, because their efforts to distort science are so extreme. When they pervert the activities of people who should know better – medicines regulators, or universities – it throws the role of science and evidence in culture into sharp relief. Characters from this community who wonder why people keep writing about them should look at their libel cases and their awesomely bad behaviour under fire. You are a comedy factory. Don't go changing.
Next: the real story of how the world works is much weirder than anything a quack can make up. The placebo effect is maddening, the nocebo effect more so, but the research on how we make decisions, and are misled by heuristics and mental shortcuts, is the wildest of all. Knowing about these belief-hacks gives you thrills, and power.
Pharmaceutical companies can behave dismally. Most important, they still won't publish all the results of all the clinical trials conducted on humans. This is indefensible, and because we tolerate it, we don't know the true effect sizes of the medicines that we give. This absurd situation mocks the whole of medicine: we need legislation to fix it, and popular movements to drive that. I'll join yours.
Journalists can mislead the public about the answers evidence-based medicine gives, which is bad. But they also mislead us about its methods and techniques. We live in a new era of doctors and patients – at our best – making decisions together. For that collaboration to work, everyone needs to understand how we know if something is good for us, or bad for us. The basics of evidence-based medicine – trials, meta-analyses, cohort studies and the like – should be taught in schools and waiting rooms. It's interesting, but it's also life and death: people care about it.
Politicians misuse evidence, and distort it to shameful degrees. But more than that, there are endless cases where we could do randomised trials on policies – old and new – to find out if they achieve the outcomes they're aiming for. There is no honourable excuse for failing to use the fairest tests we can design.
Real scientists can behave as badly as anyone else. Science isn't about authority, or white coats, it's about following a method. That method is built on core principles: precision and transparency; being clear about your methods; being honest about your results; and drawing a clear line between the results, on the one hand, and your judgment calls about how those results support a hypothesis, on the other. Anyone blurring these lines is iffy.
Conflict of interest stories – where someone has a vested interest in the results of their study – are important, because they tell you when there's a risk that something's wrong in a piece of science. But this is only motive: the gruesome, fascinating mechanism of a crime against science – the methodological flaws – that's where the action is. People who don't really understand science can only critique it in terms of motive. Let them have that; we'll do the details.
Last, nerds are more powerful than we know. Changing mainstream media will be hard, but you can help create parallel options. More academics should blog, post videos, post audio, post lectures, offer articles and more. You'll enjoy it. For my part, I've had threats and blackmail, abuse, smears and formal complaints with forged documentation.
But it's worth it, for one simple reason: pulling bad science apart is the best teaching gimmick I know for explaining how good science works. I'm not a policeman, and I've never set out to produce a long list of what's right and what's wrong. For me, things have to be interestingly wrong, and the methods are all that matter.
DIY statistical analysis: experience the thrill of touching real data | Ben Goldacre
The story of one man's efforts to re-analyse the stats behind a BBC report on bowel cancer is a heartwarmingly nerdy one
The BBC has found a story: "'Threefold variation' in UK bowel cancer rates". The average death rate across the UK from bowel cancer is 17.9 per 100,000 people, but in some places it's as low as 9, and in some places it's as high as 30. What can be causing this?
Journalists tend to find imaginary patterns in statistical noise, which we've covered many times before. But this case is particularly silly, as you will see, and it has a heartwarming, nerdy twist.
Paul Barden is a quantitative analyst. He saw the story, and decided to download the data and analyse it himself. The claims come from a press release by the charity Beating Bowel Cancer: they've built a map where you can find your local authority mortality rate and get worried, or reassured. Using a "scraping" program, Barden brought up the page for each area in turn, and downloaded the figures. By doing this, he could make a spreadsheet showing the death rate in each region, and its population (the scraping step is sketched below). From here things get slightly complicated, but very rewarding.
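Barden's own code isn't reproduced here, but the approach was roughly this – a minimal sketch in Python, where the URL pattern, the page slugs and the element ids are hypothetical stand-ins rather than the charity site's real layout:

    import csv
    import requests
    from bs4 import BeautifulSoup

    AREAS = ["barnet", "bexley"]  # hypothetical page slugs, one per local authority

    rows = []
    for area in AREAS:
        # fetch the (hypothetical) page for this area and parse it
        html = requests.get(f"https://example.org/bowel-cancer-map/{area}").text
        soup = BeautifulSoup(html, "html.parser")
        # hypothetical element ids -- substitute whatever the real pages use
        rate = float(soup.find(id="mortality-rate").get_text())
        population = int(soup.find(id="population").get_text().replace(",", ""))
        rows.append((area, rate, population))

    # one spreadsheet row per region: death rate and population
    with open("bowel_cancer_by_area.csv", "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["area", "death_rate_per_100k", "population"])
        writer.writerows(rows)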
We know that there will be random variation around the average mortality rate, and also that this will be different in different regions: local authorities with larger populations will have less random variation than areas with smaller populations, because the variation from chance events gets evened out more when there are more people.
You can show this formally. The random variation for this kind of mortality rate will follow the Poisson distribution (a bit like the bell-shaped curve you'll be familiar with). This bell-shaped curve gets narrower – less random variation – for areas with a large population.
So Barden ran a series of simulations in Excel, where he took the UK average bowel cancer mortality rate and a series of typical population sizes, and then used the Poisson distribution to generate figures for the bowel cancer death rate that varied with the randomness you would expect from chance.
This random variation predicted by the Poisson distribution – before you even look at the real variations between areas – shows that you would expect some areas to have a death rate of seven, and some areas to have a death rate of 32. So it turns out that the real UK variation, from nine to 31, may actually be less than you'd expect from chance.
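Barden used Excel, but you don't need it: the same simulation is a few lines in Python. Here is a rough equivalent (the population sizes are illustrative; the 17.9 average is the figure from the BBC story):

    import numpy as np

    rng = np.random.default_rng(0)
    uk_rate = 17.9  # UK average bowel cancer deaths per 100,000

    for population in (50_000, 150_000, 500_000, 1_500_000):  # illustrative sizes
        expected_deaths = uk_rate * population / 100_000
        # simulate 10,000 areas of this size, each with Poisson-random death counts
        deaths = rng.poisson(expected_deaths, size=10_000)
        rates = deaths / population * 100_000
        lo, hi = np.percentile(rates, [2.5, 97.5])
        print(f"population {population:>9,}: 95% of chance-only rates "
              f"fall between {lo:.1f} and {hi:.1f}")

Run it, and you'll see the spread of rates from chance alone is visibly wider for the smaller populations – which is exactly the point.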
Then Barden sent his blog to David Spiegelhalter, a professor of statistics at Cambridge, who runs the excellent website "Understanding Uncertainty". Spiegelhalter suggested Barden could present the real cancer figures as a funnel plot, and that's what you see above.
I cannot begin to tell you how happy it makes me that Spiegelhalter, author of "Funnel plots for comparing institutional performance" – the citation classic from 2005 – can be reached by a random blogger online, and will then collaborate to make an informative graph of some data that's been over-interpreted by the BBC.
But back to the picture. Each dot is a local authority. The dots higher up show areas with more deaths. The dots further to the right show ones with larger populations. As you can see, areas with larger populations are more tightly clustered around the UK average death rate, because there's less random variation in bigger populations. Lastly, the dotted lines show you the amount of random variation you expect to see, from the Poisson distribution, and there are very few outliers (well, one main one, really).
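If you'd like to draw a funnel plot of your own, the recipe is short. Here is a sketch in Python, with toy figures standing in for the real scraped data, and 95% control limits taken from the Poisson distribution around the UK average:

    import numpy as np
    import matplotlib.pyplot as plt
    from scipy.stats import poisson

    uk_rate = 17.9  # deaths per 100,000

    # toy figures standing in for each area's real (population, rate) pair
    populations = np.array([60_000, 120_000, 250_000, 500_000, 1_000_000])
    rates = np.array([25.0, 21.0, 14.5, 18.2, 17.0])

    # control limits: expected deaths grow with population, so the limits
    # narrow -- the "funnel" -- as population increases
    pop_grid = np.linspace(populations.min(), populations.max(), 200)
    expected = uk_rate * pop_grid / 100_000
    lower = poisson.ppf(0.025, expected) / pop_grid * 100_000
    upper = poisson.ppf(0.975, expected) / pop_grid * 100_000

    plt.scatter(populations, rates, label="local authorities")
    plt.axhline(uk_rate, color="grey", label="UK average")
    plt.plot(pop_grid, lower, "k--", label="95% Poisson limits")
    plt.plot(pop_grid, upper, "k--")
    plt.xlabel("population")
    plt.ylabel("deaths per 100,000")
    plt.legend()
    plt.show()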
Excitingly, you can also do this yourself online. The Public Health Observatories provide several neat tools for analysing data, and one will draw a funnel plot for you, from exactly this kind of mortality data. The bowel cancer numbers are in the table below. You can paste them into the Observatories' tool, click "calculate", and experience the thrill of touching real data.
In fact, if you're a journalist, and you find yourself wanting to claim one region is worse than another, for any similar set of death rate figures, then do feel free to use this tool on those figures yourself. It might take five minutes.
Serious claims belong in a serious scientific paper | Ben Goldacre
If you have a serious new claim to make, it should go through scientific publication and peer review before you present it to the media
This week Baroness Susan Greenfield, professor of pharmacology at Oxford, reportedly announced that computer games could cause dementia in children. This would be very concerning scientific information. But the announcement came at the opening of a new wing of an expensive boarding school, not at an academic conference. A spokesperson then told a gaming site that this is not what she means – though they didn't say what she does mean.
Two months ago the same professor linked internet use with rising autism diagnoses (not for the first time), then pulled back when autism charities and an Oxford professor of psychology raised concerns. Similar claims go back a long way. They seem changeable, but serious.
It's with some trepidation that anyone writes about Professor Greenfield's claims. When I raised concerns, she said I was like the epidemiologists who denied that smoking caused cancer. Other critics find themselves derided as sexist. When Professor Dorothy Bishop raised concerns, Professor Greenfield responded: "It's not really for Dorothy to comment on how I run my career."
But I have one, humble question: why, in over five years of appearing in the media raising these grave worries, has Professor Greenfield of Oxford University never simply published the claims in an academic paper?
A scientist with enduring concerns about a serious widespread risk would normally set out their concerns clearly, to other scientists, in a scientific paper, and for one simple reason. Science has authority, not because of white coats, or titles, but because of precision and transparency: you explain your theory, set out your evidence, and reference the studies that support your case. Other scientists can then read it, see if you've fairly represented the evidence, and decide whether the methods of the papers you've cited really do produce results that meaningfully support your hypothesis.
Perhaps there are gaps in our knowledge? Great. The phrase "more research is needed" has famously been banned by the British Medical Journal, because it's uninformative: a scientific paper is the place to clearly describe the gaps in our knowledge, and specify new experiments that might resolve these uncertainties.
But the value of a scientific publication goes beyond this simple benefit, of all relevant information appearing, unambiguously, in one place. It's also a way to communicate your ideas to your scientific peers, and invite them to express an informed view.
By this I don't mean peer review, the "least-worst" system settled on for deciding whether a paper is worth publishing, where other academics decide if it's accurate, novel and so on. This is often represented as some kind of policing system for truth, but in reality some dreadful nonsense gets published, and mercifully so: shaky material of some small value can be published into the buyer-beware professional literature of academic science, and the academic readers of this literature, who are trained to critically appraise a scientific case, can make their own judgment.
And it is this second stage of review by your peers – after publication – that is so important in science. If there are flaws in your case, responses can be written, as letters, or even whole new papers. If there is merit in your work, then new ideas and research will be triggered. That is the real process of science.
If a scientist sidesteps their scientific peers, and chooses to take an apparently changeable, frightening and technical scientific case directly to the public, then that is a deliberate decision, and one that can't realistically go unnoticed. The lay public might find your case superficially appealing, but they may not be fully able to judge the merits of all your technical evidence.
I think these serious scientific concerns belong, at least once, in a clear scientific paper. I don't see how this suggestion is inappropriate, or impudent, and in all seriousness, I can't see an argument against it. I hope it won't elicit an accusation of sexism, or of participation in a cover-up. I hope that it will simply result in an Oxford science professor writing a scientific paper, about a scientific claim of great public health importance, that they have made repeatedly – but confusingly – for at least half a decade.