Tuesday, December 2, 2008

That esteemed journal, Viz

Viz is a British institution. One of the funniest magazines ever printed, and utterly unique; the most obvious comparison, although it's a bad one, is MAD Magazine, but I've always found Viz much more entertaining.

Just about everything in Viz is a parody - most commonly the target is British comic strips, but most issues have parodies of newspapers as well. Although almost everything in the magazine is full of nudity, profanity, or both, Viz features often have a (semi)serious message. There's a long-running series poking fun at religion and pseudoscience; a few months ago there was a strip about "Dr" Gillian McKeith and her Giant Pile of Bollocks, and recently they ran a contest, in association with Ben Goldacre, in which you could win a free PhD - perfect for cranks who need to add a little academic credibility. In fact Ben has said that he aims for his columns to be something of a cross between Viz and the British Medical Journal.

So, inspired by some Viz fun on ScienceBlogs, I thought I'd post a classic bit of parody neuro-journalism from a few years back. Or is it a parody? You'll learn more about the brain reading this than you will from any number of writeups of fMRI studies.

Monday, December 1, 2008

I'm big in Vietnam

Apparently. Thanks to whoever posted a link to Neuroskeptic on this Vietnamese forum! As they comment, wisely:
To be well informed about science, ignore everything you read about it in newspapers. Then read some science books if you like, but ignoring journalists is the important thing.

Friday, November 28, 2008

Do Herbs Get a Bad Press?

A neat little study in BMC Medicine investigates how newspapers report on clinical research. The authors tried to systematically compare the tone and accuracy of write-ups of clinical trials of herbal remedies with those of trials of pharmaceuticals. The results might surprise you.

The research comes from a Canadian group, and most of the hard slog was done by two undergrads, who read through and evaluated 105 trials and 553 newspaper articles about those trials. (They didn't get named as authors on the paper, which seems a bit mean, so let's take a moment to appreciate Megan Koper and Thomas Moran.) The aim was to take all English language newspaper articles about clinical trials printed between 1995 and 2005 (as found on LexisNexis). Duplicate articles were weeded out and every article was then rated for overall tone (subjective), the number of risks and benefits reported, whether it reported on conflicts of interest or not, and so forth. The trials themselves were also rated.

As the authors say:

This type of study, comparing media coverage with the scientific research it covers is a well recognized method in media studies. Is the tone of reporting different for herbal remedy versus pharmaceutical clinical trials? Are there differences in the sources of trial funding and the reporting of that issue? What about the reporting of conflicts of interest?
There was a range of findings. Firstly, newspapers were generally poor at reporting important facts about trials, such as conflicts of interest and methodological flaws. No great surprise there. They also tended to understate risks, especially with regard to herbal trials.

The most novel finding was that newspaper reports of herbal remedy trials were quite a lot more likely to be negative in tone than reports of pharmaceutical trials. The graphs here show this: out of 201 newspaper articles about pharmaceutical clinical trials, not one was negative in overall tone, and most were actively positive about the drug, while the herbs got a harsh press, with roughly as many negative articles as positive ones. (Rightmost two bars.)
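To get a feel for how lopsided that split is, here's a rough back-of-the-envelope two-proportion z-test. The pharmaceutical count (0 negative articles out of 201) comes from the post above; the herbal counts are invented placeholders, since the paper's exact figures aren't quoted here:

```python
import math

# Counts of negative-tone newspaper articles. The pharmaceutical figures
# (0 of 201) are from the post; the herbal figures are hypothetical.
pharma_neg, pharma_n = 0, 201
herb_neg, herb_n = 48, 100

# Pooled two-proportion z-test on the difference in negative-tone rates
p1, p2 = pharma_neg / pharma_n, herb_neg / herb_n
p_pool = (pharma_neg + herb_neg) / (pharma_n + herb_n)
se = math.sqrt(p_pool * (1 - p_pool) * (1 / pharma_n + 1 / herb_n))
z = (p2 - p1) / se
print(f"negative rate: pharma {p1:.2f}, herbal {p2:.2f}, z = {z:.1f}")
```

With anything like these counts, z lands far beyond any conventional significance threshold, which is why the difference in tone is the paper's headline finding.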


This might partly be explained by the fact that slightly more of the herbal remedy trials found a negative result, but the difference in this case was fairly small (leftmost two bars). The authors concluded that
Those herbal remedy clinical trials that receive newspaper coverage are of similar quality to pharmaceutical clinical trials ... Despite the overall positive results and tone of the clinical trials, newspaper coverage of herbal remedy clinical trials was more negative than for pharmaceutical clinical trials.
Bet you didn't see that coming - the media (at any rate in Britain) are often seen as reporting uncritically on complementary and alternative medicine. These results suggest that this is a simplification, but remember that this study only considered articles about specific clinical trials - not general discussions of treatments or diseases. The authors remark:
[The result] is contrary to most published research on media coverage of CAM. Those studies consider a much broader spectrum of treatments and the media content is generally anecdotal rather than evidence based. Indeed, journalists are displaying a degree of skepticism rare for medical reporting.
So, it's not clear why journalists are so critical of trials of herbs when they're generally fans of CAM the rest of the time. The authors speculate:
It is possible that once confronted with actual evidence, journalists are more critical or skeptical. It may be considered more newsworthy to debunk commonly held beliefs and practices related to CAM, to go against the trend of positive reporting in light of evidence. It is also possible that journalists who turn to press releases of peer-reviewed, high-impact journals have subtle biases towards scientific method and conventional medicine. Also, journalists turn to trusted sources in the biomedical community for comments on clinical trials, both herbal and pharmaceutical, potentially leading to a biomedical bias in reporting trial outcomes.
If you forgive the slightly CAM-ish language (biomedical indeed), you can see that they make some good suggestions - but we don't really know. This is the problem with this kind of study (as the authors note) - the fact that a story is "negative" about herbs could mean a lot of different things. We also don't know how many other articles there were about herbs which didn't mention clinical trials, and because the study only considered articles referring to primary literature, not meta-analyses (I think), it leaves out a lot of material. Meta-analyses are popular with journalists and are often more relevant to the public than single trials are.

Still, it's a paper which challenged my prejudices (like a lot of bloggers I have a bit of a persecution complex about the media being pro-CAM) and a nice example of empirical research on the media.

Tania Bubela, Heather Boon, Timothy Caulfield (2008). Herbal remedy clinical trials in the media: a comparison with the coverage of conventional pharmaceuticals. BMC Medicine, 6(35). DOI: 10.1186/1741-7015-6-35

Wednesday, November 26, 2008

The Spooky Case of the Disappearing Crap Science Article

Just a few hours ago, I drafted a post about a crap science study in the Daily Telegraph called "Stress of modern life cuts attention spans to five minutes".
The pressures of modern life are affecting our ability to focus on the task in hand, with work stress cited as the major distraction, it said.
Declining attention spans are causing household accidents such as pans being left to boil over on the hob, baths allowed to overflow, and freezer doors left open, the survey suggests.
A quarter of people polled said they regularly forget the names of close friends or relatives, and seven per cent even admitted to momentarily forgetting their own birthdays.
The study by Lloyds TSB insurance showed that the average attention span had fallen to just 5 minutes, down from 12 minutes 10 years ago.
But the over-50s are able to concentrate for longer periods than young people, suggesting that busy lifestyles and intrusive modern technology rather than old age are to blame for our mental decline.
"More than ever, research is highlighting a trend in reduced attention and concentration spans, and as our experiment suggests, the younger generation appear to be the worst afflicted," said sociologist David Moxon, who led the survey of 1,000 people.
Almost identical stories appeared in the Daily Mail (no surprise) and, for some reason, an awful lot of Indian news sites. So I hacked out a few curmudgeonly lines - but before I posted them, the story had vanished! (Update: It's back! See end of post). Spooky. But first, the curmudgeonry:
  • Crap science story in "crap" shocker
The term "attention span" is meaningless - attention to what? Are we so stressed out that after five minutes down the pub, we tend to forget our pints and wander home in a daze? You could talk about attention span for a particular activity, so long as you defined your criteria for losing attention - for example, you could measure the average time a student sits in a lecture before he starts doodling on his notes. Then if you wanted you could find out if stress affects that time. I wouldn't recommend it, because it would be very boring, but it would be a scientific study.

This news, however, is not based on a study of this kind. It's based on a survey of 1,000 people, i.e. they asked people how long their attention span was and whether they felt they were prone to accidents. No doubt the questions were chosen in such a way that they got the answers they wanted. Who are "they"? - Lloyds TSB insurance, or rather, their PR department, who decided that they would pay Mr David Moxon MSc. to get them the results they wanted. He obliged, because that's what he does. Then the PR people wrote up Moxon's "results" as a press release and sent it out to all the newspapers, where stressed-out, over-worked journalists (there's a grain of truth to every story!) leapt at the chance to fill some precious column inches with no thinking required. Lloyds get their name in the newspapers, their PR company gets cash, and Moxon gets cash and his name in the papers so he gets more clients in the future. Sorted!

How do I know this? Well, mainly because I've read Ben Goldacre's Bad Science and Nick Davies's Flat Earth News, two excellent books which explain in great detail how modern journalism works and how this kind of PR junk routinely ends up on the pages of your newspapers in the guise of science or "surveys". However, even if I hadn't, I could have worked it out by just consulting Google regarding Mr Moxon. Here is his website. Here's what Moxon says about his services:
David can provide a wide range of traditional behavioural research methods on a diverse range of social, psychological and health topics. David works in partnership with clients delivering precisely the brief they require whilst maintaining academic integrity.
The more commonly provided services include:
  • The development and compilation of questionnaire or survey questions

  • Statistical analysis of data (including SPSS® if required)

  • The development of personality typologies

  • The production of media friendly tests and quizzes (always with scoring systems)

  • The production of primary research reports identifying ‘top line findings’ as well as providing detailed results and conclusions.

In other words, he gets the results you want. And he urges potential customers to
Contact the consultancy which gives you fast, highly-creative and psychologically-endorsed stories that grab the headlines.
  • The Disappearance
The mystery is that the story, so carefully crafted by the PR department, has gone. Both the Telegraph and the Mail have pulled it, although it was there last time I checked, a couple of hours ago. Googling the story confirms that it used to be there, but now it's gone. Variants are still available elsewhere, sadly.

So, what happened? Did both the Mail and the Telegraph suddenly experience a severe attack of journalistic integrity and decide that this story was so bad, they weren't even going to host it on their websites? It seems doubtful, especially in the case of the Mail, but it's possible.

I prefer a different explanation: my intention to rubbish the story travelled forwards in time, and caused the story to be taken down, even though I hadn't posted about it yet. Lynne McTaggart has proven that this can happen, you know.

Update 27th November 13:30: And it's back! The story has reappeared on the Telegraph website. The Lay Scientist tells me that the story was originally put up prematurely and then pulled because it was embargoed until today. I don't quite see why it matters when a non-story like this is published - it could just as well have been 10 years ago - but there you go. And in a ridiculous coda to this sorry tale, the Telegraph have today run a second crap science article centered around the concept of "5 minutes" - according to the makers of cold and flu remedy Lemsip, 52% of women feel sorry for their boyfriends when they're ill for just five minutes or less. Presumably because this is their attention span. How I wish I were making this up.