Friday, November 28, 2008

Do Herbs Get a Bad Press?

A neat little study in BMC Medicine investigates how newspapers report on clinical research. The authors tried to systematically compare the tone and accuracy of write-ups of clinical trials of herbal remedies with those of trials of pharmaceuticals. The results might surprise you.

The research comes from a Canadian group, and most of the hard slog was done by two undergrads, who read through and evaluated 105 trials and 553 newspaper articles about those trials. (They didn't get named as authors on the paper, which seems a bit mean, so let's take a moment to appreciate Megan Koper and Thomas Moran.) The aim was to take all English language newspaper articles about clinical trials printed between 1995 and 2005 (as found on LexisNexis). Duplicate articles were weeded out and every article was then rated for overall tone (subjective), the number of risks and benefits reported, whether it reported on conflicts of interest or not, and so forth. The trials themselves were also rated.
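
To make that kind of comparison concrete, here is a minimal sketch, in Python and with purely illustrative counts (not the paper's data or its actual analysis), of how coded article tones could be tabulated by trial type and compared with a chi-square test:

```python
# Minimal sketch of a tone-by-trial-type comparison.
# Rows: pharmaceutical vs herbal trial coverage; columns: positive, neutral, negative tone.
# The counts below are hypothetical and are NOT taken from the paper.
import numpy as np
from scipy.stats import chi2_contingency

observed = np.array([
    [150, 51, 0],     # hypothetical split of pharmaceutical-trial articles
    [120, 110, 115],  # hypothetical split of herbal-trial articles
])

chi2, p, dof, expected = chi2_contingency(observed)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.3g}")
# A small p-value would indicate that article tone and trial type are associated,
# i.e. that herbal and pharmaceutical trials get a different press.
```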

As the authors say

This type of study, comparing media coverage with the scientific research it covers is a well recognized method in media studies. Is the tone of reporting different for herbal remedy versus pharmaceutical clinical trials? Are there differences in the sources of trial funding and the reporting of that issue? What about the reporting of conflicts of interest?
There was a range of findings. Firstly, newspapers were generally poor at reporting on important facts about trials, such as conflicts of interest and methodological flaws. No great surprise there. They also tended to understate risks, especially with regard to herbal trials.

The most novel finding was that newspaper reports of herbal remedy trials were quite a lot more likely to be negative in tone than reports of pharmaceutical trials. The graphs here show this: out of 201 newspaper articles about pharmaceutical clinical trials, not one was negative in overall tone, and most were actively positive about the drug, while the herbs got a harsh press, with roughly as many negative articles as positive ones. (Rightmost two bars.)


This might partly be explained by the fact that slightly more of the herbal remedy trials found a negative result, but the difference in this case was fairly small (leftmost two bars). The authors concluded that
Those herbal remedy clinical trials that receive newspaper coverage are of similar quality to pharmaceutical clinical trials ... Despite the overall positive results and tone of the clinical trials, newspaper coverage of herbal remedy clinical trials was more negative than for pharmaceutical clinical trials.
Bet you didn't see that coming - the media (at any rate in Britain) are often seen as reporting uncritically on complementary and alternative medicine. These results suggest that this is a simplification, but remember that this study only considered articles about specific clinical trials - not general discussions of treatments or diseases. The authors remark:
[The result] is contrary to most published research on media coverage of CAM. Those studies consider a much broader spectrum of treatments and the media content is generally anecdotal rather than evidence based. Indeed, journalists are displaying a degree of skepticism rare for medical reporting.
So, it's not clear why journalists are so critical of trials of herbs when they're generally fans of CAM the rest of the time. The authors speculate:
It is possible that once confronted with actual evidence, journalists are more critical or skeptical. It may be considered more newsworthy to debunk commonly held beliefs and practices related to CAM, to go against the trend of positive reporting in light of evidence. It is also possible that journalists who turn to press releases of peer-reviewed, high-impact journals have subtle biases towards scientific method and conventional medicine. Also, journalists turn to trusted sources in the biomedical community for comments on clinical trials, both herbal and pharmaceutical, potentially leading to a biomedical bias in reporting trial outcomes.
If you forgive the slightly CAM-ish language (biomedical indeed), you can see that they make some good suggestions - but we don't really know. This is the problem with this kind of study (as the authors note) - the fact that a story is "negative" about herbs could mean a lot of different things. We also don't know how many other articles there were about herbs which didn't mention clinical trials, and because this study only considered articles referring to primary literature, not meta-analyses (I think), it leaves out a lot of material. Meta-analyses are popular with journalists and are often more relevant to the public than single trials are.

Still, it's a paper which challenged my prejudices (like a lot of bloggers I have a bit of a persecution complex about the media being pro-CAM) and a nice example of empirical research on the media.

ResearchBlogging.org
Tania Bubela, Heather Boon, Timothy Caulfield (2008). Herbal remedy clinical trials in the media: a comparison with the coverage of conventional pharmaceuticals. BMC Medicine, 6(1). DOI: 10.1186/1741-7015-6-35

Wednesday, November 26, 2008

The Spooky Case of the Disappearing Crap Science Article

Just a few hours ago, I drafted a post about a crap science study in the Daily Telegraph called "Stress of modern life cuts attention spans to five minutes".
The pressures of modern life are affecting our ability to focus on the task in hand, with work stress cited as the major distraction, it said.
Declining attention spans are causing household accidents such as pans being left to boil over on the hob, baths allowed to overflow, and freezer doors left open, the survey suggests.
A quarter of people polled said they regularly forget the names of close friends or relatives, and seven per cent even admitted to momentarily forgetting their own birthdays.
The study by Lloyds TSB insurance showed that the average attention span had fallen to just 5 minutes, down from 12 minutes 10 years ago.
But the over-50s are able to concentrate for longer periods than young people, suggesting that busy lifestyles and intrusive modern technology rather than old age are to blame for our mental decline.
"More than ever, research is highlighting a trend in reduced attention and concentration spans, and as our experiment suggests, the younger generation appear to be the worst afflicted," said sociologist David Moxon, who led the survey of 1,000 people.
Almost identical stories appeared in the Daily Mail (no surprise) and, for some reason, an awful lot of Indian news sites. So I hacked out a few curmudgeonly lines - but before I posted them, the story had vanished! (Update: It's back! See end of post). Spooky. But first, the curmudgeonry:
  • Crap science story in "crap" shocker
The term "attention span" is meaningless - attention to what? Are we so stressed out that after five minutes down the pub, we tend to forget our pints and wander home in a daze? You could talk about attention span for a particular activity, so long as you defined your criteria for losing attention - for example, you could measure the average time a student sits in a lecture before he starts doodling on his notes. Then if you wanted you could find out if stress affects that time. I wouldn't recommend it, because it would be very boring, but it would be a scientific study.

This news, however, is not based on a study of this kind. It's based on a survey of 1,000 people, i.e. they asked people how long their attention span was and whether they felt they were prone to accidents. No doubt the questions were chosen in such a way that they got the answers they wanted. Who are "they"? - Lloyds TSB insurance, or rather, their PR department, who decided that they would pay Mr David Moxon MSc. to get them the results they wanted. He obliged, because that's what he does. Then the PR people wrote up Moxon's "results" as a press release and sent it out to all the newspapers, where stressed-out, over-worked journalists (there's a grain of truth to every story!) leapt at the chance to fill some precious column inches with no thinking required. Lloyds get their name in the newspapers, their PR company gets cash, and Moxon gets cash and his name in the papers so he gets more clients in the future. Sorted!

How do I know this? Well, mainly because I've read Ben Goldacre's Bad Science and Nick Davies' Flat Earth News, two excellent books which explain in great detail how modern journalism works and how this kind of PR junk routinely ends up on the pages of your newspapers in the guise of science or "surveys". However, even if I hadn't, I could have worked it out by just consulting Google regarding Mr Moxon. Here is his website. Here's what Moxon says about his services:
David can provide a wide range of traditional behavioural research methods on a diverse range of social, psychological and health topics. David works in partnership with clients delivering precisely the brief they require whilst maintaining academic integrity.
The more commonly provided services include:
  • The development and compilation of questionnaire or survey questions

  • Statistical analysis of data (including SPSS® if required)

  • The development of personality typologies

  • The production of media friendly tests and quizzes (always with scoring systems)

  • The production of primary research reports identifying ‘top line findings’ as well as providing detailed results and conclusions.

In other words, he gets the results you want. And he urges potential customers to
Contact the consultancy which gives you fast, highly-creative and psychologically-endorsed stories that grab the headlines.
  • The Disappearance
The mystery is that the story, so carefully crafted by the PR department, has gone. Both the Telegraph and the Mail have pulled it, although it was there last time I checked, a couple of hours ago. Googling the story confirms that it used to be there, but now it's gone. Variants are still available elsewhere, sadly.

So, what happened? Did both the Mail and the Telegraph suddenly experience a severe attack of journalistic integrity and decide that this story was so bad, they weren't even going to host it on their websites? It seems doubtful, especially in the case of the Mail, but it's possible.

I prefer a different explanation: my intention to rubbish the story travelled forwards in time, and caused the story to be taken down, even though I hadn't posted about it yet. Lynne McTaggart has proven that this can happen, you know.

Update 27th November 13:30: And it's back! The story has reappeared on the Telegraph website. The Lay Scientist tells me that the story was originally put up prematurely and then pulled because it was embargoed until today. I don't quite see why it matters when a non-story like this is published - it could just as well have been 10 years ago - but there you go. And in a ridiculous coda to this sorry tale, the Telegraph have today run a second crap science article centered around the concept of "5 minutes" - according to the makers of cold and flu remedy Lemsip, 52% of women feel sorry for their boyfriends when they're ill for just five minutes or less. Presumably because this is their attention span. How I wish I were making this up.

Monday, November 24, 2008

Aww, monkeys!

From the hilarious and always informative climate-change-based cartoon series, Throbgoblins, comes this little reminder that there's more to life than psychology...

See also this strip which was, they tell me, inspired by something I said regarding Galileo. Thus bringing the count of awesome things that I've inspired to one.

Sunday, November 23, 2008

Totally Addicted to Genes

Why do some people get addicted to things? As with most things in life, there are lots of causes, most of which have little, if anything, to do with genes or the brain. Getting high or drunk all day may be an appealing and even reasonable life choice if you're poor, bored and unemployed. It's less so if you've got a steady job, a mortgage and a family to look after.

On the other hand, substance addiction is a biological process, and it would be surprising if genetics did not play a part. There could be many routes from DNA to dependence. Last year a study reported that two genes, TAS2R38 and TAS2R16, were associated with problem drinking. These genes code for some of the tongue's bitterness taste receptor proteins - presumably, carriers of some variants of these genes find alcoholic drinks less bitter, more drinkable and more appealing. Yet most people are more excited by the idea of genes which somehow "directly" affect the brain and predispose to addiction. Are there any? The answer is yes, probably, but they do lots of other things besides causing addiction.

A report just published in the American Journal of Medical Genetics by Agrawal et al. (2008) found an association between a certain variant in the CNR1 gene, rs806380, and the risk of cannabis dependence. They looked at a sample of 1923 white European American adults from six cities across the U.S., and found that the rs806380 "A" allele (variant) was more common in people with self-reported cannabis dependence than in those who denied having such a problem. A couple of other variants in the same gene were also associated, but less strongly.
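
The basic logic of a case-control association like this fits in a few lines. The sketch below uses entirely made-up allele counts (not the study's data) just to show how an over-representation of one allele among cases translates into an odds ratio:

```python
# Hypothetical allele counts, invented for illustration (not the study's data).
# Each participant contributes two alleles.
a_cases, other_cases = 620, 480     # cannabis-dependent group
a_ctrls, other_ctrls = 1300, 1446   # non-dependent comparison group

# Odds of carrying the "A" allele in each group, and the ratio between them.
odds_cases = a_cases / other_cases
odds_ctrls = a_ctrls / other_ctrls
odds_ratio = odds_cases / odds_ctrls

print(f"odds ratio = {odds_ratio:.2f}")  # > 1: "A" over-represented among the dependent
```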

As with all behavioural genetics, there are caveats. (I've warned about this before.) The people in this study were originally recruited as part of an alcoholism project, COGA. In fact, all of the participants were either alcohol dependent or had relatives who were. Most of the cannabis-dependent people were also dependent on alcohol. However, this is true of the real world as well, where dependence on more than one substance is common.

The sample size of nearly 2000 people is pretty good, but the authors investigated a total of eleven different variants of the CNR1 gene. This raises the problem of multiple comparisons, and they don't mention how they corrected for this, so we have to assume that they didn't. The main finding does corroborate earlier studies, however. So, assuming that this result is robust, and it's at least as robust as most work in this field, does this mean that a true "addiction gene" has been discovered?
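
As a rough illustration of why this matters, using hypothetical p-values rather than anything from the paper: with eleven tests at the usual 0.05 threshold, chance false positives become quite likely, and a Bonferroni correction simply tightens the per-test threshold accordingly.

```python
# Illustration of the multiple comparisons problem with hypothetical p-values
# (one per variant tested); none of these numbers come from the study.
raw_p = [0.004, 0.03, 0.08, 0.12, 0.21, 0.34, 0.41, 0.55, 0.63, 0.78, 0.91]
n_tests = len(raw_p)   # eleven variants
alpha = 0.05

# If no variant mattered and the tests were independent, the chance of at
# least one p < 0.05 purely by luck would be:
family_wise_error = 1 - (1 - alpha) ** n_tests   # roughly 0.43

# Bonferroni correction: require p < alpha / n_tests for each individual test.
threshold = alpha / n_tests                      # roughly 0.0045
survivors = [p for p in raw_p if p < threshold]

print(f"chance of >= 1 false positive uncorrected: {family_wise_error:.2f}")
print(f"Bonferroni threshold: {threshold:.4f}; {len(survivors)} of {n_tests} variants survive")
```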

Well, the gene CNR1 codes for the cannabinoid type 1 (CB1) receptor protein, the most common cannabinoid receptor in the brain. Endocannabinoids, and the chemicals in smoked cannabis, activate it. Your brain is full of endocannabinoids, molecules similar to the active compounds found in cannabis. Although they were discovered just 20 short years ago, they've already been found to be involved in just about everything that goes on in the brain, acting as a feedback system which keeps other neurotransmitters under control.

So, what Agrawal et al. found is that the cannabinoid receptor gene is associated with cannabis dependence. Is this a common-sense result - doesn't it just mean that people whose receptors are less affected by cannabis are less likely to want to use it? Probably not, because what's interesting is that the same variant in the CNR1 gene, rs806380, has been found to be associated with obesity and dependence on cocaine and opioids. Other variants in the same gene have shown similar associations, although there have been several studies finding no effect, as always.

What makes me believe that CNR1 probably is associated with addiction is that a drug which blocks the CB1 receptor, rimonabant, causes people to lose weight, and is also probably effective in helping people stop smoking and quit drinking (weaker evidence). Give it to mice and they become little rodent Puritans - they lose interest in sweet foods, and recreational drugs including alcohol, nicotine, cocaine and heroin. Only the simple things in life for mice on rimonabant. (No-one's yet checked whether rimonabant makes mice lose interest in sex, but I'd bet money that it does.)

So it looks as though the CB1 receptor is necessary for pleasurable or motivational responses to a whole range of things - maybe everything. If so, it's not surprising that variants in the gene coding for CB1 are associated with substance dependence, and with body weight - maybe these variants determine how susceptible people are to the lures of life's pleasures, whether it be a chocolate muffin or a straight vodka. (This is speculation, although it's informed speculation, and I know that many experts are thinking along these lines.)

What if we all took rimonabant to make us less prone to such vices? Wouldn't that be a good thing? It depends on whether you think people enjoying themselves is evidence of a public health problem, but it's worth noting that rimonabant was recently taken off the European market, despite being really pretty good at causing weight loss, because it causes depression in a significant minority of users. Does rimonabant just rob the world of joy, making everything else less fun? That would make anyone miserable. Except for neuroscientists, who would look forward to being able to learn more about the biology of mood and motivation by studying such side effects.

ResearchBlogging.org
Arpana Agrawal, Leah Wetherill, Danielle M. Dick, Xiaoling Xuei, Anthony Hinrichs, Victor Hesselbrock, John Kramer, John I. Nurnberger, Marc Schuckit, Laura J. Bierut, Howard J. Edenberg, Tatiana Foroud (2008). Evidence for association between polymorphisms in the cannabinoid receptor 1 (CNR1) gene and cannabis dependence. American Journal of Medical Genetics Part B: Neuropsychiatric Genetics. DOI: 10.1002/ajmg.b.30881

Wednesday, November 19, 2008

Educational neuro-nonsense, or: The Return of the Crockus

Vicky Tuck, President of the British Girls' Schools Association, has some odd ideas about the brain.

Tuck has appeared on British radio and in print over the past few days arguing that there should be more single-sex schools (which are still quite common in Britain) because girls and boys learn in different ways and benefit from different teaching styles. Given her job, I suppose she ought to be doing that, and there are, I'm sure, some good arguments for single-sex schools.

So why has she resorted to talking nonsense about neuroscience? Listen if you will to an interview she gave on the BBC's morning Today Program (Her part runs from 51:50s to 55:10s). Or, here's a transcript of the neuroscience bit, with my emphasis:
Interviewer: Do we know that girls and boys brains are wired differently?
Tuck: We do, and I think we're learning more and more every day about the brain, and particularly in adolescents this wiring is very interesting, and it's quite clear that you need to teach girls and boys in a very different way for them to be successful.
Interviewer: Well give us some examples, how should the way in which you teach them differ?
Tuck: Well, take maths. If you look at the girls they sort of approach maths through the cerebral cortex, which means that to get them going you really need to sort of paint a picture, put it in context, relate it to the real world, while boys sort of approach maths through the hippocampus, therefore they're very happy and interested in the core properties of numbers and can sort of dive straight in. So if a girl's being taught in a male-focused way she will struggle, whereas in an all-girl's school their confidence in maths is very, very high.
Interviewer: So you have no doubt that all girls should be taught separately from boys?
Tuck: I think that ideally, girls fare better if they're in a single sex environment, and I think that boys also fare better in an all boy environment, I think for example in the study of literature, in English, again a different kind of approach is needed. Girls are very good at empathizing, attuning to things via the emotions, the cerebral cortex again, whereas the boys come at things... it's the amygdala is very strong in the boy, and he will you know find it hard to tune in in that way and needs a different approach.
Interviewer: And yet we've had this trend towards co-education and we've also had more boys schools opening their doors to girls... [etc.]
This is, to put it kindly, confused. Speaking as a neuroscientist, I know of no evidence that girls and boys approach maths or literature using different areas of the brain, I'm not sure what evidence you could look for which would suggest that, and I'm not even sure what that statement means.

Girls and boys all have brains, and they all have the same parts in roughly the same places. When they're reading about maths, or reading a novel, or indeed when they're doing anything, all of these areas are working together at once. The cerebral cortex, in particular, comprises most of the bulk of the brain, and almost literally does everything; it has dozens of sub-regions responsible for everything from seeing moving objects to feeling disgusted to moving your eyes. I don't know which area is responsible for the boyish "core properties of numbers" but for what it's worth, the area most often linked to counting and calculation is the angular gyrus, part of... the supposedly girly cerebral cortex!

The gruff and manly hippocampus, on the other hand, is best known for its role in memory. Damage here leaves people unable to form new memories, although they can still remember things that happened before the injury. It's not known whether these people also have problems with number theory.

When it comes to literature, things get even worse. She says - "Girls are very good at empathizing, attuning to things via the emotions" - which I guess is a pop-psych version of psychologist Simon Baron-Cohen's famous theory of gender differences: that girls are, on average, better at girly social and emotional stuff while boys are better at systematic, logical stuff. This is, er, controversial, but it's a theory that has at least some merit to it.

However, given that the amygdala is generally seen as a fluffy "emotion area" while the cerebral cortex, or at least parts of it, are associated with more "cold" analytic cognition, "The amygdala is very strong in boys" suggests that they should be more emotionally empathic. If Tuck's going to deal in simplistic pop-neuroanatomy, she should at least get it the right way round.

The likely source of Tuck's confusion, given what's said here about Harvard research, is this study led by Dr. Jill Goldstein, who found differences in the size of brain areas between men and women. For example, she found that men have, on average, larger amygdalas than women, although they also have smaller hippocampi. Whatever, this study is fine science, although bear in mind that there could be a million reasons why men's and women's brains are different - it might have nothing to do with inborn differences. Stress, for example, makes your hippocampus shrink.

More importantly, there's no reason to think that "bigger is better", when it comes to parts of the brain. (I make no comment about other parts of the body.) That's phrenology, not science. Is a bigger mobile phone better than a smaller one? Bigger could be worse, if it means that the brain cells are less well organized. Likewise, if an area "lights up" more on an fMRI scan in boys than in girls, that sounds good, but in fact it might mean that the boys are having to think harder than the girls, because their brain is less efficient.

I'm a believer in the reality of biological sex differences myself - I just don't think we should try to find them with MRI scans. And Vicky Tuck seems like a clever person who's ended up talking nonsense unnecessarily. She could be making a good argument for single-sex schools based on some actual evidence about how kids learn and mature. Instead, she's shooting herself in the foot (or maybe in the brain's "foot center") with dodgy brain theories. Save yourself, Vicky - put the brain down and walk away.

Link Cognition and Culture who originally picked up on this.
Link The hilarious story of "The Crockus", a made-up brain area which has also been invoked to justify teaching girls and boys differently. It's weird how bad neuroscience repeats itself.

[BPSDB]

Deep Brain Stimulation Cures Urge To Break Glass

Deep Brain Stimulation (DBS) is in. There's been much buzz about its use in severe depression, and it has a long if less glamorous record of success in Parkinson's disease. Now that it's achieved momentum as a treatment in psychiatry, DBS is being tried in a range of conditions including chronic pain, obsessive-compulsive disorder and Tourette's Syndrome. Is the hype justified? Yes - but the scientific and ethical issues are more complex, and more interesting, than you might think.

Biological Psychiatry have just published this report of DBS in a man who suffered from severe, untreatable Tourette's syndrome, as well as OCD. The work was performed by a German group, Neuner et al. (who also have a review paper just out), and they followed the patient up for three years after implanting high-frequency stimulation electrodes in an area of the brain called the nucleus accumbens. It's fascinating reading, if only for the insight into the lives of the patients who receive this treatment.
The patient suffered from the effects of auto-aggressive behavior such as self-mutilation of the lips, forehead, and fingers, coupled with the urge to break glass. He was no longer able to travel by car because he had broken the windshield of his vehicle from the inside on several occasions.
It makes even more fascinating viewing, because the researchers helpfully provide video clips of the patient before and after the procedure. Neuropsychiatric research meets YouTube - truly, we've entered the 21st century. Anyway, the DBS seemed to work wonders:
... An impressive development was the cessation of the self-mutilation episodes and the urge to destroy glass. No medication was being used ... Also worthy of note is the fact that the patient stopped smoking during the 6 months after surgery. In the follow-up period, he has successfully refrained from smoking. He reports that he has no desire to smoke and that it takes him no effort to refrain from doing so.
Impressive indeed. DBS is, beyond a doubt, an exciting technology from both a theoretical and a clinical perspective. Yet it's worth considering some things that tend to get overlooked.

Firstly, although DBS has a reputation as a high-tech, science-driven, precisely-targeted treatment, it's surprisingly hit-and-miss. This report involved stimulation of the nucleus accumbens, an area best known to neuroscientists as being involved in responses to recreational drugs. (It's tempting to infer that this must have something to do with why the patient quit smoking.) I'm sure there are good reasons to think that DBS in the nucleus accumbens would help with Tourette's - but there are equally good reasons to target several other locations. As the authors write:
For DBS in Tourette's patients, the globus pallidus internus (posteroventrolateral part, anteromedial part), the thalamus (centromedian nucleus, substantia periventricularis, and nucleus ventro-oralis internus) and the nucleus accumbens/anterior limb of the internal capsule have all been used as target points.
For those whose neuroanatomy is a little rusty, that's a fairly eclectic assortment of different brain regions. Likewise, in depression, the best-known DBS target is the subgenual cingulate cortex, but successful cases have been reported with stimulation in two entirely different areas, and at least two more have been proposed as potential targets (Paper.) Indeed, even once a location for DBS has been chosen, it's often necessary to try stimulating at several points in order to find the best target. The point is that there is no "Depression center" or "Tourette's center" in the brain which science has mapped out and which surgery can now fix.

Second, by conventional standards, this was an awful study: it only had one patient, no controls, and no blinding. Of course, applying usual scientific standards to this kind of research is all but impossible, for ethical reasons. These are people, not lab rats. And it does seem unlikely that the dramatic and sustained response in this case could be purely the placebo effect, especially given that the patient had tried several medications previously.

So what the authors did was certainly reasonable under the circumstances - but still, this article, published in a leading journal, is basically an anecdote. If it had been about a Reiki master waving his hands at the patient, instead of a neurosurgeon sticking electrodes into him, it wouldn't even make it into the Journal of Alternative and Complementary Medicine. This is par for the course in this field; there have been controlled trials of DBS, but they are few and very small. Is this a problem? It would be silly to pretend that it wasn't - there is no substitute for good science. There's not much we can do about it, though.

Finally, Deep Brain Stimulation is a misleading term - the brain doesn't really get stimulated at all. The electrical pulses used in most DBS are at such a high frequency (145 Hz in this case) that they "overload" nearby neurons and essentially switch them off. (At least that's the leading theory.) In effect, turning on a DBS electrode is like cutting a hole in the brain. Of course, the difference is that you can switch off the electrode and put it back to normal. But this aside, DBS is little more sophisticated than the notorious "psychosurgery" pioneered by Walter Freeman back in the 1930s, which has since become so unpopular. I see nothing wrong with that - if it works, it works, and psychosurgery worked for many people, which is why it's still used in Britain today. It's interesting, though, that whereas psychosurgery is seen as the height of psychiatric barbarity, DBS is lauded as medical science at its most sophisticated.

For all that, DBS is the most interesting thing in neuroscience at the moment. Almost all research on the human brain is correlational - we look for areas of the brain which activate on fMRI scans when people are doing something. DBS offers one of the very few ways of investigating what happens when you manipulate different parts of the human brain. For a scientist, it's a dream come true. But of course, the only real reason to do DBS is for the patients. DBS promises to help people who are suffering terribly. If it does, that's reason enough to be interested in it.

See also: Someone with Parkinson's disease writes of his experiences with DBS on his blog.

ResearchBlogging.org
I. Neuner, K. Podoll, D. Lenartz, V. Sturm, F. Schneider (2008). Deep Brain Stimulation in the Nucleus Accumbens for Intractable Tourette's Syndrome: Follow-Up Report of 36 Months. Biological Psychiatry. DOI: 10.1016/j.biopsych.2008.09.030

Deep Brain Stimulation Cures Urge To Break Glass

Deep Brain Stimulation (DBS) is in. There's been much buzz about its use in severe depression, and it has a long if less glamorous record of success in Parkinson's disease. Now that it's achieved momentum as a treatment in psychiatry, DBS is being tried in a range of conditions including chronic pain, obsessive-compulsive disorder and Tourette's Syndrome. Is the hype justified? Yes - but the scientific and ethical issues are more complex, and more interesting, than you might think.

Biological Psychiatry have just published this report of DBS in a man who suffered from severe, untreatable Tourette's syndrome, as well as OCD. The work was performed by a German group, Neuner et. al. (who also have a review paper just out), and they followed the patient up for three years after implanting high-frequency stimulation electrodes in an area of the brain called the nucleus accumbens. It's fascinating reading, if only for the insight into the lives of the patients who receive this treatment.
The patient suffered from the effects of auto-aggressive behavior such as self-mutilation of the lips, forehead, and fingers, coupled with the urge to break glass. He was no longer able to travel by car because he had broken the windshield of his vehicle from the inside on several occasions.
It makes even more fascinating viewing, because the researchers helpfully provide video clips of the patient before and after the procedure. Neuropsychiatric research meets YouTube - truly, we've entered the 21st century. Anyway, the DBS seemed to work wonders:
... An impressive development was the cessation of the self-mutilation episodes and the urge to destroy glass. No medication was being used ... Also worthy of note is the fact that the patient stopped smoking during the 6 months after surgery. In the follow-up period, he has successfully refrained from smoking. He reports that he has no desire to smoke and that it takes him no effort to refrain from doing so.
Impressive indeed. DBS is, beyond a doubt, an exciting technology from both a theoretical and a clinical perspective. Yet it's worth considering some things that tend to get overlooked.

Firstly, although DBS has a reputation as a high-tech, science-driven, precisely-targeted treatment, it's surprisingly hit-and-miss. This report involved stimulation of the nucleus accumbens, an area best known to neuroscientists as being involved in responses to recreational drugs. (It's tempting to infer that this must have something to do with why the patient quit smoking.) I'm sure there are good reasons to think that DBS in the nucleus accumbens would help with Tourette's - but there are equally good reasons to target several other locations. As the authors write:
For DBS in Tourette's patients, the globus pallidus internus (posteroventrolateral part, anteromedial part), the thalamus (centromedian nucleus, substantia periventricularis, and nucleus ventro-oralis internus) and the nucleus accumbens/anterior limb of the internal capsule have all been used as target points.
For those whose neuroanatomy is a little rusty, that's a fairly eclectic assortment of different brain regions. Likewise, in depression, the best-known DBS target is the subgenual cingulate cortex, but successful cases have been reported with stimulation in two entirely different areas, and at least two more have been proposed as potential targets (Paper.) Indeed, even once a location for DBS has been chosen, it's often necessary to try stimulating at several points in order to find the best target. The point is that there is no "Depression center" or "Tourette's center" in the brain which science has mapped out and which surgery can now fix.

Second, by conventional standards, this was an awful study: it only had one patient, no controls, and no blinding. Of course, applying usual scientific standards to this kind of research is all but impossible, for ethical reasons. These are people, not lab rats. And it does seem unlikely that the dramatic and sustained response in this case could be purely the placebo effect, especially given that the patient had tried several medications previously.

So what the authors did was certainly reasonable under the circumstances - but still, this article, published in a leading journal, is basically an anecdote. If it had been about a Reiki master waving his hands at the patient, instead of a neurosurgeon sticking electrodes into him, it wouldn't even make it into the Journal of Alternative and Complementary Medicine. This is par for the course in this field; there have been controlled trials of DBS, but they are few and very small. Is this a problem? It would be silly to pretend that it wasn't - there is no substitute for good science. There's not much we can do about it, though.

Finally, Deep Brain Stimulation is a misleading term - the brain doesn't really get stimulated at all. The electrical pulses used in most DBS are at such a high frequency (145 Hz in this case) that they "overload" nearby neurons and essentially switch them off. (At least that's the leading theory.) In effect, turning on a DBS electrode is like cutting a hole in the brain. Of course, the difference is that you can switch off the electrode and put it back to normal. But this aside, DBS is little more sophisticated than the notorious "psychosurgery" pioneered by Walter Freeman back in the 1930s, which has since become so unpopular. I see nothing wrong with that - if it works, it works, and psychosurgery worked for many people, which is why it's still used in Britain today. It's interesting, though, that whereas psychosurgery is seen as the height of psychiatric barbarity, DBS is lauded as medical science at its most sophisticated.
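
To put the 145 Hz figure in perspective, here's a trivial back-of-the-envelope sketch (mine, not the authors'). The "typical" spontaneous firing rate below is only a rough illustrative assumption, but it shows how relentless the stimulation is compared to normal activity:

# Rough arithmetic on DBS pulse timing - illustrative only, not from the paper
dbs_frequency_hz = 145.0                     # stimulation frequency in this case
inter_pulse_interval_ms = 1000.0 / dbs_frequency_hz

typical_firing_hz = 10.0                     # assumed ballpark spontaneous firing rate
pulses_per_typical_spike = dbs_frequency_hz / typical_firing_hz

print(f"One pulse every {inter_pulse_interval_ms:.1f} ms")                   # ~6.9 ms
print(f"Roughly {pulses_per_typical_spike:.0f} pulses per 'normal' spike")   # ~15

Whatever the exact mechanism turns out to be, driving neurons an order of magnitude faster than they would normally fire looks more like disrupting their output than "stimulating" it.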

For all that, DBS is the most interesting thing in neuroscience at the moment. Almost all research on the human brain is correlational - we look for areas of the brain which activate on fMRI scans when people are doing something. DBS offers one of the very few ways of investigating what happens when you manipulate different parts of the human brain. For a scientist, it's a dream come true. But of course, the only real reason to do DBS is for the patients. DBS promises to help people who are suffering terribly. If it does, that's reason enough to be interested in it.

See also: Someone with Parkinson's disease writes of his experiences with DBS on his blog.

ResearchBlogging.org
Neuner, I., Podoll, K., Lenartz, D., Sturm, V., & Schneider, F. (2008). Deep Brain Stimulation in the Nucleus Accumbens for Intractable Tourette's Syndrome: Follow-Up Report of 36 Months. Biological Psychiatry. DOI: 10.1016/j.biopsych.2008.09.030

Tuesday, November 18, 2008

Kruger & Dunning Revisited

The irreplaceable Overcoming Bias have an excellent post on every blogger's favorite psychology paper, Kruger and Dunning (1999) "Unskilled and Unaware Of It".

Most people (myself included) have taken this paper as evidence that the better you are at something, the better you are at knowing how good you are at it. Thus, people who are bad don't know that they are, which is why they don't try to improve. It's an appealing conclusion, and also a very intuitive one.

In general, this kind of conclusion should be taken with a pinch of salt.

Indeed, it turns out that there's another more recent paper, Burson et al. (2006) "Skilled or Unskilled, but Still Unaware of It", which finds that everyone is pretty bad at judging their own skill, and in some circumstances, more skilled people make less accurate judgments than novices. Heh.

Saturday, November 15, 2008

Prozac Made My Cells Spiky

A great many neuroscientists are interested in clinical depression and antidepressants. We're still a long way from understanding depression on a biological level - and if anyone tries to tell you otherwise, they're probably trying to sell you something. I've previously discussed the controversies surrounding the neurotransmitter serotonin - according to popular belief, the brain's "happy chemical". My conclusion was that although clinical depression is not caused by "low serotonin" alone, serotonin does play an important role in mood at least in some people.

A paper published recently in Molecular Psychiatry makes a number of important contributions to the literature on depression and antidepressants; I haven't seen it discussed elsewhere, so here is my take on it. The paper is by a Portuguese research group, Bessa et al., and it's titled The mood-improving actions of antidepressants do not depend on neurogenesis but are associated with neuronal remodeling. The findings are right there in the title, but a little history is required in order to appreciate their significance.

For a long time, the only biological theory which attempted to explain clinical depression and how antidepressants counteract it was the monoamine hypothesis. During the early 1960s, it was noticed that early antidepressant drugs, such as imipramine, all inhibited either the breakdown or the removal (reuptake) of chemicals in the brain called monoamines, including serotonin. This led many to conclude that antidepressants improve mood by raising monoamine levels, and that depression is probably caused by some kind of monoamine deficiency. For various reasons (not all of them good ones), it was later decided that serotonin was the crucial monoamine involved in mood, although for several years another, noradrenaline, was favored by most people.

This "monoamine hypothesis" was always a little shaky, and over the past decade or so, an alternative approach has become increasingly fashionable. If you were so inclined, you might even call it a new paradigm. This is the proposal that antidepressants work by promoting the survival and proliferation of new neurones in certain areas of the brain - the "neurogenesis hypothesis". Neurogenesis, the birth of new cells from stem cells, occurs in a couple of very specific regions of the adult brain, including the elaborately named subgranular zone (SGZ) of the dentate gyrus (DG) of the hippocampus. Many experiments on animals have shown that chronic stress, and injections of the "stress hormone" corticosterone, can suppress neurogenesis, while a wide range of antidepressants block this effect of stress and promote neurogenesis. (Other evidence shows that antidepressants probably do this by inducing the expression of neurotrophic signaling proteins, like BDNF.)

The literature on stress, neurogenesis, and antidepressants is impressive and growing rapidly. For good reviews, see Duman (2004) and Duman & Monteggia (2006). However, the crucial question - do antidepressants work by boosting hippocampal neurogenesis? - remains a controversial one. The hippocampus is not an area generally thought of as being involved in mood or emotion, and damage to the human hippocampus causes amnesia, not depression. Given that the purpose (if any) of adult neurogenesis remains a mystery, it's entirely possible that neurogenesis has nothing to do with depression and mood.

To establish whether neurogenesis is involved in antidepressant action, you need to manipulate it - for example, by blocking neurogenesis and seeing if this makes antidepressants ineffective. This is practically quite tricky, but Luca Santarelli et al. (2003) managed to do it by irradiating the hippocampi of mice with x-rays. They found that this made two antidepressants (fluoxetine, aka Prozac, and imipramine) ineffective in protecting the animals against the detrimental effects of chronic stress. This was a landmark result, and raised a lot of interest in the neurogenesis theory.

This new paper, however, says differently. The authors gave lab rats a six-week Chronic Mild Stress treatment, a Guantanamo Bay-style program of intermittent food deprivation, sleep disruption, and confinement. Chronic stress has various effects on rats, including increased anxiety and decreased time spent grooming, leading to fur deterioration. These behaviours and others can be quantified, and are treated as a rat analogue of human clinical depression - whether this is valid is obviously debatable, but I'm willing to accept it at least until a better animal model comes along.

Anyway, some of the rats were injected with antidepressants during the final two weeks of the stress procedure. As expected, these rats coped better with the stress at the end of six weeks. This graph shows the effects of stress and antidepressants on the rats' behaviour in the Forced Swim (Porsolt) Test. Higher bars indicate more "depressed" behaviour. The second pair of bars, representing the stressed rats who got placebo injections, is a lot higher than the first pair of bars, representing rats who were not subjected to any stress. In other words, stress made rats "depressed" - no surprise. The other four pairs of bars are pretty much the same height as the first pair; these are rats who got antidepressants, showing that they were resistant to the effects of stress.

The crucial finding is that the white and the black bars are all pretty much the same height. The black bars represent animals who were given injections of methylazoxymethanol (MAM), a cytostatic toxin which blocks cell division (rather like cancer chemotherapy). As you can see, MAM had no effect at all on behaviour in the swim test. It had no effect on most other tests, although it did seem to make the rats more anxious in one experiment.

However, MAM powerfully inhibited neurogenesis. This second graph shows the number of hippocampal cells expressing Ki-67, a protein which is a marker of proliferating cells. As expected, stress reduced neurogenesis and antidepressants increased it. MAM (black bars again) reduced neurogenesis, and in particular, it completely blocked the ability of antidepressants to increase it.
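
The logic here is a classic dissociation, and it's worth making it concrete. The numbers below are purely hypothetical (my illustration, not the paper's data); they just show the pattern the authors are arguing for - MAM abolishes the antidepressant effect on neurogenesis while leaving the behavioural rescue intact:

# Hypothetical toy numbers to illustrate the dissociation - NOT the paper's data
# (group name) -> (forced-swim immobility in seconds, Ki-67+ cell count)
groups = {
    "unstressed control":              (100, 2000),
    "stress + vehicle":                (180, 1200),
    "stress + antidepressant":         (105, 2600),
    "stress + antidepressant + MAM":   (108,  900),
}

for name, (immobility, ki67) in groups.items():
    print(f"{name:34s} immobility = {immobility:3d} s   Ki-67+ cells = {ki67:4d}")

# If neurogenesis were necessary for the behavioural effect, the MAM group
# should look like "stress + vehicle" on immobility. In the paper it doesn't,
# and that is the whole argument.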

But as we saw earlier, MAM did not stop antidepressants from protecting rats against stress. So, the authors concluded, neurogenesis is not necessary for antidepressants to work. This contradicts the landmark finding of Santarelli et al. - why the discrepancy? There are so many differences between the two experiments that there could be any number of explanations - the current study used rats, while Santarelli used mice, for one thing, and that could well be important. Whatever the reason, this result suggests at the least that neurogenesis is not the only mechanism by which antidepressants counteract the effects of stress in animals.

The most interesting aspect of this paper, to my mind, was an essentially unrelated new finding. Stress was found to reduce the volume of several areas of the rat's brain, including the hippocampus and also the medial prefrontal cortex (mPFC). Unlike the hippocampus, this is an area known to be involved in motivation and emotion. Importantly, the authors found that following stress, the mPFC did not shrink because neurones were dying or because fewer neurones were being born, but rather because the existing neurones were changing shape - stress caused atrophy of the dendrites which branch out from neurones. Dendrites are essential for communication between neurones.

As you can see in the drawings above, stress (the middle column) caused shrinking and stunting of the dendrites in pyramidal neurones from three areas relative to the unstressed rats (left), while those rats receiving antidepressants as well as stress showed no such effect (right). The cytostatic MAM had no effect whatsoever on dendrites. Further work found that antidepressants increase expression of NCAM1, a protein which is involved in dendritic growth.

So what does this mean? Well, for one thing, it doesn't prove that antidepressants work by increasing dendritic branching. Cheekily, the authors come close to implying this in their choice of title for the paper, but the paper provides no direct evidence for it. To find out, you would have to show that blocking the effects of antidepressants on dendrites also blocks their beneficial effects. I suspect this is what the authors are now working hard to try to do, but they haven't done so yet.

It also doesn't mean that taking Prozac will change the shape of your brain cells. It might well do, but this was a study in rats given huge doses of antidepressants (by human standards), so we really don't know whether the findings apply to humans. On the other hand, if Prozac changes the shape of your cells, this study suggests that stressful situations do too - and Prozac, if anything, will put your cells back to "normal".

Finally, I don't want to suggest that the neurogenesis theory of depression is now "dead". In neuroscience, theories never live or die on the basis of single experiments (unlike in physics). But it does suggest that the much-blogged-about neurogenesis hypothesis is not the whole story. Depression isn't just a case of too little serotonin, and it isn't just a case of too little neurogenesis or too little BDNF either.

ResearchBlogging.org
Bessa, J. M., Ferreira, D., Melo, I., Marques, F., Cerqueira, J. J., Palha, J. A., Almeida, O. F. X., & Sousa, N. (2008). The mood-improving actions of antidepressants do not depend on neurogenesis but are associated with neuronal remodeling. Molecular Psychiatry. DOI: 10.1038/mp.2008.119