
Friday, July 31, 2009

St John's Wort - The Perfect Antidepressant, If You're German

The herb St John's Wort is as effective as antidepressants while having milder side effects, according to a recent Cochrane review, St John's wort for major depression.

Professor Edzard Ernst, a well-known enemy of complementary and alternative medicine, wrote a favorable review of this study in which he comments that given the questions around the safety and effectiveness of antidepressants, it is a mystery why St John's Wort is not used more widely.

When Edzard Ernst says a herb works, you should take notice. But is St John's Wort (Hypericum perforatum) really the perfect antidepressant? Curiously, it seems to depend on whether you're German or not.

The Cochrane review included 29 randomized, double-blind trials with a total of 5500 patients. The authors only included trials where all patients met DSM-IV or ICD-10 criteria for "major depression". 18 trials compared St John's Wort extract to placebo pills, and 19 compared it to conventional antidepressants. (Some trials did both.)

The analysis concluded that overall, St John's Wort was significantly more effective than placebo. The magnitude of the benefit was similar to that seen with conventional antidepressants in other trials (around 3 HAMD points). However, this was only true when studies from German-speaking countries were examined.

Out of the 11 Germanic trials, 8 found that St John's Wort was significantly better than placebo and the other 3 were all very close. None of the 8 non-Germanic trials found it to be effective and only one was close.

Edzard Ernst, by the way, is German. So were the authors of this review. I'm not.

The picture was a bit clearer when St John's Wort was directly compared to conventional antidepressants: it was almost exactly as effective. It was significantly worse in only one small study. This was true in both Germanic and non-Germanic studies, and was true whether older tricyclics or newer SSRIs were considered.

Perhaps the most convincing result was that St John's Wort was well tolerated. Patients did not drop out of the trials because of side effects any more often than when they were taking placebo (OR=0.92), and were much less likely to drop out than patients given antidepressants (OR=0.41). Reported side effects were also few. (It can be dangerous when combined with certain antidepressants and other medications, however.)
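(For the curious: an odds ratio like that is simple to compute. Here's a minimal sketch in Python, with made-up dropout counts for illustration - they are not the review's actual numbers.)

    # Hypothetical dropout-due-to-side-effects counts (NOT from the review):
    sjw_dropout, sjw_stayed = 20, 480   # St John's Wort arm
    ad_dropout, ad_stayed = 45, 455     # conventional antidepressant arm

    odds_sjw = sjw_dropout / sjw_stayed   # odds of dropping out on the herb
    odds_ad = ad_dropout / ad_stayed      # odds of dropping out on the drug

    print(f"OR = {odds_sjw / odds_ad:.2f}")  # ~0.42: dropout odds under half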

So, what does this mean? If you look at it optimistically, it's wonderful news. St John's Wort, a natural plant product, is as good as any antidepressant against depression, and has far fewer side effects, maybe none at all. It should be the first-line treatment for depression, especially because it's cheap (no patents).

But from another perspective this review raises more questions than answers. Why did St John's Wort perform so differently in German vs. non-German studies? The authors admit that:
Our finding that studies from German-speaking countries yielded more favourable results than trials performed elsewhere is difficult to interpret. ... However, the consistency and extent of the observed association suggest that there are important differences in trials performed in different countries.
The obvious, cynical explanation is that there are lots of German trials finding that St John's Wort didn't work, but they haven't been published because St John's Wort is very popular in German-speaking countries and people don't want to hear bad news about it. The authors downplay the possibility of such publication bias:
We cannot rule out, but doubt, that selective publication of overoptimistic results in small trials strongly influences our findings.
But we really have no way of knowing.

The more interesting explanation is that St John's Wort really does work better in German trials because German investigators tend to recruit the kind of patients who respond well to St John's Wort. The present review found that trials including patients with "more severe" depression found slightly less benefit of St John's Wort vs. placebo, which is the opposite of what is usually seen in antidepressant trials, where greater baseline severity is associated with a larger drug-placebo difference. The authors also note that it's been suggested that so-called "atypical depression" symptoms - like eating too much, sleeping a lot, and anxiety - respond especially well to St John's Wort.

So it could be that for some patients St John's Wort works well, but until studies examine this in detail, we won't know. One thing, however, is certain - the evidence in favor of Hypericum is strong enough to warrant more scientific interest than it currently gets. In most English-speaking psychopharmacology circles, it's regarded as a flaky curiosity.

The case of St John's Wort also highlights the weaknesses of our current diagnostic systems for depression. According to DSM-IV, someone who feels miserable, cries a lot and comfort-eats ice cream has the same disorder - "major depression" - as someone with severe melancholic symptoms who is unable to eat or sleep. The concept is so broad as to encompass a huge range of problems, and doctors in different cultures may apply the word "depression" very differently.

Ernst, E. (2009). Review: St John's wort superior to placebo and similar to antidepressants for major depression but with fewer side effects. Evidence-Based Mental Health, 12(3), 78. DOI: 10.1136/ebmh.12.3.78

Linde, K., Berner, M.M., & Kriston, L. (2008). St John's wort for major depression. Cochrane Database of Systematic Reviews, (4).

Saturday, July 25, 2009

In Science, Popularity Means Inaccuracy

Who's more likely to start digging prematurely: one guy with a metal-detector looking for an old nail, or a field full of people with metal-detectors searching for buried treasure?

In any area of science, some things will be more popular than others - maybe a certain gene, a protein, or a part of the brain. It's only natural and proper that some things get a lot of attention if they seem to be scientifically important. But Thomas Pfeiffer and Robert Hoffmann warn in a PLoS ONE paper that popularity can lead to inaccuracy - Large-Scale Assessment of the Effect of Popularity on the Reliability of Research.

They note two reasons for this. Firstly, popular topics tend to attract interest and money. This means that scientists have much to gain by publishing "positive results" as this allows them to get in on the action -
In highly competitive fields there might be stronger incentives to “manufacture” positive results by, for example, modifying data or statistical tests until formal statistical significance is obtained. This leads to inflated error rates for individual findings... We refer to this mechanism as “inflated error effect”.
Secondly, in fields where there is a lot of research being done, the chance that someone will, just by chance, come up with a positive finding increases -
The second effect results from multiple independent testing of the same hypotheses by competing research groups. The more often a hypothesis is tested, the more likely a positive result is obtained and published even if the hypothesis is false. ... We refer to this mechanism as “multiple testing effect”.
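How big can the multiple testing effect get? A quick simulation (my own illustration, not from the paper) makes the point: if a hypothesis is actually false and every trial uses the conventional p < 0.05 threshold, the chance that at least one of k independent trials comes out "positive" grows alarmingly fast.

    import numpy as np

    rng = np.random.default_rng(0)
    alpha = 0.05         # conventional significance threshold
    n_sims = 100_000     # simulated "literatures" per value of k

    for k in [1, 5, 10, 20]:   # independent tests of a FALSE hypothesis
        # Under the null, each test is "significant" with probability alpha
        any_hit = (rng.random((n_sims, k)) < alpha).any(axis=1)
        print(f"k={k:2d}: P(at least one positive) = {any_hit.mean():.2f} "
              f"(theory: {1 - (1 - alpha)**k:.2f})")
    # k=20 gives ~0.64: a false hypothesis tested by 20 groups will
    # probably generate at least one publishable "positive result"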
But does this happen in real life? The authors say yes, based on a review of research into protein-protein interactions in yeast. (Happily, you don't need to be a yeast expert to follow the argument.)

There are two ways of trying to find out whether two proteins interact with each other inside cells. You could do a small-scale experiment specifically looking for one particular interaction: say, Protein B with Protein X. Or you could do "high-throughput" screening of lots of proteins to see which ones interact: does Protein A interact with B, C, D, E... does Protein B interact with A, C, D, E... and so on.

There have been tens of thousands of small-scale experiments into yeast proteins, and more recently, a few high-throughput studies. The authors looked at the small-scale studies and found that the more popular a certain protein was, the less likely it was that reported interactions involving it would be confirmed by high-throughput experiments.
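In outline, their analysis amounts to something like the following sketch (the data and column names here are invented for illustration; the authors' real pipeline is considerably more sophisticated):

    import pandas as pd

    # Hypothetical records of small-scale interaction claims:
    # 'popularity' = number of publications mentioning the protein,
    # 'confirmed' = whether a high-throughput screen reproduced the claim
    claims = pd.DataFrame({
        "popularity": [1, 2, 3, 5, 8, 12, 20, 35, 60, 100],
        "confirmed":  [1, 1, 1, 1, 0,  1,  0,  0,  0,   0],
    })

    # Bin proteins by popularity and compute the confirmation rate per bin
    claims["pop_bin"] = pd.cut(claims["popularity"], bins=[0, 5, 20, 1000],
                               labels=["low", "medium", "high"])
    print(claims.groupby("pop_bin", observed=True)["confirmed"].mean())
    # The paper's finding: this rate FALLS as popularity rises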

The second and third of the paper's graphs show the effect: increasing popularity goes with a falling percentage of confirmed results. The first graph shows that interactions which were replicated by lots of small-scale experiments tended to be confirmed, which is what you'd expect.

Pfeiffer and Hoffmann note that high-throughput studies have issues of their own, so using them as a yardstick to judge the truth of other results is a little problematic. However, they say that the overall trend remains valid.

This is an interesting paper which provides some welcome empirical support for the theoretical argument that popularity could lead to unreliability. Unfortunately, the problem is by no means confined to yeast. Any area of science in which researchers engage in a search for publishable "positive results" is vulnerable to the dangers of publication bias, data cherry-picking, and so forth. Even obscure topics are vulnerable, but when researchers are falling over themselves to jump on the latest scientific bandwagon, the problems multiply.

A recent example may be the "depression gene", 5HTTLPR. Since a landmark paper in 2003 linked it to clinical depression, there has been an explosion of research into this genetic variant. Literally hundreds of papers appeared - it is by far the most studied gene in psychiatric genetics. But a lot of this research came from scientists with little experience or interest in genes. It's easy and cheap to collect a DNA sample and genotype it. People started routinely looking at 5HTTLPR whenever they did any research on depression - or anything related.

But wait - a recent meta-analysis reported that the gene is not in fact linked to depression at all. If that's true (it could well be), how did so many hundreds of papers appear which did find an effect? Pfeiffer and Hoffmann's paper provides a convincing explanation.

Link - Orac also blogged this paper and put a characteristic CAM angle on it.

Pfeiffer, T., & Hoffmann, R. (2009). Large-Scale Assessment of the Effect of Popularity on the Reliability of Research. PLoS ONE, 4(6). DOI: 10.1371/journal.pone.0005996

Saturday, May 23, 2009

Do Antidepressants Help in Mild Depression?

Yes! says the BBC, reporting on the results of a new trial -
Drugs 'can help mild depression'
Not so fast. Read this before you reach for the Prozac.
It was about this time last year that Irving Kirsch and colleagues released Initial Severity and Antidepressant Benefits. This bombshell of a meta-analysis concluded, notoriously, that the benefits of antidepressants over and above placebo are in general pretty small. Moreover, it claimed that the benefits are even smaller - indeed pretty much zero - in people whose depression is not very severe to begin with.

However, Neuroskeptic readers will know that antidepressant trials are not all they're cracked up to be (1,2). On top of which Kirsch et al. were a little "creative" with their statistics, as bloggers P J Leonard and Robert Waldmann aptly demonstrated. So, the claim that antidepressants don't work in mild depression rests on shaky foundations.

But that doesn't mean that they do work. In fact, there have been very few studies looking at the effectiveness of drugs in mild to moderate depression. That's a shame, because mild depression is the most common reason why people are given antidepressants in real life.

Now a new clinical trial, run by the British National Health Service, has appeared. It was (drumroll) a Randomised controlled trial to determine the clinical effectiveness and cost-effectiveness of selective serotonin reuptake inhibitors plus supportive care, versus supportive care alone, for mild to moderate depression with somatic symptoms in primary care.

The researchers enlisted GPs (family doctors) from across the UK, and got them to refer suitable patients to the study. Patients could be included if their doctors considered that they were depressed and had been for at least 8 weeks. They also had to be aged 18 or over, and they had to be rated between 12 and 19 on the HAMD, a scale used to measure the severity of depression. (Slightly oddly, they were also required to show at least some evidence of "somatic" symptoms - aches, pains, indigestion, that kind of thing. I'm not sure why.) Patients were excluded if they "expressed suicidal intent" or if they admitted to drug or alcohol misuse.

A total of 602 patients were referred to the trial, but of these only 220 actually took part; the rest either didn't want to do it or were unsuitable for whatever reason. It took the researchers nearly 4 years and heroic efforts to recruit those 220 people, including reimbursing doctors £45 for each patient referred. This kind of research is frustrating, which is probably why there's so little of it.

The volunteers were randomly assigned to get supportive care alone, or supportive care plus the doctor's choice of SSRI antidepressant. "Supportive care" is basically a euphemism for "doing sweet F. A.". The GPs were meant to see the patients 5 times over a 12-week period; given that a typical GP consultation in the UK lasts about 10 minutes, the idea that this constitutes any kind of "care", supportive or not, is a bit of a joke.

What happened? Well, to cut a very long story short, the patients assigned to SSRIs did better than the ones assigned to supportive care alone. Hurrah! But they only did slightly better. After 12 weeks they had a mean HAMD score of 8.7, compared to 11.2 in the supportive care group. The SSRI group also did a bit better on some other measures of health, well-being and general satisfaction. The difference on the BDI, a self-reported measure of depression, was not significant, however (13.0 vs. 15.1).
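As a rough back-of-envelope calculation (mine, not the report's, and resting on an assumed standard deviation), a 2.5-point HAMD difference is a modest standardized effect:

    # Group means at 12 weeks, as reported in the trial
    hamd_ssri, hamd_support = 8.7, 11.2

    # ASSUMPTION for illustration only: a pooled SD of ~5 HAMD points,
    # plausible for depression trials but not quoted from this report
    assumed_sd = 5.0

    d = (hamd_support - hamd_ssri) / assumed_sd
    print(f"difference = {hamd_support - hamd_ssri:.1f} HAMD points, d = {d:.1f}")
    # d = 0.5 under this assumption: real, but hardly dramatic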

So does that mean antidepressants "work" in mild depression? Maybe. Maybe not. The most obvious issue, of course, is that there was no placebo group in this trial. So any benefit of the pills could have been purely psychological. Getting randomly assigned to "supportive care" and condemned to twiddle your thumbs for 12 weeks is not going to make anyone feel better. Starting on antidepressants, on the other hand, feels like a fresh start. It gives hope. It's change you can believe in.

But if giving people pills makes them feel better, isn't that good enough reason to do it? Who cares if it's all the placebo effect? Well, there's some truth to that, but the problem is that patients included in this trial were a rather unusual bunch. In particular, they were people who agreed to be randomized to get antidepressants or not, i.e. they had no strong preference either for or against pills.

Given that an awful lot of people do have such a preference, we can't assume that these results apply to the average patient in the clinic. As the authors note (page 59, emphasis mine):
The tallies of surgery logs completed by a number of the study GPs at various points during the study showed that only around 1 in 10 patients with a new episode of depression were referred into the study, mainly because the rest did not fulfil the inclusion criteria, particularly in terms of a lack of equipoise about the benefits of drug treatment on the part of the doctor or patient or both.
And of those 602 referred, only about a third actually took part, as mentioned above. So what we have here is a study on an unusual 3% or so of patients (roughly 1 in 10 referred, times about a third enrolled, is 3-4%). What about the other 97%? We don't know. Still.

Or don't we? Well, it depends who "we" are. I suspect that a moderately competent doctor with experience treating depression probably does have a good idea of who is likely to benefit from drugs and who isn't. There's no substitute for real, hands-on clinical experience. There's more to life than trials...

Kendrick, T., Chatwin, J., Dowrick, C., Tylee, A., Morriss, R., Peveler, R., Leese, M., McCrone, P., Harris, T., Moore, M., Byng, R., Brown, G., Barthel, S., Mander, H., Ring, A., Kelly, V., Wallace, V., Gabbay, M., Craig, T., & Mann, A. (2009). Randomised controlled trial to determine the clinical effectiveness and cost-effectiveness of selective serotonin reuptake inhibitors plus supportive care, versus supportive care alone, for mild to moderate depression with somatic symptoms in primary care. Health Technology Assessment, 13(22).

Saturday, April 18, 2009

Depression, Neurogenesis and Herpes

Previously, I've discussed the neurogenesis theory of depression in two rather skeptical posts. Not that I'm on some kind of anti-neurogenesis theory crusade, but a study just published adds to the evidence that all's not well with that hypothesis.

The paper is Singer et al.'s Conditional ablation and recovery of forebrain neurogenesis in the mouse. Via some cunning genetic engineering, the authors created mice with a gene for a protein called herpes simplex virus thymidine kinase. As the name suggests, this is a protein normally found in, er, herpes. Ganciclovir is a drug which can be used to treat herpes and related viral infections. And, as you might expect, cells engineered to express the herpes protein die when exposed to ganciclovir.

The authors engineered mice which expressed herpes simplex virus thymidine kinase, but only in neural progenitor cells. These are the cells which eventually become new neurones in the adult brain. They found that injections of ganciclovir devastated the production of new neurones in the engineered mice. (It had no effect on normal mice, of course, because their brain cells weren't half mouse, half herpes.) That's not all that surprising.

However, they also found that ganciclovir treatment had no effect on the ability of 28 days of treatment with imipramine, an antidepressant, to affect the mice's behaviour. (The measure of antidepressant action was the Tail Suspension Test.) That's a result, because a lot of people are interested in the theory that antidepressants work by boosting neurogenesis in the hippocampus. If that were true, blocking neurogenesis should also block the effects of antidepressants.

Some rather exciting experiments found that it does, most famously the much-cited Santarelli et al (2003). But a growing number of other studies, such as this one, have not confirmed this finding. This doesn't mean that Santarelli et al were wrong, but it does suggest that there's more to antidepressants than neurogenesis. The seemingly contradictory findings of the various studies might be due to important differences in the methods used. For example, the authors of this paper say that Santarelli et al's way of blocking neurogenesis - using x-rays - may also have caused inflammation and blocked the formation of non-neural cells, such as those which make up blood vessels.

Of course, it's easy enough for us to speculate along such lines - rather harder to work out what exactly is going on. With any luck, the next few years will see more progress on this important topic.

Singer, B., Jutkiewicz, E., Fuller, C., Lichtenwalner, R., Zhang, H., Velander, A., Li, X., Gnegy, M., Burant, C., & Parent, J. (2009). Conditional ablation and recovery of forebrain neurogenesis in the mouse. The Journal of Comparative Neurology, 514(6), 567-582. DOI: 10.1002/cne.22052

Monday, March 2, 2009

A Very Optimistic Genetics Paper

Saturday saw the Guardian on fine form with a classic piece of bad neuro-journalism which made it all the way onto the front page:
Psychologists find gene that helps you look on the bright side of life
Those unfortunate enough to lack the 'brightside gene' are more likely to suffer from mental health problems such as depression
What the research actually found was nothing to do with looking on the bright side of anything, and was nothing to do with depression either. In fact, it suggests that the gene in question doesn't cause mental health problems. So the headlines are a little misleading, then.

The study comes from Elaine Fox and colleagues at the University of Essex.* They took 111 people, presumably students, and got them to do a "dot-probe" task. Performance on this task was related to the genotype of the 5HTTLPR polymorphism, a variant in the gene which encodes the serotonin transporter protein. Serotonin is "the brain's main feelgood chemical", as the Guardian put it... except it isn't, although it does have something to do with mood.

What's a "dot-probe" task? It's a test which has become popular amongst all kinds of psychologists over the past 10 years or so, having first been used in 1986 by Colin MacLeod et al. The task involves pressing a button whenever a "probe" - a little dot - appears on a screen. The goal is to press the button as quickly as possible, as soon as the dot appears.

The twist is that as well as the dots, there are other things on the screen. In the 1986 version of the test these were words, while in this experiment they were colour pictures. Some of the images were pleasant: smiling faces, flowers, and other nice things. Some were unpleasant - scary dogs, bloody injuries, etc. And some were neutral objects, like furniture.

Pairs of these pictures appeared on the screen for a short time (half a second) immediately before each dot appeared, one on the left of the screen and one on the right. The key is that the dot appeared in the same place as one of the pictures.

The task operates under the assumption that if the viewer's attention is grabbed by one of the pictures, they are likely to be faster to respond to seeing the dot when it appears in the same place as that picture, because they will already be focused on that area of the screen. If, for example, people are on average faster to detect the dot when it appears in the same place as the nice pictures as opposed to the horrible ones, this is described as indicating a "positive attentional bias" i.e. an unconscious tendency to pay attention to pleasant pictures.
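In computational terms the bias score is very simple. A minimal sketch (reaction times and variable names invented for illustration):

    import statistics

    # Hypothetical reaction times (ms) for one participant
    rt_dot_at_pleasant = [420, 405, 430, 415]    # dot replaced the nice picture
    rt_dot_at_unpleasant = [450, 445, 460, 455]  # dot replaced the nasty one

    # A positive score means faster responses when the dot replaces the
    # pleasant image, i.e. attention was already on the pleasant picture
    bias = (statistics.mean(rt_dot_at_unpleasant)
            - statistics.mean(rt_dot_at_pleasant))
    print(f"positive attentional bias = {bias:.0f} ms")   # 35 ms here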

Unfortunately, now that you know what a dot-probe task is, you can't take part in any psychology experiment which uses one, because once you know how it's supposed to work there's no point in doing it. Sorry. But on the bright side, you now officially know more about psychology than The Economist, whose write-up of this experiment managed to be even worse than the Guardian's. They not only sensationalized the results, but also misunderstood the whole point of the dot-probe task - it's not about "distraction", it's about selective attention-grabbing.

Anyway, that's the task, and the study found that carriers of two "long" variants of the 5HTTLPR gene showed a strong attention bias towards nice pictures and away from nasty ones, while other people showed no biases. Statistically, the result was highly significant, so let's assume it's true. What does it mean? You could take it to mean that carriers of two long variants were more optimistic in that they tend to pay attention to the good stuff. On the other hand you could equally well say they're so squeamish and wussy that they can't bear to look at the bad stuff and have to avert their eyes from it.

And what's this got to do with depression? Well, to cut a very long story short, the gene in question has been previously linked to depression and also to personality traits such as "neuroticism" - being anxious, worried and generally miserable (see this paper). But in this study they found no such association with neuroticism - despite the fact that it was a report of exactly this association which got everyone interested in the 5HTTLPR variant in the first place, back in 1996! Brilliantly, they spin their negative finding as a good thing -
The fact that our genotype groups were matched on a range of self-report measures, including neuroticism can be seen as a major strength.
Hope springs eternal. Overall, while this paper is a fine contribution to the psychology literature on the dot-probe task (and the results genuinely do seem to be highly significant - there's probably something going on here), it's got nothing to do with optimism and little to do with anything that the average newspaper reader cares about. Luckily, we have journalists to make science interesting on the cheap and on the quick - at the cost of accuracy. There's a lot of really interesting, really thought-provoking popular science writing to be done about the dot-probe, and about the 5HTTLPR gene. But none of it has yet made it into the British papers.

*Fox, my PubMed search reveals, also does work on so-called "electromagnetic sensitivity". The upshot of her work is that lots of people sincerely believe that signals from mobile phones and other sources make them feel unwell, but actually, it's all the placebo effect. Now that really is something that everyone should find fascinating - much more so than this study, anyway.

Fox, E., Ridgewell, A., & Ashwin, C. (2009). Looking on the bright side: biased attention and the human serotonin transporter gene. Proceedings of the Royal Society B.

Sunday, December 7, 2008

Lessons from the Placebo Gene

Update: See also Lessons from the Video Game Brain

The Journal of Neuroscience has published a Swedish study which, according to New Scientist (and the rest), is something of a breakthrough:

First 'Placebo Gene' Discovered
I rather like the idea of a dummy gene made of sugar, or perhaps a gene for being Brian Molko, but what they're referring to is a gene, TPH2, which allegedly determines susceptibility to the placebo effect. Interesting, if true. Genetic Future was skeptical of the study because of its small sample size. It is small, but I'm not too concerned about that because there are, unfortunately, other serious problems with this study and the reporting on it. I should say at the outset, however, that most of what I'm about to criticize is depressingly common in the neuroimaging literature. The authors of this study have done some good work in the past and are, I'm sure, no worse than most researchers. With that in mind...

The study included 25 people diagnosed with Social Anxiety Disorder (SAD). Some people see the SAD diagnosis as a drug company ploy to sell pills (mainly antidepressants) to people who are just shy. I disagree. Either way, these were people who complained of severe anxiety in social situations. The 25 patients were all given placebo pill treatment for 8 weeks.

Before and after the treatment they each got an H2(15)O PET scan, which measures regional cerebral blood flow (rCBF) in the brain, something that is generally assumed to correlate with neural activity. It's a bit like fMRI, although the physics are different. During the scans the subjects had to make a brief speech in front of 6 to 8 people. This was intended to make them anxious, as it would do. The patients' self-reported social anxiety in everyday situations was also assessed every 2 weeks by questionnaires and clinical interviews.

The patients were then split into two groups based upon their final status: "placebo responders" were those who ended up with a "CGI" rating of 1 or 2 - meaning that they reported that their anxiety had got a lot better - and "placebo nonresponders" who didn't. (You may take issue with this terminology - if so, well done, and keep reading). Brain activation during the public speaking task was compared between these two groups. The authors also looked at two genes, 5HTTLPR and TPH2. Both are involved in serotonin signalling and both have been associated (in some studies) with vulnerability to anxiety and depression.

The results: the placebo responders reported less anxiety following treatment - unsurprisingly, because this is why they were classed as responders. On the PET scans, the placebo responders showed reduced amygdala activity during the second public speaking task compared to the first one; the non-responders showed no change. This is consistent with the popular and fairly sensible idea that the amygdala is active during the experience of emotion, especially fear and anxiety. In fact, though, this effect was marginal, and it was only significant under a region-of-interest analysis, i.e. when they specifically looked at the data from the amygdala; in a more conservative whole-brain analysis they found nothing (or rather they did, but they wrote it off as uninteresting, as cognitive neuroscientists generally do when they see blobs in the cerebellum and the motor cortex):

PET data: whole-brain analyses

Exploratory analyses did not reveal significantly different treatment-induced patterns of change in responders versus nonresponders. Significant within-group alterations outside the amygdala region were noted only in nonresponders, who had increased (pre < post) rCBF in the right cerebellum ... and in a cluster encompassing the right primary motor and somatosensory cortices...
As for the famous "placebo gene", they found that two genetic variants, 5HTTLPR ll and TPH2 GG, were associated with a bigger drop in amygdala activity from before treatment to after treatment. TPH2 GG was also associated with the improvement in anxiety over the 8 weeks.
In a logistic regression analysis, the TPH2 polymorphism emerged as the only significant variable that could reliably predict clinical placebo response (CGI-I) on day 56, homozygosity for the G allele being associated with better outcome. Eight of the nine placebo responders (89%), for whom TPH2 gene data were available, were GG homozygotes.
You could call this a gene correlating with the "placebo effect", although you'd probably be wrong (see below). There are a number of important lessons to take home here.
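(Before the lessons, a brief aside for the statistically minded: the reported analysis is of roughly this form. The individual-level coding below is my reconstruction from the counts quoted in this post - 8 of 9 responders and 8 of 15 non-responders being GG - and the paper's actual model includes more than this.)

    import numpy as np
    import statsmodels.api as sm

    # 1 = TPH2 GG homozygote, 0 = otherwise; the first 9 subjects
    # are the placebo responders, the remaining 15 are non-responders
    gg = np.array([1]*8 + [0]*1 + [1]*8 + [0]*7)
    responder = np.array([1]*9 + [0]*15)

    fit = sm.Logit(responder, sm.add_constant(gg)).fit(disp=0)
    print(np.exp(fit.params[1]))  # odds ratio for GG, = 7.0 on these counts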

1. Dr Placebo, I presume? - Be careful what you call the placebo effect

This study couldn't have discovered a "placebo gene", even if there is one. It didn't measure the placebo effect at all.

You'll recall that the patients in this study were assessed before and after 8 weeks of placebo treatment (sugar pills). Any changes occurring during these 8 weeks might be due to a true "placebo effect" - improvement caused by the patient's belief in the power of the treatment. This is the possibility that gets some people rather excited: it's mind over matter, man! This is why the word "placebo" is often preceded by words like "Amazing", "Mysterious", or even "Magical" - as if Placebo were the stage name of a 19th century conjuror. (As opposed to the stage name of androgynous pop-goth Brian Molko... I've already done that one.)

But, as they often do, more prosaic explanations suggest themselves. Most boringly, the patients might have just got better. Time is the great healer, etc., and two months is quite a long time. Maybe one of the patients hooked up with a cute guy and it did wonders for their self-confidence. Maybe the patients volunteered for the study when they did because their anxiety was especially bad, and by the time of the second scan it had returned to normal (regression towards the mean). Maybe the study itself made a difference, by getting the patients talking about their anxiety with sympathetic professionals. Maybe the patients didn't actually feel any better at all, but just said they did because that's what they thought they were expected to say. I could go on all day.

In my opinion the most likely explanation is that the patients were simply less anxious during their second PET scan, once they had survived the first one. PET scans are no fun: you get a catheter inserted into your arm, through which you're injected with a radioactive tracer compound. Meanwhile, your head is fixed in place within a big white box covered in hazard signs. It's not hard to see that you'd probably be much more anxious on your first scan than the second time around.

So, calling the change from baseline to 8 weeks a "placebo response", and calling the people who got better "placebo responders", is misleading (at least it misled every commentator on this study so far). The only way to measure the true placebo effect is to compare placebo-treated people with people who get no treatment at all. This wasn't done in this study. It rarely is. This is something which confuses an awful lot of people. When people talk about the placebo effect, they're very often referring to the change in the placebo group, which as we've seen is not the same thing at all, and has nothing even vaguely magical or mysterious about it. (For example, some armchair psychiatrists like to say that since patients in the placebo group in antidepressant drug trials often show large improvements, sugar pills must be helpful in depression.) That said, another study in the same issue of the same journal did measure an actual placebo effect.
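The regression-to-the-mean and time-heals points are easy to demonstrate with a toy simulation (entirely my own, nothing to do with the study's data): give everyone fluctuating symptoms, recruit only those who score badly at baseline, give them no treatment whatsoever, and the group still "improves".

    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000

    # Toy model: anxiety score = stable trait + day-to-day fluctuation
    trait = rng.normal(50, 10, n)
    baseline = trait + rng.normal(0, 10, n)
    followup = trait + rng.normal(0, 10, n)   # no treatment of any kind

    recruited = baseline > 65     # trials recruit people who look bad NOW
    print(f"baseline mean:  {baseline[recruited].mean():.1f}")
    print(f"follow-up mean: {followup[recruited].mean():.1f}")
    # Follow-up comes out ~10 points lower despite a zero treatment effect:
    # a textbook "placebo response" with no placebo effect in sight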

2. Beware Post Hoc-us Pocus

From the way it's been reported, you would probably assume that this was a study designed to investigate the placebo effect. However, in the paper we read:

Patients were taken from two previously unpublished RCTs that evaluated changes in regional cerebral blood flow after 56 d of pharmacological treatment by means of positron emission tomography. ... The clinical PET trials ... included a total of 108 patients with SAD. There were three treatment arms in the first study and six arms in the second. ... Only the pooled placebo data are included herein, whereas additional data on psychoactive drug treatment will be reported separately.
Personally, I find this odd. Why have so many groups if you're interested in just one of them? Even if the data from the drug groups are published, it's unusual to report on some aspect of the placebo data in a separate paper before writing up the main results of an RCT. To me it seems likely that when this study was designed, no-one intended to search for genes associated with the placebo effect. I suspect that the analysis the authors report on here was post hoc; having looked at the data, they looked around for any interesting effects in it.

To be clear, there's no proof that this is what happened here, but anyone who has worked in science will know that it does happen, and to my jaded eyes it seems probable that this is a case of it. For one thing, if this was a study intended to investigate the placebo effect, it was poorly designed (see above).

There's nothing wrong with post-hoc findings. If scientists only ever found what they set out to look for, science wouldn't have got very far. However, unless they are clearly reported as post hoc, the problem of the Texas Sharpshooter arises - findings may appear more significant than they really are. In this case, the TPH2 gene was only a significant predictor of "placebo response" with p=0.04, which is marginal at the best of times.
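The arithmetic of the Texas Sharpshooter is the same as the multiple testing effect discussed in an earlier post (again my illustration, not a reanalysis of this paper): run a handful of gene-outcome comparisons post hoc and a single p = 0.04 is not very surprising.

    # Chance of at least one nominally significant hit among m independent
    # true-null tests at alpha = 0.05 is 1 - (1 - alpha)**m
    alpha = 0.05
    for m in [1, 2, 4, 6]:    # e.g. 2 genes x 3 outcome measures = 6 tests
        print(f"m={m}: P(>=1 hit) = {1 - (1 - alpha)**m:.2f}")
    # m=6 gives 0.26 - about a 1-in-4 chance of a spurious "finding"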

The reason researchers feel the need to do this kind of thing is the premium the scientific community (and hence scientific publishing) places on getting "positive results". Plus, no-one wants to PET scan over 100 people (they're incredibly expensive) and report that nothing interesting happened. However, this doesn't make it right (rant continues...)

3. Science Journalism Is Dysfunctional

Sorry to go on about this, but really it is. New Scientist's write-up of this study was, relatively speaking, quite good - they did at least include some caveats ("The gene might not play a role in our response to treatment for all conditions, and the experiment involved only a small number of people."). They did, however, make a couple of factual errors, such as saying that "8 of the 10 responders had two copies [of the TPH2 G allele], while none of the non-responders did" - actually 8 of the 15 non-responders did - but anyway.

The main point is that they didn't pick up on the fact that this experiment didn't measure the placebo effect at all, which makes their whole article misleading. (The newspapers generally did an even worse job.) I was able to write this post because I had nothing else on this weekend and reading papers like this is a major part of my day job. Ego aside, I'm pretty good at this kind of thing. That's why I write about it, and not about other stuff. And that's why I no longer read science journalism (well, except to blog about how rubbish it is.)

It would be wrong to blame the journalist who wrote the article for this. I'm sure they did the best they could in the time available. I'm sure that I couldn't have done any better. The problem is that they didn't have enough time, and probably didn't have enough specialist knowledge, to read the study critically. It's not their fault, it's not even New Scientist's fault, it's the fault of the whole idea of science journalism, which involves getting non-experts to write, very fast, about complicated issues and make them comprehensible and interesting to laymen even when they're manifestly not. I used to want to be a science journalist, until I realised that that was the job description.

Furmark, T., Appel, L., Henningsson, S., Ahs, F., Faria, V., Linnman, C., Pissiota, A., Frans, O., Bani, M., Bettica, P., Pich, E.M., Jacobsson, E., Wahlstedt, K., Oreland, L., Langstrom, B., Eriksson, E., & Fredrikson, M. (2008). A Link between Serotonin-Related Gene Polymorphisms, Amygdala Activity, and Placebo-Induced Relief from Social Anxiety. Journal of Neuroscience, 28(49), 13066-13074. DOI: 10.1523/JNEUROSCI.2534-08.2008