
Sunday, March 6, 2011

Paxil: The Whole Truth?

Paroxetine, aka Paxil aka Seroxat, is an SSRI antidepressant.

Like other SSRIs, its reputation has see-sawed over time. Hailed as miracle drugs in the 1990s and promoted for everything from depression to "separation anxiety" in dogs, they fell from grace over the past decade.

First, concerns emerged over withdrawal symptoms and suicidality especially in young people. Then more recently their antidepressant efficacy came into serious question. Paroxetine has arguably the worst image of all SSRIs, although whether it's much different to the rest is unclear.

Now a new paper claims to provide a definitive assessment of the safety and efficacy of paroxetine in adults (age 18+). The lead authors are from GlaxoSmithKline, who invented paroxetine. So it's no surprise that the text paints GSK and their product in a favourable light, but the data warrant a close look and the results are rather interesting - and complicated.

They took all of the placebo-controlled trials of paroxetine for any psychiatric disorder - because it wasn't just trialled in depression, but also in PTSD, anxiety, and more. They excluded studies with fewer than 30 people; this makes sense, though the cutoff is somewhat arbitrary (why not 40, or 20?). Anyway, they ended up with 61 trials.

First they looked at suicide. In a nutshell paroxetine increased suicidal "behaviour or ideation" in younger patients (age 25 or below) relative to placebo, whether or not they were being treated for depression. In older patients, it only increased suicidality in the depression trials, and the effect was smaller. I've put a red dot where paroxetine was worse than placebo; this doesn't mean the effect was "statistically significant", but the numbers are so small that this is fairly meaningless. Just look at the numbers.

This is not very new. It's been accepted for a while that broadly the same applies when you look at trials of other antidepressants. Whether this causes extra suicides in the real world is a big question.

When it comes to efficacy, however, we find some rather startling info that's not been presented together in one article before, to my knowledge. Here's a graph showing the effect of paroxetine over-and-above placebo in all the different disorders, expressed as a proportion of the improvement seen in the placebo group.

Now I should point out that I just made this measure up. It's not ideal. If the placebo response is very small, then a tiny drug effect will seem large by comparison, even if what this really means is that neither drug nor placebo do any good.

However, the flip side of that coin is that it controls for the fact that rating scales for different disorders might simply be more likely to show change than others. The d score is a more widely used standardized measure of effect size - though it has its own shortcomings - and I'd like to know the d values here, but the data they provide don't allow us to calculate them easily. You could do it from the GSK database, but it would take ages.
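To make the two measures concrete, here's a toy calculation in Python. All of the improvement scores and the standard deviation below are made up for illustration; none of them come from the GSK dataset.

```python
# Two ways of expressing a drug's benefit over placebo.

def placebo_relative_effect(drug_change, placebo_change):
    """The improvised measure: drug benefit over placebo,
    as a fraction of the placebo group's improvement."""
    return (drug_change - placebo_change) / placebo_change

def cohens_d(drug_change, placebo_change, pooled_sd):
    """Cohen's d: the same difference, standardized by the
    pooled standard deviation of the change scores."""
    return (drug_change - placebo_change) / pooled_sd

# Hypothetical HAMD improvements (points), purely illustrative:
drug, placebo, sd = 10.0, 8.0, 7.5

print(placebo_relative_effect(drug, placebo))  # 0.25: drug adds 25% on top of placebo
print(cohens_d(drug, placebo, sd))             # ~0.27 standard deviations
```

Note the failure mode mentioned above: if `placebo` shrinks towards zero, the first measure blows up even when the absolute drug-placebo difference is tiny, which is why d is usually preferred when the raw data allow it.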

Anyway as you can see paroxetine was better, relative to placebo, against PTSD, PMDD, obsessive-compulsive disorder, and social anxiety, than it was against depression measured with the "gold-standard" HAMD scale! In fact the only thing it was worse against was Generalized Anxiety Disorder. Using the alternative MADRS depression scale, the antidepressant effect was bigger, but still small compared to OCD and social anxiety.

This is rather remarkable. Everyone calls paroxetine "an antidepressant", yet at least in one important sense it works better against OCD and social anxiety than it does against depression!

In fact, is paroxetine an antidepressant at all? It works better on MADRS and very poorly on the HAMD; is this because the HAMD is a better scale of depression, and the MADRS actually measures anxiety or OCD symptoms?

That's a lovely neat theory... but in fact the HAMD-17 has two questions about anxiety, scoring 0-4 points each, so you can score up to 8 (or 12 if you count "hypochondriasis", which is basically health anxiety, so you probably should), out of a total maximum of 52. The MADRS has one anxiety item with a max score of 6 on a total of 60. So the HAMD is more "anxious" than the MADRS.
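The arithmetic behind that comparison, as a quick sanity check of the "anxiety weighting" of each scale:

```python
# Maximum anxiety points as a share of each scale's total,
# using the item counts described above.
hamd_anxiety_max, hamd_total = 8, 52    # 12 if "hypochondriasis" is counted
madrs_anxiety_max, madrs_total = 6, 60

print(hamd_anxiety_max / hamd_total)    # ~0.154: about 15% of HAMD-17 points
print(madrs_anxiety_max / madrs_total)  # 0.10: 10% of MADRS points
```

So even on the conservative count, anxiety items carry proportionally more weight on the HAMD-17 than on the MADRS.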

This is more than just a curiosity. Paroxetine's antidepressant effect was tiny in those aged 25 or under on the HAMD - the treatment benefit was just 9% of the placebo effect - but on the MADRS in the same age group, the benefit was 35%! So what is the HAMD measuring, and why is it different to the MADRS?

Honestly, it's hard to tell because the Hamilton scale is so messy. It measures depression and the other distressing symptoms which commonly go along with it. The idea, I think, was that it was meant to be a scale of the patient's overall clinical severity - how seriously they were suffering - rather than a measure of depression per se.

Which is fine. Except that most modern trials carefully exclude anyone with "comorbid" symptoms like anxiety, and on the other hand, recruit people with symptoms quite different to the depressed inpatients that Dr Max Hamilton would have seen when he invented the scale in 1960.

Yet 50 years later the HAMD-17, unmodified, is still the standard scale. It's been repeatedly shown to be multi-factorial (it doesn't measure one thing), no-one even agrees on how to interpret it, and a "new scale", the HAMD-6 - which consists of simply chucking out 11 questions and keeping the 6 that actually measure depression - has been shown to be better. Yet everyone still uses the HAMD-17, because everyone else does.

Link: I recently covered a dodgy paper about paroxetine in adolescents with depression; it wasn't included in this analysis, which covered adults only.

Carpenter DJ, Fong R, Kraus JE, Davies JT, Moore C, & Thase ME (2011). Meta-analysis of efficacy and treatment-emergent suicidality in adults by psychiatric indication and age subgroup following initiation of paroxetine therapy: a complete set of randomized placebo-controlled trials. The Journal of Clinical Psychiatry. PMID: 21367354


Thursday, March 3, 2011

Earthquakes And Antipsychotics

According to a clever little paper just out from Italy, prescriptions for antipsychotic drugs skyrocketed in the months following a major earthquake. But there are some surprising details.


On 6th April 2009, an earthquake hit L'Aquila, a medium-sized city in central Italy. Out of about 100,000 people living in the L'Aquila area, over 600 died and over 60,000 were displaced: a major disaster for the local people.

Rossi et al from the University of L'Aquila looked at medication prescriptions in the 6 months following the earthquake and compared them to the previous 6 months. This is not an ideal method; it would have been better to compare L'Aquila to a neighbouring district unaffected by the earthquake, to control for nationwide changes. But over a few months we wouldn't expect large changes anyway.

Anyway - they found that the number of "new" antidepressant prescriptions rose by 37%. However, prescriptions of non-psychiatric drugs like statins and anti-diabetic medications also rose by up to 50%. This is a bit sketchy but it suggests that the increase in antidepressants might just reflect increased post-disaster medical care for everyone in the area.

There was one big finding though: rates of antipsychotic prescribing more than doubled to 833 prescriptions, a 130% increase.
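Working backwards from the figures given (the paper reports the percentages; this back-calculation is my own), a 130% rise ending at 833 prescriptions implies a pre-quake baseline of roughly 360:

```python
# Back-calculating the baseline from the reported post-quake
# total and percentage increase.
post = 833          # new antipsychotic prescriptions after the quake
increase = 1.30     # reported 130% increase

baseline = post / (1 + increase)
print(round(baseline))                # ~362 prescriptions pre-quake
print((post - baseline) / baseline)   # 1.30, i.e. the 130% increase
```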

Does this mean that more people experienced psychosis in the aftermath of the trauma? That's one possibility - but a closer look reveals that the "extra" antipsychotics were given almost entirely to elderly people: just 0.3% of people under 45 got a new antipsychotic prescription, but 1% of those aged 65-75 did, and among those over 75 it reached 2.7% in men and a dizzying 3.8% in women.

Unfortunately, Rossi et al couldn't tell what the drugs were being prescribed for, because their dataset was based on drug sales. However, schizophrenia and other forms of psychosis generally strike younger people, not the elderly - whereas antipsychotics are often used as sedatives in elderly people, especially those suffering from dementia.

As the authors point out, this is a controversial practice:
A further observation concerns the appropriateness of prescribed drugs to a potentially vulnerable group such as the elderly. The majority of prescriptions were made by primary care physicians. This may partly explain the somewhat unusual increase in prescriptions for antipsychotic medications. It has been reported that antipsychotic medications are disproportionately prescribed to elderly subjects and need further regulation. This is particularly true in emergency and disaster situations.
In the UK a 2009 government report warned that antipsychotics were being used too freely in people with dementia, at the risk of causing significant harm, and said that they should be reserved for the most serious cases only. This study raises concerns that already questionable prescribing might get even worse following disasters.

Rossi A, Maggio R, Riccardi I, Allegrini F, & Stratta P (2011). A quantitative analysis of antidepressant and antipsychotic prescriptions following an earthquake in Italy. Journal of Traumatic Stress, 24(1), 129-32. PMID: 21351173


Wednesday, February 9, 2011

Antidepressants Don't Work...In Fish

Here at Neuroskeptic, fMRI scanning and antidepressants are both big topics.


As I discussed last week, fish - specifically salmon - are the next big thing in fMRI, and the number of salmon brains being scanned is growing at a remarkable rate. But fish haven't made much of an entrance into the world of antidepressants...until now.

Swedish scientists Holmberg et al have just published a paper asking: Does waterborne citalopram affect the aggressive and sexual behaviour of rainbow trout and guppy?

SSRI antidepressants, of which citalopram is one, are very popular. So popular, in fact, that non-trivial levels of SSRIs have been found in sewage and there's a concern that they might make their way into lakes and rivers and thereby affect the behaviour of the animals living there.

Holmberg et al set out to see what citalopram did to some fish, in an attempt to find out whether this is likely to be a major problem. They put some citalopram in the fish's water supplies and then tested their aggressiveness and their sex drives. It turns out that one of the main ways of measuring fish aggression is to put a mirror in their tank and see if they try to fight their own reflection. Fish are not very bright, really.

Anyway, the good news for fish everywhere was that seven days of citalopram exposure had no effect at all, even at doses much higher than those reported as a pollutant (the maximum dose was 0.1 mg/l). And the authors had no conflicts of interest: Big Pharma had nothing to do with this research, although Big Fish Farmer did because they bought the fish from one.

However, this may not be the end of the story, because it turned out that citalopram was very poorly absorbed into the fish's bloodstreams. But other antidepressants have been reported to accumulate in fish. Clearly, the only way to find out for sure what's going on would be to use fMRI...

Holmberg A, Fogel J, Albertsson E, Fick J, Brown JN, Paxéus N, Förlin L, Johnsson JI, & Larsson DG (2011). Does waterborne citalopram affect the aggressive and sexual behaviour of rainbow trout and guppy? Journal of Hazardous Materials. PMID: 21300431


Wednesday, February 2, 2011

Pharma: Tamed But Still A Big Beast

Everyone knows that Big Pharma go around lying, concealing data and distorting science in an effort to sell their pills. Right?

Actually, not so much. They used to, but most of the really scandalous stuff happened many years ago. The late 1980s through to about the turn of the century were the Golden Age of pharmaceutical company deception.

This is when we had drugs that don't work getting approved, with the trials showing that they don't work buried, and only now being uncovered. Data on drug-induced suicides seemingly fudged to make them seem less scary. Textbooks "written by" leading psychiatrists that were, allegedly, in fact ghost-written on behalf of drug companies. Ghost-writing programs with chuckle-some names like CASPPER. And so on.

But today, we have to give credit where credit's due: things have improved. Credit is due not to the companies but to the authorities, who put a stop to this nonsense through rules: mandatory clinical trial registration to ensure all the data are available and to stop outcome cherry-picking; anti-ghostwriting rules (though they're not yet universal); and so on.

What's shocking is how long it took to get these simple rules in place. The next generation of scientists and doctors will look back on the 1990s with disbelief: they let them do what? But at least we woke up eventually.

Still, there's more left to do. At the moment the main problem, as I see it, is that different jurisdictions have different rules, with the best ideas confined to one particular place. For instance, the USA has by far the most sensible system of clinical trial registration and reporting; Europe needs to catch up (we are, but slowly).

Yet the USA is also one of the only countries (with New Zealand) to permit direct-to-consumer (DTC) advertising for prescription drugs. To the rest of the world, this is really weird. We all have a right to free speech. But drug companies pushing drugs directly to patients just isn't a free speech issue, in Europe. Corporations don't speak, they advertise.

By encouraging self-diagnosis and self-treatment, DTC replaces medical judgement with marketing, undermining the doctor-patient relationship. The patient is meant to present their symptoms, and the doctor is meant to make a diagnosis and prescribe a treatment. DTC turns this into self-diagnosis and self-prescription: the fact that a doctor is still, technically, in charge and has to sign the prescription means little in practice.

So there's a lot to be happy about, but there's also a lot still to do.


Thursday, January 20, 2011

Retract That Seroxat?

Should a dodgy paper on antidepressants be retracted? And what's scientific retraction for, anyway?


Read all about it in a new article in the BMJ: Rules of Retraction. It's about the efforts of two academics, Jon Jureidini and Leemon McHenry. Their mission - so far unsuccessful - is to get this 2001 paper retracted: Efficacy of paroxetine in the treatment of adolescent major depression.

Jureidini is a member of Healthy Skepticism, a fantastic Australian organization that Neuroskeptic readers have encountered before. They've got lots of detail on the ill-fated "Study 329", including internal drug company documents, here.

So what's the story? Study 329 was a placebo-controlled trial of the SSRI paroxetine (Paxil, Seroxat) in 275 depressed adolescents. The paper concluded that "Paroxetine is generally well tolerated and effective for major depression in adolescents." It was published in the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP).

There are two issues here: whether paroxetine worked, and whether it was safe. On safety, the paper concluded that "Paroxetine was generally well tolerated...and most adverse effects were not serious." Technically true, but only because there were so many mild side effects.

In fact, 11 patients on paroxetine reported serious adverse events, including suicidal ideation or behaviour, and 7 were hospitalized. Just 2 patients in the placebo group had such events. Yet we are reassured that "Of the 11, only headache (1 patient) was considered by the treating investigator to be related to paroxetine treatment."

The drug company argue that it didn't become clear that paroxetine caused suicidal ideation in adolescents until after the paper was published. In 2002, British authorities reviewed the evidence and said that paroxetine should not be given in this age group.

That's as may be; the fact remains that in this paper there was a strongly raised risk. However, in fairness, all that data was there in the paper, for readers to draw their own conclusions from. The paper downplays it, but the numbers are there.

*

The efficacy question is where the allegations of dodgy practices are most convincing. The paper concludes that paroxetine worked, while imipramine, an older antidepressant, didn't.

Jureidini and McHenry say that paroxetine only worked on a few of the outcomes - ways of measuring depression and how much the patients improved. On most of the outcomes it didn't work, but the paper focusses on the ones where it did. According to the BMJ:

Study 329’s results showed that paroxetine was no more effective than the placebo according to measurements of eight outcomes specified by Martin Keller, professor of psychiatry at Brown University, when he first drew up the trial.

Two of these were primary outcomes...the drug also showed no significant effect for the initial six secondary outcome measures. [it] only produced a positive result when four new secondary outcome measures, which were introduced following the initial data analysis, were used... Fifteen other new secondary outcome measures failed to throw up positive results.

Here's the worst example. In the original protocol, two "primary" endpoints were specified: the change in the total Hamilton Scale (HAMD) score, and % of patients who 'responded', defined as either an improvement of more than 50% of their starting HAMD score or a final HAMD of 8 or below.

On neither of these measures did paroxetine work better than placebo at the p=0.05 significance level. It did work if you defined 'responded' to mean only a final HAMD of 8 or below, but this was not how it was defined in the protocol. In fact, the Methods section of the paper follows the protocol faithfully. Yet in the Results section, the authors still say that:
Of the depression-related variables, paroxetine separated statistically from placebo at endpoint among four of the parameters: response (i.e., primary outcome measure)...
It may seem like a subtle point. But it's absolutely crucial. Paroxetine just did not work on either pre-defined primary outcome measure, and the paper says that it did.

Finally, there were also issues of ghostwriting. I've never been that concerned by this in itself. If the science is bad, it's bad whoever wrote it. Still, it's hardly a good thing.

*

Does any of this matter? In one sense, no. Authorities have told doctors not to use paroxetine in adolescents with depression since 2002 (in the UK) and 2003 (in the USA). So retracting this paper wouldn't change much in the real world of treatment.

But in another sense, the stakes are enormous. If this paper were retracted, it would set a precedent and send a message: this kind of p-value fishing to get positive results, is grounds for retraction.

This would be huge, because this kind of fishing is sadly very common. Retracting this paper would be saying: selective outcome reporting is a form of misconduct. So this debate is really not about Seroxat, but about science.


There are no Senates or Supreme Courts in science. However, journal editors are in a unique position to help change this. They're just about the only people (grant awarders being the others) who have the power to actually impose sanctions on scientists. They have no official power. But they have clout.

Were the JAACAP to retract this paper, which they've so far said they have no plans to do, it would go some way to making these practices unacceptable. And I think no-one can seriously disagree that they should be unacceptable, and that science and medicine would be much better off if they were. Do we want more papers like this, or do we want fewer?

So I think the question of whether to retract or not boils down to whether it's OK to punish some people "to make an example of them", even though we know of plenty of others who have done the same, or worse, and won't be punished.

My feeling is: no, it's not very fair, but we're talking about multi-billion pound companies and a list of authors whose high-flying careers are not going to crash and burn just because one paper from 10 years ago gets pulled. If this were some poor 24 year old's PhD thesis, it would be different, but these are grown-ups who can handle themselves.

So I say: retract.

ResearchBlogging.orgNewman, M. (2010). The rules of retraction BMJ, 341 (dec07 4) DOI: 10.1136/bmj.c6985

Keller MB, et al. (2001). Efficacy of paroxetine in the treatment of adolescent major depression: a randomized, controlled trial. Journal of the American Academy of Child and Adolescent Psychiatry, 40 (7), 762-72 PMID: 11437014

Retract That Seroxat?

Should a dodgy paper on antidepressants be retracted? And what's scientific retraction for, anyway?


Read all about it in a new article in the BMJ: Rules of Retraction. It's about the efforts of two academics, Jon Jureidini and Leemon McHenry. Their mission - so far unsuccessful - is to get this 2001 paper retracted: Efficacy of paroxetine in the treatment of adolescent major depression.

Jureidini is a member of Healthy Skepticism, a fantastic Australian organization that Neuroskeptic readers have encountered before. They've got lots of detail on the ill-fated "Study 329", including internal drug company documents, here.

So what's the story? Study 329 was a placebo-controlled trial of the SSRI paroxetine (Paxil, Seroxat) in 275 depressed adolescents. The paper concluded that "Paroxetine is generally well tolerated and effective for major depression in adolescents." It was published in the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP).

There are two issues here: whether paroxetine worked, and whether it was safe. On safety, the paper concluded that "Paroxetine was generally well tolerated...and most adverse effects were not serious." Technically true, but only because there were so many mild side effects.

In fact, 11 patients on paroxetine reported serious adverse events, including suicidal ideation or behaviour, and 7 were hospitalized. Just 2 patients in the placebo group had such events. Yet we are reassured that "Of the 11, only headache (1 patient) was considered by the treating investigator to be related to paroxetine treatment."

The drug company argue that it didn't become clear that paroxetine caused suicidal ideation in adolescents until after the paper was published. In 2002, British authorities reviewed the evidence and said that paroxetine should not be given in this age group.

That's as maybe; the fact remains that in this paper there was a strongly raised risk. However, in fairness, all that data was there in the paper, for readers to draw their own conclusions from. The paper downplays it, but the numbers are there.

*

The efficacy question is where the allegations of dodgy practices are most convincing. The paper concludes that paroxetine worked, while imipramine, an older antidepressant, didn't.

Jureidini and McHenry say that paroxetine only worked on a few of the outcomes - ways of measuring depression and how much the patients improved. On most of the outcomes, it didn't work, but the paper focuses on the ones where it did. According to the BMJ:

Study 329’s results showed that paroxetine was no more effective than the placebo according to measurements of eight outcomes specified by Martin Keller, professor of psychiatry at Brown University, when he first drew up the trial.

Two of these were primary outcomes...the drug also showed no significant effect for the initial six secondary outcome measures. [it] only produced a positive result when four new secondary outcome measures, which were introduced following the initial data analysis, were used... Fifteen other new secondary outcome measures failed to throw up positive results.

Here's the worst example. In the original protocol, two "primary" endpoints were specified: the change in the total Hamilton Scale (HAMD) score, and % of patients who 'responded', defined as either an improvement of more than 50% of their starting HAMD score or a final HAMD of 8 or below.

On neither of these measures did paroxetine work better than placebo at the p=0.05 significance level. It did work if you defined 'responded' to mean only a final HAMD of 8 or below, but this was not how it was defined in the protocol. In fact, the Methods section of the paper follows the protocol faithfully. Yet in the Results section, the authors still say that:
Of the depression-related variables, paroxetine separated statistically from placebo at endpoint among four of the parameters: response (i.e., primary outcome measure)...
It may seem like a subtle point. But it's absolutely crucial. Paroxetine just did not work on either pre-defined primary outcome measure, and the paper says that it did.
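To see why this kind of outcome-switching is so misleading, here's a small back-of-envelope calculation (illustrative only - it is not Study 329's actual data, and it assumes independent outcomes, whereas real depression measures are correlated, which softens but doesn't remove the inflation):

```python
# With no true drug effect at all, the chance of at least one nominally
# significant (p < 0.05) result grows quickly with the number of
# independent outcome measures tested.

def family_wise_error(n_outcomes, alpha=0.05):
    """P(at least one false positive) across n independent tests at level alpha."""
    return 1 - (1 - alpha) ** n_outcomes

for n in (1, 2, 8, 19, 27):
    print(f"{n:2d} outcomes -> P(>=1 'positive' by chance) = {family_wise_error(n):.2f}")
```

Study 329 had 8 protocol-specified outcomes plus 19 added later, or 27 in total; with that many shots at p < 0.05, a few chance "hits" are close to guaranteed even for a completely ineffective drug (for 27 independent tests, the probability is about 75%).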

Finally, there were also issues of ghostwriting. I've never been that concerned by this in itself. If the science is bad, it's bad whoever wrote it. Still, it's hardly a good thing.

*

Does any of this matter? In one sense, no. Authorities have told doctors not to use paroxetine in adolescents with depression since 2002 (in the UK) and 2003 (in the USA). So retracting this paper wouldn't change much in the real world of treatment.

But in another sense, the stakes are enormous. If this paper were retracted, it would set a precedent and send a message: this kind of p-value fishing for positive results is grounds for retraction.

This would be huge, because this kind of fishing is sadly very common. Retracting this paper would be saying: selective outcome reporting is a form of misconduct. So this debate is really not about Seroxat, but about science.


There are no Senates or Supreme Courts in science. However, journal editors are in a unique position to help change this. They're just about the only people (grant awarders being the others) who can actually impose sanctions on scientists. They have no official power over authors, but they have clout.

Were the JAACAP to retract this paper, which they've so far said they have no plans to do, it would go some way to making these practices unacceptable. And I think no-one can seriously disagree that they should be unacceptable, and that science and medicine would be much better off if they were. Do we want more papers like this, or do we want fewer?

So I think the question of whether to retract or not boils down to whether it's OK to punish some people "to make an example of them", even though we know of plenty of others who have done the same, or worse, and won't be punished.

My feeling is: no, it's not very fair, but we're talking about multi-billion pound companies and a list of authors whose high-flying careers are not going to crash and burn just because one paper from 10 years ago gets pulled. If this were some poor 24-year-old's PhD thesis, it would be different, but these are grown-ups who can handle themselves.

So I say: retract.

Newman, M. (2010). The rules of retraction. BMJ, 341 (dec07 4). DOI: 10.1136/bmj.c6985

Keller MB, et al. (2001). Efficacy of paroxetine in the treatment of adolescent major depression: a randomized, controlled trial. Journal of the American Academy of Child and Adolescent Psychiatry, 40 (7), 762-72 PMID: 11437014

Friday, January 7, 2011

Antidepressants Still Don't Work In Mild Depression

A new paper has added to the growing ranks of studies finding that antidepressant drugs don't work in people with milder forms of depression: Efficacy of antidepressants and benzodiazepines in minor depression.


It's in the British Journal of Psychiatry and it's a meta-analysis of 6 randomized controlled trials on three different drugs. Antidepressants were no better than placebo in patients with "minor depressive disorder", which is like the better-known Major Depressive Disorder but... well, not as major, because you only need to have 2 symptoms instead of 5 from this list.

They also wanted to find out whether benzodiazepines (like Valium) worked in these people, but there just weren't any good studies out there.

The results look solid, and they fit with the fact that antidepressants don't work in people diagnosed with "major" depression, but who fall at the "milder" end of that range, something which several recent studies have shown. Neuroskeptic readers will, if they've been paying attention, find this entirely unsurprising.

But in fact, it's not just not news, it's positively ancient. 50 years ago, at the dawn of the antidepressant era, it was commonly said that antidepressants don't work in everyone with "depression": they work best in people with endogenous depression, and less well, or not at all, in those with "neurotic" or "reactive" depressions (see, e.g. 1, 2, 3, but the literature goes back even further).

"Endogenous" is not strictly the same as "severe". In practice, though, the two concepts have never really been clearly separated, and they're largely equivalent today: the leading measure of "severity", the Hamilton Scale, measures symptoms, and arguably these are mostly (though not entirely) the symptoms of the old concept of endogenous depression. The Hamilton Scale was formulated in 1960, when modern concepts of "minor depressive disorder" and "major depressive disorder" were unknown.

Why then are we only now working out that antidepressants only work in some people? There's one obvious answer: Prozac, which arrived in 1987. Before Prozac, antidepressants were serious stuff. They could easily kill you in overdose, and they had a lot of side effects. Many of them even meant that you couldn't eat cheese. As a result, they weren't used lightly.

Prozac and the other SSRIs changed the game completely. They're much less toxic, the side effects are milder, and you can eat as much cheese as you want. So it's very easy to prescribe an SSRI - maybe it won't work, but it can't hurt, so why not try it...?

As a result, I think, the concept of "depression" broadened. Before Prozac, depression was inherently serious, because the treatments were serious. After Prozac, it didn't have to be. Drug company marketing no doubt helped this process along, but marketing has to have something to work with. Over the past 25 years, terms like "endogenous", "neurotic" etc. largely disappeared from the literature, replaced by the single construct of "Major Depression".

For nearly 1,000 years, the great scientific and philosophical works of the ancient Greeks and Romans were lost to Europeans. Only when Christian scholars rediscovered them in the libraries of the Islamic world did Europe begin to remember what it had forgotten. We call those the Dark Ages. Will the past 25 years be remembered as psychiatry's Dark Age?

Barbui, C., Cipriani, A., Patel, V., Ayuso-Mateos, J., & van Ommeren, M. (2011). Efficacy of antidepressants and benzodiazepines in minor depression: systematic review and meta-analysis. The British Journal of Psychiatry, 198 (1), 11-16. DOI: 10.1192/bjp.bp.109.076448

Thursday, December 23, 2010

Depression Treatment Increased From 1998 to 2007

A paper just out reports on the changing patterns of treatment for depression in the USA, over the period from 1998 to 2007.

The headline news is that it increased: the overall rate of people treated for some form of "depression" went from 2.37% to 2.88% per year. That's a relative increase of 21%, which is not trivial, but it's much smaller than the increase over the previous decade: the rate was just 0.73% in 1987, so it more than tripled between 1987 and 1998.

But the increase was concentrated in some groups of people:
  • Americans over 50 accounted for the bulk of the rise. Their use went up by about 50%, while rates in younger people stayed almost steady. In '98 the peak age band was 35-49, now it's 50-64, with almost 5% of those people getting treated in any given year.
  • Men's rates of treatment went up by over 40% while women's only increased by 10%. Women are still more likely to get treated for depression than men, though, with a ratio of 1.7 women for each 1 man. But that ratio is a lot closer than it used to be.
  • Black people's rates increased hugely, by 120%. Rates in black people now stand at 2.2% which is close behind whites at 3.2%. Hispanics are now the least treated major ethnic group at 1.9%: in previous studies, blacks were the least treated. (There was no data on Asians or others).
So the increase wasn't an across the board rise, as we saw from '87 to '98. Rather the '98-'07 increase was more of a "catching up" by people who've historically had low levels of treatment, closing in on the level of the historically highest group: middle-aged white women.
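As a sanity check, the headline percentages follow directly from the rates quoted in this post (the numbers below are the post's, not additional data from the paper):

```python
# Depression-treatment rates quoted above (% of Americans treated per year)
rate_1987, rate_1998, rate_2007 = 0.73, 2.37, 2.88

rise_98_07 = (rate_2007 - rate_1998) / rate_1998 * 100
rise_87_98 = (rate_1998 - rate_1987) / rate_1987 * 100

print(f"1998-2007: +{rise_98_07:.1f}%")  # the ~21% rise discussed above
print(f"1987-1998: +{rise_87_98:.0f}%")  # a far steeper climb the decade before
```

The earlier decade's rise works out at over 200%, which is why the 21% figure counts as a slowdown.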

In terms of what treatments people got, out of everyone treated for depression, 80% got some kind of drugs, and that didn't change much. But use of psychotherapy declined a bit from 54% to 43% (some people got both).

What's also interesting is that the same authors reported last year that, over pretty much the same time period ('96 to '05), the number of Americans who used antidepressants in any given year sky-rocketed from 5% to 10% - that is to say, much faster than the rate of depression treatment rose! And the data are comparable, because they came from the same national MEPS surveys.

In other words, the decade must have seen antidepressants increasingly being used to treat stuff other than depression. What stuff? Well, all kinds of things. SSRIs are popular in everything from anxiety and OCD to premature ejaculation. Several of the "other new" drugs, like mirtazapine and trazodone, are very good at putting you to sleep (rather too good, some users would say...)
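A crude back-of-envelope using the post's own figures shows how large that gap is (a hypothetical simplification: it ignores the slight mismatch in survey years and assumes the 80% drug-treatment figure applies across the board):

```python
# Figures quoted above
antidepressant_users = 10.0      # % of Americans using an antidepressant per year (~2005)
treated_for_depression = 2.88    # % treated for depression per year (2007)
on_drugs_for_depression = 0.80 * treated_for_depression  # ~80% of them got drugs

other_uses = antidepressant_users - on_drugs_for_depression
share = other_uses / antidepressant_users * 100
print(f"~{share:.0f}% of antidepressant users weren't being treated for depression")
```

On these rough numbers, something like three-quarters of antidepressant use falls outside depression treatment.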

Marcus, S.C., & Olfson, M. (2010). National trends in the treatment for depression from 1998 to 2007. Archives of General Psychiatry, 67 (12), 1265-73. PMID: 21135326
