
Sunday, March 6, 2011

Paxil: The Whole Truth?

Paroxetine, aka Paxil aka Seroxat, is an SSRI antidepressant.

Like other SSRIs, its reputation has see-sawed over time. Hailed as miracle drugs in the 1990s and promoted for everything from depression to "separation anxiety" in dogs, they fell from grace over the past decade.

First, concerns emerged over withdrawal symptoms and suicidality especially in young people. Then more recently their antidepressant efficacy came into serious question. Paroxetine has arguably the worst image of all SSRIs, although whether it's much different to the rest is unclear.

Now a new paper claims to provide a definitive assessment of the safety and efficacy of paroxetine in adults (age 18+). The lead authors are from GlaxoSmithKline, who invented paroxetine. So it's no surprise that the text paints GSK and their product in a favourable light, but the data warrant a close look and the results are rather interesting - and complicated.

They took all of the placebo-controlled trials of paroxetine for any psychiatric disorder - because it wasn't just trialled in depression, but also in PTSD, anxiety, and more. They excluded studies with fewer than 30 people; this makes sense, though the cutoff is somewhat arbitrary - why not 40, or 20? Anyway, they ended up with 61 trials.

First they looked at suicide. In a nutshell paroxetine increased suicidal "behaviour or ideation" in younger patients (age 25 or below) relative to placebo, whether or not they were being treated for depression. In older patients, it only increased suicidality in the depression trials, and the effect was smaller. I've put a red dot where paroxetine was worse than placebo; this doesn't mean the effect was "statistically significant", but the numbers are so small that this is fairly meaningless. Just look at the numbers.

This is not very new. It's been accepted for a while that broadly the same applies when you look at trials of other antidepressants. Whether this causes extra suicides in the real world is a big question.

When it comes to efficacy, however, we find some rather startling info that's not been presented together in one article before, to my knowledge. Here's a graph showing the effect of paroxetine over-and-above placebo in all the different disorders, expressed as a proportion of the improvement seen in the placebo group.

Now I should point out that I just made this measure up. It's not ideal. If the placebo response is very small, then a tiny drug effect will seem large by comparison, even if what this really means is that neither drug nor placebo do any good.

However, the flip side of that coin is that it controls for the fact that rating scales for some disorders might simply be more likely to show change than others. The d score is a more widely used standardized measure of effect size - though it has its own shortcomings - and I'd like to have known those values, but the data they provide don't allow us to easily calculate it. You could do it from the GSK database, but it would take ages.
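
To make the distinction between the two measures concrete, here's a minimal sketch. The numbers are made up for illustration, not taken from the paper:

```python
# Two ways of expressing a drug's effect relative to placebo.
# All numbers below are hypothetical.

def placebo_ratio(drug_change, placebo_change):
    """Drug effect over-and-above placebo, as a fraction of the
    placebo group's improvement (the ad-hoc measure used in the post)."""
    return (drug_change - placebo_change) / placebo_change

def cohens_d(drug_change, placebo_change, pooled_sd):
    """Standardized effect size: difference in mean change scores
    divided by the pooled standard deviation."""
    return (drug_change - placebo_change) / pooled_sd

# Suppose the drug group improves by 12 points, placebo by 10,
# with a pooled SD of 8:
print(placebo_ratio(12, 10))  # 0.2, i.e. 20% of the placebo improvement
print(cohens_d(12, 10, 8))    # 0.25
```

Note how the ratio measure blows up when the placebo change is small, which is exactly the caveat raised above, while d depends instead on the spread of scores.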

Anyway as you can see paroxetine was better, relative to placebo, against PTSD, PMDD, obsessive-compulsive disorder, and social anxiety, than it was against depression measured with the "gold-standard" HAMD scale! In fact the only thing it was worse against was Generalized Anxiety Disorder. Using the alternative MADRS depression scale, the antidepressant effect was bigger, but still small compared to OCD and social anxiety.

This is rather remarkable. Everyone calls paroxetine "an antidepressant", yet at least in one important sense it works better against OCD and social anxiety than it does against depression!

In fact, is paroxetine an antidepressant at all? It works better on MADRS and very poorly on the HAMD; is this because the HAMD is a better scale of depression, and the MADRS actually measures anxiety or OCD symptoms?

That's a lovely neat theory... but in fact the HAMD-17 has two questions about anxiety, scoring 0-4 points each, so you can score up to 8 (or 12 if you count "hypochondriasis", which is basically health anxiety, so you probably should), out of a total maximum of 52. The MADRS has one anxiety item with a max score of 6 on a total of 60. So the HAMD is more "anxious" than the MADRS.
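
The arithmetic behind that comparison is easy to check, using only the item counts and score maxima just described:

```python
# Maximum share of the total score that anxiety items can contribute,
# for each scale, using the figures given in the text.

hamd_anxiety_max = 2 * 4                        # two anxiety items, 0-4 points each
hamd_anxiety_broad = hamd_anxiety_max + 4       # counting hypochondriasis as well
hamd_total = 52

madrs_anxiety_max = 6                           # one anxiety item, max 6 points
madrs_total = 60

print(hamd_anxiety_max / hamd_total)            # about 0.154
print(hamd_anxiety_broad / hamd_total)          # about 0.231
print(madrs_anxiety_max / madrs_total)          # 0.1
```

So anxiety can account for roughly 15% (or 23%) of a HAMD-17 score, versus 10% of a MADRS score.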

This is more than just a curiosity. Paroxetine's antidepressant effect was tiny in those aged 25 or under on the HAMD - the treatment effect was just 9% of the placebo effect - but on the MADRS in the same age group, the benefit was 35%! So what is the HAMD measuring, and why is it different to the MADRS?

Honestly, it's hard to tell because the Hamilton scale is so messy. It measures depression and the other distressing symptoms which commonly go along with it. The idea, I think, was that it was meant to be a scale of the patient's overall clinical severity - how seriously they were suffering - rather than a measure of depression per se.

Which is fine. Except that most modern trials carefully exclude anyone with "comorbid" symptoms like anxiety, and on the other hand, recruit people with symptoms quite different to the depressed inpatients that Dr Max Hamilton would have seen when he invented the scale in 1960.

Yet 50 years later the HAMD-17, unmodified, is still the standard scale. It's been repeatedly shown to be multi-factorial (it doesn't measure one thing), no-one even agrees on how to interpret it, and a "new" scale, the HAMD-6 - which consists of simply chucking out 11 questions and keeping the 6 that actually measure depression - has been shown to perform better. Yet everyone still uses the HAMD-17, because everyone else does.

Link: I recently covered a dodgy paper about paroxetine in adolescents with depression; it wasn't included in this analysis, which covered adults only.

Carpenter DJ, Fong R, Kraus JE, Davies JT, Moore C, & Thase ME (2011). Meta-analysis of efficacy and treatment-emergent suicidality in adults by psychiatric indication and age subgroup following initiation of paroxetine therapy: a complete set of randomized placebo-controlled trials. The Journal of Clinical Psychiatry. PMID: 21367354


Thursday, March 3, 2011

Earthquakes And Antipsychotics

According to a clever little paper just out from Italy, prescriptions for antipsychotic drugs skyrocketed in the months following a major earthquake. But there are some surprising details.


On 6th April 2009, an earthquake hit L'Aquila, a medium-sized city in central Italy. Out of about 100,000 people living in the L'Aquila area, over 600 died and over 60,000 were displaced: a major disaster for the local people.

Rossi et al from the University of L'Aquila looked at medication prescriptions in the 6 months following the earthquake and compared them to the previous 6 months. This is not an ideal method; it would have been better to compare L'Aquila to a neighbouring district unaffected by the earthquake, to control for nationwide changes - but over a few months we wouldn't expect large national changes.

Anyway - they found that the number of "new" antidepressant prescriptions rose by 37%. However, prescriptions of non-psychiatric drugs like statins and anti-diabetic medications also rose by up to 50%. This is a bit sketchy but it suggests that the increase in antidepressants might just reflect increased post-disaster medical care for everyone in the area.

There was one big finding though: rates of antipsychotic prescribing more than doubled to 833 prescriptions, a 130% increase.
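
As a quick sanity check on those figures, using only the numbers reported here, the implied pre-earthquake baseline can be back-calculated:

```python
# If prescriptions rose by 130% to reach 833, the pre-earthquake
# baseline must have been roughly 833 / 2.3.

post_quake = 833          # prescriptions in the 6 months after
increase = 1.30           # reported 130% rise

baseline = post_quake / (1 + increase)
print(round(baseline))    # about 362 prescriptions in the prior 6 months
```

So "more than doubled" and "a 130% increase" are consistent: 2.3 times roughly 362 gives the reported 833.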

Does this mean that more people experienced psychosis in the aftermath of the trauma? That's one possibility - but a closer look reveals that the "extra" antipsychotics were given almost entirely to elderly people: just 0.3% of people under 45 got a new antipsychotic prescription, but 1% of those aged 65-75 did, and in those 75+ it reached 2.7% of men and a dizzying 3.8% of women.

Unfortunately Rossi et al couldn't tell what the drugs were being prescribed for, because their dataset was based on drug sales. However, it's known that schizophrenia and other forms of psychosis generally strike younger people, not the elderly. Antipsychotics, on the other hand, are often used as sedatives in elderly people, especially those suffering from dementia.

As the authors point out, this is a controversial practice:
A further observation concerns the appropriateness of prescribed drugs to a potentially vulnerable group such as the elderly. The majority of prescriptions were made by primary care physicians. This may partly explain the somewhat unusual increase in prescriptions for antipsychotic medications. It has been reported that antipsychotic medications are disproportionately prescribed to elderly subjects and need further regulation. This is particularly true in emergency and disaster situations.
In the UK a 2009 government report warned that antipsychotics were being used too freely in people with dementia, at the risk of causing significant harm, and said that they should be reserved for the most serious cases only. This study raises concerns that already questionable prescribing might get even worse following disasters.

Rossi A, Maggio R, Riccardi I, Allegrini F, & Stratta P (2011). A quantitative analysis of antidepressant and antipsychotic prescriptions following an earthquake in Italy. Journal of Traumatic Stress, 24(1), 129-32. PMID: 21351173


Tuesday, March 1, 2011

The Mystery of "Whoonga"


According to a disturbing BBC news story, South African drug addicts are stealing medication from HIV+ people and using it to get high:
'Whoonga' threat to South African HIV patients

"Whoonga" is, allegedly, the street name for efavirenz (aka Stocrin), one of the most popular antiretroviral drugs. The pills are apparently crushed, mixed with marijuana, and smoked for their hallucinogenic effects.

This is not, in fact, a new story; Scientific American covered it 18 months ago, and the BBC themselves did in 2008 (although they didn't name efavirenz).

Edit 16:00: In fact the picture is even messier than I first thought. Some sources - e.g. Wikipedia and the articles it links to, mostly from South Africa - suggest that "whoonga" is actually a 'brand' of heroin, and that the antiretrovirals may not be the main ingredient, if they're an ingredient at all. If this is true, then the BBC article is misleading. Edit 2: see the Comments for more on this...

Why would an antiviral drug get you high? This is where things get rather mysterious. Efavirenz is known to enter the brain, unlike most other HIV drugs, and psychiatric side effects - including anxiety, depression, altered dreams, and even hallucinations - are common with efavirenz, especially at high doses (1,2,3), but they're usually mild and temporary. But what's the mechanism?

No-one knows, basically. Blank et al found that efavirenz causes a positive result on urine screening for benzodiazepines (like Valium). This makes sense given the chemical structure:
Efavirenz is not a benzodiazepine, because it doesn't have the defining diazepine ring (the one with two Ns). However, as you can see, it has a lot in common with certain benzos such as oxazepam and lorazepam.

However, while this might well explain why it confuses urine tests, it doesn't by itself go far towards explaining the reported psychoactive effects. Oxazepam and lorazepam don't cause hallucinations or psychosis, and they reduce anxiety rather than causing it.

They also found that efavirenz caused a false positive for THC, the active ingredient in marijuana; this was probably caused by the glucuronide metabolite. Could this metabolite have marijuana-like effects? No-one knows at present.

Beyond that there's been little research on the effects of efavirenz in the brain. This 2010 paper reviewed the literature and found almost nothing. There were some suggestions that it might affect inflammatory cytokines or creatine kinase, but these are not obvious candidates for the reported effects.

Could the liver be responsible, rather than the brain? Interestingly, the 2010 paper says that efavirenz inhibits three liver enzymes: CYPs 2C9, 2C19, and 3A4. All three are involved in the breakdown of THC, so, in theory, efavirenz might boost the effects of marijuana by this mechanism - but that wouldn't explain the psychiatric side effects seen in people who take the drug for HIV and don't smoke weed.

Drugs that cause hallucinations generally either agonize 5HT2A receptors or block NMDA receptors. Off the top of my head, I can't see any similarities between efavirenz and drugs that target those systems, like LSD (5HT2A) or ketamine and PCP (NMDA), but I'm no chemist, and anyway, structural similarity is not always a good guide to what drugs do.

If I were interested in working out what's going on with efavirenz, I'd start by looking at GABA, the neurotransmitter that's the target of benzos. Maybe the almost-a-benzodiazepine-but-not-quite structure means that it causes some unusual effects on GABA receptors? No-one knows at present. Then I'd move on to 5HT2A and NMDA receptors.

Finally, it's always possible that the users are just getting stoned on cannabis and mistakenly thinking that the efavirenz is making it better through the placebo effect. Stranger things have happened. If so, it would make the whole situation even more tragic than it already is.

Cavalcante GI, Capistrano VL, Cavalcante FS, Vasconcelos SM, Macêdo DS, Sousa FC, Woods DJ, & Fonteles MM (2010). Implications of efavirenz for neuropsychiatry: a review. The International Journal of Neuroscience, 120(12), 739-45. PMID: 20964556


Sunday, February 13, 2011

The Mystery of Stiff Person Syndrome

"Stiff Person Syndrome" (SPS) is a rare neurological disease with a silly name but serious symptoms.

SPS is not, in fact, a disorder caused by an overdose of Viagra: its defining feature is uncontrollable muscle rigidity, which comes and goes in bouts but generally gets worse over time. However, other symptoms are seen too, including depression, anxiety, and further neurological features such as cerebellar ataxia.

What causes SPS? Well, it's been known for over 20 years that most SPS patients have antibodies against the enzyme GAD65, which is required for the production of GABA, the main inhibitory neurotransmitter in the brain. The body shouldn't be producing antibodies against its own proteins, but unfortunately this does happen quite often, for various reasons, and the result is autoimmune diseases.

So this all seems to make sense. We know that GABA causes muscle relaxation by reducing the brain's input to the muscles. This is why GABA drugs like Valium are muscle-relaxants, and it's part of the reason why drunk people tend to stagger around.

This also explains the anxiety symptoms, because Valium and beer make you less anxious, while drugs that block GABA cause panic attacks. Anti-GAD65 antibodies block GAD, so less GABA gets made. So SPS is autoimmunity against GAD65. Mystery solved?

Not quite. Anti-GAD65 antibodies are also seen in most people with Type I diabetes, but the vast majority of diabetics luckily don't suffer SPS. Mystery remains.

Two studies just out investigated exactly what the antibodies produced by SPS patients do. Geis et al purified the antibodies from a 53 year old woman with SPS and serious anxiety, and injected them into the brains of some rats.

The rats became very anxious. Here's what the cowardly critters did in a standard rodent anxiety test: they avoided the open spaces, which are naturally scary to rodents, who prefer dark, enclosed places.

This was associated with reduced GABA production.

Meanwhile Manto et al found that anti-GAD65 antibodies from another patient with SPS caused very different effects in rat brains compared to the antibodies derived from a patient with autoimmune cerebellar ataxia, but no SPS symptoms. They also found that two kinds of off-the-shelf anti-GAD65 antibodies commonly used in research had different effects as well.

Taken together this all suggests that SPS is caused by anti-GAD65 antibodies, but they have to be a particular type. Different antibodies cause different symptoms even though they all bind to GAD65.

Presumably this is because GAD65 is a big protein, and antibodies could bind to any part of it. Only ones that block the "business end" - the part which actually catalyzes the formation of GABA - will cause problems. A bit like how if you get shot in the heart, that's the end of you, but get shot in the foot and it probably won't be.


Geis C, et al. (2011). Human Stiff-Person Syndrome IgG Induces Anxious Behavior in Rats. PLoS ONE, 6(2). DOI: 10.1371/journal.pone.0016775

Manto MU, Hampe CS, Rogemond V, & Honnorat J (2011). Respective implications of glutamate decarboxylase antibodies in stiff person syndrome and cerebellar ataxia. Orphanet Journal of Rare Diseases, 6(1). PMID: 21294897

The Mystery of Stiff Person Syndrome

"Stiff Person Syndrome" (SPS) is a rare neurological disease with a silly name but serious symptoms.

Not in fact a disorder caused by an overdose of Viagra, the defining feature of SPS is uncontrollable muscle rigidity, which comes and goes in bouts, but generally gets worse over time. However, other symptoms are seen including depression, anxiety, and other neurological features such as cerebellar ataxia.

What causes SPS? Well, it's been known for over 20 years that most SPS patients have antibodies against the enzyme GAD65, which is required for the production of GABA, the main inhibitory neurotransmitter in the brain. The body shouldn't be producing antibodies against its own proteins, but unfortunately this does happen quite often, for various reasons, and the result is autoimmune diseases.

So this all seems to make sense. We know that GABA causes muscle relaxation by reducing the brain's input to the muscles. This is why GABA drugs like Valium are muscle-relaxants, and it's part of the reason why drunk people tend to stagger around.

This also explains the anxiety symptoms, because Valium and beer make you less anxious, while drugs that block GABA cause panic attacks. Anti-GAD65 antibodies block GAD, so less GABA gets made. So SPS is autoimmunity against GAD65. Mystery solved?

Not quite. Anti-GAD65 antibodies are also seen in most people with Type I diabetes, but the vast majority of diabetics luckily don't suffer SPS. Mystery remains.

Two studies just out investigated exactly what the antibodies produced by SPS patients do. Geis et al purified the antibodies from a 53 year old woman with SPS and serious anxiety, and injected them into the brains of some rats.

The rats became very anxious. In a standard rodent anxiety test, the cowardly critters avoided the open spaces, which are naturally scary to rodents, who prefer dark, enclosed places.

This was associated with reduced GABA production.

Meanwhile Manto et al found that anti-GAD65 antibodies from another patient with SPS caused very different effects in rat brains compared to the antibodies derived from a patient with autoimmune cerebellar ataxia, but no SPS symptoms. They also found that two kinds of off-the-shelf anti-GAD65 antibodies commonly used in research had different effects as well.

Taken together this all suggests that SPS is caused by anti-GAD65 antibodies, but they have to be a particular type. Different antibodies cause different symptoms even though they all bind to GAD65.

Presumably this is because GAD65 is a big protein, and antibodies could bind to any part of it. Only the ones that block the "business end" - the part which actually catalyzes the formation of GABA - will cause problems. A bit like how a shot in the heart is the end of you, but a shot in the foot probably isn't.


Geis C, et al. (2011). Human Stiff-Person Syndrome IgG Induces Anxious Behavior in Rats. PLoS ONE, 6(2). DOI: 10.1371/journal.pone.0016775

Manto MU, Hampe CS, Rogemond V, & Honnorat J (2011). Respective implications of glutamate decarboxylase antibodies in stiff person syndrome and cerebellar ataxia. Orphanet journal of rare diseases, 6 (1) PMID: 21294897

Wednesday, February 9, 2011

Antidepressants Don't Work...In Fish

Here at Neuroskeptic, fMRI scanning and antidepressants are both big topics.


As I discussed last week, fish - specifically salmon - are the next big thing in fMRI and the number of salmon brains being scanned is growing at a remarkable rate. But fish haven't made much of an entrance into the world of antidepressants...until now.

Swedish scientists Holmberg et al have just published a paper asking: Does waterborne citalopram affect the aggressive and sexual behaviour of rainbow trout and guppy?

SSRI antidepressants, of which citalopram is one, are very popular. So popular, in fact, that non-trivial levels of SSRIs have been found in sewage and there's a concern that they might make their way into lakes and rivers and thereby affect the behaviour of the animals living there.

Holmberg et al set out to see what citalopram did to some fish in an attempt to find out whether this is likely to be a major problem. So they put some citalopram in the fish's water supplies and then tested their aggressiveness and also their sex drives. It turns out that one of the main ways of measuring fish aggression is to put a mirror in their tank and see if they try to fight their own reflection. Fish are not very bright, really.

Anyway, the good news for fish everywhere was that seven days of citalopram exposure had no effect at all, even at doses much higher than those reported as a pollutant (the maximum dose was 0.1 mg/l). And the authors had no conflicts of interest: Big Pharma had nothing to do with this research, although Big Fish Farming arguably did, in the sense that the fish were bought from a commercial farm.

However, this may not be the end of the story, because it turned out that citalopram was very poorly absorbed into the fish's bloodstreams. But other antidepressants have been reported to accumulate in fish. Clearly, the only way to find out for sure what's going on would be to use fMRI...

Holmberg A, Fogel J, Albertsson E, Fick J, Brown JN, Paxéus N, Förlin L, Johnsson JI, & Larsson DG (2011). Does waterborne citalopram affect the aggressive and sexual behaviour of rainbow trout and guppy? Journal of Hazardous Materials. PMID: 21300431

Wednesday, February 2, 2011

Pharma: Tamed But Still A Big Beast

Everyone knows that Big Pharma go around lying, concealing data and distorting science in an effort to sell their pills. Right?

Actually, not so much. They used to, but most of the really scandalous stuff happened many years ago. The late 1980s through to about the turn of the century was the Golden Age of pharmaceutical company deception.

This was when ineffective drugs got approved, while the trials showing they didn't work were buried - and are only now being uncovered. Data on drug-induced suicides were seemingly fudged to make them seem less scary. Textbooks "written by" leading psychiatrists were, allegedly, in fact ghost-written on behalf of drug companies, via ghost-writing programs with chucklesome names like CASPPER. And so on.

But today, we have to give credit where credit's due: things have improved. The credit goes not to the companies but to the authorities who put a stop to this nonsense through rules: mandatory clinical trial registration, to ensure all the data are available and stop outcome cherry-picking; anti-ghostwriting rules (although these aren't yet universal); and so on.

What's shocking is how long it took to get these simple rules in place. The next generation of scientists and doctors will look back on the 1990s with disbelief: they let them do what? But at least we woke up eventually.

Still, there's more left to do. At the moment the main problem, as I see it, is that different jurisdictions have different rules, with the best ideas confined to one particular place. For instance, the USA has by far the most sensible system of clinical trial registration and reporting. Europe needs to catch up (we are catching up, but slowly).

Yet the USA is also one of only two countries (the other being New Zealand) to permit direct-to-consumer (DTC) advertising for prescription drugs. To the rest of the world, this is really weird. We all have a right to free speech. But drug companies pushing drugs directly to patients just isn't a free speech issue, in Europe. Corporations don't speak, they advertise.

DTC replaces medical judgement with marketing, undermining the doctor-patient relationship. The patient is meant to present their symptoms, and the doctor is meant to make a diagnosis and prescribe a treatment. DTC instead encourages self-diagnosis and self-prescription: the fact that a doctor is still, technically, in charge and has to sign the prescription means little in practice.

So there's a lot to be happy about, but there's also a lot still to do.

Thursday, January 20, 2011

Retract That Seroxat?

Should a dodgy paper on antidepressants be retracted? And what's scientific retraction for, anyway?


Read all about it in a new article in the BMJ: Rules of Retraction. It's about the efforts of two academics, Jon Jureidini and Leemon McHenry. Their mission - so far unsuccessful - is to get this 2001 paper retracted: Efficacy of paroxetine in the treatment of adolescent major depression.

Jureidini is a member of Healthy Skepticism, a fantastic Australian organization that Neuroskeptic readers have encountered before. They've got lots of detail on the ill-fated "Study 329", including internal drug company documents, here.

So what's the story? Study 329 was a placebo-controlled trial of the SSRI paroxetine (Paxil, Seroxat) in 275 depressed adolescents. The paper concluded that "Paroxetine is generally well tolerated and effective for major depression in adolescents." It was published in the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP).

There are two issues here: whether paroxetine worked, and whether it was safe. On safety, the paper concluded that "Paroxetine was generally well tolerated...and most adverse effects were not serious." Technically true, but only because there were so many mild side effects.

In fact, 11 patients on paroxetine reported serious adverse events, including suicidal ideation or behaviour, and 7 were hospitalized. Just 2 patients in the placebo group had such events. Yet we are reassured that "Of the 11, only headache (1 patient) was considered by the treating investigator to be related to paroxetine treatment."
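To put those counts in perspective, here's a back-of-envelope relative risk calculation. The event counts (11 vs 2) are from the paper; the arm sizes (93 on paroxetine, 87 on placebo) are the published group sizes, but treat this as a rough sketch, not a formal analysis (which would also need confidence intervals):

```python
# Back-of-envelope relative risk of serious adverse events in Study 329.
# Event counts are from the paper; arm sizes are the published group
# sizes, used here as approximate figures for illustration.
par_events, par_n = 11, 93   # paroxetine arm
pla_events, pla_n = 2, 87    # placebo arm

par_risk = par_events / par_n    # ~0.118, i.e. about 12%
pla_risk = pla_events / pla_n    # ~0.023, i.e. about 2%
relative_risk = par_risk / pla_risk

print(f"Paroxetine: {par_risk:.1%}, Placebo: {pla_risk:.1%}, "
      f"RR ≈ {relative_risk:.1f}")
```

On these numbers, a serious adverse event was roughly five times as likely on paroxetine as on placebo - which is the kind of signal "generally well tolerated" glosses over.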

The drug company argue that it didn't become clear that paroxetine caused suicidal ideation in adolescents until after the paper was published. In 2002, British authorities reviewed the evidence and said that paroxetine should not be given in this age group.

That's as may be; the fact remains that in this paper there was a strongly raised risk. However, in fairness, all that data was there in the paper, for readers to draw their own conclusions from. The paper downplays it, but the numbers are there.

*

The efficacy question is where the allegations of dodgy practices are most convincing. The paper concludes that paroxetine worked, while imipramine, an older antidepressant, didn't.

Jureidini and McHenry say that paroxetine only worked on a few of the outcomes - ways of measuring depression and how much the patients improved. On most of the outcomes, it didn't work, but the paper focuses on the ones where it did. According to the BMJ:

Study 329’s results showed that paroxetine was no more effective than the placebo according to measurements of eight outcomes specified by Martin Keller, professor of psychiatry at Brown University, when he first drew up the trial.

Two of these were primary outcomes...the drug also showed no significant effect for the initial six secondary outcome measures. [it] only produced a positive result when four new secondary outcome measures, which were introduced following the initial data analysis, were used... Fifteen other new secondary outcome measures failed to throw up positive results.

Here's the worst example. In the original protocol, two "primary" endpoints were specified: the change in the total Hamilton Scale (HAMD) score, and % of patients who 'responded', defined as either an improvement of more than 50% of their starting HAMD score or a final HAMD of 8 or below.

On neither of these measures did paroxetine work better than placebo at the p=0.05 significance level. It did work if you defined 'responded' to mean only a final HAMD of 8 or below, but this was not how it was defined in the protocol. In fact, the Methods section of the paper follows the protocol faithfully. Yet in the Results section, the authors still say that:
Of the depression-related variables, paroxetine separated statistically from placebo at endpoint among four of the parameters: response (i.e., primary outcome measure)...
It may seem like a subtle point. But it's absolutely crucial. Paroxetine just did not work on either pre-defined primary outcome measure, and the paper says that it did.
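To see how much the definition of "response" matters, here's a small sketch using hypothetical patients (not the trial's actual data), comparing the protocol's definition with the narrower one under which paroxetine "worked":

```python
# Protocol definition of "response": improvement of more than 50% of the
# starting HAMD score, OR a final HAMD of 8 or below.
def responded_protocol(baseline, final):
    return (baseline - final) > 0.5 * baseline or final <= 8

# Altered definition: final HAMD of 8 or below only.
def responded_altered(baseline, final):
    return final <= 8

# Hypothetical patients: (baseline HAMD, final HAMD) pairs.
patients = [(30, 14), (20, 12), (18, 8), (25, 7)]

protocol = [responded_protocol(b, f) for b, f in patients]
altered = [responded_altered(b, f) for b, f in patients]
print(protocol)  # [True, False, True, True]  -> 3 responders
print(altered)   # [False, False, True, True] -> 2 responders
```

The first patient improved by more than half but didn't reach a final score of 8, so they count as a responder under one definition and not the other. Change the definition after seeing the data and you change who "responded" - which is exactly the complaint about Study 329.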

Finally, there were also issues of ghostwriting. I've never been that concerned by this in itself. If the science is bad, it's bad whoever wrote it. Still, it's hardly a good thing.

*

Does any of this matter? In one sense, no. Authorities have told doctors not to use paroxetine in adolescents with depression since 2002 (in the UK) and 2003 (in the USA). So retracting this paper wouldn't change much in the real world of treatment.

But in another sense, the stakes are enormous. If this paper were retracted, it would set a precedent and send a message: this kind of p-value fishing to get positive results is grounds for retraction.

This would be huge, because this kind of fishing is sadly very common. Retracting this paper would be saying: selective outcome reporting is a form of misconduct. So this debate is really not about Seroxat, but about science.


There are no Senates or Supreme Courts in science. But journal editors are in a unique position to help change this: they're just about the only people (grant awarders being the others) with the power to actually impose sanctions on scientists. They have no official authority, but they have clout.

Were the JAACAP to retract this paper, which they've so far said they have no plans to do, it would go some way to making these practices unacceptable. And I think no-one can seriously disagree that they should be unacceptable, and that science and medicine would be much better off if they were. Do we want more papers like this, or do we want fewer?

So I think the question of whether to retract or not boils down to whether it's OK to punish some people "to make an example of them", even though we know of plenty of others who have done the same, or worse, and won't be punished.

My feeling is: no, it's not very fair, but we're talking about multi-billion pound companies and a list of authors whose high-flying careers are not going to crash and burn just because one paper from 10 years ago gets pulled. If this were some poor 24 year old's PhD thesis, it would be different, but these are grown-ups who can handle themselves.

So I say: retract.

Newman M (2010). The rules of retraction. BMJ, 341. DOI: 10.1136/bmj.c6985

Keller MB, et al. (2001). Efficacy of paroxetine in the treatment of adolescent major depression: a randomized, controlled trial. Journal of the American Academy of Child and Adolescent Psychiatry, 40 (7), 762-72 PMID: 11437014
