Sunday, January 2, 2011

The Ethics of Getting as High as a Kite

Are drugs good or bad?

I mean, in the ethical sense. Medically, all drugs have potential harms variously associated with use, long-term use, overdose, etc. Politically, by buying illegal drugs, you're probably ultimately funding criminals and terrorists (although you might well blame prohibition, not drugs, for that). But setting that aside, assuming no-one gets harmed as a result: is it morally wrong to take recreational drugs per se?

It's an important question, because your opinion about this will influence your opinions about less abstract, more immediate issues: whether cannabis ought to be sold in coffee shops, how many years you should spend in jail for dealing coke.

However, no-one really asks this question directly. The medical and the political aspects of drugs are endlessly debated, but after listening to these arguments for a while, you'll realize that while people on both sides talk about public health risks and harm reduction, most of the time they're really just disagreeing about the abstract question of whether taking drugs for fun is acceptable.

Here are the two major schools of thought, as I see them. There are those who see no problem with recreational drug use, assuming no-one gets hurt. If it feels good, it is good. If it makes people happy, what's not to like? If people want to enjoy themselves in that particular way, it's no-one else's business. Call this the 'hedonist' view.

On the other hand, there are those who see drug use as a shameful escape from reality. There's more to life than "having fun", life is serious. You ought to be out there doing something, not just sitting around with a silly grin on your face. That's cheating, getting enjoyment for nothing. Call this the 'puritan' school.

People differ on which one they favour, but most of us identify with both to some extent. Few people are puritan enough to forgo all of life's pleasures, right down to a quiet drink or a hot bath. And few hedonists would be happy if their own kids announced that they had no ambition to succeed in any kind of career, and planned instead to live off their inheritance and buy heroin.

As a whole, society has a mixed view. We have a puritanical objection to people who just take drugs and do nothing else with their lives: "junkies", "crackheads", "alkies". But we have no problem with drug use by people who have clearly engaged with the world, and succeeded.

Musicians, actors, and other stars take industrial quantities of drugs. Everyone knows it. It's not even an open secret in most cases, it's just open. Even gossip columnists don't notice unless someone gets so far gone that they do something funny. We don't care, because, whether or not we actually like their work, they're not just drug users, they're also doing their jobs.

Thursday, December 23, 2010

Depression Treatment Increased From 1998 to 2007

A paper just out reports on the changing patterns of treatment for depression in the USA, over the period from 1998 to 2007.

The headline news is that it increased: the overall rate of people treated for some form of "depression" went from 2.37% to 2.88% per year. That's a relative increase of about 21%, which is not trivial, but it's much smaller than the rise over the previous decade: the rate was just 0.73% back in 1987.

But the increase was concentrated in some groups of people:
  • Americans over 50 accounted for the bulk of the rise. Their treatment rates went up by about 50%, while rates in younger people stayed almost steady. In '98 the peak age band was 35-49; now it's 50-64, with almost 5% of those people getting treated in any given year.
  • Men's rates of treatment went up by over 40%, while women's only increased by 10%. Women are still more likely to get treated for depression than men, with a ratio of 1.7 women for every man, but that ratio is a lot closer than it used to be.
  • Black people's rates increased hugely, by 120%. Rates in black people now stand at 2.2%, which is close behind whites at 3.2%. Hispanics are now the least treated major ethnic group at 1.9%; in previous studies, blacks were the least treated. (There were no data on Asians or other groups.)
So this wasn't an across-the-board rise like the one we saw from '87 to '98. Rather, the '98-'07 increase was more of a "catching up" by groups who have historically had low levels of treatment, closing in on the level of the historically highest group: middle-aged white women.

In terms of what treatments people got: out of everyone treated for depression, about 80% got some kind of drug, and that didn't change much. But use of psychotherapy declined from 54% to 43% (some people got both).

What's also interesting is that the same authors reported last year that, over pretty much the same time period ('96 to '05), the number of Americans who used antidepressants in any given year sky-rocketed from 5% to 10% - that is to say, much faster than the rate of depression treatment rose! And the data are comparable, because they came from the same national MEPS surveys.
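
To make the "much faster" comparison concrete, here's the arithmetic as a trivial Python sketch, using only the percentages quoted above:

```python
# Quick check on the two growth rates quoted in this post.

def pct_increase(before, after):
    """Relative increase, as a percentage of the starting rate."""
    return (after - before) / before * 100

# Treatment for depression: 2.37% of Americans per year -> 2.88%.
print(f"Depression treatment: +{pct_increase(2.37, 2.88):.1f}%")  # +21.5%, the ~21% above
# Antidepressant use ('96-'05, same MEPS surveys): 5% -> 10%.
print(f"Antidepressant use:   +{pct_increase(5, 10):.1f}%")       # +100.0%
```

Antidepressant use doubled while treatment for depression grew by about a fifth - which is what forces the conclusion below.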

In other words, the decade must have seen antidepressants increasingly being used to treat stuff other than depression. What stuff? Well, all kinds of things. SSRIs are popular in everything from anxiety and OCD to premature ejaculation. Several of the "other new" drugs, like mirtazapine and trazodone, are very good at putting you to sleep (rather too good, some users would say...)

Marcus SC, & Olfson M (2010). National trends in the treatment for depression from 1998 to 2007. Archives of General Psychiatry, 67 (12), 1265-73. PMID: 21135326

Saturday, November 27, 2010

The Town That Went Mad

Pont St. Esprit is a small town in southern France. In 1951 it became famous as the site of one of the most mysterious medical outbreaks of modern times.

As Drs Gabbai, Lisbonne and Pourquier wrote to the British Medical Journal, 15 days after the "incident":
The first symptoms appeared after a latent period of 6 to 48 hours. In this first phase, the symptoms were generalized, and consisted in a depressive state with anguish and slight agitation.

After some hours the symptoms became more clearly defined, and most of the patients presented with digestive disturbances... Disturbances of the autonomic nervous system accompanied the digestive disorders - gusts of warmth, followed by the impression of "cold waves", with intense sweating crises. We also noted frequent excessive salivation.

The patients were pale and often showed a regular bradycardia (40 to 50 beats a minute), with weakness of the pulse. The heart sounds were rather muffled; the extremities were cold... Thereafter a constant symptom appeared - insomnia lasting several days... A state of giddiness persisted, accompanied by abundant sweating and a disagreeable odour. The special odour struck the patient and his attendants.
In most patients, these symptoms, including the total insomnia, persisted for several days. In some of the patients, these symptoms progressed to full-blown psychosis:
Logorrhoea [speaking a lot], psychomotor agitation, and absolute insomnia always presaged the appearance of mental disorders. Towards evening visual hallucinations appeared, recalling those of alcoholism. The particular themes were visions of animals and of flames. All these visions were fleeting and variable.

In many of the patients they were followed by dreamy delirium. The delirium seemed to be systematized, with animal hallucinations and self-accusation, and it was sometimes mystical or macabre. In some cases terrifying visions were followed by fugues, and two patients even threw themselves out of windows... Every attempt at restraint increased the agitation.

In severe cases muscular spasms appeared, recalling those of tetanus, but seeming to be less sustained and less painful... The duration of these periods of delirium was very varied. They lasted several hours in some patients, in others they still persist.
In total, about 150 people suffered some symptoms. About 25 severe cases developed the "delirium". Four people died "in muscular spasm and in a state of cardiovascular collapse"; three of these were old and in poor health, but one was a healthy 25-year-old man.

At first, the cause was assumed to be ergotism - poisoning caused by chemicals produced by a fungus which can infect grain crops. Contaminated bread was, therefore, thought to be responsible. Ergotism produces symptoms similar to those reported at Pont St. Esprit, including hallucinations, because some of the toxins are chemically related to LSD.

However, there have been other theories. Some (including Albert Hofmann, the inventor of LSD) attribute the poisoning to pesticides containing mercury, or to the flour bleaching agent nitrogen trichloride.

More recently, journalist Hank Albarelli claimed that it was in fact a CIA experiment to test out the effects of LSD as a chemical weapon, though this is disputed. What really happened is, in other words, still a mystery.

Link: The Crazies (2010) is a movie about a remarkably similar outbreak of mass insanity in a small town.

Gabbai, Lisbonne, & Pourquier (1951). Ergot poisoning at Pont St. Esprit. British Medical Journal, 2 (4732), 650-1. PMID: 14869677

Wednesday, October 20, 2010

You Read It Here First...Again

A couple of months ago I pointed out that a Letter published in the American Journal of Psychiatry, critiquing a certain paper about antidepressants, made very similar points to the ones I'd made in my blog post about the paper. The biggest difference was that my post came out 9 months sooner.


Well, it's happened again. Except I was only 3 months ahead this time. Remember my post Clever New Scheme, criticizing a study which claimed to have found a brilliant way of deciding which antidepressant is right for someone, based on their brain activity?

That post went up on July 21st. Yesterday, October 19th, a Letter was published by the journal that ran the original paper. Three months ago, I said:
...there were two groups in this trial and they got entirely different sets of drugs. One group also got rEEG-based treatment personalization. That group did better, but that might have nothing to do with the rEEG...

...it would have been very simple to avoid this issue. Just give everyone rEEG, but shuffle the assignments in the control group, so that everyone was guided by someone else's EEG...

This would be a genuinely controlled test of the personalized rEEG system, because both groups would get the same kinds of drugs... Second, it would allow the trial to be double-blind: in this study the investigators knew which group people were in, because it was obvious from the drug choice... Thirdly, it wouldn't have meant they had to exclude people whose rEEG recommended they get the same treatment that they would have got in the control group...
Now Alexander C. Tsai says, in his Letter:
DeBattista et al. chose a study design that conflates the effect of rEEG-guided pharmacotherapy with the effects of differing medication regimes...
A more definitive study design would have been one in which study participants were randomized to receive rEEG-guided pharmacotherapy vs. sham rEEG-guided pharmacotherapy.

Such a study design could have been genuinely double blinded, would not have required the inclusion of potential subjects whose rEEG treatment regimen was different from the control, and would be more likely to result in medication regimens that were balanced on average across the intervention vs. control arms.
To be fair, he also makes a separate point questioning how meaningful the small between-group difference was.
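
For concreteness, here's a minimal sketch of the shuffled-assignment ("sham rEEG") design that both my post and Tsai's Letter describe. Everything here is hypothetical - the function names and the `reeg_recommendation` mapping are mine, not from the paper - the point is just the key move: every patient gets an rEEG, but controls are dosed according to somebody else's recording.

```python
import random

def assign_regimens(patients, reeg_recommendation, seed=42):
    """Sham-controlled rEEG allocation (illustrative sketch only).

    `patients`: list of patient IDs.
    `reeg_recommendation`: hypothetical function mapping a patient's own
    rEEG to a recommended drug regimen.
    """
    rng = random.Random(seed)          # fixed seed: reproducible allocation
    shuffled = list(patients)
    rng.shuffle(shuffled)              # random half to each arm
    half = len(shuffled) // 2
    active, control = shuffled[:half], shuffled[half:]

    regimens = {}
    for p in active:
        regimens[p] = reeg_recommendation(p)        # guided by their own rEEG

    # Rotate the control list so no-one is guided by their own recording.
    donors = control[1:] + control[:1]
    for p, donor in zip(control, donors):
        regimens[p] = reeg_recommendation(donor)    # guided by someone else's rEEG

    return regimens
```

Because both arms draw their regimens from the same pool of rEEG recommendations, the mix of drugs is balanced across arms and the investigators can stay blind - which is exactly what the original design failed to do.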

I'm mentioning this not because I want to show off, or to accuse Tsai of ripping me off, but because it's a good example of why people like Royce Murray are wrong. Murray recently wrote an editorial in the academic journal Analytical Chemistry, accusing blogging of being unreliable compared to proper, peer-reviewed science.

Murray is certainly right that one could use a blog as a platform to push crap ideas, but one can also use peer-reviewed papers to do that, and often it's bloggers who are the first to pick up on it when that happens.

Tsai AC (2010). Unclear clinical significance of findings on the use of referenced-EEG-guided pharmacotherapy. Journal of Psychiatric Research. PMID: 20943234

Friday, October 15, 2010

Worst. Antidepressant. Ever.

Reboxetine is an antidepressant. Except it's not, because it doesn't treat depression.

This is the conclusion of a much-publicized article just out in the BMJ: Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and SSRI controlled trials.

Reboxetine was introduced to some fanfare, because its mechanism of action is unique - it's a selective norepinephrine reuptake inhibitor (NRI), which has no effect on serotonin, unlike Prozac and other newer antidepressants. Several older tricyclic antidepressants were NRIs, but they weren't selective because they also blocked a shed-load of receptors.

So in theory reboxetine treats depression while avoiding the side effects of other drugs. But last year, Cipriani et al., in a headline-grabbing meta-analysis, concluded that in fact it's the exact opposite: reboxetine was the least effective new antidepressant, and was also one of the worst in terms of side effects. Oh dear.

And that was only based on the published data. It turns out that Pfizer, the manufacturer of reboxetine, had chosen not to publish the results of most of their clinical trials of the drug, because the data showed that it was crap.
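
To see why the hidden trials matter so much, here's a toy fixed-effect (inverse-variance) meta-analysis. The effect sizes below are invented for illustration - they are not the reboxetine data - but the mechanism is the real one: you can only pool the trials you can see.

```python
# Toy inverse-variance meta-analysis. The (effect, standard error) pairs
# are INVENTED for illustration; they are not actual reboxetine results.

def pooled_effect(trials):
    """Fixed-effect pooled estimate: inverse-variance weighted mean."""
    weights = [1 / se ** 2 for _, se in trials]
    return sum(w * eff for (eff, _), w in zip(trials, weights)) / sum(weights)

published   = [(0.45, 0.15), (0.38, 0.20), (0.50, 0.18)]   # the flattering trials
unpublished = [(0.02, 0.12), (-0.05, 0.14),                # null results left in
               (0.08, 0.10), (0.00, 0.16)]                 # the file drawer

print(f"Published trials only: d = {pooled_effect(published):.2f}")               # ~0.45
print(f"All trials:            d = {pooled_effect(published + unpublished):.2f}")  # ~0.14
```

Opening the file drawer drags the pooled estimate down towards zero - which is roughly what happened to reboxetine once the BMJ team got the full dataset.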

The new BMJ paper includes these unpublished results - it took an inordinate amount of time and pressure to make Pfizer agree to share them, but they eventually did - and we learn that reboxetine is:
  • no more effective than a placebo at treating depression.
  • less effective than SSRIs, which incidentally are better than placebo in this dataset (a bit).
  • worse tolerated than most SSRIs, and much worse tolerated than placebo.
The one faint glimmer of hope that it's not a complete dud was that it did seem to work better than placebo in depressed inpatients. However, this could well have been a fluke, because the numbers involved were tiny: there was one trial showing a humongous benefit in inpatients, but it only had a total of 52 people.

Claims that reboxetine is dangerous on the basis of this study are a bit misleading - it may be, but there was no evidence for that in these data. It caused nasty and annoying side-effects, but that's not the same thing, because if you don't like side-effects, you could just stop taking it (which is what many people in these trials did).

Anyway, what are the lessons of this sorry tale, beyond reboxetine being rubbish? The main one is: we have to start forcing drug companies and other researchers to publish the results of clinical trials, whatever the results are. I've discussed this previously and suggested one possible way of doing that.

The situation regarding publication bias is far better than it was 10 years ago, thanks to initiatives such as clinicaltrials.gov. Almost all of the reboxetine trials were completed before the year 2000; if they were run today, it would be much harder to hide them - but still not impossible, especially in Europe. We need to make it impossible, everywhere, now.

The other implication is, ironically, good news for antidepressants - well, except reboxetine. The existence of reboxetine, a drug with lots of side effects that doesn't work, is evidence against the theory (put forward by Joanna Moncrieff, Irving Kirsch and others) that even the antidepressants that do seem to work only do so because of active placebo effects driven by their side effects.

On that theory, reboxetine, having more side effects than SSRIs, ought to have worked better; in fact it worked worse. This is by no means the final nail in the coffin of the active placebo hypothesis, but it is, to my mind, quite convincing.

Link: This study also blogged by Good, Bad and Bogus.

Eyding, D., Lelgemann, M., Grouven, U., Harter, M., Kromp, M., Kaiser, T., Kerekes, M., Gerken, M., & Wieseler, B. (2010). Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and selective serotonin reuptake inhibitor controlled trials. BMJ, 341. DOI: 10.1136/bmj.c4737

Wednesday, October 13, 2010

Cannabinoids in Huntington's Disease

Two recent papers have provided strong evidence that the brain's endocannabinoid system is dysfunctional in Huntington's Disease, paving the way to possible new treatments.

Huntington's Disease is a genetic neurological disorder. Symptoms generally appear around age 40, and progress gradually from subtle movement abnormalities to dementia and complete loss of motor control. It's incurable, although medication can mask some of the symptoms. The singer Woody Guthrie is perhaps the disease's best known victim: he ended his days in a mental institution.

The biology of Huntington's is only partially understood. It's caused by mutations in the huntingtin gene, which lead to the build-up of damaging proteins in brain cells, especially in the striatum. But exactly how this produces symptoms is unclear.

The two new papers show that cannabinoids play an important role. First off, Van Laere et al used PET imaging to measure levels of CB1 receptors in the brains of patients at various stages of Huntington's. CB1 is the main cannabinoid receptor in the brain; it responds to natural endocannabinoid neurotransmitters, and also to THC, the active ingredient in marijuana.

They found serious reductions in all areas of the brain compared to healthy people, and interestingly, the loss of CB1 receptors occurred early in the course of the disease.

That was an important finding, but it didn't prove that CB1 loss was causing any problems: it might have just been a side-effect of the disease. Now an animal study, by Blazquez et al, has shown that it's not. They studied mice carrying the same mutation that causes Huntington's in humans. These unfortunate rodents develop Huntington's, unsurprisingly.

They found that Huntington's mice who also had a mutation eliminating the CB1 receptor suffered more severe symptoms, which appeared earlier, and progressed faster. This suggests that CB1 plays a neuroprotective role, which is consistent with lots of earlier studies in other disorders.

If so, drugs that activate CB1 - like THC - might be able to slow down the progression of the disease, and indeed THC did: Huntington's mice given THC injections stayed healthier for longer, although they eventually succumbed to the disease. Further experiments showed that mutant huntingtin switches off expression of the CB1 receptor gene, explaining the loss of CB1.

The paper's key graph shows performance on the RotaRod test of co-ordination: mice with Huntington's (R6/2) got worse and worse starting at 6 weeks of age, but THC slowed down the decline. The story was similar for other symptoms, and for the neural damage seen in the disease.

They conclude that:
Altogether, these results support the notion that downregulation of type 1 cannabinoid receptors is a key pathogenic event in Huntington’s disease, and suggest that activation of these receptors in patients with Huntington’s disease may attenuate disease progression.
Now, this doesn't mean people with Huntington's should be heading out to buy Bob Marley posters and bongs just yet. For one thing, Huntington's disease often causes psychiatric symptoms, including depression and psychosis. Cannabis use has been linked to psychosis fairly convincingly, so marijuana might make those symptoms worse.

Still, it's very promising. In particular, it will be interesting to try out next-generation endocannabinoid boosting drugs, such as FAAH inhibitors, which block the breakdown of anandamide, one of the most important endocannabinoids.

In animals FAAH inhibitors have pain relieving, anti-anxiety, and other beneficial effects, but they don't cause the same behavioural disruptions that THC does. This suggests that they wouldn't get people high, either, but there's no published data on what they do in humans yet...

Van Laere K, et al. (2010). Widespread decrease of type 1 cannabinoid receptor availability in Huntington disease in vivo. Journal of Nuclear Medicine, 51 (9), 1413-7. PMID: 20720046

Blázquez C, et al. (2010). Loss of striatal type 1 cannabinoid receptors is a key pathogenic factor in Huntington's disease. Brain: A Journal of Neurology. PMID: 20929960

Sunday, September 26, 2010

Big Pharma Explain How To Pick Cherries

Here at Neuroskeptic, we see a lot of bad science. Maybe, over the years (all 2 of them) that I've been writing this blog, I've become a bit jaded. Maybe I'm less distressed by it than I used to be. Cynical, even.

But this one really takes the biscuit. And then it takes the tin. And relieves itself in it: A New Population-Enrichment Strategy to Improve Efficiency of Placebo-Controlled Clinical Trials of Antidepressant Drugs.

Don't worry - it's from a big pharmaceutical company (GlaxoSmithKline), so I don't have to worry about hurting feelings.

It's full to bursting with colourful graphs and pictures, but the basic idea is very simple. As in "simpleton".

Suppose you're testing a new drug against placebo. You decide to do a multicentre trial, i.e. you enlist lots of doctors to give the drug, or placebo, to their patients. Each clinic or hospital which takes part is a "centre". Multicentre trials are popular because they're an easy way of quickly testing a drug on a large number of patients.

Anyway, suppose that the results come in, and it turns out that the drug didn't work any better than placebo, which unfortunately is what happens rather often in modern trials of antidepressants. Oh dear. The drug's crap. That's the end of that chapter.

...
or is it?!? say GSK. Maybe not. They have a clever trick. Look at the results from each centre individually. Placebo response rates will probably vary between centres: in some of them, the placebo people don't get better, in others, they get lots better.

Now, suppose that you just chucked out all of the data from centres where the people on placebo got much better, on the grounds that there must be something weird going on in those ones. That's what GSK did: they reanalyzed the data from 1,837 patients given paroxetine or placebo, across 124 centres. In the dataset as a whole, paroxetine barely outperformed placebo. However, in the centres where people on placebo only improved a little, the drug was much better than placebo!

Well, of course it was. Imagine that the drug has no effect. Some people just get better and others don't. Let's assume that each person randomly gets between 0 and 25 better, with an equal chance of any outcome. Half are on drug and half are on placebo, but it makes no difference.

Let's further assume that there are 50 centres, with 20 people per centre (1000 people total). I knocked up a "simulation" of this in Excel (it took 10 minutes). Here's what you get:

The blue dots show, for each imaginary centre, drug improvement vs. placebo improvement. There's no correlation (it's random), and, on average, there is no difference: both average out at 12 points. The drug doesn't work.

The red dots show the "Treatment Effect" i.e. [drug improvement - placebo improvement]. The average is 0 - because the drug doesn't work. But there's a strong negative correlation between Treatment Effect and the placebo improvement - in centres where people improved lots on placebo, the drug worked worse.
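
For anyone who wants to check, here's the same 10-minute simulation as a Python sketch rather than Excel - same assumptions: no drug effect at all, improvement uniform between 0 and 25, 50 centres of 20 patients.

```python
import random
import statistics

rng = random.Random(1)
N_CENTRES, PER_ARM = 50, 10   # 20 patients per centre: half drug, half placebo

centres = []
for _ in range(N_CENTRES):
    drug = statistics.mean(rng.uniform(0, 25) for _ in range(PER_ARM))
    placebo = statistics.mean(rng.uniform(0, 25) for _ in range(PER_ARM))
    centres.append((drug, placebo, drug - placebo))   # per-centre Treatment Effect

te = [t for _, _, t in centres]
plac = [p for _, p, _ in centres]
print(f"Mean Treatment Effect, all centres:       {statistics.mean(te):+.2f}")

# The "enrichment" step: discard centres whose placebo patients did well
# (improved more than the expected mean of 12.5).
kept = [t for _, p, t in centres if p < 12.5]
print(f"Mean Treatment Effect, 'enriched' subset: {statistics.mean(kept):+.2f}")

# Needs Python 3.10+ for statistics.correlation.
print(f"corr(Treatment Effect, placebo response): {statistics.correlation(te, plac):+.2f}")
```

With no drug effect anywhere in the data, the "enriched" subset still shows a healthy-looking "benefit".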

This is exactly the pattern Glaxo show in their Figure 1a. They write:
The analysis of the surface response indicated the predominant role of center specific placebo response as compared with the dose strength in determining the Treatment Effect of paroxetine.
But of course they correlate. You're correlating placebo improvement with itself: the "Treatment Effect" is a function of the placebo improvement. It's classic regression to the mean.
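
In symbols: write a centre's mean drug-arm improvement as $D$, its placebo-arm improvement as $P$, and its Treatment Effect as $TE = D - P$. If the drug does nothing, $D$ and $P$ are independent, so

\[ \mathrm{Cov}(TE, P) = \mathrm{Cov}(D, P) - \mathrm{Var}(P) = -\mathrm{Var}(P) < 0, \]

and the negative correlation in Figure 1a is guaranteed by the construction, drug effect or no drug effect.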

Of course if you chuck out the centres where people on placebo do well (the grey box in my picture), the drug seems to work pretty nicely. But this is cheating. It is cherry-picking. It is completely unscientific. (To give the authors their due, they also eliminated the centres where the placebo response was very low. This could, under some assumptions, make the analysis unbiased, but they don't show that this was their intention, let alone that it would eliminate all of the bias.)

The authors note that this could be a source of bias, but say that it wouldn't be one if it was planned out in advance: "in order to overcome the bias risk, the enrichment strategy should be accounted for and pre-planned in the study protocol." This is like saying that if you announce, before playing chess, that you are going to cheat, it's not cheating.

To be fair to the authors, assuming the drug does work, this method would improve your chances of correctly detecting the effect. Centres with very high placebo responses quite possibly are junk. Assuming the drug works.

But if we're assuming the drug works, why are we bothering to do a trial? The whole point of a trial is to discover something we don't know. The authors justify their approach by suggesting that it would be useful for drug companies who want to do a "proof-of-concept" trial to find out whether an experimental drug might work under the most favourable conditions, i.e. whether they should bother continuing to research it.

They say that such trials "are inherently exploratory in their conception, aimed at signal detection, open to innovation..." - in other words, that they're not meant to be as rigorous as late-stage trials.

Fair enough. But this method is not even suitable for proof-of-concept, because it would (as I have shown above in my 10 minute simulation) increase your chance of finding an "effect" from a drug that doesn't work.

Whatever the truth is, this method will give the same result, so it's not useful evidence. It's like saying "Heads I win, tails you lose". You've set it up so that I lose - the coin toss doesn't tell us anything.

All of the authors' results are based on trials in which the drug "should have worked": they do not appear to have simulated what would happen if they used this method on trials of a drug that doesn't work, as I just did. So I'm doing Pharma a big favour by writing this post, because if they adopt this approach, they're more likely to waste money on drugs that don't work.

They should be paying me for this stuff.

Merlo-Pich E, Alexander RC, Fava M, & Gomeni R (2010). A New Population-Enrichment Strategy to Improve Efficiency of Placebo-Controlled Clinical Trials of Antidepressant Drugs. Clinical Pharmacology and Therapeutics. PMID: 20861834

Tuesday, September 14, 2010

Stopping Antidepressants: Not So Fast

People who quit antidepressants slowly, by gradually decreasing the dose, are much less likely to suffer a relapse, according to Baldessarini et al. in the American Journal of Psychiatry.

They describe a large sample (400) of patients from Sardinia, Italy, who had responded well to antidepressants and then stopped taking them. The antidepressants had been prescribed for either depression or panic attacks.

People who quit suddenly (over 1-7 days) were more likely to relapse, and relapsed sooner, than the ones who stopped gradually (over a period of 2 weeks or more).

This graph shows what % of the patients in each group remained well at each time point (in terms of days since their final pill). As you can see, the two lines separate early, and then remain apart by about the same distance (20%) for the whole 12 months.

What this means is that rapid discontinuation didn't just accelerate relapses that were "going to happen anyway". It actually caused more relapses - about 1 in 5 "extra" people. These "extra" relapses all happened in the first 3 months, because after that, the slope of the lines is identical.

On the other hand, they rarely happened immediately - it's not as if people relapsed within days of their last pill. The pattern was broadly similar for older antidepressants (tricyclics) and newer ones (SSRIs).
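
A toy calculation makes the "caused, not just accelerated" reading vivid. Suppose, hypothetically - these hazard numbers are mine, not the paper's - that both groups share a constant monthly relapse hazard, and rapid discontinuation adds an extra hazard for the first 3 months only:

```python
# Toy relapse model. Hazards are INVENTED for illustration, not taken
# from Baldessarini et al.
BASE_HAZARD = 0.02    # monthly relapse probability, both groups
EXTRA_HAZARD = 0.08   # added monthly hazard, rapid group, first 3 months only

def still_well(extra_months):
    """Fraction of patients still well at the end of each of 12 months."""
    well, curve = 1.0, []
    for month in range(1, 13):
        hazard = BASE_HAZARD + (EXTRA_HAZARD if month <= extra_months else 0.0)
        well *= 1.0 - hazard
        curve.append(well)
    return curve

gradual = still_well(0)
rapid = still_well(3)
for m in (3, 6, 12):
    gap = (gradual[m - 1] - rapid[m - 1]) * 100
    print(f"month {m:2d}: gradual {gradual[m-1]:.0%}, rapid {rapid[m-1]:.0%}, gap {gap:.0f} points")
```

After month 3 the two hazards are identical, so the curves run almost parallel from then on: the gap opens early and then persists, just like the graph in the paper.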

The authors note that these data throw up important questions about "relapse prevention" trials comparing people who stay on antidepressants vs. those who are switched - abruptly - to placebo. People who stay on the drug usually do better, but is this because the drug works, or because the people on placebo were withdrawn too fast?

This was an observational study, not an experiment. There was no randomization. People quit antidepressants for various "personal or clinical reasons"; 80% of the time it was their own decision, and only 20% of the time was it due to their doctor's advice.

So it's possible that there was some underlying difference between the two groups, that could explain the differences. Regression analysis revealed that the results weren't due to differences in dose, duration of treatment, diagnosis, age etc., but you can't measure every possible confound.

Only randomized controlled trials could provide a final answer, but there's little chance of anyone doing one. Drug companies are unlikely to fund a study about how to stop using their products. So we have only observational data to go on. These data fit in with previous studies showing that there's a similar story when it comes to quitting lithium and antipsychotics. Gradual is better.

But that's common sense. Tapering medications slowly is a good idea in general, because it gives your system more time to adapt. Of course, sometimes there are overriding medical reasons to quit quickly, but apart from in such cases, I'd always want to come off anything as gradually as possible.

Baldessarini RJ, Tondo L, Ghiani C, & Lepri B (2010). Illness risk following rapid versus gradual discontinuation of antidepressants. The American Journal of Psychiatry, 167 (8), 934-41. PMID: 20478876