
Friday, December 10, 2010

Meditation vs. Medication for Depression

What's the best way to overcome depression? Antidepressant drugs, or Buddhist meditation?

A new trial has examined this question: Segal et al. The short answer is that 8 weeks of mindfulness meditation training was just as good as prolonged antidepressant treatment over 18 months. But like all clinical trials, there are some catches.

Right mindfulness, sammā-sati, is the 7th step on the Buddha's Noble Eightfold Path to enlightenment. In its modern therapeutic form, however, it's a secular practice: you don't have to be a Buddhist to meditate (though it presumably helps).

Mindfulness meditation is also branded nowadays as mindfulness-based cognitive therapy (MBCT), although how much it has in common with regular CBT is debatable; the technique itself is derived from the Buddhist tradition.

The essence of mindfulness is deceptively simple: you try to become a detached observer of your own feelings and thoughts. Rather than just getting angry, you notice the feelings of anger, without letting them take over. As I've written before, while this might sound easy, we're not always aware of our own feelings.

MBCT has attracted a lot of attention as a possible way of helping people with depression avoid relapse. The idea is that if you can train people to notice depressive thoughts and feelings when they start to reappear, they'll be able to avoid being sucked back into the cycle of depression.

The 160 patients in this trial were initially treated with antidepressants, starting with an SSRI and, if that didn't work, moving on to venlafaxine (up to 375 mg as necessary, which is a serious dose), or mirtazapine for people who couldn't tolerate the side effects. This is a sensible treatment regime, not one relying on low doses and doubtful drugs, as in many other antidepressant trials.

About half of the patients both stayed in the trial and achieved remission. After 5 months of sustained treatment, these 84 patients were randomized into 3 groups: continuation of their antidepressant, placebo pills, or mindfulness. The people who ended up on placebo had their antidepressants gradually replaced by sugar pills over a number of weeks, to avoid withdrawal effects.

Here's what happened:

People on placebo did very badly, with only 20% remaining well 18 months later. People who either stayed on the drugs, or who got the mindfulness training, did a lot better, with 70% staying well, and there were no differences between the two.
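To get a rough feel for how big that gap is, here's a back-of-the-envelope sketch. Note the assumptions, which are mine and not from the paper: an equal split of the 84 patients across the three arms, and the rounded 70%/20% rates quoted above.

```python
import math

# Illustrative only. Assumptions not in the paper: equal allocation of the
# 84 randomized patients to the three arms (~28 each), and the rounded
# "staying well" rates quoted in the post (70% vs 20%).
n_per_arm = 84 // 3  # 28 patients per arm, assumed

def two_proportion_z(p1, n1, p2, n2):
    """z statistic for the difference of two proportions (pooled SE)."""
    pooled = (p1 * n1 + p2 * n2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

z = two_proportion_z(0.70, n_per_arm, 0.20, n_per_arm)
print(round(z, 2))  # → 3.76: a gap this large is very unlikely to be chance
```

Even with arms this small, a 50-percentage-point difference is far outside what random variation would produce.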

However, here's the catch: this was only true of a subset of the patients, the ones who had an "unstable remission", meaning that when they were originally treated with drugs, their symptoms fluctuated somewhat. The "stable remission" people showed no benefit from either treatment, with the ones on placebo doing slightly better, if anything.

Overall, though, this is a decent study, and it shows that, for some people, mindfulness can be helpful. A skeptic could complain that mindfulness was no better than medication, but it might have two advantages: cost and side effects, though this would depend on the medication in question (some are a lot more expensive, and more prone to side effects, than others). The mindfulness arm also wasn't double-blind, so the benefits may have been placebo effects, but that could be said of almost any trial of psychotherapy.

I also wonder whether you'd do even better if you became all mindful and stayed on medication: this study had no combined-treatment group, unfortunately, but this is something to look into...

Segal ZV, Bieling P, Young T, Macqueen G, Cooke R, Martin L, Bloch R, & Levitan RD (2010). Antidepressant Monotherapy vs Sequential Pharmacotherapy and Mindfulness-Based Cognitive Therapy, or Placebo, for Relapse Prophylaxis in Recurrent Depression. Archives of General Psychiatry, 67(12), 1256-64. PMID: 21135325


Tuesday, November 30, 2010

Exercise and Depression: It's Complicated

Some ideas seem so nice, so inoffensive and so harmless, that it seems a shame to criticize them.


Take the idea that exercise is a useful treatment for depression. It's got something for everyone.

For doctors, it's attractive because it means they can recommend exercise - which is free, quick, and easy, at least for them - instead of spending the time and money on drugs or therapy. Governments like it for the same reason, and because it's another way of improving the nation's fitness. For people who don't much like psychiatry, exercise offers a lovely alternative to psych drugs - why take those nasty antidepressants if exercise will do just as well? And so on.

But this doesn't mean it's true. And a large observational study from Norway has just cast doubt on it: Physical activity and common mental disorders.

The authors took a large community sample of Norwegian people, the HUNT-2 study, which was done between 1995 and 1997. Over 90,000 people were invited to take part and full data were available from over 40,000.

What they found was that there was an association between taking part in physical exercise as a leisure activity, and lower self-reported symptoms of depression. It didn't matter whether the activity was intense or mild, and it didn't really matter how often you did it: so long as you did it, you got the benefit.

Crucially, however, the same was not true of physical exercise which was part of your job. That didn't help at all, and indeed the most strenuous jobs were associated with more depression (but less anxiety, strangely).

How does this fit with the very popular idea that exercise helps in depression? Well, many randomized trials have indeed shown exercise to be better than no exercise for depression, but the problem is that these trials are never really placebo-controlled. You can usually tell whether or not you're going jogging in the park every morning.

So the direct effects of exercise per se are hard to distinguish from the social and psychological meaning of "exercise". Knowing that you're starting a program of exercise could make you feel better: you're taking positive action to improve your life, you're not helpless in the face of your problems. By contrast, doing heavy work as part of your job, while physiologically beneficial, is unlikely to be so much fun.

This doesn't mean that telling people to get more exercise isn't a good idea, but if the meaning of exercise is more important than the physiology, that has some big implications for how it ought to be used.

It's good news for people who just can't take part in strenuous physical exercise because of physical illness or disability, something which is quite common in mental health. It suggests that these people could still get the benefits attributed to exercise even if they did less demanding forms of meaningful activity.

But it's bad news for doctors tempted to default to "get out and go jogging" whenever they see a potentially depressed person. Because if it's the meaning of exercise that counts, and you recommend exercise in a way which sounds like you're dismissing their problems, the meaning will be anything but helpful.

In clinical trials of exercise, the exercise program has, almost by definition, a positive value: it's the whole point of the trial. And the participants just wouldn't have volunteered for the trial if they didn't, on some level, think it would make them feel better.

But not everyone thinks that way. If you go to your doctor looking to get medication, or psychotherapy, or something like that, and you're told that all you need to do is go and get more exercise, it would be easy to see that as a brush-off, especially if it's done unsympathetically. The point is, if exercise doesn't feel like a positive step, it probably won't be one.

Harvey SB, Hotopf M, Overland S, & Mykletun A (2010). Physical activity and common mental disorders. The British Journal of Psychiatry, 197, 357-64. PMID: 21037212


Tuesday, November 2, 2010

Blue Morning

Recently, I wrote about diurnal mood variation: the way in which depression often waxes and wanes over the course of the day. Mornings are generally the worst.

A related phenomenon is late insomnia, or "early morning waking".

But this phrase is rather an understatement. Everyone's woken up early. Maybe you had a flight to catch. Or you were drunk and threw up. Or you just needed a pee. That's early morning waking, but not the depressive kind. When you're depressed, the waking up is the least of your problems.

Suddenly, you are awake, more awake than you've ever been. And you know something terrible has happened, or is about to happen, or that you've done something terribly wrong. It feels like a Eureka moment. You can be a level-headed person, not given to jumping to conclusions, but you will be convinced of this.

In a panic attack, you think you're going to die. Your heart is beating too fast, your breathing's too deep: your body is exploding, you can feel it too closely. With this, you think you should die or even, in some sense, already have. It feels cold: you can no longer feel the warmth of your own body.

The moment passes; the terrible truth that you were so certain of five minutes ago becomes a little doubtful. Maybe it's not quite so bad. At this point, the wakefulness goes too, and you become, well, as tired as you ought to be at 3 am. You try to go back to sleep. If you're lucky, you succeed. If not, you lie awake until morning in a state of miserable contemplation.

While it's happening, you think that you're going to feel this way forever; bizarrely, you think you always have felt this way. In fact, this is the darkest hour.

*

Why does this happen? There has been almost no research on early morning waking, presumably because it's so hard to study. To observe it, you would have to get your depressed patients to spend all night in your brain scanner (or, if you prefer, on your analyst's couch), and even then, it doesn't happen every night.

But here's my theory: the key is the biology of sleep. There are many stages of sleep; at a very rough approximation there's dreaming REM, and dreamless slow-wave. Now, REM sleep tends to happen during the second half of the night - the early morning.
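As a toy illustration of why most REM falls in the early morning (all the numbers below are rough, illustrative assumptions, not data from any study): REM periods typically lengthen with each successive sleep cycle.

```python
# Toy model of REM across one night. Assumed REM minutes for five
# successive ~90-minute cycles; the lengthening pattern is the standard
# textbook picture, the exact values are made up for illustration.
rem_per_cycle = [5, 10, 20, 30, 40]

early_rem = sum(rem_per_cycle[:2])   # first two cycles of the night
late_rem = sum(rem_per_cycle[-2:])   # last two cycles of the night
print(early_rem, late_rem)  # → 15 70: most REM lands late in the night
```

So an awakening at 3 or 4 am is disproportionately likely to interrupt REM rather than slow-wave sleep.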

During REM sleep, the brain is, in many respects, awake. This is presumably what allows us to have conscious dreams. In slow-wave sleep, by contrast, the brain really is offline; slow waves are also seen in the brains of people in comas, or under deep anaesthesia.

When we're awake, the brain is awash with modulatory neurotransmitters, such as serotonin, norepinephrine, and acetylcholine. During REM, acetylcholine is present, while in slow-wave sleep it's not; indeed acetylcholine may well be what stops slow waves and "wakes up" the cortex.

But unlike during waking, serotonin and norepinephrine neurons are entirely inactive during REM sleep - and only during REM sleep. This fact is surprisingly little-known, but it seems to me that it explains an awful lot.

For one thing, it explains why drugs which increase serotonin levels, such as SSRI antidepressants, inhibit REM sleep. Indeed, high doses of MAOi antidepressants prevent REM entirely (without any noticeable ill-effects, suggesting REM is dispensable). SSRIs only partially suppress it.

Ironically, SSRIs can make dreams more vivid and colourful. I've been told by sleep scientists that this is because they delay the onset of REM so the dreams are "shifted" later into the night making you more likely to remember them when you wake up. But there could be more to it than that.

The fact that REM is a serotonin-free zone also explains wet dreams. Serotonin is well known to suppress ejaculation; that's why SSRIs delay orgasm, one of their least popular side effects, although it makes them useful for treating premature ejaculation: every cloud has a silver lining.

So, having said all that: could this also explain the terror of early-morning waking? Suppose that, for whatever reason, you woke up during REM sleep, but your serotonin cells didn't wake up quick enough, leaving you awake, but with no serotonin (a situation which never normally occurs, remember). How would that feel?

Using a technique called acute tryptophan depletion (ATD), you can lower someone's serotonin levels. In most people, this doesn't do very much, but in some people with a history of depression, it causes them to relapse. Here's what happened to one patient after ATD:
[Her] previous episodes of clinical depression were associated with the loss of important friendships; [she] had, while depressed, been preoccupied with fears that she would never be able to sustain a relationship. She had not had such fears since then.

She had been fully recovered and had not taken any medication for over a year. About 2 h after drinking the tryptophan-free mixture she experienced a sudden onset of sadness, despair, and uncontrollable crying. She feared that a current important relationship would end.
We don't know why tryptophan depletion does this to some people, or why it doesn't affect everyone the same way, and it's pure speculation that early morning waking has anything to do with this. But having said that, the pieces do seem to fit.


Thursday, October 21, 2010

Shock and Cure - With Magnets

Electroconvulsive therapy (ECT) is the oldest treatment in psychiatry that's still in use today. ECT uses a brief electrical current to induce a generalized seizure. No-one knows why, but in many cases this rapidly alleviates depression - amongst other things.

The problem with ECT is that it may cause memory loss. How serious a problem this is, is hotly debated, and most psychiatrists agree that the risk is justified if the alternative is untreatable illness. But it's fair to say that, whether or not it's as bad as some people believe, the fear that it might be is the main limitation on the use of the treatment.

Wouldn't it be handy if there were a way of getting the benefits of ECT without the risk of side effects? To that end, people have tried tinkering with the specifics of the electrical stimulation - the frequency and waveform of the current, the location of the electrodes, etc. - but unfortunately it seems like the settings that work best tend to be the ones with the most side effects.

Enter magnetic seizure therapy (MST). As the name suggests, this is like ECT, except it uses powerful magnets, instead of electrical current, to cause the seizures. In fact though, the magnets work by creating electrical currents in the brain by electromagnetic induction, so it's not entirely different.

MST is thought to be more selective than ECT, in that it induces seizures in the surface of the brain - the cerebral cortex - but not the hippocampus, and other structures buried deeper in the brain, which are involved in memory.

It was first proposed in 2001, and since then it's been tested in a number of very small trials in monkeys and people. Now a group of German psychiatrists say that it's as effective as ECT, but with fewer side effects, in a new trial of 20 severely depressed people. Ironically, they work on Sigmund Freud Street, Bonn. I am not sure what Freud would say about this.

The trial was randomized, but not blinded: it's hard to blind people to this because the equipment used looks completely different. Nor was there a placebo group. All the patients had failed to improve with multiple antidepressants, and psychotherapy in almost all cases, and were therefore eligible for ECT. If anything, the MST group were slightly more ill than the ECT group at baseline.

The ECT they used was right unilateral. This is probably not quite as effective as stimulation which targets both sides of the brain (bitemporal or bifrontal), but has fewer side-effects.

So what happened? After 12 sessions, MST and ECT both seemed to work, and they were equally effective on average. Some patients got much better, some only got a bit better.

What about side effects? MST was noticeably "gentler", in that it didn't cause headaches or muscle pain, and people recovered from the seizures much faster (2 minutes vs 8 minutes to reorientation) after MST. This may have been because the seizures (as assessed using EEG) were less intense.

In terms of the all-important memory and cognitive side effects, however, it's not clear what was going on. They used a whole bunch of neuropsychological tests. In some of them, people got worse over the course of the sessions. In others, they got better. But in several, the scores went up and down with no meaningful pattern. If anything the MST group seemed to do a bit better but to be honest it's impossible to tell because there's so much data and it's so messy.

Unfortunately, the tests they used have been criticized for not picking up the kinds of memory problems that some ECT patients complain of, e.g. the "wiping" of old memories. For some reason they didn't just ask people whether they felt their memory had been damaged.

Overall, this trial confirms that MST is a promising idea, but it remains to be seen whether it has any meaningful advantages over old school shock therapy...

Kayser S, Bewernick BH, Grubert C, Hadrysiewicz BL, Axmacher N, & Schlaepfer TE (2010). Antidepressant effects, of magnetic seizure therapy and electroconvulsive therapy, in treatment-resistant depression. Journal of Psychiatric Research. PMID: 20951997

Shock and Cure - With Magnets

Electroconvulsive therapy (ECT) is the oldest treatment in psychiatry that's still in use today. ECT uses a brief electrical current to induce a generalized seizure. No-one knows why, but in many cases this rapidly alleviates depression - amongst other things.

The problem with ECT is that it may cause memory loss. It's hotly debated how serious of a problem this is, and most psychiatrists agree that the risk is justified if the alternative is untreatable illness, but it's fair to say that whether or not it's not as bad as some people believe, the fear that it might be, is the main limitation to the use of the treatment.

Wouldn't it be handy if there was a way of getting the benefits of ECT without the risk of side effects? To that end, people have tried tinkering with the specifics of the electrical stimulation - the frequency and waveform of the current, the location of the electrodes, etc. - but unfortunately it seems like the settings that work best, tend to be the ones with the most side effects.

Enter magnetic seizure therapy (MST). As the name suggests, this is like ECT, except it uses powerful magnets, instead of electrical current, to cause the seizures. In fact though, the magnets work by creating electrical currents in the brain by electromagnetic induction, so it's not entirely different.

MST is thought to be more selective than ECT: it induces seizures in the surface of the brain - the cerebral cortex - but not in the hippocampus and other structures buried deeper in the brain, which are involved in memory.

It was first proposed in 2001, and since then it's been tested in a number of very small trials in monkeys and people. Now a group of German psychiatrists say that it's as effective as ECT, but with fewer side effects, in a new trial of 20 severely depressed people. As it happens, they work on Sigmund Freud Street in Bonn. I'm not sure what Freud would make of this.

The trial was randomized, but not blinded: it's hard to blind people to this because the equipment used looks completely different. Nor was there a placebo group. All the patients had failed to improve with multiple antidepressants, and psychotherapy in almost all cases, and were therefore eligible for ECT. If anything, the MST group were slightly more ill than the ECT group at baseline.

The ECT they used was right unilateral. This is probably not quite as effective as stimulation which targets both sides of the brain (bitemporal or bifrontal), but has fewer side-effects.

So what happened? After 12 sessions, MST and ECT both seemed to work, and they were equally effective on average. Some patients got much better, some only got a bit better.

What about side effects? MST was noticeably "gentler", in that it didn't cause headaches or muscle pain, and people recovered from the seizures much faster (2 minutes vs 8 minutes to reorientation) after MST. This may have been because the seizures (as assessed using EEG) were less intense.

In terms of the all-important memory and cognitive side effects, however, it's not clear what was going on. They used a whole bunch of neuropsychological tests. In some of them, people got worse over the course of the sessions. In others, they got better. But in several, the scores went up and down with no meaningful pattern. If anything the MST group seemed to do a bit better, but to be honest it's impossible to tell, because there's so much data and it's so messy.

Unfortunately the tests they used have been criticized for not picking up the kinds of memory problems that some ECT patients complain of, e.g. the "wiping" of old memories. Oddly, they didn't simply ask people whether they felt their memory was damaged.

Overall, this trial confirms that MST is a promising idea, but it remains to be seen whether it has any meaningful advantages over old school shock therapy...

ResearchBlogging.orgKayser S, Bewernick BH, Grubert C, Hadrysiewicz BL, Axmacher N, & Schlaepfer TE (2010). Antidepressant effects of magnetic seizure therapy and electroconvulsive therapy in treatment-resistant depression. Journal of Psychiatric Research PMID: 20951997

Wednesday, October 20, 2010

You Read It Here First...Again

A couple of months ago I pointed out that a Letter published in the American Journal of Psychiatry, critiquing a certain paper about antidepressants, made very similar points to the ones that I did in my blog post about the paper. The biggest difference was that my post came out 9 months sooner.


Well, it's happened again. Except I was only 3 months ahead this time. Remember my post Clever New Scheme, criticizing a study which claimed to have found a brilliant way of deciding which antidepressant is right for someone, based on their brain activity?

That post went up on July 21st. Yesterday, October 19th, a Letter was published by the journal that ran the original paper. Three months ago, I said -
...there were two groups in this trial and they got entirely different sets of drugs. One group also got rEEG-based treatment personalization. That group did better, but that might have nothing to do with the rEEG...

...it would have been very simple to avoid this issue. Just give everyone rEEG, but shuffle the assignments in the control group, so that everyone was guided by someone else's EEG...

This would be a genuinely controlled test of the personalized rEEG system, because both groups would get the same kinds of drugs... Second, it would allow the trial to be double-blind: in this study the investigators knew which group people were in, because it was obvious from the drug choice... Thirdly, it wouldn't have meant they had to exclude people whose rEEG recommended they get the same treatment that they would have got in the control group...
Now Alexander C. Tsai says, in his Letter:
DeBattista et al. chose a study design that conflates the effect of rEEG-guided pharmacotherapy with the effects of differing medication regimes...
A more definitive study design would have been one in which study participants were randomized to receive rEEG-guided pharmacotherapy vs. sham rEEG-guided pharmacotherapy.

Such a study design could have been genuinely double blinded, would not have required the inclusion of potential subjects whose rEEG treatment regimen was different from the control, and would be more likely to result in medication regimens that were balanced on average across the intervention vs. control arms.
To be fair, he also makes a separate point questioning how meaningful the small between-group difference was.
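For concreteness, the shuffled-control design that both the Letter and my post describe can be sketched in a few lines of Python. Everything here is hypothetical - the patient IDs and regimen names are placeholders - the point is just the assignment logic: everyone gets an rEEG, but the sham arm is treated according to someone else's recording.

```python
import random

random.seed(1)

patients = [f"patient_{i}" for i in range(8)]                        # hypothetical IDs
reeg_recommendation = {p: f"regimen_for_{p}" for p in patients}      # everyone gets an rEEG

# Randomize to real vs. sham rEEG guidance.
random.shuffle(patients)
real_arm, sham_arm = patients[:4], patients[4:]

# Real arm: each patient is treated according to their own rEEG.
treatment = {p: reeg_recommendation[p] for p in real_arm}

# Sham arm: each patient is guided by someone else's rEEG.
# A simple rotation guarantees nobody in the sham arm keeps their own recommendation.
rotated = sham_arm[1:] + sham_arm[:1]
treatment.update({p: reeg_recommendation[q] for p, q in zip(sham_arm, rotated)})
```

Because both arms receive rEEG-derived regimens, neither patients nor investigators can tell the groups apart from the drugs prescribed - which is exactly what makes the design blindable.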

I'm mentioning this not because I want to show off, or to accuse Tsai of ripping me off, but because it's a good example of why people like Royce Murray are wrong. Murray recently wrote an editorial in the academic journal Analytical Chemistry, accusing blogging of being unreliable compared to proper, peer-reviewed science.

Murray is certainly right that one could use a blog as a platform to push crap ideas, but one can also use peer reviewed papers to do that, and often it's bloggers who are the first to pick up on this when it happens.

ResearchBlogging.orgTsai AC (2010). Unclear clinical significance of findings on the use of referenced-EEG-guided pharmacotherapy. Journal of psychiatric research PMID: 20943234

Friday, October 15, 2010

Worst. Antidepressant. Ever.

Reboxetine is an antidepressant. Except it's not, because it doesn't treat depression.

This is the conclusion of a much-publicized article just out in the BMJ: Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and SSRI controlled trials.

Reboxetine was introduced to some fanfare, because its mechanism of action is unique - it's a selective norepinephrine reuptake inhibitor (NRI), which has no effect on serotonin, unlike Prozac and other newer antidepressants. Several older tricyclic antidepressants were NRIs, but they weren't selective because they also blocked a shed-load of receptors.

So in theory reboxetine treats depression while avoiding the side effects of other drugs, but last year, Cipriani et al in a headline-grabbing meta-analysis concluded that in fact it's the exact opposite: reboxetine was the least effective new antidepressant, and was also one of the worst in terms of side effects. Oh dear.

And that was only based on the published data. It turns out that Pfizer, the manufacturers of reboxetine, had chosen to not publish the results of most of their clinical trials of the drug, because the data showed that it was crap.

The new BMJ paper includes these unpublished results - it took an inordinate amount of time and pressure to make Pfizer agree to share them, but they eventually did - and we learn that reboxetine is:
  • no more effective than a placebo at treating depression.
  • less effective than SSRIs, which incidentally are better than placebo in this dataset (a bit).
  • worse tolerated than most SSRIs, and much worse tolerated than placebo.
The one faint glimmer of hope that it's not a complete dud was that it did seem to work better than placebo in depressed inpatients. However, this could well have been a fluke, because the numbers involved were tiny: there was one trial showing a humongous benefit in inpatients, but it only had a total of 52 people.

Claims that reboxetine is dangerous on the basis of this study are a bit misleading - it may be, but there was no evidence for that in these data. It caused nasty and annoying side-effects, but that's not the same thing, because if you don't like side-effects, you could just stop taking it (which is what many people in these trials did).

Anyway, what are the lessons of this sorry tale, beyond reboxetine being rubbish? The main one is: we have to start forcing drug companies and other researchers to publish the results of clinical trials, whatever the results are. I've discussed this previously and suggested one possible way of doing that.

The situation regarding publication bias is far better than it was 10 years ago, thanks to initiatives such as clinicaltrials.gov. Almost all of the reboxetine trials were completed before the year 2000; if they were run today, it would be much harder to hide them - though still not impossible, especially in Europe. We need to make it impossible, everywhere, now.

The other implication is, ironically, good news for antidepressants - well, except reboxetine. The existence of reboxetine, a drug which has lots of side effects, but doesn't work, is evidence against the theory (put forward by Joanna Moncrieff, Irving Kirsch and others) that even the antidepressants that do seem to work, only work because of active placebo effects driven by their side effects.

Given that reboxetine had more side effects than SSRIs, on the active placebo theory it ought to have worked better - but actually it worked worse. This is by no means the final nail in the coffin of the active placebo hypothesis, but it is, to my mind, quite convincing evidence against it.

Link: This study also blogged by Good, Bad and Bogus.

ResearchBlogging.orgEyding, D., Lelgemann, M., Grouven, U., Harter, M., Kromp, M., Kaiser, T., Kerekes, M., Gerken, M., & Wieseler, B. (2010). Reboxetine for acute treatment of major depression: systematic review and meta-analysis of published and unpublished placebo and selective serotonin reuptake inhibitor controlled trials BMJ, 341 (oct12 1) DOI: 10.1136/bmj.c4737

Sunday, September 26, 2010

Big Pharma Explain How To Pick Cherries

Here at Neuroskeptic, we see a lot of bad science. Maybe, over the years (all 2 of them) that I've been writing this blog, I've become a bit jaded. Maybe I'm less distressed by it than I used to be. Cynical, even.

But this one really takes the biscuit. And then it takes the tin. And relieves itself in it: A New Population-Enrichment Strategy to Improve Efficiency of Placebo-Controlled Clinical Trials of Antidepressant Drugs.

Don't worry - it's from a big pharmaceutical company (GlaxoSmithKline), so I don't have to worry about hurting feelings.

It's full to bursting with colourful graphs and pictures, but the basic idea is very simple. As in "simpleton".

Suppose you're testing a new drug against placebo. You decide to do a multicentre trial, i.e. you enlist lots of doctors to give the drug, or placebo, to their patients. Each clinic or hospital which takes part is a "centre". Multicentre trials are popular because they're an easy way of quickly testing a drug on a large number of patients.

Anyway, suppose that the results come in, and it turns out that the drug didn't work any better than placebo, which unfortunately is what happens rather often in modern trials of antidepressants. Oh dear. The drug's crap. That's the end of that chapter.

...
or is it?!? say GSK. Maybe not. They have a clever trick. Look at the results from each centre individually. Placebo response rates will probably vary between centres: in some of them, the placebo people don't get better, in others, they get lots better.

Now, suppose that you just chucked out all of the data from centres where the people on placebo got much better, on the grounds that there must be something weird going on in those ones. That's what the authors did: they reanalyzed the data from 1,837 patients given paroxetine or placebo, across 124 centres. In the dataset as a whole, paroxetine barely outperformed placebo. But in the centres where people on placebo only improved a little, the drug was much better than placebo!

Well, of course it was. Imagine that the drug has no effect. Some people just get better and others don't. Let's assume that each person randomly gets between 0 and 25 better, with an equal chance of any outcome. Half are on drug and half are on placebo, but it makes no difference.

Let's further assume that there are 50 centres, with 20 people per centre (1000 people total). I knocked up a "simulation" of this in Excel (it took 10 minutes). Here's what you get:

The blue dots show, for each imaginary centre, drug improvement vs. placebo improvement. There's no correlation (it's random), and, on average, there is no difference: both average out at 12 points. The drug doesn't work.

The red dots show the "Treatment Effect" i.e. [drug improvement - placebo improvement]. The average is 0 - because the drug doesn't work. But there's a strong negative correlation between Treatment Effect and the placebo improvement - in centres where people improved lots on placebo, the drug worked worse.

This is exactly what Glaxo show in Figure 1a (see above). They write:
The analysis of the surface response indicated the predominant role of center specific placebo response as compared with the dose strength in determining the Treatment Effect of paroxetine.
But of course they correlate. You're correlating placebo improvement with itself: the "Treatment Effect" is a function of the placebo improvement. It's classic regression to the mean.

Of course if you chuck out the centres where people on placebo do well (the grey box in my picture), the drug seems to work pretty nicely. But this is cheating. It is cherry-picking. It is completely unscientific. (To give the authors their due, they also eliminated the centres where the placebo response was very low. This could, under some assumptions, make the analysis unbiased, but they don't show that this was their intention, let alone that it would eliminate all of the bias.)
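To make the cherry-picking concrete, here's my 10-minute Excel simulation redone in Python with NumPy - a sketch under the same assumptions as above: 50 centres, 20 patients each, every patient improving by a uniform random amount between 0 and 25, and a drug that does precisely nothing.

```python
import numpy as np

rng = np.random.default_rng(42)
n_centres, n_per_arm = 50, 10   # 20 patients per centre: 10 drug, 10 placebo

# Each patient improves by a uniform random amount in [0, 25]; the drug is inert.
drug = rng.uniform(0, 25, (n_centres, n_per_arm)).mean(axis=1)      # centre means, drug arm
placebo = rng.uniform(0, 25, (n_centres, n_per_arm)).mean(axis=1)   # centre means, placebo arm
effect = drug - placebo                                             # "Treatment Effect" per centre

print(drug.mean(), placebo.mean())          # both hover around 12.5: no real effect
print(np.corrcoef(placebo, effect)[0, 1])   # strongly negative, purely by construction

# Now "enrich": keep only the centres with a low placebo response...
kept = placebo < np.median(placebo)
print(effect[kept].mean())                  # ...and a spurious positive "effect" appears
```

With the high-placebo centres excluded, the inert drug "beats" placebo - the regression-to-the-mean artefact in action, with no real drug effect anywhere in the data.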

The authors note that this could be a source of bias, but say that it wouldn't be one if it was planned out in advance: "in order to overcome the bias risk, the enrichment strategy should be accounted for and pre-planned in the study protocol." This is like saying that if you announce, before playing chess, that you are going to cheat, it's not cheating.

To be fair to the authors, assuming the drug does work, this method would improve your chances of correctly detecting the effect. Centres with very high placebo responses quite possibly are junk. Assuming the drug works.

But if we're assuming the drug works, why are we bothering to do a trial? The whole point of a trial is to discover something we don't know. The authors justify their approach by suggesting that it would be useful for drug companies who want to do a "proof-of-concept" trial to find out whether an experimental drug might work under the most favourable conditions, i.e. whether they should bother continuing to research it.

They say that such trials "are inherently exploratory in their conception, aimed at signal detection, open to innovation..." - in other words, that they're not meant to be as rigorous as late-stage trials.

Fair enough. But this method is not even suitable for proof-of-concept, because it would (as I have shown above in my 10 minute simulation) increase your chance of finding an "effect" from a drug that doesn't work.

Whether or not the drug works, this method will deliver a positive result, so that result is not useful evidence. It's like saying "Heads I win, tails you lose". You've set it up so that I lose - the coin toss doesn't tell us anything.

All of the authors' results are based on trials in which the drug "should have worked": they do not appear to have simulated what would happen if they used this method on a trial where the drug didn't work, as I just did. So I'm doing Pharma a big favour by writing this post, because if they adopt this approach, they're more likely to waste money on drugs that don't work.

They should be paying me for this stuff.

ResearchBlogging.orgMerlo-Pich E, Alexander RC, Fava M, & Gomeni R (2010). A New Population-Enrichment Strategy to Improve Efficiency of Placebo-Controlled Clinical Trials of Antidepressant Drugs. Clinical Pharmacology and Therapeutics PMID: 20861834
