Friday, February 13, 2009

Music Reviews?

Two sites I visit daily are Pitchfork and DrownedInSound. They're music review sites, which means they're full of stuff like this:
Major General moves a fair piece between the Hüsker Dü-like urgency of opener "Jeff Penalty" to the loungey, languid closer "I'm Done Singing", hitting mid-1990s alt-rock, tipsy Billy Joel balladry, Sunday afternoon swing, and Eastern European folk-tinged rave-ups along the way. I can't quite tell if "Jeff Penalty" is a highlight or the highlight, but it's certainly a winner, spinning a note-perfect yarn of seeing the Jello Biafra-less Dead Kennedys revue. In Nicolay's tale, the crowd reluctantly accepts "Jeff Whatisname" and refuses to stop believin'. It's as good a song about navigating aging in the scene-- never selling out, after all, just turns you into the old guy in the room-- as any on the Hold Steady's Stay Positive, and if they were to slip it into a setlist sometime soon, they wouldn't miss a step.
Now, I don't know if I'm alone in this, but I don't find that helpful. It would be interesting if you were really into the band in question & cared about every detail of what they do, but when you're looking for recommendations as to what to listen to, is that what you look for? And can a review really tell you why something is any good or not?

What I look for in these reviews is the bit where they tell you what other music it sounds like - because if it's similar to something I already like then I'll probably like it, and if not I probably won't. Everyone has things they like, and there's little rhyme or reason to that - at the moment I'm listening to a lot of Darker My Love, The Aliens, and 1990s Green Day, but I couldn't tell you why. I just like them. And I'll probably like stuff which sounds like them. That's what having a certain taste is, surely.

That's why Pandora, last.fm and other automatic music-recommendation engines are rapidly becoming more useful to me than reviewers. You can type a band or a song into Pandora and it'll recommend other music that sounds like it. Perfect. Except that at the moment Pandora only works if you live in the US, for copyright reasons...

Wednesday, February 11, 2009

What's the Best Antidepressant?

Edit: For more discussion of this paper, see here. (29.10.09)

It's escitalopram (Lexapro aka Cipralex) - hurrah! That is if you believe a meta-analysis just published in The Lancet. Should you believe it? The Lancet's a highly-regarded journal. However, this paper certainly bears a close reading.

The question of whether any antidepressant works "better" than any other is an old one. There are many who hold that all antidepressants are pretty much equal. Then again, there are people who deny that they really work at all. If you think about it, it would be pretty odd if tianeptine, a drug which enhances the reuptake of serotonin, was exactly as good as tranylcypromine, which blocks the breakdown of serotonin, noradrenaline and dopamine. They work in completely different ways, so one of them probably ought to work better. Every psychiatrist I've spoken to believes that some drugs are better than others - but they rarely agree on which ones are better. So there's room for more knowledge here.

The Lancet paper tries to establish the comparative efficacy and tolerability of 12 "newer" antidepressants. This includes SSRIs like fluoxetine (Prozac) and citalopram, as well as the noradrenaline reuptake inhibitor reboxetine (Edronax), dual-action venlafaxine (Effexor), and a few others. However, it doesn't include pre-1990 drugs like tricyclics and MAOIs - sometimes regarded as a bit more powerful (but much less safe) than the newer drugs.

The headline results?
Mirtazapine, escitalopram, venlafaxine, and sertraline were among the most efficacious treatments [in that order], and escitalopram, sertraline, bupropion, and citalopram were better tolerated than the other remaining antidepressants [in that order]
In other words, escitalopram has the mildest side effects and is also very effective; mirtazapine is slightly more effective, but the side effects are considerably worse. Sertraline offers a good combination of tolerability and power, but escitalopram is even better. (Sertraline is much cheaper though, because the patent has expired.) Hurrah. Reboxetine, on the other hand, is declared total rubbish, being the least effective and also the worst tolerated of the 12. Oh dear.

But how did they reach these bold conclusions? They did a meta-analysis of 117 randomized controlled trials directly comparing one antidepressant against another ("head-to-head comparator trials"). There was plenty of data - in total the trials covered 25,928 people. But the data was patchy. There are plenty of trials comparing fluoxetine vs. venlafaxine, but there are very few comparing, say, venlafaxine with citalopram. The diagram at the top shows the number of each type of comparison; some drugs were almost never compared with anything. Why? Generally, because these trials are run by drug companies comparing their newest product with an established competitor, in an attempt to show that theirs is better.

In an attempt to get around this problem, the authors did a "multiple-treatments meta-analysis"; essentially, this involves indirectly comparing drug A and drug B, by looking at direct comparisons of both to drug C. If A is much better than C, and B is a little better than C, you can work out that A is better than B.
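The arithmetic behind an indirect comparison is simplest on the log-odds scale: the indirect A-versus-B effect is just the difference of the two direct effects against the common comparator. Here's a minimal sketch with made-up odds ratios (illustrative numbers only, not the paper's data):

```python
import math

def indirect_log_or(log_or_a_vs_c, log_or_b_vs_c):
    # On the log-odds scale, the indirect estimate is the difference of
    # the two direct estimates against the common comparator C:
    #   log OR(A vs B) = log OR(A vs C) - log OR(B vs C)
    return log_or_a_vs_c - log_or_b_vs_c

# Illustrative numbers: A is much better than C, B only a little better
or_a_vs_c = 2.0   # direct trials: drug A vs drug C
or_b_vs_c = 1.2   # direct trials: drug B vs drug C

or_a_vs_b = math.exp(
    indirect_log_or(math.log(or_a_vs_c), math.log(or_b_vs_c))
)
print(round(or_a_vs_b, 2))  # 1.67 -> A comes out better than B
```

The catch, of course, is the assumption that the A-vs-C trials and the B-vs-C trials are similar enough (in patients, doses, outcome measures) for the subtraction to be meaningful.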

Of course, this involves a lot of assumptions. And in the case where you have 12 drugs, not just 3, it becomes very very complicated. The methods section offers little insight into exactly what the authors did:
We did a random-effects model within a Bayesian framework using Markov chain Monte Carlo methods in WinBUGS (MRC Biostatistics Unit, Cambridge, UK). We modelled the binary outcomes in every treatment group of every study, and specified the relations among the odds ratios (ORs) across studies making different comparisons. This method combines direct and indirect evidence for any given pair of treatments. We used p values less than 0·05 and 95% CIs (according to whether the CI included the null value) to assess significance, and looked at a plausible range for the magnitude of the population difference. We also assessed the probability that each antidepressant drug was the most efficacious regimen, the second best, the third best, and so on, by calculating the OR for each drug compared with an arbitrary common control group, and counting the proportion of iterations of the Markov chain in which each drug had the highest OR, the second highest, and so on. We ranked treatments in terms of acceptability with the same methods.
I don't know what that means, in practice. I know vaguely what it means in theory but in any kind of data-crunching like this, there are always things that can go wrong and difficult decisions to be made. So the analysis might have been completely reasonable - but we don't know. The authors deny that any drug company funded the study. I vaguely know some of them, and I don't believe for a second that they deliberately fixed the results in favor of escitalopram. But readers of the paper have no way of knowing whether their analysis method was reliable or not.
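One piece of that description is at least easy to illustrate: the ranking step, where you count the proportion of MCMC iterations in which each drug has the highest OR. Here's a toy sketch using fake posterior draws (invented means and spreads, standing in for the chain's iterations - emphatically not the paper's model):

```python
import random

random.seed(0)

# Fake posterior draws of each drug's log-OR vs. a common control,
# standing in for iterations of the Markov chain (invented numbers).
drugs = {
    "escitalopram": (0.40, 0.15),  # (mean, sd) of log-OR
    "sertraline":   (0.35, 0.15),
    "reboxetine":   (0.00, 0.15),
}

n_iter = 10_000
samples = {d: [random.gauss(mu, sd) for _ in range(n_iter)]
           for d, (mu, sd) in drugs.items()}

# P(best) = proportion of iterations in which the drug has the highest OR
wins = {d: 0 for d in drugs}
for i in range(n_iter):
    best = max(drugs, key=lambda d: samples[d][i])
    wins[best] += 1

for d in sorted(wins, key=wins.get, reverse=True):
    print(f"{d}: P(best) = {wins[d] / n_iter:.2f}")
```

Even in this toy version you can see why the method is opaque from the outside: the rankings depend entirely on the posterior distributions, which in turn depend on all the modelling decisions the methods section glosses over.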

The more basic problem with this kind of thing is that it doesn't address the question of whether some drugs are better for some people. Anecdotal evidence strongly suggests that some are ("Sertraline made me feel terrible, but citalopram helped" - you hear this kind of thing a lot when talking about antidepressants) but there's not much hard evidence. For patients and doctors, though, it would be very useful to know which drug to prescribe to a certain person. The answer will not always be escitalopram.

Reboxetine may not be good for everyone, but for some people, it might be all they need. For example, given that reboxetine tends to have a stimulant-like "energizing" effect and to wake you up, you might assume that it would be good for someone whose main depression symptom was fatigue & sleepiness. You'd have to assume that, though, because there's no scientific evidence.

Finally, just for a sense of perspective, here's what happened in a couple of other recent antidepressant beauty contests. As you can see, they don't really agree on much...
  • Gartlehner et al. (2008) concluded that "Second-generation antidepressants did not substantially differ in efficacy or effectiveness for the treatment of major depressive disorder on the basis of 203 studies; however, the incidence of specific adverse events and the onset of action differed."
  • Montgomery et al. (2007) said that "[in "moderate-to-severe depression"] three antidepressants met these criteria [for superiority to any other drug]: clomipramine, venlafaxine, and escitalopram. Three antidepressants were found to have probable superiority: milnacipran, duloxetine, and mirtazapine." Note that clomipramine is an older drug not considered in the Lancet paper.
  • Papakostas et al. (2008) report that "These results suggest that the NRI reboxetine and the SSRIs differ with respect to their side-effect profile and overall tolerability but not their efficacy in treating MDD."

A. Cipriani, T. Furukawa, G. Salanti, J. Geddes, J. Higgins, R. Churchill, N. Watanabe, A. Nakagawa, I. Omori, H. McGuire (2009). Comparative efficacy and acceptability of 12 new-generation antidepressants: a multiple-treatments meta-analysis. The Lancet. DOI: 10.1016/S0140-6736(09)60046-5

Saturday, February 7, 2009

The Case Against Placebos

In one form or another, this argument has become popular: Most forms of complementary and alternative medicine (CAM) are just elaborate placebos. However, the placebo effect is incredibly powerful and useful, so these treatments are useful too.

Amongst many other people, Michael Brooks from the Guardian makes such a case here. It's an interesting idea. But I don't buy it.

Firstly, to my knowledge, there's no evidence that placebo treatments are clinically effective in the long term. There's no evidence against it, either, but this lack of evidence is important. (I'm not an expert so if such evidence exists, please say so!) There are, certainly, those well-known studies showing that placebos can improve symptoms in the lab, or in short-term clinical trials. And any doctor can tell you that placebos are a useful way of keeping people who want a quick fix satisfied. But is that what we want? Valium is a quick fix for anxiety and insomnia. It works great, in the short term. That doesn't mean you should take it every night. I don't think you should be taking a placebo every night either.

There's something pretty unsettling about the notion of handing out placebos. They're not physiologically addictive, but this doesn't mean that they can't become an expensive and damaging habit. Unlike many people, I'm not especially concerned about the "deception" aspect of it - if deception is what patients need to feel better, then they should get it. What I find unsettling is the idea that we should be medically treating people who we know don't need real medicine.

Prescribing someone any kind of treatment - whether real drugs, sugar pills, CAM, or anything else - legitimizes the notion that they're ill. The idea that one is ill is a very powerful one and you can do someone great harm by leading them to see themselves as ill unnecessarily.

Suppose you have a couple of weeks where you're feeling a bit tired, a bit down, a bit achey, a bit fuzzy. Maybe you're ill - maybe you've got mild anemia, for example. Most likely, though, you're not. Suppose you go to some kind of professional, whether it be your doctor, your homeopath, or anyone else. They might tell you that it's nothing to worry about, it's normal, just get on with your life, and it'll pass. You'd get annoyed, because you'd hoped for a quick fix, but you live with it, and you don't see yourself as suffering from a medical problem, so you don't expect to need treatment. (Could that be the most powerful placebo of all?)

But what if the professional thinks they can treat you? They give you a pill, or a foot rub, or some lovely oil, with confidence and a smile. You expect to get better, and you do. Hooray! Until the next time you start feeling a bit miserable. At which point, you go back to the professional, for more treatment. After all, it worked wonders last time. Again, it works, for a while. Then you start to notice a pain in your back you never did before - could the professional help? Sure. And while you're there, why not see if he has anything to help with that winter cold?

I do not know how often this happens, but it can't be uncommon. Medicalization is not just driven by drug companies. "Complementary and alternative" medicalization is at least as bad; perhaps worse, because drug companies at least have to convince trained doctors to prescribe their drugs. CAM, almost exclusively aimed at consumers, has no such constraints. There is nothing to stop any perfectly healthy person who believes themselves to be ill from going to a homeopath or a nutritionist, and having that belief validated. I would hope that no responsible CAM practitioner would ever give a medical diagnosis, but this isn't the point - if you treat someone, even with sugar pills, you are telling them that they are ill.

If the claims of CAM practitioners, or indeed CAM-as-placebo supporters, were valid, there probably wouldn't be such demand for CAM. If people really could go to a professional placebo-giver and walk out feeling happy and healthy for ever after, that would be great. Such a person would, presumably, rarely if ever need to see another practitioner, at least for the original ailment (and how many can one person have?) Unfortunately, I don't see this happening very often, although again I'm not aware of any evidence on this point. Saying that most CAM customers are satisfied with their service is not equivalent. The sheer amount of CAM, like the sheer amount of antidepressants being prescribed today, strongly suggests that it is, to an important extent, creating its own market.
