Six months ago, I asked
What's The Best Antidepressant?, and I discussed
a paper by Andrea Cipriani et al. The paper claimed that of the modern antidepressants, escitalopram (
Lexapro) and sertraline (
Zoloft) offer the best combination of effectiveness and mild side effects, and that sertraline has the advantage of being much cheaper.
The Cipriani paper was a
meta-analysis of trials comparing one drug against another. With a total of over 25,000 patients, it boasted an impressively large dataset, but I advised caution. Their method of crunching the numbers (indirect comparisons) was complex, and rested on a lot of assumptions.
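To give a rough idea of what an indirect comparison involves: if some trials pit drug A against a common comparator C, and other trials pit drug B against C, you can estimate A versus B even though no trial ever compared them head to head, by subtracting one effect from the other. Here's a minimal sketch of that arithmetic in Python - the drug labels, effect sizes and standard errors are invented purely for illustration, not taken from the Cipriani dataset:

```python
import math

# Hypothetical summary results from two sets of trials (log odds ratios and
# their standard errors); none of these numbers come from the actual paper.
log_or_ac, se_ac = 0.30, 0.10   # drug A vs common comparator C
log_or_bc, se_bc = 0.10, 0.12   # drug B vs common comparator C

# Indirect (Bucher-style) comparison of A vs B: subtract the two direct effects.
log_or_ab = log_or_ac - log_or_bc
# The uncertainties add, so indirect estimates are noisier than direct ones.
se_ab = math.sqrt(se_ac**2 + se_bc**2)

lo = math.exp(log_or_ab - 1.96 * se_ab)
hi = math.exp(log_or_ab + 1.96 * se_ab)
print(f"Indirect odds ratio, A vs B: {math.exp(log_or_ab):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

The crucial assumption is that the A-vs-C trials and the B-vs-C trials are similar enough (in patients, dosing and design) for the subtraction to be fair. A network meta-analysis like Cipriani's stitches together many such comparisons across a dozen drugs, so the assumptions multiply accordingly.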
I wasn't the only skeptic. Cipriani et al has attracted plenty of comments in the medical literature, and they make for some fascinating reading. Indeed, they amount to a crash course in the controversies surrounding antidepressants today - a whole debate in microcosm. So here's the microcosm, in a nutshell:
*
In
The Lancet, the original paper was accompanied by glowing praise from one Sagar Parikh:
Free of any potential funding bias... Now, the clinician can identify the four best treatments... A new gold standard of reliable information has been compiled for patients to review.
But critical comments swiftly appeared in the
Lancet's letters pages. While not accusing Cipriani and colleagues themselves of bias or conflicts-of-interest, Tom Jefferson noted that way back in 2003,
David Healy drew attention to:
documents that a communications agency acting on behalf of the makers of sertraline were forced to make available by a US court. Among them was a register of completed sertraline studies awaiting to be assigned to authors. This practice (rent-a-key-opinion-leader) is of unknown prevalence but it undermines any attempt at reviewing the evidence in a meaningful way.
This is what's known as
medical ghostwriting, and it is indeed
a scandal. However, by itself, ghostwriting doesn't distort evidence as such. It's what's published - or
not published - that counts. Almost all antidepressant trials are run and funded by drug companies. All too often, they just don't publish data showing their products in an unfavourable light. The fearsome
John Ioannidis - known for writing papers with titles like
Why most published research findings are false - pulled no punches in reminding readers of this in his letter:
Among placebo controlled antidepressant trials registered with the US FDA, most negative results are unpublished or published as positive. Take sertraline, which Cipriani and colleagues recommend as the best ... of five FDA-registered trials, the only positive trial was published, one negative trial was published as positive, and three negative trials were unpublished. Head-to-head comparisons can suffer worse bias, since regulatory registration is uncommon. Meta-analysis of published plus industry-furnished data could spuriously suggest that the best drugs are those with the most shamelessly biased data ...
Ioannidis also noted that Cipriani et al did not include placebo-controlled trials in their analysis. He helpfully provided a table showing that, if you do include these trials, the ranking of antidepressants comes out very differently.
Of course, Ioannidis was not saying that the drug-vs-placebo data is
better than the drug-vs-drug data. After all, he had just declared it to be biased. But neither is it necessarily
worse, and there's no good reason not to consider it.
Cipriani et al's response to their critics was a little light on detail. In response to concerns of industrial publication bias, they said that:
we contacted the original authors and pharmaceutical companies to obtain further data or to confirm reported figures.
But of course the pharmaceutical companies were under no obligation to play ball. They could just have chosen not to reveal embarrassing data. Rather more reassuring is the fact that the original paper did look for correlations between the drug company running each trial and the results of that trial; they didn't find any. Rather cheekily, Cipriani et al then went on to suggest that
they were the ones who were sticking it to Big Pharma:
The standard thinking has become that most antidepressants are of similar average efficacy and tolerability ... In some ways, this is a comfortable position for industry and its hired academic opinion leaders—it sets a low threshold for the introduction of new agents which can initially be marketed on the basis of small differences in specific adverse effects rather than on clear advantages in terms of overall average efficacy and acceptability.
They certainly have a point here. If aspiring antidepressants had to be proven
better than existing ones in order to be sold, instead of just as good, there would probably have been no new antidepressants since Prozac in 1990. (And Prozac is only "better" than the drugs available in 1960 in that it's safer and has fewer side effects; it's no more effective.)
But this is not really relevant to whether the Cipriani analysis is valid. And in
The Lancet letters, the authors did not address some of the criticisms at all, such as Ioannidis's point about including placebo-controlled trials. They did, however, point out that their raw data is
available online for anyone to play around with.
The debate continued in the pages of
Evidence Based Mental Health. In 2008, Gerald Gartlehner and Bradley Gaynes conducted
a rather similar meta-analysis, but they reached very different conclusions. They declared that all post-1990 antidepressants are equally effective (or ineffective).
In
their comments on the Cipriani paper, Gartlehner and Gaynes say that they were just more cautious in interpreting the results of a complex and problematic statistical process:
Ranking sertraline and escitalopram higher than other drugs conveys a precision
and existence of clinically important differences that is not reflected in the body of evidence. ...for sertraline and escitalopram the range of probabilities actually extends from the first to the eighth rank for both efficacy and acceptability... the validity of results of indirect comparisons depends on various assumptions, some of which are unverifiable ... We simply took underlying uncertainties into greater consideration and interpreted findings more cautiously than Cipriani and colleagues.
They also accuse Cipriani et al of various technical shortcomings - and in a meta-analysis, such 'technicalities' can often greatly skew the results:
they included studies with very different populations such as frail elderly, patients with accompanying anxiety and inpatients as well as outpatients ... the effect measure of choice was odds ratios rather than relative risks. Odds ratios have mathematical advantages that statisticians value. Practitioners, however, frequently overestimate their clinical importance...
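It's worth seeing how the odds-ratio point plays out with some toy numbers (invented for illustration, not drawn from any of the trials). Suppose 60% of patients respond on one drug and 50% on another: the relative risk is a modest 1.2, but the odds ratio is 1.5, which is easy to misread as a much bigger benefit:

```python
# Toy illustration of how an odds ratio can look more impressive than the
# corresponding relative risk. The response rates below are made up.
p_a = 0.60   # hypothetical response rate on drug A
p_b = 0.50   # hypothetical response rate on drug B

relative_risk = p_a / p_b
odds_ratio = (p_a / (1 - p_a)) / (p_b / (1 - p_b))

print(f"Relative risk: {relative_risk:.2f}")  # 1.20 - a 20% higher chance of responding
print(f"Odds ratio:    {odds_ratio:.2f}")     # 1.50 - sounds like "50% better" if misread
```

Both numbers describe the same pair of trials; the worry is simply that clinicians reading an odds ratio of 1.5 tend to hear "50% better", when the real difference in response rates is far smaller.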
Cipriani et al respond to some of these technical criticisms, while admitting that their analysis has limitations. But, they say, even an imperfect ranking of antidepressants is better than none at all:
We have a choice. We may either make the best use of the available randomised evidence or we essentially ignore it. We believe that it is better to have a set of criteria based on the available evidence than to have no criteria at all... We believe that, despite the likely biases of the included trials, and the limitations of our approach, our analysis makes the best use of the randomised evidence, providing clinicians with evidence based criteria that can be used to guide treatment choices.
*
What are we to make of all this? Here's my two cents. It's implausible that all antidepressants are truly equally effective. They affect the brain in different ways. The pharmacological differences between
SSRIs such as Prozac, Zoloft and Lexapro are minimal at best, but mirtazapine and reboxetine, say, target entirely different systems. They work differently, so it would be odd if they all worked equally well.
The search phrase that most often leads people to this blog is "best antidepressant". People really want to know which antidepressant is most likely to help them. In truth, everyone responds differently to every drug, so there is no one best treatment. But Cipriani et al are quite right that even a
roughly correct ranking could help improve the treatment of people with depression, even if the differences are tiny. If Drug X helps 1% more people than Drug Y on average, that's a lot of people when
30 million Americans take antidepressants every year: around 300,000 of them.
So, what
is the best antidepressant, on average? I don't know. But maybe it's escitalopram or sertraline. Stranger things have happened.
Ioannidis JP (2009). Ranking antidepressants. Lancet, 373(9677). PMID: 19465221
Gartlehner G, & Gaynes B (2009). Are all antidepressants equal? Evidence-Based Mental Health, 12(4), 98-100. DOI: 10.1136/ebmh.12.4.98