Showing posts with label mental health.

Tuesday, September 20, 2011

Antidepressants In The UK

Antidepressant sales have been rising for many years in Western countries, as regular Neuroskeptic readers will remember.


Most of the studies on antidepressant use come from the USA and the UK, although the pattern also seems to hold for other European countries. The rapid rise of antidepressants from niche drugs to mega-sellers is perhaps the single biggest change in the way medicine treats mental illness since the invention of psychiatric drugs.

But while a rise in sales has been observed in many countries, that doesn't mean the same causes were at work in every case. For example, in the USA, there is good evidence that more people have started taking antidepressants over the past 15 years.

In the UK, however, it's a bit trickier. Antidepressant prescriptions have certainly risen. But a large 2009 study found that, between 1993 and 2005, there was no significant rise in the number of people starting antidepressants for depression. Rather, the rise in prescriptions was driven by patients getting more prescriptions each: the same number of users were using more antidepressants.

Now a new paper has looked at antidepressant use over much the same period (1995-2007), but using a different set of data. Pauline Lockhart and Bruce Guthrie looked at pharmacy records of drugs actually dispensed, not just prescribed, and their data covered only one region, Tayside in Scotland, whereas the 2009 study was nationwide.

So what happened?

The new paper confirmed the 2009 survey's finding of a strong increase in the number of antidepressant prescriptions per patient.

However, unlike the old study, this one found an increase in the number of people who used antidepressants each year. It went up from 8% of the population in 1995 to 13% in 2007 - an extremely high figure, higher even than in the USA.

In other words, more people took them, and they took more of them on average - adding up to a threefold increase in antidepressants actually sold. The increase was seen across men and women of all ages and social classes.
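To see how the two trends multiply together, here is the arithmetic as a quick sketch. The user figures follow the 8% and 13% prevalence above; the per-user prescription counts are invented purely for illustration, not taken from the paper:

```python
# Total antidepressants dispensed = (number of users) x (prescriptions per user).
# A moderate rise in each factor compounds into a roughly threefold total rise.

users_1995, users_2007 = 8_000, 13_000          # per 100,000 population (8% -> 13%)
rx_per_user_1995, rx_per_user_2007 = 4.0, 7.4   # hypothetical average scripts per user

total_1995 = users_1995 * rx_per_user_1995
total_2007 = users_2007 * rx_per_user_2007

print(round(total_2007 / total_1995, 2))  # ~3.0: a threefold rise in dispensed drugs
```

The exact per-user numbers are a guess; the point is only that two modest increases, multiplied, explain a tripling of total sales.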

There's no good evidence of an increase in mental illness in Britain in this period, by the way.

But why did the 2009 paper report no change in antidepressant users, while this one did? It could be that the increase was localized to the Tayside area. Another possibility is that there was an increase nationwide, but it wasn't about people with depression.

The 2009 study only looked at people with a diagnosis of depression. Yet modern antidepressants are widely used for other things as well - like anxiety, insomnia, pain, premature ejaculation. Maybe this non-depression-based use of antidepressants is what's on the rise.

Lockhart, P. and Guthrie, B. (2011). Trends in primary care antidepressant prescribing 1995–2007. British Journal of General Practice.


Tuesday, August 30, 2011

On Antipsychiatry

So leading US psychiatrist Stephen Stahl is annoyed at Daniel Carlat (of The Carlat Psychiatry Blog and many other publications).

After first surveying the current outlook for the development of new psychiatric drugs - not good, with many companies pulling out - Stahl laments:

Undoubtedly this is to the great delight of the anti-psychiatry community, lights up the antipsychiatry blogs (e.g., Carlat, http://carlatpsychiatry.blogspot.com/), who attract the Pharmascolds, scientologists and antimedication crowd who believe either there is no such thing as mental illness, that medication should not be used, or both.



Did you know that psychiatric illnesses are pure inventions of Pharma and their experts to treat patients that do not exist with drugs that are dangerous and do not work with the purpose only of profiting themselves? Stop the profits! Make mental illness go away by legislation and committee!


Stahl ends with the warning: Be careful what you ask for. You might just get it - "it" being an end to drug development in psychiatry.



Well, I would say the same to him.



Stahl paints opponents of modern pharmaceutical industry behaviour as "antipsychiatrists". They're not. Well, he only names one of them, Daniel Carlat, and he's certainly not. Carlat edits the Carlat Psychiatry Report. Let's take a look at the latest issue:



Benzodiazepines: A Guide to Safe Prescribing - discusses benzodiazepines, including a helpful table of their doses and half-lives. Useful to someone planning to prescribe these drugs, that is, which not many anti-psychiatrists would. Says that "They work quickly and effectively for anxiety and agitation...In most cases benzodiazepines have a benign side-effect profile..." Hardly likely to please the antimedication crowd.



Update on Medications for PTSD - including a review of trials of antidepressants, antipsychotics, and more exotic drugs. Says that psychotherapy is the key to treating PTSD, but that medication can be helpful: "Getting some comfort from meds can often enable a patient to more easily face" the hard task of therapy. Not enormously pro-medication, but very far from being anti.



Combined Antidepressants No More Effective Than Monotherapy - discusses a recent study finding that starting depressed patients on a combo of two antidepressants offers no benefits over just one drug. So, the piece concludes, "We recommend never using antidepressants, and banning them all forever"... no wait, that's what it would have said if Stahl were right. It actually said "we recommend...starting with a single antidepressant". Not none.



Overall Carlat is, as far as I can see, really pretty moderate. Yes, he's been critical of certain drugs, of Pharma-influenced psychiatrists and the culture of giving doctors freebies to promote products. Nonetheless, he believes that mental illness exists, and he thinks that medication can be useful in treating it.



Maybe Stahl's right and Carlat leads a secret double life as a Scientologist. Maybe he is the reincarnation of R. D. Laing, or Thomas Szasz in a rubber mask. If not, though, branding him an antipsychiatrist shows that Stahl is unclear on what "psychiatry" is.



Psychiatry means the diagnosis and treatment of mental illness. Carlat, and indeed many other like-minded critics, are trying to improve that process by encouraging correct diagnosis and appropriate treatment.



When Carlat criticizes, say, the psychiatry textbook that turned out to have been written with "help" from a drug company, he's doing it, I assume, because, as a psychiatrist who cares about psychiatry, he doesn't like seeing his field corrupted by propaganda.



This is why Stahl should heed his own warning: Be careful what you ask for.



Because Stahl seems to be asking for all the opponents of the excesses of the modern pharmaceutical industry to be opponents of psychiatry itself. At the moment, they're not. There are many, psychiatrists and others, who are trying to improve psychiatry, by protecting it from what they see as negative influences.



Maybe they're wrong about which influences are negative, maybe Pharma has had a more positive impact than they think, but even if they're wrong, they're not anti-psychiatry, they're pro-psychiatry.



However, if Stahl succeeds in painting all of these people as outside the psychiatric mainstream, he might find that psychiatry, stripped of such voices of sanity, turns into something so crazy that true antipsychiatry becomes the only reasonable option.


Sunday, August 28, 2011

Confused

What is confusion?





According to Collins English Dictionary, the main meaning of the word "confused" is:

confused [kənˈfjuːzd] adj
1. feeling or exhibiting an inability to understand; bewildered; perplexed
That sounds about right. But hang on. Isn't there something odd about this: "feeling or exhibiting an inability to understand..."?



Those are two completely different things. Sometimes people exhibit a lack of understanding and don't feel it - they think they understand, but actually they don't. Indeed, that's the worst kind of confusion, because it leads to people making mistakes based on wrong assumptions. Whereas feeling confused is much less of a problem. If you know you're confused, you won't go around acting as if you're not.



The feeling of confusion happens when you've just avoided being confused, or just come out of it. Confusion is a feeling, and also a state, and the two are not just separate but (to some extent) mutually exclusive. If you feel confused, you can't actually be seriously confused.



Yet we use the same word for both, and the dictionary treats them both as being not just the same but part of the same definition. Confusing.



Or take being drunk. "Drunk" is a feeling, certainly. It's also a state, and they only sometimes go together. You can be drunker than you feel, with hilarious or tragic consequences. Everyone knows that you can't trust a drunken person to know how drunk they are.



Consider "depression". Depression is a feeling. No question about that. We've all felt at least a little depressed. Depression is also a state that certain people go into as a result of mental illness, physical illness, or as a side effect of certain drugs.



But the state of depression is no more equivalent to the feeling of depression than being confused means feeling confused. In my experience of depression, feeling depressed is a sign that I'm only slightly depressed. When I'm really depressed, I don't think I'm depressed at all.



This is one of the most insidious things about depression: it 'creeps up on you'. Over a period of time - usually several days, in my case, but it can be much longer or shorter - your mind changes.



You stop noticing opportunities, and become obsessed with risks. Your ability to take decisions and come up with ideas withers and your imagination fails you. Your thoughts get stuck in loops. You feel weary doing the things you used to enjoy and angry around people you used to like.



In other words, your mind changes. Your memory, thinking and perceptions are all altered - but you don't notice that. You notice the effects, of course, but you think they're outside: you think the world has suddenly become less friendly. A classic case of confusion, in the worst sense.


Thursday, August 25, 2011

New Mutations - New Eugenics?

True or false: you inherit your genes from your parents.





Mostly true, but not quite. In theory, you do indeed get half of your DNA from your mother and half from your father; but in practice, there's sometimes a third parent as well, random chance. Genes don't always get transmitted as they should: mutations occur.



As a result, it's not true that "genetic" always implies "inherited". A disease, for example, could be entirely genetic, yet almost never inherited. Down's syndrome is the textbook example, but it's something of a special case, and until recently it was widely assumed that most disease risk genes were inherited.



Yet recent evidence suggests that many cases of neurological and psychiatric disorders are caused by uninherited, de novo mutation events. Here are two papers from the last few weeks about schizophrenia (1,2) - but the story looks similar for autism, intellectual disabilities, some forms of epilepsy, ADHD, and others. Indeed they're often the same mutations.



Biologically, a given mutation is what it is, whether it's de novo or inherited. But on a social and a psychological level, I think there are crucial differences, and in particular I think that if it turns out that de novo mutations are important in disease, we're going to see attempts to take these variants out of circulation - far more so than in the case of the very same genes, were they inherited.



The old eugenics movement was based on the idea that if we stop people with bad genes from breeding - by sterilization, voluntary or otherwise, say - we'll be able to eliminate diseases and other undesirable traits. This idea is now generally regarded as extremely unethical, but many of its opponents have shared with the eugenicists the belief that it could work.



But if de novo mutations are what cause the majority of disease, then this approach would be pointless. Sterilizing certain people, or encouraging the healthy ones to have more children, would never be able to eliminate the 'bad genes' because new ones are being created every generation, pretty much at random.
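The futility of selecting against de novo variants can be seen in a toy model. All the parameters below are invented for illustration; the point is only that the carrier frequency settles at the mutation rate, not at zero, however harsh the selection:

```python
def carrier_freq(mu, removal_rate, generations=200, start=0.05):
    """Fraction of the population carrying a harmful variant, when a fraction
    `removal_rate` of carriers is prevented from reproducing each generation,
    while de novo mutation re-creates carriers at rate `mu` per birth."""
    f = start
    for _ in range(generations):
        # descendants of the surviving carriers, plus brand-new mutations
        f = min(f * (1 - removal_rate) + mu, 1.0)
    return f

# Even 100% removal of carriers only pushes the frequency down to mu, never zero:
print(carrier_freq(mu=1e-3, removal_rate=1.0))  # 0.001
print(carrier_freq(mu=1e-3, removal_rate=0.5))  # ~0.002 (equilibrium at 2 * mu)
```

In this sketch the equilibrium frequency is mu / removal_rate: selection can shrink the pool of 'bad genes', but mutation keeps refilling it every generation.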



So the de novo paradigm ought to be welcomed by opponents of eugenics. It wasn't just morally wrong - it was biologically misguided too.



But hang on. This is the 21st century. We have in vitro fertilization (IVF), and you can analyze the genes of an IVF embryo before you decide to make it into a child. In the near future, we might be able to routinely sequence the genome of any unborn child shortly after conception.



From there, it would be a small step to allowing parents to decide not to have children with de novo mutations.



This would be, in its effects, a form of eugenics - in the sense that it would produce the effect that the old eugenicists wanted. No more 'bad' genes, or not nearly as many. Opinions will differ as to whether it's morally different. But I would have said that politically, it's a lot more likely to happen.



I can't see forced sterilization returning any time soon. But if you were expecting a baby and you knew that it was not just carrying your and your partner's DNA, but had also suffered a mutation - might you not want to avoid that?



Psychologically, it matters that the child did not inherit the gene. It would be a big step to decide that your child should not inherit one of your own genes. Of course, some genes are obviously harmful, like one that raises the risk of cancer, but think about the grey areas - a gene for social anxiety, mild autistic symptoms, obesity, a personality trait.



You might well feel that carrying that gene is what makes you, you; and so it would be natural for your child to have it. You might decide that if it was good enough for you (and all your ancestors), it's good enough for your children. You might well resent the very idea that it's a 'bad' gene at all, as an attack on your own self-worth.



But none of that applies if it's a de novo mutation. Indeed, quite the opposite - all those same considerations would probably lead you to want your children to carry as close as possible to a carbon copy of your DNA, with no random changes. It was good enough for you.



My point is that I think there will be much more support for the idea of genetic screening against de novo mutations than against inherited genes. More people will want it, it will be more socially acceptable, and it will be more widely used. I'm not saying this would be a good or a bad thing, just making a prediction. In the future, diseases and traits that are primarily caused by de novo mutations will increasingly be selected against.


Friday, August 19, 2011

The Ethics of Forgetfulness Drugs

Drugs that modify or erase memories could soon be a reality. We shouldn't rush to judge them unethical, says a Nature opinion piece by Adam Kolber, of the Neuroethics & Law Blog.



The idea of a pill that could make you forget something, or that could modify the emotional charge of a past experience, does seem rather disturbing.



Yet experiments on animals have gone a long way toward revealing the molecular mechanisms behind the formation and maintenance of memory traces. Much of the early work focussed on dangerously toxic drugs, but recently more targeted approaches have appeared.



Kolber argues that we should not shy away from research in this area or brand the whole idea unethical. Rather we should consider the costs and benefits on a case-by-case basis.

The fears about pharmaceutical memory manipulation are overblown. Thoughtful regulation may some day be appropriate but excessive hand-wringing now over the ethics of tampering with memory could stall research into preventing post-traumatic stress in millions of people. Delay could also hinder people who are already debilitated by harrowing memories from being offered the best hope yet of reclaiming their lives.
He says that

Given the close connection between memory and a sense of self, some bioethicists...worry that giving people too much power to alter their life stories could ultimately weaken their sense of identity and make their lives less genuine.



These arguments are not persuasive. Some memories, such as those of rescue workers who clean up scenes of mass destruction, may have no redeeming value. Drugs may speed up the healing process more effectively than counselling, arguably making patients more true to themselves than they would be if a traumatic experience were to dominate their lives.
This is a complex issue. I can see his point, although I'm not sure the rescue worker example is the best one. A rescue worker, at least a professional one, has chosen to do that kind of work. The experiences that are part of that job are ones they decided to have - or at least that they knew were a realistic possibility - and that may be an expression of their identity.



The argument is perhaps more convincing in the case of someone who, quite unexpectedly, suffers an out-of-the-blue trauma. In this case, the trauma has nothing to do with their lives; if it interferes with their ability to function, it might "stop them from being themselves".



Kolber ends by quoting a fascinating story from Time magazine in 2007, which I didn't catch at the time:

Take a scenario recounted by a US doctor in 2007 (ref. 9). The doctor had biopsied a suspected cancer patient and sent a tissue sample to a pathologist while the woman was still in the operating room. Thinking she was completely sedated, the pathologist announced a bleak prognosis over the intercom.



The patient, who had received only local anaesthesia, heard the news and began to shriek, “Oh my God. My kids!” An anaesthesiologist standing by quickly injected her with propofol, a sedative that causes some people to forget what happened a few minutes before they were injected.



When the woman woke up, she had no memory of hearing her prognosis.
ResearchBlogging.orgKolber A (2011). Neuroethics: Give memory-altering drugs a chance. Nature, 476 (7360), 275-6 PMID: 21850084

The Ethics of Forgetfulness Drugs

Drugs that modify or erase memories could soon be a reality. We shouldn't rush to judge them unethical, says a Nature opinion piece by Adam Kolber, of the Neuroethics & Law Blog.



The idea of a pill that could make you forget something, or that could modify the emotional charge of a past experience, does seem rather disturbing.



Yet experiments on animals have gone a long way toward revealing the molecular mechanisms behind the formation and maintenance of memory traces. Much of the early work focussed on dangerously toxic drugs, but recently more targeted approaches have appeared.



Kolber argues that we should not shy away from research in this area or brand the whole idea unethical. Rather we should consider the costs and benefits on a case-by-case basis.

The fears about pharmaceutical memory manipulation are overblown. Thoughtful regulation may some day be appropriate but excessive hand-wringing now over the ethics of tampering with memory could stall research into preventing post-traumatic stress in millions of people. Delay could also hinder people who are already debilitated by harrowing memories from being offered the best hope yet of reclaiming their lives.
He says that

Given the close connection between memory and a sense of self, some bioethicists...worry that giving people too much power to alter their life stories could ultimately weaken their sense of identity and make their lives less genuine.



These arguments are not persuasive. Some memories, such as those of rescue workers who clean up scenes of mass destruction, may have no redeeming value. Drugs may speed up the healing process more effectively than counselling, arguably making patients more true to themselves than they would be if a traumatic experience were to dominate their lives.
This is a complex issue. I can see his point, although I'm not sure the rescue worker example is the best one. A rescue worker, at least a professional one, has chosen to do that kind of work. The experiences that are part of that job are ones they decided to have - or at least that they knew were a realistic possibility - and that may be an expression of their identity.



The argument is perhaps more convincing in the case of someone who, quite unexpectedly, suffers an out-of-the-blue trauma. In this case, the trauma has nothing to do with their lives; if it interferes with their ability to function, it might "stop them from being themselves".



Kolber ends by quoting a fascinating story from Time magazine in 2007, which I didn't catch at the time:

Take a scenario recounted by a US doctor in 2007 (ref. 9). The doctor had biopsied a suspected cancer patient and sent a tissue sample to a pathologist while the woman was still in the operating room. Thinking she was completely sedated, the pathologist announced a bleak prognosis over the intercom.



The patient, who had received only local anaesthesia, heard the news and began to shriek, “Oh my God. My kids!” An anaesthesiologist standing by quickly injected her with propofol, a sedative that causes some people to forget what happened a few minutes before they were injected.



When the woman woke up, she had no memory of hearing her prognosis.
Kolber A (2011). Neuroethics: Give memory-altering drugs a chance. Nature, 476 (7360), 275-6. PMID: 21850084

Monday, August 8, 2011

So Apparently I'm Bipolar

According to a new paper, yours truly is bipolar.





I've written before of my experience of depression, and the fact that I take antidepressants, but I've never been diagnosed with bipolar.



I've taken a few drugs in my time. On certain dopamine-based drugs I got euphoric, filled with energy, talkative, confident, with no need for sleep, and a boundless desire to do stuff, which is textbook hypomania. So I think I know what it feels like, and I can confidently say that it has never happened to me out of the blue.



On antidepressants, I have had some mild experiences of this type. Ironically, the closest I've come to it was when I quit an SSRI antidepressant. I've also experienced periods of irritability and agitation on antidepressants. Either way, that's antidepressants. Bipolar is when you get high on your own supply of neurotransmitters.



Well, it used to be. Jules Angst et al have proposed new, broader criteria for "bipolarity" in depression. They say that manic symptoms in response to antidepressants do count, exactly like out-of-the-blue mania.



What's more, under the new "Bipolar Specifier" criteria, there's no minimum duration. Under existing criteria the symptoms have to last 4 or 7 days, depending on severity. Under the new regime if you've ever been irritable, high, agitated or hyperactive, on antidepressants or not, you meet "Bipolar Specifier" criteria, so long as it was marked enough that someone else noticed it.



All you need is:

an episode of elevated mood, an episode of irritable mood, or an episode of increased activity with at least 3 of the symptoms listed under Criterion B of the DSM-IV-TR associated with at least 1 of the 3 following consequences: (1) unequivocal and observable change in functioning uncharacteristic of the person’s usual behavior, (2) marked impairment in social or occupational functioning observable by others, or (3) requiring hospitalization or outpatient treatment.

The bipolar net just got bigger. And they caught me in it. Me and 47% of depressed people in their study. They recruited 509 psychiatrists from around the world, and got each of them to assess between 10 and 20 consecutive adult depressed patients who were referred to them for evaluation or treatment. A total of 5635 patients were included.



Only 16% met existing DSM-IV criteria for bipolar disorder, so the new system with 47% identified an "extra" 31%, trebling the number of bipolar cases.
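For what it's worth, the arithmetic on those headline figures (all taken from the paper as quoted above) works out like this; a minimal sketch in Python:

```python
# Figures as reported in the BRIDGE study (quoted above).
n_patients = 5635
dsm_iv_bipolar = 0.16      # met existing DSM-IV bipolar criteria
specifier_bipolar = 0.47   # met the new "Bipolar Specifier" criteria

extra_fraction = specifier_bipolar - dsm_iv_bipolar
print(f"Extra cases: {extra_fraction:.0%} of the sample "
      f"(~{extra_fraction * n_patients:.0f} patients)")
# → Extra cases: 31% of the sample (~1747 patients)

print(f"Case count multiplied by ~{specifier_bipolar / dsm_iv_bipolar:.1f}x")
# → Case count multiplied by ~2.9x
```

So "trebling" is, if anything, a slight understatement-by-rounding: the specifier nearly triples the number of bipolar cases in the same sample.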



A cynic would say that this is a breathtaking piece of psychiatric marketing. You give people antidepressants, then you diagnose them with bipolar on the basis of their reaction to those drugs, thus justifying selling them yet more drugs.



The cynic would not be surprised to learn that this study was sponsored by pharmaceutical company Sanofi.

All investigators recruited received fees, on a per patient basis, from sanofi-aventis in recognition of their participation in the study....The sponsor of this study (sanofi-aventis) was involved in the study design, conduct, monitoring, data analysis, and preparation of the report.
In fairness, the authors do show that patients meeting their criteria tend to have characteristics typical of bipolar people. And they show that their system is at least as good as DSM-IV at picking out these cases:



For example, DSM-IV bipolar patients had a younger age of onset than DSM-IV depressed ones. "Bipolar specifier" patients did too, compared to the 53% who didn't meet the criteria. Same for a family history of manic symptoms, multiple episodes, and shorter episodes. All of those are pretty well established correlates of bipolar disorder.



That's fine, and the results are better than I expected when I picked up this paper. But all this shows us is that the bipolar specifier was no worse than the DSM-IV criteria as applied in this study.



It doesn't tell us whether either was any good.



DSM-IV criteria were used in a mechanical cookbook fashion - symptoms were assessed by the psychiatrist, written down, sent back to the study authors, who then diagnosed them if they ticked enough boxes. Is that a good approach? We don't know.



Most importantly, we have no idea whether these people would do better being treated as bipolar rather than as depressed. The difference being that bipolar people get mood stabilizers. Maybe these people would benefit from mood stabilizers, maybe not. Existing literature on mood stabilizers in bipolar people can't be assumed to generalize to these 47%.



In the discussion, the authors argue that antidepressants are not much good in bipolar people, whereas mood stabilizers are. Fun fact: Sanofi make many of the most popular formulations of valproic acid/valproate, a big-selling mood stabilizer.



I think that is no coincidence. Maybe that sounds crazy, but hey, what do you expect? I'm bipolar.



Angst J, Azorin JM, Bowden CL, Perugi G, Vieta E, Gamma A, Young AH, & for the BRIDGE Study Group (2011). Prevalence and Characteristics of Undiagnosed Bipolar Disorders in Patients With a Major Depressive Episode: The BRIDGE Study. Archives of General Psychiatry, 68 (8), 791-798. PMID: 21810644


Thursday, August 4, 2011

Brain-Modifying Drugs

What if there were a drug that didn't just alter the levels of chemicals in your brain, but switched off genes in your brain? That possibility - either exciting or sinister, depending on how you look at it - could be remarkably close, according to a report just out from a Spanish group.

The authors took an antidepressant, sertraline, and chemically welded it to a small interfering RNA (siRNA). A siRNA is kind of like a pair of genetic handcuffs. It selectively blocks the expression of a particular gene, by binding to and interfering with RNA messengers. In this case, the target was the serotonin 5HT1A receptor.

The authors injected their molecule into the brains of some mice. The sertraline was there to target the siRNA at specific cell types. Sertraline works by binding to and blocking the serotonin transporter (SERT), and this is only expressed on cells that release serotonin; so only these cells were subject to the 5HT1A silencing.

The idea is that this receptor acts as a kind of automatic off-switch for these cells, making them reduce their firing in response to their own output, to keep them from firing too fast. There's a theory that this feedback can be a bad thing, because it stops antidepressants from being able to boost serotonin levels very much, although this is debated.

Anyway, it worked. The treated mice showed a strong and selective reduction in the density of the 5HT1A receptor in the target area (the Raphe nuclei containing serotonin cells), but not in the rest of the brain.

Note that this isn't genetic modification as such. The gene wasn't deleted, just silenced - temporarily, one hopes; the effect persisted for at least 3 days, but they didn't investigate just how long it lasted.

That's remarkable enough, but what's more, it also worked when they administered the drug via the intranasal route. In many siRNA experiments, the payload is injected directly into the brain. That's fine for lab mice, but not very practical for humans. Intranasal administration, however, is popular and easy.

So siRNA-sertraline, and who knows what other drugs built along these lines, may be closer to being ready for human consumption than anyone would have predicted. However... the mouse's brain is a lot closer to its nose than the human brain is, so it might not go quite as smoothly.

The mind boggles at the potential. If you could selectively alter the gene expression of selective neurons, you could do things to the brain that are currently impossible. Existing drugs hit the whole brain, yet there are many reasons why you'd prefer to only affect certain areas. And editing gene expression would allow much more detailed control over those cells than is currently possible.

Currently available drugs are shotguns and sledgehammers. These approaches could provide sniper rifles and scalpels. But whether it will prove safe remains to be seen. I certainly wouldn't want to be the first one to snort this particular drug.

Bortolozzi, A., Castañé, A., Semakova, J., Santana, N., Alvarado, G., Cortés, R., Ferrés-Coy, A., Fernández, G., Carmona, M., Toth, M., Perales, J., Montefeltro, A., & Artigas, F. (2011). Selective siRNA-mediated suppression of 5-HT1A autoreceptors evokes strong anti-depressant-like effects. Molecular Psychiatry. DOI: 10.1038/mp.2011.92


Wednesday, August 3, 2011

Antipsychotics - The New Valium?

Antipsychotics, originally designed to control the hallucinations and delusions seen in schizophrenia, have been expanding their domain in recent years.

Nowadays, they're widely used in bipolar disorder, depression, and as a new paper reveals, increasingly in anxiety disorders as well.

The authors, Comer et al, looked at the NAMCS survey, which provides yearly data on the use of medications in visits to office-based doctors across the USA.

Back in 1996, just 10% of visits in which an anxiety disorder was diagnosed ended in a prescription for an antipsychotic. By 2007 it was over 20%. No atypical is licensed for use in anxiety disorders in the USA, so all of these prescriptions are off-label.

Not all of these prescriptions will have been for anxiety. They may have been prescribed to treat psychosis, in people who also happened to be anxious. However, the increase was accounted for by the rise in non-psychotic patients, and there was a rise in the rate of people with only anxiety disorders.

The increase was driven by the newer, "atypical" antipsychotics.

Whether the modern trend for prescribing antipsychotics for anxiety is a good or a bad thing is not for us to say. The authors discuss various concerns, ranging from the side effects (obesity, diabetes and more) to the fact that there have only been a few clinical trials of these drugs in anxiety.

But what's really disturbing about these results, to me, is how fast the change happened. Between 2000 and 2004, use doubled from 10% to 20% of anxiety visits. That's an astonishingly fast change in medical practice.
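To see just how fast that is, a doubling over four years implies a compound growth rate of nearly 19% a year. A quick back-of-the-envelope check, using the NAMCS figures quoted above:

```python
# Share of anxiety-disorder visits ending in an antipsychotic
# prescription, per the NAMCS figures quoted above.
share_2000, share_2004 = 0.10, 0.20
years = 2004 - 2000

# Compound annual growth rate implied by a doubling over 4 years.
cagr = (share_2004 / share_2000) ** (1 / years) - 1
print(f"Implied compound growth: ~{cagr:.1%} per year")
# → Implied compound growth: ~18.9% per year
```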

Why? It wasn't because that period saw the publication of a load of large, well-designed clinical trials demonstrating that these drugs work wonders in anxiety disorders. It didn't.

But as Comer et al put it:
An increasing number of office-based psychiatrists are specializing in pharmacotherapy to the exclusion of psychotherapy. Limitations in the availability of psychosocial interventions may place heavy clinical demands on the pharmacological dimensions of mental health care for anxiety disorder patients.
In other words, antipsychotics may have become popular because they're the treatment for people who can't afford anything better.

These data show that antipsychotics were over twice as likely to be prescribed to African American patients; to the poor (i.e. patients with public health insurance); and to children under 18.

Comer JS, Mojtabai R, & Olfson M (2011). National Trends in the Antipsychotic Treatment of Psychiatric Outpatients With Anxiety Disorders. The American Journal of Psychiatry. PMID: 21799067


Friday, July 22, 2011

New Antidepressant - Old Tricks

The past decade has been a bad one for antidepressant manufacturers.

Quite apart from all the bad press these drugs have been getting lately, there's been a remarkable lack of new antidepressants making it to the market. The only really novel drugs to hit the shelves since 2000 have been agomelatine and vilazodone. There were a couple of others that were just minor variants on old molecules, but that's it. Quite a contrast from the 1990s when new drugs were ten-a-penny.

This makes "Lu AA21004" rather special. It's a new antidepressant currently in development and by all accounts it's making good progress. It's now in Phase III trials, the last stage before approval. And a large clinical trial has just been published finding that it works.

But is it a medical advance or merely a commercial one?

Pharmacologically, Lu AA21004 is kind of a new twist on an old classic. Its main mechanism of action is inhibiting the reuptake of serotonin, just like Prozac and the other SSRIs. However, unlike them, it also blocks serotonin 5HT3 and 5HT7 receptors, activates 5HT1A receptors, and is a partial agonist at 5HT1B receptors.

None of these things cry out "antidepressant" to me, but they do at least make it a bit different.

The new trial took 430 depressed people and randomized them to Lu AA21004 at one of two doses (5 mg or 10 mg), the older antidepressant venlafaxine at the high-ish dose of 225 mg, or placebo.

It worked. Over 6 weeks, people on the new drug improved more than those on placebo, and as much as people on venlafaxine; the lower 5 mg dose was a bit less effective, but not significantly so.

The effect was medium-sized: a benefit over and above placebo of about 5 points on the MADRS depression scale. Considering that baseline scores in this study averaged 34, that's not huge, but it compares well with other antidepressant trials.
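One crude way to put that 5-point difference in context - not a formal effect size, which would need the scores' standard deviation - is to express it as a fraction of baseline severity, using the figures from the trial as reported above:

```python
# Figures as reported in the trial (quoted above).
baseline_madrs = 34        # average MADRS score at study entry
benefit_vs_placebo = 5     # extra improvement over placebo, in MADRS points

relative = benefit_vs_placebo / baseline_madrs
print(f"Drug-placebo difference is ~{relative:.0%} of baseline severity")
# → Drug-placebo difference is ~15% of baseline severity
```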

Now we come to the side effects, and this is the most important bit, as we'll see later. The authors did not specifically probe for these, they just relied on spontaneous report, which tends to underestimate adverse events.


Basically, the main problem with Lu AA21004 was that it made people sick. Literally - 9% of people on the highest dose suffered vomiting, and 38% got nausea. However, the 5 mg dose was no worse than venlafaxine for nausea, and was relatively vomit-free. Unlike venlafaxine, it didn't cause dry mouth, constipation, or sexual problems.

So that's lovely then. Let's get this stuff to market!

Hang on.

The big selling point for this drug is clearly the lack of side effects. It was no more effective than the (much cheaper, because off-patent) venlafaxine. It was better tolerated, but that's not a great achievement to be honest. Venlafaxine is quite notorious for causing side effects, especially at higher doses.

I take venlafaxine 300 mg and the side effects aren't the end of the world, but they're no fun, and the point is, they're well known to be worse than you get with other modern drugs, most notably SSRIs.

If you ask me, this study should have compared the new drug to an SSRI, because they're used much more widely than venlafaxine. Which one? How about escitalopram, a drug which is, according to most of the literature, one of the best SSRIs, as effective as venlafaxine, but with fewer side effects.

Actually, according to Lundbeck, who make escitalopram, it's even better than venlafaxine. Now, they would say that, given that they make it - but the makers of Lu AA21004 ought to believe them, because, er, they're the same people. "Lu" stands for Lundbeck.

The real competitor for this drug, according to Lundbeck, is escitalopram. But no-one wants to be in competition with themselves.

This may be why, although there are no fewer than 26 registered clinical trials of Lu AA21004 either ongoing or completed, only one is comparing it to an SSRI. The others either compare it to venlafaxine, or to duloxetine, which has even worse side effects. The one trial that will compare it to escitalopram has a narrow focus (sexual dysfunction).

Pharmacologically, remember, this drug is an SSRI with a few "special moves", in terms of hitting some serotonin receptors. The question is - do those extra tricks actually make it better? Or is it just a glorified, and expensive, new SSRI? We don't know and we're not going to find out any time soon.

If Lu AA21004 is no more effective, and no better tolerated, than tried-and-tested old escitalopram, anyone who buys it will be paying extra for no real benefit. The only winner, in that case, being Lundbeck - especially given that escitalopram goes off-patent in 2012...

Alvarez E, Perez V, Dragheim M, Loft H, & Artigas F (2011). A double-blind, randomized, placebo-controlled, active reference study of Lu AA21004 in patients with major depressive disorder. The International Journal of Neuropsychopharmacology, 1-12. PMID: 21767441