Showing posts with label papers. Show all posts

Tuesday, September 20, 2011

Antidepressants In The UK

Antidepressant sales have been rising for many years in Western countries, as regular Neuroskeptic readers will remember.


Most of the studies on antidepressant use come from the USA and the UK, although the pattern also seems to hold for other European countries. The rapid rise of antidepressants from niche drugs to mega-sellers is perhaps the single biggest change in the way medicine treats mental illness since the invention of psychiatric drugs.

But while a rise in sales has been observed in many countries, that doesn't mean the same causes were at work in every case. For example, in the USA, there is good evidence that more people have started taking antidepressants over the past 15 years.

In the UK, however, it's a bit trickier. Antidepressant prescriptions have certainly risen. However, a large 2009 study found that, between 1993 and 2005, there was no significant rise in the number of people starting antidepressants for depression. Rather, the rise in prescriptions was driven by patients getting more prescriptions each: the same number of users were using more antidepressants.

Now a new paper has looked at antidepressant use over much the same period (1995-2007), but using a different dataset. Pauline Lockhart and Bruce Guthrie examined pharmacy records of drugs actually dispensed, not just prescribed, and their data covers only a single region, Tayside in Scotland; the 2009 study was nationwide.

So what happened?

The new paper confirmed the 2009 survey's finding of a strong increase in the number of antidepressant prescriptions per patient.

However, unlike the old study, this one found an increase in the number of people who used antidepressants each year: up from 8% of the population in 1995 to 13% in 2007 - an extremely high figure, higher even than in the USA.

In other words, more people took them, and they took more of them on average - adding up to a threefold increase in antidepressants actually sold. The increase was seen across men and women of all ages and social classes.
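A quick back-of-the-envelope check shows how the two trends multiply together (a sketch using the headline figures above; the per-user multiplier is just the implied ratio, not a number from the paper):

```python
# Rough consistency check on the headline numbers (illustrative only):
# the user share rose from 8% to 13%, and total dispensing roughly tripled.
users_1995 = 0.08
users_2007 = 0.13
total_growth = 3.0  # overall antidepressant volume, approximate

user_growth = users_2007 / users_1995          # ~1.6x more users
per_user_growth = total_growth / user_growth   # ~1.85x more per user

print(user_growth, per_user_growth)
```

So roughly 60% more users, each getting nearly twice as many prescriptions, multiplies out to about threefold overall.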

There's no good evidence of an increase in mental illness in Britain in this period, by the way.

But why did the 2009 paper report no change in antidepressant users, while this one did? It could be that the increase was localized to the Tayside area. Another possibility is that there was an increase nationwide, but it wasn't about people with depression.

The 2009 study only looked at people with a diagnosis of depression. Yet modern antidepressants are widely used for other things as well - like anxiety, insomnia, pain, premature ejaculation. Maybe this non-depression-based use of antidepressants is what's on the rise.

Lockhart, P., & Guthrie, B. (2011). Trends in primary care antidepressant prescribing 1995–2007. British Journal of General Practice.


Sunday, September 11, 2011

Neuroscience Fails Stats 101?

According to a new paper, a full half of neuroscience papers that try to do a (very simple) statistical comparison are getting it wrong: Erroneous analyses of interactions in neuroscience: a problem of significance.

Here's the problem. Suppose you want to know whether a certain 'treatment' has an effect on a certain variable. The treatment could be a drug, an environmental change, a genetic variant, whatever. The targets could be animals, humans, brain cells, or anything else.

So you give the treatment to some targets and a control treatment to others. You measure the outcome variable. You run a t-test to see whether the effect is too large to be plausibly explained by chance. You find that it is significant.

That's fine. Then you try a different treatment, and it doesn't cause a significant effect against the control. Does that mean the first treatment was more powerful than the second?

No. It just doesn't. The only way to find that out would be to compare the two treatments directly - and that would be very easy to do, because you have all the data to hand. If you just compare the two treatments to control you might end up with this scenario:

Both treatments are very similar, but one (B) is slightly better, so it's significantly different from control while A isn't. But they're basically the same; it's probably just a fluke that B did slightly better than A. If you compared A and B directly, you'd find they were not significantly different.
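The scenario is easy to reproduce with made-up summary statistics (the means, SD and group sizes below are hypothetical, and the critical value is the approximate two-tailed 5% cutoff for df = 38):

```python
import math

def t_stat(mean1, mean2, sd, n):
    # Two-sample t statistic for equal SDs and equal group sizes
    # (pooled standard error = sd * sqrt(2/n)).
    return (mean1 - mean2) / (sd * math.sqrt(2.0 / n))

# Hypothetical data: a control group and two very similar treatments,
# SD = 1.0 in every group, n = 20 per group.
n, sd = 20, 1.0
control, treat_a, treat_b = 0.0, 0.55, 0.70

CRIT = 2.02  # approximate two-tailed 5% critical t for df = 38

t_a = t_stat(treat_a, control, sd, n)    # ~1.74: A vs control, not significant
t_b = t_stat(treat_b, control, sd, n)    # ~2.21: B vs control, significant
t_ab = t_stat(treat_b, treat_a, sd, n)   # ~0.47: B vs A, not significant

print(abs(t_a) > CRIT, abs(t_b) > CRIT, abs(t_ab) > CRIT)
```

B "beats" control while A doesn't, yet B doesn't differ from A; concluding that B works better than A is exactly the error the paper describes.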

An analogy: Passing a significance test is like winning a prize. You can only do it if you're much better than the average. But that doesn't mean you're much better than everyone who didn't win the prize, because some of them will have almost been good enough.

Usain Bolt is the fastest man in the world (when he's not false-starting himself out of races). Much faster than me. But he's not much faster than the second fastest man in the world.

Nieuwenhuis, S., Forstmann, B.U., & Wagenmakers, E.J. (2011). Erroneous analyses of interactions in neuroscience: a problem of significance. Nature Neuroscience, 14(9), 1105-1107. PMID: 21878926


Thursday, September 1, 2011

Men, Women and Spatial Intelligence

Do men and women differ in their cognitive capacities? It's been a popular topic of conversation since as far back as we have records of what people were talking about.


While it's now (almost) generally accepted that men and women are at most only very slightly different in average IQ, there are still a couple of lines of evidence in favor of a gender difference.

First, there's the idea that men are more variable in their intelligence, so there are more very smart men, and also more very stupid ones. This averages out so the mean is the same.

Second, there's the theory that men are on average better at some things, notably "spatial" stuff involving the ability to mentally process shapes, patterns and images, while women are better at social, emotional and perhaps verbal tasks. Again, this averages out overall.

According to proponents, these differences explain why men continue to dominate the upper echelons of things like mathematics, physics, and chess. These all tap spatial processing and since men are more variable, there'll be more extremely high achievers - Nobel Prizes, grandmasters. (There are also presumably more men who are rubbish at these things, but we don't notice them.)
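The variability argument is purely statistical, and a toy calculation (my own illustrative numbers, not from any study) shows how a slightly wider spread inflates the tails even when the means are identical:

```python
import math

def tail_above(threshold, mean, sd):
    # P(X > threshold) for a normal distribution, via the error function.
    z = (threshold - mean) / sd
    return 0.5 * (1.0 - math.erf(z / math.sqrt(2)))

cutoff = 2.5  # an "elite" cutoff, 2.5 SDs above the shared mean

# Two groups with the SAME mean but a 10% difference in spread.
p_narrow = tail_above(cutoff, mean=0.0, sd=1.0)
p_wide = tail_above(cutoff, mean=0.0, sd=1.1)

ratio = p_wide / p_narrow
print(ratio)  # ~1.85: the wider group is nearly twice as common above the cutoff
```

The same logic applies at the bottom tail, which is why the theory predicts more men at both extremes.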

The male spatial advantage has been reported in many parts of the world, but is it "innate", something to do with the male brain? A new PNAS study says - probably not, it's to do with culture. But I'm not convinced.

The authors went to India and studied two tribes, the Khasi and the Karbi. Both live right next to each other in the hills of Northeastern India and, genetically, they're closely related. Culturally, though, the Karbi are patrilineal - property and status are passed down from father to son, with women owning no land of their own. The Khasi are matrilineal, with men forbidden to own land. Moreover, Khasi women get just as much education as the men, while Karbi women get much less.


The authors took about 1,200 people from eight villages - four per culture - and had them solve a jigsaw puzzle. The quicker you do it, the better your spatial ability. The results (charted in the original post; I added the gender-stereotypical colours) were as follows.

In the patrilineal group, women did substantially worse on average (remember that more time means worse). In the matrilineal society, they performed as well as men. Well, a tiny bit worse, but it wasn't significant. Differences in education explained some of the effect, but only a small part of it.

OK.

This was a large study, and the results are statistically very strong. However, there's a curious result that the authors don't discuss in the paper - the matrilineal group just did much better overall. Looking at the men, they were 10 seconds faster in the matrilineal culture. That's nearly as big as the gender difference in the patrilineal group (15 seconds)!

The individual variability was also much higher in the patrilineal society, for both genders.

Now, maybe, this is a real effect. Maybe being in a patrilineal society makes everyone less spatially aware, not just women; that seems a bit of a stretch, though.

There's also the problem that this study essentially has only two data points. One society is matrilineal and has a small gender difference in visuospatial performance; one is patrilineal and has a large difference. That's just not enough data to conclude that the two things are correlated, let alone causally related; you would need to study many societies to do that.

Personally, I have no idea what drives the difference, but this study is a reminder of how difficult the question is.

Hoffman, M., Gneezy, U., & List, J.A. (2011). Nurture affects gender differences in spatial abilities. Proceedings of the National Academy of Sciences of the United States of America. PMID: 21876159


Sunday, August 21, 2011

Is Sleep Brain Defragmentation?

After a period of heavy use, hard disks tend to get 'fragmented'. Data gets written all over random parts of the disk, and it gets inefficient to keep track of it all.

That's why you need to run a defragmentation program occasionally. Ideally, you do this overnight, while you're asleep, so it doesn't stop you from using the computer.



A new paper from some Stanford neuroscientists argues that the function of sleep is to reorganize neural connections - a bit like a disk defrag for the brain - although it's also a bit like compressing files to make more room, and a bit like a system reset: Synaptic plasticity in sleep: learning, homeostasis and disease



The basic idea is simple. While you're awake, you're having experiences, and your brain is forming memories. Memory formation involves a process called long-term potentiation (LTP) which is essentially the strengthening of synaptic connections between nerve cells.



Yet if LTP is strengthening synapses, and we're learning all our lives, wouldn't the synapses eventually hit a limit? Couldn't they max out, so that they could never get any stronger?



Worse, the synapses that strengthen during memory are primarily glutamate synapses - and these are dangerous. Glutamate is a common neurotransmitter, and it's even a flavouring, but it's also a toxin.



Too much glutamate damages the very cells that receive the messages. Rather like how sound is useful for communication, but stand next to a pneumatic drill for an hour, and you'll go deaf.



So, if our brains were constantly forming stronger glutamate synapses, we might eventually run into serious problems. This is why we sleep, according to the new paper. Indeed, sleep deprivation is harmful to health, and this theory would explain why.

The authors argue that during deep, dreamless slow-wave sleep (SWS), the brain is essentially removing the "extra" synaptic strength formed during the previous day. But it does so in a way that preserves the memories. A bit like how defragmentation reorganizes the hard disk to increase efficiency, without losing data.



One possible mechanism is 'synaptic scaling'. When some of the inputs onto a given cell become stronger, all of the synapses on that cell could weaken. This would preserve the relative strength of the different inputs while keeping the total inputs constant. It's known that synaptic scaling happens in the brain, although it's not clear whether it has anything to do with sleep.
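Multiplicative synaptic scaling is easy to sketch in toy form (a bare-bones illustration of the principle, not a model from the paper):

```python
# Toy synaptic scaling: after learning strengthens some inputs, every
# synapse on the cell is scaled down by the same factor, so relative
# strengths survive but the total input returns to baseline.

baseline = [1.0, 1.0, 1.0, 1.0]    # synaptic weights before learning
after_ltp = [1.0, 1.8, 1.0, 1.4]   # two inputs potentiated during waking

factor = sum(baseline) / sum(after_ltp)  # shrink total back to baseline
scaled = [w * factor for w in after_ltp]

print(round(sum(scaled), 6))             # total input restored to 4.0
print(round(scaled[1] / scaled[0], 6))   # relative strength preserved: 1.8
```

Because every weight is multiplied by the same factor, the ratios that encode the memory are untouched even as the overall synaptic load is reset.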



There are other theories of the restorative function of sleep, but this one seems pretty plausible. It stands in contrast to the idea that sleep is purely a form of inactivity designed to save energy, rather than being important in itself.



What this paper doesn't explain, and doesn't try to, is dreaming (REM sleep), which is very different from slow-wave sleep. REM is not required for life, so long as you get SWS. Some animals don't have REM at all, but they all have SWS, although in some species only one side of the brain sleeps at a time.



So it makes sense, but what's the evidence? There's quite a bit, but it all comes from very simple animals, like flies and fish.



The paper's figures show that, in various parts of the fruit fly brain, measures of synaptic strength are increased in flies that have been awake for some time, compared with recently rested ones. In general, synapses strengthen during waking and then return to baseline during sleep.



There's similar evidence from fish. But the authors admit that no-one has yet shown that the same is true of any mammals - let alone humans.



I'd say that this is important, because the fly brain is literally a million times smaller than ours. Synaptic overgrowth could be a more serious problem for a fly because they just have fewer neurons to play with. Sleep may have evolved to prune extra connections in primitive brains, and then shifted to playing a very different role in ours.



Wang, G., Grone, B., Colas, D., Appelbaum, L., & Mourrain, P. (2011). Synaptic plasticity in sleep: learning, homeostasis and disease. Trends in Neurosciences. PMID: 21840068


Friday, August 19, 2011

The Ethics of Forgetfulness Drugs

Drugs that modify or erase memories could soon be possible. We shouldn't rush to judge them unethical, says a Nature opinion piece by Adam Kolber, of the Neuroethics & Law Blog.



The idea of a pill that could make you forget something, or that could modify the emotional charge of a past experience, does seem rather disturbing.



Yet experiments on animals have gone a long way toward revealing the molecular mechanisms behind the formation and maintenance of memory traces. Much of the early work focussed on dangerously toxic drugs, but recently more targeted approaches have appeared.



Kolber argues that we should not shy away from research in this area or brand the whole idea unethical. Rather we should consider the costs and benefits on a case-by-case basis.

The fears about pharmaceutical memory manipulation are overblown. Thoughtful regulation may some day be appropriate but excessive hand-wringing now over the ethics of tampering with memory could stall research into preventing post-traumatic stress in millions of people. Delay could also hinder people who are already debilitated by harrowing memories from being offered the best hope yet of reclaiming their lives.
He says that

Given the close connection between memory and a sense of self, some bioethicists...worry that giving people too much power to alter their life stories could ultimately weaken their sense of identity and make their lives less genuine.



These arguments are not persuasive. Some memories, such as those of rescue workers who clean up scenes of mass destruction, may have no redeeming value. Drugs may speed up the healing process more effectively than counselling, arguably making patients more true to themselves than they would be if a traumatic experience were to dominate their lives.
This is a complex issue. I can see his point, although I'm not sure the rescue worker example is the best one. A rescue worker, at least a professional one, has chosen to do that kind of work. The experiences that are part of that job are ones they decided to have - or at least that they knew were a realistic possibility - and that may be an expression of their identity.



The argument is perhaps more convincing in the case of someone who, quite unexpectedly, suffers an out-of-the-blue trauma. In this case, the trauma has nothing to do with their lives; if it interferes with their ability to function, it might "stop them from being themselves".



Kolber ends by quoting a fascinating story from Time magazine in 2007, which I didn't catch at the time:

Take a scenario recounted by a US doctor in 2007 (ref. 9). The doctor had biopsied a suspected cancer patient and sent a tissue sample to a pathologist while the woman was still in the operating room. Thinking she was completely sedated, the pathologist announced a bleak prognosis over the intercom.



The patient, who had received only local anaesthesia, heard the news and began to shriek, “Oh my God. My kids!” An anaesthesiologist standing by quickly injected her with propofol, a sedative that causes some people to forget what happened a few minutes before they were injected.



When the woman woke up, she had no memory of hearing her prognosis.
Kolber, A. (2011). Neuroethics: Give memory-altering drugs a chance. Nature, 476(7360), 275-276. PMID: 21850084

The Ethics of Forgetfulness Drugs

Drugs that could modify or erase memories could soon be possible. We shouldn't rush to judge them unethical, says a Nature opinion piece by Adam Kolber, of the Neuroethics & Law Blog.



The idea of a pill that could make you forget something, or that could modify the emotional charge of a past experience, does seem rather disturbing.



Yet experiments on animals have gone a long to revealing the molecular mechanisms behind the formation and maintanence of memory traces. Much of the early work focussed on dangerously toxic drugs but recently more targeted approaches have appeared.



Kolber argues that we should not shy away from research in this area or brand the whole idea unethical. Rather we should consider the costs and benefits on a case-by-case basis.

The fears about pharmaceutical memory manipulation are overblown. Thoughtful regulation may some day be appropriate but excessive hand-wringing now over the ethics of tampering with memory could stall research into preventing post-traumatic stress in millions of people. Delay could also hinder people who are already debilitated by harrowing memories from being offered the best hope yet of reclaiming their lives.
He says that

Given the close connection between memory and a sense of self, some bioethicists...worry that giving people too much power to alter their life stories could ultimately weaken their sense of identity and make their lives less genuine.



These arguments are not persuasive. Some memories, such as those of rescue workers who clean up scenes of mass destruction, may have no redeeming value. Drugs may speed up the healing process more effectively than counselling, arguably making patients more true to themselves than they would be if a traumatic experience were to dominate their lives.
This is a complex issue. I can see his point, although I'm not sure the rescue worker example is the best one. A rescue worker, at least a professional one, has chosen to do that kind of work. The experiences that are part of that job are ones they decided to have - or at least that they knew were a realistic possibility - and that may be an expression of their identity.



The argument is perhaps more convincing in the case of someone who, quite unexpectedly, suffers an out-of-the-blue trauma. In this case, the trauma has nothing to do with their lives; if it interferes with their ability to function, it might "stop them from being themselves".



Kolber ends by quoting a fascinating story from Time magazine in 2007, which I didn't catch at the time:

Take a scenario recounted by a US doctor in 2007 (ref. 9). The doctor had biopsied a suspected cancer patient and sent a tissue sample to a pathologist while the woman was still in the operating room. Thinking she was completely sedated, the pathologist announced a bleak prognosis over the intercom.



The patient, who had received only local anaesthesia, heard the news and began to shriek, “Oh my God. My kids!” An anaesthesiologist standing by quickly injected her with propofol, a sedative that causes some people to forget what happened a few minutes before they were injected.



When the woman woke up, she had no memory of hearing her prognosis.
Kolber A (2011). Neuroethics: Give memory-altering drugs a chance. Nature, 476 (7360), 275-6. PMID: 21850084

Wednesday, August 17, 2011

Pharmaceutical Company Threatens Blogger

Boiron, a multinational pharmaceutical company, have threatened an Italian blogger with legal action, the BMJ reports.



Many people are concerned when big pharmaceutical companies do this kind of thing. So I don't think we should make any exception merely because Boiron's pharmaceuticals happen to be homeopathic ones.



Samuel Riva, who blogs (in Italian) at blogzero.it, put up some articles critical of homeopathy:

which included pictures of Boiron’s blockbuster homoeopathic product Oscillococcinum, marketed as a remedy against flu symptoms. The pictures were accompanied by captions, which joked about the total absence of any active molecules in homoeopathic preparations
Boiron wrote to Riva's internet provider threatening legal action if the offending references to Boiron weren't taken down. They also wanted the provider to lock Riva out of his blog, the BMJ says. In response, Riva removed the references to Boiron, including the pictures and captions, but kept the posts on homeopathy in general.



Hmmm.



Above you can see a new picture I made of a Boiron product, with some captions you may find interesting. I've made sure to limit these to quotes from Wikipedia, and from Boiron USA's own website, and some simple mathematical calculations.
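For the curious, the arithmetic behind those captions is easy to reproduce. Here is a minimal Python sketch: the 200C dilution figure comes from the product's own labelling (200 successive 1:100 dilutions), while the starting quantity of one mole is my assumption, purely for illustration.

```python
from math import log10

# Oscillococcinum is labelled as a 200C preparation: 200 successive
# 1:100 dilutions, a total dilution factor of 100**200 = 1e400.
avogadro = 6.022e23                     # molecules in one mole of starting material
dilution_steps = 200
log_dilution = dilution_steps * log10(100)   # = 400

# Expected number of original molecules remaining, on a log10 scale
# (the raw number underflows an ordinary float, hence the logs):
log_expected = log10(avogadro) - log_dilution
print(f"~10^{log_expected:.0f} molecules expected")   # prints ~10^-376 molecules expected
```

In other words, even starting from a full mole of "active ingredient", the expected number of surviving molecules in the final product is around 10^-376: effectively zero.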



Beyond that, I make no comment whatsoever.



Turone F (2011). Homoeopathy multinational Boiron threatens amateur Italian blogger. BMJ (Clinical research ed.), 343. PMID: 21840920


Monday, August 15, 2011

A Ghostwriter Speaks

PLoS Medicine offers the confessions of a former medical ghostwriter: Being the Ghost in the Machine.





The article (which is open access and short, so well worth a read) explains how Linda Logdberg became a medical writer; what excited her about the job; what she actually did; and what made her eventually give it up.



Ghostwriting of course has a bad press at the moment, and it's recently been banned by some leading research centres. Ghostwriting certainly is concerning, because of what it implies about the process leading up to publication.



However, it doesn't create bad science. A bad paper is bad because of what it says, not because of who (ghost)wrote it. Real scientists can write bad papers without a ghostwriter's help.



When pharmaceutical companies pay a ghostwriter, they are not doing this to get access to special dark arts that real scientists are innocent of. As far as I can see, it's just more efficient to use a specialist writer to commit your scientific sins, when you're committing them all the time.



Rather like every evil sorcerer has an apprentice to do the day-to-day work of sacrificing animals and mixing potions.



Logdberg says:

My career came to an end over a job involving revising a manuscript supporting the use of a drug for attention deficit-hyperactivity disorder (ADHD), with a duration of action that fell between that of shorter- and longer-acting formulations.



However, I have two children with ADHD, and I failed to see the benefit of a drug that would wear off right at suppertime, rather than a few hours before or a few hours after. Suppertime is a time in ADHD households when tempers and homework arguments are often at their worst.



...Attempts to discuss my misgivings with the [medical] contact met with the curt admonition to ‘‘just write it.’’ But perhaps because this particular disorder was so close to home, I was unwilling to turn this ugly duckling of a ‘‘me-too’’ drug into a marketable swan.
Many scientists will recall being in that kind of situation, albeit in a different context.



When writing a grant application, for example, you are almost literally trying to sell your proposed research to the awarding committee, on several levels. You need to sell the importance of the scientific question; the likely practical benefits of the research; the chance of success using your methods; what makes you the right person to do this work, and so on.



Writing a paper is much the same, although in this case you're selling research you've already done, and the data you collected.



Turning ugly ducklings into fundable, or publishable, swans, is part and parcel of modern science. Of course, the ducklings are not always as ugly as in the case Logdberg describes, but they are rarely as beautiful as they eventually end up.



Logdberg, L. (2011). Being the Ghost in the Machine: A Medical Ghostwriter's Personal View. PLoS Medicine, 8 (8). DOI: 10.1371/journal.pmed.1001071


Thursday, August 11, 2011

Do We Need Placebos?

A news feature in Nature asks whether placebo controls are always a good idea: Why Fake It?



The piece looks at experimental neurosurgical treatments for Parkinson's, such as "Spheramine". This consists of cultured human cells, which are implanted directly into the brain of the sufferer. The idea is that the cells will grow and help produce dopamine, which is deficient in Parkinson's.



Peggy Willocks, a 44-year-old teacher, took part in a trial of the surgery in 2000. She says it helped stave off the symptoms for years, but the development of Spheramine was axed in 2008 after a controlled trial found it didn't work any better than a placebo.



The placebo was "sham surgery", i.e. putting the patient through a full surgical procedure, including making holes in their skull, but without doing anything to their brain.



It's cheap and easy to do a placebo controlled trial of a drug - all you need is a sugar pill. But with neurosurgery, it's clearly a lot more involved. A placebo has to be believable. Convincing sham surgery is expensive, time-consuming, and it has real risks, albeit small ones.



Is it ethical to put patients through that?



That, I think, can only be decided on a trial-by-trial basis. It depends on the likely benefits of the treatment, and whether the trial is scientifically sound. Obviously, it'd be wrong to do sham surgery as part of a flawed trial that won't tell us anything useful.



The Nature article, however, goes further than this, and suggests that placebo controlled trials may be unsuitable for testing these kinds of treatments, failing to detect a real benefit in some patients:

There are hints from some of the failed phase II trials that patients followed up beyond study endpoints might tell a more positive story. Some say, therefore, that sham controls are sinking the prospects of valuable drugs.



Anders Björklund, a neuroscientist at Lund University in Sweden who is collaborating with [Roger Barker of Cambridge], says that sham surgery can lead researchers to throw out a strategy prematurely if the trial fails because of technical or methodological glitches rather than a true lack of efficacy.
A patient advocate agrees:

According to Perry Cohen, who leads a network of patient activists called the Parkinson Pipeline Project, that’s exactly what is happening. He had always questioned the need for sham surgery, he says, but after the string of phase II failures, “We started saying, ‘Hey, this is a problem. These trials failed, but we know they are working for some people.’”
...Cohen [says] that patients have different priorities and that researchers must take these into account. Researchers use placebo controls to weed out false positives. But for patients, the real ogre is the false negatives — which can sink a therapy before it has been optimized.
I'm not sure about this. If I had Parkinson's, I would certainly hate to miss out on the genuine cure because a trial had failed to recognize that it worked. But equally, I would not be happy to be given a rubbish treatment that would have failed a placebo controlled trial, but never got one, because of arguments like this.



Placebo controlled trials can fail to detect benefits if they are too short, too small, methodologically flawed, or whatever. Certainly, a trial can be placebo controlled, and still crap. But the answer is surely to do better trials, not no trials.



It may well be that we shouldn't rush to do placebo controlled trials until later in the development process, when the technique has been properly refined. But the history of medicine is littered with treatments that "we know work for some people" - that didn't.



Katsnelson, A. (2011). Experimental therapies for Parkinson's disease: Why fake it? Nature, 476 (7359), 142-144. DOI: 10.1038/476142a


Monday, August 8, 2011

So Apparently I'm Bipolar

According to a new paper, yours truly is bipolar.





I've written before of my experience of depression, and the fact that I take antidepressants, but I've never been diagnosed with bipolar.



I've taken a few drugs in my time. On certain dopamine-based drugs I got euphoric, filled with energy, talkative, confident, with no need for sleep, and a boundless desire to do stuff, which is textbook hypomania. So I think I know what it feels like, and I can confidently say that it has never happened to me out of the blue.



On antidepressants, I have had some mild experiences of this type. Ironically, the closest I've come to it was when I quit an SSRI antidepressant. I've also experienced periods of irritability and agitation on antidepressants. Either way, that's antidepressants. Bipolar is when you get high on your own supply of neurotransmitters.



Well, it used to be. Jules Angst et al. have proposed new, broader criteria for "bipolarity" in depression. They say that manic symptoms in response to antidepressants do count, exactly like out-of-the-blue mania.



What's more, under the new "Bipolar Specifier" criteria, there's no minimum duration. Under existing criteria the symptoms have to last 4 or 7 days, depending on severity. Under the new regime if you've ever been irritable, high, agitated or hyperactive, on antidepressants or not, you meet "Bipolar Specifier" criteria, so long as it was marked enough that someone else noticed it.



All you need is:

an episode of elevated mood, an episode of irritable mood, or an episode of increased activity with at least 3 of the symptoms listed under Criterion B of the DSM-IV-TR associated with at least 1 of the 3 following consequences: (1) unequivocal and observable change in functioning uncharacteristic of the person’s usual behavior, (2) marked impairment in social or occupational functioning observable by others, or (3) requiring hospitalization or outpatient treatment.

The bipolar net just got bigger. And they caught me in it. Me and 47% of depressed people in their study. They recruited 509 psychiatrists from around the world, and got each of them to assess between 10 and 20 consecutive adult depressed patients who were referred to them for evaluation or treatment. A total of 5635 patients were included.



Only 16% met existing DSM-IV criteria for bipolar disorder, so the new system, at 47%, identified an "extra" 31 percentage points - roughly trebling the number of bipolar cases.
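In raw numbers, here's my back-of-envelope arithmetic from the percentages quoted above; the patient counts are my estimates, not figures reported in the paper:

```python
# Rough arithmetic on the BRIDGE study figures: 5635 patients total,
# 16% bipolar under DSM-IV, 47% under the new "Bipolar Specifier".
n_patients = 5635
dsm_iv_rate = 0.16
specifier_rate = 0.47

print(round(n_patients * dsm_iv_rate))         # 902 patients bipolar under DSM-IV
print(round(n_patients * specifier_rate))      # 2648 under the new specifier
print(round(specifier_rate / dsm_iv_rate, 1))  # 2.9 - hence "trebling"
```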



A cynic would say that this is a breathtaking piece of psychiatric marketing. You give people antidepressants, then you diagnose them with bipolar on the basis of their reaction to those drugs, thus justifying selling them yet more drugs.



The cynic would not be surprised to learn that this study was sponsored by pharmaceutical company Sanofi.

All investigators recruited received fees, on a per patient basis, from sanofi-aventis in recognition of their participation in the study....The sponsor of this study (sanofi-aventis) was involved in the study design, conduct, monitoring, data analysis, and preparation of the report.
In fairness, the authors do show that patients meeting their criteria tend to have characteristics typical of bipolar people. And they show that their system is at least as good as DSM-IV at picking out these cases:



For example, DSM-IV bipolar patients had a younger age of onset than DSM-IV depressed ones. "Bipolar specifier" patients did too, compared to the 53% who didn't meet the criteria. Same for a family history of manic symptoms, multiple episodes, and shorter episodes. All of those are pretty well established correlates of bipolar disorder.



That's fine, and the results are better than I expected when I picked up this paper. But all this shows us is that the bipolar specifier was no worse than the DSM-IV criteria as applied in this study.



It doesn't tell us whether either was any good.



DSM-IV criteria were used in a mechanical cookbook fashion - symptoms were assessed by the psychiatrist, written down, sent back to the study authors, who then diagnosed them if they ticked enough boxes. Is that a good approach? We don't know.



Most importantly, we have no idea whether these people would do better being treated as bipolar rather than as depressed. The difference being that bipolar people get mood stabilizers. Maybe these people would benefit from mood stabilizers, maybe not. Existing literature on mood stabilizers in bipolar people can't be assumed to generalize to these 47%.



In the discussion, the authors argue that antidepressants are not much good in bipolar people, whereas mood stabilizers are. Fun fact: Sanofi make many of the most popular formulations of valproic acid/valproate, a big-selling mood stabilizer.



I think that is no coincidence. Maybe that sounds crazy, but hey, what do you expect? I'm bipolar.



Angst J, Azorin JM, Bowden CL, Perugi G, Vieta E, Gamma A, Young AH, & for the BRIDGE Study Group (2011). Prevalence and Characteristics of Undiagnosed Bipolar Disorders in Patients With a Major Depressive Episode: The BRIDGE Study. Archives of General Psychiatry, 68 (8), 791-798. PMID: 21810644
