Showing posts with label history. Show all posts

Wednesday, May 5, 2010

This Season's Hottest Brain Regions

Are you a budding neuroscientist who's not sure which part of the brain to specialize in? Or perhaps you're a purveyor of media neuro-nonsense who's wondering which area to namedrop as being the key to sex / intelligence / politics next?

Well, wonder no more, because Neuroskeptic can now exclusively reveal which parts of the brain are hot, and which are not, right now (thanks to the high-tech method of searching PubMed and counting the papers published referring to eight major brain regions, each year from 1985 to 2009).

The hippocampus stands out as an extremely hot region with both a huge number of papers and rapid growth over 25 years. So it's probably a good place to build a career... but on the other hand, the market may be saturated already, and it shows some signs of flatlining in the past few years. The cerebellum has long been popular, but growth has been extremely slow lately.

To better highlight the growth curves, here's the same data, normalized to the year 2000 (so "2" means twice as many papers as in 2000, etc.)
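The normalization step is simple enough to sketch in code. Here's a minimal Python version; the per-year counts below are made-up illustrative numbers, not the actual PubMed figures:

```python
def normalize_to_baseline(counts_by_year, baseline_year=2000):
    """Scale yearly paper counts so the baseline year equals 1.0."""
    baseline = counts_by_year[baseline_year]
    return {year: n / baseline for year, n in counts_by_year.items()}

# Hypothetical counts for one region (not real PubMed data)
hippocampus = {1985: 400, 2000: 2000, 2009: 3000}
normalized = normalize_to_baseline(hippocampus)
# normalized[2009] is 1.5, i.e. 50% more papers than in 2000
```

Dividing every year by the baseline year is what puts regions with very different absolute paper counts onto the same growth curve.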

This shows major differences. The orbitofrontal cortex and cingulate cortex are both undergoing massive growth at the moment. The amygdala and parietal cortex are pretty hot too. By contrast, the cerebellum and the caudate are stuck in the scientific doldrums.

Why are the patterns so different for different parts of the brain? That's a big question which hopefully will get discussed in the Comments. I suspect, however, that the recent rise of the cingulate cortex and the orbitofrontal cortex has much to do with the rise of fMRI (i.e. within the last 10 years, mostly), which allows them to be easily studied in humans for the first time.

Both of these areas are quite difficult to study with older technologies like EEG, because of their location within the head. That said, the same problem applies to plenty of other regions, but the orbitofrontal and cingulate cortex are also difficult to study in lab rats and mice, because it's not clear which parts of the rodent brain map onto which parts of the human brain in these regions. By contrast, things like the cerebellum and caudate nucleus have exact rodent equivalents, perhaps making them more attractive to early researchers.

Saturday, March 20, 2010

Absinthe Fact and Fiction

Absinthe is a spirit. It's very strong, and very green. But is it something more?

I used to think so, until I came across this paper taking a skeptical look at the history and science of the drink: Padosch et al's "Absinthism: a fictitious 19th century syndrome with present impact".

Absinthe is prepared by crushing and dissolving the herb wormwood in unflavoured neutral alcohol and then distilling the result; other herbs and spices are added later for taste and colour.

It became extremely popular in the late 19th century, especially in France, but it developed a reputation as a dangerous and hallucinogenic drug. Overuse was said to cause insanity, "absinthism", much worse than regular alcoholism. Eventually, absinthe was banned in the USA and most but not all European countries.

Much of the concern over absinthe came from animal experiments. Wormwood oil was found to cause hyperactivity and seizures in cats and rodents, whereas normal alcohol just made them drunk. But, Padosch et al explain, the relevance of these experiments to drinkers is unclear, because they involved high doses of pure wormwood extract, whereas absinthe is much more dilute. The fact that authors at the time used the word absinthe to refer to both the drink and the pure extract added to the confusion.

It's now known that wormwood, or at least some varieties of it, contains thujone, which can indeed cause seizures, and death, because it is a GABA-A receptor antagonist. Until a few years ago it was thought that old-style absinthe might have contained up to 260 mg of thujone per litre, a substantial dose.

But that was based on the assumption that all of the thujone in the wormwood ended up in the drink prepared from it. Chemical analysis of actual absinthe has repeatedly found that it contains no more than about 6 mg/L thujone. The alcohol in absinthe would kill you long before you drank enough to get any other effects. As the saying goes, "the dose makes the poison", something that is easily forgotten.
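A quick back-of-envelope calculation shows why the dose matters. The thujone concentration is from the paper, but the toxicity thresholds below are rough illustrative assumptions, not clinical values:

```python
# "The dose makes the poison", as arithmetic.
thujone_mg_per_litre = 6             # measured upper bound in real absinthe
assumed_toxic_thujone_mg = 30        # hypothetical acutely toxic thujone dose (assumption)
abv = 0.70                           # absinthe is roughly 70% alcohol by volume
assumed_lethal_ethanol_litres = 0.5  # rough lethal quantity of pure ethanol (assumption)

litres_for_thujone_dose = assumed_toxic_thujone_mg / thujone_mg_per_litre  # 5 litres of absinthe
ethanol_in_that_volume = litres_for_thujone_dose * abv                     # 3.5 litres of pure ethanol

# Under these assumptions you'd swallow roughly seven times a lethal
# quantity of ethanol before reaching the toxic thujone dose:
ratio = ethanol_in_that_volume / assumed_lethal_ethanol_litres
```

However you tweak the assumed thresholds within plausible ranges, the alcohol gets you long before the thujone does.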

As Padosch et al point out, it's possible that there are other undiscovered psychoactive compounds in absinthe, or that long-term exposure to low doses of thujone does cause "absinthism". But there is no evidence for that so far. Rather, they say, absinthism was just chronic alcoholism, and absinthe was no more or less dangerous than any other spirit.

I'm not sure why, but drinks seem to attract more than their fair share of urban myths. Amongst many others I've heard that the flakes of gold in Goldschläger cause cuts which let alcohol into your blood faster; Aftershock crystallizes in your stomach, so if you drink water the morning afterwards, you get drunk again; and that the little worm you get at the bottom of some tequilas apparently contains especially concentrated alcohol, or hallucinogens, or even cocaine maybe.

Slightly more serious is the theory that drinking different kinds of drinks instead of sticking to just one gets you drunk faster, or gives you a worse hangover, or something, especially if you do it in a certain order. Almost everyone I know believes this, although in my drinking experience it's not true. I'm not sure it's completely bogus, though, as I have heard somewhat plausible explanations, e.g. that drinking spirits alongside beer leads to a concentration of alcohol in your stomach that's optimal for absorption into the bloodstream... maybe.

Link: Not specifically related to this but The Poison Review is an excellent blog I've recently discovered all about poisons, toxins, drugs, and such fun stuff.

Padosch SA, Lachenmeier DW, & Kröner LU (2006). Absinthism: a fictitious 19th century syndrome with present impact. Substance Abuse Treatment, Prevention, and Policy, 1 (1). PMID: 16722551

Saturday, February 27, 2010

The Decline and Fall of the Cannabinoid Antagonists

Cannabinoid Receptor, Type 1 (CB1) antagonists were supposed to be the next big thing.

They're weight loss drugs, and with obesity rates rising and the diet craze showing no signs of abating, that's a large and growing market (...sorry). They worked, at least in the short term, and they were at least as effective as existing pills. They may even have had health benefits over and above promoting weight loss, such as improving blood fat and sugar levels through metabolic effects.

It all started off well. Rimonabant, manufactured by Sanofi, was the first CB1 antagonist to become available for human use: it hit the European market in 2006, as Acomplia. Four large clinical trials showed convincingly that it helped people lose weight. Rival drug companies were hard at work developing other CB1 antagonists, and inverse agonists (similar, but even more potent). The "bants" included Merck's taranabant, Pfizer's otenabant, and more.

Even more excitingly, there were indications that CB1 antagonists could do more than help people lose weight: they might also be useful in helping people quit smoking, alcohol or drugs. The animal evidence that CB1 antagonists did this was strong. Human trials were underway. Optimists saw rimonabant and related drugs as offering something unprecedented: self-control in a pill, abstinence on demand.

*

But it ended in tears, literally. Rimonabant was pulled from the European market in late 2008; it was never approved in the USA at all. After rimonabant was withdrawn, drug companies abandoned the development of other CB1 antagonists.

The problem was that they made people depressed. In several large clinical trials of rimonabant it raised the risk of suffering depression and other psychiatric problems, like anxiety and irritability, compared to placebo. The reported rates of these symptoms ranged from a few % up to over 40% depending upon the population, but there have been no trials (except very small ones) in which these effects weren't seen. This means that CB1 antagonists cause depression rather more consistently than antidepressants treat it.

Merck have just released the data from a trial of taranabant: A clinical trial assessing the safety and efficacy of taranabant, a CB1R inverse agonist, in obese and overweight patients. It makes a fitting epitaph to the CB1 antagonists. They gave taranabant, at a range of doses, or placebo, to overweight people to go alongside diet and exercise to help them lose weight. The results were extremely similar to those seen with rimonabant; the drug worked:

But there were side effects. Alongside things like nausea, vomiting, and sweating, about 35% of people taking high doses of taranabant reported "psychiatric disorders". 20% of people on placebo also did, so this is not quite as bad as it first appears, but it's still striking, especially since a number of people on high doses of taranabant reported suicidal thoughts or behaviours...

Suicidal ideation was reported in three patients in the taranabant 6-mg group in year 1 and in one patient in the 4-mg group in year 2. There was one suicide attempt reported in a patient with a previous history of suicide attempts in the 6/2-mg group while the patient was receiving 2-mg, and one episode of suicidal behavior reported in a patient in the 6/2-mg group while the patient was receiving 6-mg. There were no completed suicides. The adjudication of possibly suicide-related adverse experiences during years 1 and 2 indicated an increased incidence of suicidality in the taranabant groups...
This is the kind of thing that gives drug companies nightmares, especially today, in the post-SSRI lawsuits era. This is why rimonabant was removed from the EU market in 2008 and why it was never approved in the US.

*

Safety concerns have plagued weight loss medications for decades. The problem is not that they don't work: plenty of drugs cause weight loss, at least for as long as you keep taking them. But unfortunately, there's always a 'but'.

Fenfluramine worked, but it caused heart valve defects, and was banned. Sibutramine works, but it's just been suspended from the European market due to concerns over heart disease (a different kind). Amphetamine-like stimulants such as phentermine work, but they're addictive and liable to abuse. Now that rimonabant and sibutramine are gone, the only weight-loss drug approved for use in Europe is orlistat, which seems to be safe, but has some very unpleasant side effects...

Still, CB1 antagonists have a unique mechanism of action: they block the CB1 receptor, which is activated by the cannabinoid ingredients in marijuana, and also by the brain's own cannabinoid neurotransmitters (endocannabinoids). The past five years have seen a huge amount of research showing that the CB1 receptor is involved in everything from memory and emotion to motivation, pain sensation and hormone secretion. We recently learned that there are even CB1 receptors on the tongue that regulate taste.

CB1 is able to do all this because it's found almost everywhere in the brain. To simplify, but only a little, the endocannabinoid system is a general feedback mechanism, which allows cells on the receiving end of neural transmission to "talk back" to the neuron sending them signals; if they're receiving lots of input, they tell the cell sending the signals to quiet down. In other words, endocannabinoids regulate the release of just about every other neurotransmitter. To be honest, given how important the system is in the brain, it's surprising that depression and anxiety are the biggest problems with CB1 antagonists.

For all that, we still don't know why they cause psychiatric symptoms, although a number of mechanisms have been suggested. Hopefully, someone will work this out sooner or later, since that would add an important piece to the puzzle of what goes on in the brain during depression...

Aronne, L., Tonstad, S., Moreno, M., Gantz, I., Erondu, N., Suryawanshi, S., Molony, C., Sieberts, S., Nayee, J., Meehan, A., Shapiro, D., Heymsfield, S., Kaufman, K., & Amatruda, J. (2010). A clinical trial assessing the safety and efficacy of taranabant, a CB1R inverse agonist, in obese and overweight patients: a high-dose study. International Journal of Obesity. DOI: 10.1038/ijo.2010.21

Tuesday, January 19, 2010

Yo Momma, Victorian Style

I've just finished Extraordinary Popular Delusions and the Madness of Crowds, which is something of a cult classic amongst people of an atheist or skeptical persuasion. Written by Scottish author Charles Mackay in 1841, the book details some of the bizarre things that people had believed and done over the preceding centuries.

It's best known for its chapters on outbreaks of mass irrationality, such as financial bubbles like the Tulipomania, the European witch trials, and "animal magnetism" (the sections on which include some excellent descriptions of psychosomatic illness and the placebo effect). Heavy stuff.

But my favorite bit was the charming "Popular Follies of Great Cities", which covers the spread of comedy catchphrases in 19th century London. Remember when everyone went around saying "Wasssssssssssupppppppp?" or "Doh!" or some variant of "Your mum / yo momma?" (that last one is still going on). It turns out this is nothing new.

Two hundred years ago Londoners, at least working-class ones, were fond of such phrases too. There was the question "Who are you?", which could be aimed at anyone doing or trying to do something above their station; the universal answer to any stupid or unwelcome question, "Quoz", and best of all, "Has your mother sold her mangle?" the implications of which Mackay does not discuss in detail.

Each of these were popular for a few months and then went out of fashion. Personally, I think it's time we brought some of them back into use. So - has your mother sold her mangle? I thought so.

Thursday, January 14, 2010

A Brief History of Bipolar Kids

Can children get bipolar disorder?

It depends who you ask. It's "controversial". Some say that, like schizophrenia, bipolar strikes in adolescence or after, and that pre-pubertal onset is extraordinarily rare. Others say that kids can be, and often are, bipolar, but their symptoms may differ from the ones seen in adults. You know a 20-year-old's manic when they stay up for 3 days straight writing a book about how God's chosen them to save the world. A "bipolar" 10-year-old, though, is more likely to show irritability and mood swings. Critics say that this isn't evidence of bipolar, it's evidence of... irritability and mood swings. Or, indeed, of being 10.

But what's not always appreciated is how new the concept of pediatric bipolar as a common disorder is, and how specific it is to American psychiatry. Here are a few graphs I put together to illustrate this, based on numbers of scientific publications.

First up, when did people start talking about it? Here's the number of PubMed hits for pediatric bipolar each year. As you can see, it was rarely talked about before the year 2000, after which its popularity shot up rapidly; it seems to have plateaued now, but it's hard to tell.
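For anyone who wants to reproduce this sort of count, NCBI's E-utilities API exposes PubMed search counts directly. Here's a minimal sketch of building the per-year query; the search term and year are just examples, and while the esearch endpoint and its db/term/mindate/maxdate/datetype/rettype parameters are the standard documented ones, check the current E-utilities documentation before relying on them:

```python
from urllib.parse import urlencode

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def count_url(term, year):
    """Build an esearch URL that asks for just the hit count for one year."""
    params = {
        "db": "pubmed",
        "term": term,
        "datetype": "pdat",   # filter by publication date
        "mindate": str(year),
        "maxdate": str(year),
        "rettype": "count",   # return only the count, not the article IDs
    }
    return f"{ESEARCH}?{urlencode(params)}"

url = count_url('"pediatric bipolar"', 2005)
# Fetching this URL for each year of interest gives the raw numbers
# behind a graph like the one above.
```

Looping this over 1985-2009 and over each search term is all the "high-tech method" amounts to.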

In fact, the true trend is even more dramatic, because many of the early hits were not about psychiatry at all. For example, in 1999, 5 of the 10 were nothing to do with manic-depression. One was about the growth pattern of a certain kind of bacteria (they're "bipolar", because they have two poles of growth.)

A Brief History of Bipolar Kids

Can children get bipolar disorder?

It depends who you ask. It's "controversial". Some say that, like schizophrenia, bipolar strikes in adolescence or later, and that pre-pubertal onset is extraordinarily rare. Others say that kids can be, and often are, bipolar, but that their symptoms may differ from those seen in adults. You know a 20-year-old is manic when they stay up for 3 days straight writing a book about how God has chosen them to save the world. A "bipolar" 10-year-old, though, is more likely to show irritability and mood swings. Critics say that this isn't evidence of bipolar, it's evidence of... irritability and mood swings. Or, indeed, of being 10.

But what's not always appreciated is how new the concept of pediatric bipolar as a common disorder is, and how specific it is to American psychiatry. Here are a few graphs I put together to illustrate this, based on numbers of scientific publications.

First up, when did people start talking about it? Here's the number of PubMed hits for pediatric bipolar each year. As you can see, it was rarely talked about before the year 2000, after which its popularity shot up rapidly; it seems to have plateaued now, but it's hard to tell.

In fact, the true trend is even more dramatic, because many of the early hits were not about psychiatry at all. For example, in 1999, 5 of the 10 hits had nothing to do with manic-depression. One was about the growth pattern of a certain kind of bacteria (they're "bipolar" because they have two poles of growth).

Is the post-2000 spike just a reflection of the fact that people are publishing more papers about bipolar in general? No. Here's a graph showing pediatric bipolar hits as a fraction of all "bipolar disorder" hits for that year. It's been rising for a while and it's now 5%.
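For anyone who wants to replicate these counts, here's a minimal sketch of how they could be gathered programmatically, using NCBI's E-utilities "esearch" endpoint rather than the PubMed web interface. The endpoint, and the `db`, `term`, `mindate`/`maxdate`, `datetype` and `rettype=count` parameters, are standard E-utilities conventions; the helper function name is my own, and this isn't the method actually used for the graphs above.

```python
from urllib.parse import urlencode

# Base URL for the NCBI E-utilities search endpoint.
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_count_url(term, year):
    """Build an esearch URL returning only the number of PubMed hits
    for a quoted phrase published in a given year."""
    params = {
        "db": "pubmed",
        "term": f'"{term}"',       # quoted, so it matches the exact phrase
        "mindate": str(year),
        "maxdate": str(year),
        "datetype": "pdat",        # filter on publication date
        "rettype": "count",        # return just the total hit count
    }
    return EUTILS + "?" + urlencode(params)

# Fetching this URL for each year, for both "pediatric bipolar" and
# "bipolar disorder", and dividing one count by the other, would give
# the fraction described above.
url = pubmed_count_url("pediatric bipolar", 2005)
```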

Where are these publications coming from? America. Taking the first two pages of PubMed hits for pediatric bipolar, and excluding the non-psychiatric ones, 30 are from the USA and just 4 are from elsewhere. For "bipolar disorder", by contrast, it's 13 from the USA vs. 25 from elsewhere. (This is in terms of the affiliation listed for the primary author of each study.)

What about paediatric bipolar, the British spelling? It's almost unheard of. There are only 53 PubMed hits in total, as against 564 for pediatric bipolar. Of the first 20 hits, 9 are non-psychiatric, and 3 are from an Australian journal, criticizing the American concept of pediatric bipolar!

It's remarkable that the monthly British Journal of Psychiatry has never published a paper about "pediatric bipolar" or "paediatric bipolar": if you search their archives you get just 5 hits, and they are all in the references sections, not the papers themselves. The monthly American Journal of Psychiatry has published 37 papers mentioning "pediatric bipolar", of which 25 are not just in the references, and 10 are in the titles.

So, at least in terms of the literature, pediatric bipolar is overwhelmingly a 21st century American phenomenon. It barely existed before 2000, and it barely exists elsewhere. This corresponds to what some non-American psychiatrists have observed. In The Paediatric Bipolar Hypothesis: The View from Australia and New Zealand, Australian psychiatrists Peter Parry, Gareth Furber and Stephen Allison point out that
Traditionally, bipolar affective disorder has been considered rare in children and uncommon in adolescence ... However paediatric bipolar disorder (PBD) has become a topical issue in child and adolescent psychiatry over the last decade, driven by research in the USA. The proponents of PBD are concerned that the traditional approach to bipolar disorder in children and adolescents is missing a large number of distressed children, whose course of bipolar illness could be ameliorated or attenuated by early treatment.
Pediatric bipolar has certainly become more common as a diagnosis in the USA recently - a 40-fold increase in 12 years up to 2003:
The number of visits to primary care physicians in the under 20 age group where the diagnosis was bipolar disorder increased from 0.01% in 1994/5 to 0.44% in 2002/3
Whereas elsewhere, it's still regarded as incredibly uncommon...
Soutullo et al. reported that none of the 2,500 children 10 years or younger referred to the Royal Manchester Children's Hospital ... had a diagnosis of mania or bipolar disorder ... A more recent German survey revealed German child and adolescent psychiatrists were largely holding to a traditional stance as only 8% claimed to have diagnosed a pre-pubertal child with bipolar disorder.
Parry, Furber and Allison then present the results of a survey of 199 child and adolescent psychiatrists in Australia and New Zealand.
The majority of participants (53.4%) said they had never seen a case of pre-pubertal bipolar disorder, whilst a further 28.5% estimated they'd seen only 1 or 2 cases. Only 35 participants (18.2%) estimated having seen 3 or more cases of pre-pubertal bipolar disorder. ... Most participants (83.1%) were of the opinion that bipolar disorder in pre-pubertal children was either "very rare (less than 0.01%)", "rare (less than 0.1%)", or "cannot be diagnosed in this age group".
Of course this is just a survey, but the results are striking.

Peter Parry reports, as a conflict of interest, that he's a member of Healthy Skepticism, which is, in its own words, in the business of "Improving health by reducing harm from misleading drug promotion". I'm sure neither he nor I need to spell out why drug companies might conceivably have an interest in promoting the concept of pediatric bipolar disorder, given the wide range of drugs available for bipolar adults...

Parry, P., Furber, G., & Allison, S. (2009). The Paediatric Bipolar Hypothesis: The View from Australia and New Zealand. Child and Adolescent Mental Health, 14(3), 140-147. DOI: 10.1111/j.1475-3588.2008.00505.x

Wednesday, January 13, 2010

The Kids Are Alright

You may have heard about the amusing, er, debate between adult movie superstar Ron Jeremy and the video game industry:
Violent video games have "a much bigger negative influence on kids" than pornography, a leading porn star has claimed
Who's right? Neither. There are no big negative influences on today's kids, at least, none that have only recently started. Kids today are better behaved than they were 20 or 25 years ago, before any of the supposedly morally corrosive new technologies arrived to corrupt their minds: mobile phones, social networking, internet porn, violent video games...

Those are some strong claims I just made. The fear that something is very wrong in 21st century society, and that new technology has something to do with it, is widespread - whether the panic be about sexting, cyberbullying, the Facebook Generation, whatever - but the statistics tell a quite different and more positive story.

Crime rates fell, a lot, during the 1990s and have since declined a bit more, or stayed stable, in the USA (source):

In the UK, the dates are a little different but the recent drop is similar: crime rose up to the mid 1990s and then fell back down to where it started (source):

The picture is roughly the same in other industrialized countries. Bearing in mind that the vast majority of crime is committed by young people (specifically young men), this is evidence that something is not rotten in the state of today's yoof.

That's in terms of how they relate to others - what about how they feel about themselves? Have rates of mental illness increased? That's a difficult one because mental illness statistics are problematic, but in terms of the body count, suicide rates in young people have declined, albeit slightly, over the same period (source US, UK).

We don't know why crime rates fell. Everyone agrees that it happened, but everyone has their own ideas as to the cause, ranging from more abortions (the "Freakonomics theory"), to less lead pollution, to cellphones making it easier to report crimes, to... I'm sure you can make up your own. Ditto for suicide.

The point is, whatever reduced them, it's unlikely that something else was acting to increase them by any significant amount over the same period. It's possible - maybe something about 21st century life causes loads of crime and suicide, but luckily, some other mystery factor(s) reduced them even more at just the right time. But that's pretty implausible; if nothing else, Occam's razor tells us not to multiply explanatory factors unnecessarily. Which means it's implausible that the internet, video games, and the rest, are causing any significant degree of harm. Which is great news. Unless you're one of those pundits who loves bad news.

Saturday, January 9, 2010

A Decade for Psychiatric Disorders...?

Nature kicks off the 2010s with an editorial pep-talk for psychiatry: A decade for psychiatric disorders.
New techniques — genome-wide association studies, imaging and the optical manipulation of neural circuits — are ushering in an era in which the neural circuitry underlying cognitive dysfunctions, for example, will be delineated... Whether for schizophrenia, depression, autism or any other psychiatric disorders, it is clear... that understanding of these conditions is entering a scientific phase more penetratingly insightful than has hitherto been possible.
But I don't feel too peppy.

The 2010s is not the decade for psychiatric disorders. Clinically, that decade was the 1950s. The 50s was when the first generation of psychiatric drugs was discovered - neuroleptics for psychosis (1952), MAOIs (1952) and tricyclics (1957) for depression, and lithium for mania (1949, although it took a while to catch on).

Since then, there have been plenty of new drugs invented, but not a single one has proven more effective than those available in 1959. New antidepressants like Prozac are safer in overdose, and have milder side effects, than older ones. New "atypical" antipsychotics have different side effects to older ones. But they work no better. Compared to lithium, newer "mood stabilizers" probably aren't even as good. (The only exception is clozapine, a powerful antipsychotic, but dangerous side-effects limit its use.)

Scientifically, the 1960s were the decade of psychiatry. We learned that antipsychotics block dopamine receptors in the brain, and that antidepressants inhibit the reuptake or breakdown of monoamines: noradrenaline and serotonin. So it was natural, if unimaginative, to hypothesise that psychosis is caused by "too much dopamine", and that depression is a case of "not enough monoamines". (As for lithium, we still don't know how it works. Two out of three ain't bad.)

These are still the core dogmas of biological psychiatry. Since the 60s, the amount of money and people involved in the field has exploded, but today's research is still essentially making footnotes to the work done 30 or 40 years ago. It would be somewhat unfair to say that we haven't made any solid advances since then, but only somewhat.

The double helix structure of DNA was discovered in 1953, just after antipsychotics and antidepressants. Imagine if biologists had learned about the double helix, but instead of using it to understand genetics, or catch criminals, or sequence genomes, they spent 50 years arguing about whether all DNA was shaped like that, or only some of it.

The standard response to the charge that psychiatry has lagged behind the rest of medicine is that "It's hard". And it is, because it's about human life, which is complex. But so is the subject matter of every science: the whole point is to seek simplicity in the complexity. Genetics was hard, until we worked out how to do it.

What's remarkable is that so many things in psychiatry are simple. For example: any drug which blocks the dopamine transporter (DAT) in the brain has stimulant effects: increased energy, focus, and motivation, and at high doses, euphoria, grandiosity, and potentially addiction. Cocaine, amphetamine, Ritalin etc all work this way. There are no cocaine-like drugs that don't block DAT and no DAT inhibitors that aren't cocaine-like. Simple. The stimulant high looks strikingly like the mania seen in bipolar disorder, and is pretty much the exact opposite of what happens in clinical depression. Couldn't be easier.

There are plenty of cases just like this. What's also striking is that neuroscience has advanced in leaps and bounds since the 1960s. A 60s, or even a 90s, textbook about neuroscience looks incredibly dated - a 60s psychiatry textbook is essentially still up-to-date except for the drug names. Contemporary neuroscience is far from being a mature science like genetics, and it has its problems (see: all my previous posts), but compared to psychiatry, "basic" neuroscience is rock-solid. Although I trained as a basic neuroscientist, so I would say that.

Why? That's an excellent question. But if you ask me, and judging by the academic literature I'm not alone, the answer is: diagnosis. The weak link in psychiatry research is the diagnoses we are forced to use: "major depressive disorder", "schizophrenia", etc.

Basic neuroscientists don't use these. If a neuroscientist wants to study the effect of, say, pepperoni pizza on the human caudate nucleus, they can order a Domino's, recruit their friends as research subjects, pop them in an MRI scanner and get to work doing rigorous (and delicious) science. They've got the pepperoni pizza, they've got the human caudate nucleus - away they go.

Whereas in order to do research in psychiatry, you need patients, and to decide who's a patient and who isn't you basically have to use DSM-IV criteria, which are all but meaningless in most cases. It doesn't matter what amazing new scientific tools you have - genome-wide association studies, proteomics, brain imaging, whatever. If you're using them to study differences between "depressed people" and "normal people", and your "depressed people" are a mix of people who aren't ill and just need a holiday or a divorce, undiagnosed thyroid cases, local bums lying about being depressed to get paid for being in the study, and (if you're lucky) a few "really" clinically depressed people, you'll not get very far.

Edit 10.1.2010 - Changed the date of the discovery of the structure of DNA from 1952 to the correct 1953, oops.

Nature (2010). A decade for psychiatric disorders. Nature, 463(7277), 9. DOI: 10.1038/463009a
