
Saturday, February 19, 2011

The Web of Morgellons

A fascinating new paper: Morgellons Disease, or Antipsychotic-Responsive Delusional Parasitosis, in an HIV Patient: Beliefs in The Age of the Internet

“Mr. A” was a 43-year-old man...His most pressing medical complaint was worrisome fatigue. He was not depressed...had no formal psychiatric history, no family psychiatric history, and he was a successful businessman.

He was referred to the psychiatry department by his primary-care physician (PCP) because of a 2-year-long complaint of pruritus [itching] accompanied by the belief of being infested with parasites. Numerous visits to the infectious disease clinic and an extensive medical work-up...had not uncovered any medical disorder, to the patient’s great frustration.

Although no parasites were ever trapped, Mr. A caused skin damage by probing for them and by applying topical solutions such as hydrogen peroxide to “bring them to the surface.” After reading about Morgellons disease on the Internet, he “recalled” extruding particles from his skin, including “dirt” and “fuzz.”

During the initial consultation visit with the psychiatrist, Mr. A was apprehensive but cautiously optimistic that a medication could help. The psychiatrist had been forewarned by the PCP that the patient had discovered a website describing Morgellons and “latched onto” this diagnosis.

However, it was notable that the patient allowed the possibility (“30%”) that he was suffering from delusions (and not Morgellons), mostly because he trusted his PCP, “who has taken very good care of me for many years.”

The patient agreed to a risperidone [an antipsychotic] trial of up to 2 mg per day. [i.e. a lowish dose]. Within weeks, his preoccupation with being infested lessened significantly... Although not 100% convinced that he might not have Morgellons disease, he is no longer pruritic and is no longer damaging his skin or trying to trap insects. He remains greatly improved 1 year later.
(Mr A. had also been HIV+ for 20 years, but he still had good immune function and the HIV may have had nothing to do with the case.)

"Morgellons" is, according to people who say they suffer from it, a mysterious disease characterised by the feeling of parasites or insects moving underneath the skin, accompanied by skin lesions out of which emerge strange, brightly-coloured fibres or threads. Other symptoms include fatigue, aches and pains, and difficulty concentrating.

According to almost all doctors, there are no parasites, the lesions are caused by the patient's own scratching or attempts to dig out the non-existent critters, and the fibres come from clothes, carpets, or other textiles which the patient has somehow inserted into their own skin. It may seem unbelievable that someone could do this "unconsciously", but stranger things have happened.

As the authors of this paper, Freudenreich et al, say, Morgellons is a disease of the internet age. It was "discovered" in 2002 by one Mary Leitao, with Patient Zero being her own 2-year-old son. Since then its fame, and the reported number of cases, have grown steadily - especially in California.

Delusional parasitosis is the opposite of Morgellons: doctors believe in it, but the people who have it, don't. It's seen in some mental disorders and is also quite common in abusers of certain drugs like methamphetamine. It feels like there are bugs beneath your skin. There aren't, but the belief that there are is very powerful.

This then is the raw material in most cases; what the concept of "Morgellons" adds is a theory, a social context and a set of expectations that helps make sense of the otherwise baffling symptoms. And as we know, expectations, whether positive or negative, tend to become experiences. The diagnosis doesn't create the symptoms out of nowhere but rather takes them and reshapes them into a coherent pattern.

As Freudenreich et al note, doctors may be tempted to argue with the patient - you don't have Morgellons, there's no such thing, it's absurd - but the whole point is that mainstream medicine couldn't explain the symptoms, which is why the patient turned to less orthodox ideas.

Remember the extensive tests that came up negative "to the patient’s great frustration." And remember that "delusional parasitosis" is not an explanation of the symptoms, just a description of them. To diagnose someone with it is to say, "We've no idea why, but you've imagined this." True, maybe, but not very palatable.

Rather, they say, doctors should just suggest that maybe there's something else going on, and should prescribe a treatment on that basis. Not rejecting the patient's beliefs but saying, maybe you're right, but in my experience this treatment makes people with your condition feel better, and that's why you're here, right?

Whether the pills worked purely as a placebo or whether there was a direct pharmacological effect, we'll never know. Probably it was a bit of both. It's not clear that it matters much, really. The patient improved, and it's unlikely that the pills would have worked as well had they been given in an atmosphere of coercion or rejection - if indeed he'd agreed to take them at all.

Morgellons is a classic case of a disease that consists of an underlying experience filtered through the lens of a socially-transmitted interpretation. But every disease is that, to a degree. Even the most rigorously "medical" conditions like cancer also come with a set of expectations and a social meaning; psychiatric disorders certainly do.

I guess Morgellons is too new to be a textbook case yet - but it should be. Everyone with an interest in the mind, everyone who treats diseases, and everyone who's ever been ill - everyone really - ought to be familiar with it because while it's an extreme case, it's not unique. "All life is here" in those tangled little fibres.

Freudenreich O, Kontos N, Tranulis C, & Cather C (2010). Morgellons disease, or antipsychotic-responsive delusional parasitosis, in an HIV patient: beliefs in the age of the Internet. Psychosomatics, 51(6), 453-457. PMID: 21051675

Thursday, February 17, 2011

WMDs vs MDD

Weapons of Mass Destruction. Nuclear, chemical and biological weapons. They're really nasty, right?

Well, some of them are. Nuclear weapons are Very Destructive Indeed. Even a tiny one, detonated in the middle of a major city, would probably kill hundreds of thousands. A medium-sized nuke could kill millions. The biggest would wipe a small country off the map in one go.

Chemical and biological weapons, on the other hand, while hardly nice, are just not on the same scale.

Sure, there are nightmare scenarios - a genetically engineered supervirus that kills a billion people - but they're hypothetical. If someone does design such a virus, then we can worry. As it is, biological weapons have never proven very useful. The 2001 US anthrax letters killed 5 people. Jared Loughner killed 6 with a gun he bought from a chain store.

Chemical weapons are little better. They were used heavily in WW1 and the Iran-Iraq War against military targets and killed many but never achieved a decisive victory, and the vast majority of deaths in these wars were caused by plain old bullets and bombs. Iraq's use of chemical weapons against Kurds in Halabja killed perhaps 5,000 - but this was a full-scale assault by an advanced air force, lasting several hours, on a defenceless population.

When a state-of-the-art nerve agent was used in the Tokyo subway attack, after much preparation by the cult responsible, who had professional chemists and advanced labs, 13 people died. In London on the 7th July 2005, terrorists killed 52 people with explosives made from haircare products.

Nuclear weapons aside, the best way to cause mass destruction is just to make an explosion, the bigger the better; yet conventional explosives, no matter how big, are not "WMDs", while chemical and biological weapons are.

So it seems to me that the term and the concept of "WMDs" is fundamentally unhelpful. It lumps together the apocalyptically powerful with the much less destructive. If you have to discuss everything except guns and explosives in one category, terms like "Unconventional weapons" are better as they avoid the misleading implication that all of these weapons are very, and equivalently, deadly; but grouping them together at all is risky.

That's WMDs. But there are plenty of other unhelpful concepts out there, some of which I've discussed previously. Take the concept of "major depressive disorder", for example. At least as the term is currently used, it lumps together extremely serious cases requiring hospitalization with mild "symptoms" which 40% of people experience by age 32.

Wednesday, February 16, 2011

Boy Without A Cerebellum...Has No Cerebellum

A reader pointed me to this piece:
Boy Without a Cerebellum Baffles Doctors
Argh. This is going to be a bit awkward. So I'll just say at the outset that I have nothing against kids struggling with serious illnesses and I wish them all the best.


The article's about Chase Britton, a boy who apparently lacks two important parts of the brain: the cerebellum and the pons. Despite this, the article says, Chase is a lovely kid and is determined to be as active as possible.

As I said, I am all in favor of this. However, the article runs into trouble when it starts to argue that "doctors are baffled" by this:

When he was 1 year old, doctors did an MRI, expecting to find he had a mild case of cerebral palsy. Instead, they discovered he was completely missing his cerebellum -- the part of the brain that controls motor skills, balance and emotions.

"That's when the doctor called and didn't know what to say to us," Britton said in a telephone interview. "No one had ever seen it before. And then we'd go to the neurologists and they'd say, 'That's impossible.' 'He has the MRI of a vegetable,' one of the doctors said to us."

Chase is not a vegetable, leaving doctors bewildered and experts rethinking what they thought they knew about the human brain.

They don't say which doctor made the "vegetable" comment but whoever it was deserves to be hit over the head with a large marrow because it's just not true. The cerebellum is more or less a kind of sidekick for the rest of the brain. Although it actually contains more brain cells than the rest of the brain put together (they're really small ones), it's not required for any of our basic functions such as sensation or movement.

Without it, you can still move, because movement commands are initiated in the motor cortex. Such movement is clumsy and awkward (ataxia), because the cerebellum helps to coordinate things like posture and gait, getting the timing exactly right to allow you to move smoothly. Like how your mouse makes it easy and intuitive to move the cursor around the screen.

Imagine if you had no mouse and had to move the cursor with a pair of big rusty iron levers to go left and right, up and down. It would be annoying, but eventually, maybe, you could learn to compensate.

From the footage of Chase alongside the article it's clear that he has problems with coordination, although he's gradually learning to move despite them.

Lacking a pons, however, is another kettle of fish. The pons is part of your brainstem and it controls, amongst other things, breathing. In fact you (or rather your body) can survive perfectly well if the whole of your brain above the pons is removed; only the brainstem is required for vital functions.

So it seems very unlikely that Chase actually lacks a pons. The article claims that scans show that "There is only fluid where the cerebellum and pons should be" but as Steven Novella points out in his post on the case, the pons might be so shrunken that it's not easily visible - at least not in the place it normally is - yet functional remnants could remain.

As for the idea that the case is bafflingly unique, it's not really. There are no less than 6 known types of pontocerebellar hypoplasia caused by different genes; Novella points to a case series of children whose cerebellums seemed to develop normally in the womb, but then degenerated when they were born prematurely, which Chase was.

The article has had well over a thousand comments and has attracted lots of links from religious websites amongst others. The case seems, if you believe the article, to mean that the brain isn't all that important, almost as if there was some kind of immaterial soul at work instead... or at the very least suggesting that the brain is much more "plastic" and changeable than neuroscientists suppose.

Unfortunately, the heroic efforts that Chase has been required to make to cope with his disability suggest otherwise and as I've written before, while neuroplasticity is certainly real it has its limits.

Tuesday, February 8, 2011

The Social Network and Anorexia

Could social networks be more important than the media in the spread of eating disorders?

There's a story about eating disorders roughly like this: eating disorders (ED) are about wanting to be thin. The idea that thinness is desirable is something that's spread by Western media, especially visual media, i.e. TV and magazines. Therefore, Western media exposure causes eating disorders.

It's a nice simple theory. And it seems to fit with the fact that eating disorders, hitherto very rare, start to appear in a certain country in conjunction with the spread of Westernized media. A number of studies have shown this. However, a new paper suggests that there may be rather more to it: Social network media exposure and adolescent eating pathology in Fiji.

Fiji is a former British colony, a tropical island nation of less than a million. Just over half the population are ethnic native Fijian people. Until recently, these Fijians were relatively untouched by Western culture, but this is starting to change.

The authors of this study surveyed 523 Fijian high school girls. Interviews took place in 2007. They asked them various questions relating to, on the one hand, eating disorder symptoms, and on the other, their exposure to various forms of media.

They looked at both individual exposure - hours of TV watched, electronic entertainment in the home - and "indirect" or "social network" exposure, such as TV watched by the parents, and the amount of electronic entertainment their friends owned. On top of this they measured Westernization/"globalization", such as the amount of overseas travel by the girls or their parents.

So what happened? Basically, social network media exposure, urbanization, and Westernization correlated with ED symptoms, but when you controlled for those variables, personal media exposure didn't correlate. Here's the data; the column I've highlighted is the data where each variable is controlled for the others. The correlations are pretty small (0 is none, 1.0 would be perfect) but significant.
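To make the idea of "controlling for" the other variables concrete, here is a minimal sketch in Python of the kind of adjusted analysis involved. It uses simulated numbers and made-up variable names, not the study's actual data or model: each predictor's association with symptoms is read off a regression that also includes the other predictors, so it only gets credit for what it explains over and above them.

# Illustrative sketch only: simulated data and hypothetical variable names,
# not the Fiji study's actual dataset or statistical model.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 523  # same sample size as the survey, purely for flavour

personal = rng.normal(size=n)                   # own TV/media exposure
network = 0.6 * personal + rng.normal(size=n)   # parents'/friends' exposure, correlated with it
urban = rng.normal(size=n)                      # urbanization/Westernization proxy
symptoms = 0.4 * network + 0.2 * urban + rng.normal(size=n)

# Regress symptoms on all three exposures at once; each coefficient is then
# the association "controlling for" the other variables.
X = sm.add_constant(np.column_stack([personal, network, urban]))
fit = sm.OLS(symptoms, X).fit()
print(fit.params)    # [intercept, personal, network, urban]
print(fit.pvalues)   # personal's adjusted effect is near zero once network is in the model

In data simulated this way, personal exposure correlates with symptoms on its own (because it tracks network exposure), but its coefficient shrinks towards zero in the joint model - the same pattern the paper reports for direct versus indirect exposure.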


They conclude that:
Although consistent with the prevailing sociocultural model for the relation between media exposure and disordered eating... our finding, that indirect exposure to media content may be even more influential than direct exposure in this particular social context, is novel.
The idea that eating disorders are simply a product of a culture which values thinness as attractive has always seemed a bit shaky to me because people with anorexia frequently starve themselves far past the point of being attractive even by the unrealistic standards of magazines and movies.

In fact, if eating disorders were just an attempt to "look good", they wouldn't be nearly so dangerous as they are, because no matter how thin-obsessed our culture may be, no-one thinks this is attractive, or normal, or sane. But this, or worse, is what a lot of anorexics end up as.

On the other hand, eating disorders are associated with modern Western culture. There must be a link, but maybe it's more complicated than just "thin = good" causes anorexia. What if you also need the idea of "eating disorders"?

This was the argument put forward by Ethan Watters in Crazy Like Us (my review)... in his account of the rise of anorexia in Hong Kong. Essentially, he said, anorexia was vanishingly rare in Hong Kong until after the much-publicized death of a 14 year old girl, Charlene Chi-Ying, in the street. As he put it:
In trying to explain what happened to Charlene, local reporters often simply copied out of American diagnostic manuals. The mental-health experts quoted in the Hong Kong papers and magazines confidently reported that anorexia in Hong Kong was the same disorder that appeared in the United States and Europe...

As the general public and the region's mental-health professionals came to understand the American diagnosis of anorexia, the presentation of the illness in [Hong Kong psychiatrist] Lee's patient population appeared to transform into the more virulent American standard. Lee once saw two or three anorexic patients a year; by the end of the 1990s he was seeing that many new cases each month.
Now it's important not to see this as trivializing the condition or as a way of blaming the victim ("they're just following a trend!"). You only have to look at someone with anorexia to see that there is nothing trivial about it. However, that doesn't mean it's not a social phenomenon.

It's a long way from the data in this study to Watters' conclusions, but maybe not an impossible leap. Part of Westernization, after all, is exposure to Western ideas about what is healthy eating and what's an eating disorder...

Becker, A., Fay, K., Agnew-Blais, J., Khan, A., Striegel-Moore, R., & Gilman, S. (2011). Social network media exposure and adolescent eating pathology in Fiji. The British Journal of Psychiatry, 198(1), 43-50. DOI: 10.1192/bjp.bp.110.078675

Wednesday, February 2, 2011

Pharma: Tamed But Still A Big Beast

Everyone knows that Big Pharma go around lying, concealing data and distorting science in an effort to sell their pills. Right?

Actually, not so much. They used to, but most of the really scandalous stuff happened many years ago. The late '80s through to about the turn of the century were the Golden Age of pharmaceutical company deception.

This is when we had drugs that don't work getting approved, with the trials showing that they don't work buried, and only now being uncovered. Data on drug-induced suicides seemingly fudged to make them seem less scary. Textbooks "written by" leading psychiatrists that were, allegedly, in fact ghost-written on behalf of drug companies. Ghost-writing programs with chuckle-some names like CASPPER. And so on.

But today, we have to give credit where credit's due: things have improved. Credit is due not to the companies but to the authorities who put a stop to this nonsense through rules. Mandatory clinical trial registration to ensure all the data is available and to stop outcome cherry-picking. Anti-ghostwriting rules (albeit not yet universal). And so on.

What's shocking is how long it took to get these simple rules in place. The next generation of scientists and doctors will look back on the 1990s with disbelief: they let them do what? But at least we woke up eventually.

Still, there's more left to do. At the moment, the main problem, as I see it, is that different jurisdictions have different rules, with the best ideas being confined to one particular place. For instance, the USA has by far the most sensible system of clinical trial registration and reporting. Europe needs to catch up (we are, but slowly.)

Yet the USA is also one of only two countries (the other being New Zealand) to permit direct-to-consumer (DTC) advertising for prescription drugs. To the rest of the world, this is really weird. We all have a right to free speech. But drug companies pushing drugs directly to patients just isn't a free speech issue, in Europe. Corporations don't speak, they advertise.

By encouraging self-diagnosis and self-treatment, DTC replaces medical judgement with marketing, undermining the doctor-patient relationship. The patient is meant to present his symptoms and the doctor is meant to make a diagnosis and prescribe a treatment. DTC encourages self-diagnosis and self-prescription: the fact that a doctor is still, technically, in charge and has to sign that prescription, means little in practice.

So there's a lot to be happy about, but there's also a lot still to do.

Sunday, January 16, 2011

Psychoanalysis: So Bad It's Good?

Many of the best things in life are terrible.


We all know about the fun to be found in failure, as exemplified by Judge A Book By Its Cover and of course FailBlog. The whole genre of B-movie appreciation is based on the maxim of: so bad, it's good.

But could the same thing apply to psychotherapies?

Here's the argument. Freudian psychoanalysis is a bit silly. Freud had pretensions to scientific respectability, but never really achieved it, and with good reason. You can believe Freud, and if you do, it kind of makes sense. But to anyone else, it's a bit weird. If psychoanalysis were a person, it would be the Pope.

By contrast, cognitive-behavioural therapy is eminently reasonable. It relies on straightforward empirical observations of the patient's symptoms, and on trying to change people's beliefs by rational arguments and real-life examples ("behavioural experiments"). CBT practitioners are always keen to do randomized controlled trials to provide hard evidence for their success. CBT is Richard Dawkins.

But what if the very irrationality of psychoanalysis is its strength? Mental illness is irrational. So's life, right? So maybe you need an irrational kind of therapy to deal with it.

This is almost the argument advanced by Robert Rowland Smith in a short piece In Defence of Psychoanalysis:
...The irony is that in becoming more “scientific”, CBT becomes less therapeutic. Now, Freud himself liked to be thought of as a scientist (he began his career in neurology, working on the spinal ganglia), but it’s the non-scientific features that make psychoanalysis the more, not the less, powerful.

I’m referring to the therapeutic relationship itself. Although like psychoanalysis largely a talking cure, CBT prefers to set aside the emotions in play between doctor and patient. Psychoanalysis does the reverse. To the annoyance no doubt of many a psychoanalytic patient, the very interaction between the two becomes the subject-matter of the therapy.

The respected therapist and writer Irvin Yalom, among others, argues that depression and associated forms of sadness stem from an inability to make good contact with others. Relationships are fundamental to happiness. And so a science that has the courage to include the doctor’s relationship with the patient within the treatment itself, and to work with it, is a science already modelling the solution it prescribes. What psychoanalysis loses in scientific stature, it gains in humanity.
Rowland Smith's argument is that psychoanalysis offers a genuine therapeutic relationship complete with transference and countertransference, while CBT doesn't. He also suggests that analysis is able to offer this relationship precisely because it's unscientific.

Human relationships aren't built on rational, scientific foundations. They can be based on lots of stuff, but reason and evidence ain't high on the list. Someone who agrees with you on everything, or helps you to discover things, is a colleague, but not yet a friend unless you also get along with them personally. Working too closely together on some technical problem can indeed prevent friendships forming, because you never have time to get to know each other personally.

Maybe CBT is just too sensible: too good at making therapists and patients into colleagues in the therapeutic process. It provides the therapist with a powerful tool for understanding and treating the patient's symptoms, at least on a surface level, and involving the patient in that process. But could this very rationality make a truly human relationship impossible?

I'm not convinced. For one thing, there can be no guarantee that psychoanalysis does generate a genuine relationship in any particular case. But you might say that you can never guarantee that, so that's a general problem with all such therapy.

More seriously, psychoanalysis still tries to be scientific, or at least technical, in that it makes use of a specialist vocabulary and ideas ultimately derived from Sigmund Freud. Few psychoanalysts today agree with Freud on everything, but, by definition, they agree with him on some things. That's why they're called "psychoanalysts".

But if psychoanalysis works because of the therapeutic relationship, despite, or even because, Freud was wrong about most things... why not just chat about the patient's problems with the minimum of theoretical baggage? Broadly speaking, counselling is just that. Rowland Smith makes an interesting point, but it's far from clear that it's an argument for psychoanalysis per se.

Note:
A truncated version of this post briefly appeared earlier because I was a wrong-button-clicking klutz this morning. Please ignore that if you saw it.

Saturday, January 15, 2011

Autistic Children In The Media

Emory University's Jennifer Sarrett offers an interesting although sadly brief analysis of the way in which autism is treated in the mass media: Trapped Children.

She examines media depictions of children with autism, first in the 1960s, and then today. In those 40 years, professionals radically changed their minds about autism: in the 60s, a lot of people thought it was caused by emotionally distant refrigerator mothers; nowadays, we think it's a neural wiring disorder caused by deleted genes.

Yet, she says, while theories about the causes have changed, the media's view of what autism is hasn't, and assumptions from the 60s are still around (even amongst professionals). She identifies two enduring themes:

Fragmentation. The child with autism is somehow not a whole person; they are fundamentally "broken". And the family with an autistic child is emotionally shattered, too. In the 60s, the theory was that the broken family caused the autism. Nowadays, it's the other way round: having an autistic child stresses family relationships to breaking-point.

Imprisonment. The child with autism is at heart "normal", but their autism has them trapped, blocked-off from the world. Bruno Bettelheim, a leading champion of the refrigerator mother theory, called his major book The Empty Fortress. Either professionals, or parents, need to "break through" the autism to contact the "real" child imprisoned by the disorder. Likewise, this real child is eager to get out, but this is very difficult: they are crying out for help. In the 60s, it was psychoanalysis that could free the child. Today, it's anything from Prozac to chelation and other quack "biomedical" cures.

The problem with these kinds of articles is that you can really make up any themes you want, and find examples to fit. That doesn't mean it's a pointless exercise, it just means that the examples can't prove the analysis right. You need to ask yourself: does this, in general, ring true?

Sarrett's analysis does ring true for me, especially the theme of imprisonment, which is almost never made explicit, but it seems to lurk in the background of a lot of modern thought about autism. The autistic isn't really autistic. Their autism is something external - if only we could reach the normal child underneath! Every attempt to "cure" or "rescue" the autistic child relies on this belief.

I said that this paper is sadly brief. There's so much more to say on this topic; in particular, we need to compare representations of autism to those of other developmental disorders like Down's syndrome, in order to work out what's specific to autism as opposed to just general "disability" or "disorder".

However, I think if you did this, you'd probably end up agreeing with the paper. I can't remember Down's syndrome being portrayed as a kind of self-fragmentation or imprisonment; this article seems quite typical.

Sarrett recommends accounts by authors who have autism themselves for an alternative and more valid view of autism: people like Temple Grandin and Daniel Tammet:
autistic voices can promote a much needed faithfulness and tolerance to future representations of autism and those diagnosed with autism.
Although she admits that these authors only speak for a subset of those with "high-functioning" autism or Asperger's, and that
there remains a population of people with autism who are not writing, speaking and reading, making the representations advanced by these narratives subject to questions about generalizability.
Sarrett JC (2011). Trapped Children: Popular Images of Children with Autism in the 1960s and 2000s. The Journal of Medical Humanities. PMID: 21225325

Autistic Children In The Media

Emory University's Jennifer Sarrett offers an interesting although sadly brief analysis of the way in which autism is treated in the mass media: Trapped Children.

She examines media depictions of children with autism, first in the 1960s, and then today. In those 40 years, professionals radically changed their minds about autism: in the 60s, a lot of people thought it was caused by emotionally distant refrigerator mothers; nowadays, we think it's a neural wiring disorder caused by deleted genes.

Yet, she says, while theories about the causes have changed, the media's view of what autism is hasn't, and assumptions from the 60s are still around (even amongst professionals). She identifies two enduring themes:

Fragmentation. The child with autism is somehow not a whole person; they are fundamentally "broken". And the family with an autistic child is emotionally shattered, too. In the 60s, the theory was that the broken family caused the autism. Nowadays, it's the other way round: having an autistic child stresses family relationships to breaking-point.

Imprisonment. The child with autism is at heart "normal", but their autism has them trapped, blocked-off from the world. Bruno Bettelheim, a leading champion of the refrigerator mother theory, called his major book The Empty Fortress. Either professionals, or parents, need to "break through" the autism to contact the "real" child imprisoned by the disorder. Likewise, this real child is eager to get out, but this is very difficult: they are crying out for help. In the 60s, it was psychoanalysis that could free the child. Today, it's anything from Prozac to chelation and other quack "biomedical" cures.

The problem with these kinds of articles is that you can really make up any themes you want, and find examples to fit. That doesn't mean it's a pointless exercise, it just means that the examples can't prove the analysis right. You need to ask yourself: does this, in general, ring true?

Sarrett's analysis does ring true for me, especially the theme of imprisonment, which is almost never made explicit, but it seems to lurk in the background of a lot of modern thought about autism. The autistic isn't really autistic. Their autism is something external - if only we could reach the normal child underneath! Every attempt to "cure" or "rescue" the autistic child relies on this belief.

I said that this paper is sadly brief. There's so much more to say on this topic; in particular, we need to compare representations of autism to those of other developmental disorders like Down's syndrome, in order to work out what's specific to autism as opposed to just general "disability" or "disorder".

However, I think if you did this, you'd probably end up agreeing with the paper. I can't remember Down's syndrome being portrayed as a kind of self-fragmentation or imprisonment; this article seems quite typical.

Sarrett recommends accounts by authors who have autism themselves for an alternative and more valid view of autism: people like Temple Grandin and Daniel Tammet:
autistic voices can promote a much needed faithfulness and tolerance to future representations of autism and those diagnosed with autism.
She admits, however, that these authors only speak for a subset of those with "high-functioning" autism or Asperger's, and that
there remains a population of people with autism who are not writing, speaking and reading, making the representations advanced by these narratives subject to questions about generalizability.
Sarrett JC (2011). Trapped Children: Popular Images of Children with Autism in the 1960s and 2000s. The Journal of Medical Humanities. PMID: 21225325

Wednesday, January 12, 2011

A Brief Guide to Being Shot in the Head

You know what this is about. I don't have anything especially useful to say about the recent tragedy, or the question of crazy vs. political: at this stage, it's all speculation. Let's wait for the trial.
But anyway, the incredible thing is that Rep. Gabrielle Giffords survived a bullet to the head. How?

One of the amazing things about the brain is that, for basic survival, almost all of it is unnecessary. The bullet passed through Giffords' left cerebral cortex, various parts of which are responsible for moving the right side of the body, seeing and hearing things from the right, and, in most people, language. But the only part of the brain which you actually need in order to live is the brainstem, which sits at the top of the spinal cord.

The main reason you need your brainstem is that it controls breathing. It also regulates your heart rate and blood pressure, but your heart pumps by itself, without any input from the brain: the brain just does the fine-tuning. Breathing, however, is controlled directly by several brainstem nuclei, and if you stop breathing, your blood will run out of oxygen and you'll die (without artificial ventilation).

Damage to any other part of the brain is potentially survivable. Of course, you might still bleed to death from the head injury, or get an infection; there's also the risk of brain swelling, which can be fatal by compressing the brainstem (amongst other problems). This is why doctors have removed a large part of Giffords' skull, to give the brain room.

But the brainstem can do a surprising amount on its own. In the early days of neuroscience, there was a bit of a fad for decerebrating animals - essentially disconnecting everything above the brainstem. These animals were still "alive", at least in the sense that they weren't corpses; decerebrate cats can walk and run.

They don't walk anywhere in particular, but this shows that the spinal cord and brainstem can control movement and respond to sensory feedback. It's even on YouTube. The famous headless chicken that lived for over a year - that really happened, it's no myth - is another such case.

Monday, January 3, 2011

Left Wing vs. Right Wing Brains

So apparently: Left wing or right wing? It's written in the brain

People with liberal views tended to have increased grey matter in the anterior cingulate cortex, a region of the brain linked to decision-making, in particular when conflicting information is being presented...

Conservatives, meanwhile, had increased grey matter in the amygdala, an area of the brain associated with processing emotion.

This was based on a study of 90 young adults using MRI to measure brain structure. Sadly that press release is all we know about the study at the moment, because it hasn't been published yet. The BBC also have no fewer than three radio shows about it here, here and here.

Politics blog Heresy Corner discusses it...
Subjects who professed liberal or left-wing opinions tended to have a larger anterior cingulate cortex, an area of the brain which, we were told, helps process complex and conflicting information. (Perhaps they need this extra grey matter to be able to cope with the internal contradictions of left-wing philosophy.)
This kind of story tends to attract chuckle-some comments.

In truth, without seeing the full scientific paper, we can't know whether the differences they found were really statistically solid, or whether they were voodoo or fishy. The authors, Geraint Rees and Ryota Kanai, have both published a lot of excellent neuroscience in the past, but that's no guarantee.
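To make concrete the kind of thing that can go wrong (a purely hypothetical sketch, not the study's actual analysis): if you compare two groups across lots of brain regions without correcting for multiple comparisons, a few regions will look "significant" by chance alone. A rough simulation in Python, with invented region counts and volumes:

```python
# Hypothetical illustration (not the actual study): with two groups of 45
# people and many brain regions, uncorrected t-tests will flag some regions
# as "different" even when no real difference exists.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

n_per_group = 45   # the study reportedly scanned 90 young adults in total
n_regions = 100    # made-up number of regions compared

# Draw grey matter "volumes" for both groups from the SAME distribution,
# so every significant result below is a false positive.
group_a = rng.normal(loc=1.0, scale=0.1, size=(n_per_group, n_regions))
group_b = rng.normal(loc=1.0, scale=0.1, size=(n_per_group, n_regions))

p_values = np.array([
    stats.ttest_ind(group_a[:, r], group_b[:, r]).pvalue
    for r in range(n_regions)
])

print("Regions 'significant' at p < .05, uncorrected:", int((p_values < 0.05).sum()))
# Typically around 5 of the 100 regions pass by chance alone, which is why
# the full paper's statistics and corrections matter before drawing conclusions.
```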

In fact, however, I suspect that the brain is just the wrong place to look if you're interested in politics, because most political views don't originate in the individual brain; they originate in the wider culture and are absorbed and regurgitated without much thought. This is a real shame, because all of us, left or right, have a brain, and it's really quite nifty.

But when it comes to politics, we generally don't use it. The brain is a powerful organ designed to help you deal with reality in all its complexity. For a lot of people, politics doesn't take place there; it happens in fairytale kingdoms populated by evil monsters, foolish jesters, and brave knights.

Given that the characters in this story are mindless stereotypes, there's no need for empathy. Because the plot comes fully-formed from TV or a newspaper, there's no need for original ideas. Because everything is either obviously right or obviously wrong, there's not much reasoning required. And so on. Which is why this happens, amongst other things.

I don't think individual personality is very important in determining which political narratives and values you adopt: your family background, job, and position in society are much more important.

Where individual differences matter, I think, is in deciding how "conservative" or "radical" you are within whatever party you find yourself. Not in the sense of left or right, but in terms of how keen you are on grand ideas and big changes, as opposed to cautious, boring pragmatism.

In this sense, there are conservative liberals (e.g. Obama) and radical conservatives (e.g. Palin), and that's the kind of thing I'd be looking for if I were trying to find political differences in the brain.

Links: If right wingers have bigger amygdalae, does that mean patient SM, the woman with no amygdalae at all, must be a communist? Then again, Neuroskeptic readers may remember that the brain itself is a communist...