Showing posts with label neurofetish. Show all posts

Friday, July 15, 2011

Violent Brains In The Supreme Court

Back in June, the U.S. Supreme Court ruled that a Californian law banning the sale of violent videogames to children was unconstitutional because it violated the right to free speech.

However, the ruling wasn't unanimous. Justice Stephen Breyer filed a dissenting opinion. Unfortunately, it contains a whopping misuse of neuroscience. The ruling is here. Thanks to the Law & Neuroscience Blog for noticing this.

Breyer says (on page 13 of his dissent):
Cutting-edge neuroscience has shown that “virtual violence in video game playing results in those neural patterns that are considered characteristic for aggressive cognition and behavior.”
He then cites this fMRI study from 2006. It's from the same group as this one I wrote about recently.

Breyer quotes this study as part of a discussion of the evidence linking violent video game use to violence. I have nothing to say about this, except to point out that violent crime fell heavily in America after 1990, which is when the Super Nintendo and Sega Mega Drive appeared.

Anyway, does this study show that playing violent games causes aggressive brain activity? Not exactly. By which I mean "no".

They scanned 13 young men playing a shooter game. The main finding was that during "violent" moments of the game, activity in the rostral ACC and the amygdala falls. At least, that's the interpretation the authors give.

OK, but even if this neural response is "characteristic for aggressive cognition and behavior", it only lasted a few seconds. There's no evidence at all that this causes any lasting effects on brain function, or behaviour.

The real problem though is that the whole thing is based on the theory that violence is associated with reduced amygdala (and rACC) activity.

The authors cite various studies to this effect, but they don't distinguish between reduced activity as an immediate neural response to violence, as in this study, and reduced activity in people with high exposure to violent media, in response to non-violent stimuli.

This is rather like saying that because having a haircut reduces your total hair, and because bald people have no hair, haircuts cause baldness. Short-term doesn't automatically become long-term.

Besides, the whole idea that amygdala deactivation = violence is a bit weird, because surgeons used to destroy people's amygdalas to reduce violent aggression in severe mental and neurological illness:
Different surgical approaches have involved various stereotactic devices and modalities for amygdaloid nucleus destruction, such as the injection of alcohol, oil, kaolin, or wax; cryoprobe lesioning; mechanical destruction; diathermy loop; and radiofrequency lesioning...
Lovely. It even worked sometimes, apparently, although it killed 4% of patients. You can't reduce the activity of a region much more than by destroying it, yet destroying the amygdala reduced violence, or at the very least, didn't make it worse.

The truth is that aggression isn't a single thing. Everyone knows that there are two main kinds, "in cold blood" and "in the heat of the moment". Killing someone in a spontaneous bar brawl is one thing, but carefully planning to sneak up behind them and stab them is quite another.

Just based on what we know about the rare cases of amygdala-less people, I would imagine that destroying the amygdala would reduce violence "in the heat of the moment", which is motivated by anger and fear. The kind of patients who got this surgery seem to have been that kind of violent person, not the cold calculating kind.

So even if violent video games did reduce amygdala activity in the long term, that would probably reduce some kinds of violence.

Weber, R., Ritterfeld, U., & Mathiak, K. (2006). Does Playing Violent Video Games Induce Aggression? Empirical Evidence of a Functional Magnetic Resonance Imaging Study. Media Psychology, 8(1), 39-60. DOI: 10.1207/S1532785XMEP0801_4

Wednesday, July 13, 2011

The Brain Is Not Made of DNA

A new paper claims to have found A novel functional brain imaging endophenotype of autism.
They used fMRI to show that the brains of teenagers with autism showed no activation differences to looking at smiling happy faces, or afraid faces, compared to unemotional ones. In teens without autism, there was strong activation in many emotional and face-related brain regions. The unaffected brothers and sisters of the autistic people showed intermediate effects.

This is a fine study. The finding that siblings of people with autism have weakened neural responses to emotional faces is quite important, as it suggests that this response tracks (to some degree) your position on the autism "spectrum".

The abstract of the paper actually downplays this, and says "The response in unaffected siblings did not differ significantly from the response in autism". However, there was a significant linear trend of group, and looking at the graphs, it's clear the siblings were In The Middle, like Malcolm.
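The statistics behind that pattern are worth a second look: a siblings-versus-autism comparison only uses two of the three groups, while a linear trend test uses all 120 people at once, so the trend can easily reach significance where the pairwise test doesn't. Here's a toy simulation of that point (the effect sizes are made up for illustration and have nothing to do with the paper's actual numbers):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical effect sizes: autism mean 0.0, siblings intermediate at 0.4,
# controls at 0.8, within-group SD 1, n = 40 per group (as in the paper).
n, n_sims = 40, 2000
order = np.repeat([0, 1, 2], n)  # groups coded 0/1/2 for the trend test

pair_hits = trend_hits = 0
for _ in range(n_sims):
    autism  = rng.normal(0.0, 1.0, n)
    sibling = rng.normal(0.4, 1.0, n)
    control = rng.normal(0.8, 1.0, n)

    # Pairwise test: siblings vs autism, using only 80 of the 120 people.
    _, p_pair = stats.ttest_ind(sibling, autism)

    # Linear trend across the ordered groups, using everyone.
    _, p_trend = stats.pearsonr(order, np.concatenate([autism, sibling, control]))

    pair_hits  += p_pair < 0.05
    trend_hits += p_trend < 0.05

print(f"power, siblings vs autism t-test: {pair_hits / n_sims:.2f}")
print(f"power, linear trend over groups:  {trend_hits / n_sims:.2f}")
```

With the siblings exactly halfway between the other two groups, the trend test detects the gradient far more often than the pairwise comparison does - which is presumably why the abstract and the graphs seem to tell different stories.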


There are plenty more nice things you could do with these results, which form an unusually large and rich dataset (120 people - 40 in each group). You could see, for example, whether siblings tend to be similar in terms of neural response. You could see whether the siblings who are most alike in brain response are closest in symptoms. Or just look at the structural data on brain size and shape to see if there are characteristic differences between siblings that make one of them autistic and the other not.

There are a few problems. Most of the analyses are subject to the non-independence problem, because they defined their regions of interest based on the areas that showed a significant happy vs neutral face effect in the control group. So it's no surprise that when they generated graphs from these areas, the control group showed the strongest effect. However, they also do whole-brain analyses which avoid this problem and I don't think it undermines the main results.
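For readers unfamiliar with the non-independence (or "circular analysis") problem: if you pick your region of interest using the very data you then measure, pure noise will come out looking like an effect. A minimal sketch, with made-up voxel data rather than anything from the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Made-up data: 40 "subjects" x 1000 "voxels" of pure noise - no real effect.
n_subjects, n_voxels = 40, 1000
data  = rng.normal(0, 1, size=(n_subjects, n_voxels))   # data used to pick the ROI
fresh = rng.normal(0, 1, size=(n_subjects, n_voxels))   # independent replication data

# Define the "ROI" as the 20 voxels with the biggest mean response...
voxel_means = data.mean(axis=0)
roi = np.argsort(voxel_means)[-20:]

# ...then measure the ROI's effect in the same data vs. in fresh data.
biased_effect = data[:, roi].mean()    # inflated: selection and test aren't independent
honest_effect = fresh[:, roi].mean()   # ~0: there was never any effect

print(f"circular ROI effect:  {biased_effect:.2f}")
print(f"same ROI, fresh data: {honest_effect:.2f}")
```

The "effect" in the circularly-defined ROI is purely an artifact of selection; in fresh data from the same voxels it vanishes.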

So it's a decent study. But is this a "biomarker", or "endophenotype", as the title of the paper has it?

These are both hot topics in neuroscience at the moment. As the authors put it (emphasis mine):
An endophenotype is a heritable feature associated with a condition, present in affected individuals regardless of whether their condition is manifested, which co-segregates with the condition in families and which is present in unaffected family members at a higher rate than in the general population.

In such family members, endophenotypes represent instances in which genes associated with a particular condition exert measurable effects in individuals in whom they are insufficient to cause the condition itself...

The promise of characterizing endophenotypes lies in their hypothesized intermediate position between genotype and phenotype... the etiology of the endophenotype is likely to be correspondingly simpler: it can be said to be ‘closer to the level of gene action’.
The idea, in other words, is that if we can find a difference in the brains of people with autism, and their unaffected relatives who (presumably) share some of the same genes, we might have found a mechanism by which the genes ultimately cause the symptoms.

It might be easier, then, to find the genes for brain-not-lighting-up-to-happy-faces, than it will be to find genes for autism. Then once we've found those, we can use them to better understand autism.

My concern is that, while in theory endophenotypes seem "closer to the genetics" because they're "biological" rather than "behavioural", this is just a philosophical illusion based on the idea that the mind is not the brain.

We actually have no idea whether brain-not-lighting-up-to-happy-faces is closer to genetics than autistic behaviour. I'd say that our default assumption should be that everything is exactly the same "distance" from DNA, that is to say, everything is the product of complex interactions between genes and environment.

Some things are under the more or less exclusive control of a small number of genes, and these are called "genetic", but it's important not to assume that just because something's "in the brain", it's probably "more genetic" in this sense. The brain is a product of the environment as well.

If you scanned my brain while playing an audio recording of Urdu love poetry, not much would happen. I don't know Urdu. In someone who did speak Urdu, all kinds of language and emotional areas would light up. That doesn't mean the Urdu-brain-response is genetic. It's exactly as genetic as speaking Urdu, which isn't genetic.

Spencer, M., Holt, R., Chura, L., Suckling, J., Calder, A., Bullmore, E., & Baron-Cohen, S. (2011). A novel functional brain imaging endophenotype of autism: the neural response to facial expression of emotion. Translational Psychiatry, 1(7). DOI: 10.1038/tp.2011.18

Sunday, July 3, 2011

The NeuROFLscience of Jokes

A new paper in the Journal of Neuroscience investigates the neural basis of humour: Why Clowns Taste Funny.

The authors note that some things are funny because of ambiguous words. For example:
Q: Why don’t cannibals eat clowns?
A: Because they taste funny!
Previous studies, apparently, have shown that these kinds of jokes lead to activation in the lIFG (left inferior frontal gyrus), although it's also involved in processing ambiguity that's not funny, and indeed, language in general.

In this study, they scanned people with fMRI while playing them audio clips of sentences that were either funny or not, and that either contained ambiguity or not. Examples of non-funny ambiguity included crackers like this:
Q: What happened to the post?
A: As usual, it was given to the best-qualified applicant.

They found that, relative to straightforward ones, ambiguous sentences led to increased activation in two areas, the lIFG and also the left ITG. That fits with previous work.

By contrast, funny stimuli, whether ambiguous or not, sent the brain into overdrive, with humour causing activation all over a wide range of hilarious areas such as the amygdala, ventral striatum, hypothalamus, temporal lobes and more.

Many of these areas are known to be involved in emotion and pleasure, although some are fairly random such as visual area BA19.
There were strong associations between BOLD signal change and funniness in the midbrain, the left ventral striatum, and the left anterior and posterior IFG.
The problem is, like so many neuroimaging studies, it's not clear what this adds to our understanding of the topic. All this really shows is that linguistic ambiguity activates language areas, and enjoyable stimuli activate pleasure areas (amongst many others); it doesn't tell us why some things are funny.

So more research is needed, and future neuro-humour studies will need a new set of neuro-jokes in order to maximize the laughs. Here's a few I came up with:

Q: Why did the chicken cross the road?
A: Because of activation in the motor cortex, causing muscle contractions in its legs.

Q: What neuroimaging methodology is most useful for studying the brains of cats and dogs?
A: PET scanning.

Knock knock.
Who's there?
John.
I doubt that. The 'self' is an illusion. The concept of 'John' as an individual is incompatible with modern neuroscience.

Bekinschtein, T.A., Davis, M.H., Rodd, J.M., & Owen, A.M. (2011). Why Clowns Taste Funny: The Relationship between Humor and Semantic Ambiguity. The Journal of Neuroscience, 31(26), 9665-9671. PMID: 21715632

Friday, June 24, 2011

Blind Spots & Braintrust

This is a review of two recently published books about ethics: Bazerman and Tenbrunsel's Blind Spots (not to be confused with this one), and Patricia Churchland's Braintrust.

The pair may come from the same publisher (Princeton), but they couldn't be more different.


Blind Spots is a good book. It tells a story in a clear and compelling fashion, which is what a book is for.

The story is that we often act unethically, not because we're faced with ethical questions and decide to pick the "bad" option, but because we fail to see that there is an ethical issue at all.

This is not the same as saying that 'the road to hell is paved with good intentions'. That old phrase warns against trying to be good and, as a result, causing evil, because your plans go wrong. Blind Spots is saying, even if all of your attempts to be good work out just fine, you might still cause evil despite that.

For example, you could be a good employee, who never calls in sick unnecessarily, kind to your friends and colleagues, and a generous charity donor.

Unfortunately, you're an accountant connected to Enron, and your work - ultimately - consists of defrauding innocent people. But of course, you don't think of it like that, because we don't tend to think about things "ultimately".

Which is hard to disagree with. At worst, you could say it's obvious, although I think it's still something we ought to be reminded of. That's not all there is to the book, though: it also discusses how this happens and suggests ways to avoid it within organizations.

For example, the authors describe how setting up rewards and punishments to "make people be ethical" can make them less so, by encouraging people to think of the issue as a personal trade-off between gain and loss rather than an ethical dilemma - what the authors call "ethical fading".

A day-care centre was annoyed at the fact that some parents were picking up their children late. This was antisocial because it meant staff had to work late into the evening.

So they started charging parents a late fee. Not a big one, but enough to send people a message: this is wrong, don't do it. But in fact what happened was that late pickups became more common.

Previously, many people were making an effort to be on time, as a matter of principle. Once the fees were in place, it stopped being an ethical issue and just became a financial trade-off: is it worth paying the fee to get an extra hour?

Of course, you could make the fees higher to get around this, but even then, you've caused ethical fading, and you'll be relying on the sanctions from that point on.


Braintrust, by contrast, is just not a good read. The bulk of the book consists of discussions of various neurotransmitters and brain areas and how they may be related to human social behaviour. Oxytocin, for example, may make us behave all trusting and kindly, as it's involved in maternal bonding. There's a long discussion of the neurochemistry of male sexual behaviour in voles.

It's not clear how this is relevant to ethics. Whether it's oxytocin that does it, or something else, and whether voles are a useful model of human behaviour or not, clearly sometimes we trust people and sometimes we don't. That's psychology. And biology can't yet explain it.

Churchland doesn't claim that the various biological concepts that she covers can fully explain anything, and she doesn't vouch that all of these findings are rock solid. Which is good, because they can't, and they're not. So why spend well over half of the book talking about them?

Churchland's big idea seems to be that human morality emerges out of our more general capacity for sociability. Hence all the stuff about oxytocin and "the social brain". OK. But I'd have said that's a given - there's obviously some relation between sociability and morality.

I think there is an interesting idea in here, albeit not very clearly expressed: that morality isn't a special function of the brain, but just one of the many forms our social cognition can take.

In other words, I think the claim is that ethics isn't just related to sociability, it is sociability. Even asocial animals care about their own welfare, in terms of pleasure and pain; social ones become social when they extend this caring to others; intelligent social animals including humans and maybe some primates also have a system for inferring the motivations and thoughts of others.

At the end of the book, Churchland stops reviewing neuroscience and starts talking about the implications for philosophy. This is the best section of the book, but it's too short.

Churchland makes the interesting point, for example, that when we are considering philosophical "ethical dilemmas", like the famous trolley problems, we may not be applying any kind of ethical "rules" as such. Rather, she thinks that our moral reasoning is pretty much a kind of pattern recognition based on previous experience - like all our other social reasoning.

Someone who'd just read a book about the horrors of Stalinism might tend to adopt an anti-consequentialist, every-life-is-sacred approach. Someone who'd just watched a movie in which the hero, reluctantly but rightly, decides to sacrifice one guy to save many others might do the opposite. The ethical "rules" might then be confabulated to cover it.

This is a nice idea. It's open to criticism, but it's a serious suggestion, and one that deserves a decent discussion. Sadly, there isn't one. If only there were more room in the book for this kind of stuff - but oxytocin covers so many pages.

Basically, the good parts of this book are not about the brain at all.

Reading Braintrust is like going on a date but then bumping into an annoying friend who insists on coming along for dinner. Jesus, The Brain, you want to say. I like you and all, but seriously, you are getting in the way right now.

Links: Other blog reviews.

Blind Spots & Braintrust

This is a review of two recently published books about ethics: Bazerman and Tenbrunsel's Blind Spots (not to be confused with this one), and Patricia Churchland's Braintrust.

The pair may come from the same publisher (Princeton), but they couldn't be more different.


Blind Spots is a good book. It tells a story in a clear and compelling fashion, which is what a book is for.

The story is that we often act unethically, not because we're faced with ethical questions and decide to pick the "bad" option, but because we fail to see that there is an ethical issue at all.

This is not the same as saying that 'the road to hell is paved with good intentions'. That old phrase warns against trying to be good and, as a result, causing evil, because your plans go wrong. Blind Spots is saying, even if all of your attempts to be good work out just fine, you might still cause evil despite that.

For example, you could be a good employee, who never calls in sick unnecessarily, kind to your friends and colleagues, and a generous charity donor.

Unfortunately, you're an accountant connected to Enron, and your work - ultimately - consists of defrauding innocent people. But of course, you don't think of it like that, because we don't tend to think about things "ultimately".

Which is hard to disagree with. At worst, you could say it's obvious, although I think it's still something we ought to be reminded of. That's not all there is to the book, though: it also discusses how this happens and suggests ways to avoid it within organizations.

For example, the authors give an example of how setting up rewards and punishments to "make people be ethical", can make them less so, by encouraging people to think of the issue as a personal trade-off between gain and loss, rather than an ethical dilemma - what the authors call "ethical fading".

A day-care centre was annoyed at the fact that some parents were picking up their children late. This was antisocial because it meant staff had to work late into the evening.

So they started charging parents a late fee. Not a big one, but enough to send people a message: this is wrong, don't do. But in fact what happened was that late pickups became more common.

Previously, many people were making an effort to be on time, as a matter of principle. Once the fees were in place, it stopped being an ethical issue and just became a financial trade-off: is it worth paying the fee to get an extra hour?

Of course, you could make the fees higher to get around this, but even then, you've caused ethical fading, and you'll be relying on the sanctions from that point on.


Braintrust, by contrast, is just not a good read. The bulk of the book consists of discussions of various neurotransmitters and brain areas and how they may be related to human social behaviour. Oxytocin, for example, may make us behave all trusting and kindly, as it's involved in maternal bonding. There's a long discussion of the neurochemistry of male sexual behaviour in voles.

It's not clear how this is relevant to ethics. Whether it's oxytocin that does it, or something else, and whether voles are a useful model of human behaviour or not, clearly sometimes we trust people and sometimes we don't. That's psychology. And biology can't yet explain it.

Churchland doesn't claim that the various biological concepts that she covers can fully explain anything, and she doesn't vouch that all of these findings are rock solid. Which is good, because they can't, and they're not. So why spend well over half of the book talking about them?

Churchland's big idea seems to be that human morality emerges out of our more general capacity for sociability. Hence all the stuff about oxytocin and "the social brain". OK. But I'd have said that's a given - there's obviously some relation between sociability and morality.

I think there is an interesting idea in here, albeit not very clearly expressed, namely that morality isn't a special function of the brain, but just one of the many forms our social cognition can take.

In other words, I think the claim is that ethics isn't just related to sociability, it is sociability. Even asocial animals care about their own welfare, in terms of pleasure and pain; social ones become social when they extend this caring to others; intelligent social animals, including humans and maybe some other primates, also have a system for inferring the motivations and thoughts of others.

At the end of the book, Churchland stops reviewing neuroscience, and starts talking about the implications for philosophy. This is the best section of the book, but it's too short.

Churchland makes the interesting point, for example, that when we are considering philosophical "ethical dilemmas", like the famous trolley problems, we may not be applying any kind of ethical "rules" as such. Rather, she thinks that our moral reasoning is pretty much a kind of pattern recognition based on previous experience - like all our other social reasoning.

Someone who'd just read a book about the horrors of Stalinism might tend to adopt an anti-consequentialist, every-life-is-sacred approach. Whereas someone who'd just watched a movie in which the hero, reluctantly but rightly, decides to sacrifice one guy to save many other people might do the opposite. The ethical "rules" might then be confabulated to cover it.

This is a nice idea. It's open to criticism, but it's a serious suggestion, and one that deserves a decent discussion. Sadly, there isn't one. If only there were more room in the book for this kind of stuff - but oxytocin covers so many pages.

Basically, the good parts of this book are not about the brain at all.

Reading Braintrust is like going on a date but then bumping into an annoying friend who insists on coming along for dinner. Jesus, The Brain, you want to say. I like you and all, but seriously, you are getting in the way right now.

Links: Other blog reviews.

Thursday, June 23, 2011

My Grandma: Neurophilosopher

John Galliano is the British designer who got videoed being a bit unpleasant and ended up in court on racism charges.


His defence is that he was drunk and/or high. Which from the video he fairly obviously was. But here's an interesting quote from his lawyer:
Some things may have come out of his mouth that didn’t come from his brain.
So where did they come from, then... hmm. Don't answer that.

I doubt that the lawyer was actually trying to say that Galliano's mouth was moving of its own accord or under the control of some other organ. Rather she was expressing the idea that "my brain" in this context doesn't mean, literally, the whole of the grey blob of neurons in my skull.

Rather "my brain" means, roughly, "that part of my brain responsible for rational thought".

My grandmother once talked about a friend who'd had a stroke. She said, as far as I can remember, "Sometimes the stroke means you can't talk or walk, which is bad enough, but sometimes it gets into your brain and that can be really nasty."

Of course she knew that all strokes happen in the brain. What she was saying was that some strokes, but not all, affect the part of the brain responsible for "me" as a person - thoughts, emotions, and so forth.

So, this is all anecdotal evidence, but there seems to be a popular, common-sense temptation to believe in the "me part" of the brain, a tendency which neuroscientists are not immune to and which can lead to dubious conclusions.

I'd love to see someone do a proper study of what non-neuroscientists, ideally people with little exposure to neuroscience like children, think about the brain. A bit like this, but really in depth. I suspect that you'd find that many of the ideas underpinning today's neuroscience had their origins in pre-scientific, common sense intuitions.

We neuroscientists are human, and we have neuro-intuitions too. But if neuroscience has taught us anything, it's not to trust those.


Monday, June 6, 2011

The Unhelpful Brain

A reader pointed me to this study from a few months back which used fMRI to look at the effects of "Coaching With Compassion".


Unfortunately, the authors say at the outset that their paper is "Not to be quoted or reproduced without the expressed permission of one of the authors prior to publication" so I'm not going to... oh, hang on. Have I just broken the rules by quoting that? I hope not. But fair enough.

The paper describes an fMRI study of brain responses to being shown a variety of statements. The participants were students and the statements were about the university experience. They were either positive, negative, or neutral.

The authors found that the human brain responds differently to different kinds of stuff.

That's it. Well that ought to be it. The paper discusses things like Coaching With Compassion, The Ideal Self, and Intentional Change Theory, which are awesome no doubt, but they're not what this study is about.

Here's why. Before getting scanned, the students got two sessions of academic and career coaching. One session was focussed on hopes and goals for the future, dreams, and what they wanted to achieve in their studies. Yes you can! The other session, with a different coach, was all about challenges, fears, and disappointments. Maybe you can't.

The positive and the negative statements in the fMRI bit were based on these coaching interviews. The coach who did the nice bit said the nice statements (via recorded video clips) and vice versa. The positive and negative coaches were randomly assigned to each participant to avoid coach effects, and so on, which is good, the fMRI methodology was fine, and the data analysis looks good.

Who'd have thought it? Different parts of the brain were activated by positive, negative and neutral statements, and these were roughly what you'd expect from previous studies.

The reason this says nothing about coaching is that while participants got coaching beforehand, they all got the same coaching. These statements would have been positive or negative anyway - coaching or no. We don't know what, if any, effect coaching had.

Had half of them been randomized to get coached, and the other half assigned to a "placebo" coaching, say chatting about sports or the weather, then it would tell you something about coaching.

But that wouldn't mean it told you anything interesting about coaching. And this is the deeper problem with studies like this, of which this paper is merely one example.

Suppose that you found that positive, Compassionate Coaching made the brain respond more strongly to positive statements, or changed brain activity during decision-making, or whatever. That would be a result, and it might be really strong and statistically very significant, but for the life of me I can't see why you'd care, if you were interested in coaching.

Of course coaching affects the brain, and not just as a side effect: if it works, it'll work via changing the brain, in some way. But everything that changes behaviour changes the brain. That's what the brain does. How it does so is a detail of interest only to neuroscientists.

If you're a coach, or want to get coaching, or want to know whether coaching is effective, then you should look at coaching. The brain will be there, in the background, activating and deactivating happily, but it's not going to help you.

These kinds of studies happen, I think, because there's an inherent allure to seeing "the neural basis of" thoughts and feelings. It seems paradoxical and disturbing: you can't see thoughts! They're made of pixie dust and magic!

In the same way, quantum physics is universally agreed to be "weird". But it's always there, everywhere in the universe, and always has been. We're the weird ones, with our strange conviction that the most everyday thing in the world is really bizarre. God must find quantum physics incredibly boring.

Brains are not quite as commonplace as quarks, but they are at work whenever anyone, or most animals for that matter, does anything. Of course: how else would behaviour happen? We find this odd and fascinating. As a neuroscientist I'm no exception, the allure never "wears off". But that's just us.

Even people trying to be neuro-skeptical often fall into this trap. Here's Steven Rose in a book review:

The weird locution – “it was not me; it was my brain that made me do it” – is increasingly used by neuroscientists who are sure that human thought and action are reducible to brain processes, and by legal defence teams pleading diminished responsibility for their clients. The trouble is that this way of speaking – and thinking, if such a term remains permissible – leaves unresolved who is the “me” that the brain drives.

Well, human thought and action are reducible to brain processes. To deny this or (as is more common) imply that it's unhelpful, but not explain why, gets us nowhere.

The point is that all behaviour is brain activity, and that's why saying "It's brain activity" tells us nothing about any given behaviour. It’s an empty truism, like saying that a fire was started by something hot. Well, duh.


Tuesday, May 3, 2011

Psychiatry and Phrenology

The notorious John P. "Most Published Research Findings Are False" Ioannidis has turned his baleful statistical gaze upon the literature on brain volume abnormalities in psychiatric disorders.


Reports of regional volume differences in the brains of people with mental illness compared to healthy people have appeared in increasing numbers in recent years. Such studies have given plenty of positive results. People with depression have smaller hippocampi. The amygdala is bigger in people with autism. And so on.

Last month, Ioannidis took a comprehensive look at this literature and he argues that it suffers from a fairly serious case of "excess significance bias" - essentially, that scientists are somehow biased towards reporting differences between patients and controls, and are not telling people about the times when there wasn't a difference. This could be because of publication bias, p-value fishing or other scientific sins.

Scientists tend to call a difference between two groups significant if it has a p value of less than 0.05. This means that if there were no real difference, just some random noise, this result would be less than 5% likely to occur.

However, there are many ways you could end up with a low (i.e. good) p value. You would get a significant result even if the true difference was very small, if you did a big enough study: even a tiny difference will be detected if you study enough people. On the other hand, when the true difference is huge, you might only need a small study to get the same p value.
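To make that concrete, here's a toy simulation (my own sketch, not anything from the paper): the same 5% threshold is crossed routinely both by a tiny effect in a huge study and by a huge effect in a tiny one.

```python
import math
import random

def fraction_significant(n, true_diff, trials=400, seed=0):
    """Monte Carlo: fraction of simulated two-group studies (n subjects
    per group, normal data with sd=1, true mean difference `true_diff`)
    that cross p < .05, using a z-test with known sd."""
    rng = random.Random(seed)
    crit = 1.96  # two-sided 5% critical value
    hits = 0
    for _ in range(trials):
        mean_a = sum(rng.gauss(0, 1) for _ in range(n)) / n
        mean_b = sum(rng.gauss(true_diff, 1) for _ in range(n)) / n
        z = (mean_b - mean_a) / math.sqrt(2 / n)
        hits += abs(z) > crit
    return hits / trials

# A tiny effect (d = 0.1) is detected most of the time in a big study...
small_effect_big_n = fraction_significant(n=2000, true_diff=0.1)
# ...while a huge effect (d = 1.5) needs only a handful of subjects.
big_effect_small_n = fraction_significant(n=15, true_diff=1.5)
print(small_effect_big_n, big_effect_small_n)
```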

A power calculation is a way of specifying how likely a given study would be to detect a difference of a given size, based on the size of the study. These are usually used ahead of time to work out how big your upcoming study needs to be, assuming you can guess roughly how big the real effect you're interested in is going to be.
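In code, a back-of-the-envelope version of such a calculation looks like this (a normal-approximation sketch of the standard two-sample formula, not the exact method any particular study used):

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2)))

def power_two_sample(effect_size, n_per_group, alpha_z=1.959964):
    """Approximate power of a two-sample comparison to detect a
    standardised mean difference (Cohen's d) at two-sided alpha=0.05,
    via the normal approximation."""
    noncentrality = effect_size * math.sqrt(n_per_group / 2)
    return phi(noncentrality - alpha_z)

def n_needed(effect_size, target_power=0.8):
    """Smallest n per group that reaches the target power."""
    n = 2
    while power_two_sample(effect_size, n) < target_power:
        n += 1
    return n

# Under this approximation, a "medium" effect (d = 0.5) needs roughly
# 60-odd subjects per group for 80% power:
print(n_needed(0.5))
```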

Ioannidis turned this on its head and asked: assuming that the true difference in brain volume is what the average of all the published studies says it is, how many of the published studies were big enough that they ought to have successfully detected it?

He found 41 separate meta-analyses for different brain regions in various disorders. These were published in 8 papers - because each paper reported on multiple regions. He only looked at meta-analyses published in the past 4 years, but these analyses will themselves have included older work. This means that this paper is a kind of meta-meta-analysis. He didn't directly consider the raw brain scans at all.

The meta-analyses found many significant volume differences - but in 29 of those 41, there was an excess of significant papers. In other words, the papers were too small to have a good chance to detect the effect that they themselves found - suggesting that something funny was going on. Although, strangely, in 10/41 there were too few, and only in 2 were there the "right" number.
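The logic of that excess-significance check can be sketched like this (my simplified version, which collapses each study's power to an average and uses a plain binomial tail, rather than Ioannidis's exact test):

```python
from math import comb

def excess_significance(powers, n_significant):
    """Compare the observed number of 'significant' studies with the
    number expected given each study's power to detect the
    meta-analytic effect. Returns the expected count and a one-sided
    binomial tail probability computed at the average power."""
    n = len(powers)
    expected = sum(powers)
    p_bar = expected / n
    # P(at least n_significant successes out of n, each w.p. p_bar)
    tail = sum(comb(n, k) * p_bar ** k * (1 - p_bar) ** (n - k)
               for k in range(n_significant, n + 1))
    return expected, tail

# Ten studies, each with only 30% power, that ALL reported significance:
expected, p = excess_significance([0.3] * 10, n_significant=10)
print(expected, p)  # only ~3 expected; all 10 being significant is telling
```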


For what it's worth, studies on schizophrenia and on relatives-of-people-with-schizophrenia showed the least evidence of this problem, while autism was terrible, with 4 times as many significant papers as expected by chance. I'm not sure this is worth much, though. We don't know if this tells us more about schizophrenia vs autism, or more about the researchers that study them.

Anyway, this is an important study, and the inverse power calculation approach is certainly a useful one. It's not new, but it's not used as widely as it ought to be. It does make the assumption that the meta-analyses are "right" about the effect size, and then paradoxically concludes that they are biased. However, this means that the true bias is probably even bigger than this suggests (because if the analyses are biased, the true effect size is smaller than assumed, and the studies should have been even less likely to find it).

Unfortunately, this doesn't tell us which of the studies are wrong, so it's not directly useful for people researching mental illness. It tells us that there is something wrong with scientific publishing, however. Truth be told, I suspect that a similar picture would emerge if you did this kind of thing in many other fields of science. The only real solution, in my book, would be to require the pre-registration of scientific studies. Ioannidis actually advocates this at the end of the paper.

Ioannidis JP (2011). Excess significance bias in the literature on brain volume abnormalities. Archives of General Psychiatry. PMID: 21464342


Thursday, March 31, 2011

Women Are Better Connected... Neurally

The search for differences between the brains of men and women has a long and rather confusing history. Any structural differences are small, and their significance is controversial. The one rock-solid finding is that men's brains are slightly bigger on average. Then again, men are slightly bigger on average in general.

A new paper just out from Tomasi and Volkow (of cell-phones-affect-brain fame) offers, on the face of it, extremely strong evidence for a gender difference in the brain, not in structure but in function: Gender Differences in Brain Functional Connectivity Density.

Here's the headline pic:
They used resting-state "functional connectivity" (though see here for why this term may be misleading) fMRI in men and women. This essentially means that they put people in the MRI scanner, told them to just lie there and relax, and measured the degree to which activity in different parts of the brain was correlated to activity in every other part. They had a whopping 561 brains in total, though they didn't scan everyone themselves: they downloaded the data from here.

As you can see, the results were highly consistent around the world. In both men and women, the main "connectivity hub" was an area called the ventral precuneus. This is interesting in itself, although not a new finding, as the precuneus has long been known to be involved in resting-state networks. However, the degree of connectivity was higher in women than in men: 14% higher, in fact.

The method they used, which they've dubbed "Local Functional Connectivity Density Mapping", is apparently a fast way of calculating the degree to which each part of the brain is functionally related to each other part.

You could do this by taking every single voxel and correlating it with every other voxel, for every single person, but this would take forever unless you had a supercomputer. LFCDM is, they say, a short-cut. I'm not really qualified to judge whether it's a valid one, but it looks solid.
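For illustration, here's what the brute-force voxel-by-voxel version looks like on toy data (a sketch of the general idea, not the authors' Local Functional Connectivity Density Mapping algorithm; the function name and correlation threshold are my inventions):

```python
import numpy as np

def connectivity_density(timeseries, threshold=0.6):
    """For each voxel, count how many other voxels its time course
    correlates with above `threshold` - the brute-force version of
    'correlate every voxel with every other voxel'.
    timeseries: array of shape (n_voxels, n_timepoints)."""
    r = np.corrcoef(timeseries)   # n_voxels x n_voxels correlation matrix
    np.fill_diagonal(r, 0.0)      # ignore each voxel's self-correlation
    return (np.abs(r) > threshold).sum(axis=1)

# Toy data: three voxels sharing one underlying signal, plus a loner.
rng = np.random.default_rng(0)
shared = rng.standard_normal(200)
data = np.vstack([
    shared + 0.3 * rng.standard_normal(200),
    shared + 0.3 * rng.standard_normal(200),
    shared + 0.3 * rng.standard_normal(200),
    rng.standard_normal(200),     # independent voxel
])
print(connectivity_density(data))
```

On a real scan this is exactly the computation that gets prohibitively slow, since n_voxels runs into the hundreds of thousands.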

Also, men's brains were on average bigger, but interestingly they show that women had, relative to brain size, more grey matter than men. Here's the data (I'm not sure about the color scheme...)

So what does the functional connectivity finding mean? It could mean anything, or nothing. You could interpret the highly interconnected female brain as an explanation for why women are more holistic, better at multi-tasking, and more in touch with their emotions than men with their fragmented faculties. Or whatever.

Or you could say that that's sexist rubbish, and all this means is that men and women on average are thinking about different things when they lie in MRI scanners. We already know that resting-state functional connectivity centred on the precuneus is suppressed whenever your attention is directed towards an external "task".

That's not a fault of this research, which is excellent as far as it goes and certainly raises lots of interesting questions about functional connectivity. But we don't know what it means quite yet.

Tomasi D, & Volkow ND (2011). Gender differences in brain functional connectivity density. Human Brain Mapping. PMID: 21425398