Thursday, January 8, 2009

The British Media's Favorite Diagnoses

I was bored again last night, so time for some more graphs.
This shows the total number of LexisNexis UK News Search hits in the "UK Broadsheets" category from 1st January of each year to 1st January of the next year, for four terms. A hit represents a broadsheet newspaper article containing the specified string(s). (The article might not be "about" that condition; e.g., a report about a crime committed by someone with schizophrenia would be a hit for "schizophrenia".)

The second graph shows the same data for schizophrenia, bipolar/manic depression and autism/Asperger's, but as the ratio of hits to the number of hits for "Epilepsy" in the same year. I did this because hits for all conditions increase over time, which probably reflects the fact that newspapers are getting longer, and maybe that they're getting more interested in health (speculation). Assuming that coverage of epilepsy is relatively immune to "fashion", which seems plausible, this allows trends in the "popularity" of the other three conditions to be seen more clearly.
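The normalization is just a per-year ratio against the epilepsy baseline. A minimal sketch, with made-up hit counts (the real LexisNexis figures are not reproduced here):

```python
# Hypothetical yearly hit counts - illustrative numbers only, not real data.
hits = {
    "schizophrenia": {2000: 450, 2001: 470},
    "bipolar OR manic depression": {2000: 150, 2001: 210},
    "autism OR asperger's": {2000: 300, 2001: 520},
    "epilepsy": {2000: 180, 2001: 190},
}

def relative_popularity(term, year, baseline="epilepsy"):
    """Hits for a term expressed as a ratio of the same year's baseline hits."""
    return hits[term][year] / hits[baseline][year]

print(round(relative_popularity("schizophrenia", 2000), 2))  # prints 2.5
```

Dividing each year's count by that year's epilepsy count cancels out any across-the-board growth in newspaper length, leaving only the relative "fashion" trend.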

What's the story? Firstly, the popularity of schizophrenia has remained fairly stable relative to epilepsy since 1985; this is what you'd expect, since rates of schizophrenia haven't changed much over that time. I was a little surprised that the recent cannabis-causes-schizophrenia theme, which some British papers have been pushing quite hard, hasn't had much effect. Hmm.

Bipolar disorder has become much more popular since about 2000; it's now close to being as popular as schizophrenia. Given that the true rates of these two disorders have probably not changed for 30 years, this points to some kind of cultural, as opposed to medical, trend; bipolar is almost certainly more diagnosed and less stigmatized today than in the past - indeed in some circles it's more trendy than just plain depression. (Note that "bipolar" will also give hits for articles using it in the political sense ("bipolar world"), but this is pretty uncommon.)

As for autism, coverage spiked in 2001-2002, the height of the British MMR-causes-autism scare. So no surprise there, but what did surprise me is that the popularity of autism has continued to increase since, with no sign of having peaked yet. Despite the fact that even the most stubborn armchair developmental neurologists have now largely stopped using the British newspapers to argue that vaccines cause autism, autism still gets more mentions than ever before.

So British newspaper readers can expect to hear plenty more about autism in 2009. Just remember that if you want in-depth discussions of this topic you might be better off reading LeftbrainRightbrain. That the newspapers are devoting increasing space to serious illnesses such as autism and bipolar disorder is in many ways a good thing, but quantity isn't quality, as MMR and the media's deeply uncritical coverage of the Kirsch et al. (2008) antidepressant meta-analysis showed (more on that soon...)

Feel free to draw more conclusions from these coloured lines, as the mood takes you.

P.S. I would have liked to do "depression", but that word has many meanings, e.g. in economics. "Clinical depression", on the other hand, seems to me increasingly old-fashioned; people just call it depression. Any ideas as to the best thing to search for?

[BPSDB]

Links You Might Like #2, and a note on Powerwatch

The Chronicle of Higher Education has a must-read piece about the integration of sociology and behavioral genetics. In it we learn that the American Journal of Sociology has just run a special issue devoted to that theme - wow. Looks fascinating (I haven't read it yet). As John Hawks notes, however, the traditional feuding between biological and social theorists of behaviour doesn't seem to be over yet...

There was an interesting discussion of the psychology of philosophy over at The Garden of Forking Paths.

Finally, Powerwatch UK have, very decently, included a link to my December 21st criticisms of a paper about leukemia and power lines, in their coverage of that study.

Tuesday, January 6, 2009

Critiquing a Classic: "The Seductive Allure of Neuroscience Explanations"


One of the most blogged-about psychology papers of 2008 was Weisberg et al.'s The Seductive Allure of Neuroscience Explanations.

As most of you probably already know, Weisberg et al. set out to test whether adding an impressive-sounding, but completely irrelevant, sentence about neuroscience to explanations for common aspects of human behaviour made people more likely to accept those explanations as good ones. As they noted in their Introduction:
Although it is hardly mysterious that members of the public should find psychological research fascinating, this fascination seems particularly acute for findings that were obtained using a neuropsychological measure. Indeed, one can hardly open a newspaper’s science section without seeing a report on a neuroscience discovery or on a new application of neuroscience findings to economics, politics, or law. Research on nonneural cognitive psychology does not seem to pique the public’s interest in the same way, even though the two fields are concerned with similar questions.
They found that the pointless neuroscience made people rate bad psychological "explanations" as being better. The bad psychological explanations were simply descriptions of the phenomena in need of explanation (something like "People like dogs because they have a preference for domestic canines"). Without the neuroscience, people could tell that the bad explanations were bad, compared to other, good explanations. The neuroscience blinded them to this. This confusion was equally present in "normal" volunteers and in cognitive neuroscience students, although cognitive neuroscience experts (PhDs and professors) seemed to be immune.

But is this really true?

This kind of research - which claims to provide hard, scientific evidence for the existence of a commonly believed-in psychological phenomenon, usually some annoyingly irrational human quirk - is dangerous; it should always be read with extra care. The danger is that the results can seem so obviously true ("Well of course!") and so important ("How many times have I complained about this?") that the methodological strengths and weaknesses of the study go unnoticed. People see a peer-reviewed paper which seemingly confirms the existence of one of their pet peeves, and they believe it - becoming even more peeved in the process.(*)

In this case, the peeve is obvious: the popular media certainly seem to be inordinately keen on neuroimaging studies, and often seem to throw in pictures of brain scans and references to brain regions just to make their stories seem more exciting. The number of people who confuse neural localization with explanation is depressing. Those not involved in cognitive neuroscience must find this rather frustrating. Even neuroimagers roll their eyes at it (although some may be secretly glad of it!)

So Weisberg et al. struck a chord with most readers, including most of the potentially skeptical ones - which is exactly why it needs to be very carefully critiqued. Personally, having done so, I think that it's an excellent paper, but the data presented only allow fairly modest conclusions to be drawn, so far. The authors have not shown that neuroscience, specifically, is seductive or alluring.

Most fundamentally, the explanations including the dodgy neuroscience differed from the non-neurosciencey explanations in more than just neuroscience. Most obviously, they were longer, which may have made them seem "better" to the untrained, or bored, eye; indeed the authors themselves cite a paper, Kikas (2003), in which the length of explanations altered how people perceived them. Secondly, the explanations with added neuroscience were more "complex" - they included two separate "explanations", a psychological one and a neuroscience one. This complexity, rather than the presence of neuroscience per se, might have contributed to their impressiveness.

Perhaps the authors should have used three conditions - psychology, "double psychology" (with additional psychological explanations or technical terminology), and neuroscience (with additional neuroscience). As it stands, all the authors have strictly shown is that longer, more jargon-filled explanations are rated as better - which is an interesting finding, but is not necessarily specific to neuroscience.

In their discussion (and to their credit) the authors fully acknowledge these points (emphasis mine):
Other kinds of information besides neuroscience could have similar effects. We focused the current experiments on neuroscience because it provides a particularly fertile testing ground, due to its current stature both in psychological research and in the popular press. However, we believe that our results are not necessarily limited to neuroscience or even to psychology. Rather, people may be responding to some more general property of the neuroscience information that encouraged them to find the explanations in the With Neuroscience condition more satisfying.
But this is rather a large caveat. If all the authors have shown is that people can be "Blinded with Science" (yes...like the song) in a non-specific manner, that has little to do with neuroscience. The authors go on to discuss various interesting, and plausible, theories about what might make seemingly "scientific" explanations seductive, and why neuroscience might be especially prone to this - but they are, as they acknowledge, just speculations. At this stage, we don't know, and we don't know how important this effect is in the real world, when people are reading newspapers and looking at pictures of brain scans.

Secondly, the group differences - between the "normal people", the neuroscience students, and the neuroscience experts - are hard to interpret. There were 81 normal people, mean age 20, but we don't know who they were or how they were recruited - were they students, internet users, the authors' friends? (10 of them didn't give their age and for 2 gender was "unreported" -?) We don't know whether their level of education, their interests, or values were different from the cognitive neuroscience students in the second group (mean age 20), who may likewise have been different in terms of education, intelligence and beliefs from the expert neuroscientists in the third group (mean age 27). Maybe such personal factors, rather than neuroscience knowledge, explained the group similarities and differences?

Finally, the effects seen in this paper were, on the face of it, small - people rated the explanations on a 7 point scale from -3 (bad) to +3 (excellent), but the mean scores were all between -1 and +1. The dodgy neuroscience added about 1 point on a 7 point scale of satisfactoriness. Is that "a lot" or "a little"? It's impossible to say.

All of that said - this is still a great paper, and the point of this post is not to criticize or "debunk" Weisberg et al.'s excellent work. If you haven't read their paper, you should read it, in full, right now, and I'm looking forward to further stuff from the same group. What I'm trying to do is to warn against another kind of seductive allure, probably the oldest and most dangerous of all - the allure of that which confirms what we already thought we knew.

(*) Or do they? Or is this just one of my pet peeves? Maybe I need to do an experiment about the allure of psychology papers confirming the allure of psychologists' pet peeves...


ResearchBlogging.org: Deena Skolnick Weisberg, Frank C. Keil, Joshua Goodstein, Elizabeth Rawson, & Jeremy R. Gray (2008). The Seductive Allure of Neuroscience Explanations. Journal of Cognitive Neuroscience, 20(3), 470-477. DOI: 10.1162/jocn.2008.20040

Sunday, January 4, 2009

Lessons from the Video Game Brain

See also Lessons from the Placebo Gene. Also, if you like this kind of thing, see my other fMRI-curmudgeonry (1, 2).

The life of a neurocurmudgeon is a hard one, but once in a while, fate smiles upon us. This article in the Daily Telegraph neatly embodies several of the mistakes that people make about the brain, all in one bite-size portion.

The article is about a recent fMRI study published in the Journal of Psychiatric Research. 22 healthy Stanford student volunteers (half of them male) played a "video game" while being scanned. The game wasn't an actual game like Left 4 Dead(*), but rather a kind of very primitive cross between Pong and Risk, designed specifically for the purposes of the experiment:
Balls appeared on one-half of the screen from the side at 40 pixel/s, and 10 balls were constantly on the screen at any given time. One’s own space was defined as the space behind the wall and opposite side to where the balls appeared. The ball disappeared whenever clicked by the subject. Anytime a ball hit the wall before it could be clicked, the ball was removed and the wall moved at 20 pixel/s, making the space narrower. Anytime all the balls were at least 100 pixels apart from the wall ... the wall moved such that the space became wider.
Essentially, they had to click on the balls to stop them from moving a line. This may not sound like much fun, but the authors' justification for using this task was that it allowed them to have a control condition in which the instructions were the same (click on the balls) but there was no "success" or "failure", because the line defining the "territory" was always fixed. That's actually a pretty good idea. The students did the task 40 times during the scan, for 24s at a time, alternating between the two conditions: "no success" (line fixed) and "game with success/failure" (line moves).
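The quoted rules amount to a simple wall-update function. A rough sketch, in Python: the 40 and 20 pixel/s speeds and the 100-pixel threshold come from the quoted Methods, but the function name, coordinate conventions, and per-step update are my own assumptions, not the authors' code.

```python
BALL_SPEED = 40      # px/s; balls drift toward the wall (per the Methods)
WALL_SPEED = 20      # px/s; speed the wall moves on a hit or a clear screen
SAFE_DISTANCE = 100  # px; if every ball is this far from the wall, territory grows

def update_wall(wall_x, ball_xs, dt):
    """One update step for the wall position, per the quoted rules.
    Territory is the space left of the wall; balls approach from the right.
    A ball reaching the wall narrows the territory; all balls being at
    least SAFE_DISTANCE away widens it; otherwise the wall stays put."""
    if any(x <= wall_x for x in ball_xs):
        return wall_x - WALL_SPEED * dt   # a ball hit the wall: territory narrows
    if all(x - wall_x >= SAFE_DISTANCE for x in ball_xs):
        return wall_x + WALL_SPEED * dt   # all balls far away: territory widens
    return wall_x                         # otherwise the wall is static
```

The control condition then simply skips `update_wall`, so the player performs identical clicking with no possibility of gain or loss.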

The results: While men & women were equally good at clicking balls, men were more successful at gaining "territory" than the women. In both genders, doing the task vs. just resting in the scanner activated various visual and motor-related areas - no surprise. Playing the game vs. doing the control task in which there was no success or failure produced more activation in a handful of areas but only "at a more liberal threshold" i.e. this activation was not statistically reliable. A region-of-interest analysis found activation in the left nucleus accumbens and right orbitofrontal cortex, which are "reward-related" areas. In males, the game-specific activation was greater than in females in the right nucleus accumbens, the orbitofrontal cortex, and the right amygdala.

These areas are indeed "neural circuitries involved in reward and addiction" as the authors put it, but they're also activated whenever you experience anything pleasant or enjoyable, such as drinking water when you're thirsty. Water is not known to be addictive. So whether this study is relevant to video-game "addiction" is anyone's guess. As far as I can tell, all it shows is that men are more interested in simple, repetitive, abstract video games. But that's hardly news: in 2007 there was an International Pac-Man Championship with 30,000 entrants; the top 10 competitors were all male. (If anything in that last sentence surprises you, you haven't spent enough time on the internet.)

Anyway, that's the study. This is what the Telegraph made of it:
Playing on computer consoles activates parts of the male brain which are linked to rewarding feelings and addiction, scans have shown. The more opponents they vanquish and points they score, the more stimulated this region becomes. In contrast, these parts of women's brains are much less likely to be triggered by sessions on the Sony PlayStation, Nintendo Wii or Xbox.
Well, not quite. No opponents were vanquished and no Wiis were played. But so far this is just another fMRI study that attracted the attention of a journalist who knew how to spin a good story. Readers of Neuroskeptic will know this is not uncommon. However, it doesn't end there. Here's the really instructive bit:
Professor Allan Reiss of the Centre for Interdisciplinary Brain Sciences Research at Stanford University, California, who led the research, said that women understood computer games just as well as men but did not have the same neurological drive to win.
"These gender differences may help explain why males are more attracted to, and more likely to become 'hooked' on video games than females," he said.
"I think it's fair to say that males tend to be more intrinsically territorial. It doesn't take a genius to figure out who historically are the conquerors and tyrants of our species – they're the males.
"Most of the computer games that are really popular with males are territory and aggression-type games."
Now this is a theory - men like video games because we're intrinsically drawn to competition, conquest and territory-grabbing. This may or may not be true; personally, in the light of what I know of history and anthropology, I suspect it is, but even if you disagree, you can see that this is an important theory: it makes a big difference whether it's true or not.

However, the fMRI results have nothing to do with this theory. They neither support nor refute it, and nor could they; this experiment is essentially irrelevant to the theory in question. Prof. Allan Reiss is simply stating his personal opinions about human nature - however intelligent & informed these opinions may be. (Just to be clear, it's quite possible that Reiss didn't expect to be quoted in the way he was; he may have, not unreasonably, thought that he was just giving his informal opinion.) The Telegraph's sub-headline?
Men's passion for computer games stems from a deep-rooted urge to conquer, according to research
There are some lessons here.

1. If you want to know about something, study it.

If you want to learn about human behaviour, study human behaviour. Stanley Milgram discovered important things about behaviour; if he had never even heard about the brain, it wouldn't have stopped him from doing that.

Neuroscience can tell us about how behaviour happens. We get thirsty when we haven't drunk water for a while. Neuroscience, and only neuroscience, will tell you how. Some people get depressed or manic. One day, I hope, neuroscience will tell us the complete story of how - maybe mania will turn out to be caused by hyper-stimulation of a certain dopamine receptor - and we'll be able to stop it happening with some pill with a 100% success rate.

However, neuroscience can't tell you what human behaviour is: it cannot describe behaviour, it can only explain it. People knew about thirst and depression and mania long before they knew anything about the brain. More importantly, and more subtly, neuroscience can only explain behaviour in the "how" sense; only rarely can it tell you why behaviour is the way that it is.

If someone is behaving in a certain way because of brain damage or disease, that's one of these rare cases. In that case "damage to area X caused by disease Y" is "why". But in most cases, it's not. To say that men like video games because their reward systems are more sensitive to video games is not a "why" explanation. It's a "how" explanation, and it leaves completely open the question of why the male brain is more sensitive to video games. The answer might be "innate biological differences due to evolution", or it might be "sexist upbringing", or "paternalistic culture", or anything else.

(This is often overlooked in discussions about psychiatry. Some people object to the idea that clinical depression is a neuro-chemical state, pointing out that depression can be caused by stress, rejection and other events in life. This is confused; there is no reason why stress or rejection could not cause a state of low serotonin. By extension, saying that someone has "low serotonin" always leaves open the question of why.)

2. Brains are people too

This leads on to a more subtle point. Some people understand the difference between how and why explanations, but feel that if the "how" is something to do with the brain, the "why" must be to do with the brain too. They look at brain scans showing that people behave in a certain way because their brain is a certain way (e.g. men like games because their reward system is more activated by games), and they think that there must be a "biological" explanation for why this is.

There might be, but there might not be. Brains are alive; they see and hear; they think; they talk; they feel. Your brain does everything you do, because you are "your" brain. The astonishing thing about brains is that they are both material, biological objects, and conscious, living people, at the same time.

Your brain is not your liver, which is only affected by chemical and biological influences, like hormones, toxins, and bacteria. Your liver doesn't care whether you're a Christian or a Muslim, it cares about whether you drink alcohol. Your brain does care about your religion because some pattern of connections in your brain gives you the religion that you have.

Brain scans, by confronting us with the biological, material nature of the brain, make us look for biological, material why explanations. We forget that the brain might be the way it is because of cultural or historical or psychological or sociological or economic factors, because we forget that brains are people. We tend to think of people as being something beyond and above their brains. Ironically, it's this primitive dualism that leads to the most crude materialistic explanations for human behaviour.

3. Beware neuro-fetishists

There's a doctoral thesis in "Science Studies" to be written about how it came to happen, but that we fetishize the brain is obvious. For much of the 20th century, psychology was seen in the same way. Freud joined Nietzsche, Marx and Heidegger in the ranks of Germanic names that literary theorists and lefty intellectuals loved to drop.

Then the bottom fell out of psychoanalysis, Prozac and fMRI arrived and the Decade of the Brain was upon us. Today, neuroscience is the new psychology - or perhaps psychology is becoming a branch of neuroscience. (If I asked you to depict psychology visually, you'd probably draw a brain - if you do a Google image search for "psychology", 10 out of the 21 front page hits depict either a brain or a head; this might not surprise you but it would have seemed odd 50 years ago.) There's a presumption that neuroscience is key to answering both how and why questions about the mind.

Neuroscience is now hot, but what people are mostly interested in are psychological and philosophical questions. People care about The Big Questions like -

"Is there life after death? Do we have free will? Is human nature fixed? Are men smarter/more aggressive/more promiscuous/better drivers than women? Why do people become criminals/geniuses/mad?"

These are good questions - but neuroscience has little to say about them, because they're not questions about the brain. They're questions for philosophers, or geneticists, or psychologists. No brain scan is going to tell you whether men are better drivers than women. It might tell you something about the processes by which we make decisions while driving, but only a neuroscientist is likely to find that interesting.

P.S. It turns out that people were saying similar things about this research back in February. A blogger who writes about research on video games (neat) wrote about it way back then. So why did the Telegraph decide to resurrect the story as if it were new? That's just another one of life's mysteries.

[BPSDB]

(*) Which is so awesome.

ResearchBlogging.org: F. Hoeft, C. Watson, S. Kesler, K. Bettinger, & A. Reiss (2008). Gender differences in the mesocorticolimbic system during computer game-play. Journal of Psychiatric Research, 42(4), 253-258. DOI: 10.1016/j.jpsychires.2007.11.010