Sunday, November 2, 2008

Registration: Not Just For Clinical Trials

In a previous post, I said that I'd write about how to improve the quality of scientific research by ending the scrabbling for "positive results" at the cost of accuracy. So here we go. This is a long post, so if you'd prefer the short version, the answer is that we ought to get scientists in many fields to pre-register their research - to go on record and declare what they are looking for before they start looking for anything.

This is not my idea. Clinical trial registration is finally becoming a reality. Several organizations now offer registration services - such as Current Controlled Trials. Their site is well worth a click, if only to see the future of medical science unfolding before your eyes in the form of a list of recently registered protocols. Each of these protocols, remember, will eventually become a published scientific paper. If it doesn't, everyone will know that either the trial was never finished, or worse, it was finished and the results were never published. Without registration, a trial could be run and never published without anyone knowing what had happened - making it very easy for "inconvenient" data to never see the light of day. This is publication bias. We know it happens. Trial registration makes it all but impossible. It's important.

In fact, if someone were designing the system of clinical trials from scratch, they would, almost certainly, make registration an integral step right from the start. Unfortunately, no-one intelligently designed clinical trials. They evolved, and they're still evolving. We're not there yet. Trial registration is still a "good idea" rather than a routine part of clinical research, and while many first-class medical journals now require pre-registration and refuse to publish unregistered trials, plenty of other respectable publications have yet to catch up.

What I want to point out is that it's not just clinical trials which would benefit from registration. Registration is a way to defeat publication bias, wherever it occurs, and any field in which there are "negative results" is vulnerable to the risk that they won't be reported. In some parts of science there are no negative results - in much of physics, chemistry, and molecular biology, you either get a result, or you've failed. If you try to work out the structure of a protein, say, then you'll either come up with a structure, or give up. Of course, you might come out with the wrong structure if you mess up, but you could never "find nothing". All proteins have a structure, so there must be one to find.

But in many other areas of research there is often genuinely nothing to find. A gene might not be linked to any diseases. A treatment might have no effect. A pollutant might not cause any harm. Basically, if you're looking for a correlation between two things, or an effect of one thing upon another, you might get a negative result. Just off the top of my head, this covers almost all genetic association and linkage studies, almost all neuroimaging, most experimental psychology, much of climate science, epidemiology, sociology, criminology, and probably others I don't know about. Oh, and clinical trials, but we already knew that. People don't tend to publish negative results, for various reasons. Wherever this is a problem, trial registration would be useful.
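
To get a feel for how damaging this is, here's a quick simulation (my own toy numbers, nothing to do with any of the studies discussed here). Suppose lots of labs each run a small study of an effect that is genuinely zero, and only the ones who happen to get a "significant" result publish:

```python
import random
import statistics

random.seed(42)

def one_study(n=30, true_effect=0.0):
    """Simulate one small two-group study of an effect that is
    genuinely zero. Returns (effect_estimate, significant_at_p05)."""
    a = [random.gauss(0.0, 1.0) for _ in range(n)]
    b = [random.gauss(true_effect, 1.0) for _ in range(n)]
    diff = statistics.mean(b) - statistics.mean(a)
    se = (statistics.stdev(a) ** 2 / n + statistics.stdev(b) ** 2 / n) ** 0.5
    return diff, abs(diff / se) > 1.96  # crude normal-approximation test

all_effects, published = [], []
for _ in range(5000):
    diff, significant = one_study()
    all_effects.append(diff)
    if significant:            # only "positive results" make it into print
        published.append(diff)

print(f"Mean effect across all studies:      {statistics.mean(all_effects):+.3f}")
print(f"Mean |effect| in published studies:  {statistics.mean(abs(d) for d in published):.3f}")
print(f"Fraction of studies published:       {len(published) / len(all_effects):.1%}")
```

The published literature ends up reporting a healthy-looking average effect even though the true effect is exactly nothing. That, in a nutshell, is publication bias - and a register of all the studies that were started is exactly what would expose it.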

Publication bias is known to be a problem in behavioural genetics (finding genes associated with psychological traits). For example, Munafo et al. (2007) found pretty strong evidence of publication bias in research on whether a certain allele (DRD2 Taq1A) predisposes to alcoholism. They concluded by saying that
Publication of nonsignificant results in the psychiatric genetics literature is important to protect against the existence of a biased corpus of data in the public domain.
Which is true, but saying it won't change anything, because everyone already knew this. No-one likes publication bias, but it happens anyway - so we need a system to prevent it. Curiously, however, registration is rarely mentioned as an option. Salanti et al. (2005) wrote at length about the pitfalls of genetic association studies, but did not mention it. Colhoun et al. (2003), in a widely cited paper in the Lancet, explained how publication bias was a major problem but then flat-out dismissed registration, saying that
an effective mechanism for establishment of prospective registers of proposed analyses is not feasible.
They didn't say why, and if registration works for clinical trials, I can see very little reason why it shouldn't work for other research. Indeed, another similar paper in the same journal raised the idea of "prestudy registration of intent". Clearly it deserves serious thought.

Registration would also help combat "outcome reporting bias", or as it's known in the trade, data dredging. Any set of results can be looked at in a number of ways, and some of these ways will lead to different conclusions from others. Let's say that you want to find out whether a certain gene is associated with obesity. You might start by taking a thousand men and seeing whether the gene correlates with body weight. Let's say it doesn't, which is really annoying, because you were hoping that you could spend the next five years getting paid to find out more about this gene. Well, you still could! You could check whether the gene is associated with Body Mass Index (weight relative to height). If that doesn't work, try percentage of body fat. Still nothing? Try eating habits. Eureka! Just by chance, you've found a correlation. Now you report that, and don't mention all the other things you tried first. You get a paper, "Gene XYZ123 influences eating behaviour in males", and a new grant to follow up on it. Sorted. Lynne McTaggart would be proud.
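
The reason this works so reliably is just arithmetic: every extra outcome you test is another roll of the dice. Here's a toy simulation of the scenario above (illustrative only - the outcomes and sample sizes are made up):

```python
import random

random.seed(1)

OUTCOMES = ["body weight", "BMI", "body fat %", "eating habits"]

def dredge_one_null_study(n=1000):
    """A gene with no real effect, tested against outcome after outcome
    until something comes up 'significant'. Returns the lucky outcome,
    or None if the study honestly finds nothing."""
    gene = [random.choice([0, 1]) for _ in range(n)]
    for outcome in OUTCOMES:
        y = [random.gauss(0.0, 1.0) for _ in range(n)]  # pure noise
        carriers = [v for g, v in zip(gene, y) if g]
        others = [v for g, v in zip(gene, y) if not g]
        diff = sum(carriers) / len(carriers) - sum(others) / len(others)
        se = (1 / len(carriers) + 1 / len(others)) ** 0.5  # true sd is 1
        if abs(diff / se) > 1.96:  # nominal p < 0.05
            return outcome
    return None

hits = sum(dredge_one_null_study() is not None for _ in range(2000))
print(f"'Significant' result found in {hits / 2000:.0%} of null studies "
      "(nominal rate per single test: 5%)")
```

With four independent outcomes to try, nearly one null study in five hands you a publishable "finding" - without anyone ever consciously cheating.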

This kind of thing happens all the time, although that's an extreme example. The motives are not always selfish - most scientists genuinely want to find positive results about their "pet" genes, or drugs, or whatever. It is all too easy to dredge data without being aware of it. Registration would put an end to most of this nonsense, because when you register your research - before the results are in - you would have to publicly outline what statistical tests you are planning to do. Essentially, you would need to write the Methods section of your paper before you collected any results.

If you were feeling particularly puritan, you could make people register the Introduction in advance too. Nominally, this is a statement of why you did the research, how it fits into the existing literature, what hypothesis you were testing and what you expected to find. In fact, it's generally a retrospective justification for getting the results you did, along with a confident "prediction" that you were going to find ... exactly what you found. This is not as serious a problem as publication bias, because everyone knows that it happens and so no-one (except undergraduates) takes Introductions seriously. But writing Introductions that no-one can read with a straight face ("Oh sure, they really predicted that ahead of time"; "Ha, sure they didn't just decide to do that post-hoc and then PubMed a reference to justify it") is silly. Registration would be a way of getting everyone to put their toys away and get serious.

Friday, October 31, 2008

Mood Is Chemistry. No Really, It Is.

What goes on in the brains of people who are depressed? For a lot of us, the answer is remarkably straightforward - they don't have enough serotonin, innit? The belief that serotonin is somehow the brain's "happy chemical" is almost folk wisdom nowadays - I just searched for serotonin on the Guardian website and clicked on the top article, this gem which describes serotonin as "the feelgood hormone".

Now, some more informed people don't like this pop psychopharmacology. The esteemed Dr. Ben Goldacre, for example, wrote that
That’s the serotonin hypothesis. It was always shaky, and the evidence now is hugely contradictory. I’m not giving that lecture here, but as a brief illustration, there is a drug called tianeptine – a selective serotonin reuptake enhancer, not an inhibitor – and yet research shows this drug is a pretty effective treatment for depression too.

Meanwhile in popular culture the depression/serotonin theory is proven and absolute, because it was never about research, or theory, it was about marketing, and journalists who pride themselves on never pushing pills or the hegemony will still blindly push the model until the cows come home....

The serotonin hypothesis will always be a winner in popular culture, even when it has flailed in academia, because it speaks to us of a simple, abrogating explanation, and plays into our notions of a crudely dualistic world where there can only be weak people, or uncontrollable, external, molecular pressures.

It's an excellent article and you should read the whole thing. Goldacre is far from alone in his skepticism towards the serotonin hypothesis. Indeed in certain circles, the idea that the serotonin hypothesis is basically just drug company propaganda is almost folk wisdom nowadays (stop me if you've heard this one before).

Now, I'm not going to defend the idea that all depression is caused by "low serotonin". That's almost certainly wrong, and the very best you can say about it is that there's no strong evidence for it. In fact, it's not even clear, from a neurobiological perspective, what "low serotonin" means - low in which parts of the brain? Are we talking about low firing rates of serotonin neurons, or low amounts of serotonin released each time they fire, or low levels of serotonin just hanging around the synapses all the time? (And the next time someone tries to sell you a supplement or a herb or other short-cut to higher serotonin levels, just remember that there is such a thing as too much happy hormone.)

But - it's possible to be too skeptical. Serotonin does, unquestionably, play an important role in mood. Rumors of the death of the serotonin hypothesis have been greatly exaggerated. Ironically, the best evidence for the mood-relevance of serotonin doesn't come from antidepressants. Most common antidepressants, e.g. the famous Prozac, are said to work by "boosting serotonin levels", but actually it's far from clear that they do. Although these drugs inhibit the transporter protein which gets rid of serotonin after it's been released, which should in theory increase serotonin levels, in fact the picture is more complex, because serotonin inhibits the firing of the very cells that release it. In fact, Prozac might even decrease serotonin levels in the places where they matter, at least in the short term. (For pharmacology geeks, see here).

The best evidence that mood is chemical, and that serotonin is one of the chemicals involved, comes from a foul-tasting, frothy, nausea-inducing milkshake known as the acute tryptophan depletion (ATD) mixture. The ATD mixture contains 100g of amino acids, which are what the body uses to synthesize proteins. You put the amino acids (white powders) into 200 ml of water. Some of them dissolve, some don't, so it's pretty lumpy. When (if) you manage to drink it all, interesting things happen. The influx of extra amino acids stimulates protein synthesis in your body. Some of the amino acids also get transported into the brain via transporter proteins at the blood-brain barrier.

The key ingredient of the ATD mixture is the absence of tryptophan. Tryptophan is an amino acid, and like all amino acids it's used to make proteins, but it's also necessary for the production of serotonin in the brain. When you drink the ATD mixture - containing no tryptophan, remember - all of the tryptophan already in your blood gets used up in the burst of protein synthesis. Any that survives can't get transported into the brain, because other amino acids are already using the transporter proteins. The result is that tryptophan levels in the blood, and the brain, drop dramatically over the course of several hours.
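
If you want to put rough numbers on this, the quantity that usually gets tracked is the ratio of tryptophan to the other large neutral amino acids (LNAAs) competing for the same transporter. The concentrations below are illustrative round numbers of my own, not measurements from any actual ATD study:

```python
# Illustrative round numbers (µmol/L plasma), not data from any ATD study.
def trp_lnaa_ratio(trp, lnaa_total):
    """Tryptophan's share of the amino-acid pool competing for the
    large-neutral-amino-acid transporter at the blood-brain barrier."""
    return trp / lnaa_total

baseline = trp_lnaa_ratio(trp=60.0, lnaa_total=600.0)

# After the tryptophan-free drink: free tryptophan is used up in protein
# synthesis while the competing amino acids surge.
depleted = trp_lnaa_ratio(trp=10.0, lnaa_total=1200.0)

print(f"Baseline Trp:LNAA ratio:   {baseline:.3f}")
print(f"After the ATD drink:       {depleted:.3f}")
print(f"Relative fall in ratio:    {1 - depleted / baseline:.0%}")
```

The point is that the drink attacks the ratio from both ends at once - tryptophan falls while its competitors rise - which is why the depletion is so dramatic.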

ATD has been used in research for over a decade. The ATD procedure is unpleasant - volunteers have to consume the drink, which is truly horrible, on an empty stomach, and then not eat anything for the next 7 hours. Sometimes people vomit while trying to drink the stuff, or shortly afterwards, or two hours later. It's not much fun. But ATD has proven, to my mind, beyond any doubt, that the link between serotonin and mood is more than just a myth. The reason is that if you give ATD to people who have previously suffered clinical depression, a large proportion of them become depressed again, just for a few hours. This has been replicated several times. The effects can be dramatic (although not always, and sometimes there is no discernible effect) -
A feature of particular interest was that the participants who had full relapses of symptoms described a reappearance of some of the depressive thoughts they had experienced when previously depressed. One of these participants whose previous episodes of clinical depression were associated with the loss of important friendships had, while depressed, been preoccupied with fears that she would never be able to sustain a relationship. She had not had such fears since then. She had been fully recovered and had not taken any medication for over a year. About 2 h after drinking the tryptophan-free mixture she experienced a sudden onset of sadness, despair, and uncontrollable crying. She feared that a current important relationship would end. She recognised that she was depressed but still considered that her fears were appropriate. The evening of the test day she started to feel better and the next day was fully recovered. She said that her fears about her current relationship had been unfounded and she now saw them as unrealistic.
From Smith, Cowen & Fairburn Lancet (1997)
Good skeptics will immediately notice that everything I've just described could be a reverse placebo (nocebo) effect - people are warned to expect to get depressed, and then they have to drink a god-awful mixture that makes them feel sick, so it's no surprise they feel down. This is a real concern, so for comparison there is a placebo drink - exactly the same, but with plenty of tryptophan - and everything is done double blind. The placebo drink does not produce the same effects (although occasionally there are placebo responses - sometimes very striking ones).

So does this mean that low serotonin = depression after all? No, for the very simple reason that if you do the exact same experiments on people who have never suffered from depression, they feel fine and dandy (well, except that they feel sick). A few people say they feel a bit down, but the reactions are nowhere near as strong as those seen in many people with a history of clinical depression. Yet the biochemical effect (reduced brain tryptophan and hence, presumably, reduced serotonin synthesis) is the same.

If we knew what made some people vulnerable to the effects of tryptophan depletion, we would be a long way towards understanding depression. We still don't. But it's something to do with serotonin. In some people, in some circumstances, serotonin is the only thing between happiness and despair. No, really.

Thursday, October 30, 2008

fMRI Reveals True Nature of Hatred

Given that I've taken to calling myself Neuroskeptic, I feel it's time to take a skeptical line on some neuroscience. Fortunately, an ideal example has just popped up. The paper, ominously titled "Neural Correlates Of Hate", was published in the open-access journal PLoS One. It's been picked up by the major science news sites and various newspapers, with headlines generally some variation of
Brain's 'hate circuit' identified

Those of us who keep up with the news won't be surprised. It seems like every week, reports come in that scientists have discovered the brain circuit for something.

By and large, these reports are nonsense. I will now explain why, and then tell you my theory of why everyone is so fascinated by neuroscience (and especially neuroimaging), before finishing by explaining why people aren't actually interested in neuroscience at all. Nice twist, eh? First, I'd like to make it clear that I'm not out to criticize the paper itself or the authors, Dr. Zeki and Dr. Romaya. No doubt the methodology of the experiment could be critiqued, but this is true of all such research, and I think the data from this study are valuable and interesting - to a specialist. What concerns me is the way in which this study and others like it are reported, and indeed the fact that they are reported as news at all.

So what did the authors do? They posted some adverts and recruited seventeen healthy volunteers. They showed them photos, which the volunteers had previously sent them. Some of the photos were of someone who the volunteer really hated - generally either ex-lovers or work rivals, predictably enough. Others were of people that the volunteer knew, but had "neutral feelings" towards. This was an fMRI study, so the whole process took place inside an MRI scanner configured to measure changes in blood oxygenation levels across the brain (which is considered a proxy for metabolic activity, itself a proxy for neural firing.) They then calculated which areas of the brain showed greater oxygenation changes when people were looking at their own personal hate figures than at the other faces. They found several areas in which the difference was statistically significant, which is what the yellow areas on this picture represent:

(Taken from Zeki & Romaya PLoS One 2008, without explicit permission)

This is all very well and good. Some people take a skeptical line on the whole business of fMRI, and they would probably consider these blobs-on-the-brain to be pretty much meaningless. I'm not one of them - I think these data tell us something about the human brain, although only in the context of other research, and only when the limitations of fMRI are borne in mind. (I hope to expand on my views of fMRI soon.) This is one piece of a big puzzle.

But one thing is clear, the brain's "hate circuit" is nowhere to be found in this study. This phrasing doesn't appear in the paper: it seems to have originated in the university press release (as this kind of stuff generally does.) What this data shows is that certain parts of the brain become more active when people are looking at pictures of people that they hate, and presumably therefore experiencing the emotion of hatred. These areas are not only activated by hatred; the putamen, for example, is known to be involved in the control of all movements. Every area which lit up in this study has lit up in a hundred other experiments which have nothing to do with hate. It's not as if scientists have just found a new bit of the brain tucked away somewhere, which turns out to be the root cause of all human evil. (Which is a pity, because that would look great on a grant application.)

Now, given that, I really can't see why anyone but a professional neuroscientist would want to know which parts of the brain activate when you look at pictures of a hated rival, not least because most laymen wouldn't know their putamen from their parietal lobe. (That's like saying "arse from their elbow," for non-neuroscience geeks.) And there's no reason they should. Neuroanatomy is very difficult, as any undergraduate neuroscientist knows. The brain is just an organ. It has various parts. Some people, like me, spend our lives trying to figure out how it all works, and we would say that it's very interesting. Of course, we would say that, because the brain pays our bills. To anyone else, it's just a grey lump.

Except, of course, that it's not. People are fascinated by the brain. We can't get enough cognitive neuroscience and fMRI images. They're a staple of the newspaper science pages. Does this mean people are interested in neuroscience? No. People don't understand neuroscience, because it's bloody hard. What interests people is not specific findings about the brain but the fact that science is "discovering things" about the brain and by implication, human life. At the back of all of our minds is the exciting feeling that whenever scientists find "the circuit" associated with some emotion or some behavior, an important truth about human nature has been revealed. (Neuroscientists get this feeling too, but we know it's more complicated than that. Some of us anyway.)

Sometimes this feeling surfaces and is expressed in words. Terence Kealey is a biochemist and head of the UK's only private University, The University of Buckingham. He's known for his libertarian politics. About a year ago he penned a profoundly revealing article for the Times. I would encourage you to read it, but you might need a dangerously large spoon of salt. Essentially, Kealey reads an fMRI study in which social science students were able to donate money to charity, and thinks it proves that
...people like being taxed for charity, but they like giving money to good causes even more... [which] challenges so many political assumptions. First, it disproves the Left’s belief that only the state will succour the poor: actually, philanthropy is hardwired into our brains and, in the absence of state aid, private giving is biologically determined...
Nothing in this paragraph is implied by the brain images which Kealey is talking about. Not a word. It's really quite impressively divorced from reality. In particular, there is absolutely no good reason to think that because a certain part of the brain is activated when we do something, that thing is "hardwired" or "biologically determined". This is because the brain is the organ of learning, and if we learn to do something, some part of the brain will be involved in that learning. Neuroimaging has very little to do with the nature / nurture debate. But my goal is here is not to bash Terence Kealey. Well to be honest it is a bit, but the main point is that the mistake that Kealey makes - seeing fMRI as a way of investigating the roots of human behavior - is very common.

The idea of a "hate circuit" is beguiling, I think, because it seems to show that hatred is a deep-seated human emotion with a biological basis. Personally, I think that's probably true. But I don't think that because of brain scans. I think that because I read the news and I read history. People across the world have been hating other people, in depressingly stereotypical ways, for as long as we can determine. That's human nature, but brain scans don't tell us anything about that. They tell us about the brain, which is a grey lump. Some of us have a professional interest in grey lumps, but everyone else would learn much more about hatred by going to see some Shakespeare or reading a history of the Balkans or something.

To sum up, neuroimaging and neuroscience in general are fascinating in their own right, but highly technical. As such there's no good reason why lay people should be any more interested in them than they are in chemistry. Given that they are in fact very interested, logically there must be bad reasons for this, such as the mistaken belief that brain scans can tell us about human behaviour, human nature, or everyday life. They don't and they probably can't. Vulgarized neuroscience now takes the place that Freudianism did 30 years ago, in that it offers simplistic, mechanistic explanations for complex behaviours, whose only claim to credibility is that they are "scientific". This kind of thing does real neuroscience, including fMRI, no favours.

fMRI Reveals True Nature of Hatred

Given that I've taken to calling myself Neuroskeptic, I feel it's time to take a skeptical line on some neuroscience. Fortunately, an ideal example has just popped up. The paper, ominously titled "Neural Correlates Of Hate", was published in the open-access journal PLoS One. It's been picked up by the major science news sites and various newspapers, with headlines generally some variation of
Brain's 'hate circuit' identified

Those of us who keep up with the news won't be surprised. It seems like every week, reports come in that scientists have discovered the brain circuit for something.

By and large, these reports are nonsense. I will now explain why, and then tell you my theory of why everyone is so fascinated by neuroscience (and especially neuroimaging), before finishing by explaining why people aren't actually interested in neuroscience at all. Nice twist, eh? First, I'd like to make it clear that I'm not out to criticize the paper itself or the authors, Dr. Zeki and Dr. Romaya. No doubt the methodology of the experiment could be critiqued, but this is true of all such research, and I think the data from this study are valuable and interesting - to a specialist. What concerns me is the way in which this study and others like it are reported, and indeed the fact that they are reported as news at all.

So what did the authors do? They posted some adverts and recruited seventeen healthy volunteers. They showed them photos, which the volunteers had previously sent them. Some of the photos were of someone who the volunteer really hated - generally either ex-lovers or work rivals, predictably enough. Others were of people that the volunteer knew, but had "neutral feelings" towards. This was an fMRI study, so the whole process took place inside an MRI scanner configured to measure changes in blood oxygenation levels across the brain (which is considered a proxy for metabolic activity, itself a proxy for neural firing.) They then calculated which areas of the brain showed greater oxygenation changes when people were looking at their own personal hate figures than at the other faces. They found several areas in which the difference was statistically significant, which is what the yellow areas on this picture represent:
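The statistical logic behind those yellow blobs can be sketched as a mass of per-voxel significance tests: compare each voxel's response between the two conditions across subjects, and highlight the voxels where the difference is unlikely to be chance. (This is a toy illustration with simulated data - real fMRI pipelines also model the haemodynamic response, smooth the images, and correct for multiple comparisons; none of that is shown here.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_subjects, n_voxels = 17, 1000  # 17 volunteers, a toy "brain" of 1000 voxels

# Simulated mean BOLD response per subject for each condition
neutral = rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
hated = neutral + rng.normal(0.0, 1.0, size=(n_subjects, n_voxels))
hated[:, :50] += 1.5  # pretend the first 50 voxels genuinely respond more to hated faces

# Paired t-test at every voxel: hated vs. neutral, within subjects
t, p = stats.ttest_rel(hated, neutral, axis=0)

# Naive uncorrected threshold: with 1000 tests you expect ~50 false
# positives at p < .05, which is exactly why real studies must correct
# for multiple comparisons before drawing blobs on a brain
significant = p < 0.05
print(significant.sum(), "voxels pass the uncorrected threshold")
```

The point of the sketch is that a "significant" voxel only means the signal differed between conditions; it says nothing about that region being dedicated to, or exclusively responsible for, the emotion being studied.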

(Taken from Zeki & Romaya PLoS One 2008, without explicit permission)

This is all very well and good. Some people take a skeptical line on the whole business of fMRI, and they would probably consider these blobs-on-the-brain to be pretty much meaningless. I'm not one of them - I think these data tell us something about the human brain, although only in the context of other research, and only when the limitations of fMRI are borne in mind. (I hope to expand on my views of fMRI soon.) This is one piece of a big puzzle.

But one thing is clear: the brain's "hate circuit" is nowhere to be found in this study. This phrasing doesn't appear in the paper: it seems to have originated in the university press release (as this kind of stuff generally does.) What these data show is that certain parts of the brain become more active when people are looking at pictures of people that they hate, and presumably therefore experiencing the emotion of hatred. These areas are not only activated by hatred; the putamen, for example, is known to be involved in the control of all movements. Every area which lit up in this study has lit up in a hundred other experiments which have nothing to do with hate. It's not as if scientists have just found a new bit of the brain tucked away somewhere, which turns out to be the root cause of all human evil. (Which is a pity, because that would look great on a grant application.)

Now, given that, I really can't see why anyone but a professional neuroscientist would want to know which parts of the brain activate when you look at pictures of a hated rival, not least because most laymen wouldn't know their putamen from their parietal lobe. (That's like saying "arse from their elbow," for non-neuroscience geeks.) And there's no reason they should. Neuroanatomy is very difficult, as any undergraduate neuroscientist knows. The brain is just an organ. It has various parts. Some people, like me, spend our lives trying to figure out how it all works, and we would say that it's very interesting. Of course, we would say that, because the brain pays our bills. To anyone else, it's just a grey lump.

Except, of course, that it's not. People are fascinated by the brain. We can't get enough cognitive neuroscience and fMRI images. They're a staple of the newspaper science pages. Does this mean people are interested in neuroscience? No. People don't understand neuroscience, because it's bloody hard. What interests people is not specific findings about the brain but the fact that science is "discovering things" about the brain and by implication, human life. At the back of all of our minds is the exciting feeling that whenever scientists find "the circuit" associated with some emotion or some behavior, an important truth about human nature has been revealed. (Neuroscientists get this feeling too, but we know it's more complicated than that. Some of us anyway.)

Sometimes this feeling surfaces and is expressed in words. Terence Kealey is a biochemist and head of the UK's only private university, the University of Buckingham. He's known for his libertarian politics. About a year ago he penned a profoundly revealing article for the Times. I would encourage you to read it, but you might need a dangerously large spoonful of salt. Essentially, Kealey reads an fMRI study in which social science students were able to donate money to charity, and thinks it proves that
...people like being taxed for charity, but they like giving money to good causes even more... [which] challenges so many political assumptions. First, it disproves the Left’s belief that only the state will succour the poor: actually, philanthropy is hardwired into our brains and, in the absence of state aid, private giving is biologically determined...
Nothing in this paragraph is implied by the brain images which Kealey is talking about. Not a word. It's really quite impressively divorced from reality. In particular, there is absolutely no good reason to think that because a certain part of the brain is activated when we do something, that thing is "hardwired" or "biologically determined". This is because the brain is the organ of learning, and if we learn to do something, some part of the brain will be involved in that learning. Neuroimaging has very little to do with the nature / nurture debate. But my goal here is not to bash Terence Kealey. Well, to be honest, it is a bit, but the main point is that the mistake that Kealey makes - seeing fMRI as a way of investigating the roots of human behavior - is very common.

The idea of a "hate circuit" is beguiling, I think, because it seems to show that hatred is a deep-seated human emotion with a biological basis. Personally, I think that's probably true. But I don't think that because of brain scans. I think that because I read the news and I read history. People across the world have been hating other people, in depressingly stereotypical ways, for as long as we can determine. That's human nature, but brain scans don't tell us anything about that. They tell us about the brain, which is a grey lump. Some of us have a professional interest in grey lumps, but everyone else would learn much more about hatred by going to see some Shakespeare or reading a history of the Balkans or something.

To sum up, neuroimaging and neuroscience in general are fascinating in their own right, but highly technical. As such there's no good reason why lay people should be any more interested in them than they are in chemistry. Given that they are in fact very interested, logically there must be bad reasons for this, such as the mistaken belief that brain scans can tell us about human behaviour, human nature, or everyday life. They don't and they probably can't. Vulgarized neuroscience now takes the place that Freudianism did 30 years ago, in that it offers simplistic, mechanistic explanations for complex behaviours, whose only claim to credibility is that they are "scientific". This kind of thing does real neuroscience, including fMRI, no favours.

Monday, October 27, 2008

Cheer Up, Citizens

Does anyone else find this here video (via BBC) very odd? It's what can only be described as a government propaganda clip, but rather than trying to persuade or inform it's basically telling you to cheer up. Or as they put it, "Increase your wellbeing today!" - they must have a jargon quota to meet. Maybe this is what happens when media-obsessed politicians listen to people like Lord Layard who say that they should be trying to make the population happier?

Also - a lot of psychologists and philosophers would say that everything we do is motivated by the desire to increase our own well-being, which would make the advice a bit redundant. Is this video proof that this theory of human nature is wrong? I wonder what Jeremy Bentham would say.