Wednesday, May 27, 2009

Questioning One in Four: Part 1

Link - Part 2, Part 3

One in four people suffer mental illness at some point in their lives.

Everyone knows that. But where does that number come from? The answer may surprise you. Join me, if you will, as I explore the biography of a statistic.

"1 in 4" is ubiquitous, at least in the English-speaking world. I can't think of another such number which is better known, except perhaps the fact that 1 in 3 people will suffer from cancer.

Anyone who's used the London Underground or watched British TV recently will be familiar with the Time to Change anti-stigma advertising drive. This £18 million campaign, run by the charities Mind and Rethink, is awash with "1 in 4"s, left, right, and center. Mind have it on their About Us page. The BBC have it on their main mental health page. There's even a One in Four magazine. And so on.

In the next post, I'll be examining the truth behind this statistic, but first, a little history. Google archive reveals that 1 in 4 is a child of the 1990s. English-language news media from the late 1980s contain the statement that 1 in 4 (American) families will have a member who suffers from mental illness, but this is not the same thing.

As far as I can tell, "1 in 4 people" entered the popular mind in the early to mid 1990s. By 1995, it was common and being referred to as an accepted fact. See for example this snapshot of the newspapers in 1995 under the search term ("one in four" + mental), showing that the idea had taken root by this point. The equivalent from 1992, by contrast, is quite different.

Interestingly, the early 1990s also feature repeated references to 1 in 4 (Americans) suffering from mental illness in any given year; this statistic, however, gradually fades from view as the decade goes on. By 2000, 1 in 4 appears more often than ever, but now it refers almost exclusively to lifetime prevalence.

These graphs show the number of Google archive hits from 1950 to 2008. I had hoped that this would illustrate my argument nicely, but sadly, the picture isn't all that clear. Here it is anyway - the top graph shows the increase in ("1 in 4" + mental) hits. The second shows, by way of comparison, the number of hits for just ("mental health"), which is much more level. That's nice. But the bottom graph shows that ("1 in 8" + mental) also becomes more popular over about the same time-frame, which is a bit confusing, as 1 in 8 is not a number especially linked to mental health.

But - where did 1 in 4 come from? When I set out to write this post, I thought it would be fairly easy to find out, but having done a lot of digging, I genuinely don't know.

My first guess was that it must have been the National Comorbidity Survey (NCS). The NCS was an ambitious attempt to measure the prevalence of mental disorders in a representative sample of the U.S. population, masterminded by Harvard Prof. Ronald C. Kessler. Data collection took place between 1990 and 1992, and the results started to be published in 1993 - just about the time when 1 in 4 started to appear in the media.

But in fact the headline finding from the NCS, as published in 1994, was that the lifetime prevalence of mental disorders was nearly 50%! That's 1 in 2 (sic). The proportion estimated to suffer from a disorder in any given year was almost 1 in 3. But no sign of 1 in 4.

Meanwhile, in Britain, 1993 also saw the first Psychiatric Morbidity Survey, a similar enterprise. (Attentive Neuroskeptic fans will recall that this was the survey that the Mental Health Foundation recently distorted to make it look like rates of anxiety disorders are rising). Could this be the source? No, the headline number here was 1 in 6, which referred to mental illness in the past week, not over the lifetime.

Going further back, the Epidemiological Catchment Area (ECA) project, the first large-scale psychiatric epidemiology study, happened in the early 1980s. The ECA famously concluded that 1 in 3 Americans suffer at least one mental illness over the lifetime, and 1 in 5 do in any given six month period! 3, 5 - but still not 4.

The World Health Organization quoted 1 in 4 lifetime in 2001, to much media fanfare, and I have seen the WHO given as a source for the figure. But where did they get it from? Well, good question.

Their report, New Understanding New Hope: The World Health Report 2001, notes that according to the WHO's own data, 450 million people worldwide currently suffer from "neuropsychiatric conditions". With 6 billion people on Earth, that's less than 1 in 12 (and that includes Alzheimer's, Parkinson's, epilepsy, etc.). And that's at any one time, not over the whole lifetime.

The report then quotes "at least 1 in 4" as a lifetime prevalence (on page 23). Finally! But this is not based on WHO data. Instead, they cite three references: Regier et al. 1988; Wells et al. 1989; and Almeida-Filho et al. 1997. Let's check these references.

The first refers to an Epidemiological Catchment Area study of 12-month prevalence. Not lifetime. The ECA, as we've previously seen, gave a lifetime estimate of 1 in 3. The 12-month estimate is 15.4%, or 1 in 6. No 1 in 4 to be found here.

The second refers to a 1989 paper from Christchurch, New Zealand. It reported a lifetime prevalence of 65.8% (sic) for any mental disorder. 2 in 3. For the "main" diagnoses, i.e. excluding most anxiety disorders, it was 36.6%. 1 in 3. The closest I could find to 1 in 4 in this study was 22.9% for main disorders, also excluding substance abuse disorders. 1 in 4, 1 in 3, or 2 in 3 - take your pick.

The last reference is to a Brazilian study finding lifetime prevalence rates from 31.0% to 50.5% in three cities.
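Incidentally, the percentage-to-"1 in N" conversions I keep making are easy to check. Here's a quick sketch, using only figures quoted in this post:

```python
# Express a prevalence percentage as "1 in N",
# rounding N to the nearest whole number.
def one_in(percent):
    return round(100 / percent)

print(one_in(15.4))  # ECA 12-month estimate -> 6, i.e. 1 in 6
print(one_in(36.6))  # Christchurch "main" diagnoses -> 3, i.e. 1 in 3
print(one_in(22.9))  # Christchurch, main excl. substance abuse -> 4, i.e. 1 in 4
print(one_in(50.8))  # NCS-R lifetime prevalence -> 2, i.e. 1 in 2
```

Note how much the rounding hides: 22.9% and, say, 28% both come out as "1 in 4".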

So, in 2001, the WHO quoted 1 in 4, but their only references, if taken seriously, put the lifetime prevalence at more like 1 in 2. So we still don't know where 1 in 4 comes from.

Recently, the National Comorbidity Survey Replication (NCS-R), another Kessler project, claimed a lifetime prevalence of any disorder in Americans of 50.8%. But the proportion suffering from a disorder in any one year was estimated at about one in four. So that's 1 in 4 at last, but that number only appeared in 2005 - far too late to explain the origin of the meme. (And it was yearly, not lifetime, but you can see how people might have misinterpreted it.)

So, I give up. I don't believe there is a single source for 1 in 4. If anyone thinks they know where I've gone wrong, please let me know. But as far as I can see, 1 in 4 lifetime represents a kind of informal average of all of the studies I've discussed. It's a number that sticks in people's minds because it's high enough to capture the sense that "they're very common" while not being so high as to make people think "that's ridiculous" (as most of the actual estimates do). It's less a statistic, more a collective guess.

In the next post, I'll try to make sense of all these numbers.

[BPSDB]

Grant, B. (2006). About 26% of people in the US have an anxiety, mood, impulse control, or substance disorder. Evidence-Based Mental Health, 9(1), 27. DOI: 10.1136/ebmh.9.1.27

Saturday, May 23, 2009

Do Antidepressants Help in Mild Depression?

Yes! says the BBC, reporting on the results of a new trial -
Drugs 'can help mild depression'
Not so fast. Read this before you reach for the Prozac.
It was about this time last year that Irving Kirsch and colleagues released Initial Severity and Antidepressant Benefits. This bombshell of a meta-analysis concluded, notoriously, that the benefits of antidepressants over and above placebo are in general pretty small. Moreover, it claimed that the benefits are even smaller - indeed pretty much zero - in people whose depression is not very severe to begin with.

However, Neuroskeptic readers will know that antidepressant trials are not all they're cracked up to be (1,2). On top of which Kirsch et al. were a little "creative" with their statistics, as bloggers P J Leonard and Robert Waldmann aptly demonstrated. So, the claim that antidepressants don't work in mild depression rests on shaky foundations.

But that doesn't mean that they do work. In fact, there have been very few studies looking at the effectiveness of drugs in mild to moderate depression. That's a shame, because mild depression is the most common reason why people are given antidepressants in real life.

Now a new clinical trial, run by the British National Health Service, has appeared. It was (drumroll) a Randomised controlled trial to determine the clinical effectiveness and cost-effectiveness of selective serotonin reuptake inhibitors plus supportive care, versus supportive care alone, for mild to moderate depression with somatic symptoms in primary care.

The researchers enlisted GPs (family doctors) from across the UK, and got them to refer suitable patients to the study. Patients could be included if their doctors considered that they were depressed and had been for at least 8 weeks. They also had to be aged 18 or over, and they had to be rated between 12 and 19 on the HAMD, a scale used to measure the severity of depression. (Slightly oddly, they were also required to show at least some evidence of "somatic" symptoms - aches, pains, indigestion, that kind of thing. I'm not sure why.) Patients were excluded if they "expressed suicidal intent" or if they admitted to drug or alcohol misuse.

A total of 602 patients were referred to the trial, but of these only 220 actually took part; the rest either didn't want to do it or were unsuitable for whatever reason. It took the researchers nearly 4 years and heroic efforts to recruit those 220 people, including reimbursing doctors £45 for each patient referred. This kind of research is frustrating, which is probably why there's so little of it.

The volunteers were randomly assigned to get supportive care alone, or supportive care plus the doctor's choice of SSRI antidepressant. "Supportive care" is basically a euphemism for "doing sweet F. A.". The GPs were meant to see the patients 5 times over a 12-week period; given that a typical GP consultation in the UK lasts about 10 minutes, the idea that this constitutes any kind of "care", supportive or not, is a bit of a joke.

What happened? Well, to cut a very long story short, the patients assigned to SSRIs did better than the ones assigned to supportive care alone. Hurrah! But they only did slightly better. After 12 weeks they had a mean HAMD score of 8.7, compared to 11.2 in the supportive care group. The SSRI group also did a bit better on some other measures of health, well-being and general satisfaction. The difference on the BDI, a self-reported measure of depression, was not significant, however (13.0 vs. 15.1).

So does that mean antidepressants "work" in mild depression? Maybe. Maybe not. The most obvious issue, of course, is that there was no placebo group in this trial. So any benefit of the pills could have been purely psychological. Getting randomly assigned to "supportive care" and condemned to twiddle your thumbs for 12 weeks is not going to make anyone feel better. Starting on antidepressants, on the other hand, feels like a fresh start. It gives hope. It's change you can believe in.

But if giving people pills makes them feel better, isn't that good enough reason to do it? Who cares if it's all the placebo effect? Well, there's some truth to that, but the problem is that patients included in this trial were a rather unusual bunch. In particular, they were people who agreed to be randomized to get antidepressants or not, i.e. they had no strong preference either for or against pills.

Given that an awful lot of people do have such a preference, we can't assume that these results apply to the average patient in the clinic. As the authors note (page 59, emphasis mine):
The tallies of surgery logs completed by a number of the study GPs at various points during the study showed that only around 1 in 10 patients with a new episode of depression were referred into the study, mainly because the rest did not fulfil the inclusion criteria, particularly in terms of a lack of equipoise about the benefits of drug treatment on the part of the doctor or patient or both.
And of those 602 referred, only about a third actually took part, as mentioned above. So what we have here is a study on an unusual 3% of patients. What about the other 97%? We don't know. Still.
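That "3%" is just the two selection steps multiplied together; a back-of-the-envelope check, using the numbers above:

```python
# Roughly 1 in 10 patients with a new episode of depression were
# referred (per the GPs' surgery logs), and 220 of the 602 referred
# actually took part.
referred = 1 / 10
took_part = 220 / 602          # about a third

included = referred * took_part
# Comes out at roughly 3-4% - the "unusual 3%" of patients in the text.
print(f"{included:.1%} of depressed patients ended up in the trial")
```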

Or don't we? Well, it depends who "we" are. I suspect that a moderately competent doctor with experience treating depression probably does have a good idea of who is likely to benefit from drugs and who isn't. There's no substitute for real, hands-on clinical experience. There's more to life than trials...

T Kendrick, J Chatwin, C Dowrick, A Tylee, R Morriss, R Peveler, M Leese, P McCrone, T Harris, M Moore, R Byng, G Brown, S Barthel, H Mander, A Ring, V Kelly, V Wallace, M Gabbay, T Craig and A Mann (2009). Randomised controlled trial to determine the clinical effectiveness and cost-effectiveness of selective serotonin reuptake inhibitors plus supportive care, versus supportive care alone, for mild to moderate depression with somatic symptoms in primary care. Health Technology Assessment, 13(22).

Thursday, May 21, 2009

Genes, Brains and the Perils of Publication

Much of science, and especially neuroscience, consists of the search for "positive results". A positive result is simply a correlation or a causal relationship between one thing and another. It could be an association between a genetic variant and some personality trait. It could be a brain area which gets activated when you think about something.

It's only natural that "positive results" are especially interesting. But "negative" results are still results. If you find that one thing is not correlated with another, you've found a correlation. It just happens to have a value of zero.

For every gene which causes bipolar disorder, say, there will be a hundred which have nothing to do with it. So, if you find a gene that doesn't cause bipolar, that's a finding. It deserves to be treated just as seriously as finding that a gene does cause it. In particular, it deserves to be published.

Sadly, negative results tend not to get published. There are lots of reasons for this and much has been written about it, both on this blog and in the literature, most notably by John Ioannidis (see this and this, for starters). A paper just published in Science offers a perfect example of the problem: Neural Mechanisms of a Genome-Wide Supported Psychosis Variant.

The authors, a German group, report on a genetic variant, rs1344706, which was recently found to be associated with a slightly raised risk of psychotic illness in a genome-wide association study. (Genome-wide studies can and do throw up false positives so rs1344706 might have nothing to do with psychosis - but let's assume that it does.)

They decided to see whether the variant had an effect on the brains of people who have never suffered from psychosis. That's an extremely reasonable idea, because if a certain gene causes an illness, it could well also cause subtle effects in people who don't have the full-blown disease.

So, they took 115 healthy people and used fMRI to measure neural activity while they were doing some simple cognitive tasks, such as the n-back task, a fairly tricky memory test. People with schizophrenia and other psychotic disorders often have difficulties on this test. They also used a test which involves recognizing people's emotions from pictures of their faces.
They found that -
Regional brain activation was not significantly related to genotype...Rs1344706 genotype had no impact on performance.
In other words, the gene didn't do anything. The sample size was large - with 115 people, they had an excellent chance to detect any effect, if there was one, and they didn't. That's a perfectly good finding, a useful contribution to the scientific record. It was reasonable to think that rs1344706 might affect cognitive performance or brain activation in healthy people, and it didn't.
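A crude simulation gives a feel for why n = 115 is a decent sample. Everything here is an illustrative assumption on my part (a clean 58/57 genotype split, a "medium" true effect of Cohen's d = 0.5, a plain two-sample t-test), not the study's actual design:

```python
import numpy as np

rng = np.random.default_rng(42)

n1, n2 = 58, 57   # assumed split of the 115 subjects into two genotype groups
d = 0.5           # assumed true effect size (Cohen's d, a "medium" effect)
crit = 1.98       # approximate two-tailed 5% critical t value for df ~ 113
n_sims = 2000

detections = 0
for _ in range(n_sims):
    g1 = rng.standard_normal(n1)
    g2 = rng.standard_normal(n2) + d
    # Welch's t statistic for the difference in group means
    t = (g2.mean() - g1.mean()) / np.sqrt(g1.var(ddof=1) / n1 + g2.var(ddof=1) / n2)
    detections += abs(t) > crit

print(f"estimated power: {detections / n_sims:.2f}")
```

With these assumptions the estimated power comes out around 0.75, and it climbs towards 1 for larger effects. The point is only that a true effect of any reasonable size would probably have shown up.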
But that's not what the paper is about. These perfectly good negative findings were relegated to just a couple of sentences - I've just quoted almost every word they say about them - and the rest of the article concerns a positive result.

The positive result is that the variant was associated with differences in functional connectivity. Functional connectivity is the correlation between activity in different parts of the brain; if one part of the brain tends to light up at the same time as another part, they are said to be functionally connected.
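That definition is easy to make concrete: the functional connectivity between two regions is just the correlation coefficient of their activity time courses. A toy sketch with synthetic signals (my own made-up data, nothing to do with the study's):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic activity time courses for two regions; region_b partly
# tracks region_a, so the pair is "functionally connected".
n_timepoints = 200
region_a = rng.standard_normal(n_timepoints)
region_b = 0.6 * region_a + 0.8 * rng.standard_normal(n_timepoints)

# Functional connectivity = Pearson correlation between the time courses.
connectivity = np.corrcoef(region_a, region_b)[0, 1]
print(f"functional connectivity: {connectivity:.2f}")  # close to 0.6 by construction
```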
In risk-allele carriers, connectivity both within DLPFC (same side) and to contralateral DLPFC was reduced. Conversely, the hippocampal formation was uncoupled from DLPFC in non–risk-allele homozygotes but showed dose-dependent increased connectivity in risk-allele carriers. Lastly, the risk allele predicted extensive increases of connectivity from amygdala including to hippocampus, orbitofrontal cortex, and medial prefrontal cortex.

Genes, Brains and the Perils of Publication

Much of science, and especially neuroscience, consists of the search for "positive results". A positive result is simply a correlation or a causal relationship between one thing and another. It could be an association between a genetic variant and some personality trait. It could be a brain area which gets activated when you think about something.


It's only natural that "positive results" are especially interesting. But "negative" results are still results. If you find that one thing is not correlated with another, you've found a correlation. It just happens to have a value of zero.

For every gene which causes bipolar disorder, say, there will be a hundred which have nothing to do with it. So, if you find a gene that doesn't cause bipolar, that's a finding. It deserves to be treated just as seriously as finding that a gene does cause it. In particular, it deserves to be published.

Sadly, negative results tend not to get published. There are lots of reasons for this and much has been written about it, both on this blog and in the literature, most notably by John Ioannidis (see this and this, for starters). A paper just published in Science offers a perfect example of the problem: Neural Mechanisms of a Genome-Wide Supported Psychosis Variant.

The authors, a German group, report on a genetic variant, rs1344706, which was recently found to be associated with a slightly raised risk of psychotic illness in a genome-wide association study. (Genome-wide studies can and do throw up false positives so rs1344706 might have nothing to do with psychosis - but let's assume that it does.)

They decided to see whether the variant had an effect on the brains of people who have never suffered from psychosis. That's an extremely reasonable idea, because if a certain gene causes an illness, it could well also cause subtle effects in people who don't have the full-blown disease.

So, they took 115 healthy people and used fMRI to measure neural activity while they were doing some simple cognitive tasks, such as the n-back task, a fairly tricky memory test. People with schizophrenia and other psychotic disorders often have difficulties on this test. They also used a test which involves recognizing people's emotions from pictures of their faces.
They found that -
Regional brain activation was not significantly related to genotype...Rs1344706 genotype had no impact on performance.
In other words, the gene didn't do anything. The sample size was large - with 115 people, they had an excellent chance to detect any effect, if there was one, and they didn't. That's a perfectly good finding, a useful contribution to the scientific record. It was reasonable to think that rs1344706 might affect cognitive performance or brain activation in healthy people, and it didn't.
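For context, here's a back-of-the-envelope power sketch (my own, not from the paper) of why a sample of 115 gives a good chance of detecting a moderate effect, using the standard Fisher z approximation for a Pearson correlation:

```python
import math
from statistics import NormalDist

def pearson_power(r, n, alpha=0.05):
    """Approximate two-tailed power to detect a true Pearson correlation r
    at sample size n, via the Fisher z transformation."""
    nd = NormalDist()
    z_crit = nd.inv_cdf(1 - alpha / 2)                # two-tailed critical value
    noncentrality = math.atanh(r) * math.sqrt(n - 3)  # Fisher z, scaled by precision
    return (1 - nd.cdf(z_crit - noncentrality)) + nd.cdf(-z_crit - noncentrality)

# With 115 subjects, a medium-sized correlation (r = 0.3) would be
# detected roughly 90% of the time at p < .05:
print(round(pearson_power(0.3, 115), 2))
```

So a null result at this sample size is informative: a moderate effect would very probably have shown up if it existed.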
But that's not what the paper is about. These perfectly good negative findings were relegated to just a couple of sentences - I've just quoted almost every word they say about them - and the rest of the article concerns a positive result.

The positive result is that the variant was associated with differences in functional connectivity. Functional connectivity is the correlation between activity in different parts of the brain; if one part of the brain tends to light up at the same time as another part, they are said to be functionally connected.
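Functional connectivity in this sense is nothing exotic - it's just a Pearson correlation between two regions' activity time series. A minimal sketch with simulated data (the region labels are purely illustrative):

```python
import math
import random

def functional_connectivity(ts_a, ts_b):
    """Pearson correlation between two regional activity time series -
    the standard measure of 'functional connectivity'."""
    n = len(ts_a)
    mean_a, mean_b = sum(ts_a) / n, sum(ts_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(ts_a, ts_b))
    var_a = sum((a - mean_a) ** 2 for a in ts_a)
    var_b = sum((b - mean_b) ** 2 for b in ts_b)
    return cov / math.sqrt(var_a * var_b)

# Two simulated regions sharing a common driving signal are "connected":
random.seed(0)
shared = [random.gauss(0, 1) for _ in range(200)]
roi_1 = [s + random.gauss(0, 1) for s in shared]  # e.g. DLPFC
roi_2 = [s + random.gauss(0, 1) for s in shared]  # e.g. hippocampal formation
print(functional_connectivity(roi_1, roi_2))  # clearly positive (true value 0.5)
```

Note that two regions can show a genotype difference in this correlation while behaviour is completely unaffected - which is exactly the interpretive problem here.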
In risk-allele carriers, connectivity both within DLPFC (same side) and to contralateral DLPFC was reduced. Conversely, the hippocampal formation was uncoupled from DLPFC in non–risk-allele homozygotes but showed dose-dependent increased connectivity in risk-allele carriers. Lastly, the risk allele predicted extensive increases of connectivity from amygdala including to hippocampus, orbitofrontal cortex, and medial prefrontal cortex.
And they conclude, optimistically:
...our findings establish dysconnectivity as a core neurogenetic mechanism, where reduced DLPFC connectivity could contribute to disturbed executive function and increased coupling with HF to deficient interactions between prefrontal and limbic structures ... Lastly, our findings validate the intermediate phenotype strategy in psychiatry by showing that mechanisms underlying genetic findings supported by genome-wide association are highly penetrant in brain, agree with the pathophysiology of overt disease, and mirror candidate gene effects. Confirming a century-old conjecture by combining genetics with imaging, we find that altered connectivity emerges as part of the core neurogenetic architecture of schizophrenia and possibly bipolar disorder, identifying novel potential therapeutic targets.
I have no wish to criticize these findings as such. But the way in which this paper is written is striking. The negative results are passed over as quickly as possible. This despite the fact that they are very clear and easy to interpret - the rs1344706 variant has no effect on cognitive task performance or neural activation. It is not a cognition gene, at least not in healthy volunteers.

By contrast, the genetic association with connectivity is modest (see the graphs above - there is a lot of overlap), and very difficult to interpret, since it is clearly not associated with any kind of actual differences in behaviour.

And yet this positive result got the experiment published in no less a journal than Science! The negative results alone would have struggled to get accepted anywhere, and would probably have ended up either unpublished, or published in some rubbish minor journal and never read. It's no wonder the authors decided to write their paper in the way they did. They were just doing the smart thing. And they are perfectly respectable scientists - Andreas Meyer-Lindenberg, the senior author, has done some excellent work in this and other fields.

The fault here is with a system which all but forces researchers to search for "positive results" at all costs.

[BPSDB]

ResearchBlogging.org: Esslinger, C., Walter, H., Kirsch, P., Erk, S., Schnell, K., Arnold, C., Haddad, L., Mier, D., Opitz von Boberfeld, C., Raab, K., Witt, S., Rietschel, M., Cichon, S., & Meyer-Lindenberg, A. (2009). Neural Mechanisms of a Genome-Wide Supported Psychosis Variant. Science, 324 (5927), 605. DOI: 10.1126/science.1167768

Saturday, May 16, 2009

Legal Highs

The past couple of weeks have seen British newspapers and politicians fretting about "legal highs". Legal highs are perfectly legal substances that "help people get out of their minds yet stay within the law", as the Guardian puts it.

Like Ritalin and booze, you mean? No, they're talking about things like spice arctic synergy. You know, spice arctic synergy, the famous drug. No? Well, you know about it now, and so does everyone who reads the Guardian. I would be interested to see what the sales of spice arctic synergy are like in the next few weeks.

From what I can tell "spice" is just the latest of the many brands of "herbal highs" that can be bought in head shops and other such "alternative" retailers. Other famous brands are Druid's Fantasy, Aztec Acid, and Wizard's Willy. (I may have made some of those up.) These are blends of possibly psychoactive plants which can be smoked or eaten; the effects are supposedly a bit like cannabis or magic mushrooms but, at least so far as I'm told, mostly consist of nausea and headache. And wasting £20.

The consumer base for these silly products largely consists of teenagers who aren't cool enough to buy any proper drugs. On the drug credibility scale, most "legal highs" rank somewhere between sniffing glue and drinking your own pee after taking mushrooms in order to recapture some of the hallucinogens. (That works, allegedly.) No self-respecting drug user would be seen dead with any. Ban them, on the other hand, and everyone will want some.

To be fair, there are some genuinely potent legal drugs out there. Salvia divinorum, for example, contains a pharmacologically unique dissociative hallucinogen called salvinorin. Back when I was an uncool teenager a few friends of mine tried it, but they only ever took it once. The experience apparently amounted to ten minutes of terror and indescribable visions that seemed to last hours; no-one I know who's taken it enjoyed it, and in the case of one of them it led him to vow never to take any hallucinogens ever again. I tried some, but it did nothing at all (except waste my money.) So it's a bit unpredictable.

Should it be banned? Maybe. It's certainly not stuff I would want my kids to go near, if I had any. Hypocritical as that might be. But that doesn't mean it's actually harmful, still less that prohibiting it would prevent harm overall (I suspect people would just find another drug to take, maybe an even dodgier one.) It would be worth a serious and evidence-based look, like all areas of drug policy, but given that the government have a history of hysterical shrieking whenever their own appointed experts try to do that, I'm not hopeful.

And when I read that one MP has made it his personal mission to stop Salvia, I just couldn't stop thinking of bisturbile cranabolic amphetamoids.


Friday, May 15, 2009

Science vs. Free Will, Again

The question of whether we have "free will" has kept philosophers occupied for at least 2000 years. Wouldn't it be nice if science came along and sorted the whole thing out?
That's the reason why so many people are excited about reports like the one just published in Science, Movement Intention After Parietal Cortex Stimulation in Humans. The report itself is extremely straightforward. The authors, a team of French neurosurgeons, used electrodes to stimulate various points on the surface of the brains of seven patients. The patients were all suffering from brain tumours in various places, and they were undergoing surgery to remove them. As often happens, the authors decided to try to squeeze a little research out of the procedure as well.

The authors stimulated points in various areas of the brain, but the most interesting results came from the premotor cortex (the blue area on the picture above) and the posterior parietal cortex (red and yellow).

When certain points on the premotor cortex were stimulated, the patients moved. But they were not aware that they had done so. For example:
...during stimulation patient PM1 exhibited a large multijoint movement involving flexion of the left wrist, fingers, and elbow ... He did not spontaneously comment on this, and when asked whether he had felt a movement he responded negatively.
That's pretty interesting in itself, but even more so is what happened when the posterior parietal cortex got zapped. Stimulation here produced a desire or intention to move, although no movement actually occurred:
Stimulation of all these sites produced a pure intention, that is, a felt desire to move without any overt movement being produced... Without prompting by the examiner, all three patients spontaneously used terms such as “will,” “desire,” and “wanting to,” which convey the voluntary character of the movement intention and its attribution to an internal source, that is, located within the self.
And as if that wasn't enough philosophically-provocative fun, high intensity stimulation of the same area made the patients believe that they had in fact moved, although they didn't move a muscle:
[with higher electrode currents] conscious motor intentions were replaced by a sensation that a movement had been accomplished [but] no actual movement was observed. Thus, these patients experienced awareness of an illusory movement. For example, patient PP3 reported after low-intensity stimulation of one site (5 mA, 4 s; site a in Fig. 1), “I felt a desire to lick my lips” and at a higher intensity (8 mA, 4 s), “I moved my mouth, I talked, what did I say?”
Wow. What are we to make of all this?

A while back I wrote about Wilder Penfield's idea of "double consciousness", which Christian neurosurgeon Michael Egnor described approvingly:
Penfield found that he could invoke all sorts of things- movements, sensations, memories. But in every instance (hundreds of thousands of individual stimulations- in different locations in each patient- during his career), the patients were aware that the stimulation was being done to them, but not by them. There was a part of the mind that was independent of brain stimulation and that constituted a part of subjective experience that Penfield was not able to manipulate with his surgery.

Penfield called this "double consciousness", meaning that there was a part of subjective experience that he could invoke or modify materially, and a different part that was immune to such manipulation.
So Penfield, one of the great pioneers of 20th century neuroscience, claimed that stimulation of the brain could never produce desires or intentions which were experienced as the subject's "own". The person whose brain you were stimulating always felt that whatever happened to them came from outside.

But this French report directly contradicts that. We can only speculate as to why. It could be that Penfield just never hit the right spot, but this seems extremely unlikely, as he did a lot of stimulating over the course of his career. A cynic might ask whether Penfield did observe similar phenomena and just never reported them, but if we're going to go down that road, it's equally likely that these neurosurgeons just made it all up. Fortunately, any neurosurgeon should be able to try to replicate these results with a few prods of an electrode, so it shouldn't take long before the truth becomes clearer.

If these present results hold up, they'll certainly suggest some interesting ideas about the organisation of the brain - such as that the perception of movement depends upon the neurones encoding the intention to move rather than those involved in producing the actual motor act.

It would also be interesting to find out what happens when you simultaneously stimulate the premotor spot which makes your arm move, and the posterior parietal spot which makes you want to move your arm. Would that make you want to move your arm - and do so? If so, that would suggest that something very similar to that is going on whenever we do anything. What is life, but wanting to move, and moving?

But whether that's true or not, intentions (and everything else) are still something that happens in the brain, and the brain is a material object subject to the laws of physics. Neuroscience can tell us how exactly it all fits together, but at the end of the day, it's all a bunch of cells. Free will, in other words, appears to be in trouble, whatever the details of the brain's mechanisms happen to be.

ResearchBlogging.org: Desmurget, M., Reilly, K., Richard, N., Szathmari, A., Mottolese, C., & Sirigu, A. (2009). Movement Intention After Parietal Cortex Stimulation in Humans. Science, 324 (5928), 811-813. DOI: 10.1126/science.1169896
