Thursday, July 14, 2011

New Brain Cells: Torrent, or Trickle?

An important paper just out asks, Could adult hippocampal neurogenesis be relevant for human behavior?

Neuroscientists, and the media, are very excited by hippocampal neurogenesis - the ongoing creation of new neurons in an area called the dentate gyrus of the hippocampus. This is because it was thought, for a long time, that no new neurons were created in the adult brain. It turned out that this was wrong.

There's lots of exciting, suggestive evidence that the process is involved in learning and memory, responses to stress, depression, and the action of antidepressants, to name just a few - although all of this remains controversial.

However, there's a big question which has rarely been considered: how much neurogenesis are we talking about? Are there enough new cells that it would be realistic for them to be doing important stuff, or is it just a little trickle?
The most common source of skepticism toward a functional role for adult neurogenesis is the perception that too few new neurons are added in adulthood to have a significant impact. Interestingly, this concern, while valid, is usually raised informally and rarely in the scientific literature. Very few studies have addressed this issue...
The new paper reviews the evidence. Firstly, they point out that in the hippocampus, there's a group of cells called dentate gyrus granule cells which are unusual in that activity in just a few of these cells can have big downstream consequences. And these are the cells that newborn neurons turn into.
Each granule cell contacts only 10–15 CA3 pyramidal cells...a single granule cell is able to trigger firing in downstream CA3 targets...Because of this “detonator” action...a single granule neuron can potentially have a large impact despite representing only a tiny fraction of the population.
So new cells may play an important role. But exactly how many are there? They re-analyze data from their own lab in rats, and, making a few assumptions, arrive at the following rough estimate: in 3-month-old rats, there are 650k "young" cells less than 8 weeks old; even in 2-year-old rats (ancient, for a rat) there are 50k.

This is enough to have a big impact downstream:
Since there are approximately 500,000 CA3 pyramidal cells, and each granule cell contacts 11–15 pyramidal cells, this suggests that even in the oldest animals, each CA3 pyramidal cell could receive a direct contact from a young granule cell
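Just to make the arithmetic explicit, here's a quick back-of-the-envelope check (a rough sketch in Python using only the figures quoted above; nothing here comes from the paper beyond those numbers):

```python
# Rough figures quoted above; everything else is simple arithmetic.
ca3_pyramidal_cells = 500_000      # approximate number of CA3 pyramidal cells
contacts_per_granule = (11, 15)    # each granule cell contacts ~11-15 CA3 cells

for label, young_cells in [("3-month-old rat", 650_000), ("2-year-old rat", 50_000)]:
    low = young_cells * contacts_per_granule[0] / ca3_pyramidal_cells
    high = young_cells * contacts_per_granule[1] / ca3_pyramidal_cells
    # Average number of contacts each CA3 cell receives from "young" granule cells
    print(f"{label}: {low:.1f} to {high:.1f} young-granule-cell contacts per CA3 neuron")
```

Even in the oldest animals that works out at roughly one young-cell contact per CA3 neuron, which is the point the authors are making.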
That's all in rats, though. What about humans? It's hard to tell. The problem is that the best way to assess the rate of neurogenesis is to inject a drug called BrdU and then study the brain post-mortem. Unfortunately, this drug can cause cancer so you can't just give it to people for the purposes of science. The only time it's used in humans is (ironically) to help detect cancer.

However, one study did manage to look at BrdU staining in the hippocampus, using people who'd been injected with BrdU for cancer (not brain cancer) and then died. This study found, the authors say, rates of neurogenesis at least as high as in rats, considering the low dose of BrdU and the fact that the patients were old and stressed (by having cancer).

They admit that this is just one study, and comparing doses between rats and humans is inexact. They nonetheless conclude:
Are these numbers potentially sufficient to exert a functional impact in humans? We feel that the answer to this question is an overwhelming "yes".
Snyder JS, & Cameron HA (2011). Could adult hippocampal neurogenesis be relevant for human behavior? Behavioural Brain Research. PMID: 21736900

Wednesday, July 13, 2011

The Brain Is Not Made of DNA

A new paper claims to have found "A novel functional brain imaging endophenotype of autism".
They used fMRI to show that the brains of teenagers with autism showed no difference in activation when looking at smiling happy faces, or fearful faces, compared to emotionally neutral ones. In teens without autism, there was strong activation in many emotional and face-related brain regions. The unaffected brothers and sisters of the autistic people showed intermediate effects.

This is a fine study. The finding that siblings of people with autism have weakened neural responses to emotional faces is quite important, as it suggests that this neural response tracks (to some degree) where you fall on the autism "spectrum".

The abstract of the paper actually downplays this, and says "The response in unaffected siblings did not differ significantly from the response in autism". However, there was a significant linear trend of group, and looking at the graphs, it's clear the siblings were In The Middle, like Malcolm.


There are plenty more nice things you could do with these results, which form an unusually large and rich dataset (120 people - 40 in each group). You could see, for example, whether siblings tend to be similar in terms of neural response. You could see whether the siblings who are most alike in brain response are closest in symptoms. Or just look at the structural data on brain size and shape to see if there are characteristic differences between siblings that make one of them autistic and the other not.

There are a few problems. Most of the analyses are subject to the non-independence problem, because they defined their regions of interest based on the areas that showed a significant happy vs neutral face effect in the control group. So it's no surprise that when they generated graphs from these areas, the control group showed the strongest effect. However, they also do whole-brain analyses which avoid this problem and I don't think it undermines the main results.
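If you're not familiar with why non-independence matters, here's a toy simulation of the problem (purely illustrative, with made-up numbers; it has nothing to do with the actual dataset): pick "voxels" because they show a strong effect in one group, and those same voxels will look stronger in that group even when the true effect is identical everywhere.

```python
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_subjects, true_effect = 5000, 40, 0.3

# Two groups with exactly the same true effect in every voxel, plus noise
controls = true_effect + rng.normal(0, 1, size=(n_subjects, n_voxels))
patients = true_effect + rng.normal(0, 1, size=(n_subjects, n_voxels))

# Define the "ROI" as the 100 voxels with the strongest effect in the CONTROL group
roi = np.argsort(controls.mean(axis=0))[-100:]

# The controls now look like they have a much bigger effect in the ROI,
# even though both groups share an identical true effect of 0.3
print("Control mean in ROI:", controls[:, roi].mean())   # inflated by the selection
print("Patient mean in ROI:", patients[:, roi].mean())   # close to the true 0.3
```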

So it's a decent study. But is this a "biomarker", or "endophenotype", as the title of the paper has it?

These are both hot topics in neuroscience at the moment. As the authors put it (emphasis mine):
An endophenotype is a heritable feature associated with a condition, present in affected individuals regardless of whether their condition is manifested, which co-segregates with the condition in families and which is present in unaffected family members at a higher rate than in the general population.

In such family members, endophenotypes represent instances in which genes associated with a particular condition exert measurable effects in individuals in whom they are insufficient to cause the condition itself...

The promise of characterizing endophenotypes lies in their hypothesized intermediate position between genotype and phenotype... the etiology of the endophenotype is likely to be correspondingly simpler: it can be said to be ‘closer to the level of gene action’.
The idea, in other words, is that if we can find a difference in the brains of people with autism, and their unaffected relatives who (presumably) share some of the same genes, we might have found a mechanism by which the genes ultimately cause the symptoms.

It might be easier, then, to find the genes for brain-not-lighting-up-to-happy-faces, than it will be to find genes for autism. Then once we've found those, we can use them to better understand autism.

My concern is that, while in theory endophenotypes seem "closer to the genetics" because they're "biological" rather than "behavioural", this is just a philosophical illusion based on the idea that the mind is not the brain.

We actually have no idea whether brain-not-lighting-up-to-happy-faces is closer to genetics than autistic behaviour. I'd say that our default assumption should be that everything is exactly the same "distance" from DNA, that is to say, everything is the product of complex interactions between genes and environment.

Some things are under the more or less exclusive control of a small number of genes, and these are called "genetic", but it's important not to assume that just because something's "in the brain", it's probably "more genetic" in this sense. The brain is a product of the environment as well.

If you scanned my brain while playing an audio recording of Urdu love poetry, not much would happen. I don't know Urdu. In someone who did speak Urdu, all kinds of language and emotional areas would light up. That doesn't mean Urdu-brain-response is genetic. It's exactly as genetic as speaking-Urdu, which isn't genetic.

Spencer, M., Holt, R., Chura, L., Suckling, J., Calder, A., Bullmore, E., & Baron-Cohen, S. (2011). A novel functional brain imaging endophenotype of autism: the neural response to facial expression of emotion. Translational Psychiatry, 1(7). DOI: 10.1038/tp.2011.18

Saturday, July 9, 2011

Depression: From Treatment to Diagnosis?

In theory, medicine works like this. You get some signs or symptoms. You go to the doctor, and depending on those, you get a diagnosis. Your doctor decides on the best available treatment on that basis.

The logic of this system depends upon the sequence. A diagnosis is meant to be an objective statement about the nature of your illness; treatments (if any) come afterwards. It would be odd if the treatments on offer influenced what diagnosis you got.

An interesting paper just out suggests that exactly this kind of reverse influence has happened. The authors looked at what happened in the USA in 2003 when antidepressants were slapped with a "black box" warning, cautioning against their use in children and adolescents, due to concerns over suicide in young people.

They used the data from the annual National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS). These record data on the number of patients visiting their doctor regarding different illnesses, and what medications were prescribed if any.

What happened? The warning led to a reduction in the use of antidepressants. No surprise there, but unexpectedly, this wasn't because teens who visited their doctor regarding depression were less likely to be given these drugs.

Actually, the proportion of depression visits that were also antidepressant visits was almost unchanged:
The proportion of depression visits with an antidepressant prescribed, having risen from 54% in 1998–1999 to 66% in 2002–2003, remained stable in 2004–2005 (65%) and in 2006–2007 (64%)
The difference was caused by a reduction in the number of teens getting diagnosed with depression - or rather, the number of visits where depression was mentioned; we can't tell if this meant doctors were less likely to diagnose, or patients were less likely to complain, or whatever.

This graph shows the story. After 2003, both antidepressant visits and depression visits fall, while the proportion of "antidepressant & depression" visits to the total depression visits (purple line), is constant.

The effect seen is just a correlation - it might have been a coincidence that all this happened after the black box warning in 2003. It seems very likely to be causal, though. Antidepressant use was rising steadily up until that point - and in adults, both depression and antidepressant visits rose after 2003.

It's also dangerous to pile too many heavy conclusions on the back of one study. But having said that -

Getting diagnosed with depression - at least if you're a teenager in the USA - is not just a function of having certain symptoms. The treatments on offer are a factor in determining whether you're diagnosed.

One alternative view is that the fall in depression visits reflects the fact that kids on antidepressants tend to have multiple visits - in order to monitor their progress, adjust dosage etc. So when antidepressant use fell, the number of visits fell. But if that were true, we'd presumably expect to see a fall in the proportion of visits that dealt with antidepressants, which we didn't.

This is disturbing either way you look at it. If you think the pre-2003 diagnoses were appropriate, then after 2003, kids must have been going undiagnosed with depression. On the other hand, if you think post-2003 was a welcome move away from over-diagnosis of depression, then pre-2003 must have been bad.

As to what happened to the kids who would have got a diagnosis of depression post-2003 were it not for the black box warning, we've got no way of knowing.


Why did this happen? Psychologist Abraham Maslow famously said "It's tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." The history of psychiatry bears this out.

Sigmund Freud's psychoanalysis was essentially the theory that most mental disturbance was a 'neurosis' or 'complex' of the kind that's best treated by lying on a couch and talking about your dreams and your childhood, which, as luck would have it, was exactly what Freud had just invented.

Along came psychiatric drugs, and suddenly everything was a 'chemical imbalance'. I've previously suggested that the invention of SSRI antidepressants, in particular, may have changed the concept of depression into one which was most amenable to treatment with SSRIs.

Recently, we're seeing the rise of the view that everything from psychosis to paedophilia is about 'cognitive biases' that can be treated by the latest treatment paradigm, CBT.

We always think we've hit the nail on the head.

Chen SY, & Toh S (2011). National trends in prescribing antidepressants before and after an FDA advisory on suicidality risk in youths. Psychiatric Services, 62(7), 727-33. PMID: 21724784

Wednesday, July 6, 2011

The Partly Asleep Brain

Some animals - such as dolphins and whales - are able to "sleep with half their brain". One side of the brain goes into sleep-mode activity while the other remains awake.


But a remarkable new study has revealed that something similar may happen in humans as well - every night.

The research used a combination of scalp EEG, and electrodes implanted inside the brain, to record brain activity from 5 people undergoing surgery to help cure severe epilepsy. The subjects were then allowed to go to sleep for the night, while recording took place.

As expected, after falling asleep, the EEG showed delta wave activity - strong, slow waves of electrical activity (0.5 to 4 Hz) which are typical of deep, dreamless "slow wave sleep".

However, the electrodes inside the brain told a different story. While they recorded delta waves most of the time, they also showed that there were episodes, lasting from a few seconds to up to 2 minutes, in which the motor cortex suddenly went into "waking mode". Delta waves disappeared, and were replaced with fast, unpredictable activity.

This image shows one episode, lasting just 5 seconds. The hotter the color, the more activity in a particular frequency. The higher the band, the higher the frequency. This shows a clear burst of high frequency activity in the motor cortex. The other parts of the brain showed the opposite effect - even stronger slow wave activity - at the same time.
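For the curious, this is roughly the kind of computation behind such a plot, in the sense of turning a raw signal into band power over time and flagging "local waking" episodes (a minimal sketch, not the authors' actual pipeline; the sampling rate and thresholds are placeholder assumptions):

```python
import numpy as np
from scipy.signal import spectrogram

def wake_like_epochs(lfp, fs):
    """Return times (seconds) where delta power (0.5-4 Hz) collapses while
    high-frequency power (25-80 Hz) rises, relative to the recording's median -
    a crude proxy for the local 'waking mode' episodes described above."""
    freqs, times, power = spectrogram(lfp, fs=fs, nperseg=int(2 * fs))
    delta = power[(freqs >= 0.5) & (freqs < 4)].mean(axis=0)
    fast = power[(freqs >= 25) & (freqs < 80)].mean(axis=0)
    flagged = (delta < 0.5 * np.median(delta)) & (fast > 2 * np.median(fast))
    return times[flagged]

# Example with synthetic data (placeholder sampling rate of 500 Hz)
rng = np.random.default_rng(1)
fake_lfp = rng.normal(size=60 * 500)       # 60 s of featureless noise
print(wake_like_epochs(fake_lfp, fs=500))  # almost certainly empty: noise has no such episodes
```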

Another area, the dorsolateral prefrontal cortex, also showed this phenomenon occasionally, but it was much less common than in the motor cortex.

There are a few caveats. These patients had severe epilepsy, and they were taking anti-convulsant drugs. This wouldn't obviously create the effects seen here, but we can't rule it out. Still, these results are intriguing.

They challenge the view of slow wave sleep as a "whole brain" phenomenon. We've known for a while that this isn't true of animals, and in people with certain sleep disorders, but this is the first demonstration in healthy humans.

It may help to explain the mysterious fact that, although slow wave sleep is often referred to as "dreamless", there are consistent reports that people woken up from this phase of sleep do report dreaming (or at least thinking) about things.

While episodic arousal of the motor cortex probably wouldn't explain this per se, if the same thing happens in the visual cortex or other sensory areas, it might create dreams.

Nobili L, Ferrara M, Moroni F, De Gennaro L, Russo GL, Campus C, Cardinale F, & De Carli F (2011). Dissociated wake-like and sleep-like electro-cortical activity during sleep. NeuroImage. PMID: 21718789

Autism Isn't Very Genetic...Or Is It?

The environment is more important than genetics in setting the risk for autism, according to a new study that's got the media in a tizzy.

The paper, which is free, is here: Genetic Heritability and Shared Environmental Factors Among Twin Pairs With Autism

It's a twin study, and like all such research, it aims to estimate heritability, the proportion of the variability in autism risk caused by straightforward genetic effects. A heritability of 0% means no genetics and 100% means purely genetic. Note, however, that complex gene-gene interactions, epigenetics, and gene-environment interactions would throw the whole thing off.

Twin studies rely on the fact that there are two kinds of twins. Identical, or monozygotic (MZ), pairs have identical DNA, while dizygotic (DZ) twins are no more alike than any other brothers or sisters, genetically. So MZ twins ought to be more alike than DZ twins (have a higher "concordance"), and the size of the MZ-DZ difference is a measure of heritability.
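To see how that logic cashes out, here's a minimal sketch of the classic Falconer/ACE approximation (the paper itself fits a more sophisticated liability-threshold model, and the twin correlations below are invented purely to illustrate the two patterns discussed below):

```python
def ace_from_twin_correlations(r_mz, r_dz):
    """Falconer-style ACE decomposition from twin correlations:
    r_MZ = a2 + c2 and r_DZ = 0.5*a2 + c2, so..."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic variance ("heritability")
    c2 = 2 * r_dz - r_mz     # shared (family) environment
    e2 = 1 - r_mz            # unique environment plus measurement error
    return a2, c2, e2

# Invented correlations, purely to show the two patterns:
print(ace_from_twin_correlations(0.90, 0.45))  # old-study-like pattern -> ~ (0.9, 0.0, 0.1)
print(ace_from_twin_correlations(0.95, 0.75))  # new-study-like pattern -> ~ (0.4, 0.55, 0.05)
```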

There have been several previous twin studies of autism, and they've tended to find a heritability of around 90%, with high MZ concordance and very low DZ. However, these tended to be small and used outdated methods of diagnosis.

The new study used California records to find all twin pairs, born in the state between 1987 and 2004, where at least one of the twins had a diagnosis of autism on the DDS register of people receiving state services for developmental disorders.

They found 1156 twin pairs. Of these, they managed to recruit and get full data from 202 pairs. They gave all these 404 kids full autism diagnostic assessments. This is not a great response rate. Parents of responders tended to be slightly better educated and more likely to be white than the non-responders.

Here's the key data: concordance was higher in MZ twins, but not by nearly as much as previous studies would predict. Putting these data into a statistical model, assuming a baseline rate of autism of 1% in boys and 0.3% in girls, found that the most likely explanation was a heritability of about 35-40% and an effect of "shared environment", i.e. family factors, of 55-60%.

So. Autism's not very genetic?

Maybe. This is certainly a major study and all autism researchers need to take note. But there's some caveats.

My major concern is that the DZ concordance might be too high, because a kid might be more likely to get diagnosed with autism if their twin already had a diagnosis. Suppose you're a parent and one of your twins is diagnosed - of course you're going to worry about the other one, and start thinking, are they really so different?

Although all the people in this study were (re)assessed for study purposes, the diagnostic instruments are hardly immune to the effects of prior diagnosis. The ADI interview is based on parental report of early childhood behaviour. Parents know whether the other twin has autism. The other interview, the ADOS, is based on direct observation of the patient, so it might avoid this - but you have to score on the ADI to get a diagnosis.

This, by itself, wouldn't explain the discrepancy between these data and older twin studies. But we also know that diagnoses of autism in general have skyrocketed recently. People seem to be becoming more willing to accept that diagnosis, and more aware of the symptoms. So it's quite possible that some of the "unaffected" twins from older studies would get a diagnosis today if they were to have the kind of modern, formal assessment done in this study.

This doesn't mean that the new study is wrong. If this explanation is true, then the study is quite right - there is a strong shared environmental influence on autism diagnosis. But not necessarily on autism.

One reason to suspect that this is going on - and this is purely a hunch - is that the estimate of shared environmental influence, i.e. family environment, was 55%. This is exceptionally high, because almost every other human disorder or trait for which twin studies have been done has shown low shared environmental effects, and high individual environmental effects (smoking, alcoholism, anxiety, depression). In fact people have written books about this.

Maybe autism's different. Yet I'm more willing to accept that autism diagnosis is different.

A related, but separate, point: it's very likely that some autism is more genetic than others. In particular we know that some cases are caused by single genetic variants, and these tend to be severe with associated low IQ and sometimes other abnormalities; this is sometimes called "syndromic" autism.

It's always easier to spot a severe case than a mild one. So it's quite possible that older studies had a higher proportion of these cases, because the diagnostic system was only able to pick up those ones. Maybe in more recent times, as diagnosis has expanded, "autism" is coming to cover a "less genetic" set of things.

The good thing about these data is that they span births from 1987 to 2004. So it would be possible to check this theory by looking to see whether the early data i.e. the older twins, have a higher heritability.

Finally, Michelle Dawson pointed out on Twitter that there's another large twin study from Wisconsin, as yet unpublished but presented at a conference. They found broadly comparable results.

Hallmayer J, et al. (2011). Genetic Heritability and Shared Environmental Factors Among Twin Pairs With Autism. Archives of General Psychiatry.

Tuesday, July 5, 2011

Melancholia In 100 Words


The British Journal of Psychiatry has a regular series called "In 100 Words", which produces some gems. This month they have Melancholia in 100 Words, featuring perhaps the most influential musician you haven't heard of, Robert Johnson.
I got stones in my pathway/And my road seems dark at night/I have pains in my heart/They have taken my appetite.

Robert Johnson, known as the King of the Delta blues singers, distilled into these lines the essence of severe depressive illness – somatic ills, fear and suspicion, emotional and physical pain, nocturnal troubles and struggle against obstacles. The words are one with the powerful, haunting music. ICD-10 and DSM-IV have their place, but poets have often been there before us, and done a better job. We can all learn from Robert Johnson, born just 100 years ago.
I've previously written about the blues and what shade of blue they were talking about, here. But this actually isn't the first Melancholia in 100 Words to appear in the BJP. Here's another one, from 2009:

Melancholia is a classical episodic depressive disorder that combines mood, psychomotor, cognitive and vegetative components with high suicide risk. In the present psychiatric classification it is buried as a modifier in both bipolar and unipolar depressions. It is hardly used to characterise patients in the clinic or research.

The syndrome is frequently recognised in delusional and agitated depression, and in the elderly. Cortisol or sleep EEG abnormalities are prognostically helpful. Melancholia is particularly responsive to tricyclic antidepressants and electroconvulsive therapy but not to selective serotonin reuptake inhibitors or psychotherapy. Recognising melancholia as a distinct disorder improves clinical care and research.

Monday, July 4, 2011

Gamma Waves: The Brain's Clock, Or Neural Noise?

Gamma waves are very hot at the moment.


Gamma band activity is a term for electrical oscillations recorded from the brain that have a frequency of over 25 Hz. In most brains, a peak frequency of about 40 Hz is seen. This makes gamma waves the fastest brain waves.

If you believe some recent claims, gamma waves are the answer to all the mysteries of life and the universe. They're said to underlie the symptoms of schizophrenia and autism, and they've been invoked to answer deep questions such as the binding problem and maybe consciousness itself. You can even buy a Nintendo game that promises to boost them.

A new paper from Burns et al casts doubt on all of these grand claims. Gamma-based theories of brain function all assume that gamma waves act a bit like a clock, with a consistent rhythm of about 40 Hz. Activity of about 40 Hz is indeed observed in brain recordings but is that just because the brain is randomly generating all kinds of signals, and only the 40 Hz ones "get through"?

To put it another way, imagine that you got a letter in the mail at 9 am every morning. That could be because someone is sending you one letter each day like clockwork. But it could also be that loads of people are sending you letters at random times, and your mailman only has room in his sack to deliver one each morning.

Here's the key data, recorded using electrodes implanted into the brains of two male macaque monkeys:


This shows that the monkey data closely resemble what you'd expect if gamma activity were filtered noise, and are not what you'd see if it were a more meaningful "clock". The "triangle" on the graph shows the number of bursts of a given frequency and duration.
The data also show that the phase of the gamma activity isn't consistent, which it would be if it were clocklike. In fact, the phases change entirely randomly.

So if gamma is just "filtered noise", what's the "filter"? Why 40 Hz, not 80 or 4000? Probably because this is just the maximum frequency at which neurons can fire. It takes a certain finite amount of time for cells to communicate with each other: a silicon chip can get a clock speed of many billions of hertz, but a cell just physically can't.
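To see how a gamma-band "rhythm" can fall out of noise plus a filter, here's a toy simulation (my own sketch, not the authors' analysis; the band edges are just placeholders): band-pass-filter white noise and its spectrum shows a peak in the gamma range, even though the input contains no rhythm at all.

```python
import numpy as np
from scipy.signal import butter, filtfilt, welch

fs = 1000                                # sampling rate in Hz (arbitrary)
rng = np.random.default_rng(0)
noise = rng.normal(size=10 * fs)         # 10 s of white noise: no rhythm in the input

# A band-pass filter standing in for whichever frequencies the circuit lets through
b, a = butter(4, [35, 45], btype="bandpass", fs=fs)
gamma_like = filtfilt(b, a, noise)

freqs, power = welch(gamma_like, fs=fs, nperseg=2 * fs)
print(f"Spectral peak at {freqs[np.argmax(power)]:.0f} Hz")  # lands in the gamma band
```

The filtered output looks like intermittent ~40 Hz bursts with randomly drifting phase - which is the signature the authors say they see in the monkey data.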

There's a catch, though. These monkeys were asleep, anaesthetized with the powerful opiate sufentanil. This is a good choice of drug: unlike most other sedatives and anaesthetics, you wouldn't expect an opiate to directly affect gamma oscillations. But still. If you believe that coherent gamma waves are the key to high-level conscious experience, as many do, you might not expect to see much of that in the primary visual cortex of sleeping animals.

However, this is clearly a very important issue, and it's not the first gamma-skeptic paper. In 2008, Yuval-Greenberg et al reported that many attempts to measure gamma activity using EEG were contaminated by electrical activity from scalp muscles. Rather than coming from the brain, the "gamma" activity reflected nothing more than tiny eye movements. The implications are still being debated.

This paper attacks the gamma hypothesis from a completely different angle, saying that even the "real" gamma in the brain, may be nothing more interesting than filtered noise.

Burns SP, Xing D, & Shapley RM (2011). Is gamma-band activity in the local field potential of V1 cortex a "clock" or filtered noise? The Journal of Neuroscience, 31(26), 9658-64. PMID: 21715631

Sunday, July 3, 2011

The NeuROFLscience of Jokes

A new paper in the Journal of Neuroscience investigates the neural basis of humour: Why Clowns Taste Funny.

The authors note that some things are funny because of ambiguous words. For example:
Q: Why don’t cannibals eat clowns?
A: Because they taste funny!
Previous studies, apparently, have shown that these kinds of jokes lead to activation in the lIFG (left inferior frontal gyrus), although it's also involved in processing ambiguity that's not funny, and indeed, language in general.

In this study they scanned people with fMRI and played them audio clips of sentences that were either funny or not, and that either contained ambiguity or not. Examples of non-funny ambiguity included crackers like this:
Q: What happened to the post?
A: As usual, it was given to the best-qualified applicant.

They found that, relative to straightforward ones, ambiguous sentences led to increased activation in two areas, the lIFG and also the left ITG. That fits with previous work.

By contrast, funny stimuli, whether ambiguous or not, sent the brain into overdrive, with humour causing activation all over a wide range of hilarious areas such as the amygdala, ventral striatum, hypothalamus, temporal lobes and more.

Many of these areas are known to be involved in emotion and pleasure, although some are fairly random such as visual area BA19.
There were strong associations between BOLD signal change and funniness in the midbrain, the left ventral striatum, and the left anterior and posterior IFG.
The problem is, like so many neuroimaging studies, it's not clear what this adds to our understanding of the topic. All this really shows is that linguistic ambiguity activates language areas, and enjoyable stimuli activate pleasure areas (amongst many others); it doesn't tell us why some things are funny.

So more research is needed, and future neuro-humour studies will need a new set of neuro-jokes in order to maximize the laughs. Here's a few I came up with:

Q: Why did the chicken cross the road?
A: Because of activation in the motor cortex, causing muscle contractions in his legs.

Q: What neuroimaging methodology is most useful for studying the brains of cats and dogs?
A: PET scanning.

Knock knock.
Who's there?
John.
I doubt that. The 'self' is an illusion. The concept of 'John' as an individual is incompatible with modern neuroscience.

Bekinschtein TA, Davis MH, Rodd JM, & Owen AM (2011). Why Clowns Taste Funny: The Relationship between Humor and Semantic Ambiguity. The Journal of Neuroscience, 31(26), 9665-71. PMID: 21715632
