
Sunday, August 21, 2011

Is Sleep Brain Defragmentation?

After a period of heavy use, hard disks tend to get 'fragmented'. Data gets written all over random parts of the disk, and it gets inefficient to keep track of it all.





That's why you need to run a defragmentation program occasionally. Ideally, you do this overnight, while you're asleep, so it doesn't stop you from using the computer.
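The analogy can be made concrete with a toy model: treat the "disk" as a list of blocks, each tagged with the file it belongs to, and defragmentation as rewriting the disk so each file's blocks sit together. All the names here are invented for illustration; real defragmenters are vastly more complicated.

```python
# Toy model of fragmentation: a "disk" is a list of blocks, each labelled
# with the file it belongs to. A fragmented disk has one file's blocks
# scattered among another's.

def fragmentation(disk):
    """Count adjacent block pairs belonging to different files."""
    return sum(1 for a, b in zip(disk, disk[1:]) if a != b)

def defragment(disk):
    """Group each file's blocks together, keeping every block."""
    return sorted(disk)

disk = ["A", "B", "A", "C", "B", "A", "C"]
print(fragmentation(disk))               # 6 boundaries: badly scattered
compacted = defragment(disk)
print(fragmentation(compacted))          # 2 boundaries: each file contiguous
assert sorted(disk) == sorted(compacted)  # reorganized, but no data lost
```

The key point for the analogy is the last line: defragmentation changes the layout, not the content.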



A new paper from some Stanford neuroscientists argues that the function of sleep is to reorganize neural connections - a bit like a disk defrag for the brain - although it's also a bit like compressing files to make more room, and a bit like a system reset: Synaptic plasticity in sleep: learning, homeostasis and disease



The basic idea is simple. While you're awake, you're having experiences, and your brain is forming memories. Memory formation involves a process called long-term potentiation (LTP) which is essentially the strengthening of synaptic connections between nerve cells.



Yet if LTP is strengthening synapses, and we're learning all our lives, wouldn't the synapses eventually hit a limit? Couldn't they max out, so that they could never get any stronger?



Worse, the synapses that strengthen during memory formation are primarily glutamate synapses - and these are dangerous. Glutamate is a common neurotransmitter, and it's even used as a flavouring, but in excess it's a toxin.



Too much glutamate damages the very cells that receive the messages. Rather like how sound is useful for communication, but stand next to a pneumatic drill for an hour, and you'll go deaf.



So, if our brains were constantly forming stronger glutamate synapses, we might eventually run into serious problems. This is why we sleep, according to the new paper. Indeed, sleep deprivation is harmful to health, and this theory would explain why.





The authors argue that during deep, dreamless slow-wave sleep (SWS), the brain is essentially removing the "extra" synaptic strength formed during the previous day. But it does so in a way that preserves the memories. A bit like how defragmentation reorganizes the hard disk to increase efficiency, without losing data.



One possible mechanism is 'synaptic scaling'. When some of the inputs onto a given cell become stronger, all of the synapses on that cell could weaken. This would preserve the relative strength of the different inputs while keeping the total inputs constant. It's known that synaptic scaling happens in the brain, although it's not clear whether it has anything to do with sleep.
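The arithmetic of synaptic scaling is easy to sketch. This is a minimal illustration, assuming purely multiplicative scaling toward a fixed set point (a simplification of the real biology): when learning strengthens one input, every synapse on the cell is rescaled so that the total returns to baseline, while the ratios between inputs are preserved.

```python
# A minimal sketch of multiplicative synaptic scaling. Weights represent
# the strengths of the inputs onto one cell; the set point is the cell's
# baseline total input.

def scale_synapses(weights, target_total):
    factor = target_total / sum(weights)
    return [w * factor for w in weights]

baseline  = [1.0, 1.0, 2.0]   # total input = 4.0
after_ltp = [1.0, 3.0, 2.0]   # one input potentiated; total now 6.0

rescaled = scale_synapses(after_ltp, target_total=4.0)
print(rescaled)   # roughly [0.667, 2.0, 1.333]

# Relative strengths survive: the potentiated synapse is still three
# times the weakest one, but the total input is back at baseline.
assert abs(sum(rescaled) - 4.0) < 1e-9
```

That is the sense in which scaling could erase the day's "extra" strength without erasing what was learned: the memory lives in the ratios, not the totals.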



There are other theories of the restorative function of sleep, but this one seems pretty plausible. It stands in contrast to the idea that sleep is purely a form of inactivity designed to save energy, rather than being important in itself.



What this paper doesn't explain, and doesn't try to, is dreaming REM sleep, which is very different from slow-wave sleep. REM is not required for life so long as you get SWS; some animals lack REM entirely, but all of them have SWS, although in some species only one side of the brain has it at a time.



So it makes sense, but what's the evidence? There's quite a bit, but it all comes from very simple animals, like flies and fish.



The paper's figures show that, in various parts of the brain of the fruit fly, measures of synaptic strength are increased in flies that have been awake for some time, compared to recently rested ones. In general, synapses strengthen during waking and then return to baseline during sleep.



There's similar evidence from fish. But the authors admit that no-one has yet shown that the same is true of any mammals - let alone humans.



I'd say that this is important, because the fly brain is literally a million times smaller than ours. Synaptic overgrowth could be a more serious problem for a fly because they just have fewer neurons to play with. Sleep may have evolved to prune extra connections in primitive brains, and then shifted to playing a very different role in ours.



Wang G, Grone B, Colas D, Appelbaum L, & Mourrain P (2011). Synaptic plasticity in sleep: learning, homeostasis and disease. Trends in Neurosciences. PMID: 21840068


Thursday, August 4, 2011

Brain-Modifying Drugs

What if there was a drug that didn't just affect the levels of chemicals in your brain, it turned off genes in your brain? That possibility - either exciting or sinister depending on how you look at it - could be remarkably close, according to a report just out from a Spanish group.

The authors took an antidepressant, sertraline, and chemically welded it to a small interfering RNA (siRNA). A siRNA is kind of like a pair of genetic handcuffs. It selectively blocks the expression of a particular gene, by binding to and interfering with RNA messengers. In this case, the target was the serotonin 5HT1A receptor.

The authors injected their molecule into the brains of some mice. The sertraline was there to target the siRNA at specific cell types. Sertraline works by binding to and blocking the serotonin transporter (SERT), and this is only expressed on cells that release serotonin; so only these cells were subject to the 5HT1A silencing.

The idea is that this receptor acts as a kind of automatic off-switch for these cells, making them reduce their firing in response to their own output, to keep them from firing too fast. There's a theory that this feedback can be a bad thing, because it stops antidepressants from being able to boost serotonin levels very much, although this is debated.

Anyway, it worked. The treated mice showed a strong and selective reduction in the density of the 5HT1A receptor in the target area (the Raphe nuclei containing serotonin cells), but not in the rest of the brain.

Note that this isn't genetic modification as such. The gene wasn't deleted, just silenced (temporarily, one hopes); the effect persisted for at least 3 days, but they didn't investigate exactly how long it lasted.

That's remarkable enough, but what's more, it also worked when they administered the drug via the intranasal route. In many siRNA experiments, the payload is injected directly into the brain. That's fine for lab mice, but not very practical for humans. Intranasal administration, however, is popular and easy.

So siRNA-sertraline, and who knows what other drugs built along these lines, may be closer to being ready for human consumption than anyone would have predicted. However... the mouse's brain is a lot closer to its nose than the human brain is, so it might not go quite as smoothly.

The mind boggles at the potential. If you could selectively alter the gene expression of specific neurons, you could do things to the brain that are currently impossible. Existing drugs hit the whole brain, yet there are many reasons why you'd prefer to affect only certain areas. And editing gene expression would allow much more detailed control over those cells than is currently possible.

Currently available drugs are shotguns and sledgehammers. These approaches could provide sniper rifles and scalpels. But whether it will prove safe remains to be seen. I certainly wouldn't want to be the first one to snort this particular drug.

Bortolozzi, A., Castañé, A., Semakova, J., Santana, N., Alvarado, G., Cortés, R., Ferrés-Coy, A., Fernández, G., Carmona, M., Toth, M., Perales, J., Montefeltro, A., & Artigas, F. (2011). Selective siRNA-mediated suppression of 5-HT1A autoreceptors evokes strong anti-depressant-like effects. Molecular Psychiatry. DOI: 10.1038/mp.2011.92


Monday, July 25, 2011

Ban These Sick Ape-Man Frankensteins

According to a new report, urgent action is required to stop scientists creating a monstrous race of apes with fully functional human brains (just as Christine O'Donnell warned us about those mice), thus causing Planet Of The Apes to come true.

OK, that's not quite what the Academy of Medical Sciences said. But judging from most of the media coverage, you might think it was.

The report is actually about "Animals containing human material" and it notes that under British law, experiments of this kind are covered by generic animal research rules, but there are no special animal-human regulations.

Should there be?

I think there should be. We as a society allow experiments on animals or animal embryos that we don't allow on humans, or even on human embryos. Clearly, we need to decide what we're going to do about organisms that have both human and animal DNA, or whatever. This doesn't mean restricting it - clarifying the rules could also facilitate such research, by making it explicit what is allowed.

However, we should tread carefully here. This is an area where our intuitions can lead us astray.

Although we have absolutely no idea how to make an animal-human "hybrid", or even whether it's possible at all, the very idea of it has many people worried. It's probably a case of the uncanny valley and lots of cultural baggage (Planet of the Apes et al).

So, for whatever reason, we have a hang-up about making monstrous ape-men. Fair enough. So long as we remember that this is entirely hypothetical, and that it might, for all we know, be literally impossible.

Yet other things in this debate are very real. Over-zealous regulation of research could easily end up delaying, say, a cure for Alzheimer's by 10 years. That would be dooming tens of millions of people to suffering and death.

The problem is, that's hard to picture. It's hard to imagine how bad Alzheimer's is unless you have personal experience. Even if you do, it's hard to multiply that badness by ten million anonymous, hypothetical people. "One ape-man is a tragedy; a million deaths is a statistic".

Delaying science is easy to do (for politicians), and hard to picture why it's bad. Whereas "a monstrous ape-man" is the exact opposite. Easy to imagine - just look at the media interest in this story - yet nowhere close to being reality.

This is a problem. The human mind and the way we think about these issues is a problem. Even when that mind is safely inside a nice normal human skull.


Thursday, July 21, 2011

What Did Marc Hauser Do?

Marc Hauser, the cognitive psychologist who's been under scrutiny over a case of scientific misconduct since August last year (see past posts), has resigned from Harvard University.


He'd already been suspended from teaching, but until this announcement, it looked as though he might be able to hang on and resume his research, which focussed on the evolution of language and morality. Not any more. Hauser says he's quitting the field that made him famous:
“While on leave over the past year, I have begun doing some extremely interesting and rewarding work focusing on the educational needs of at-risk teenagers. I have also been offered some exciting opportunities in the private sector,” Hauser wrote in a resignation letter to the dean, dated July 7. “While I may return to teaching and research in the years to come, I look forward to focusing my energies in the coming year on these new and interesting challenges.”
So that's the end of the Hauser controversy, then?

Not really. The problem is, we still don't know what actually happened. It's hard for anyone to draw a line under this and move on, as Hauser seems to be doing.

Harvard have been reluctant to reveal any more than the barest details of the case. When the allegations first appeared, they set up an internal investigation. In August 2010 this concluded that Hauser was "solely responsible" for 8 cases of scientific misconduct.

But no-one - outside Harvard's investigative committee - knows what they were. He's been found guilty, and he's been punished, but no-one knows the crimes or the evidence against him.

Am I alone in finding this situation unsatisfactory?

Marc Hauser has published hundreds of scientific papers as well as various books. Only a small number of papers were implicated in the misconduct allegations. But to scientifically evaluate the rest of Hauser's work, we need to know what happened - and how easy the misconduct was to detect.

It makes a big difference, for example, whether the misconduct was the kind of thing that could have been going on, leaving no trace, for many years prior to this.

The lack of firm facts has led to discussion of the case being dominated by rumours and speculation. In October last year, for example, a newspaper published an article claiming that the case against Hauser might not be as strong as it first seemed.

This led to a rebuttal by Gerry Altmann, then Editor of Cognition, a journal from which Hauser retracted a paper. Altmann said that based on the information he had, Hauser was indeed guilty. But he admitted that he was going on what the Harvard investigation told him; he had not had access to the full data.

When Harvard found Hauser guilty, the Dean of his Faculty justified their secrecy:
The work of the investigating committee as well as its final report are considered confidential to protect both the individuals who made the allegations and those who assisted in the investigation.

Our investigative process will not succeed if individuals do not have complete confidence that their identities can be protected throughout the process and after the findings are reported to the appropriate agencies.

Furthermore, when the allegations concern research involving federal funding, funding agency regulations govern our processes ... For example, federal regulations impose an ongoing obligation to protect the identities of those who provided assistance to the investigation.
However, while this is certainly important, I don't see why it would prevent Harvard from releasing the conclusions of the report. They don't need to name the people who gave evidence against Hauser - but they do need to spell out what he did, and what they think he didn't do, so that the scientific community can come to their own conclusions as to the validity of the rest of Hauser's work.

In his letter, the Dean closed by saying that Harvard were going to
form a faculty committee this fall to reaffirm or recommend changes to the communication and confidentiality practices associated with the conclusion of cases involving allegations of professional misconduct.
I hope so.


Thursday, July 14, 2011

New Brain Cells: Torrent, or Trickle?

An important paper just out asks, Could adult hippocampal neurogenesis be relevant for human behavior?

Neuroscientists, and the media, are very excited by hippocampal neurogenesis - the ongoing creation of new neurons in an area called the dentate gyrus of the hippocampus. This is because it was thought, for a long time, that no new neurons were created in the adult brain. It turned out that this was wrong.

There's lots of exciting suggestive evidence that the process is involved in learning and memory, responses to stress, depression, and the action of antidepressants, to name just a few, although this is controversial.

However, there's a big question which has rarely been considered: how much neurogenesis are we talking about? Are there enough new cells that it would be realistic for them to be doing important stuff, or is it just a little trickle?
The most common source of skepticism toward a functional role for adult neurogenesis is the perception that too few new neurons are added in adulthood to have a significant impact. Interestingly, this concern, while valid, is usually raised informally and rarely in the scientific literature. Very few studies have addressed this issue...
The new paper reviews the evidence. Firstly, they point out that in the hippocampus, there's a group of cells called dentate gyrus granule cells which are unusual in that activity in just a few of these cells can have big downstream consequences. And these are the cells that newborn neurons turn into.
Each granule cell contacts only 10–15 CA3 pyramidal cells...a single granule cell is able to trigger firing in downstream CA3 targets...Because of this “detonator” action...a single granule neuron can potentially have a large impact despite representing only a tiny fraction of the population.
So new cells may play an important role. But exactly how many are there? They re-analyze data from their own lab in rats and, making a few assumptions, arrive at a rough estimate: in 3-month-old rats, there are about 650,000 "young" cells less than 8 weeks old; even in 2-year-old rats (ancient, for a rat), there are about 50,000.

This is enough to have a big impact downstream:
Since there are approximately 500,000 CA3 pyramidal cells, and each granule cell contacts 11–15 pyramidal cells, this suggests that even in the oldest animals, each CA3 pyramidal cell could receive a direct contact from a young granule cell
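That back-of-envelope claim is easy to check with the numbers quoted above (the uniform-wiring assumption, spreading contacts evenly across CA3 cells, is my simplification, not the paper's):

```python
# Numbers from the estimates quoted above, for the oldest (2-year-old) rats.
young_granule_cells = 50_000    # granule cells under 8 weeks old
contacts_per_granule = 11       # lower bound of the 11-15 CA3 contacts each
ca3_pyramidal_cells = 500_000

# Average young-granule-cell contacts per CA3 pyramidal cell, assuming
# contacts are spread uniformly across the CA3 population:
contacts_per_ca3 = young_granule_cells * contacts_per_granule / ca3_pyramidal_cells
print(contacts_per_ca3)   # 1.1
```

So even with the most conservative figures, the average CA3 pyramidal cell comes out with about one contact from a young granule cell, which is what the quote is claiming.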
That's all in rats, though. What about humans? It's hard to tell. The problem is that the best way to assess the rate of neurogenesis is to inject a drug called BrdU and then study the brain post-mortem. Unfortunately, this drug can cause cancer so you can't just give it to people for the purposes of science. The only time it's used in humans is (ironically) to help detect cancer.

However, one study did manage to look at BrdU staining in the hippocampus, using people who'd been injected with BrdU for cancer (not brain cancer) and then died. This study found, the authors say, rates of neurogenesis at least as high as in rats, considering the low dose of BrdU and the fact that the patients were old and stressed (by having cancer).

They admit that this is just one study, and comparing doses between rats and humans is inexact. They nonetheless conclude:
Are these numbers potentially sufficient to exert a functional impact in humans? We feel that the answer to this question is an overwhelming "yes".
ResearchBlogging.orgSnyder JS, & Cameron HA (2011). Could adult hippocampal neurogenesis be relevant for human behavior? Behavioural brain research PMID: 21736900

New Brain Cells: Torrent, or Trickle?

An important paper just out asks, Could adult hippocampal neurogenesis be relevant for human behavior?

Neuroscientists, and the media, are very excited by hippocampal neurogenesis - the ongoing creation of new neurons in an area called the dentate gyrus of the hippocampus. This is because it was thought, for a long time, that no new neurons were created in the adult brain. It turned out that this was wrong.

There's lots of exciting suggestive evidence that the process is involved in learning and memory, responses to stress, depression, and the action of antidepressants, to name just a few, although this is controversial.

However, there's a big question which has rarely been considered: how much neurogenesis are we talking about? Are there enough new cells that it would be realistic for them to be doing important stuff, or is it just a little trickle?
The most common source of skepticism toward a functional role for adult neurogenesis is the perception that too few new neurons are added in adulthood to have a significant impact. Interestingly, this concern, while valid, is usually raised informally and rarely in the scientific literature. Very few studies have addressed this issue...
The new paper reviews the evidence. Firstly, they point out that in the hippocampus, there's a group of cells called dentate gyrus granule cells, which are unusual in that activity in just a few of them can have big downstream consequences. And these are the cells that newborn neurons turn into.
Each granule cell contacts only 10–15 CA3 pyramidal cells...a single granule cell is able to trigger firing in downstream CA3 targets...Because of this “detonator” action...a single granule neuron can potentially have a large impact despite representing only a tiny fraction of the population.
So new cells may play an important role. But exactly how many are there? They re-analyze data from their own lab in rats and, making a few assumptions, arrive at the following rough estimate: in 3-month-old rats, there are 650,000 "young" cells less than 8 weeks old; even in 2-year-old rats (ancient, for a rat) there are 50,000.

This is enough to have a big impact downstream:
Since there are approximately 500,000 CA3 pyramidal cells, and each granule cell contacts 11–15 pyramidal cells, this suggests that even in the oldest animals, each CA3 pyramidal cell could receive a direct contact from a young granule cell
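The paper's arithmetic is easy to check. Here's a quick sketch, using the rat numbers quoted above (the 13 contacts per granule cell is just my midpoint of the 11-15 range):

```python
# Back-of-envelope check of the paper's estimates (illustrative only).
young_granule_cells_3mo = 650_000   # "young" cells (<8 weeks) in 3-month-old rats
young_granule_cells_2yr = 50_000    # same, in 2-year-old rats
ca3_pyramidal_cells = 500_000       # approximate CA3 population
contacts_per_granule = 13           # midpoint of the 11-15 range

for age, n_young in [("3 months", young_granule_cells_3mo),
                     ("2 years", young_granule_cells_2yr)]:
    # Total young-cell synaptic contacts, spread over the CA3 population
    per_ca3 = n_young * contacts_per_granule / ca3_pyramidal_cells
    print(f"{age}: ~{per_ca3:.1f} young-cell contacts per CA3 pyramidal cell")
# → 3 months: ~16.9 young-cell contacts per CA3 pyramidal cell
# → 2 years: ~1.3 young-cell contacts per CA3 pyramidal cell
```

So even the oldest rats average about one young-cell input per CA3 cell, which is exactly the paper's point.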
That's all in rats, though. What about humans? It's hard to tell. The problem is that the best way to assess the rate of neurogenesis is to inject a drug called BrdU and then study the brain post-mortem. Unfortunately, this drug can cause cancer so you can't just give it to people for the purposes of science. The only time it's used in humans is (ironically) to help detect cancer.

However, one study did manage to look at BrdU staining in the hippocampus, using people who'd been injected with BrdU for cancer (not brain cancer) and then died. This study found, the authors say, rates of neurogenesis at least as high as in rats - especially considering the low dose of BrdU, and the fact that the patients were old and stressed (by having cancer).

They admit that this is just one study, and comparing doses between rats and humans is inexact. They nonetheless conclude:
Are these numbers potentially sufficient to exert a functional impact in humans? We feel that the answer to this question is an overwhelming "yes".
Snyder JS, & Cameron HA (2011). Could adult hippocampal neurogenesis be relevant for human behavior? Behavioural Brain Research. PMID: 21736900

Wednesday, July 6, 2011

The Partly Asleep Brain

Some animals - such as dolphins and whales - are able to "sleep with half their brain". One side of the brain goes into sleep-mode activity while the other remains awake.


But a remarkable new study has revealed that something similar may happen in humans as well - every night.

The research used a combination of scalp EEG and electrodes implanted inside the brain to record brain activity from 5 people undergoing surgery to help cure severe epilepsy. The subjects were then allowed to go to sleep for the night, while recording took place.

As expected, after falling asleep, the EEG showed delta wave activity - strong, slow waves of electrical activity (0.5 to 4 Hz) which are typical of deep, dreamless "slow wave sleep".

However, the electrodes inside the brain told a different story. While they recorded delta waves most of the time, they also showed that there were episodes, lasting from a few seconds to up to 2 minutes, in which the motor cortex suddenly went into "waking mode". Delta waves disappeared, and were replaced with fast, unpredictable activity.
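To get a feel for how such episodes can be picked out of a recording, here's a toy sketch (not the authors' actual pipeline): compare power in the delta band (0.5-4 Hz) against power at higher frequencies, a ratio that collapses during a wake-like burst. All signals and parameters here are made up for illustration.

```python
import numpy as np

def band_power(signal, fs, lo, hi):
    """Power in the [lo, hi] Hz band, via a plain FFT periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].sum()

fs = 250                      # hypothetical sampling rate, Hz
t = np.arange(0, 2, 1 / fs)   # a 2-second analysis window
rng = np.random.default_rng(0)

# Synthetic "slow-wave sleep": a strong 1.5 Hz delta oscillation plus noise.
sws = np.sin(2 * np.pi * 1.5 * t) + 0.2 * rng.standard_normal(len(t))
# Synthetic "wake-like" burst: flat, fast, irregular activity.
wake = rng.standard_normal(len(t))

for label, sig in [("SWS-like", sws), ("wake-like", wake)]:
    ratio = band_power(sig, fs, 0.5, 4) / band_power(sig, fs, 20, 80)
    print(f"{label}: delta/fast power ratio = {ratio:.2f}")
```

The ratio comes out large for the delta-dominated window and well below 1 for the flat, fast one; sliding such a window along a night's recording is one crude way to flag wake-like episodes.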

This image shows one episode, lasting just 5 seconds. The hotter the color, the more activity in a particular frequency. The higher the band, the higher the frequency. This shows a clear burst of high frequency activity in the motor cortex. The other parts of the brain showed the opposite effect - even stronger slow wave activity - at the same time.

Another area, the dorsolateral prefrontal cortex, also showed this phenomenon occasionally, but it was much less common than in the motor cortex.

There are a few caveats. These patients had severe epilepsy, and they were taking anti-convulsant drugs. Neither of these would obviously create the effects seen here, but we can't rule it out. Still, these results are intriguing.

They challenge the view of slow wave sleep as a "whole brain" phenomenon. We've known for a while that this isn't true in animals, or in people with certain sleep disorders, but this is the first demonstration in healthy humans.

It may help to explain the mysterious fact that, although slow wave sleep is often referred to as "dreamless", there are consistent reports that people woken up from this phase of sleep do report dreaming (or at least thinking) about things.

While episodic arousal of the motor cortex probably wouldn't explain this per se, if the same thing happens in the visual cortex or other sensory areas, it might create dreams.

Nobili L, Ferrara M, Moroni F, De Gennaro L, Russo GL, Campus C, Cardinale F, & De Carli F (2011). Dissociated wake-like and sleep-like electro-cortical activity during sleep. NeuroImage. PMID: 21718789

Monday, July 4, 2011

Gamma Waves: The Brain's Clock, Or Neural Noise?

Gamma waves are very hot at the moment.


Gamma band activity is a term for electrical oscillations recorded from the brain that have a frequency of over 25 Hz. In most brains, a peak frequency of about 40 Hz is seen. This makes gamma waves the fastest brain waves.

If you believe some recent claims, gamma waves are the answer to all the mysteries of life and the universe. They're said to underlie the symptoms of schizophrenia and autism, and they've been invoked to answer deep questions such as the binding problem and maybe consciousness itself. You can even buy a Nintendo game that promises to boost them.

A new paper from Burns et al casts doubt on all of these grand claims. Gamma-based theories of brain function all assume that gamma waves act a bit like a clock, with a consistent rhythm of about 40 Hz. Activity of about 40 Hz is indeed observed in brain recordings, but is that just because the brain is randomly generating all kinds of signals, and only the 40 Hz ones "get through"?

To put it another way, imagine that you got a letter in the mail at 9 am every morning. That could be because someone is sending you one letter each day like clockwork. But it could also be that loads of people are sending you letters at random times, and your mailman only has room in his sack to deliver one each morning.

Here's the key data, recorded using electrodes implanted into the brains of two male macaque monkeys:


This shows that the monkey data closely resemble what you'd expect if gamma activity were filtered noise, and are not what you'd see if it were a more meaningful "clock". The "triangle" on the graph shows the number of bursts of a given frequency and duration.
The data also show that the phase of the gamma activity isn't consistent, which it would be if it were clocklike. In fact, the phases change entirely randomly.

So if gamma is just "filtered noise", what's the "filter"? Why 40 Hz, not 80 or 4000? Probably because this is just the maximum frequency at which neurons can fire. It takes a certain finite amount of time for cells to communicate with each other: a silicon chip can get a clock speed of many billions of hertz, but a cell just physically can't.
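The "filtered noise" idea is easy to simulate. In this minimal sketch (not the authors' model; the 25-50 Hz band and all other numbers are arbitrary), band-passing white noise yields something that oscillates near 40 Hz, even though nothing periodic went in:

```python
import numpy as np

rng = np.random.default_rng(1)
fs = 1000                       # sampling rate, Hz
n = fs * 10                     # 10 seconds of fake "recording"
noise = rng.standard_normal(n)  # pure white noise: no clock anywhere

# Band-pass by zeroing FFT bins outside a crude "physiological" gamma band.
spectrum = np.fft.rfft(noise)
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
spectrum[(freqs < 25) | (freqs > 50)] = 0
gamma_like = np.fft.irfft(spectrum, n)

# The dominant frequency of the filtered signal lands in the gamma band.
psd = np.abs(np.fft.rfft(gamma_like)) ** 2
peak_freq = freqs[np.argmax(psd)]
print(f"peak frequency: {peak_freq:.1f} Hz")
```

The spectral peak always lands somewhere in the 25-50 Hz band, but exactly where, and with what phase, depends entirely on the noise - gamma-looking output from a clock-free input, which is the skeptical reading of the monkey data.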

There's a catch, though. These monkeys were asleep, anaesthetized with the powerful opiate sufentanil. This is a good choice of drug: unlike most other sedatives and anaesthetics, you wouldn't expect an opiate to directly affect gamma oscillations. But still. If you believe that coherent gamma waves are the key to high-level conscious experience, as many do, you might not expect to see much of that in the primary visual cortex of sleeping animals anyway.

However, this is clearly a very important issue, and it's not the first gamma-skeptic paper. In 2008, Yuval-Greenberg et al reported that many attempts to measure gamma activity using EEG were contaminated by electrical activity from scalp muscles. Rather than coming from the brain, the "gamma" activity reflected nothing more than tiny eye movements. The implications are still being debated.

This paper attacks the gamma hypothesis from a completely different angle, saying that even the "real" gamma in the brain may be nothing more interesting than filtered noise.

Burns SP, Xing D, & Shapley RM (2011). Is gamma-band activity in the local field potential of V1 cortex a "clock" or filtered noise? The Journal of Neuroscience, 31 (26), 9658-64. PMID: 21715631

Friday, June 10, 2011

Do Pigs Get Autism?

What happens to a pig if it has a gene for autism?

There has been lots of research on mice that carry the same genes associated with autism in humans. Rats, and more recently monkeys, have been studied as well. But the possibility of autistic pigs has been strangely neglected by science.

A new paper might just change that: Characterization of porcine autism susceptibility candidate 2 as a candidate gene for the number of corpora lutea in pigs. The authors found that, in female pigs, variation in a certain gene affects the function of the ovaries.

The corpus luteum is a little yellow blob (technically speaking) in the ovary. Its job is to secrete progesterone. Women's ovaries grow a new one during every menstrual cycle, and it normally breaks down and disappears before the period. However, if you get pregnant, the corpus luteum sticks around and continues producing that hormone.

Pigs, like many animals, can have more than one of these per ovary and it turns out that one of the genes controlling the number is a homolog of the human gene AUTS2. AUTS2 mutations are linked to autism (hence the name), smoking and mental retardation. The authors of this paper found several variants in this gene in domestic pig populations, and they show that it's expressed in the pig ovary.

It's quite a long leap from porcine lady bits to autism, I would say, but it does make sense if you accept the Extreme Male Brain theory of autism. Boys are at least four times more likely to have autism than girls, and some say the masculinizing hormone testosterone may be the reason. This study fits with that, given that progesterone is a female hormone. Maybe mutations in the AUTS2 gene alter sex hormone production?

On the other hand, it might be a coincidence. AUTS2 is strongly expressed in the brain, as well as the ovaries. Maybe it's just required for cell function, and if it's mutated, cells stop working normally: whether they be in the brain, or the corpus luteum.

Either way, it would be interesting to see whether AUTS2 affects pig behaviour... but I'm not sure what an autistic pig would look like.

Sato S, Hayashi T, & Kobayashi E (2011). Characterization of porcine autism susceptibility candidate 2 as a candidate gene for the number of corpora lutea in pigs. Animal Reproduction Science. PMID: 21641132
