Showing posts with label genes. Show all posts

Thursday, May 27, 2010

Do Genes Remember?

Almost all neuroscientists believe that memories are stored in the connections between neurons: synapses. Learning, then, consists of the strengthening of some synapses, the weakening of others, and maybe even the formation of entirely new ones. But a paper from Courtney Miller and colleagues suggests that changes to DNA are also involved: Cortical DNA methylation maintains remote memory.


DNA is a series of bases, and fundamentally there are just four: C, A, T and G. However, the Cs (and, in some organisms, the As) can be methylated, i.e. modified by the addition of a simple methyl chemical group. They then stay that way until they get demethylated in the reverse process. Methylating a gene generally reduces its expression.

It's a bit like writing notes in pencil on top of a printed document: it doesn't change the underlying genetic sequence, but it's a semi-permanent change and it can be inherited by dividing cells. Methylation is a classic example of an epigenetic change, and epigenetics is very hot right now.

Miller et al found that learning induces the methylation of a gene called calcineurin (CaN) in the cells of the frontal cortex of rats. These changes appeared within 1 day of the learning event, and they persisted for at least 30 days (the longest time studied - they could well last much longer). Methylation of another gene, reelin, was also increased, but only for a few hours.

When they blocked these changes by injecting a DNA methylation inhibitor into the frontal cortex, it caused amnesia - even if the drug was given 30 days after the learning had taken place. In other words, the methylation inhibitors somehow erased the memory traces. These authors have previously reported that the same kind of learning causes a short-lived increase in methylation in the hippocampus. Taken together, these data fit with the well-known theory that memory traces start off being stored in the hippocampus and are then somehow transferred to the cortex later.

This kind of research has a bit of a history. The idea that memories are stored in DNA has led some to theorize that memories can be inherited. It also reminds me of the work of psychologist and Unabomber-victim James McConnell, who claimed that planarian worms can learn information by eating the ground-up remains of other worms who knew something...

These data are very interesting, but they don't imply anything quite so exciting. The pattern of methylation seemed entirely random (except in the sense that it was targeted at certain genes) - so rather than encoding information per se, the DNA changes were acting as a way of reducing CaN gene expression. Most likely, the reduction in CaN was limited to certain cells, and these were the cells that formed the connections that encoded the information.

Miller, C., Gavin, C., White, J., Parrish, R., Honasoge, A., Yancey, C., Rivera, I., Rubio, M., Rumbaugh, G., & Sweatt, J. (2010). Cortical DNA methylation maintains remote memory. Nature Neuroscience, 13(6), 664-666. DOI: 10.1038/nn.2560

Wednesday, April 21, 2010

Of Yeast and Men

Nature reports on the Dissection of genetically complex traits with extremely large pools of yeast segregants.


Ehrenreich et al have a new way of mapping the genetic basis of complex traits in yeast, "complex" being what geneticists call anything which isn't controlled by one single gene. They dub their approach "Extreme QTL mapping". This suggests images of geneticists running experiments atop Everest, or perhaps collecting blood samples from lions with their bare hands, but actually:
Extreme QTL mapping (X-QTL) has three key steps. The first is the generation of segregating populations of very large size. The second is selection-based phenotyping of these populations to recover large numbers of progeny with extreme trait values. This can be accomplished, for example, by selection for drug resistance or by cell sorting. The final step is quantitative measurement of pooled allele frequencies across the genome.
The basic idea is to cross breed two strains of yeast to generate lots of different hybrid strains each with a random selection of DNA from each "parent". Then, you put all the hybrids under some kind of selective pressure - for example, by adding the toxin 4-NQO to their dish.

Some yeast are more or less resistant to 4-NQO, and this trait is largely determined by genetics. So after a while, the vulnerable hybrids will die out and only the most highly resistant strains will be left in the 4-NQO dish to reproduce. It's a quick and dirty form of selective breeding. Finally, you can compare the genetics of the 4-NQO resistant hybrids to a control group of hybrids who didn't get any toxins, using a GWAS. Any genetic differences are likely to represent 4-NQO resistance genes.
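The logic can be sketched in a few lines of code - a toy simulation (my own illustration, with made-up survival rates, not numbers from the paper) of how selection shifts the allele frequency at a resistance locus in the treated pool:

```python
import random

random.seed(0)

# Toy model: each hybrid carries a resistant ("R") or sensitive ("S")
# allele at one locus, and "R" carriers survive the toxin far more often.
def make_pool(n, freq_r=0.5):
    return ["R" if random.random() < freq_r else "S" for _ in range(n)]

def select_survivors(pool, p_survive_r=0.9, p_survive_s=0.2):
    # Apply the selective pressure: each cell survives with a probability
    # set by its allele (rates are arbitrary, chosen for illustration).
    return [allele for allele in pool
            if random.random() < (p_survive_r if allele == "R" else p_survive_s)]

def freq(pool):
    return pool.count("R") / len(pool)

control = make_pool(100_000)           # no toxin: "R" frequency stays ~0.5
selected = select_survivors(control)   # toxin: mostly "R" carriers survive

print(f"control: {freq(control):.2f}, selected: {freq(selected):.2f}")
```

A locus where the selected pool's allele frequency departs sharply from the control pool's is a candidate resistance locus; X-QTL does this read-out across the whole genome at once by measuring pooled allele frequencies in the survivors.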

Using this method, Ehrenreich et al found no fewer than 14 4-NQO resistance variants. That includes two replications of previous findings, and 12 new ones. Collectively, the genes explained
59% of the phenotypic variance in 4-NQO sensitivity in an additive model. Because we measured the heritability of this trait to be 0.84, the loci explained 70% of the genetic variance, indicating that we have explained most of the genetic basis of this trait with the loci detected by X-QTL.
In other words, they've found most of the genes with a substantial effect on 4-NQO resistance, but not all of them. (They then did the same thing for several other toxins.) About 30% of the heritability is "missing". Compare that to most human complex traits, where the missing heritability is currently more like 95-99%. For example, twin studies and similar find human height to have a heritability of about 0.8, and more than 40 genetic variants have been associated with height, but together they only explain 5% of the heritability.
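The variance arithmetic above is worth spelling out (the 0.59 and 0.84 figures come from the quote; the rest follows from them):

```python
# The loci explain 59% of phenotypic variance; the trait's heritability
# is 0.84, so the fraction of *genetic* variance explained is the ratio.
phenotypic_variance_explained = 0.59
heritability = 0.84

genetic_variance_explained = phenotypic_variance_explained / heritability
missing = 1 - genetic_variance_explained

print(f"explained: {genetic_variance_explained:.0%}")  # 70%
print(f"missing:   {missing:.0%}")                     # 30%
```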

Why is Neuroskeptic posting about yeast? Well, partly because we live in a yeast-based society. Without yeast, we would have no alcoholic drinks. I think it's important to acknowledge their contribution to our lives. But mainly because there's a lesson here for people interested in the genetics of complex traits in humans, like, say, personality, IQ, and mental illness.

Yeast resistance to toxins is about the most straightforwardly "biological" trait you could imagine. Finding its genetic basis ought to be easy. But it wasn't. It was...extreme. Ehrenreich et al had to breed and select yeast with extreme traits (e.g. extremely high resistance to toxins), and compare them to control yeast of the same ancestry, to find the genes, and they still had a good deal of missing variance.

If they'd had to work on a random bunch of yeast from the wild, they'd have had a lot more trouble. That's why previous yeast GWAS studies didn't get results as good as these. Yet when it comes to humans, we're indeed forced to use a random bunch of people from the wild. You can't selectively breed people.

You can breed, say, mice, but it takes a lot longer than with yeast. I think there have been a few studies breeding mice for a certain trait and then looking at their genetics, but not with great success - even though the first thing every mouse researcher learns is that different strains of mice are very different (C57BL/6 mice, for example, are notoriously hard to handle and love biting people).

This is bad news for human genetics, where the interesting traits are clearly a lot more complex, ill-defined, and hard to measure than in yeast. On the other hand, though, it's perhaps also rather reassuring, as it suggests that our failure to explain more than a few % of the heritability so far reflects technical limitations, rather than meaning that these traits aren't as genetic as we think after all...

Ehrenreich IM, Torabi N, Jia Y, Kent J, Martis S, Shapiro JA, Gresham D, Caudy AA, & Kruglyak L (2010). Dissection of genetically complex traits with extremely large pools of yeast segregants. Nature, 464(7291), 1039-1042. PMID: 20393561

Tuesday, April 13, 2010

The Hunt for the Prozac Gene

One of the difficulties doctors face when prescribing antidepressants is that they're unpredictable.

One person might do well on a certain drug, but the next person might get no benefit from the exact same pills. Finding the right drug for each patient is often a matter of trying different ones until one works.

So a genetic test to work out whether a certain drug will help a particular person would be really useful. Not to mention really profitable for whoever patented it. Three recent papers, published in three major journals, all claim to have found genes that predict antidepressant response. Great! The problem is, they were different genes.

First up, American team Binder et al looked at about 200 variants in 10 genes involved in the corticosteroid stress response pathway. They found one, in a gene called CRHBP, that was significantly associated with poor response to the popular SSRI antidepressant citalopram (Celexa), using the large STAR*D project data set. But this was only true of African-Americans and Latinos, not whites.

Garriock et al used the exact same dataset, but they did a genome-wide association study (GWAS), which looks at variants across the whole genome, unlike Binder et al, who focussed on a small number of specific candidate genes. Sadly, no variants were statistically significantly correlated with response to citalopram, although in a GWAS the threshold for genome-wide significance is very strict due to multiple comparisons correction. Some were close to being significant, but they weren't obviously related to CRHBP, and most had nothing obvious to do with the brain.
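For a sense of why that threshold is so punishing, here's the standard back-of-the-envelope calculation (the one-million figure is a conventional order of magnitude for independent common variants, not a number from these papers):

```python
# Bonferroni-style correction: divide the usual alpha by the number of
# independent tests performed across the genome.
alpha = 0.05
n_variants = 1_000_000  # conventional order-of-magnitude assumption

genome_wide_threshold = alpha / n_variants
print(genome_wide_threshold)  # ~5e-08, the usual "genome-wide significant" cutoff
```

So a variant has to reach roughly p < 5 x 10^-8 before it counts, which is why "close to significant" hits are so common and so hard to interpret.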

Uher et al did another GWAS of response to escitalopram and nortriptyline in a different sample, the European GENDEP study. Escitalopram is extremely similar to citalopram, the drug in the STAR*D studies; nortriptyline however is very different. They found one genome-wide significant hit. A variant in a gene called UST was associated with response to nortriptyline, but not escitalopram. No variants were associated with response to escitalopram, although one in the gene IL11 was close. There were some other nearly-significant results, but they didn't overlap with either of the STAR*D studies.

Finally, one of the STAR*D studies found a variant significantly linked to tolerability (side effects) of citalopram. GENDEP didn't look at this.

*

The UST-nortriptyline link is the strongest finding here, but for citalopram / escitalopram, no consistent pharmacogenetic results emerged at all. What does this mean? Well, it's possible that there just aren't any genes for citalopram response, but that seems unlikely. Even if you believe that antidepressants only work as placebos, you'd expect there to be genes that alter placebo responses, or at the very least, genes that affect side effects and hence the active placebo improvement.

The thing is that the "antidepressant response" in these studies isn't really that: it's a mix of many factors. We know that a lot of the improvement would have happened even with placebo pills, so much of it isn't a pharmacological effect. There are probably genes associated with placebo improvement, but they might not be the same ones that are associated with drug improvement and a gene might even have opposite effects that cancel out (better drug effect, worse placebo effect). Some of the recorded improvement won't even be real improvement at all, just people saying they feel better because they know they're expected to.

If I were looking for the genes for SSRI response - not that I plan to - here's what I'd do. To stack the odds in my favour, I'd forget people with a moderate or partial response, and focus on those who either do really well on a certain drug, or get no benefit from it at all. I'd also want to exclude people who respond really well, but not due to the specific effects of the drug.

That would be hard, but one angle would be to only include people whose improvement is specifically reversed by acute tryptophan depletion, which reduces serotonin levels, thus counteracting SSRIs. This would be a hard study to do, though not impossible. (In fact, there are dozens of patients on record who meet my criteria, and their blood samples are probably still sitting in freezers in labs around the world... maybe someone should dig them out.)

Still, even if you did find some genes that way, would they be useful? We'd have had to go to such lengths to find them, that they're not going to help doctors decide what to do with the average patient who comes through the door with depression. That's true, but they might just help us to work out who will respond to SSRIs, as opposed to other drugs.

Binder EB, Owens MJ, Liu W, Deveau TC, Rush AJ, Trivedi MH, Fava M, Bradley B, Ressler KJ, & Nemeroff CB (2010). Association of polymorphisms in genes regulating the corticotropin-releasing factor system with antidepressant treatment response. Archives of General Psychiatry, 67(4), 369-379. PMID: 20368512

Uher, R., Perroud, N., Ng, M., Hauser, J., Henigsberg, N., Maier, W., Mors, O., Placentino, A., Rietschel, M., Souery, D., Zagar, T., Czerski, P., Jerman, B., Larsen, E., Schulze, T., Zobel, A., Cohen-Woods, S., Pirlo, K., Butler, A., Muglia, P., Barnes, M., Lathrop, M., Farmer, A., Breen, G., Aitchison, K., Craig, I., Lewis, C., & McGuffin, P. (2010). Genome-Wide Pharmacogenetics of Antidepressant Response in the GENDEP Project. American Journal of Psychiatry. DOI: 10.1176/appi.ajp.2009.09070932

Garriock, H., Kraft, J., Shyn, S., Peters, E., Yokoyama, J., Jenkins, G., Reinalda, M., Slager, S., McGrath, P., & Hamilton, S. (2010). A Genomewide Association Study of Citalopram Response in Major Depressive Disorder. Biological Psychiatry, 67(2), 133-138. DOI: 10.1016/j.biopsych.2009.08.029

Friday, February 19, 2010

Drunk on Alcohol?

When you drink alcohol and get drunk, are you getting drunk on alcohol?

Well, obviously, you might think, and so did I. But it turns out that some people claim that the alcohol (ethanol) in drinks isn't the only thing responsible for their effects - they say that acetaldehyde may be important, perhaps even more so.

South Korean researchers Kim et al report that it's acetaldehyde, rather than ethanol, which explains alcohol's immediate effects on cognitive and motor skills. During the metabolism of ethanol in the body, it's first converted into acetaldehyde, which then gets converted into acetate and excreted. Acetaldehyde build-up is popularly blamed for hangovers (although it's unclear how true this is), but could it also be involved in the acute effects?

Kim et al gave 24 male volunteers a range of doses of ethanol (in the form of vodka and orange juice). Half of them carried a genetic variant (ALDH2*2) which impairs the breakdown of acetaldehyde in the body. About 50% of people of East Asian origin, e.g. Koreans, carry this variant, which is rare in other parts of the world.

As expected, compared to the others, the ALDH2*2 carriers had much higher blood acetaldehyde levels after drinking alcohol, while there was little or no difference in their blood ethanol levels.
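To see why impaired clearance produces this pattern, here's a toy two-step kinetic sketch (my own illustration with arbitrary rate constants, not the study's pharmacokinetics): ethanol is oxidized to acetaldehyde, which ALDH2 then clears to acetate.

```python
# Toy kinetics: ethanol -> acetaldehyde -> acetate, simulated with a
# simple Euler loop. ALDH2*2 carriers are modeled with ~10x lower
# clearance of acetaldehyde; all rates are made up for illustration.
def peak_acetaldehyde(aldh_rate, adh_rate=0.5, steps=200, dt=0.1):
    ethanol, acetaldehyde, peak = 1.0, 0.0, 0.0
    for _ in range(steps):
        produced = adh_rate * ethanol * dt       # ethanol -> acetaldehyde
        cleared = aldh_rate * acetaldehyde * dt  # acetaldehyde -> acetate
        ethanol -= produced
        acetaldehyde += produced - cleared
        peak = max(peak, acetaldehyde)
    return peak

normal = peak_acetaldehyde(aldh_rate=2.0)   # efficient ALDH2
carrier = peak_acetaldehyde(aldh_rate=0.2)  # impaired ALDH2*2

print(carrier > normal)  # carriers reach a much higher acetaldehyde peak
```

Since the production step is identical but the clearance is slower, acetaldehyde accumulates to a higher peak in the carriers for any given dose - matching the blood-level pattern reported above.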

Interestingly, though, the ALDH2*2 group also showed much more impairment of cognitive and motor skills, such as reaction time or a simulated driving task. On most measures, the non-carriers showed very little effect of alcohol, while the carriers were strongly affected, especially at high doses. Blood acetaldehyde was more strongly correlated with poor performance than blood alcohol was.

So the authors concluded that:
Acetaldehyde might be more important than alcohol in determining the effects on human psychomotor function and skills.
So is acetaldehyde to blame when you spend half an hour trying and failing to unlock your front door after a hard night's drinking? Should we be breathalyzing drivers for it? Maybe: this is an interesting finding, and there's quite a lot of animal evidence that acetaldehyde has acute sedative, hypnotic and amnesic effects, amongst others.

Still, there's another explanation for these results: maybe the ALDH2*2 carriers just weren't paying much attention to the tasks, because they felt ill, as ALDH2*2 carriers generally do after drinking, as a result of acetaldehyde build-up. No-one's going to be operating at peak performance if they're suffering the notorious flush reaction or "Asian glow", which includes skin flushing, nausea, headache, and increased pulse...

Kim SW, Bae KY, Shin HY, Kim JM, Shin IS, Youn T, Kim J, Kim JK, & Yoon JS (2009). The Role of Acetaldehyde in Human Psychomotor Function: A Double-Blind Placebo-Controlled Crossover Study. Biological Psychiatry. PMID: 19914598

Drunk on Alcohol?

When you drink alcohol and get drunk, are you getting drunk on alcohol?

Well, obviously, you might think, and so did I. But it turns out that some people claim that the alcohol (ethanol) in drinks isn't the only thing responsible for their effects - they say that acetaldehyde may be important, perhaps even more so.

South Korean researchers Kim et al report that it's acetaldehyde, rather than ethanol, which explains alcohol's immediate effects on cognitive and motor skills. During the metabolism of ethanol in the body, it's first converted into acetaldehyde, which then gets converted into acetate and excreted. Acetaldehyde build-up is popularly renowned as a cause of hangovers (although it's unclear how true this is), but could it also be involved in the acute effects?

Kim et al gave 24 male volunteers a range of doses of ethanol (in the form of vodka and orange juice). Half of them carried a genetic variant (ALDH2*2) which impairs the breakdown of acetaldehyde in the body. About 50% of people of East Asian origin, e.g. Koreans, carry this variant, which is rare in other parts of the world.

As expected, compared to the others, the ALDH2*2 carriers had much higher blood acetaldehyde levels after drinking alcohol, while there was little or no difference in their blood ethanol levels.

Interestingly, though, the ALDH2*2 group also showed much more impairment of cognitive and motor skills, such as reaction time or a simulated driving task. On most measures, the non-carriers showed very little effect of alcohol, while the carriers were strongly affected, especially at high doses. Blood acetaldehyde was more strongly correlated with poor performance than blood alcohol was.

So the authors concluded that:
Acetaldehyde might be more important than alcohol in determining the effects on human psychomotor function and skills.
So is acetaldehyde to blame when you spend half an hour trying and failing to unlock your front door after a hard night's drinking? Should we be breathalyzing drivers for it? Maybe: this is an interesting finding, and there's quite a lot of animal evidence that acetaldehyde has acute sedative, hypnotic and amnesic effects, amongst others.

Still, there's another explanation for these results: maybe the
ALDH2*2 carriers just weren't paying much attention to the tasks, because they felt ill, as ALDH2*2 carriers generally do after drinking, as a result of acetaldehyde build-up. No-one's going to be operating at peak performance if they're suffering the notorious flush reaction or "Asian glow", which includes skin flushing, nausea, headache, and increased pulse...

ResearchBlogging.orgKim SW, Bae KY, Shin HY, Kim JM, Shin IS, Youn T, Kim J, Kim JK, & Yoon JS (2009). The Role of Acetaldehyde in Human Psychomotor Function: A Double-Blind Placebo-Controlled Crossover Study. Biological psychiatry PMID: 19914598

Sunday, December 27, 2009

The Genetics of Living To 100

Is there a gene for long life?

Boston-based group Sebastiani et al say they've found not one but two, in RNA Editing Genes Associated with Extreme Old Age in Humans and with Lifespan in C. elegans.

They took 4 groups of "oldest old" people: from New England, Italy, and Japan, and American Ashkenazi Jews. All were aged 90 or more, and many of them were centenarians, aged 100 or over. As control groups, they used random healthy people who weren't especially old. The total sample size was an impressive 2105 old vs. 3044 controls.

On the basis of a pilot study, they chose to look at two candidate genes, ADARB1 and ADARB2. Both are involved in post-transcriptional RNA editing, one of the steps in the process by which genetic material, DNA, controls protein synthesis. It's something every cell in the body needs to do in order to function.

What happened? Their abstract makes the exciting claim that
18 single nucleotide polymorphisms (SNPs) in the RNA editing genes ADARB1 and ADARB2 are associated with extreme old age in a U.S. based study ... We describe replications of these findings in three independently conducted centenarian studies with different genetic backgrounds (Italian, Ashkenazi Jewish and Japanese) that collectively support an association of ADARB1 and ADARB2 with longevity.
But read the whole paper and the picture is a little more complex. For ADARB1, they looked at 31 variants (SNPs). In the New England sample, which was the largest, 5 of them were statistically significantly more common in old people compared to the controls. However, none of these were significantly associated in any of the other samples, although for 3 of the 5 variants, there was some evidence of an effect in the same direction in the other samples.

In ADARB2, out of 114 variants, 10 were significantly associated in the New England sample. Of these, 4 were independently significant in the Italian sample, and in the combined New England/Italian sample all 10 were still associated. But the Jewish and the Japanese samples showed a rather different picture: only 1 of the 10 associations was significant in the Jews, although several were weakly associated in the same direction, and in a pooled New England/Italian/Jewish analysis 9 were still significant. In the Japanese sample, one association was replicated but another variant was associated in the wrong direction.
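
When you test 114 variants at once, some will cross p < 0.05 by luck alone, which is why replication across samples matters so much here. The standard (if conservative) fix is the Bonferroni correction; a quick sketch, with made-up p-values purely for illustration:

```python
def bonferroni(p_values, alpha=0.05):
    """Return which tests survive a Bonferroni-corrected threshold."""
    m = len(p_values)
    threshold = alpha / m
    return [p < threshold for p in p_values]

# 114 hypothetical SNP p-values: mostly null, a few small.
p_vals = [0.5] * 110 + [0.04, 0.02, 0.0004, 0.00001]
survivors = bonferroni(p_vals)
print(sum(p < 0.05 for p in p_vals), "nominally significant")
print(sum(survivors), "survive correction")
```

With 114 tests the corrected threshold drops to about 0.00044, so two of the four "nominally significant" hits in this toy example vanish once the number of comparisons is taken into account.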

They also did some lab work and found that in nematode worms (C. elegans), mutants lacking the worm equivalents of the ADARB1 and ADARB2 genes had a 50% reduced lifespan - 10 days, instead of the normal 20 - despite no obvious symptoms of illness. Hmm.


I'm not quite sure what to make of this data. They looked at 4 separate, large samples, which is an excellent size by the standards of candidate gene association studies. The evidence implicating ADARB1 and (especially) ADARB2 variants in longevity is fairly convincing, although the most consistent effects came from the European-ancestry samples, suggesting that different things might be going on in other populations. This is the first research looking at these genes; ultimately, we won't know for sure until we get more. The worm data is a nice touch, but I'd like to see evidence from animals with a bit more similarity to humans, say mice.

Still, suppose that these genes are associated with long life; suppose that they control the rate of the ageing process, protecting you from dying from "natural causes" too early. That doesn't mean that you'll live to an old age - it just makes it possible. If you get hit by a truck or fall off a cliff, you're dead, anti-ageing genes or not.

Frenchwoman Jeanne Calment, born 1875, died 1997, is the oldest person on record, at 122 years. But we'll never know whether someone with the genetic potential to outlive her died in WW2, or the Cultural Revolution, or just got hit by a truck. Calment presumably had the right genes, but she was also lucky.

So a trait's being genetically heritable doesn't make it pre-ordained and immutable. IQ, for example, most likely has a heritability of around 50% - some people likely have a higher potential for intellectual achievement than others. But if you're born into an abusive family, or deep poverty, or you never get a chance to go to school, you may never reach that potential. There's always that truck.

ResearchBlogging.orgSebastiani P, Montano M, Puca A, Solovieff N, Kojima T, Wang MC, Melista E, Meltzer M, Fischer SE, Andersen S, Hartley SH, Sedgewick A, Arai Y, Bergman A, Barzilai N, Terry DF, Riva A, Anselmi CV, Malovini A, Kitamoto A, Sawabe M, Arai T, Gondo Y, Steinberg MH, Hirose N, Atzmon G, Ruvkun G, Baldwin CT, & Perls TT (2009). RNA editing genes associated with extreme old age in humans and with lifespan in C. elegans. PloS one, 4 (12) PMID: 20011587

Wednesday, December 23, 2009

Good News for Armchair Neuropathologists

Ever wanted to crack the mysteries of the brain? Dreamed of discovering the cause of mental illness?

Well, now, you can - or, at any rate, you can try - and you can do it from the comfort of your own home, thanks to the new Stanley Neuropathology Consortium Integrative Database.

Just register (it's free and instant) and you get access to a pool of data derived from the Stanley Neuropathology Consortium brain collection. The collection comprises 60 frozen brains - 15 each from people with schizophrenia, bipolar disorder, and clinical depression, and 15 "normals".

In a Neuropsychopharmacology paper announcing the project, administrators Sanghyeon Kim and Maree Webster point out that
Data sharing has become more important than ever in the biomedical sciences with the advance of high-throughput technology and web-based databases are one of the most efficient available resources to share datasets.
The Institute's 60 brains have long been the leading source of human brain tissue for researchers in biological psychiatry. Whenever you read about a new discovery relating to schizophrenia or bipolar disorder, chances are the Stanley brains were involved. The Institute provide slices of the brains free of charge to scientists who request them, and they've sent out over 200,000 to date.

Until now, if you wanted to find out what these scientists discovered about the brains, you'd have to look up the results in the many hundreds of scientific papers where the various results were published. If you knew where to look, and if you had a lot of time on your hands. The database collates all of the findings. That's a good idea. To ensure that they get all of the results, the Institute have another good idea:
Coded specimens are sent to researchers with the code varying from researcher to researcher to ensure that all studies are blinded. The code is released to the researcher only when the data have been collected and submitted to the Institute.
The data we're provided about the brains is quite exciting, if you like molecules, comprising 1749 markers from 12 different parts of the brain. Markers include levels of proteins, RNA, and the number and shape of various types of cells.

It's easy to use. While waiting for my coffee to brew, I compared the amount of the protein GFAP76 in the frontal cortex between the four groups. There was no significant difference. I guess GFAP76 doesn't cause mental illness - darn. So much for my Nobel Prize winning theory. But I did find that levels of GFAP76 were very strongly correlated with levels of another protein, "phosphirylated" (I think they mean "phosphorylated") PRKCA. You read it here first.

In the paper, Kim and Webster used the Database to find many differences between normal brains and diseased brains, including increased levels of dopamine in schizophrenia, and increased levels of glutamate in depression and bipolar. And decreased GAD67 proteins in the frontal cortex in bipolar and schizophrenia. And decreased reelin mRNA in the frontal cortex and cerebellum in bipolar and schizophrenia. And...

This leaves open the vital questions of what these differences mean, as I have complained before. And the problem with giving everyone in the world the results of 1749 different tests, and letting us cross-correlate them with each other and look for differences between 4 patient groups, is that you're making possible an awful lot of comparisons. With only 15 brains per group, none of the results can be considered anything more than provisional, anyway - what we really need are lots more brains.
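
To put a back-of-envelope number on that worry: at the conventional p < 0.05 threshold, the expected number of chance "findings" is roughly alpha times the number of tests (a rough figure, since the real markers aren't all independent):

```python
n_markers = 1749
n_group_pairs = 6     # 4 diagnostic groups compared pairwise: 4 * 3 / 2
alpha = 0.05

n_tests = n_markers * n_group_pairs
expected_false_positives = n_tests * alpha
print(f"{n_tests} tests -> roughly {expected_false_positives:.0f} "
      f"false positives expected by chance alone")
```

Even before anyone starts cross-correlating the 1749 markers with each other, simple group comparisons alone could throw up hundreds of spurious hits.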

But this database is still a welcome move. This kind of data pooling is the only sensible approach to doing modern science, and it's something people are advocating in other fields of neuroscience as well. It just makes sense to share results rather than leaving everyone to do their own thing in near-isolation from each other, now that we have the technology to do so. In fact, I'd say it's a... no-brainer.

ResearchBlogging.orgKim, S., & Webster, M. (2009). The Stanley Neuropathology Consortium Integrative Database: a Novel, Web-Based Tool for Exploring Neuropathological Markers in Psychiatric Disorders and the Biological Processes Associated with Abnormalities of Those Markers Neuropsychopharmacology, 35 (2), 473-482 DOI: 10.1038/npp.2009.151

Friday, September 4, 2009

Predicting Antidepressant Response with EEG

One of the limitations of antidepressants is that they don't always work. Worse, they fail in an unpredictable way. Some people benefit from some drugs, and others don't, but there's no way of knowing in advance what will happen in any particular case - or of telling which pill is right for which person.

As a result, drug treatment for depression generally involves starting with a cheap medication with relatively mild side-effects, and if that fails, moving onto a series of other drugs until one helps. But since it can take several weeks for any new drug to work, this can be a frustrating process for patients and doctors alike.

Some means of predicting the antidepressant response would thus be very useful. Many have been proposed, but none have entered widespread clinical use. Now, a pair of papers(1,2) from UCLA's Andrew Leuchter et al make the case for prediction using quantitative EEG (QEEG).

EEG, electroencephalography, is a crude but effective way of recording electrical activity in the brain via electrodes attached to the head. "Quantitative" EEG just means using EEG to precisely measure the level of certain kinds of activity in the brain.
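
"The level of certain kinds of activity" in practice means power in standard frequency bands (theta, alpha, and so on). Here's a minimal numpy sketch of band-power estimation on a synthetic signal; the actual ATR combines such measures via proprietary parameters, so this only illustrates the general idea:

```python
import numpy as np

def band_power(signal, fs, low, high):
    """Estimate power in the [low, high] Hz band via a simple periodogram."""
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)
    mask = (freqs >= low) & (freqs <= high)
    return psd[mask].sum()

fs = 256                                   # sampling rate, Hz
t = np.arange(0, 4, 1 / fs)                # 4 seconds of "recording"
rng = np.random.default_rng(0)
# Synthetic EEG: a strong 10 Hz (alpha-band) rhythm plus noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(len(t))

alpha_power = band_power(eeg, fs, 8, 12)
theta_power = band_power(eeg, fs, 4, 8)
print(alpha_power > theta_power)   # the planted alpha rhythm dominates
```

Quantitative EEG systems like the one used here reduce a few minutes of recording to a handful of such band-power numbers, which is what makes a single summary index like the ATR possible.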

Leuchter et al's system is straightforward: it uses six electrodes on the front of the head. The patient simply relaxes with their eyes closed for a few minutes while neural activity is recorded.

This procedure is performed twice, once just before antidepressant treatment begins and then again a week later. The claim is that by examining the changes in the EEG signal after one week of drug treatment, the eventual benefit of the drug can be predicted. It's not an implausible idea, and if it did work, it would be rather helpful. But does it?

Leuchter et al say: yes! The first paper reports that in 73 depressed patients who were given the antidepressant escitalopram 10mg/day, QEEG changes after one week predicted clinical improvement six weeks later. Specifically, people who got substantially better at seven weeks had a higher "Antidepressant Treatment Response Index" (ATR) at one week than people who didn't: 59.0 ± 10.2 vs 49.8 ± 7.8, which is highly significant (p < 0.001).
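
For a sense of how big that group difference is, here's the standardized effect size (Cohen's d) computed from the reported means and SDs, using a simple equal-n pooled SD as a simplifying assumption:

```python
import math

def cohens_d(mean1, sd1, mean2, sd2):
    """Cohen's d with a simple equal-n pooled standard deviation."""
    pooled_sd = math.sqrt((sd1 ** 2 + sd2 ** 2) / 2)
    return (mean1 - mean2) / pooled_sd

# Reported ATR scores: responders 59.0 +/- 10.2, non-responders 49.8 +/- 7.8.
d = cohens_d(59.0, 10.2, 49.8, 7.8)
print(f"Cohen's d = {d:.2f}")   # around 1.0 - "large" by the usual convention
```

A d near 1.0 is a substantial group difference, though it still implies plenty of overlap between the two distributions, which matters if the index is to be used for individual prediction.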

In the companion paper, the authors examined patients who started on escitalopram and then either kept taking it or switched to a different antidepressant, bupropion. They found that patients who had a high ATR after a week of escitalopram tended to do well if they stayed on it, while patients who had a low ATR to escitalopram did better when they switched to the other drug.

These are interesting results, and they follow from ten years of previous work (mostly, but not exclusively, from the same group) on the topic. Because the current study didn't include a placebo group, we can't say that the QEEG predicts antidepressant response as such, only that it predicts improvement in depression symptoms. But even this is pretty exciting, if it really works.

In order to verify that it does, other researchers need to replicate this experiment. But they may find this a little difficult. What is the Antidepressant Treatment Response Index used in this study? It's derived from an analysis of the EEG signal, and we're told that you get it from this formula:

Some of the terms here are common parameters that any EEG expert will understand. But "A", "B", and "C" are not. They're constants, which are not given in the paper. They're secret numbers. Without knowing what those numbers are, no-one can calculate the "ATR" even if they have an EEG machine.

Why keep them secret? Well...
"Financial support of this project was provided by Aspect Medical Systems. Aspect participated in the design and conduct of the study; collection, management, analysis, and interpretation of the data; and preparation and review of the manuscript."
Aspect is a large medical electronics company who developed the system used here. Presumably, they want to patent it (or already have). We're told that
"To facilitate independent replication of the work reported here, Aspect intends to make available a limited number of investigational systems for academic researchers. Please contact Scott Greenwald, Ph.D... for further information."
All very nice of them, but if they'd told us the three magic numbers, academics could start trying to independently replicate these results tomorrow. As it is, anyone who wants to do so will have to get Aspect's blessing, which, with the best will in the world, means they will not be entirely "independent".



ResearchBlogging.orgLeuchter AF, Cook IA, Gilmer WS, Marangell LB, Burgoyne KS, Howland RH, Trivedi MH, Zisook S, Jain R, Fava M, Iosifescu D, & Greenwald S (2009). Effectiveness of a quantitative electroencephalographic biomarker for predicting differential response or remission with escitalopram and bupropion in major depressive disorder. Psychiatry research PMID: 19709754

Leuchter AF, Cook IA, Marangell LB, Gilmer WS, Burgoyne KS, Howland RH, Trivedi MH, Zisook S, Jain R, McCracken JT, Fava M, Iosifescu D, & Greenwald S (2009). Comparative effectiveness of biomarkers and clinical indicators for predicting outcomes of SSRI treatment in Major Depressive Disorder: Results of the BRITE-MD study. Psychiatry research PMID: 19712979