Showing posts with label animals. Show all posts

Saturday, May 15, 2010

Do It Like You Dopamine It

Neuroskeptic readers will know that I'm a big fan of theories. Rather than just poking around (or scanning) the brain under different conditions and seeing what happens, it's always better to have a testable hypothesis.

I just found a 2007 paper by Israeli computational neuroscientists Niv et al that puts forward a very interesting theory about dopamine. Dopamine is a neurotransmitter, and dopamine cells are known to fire in phasic bursts - short volleys of spikes over millisecond timescales - in response to something which is either pleasurable in itself, or something that you've learned is associated with pleasure. Dopamine is therefore thought to be involved in learning what to do in order to get pleasurable rewards.

But baseline, tonic dopamine levels vary over longer periods as well. The function of this tonic dopamine firing, and its relationship, if any, to phasic dopamine signalling, is less clear. Niv et al's idea is that the tonic dopamine level represents the brain's estimate of the average availability of rewards in the environment, and that it therefore controls how "vigorously" we should do stuff.

A high reward availability means that, in general, there's lots of stuff going on, lots of potential gains to be made. So if you're not out there getting some reward, you're missing out. In economic terms, the opportunity cost of not acting, or acting slowly, is high - so you need to hurry up. On the other hand, if there are only minor rewards available, you might as well take things nice and slow, to conserve your energy. Niv et al present a simple mathematical model in which a hypothetical rat must decide how often to press a lever in order to get food, and show that it accounts for the data from animal learning experiments.
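The core of that trade-off can be sketched in a few lines. This is my simplification of the idea with made-up cost numbers, not Niv et al's full model (which also chooses among actions), but it already yields the key prediction: the optimal latency between actions shrinks as the average reward rate rises.

```python
import math

def optimal_latency(vigor_cost, avg_reward_rate):
    """Latency tau that minimises the per-action cost
        cost(tau) = vigor_cost / tau        (acting faster costs more energy)
                  + avg_reward_rate * tau   (time spent is foregone reward).
    Setting d(cost)/d(tau) = 0 gives tau* = sqrt(vigor_cost / avg_reward_rate)."""
    return math.sqrt(vigor_cost / avg_reward_rate)

# A richer environment (higher average reward rate) implies faster responding.
lean_env = optimal_latency(vigor_cost=1.0, avg_reward_rate=0.5)
rich_env = optimal_latency(vigor_cost=1.0, avg_reward_rate=8.0)
print(lean_env, rich_env)   # the rich environment gives the shorter latency
```

So the "vigour" of responding falls out of a simple opportunity-cost calculation, with no learning required.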

The distinction between phasic dopamine (signalling a specific reward) and tonic dopamine (signalling overall reward availability) is a bit like the distinction between fear and anxiety. Fear is what you feel when something scary, i.e. harmful, is right there in front of you. Anxiety is the sense that something harmful could be round the next corner.

This theory accounts for the fact that if you give someone a drug that increases dopamine levels, such as amphetamine, they become hyperactive - they do more stuff, faster, or at least try to. That's why they call it speed. This happens to animals too. Yet this hyperactivity starts almost immediately, which means that it can't be a product of learning.

It also rings true in human terms. The feeling that everything's incredibly important, and that everyday tasks are really exciting, is one of the main effects of amphetamine. Every speed addict will have a story about the time they stayed up all night cleaning every inch of their house or organizing their wardrobe. This can easily develop into the compulsive, pointless repetition of the same task over and over. People with bipolar disorder often report the same kind of thing during (hypo)mania.

What controls tonic dopamine levels? A really brilliantly elegant answer would be: phasic dopamine. Maybe every time phasic dopamine levels spike in response to a reward (or something which you've learned to associate with a reward), some of the dopamine gets left over. If there's lots of phasic dopamine firing, which suggests that the availability of rewards is high, the tonic dopamine levels rise.
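If tonic dopamine really were "left over" phasic dopamine, the simplest version of the idea would be a running average: each burst nudges the baseline up, and the baseline decays between bursts. A toy sketch (the smoothing rate alpha is my made-up parameter):

```python
def update_tonic(tonic, phasic_burst, alpha=0.05):
    """Exponentially weighted running average: the tonic level drifts
    toward the recent rate of phasic, reward-related firing."""
    return (1 - alpha) * tonic + alpha * phasic_burst

tonic = 0.0
for burst in [1, 1, 0, 1, 1, 1, 0, 1]:   # a reward-rich stretch of time
    tonic = update_tonic(tonic, burst)
print(tonic)   # the baseline has crept up from zero
```

On this scheme the tonic level is just a slow read-out of how often phasic bursts have been happening lately - i.e. an estimate of reward availability.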

Unfortunately, it's probably not that simple, as signals from different parts of the brain seem to alter tonic and phasic dopamine firing largely independently, and this would mean that tonic dopamine would only increase after a good few rewards, not pre-emptively, which seems unlikely. The truth is, we don't know what sets the dopamine tone, and we don't really know what it does; but Niv et al's account is the most convincing I've come across...

ResearchBlogging.orgNiv Y, Daw ND, Joel D, & Dayan P (2007). Tonic dopamine: opportunity costs and the control of response vigor. Psychopharmacology, 191 (3), 507-20 PMID: 17031711

Thursday, May 6, 2010

Mice That Fight for Their Rights

Israeli biologists Feder et al report on Selective breeding for dominant and submissive behavior in Sabra mice.

Mice are social animals and like many species, they show dominance hierarchies. When they first meet, they'll often fight each other. The winner gets to be Mr (or Mrs) Big, and they enjoy first pick of the food, mating opportunities, etc - for as long as they can remain dominant.

But what determines which mice become top dog... ? Feder et al show that it's partially under genetic control. They took a normal population of laboratory mice, paired them up, and made them battle for supremacy in a simple set-up in which only one mouse can get access to a central food supply.

At first, only about 30% of pairs developed clear dominance/submission relationships, but the ones that did were selectively bred: dominant males mated with dominant females, and submissive males with submissive females. The offspring were put through the same process, and it was repeated.

The results were dramatic: after four generations of selection, 80% of the pairs showed clear dominance and submission behaviour. And with each generation of breeding, the dominance relationships appeared faster and stronger: at first the winners only got slightly more access to the food, but by the 4th generation, they almost completely monopolized it. As expected, the mice bred to be dominant were overwhelmingly more likely to end up on top. The differences were not due to general differences in activity levels or anxiety.
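As a rough illustration of why repeated selection shifts a heritable trait so quickly, here's a toy additive-genetics simulation. The heritability, population size and selection fraction are my assumptions, not Feder et al's numbers: breeding only from the most "dominant" 40% each generation steadily drags the population mean upward.

```python
import random

def breed(parents, heritability=0.5, n_offspring=100):
    """Toy additive model: each offspring's 'dominance' score is the
    midparent value scaled by heritability, plus environmental noise."""
    kids = []
    for _ in range(n_offspring):
        mum, dad = random.sample(parents, 2)
        kids.append(heritability * (mum + dad) / 2 + random.gauss(0, 1))
    return kids

random.seed(1)
population = [random.gauss(0, 1) for _ in range(100)]   # generation 0
for generation in range(4):
    cutoff = sorted(population)[60]                 # keep the top 40%
    dominant = [x for x in population if x >= cutoff]
    population = breed(dominant)
print(sum(population) / len(population))   # mean score has drifted upward
```

Even with only half the trait variance being genetic, four rounds of selection produce a clearly shifted population - which is the shape of the result Feder et al report.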

But the naturally timid mice could be made to fight for their rights by treating them with antidepressants - after a month of imipramine, they were taking crap from no-one.

Feder et al say that previous studies have also shown anti-submissive effects of antidepressants, while drugs used to treat mania reduce dominance. Anyone who's experienced a mood disorder will probably be able to relate to this: depressed people tend to feel like they belong at the bottom of the pecking order of life, while mania is classically associated with believing you're the greatest person in history.

So dominance and submission could provide a useful way of testing the effects of drugs on mood. That would be welcome, because current animal models of depression and antidepressants mostly rely on putting animals in a glass of water and seeing how long they take to stop struggling...

ResearchBlogging.orgFeder, Y., Nesher, E., Ogran, A., Kreinin, A., Malatynska, E., Yadid, G., & Pinhasov, A. (2010). Selective breeding for dominant and submissive behavior in Sabra mice Journal of Affective Disorders DOI: 10.1016/j.jad.2010.03.018

Tuesday, May 4, 2010

Do Cats Hallucinate?

I have two cats. One is about four, and he is a psychopath. The other is sixteen - elderly, in cat terms - and I've recently noticed some changes in her behaviour.

For one, she's become a lot more affectionate, and she demands constant attention - she meows at people on sight, follows you around, and almost always comes and sits on top of you, or on top of whatever you're doing/reading/typing.

But on top of that, she's started pausing in the middle of whatever she's doing and staring at empty corners, or walls. All cats sit down and gaze into space a lot of the time, but this is different - it happens in the middle of normal actions, like eating or walking around. What does this mean?

Could she be hallucinating? Hallucinations are unfortunately not uncommon in elderly people. Seeing and hearing things that aren't there is a major symptom of Alzheimer's and other forms of dementia. Do cats get Alzheimer's? The internet says: yes. There doesn't seem to have been much scientific research, but a few studies have found Alzheimer's-like changes (amyloid-beta protein accumulation) in the brains of old cats. Whether these cause the same symptoms as they do in people is unclear, but, why not?

How would you know if an animal was hallucinating? They can't talk about it, and unlike say hunger or pain, they don't have specific ways of communicating it through body language or cries. A hallucinating animal would, presumably, react fairly normally to whatever it thought it saw or heard: so hallucinations would manifest as normal behaviours, but in inappropriate situations. Whether this is what's happening to my cat, I'm not sure, but again, it's possible.

A more philosophical issue is whether we can conclude that this kind of out-of-context behaviour means the animal is experiencing a hallucination. But this is really just the age-old question of whether animals have consciousness at all. If they do, then they can presumably hallucinate: if you can be conscious of sensations, you can be conscious of false sensations.

For what it's worth, my view is that animals, at any rate mammals, are conscious. Humans are (although technically we only know for sure that we personally are, and have to assume the same is true of others). Mammalian brains are structured in a similar way to our own; they're made of the same cells; they use the same neurotransmitters, and the same drugs interfere with them in the same ways; pretty much all of the brain regions are there, although the sizes differ.

There's of course a big difference between us and other mammals: we have language, and conceptual thinking, and so forth. But does consciousness depend on that? It seems unlikely, just because most of what we're conscious of at any one time isn't anything to do with those specifically human things.

Right now, I'm conscious of what I can see, what I can hear, what I can feel with my fingertips, and the thoughts I'm writing down. Only 1/4 of that (to put it crudely) is unique to humans. And I'm not always aware of thoughts or words; there are plenty of times when I'm only aware of sensations and perceptions.

Probably the closest we get to animal consciousness is in strong, primitive experiences like pain, panic and anger, in which we "take leave of our senses" - not meaning that we become unconscious, but that we temporarily stop being able to "think straight", i.e. like a human. That doesn't mean that animals spend all their time in some extreme emotional state, but it's harder for us to know what it's like to be a relaxed cat, because generally when we're relaxed, we're thinking (or daydreaming, etc. - although who's to say cats don't? They dream, after all...)

Wednesday, April 21, 2010

Of Yeast and Men

Nature reports on the Dissection of genetically complex traits with extremely large pools of yeast segregants.


Ehrenreich et al have a new way of mapping the genetic basis of complex traits in yeast, "complex" being what geneticists call anything which isn't controlled by one single gene. They dub their approach "Extreme QTL mapping". This suggests images of geneticists running experiments atop Everest, or perhaps collecting blood samples from lions with their bare hands, but actually
Extreme QTL mapping (X-QTL) has three key steps. The first is the generation of segregating populations of very large size. The second is selection-based phenotyping of these populations to recover large numbers of progeny with extreme trait values. This can be accomplished, for example, by selection for drug resistance or by cell sorting. The final step is quantitative measurement of pooled allele frequencies across the genome.
The basic idea is to cross breed two strains of yeast to generate lots of different hybrid strains each with a random selection of DNA from each "parent". Then, you put all the hybrids under some kind of selective pressure - for example, by adding the toxin 4-NQO to their dish.

Some yeast are more or less resistant to 4-NQO, and this trait is largely determined by genetics. So after a while, the vulnerable hybrids will die out and only the most highly resistant strains will be left in the 4-NQO dish to reproduce. It's a quick and dirty form of selective breeding. Finally, you can compare the genetics of the 4-NQO resistant hybrids to a control group of hybrids who didn't get any toxins, using a GWAS. Any genetic differences are likely to represent 4-NQO resistance genes.
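The logic of the method can be sketched with a toy simulation. All the numbers here (loci, effect sizes, pool sizes) are made up, and for simplicity I compare the selected pool against the starting pool rather than a separately grown control pool - but the point survives: causal loci reveal themselves as allele-frequency shifts in the survivors.

```python
import random

random.seed(0)
N_LOCI = 20
CAUSAL = {3, 11, 17}   # hypothetical resistance loci

def make_segregant():
    # Each locus inherits allele 0 or 1 from one of the two parent strains.
    return [random.randint(0, 1) for _ in range(N_LOCI)]

def survives_toxin(genome):
    # Resistance alleles (allele 1 at causal loci) raise survival odds.
    score = sum(genome[i] for i in CAUSAL)
    return random.random() < score / len(CAUSAL)

pool = [make_segregant() for _ in range(20000)]
selected = [g for g in pool if survives_toxin(g)]

def allele_freqs(genomes):
    n = len(genomes)
    return [sum(g[i] for g in genomes) / n for i in range(N_LOCI)]

baseline, after_selection = allele_freqs(pool), allele_freqs(selected)
hits = [i for i in range(N_LOCI)
        if after_selection[i] - baseline[i] > 0.1]
print(hits)   # the causal loci stand out as allele-frequency shifts
```

Because the pools are enormous, even modest-effect loci produce allele-frequency differences well above sampling noise - which is exactly the leverage X-QTL buys.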

Using this method, Ehrenreich et al found no less than 14 4-NQO resistance variants. That includes two replications of previous findings, and 12 new ones. Collectively, the genes explained
59% of the phenotypic variance in 4-NQO sensitivity in an additive model. Because we measured the heritability of this trait to be 0.84, the loci explained 70% of the genetic variance, indicating that we have explained most of the genetic basis of this trait with the loci detected by X-QTL.
In other words, they've found most of the genes with a substantial effect on 4-NQO resistance, but not all of them. (They then did the same thing for several other toxins.) About 30% of the heritability is "missing". Compare that to most human complex traits, where the missing heritability is currently more like 95-99%. For example, twin studies and similar methods find human height to have a heritability of about 0.8, and more than 40 genetic variants have been associated with height, but together they explain only about 5% of the heritability.
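The arithmetic behind that quote is just the explained phenotypic variance divided by the heritability (the genetic share of phenotypic variance):

```python
phenotypic_explained = 0.59   # variance explained by the detected loci
heritability = 0.84           # genetic share of phenotypic variance
genetic_explained = phenotypic_explained / heritability
print(round(genetic_explained, 2))   # -> 0.7
missing = 1 - genetic_explained
print(round(missing, 2))             # -> 0.3, the "missing" heritability
```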

Why is Neuroskeptic posting about yeast? Well, partly because we live in a yeast-based society. Without yeast, we would have no alcoholic drinks. I think it's important to acknowledge their contribution to our lives. But mainly because there's a lesson here for people interested in the genetics of complex traits in humans, like, say, personality, IQ, and mental illness.

Yeast resistance to toxins is about the most straightforwardly "biological" trait you could imagine. Finding its genetic basis ought to be easy. But it wasn't. It was...extreme. Ehrenreich et al had to breed and select yeast with extreme traits (e.g. extremely high resistance to toxins), and compare them to control yeast of the same ancestry, to find the genes, and they still had a good deal of missing variance.

If they'd had to work on a random bunch of yeast from the wild, they'd have had a lot more trouble. That's why previous yeast GWAS studies didn't get results as good as these. Yet when it comes to humans, we're indeed forced to use a random bunch of people from the wild. You can't selectively breed people.

You can breed, say, mice, but it takes a lot longer than with yeast. I think there have been a few studies breeding mice for a certain trait and then looking at their genetics, but not with a great deal of success, even though the first thing every mouse researcher learns is that different strains of mice are very different (C57BL/6 mice, for example, are notoriously hard to handle and love biting people).

This is bad news for human genetics, where the interesting traits are clearly a lot more complex, ill-defined and hard to measure than in yeast. On the other hand, it's perhaps also rather reassuring, as it suggests that our failure to explain more than a few % of the heritability so far reflects technical limitations, rather than meaning that these traits just aren't as genetic as we think after all...

ResearchBlogging.orgEhrenreich IM, Torabi N, Jia Y, Kent J, Martis S, Shapiro JA, Gresham D, Caudy AA, & Kruglyak L (2010). Dissection of genetically complex traits with extremely large pools of yeast segregants. Nature, 464 (7291), 1039-42 PMID: 20393561

Thursday, April 8, 2010

Social Learning in Antisocial Animals

In an unusual study with potentially revolutionary implications, Austrian biologists Wilkinson et al show evidence of Social learning in a non-social reptile.

Social learning means learning to do something by observing others doing it, rather than by doing it yourself. Many sociable animal species, including mammals, birds and even insects, have shown the ability to learn by observing others. It's often seen as a distinct form of cognition, separate from "normal" learning, which evolved to facilitate group living. It's one of the things that everyone's favorite brain cells, mirror neurons, have been invoked to explain.

But if observational learning is a specifically social adaptation, then non-social animals would be predicted to lack this ability. One distinctly unfriendly species is the South American red-footed tortoise (Geochelone carbonaria). In the wild, they hatch from their eggs alone, and get no parental care; they live most of their lives without interacting with others.

Wilkinson et al found that red-footed tortoises can, nevertheless, learn by observation. They took four tortoises and got them to watch another "demonstrator" tortoise completing a difficult task: walking around an obstacle to get to some food (it's hard if you're a tortoise).

The observing animals all learned to do the task. In most cases, they walked around the obstacle to the right, which is what the demonstrators did, but sometimes they went left, showing that they were not simply copying the movements of the demonstrators. The wood chips on the floor of the cage were mixed up after each trial, to rule out the possibility that the tortoises were just following the smell of the demonstrator. None of four control tortoises, who got no demonstrations, managed to figure it out on their own.

The authors conclude that
The dominant hypothesis in this field claims that social learning evolved as a result of social living and therefore predicts that the tortoises would have difficulty with this task. They did not. The findings suggest that, in this case, social learning may be the result of a general ability to learn. Although the brain mechanisms that underlie the tortoises’ ability to learn socially remain unclear, it seems most likely that it is the product of a general learning mechanism that allows the tortoises to learn, through associative processes, to use the behaviour of another animal just as they would learn to use any cue in the environment.
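The associative account in that quote, treating another animal's behaviour as just one more predictive cue, can be sketched with a standard Rescorla-Wagner update. This isn't from the paper; the learning rate and trial numbers are arbitrary illustrative choices.

```python
# Minimal Rescorla-Wagner sketch: the "demonstrator went right" cue acquires
# value through ordinary prediction-error learning, like any environmental cue.
def rescorla_wagner(trials, alpha=0.3, reward=1.0):
    """Return the associative strength of the cue after each rewarded trial."""
    v = 0.0
    history = []
    for _ in range(trials):
        v += alpha * (reward - v)   # prediction error drives the update
        history.append(v)
    return history

values = rescorla_wagner(12)
print(f"cue value after 12 rewarded observations: {values[-1]:.2f}")
```

Nothing in this mechanism is specifically "social": the same update would work for a light or a tone, which is exactly the authors' point.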
This is a nice experiment, and the result is important: the idea that social learning is somehow evolutionarily and neurally "special" underlies a lot of modern social neuroscience. However, I'm not convinced that these tortoises can be accurately described as "non-social". Even the most anti-social species have to socialize in order to mate: no animal is an island. According to Wikipedia the red-footed tortoise has some quite elaborate (and hilarious) mating behaviours...
Male to male combat is important in inducing breeding in redfoots. Male to male combat begins with a round of head bobbing from each male involved, and then proceeds to a wrestling match where the males attempt to turn one another over. The succeeding male (usually the largest male) then attempts to mate with the females. The ritualistic head movements displayed by male red-foots are thought to be a method of species recognition. Other tortoise species have different challenging head movements... The unique body shape of the male redfooted tortoise facilitates the mating process by allowing him to maintain his balance during copulation while the female walks around, seemingly attempting to dislodge the male by walking under low-hanging vegetation.
ResearchBlogging.orgWilkinson, A., Kuenstner, K., Mueller, J., & Huber, L. (2010). Social learning in a non-social reptile (Geochelone carbonaria) Biology Letters DOI: 10.1098/rsbl.2010.0092

Saturday, March 20, 2010

Absinthe Fact and Fiction

Absinthe is a spirit. It's very strong, and very green. But is it something more?

I used to think so, until I came across this paper taking a skeptical look at the history and science of the drink: Padosch et al's "Absinthism: a fictitious 19th century syndrome with present impact".

Absinthe is prepared by crushing and dissolving the herb wormwood in unflavoured neutral alcohol and then distilling the result; other herbs and spices are added later for taste and colour.

It became extremely popular in the late 19th century, especially in France, but it developed a reputation as a dangerous and hallucinogenic drug. Overuse was said to cause insanity, "absinthism", much worse than regular alcoholism. Eventually, absinthe was banned in the USA and most but not all European countries.

Much of the concern over absinthe came from animal experiments. Wormwood oil was found to cause hyperactivity and seizures in cats and rodents, whereas normal alcohol just made them drunk. But, Padosch et al explain, the relevance of these experiments to drinkers is unclear, because they involved high doses of pure wormwood extract, whereas absinthe is much more dilute. The fact that authors at the time used the word absinthe to refer to both the drink and the pure extract added to the confusion.

It's now known that wormwood, or at least some varieties of it, contains thujone, which can indeed cause seizures, and even death, because it acts as a GABA antagonist. Until a few years ago it was thought that old-style absinthe might have contained up to 260 mg of thujone per litre, a substantial dose.

But that was based on the assumption that all of the thujone in the wormwood ended up in the drink prepared from it. Chemical analysis of actual absinthe has repeatedly found that it contains no more than about 6 mg/L thujone. The alcohol in absinthe would kill you long before you drank enough to get any other effects. As the saying goes, "the dose makes the poison", something that is easily forgotten.
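The back-of-envelope arithmetic behind that claim is worth spelling out. The 6 mg/L figure is from the paper; the thujone threshold, the absinthe strength, and the fatal alcohol dose below are rough assumptions for illustration, not established pharmacology.

```python
# "The dose makes the poison": how much absinthe would you need to drink
# to get a pharmacological dose of thujone?
THUJONE_MG_PER_LITRE = 6.0       # measured in real absinthe (per Padosch et al)
ABV = 0.60                       # typical absinthe strength (assumed)
ETHANOL_G_PER_ML = 0.789         # density of ethanol

assumed_thujone_dose_mg = 30.0   # hypothetical dose for any noticeable effect
assumed_lethal_ethanol_g = 400.0 # rough fatal dose for an adult (assumed)

litres_needed = assumed_thujone_dose_mg / THUJONE_MG_PER_LITRE
ethanol_g = litres_needed * 1000 * ABV * ETHANOL_G_PER_ML

print(f"absinthe needed for a thujone effect: {litres_needed:.1f} L")
print(f"ethanol in that much absinthe: {ethanol_g:.0f} g")
print(f"multiple of the (assumed) lethal alcohol dose: "
      f"{ethanol_g / assumed_lethal_ethanol_g:.1f}x")
```

Even with generous assumptions about thujone's potency, the alcohol gets you several times over before the thujone does anything.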

As Padosch et al point out, it's possible that there are other undiscovered psychoactive compounds in absinthe, or that long-term exposure to low doses of thujone does cause "absinthism". But there is no evidence for that so far. Rather, they say, absinthism was just chronic alcoholism, and absinthe was no more or less dangerous than any other spirit.

I'm not sure why, but drinks seem to attract more than their fair share of urban myths. Amongst many others I've heard that the flakes of gold in Goldschläger cause cuts which let alcohol into your blood faster; Aftershock crystallizes in your stomach, so if you drink water the morning afterwards, you get drunk again; and that the little worm you get at the bottom of some tequilas apparently contains especially concentrated alcohol, or hallucinogens, or even cocaine maybe.

Slightly more serious is the theory that mixing different kinds of drinks, instead of sticking to just one, gets you drunk faster, or gives you a worse hangover, or something, especially if you do it in a certain order. Almost everyone I know believes this. In my drinking experience it's not true, but I'm not sure that it's completely bogus either, as I have heard somewhat plausible explanations, e.g. that drinking spirits alongside beer leads to a concentration of alcohol in your stomach that's optimal for absorption into the bloodstream... maybe.

Link: Not specifically related to this but The Poison Review is an excellent blog I've recently discovered all about poisons, toxins, drugs, and such fun stuff.

ResearchBlogging.orgPadosch SA, Lachenmeier DW, & Kröner LU (2006). Absinthism: a fictitious 19th century syndrome with present impact. Substance abuse treatment, prevention, and policy, 1 (1) PMID: 16722551

Tuesday, January 26, 2010

The Grid in Your Head

According to a lovely new Nature paper combining fMRI with animal experiments, the human brain encodes spatial information in the form of a hexagonal grid - Evidence for grid cells in a human memory network.

If you've ever played Chinese checkers, you'll know what a hex grid is. It's already known that in rats, the entorhinal cortex of the brain contains "grid cells", each of which fires when the rat is at particular locations in its environment. The diagram above left shows how one example grid cell fires more often when the rat is in certain places in a 1m x 1m box.

Doeller et al wanted to test whether grid cells exist in humans, but being unable to just stick electrodes in people's heads, they made use of two useful facts about rat grid cells. First, the orientation of the grid is fixed in all the cells in each particular rat, although each cell prefers different locations, i.e. the "grids" are offset, but not rotated. Second, grid cells fire faster when the animal is walking or running in a direction which corresponds to "along the lines" of their brain's internal grid - especially when the movement is rapid.

So, if our brains do contain grid cells, our entorhinal cortex should be more active overall when we're moving along the lines of our grids, as opposed to across them. Bearing in mind that there are three axes, and that you could move either "forward" or "backward" along each one, that makes 6 directions, so the grid cell theory predicts that entorhinal cortex activity should correlate with direction of motion with "6-way directional symmetry", like this:

Doeller et al used fMRI to measure neural activity while 42 volunteers "walked" around a computer-generated landscape on a screen, and looked for areas where activity had the pattern above. Lo and behold, the entorhinal cortex did indeed show this pattern of activity in most volunteers. As a control, they looked for areas showing 4-, 5-, 7- or 8-fold directional symmetry, and didn't find any.
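The logic of that analysis can be sketched in a few lines: model activity as a sinusoid of running direction with k-fold symmetry, and ask which k fits best. This is a simplified simulation, not the authors' actual pipeline, and the grid orientation, noise level, and trial count are invented for the sketch.

```python
import numpy as np

# Simulated "entorhinal" signal with 6-fold directional symmetry:
# activity ~ cos(6 * (theta - phi)) plus noise.
rng = np.random.default_rng(1)
phi = np.deg2rad(15)                          # hypothetical grid orientation
theta = rng.uniform(0, 2 * np.pi, size=500)   # running direction on each trial
activity = np.cos(6 * (theta - phi)) + rng.normal(0, 0.5, size=500)

def symmetry_fit(k):
    """R^2 of a k-fold sinusoidal model, fit via sin/cos regressors
    (which absorbs the unknown orientation phi)."""
    X = np.column_stack([np.cos(k * theta), np.sin(k * theta),
                         np.ones_like(theta)])
    beta, *_ = np.linalg.lstsq(X, activity, rcond=None)
    resid = activity - X @ beta
    return 1 - resid.var() / activity.var()

fits = {k: symmetry_fit(k) for k in (4, 5, 6, 7, 8)}
best = max(fits, key=fits.get)
print(f"best-fitting symmetry: {best}-fold")  # the 6-fold model should win
```

The 4-, 5-, 7- and 8-fold models play the same role as the paper's control analyses: they should fit no better than chance when the underlying signal is hexagonal.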

Doeller et al point out that they haven't directly proven the existence of grid cells in humans - in theory, these results could also indicate the presence of another type of cell which encodes direction with 6-way directional symmetry. But this is a great piece of research, and a nice example of using neuroimaging to test neurobiological theories, as opposed to just going hunting for blobs of activation without knowing what to look for, which I've criticized before.

ResearchBlogging.orgDoeller, C., Barry, C., & Burgess, N. (2010). Evidence for grid cells in a human memory network Nature DOI: 10.1038/nature08704

Wednesday, January 20, 2010

The Sweet Taste of Cannabinoids

Every stoner knows about the munchies, the fondness for junk food that comes with smoking marijuana. Movies have been made about it.

It's not that drugs in general make you want to eat: stimulants, like cocaine and amphetamine, actually decrease appetite. The munchies are something specific to marijuana. But why?

New research from a Japanese team reveals that marijuana directly affects the cells in the taste buds which detect sweet flavours - Endocannabinoids selectively enhance sweet taste.

Yoshida et al studied mice, and recorded the electrical signals from the chorda tympani (CT), which carries taste information from the tongue to the brain.

They found that injecting the mice with two chemicals, 2-AG and AEA, markedly increased the strength of the signals produced in response to sweet tastes - such as sugar, or the sweetener saccharin. However, neither had any effect on the strength of the response to other flavours, like salty, bitter, or sour. Mice given endocannabinoids were also more eager to eat and drink sweet things, which confirms previous findings.

2-AG and AEA are both endocannabinoids, an important class of neurotransmitters. Marijuana's main active ingredient, Δ9-THC, works by mimicking the action of endocannabinoids. Although Δ9-THC wasn't tested in this study, it's extremely likely that it has the same effects as 2-AG and AEA.

In follow-up experiments, Yoshida et al found that endocannabinoids enhance sweet taste responses by acting on cannabinoid type 1 (CB1) receptors on the tongue's sweet taste cells themselves. In fact, over half of the sweet receptor cells expressed CB1 receptors!

This is an important finding, because CB1 receptors are already known to regulate the pleasurable response to sweet foods (amongst other things) in the brain. These new data don't challenge this, but suggest that CB1 also modulates the most basic aspects of sweet taste perception. The munchies are probably caused by Δ9-THC acting at multiple levels of the nervous system.

This paper also sheds light on CB1 antagonists. Given that drugs which activate CB1 make people eat more, it would make sense if CB1 blockers made people eat less, and therefore lose weight, a kind of anti-munchies effect. And indeed they do. Which is why rimonabant, a CB1 antagonist, was released onto the market in 2006 as a weight loss drug. It worked pretty well, although unfortunately it also caused clinical depression in some people, so it was withdrawn in Europe in 2008 and was never approved in the USA for the same reason.

The depression was almost certainly caused by antagonism at CB1 receptors in the brain, but Yoshida et al's findings suggest that a CB1 antagonist which didn't enter the brain, and only affected peripheral sites such as the taste buds, might be able to make people less fond of sweet foods without causing the same side-effects. Who knows - in a few years you might even be able to buy CB1 antagonist chewing gum to help you stick to your diet...

ResearchBlogging.orgYoshida, R., Ohkuri, T., Jyotaki, M., Yasuo, T., Horio, N., Yasumatsu, K., Sanematsu, K., Shigemura, N., Yamamoto, T., Margolskee, R., & Ninomiya, Y. (2009). Endocannabinoids selectively enhance sweet taste Proceedings of the National Academy of Sciences, 107 (2), 935-939 DOI: 10.1073/pnas.0912048107
