Monday, September 19, 2011
The Ancients
Greece, of course, is rich in history (if not money, at the moment) and the National Archaeological Museum is predictably impressive. One of the most striking artefacts I remember was a kind of miniature suit made out of pure gold leaf, complete with a little face mask with tiny eye holes. It was the death mask of an infant from Mycenae, buried about 3000 years ago and dug up in the 19th century.
That's fascinating of course. When you think about it, it's also tragic. This was someone's baby son or daughter. However, it's hard to feel sad over it. If that baby died in front of you, or even if it happened yesterday and you read about it on the news, it would be sad.
You'd even feel sad if it were an entirely fictional baby that "died" in a movie. But being so old, it's not sad, it's just interesting, which is why these things have ended up in museums.
Most of the best exhibits are grave goods, placed in tombs with the dead, in the belief that the deceased would be able to use them in the next world. One Mycenaean warrior was buried with his sword, the blade specially bent so as to "kill" it, and ensure that it would travel to the afterlife with him.
That's fascinating, and also rather weird. Killing a sword so its dead owner could use the ghost of it in heaven? Those crazy ancients!
When you think about it, that's a horrible thing to think. That guy was probably a war hero and that grave was the most solemn memorial his culture could erect to his memory. That was the Arlington, the Tomb of the Unknown Soldier, of his day. We could have let it rest in peace. But we put it in a museum.
My point here is not that we ought to stop doing archaeology because it's offending the memory of the dead. What's interesting is the fact that no-one would even consider that. We just don't care about the dead of 3000 years ago, except as historical data. Yet there'd be outrage if someone went into a churchyard and started digging up the dead of 300 years ago. You wouldn't even stick chewing gum to a gravestone or use one as a seat.
So there are two categories of the dead. There's the alive dead, who are felt to be with us, in the sense that they have a right to respect. Then there are the dead dead, the ancients, who are of purely historical interest. The alive dead still have power - wars are fought over their memories, honour, property rights.
Eventually, though, even the dead die, and that's generally a good thing. The Hungarians, so far as I know, don't dislike the Mongolians because of the Mongol invasion of 1241, although the Hungarians who died then would probably have wanted them to.
Fortunately for modern international relations, they're dead.
Thursday, September 1, 2011
Men, Women and Spatial Intelligence
While it's now (almost) generally accepted that men and women are at most only very slightly different in average IQ, there are still a couple of lines of evidence in favor of a gender difference.
First, there's the idea that men are more variable in their intelligence, so there are more very smart men, and also more very stupid ones. This averages out so the mean is the same.
Second, there's the theory that men are on average better at some things, notably "spatial" stuff involving the ability to mentally process shapes, patterns and images, while women are better at social, emotional and perhaps verbal tasks. Again, this averages out overall.
According to proponents, these differences explain why men continue to dominate the upper echelons of things like mathematics, physics, and chess. These all tap spatial processing and since men are more variable, there'll be more extremely high achievers - Nobel Prizes, grandmasters. (There are also presumably more men who are rubbish at these things, but we don't notice them.)
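To make the variability argument concrete, here's a toy calculation of my own (the standard deviations are made up, not taken from any study): if two groups share the same average but one is slightly more spread out, the more variable group dominates the extreme tails.

```python
# Toy illustration with hypothetical numbers: same mean, slightly different
# spread, and the more variable group dominates the far tails.
from scipy.stats import norm

mean = 100
sd_men, sd_women = 15.0, 14.0   # hypothetical standard deviations

for cutoff in (130, 145, 160):
    p_men = norm.sf(cutoff, loc=mean, scale=sd_men)     # P(score > cutoff)
    p_women = norm.sf(cutoff, loc=mean, scale=sd_women)
    print(f"above {cutoff}: men-to-women ratio = {p_men / p_women:.1f}")

# The ratio grows as the cutoff rises, so even a small variance difference
# predicts a big imbalance among the very highest (and very lowest) scorers.
```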
The male spatial advantage has been reported in many parts of the world, but is it "innate", something to do with the male brain? A new PNAS study says - probably not, it's to do with culture. But I'm not convinced.
The authors went to India and studied two tribes, the Khasi and the Karbi. Both live right next to each other in the hills of Northeastern India and genetically, they're closely related. Culturally, though, the Karbi are patrilineal - property and status are passed down from father to son, with women owning no land of their own. The Khasi are matrilineal, with men forbidden to own land. Moreover, Khasi women get just as much education as the men, while Karbi women get much less.
The authors took about 1200 people from 8 villages - 4 per culture - and got them to do a jigsaw puzzle. The quicker you do it, the better your spatial ability. Here were the results. I added the gender-stereotypical colours.
In the patrilineal group, women did substantially worse on average (remember that more time means worse). In the matrilineal society, they performed as well as men. Well, a tiny bit worse, but it wasn't significant. Differences in education explained some of the effect, but only a small part of it.
OK.
This was a large study, and the results are statistically very strong. However, there's a curious result that the authors don't discuss in the paper - the matrilineal group just did much better overall. Looking at the men, they were 10 seconds faster in the matrilineal culture. That's nearly as big as the gender difference in the patrilineal group (15 seconds)!
The individual variability was also much higher in the patrilineal society, for both genders.
Now, maybe, this is a real effect. Maybe being in a patrilineal society makes everyone less spatially aware, not just women; that seems a bit of a stretch, though.
There's also the problem that this study essentially only has two datapoints. One society is matrilineal and has low gender difference in visuospatial processing. One is patrilineal and has a high difference. But that's just not enough data to conclude that there's a correlation between the two things, let alone a causal relationship; you would need to study lots of societies to do that.
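To spell out why two societies can't establish a correlation, here's a minimal illustration (the numbers are arbitrary placeholders, not the study's data):

```python
# With n = 2, the Pearson correlation is mechanically +1 or -1 (or undefined),
# whatever values you plug in. The numbers below are arbitrary placeholders.
import numpy as np

kinship = np.array([0, 1])            # 0 = patrilineal, 1 = matrilineal
gender_gap = np.array([15.0, 1.0])    # gap in puzzle time, seconds (made up)

r = np.corrcoef(kinship, gender_gap)[0, 1]
print(r)   # -1.0 here; any two unequal values give exactly +/-1

# Only a sample of many societies, with varying kinship systems, could tell
# us whether matriliny and a small gender gap actually go together.
```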
Personally, I have no idea what drives the difference, but this study is a reminder of how difficult the question is.
Thursday, August 25, 2011
New Mutations - New Eugenics?
The idea that your DNA comes half from your mother and half from your father is mostly true, but not quite. In theory, you do indeed get half from each parent; but in practice, there's sometimes a third parent as well: random chance. Genes don't always get transmitted as they should: mutations occur.
As a result, it's not true that "genetic" always implies "inherited". A disease, for example, could be entirely genetic, and almost never inherited. Down's syndrome is the textbook example, but it's something of a special case and until recently, it was widely assumed that most disease risk genes were inherited.
Yet recent evidence suggests that many cases of neurological and psychiatric disorders are caused by uninherited, de novo mutation events. Here are two papers from the last few weeks about schizophrenia (1, 2) - but the story looks similar for autism, intellectual disabilities, some forms of epilepsy, ADHD, and others. Indeed they're often the same mutations.
Biologically, a given mutation is what it is, whether it's de novo or inherited. But on a social and a psychological level, I think there are crucial differences, and in particular I think that if it turns out that de novo mutations are important in disease, we're going to see attempts to take these variants out of circulation - far more so than in the case of the very same genes, were they inherited.
The old eugenics movement was based on the idea that if we stop people with bad genes from breeding - by sterilization, voluntary or otherwise, say - we'll be able to eliminate diseases and other undesirable traits. This idea is now generally regarded as extremely unethical, but many of its opponents have shared with the eugenicists the belief that it could work.
But if de novo mutations are what cause the majority of disease, then this approach would be pointless. Sterilizing certain people, or encouraging the healthy ones to have more children, would never be able to eliminate the 'bad genes' because new ones are being created every generation, pretty much at random.
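To see why, here's a toy calculation of my own (the mutation rate and starting prevalence are invented for illustration): even perfect selection against carriers only pushes prevalence down to the per-generation mutation rate, never below it.

```python
# Toy model: perfect selection against carriers (they never reproduce), yet
# prevalence can't drop below the rate at which new mutations appear.
mutation_rate = 0.001    # hypothetical: 0.1% of each generation carries a new mutation
prevalence = 0.01        # hypothetical starting prevalence

for generation in range(1, 6):
    inherited = 0.0                          # carriers leave no children at all
    prevalence = inherited + mutation_rate   # but de novo cases appear regardless
    print(f"generation {generation}: prevalence = {prevalence:.2%}")

# Prevalence falls to 0.10% after one generation and then stays there forever;
# no amount of selection against carriers can push it lower.
```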
So the de novo paradigm ought to be welcomed by opponents of eugenics. It wasn't just morally wrong - it was biologically misguided too.
But hang on. This is the 21st century. We have in vitro fertilization (IVF), and you can analyze the genes of an IVF embryo before you decide to make it into a child. In the near future, we might be able to routinely sequence the genome of any unborn child shortly after conception.
From there, it would be a small step to allowing parents to decide not to have children with de novo mutations.
This would be, in its effects, a form of eugenics - in the sense that it would produce the effect that the old eugenicists wanted. No more 'bad' genes, or not nearly as many. Opinions will differ as to whether it's morally different. But I would have said that politically, it's a lot more likely to happen.
I can't see forced sterilization returning any time soon. But if you were expecting a baby and you knew that it was not just carrying your and your partner's DNA, but had also suffered a mutation - might you not want to avoid that?
Psychologically, it matters that the child did not inherit the gene. It would be a big step to decide that your child should not inherit one of your own genes. Of course, some genes are obviously harmful, like one that raises the risk of cancer, but think about the grey areas - a gene for social anxiety, mild autistic symptoms, obesity, a personality trait.
You might well feel that carrying that gene is what makes you, you; and so it would be natural for your child to have it. You might decide that if it was good enough for you (and all your ancestors), it's good enough for your children. You might well resent the very idea that it's a 'bad' gene at all, as an attack on your own self-worth.
But none of that applies if it's a de novo mutation. Indeed, quite the opposite - all those same considerations would probably lead you to want your children to carry as close as possible to a carbon copy of your DNA, with no random changes. It was good enough for you.
My point is that I think there will be much more support for the idea of genetic screening against de novo mutations than against inherited genes. More people will want it, it will be more socially acceptable, and it will be more widely used. I'm not saying this would be a good or a bad thing, just making a prediction. In the future, diseases and traits that are primarily caused by de novo mutations will increasingly be selected against.
Friday, August 12, 2011
Debating Greenfield

British neuroscientist Susan Greenfield regrets the recent controversy over certain of her remarks, and calls for a serious debate over "mind change" -
"Mind change" is an appropriately neutral, umbrella concept encompassing the diverse issues of whether and how modern technologies may be changing the functional state of the human brain, both for good and bad.Very well, here goes. I wonder if Greenfield will reply.
As Greenfield points out, the human brain is plastic and interacts with the environment. Indeed, this is how we are able to learn and adapt to anything. Were our brains entirely unresponsive to what happens to them we would have no memory and probably no behaviour at all.
The modern world is changing your brain, in other words.
However, the same is true of every other era. The Victorian era, the Roman Empire, the invention of agriculture - human brains were never the same after those came along.
Because the brain is where behaviour happens, any change in behaviour must be accompanied by a change in the brain. By talking about how behaviour changes, we will, implicitly, also be discussing the brain.
However it doesn't work in reverse. Changes in the brain can't be assumed to mean changes in behaviour. Greenfield cites, for example, this paper which purports to show reductions in the grey matter volume of certain areas of the brain cortex in Chinese students with internet addiction compared to those without.
However, there is a more subtle point. Even if these changes were a direct consequence of excessive internet use, it wouldn't mean that the internet use was changing behaviour.
We have no idea what a slight decrease in grey matter volume in the cerebellum, dorsolateral prefrontal cortex, and supplementary motor area would do to cognition and behaviour. It might not do anything.
My point here is that rather than worrying about the brain, we ought to focus on behaviour. Because that is also focussing on the brain, but it's focussing on the aspects of brain function that actually matter.
Greenfield then poses three questions.
1. Could sustained and often obsessive game-playing, in which actions have no consequences, enhance recklessness in real life?
It's possible that it could, although I don't think we do live in an especially reckless society, given that crime rates are lower now than they have been for 20 years.
However, the question assumes that game playing has no consequences. Yet in-game actions do have in-game consequences. To a non-gamer, these may seem like no consequences, because they're not real.
Yet in the game, they're perfectly real, and if you spend 12 hours a day playing that game, and all your friends do as well - you are going to care about that. Those consequences will matter, to you, and with luck, you'll learn not to be so impulsive in the future.
In World of Warcraft, for example, actions have all too many consequences. If you impulsively decide to attack an enemy in the middle of a raid, you could cause a wipe, which would, quite possibly, ruin everyone's evening and get you a reputation as an oaf.
Exactly as your reputation would suffer if you and your friends went for an evening at the opera, and you stood up in the middle and shouted a profanity. Ah, but that's real life, the response goes. Is it? Is a performance in which hundreds of people sit solemnly, while grown adults dress up and pretend to be singing gods and fairies on the instructions of a deceased anti-semite, any more real than this?
3. How can young people develop empathy if they conduct relationships via a medium which does not allow them the opportunity to gain full experience of eye contact, interpret voice tone or body language, and learn how and when to give and receive hugs?
I do not think that this accurately represents the experience of most children today. However, assuming that it were true, what would be the problem?
If everyone's relationships were conducted online, surely it would be more important to learn how to navigate the online world, than it would be to learn how to interpret body language, which (webcams aside), you would never see, or need to see.
If the brain is plastic and adapts to the environment, as Greenfield argues, then surely the fact that it is adapting to the information age is neither surprising nor concerning. If anything, we ought to be trying to help the process along, to make ourselves better adapted. It would be more worrying if it didn't adapt.
Some might be concerned by this. Surely, there is value in the old way of doing things, value that would be lost in the new era. Unless one can point to definite reasons why the new state of affairs is inherently worse than the old - not just different from it - it is hard to distinguish these concerns from the simple feeling of nostalgia over the past.
The same point could have equally well been made at any time in history. When our ancestors first settled down to farm crops, an early conservative might have lamented - "Young people today are growing up with no idea of how to stab a mammoth in the eye with a spear. All they know is how to plant, water and raise this new-fangled 'wheat'."
Friday, July 29, 2011
What Big Eyes You Have
Actually, the paper in question talked about eyes but didn't make much of the brain finding, which is confined to the Supplement. Nonetheless, they did find an effect on brain size too. Peoples living further from the equator have larger eye sockets and also larger total cranial capacity (brain volume), apparently. The authors include Robin Dunbar of "Dunbar's Number" fame.
Their idea is that humans evolved larger eyes because further from the equator, there's on average less light, so you need bigger eyes to collect more light and see well.
They looked at 19th century skulls stored in museum collections, and measured the size of the eye sockets (orbits). They did this by filling them with a bunch of little glass balls and counting how many balls fit. They had a total of 73 "healthy adult" skulls from 12 different places, ranging from Scandinavia to Kenya.
Latitude essentially meant northern-ness, because only one population (Australian Aborigines) was from far south of the equator.
One familiar theory holds that the cognitive demands of surviving harsh northern winters selected for bigger brains - the heat of the Sahara was easy living compared to the deadly horrors of an English winter, in other words. Hmm.
The idea that higher latitudes are darker, so you'd need bigger eyes, and then a bigger brain (at least the visual parts of the brain) to process what you see, is certainly more plausible than that theory. However, the data in this paper seem pretty scanty.
Measuring skulls by filling them with little balls was cutting edge neuroscience in the 19th century. However, nowadays, we have MRI scanners. Although usually intended to image the brain, many MRI scans of the head also give an excellent image of the skull and eyes. Millions of people of all races get MRI scans every year.
Nowadays, people have medical records, so we can tell exactly how healthy people are. The people who became these skulls in a museum were said to be healthy, but how healthy a 19th century Indian or Kenyan could hope to be, by modern standards, I'm not sure. Certainly there's an excellent chance that they were malnourished and I suspect this would make your eyes and skull smaller.
Saturday, July 9, 2011
Depression: From Treatment to Diagnosis?
The logic of this system - diagnosis first, treatment second - depends upon the sequence. A diagnosis is meant to be an objective statement about the nature of your illness; treatments (if any) come afterwards. It would be odd if the treatments on offer influenced what diagnosis you got.
An interesting paper just out suggests that exactly this kind of reverse influence has happened. The authors looked at what happened in the USA in 2003, when antidepressants were slapped with a "black box" warning cautioning against their use in children and adolescents, due to concerns over suicide in young people.
They used the data from the annual National Ambulatory Medical Care Survey (NAMCS) and the National Hospital Ambulatory Medical Care Survey (NHAMCS). These record data on the number of patients visiting their doctor regarding different illnesses, and what medications were prescribed if any.
What happened? The warning led to a reduction in the use of antidepressants. No surprise there, but unexpectedly, this wasn't because teens who visited their doctor regarding depression were less likely to be given these drugs.
Actually, the proportion of depression visits that were also antidepressant visits was almost unchanged:
The proportion of depression visits with an antidepressant prescribed, having risen from 54% in 1998–1999 to 66% in 2002–2003, remained stable in 2004–2005 (65%) and in 2006–2007 (64%)
The difference was caused by a reduction in the number of teens getting diagnosed with depression - or rather, the number of visits where depression was mentioned; we can't tell if this meant doctors were less likely to diagnose, or patients were less likely to complain, or whatever.
This graph shows the story. After 2003, both antidepressant visits and depression visits fall, while the proportion of "antidepressant & depression" visits to the total depression visits (purple line) is constant.
The effect seen is just a correlation - it might have been a coincidence that all this happened after the black box warning in 2003. It seems very likely to be causal, though. Antidepressant use was rising steadily up until that point - and in adults, both depression and antidepressant visits rose after 2003.
It's also dangerous to pile too many heavy conclusions on the back of one study. But having said that -
Getting diagnosed with depression - at least if you're a teenager in the USA - is not just a function of having certain symptoms. The treatments on offer are a factor in determining whether you're diagnosed.
One alternative view is that the fall in depression visits reflects the fact that kids on antidepressants tend to have multiple visits - to monitor their progress, adjust dosage and so on. So when antidepressant use fell, the number of visits fell. But if that were true, we'd presumably expect to see a fall in the proportion of visits that dealt with antidepressants, which we didn't.
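To spell that logic out with some invented numbers (mine, not the paper's): if the post-2003 fall in depression visits had come mainly from losing follow-up visits by medicated kids, the antidepressant share of the remaining visits should have dropped noticeably.

```python
# Invented illustrative counts (not the NAMCS/NHAMCS data).
pre_visits = 1000          # teen depression visits before the warning
pre_with_ad = 650          # ~65% of them involved an antidepressant

lost_followups = 200       # suppose the whole decline is lost antidepressant follow-ups
post_visits = pre_visits - lost_followups
post_with_ad = pre_with_ad - lost_followups

print(f"pre-2003 antidepressant share:  {pre_with_ad / pre_visits:.0%}")    # 65%
print(f"post-2003 antidepressant share: {post_with_ad / post_visits:.0%}")  # 56%

# A clear fall in the share - but the real data show it holding steady at
# ~64-65%, so "fewer follow-up visits" can't be the whole story.
```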
This is disturbing either way you look at it. If you think the pre-2003 diagnoses were appropriate, then after 2003, kids must have been going undiagnosed with depression. On the other hand, if you think post-2003 was a welcome move away from over-diagnosis of depression, then pre-2003 must have been bad.
As to what happened to the kids who would have got a diagnosis of depression post-2003 were it not for the black box warning, we've got no way of knowing.

Why did this happen? Psychologist Abraham Maslow famously said "It's tempting, if the only tool you have is a hammer, to treat everything as if it were a nail." The history of psychiatry bears this out.
Sigmund Freud's psychoanalysis was essentially the theory that most mental disturbance was a 'neurosis' or 'complex' of the kind that's best treated by lying on a couch and talking about your dreams and your childhood - which, as luck would have it, was exactly what Freud had just invented.
Along came psychiatric drugs, and suddenly everything was a 'chemical imbalance'. I've previously suggested that the invention of SSRI antidepressants, in particular, may have changed the concept of depression into one which was most amenable to treatment with SSRIs.
Recently, we're seeing the rise of the view that everything from psychosis to paedophilia is about 'cognitive biases' that can be treated by the latest treatment paradigm, CBT.
We always think we've hit the nail on the head.
Tuesday, July 5, 2011
Melancholia In 100 Words

The British Journal of Psychiatry have a regular series called "In 100 Words", which produces some gems. This month they have Melancholia in 100 Words, featuring perhaps the most influential musician you haven't heard of, Robert Johnson.
I got stones in my pathway / And my road seems dark at night / I have pains in my heart / They have taken my appetite.
I've previously written about the blues and what shade of blue they were talking about, here. But this actually isn't the first Melancholia in 100 Words to appear in the BJP. Here's another one, from 2009:
Robert Johnson, known as the King of the Delta blues singers, distilled into these lines the essence of severe depressive illness – somatic ills, fear and suspicion, emotional and physical pain, nocturnal troubles and struggle against obstacles. The words are one with the powerful, haunting music. ICD-10 and DSM-IV have their place, but poets have often been there before us, and done a better job. We can all learn from Robert Johnson, born just 100 years ago.
Melancholia is a classical episodic depressive disorder that combines mood, psychomotor, cognitive and vegetative components with high suicide risk. In the present psychiatric classification it is buried as a modifier in both bipolar and unipolar depressions. It is hardly used to characterise patients in the clinic or research.
The syndrome is frequently recognised in delusional and agitated depression, and in the elderly. Cortisol or sleep EEG abnormalities are prognostically helpful. Melancholia is particularly responsive to tricyclic antidepressants and electroconvulsive therapy but not to selective serotonin reuptake inhibitors or psychotherapy. Recognising melancholia as a distinct disorder improves clinical care and research.
Friday, June 3, 2011
Political Suicide
In the British Journal of Psychiatry, two psychiatrists and an anthropologist discuss recent cases of self-immolation as a form of political protest in the Arab world:
Since ancient times there has been a difference between suicide (an act of self-destruction) and self-immolation which, although self-destructive, has a sacrificial connotation. Self-immolation is associated with terrible physical pain (burning alive) and with the idea of courage... It is, however, a new phenomenon in Arab Muslim societies.
There's certainly a perception that some suicide is "political", and quite different from similar actions done for "personal" reasons. The same goes for breaking the law: we make a distinction between "common criminals", who do it for their own sake, and people who do so for an ideal.
The self-immolation of the young Tunisian Mohamed Bouazizi, a street vendor, expresses both the extreme hurt associated with the harassment and humiliation that was inflicted on him after his wares had been confiscated, and the fact that there were no other ways to be heard in a country where he knew no kind of political system other than dictatorship...His gesture is now being replicated, mostly by other young men in Arab countries.
These events ... raise important issues for psychiatrists and mental health professionals. First, these events highlight the social, political and cultural dimensions of suicide as a powerful collective idiom of distress. In the Tunisian case there is a shift from an individual sinful suicide to a sacrifice which evokes martyrdom. Fire symbolises purification...
Second, in spite of the fact that the idiom of distress put forward by these Arab youth is radically different from the usual profile of youth suicide in Western countries, these events may also be an invitation to rethink the collective dimensions of youth suicide as a protest against society. Without minimising the role of psychopathology and interpersonal factors, it may be time to revisit the collective meaning associated by youth with the decision to exit a world in which they may feel they do not always have a voice.
But I wonder whether this political/personal distinction is so clear-cut, psychologically speaking. Even "political" suicide has a personal component: in most cases, millions of people are in the same political situation, but only a few people burn themselves. Politics alone doesn't explain any individual case.
Conversely, the idea that "personal" suicide is simply a symptom of an individual's mental illness is likewise inadequate - most people with mental illness, even very severe cases, do not do it. We have to look into the social sphere as well.
Emile Durkheim drew a distinction between "egoistic" suicide, related to an individual's "prolonged sense of not belonging, of not being integrated in a community" and "anomic" suicide, caused by upheavals in society leading to "an individual's moral confusion and lack of social direction". But aren't those different ways of looking at the same thing?
Saturday, May 7, 2011
Bin Laden's Smile

Why was he so "popular"? I think it was his smile.
Bin Laden always smiled. This was his unique selling point. Most photos of extremists show either a hateful scowl, emotionless resolve, or at best a forced, unfriendly smile.
Bin Laden smiled, but it wasn't an evil smile. It looked perfectly genuine. He wasn't smiling because he'd just killed lots of enemies. He was just calm and content with being a killer. At peace. His videos illustrate this most dramatically. He was collected, quiet, almost shy. I've seen more passionate performances by college chemistry lecturers.
That was surely his appeal. No-one joins a movement like Al Qaeda unless they're angry, but Bin Laden seemed to be living proof that you didn't have to stay angry to stay a member. Al Qaeda was the way out of that. Al Qaeda could bring you inner peace. Whether Bin Laden was really like that, I have no idea. He might have been tormented by inner doubts, and just good at acting for the cameras. The point is, it doesn't matter. The images were out there, and that was the message.
His calm was also the reason why he was hated and feared more than the other members of his organization, including the ones who had a more direct role in 9/11. Osama was the one man to whom the image of the ranting, delusional extremist couldn't apply. Someone who planned terrorist attacks out of insane rage: that would be bad enough, but at least it would be understandable. That someone could do it with an agreeable smile on their face, was something else.
Given which, it's no surprise that the U.S. reported that Osama died a coward, hiding behind his wife. Nothing could have shattered the Osama image better than that. He wasn't beyond human emotion after all, he was scared just like anyone else. Again, whether or not that actually happened, is not the point. It's the message that went out, and I suspect that's the message that will stick.