Monday, September 19, 2011
The Ancients
Greece, of course, is rich in history (if not money, at the moment) and the National Archaeological Museum is predictably impressive. One of the most striking artefacts I remember was a kind of miniature suit made out of pure gold leaf, complete with a little face mask with tiny eye holes. It was the death mask of an infant from Mycenae, buried about 3000 years ago and dug up in the 19th century.
That's fascinating of course. When you think about it, it's also tragic. This was someone's baby son or daughter. However, it's hard to feel sad over it. If that baby died in front of you, or even if it happened yesterday and you read about it on the news, it would be sad.
You'd even feel sad if it were an entirely fictional baby that "died" in a movie. But being so old, it's not sad, it's just interesting, which is why these things have ended up in museums.
Most of the best exhibits are grave goods, placed in tombs with the dead, in the belief that the deceased would be able to use them in the next world. One Mycenaean warrior was buried with his sword, the blade specially bent so as to "kill" it, and ensure that it would travel to the afterlife with him.
That's fascinating, and also rather weird. Killing a sword so its dead owner could use the ghost of it in heaven? Those crazy ancients!
When you think about it, that's a horrible thing to think. That guy was probably a war hero and that grave was the most solemn memorial his culture could erect to his memory. That was the Arlington, the Tomb of the Unknown Soldier, of his day. We could have let it rest in peace. But we put it in a museum.
My point here is not that we ought to stop doing archaeology because it's offending the memory of the dead. What's interesting is the fact that no-one would even consider that. We just don't care about the dead of 3000 years ago, except as historical data. Yet there'd be outrage if someone went into a churchyard and started digging up the dead of 300 years ago. You wouldn't even stick chewing gum to a gravestone or use it as a seat.
So there are two categories of the dead. There's the alive dead, who are felt to be with us, in the sense that they have a right to respect. Then there are the dead dead, the ancients, who are of purely historical interest. The alive dead still have power - wars are fought over their memories, honour, property rights.
Eventually, though, even the dead die, and that's generally a good thing. The Hungarians, so far as I know, don't dislike the Mongolians because of the Mongol Invasion of 1241, although the Hungarians who died then would probably have wanted them to.
Fortunately for modern international relations, they're dead.
Thursday, September 1, 2011
Men, Women and Spatial Intelligence
While it's now (almost) generally accepted that men and women are at most only very slightly different in average IQ, there are still a couple of lines of evidence in favor of a gender difference.
First, there's the idea that men are more variable in their intelligence, so there are more very smart men, and also more very stupid ones. This averages out so the mean is the same.
Second, there's the theory that men are on average better at some things, notably "spatial" stuff involving the ability to mentally process shapes, patterns and images, while women are better at social, emotional and perhaps verbal tasks. Again, this averages out overall.
According to proponents, these differences explain why men continue to dominate the upper echelons of things like mathematics, physics, and chess. These all tap spatial processing and since men are more variable, there'll be more extremely high achievers - Nobel Prizes, grandmasters. (There are also presumably more men who are rubbish at these things, but we don't notice them.)
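The variability argument is easy to check with a toy calculation. Here's a minimal sketch in Python (the numbers are made up purely for illustration, not taken from any IQ dataset): two normal distributions with identical means, one slightly more spread out, and the share of each beyond a high cutoff.

```python
# Toy illustration of the "greater male variability" argument.
# Same mean for both groups; one has a slightly larger standard deviation.
# All numbers are invented for illustration only.
from scipy.stats import norm

mean = 100
sd_a, sd_b = 15.0, 16.5      # hypothetical ~10% difference in spread
cutoff = 145                 # an arbitrary "very high achiever" threshold

p_a = norm.sf(cutoff, loc=mean, scale=sd_a)   # P(score > cutoff), smaller spread
p_b = norm.sf(cutoff, loc=mean, scale=sd_b)   # P(score > cutoff), larger spread

print(f"above {cutoff}: {p_a:.4%} vs {p_b:.4%}, ratio {p_b / p_a:.1f}x")
# The same imbalance appears, mirrored, in the bottom tail below 55.
```

Even a modest 10% difference in spread more than doubles the fraction out beyond three standard deviations, with no difference at all in the average - which is exactly why the effect would only show up at the extremes.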
The male spatial advantage has been reported in many parts of the world, but is it "innate", something to do with the male brain? A new PNAS study says - probably not, it's to do with culture. But I'm not convinced.
The authors went to India and studied two tribes, the Khasi and the Karbi. Both live right next to each other in the hills of Northeastern India and genetically, they're closely related. Culturally though, the Karbi are patrilineal - property and status are passed down from father to son, with women owning no land of their own. The Khasi are matrilineal, with men forbidden to own land. Moreover, Khasi women also get just as much education as the men, while Karbi ones get much less.
The authors took about 1200 people from 8 villages - 4 per culture - and got them to do a jigsaw puzzle. The quicker you do it, the better your spatial ability. Here were the results. I added the gender-stereotypical colours.
In the patrilineal group, women did substantially worse on average (remember that more time means worse). In the matrilineal society, they performed as well as men. Well, a tiny bit worse, but it wasn't significant. Differences in education explained some of the effect, but only a small part of it.
OK.
This was a large study, and the results are statistically very strong. However, there's a curious result that the authors don't discuss in the paper - the matrilineal group just did much better overall. Looking at the men, they were 10 seconds faster in the matrilineal culture. That's nearly as big as the gender difference in the patrilineal group (15 seconds)!
The individual variability was also much higher in the patrilineal society, for both genders.
Now, maybe, this is a real effect. Maybe being in a patrilineal society makes everyone less spatially aware, not just women; that seems a bit of a stretch, though.
There's also the problem that this study essentially only has two datapoints. One society is matrilineal and has low gender difference in visuospatial processing. One is patrilineal and has a high difference. But that's just not enough data to conclude that there's a correlation between the two things, let alone a causal relationship; you would need to study lots of societies to do that.
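To put the same point in code: at the society level there really are only two observations here, and any correlation computed from two points is ±1 by construction. A minimal sketch in Python, with placeholder numbers rather than the study's actual figures:

```python
# With two society-level data points, a straight line fits them perfectly,
# so the "correlation" between culture type and gender gap is +/-1 no matter what.
# The numbers below are placeholders, not the study's actual results.
import numpy as np

culture = np.array([0.0, 1.0])            # 0 = patrilineal, 1 = matrilineal
gender_gap_sec = np.array([15.0, 1.0])    # hypothetical male-female gap in seconds

r = np.corrcoef(culture, gender_gap_sec)[0, 1]
print(f"society-level correlation: r = {r:.2f}")  # exactly -1.00, and meaningless
```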
Personally, I have no idea what drives the difference, but this study is a reminder of how difficult the question is.
Friday, July 29, 2011
What Big Eyes You Have
Actually, the paper in question talked about eyes but didn't make much of the brain finding, which is confined to the Supplement. Nonetheless, they did find an effect on brain size too. Peoples living further from the equator have larger eye sockets and also larger total cranial capacity (brain volume), apparently. The authors include Robin Dunbar of "Dunbar's Number" fame.
Their idea is that humans evolved larger eyes because further from the equator, there's on average less light, so you need bigger eyes to collect more light and see well.
They looked at 19th century skulls stored in museum collections, and measured the size of the eye sockets (orbits). They did this by filling them with a bunch of little glass balls and counting how many balls fit. They had a total of 73 "healthy adult" skulls from 12 different places, ranging from Scandinavia to Kenya.
Latitude essentially meant northern-ness, because only one population (Australian Aborigines) was from far south of the equator.
The heat of the Sahara was easy living compared to the deadly horrors of an English winter, in other words. Hmm.
The idea that higher latitudes are darker, so you'd need bigger eyes, and then a bigger brain (at least the visual parts of the brain) to process what you see, is certainly more plausible than that theory. However, the data in this paper seem pretty scanty.
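To get a feel for how scanty, here's a rough sketch in Python of what the headline analysis amounts to: a regression of mean orbit size on absolute latitude across just 12 population means. The values below are simulated stand-ins, not the paper's measurements.

```python
# Sketch of the between-population analysis: orbit size vs. absolute latitude,
# with only 12 population means to work with (73 skulls in total).
# All values here are simulated stand-ins, not the paper's data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
latitude = np.array([1, 5, 10, 15, 20, 27, 33, 40, 45, 52, 58, 64], dtype=float)
orbit_ml = 26.0 + 0.02 * latitude + rng.normal(0, 0.6, size=latitude.size)  # hypothetical volumes (ml)

fit = stats.linregress(latitude, orbit_ml)
print(f"slope = {fit.slope:.3f} ml/degree, r = {fit.rvalue:.2f}, p = {fit.pvalue:.3f}")
# With a dozen points, one or two unusual populations can make or break
# a correlation like this.
```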
Measuring skulls by filling them with little balls was cutting edge neuroscience in the 19th century. However, nowadays, we have MRI scanners. Although usually intended to image the brain, many MRI scans of the head also give an excellent image of the skull and eyes. Millions of people of all races get MRI scans every year.
Nowadays, people have medical records, so we can tell exactly how healthy people are. The people who became these skulls in a museum were said to be healthy, but how healthy a 19th century Indian or Kenyan could hope to be, by modern standards, I'm not sure. Certainly there's an excellent chance that they were malnourished and I suspect this would make your eyes and skull smaller.
Friday, June 24, 2011
Blind Spots & Braintrust
The pair may come from the same publisher (Princeton), but they couldn't be more different.

Blind Spots is a good book. It tells a story in a clear and compelling fashion, which is what a book is for.
The story is that we often act unethically, not because we're faced with ethical questions and decide to pick the "bad" option, but because we fail to see that there is an ethical issue at all.
This is not the same as saying that 'the road to hell is paved with good intentions'. That old phrase warns against trying to be good and, as a result, causing evil, because your plans go wrong. Blind Spots is saying, even if all of your attempts to be good work out just fine, you might still cause evil despite that.
For example, you could be a good employee, who never calls in sick unnecessarily, kind to your friends and colleagues, and a generous charity donor.
Unfortunately, you're an accountant connected to Enron, and your work - ultimately - consists of defrauding innocent people. But of course, you don't think of it like that, because we don't tend to think about things "ultimately".
Which is hard to disagree with. At worst, you could say it's obvious, although I think it's still something we ought to be reminded of. That's not all there is to the book, though: it also discusses how this happens and suggests ways to avoid it within organizations.
For example, the authors describe how setting up rewards and punishments to "make people be ethical" can make them less so, by encouraging people to think of the issue as a personal trade-off between gain and loss, rather than an ethical dilemma - what the authors call "ethical fading".
A day-care centre was annoyed at the fact that some parents were picking up their children late. This was antisocial because it meant staff had to work late into the evening.
So they started charging parents a late fee. Not a big one, but enough to send people a message: this is wrong, don't do it. But in fact what happened was that late pickups became more common.
Previously, many people were making an effort to be on time, as a matter of principle. Once the fees were in place, it stopped being an ethical issue and just became a financial trade-off: is it worth paying the fee to get an extra hour?
Of course, you could make the fees higher to get around this, but even then, you've caused ethical fading, and you'll be relying on the sanctions from that point on.

Braintrust, by contrast, is just not a good read. The bulk of the book consists of discussions of various neurotransmitters and brain areas and how they may be related to human social behaviour. Oxytocin, for example, may make us behave all trusting and kindly, as it's involved in maternal bonding. There's a long discussion of the neurochemistry of male sexual behaviour in voles.
It's not clear how this is relevant to ethics. Whether it's oxytocin that does it, or something else, and whether voles are a useful model of human behaviour or not, clearly sometimes we trust people and sometimes we don't. That's psychology. And biology can't yet explain it.
Churchland doesn't claim that the various biological concepts that she covers can fully explain anything, and she doesn't vouch that all of these findings are rock solid. Which is good, because they can't, and they're not. So why spend well over half of the book talking about them?
Churchland's big idea seems to be that human morality emerges out of our more general capacity for sociability. Hence all the stuff about oxytocin and "the social brain". OK. But I'd have said that's a given - there's obviously some relation between sociability and morality.
I think there is an interesting idea in here, albeit not very clearly expressed, namely that morality isn't a special function of the brain, but just one of the many forms our social cognition can take.
In other words, I think the claim is that ethics isn't just related to sociability, it is sociability. Even asocial animals care about their own welfare, in terms of pleasure and pain; social ones become social when they extend this caring to others; intelligent social animals including humans and maybe some primates also have a system for inferring the motivations and thoughts of others.
At the end of the book, Churchland stops reviewing neuroscience and starts talking about the implications for philosophy. This is the best section of the book, but it's too short.
Churchland makes the interesting point, for example, that when we are considering philosophical "ethical dilemmas", like the famous trolley problems, we may not be applying any kind of ethical "rules" as such. Rather, she thinks that our moral reasoning is pretty much a kind of pattern recognition based on previous experience - like all our other social reasoning.
Someone who'd just read a book about the horrors of Stalinism might tend to adopt an anti-consequentialist, every-life-is-sacred approach. Whereas if you'd just watched a movie in which the hero, reluctantly but rightly, decides to sacrifice one guy to save many other people, you might do the opposite. Then the ethical "rules" might be confabulated to cover it.
This is a nice idea. It's open to criticism, but it's a serious suggestion, and one that deserves a decent discussion. Sadly, there isn't one. If only there were more room in the book for this kind of stuff - but oxytocin covers so many pages.
Basically, the good parts of this book are not about the brain at all.
Reading Braintrust is like going on a date but then bumping into an annoying friend who insists on coming along for dinner. Jesus, The Brain, you want to say. I like you and all, but seriously, you are getting in the way right now.
Links: Other blog reviews.
Thursday, June 9, 2011
What Is Mental Distress?
This awkward wording seems to be a result of the fact that it's an attempt to fuse some of the features of "mental illness" with some of the implications of "distress", a kind of verbal alchemy. What is mental distress? It's not mental illness, but it's not exactly not mental illness.
Fair enough. Mental illness is a problematic concept, so I'm all in favor of rethinking it. But I'm worried. My worry is that "mental distress" takes the worst features of mental illness and perpetuates them in the guise of being a new and radical idea.
Were I to go around making sweeping statements about "the mentally ill" or "people with mental illness", someone would call me out on it, like this - Mental illness is an umbrella term, for all kinds of different experiences! You can't talk about all those people as if they're the same. They're individuals!
Which is quite right.
But it's equally bad to talk about "mental distress" in the same way, and this happens as well. I don't know if mental distress is more often used as a blanket statement, but it's certainly not immune and it's no better. See for example the top Google hit for mental distress:
The first signs of mental distress will be different for the onlooker than it is for the person in distress...
Perfectly true, of some people. Not all. In this paragraph "mental distress" seems to mean "bipolar disorder", but in the course of the article it morphs into several other forms. All mental distress.
Changes in sleep patterns are a common sign, and appetite may also be affected. Lethargy, low energy levels, feeling antisocial and spending too much time in bed may indicate the onset of depression. Wanting to go out more, needing very little sleep, and feeling highly energetic, creative and sociable, may signal that a person is becoming 'high'.
The first time it happens, the effects of hearing or seeing things that other people don't are likely to be especially dramatic...
It's not good enough to make sweeping statements and say "...Of course, everyone is different, but..." That's a cop-out, not a serious attempt to be helpful. It's like being really offensive, and then quickly adding "No offence". If you think everyone's different, talk about them all differently.
I think there's a good case to be made that we shouldn't talk about "mental illness" at all. Take, say, bipolar disorder, social anxiety, and antisocial personality. I'm really not sure that these have anything in common.
They've only been considered to belong to the single category of "psychiatric disorders" for about 50 years. 100 years ago, bipolar was insanity, social anxiety was a character trait, or a 'nervous' problem, and antisocial behaviour was just evil. Different professionals dealt with each one, and few thought of them as being linked.
I'm not saying that we should go back to that. But categories are up for debate. "Mental distress" is a new label, but it's a 50 year old category.
My second problem is that "mental distress" implies that everyone who has it, is distressed. But they're just not - at least not if you're using that term as a replacement for "mental illness".
If you're bipolar, and in a manic or hypomanic episode, you might well be the opposite of distressed. More subtly, if you're severely depressed, you might be too low to be distressed. "Distress" implies an acute emotional response. Severe depression paralyses the emotions.
Maybe "mental distress" isn't like normal everyday distress. Maybe mania or depression are mental distress, but not distress. But that's rather confusing. If mental distress isn't distress, what on earth is it? You can't redefine words like that, unless you're Humpty Dumpty.

"Are you mentally distressed?"
"No, I'm fine. I'm just distressed."
It would also lead to even more people being treated in the mental health system. Already we're told that 1 in 4 people experience mental illness, but almost everyone gets distressed now and again.
You might say that you don't consider mental distress to be a form of pathology. I'm against medicalization! Mental distress isn't an illness! If so, fine, but to be consistent, you're going to have to stop talking about treatments. And causes. And symptoms. Those are all medical words. Discussions of mental distress are chock full of them.
Indeed, if you want to demedicalize "mental distress", you should probably just call it... distress. The "mental" part is a hangover from "mental illness", after all. If you're serious, you ought to junk that and stick with distress.
This would be perfectly clear, it doesn't require us to redefine words or use awkward phrases. Let's give it a go: "Mental illness" is distress. Easy. Unfortunately, when you put it like that, it looks a bit like a sweeping oversimplification, doesn't it? Hmm.
On the other hand, if you're not looking to demedicalize mental illness, why throw out the word illness?
The problem is that many people like the sound of demedicalization, but they're not sure how far they want to go. And in large organizations, some people will want to go much further than others.
Mental health charities seem to be particularly prone to this, so you often see them assuring people that "mental illness is an illness like any other", while simultaneously saying that seeing it just as a medical illness is far too narrow and unhelpful!
This is a serious debate, and it deserves a careful discussion. The compromise term "mental distress" seems to bridge this gap, and allows people with very different views to sound like they're agreeing with each other. This is not the best way to resolve debates like this. People still disagree with each other. They just lack the words to talk about it.
Thursday, May 19, 2011
Kanazawa's Black Day
There's no need for the rest of us to feel jealous though. There may be no such thing as bad publicity, but I think that being accused of being a racist and a sexist who should be sacked, for something you wrote on your blog, something that swiftly got pulled, must come pretty close.
You can read the controversial article here, and in other places, because it was helpfully archived. It used to be here.
Kanazawa based his argument on the Add Health project which was a massive observational study of American adolescents and young adults.
Add Health is huge. It's produced over 3,000 scientific papers, presentations and other documents. That's because it collected a wealth of data on everything from genetics to blood chemistry to social relationships and emotional issues.
Kanazawa looked at the data on physical attractiveness. Attractiveness was rated by an interviewer. Each subject got interviewed for a couple of hours by one interviewer and at the end the interviewer rated how hot they found them.
The fateful post claimed that, according to the Add Health data, black women were rated less attractive, on average, than white, Asian and Native American ones. Let's assume that he's done his sums correctly and that this is true of the data.
The obvious problem is that maybe the interviewers were biased against black women, and rated them lower for that reason. Kanazawa didn't consider this in his post, which is unquestionably an oversight, but he did go on to speculate as to the biological reasons why they might be less attractive.
However, looking at the original Add Health data, can we check whether this bias was at play or not?
Short answer: I found no evidence either way.
Long answer: I first looked over the Add Health website but it doesn't seem to mention anything about who the interviewers were. It doesn't mention their own ethnicity, which would be helpful, although even if they were all black themselves, they might have internalized racism, so that wouldn't be conclusive. They were trained, but then, you can't train someone to not be a racist.
Then I decided to look at the publications. I searched Google Scholar for "Add Health" + attractiveness. This reveals a number of articles, including a 2007 one by Kanazawa ironically, but only one seemed really relevant: Weight Preoccupation as a Function of Observed Physical Attractiveness. (There are other hits, but I skimmed the most likely looking ones and they didn't address bias.)
The details are unimportant, but it involved race and attractiveness, so the authors had to deal with the question of potential rater bias. Unlike Kanazawa they didn't just brush this under the carpet:
Although the interviewers were different races and ethnicities, there is no information about the race or ethnicity of the interviewer for any one respondent to examine systematic bias. However, post hoc cluster analyses that controlled for an interviewer effect yielded similar results; thus, it is unlikely that interviewers had any substantial biases against any one ethnic group or that they rated attractiveness significantly differently from each other.
The point about "post-hoc cluster analysis" is the key here. To try to control for rater effects (not just racial ones) they analyzed the data covarying for which interviewer rated each girl. They didn't know what races the interviewers were, but they did know which girls got rated by the same interviewer. They found that controlling for the rater did not affect their results.
So does that mean there was no bias? No. Because - this only applies to their results, which were not about attractiveness per se, but about the interaction of attractiveness with other factors to predict an outcome variable (dieting and concern about weight) within a given race.
Even supposing that half of the raters were KKK members who cruelly subtracted, say, a million points from the rated attractiveness of any given black subject, so long as they still rated some black subjects as more attractive than others, all of the comparisons within the black subjects would still work fine: the millions would all cancel out.
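You can see that cancellation in a quick simulation (Python; purely illustrative, not the Add Health data): give the raters a constant bias against one group, and the between-group gap in mean ratings is entirely an artefact, while anything computed within a group is untouched.

```python
# Simulation: a constant rater bias against one group shifts the between-group
# difference in mean ratings, but leaves all within-group comparisons intact.
# Purely illustrative; this is not the Add Health data.
import numpy as np

rng = np.random.default_rng(42)
n = 1000
true_score = rng.normal(5.0, 1.0, size=n)   # "true" attractiveness, same distribution for all
group = rng.integers(0, 2, size=n)          # two groups, 0 and 1
bias = np.where(group == 1, -1.5, 0.0)      # raters knock 1.5 points off group 1 only
rated = true_score + bias                   # what the interviewers actually record

print(f"mean rating, group 0: {rated[group == 0].mean():.2f}")   # unbiased
print(f"mean rating, group 1: {rated[group == 1].mean():.2f}")   # pushed down by the bias

# Within group 1, the constant bias changes nothing: the ranking of subjects
# (and any within-group analysis) is identical with or without it.
g1 = group == 1
print("within-group ranking unchanged:",
      np.array_equal(np.argsort(rated[g1]), np.argsort(true_score[g1])))
```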
So in my judgement, we just can't tell. Unless I've missed something, in which case, please tell us about it in the comments.
Saturday, May 14, 2011
Filters
As web companies strive to tailor their services (including news and search results) to our personal tastes, there's a dangerous unintended consequence: We get trapped in a "filter bubble" and don't get exposed to information that could challenge or broaden our world-view. Eli Pariser argues powerfully that this will ultimately prove to be bad for us and bad for democracy.
His point is that the web is, technologically, a fantastic system for giving the consumers of information (i.e. you) exactly what they want, when they want it. It's enabled a degree of personalization which old media could never come close to. But this isn't necessarily a good thing, because people tend to pick and choose information that fits with their existing views and interests, and filter out everything else.
The problem is not entirely new. Back in the days when everyone read their daily newspaper, the newspaper editor was your filter. And because there were maybe a dozen newspapers in your region that you could buy, you'd choose the one that best fitted with your world-view.

Indeed, in the UK, what newspaper you read says considerably more about you than what party you vote for. There are only 3 main political parties, but there are about 10 main newspapers, and in my experience people are more likely to change their vote than to change what they read.
But the internet allows people to cherry-pick far more effectively. The Guardian, for example, regularly prints articles that annoy, or at least challenge, many Guardian readers. That's inevitable, because no two people have exactly the same tastes: what one reader loves will have another reader tearing up his paper in frustration.
Nowadays, it's quite possible to get all of your news and views from blogs. Blogs are specialized: they cover a particular kind of story, with a particular slant. Many of them do that extremely well. If you don't quite agree with a given blog, there are plenty of others with a slightly different approach to pick from. And you can pick as many blogs as you like until you've got a full set - exactly how you want it. Clearly, the potential to only find out about what you already want to hear is much greater.
New or not, it's certainly a problem. The good thing is that the internet makes it extremely easy to snap out of the filter bubble. A completely different perspective is just a click away: that's new, as well. All you need is to want to do that.
Why should you? Always reading stuff that you already agree with isn't the best way to get informed about something. Actually, it's just about the worst way to do that. If you're serious about wanting to learn the truth about something, you need to (critically) read different sources. But beyond that, it's just boring to always do the same things. There are a lot of cool things going on that you've never heard of.
Finally, if you're a blogger, remember that you're not just telling readers your opinions, you're helping them to filter out other people's. You don't have to feel bad about that, it's inevitable, but remember: if you really want to help your readers understand something, you need to tell them about the areas of disagreement.
I don't just mean linking to stupid people and then explaining why they're stupid. That's fun, but if you're serious, you need to link to the best examples of alternative views and give them a fair hearing. This is something that I feel I could do more of on this blog, and I hope to do it more in future.
Thursday, May 12, 2011
And When You Say "Economy", You Mean. . .?
President Barack Obama's approval rating has hit its highest point in two years — 60 percent — and more than half of Americans now say he deserves to be re-elected, according to an Associated Press-GfK poll taken after U.S. forces killed al-Qaeda leader Osama bin Laden. In worrisome signs for Republicans, the president's standing improved not just on foreign policy but also on the economy.
Tuesday, May 10, 2011
Will Mitt Romney Get the Republican Nomination?
Saturday, May 7, 2011
Bin Laden's Smile

Why was he so "popular"? I think it was his smile.
Bin Laden always smiled. This was his unique selling point. Most photos of extremists show either a hateful scowl, emotionless resolve, or at best a forced, unfriendly smile.
Bin Laden smiled, but it wasn't an evil smile. It looked perfectly genuine. He wasn't smiling because he'd just killed lots of enemies. He was just calm and content with being a killer. At peace. His videos illustrate this most dramatically. He was collected, quiet, almost shy. I've seen more passionate performances by college chemistry lecturers.
That was surely his appeal. No-one joins a movement like Al Qaeda unless they're angry, but Bin Laden seemed to be living proof that you didn't have to stay angry to stay a member. Al Qaeda was the way out of that. Al Qaeda could bring you inner peace. Whether Bin Laden was really like that, I have no idea. He might have been tormented by inner doubts, and just good at acting for the cameras. The point is, it doesn't matter. The images were out there, and that was the message.
His calm was also the reason why he was hated and feared more than the other members of his organization, including the ones who had a more direct role in 9/11. Osama was the one man to whom the image of the ranting, delusional extremist couldn't apply. Someone who planned terrorist attacks out of insane rage: that would be bad enough, but at least it would be understandable. That someone could do it with an agreeable smile on their face, was something else.
Given which, it's no surprise that the U.S. reported that Osama died a coward, hiding behind his wife. Nothing could have shattered the Osama image better than that. He wasn't beyond human emotion after all, he was scared just like anyone else. Again, whether or not that actually happened, is not the point. It's the message that went out, and I suspect that's the message that will stick.
Thursday, May 5, 2011
Now Who's Sensitive?
George W. Bush won't be at Ground Zero with President Obama Thursday in part because he feels his team is getting short shrift in the decade-long manhunt for Osama Bin Laden. "[Bush] viewed this as an Obama victory lap," a highly-placed source told the Daily News Wednesday. "Obama gave no credit whatsoever to the intelligence infrastructure the Bush administration set up that is being hailed from the left and right as setting in motion the operation that got Bin Laden. It rubbed Bush the wrong way."
Mwaaa, mwaaa, Mommy, he took my toy! Mwaaaaa!

