Showing posts with label blogging. Show all posts

Thursday, March 3, 2011

Monumentally Popular

Somebody just referred to my blog as "monumentally popular." I know it was said in jest, but I'm still very pleased. 

I was a very unpopular child at school. One of those kids who had to invent things to do during breaks between classes so that people wouldn't notice I had no one to play with. Little did I know that one day I would be monumentally popular.

Sunday, February 27, 2011

Quality of Commenters

I read a post on yet another Clarissa-bashing blog. The post was nothing new; it just indulged in some unhealthy fantasies about my sad and lonely existence and evinced fake sympathy for my supposed utter friendlessness (which, of course, is a figment of the author's strange imagination). I wasn't surprised by the post, but the comments made me feel a little scared. Here are some examples:
Everyone has already said everything I’m thinking….but I wanted to add that I’m one more person that loves you 
Love ya! 
Bless you sweetie! 
I have no idea what’s going on here but I love ((((((you))))))

And it goes on like this for a while. I looked into other threads, and the comments are all in this same saccharine, cloying vein.

Then I felt very grateful for my commenters, who never come to tell me that they have no idea what I'm writing about but they love (((me))). My blog attracts all kinds of commenters but, save for the occasional troll, they are all intelligent people who offer arguments, not just fake, meaningless declarations of nonexistent love.

Thank you for being you, guys!

Thursday, January 13, 2011

Two Blogs and a Public Service Announcement

First up, here are two new(ish) blogs which have been consistently excellent since I started reading them:
Second, an announcement: Blogger has a spam filter for comments.

It's rubbish.

It seems to think that any comment containing more than one hyperlink is spam. Actually, all the spam I get contains one link, and hence makes it through, while the real comments with multiple links, which are usually interesting and sensible, get blocked. A Mr. "Generic Viagra" (no really) can leave 20 comments in 5 minutes with impunity, but more than one link, and you're out.
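Blogger's actual filter is a black box, but the behaviour described above amounts to a naive link-count heuristic. Here is a minimal sketch of that logic in Python; the pattern, threshold, and function name are illustrative assumptions, not Blogger's real implementation:

```python
import re

# Hypothetical reconstruction of the behaviour described above:
# any comment containing more than one hyperlink gets flagged as spam.
LINK_PATTERN = re.compile(r'https?://\S+|<a\s', re.IGNORECASE)

def looks_like_spam(comment: str, max_links: int = 1) -> bool:
    """Return True if the comment contains more links than allowed."""
    return len(LINK_PATTERN.findall(comment)) > max_links

# A single-link spam comment sails through...
assert not looks_like_spam("Buy cheap pills: http://spam.example")
# ...while a genuine comment citing two sources gets blocked.
assert looks_like_spam("See http://a.example and http://b.example")
```

Which is exactly the wrong way round, of course: link count alone is a terrible spam signal, since spammers need only one link per comment.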

I would love to turn it off, but you can't. Thanks, Google. My comment policy is, as it's always been, that all comments except spam are welcome. So if your comment hasn't appeared, it's not that I've deleted it; it's the spam filter.

I check the spam folder as often as I can, and allow the proper comments through, but you might want to avoid comments with more than one link. Maybe split them into multiple comments. It's not ideal but, as I said, it's not my filter.

Monday, January 3, 2011

Left Wing vs. Right Wing Brains

So apparently: Left wing or right wing? It's written in the brain

People with liberal views tended to have increased grey matter in the anterior cingulate cortex, a region of the brain linked to decision-making, in particular when conflicting information is being presented...

Conservatives, meanwhile, had increased grey matter in the amygdala, an area of the brain associated with processing emotion.

This was based on a study of 90 young adults using MRI to measure brain structure. Sadly that press release is all we know about the study at the moment, because it hasn't been published yet. The BBC also have no fewer than three radio shows about it here, here and here.

Politics blog Heresy Corner discusses it...
Subjects who professed liberal or left-wing opinions tended to have a larger anterior cingulate cortex, an area of the brain which, we were told, helps process complex and conflicting information. (Perhaps they need this extra grey matter to be able to cope with the internal contradictions of left-wing philosophy.)
This kind of story tends to attract chuckle-some comments.

In truth, without seeing the full scientific paper, we can't know whether the differences they found were really statistically solid, or whether they were voodoo or fishy. The authors, Geraint Rees and Ryota Kanai, have both published a lot of excellent neuroscience in the past, but that's no guarantee.

However, I suspect that the brain is just the wrong place to look if you're interested in politics, because most political views don't originate in the individual brain; they originate in the wider culture and are absorbed and regurgitated without much thought. This is a real shame, because all of us, left or right, have a brain, and it's really quite nifty:

But when it comes to politics we generally don't use it. The brain is a powerful organ designed to help you deal with reality in all its complexity. For a lot of people, though, politics doesn't take place there; it happens in fairytale kingdoms populated by evil monsters, foolish jesters, and brave knights.

Given that the characters in this story are mindless stereotypes, there's no need for empathy. Because the plot comes fully-formed from TV or a newspaper, there's no need for original ideas. Because everything is either obviously right or obviously wrong, there's not much reasoning required. And so on. Which is why this happens amongst other things.

I don't think individual personality is very important in determining which political narratives and values you adopt: your family background, job, and position in society are much more important.

Where individual differences matter, I think, is in deciding how "conservative" or "radical" you are within whatever party you find yourself. Not in the sense of left or right, but in terms of how keen you are on grand ideas and big changes, as opposed to cautious, boring pragmatism.

In this sense, there are conservative liberals (e.g. Obama) and radical conservatives (e.g. Palin), and that's the kind of thing I'd be looking for if I were trying to find political differences in the brain.

Links: If right wingers have bigger amygdalae, does that mean patient SM, the woman with no amygdalae at all, must be a communist? Then again, Neuroskeptic readers may remember that the brain itself is a communist...

Monday, December 6, 2010

Science Bloggers vs. Science

First NASA had quite possibly discovered an alien lifeform.

Then it was an Earth bacterium with a unique kind of arsenic-based DNA - an entirely new kind of organism.

Then it merely could use arsenic in its DNA, if forced to, although under normal conditions it didn't.

But now, it's looking like it's just a regular (albeit tough) bug - and a lot of hot air.

*

The "arsenic-based alien bacteria" story attracted more media attention than any other scientific paper of the last year. At first, I was very pleased by this: to a scientist, the discovery of an organism that can use arsenic instead of phosphorus in its DNA would have been massive news, with big implications for every branch of biology. How great that the media picked up on the importance of this story, even though it's about a specialized point of biochemistry, I thought.

Unfortunately, as you've probably heard, serious questions have been asked about the Science paper announcing the findings. For details, see microbiologist Rosie Redfield's devastating post on the topic: Arsenic-associated bacteria (NASA's claims), and this one from Alex Bradley: Arsenate-based DNA: a big idea with big holes. In a nutshell, the critics make a very strong case that the evidence supposedly showing arsenic-containing DNA is flawed, and fairly obviously so.

As I've said before, this kind of thing is why science blogging is so important. Thanks to bloggers such as those I've linked to, and many others, this paper - which has enormous implications, if true - has been subject to detailed scrutiny within days of publication.

Without blogs, these questions would certainly have been asked sooner or later - but with the emphasis on "later". The traditional way to criticize a paper is to write a Letter to the Editor of the journal that published it, but this takes weeks at best, and usually months, to appear.

Some journals now feature "e-letters", which can appear within hours, or public comment threads attached to each paper, and this is certainly a big step forward. Blogs still have the edge, though, because it's often hard to incorporate pictures, HTML, etc. into these comments, and the discussion threads often become very hard to read as the important comments get mixed up with less useful, or simply out-of-date, ones.

A blog post, clearly setting out the arguments, and updated as new information comes to light, is, to my mind, the best form of scientific peer review we currently have.

Sunday, December 5, 2010

Online Comments: It's Not You, It's Them

Last week I was at a discussion about New Media, and someone mentioned that they'd been put off from writing content online because of a comment on one of their articles accusing them of being "stupid".

I found this surprising - not the comment, but that anyone would take it so personally. It's the internet. You will get called names. Everyone does. It doesn't mean there's anything wrong with you.

I suspect this is a generational issue. People who 'grew up online' know, as Penny Arcade explained, that

The sad fact is that there are millions of people whose idea of fun is to find people they disagree with, and mock them. And they're right, it can be fun - why else do you think people like Jon Stewart are so popular? - but that's all it is, entertainment. If you're on the receiving end, don't take it seriously.

If you write something online, and a lot of people read it, you will get slammed. Someone, somewhere, will disagree with you and they'll tell you so, in no uncertain terms. This is true whatever you write about, but some topics are like a big red rag to the herds of bulls out there.

Just to name a few, if you say anything vaguely related to climate change, religion, health, the economy, feminism or race, you might as well be holding a placard with a big arrow pointing down at you and "Sling Mud Here" on it.

The point is - it's them, not you. They are not interested in you, they don't know you, it's not you. True, they might tailor their insults a bit; if you're a young woman you might be, say, a "stupid girl" where a man would merely get called an "idiot". But this doesn't mean that the attacks are a reflection on you in any way. You just happen to be the one in the line of fire.

What do you do about this? Nothing.

Trying to enter into a serious debate is pointless. Insulting them back can be fun; just remember that if you find it fun, you've become one of them: "he who stares too long into the abyss...", etc. Complaining to the moderators might help, but unless the site has a rock-solid zero-tolerance-for-fuckwads policy, probably not. Where the blight has taken root, like Comment is Free, I wouldn't waste your time complaining. Just ignore it and carry on.

The most important thing is not to take it personally. Do not get offended. Do not care. Because no-one else cares. Especially the people who wrote the comments. They presumably care about whatever "issue" prompted their attack, but they don't care about you. If anything, you should be pleased, because on the internet, the only stuff that doesn't attract stupid comments is the stuff that no-one reads.

I've heard these attacks referred to as "policing" existing hierarchies or "silencing" certain types of people. This seems to me to be granting them far more respect than they deserve. With the actual police, if you break the rules, they will physically arrest you. They have power. Internet trolls don't: if they succeed in policing or silencing anybody, it's because their targets let them boss them around. They're nobody; they're not your problem.

If you can't help being offended by such comments, don't read them, but ideally you shouldn't need to resort to that. For one thing, it means you miss the sensible comments (and there are always a few). But fundamentally, you shouldn't care what some anonymous joker from the depths of the internet thinks about you.

Wednesday, October 20, 2010

You Read It Here First...Again

A couple of months ago I pointed out that a Letter published in the American Journal of Psychiatry, critiquing a certain paper about antidepressants, made very similar points to the ones I had made in my blog post about the paper. The biggest difference was that my post came out 9 months sooner.


Well, it's happened again. Except I was only 3 months ahead this time. Remember my post Clever New Scheme, criticizing a study which claimed to have found a brilliant way of deciding which antidepressant is right for someone, based on their brain activity?

That post went up on July 21st. Yesterday, October 19th, a Letter was published by the journal that ran the original paper. Three months ago, I said -
...there were two groups in this trial and they got entirely different sets of drugs. One group also got rEEG-based treatment personalization. That group did better, but that might have nothing to do with the rEEG...

...it would have been very simple to avoid this issue. Just give everyone rEEG, but shuffle the assignments in the control group, so that everyone was guided by someone else's EEG...

This would be a genuinely controlled test of the personalized rEEG system, because both groups would get the same kinds of drugs... Second, it would allow the trial to be double-blind: in this study the investigators knew which group people were in, because it was obvious from the drug choice... Thirdly, it wouldn't have meant they had to exclude people whose rEEG recommended they get the same treatment that they would have got in the control group...
Now Alexander C. Tsai says, in his Letter:
DeBattista et al. chose a study design that conflates the effect of rEEG-guided pharmacotherapy with the effects of differing medication regimes...
A more definitive study design would have been one in which study participants were randomized to receive rEEG-guided pharmacotherapy vs. sham rEEG-guided pharmacotherapy.

Such a study design could have been genuinely double blinded, would not have required the inclusion of potential subjects whose rEEG treatment regimen was different from the control, and would be more likely to result in medication regimens that were balanced on average across the intervention vs. control arms.
To be fair, he also makes a separate point questioning how meaningful the small between-group difference was.
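The sham-guided design that both critiques call for is easy to make concrete: everyone has an rEEG recorded, but participants in the control arm are guided by someone else's recording, so both arms draw drugs from the same distribution. A minimal sketch in Python, purely illustrative and not the actual trial protocol (the function name and rotation scheme are my assumptions):

```python
import random

def sham_controlled_assignment(participants, seed=42):
    """Sketch of a sham-rEEG design: every participant has an rEEG
    recorded, but controls are guided by another participant's EEG.
    Illustrative only -- not the DeBattista et al. protocol."""
    rng = random.Random(seed)
    pool = participants[:]
    rng.shuffle(pool)           # randomize before splitting into arms
    half = len(pool) // 2
    active, control = pool[:half], pool[half:]
    # Active arm: each participant is guided by their own rEEG.
    active_plan = [(p, p) for p in active]
    # Control arm: rotate the EEG assignments by one place so that no
    # control participant is ever guided by their own recording.
    control_plan = [(p, control[(i + 1) % len(control)])
                    for i, p in enumerate(control)]
    return active_plan, control_plan
```

Because every participant's drugs come from some rEEG recommendation, clinicians can't tell the arms apart from the prescriptions alone, which is what makes true double-blinding possible.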

I'm mentioning this not because I want to show off, or to accuse Tsai of ripping me off, but because it's a good example of why people like Royce Murray are wrong. Murray recently wrote an editorial in the academic journal Analytical Chemistry, accusing blogging of being unreliable compared to proper, peer-reviewed science.

Murray is certainly right that one could use a blog as a platform to push crap ideas, but one can also use peer reviewed papers to do that, and often it's bloggers who are the first to pick up on this when it happens.

ResearchBlogging.org: Tsai AC (2010). Unclear clinical significance of findings on the use of referenced-EEG-guided pharmacotherapy. Journal of Psychiatric Research. PMID: 20943234

Sunday, October 10, 2010

The Joy of Sexism

This week, I've been embroiled in not one but two gender-based debates.

First up, I've been quoted in Delusions of Gender, the new book from Cordelia Fine, in which she examines the science of alleged sex differences in behaviour. The quote was from this 2008 post about Vicky Tuck, a teacher with odd ideas about the brains of boys and girls. I haven't had time to read the book yet, but a review's in the pipeline.

Then yesterday, I found out that I've been the subject of some research.
In this report, we detail research into the representation of women in science, engineering and technology (SET) within online media...

The research involved data collection and analysis from websites, web authors and young web users. We monitored SET content across 16 websites. Eight sites were generalist: BBC, Channel 4, SkyTV, The Guardian, The Daily Mail, Wikipedia, YouTube and Twitter.

Eight sites were SET-specific: New Scientist, Bad Science, The Science Museum, The Natural History Museum, Neuroskeptic Blog, Science – So What? So Everything, Watt’s Up With That? Blog and RichardDawkins.net.
Quite a line-up. Clearly they decided to look at the very best, most illustrious and most respected science blogs... and also Neuroskeptic. Anyway, unfortunately I can't access the paper, despite being in it, but according to the abstract they found that:
Online science informational content is male dominated in that far more men than women are present... we found that these women are:
  • Subject to muting of their ‘voices’. This includes instances where SET women are pictured but remain anonymous and instances where they are used, mainly as science journalists, to ventriloquise other people's scientific work.
  • Subject to clustering in specific SET fields and website sections, particularly those about ‘feminine’ subjects or specifically about women...
  • Associated with ‘feminine’ attributes and activities, notably as caring, demonstrating empathy with children and animals...
  • Predominantly White, middle-class, able-bodied and heterosexual.
  • Peripheral to the main story and subordinated as students, young scientists, relatives of a male scientist ... we found less hyperlinking of women’s than men’s names in online SET.
  • Discussed in terms of appearance, personality, sexuality and personal circumstances more often than men...
  • More generally, constructed in ways that relocate them in the private domestic sphere, detract from their scientific contribution, and associate them, more often than men, with the new category of ‘bad science’.
Without knowing the details it's hard to evaluate these claims, but it's fair to say that some of it rings true.

There's been lots of buzz recently about the gender ratio of science bloggers - we're mostly male, who'd have guessed? - and I suppose this would be a good time to chip in. Does it matter?

I think it does, and moreover it's part of a bigger picture. As far as I can see, science bloggers are mostly male, white, and under 40... and almost all of the biggest ones are also native English speakers. I don't know if English speakers are overrepresented overall, because not all blogs are written in English and I only know the ones that are - but English-language ones get the lion's share of the traffic.

Back to gender, even in fields such as psychology and neuroscience in which there are lots of female researchers, bloggers are overwhelmingly male. Likewise, a lot of researchers, even those working in English-speaking countries, are non-native-English speakers, but they have an obvious disadvantage when it comes to blogging in English.

So science bloggers are drawn mostly from a narrow cross-section of the scientific community, which is a problem, because it greatly increases the chances of bloggers becoming an "echo chamber", or a clique, neither of which is likely to end well. Diversity is valuable, in this kind of thing, not because it's somehow morally good per se, but because it helps prevent stagnation.

The Joy of Sexism

This week, I've been embroiled in not one but two gender-based debates.

First up, I've been quoted in Delusions of Gender, the new book from Cordelia Fine, in which she examines the science of alleged sex differences in behaviour. The quote was from this 2008 post about Vicky Tuck, a teacher with odd ideas about the brains of boys and girls. I haven't had time to read the book yet, but a review's in the pipeline.

Then yesterday, I found out that I've been the subject of some research.
In this report, we detail research into the representation of women in science, engineering and technology (SET) within online media...

The research involved data collection and analysis from websites, web authors and young web users. We monitored SET content across 16 websites. Eight sites were generalist: BBC, Channel 4, SkyTV, The Guardian, The Daily Mail, Wikipedia, YouTube and Twitter.

Eight sites were SET-specific: New Scientist, Bad Science, The Science Museum, The Natural History Museum, Neuroskeptic Blog, Science – So What? So Everything, Watt’s Up With That? Blog and RichardDawkins.net.
Quite a line-up. Clearly they decided to look at the very best, most illustrious and most respected science blogs... and also Neuroskeptic. Anyway, unfortunately I can't access the paper, despite being in it, but according to the abstract they found that:
Online science informational content is male dominated in that far more men than women are present... we found that these women are:
  • Subject to muting of their ‘voices’. This includes instances where SET women are pictured but remain anonymous and instances where they are used, mainly as science journalists, to ventriloquise other people's scientific work.
  • Subject to clustering in specific SET fields and website sections, particularly those about ‘feminine’ subjects or specifically about women...
  • Associated with ‘feminine’ attributes and activities, notably as caring, demonstrating empathy with children and animals...
  • Predominantly White, middle-class, able-bodied and heterosexual.
  • Peripheral to the main story and subordinated as students, young scientists, relatives of a male scientist ... we found less hyperlinking of women’s than men’s names in online SET.
  • Discussed in terms of appearance, personality, sexuality and personal circumstances more often than men...
  • More generally, constructed in ways that relocate them in the private domestic sphere, detract from their scientific contribution, and associate them, more often than men, with the new category of ‘bad science’.
Without knowing the details it's hard to evaluate these claims, but it's fair to say that some of it rings true.

There's been lots of buzz recently about the gender ratio of science bloggers - we're mostly male, who'd have guessed? - and I suppose this would be a good time to chip in. Does it matter?

I think it does, and moreover it's part of a bigger picture. As far as I can see, science bloggers are mostly male, white, and under 40... and almost all of the biggest ones are also native English speakers. I don't know if, overall, English speakers are overrepresented, because not all blogs are written in English and I only know the ones that are - but English-language blogs get the lion's share of the traffic.

Back to gender, even in fields such as psychology and neuroscience in which there are lots of female researchers, bloggers are overwhelmingly male. Likewise, a lot of researchers, even those working in English-speaking countries, are non-native-English speakers, but they have an obvious disadvantage when it comes to blogging in English.

So science bloggers are drawn mostly from a narrow cross-section of the scientific community, which is a problem, because it greatly increases the chances of bloggers becoming an "echo chamber", or a clique, neither of which is likely to end well. Diversity is valuable, in this kind of thing, not because it's somehow morally good per se, but because it helps prevent stagnation.

Thursday, August 26, 2010

You Read It Here First

Remember the paper from 2009 about combining two different drugs in the treatment of depression?

It was about a clinical trial in which patients were randomly assigned to get just one antidepressant, fluoxetine, or two - mirtazapine & fluoxetine, mirtazapine & venlafaxine, or mirtazapine & bupropion. The people who got two antidepressants did better.

But as I said at the time, in a comment beneath my post about it...
All the first 6 weeks shows is that mirtazapine is better than placebo. Everyone in the study got a non-mirtazapine antidepressant, so any improvement in the non-mirtazapine group (i.e. the fluoxetine alone group) could have been placebo, regression to the mean etc. The only placebo-controlled aspect was that some people got placebo mirtazapine and some people got real mirtazapine.
Now Drs El-Mallakh, Kaur and Lippmann have written in a Letter to the Editor of the American Journal of Psychiatry (where the original paper appeared) that
There was no mirtazapine plus placebo study group. This comparison arm is necessary in order to be confident that the observed effect by the three combined treatments could not have been accomplished by mirtazapine as a single drug. The observation that mirtazapine alone was equivalent to fluoxetine or paroxetine alone in a previous study does not negate the need for a control in the Blier et al. study. Without such a control, one cannot assume that two antidepressant medications are more effective than mirtazapine alone.
What I said - on 18th December 2009. The new Letter was "accepted for publication" in May 2010, and it's only just appeared.
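The design problem is easy to see with a toy simulation. All the numbers below are invented for illustration: fluoxetine is given a 3-point benefit over placebo, mirtazapine a 5-point benefit, and - crucially - adding a second drug to mirtazapine is constructed to add nothing at all. The combination arm still "beats" fluoxetine alone, which is exactly the pattern the trial reported, even though two drugs are no better than mirtazapine by itself:

```python
import random

random.seed(0)

def arm_mean(n, effect, placebo=8.0, sd=6.0):
    """Mean symptom improvement in one trial arm (invented effect sizes)."""
    return sum(random.gauss(placebo + effect, sd) for _ in range(n)) / n

N = 500
# Constructed world: fluoxetine adds 3 points over placebo, mirtazapine
# adds 5, and adding a second drug to mirtazapine adds nothing.
fluoxetine_alone = arm_mean(N, effect=3.0)
combination = arm_mean(N, effect=5.0)        # mirtazapine + fluoxetine
mirtazapine_alone = arm_mean(N, effect=5.0)  # the arm the trial lacked

print(f"fluoxetine alone : {fluoxetine_alone:.1f}")
print(f"combination      : {combination:.1f}")
print(f"mirtazapine alone: {mirtazapine_alone:.1f}")
# The combination beats fluoxetine alone, yet it is no better than
# mirtazapine by itself - exactly the ambiguity the missing arm creates.
```

Without a mirtazapine-plus-placebo arm, the trial simply cannot distinguish between these two worlds.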

Am I just blowing my own trumpet? No. Well, a bit. But there's a serious point as well: internet comments are a much better medium for discussing and criticizing research than Letters To The Editor ever can be.

Why? The Letter may have been a bit slower, but it's still out there, surely? Plus, it'll have been read by far more people. My post has got about 400 pageviews so far. I don't know how many people read the Letters page in the AJP, but I'd imagine it must be a good few thousand. So what's the problem?

The problem is that it's too late. Papers get cited by other papers fast (this one's got 13 citations so far), and they change minds even faster. This article's been out nearly a year, and I'm sure that in that time it will have convinced some psychiatrists to start their depressed patients on two drugs, rather than just one.

Now I'm not saying they shouldn't do that. I don't know. Anyway, I'm not a doctor. But I stand by my comment that this paper shouldn't be what changes your opinion on that question; the design of the trial means it can't tell you that. And I think that's something that readers of the paper should have been told at the time, not 9 months later.

What's the solution? I've written about this previously as well. Scientific journals should have open, blog-style comment threads attached to everything they publish, so that readers can say what they have to say, immediately. A number of major journals, e.g. the PLoS journals, some of the Nature ones, and the BMJ, already do this.

From what I've seen, the standard of comments is extremely high. Sure, some are rubbish. But the rubbish ones are almost always obviously bad, so I don't think they'll be doing much damage. The good ones, on the other hand, are often extremely insightful - whether they are criticizing, or praising, the paper.

El-Mallakh RS, Kaur G, & Lippman S (2010). Placebo group needed for interpretation of combination trial. The American Journal of Psychiatry, 167(8). PMID: 20693473

Tuesday, August 17, 2010

What The Internet Thinks About Antidepressants

A Toronto team, Rizo et al., offer a novel approach to psychopharmacology: trawling the internet for people's opinions. It's "a rapid, web-based method for obtaining patient views on effects and side-effects of antidepressants."

They designed a script to Google the names of several antidepressants in phrases suggesting that someone is taking them, and then check whether the hits describe any side-effects.
A large number of URLs were rapidly screened through Google Search™, using one server situated in Ohio, USA. The search strategy used language strings to denote active antidepressant drug usage, such as “I'm on [name of antidepressant]…” or “I have been on [antidepressant] for ….”, or “I've started [antidepressant]…”, or “the [antidepressant] is giving me or causing me…”
They then used a thing called OpenCalais™ to read the search hits and decide whether they were mentioning particular diseases or symptoms. OpenCalais is a natural language processor which is meant to be able to automatically extract the meaning from text. However, to make sure it wasn't doing anything silly (natural language processing is quite tricky), they manually checked the results.
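The general approach is simple enough to sketch. The snippet below is a hypothetical reconstruction, not the authors' actual code: it generates the kinds of exact-phrase queries the paper describes, and then uses a naive keyword matcher (a crude stand-in for OpenCalais) to count symptom mentions in the text of each hit. The drug list, symptom keywords, and the mock "hits" are all my own illustrative assumptions:

```python
# Hypothetical sketch of the search-string method; not the authors' code.
ANTIDEPRESSANTS = ["fluoxetine", "duloxetine", "mirtazapine", "desvenlafaxine"]
TEMPLATES = [
    "I'm on {drug}",
    "I have been on {drug}",
    "I've started {drug}",
    "the {drug} is giving me",
]
SYMPTOMS = {
    "drowsiness": ["drowsy", "sleepy", "tired"],
    "insomnia": ["insomnia", "can't sleep"],
}

def build_queries(drug):
    """Return the exact-phrase search queries for one drug."""
    return ['"{}"'.format(t.format(drug=drug)) for t in TEMPLATES]

def count_mentions(hits):
    """Naive stand-in for OpenCalais: keyword-match each hit's text."""
    counts = {s: 0 for s in SYMPTOMS}
    for text in hits:
        low = text.lower()
        for symptom, keywords in SYMPTOMS.items():
            if any(k in low for k in keywords):
                counts[symptom] += 1
    return counts

# Mock search results standing in for real Google hits:
hits = ["I'm on duloxetine and I feel so drowsy all day",
        "I have been on fluoxetine for a month, no problems",
        "I've started mirtazapine and now I have insomnia"]

print(build_queries("fluoxetine")[0])
print(count_mentions(hits))
```

The manual check the authors did matters: a keyword matcher like this would happily count "I was worried I'd feel drowsy, but I don't" as a drowsiness mention, which is exactly the sort of thing natural language processing has to get right.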

What happened? They found about 5,000 hits in total from people taking antidepressants, ranging from 210 for mirtazapine (Remeron) up to 835 for duloxetine (Cymbalta). That doesn't seem like all that many considering they searched on the entire internet, although they only searched English language websites.

Anyway, drowsiness, sleepiness or tiredness was mentioned in between 6.4% (duloxetine) and 2.9% (fluoxetine) of the hits. Insomnia was noted in between 4% (desvenlafaxine) and 2.2% (fluoxetine). And so on.

These results are a lot lower than anything previously reported from clinical trials, where the prevalence of drowsiness, for example, is often around 25% (vs. 10% on placebo); with some drugs, it's higher. So there's a big discrepancy, and it's hard to interpret these results. Maybe lots of people are having side effects and just not bothering to write about them. Or they're too embarrassed. Etc.
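The gap is too big to be sampling noise, which a quick confidence interval makes concrete. Using the figures above (835 duloxetine hits, 6.4% mentioning drowsiness - the implied count of ~53 mentions is my own back-calculation), a 95% Wilson score interval on the web-mention proportion tops out far below the ~25% drowsiness rate typical of clinical trials:

```python
import math

def wilson_ci(k, n, z=1.96):
    """95% Wilson score interval for a proportion k/n."""
    p = k / n
    denom = 1 + z**2 / n
    centre = (p + z**2 / (2 * n)) / denom
    half = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return centre - half, centre + half

# Figures from the post: 835 duloxetine hits, 6.4% mentioning drowsiness.
n_hits = 835
k_drowsy = round(0.064 * n_hits)  # ~53 mentions (back-calculated)
lo, hi = wilson_ci(k_drowsy, n_hits)
print(f"web mentions: {k_drowsy}/{n_hits} = {k_drowsy/n_hits:.1%}, "
      f"95% CI {lo:.1%} to {hi:.1%}")
# The upper bound sits nowhere near 25%, so the discrepancy with trial
# rates reflects something systematic, not chance.
```

So whatever explains the difference - under-reporting, embarrassment, or selection in who writes about their medication online - it is a real difference, not noise.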

Still, it's a very clever idea, though it would probably be better used to try to discover which drugs work best. Neuroskeptic readers will know that clinical trials of antidepressants are flawed in several ways. I'd say they're actually better at telling us about side effects (which are probably roughly the same in clinical trials and in real life) than they are at telling us about efficacy (where this assumption doesn't hold)...

Links: There are many websites where people describe their experiences of medical treatments ranging from the fancy to the crude (but much more informative)...

Rizo C, Deshpande A, Ing A, & Seeman N (2010). A rapid, web-based method for obtaining patient views on effects and side-effects of antidepressants. Journal of Affective Disorders. PMID: 20705344

Friday, July 16, 2010

Pepsi No Evil

So the web's leading science blogs hub, scienceblogs.com, tried to open a big bottle of Pepsi, but someone had shaken it up and it sprayed all over their face. Or something.

The PepsiGate Affair aka #sbfail has been covered elsewhere in great detail. Basically, SB announced they were going to host a new blog by Pepsi where Pepsi could talk about the "nutritional research" they're doing. A number of their best known bloggers decided they didn't want to be a part of that, and moved their blogs off the site. SB backtracked, and the PepsiBlog is no more, but the damage has been done.

Now that the dust has settled somewhat, I wonder: what exactly was wrong with the idea?

Well, the Pepsi blog would have been crap. Almost by definition. And the whole thing was undeniably an ill-thought-out decision, as shown by the fact that SB U-turned when the backlash hit. If they'd been serious, they'd have stuck to their guns.

But would it have been so bad? Take this from the response that SB made to their critics:
We think the conversation should include scientists from academia and government; we also think it should include scientists from industry.
I agree with this. It reflects the real world, and to the extent that science blogs are there to educate about science, that's a good thing. It would be lovely if all research was done by tenured academics with absolutely no ulterior motives except to uncover the truth. Unfortunately, it isn't. Most research is either done by non-tenured academics, whose ulterior motive is to advance their own careers, or by industry. (Of course most tenured academics have conflicts of interest too, but at least they could be impartial and still make a living.)

Now it could be said that industrial researchers shouldn't be bloggers because their conflict of interest is so glaring that their blogs would be mere propaganda. Well, they almost certainly would, but the point about blogging is that it's peer reviewed by default: if someone writes something crap, then either no-one will read it, or they'll criticize it, probably in the comments.

This is why if someone has a "blog" with no comments I don't think it's really a blog (comment moderation is iffy too in my book). So the fact that we're rarely perfectly impartial isn't a fatal flaw, because we get grilled. And we get grilled if we're wrong for reasons other than impartiality.

I'd love it if every major company had an official blog, so long as it had genuinely open comments, because I think they would get ripped to shreds and that would, eventually, undermine their credibility. This is presumably why most companies don't. As Jack of Kent said, "they are exposed to a huge reputational risk by seeking to blog in the full glare of the blogosphere."

Now there is a big question as to whether scienceblogs.com should play host to such blogs. I agree that it feels wrong. But I suspect that this feeling stems from the fear that it wouldn't just be a new PepsiBlog: it would also have a chilling effect on their other bloggers, preventing them from criticizing Pepsi. That it would fundamentally change the character of the whole site.

If that happened, then I'd stop reading SB, and I'd hope that any blogger with integrity would quit - but let's be fair, we just don't know whether it would have happened or not. And if it didn't, what harm would have been done? The non-Pepsi blogs would be able to continue blogging as happily as ever, PepsiBlog would get ripped to shreds, and Pepsi would, I suspect, have pulled it before too long anyway, realizing it had become a joke.

SB's pristine reputation for only hosting the best science would be dirtied. But I'm not sure that reputation was intact anyway. Look at Pharyngula. Let's be honest, most Pharyngula posts are not actually about science, they're about religion. Not that there's anything wrong with that, it's one of the leading blogs of its kind, but it's pretty obvious that the reason SB host it is because it brings in a ton of hits, and hence advertising money. And the owners of ScienceBlogs have allowed advertisers to dictate editorial policy before (personally I find this incident more disturbing than the Pepsi one).

To my mind, there is, however, one excellent reason for opposing the PepsiBlog: it's a slippery slope away from high-quality writing. As it stands, SB recruits blogs on merit. At least nominally. Maybe they also accept sexual favors. But not openly. Pharyngula has plenty of merit, although as I said, it's not exactly science, but that's the big difference between Pharyngula and a corporate blog: Pharyngula brings in the hits because it's good at what it does.

The PepsiBlog, while not a disaster in itself, would have sent the signal that you don't need to be good to blog at SB, you can also blog there if you're rich. This would have inevitably led to the erosion of SB's own reputation, which was extremely good until this happened because most of their blogs were excellent. The very fact that there has been such outcry over all this proves it - people didn't expect this from SB because we thought: they are above this.

SB was a great site. It may still be one, I hope it is, and I suspect they have learned their lesson now. If not, then the biggest damage from PepsiGate will be that we've lost a great site. But I don't think that's happened yet. SB still has a lot of great blogs, although it has just lost some of its best, but I for one am hopeful that it will recover.
