Sunday, January 9, 2011

The Wheel of Peer Review

In the spirit of the 9 Circles of Scientific Hell, and inspired by the evidence showing that scientific peer reviewers agree only slightly more often than they would by chance, here's a handy tool for randomly generating your review.

Feel free to print it out and throw darts at it, or maybe make a roulette wheel kind of thing, or perhaps a Ouija board. It seems to be in widespread use already, so there must be an easy way to use it.


1. The Power of Love: You love this paper! Well, you love the author. Maybe it's a romantic thing, maybe they once saved your ass by lending you their expertise/equipment/data, or maybe they bought you a drink once at a conference. Either way, they're awesome, so their paper must be fine.

2. Bee-in-your-Bonnet: You don't really care about this paper, but you do care, very strongly, about something else which is vaguely related. Many say that you're obsessed by it, though not to your face, because that would start you off talking about it. The problem with this paper is that it doesn't cover your pet idea. If the authors want it published, they'll need to change that, pronto. Major revisions are called for.

3. The Pedant: The paper is atrocious and doesn't deserve to be written on a scrap of toilet paper let alone submitted to this great Journal... in terms of spelling and formatting. Scientifically, you think it's probably pretty good, but it was hard to tell because of the amount of red ink you put all over it. English isn't the author's first language? That's their problem. Isn't that what "minor corrections" are for? No! That's what the bin is for.

4. Cite Me, Me, Me!: The problem with this paper is that it doesn't reference the right previous work... yours. Unless the authors change it to cite everything you've written in the past 10 years, they can get lost. If they do, the paper will be immediately accepted - to reject it would harm your citation count.

5. The Tortoise: You'll review this paper when you get back from holiday. And finished writing your own paper. After that conference. When you've finished your teaching for the year. Maybe. Until you submit your review, the authors are stuck in a horrible limbo, but luckily you're anonymous so they won't know who to send hate mail to.

6. The Cheerleader: This paper is awesome because it supports something that you yourself are about to publish. It's full of methodological holes? Never mind, that will only make your paper better by comparison. It's barely readable? Suggest edits to make it just about comprehensible so people can tell how well it supports you. Then accept a.s.a.p.

7. Wrong End of the Stick: You think you understand this paper, but actually you don't. So your review completely misses the point. When the authors point this out, you have two options: a) blame the paper for being confusing, and chuck it out or b) decide the whole thing is much too complicated to spend time over, and accept it.

8. The Perfect Reviewer: You are an intelligent, informed expert, new enough to the field that you have no axe to grind, and you take the time to read the paper fully, and return a constructive, perceptive review within a couple of weeks. Well done. Unfortunately, there are one or two other reviewers, and there's only a 1 in 8 chance they'll be like you...
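For anyone without darts or a Ouija board handy, the wheel can be spun in a few lines of Python (a hypothetical little spinner, not an official tool):

```python
import random

# The eight reviewer types from the wheel above.
REVIEWERS = [
    "The Power of Love", "Bee-in-your-Bonnet", "The Pedant",
    "Cite Me, Me, Me!", "The Tortoise", "The Cheerleader",
    "Wrong End of the Stick", "The Perfect Reviewer",
]

def spin_the_wheel():
    """Pick a reviewer type uniformly at random: one chance in eight each."""
    return random.choice(REVIEWERS)

print(spin_the_wheel())
```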

Friday, December 17, 2010

The Scanner's Prayer

MRI scanners have revolutionized medicine and provided neuroscientists with some incredible tools for exploring the brain.

But that doesn't mean they're fun to use. They can be annoying, unpredictable beings, and you never know whether they're going to bless you with nice results or curse you with cancelled scans and noisy data.

So for the benefit of everyone who has to work with MRI, here is a devotional litany which might just keep your scanner from getting wrathful at the crucial moment. Say this before each scan. Just remember, the magnet is always on and it can read your mind, so make sure you really mean it, and refrain from scientific sins...

*

Our scanner, which art from Siemens,
Hallowed be thy coils.
Thy data come;
Thy scans be done;
In grey matter as it is in white matter.
Give us this day our daily blobs.
And forgive us our trespasses,
As we forgive them that trespass onto our scan slots.
And lead us not into the magnet room carrying a pair of scissors,
But deliver us from volunteers who can’t keep their heads still.
For thine is the magnet,
The gradients,
And the headcoil,
For ever and ever (at least until we can afford a 7T).
Amen.

(Apologies to Christians).

Saturday, December 11, 2010

Wikileaks: A Conversation

"Wikileaks is great. It lets people leak stuff."

"Hang on, so you're saying that no-one could leak stuff before? They invented it?"

"Well, no, but they brought leaking to the masses. Sure, people could post documents to the press before, but now anyone in the world can access the leaks!"

"Great, but isn't that just the internet that did that? If it weren't for Wikileaks, people could just upload their leaks to a blog. Or email them to 50 newspapers. Or put them on the torrents. Or start their own site. If it's good, it would go viral, and be impossible to take down. Just like Wikileaks, with all their mirrors, except even more secure, because there'd be literally no-one to arrest or cut off funding to."

"OK, but Wikileaks is a brand. It's not about the technical stuff - it's the message. Like one of their wallpapers says, they're synonymous with free speech."

"So you think it's a good thing that one organization has become synonymous with the whole process of leaking? With the whole concept of openness? What will happen to the idea of free speech, then, if that brand image suddenly gets tarnished - like, say, if their founder and figurehead gets convicted of a serious crime, or..."

"He's innocent! Justice for Julian!"

"Quite possibly, but why do you care? Is he a personal friend?"

"It's an attack on free speech!"

"So you agree that one man has become synonymous with free speech? Doesn't that bother you?"

"Erm... well. Look, fundamentally, we need Wikileaks. Before, there was no centralized system for leaking. Anyone could do it. It was a mess! Wikileaks put everything in one place, and put a committee of experts in a position to decide what was worth leaking and what wasn't. It brought much-needed efficiency and respectability to the idea of leaking. Before Wikileaks, it was anarchy. They're like... the government."

"..."

Edit: See also The Last Psychiatrist's take.

Wednesday, November 24, 2010

The 9 Circles of Scientific Hell


Dante's Inferno: a classic of world literature, the definitive statement of the mediaeval Christian world-view, the first major work in the Italian language, and the basis for a violent videogame. The poem offers a tour through the nine increasingly horrible levels of Hell, in which sinners are tormented forever.

But Dante lived before the era of modern science. I thought I'd update his scheme to explain what happens to those guilty of various scientific sins, ranging from the commonplace to the shocking.

Bear in mind that Dante's Hell had a place for everyone, and it was only Christ's intervention that saved anyone from it; even "good" people went to Hell because everyone sins. But they are still sins. Likewise, very few scientists (and I'm certainly not one of them) would be able to avoid being condemned to some level of this Inferno... but, that's no excuse.

First Circle: Limbo
"The uppermost circle is not a place of punishment, so much as regret. Those who have committed no scientific sins as such, but who turned a blind eye to them, and encouraged them by awarding grants and publications, spend eternity on top of this barren mountain, watching the carnage below and reflecting on how they are partially responsible..."

Second Circle: Overselling
"This circle is reserved for those who exaggerated the importance of their work in order to get grants or write better papers. Sinners are trapped in a huge pit, neck-deep in horrible sludge. Each sinner is provided with the single rung of a ladder, labelled 'The Way Out - Scientists Crack Problem of Second Circle of Hell'."

Third Circle: Post-Hoc Storytelling
"Sinners condemned to this circle must constantly dodge the attacks of demons armed with bows and arrows, firing more or less at random. Every time someone is hit in some part of their body, the demon proceeds to explain at enormous length how they were aiming for that exact spot all along."

Fourth Circle: P-Value Fishing
"Those who tried every statistical test in the book until they got a p value less than 0.05 find themselves here, an enormous lake of murky water. Sinners sit on boats and must fish for their food. Fortunately, they have a huge selection of different fishing rods and nets (brand names include Bayes, Student, Spearman and many more). Unfortunately, only one in 20 fish are edible, so they are constantly hungry."

Fifth Circle: Creative Use of Outliers
"Those who 'cleaned up' their results by excluding inconvenient data-points are condemned here. Demons pluck out their hairs one by one, every time explaining that they are better off without that hair because there was something wrong with it."

Sixth Circle: Plagiarism
"This circle is entirely empty because as soon as a sinner arrives, a winged demon carries them to another circle and forces them to suffer the punishment meted out to the people there. After their three-year 'post' is up, they are carried to another circle, and so on..."

Seventh Circle: Non-Publication of Data
"Here sinners are chained to burning chairs in front of desks covered with broken typewriters. Only if they can write an article describing their predicament will they be set free. Each desk has a file-drawer stuffed full of these, but the drawers are locked."

Eighth Circle: Partial Publication of Data
"At any one time exactly half of the sinners here are chased around by demons prodding them with spears. The demons choose who to chase at random after ensuring that the groups are matched for age, gender, height and weight. Howling desert winds blow a constant torrent of articles announcing the success of a new program to enhance participation in physical exercise - but with no mention of the side effects."

Ninth Circle: Inventing Data
"Here Satan himself lies trapped forever in a block of solid ice alongside the worst sinners of all. Frozen in front of their eyes is a paper explaining very convincingly that water cannot freeze in the environmental conditions of this part of Hell. Unfortunately, the data were made up."

Links: This has been kindly translated into Russian here and into Portuguese here.

Sunday, September 26, 2010

Big Pharma Explain How To Pick Cherries

Here at Neuroskeptic, we see a lot of bad science. Maybe, over the years (all 2 of them) that I've been writing this blog, I've become a bit jaded. Maybe I'm less distressed by it than I used to be. Cynical, even.

But this one really takes the biscuit. And then it takes the tin. And relieves itself in it: A New Population-Enrichment Strategy to Improve Efficiency of Placebo-Controlled Clinical Trials of Antidepressant Drugs.

Don't worry - it's from a big pharmaceutical company (GlaxoSmithKline), so I don't have to worry about hurting feelings.

It's full to bursting with colourful graphs and pictures, but the basic idea is very simple. As in "simpleton".

Suppose you're testing a new drug against placebo. You decide to do a multicentre trial, i.e. you enlist lots of doctors to give the drug, or placebo, to their patients. Each clinic or hospital which takes part is a "centre". Multicentre trials are popular because they're an easy way of quickly testing a drug on a large number of patients.

Anyway, suppose that the results come in, and it turns out that the drug didn't work any better than placebo, which unfortunately is what happens rather often in modern trials of antidepressants. Oh dear. The drug's crap. That's the end of that chapter.

...
or is it?!? say GSK. Maybe not. They have a clever trick. Look at the results from each centre individually. Placebo response rates will probably vary between centres: in some of them, the placebo people don't get better, in others, they get lots better.

Now, suppose that you just chucked out all of the data from centres where the people on placebo got much better, on the grounds that there must be something weird going on in those ones. That's exactly what GSK did: they reanalyzed the data from 1,837 patients given paroxetine or placebo, across 124 centres. In the dataset as a whole, paroxetine barely outperformed placebo. However, in the centres where people on placebo only improved a little, the drug was much better than placebo!

Well, of course it was. Imagine that the drug has no effect. Some people just get better and others don't. Let's assume that each person randomly gets between 0 and 25 better, with an equal chance of any outcome. Half are on drug and half are on placebo, but it makes no difference.

Let's further assume that there are 50 centres, with 20 people per centre (1000 people total). I knocked up a "simulation" of this in Excel (it took 10 minutes). Here's what you get:

The blue dots show, for each imaginary centre, drug improvement vs. placebo improvement. There's no correlation (it's random), and, on average, there is no difference: both average out at 12 points. The drug doesn't work.

The red dots show the "Treatment Effect" i.e. [drug improvement - placebo improvement]. The average is 0 - because the drug doesn't work. But there's a strong negative correlation between Treatment Effect and the placebo improvement - in centres where people improved lots on placebo, the drug worked worse.
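The same toy simulation can be run in Python rather than Excel. This is my own rough sketch (the centre counts and the 0-25 improvement range follow the toy example above; everything else is made up for illustration):

```python
import random
import statistics

random.seed(42)

N_CENTRES, PER_ARM = 50, 10  # 20 patients per centre, half on "drug"

def centre_means():
    """Per-centre mean improvement: every patient improves by a random
    amount between 0 and 25, and the drug has no effect at all."""
    drug = statistics.mean(random.uniform(0, 25) for _ in range(PER_ARM))
    placebo = statistics.mean(random.uniform(0, 25) for _ in range(PER_ARM))
    return drug, placebo

centres = [centre_means() for _ in range(N_CENTRES)]
placebo_means = [p for _, p in centres]
effects = [d - p for d, p in centres]  # per-centre "Treatment Effect"

def pearson(xs, ys):
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (len(xs) * statistics.pstdev(xs) * statistics.pstdev(ys))

# The Treatment Effect correlates negatively with the placebo response,
# purely because the placebo mean appears (negated) inside the effect.
r = pearson(placebo_means, effects)

# "Enrich": keep only centres with a below-median placebo response.
cutoff = statistics.median(placebo_means)
enriched = [e for e, p in zip(effects, placebo_means) if p < cutoff]

print(f"correlation(placebo, effect): {r:+.2f}")  # negative
print(f"overall effect:  {statistics.mean(effects):+.2f}")  # near zero
print(f"enriched effect: {statistics.mean(enriched):+.2f}")  # inflated
```

Even though the drug does nothing, throwing away the high-placebo-response centres makes the surviving "Treatment Effect" look positive.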

This is exactly what Glaxo show in Figure 1a (see above). They write:

"The analysis of the surface response indicated the predominant role of center specific placebo response as compared with the dose strength in determining the Treatment Effect of paroxetine."

But of course the two correlate. You're correlating placebo improvement with itself: the "Treatment Effect" is a function of the placebo improvement. It's classic regression to the mean.

Of course if you chuck out the centres where people on placebo do well (the grey box in my picture), the drug seems to work pretty nicely. But this is cheating. It is cherry-picking. It is completely unscientific. (To give the authors their due, they also eliminated the centres where the placebo response was very low. This could, under some assumptions, make the analysis unbiased, but they don't show that this was their intention, let alone that it would eliminate all of the bias.)

The authors note that this could be a source of bias, but say that it wouldn't be one if it was planned out in advance: "in order to overcome the bias risk, the enrichment strategy should be accounted for and pre-planned in the study protocol." This is like saying that if you announce, before playing chess, that you are going to cheat, it's not cheating.

To be fair to the authors, assuming the drug does work, this method would improve your chances of correctly detecting the effect. Centres with very high placebo responses quite possibly are junk. Assuming the drug works.

But if we're assuming the drug works, why are we bothering to do a trial? The whole point of a trial is to discover something we don't know. The authors justify their approach by suggesting that it would be useful for drug companies who want to do a "proof-of-concept" trial to find out whether an experimental drug might work under the most favourable conditions, i.e. whether they should bother continuing to research it.

They say that such trials "are inherently exploratory in their conception, aimed at signal detection, open to innovation..." - in other words, that they're not meant to be as rigorous as late-stage trials.

Fair enough. But this method is not even suitable for proof-of-concept, because it would (as I have shown above in my 10 minute simulation) increase your chance of finding an "effect" from a drug that doesn't work.
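That claim is easy to check by repeating the null simulation many times. Here is my own sketch (same made-up parameters as the toy example above, not anything from the paper), showing that enrichment systematically inflates the apparent effect of a drug that does nothing:

```python
import random
import statistics

random.seed(0)

def null_trial(n_centres=50, per_arm=10):
    """One multicentre trial of a drug with zero true effect."""
    drug = [statistics.mean(random.uniform(0, 25) for _ in range(per_arm))
            for _ in range(n_centres)]
    placebo = [statistics.mean(random.uniform(0, 25) for _ in range(per_arm))
               for _ in range(n_centres)]
    raw = statistics.mean(drug) - statistics.mean(placebo)
    # "Enrichment": discard centres whose placebo response is above the median.
    cutoff = statistics.median(placebo)
    kept = [(d, p) for d, p in zip(drug, placebo) if p < cutoff]
    enriched = (statistics.mean(d for d, _ in kept)
                - statistics.mean(p for _, p in kept))
    return raw, enriched

results = [null_trial() for _ in range(300)]
raw_mean = statistics.mean(r for r, _ in results)
enriched_mean = statistics.mean(e for _, e in results)

print(f"mean raw effect:      {raw_mean:+.2f}")       # hovers around zero
print(f"mean enriched effect: {enriched_mean:+.2f}")  # systematically positive
```

The raw effect averages out to nothing, as it should for a useless drug; the "enriched" effect is reliably positive, which is exactly the false signal a proof-of-concept trial is supposed to avoid.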

Whatever the truth is, this method will give the same result, so it's not useful evidence. It's like saying "Heads I win, tails you lose". You've set it up so that I lose - the coin toss doesn't tell us anything.

All of the authors' results are based on trials in which the drug "should have worked": they do not appear to have simulated what would happen if they used this method on trials where it didn't work, as I just did. So I'm doing Pharma a big favour by writing this post, because if they adopt this approach, they're more likely to waste money on drugs that don't work.

They should be paying me for this stuff.

Merlo-Pich E, Alexander RC, Fava M, & Gomeni R (2010). A New Population-Enrichment Strategy to Improve Efficiency of Placebo-Controlled Clinical Trials of Antidepressant Drugs. Clinical Pharmacology and Therapeutics. PMID: 20861834

Big Pharma Explain How To Pick Cherries

Here at Neuroskeptic, we see a lot of bad science. Maybe, over the years (all 2 of them) that I've been writing this blog, I've become a bit jaded. Maybe I'm less distressed by it than I used to be. Cynical, even.

But this one really takes the biscuit. And then it takes the tin. And relieves itself in it: A New Population-Enrichment Strategy to Improve Efficiency of Placebo-Controlled Clinical Trials of Antidepressant Drugs.

Don't worry - it's from a big pharmaceutical company (GlaxoSmithKline), so I don't have to worry about hurting feelings.

It's is full to bursting with colourful graphs and pictures, but the basic idea is very simple. As in "simpleton".

Suppose you're testing a new drug against placebo. You decide to do a multicentre trial, i.e. you enlist lots of doctors to give the drug, or placebo, to their patients. Each clinic or hospital which takes part is a "centre". Multicentre trials are popular because they're an easy way of quickly testing a drug on a large number of patients.

Anyway, suppose that the results come in, and it turns out that the drug didn't work any better than placebo, which unfortunately is what happens rather often in modern trials of antidepressants. Oh dear. The drug's crap. That's the end of that chapter.

...
or is it?!? say GSK. Maybe not. They have a clever trick. Look at the results from each centre individually. Placebo response rates will probably vary between centres: in some of them, the placebo people don't get better, in others, they get lots better.

Now, suppose that you just chucked out all of the data from centres where the people on placebo got much better, on the grounds that there must be something weird going on in those ones. They reanalyzed the data from 1,837 patients given paroxetine or placebo, across 124 centres. In the dataset as a whole, paroxetine barely outperformed placebo. However, in the centres where people on placebo only improved a little, the drug was much better than placebo!

Well, of course it was. Imagine that the drug has no effect. Some people just get better and others don't. Let's assume that each person randomly gets between 0 and 25 better, with an equal chance of any outcome. Half are on drug and half are on placebo, but it makes no difference.

Let's further assume that there are 50 centres, with 20 people per centre (1000 people total). I knocked up a "simulation" of this in Excel (it took 10 minutes). Here's what you get:

The blue dots show, for each imaginary centre, drug improvement vs. placebo improvement. There's no correlation (it's random), and, on average, there is no difference: both average out at 12 points. The drug doesn't work.

The red dots show the "Treatment Effect" i.e. [drug improvement - placebo improvement]. The average is 0 - because the drug doesn't work. But there's a strong negative correlation between Treatment Effect and the placebo improvement - in centres where people improved lots on placebo, the drug worked worse.

This is exactly what Glaxo show in Figure 1a (see above). They write:
The analysis of the surface response indicated the predominant role of center specific placebo response as compared with the dose strength in determining the Treatment Effect of paroxetine.
But of course they correlate. You're correlating placebo improvement with itself: the "Treatment Effect" is a function of the placebo improvement. It's classic regression to the mean.

Of course if you chuck out the centres where people on placebo do well (the grey box in my picture), the drug seems to work pretty nicely. But this is cheating. It is cherry-picking. It is completely unscientific. (To give the authors their due, they also eliminated the centres where the placebo response was very low. This could, under some assumptions, make the analysis unbiased, but they don't show that this was their intention, let alone that it would eliminate all of the bias.)

The authors note that this could be a source of bias, but say that it wouldn't be one if it was planned out in advance: "in order to overcome the bias risk, the enrichment strategy should be accounted for and pre-planned in the study protocol." This is like saying that if you announce, before playing chess, that you are going to cheat, it's not cheating.

To be fair to the authors, assuming the drug does work, this method would improve your chances of correctly detecting the effect. Centres with very high placebo responses quite possibly are junk. Assuming the drug works.

But if we're assuming the drug works, why are we bothering to do a trial? The whole point of a trial is to discover something we don't know. The authors justify their approach by suggesting that it would be useful for drug companies who want to do a "proof-of-concept" trial to find out whether an experimental drug might work under the most favourable conditions, i.e. whether they should bother continuing to research it.

They say that such trials "are inherently exploratory in their conception, aimed at signal detection, open to innovation..." - in other words, that they're not meant to be as rigorous as late-stage trials.

Fair enough. But this method is not even suitable for proof-of-concept, because it would (as my 10-minute simulation above shows) increase your chance of finding an "effect" from a drug that doesn't work.
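You can put numbers on that claim. Here's a hedged Python sketch of the same hypothetical null-drug setup as the Excel simulation (a 12-point cutoff stands in for the grey box; 500 centres just to keep the averages stable), showing that discarding high-placebo centres conjures an "effect" out of nothing:

```python
import random

random.seed(0)

# Null drug again: both arms improve 12 points on average at every centre.
def arm_mean():
    return sum(random.gauss(12.0, 6.0) for _ in range(10)) / 10

centres = [(arm_mean(), arm_mean()) for _ in range(500)]  # (drug, placebo)

# Unselected analysis: the true (null) effect.
overall = sum(d - p for d, p in centres) / len(centres)

# "Enrichment": throw out every centre whose placebo response is above average.
kept = [(d, p) for d, p in centres if p < 12.0]
enriched = sum(d - p for d, p in kept) / len(kept)

print(round(overall, 2), round(enriched, 2))
# overall hovers near 0; enriched comes out clearly positive -- an apparent
# benefit manufactured purely by the selection step.
```

The selection step biases the placebo arm downwards while leaving the drug arm untouched, so the difference between them becomes positive even though nothing real changed.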

Whatever the truth is, this method will give the same result, so it's not useful evidence. It's like saying "Heads I win, tails you lose". You've set it up so that I lose - the coin toss doesn't tell us anything.

All of the authors' results are based on trials in which the drug "should have worked": they do not appear to have simulated what would happen if they used this method on trials where it didn't work, as I just did. So I'm doing Pharma a big favour by writing this post, because if they adopt this approach, they're more likely to waste money on drugs that don't work.

They should be paying me for this stuff.

Merlo-Pich E, Alexander RC, Fava M, & Gomeni R (2010). A New Population-Enrichment Strategy to Improve Efficiency of Placebo-Controlled Clinical Trials of Antidepressant Drugs. Clinical Pharmacology and Therapeutics. PMID: 20861834

Monday, August 23, 2010

Fish Out Of Water, On Ketamine

Ketamine is a drug of many talents. Used medically as an anesthetic in animals and, sometimes, in humans, it's also become widely used recreationally despite, or perhaps because of, its reputation as a "horse tranquilizer".

Ketamine's also a hot topic in research at the moment for two reasons: it's considered an interesting way of provoking the symptoms of schizophrenia, and it's also shown promise as a fast-acting antidepressant.

Anyway, most ketamine research to date has been done either in humans or in rodents, but New York pharmacologists Zakhary et al decided to see what it does to fish. So they put some ketamine in the fish's water and saw what happened: A Behavioral and Molecular Analysis of Ketamine in Zebrafish.

A high dose, 0.8%, just made the fish unconscious. Well, it is an anesthetic. But a low dose (0.2%) had rather more complex effects. It sent them literally loopy - they started swimming around and around in circles, usually in a clockwise direction. Control zebrafish swam about and explored their tanks without any circling behaviours.

They also examined the effect of ketamine on the "hypoxic stress" response, i.e. what happens when you take the fish out of water (only for 20 seconds, so it doesn't cause any real harm.) Normal fish struggle and gasp for water in this situation, unsurprisingly. Ketamine strongly inhibited this.

So what? Well, it's hard to say what this might mean. It would be great if the zebrafish turned out to be a useful experimental model for investigating the effects of ketamine and similar drugs, because they're much easier to work with than rodents (for one thing, it's a lot easier to just put a drug in a fish tank than to inject it into a mouse.)

However, it remains to be seen whether swimming in circles is a useful analog of the human effects of ketamine. Ketamine can make people act in some pretty stupid ways, but walking around in little circles is extreme even by K-head standards...

Link: I've blogged about ketamine before: I'm On K, You're On K.

Zakhary SM, Ayubcha D, Ansari F, Kamran K, Karim M, Leheste JR, Horowitz JM, & Torres G (2010). A behavioral and molecular analysis of ketamine in zebrafish. Synapse (New York, N.Y.). PMID: 20623473

Thursday, August 12, 2010

Drugs for Starcraft Addiction

Are you addicted to Starcraft? Do you want to get off Battle.net and on a psychoactive drug?

Well, South Korean psychiatrists Han et al report that Bupropion sustained release treatment decreases craving for video games and cue-induced brain activity in patients with Internet video game addiction.

They took 11 people with "Internet Game Addiction" - the game being Starcraft, this being South Korea - and gave them the drug bupropion (Wellbutrin), an antidepressant that's also used in drug addiction and smoking cessation. These guys (because, predictably, they were all guys) were seriously hooked, playing on average at least 4 hours per day.
Six were absent from school because of playing Internet video game in Internet cafes for more than 2 months. Two IAGs had been divorced because of excessive Internet use at night.
They helpfully summarize Starcraft for the layperson:
As a military leader for one of three species, players must gather resources for training and expanding their species’ forces. Utilizing various strategies and alliances with other species, players attempt to lead their own species to victory.
Which is all true, but it doesn't quite communicate the sheer obsessiveness that's required to win this game. As Penny Arcade said, "it is OCD masquerading as recreation", and that's coming from someone who literally plays video games for a living.

Anyway, apparently the drug worked:
After 6 weeks of bupropion SR treatment in the IAG group, there were significant decreases in terms of craving for playing StarCraft (23.6%), total playing game time (35.4%), and Internet Addiction Scale scores (15.4%)
They also did some fMRI and found that the addicts' brains responded more strongly to pictures of Zerglings than control people's did, and that the drug reduced this activity a bit. But there was no placebo group, so we have no idea whether this was the drug or not.

Sadly, the point is moot, because Starcraft II has just come out, and it's more addictive than ever. I'm off to try and optimize my Terran build order, and by God I will get those 10 marines out in the first 5 minutes if it takes me all night...

Han DH, Hwang JW, & Renshaw PF (2010). Bupropion sustained release treatment decreases craving for video games and cue-induced brain activity in patients with Internet video game addiction. Experimental and Clinical Psychopharmacology, 18 (4), 297-304. PMID: 20695685

Tuesday, July 6, 2010

Brain Stimulation Can Stop the Rock

Isn't it annoying when you get a song stuck in your head? Like, say, this one:


Stop the rock, stop the rock
Stop the rock, stop the rock
Stop the rock, can't stop the rock
You can't stop the rock, stop the rock
Stop the rock, can't stop the rock
You can't stop the rock, can't stop the rock. etc.
- Apollo 440, "Stop the Rock"
You'll probably be stuck with that tune for a few minutes, but with any luck it'll go away eventually. However, for the 63-year-old Italian man reported on in a new paper by Cosentino et al., the melodic misery never stopped.

The patient had suffered from partial hearing loss for 20 years, probably as a result of his work as a stonemason, which involved a lot of loud noise. His real problems started, however, when he suffered a car accident which caused damage to his right temporal pole. This caused
continuous musical hallucinations in the form of popular songs by Renato Carosone ... the songs were the ones he often used to listen to when he was younger. The volume of the musical hallucinations was initially low, and then became progressively louder; it was perceived in the middle of head and changed in severity over the course of the day. The intensity of the hallucinations evaluated through an arbitrary scale ranging from 0 (no hallucinations) to 10 (unbearable hallucinations) varied from 5 to 8 during the day.
The spectral songs didn't directly interfere with his life, but they were extremely annoying. He reported no other symptoms, his hearing was no worse than it had been before the accident, all neuropsychological tests were normal, and he had no history of any neurological or psychiatric problems.

Doctors tried to control the harmonic hallucinations with a range of anti-epileptic drugs, but they didn't work. A PET scan showed reduced brain activity in the area which was damaged, but increased activity in the posterior temporal lobe. Maybe this was to blame for the problems.

So Cosentino et al. decided to use repetitive transcranial magnetic stimulation (rTMS) to suppress activity in the offending part of the brain. rTMS uses strong magnetic fields to stimulate the brain; through some unknown neurobiological process, it can, in the long term, lead to reduced activity.

rTMS was given 5 days per week for 2 weeks. After the first week, the patient reported that the music had got a lot quieter and after another week, it was gone. A few months later it started again, but far quieter than before and only occasionally. The patient was offered more treatment but he said it wouldn't be worth it, because the hallucinations were no longer annoying. A second PET scan showed normalization of the activity...maybe (see the picture above; A=before B=after.)

There was no placebo condition, so it's hard to know whether this was a true effect of the magnetic stimulation, but the fact that a number of drugs hadn't worked suggests that it wasn't merely a placebo effect. So it turns out that you can Stop the Rock. Or at least, you can Stop the Canzone Napoletana of Renato Carosone. Whether the Rock is harder to Stop is a topic for future research.

Cosentino, G., Giglia, G., Palermo, A., Panetta, M., Lo Baido, R., Brighina, F., & Fierro, B. (2010). A case of post-traumatic complex auditory hallucinosis treated with rTMS. Neurocase, 16 (3), 267-272. DOI: 10.1080/13554790903456191

Friday, May 28, 2010

This Is Your Brain's Anti-Drug

What's your anti-drug? Well, it might well be hemopressin. At least, that's probably your anti-marijuana.

Hemopressin is a small protein that was discovered in the brains of rodents in 2003: its name comes from the fact that it's a breakdown product of hemoglobin and that it can lower blood pressure.

No-one seems to have looked to see whether hemopressin is found in humans, yet, but it seems very likely. Almost everything that's in your brain is in a mouse's brain, and vice versa.

Pharmacologically, hemopressin's literally an anti-marijuana molecule: it's an inverse agonist at CB1 receptors, which are the ones targeted by the psychoactive compounds in marijuana, and also by the neurotransmitters known as endocannabinoids. Cannabinoids turn CB1 receptors on, hemopressin turns them off.

Artificial CB1 blockers were developed as weight loss drugs, and one of them, rimonabant, made it onto the market - but it was banned after it turned out that it caused depression and anxiety in many people.

So hemopressin is Nature's rimonabant: in which case, it ought to do what rimonabant does, which is to reduce appetite. And indeed a Journal of Neuroscience paper just out from Dodd et al shows that it does just that, in rats and mice: injections of hemopressin reduced feeding.

Interestingly, this worked even when it was injected by the standard route under the skin - many proteins can't enter the brain if they're given this way, because they can't cross the blood-brain barrier, meaning that they have to be injected directly into the brain, which makes researching them much harder. So hemopressin, with any luck, will be pretty easy to study. Any volunteers for the first human trial...?

Dodd, G., Mancini, G., Lutz, B., & Luckman, S. (2010). The Peptide Hemopressin Acts through CB1 Cannabinoid Receptors to Reduce Food Intake in Rats and Mice. Journal of Neuroscience, 30 (21), 7369-7376. DOI: 10.1523/JNEUROSCI.5455-09.2010

Thursday, May 6, 2010

Mice That Fight for Their Rights

Israeli biologists Feder et al report on Selective breeding for dominant and submissive behavior in Sabra mice.

Mice are social animals and like many species, they show dominance hierarchies. When they first meet, they'll often fight each other. The winner gets to be Mr (or Mrs) Big, and they enjoy first pick of the food, mating opportunities, etc - for as long as they can remain dominant.

But what determines which mice become top dog... ? Feder et al show that it's partially under genetic control. They took a normal population of laboratory mice, paired them up, and made them battle for supremacy in a simple set-up in which only one mouse can get access to a central food supply:

At first, only about 30% of pairs developed clear dominance/submission relationships, but the ones that did were selectively bred: dominant males mated with dominant females, and submissive males with submissive females. The offspring were put through the same process, and it was repeated.

The results were dramatic: after 4 generations of successive selection, 80% of the pairs showed clear dominance and submission behaviour. And with each generation of breeding, the dominance relationships appeared faster and stronger: at first the winners only got slightly more access to the food, but by the 4th generation, they almost completely monopolized it. As expected, the mice bred to be dominant were overwhelmingly more likely to end up on top. The differences were not due to general differences in activity levels or anxiety.

But the naturally timid mice could be made to fight for their rights by treating them with antidepressants - after a month of imipramine, they were taking crap from no-one.

Feder et al say that previous studies have also shown anti-submissive effects of antidepressants, while drugs used to treat mania reduce dominance. Anyone who's experienced a mood disorder will probably be able to relate to this: depressed people tend to feel like they belong at the bottom of the pecking order of life, while mania is classically associated with believing you're the greatest person in history.

So dominance and submission could provide a useful way of testing the effects of drugs on mood. If so, it would be useful, because current animal models of depression and antidepressants etc. mostly rely on putting animals in a glass of water and seeing how long they take to stop struggling...

Feder, Y., Nesher, E., Ogran, A., Kreinin, A., Malatynska, E., Yadid, G., & Pinhasov, A. (2010). Selective breeding for dominant and submissive behavior in Sabra mice. Journal of Affective Disorders. DOI: 10.1016/j.jad.2010.03.018