Monday, December 27, 2010

Does Peer Review Work?

Scientific peer review is based on the idea that some papers deserve to get published and others don't.

By asking a hand-picked team of 3 or 4 experts in the field (the "peers"), journals hope to accept the good stuff, filter out the rubbish, and improve the not-quite-good-enough papers.

This all assumes that the reviewers, being experts, are able to make a more or less objective judgement. In other words, when a reviewer says that a paper's good or bad, they're reporting something about the paper, not just giving their own personal opinion.

If that's true, reviewers ought to agree with each other about the merits of each paper. On the other hand, if it turns out that they don't agree any more often than we'd expect if they were assigning ratings entirely at random, that would suggest that there's a problem somewhere.

Guess what? Bornmann et al have just reported that reviewers are only slightly more likely to agree than they would be if they were just flipping coins: A Reliability-Generalization Study of Journal Peer Reviews.

The study is a meta-analysis of 48 studies published since 1966, looking at peer review of either journal papers or conference presentations. In total, almost 20,000 submissions were studied. Bornmann et al calculated the mean inter-rater reliability (IRR), a measure of how well different judges agree with each other.

Overall, they found a reliability coefficient (r^2) of 0.23, or 0.34 under a different statistical model. This is pretty low, given that 0 represents chance agreement, while a perfect correlation would be 1.0. Using another measure of IRR, Cohen's kappa, they found a reliability of 0.17. Roughly speaking, that means peer reviewers agreed only 17% more often than they would have by chance alone.
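For the curious, here's a minimal sketch of how Cohen's kappa is calculated for two raters judging the same set of items: it's the observed agreement, corrected for the agreement you'd expect by chance from each rater's overall accept/reject rates. The reviewer verdicts below are invented purely for illustration.

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    # Proportion of items on which the two raters actually agree
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Agreement expected by chance, from each rater's marginal category frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_chance = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_observed - p_chance) / (1 - p_chance)

# Two hypothetical reviewers' accept (1) / reject (0) verdicts on ten papers
reviewer_1 = [1, 1, 0, 0, 1, 0, 1, 1, 0, 0]
reviewer_2 = [1, 0, 0, 1, 1, 0, 1, 0, 0, 1]
print(cohen_kappa(reviewer_1, reviewer_2))  # about 0.2 for these made-up ratings
```

A kappa of 0 means the raters do no better than chance; 1 means perfect agreement. The meta-analysis's 0.17 sits uncomfortably close to the bottom of that scale.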

Worse still, the bigger the study, the worse the reliability it reported. On the other hand, the subject area - economics/law, natural sciences, medical sciences, or social sciences - had no effect, arguing against the common-sense idea that reviews must be more objective in the "harder" sciences.

So what? Does this mean that peer review is a bad thing? Maybe it's like the police. The police are there to prevent and punish crime. They don't always succeed: crime happens. But only a fool would argue that, because the police fail to prevent some crimes, we ought to abolish them. The fact that we have police, even imperfect ones, acts as a deterrent.

Likewise, I suspect that peer review, for all its flaws (and poor reliability is just one of them), does prevent many "bad" papers from getting written, or at least submitted, even if a lot do still make it through, and even if the vetting process is itself not very efficient. The very fact that peer review exists at all makes people write their papers in a certain way.

Peer review surely does "work", to some extent - but is the work it does actually useful? Does it really filter out bad papers or does it on the contrary act to stifle originality? There are lots of things to say about this, but I will just say this for now: it's important to distinguish between whether peer review is good for science as a whole, and whether it's good for journals.

Every respectable journal relies on peer review to decide which papers to publish: even if the reviewers achieve nothing else, they certainly save the Editor time, and hence money (reviewers generally work for free). It's very hard to see how the current system of scientific publication in journals would survive without peer review. But that doesn't mean it's good for science. That's an entirely different question.

Bornmann L, Mutz R, & Daniel HD (2010). A reliability-generalization study of journal peer reviews: a multilevel meta-analysis of inter-rater reliability and its determinants. PLoS ONE, 5(12). PMID: 21179459

Sunday, December 26, 2010

OUR SECRET SANTA

CHRISTMAS WITH PRESENTS AT THE SECRET SANTA EXCHANGE.

OUR CHRISTMAS EVE NIGHT -
SUPPER AT EXACTLY MIDNIGHT.

Friday, December 24, 2010

CURIOSA WISHES YOU A MERRY CHRISTMAS...

A DIFFERENT AND WONDERFUL CHRISTMAS,
TOGETHER WITH THE SNOW AND THE COLD.

BUT WITH THE JOY OF A CHRISTMAS IN THE UNITED STATES...

I WISH EVERYONE A MERRY CHRISTMAS...

Thursday, December 23, 2010

Depression Treatment Increased From 1998 to 2007

A paper just out reports on the changing patterns of treatment for depression in the USA, over the period from 1998 to 2007.

The headline news is that it increased: the overall rate of people treated for some form of "depression" went from 2.37% to 2.88% per year. That's an increase of 21%, which is not trivial, but it's much less than the increase over the previous decade: the rate was just 0.73% back in 1987.
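That 21% figure is just the relative change between the two annual rates. As a quick sanity check (the numbers are from the paper; the helper function is mine):

```python
def relative_increase(old, new):
    """Percentage change from an old rate to a new one."""
    return (new - old) / old * 100

# Annual depression-treatment rate: 2.37% in 1998 -> 2.88% in 2007
print(round(relative_increase(2.37, 2.88), 1))  # 21.5, the ~21% rise
# Compare antidepressant use over roughly the same decade: 5% -> 10%
print(relative_increase(5.0, 10.0))  # 100.0, i.e. a doubling
```

The second line anticipates a point made below: antidepressant use grew far faster than depression treatment did.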

But the increase was concentrated in some groups of people:
  • Americans over 50 accounted for the bulk of the rise. Their use went up by about 50%, while rates in younger people stayed almost steady. In '98 the peak age band was 35-49, now it's 50-64, with almost 5% of those people getting treated in any given year.
  • Men's rates of treatment went up by over 40% while women's only increased by 10%. Women are still more likely to get treated for depression than men, though, with a ratio of 1.7 women for each 1 man. But that ratio is a lot closer than it used to be.
  • Black people's rates increased hugely, by 120%. Rates in black people now stand at 2.2%, close behind whites at 3.2%. Hispanics are now the least-treated major ethnic group at 1.9%; in previous studies, blacks were the least treated. (There were no data on Asians or other groups.)
So the increase wasn't an across the board rise, as we saw from '87 to '98. Rather the '98-'07 increase was more of a "catching up" by people who've historically had low levels of treatment, closing in on the level of the historically highest group: middle-aged white women.

In terms of what treatments people got, out of everyone treated for depression, 80% got some kind of drugs, and that didn't change much. But use of psychotherapy declined a bit from 54% to 43% (some people got both).

What's also interesting is that the same authors reported last year that, over pretty much the same time period ('96 to '05), the number of Americans who used antidepressants in any given year sky-rocketed from 5% to 10% - that is to say, much faster than the rate of depression treatment rose! And the data are comparable, because they came from the same national MEPS surveys.

In other words, the decade must have seen antidepressants increasingly being used to treat stuff other than depression. What stuff? Well, all kinds of things. SSRIs are popular in everything from anxiety and OCD to premature ejaculation. Several of the "other new" drugs, like mirtazapine and trazodone, are very good at putting you to sleep (rather too good, some users would say...)

Marcus SC, & Olfson M (2010). National trends in the treatment for depression from 1998 to 2007. Archives of General Psychiatry, 67(12), 1265-1273. PMID: 21135326

Wednesday, December 22, 2010

OUR ADVENTURES

A little more of our trip. We're in Dallas... still! The PC is set up in English.

Christmas in Dallas. Houses all decorated.

MIAMI BEACH

A LOVELY, BEAUTIFUL TRIP...
I'LL POST MORE SOON...