Thursday, August 5, 2010

Publication Bias: Not Dead Yet

Suppose you do two clinical trials of a drug, and only one of them shows it to work. It would be entirely misleading to only tell people about that one, and sweep the negative result under the carpet - but it happens.

That's publication bias. A simple but powerful remedy is to require everyone to publicly announce their trials before the data come in. The USA has led the way here, with the public clinicaltrials.gov database: for several years it's been a legal requirement that all clinical trials conducted in the USA be pre-registered there, and that the results be uploaded when they arrive.

A new study by Bourgeois et al used this database to assess the scale of non-publication: Outcome Reporting Among Drug Trials Registered in ClinicalTrials.gov. Of the over 500 clinicaltrials.gov trials they looked at, 66% ended up getting published eventually. (The trials all ended before 2006, so they've had 5 years to get published, and if they haven't by now, it's unlikely they ever will.) Is that a lot? Well, it's better than I'd expected, but it still leaves a third of trials unpublished.

The odds of getting published varied depending upon the type of drug, though. Trials of proton-pump inhibitors and cholesterol-lowering drugs had the best chances. Antidepressants were a bit less publishable; antipsychotics were markedly worse; and only just over half of the vasodilator trials were published.

This is interesting data, and it should remind us that publication bias, although nowadays often discussed as a problem with antidepressant trials, is by no means limited to those drugs; in fact, antidepressant trials (at least those starting after 2000 and completed by 2006) had fairly middle-of-the-road publication rates.

Publications resulting from drug company-funded trials were also more likely to be positive (85%) than were trials bankrolled by the government (50%) or non-profit organizations without Pharma "contributions" (61%). This doesn't prove that drug companies are biasing publication - maybe they just really do get more positive results - but, well, it's not exactly reassuring.

Why is non-publication still a problem, given that people are required by law to release their trial protocols and results on clinicaltrials.gov? The problem is that clinicaltrials.gov doesn't appear on PubMed, and medical science works on the rule of "PubMed or it didn't happen". Someone searching for papers about "drug X for disease Y" - which I suspect accounts for the vast majority of clinical paper downloads - will still only get told about the trials that the authors chose to publish.

Is there an answer? We could in theory force people to write their results up and submit them to a journal, and force journals to publish them, but that would be unworkable and incredibly unpopular. But why not simply publish the results from clinicaltrials.gov?

Whenever someone uploads their results to the database (as they legally must), clinicaltrials.gov could automatically use them to generate a mini-paper and publish it online. There would be a few tricky issues to sort out - you'd have to be careful that it didn't lead to the same results getting published in multiple places, for one - but so long as these reports were indexed on PubMed, it would solve the fundamental problem.
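As a rough sketch of what such auto-generated mini-papers might look like, here's a toy report generator. The field names below are invented for illustration; the real ClinicalTrials.gov results schema is far richer and more structured than this.

```python
# Hypothetical sketch: turn a registry results record into a minimal
# structured report. Field names are invented, not the real schema.

def make_mini_report(record):
    """Render a short plain-text summary from a dict of trial results."""
    lines = [
        f"Trial {record['nct_id']}: {record['title']}",
        f"Sponsor: {record['sponsor']} | Enrolled: {record['enrollment']}",
        "Primary outcomes:",
    ]
    for outcome in record["primary_outcomes"]:
        lines.append(f"  - {outcome['measure']}: {outcome['result']}")
    return "\n".join(lines)

report = make_mini_report({
    "nct_id": "NCT00000000",   # placeholder registry ID
    "title": "Drug X for Disease Y",
    "sponsor": "Example University",
    "enrollment": 120,
    "primary_outcomes": [
        {"measure": "Symptom score at 8 weeks",
         "result": "no significant difference"},
    ],
})
print(report)
```

The point is that once results are deposited in a structured database, producing an indexable report from them is mechanical; the hard problems are policy ones (deduplication, PubMed indexing), not technical ones.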

Bourgeois FT, Murthy S, & Mandl KD (2010). Outcome Reporting Among Drug Trials Registered in ClinicalTrials.gov. Annals of Internal Medicine, 153(3), 158-66. PMID: 20679560


Wednesday, August 4, 2010

YOUR MYSTERIOUS GAZE.

A lovely and beautiful message received from a great poet, Valter Poeta.
Since it has no title, I'll give it one. May my friend allow me...
http://valterpoeta.blogspot.com/

Your Mysterious Gaze.
http://2.bp.blogspot.com/_Gf0c_CFUTuU/TE8BY-MFR1I/AAAAAAAAEp0/1Y4-tEMysJQ/s400/ac5.jpg

There is something mysterious
in the silence of your gaze
that perhaps you will never reveal, for
the feminine mind
is a dangerous enigma
that men try, in vain, to unravel.
But why seek to know
that hermetic secret?

If our great goal,
the one we always pursue,
is to find happiness
and fulfill our desires,

then it makes no sense to understand this passion!
What matters is the miracle

that gives meaning to this religion.
In quenching the thirst of bodies

in its merciful act,
it gradually frees its
thirsting devotees
from an enormous displeasure and,
even without understanding them,
we are free
and happy,
spared a sad
and empty existence
without the love of a woman!

Valter Montani

VISIT MY OTHER BLOGS.
THANK YOU
FOR YOUR COMPANY!!!


Crochet Bows



I really like these bows! Jess from Happy Together made them. She posted a tutorial today. And I'm going to make me some for school. I have to wear a uniform for school. But they let us wear different kinds of hair bows. I really like these. And they look pretty easy to make too! :) C

Urban Rain Giveaway



You can win $25 from Urban Rain. Look at how cute the things from Urban Rain's shop are. I really like all of the hair pins. I wear a lot of them when I dance. I hope I can win something from the giveaway. You should enter too! :) C

Real Time fMRI

Wouldn't it be cool if you could measure brain activation with fMRI... right as it happens?

You could lie there in the scanner and watch your brain light up. Then you could watch your brain light up some more in response to seeing your brain light up, and watch it light up even more upon seeing your brain light up in response to seeing itself light up... like putting your brain between two mirrors and getting an infinite tunnel of activations.

Ok, that would probably get boring, eventually. But there'd be some useful applications too. Apart from the obvious research interest, it would allow you to attempt fMRI neurofeedback: training yourself to be able to activate or deactivate parts of your brain. Neurofeedback has a long (and controversial) history, but so far it's only been feasible using EEG because that's the only neuroimaging method that gives real-time results. EEG is unfortunately not very good at localizing activity to specific areas.

Now MIT neuroscientists Hinds et al present a new way of doing right-now fMRI:
Computing moment to moment BOLD activation for real-time neurofeedback. It's not in fact the first such method, but they argue that it's the only one that provides reliable, truly real-time signals.

Essentially the approach is closely related to standard fMRI analysis, except that instead of waiting for all of the data to come in before starting to analyze it, it incrementally estimates neural activation every time a new scan of the brain arrives, while accounting for various forms of noise. They first show that it works well on some simulated data, and then discuss the results of a real experiment in which 16 people were asked to alternately increase or decrease their own neural response to hearing the noise of the MRI scanner (scanners are very noisy). Neurofeedback was given by showing them a "thermometer" representing activity in their auditory cortex.
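To illustrate the general idea of incremental estimation (this is a minimal sketch, not the authors' actual algorithm, whose details aren't fully published), one can update a least-squares activation estimate as each new volume arrives by accumulating the sufficient statistics, rather than fitting the whole time series at the end:

```python
# Sketch: recursive least-squares fit of a toy voxel time course.
# The activation estimate is updated scan-by-scan, so an estimate
# is available "right now" rather than only after the session ends.
import numpy as np

rng = np.random.default_rng(0)
n_scans = 200
design = np.column_stack([
    np.sin(np.linspace(0, 8 * np.pi, n_scans)),  # toy task regressor
    np.ones(n_scans),                            # baseline term
])
true_beta = np.array([2.0, 10.0])
signal = design @ true_beta + rng.normal(0, 1.0, n_scans)  # simulated voxel

# Running sufficient statistics X'X and X'y, updated per scan
xtx = np.zeros((2, 2))
xty = np.zeros(2)
beta_hat = np.zeros(2)
for t in range(n_scans):
    x_t = design[t]
    xtx += np.outer(x_t, x_t)
    xty += x_t * signal[t]
    if t >= 10:  # once a few scans are in, a current estimate exists
        beta_hat = np.linalg.solve(xtx, xty)

# The final incremental estimate matches a conventional batch fit
batch_beta = np.linalg.lstsq(design, signal, rcond=None)[0]
print(np.allclose(beta_hat, batch_beta))
```

The paper's method additionally models scanner noise and drift, which this sketch omits; the point here is only that per-scan updating can reproduce the conventional after-the-fact estimate.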

The real-time estimates of activation turned out to be highly correlated with the estimates given by conventional analysis after the experiment was over - though we're not told how well people were able to use the neurofeedback to regulate their own brains.

Unfortunately, we're not given all of the technical details of the method, so you won't be able to jump into the nearest scanner and look into your brain quite yet, though they do promise that "this method will be made publicly available as part of a real-time functional imaging software package."

Hinds, O., Ghosh, S., Thompson, T., Yoo, J., Whitfield-Gabrieli, S., Triantafyllou, C., & Gabrieli, J. (2010). Computing moment to moment BOLD activation for real-time neurofeedback. NeuroImage. DOI: 10.1016/j.neuroimage.2010.07.060
