Thursday, August 26, 2010

You Read It Here First

Remember the paper from 2009 about combining two different drugs in the treatment of depression?

It was about a clinical trial in which patients were randomly assigned to get just one antidepressant, fluoxetine, or two - mirtazapine & fluoxetine, mirtazapine & venlafaxine, or mirtazapine & bupropion. The people who got two antidepressants did better.

But as I said at the time, in a comment beneath my post about it...
All the first 6 weeks shows is that mirtazapine is better than placebo. Everyone in the study got a non-mirtazapine antidepressant, so any improvement in the non-mirtazapine group (i.e. the fluoxetine alone group) could have been placebo, regression to the mean etc. The only placebo-controlled aspect was that some people got placebo mirtazapine and some people got real mirtazapine.
Now Drs El-Mallakh, Kaur and Lippmann have written in a Letter to the Editor of the American Journal of Psychiatry (where the original paper appeared) that
There was no mirtazapine plus placebo study group. This comparison arm is necessary in order to be confident that the observed effect by the three combined treatments could not have been accomplished by mirtazapine as a single drug. The observation that mirtazapine alone was equivalent to fluoxetine or paroxetine alone in a previous study does not negate the need for a control in the Blier et al. study. Without such a control, one cannot assume that two antidepressant medications are more effective than mirtazapine alone.
What I said - on 18th December 2009. The new Letter was "accepted for publication" in May 2010, and it's only just appeared.

Am I just blowing my own trumpet? No. Well, a bit. But there's a serious point as well: internet comments are a much better medium for discussing and criticizing research than Letters To The Editor ever can be.

Why? The Letter may have been a bit slower, but it's still out there, surely? Plus, it'll have been read by far more people. My post has got about 400 pageviews so far. I don't know how many people read the Letters page in the AJP, but I'd imagine it must be a good few thousand. So what's the problem?

The problem is that it's too late. Papers get cited by other papers fast (this one's got 13 citations so far), and they change minds even faster. This article's been out nearly a year, and I'm sure that in that time it will have convinced some psychiatrists to start their depressed patients on two drugs, rather than just one.

Now I'm not saying they shouldn't do that. I don't know. Anyway, I'm not a doctor. But I stand by my comment that this paper shouldn't be what changes your opinion on that question; the design of the trial means it can't tell you that. And I think that's something that readers of the paper should have been told at the time, not 9 months later.

What's the solution? I've written about this previously as well. Scientific journals should have open, blog-style comment threads attached to everything they publish, so that readers can say what they have to say, immediately. A number of major journals, e.g. the PLoS journals, some of the Nature ones, and the BMJ, already do this.

From what I've seen, the standard of comments is extremely high. Sure, some are rubbish. But the rubbish ones are almost always obviously bad, so I don't think they'll be doing much damage. The good ones, on the other hand, are often extremely insightful - whether they are criticizing, or praising, the paper.

El-Mallakh RS, Kaur G, & Lippman S (2010). Placebo group needed for interpretation of combination trial. The American Journal of Psychiatry, 167 (8). PMID: 20693473

Wednesday, August 25, 2010

The Blog Interação de Amigos Thanks Its Friends...

Exclusive blog badge - come and interact with this blog.

I want to thank everyone for the very special affection of all who came to share the delights of rural tourism with me over in my village (Susana's blog - Portugal).

Thank you for your comment - Interação de Amigos is grateful.
All the comments are posted on the Interação. I will treasure your point...

Collective Blog - Uma Interação de Amigos -
Collective posts - share. There's rural tourism - get to know a little of that place... I'll be waiting for you there.
Click and comment... each comment is worth three points.
But it has to be over at Aldeia de Minha Vida.

http://aldeiadaminhavida.blogspot.com/2010/08/momentos-especiais-em-turismo-rural.html#comment-form

Come and savour the delights of that post.
Experience the magic of the place. The southern region of Santa Catarina is very beautiful. In this rural retreat we rested a little. Absolutely wonderful.

Thank you for your company! Click here to see more images


Tuesday, August 24, 2010

Hello, How Are You? We Went to the Theater...

I believe so... I'm here to leave you a hug and to say that it's very good to have your company.
Thank you very much for your affection...

The SCAR theater.
http://www.radarsul.com.br/jaragua/imagens/scar2.jpg
Around here I've been working a lot, but I'm very happy, because I love my profession.
This week we are taking our students to the theater.
It's a project with SCAR... The School Goes to the Theater.
We took everyone, each at their own school stage. Really good.
I liked this play very much.

A Menina e o Vento (The Girl and the Wind)
Age range: 3 to 6 years - Running time: 50 minutes
April 29 and 30 - Thursday and Friday - 3pm
Synopsis: This is the story of the meeting between the girl Maria and the Wind, when the two become friends - even though almost nobody believes it - and they set off travelling the world together. Maria encourages the Wind to make a bit of a mess, because "a tidy world is very boring"! The play is about freedom, using the wind as its most striking metaphor, and it turns the poetic relationship between a girl and this element of nature into a lovely fable. It raises themes that are very much part of children's everyday lives - family, school, friendship, social habits and customs - in order to defend, above all, the child's freedom to discover the world.

There are other plays for the older students. I mentioned this one because I saw it and it was great... I like the freedom... I didn't go to the other one; other teachers went.
It's a cultural incentive. We run this project every year.

The project A Escola vai ao Teatro (The School Goes to the Theater), an initiative of SCAR - Sociedade Cultura Artística - to give students in Jaraguá do Sul and the surrounding region access to theater productions, holds the first stage of its 2010 edition from April 27 to 30. The programme features the plays Concerto em Ri Maior (Curitiba-PR), O Menino do Dedo Verde (Itajaí-SC), Aventuras e... Humor! (Jaraguá do Sul-SC) and A Menina e o Vento (SCAR Theater Group).

Thank you for your company! Click here to see more images


Help I'm Being Regressed To The Mean

"Regression to the mean" was the bane of my undergraduate statistics class. We knew that it was out there, and that the final exam would have a question about it, but no-one understood it or had ever seen it. A bit like unicorns or fairies.

The lecture notes were unhelpful. They told us what it did - make things wrongly appear to change over time when actually stuff stayed the same - but not what it was. Some people claimed to get it, but they couldn't explain it to others.

I now see that our mistake was in thinking that there's some thing called "regression to the mean". There isn't. It's just a rather unhelpful term for what happens in a certain kind of situation, and once you understand those situations, there's nothing more to learn.

Suppose there's a number that varies over time, and at least some of that variation is random. It could be anything from the number of sunspots to rates of cancer. You get interested in this number whenever it gets very high (or very low). Whenever it does, you start tracking the number for a while. Maybe you even try to change it. You notice that the number always seems to be falling (or rising). Why?

Because you only get interested in the number when it's, by chance, unusually high. The chances are, the next time you look at it, it will be lower: not for any interesting reason, or because "what goes up must come down", but just because if you take an unusually high number and then generate a new number at random, it'll probably be lower. That's why the first number was "unusually high".

Suppose that you take some people and give them an IQ test twice, a week apart. Call the first test "X" and the second test "Y". Suppose it's a crap test that gives entirely random results. Here's what might happen if you gave the test to 100 people, with each dot a person:

[Scatter plot: X vs Y, showing no correlation]

There's no correlation, because X and Y are both random junk. Nothing to see, move along. But wait a second...

[Scatter plot: X vs Y-X, showing a strong negative correlation]

Here's X, the first test score, plotted against Y-X, i.e. the change in score between the first test and the second. There's a strong negative correlation: people who did well on the first test tended to get worse, and people who did badly tended to improve. Wow? No. This is a purely statistical effect. It's meaningless: the "correlation" exists only because we're correlating X with itself (in the form of Y-X).

It's a fundamental mistake, and it's obvious when you look at it like this, yet it's a surprisingly easy one to make without noticing. Imagine that you'd invented a pill that you think can make people smarter. You decide to test it on "stupid people", because they're the ones who need it most. So you give lots of people an IQ test (X), select the worst 10%, and give them the drug. Then you re-test them afterwards (Y). Whoa! They've improved! The drug works!

There's only one stupid person involved in this experiment.

This remains true even if the IQ tests aren't entirely random. A test that measures real intelligence will also have an element of luck. By selecting the bottom 10% of scorers, you're selecting people who are both unintelligent and unlucky when they took the test - with better luck, they'd have scored above the cutoff. So the same problem applies, albeit to a lesser degree.
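The imaginary smart-pill trial can be simulated too. In this sketch (all the numbers are illustrative, not from any real test) each person has a stable "ability" plus fresh random luck on each sitting; we select the bottom 10% on the first test and re-test them with no intervention whatsoever:

```python
# The smart-pill "experiment": select the worst 10% on test X, do nothing,
# re-test as Y. Their average score rises anyway - pure regression to the
# mean, because the selected group was partly selected for bad luck.
import random

random.seed(1)
n = 10_000
ability = [random.gauss(100, 10) for _ in range(n)]  # stable trait

def take_test(a):
    return a + random.gauss(0, 10)  # trait + luck on the day

x = [take_test(a) for a in ability]                  # first test
cutoff = sorted(x)[n // 10]                          # bottom-10% threshold
selected = [i for i in range(n) if x[i] < cutoff]

y = [take_test(ability[i]) for i in selected]        # re-test, no "pill"
mean_before = sum(x[i] for i in selected) / len(selected)
mean_after = sum(y) / len(y)

print(f"bottom 10% before: {mean_before:.1f}")  # well below 100
print(f"bottom 10% after:  {mean_after:.1f}")   # closer to 100: "improved!"
```

In this setup luck contributes half the score variance, so the apparent "improvement" is roughly half of the selected group's shortfall - a sizeable effect with no intervention at all.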

That's really all there is to "regression to the mean". The regression of high or low scores towards the mean score is inevitable, given our definition of "high" and "low" scores, to the extent that scores are random. This is why I said it's unhelpful to think of it as a thing. The trick is being able to spot it when it happens, and to avoid being misled by it. If you're not careful, it can happen anywhere.

Interestingly, the reason why it's thought of in this unhelpful way is probably because the "discoverer" of regression-to-the-mean, Francis Galton, misunderstood it. He observed this "effect" in some data he'd collected about human height, and he wrongly interpreted it as a real biological fact about genetics. Eventually, people noticed the statistical mistake, but the idea of "regression to the mean" stuck, to the dismay of undergraduates everywhere.

Link: This was inspired by a post on Dorothy Bishop's blog, Three ways to improve cognitive test scores without intervention.

Monday, August 23, 2010

Fish Out Of Water, On Ketamine

Ketamine is a drug of many talents. Used medically as an anesthetic in animals and, sometimes, in humans, it's also become widely used recreationally despite, or perhaps because of, its reputation as a "horse tranquilizer".

Ketamine's also a hot topic in research at the moment for two reasons: it's considered an interesting way of provoking the symptoms of schizophrenia, and it's also shown promise as a fast-acting antidepressant.

Anyway, most ketamine research to date has been done either in humans or in rodents, but New York pharmacologists Zakhary et al. decided to see what it does to fish. So they put some ketamine in the fish's water and saw what happened: A Behavioral and Molecular Analysis of Ketamine in Zebrafish.

A high dose, 0.8%, just made the fish unconscious. Well, it is an anesthetic. But a low dose (0.2%) had rather more complex effects. It sent them literally loopy - they started swimming around and around in circles, usually in a clockwise direction. Control zebrafish swam about and explored their tanks without any circling behaviours.

They also examined the effect of ketamine on the "hypoxic stress" response, i.e. what happens when you take the fish out of water (only for 20 seconds, so it doesn't cause any real harm). Normal fish struggle and gasp for water in this situation, unsurprisingly. Ketamine strongly inhibited this.

So what? Well, it's hard to say what this might mean. It would be great if the zebrafish turned out to be a useful experimental model for investigating the effects of ketamine and similar drugs, because they're much easier to work with than rodents (for one thing, it's a lot easier to just put a drug in a fish tank than to inject it into a mouse.)

However, it remains to be seen whether swimming in circles is a useful analog of the human effects of ketamine. Ketamine can make people act in some pretty stupid ways, but walking around in little circles is extreme even by K-head standards...

Link: I've blogged about ketamine before: I'm On K, You're On K.

Zakhary SM, Ayubcha D, Ansari F, Kamran K, Karim M, Leheste JR, Horowitz JM, & Torres G (2010). A behavioral and molecular analysis of ketamine in zebrafish. Synapse. PMID: 20623473