Wednesday, September 29, 2010

The Prefrontal Cortex Is Holistic

The question of whether the brain is "modular" - whether different parts do different things - has been a neuroscientific talking point since the days of the phrenologists.

They were the guys who believed that, not only were there modules, but that you could tell how big they were by measuring the shape of someone's skull, and so learn about their personality.

Phrenology made modules unfashionable for a while, but today they're back, and much of fMRI research consists of trying to find areas of the brain that do different things. In a new paper, however, Wilson et al argue against taking modularism too far: Functional localization within the prefrontal cortex: missing the forest for the trees?

Their focus is the prefrontal cortex (PFC), a large chunk at the front of the brain, which is proportionally bigger in humans than in any other species. The PFC is routinely subdivided into segments, each with (presumably) a different function. So we have the "emotional" vmPFC, the "memory" dlPFC, the "pleasure" OFC, and so on.

Wilson et al don't dispute that there are some variations in function between different bits of the PFC, but they say that in all the excitement over localization, we may have overlooked the role of the PFC as a whole.

They discuss evidence from monkeys with PFC damage (or lesions which disconnect it from the rest of the brain). Damage to the entire PFC, they say, leaves monkeys completely unable to perform tasks which require storing concepts over time. For example, they can't learn that whenever they see, say, a red button, they ought to press it to get food. But if part of the PFC is intact, and it doesn't matter which part, monkeys can do this with only minor problems.

However, the PFC isn't required for all tasks. If the task only involves information which is all presented at once, the lesioned monkeys are OK. So they could learn, given a big panel covered in red buttons, to push the buttons to get food, because the buttons are all there simultaneously.
As they put it: "Hence the data from these tasks are congruent with the notion that [the PFC] is only crucial in memory during tasks requiring the processing of temporally complex events. This can be defined as an event to be learned about, in which information that is crucial to that learning is presented at more than one point in time, or that can only be interpreted with respect to a preceding event."
They say that evidence from human neuroimaging studies supports this view.
"A meta-analysis has shown consistent recruitment of the same network of regions in the PFC across a range of cognitive demands. The authors argue that this supports specialization of function within the PFC, but of an unexpected nature, namely 'a specific frontal-lobe network that is consistently recruited for solution of diverse cognitive problems'. The idea that large and different regions of the PFC are recruited by any task at hand supports our argument that the function of the PFC as a whole exceeds the sum of the functions of its subcomponents."
This all has echoes of Karl Lashley, an early neuroscientist (died 1958) who proposed the theory of "mass action" - that the whole cortex contributes to behaviour, rather than each part doing different things ("modularism").

Jerry Fodor, whose classic book The Modularity of Mind (1983) helped to rehabilitate modularism from its reputation as "phrenological", was also an advocate of this view - within limits.

Fodor argued that some brain systems, like vision, hearing and language, were cortical modules, but that above this, there was a non-modular system which was the basis for thought, intelligence and decision making. If I remember correctly, he didn't explicitly say that the prefrontal cortex was this system, but I'm sure he'd have no objections to Wilson et al's account.

Wilson CR, Gaffan D, Browning PG, & Baxter MG (2010). Functional localization within the prefrontal cortex: missing the forest for the trees? Trends in Neurosciences. PMID: 20864190



Big Pharma Explain How To Pick Cherries

Here at Neuroskeptic, we see a lot of bad science. Maybe, over the years (all 2 of them) that I've been writing this blog, I've become a bit jaded. Maybe I'm less distressed by it than I used to be. Cynical, even.

But this one really takes the biscuit. And then it takes the tin. And relieves itself in it: A New Population-Enrichment Strategy to Improve Efficiency of Placebo-Controlled Clinical Trials of Antidepressant Drugs.

Don't worry - it's from a big pharmaceutical company (GlaxoSmithKline), so I don't have to worry about hurting feelings.

It's full to bursting with colourful graphs and pictures, but the basic idea is very simple. As in "simpleton".

Suppose you're testing a new drug against placebo. You decide to do a multicentre trial, i.e. you enlist lots of doctors to give the drug, or placebo, to their patients. Each clinic or hospital which takes part is a "centre". Multicentre trials are popular because they're an easy way of quickly testing a drug on a large number of patients.

Anyway, suppose that the results come in, and it turns out that the drug didn't work any better than placebo, which unfortunately is what happens rather often in modern trials of antidepressants. Oh dear. The drug's crap. That's the end of that chapter.

...
or is it?!? say GSK. Maybe not. They have a clever trick. Look at the results from each centre individually. Placebo response rates will probably vary between centres: in some of them, the placebo people don't get better, in others, they get lots better.

Now, suppose that you just chucked out all of the data from centres where the people on placebo got much better, on the grounds that there must be something weird going on in those ones. They reanalyzed the data from 1,837 patients given paroxetine or placebo, across 124 centres. In the dataset as a whole, paroxetine barely outperformed placebo. However, in the centres where people on placebo only improved a little, the drug was much better than placebo!

Well, of course it was. Imagine that the drug has no effect. Some people just get better and others don't. Let's assume that each person randomly gets between 0 and 25 better, with an equal chance of any outcome. Half are on drug and half are on placebo, but it makes no difference.

Let's further assume that there are 50 centres, with 20 people per centre (1000 people total). I knocked up a "simulation" of this in Excel (it took 10 minutes). Here's what you get:

The blue dots show, for each imaginary centre, drug improvement vs. placebo improvement. There's no correlation (it's random), and, on average, there is no difference: both average out at 12 points. The drug doesn't work.

The red dots show the "Treatment Effect" i.e. [drug improvement - placebo improvement]. The average is 0 - because the drug doesn't work. But there's a strong negative correlation between Treatment Effect and the placebo improvement - in centres where people improved lots on placebo, the drug worked worse.

This is exactly what Glaxo show in Figure 1a (see above). They write:
"The analysis of the surface response indicated the predominant role of center specific placebo response as compared with the dose strength in determining the Treatment Effect of paroxetine."
But of course they correlate. You're correlating placebo improvement with itself: the "Treatment Effect" is a function of the placebo improvement. It's classic regression to the mean.

Of course if you chuck out the centres where people on placebo do well (the grey box in my picture), the drug seems to work pretty nicely. But this is cheating. It is cherry-picking. It is completely unscientific. (To give the authors their due, they also eliminated the centres where the placebo response was very low. This could, under some assumptions, make the analysis unbiased, but they don't show that this was their intention, let alone that it would eliminate all of the bias.)

The authors note that this could be a source of bias, but say that it wouldn't be one if it was planned out in advance: "in order to overcome the bias risk, the enrichment strategy should be accounted for and pre-planned in the study protocol." This is like saying that if you announce, before playing chess, that you are going to cheat, it's not cheating.

To be fair to the authors, assuming the drug does work, this method would improve your chances of correctly detecting the effect. Centres with very high placebo responses quite possibly are junk. Assuming the drug works.

But if we're assuming the drug works, why are we bothering to do a trial? The whole point of a trial is to discover something we don't know. The authors justify their approach by suggesting that it would be useful for drug companies who want to do a "proof-of-concept" trial to find out whether an experimental drug might work under the most favourable conditions, i.e. whether they should bother continuing to research it.

They say that such trials "are inherently exploratory in their conception, aimed at signal detection, open to innovation..." - in other words, that they're not meant to be as rigorous as late-stage trials.

Fair enough. But this method is not even suitable for proof-of-concept, because it would (as I have shown above in my 10 minute simulation) increase your chance of finding an "effect" from a drug that doesn't work.

Whatever the truth is, this method will give the same result, so it's not useful evidence. It's like saying "Heads I win, tails you lose". You've set it up so that I lose - the coin toss doesn't tell us anything.

All of the authors' results are based on trials in which the drug "should have worked": they do not appear to have simulated what would happen if they used this method on trials where it didn't work, as I just did. So I'm doing Pharma a big favour by writing this post, because if they adopt this approach, they're more likely to waste money on drugs that don't work.

They should be paying me for this stuff.

Merlo-Pich E, Alexander RC, Fava M, & Gomeni R (2010). A New Population-Enrichment Strategy to Improve Efficiency of Placebo-Controlled Clinical Trials of Antidepressant Drugs. Clinical Pharmacology and Therapeutics. PMID: 20861834

