
Wednesday, June 29, 2011

Eagle-Eyed Autism? No.

An interesting and refreshing paper from Simon Baron-Cohen's autism group from Cambridge. The results themselves are pretty boring - they found that people with autism have normal visual acuity.


But the story behind it is rather spicy.

Back in 2009, a Cambridge group - different authors, but also led by "SBC" - published a report claiming that people with autism have exceptionally acute vision. Their average visual acuity was claimed to be 2.8.

On this scale, 1.0 is defined as normal, and a sharp-eyed young adult with excellent eyesight would get about 1.5. 2.8 means nearly three times as good. Which is, literally, superhuman - a bird of prey would be happy with that. The paper was titled "Eagle-Eyed Visual Acuity In Autism".

However, what followed was straight out of the Book of Obadiah - "Though you soar like the eagle ... from there I will bring you down, sayeth the Lord". Or in this case, sayeth two experts in visual acuity research, Bach and Dakin, whose qualifications included the fact that they wrote the software used in the original study, which is online here.

They wrote a knock-down critique, arguing that the results came from using the wrong settings, which made the task extremely easy. In fact, even perfect performance would have corresponded to an acuity of less than 1.

You could never make a test so hard that it would require an acuity of 3.0 on a standard computer. Pixels are just too big. A single pixel is easy to spot, for someone of normal-ish vision. The only way to make it harder would be to use a special, extremely high-res monitor, or to get people to sit a long way from the screen.

So how did a result of nearly 3.0 come out? Because they also turned on data extrapolation, basically saying that if you really aced the easy task, you'd probably do quite well on a harder one. This might be sensible in some situations, but it breaks down when the task was so easy. The autistics seemed to have super vision because they got, say, 99% right, as opposed to 98%.
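To put numbers on this: decimal acuity is the reciprocal of the smallest resolvable gap in arcminutes, and on a monitor that gap can't be smaller than one pixel. Here's a rough sketch of the geometry, assuming a typical 0.25 mm pixel pitch and an illustrative short viewing distance of 40 cm (the original study's exact setup may have differed):

```python
import math

def pixel_arcmin(pixel_m, distance_m):
    """Visual angle subtended by one pixel, in arcminutes."""
    return math.degrees(math.atan2(pixel_m, distance_m)) * 60

def max_testable_acuity(pixel_m, distance_m):
    """Decimal acuity of a one-pixel optotype gap: acuity = 1 / MAR (arcmin)."""
    return 1.0 / pixel_arcmin(pixel_m, distance_m)

PIXEL = 0.00025  # 0.25 mm pixel pitch (assumed, typical desktop monitor)

print(max_testable_acuity(PIXEL, 0.4))  # ~0.47: even a perfect score can't show acuity 1
print(max_testable_acuity(PIXEL, 4.0))  # ~4.7: at 4 m, acuity 3.0 is genuinely measurable
```

At arm's length, in other words, even perfect performance corresponds to sub-normal measured acuity - which is exactly Bach and Dakin's point - while moving to 4 meters puts "superhuman" acuity within the measurable range.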

The present paper, then, is a happy ending: it's written by a combined team of Cambridge people plus Bach and Dakin, although the lead authors of the original weren't on it. This time, they used appropriate methods - they got people to sit 4 meters from the screen. To be extra sure, they also gave everyone an eye exam before testing.

And they found no difference at all. The present paper is heartening - rather than grimly sticking to their guns, they admitted their error.


This story should however serve as a cautionary tale; I previously wrote about the fact that in science, a little mistake can cause a lot of problems. This is one of those cases - although arguably there were two separate mistakes, the second, the extrapolation, was only a problem because of the main one, the big pixels.

Tavassoli T, Latham K, Bach M, Dakin SC, & Baron-Cohen S (2011). Psychophysical measures of visual acuity in autism spectrum conditions. Vision Research. PMID: 21704058

Tuesday, June 28, 2011

Machine-Readable Psychiatry

The idea of trawling the internet to discover what people think about medications is a fascinating one and I've covered some attempts to do this in the past, but it's not easy. And there's something worrying about where it could lead.

A new paper aims to trawl medical records to work out how well depressed patients responded to treatment. The authors used Natural Language Processing or NLP (not that NLP) to interpret electronic medical records from over 5,000 patients treated at hospitals in New England. Each record included notes taken at multiple visits.

A crack team of "three experienced board-certified clinical psychiatrists" reviewed the notes and provided a "Gold Standard" classification as to whether patients were Depressed, Recovered or Intermediate at each visit. The problem here is that they didn't actually see the patients, they just had the notes. If the notes were bad, the result will have been bad too. Garbage In, Garbage Out. Even if you then put a big gold medal on the garbage.

They then found that an NLP algorithm was able to learn how to duplicate the expert opinion, based on the words used in the notes. Using a machine learning approach they were able to teach the computer that if the text contained the word "depressed", it was a sign that the patient was depressed while "much better" was associated with being... guess.

In fairness, it's not a bad attempt to turn text into numbers, and in future it could allow you to do interesting things such as comparing two drugs in terms of which ones make people "much better".
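In spirit, the learning step the authors describe is a bag-of-words classifier. Here's a toy sketch with invented example notes - not the paper's actual algorithm or data - just to show how word counts alone can reproduce a "depressed vs. recovered" call:

```python
import math
from collections import Counter

# Toy training data: (note text, label). Purely illustrative.
NOTES = [
    ("patient reports feeling depressed and hopeless", "depressed"),
    ("still depressed with poor sleep and low mood", "depressed"),
    ("patient is much better and mood improved", "recovered"),
    ("doing much better and back at work sleeping well", "recovered"),
]

def train(notes):
    """Count how often each word appears under each label."""
    counts = {}
    for text, label in notes:
        counts.setdefault(label, Counter()).update(text.split())
    return counts

def classify(counts, text):
    """Score each label by (crudely smoothed) word-frequency evidence."""
    def score(label):
        c = counts[label]
        total = sum(c.values())
        return sum(math.log((c[word] + 1) / (total + 1)) for word in text.split())
    return max(counts, key=score)

model = train(NOTES)
print(classify(model, "mood much better today"))   # recovered
print(classify(model, "feeling depressed again"))  # depressed
```

The real study used far more features and a properly validated model, but the principle is the same: "depressed" predicts depressed, "much better" predicts recovered.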

I'm concerned about this though. The essence of the original, narrative notes, is that they contain individual information about that patient's story. You could go through them with a computer and calculate what happens to the average patient given a certain drug. That might be useful information. But if you did that as a replacement for reading about individual patients, you'd be missing the whole point of the narrative notes.

Worse, as this kind of thing becomes feasible it will feed back on itself and encourage clinicians to write their notes - and therefore, inevitably, to think - in machine-readable terms. The authors suggest as much:
As more health care systems move to electronic medical records, there is a unique opportunity to better quantify outcomes. For example, the 16-item patient-rated QIDS-SR [questionnaire] has been shown to be highly correlated with clinician-rated measures and sensitive to treatment effects.... At minimum, EMR systems that utilize templates could require clinicians to record a clinical status for example, using the 7-point Clinical Global Impression scale...
Indeed, many say that this is already happening. Now, quantification is generally a good thing, I think, but only so long as it's an aid to understanding, not a replacement for it.

Yet quantification often does become a replacement for understanding, because of a trap we face when dealing with a complicated set of information. The temptation is to focus on the bit that's easiest to measure, precisely because it's easy, and then assume that it represents the state of the whole thing. Often, the reason something is easy to measure is that it doesn't capture the whole phenomenon.

Perlis RH, Iosifescu DV, Castro VM, Murphy SN, Gainer VS, Minnier J, Cai T, Goryachev S, Zeng Q, Gallagher PJ, Fava M, Weilburg JB, Churchill SE, Kohane IS, & Smoller JW (2011). Using electronic medical records to enable large-scale studies in psychiatry: treatment resistant depression as a model. Psychological Medicine, 1-10. PMID: 21682950

Tuesday, June 21, 2011

Autism In The I.T. Crowd

Is autism more common in Silicon Valley?


A new study from Simon Baron-Cohen and colleagues asked pretty much this question, although rather than California, they looked at Eindhoven in Holland. Eindhoven is the tech hub of the Netherlands:
This region contains the Eindhoven University of Technology, as well as the High Tech Campus Eindhoven, where IT and technology companies such as Philips, ASML, IBM and ATOS Origin are based... 30% of jobs in Eindhoven are now in technology or ICT, in Haarlem and Utrecht this is, respectively, 16 and 17%
The authors found that official rates of diagnosed autism amongst children enrolled in Eindhoven schools were more than twice as high as those in kids from the comparison cities of Haarlem and Utrecht. In Eindhoven, rates of any autism spectrum disorder were 2.3%, far higher than rates elsewhere (0.6-0.8%).

Narrowly defined "classical autism" was also higher. However, two control disorders, dyspraxia and ADHD, were no different.

A diagnosed autism prevalence of 2.3% is extremely high. Some recent studies have found similar figures when researchers actually go out and attempt to find undiagnosed cases and diagnose them. But for 2.3% of kids to already have a diagnosis is remarkable.

Unfortunately, there's a big problem here, which is that this study has a sample size of 3. There were lots of data from each city: in total, 369 schools took part, with over 60,000 kids. But there were only three independent cities.

So while these data convincingly show that Eindhoven has higher rates of autism than the other two regions, this might just mean, say, that half of Dutch cities have local educational systems that promote diagnosis, and Eindhoven happens to be one of them.
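To get a feel for why three cities is a tiny sample, here's a quick simulation - all parameters invented for illustration - that draws three cities from the same distribution of baseline diagnosis rates and counts how often the top city happens to be at least double both of the others, with no real tech-hub effect at all:

```python
import random

def simulate_three_cities(n_sims=10000, mean=0.01, sd=0.006):
    """Fraction of simulations in which the highest of three same-distribution
    city diagnosis rates is at least twice each of the other two."""
    hits = 0
    for _ in range(n_sims):
        rates = sorted(max(0.001, random.gauss(mean, sd)) for _ in range(3))
        if rates[2] >= 2 * rates[1]:  # top city at least double the runner-up
            hits += 1
    return hits / n_sims

random.seed(1)
print(simulate_three_cities())  # a non-trivial fraction, despite zero true effect
```

Whatever the exact fraction under these made-up numbers, the point is that it isn't rare: an Eindhoven-sized gap is compatible with ordinary between-city variation unless you sample many more cities.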

To really answer the question of whether I.T. folk have more autism, you'd need to look at Silicon Valleys around the world, to increase your sample size.

I'd be surprised if there weren't a link. Autism is highly heritable, and we know that the children of people with autism, or with mild autistic traits, have a higher rate of autism themselves. I don't think it's too controversial to say that the average programmer has above-average autistic traits, and it's quite possible that a little autism is a positive advantage in IT professions.

This is certainly Baron-Cohen's hypothesis, as he's long argued that people with autism have a tendency to be strong "systematizers":
This striking difference in the prevalence of ASC is in line with the hyper-systemizing theory, and will require the phase two study using diagnostic assessments and screening methods, to determine the exact nature of regional differences in population prevalence. Future research should test if this higher prevalence in a high tech region is found in other cultures (e.g., in Silicon Valley, California)...

Roelfsema MT, Hoekstra RA, Allison C, Wheelwright S, Brayne C, Matthews FE, & Baron-Cohen S (2011). Are Autism Spectrum Conditions More Prevalent in an Information-Technology Region? A School-Based Study of Three Regions in the Netherlands. Journal of Autism and Developmental Disorders. PMID: 21681590

Friday, June 17, 2011

Bipolar Kids: You Read It Here First

Last year, I discussed the controversy over the proposed new childhood syndrome of "Temper Dysregulation Disorder with Dysphoria" (TDDD). It may be included in the upcoming revision of the psychiatric bible, DSM-V.

Back then, I said:
TDDD has been proposed in order to reduce the number of children being diagnosed with pediatric bipolar disorder... many people agree that pediatric bipolar is being over-diagnosed.

So we can all sympathize with the sentiment behind TDDD - but this is fighting fire with fire. Is the only way to stop kids getting one diagnosis, to give them another one? Should we really be creating diagnoses for more or less "strategic" purposes?
Now, a bunch of psychiatrists have written to the Journal of Clinical Psychiatry to express their concerns over the proposed diagnosis. They make the same point that I did:
We believe that the creation of a new, unsubstantiated diagnosis in order to prevent misapplication of a different diagnosis is misguided and a step backward for the progression of psychiatry as a rational scientific discipline.
They go into much more detail in critiquing the evidence held up in favor of the idea of TDDD. They also point out that it is rather optimistic to think, as some people apparently do, that diagnosing kids with TDDD, as opposed to childhood bipolar, would save them from getting nasty bipolar medications.

As they say, the risk is that drug companies would just get their drugs licensed to treat TDDD instead. Same drugs, different label. It would be fairly easy: just for starters, there are plenty of sedative drugs, such as atypical antipsychotics, which would certainly alter or mask the "symptoms" of TDDD, in the short term. Doing a clinical trial and showing that these drugs "work" would be easy. It wouldn't mean they actually worked, or that TDDD actually existed.

They also point out that the public perception of child psychiatry has already been harmed by the proposal of TDDD, and would suffer further if it were to become official.

Well, of course it would, and quite rightly so. It would be a sign that child psychiatry is so out of control that, literally, the only way it can stop diagnosing children is to diagnose them with something else!

The same issue of the same journal features another paper, claiming that "pediatric bipolar disorder" has a prevalence rate of 1.8%, and that rates of diagnosis of childhood bipolar are not higher in the USA than elsewhere, contrary to popular belief based on evidence.

Their data are a bunch of epidemiological studies of bipolar disorder, one of which included children up to the age of... 21. The majority included kids of 17 or 18.

So, er, not children at all, then.


The older the "children" in the study, the more bipolar that study found. Everyone knows that bipolar disorder typically starts in late adolescence. That's the orthodoxy and it has been since Kraepelin. It's right there at the top of the Wikipedia page. That's not pediatric bipolar, that's just normal bipolar.

All the recent controversy is about bipolar in children. As in, like, 8 year olds. Yet this paper is still titled "Meta-analysis of epidemiologic studies of pediatric bipolar disorder". The senior author on this paper also signed the paper criticizing TDDD.

This, then, is the state of the debate over the future of our children.

P.S. I've just noticed that in the latest draft of DSM-V, TDDD has been renamed. It's now called "DMDD". What's next? DUDD? DEDD? P-DIDDY ?


Axelson DA, Birmaher B, Findling RL, Fristad MA, Kowatch RA, Youngstrom EA, Arnold EL, Goldstein BI, Goldstein TR, Chang KD, Delbello MP, Ryan ND, & Diler RS (2011). Concerns regarding the inclusion of temper dysregulation disorder with dysphoria in the DSM-V. The Journal of Clinical Psychiatry. PMID: 21672494

Van Meter AR, Moreira AL, & Youngstrom EA (2011). Meta-analysis of epidemiologic studies of pediatric bipolar disorder. The Journal of Clinical Psychiatry. PMID: 21672501

Thursday, June 16, 2011

Neuroplasticity Revisited

A fascinating case report details a remarkable recovery from serious brain injury: Characterization of recovery and neuropsychological consequences of orbitofrontal lesion.

The patient "M. S." was a previously healthy 29-year-old Israeli graduate student who suffered injuries in a terrorist attack. As the MRI scans above show, she lost large parts of her orbitofrontal cortex and ventromedial prefrontal cortex, although the left side was only partially affected. She also lost her right eye.

These areas are known to be involved in emotion and decision making. Her lesions are somewhat similar to those suffered by the famous Phineas Gage, and as we'll see, her symptoms were too - but only temporarily.

One year after the injury...
M.S.’s complaints included a sense of general fatigue, loss of taste and smell, difficulty concentrating and emotional changes including irritability, lability, depression and social isolation. She reported failing to make new social contacts, having lost most of her old friends, and a diminished need for social relationships.

M.S. reported that family and friends commented on her change from a quiet and pleasant person to a rude, annoying, uninhibited, and unstoppable talkative person following the injury... M.S. had become apathetic, without a sense of time, and with no plans for the future.

On examination, M.S. was fully cooperative. She had difficulty concentrating and required frequent breaks. She appeared euphoric, laughed frequently and inappropriately, talked too much, made inappropriate remarks and jokes, yawned loudly... M.S. found it difficult to sit still and showed utilization behavior, continuously fidgeting and touching objects on the table. She had a tendency to continue performing tasks after completion was stated.
These personality and mood changes are reminiscent of those Phineas Gage suffered. Strangely, she scored 33 on the BDI, a self-report depression scale, which corresponds to "severe depression" - but from the description she doesn't sound depressed in the normal sense. These scales were not designed for people with brain lesions. Her cognitive function and memory were mostly normal, but with clear impairments on some tests.

Anyway, that was after one year, and if that were the end it would be a rather sad story - but there's a happy ending. After this she got psychotherapy and rehabilitation treatment. Seven years later she had a follow-up assessment, and she was much improved.

Her mood, attention-span and so forth were reported as normal. She struggled with her graduate studies, finding them more difficult than before the injury, and had eventually quit them, but she'd got a new job. She had recently got married.

Her performance on neuropsychological tests designed to measure prefrontal cortex damage was mostly normal, and she did much better on the ones she used to be impaired on. She still did poorly on the Iowa Gambling Task, which is very sensitive to vmPFC damage.

Overall, though, she had made a "magnificent" recovery despite losing a large chunk of her brain. I've previously been skeptical of some of the stronger claims of neuroplasticity or "brain remodelling", but some parts of the brain are more plastic than others and the prefrontal cortex seems to be one of the most flexible.

Fisher T, Shamay-Tsoory SG, Eran A, & Aharon-Peretz J (2011). Characterization of recovery and neuropsychological consequences of orbitofrontal lesion: A case study. Neurocase, 17 (3), 285-93. PMID: 21667397

Neuroplasticity Revisited

A fascinating case report details a remarkable recovery from serious brain injury: Characterization of recovery and neuropsychological consequences of orbitofrontal lesion.

The patient "M. S." was a previously healthy 29 year old Israeli graduate student who suffered injuries in a terrorist attack. As the MRI scans above show, she lost large parts of her orbitofrontal cortex and ventromedial prefrontal cortex, although the left side was only partially affected. She also lost her right eye.

These areas are known to be involved in emotion and decision making. Her lesions are somewhat similar to those suffered by the famous Phineas Gage, and as we'll see, her symptoms were too - but only temporarily.

One year after the injury...
M.S.’s complaints included a sense of general fatigue, loss of taste and smell, difficulty concentrating and emotional changes including irritability, lability, depression and social isolation. She reported failing to make new social contacts, having lost most of her old friends, and a diminished need for social relationships.

M.S. reported that family and friends commented on her change from a quiet and pleasant person to a rude, annoying, uninhibited, and unstoppable talkative person following the injury... M.S. had become apathetic, without a sense of time, and with no plans for the future.

On examination, M.S. was fully cooperative. She had difficulty concentrating and required frequent breaks. She appeared euphoric, laughed frequently and inappropriately, talked too much, made inappropriate remarks and jokes, yawned loudly... M.S. found it difficult to sit still and showed utilization behavior, continuously fidgeting and touching objects on the table. She had a tendency to continue performing tasks after completion was stated.
These personality and mood changes are reminiscent of those Phineas Gage suffered. Strangely, she scored 33 on the BDI, a self-report depression scale, which corresponds to "severe depression", yet from the description she doesn't sound depressed in the normal sense; these scales were not designed for people with brain lesions. Her cognitive function and memory were mostly normal, but with clear impairments on some tests.

Anyway, that was after 1 year, and if that were the end it would be a rather sad story, but there's a happy ending. After this she got psychotherapy and rehabilitation treatment. 7 years later she had a follow-up assessment and she was much improved.

Her mood, attention-span and so forth were reported as normal. She struggled with her graduate studies, finding them more difficult than before the injury, and had eventually quit them, but she'd got a new job. She had recently got married.

Her performance on neuropsychological tests designed to measure prefrontal cortex damage was mostly normal, and she did much better on the ones she used to be impaired on. She still did poorly on the Iowa Gambling Task, which is very sensitive to vmPFC damage.

Overall, though, she had made a "magnificent" recovery despite losing a large chunk of her brain. I've previously been skeptical of some of the stronger claims of neuroplasticity or "brain remodelling", but some parts of the brain are more plastic than others and the prefrontal cortex seems to be one of the most flexible.

Fisher T, Shamay-Tsoory SG, Eran A, & Aharon-Peretz J (2011). Characterization of recovery and neuropsychological consequences of orbitofrontal lesion: A case study. Neurocase, 17(3), 285-93. PMID: 21667397

Tuesday, June 14, 2011

Consciousness? FFS...

An interesting paper on the neurobiology of conscious awareness: Unconscious High-Level Information Processing.


The authors propose that consciousness may be associated, not with activation in any given area of the brain, but with recurrent information processing between areas, a kind of neural ping-pong.

When presented with sensory information, say the sight of an object, signals travel up through the brain from "primary" sensory areas to "higher" areas associated with more complicated processing. They call this the Fast Feedforward Sweep, or "FFS". Maybe not the best acronym.

Anyway, depending on the nature of the stimulus, this can lead to activation in almost any part of the brain. However, they say that this alone is not enough to generate consciousness; it only happens if the later areas feed back to the earlier areas and start a recurrent processing loop.

This stands in contrast to the popular view, which seems to fit with common sense, that primary areas are unconscious and that consciousness is directly associated with activity in the higher areas, in particular, the prefrontal cortex (PFC).

The authors refer to fMRI and EEG studies showing that even "high level" processes, such as selective attention to stimuli, and inhibition of an action, can be triggered by subconscious cues, and that this is associated with activation in the prefrontal cortex - unconscious activation.

The details of these studies are fairly arcane but the point is that the prefrontal cortex is generally agreed to be the most developed, "highest level" part of the brain. If anywhere in the brain was going to be the seat of the soul, it's the PFC.


This shouldn't come as a surprise, though. While it's tempting to look for a part of the brain which "does" conscious experience - the "me module" - Daniel Dennett pointed out a while ago that this temptation is motivated by a fundamental confusion.

Likewise, while it seems common sense that consciousness is the "highest mental function" and therefore must be located in the highest brain area, this is a presumption: consciousness is a mystery, and we don't know if it's a high level function or not, or whether that question even makes sense.

Nor should it come as a shock that consciousness isn't an inevitable consequence of high-level cognition: in fact, it couldn't be. As Ryle pointed out in The Concept of Mind, that would create an infinite regress, because any conscious experience has to come from somewhere.

Right now I'm conscious of choosing certain words rather than others in typing this post, in a conscious attempt to make it read better. But I'm not aware of all of the rules and experiences that guide my choices. I just feel that some words work. This feeling seems to come out of nowhere, or rather, out of the words themselves.

It isn't, of course, it's a product of calculations taking place in my brain, but I've no idea what they are. I wouldn't want to be, either: I'm too busy typing.

van Gaal S, & Lamme VA (2011). Unconscious High-Level Information Processing: Implication for Neurobiological Theories of Consciousness. The Neuroscientist. PMID: 21628675

Saturday, June 11, 2011

Pharmaceuticals And Violence

A French study reveals which medications are most often associated with violence and aggression: Prescribed drugs and violence.


The authors trawled the French records of drug side effects from 1985 to 2008. By law, doctors in France must report any adverse event which is either serious, or unexpected, to the authorities.

They found a total of 540 reports mentioning "violence", but only 56 of these were clear-cut incidents of physical aggression towards others. Suicide and self-harm were not included, unless they also involved violence to other people.

There were 76 suspect drugs in total (some reports implicated more than one). Here's the Hall of Shame:

16 reports involved benzodiazepines (Valium) or similar drugs.
13 implicated dopamine-boosting drugs used to treat Parkinson's disease.
4 were caused by serotonin-based antidepressants like Prozac. Older antidepressants were not associated.

Antipsychotics and anti-epileptics were also high on the list.

There were also reports involving the antiviral drugs interferon (3), ribavirin (2), and efavirenz (3); the stop-smoking aid varenicline (4); the anti-acne drug isotretinoin (4); and the banned weight-loss drug rimonabant (2). All of these can also cause depression, and I've blogged about some of them before for that reason.

Of the perpetrators, 86% were men. Nearly half had a prior psychiatric history, but that's not surprising because many of these drugs are prescribed to people with mental illness.

In terms of the number of reports of violence relative to the total number of adverse events for each drug, Parkinson's drugs were "worst". However, this doesn't mean much, because it might just mean that these drugs are generally mild in terms of side effects.
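The logic of comparing violence reports against a drug's total adverse events is essentially a disproportionality analysis. As a rough sketch only (the paper's actual method is a case/noncase comparison, and every count below is invented, not taken from the study), one common version is the proportional reporting ratio:

```python
# Illustration of the reporting-ratio idea behind case/noncase comparisons.
# All counts below are invented, NOT figures from the study.

def prr(drug_violence, drug_total, other_violence, other_total):
    """Proportional reporting ratio: how over-represented violence is
    among one drug's reports compared with all other drugs' reports."""
    return (drug_violence / drug_total) / (other_violence / other_total)

# Hypothetical: 13 violence reports out of 2,000 total for one drug class,
# versus 43 out of 200,000 for every other drug combined.
ratio = prr(13, 2000, 43, 200000)
print(f"PRR = {ratio:.1f}")  # values well above 1 flag disproportionate reporting
```

The catch is exactly the one noted above: a drug with few adverse events overall will have a small denominator, inflating the ratio even if violence is rare in absolute terms.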

So it's an interesting dataset, but it's impossible to come to any firm conclusions as to how common these effects really are. Cases might go unreported if they're thought to be "normal" violence; and regular violence could also get wrongly blamed on a drug - criminals get sick too.

Finally, we ought to remember that while these effects are inherently attention-grabbing (and Parkinson's drugs in particular have given rise to some tabloid-friendly stories), the overall rate was tiny: fewer than 3 cases per year, across all prescribed drugs, in a nation of over 60 million people.

Rouve N, Bagheri H, Telmon N, Pathak A, Franchitto N, Schmitt L, Rougé D, Lapeyre-Mestre M, Montastruc JL, & the French Association of Regional PharmacoVigilance Centres (2011). Prescribed drugs and violence: a case/noncase study in the French PharmacoVigilance Database. European Journal of Clinical Pharmacology. PMID: 21655992

Friday, June 10, 2011

Do Pigs Get Autism?

What happens to a pig if it has a gene for autism?

There has been lots of research on mice that carry the same genes associated with autism in humans. Rats, and recently monkeys, have been studied as well. But the possibility of autistic pigs has been strangely neglected by science.

A new paper might just change that: Characterization of porcine autism susceptibility candidate 2 as a candidate gene for the number of corpora lutea in pigs. The authors found that, in female pigs, variation in a certain gene affects the function of the ovaries.

The corpus luteum is a little yellow blob (technically speaking) in the ovary. Its job is to secrete progesterone. Women's ovaries grow a new one during every menstrual cycle, and it normally breaks down and disappears before the period. However, if you get pregnant, the corpus luteum sticks around and continues producing that hormone.

Pigs, like many animals, can have more than one of these per ovary and it turns out that one of the genes controlling the number is a homolog of the human gene AUTS2. AUTS2 mutations are linked to autism (hence the name), smoking and mental retardation. The authors of this paper found several variants in this gene in domestic pig populations, and they show that it's expressed in the pig ovary.

It's quite a long leap from porcine lady bits to autism, I would say, but this actually does make sense, if you accept the Extreme Male Brain theory of autism. Boys are at least four times more likely to have autism than girls, and some say that the masculinizing hormone testosterone may be the reason. This study fits with that, given that progesterone is a female hormone. Maybe mutations in the AUTS2 gene alter sex hormone production?

On the other hand, it might be a coincidence. AUTS2 is strongly expressed in the brain, as well as the ovaries. Maybe it's just required for cell function, and if it's mutated, cells stop working normally: whether they be in the brain, or the corpus luteum.

Either way, it would be interesting to see whether AUTS2 affects pig behaviour... but I'm not sure what an autistic pig would look like.

Sato S, Hayashi T, & Kobayashi E (2011). Characterization of porcine autism susceptibility candidate 2 as a candidate gene for the number of corpora lutea in pigs. Animal Reproduction Science. PMID: 21641132

Tuesday, June 7, 2011

Britain's Not Getting More Mentally Ill

There's a widespread belief that mental illness is getting more common, or that it has got more common in recent years.

A new study in the British Journal of Psychiatry says: no, it's not. They looked at the UK APMS mental health surveys, which were done in 1993, 2000 and 2007. Long-time readers will remember these.

The authors of the new paper analyzed the data by birth cohort, i.e. when you were born, and by age at the time of the survey. If mental illness were rising, you'd predict that people born more recently would have higher rates of mental illness at any given age.

The headline finding: there was no cohort effect, implying that rates of mental illness aren't changing. There was a strong age effect: in men, rates peak at about age 50; in women the data are messier, but in general the rate is flat up to age 50 and then falls off, as in men. But there's no evidence that those born recently are at higher risk.
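To see what "analysing by birth cohort" means in practice, here's a toy tabulation with fabricated survey records (nothing below comes from the APMS data). The test is whether later cohorts show higher prevalence at the same age band:

```python
from collections import defaultdict

# Each (birth_cohort, age_band, has_disorder) row is a fabricated survey record,
# purely to illustrate the shape of a cohort-by-age analysis.
records = [
    ("1940s", "40-49", True),  ("1940s", "40-49", False),
    ("1950s", "40-49", True),  ("1950s", "40-49", True),
    ("1950s", "50-59", False), ("1940s", "50-59", True),
]

counts = defaultdict(lambda: [0, 0])  # (cohort, age) -> [cases, total]
for cohort, age, disorder in records:
    counts[(cohort, age)][1] += 1
    if disorder:
        counts[(cohort, age)][0] += 1

# If mental illness were rising, later cohorts would show higher
# prevalence than earlier cohorts within the same age band.
for (cohort, age), (cases, total) in sorted(counts.items()):
    print(f"{cohort} cohort at age {age}: {cases}/{total} = {cases/total:.0%}")
```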

The only exception was that men born after 1950 were at somewhat higher risk than those born earlier as shown by the "break" on the graph above. The effect for women was smaller. The most recent cohort, those born after 1985, were also above the curve but there was only one datapoint there, so it's hard to interpret.

We also get a rather cute graph showing how life changes with age:

As you get older, you get less irritable and, if you're a woman, you'll worry less. But sleep problems and, in men, fatigue, increase. Overall, 50 is the worst age in terms of total symptoms. After that, it gets better. Well, that's nice to know. Or not, depending on your age.

Overall, the authors say:
Our finding of subsequently stable rates contradicts popular media stories of a relentlessly rising tide of mental illness, at least for men. Stable prevalence in the male population, together with peaking of the prevalence of common mental disorder at about age 50 years, indicates that a large increase in projected rates of poor mental health is unlikely in the male population in the near future....

Trends in women are less clearly identified, with considerable increases in the prevalence of sleep problems, but no clear increase or even some decrease in other measures. Further research is needed to relate these age and cohort differences to drivers of mental health such as employment status and family composition.
Caution's warranted, though, because the APMS data were based on self-reported symptoms of mental illness assessed by lay interviewers. As I've argued before, self-report is problematic, but this is true of almost all of these kinds of studies.

More unusual is that this study didn't attempt to assign formal diagnoses, it just looked at total symptoms on the CIS Scale; a total of 12 or more was considered to indicate "probable disorder".
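The threshold approach is simple enough to sketch: sum the symptom scores and apply the cutoff. In this toy version the symptom names and individual scores are invented, and the only thing taken from the study is the "total of 12 or more = probable disorder" rule:

```python
# Toy illustration of a symptom-total threshold: a summed score of 12
# or more counts as "probable disorder". Symptom names/scores invented.

def probable_disorder(symptom_scores, cutoff=12):
    """Return True if the summed symptom score meets the cutoff."""
    return sum(symptom_scores.values()) >= cutoff

respondent = {
    "fatigue": 3, "sleep_problems": 2, "irritability": 2,
    "worry": 3, "depressed_mood": 1, "concentration": 2,
}
print(probable_disorder(respondent))  # total is 13, so this prints True
```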

Purists would say that this is a weakness and that you ought to be making full DSM-IV diagnoses, but honestly, it's got its own problems, and I think this is no worse.

Finally, this study only looked at "common mental disorders" i.e. depression and various kinds of anxiety symptoms. Things like schizophrenia and bipolar disorder weren't included, but from what I remember they're not rising either.

Spiers N, Bebbington P, McManus S, Brugha TS, Jenkins R, & Meltzer H (2011). Age and birth cohort differences in the prevalence of common mental disorder in England: National Psychiatric Morbidity Surveys 1993-2007. The British Journal of Psychiatry, 198, 479-84. PMID: 21628710