
Wednesday, November 10, 2010

The Tree of Science

How do you know whether a scientific idea is a good one or not?


The only sure way is to study it in detail and know all the technical ins and outs. But good ideas and bad ideas behave differently over time, and this can provide clues as to which ones are solid; useful if you're a non-expert trying to evaluate a field, or a junior researcher looking for a career.

Today's ideas are the basis for tomorrow's experiments. A good idea will lead to experiments which provide interesting results, generating new ideas, which will lead to more experiments, and so on.

Before long, it will be taken for granted that it's true, because so many successful studies assumed it was. The mark of a really good idea is not that it's always being tested and found to be true; it's that it's an unstated assumption of studies which could only work if it were true. Good ideas grow onwards and upwards, in an expanding tree, with each exciting new discovery becoming the boring background of the next generation.

Astronomers don't go around testing whether light travels at a finite speed as opposed to an infinite one; rather, if it were infinite, their whole set-up would fail.

Bad ideas generate experiments too, but they don't work out. The assumptions are wrong. You try to explain why something happens, and you find that it doesn't happen at all. Or you come up with an "explanation", but next time, someone comes along and finds evidence suggesting the "true" explanation is the exact opposite.

Unfortunately, some bad ideas stick around, for political or historical reasons or just because people are lazy. What tends to happen is that these ideas are, ironically, more "productive" than good ideas: they are always giving rise to new hypotheses. It's just that these lines of research peter out eventually, meaning that new ones have to take their place.

As an example of a bad idea, take the theory that "vaccines cause autism". This hypothesis is, in itself, impossible to test: it's too vague. Which vaccines? How do they cause autism? What kind of autism? In which people? How often?

The basic idea that some vaccines, somewhere, somehow, cause some autism has been very productive. It's given rise to a great many testable ideas. But every one that has been tested has proven false.

First there was the idea that the MMR vaccine causes autism, linked to a "leaky gut" or "autistic enterocolitis". It doesn't, and it's not linked to that. Then along came the idea that actually it's mercury preservatives in vaccines that cause autism. It doesn't. No problem - maybe it's aluminium? Or maybe it's just the Hep B vaccine? And so on.

At every turn, it's back to square one after a few years, and a new idea is proposed. "We know this is true; now we just need to work out why and how...". Except that turns out to be tricky. Hmm. Maybe, if you keep ending up back at square one, you ought to find a new square to start from.

Tuesday, September 21, 2010

The Rise of the Mouse

Everyone knows that scientists experiment on rats and guinea pigs. That's why we have "lab rats" and why, if you're trying out something new, you're a "human guinea pig".

But this is all out of date. Nowadays, mice are the most popular lab animals. Here's a graph showing the number of scientific papers published each year, mentioning each kind of critter (data gathered with this script):
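The script linked in the original post isn't reproduced here. As a rough sketch of how such counts could be gathered, the snippet below queries PubMed's E-utilities `esearch` endpoint for the number of papers matching a term in a given publication year. The query terms and parsing are my assumptions, not the author's actual script:

```python
# Hypothetical sketch: count PubMed papers per year mentioning each animal.
# The search terms and use of the esearch "count" response are assumptions;
# the author's original script is not shown in the post.
import re
import urllib.parse
import urllib.request

ESEARCH = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def build_query_url(term, year):
    """Build an esearch URL restricted to a single publication year."""
    params = {
        "db": "pubmed",
        "term": f"{term} AND {year}[pdat]",  # [pdat] = publication date field
        "rettype": "count",
    }
    return ESEARCH + "?" + urllib.parse.urlencode(params)

def parse_count(xml_text):
    """Extract the <Count> field from an esearch XML response."""
    m = re.search(r"<Count>(\d+)</Count>", xml_text)
    return int(m.group(1)) if m else 0

def papers_per_year(term, years):
    """Fetch yearly paper counts for one search term (requires network)."""
    counts = {}
    for year in years:
        with urllib.request.urlopen(build_query_url(term, year)) as resp:
            counts[year] = parse_count(resp.read().decode())
    return counts
```

Looping `papers_per_year` over terms like `"mouse OR mice"`, `"rat OR rats"` and `"guinea pig"` for a range of years would yield the kind of data plotted in the graph.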

Rats were on top until about 10 years ago, when mice overtook them. Why? No-one wants to study mice if they can help it: they are horrible to work with compared to rats, and rats are more similar to humans physiologically. This is why rats were more popular for a long time. (Contrary to popular belief, guinea pigs were never used all that much, and they've become even less popular with the rise of mice.)

Non-scientists tend to think of rats as just big mice. They're not: mice are less intelligent, harder to handle (they bite... a lot), and they smell bad. The fact that they're smaller makes surgery, and even simple stuff like taking blood samples, much harder. On the plus side, you can fit more of them in any given space, making them cheaper, but that's about it.

So why did mice suddenly claim the crown? One word - knockout. Mice are the only mammal in which it's easy to perform genetic knockout, i.e. eliminating the function of a single gene. It's extremely difficult in rats, because, for reasons no-one really understands, it is harder to get rat stem cells to grow in vitro.

Knockout mice were "invented" in 1989, and the inexorable rise in the number of mouse papers began a few years later. Recently, there have been reports that knockout rats may now be easy; whether this will lead to a rat renaissance remains to be seen.

Knockouts have revolutionized biology, because they make it easy to investigate what each gene does. Just knock it out, and see what's wrong with your mouse. This is why there are mouse models of so many genetic diseases, while rat and monkey models are only available for a few disorders.

Monday, September 20, 2010

The Refrigerator Mother

Autism is biological: that's the one thing everyone agrees on. Scientific orthodoxy is that it's a neurodevelopmental condition caused by genetics in most cases, and by environmental insult, such as fetal exposure to anticonvulsants, in rare cases. Jenny McCarthy orthodoxy is that "toxins" - usually in vaccines - are to blame, not genes, and that the underlying damage might be in the gut rather than the brain: but both camps agree that it's biological.

However, it hasn't always been this way. From the 1950s to about the 1980s, there was a widespread view that autism was a purely psychological condition. Bruno Bettelheim is the name most often linked to this view. Bettelheim spent most of his career at the University of Chicago's Orthogenic School, an institution for "disturbed" children, including autistic children as well as those labelled "schizophrenic", among others.

His magnum opus was his book The Empty Fortress: Infantile Autism and the Birth of the Self, in which he outlined his theory of autism illustrated by three long case histories. His ideas are now referred to as the "refrigerator mother" theory.

For Bettelheim, autism was a reaction to severe neglect. Not of physical needs, which would be fatal, but of emotional relations. In his view, the most common underlying cause of this neglect was when the mother (and to a lesser extent, the father) did not want the child to exist. They cared for him, but they did so in a mechanical fashion, treating the baby as a mouth to feed and a nappy to change, rather than as a human being.

Hence the "refrigerator" - it provides food, but it's cold.

The result was that the child never learned to interact with the mother on anything other than a mechanical level; and for Bettelheim, as for most psychoanalysts, our relationships with our parents were the model on which all our other relationships were based.

The mechanical mother thus left the autistic child unable to relate to anyone, indeed, unable to conceive of the existence of other human beings, and thus lacking a sense of "self" as opposed to "others".


The repetitive behaviours and obsessive interests characteristic of autism were seen as an active, even heroic, coping strategy. They were the child's way of asserting what little self they had, by doing something for themselves, albeit something "pointless". But they also had symbolic meanings: "Joey's" interest in fans, propellers and other rotating objects was interpreted as a representation of the "vicious circle" of his life. And so on.

*

Bettelheim's ideas are now generally derided as dangerously wrong; his reputation suffered a hit when, after his suicide in 1990, stories emerged from former colleagues and patients painting him in a nasty light. But psychiatry's wider turn away from Freud and towards biology probably made his downfall inevitable.

Today the "refrigerator mother theory" is routinely cited as a cautionary tale of how deeply one can misunderstand autism. Ironically, Bettelheim's only reference to that term in The Empty Fortress is a quotation, from none other than Leo Kanner, the man who first described infantile autism in 1943. Kanner referred to the "emotional refrigeration" he observed in the families of autistic children, although it's not clear that he thought of it as causing the autism.

There is no doubt that Bettelheim's approach was unscientific. He repeatedly claimed that the fact that many children improved after three or four years at the Orthogenic School proved that their autism was psychological, because if it were biological it would be permanent.

Yet there is no reason to assume that children with a neurodevelopmental disorder would never change as they grew up. There was no control group, let alone a placebo group, to show that the children wouldn't have "grown out of" some symptoms anyway. (Edit: In fact, Kanner himself had written about improvement with age way back in 1943, in the first ever paper about autistic children! So there was simply no excuse for Bettelheim's flawed argument.)

Bettelheim's attribution of autism to family dynamics was post hoc: for each autistic child, he looked back into their family history (i.e. what the parents reported) and found that they "consciously or unconsciously" didn't want the child to exist.

Yet all this proves is that it is possible to interpret a parent's behaviour in that way, in retrospect, if you want to. The "or unconsciously" caveat creates endless scope for over-interpretation.

But even if we now see autism as a neurodevelopmental disorder, there is something attractive about Bettelheim's book: it seems to be a serious attempt to understand the autistic experience "from the inside", and to appreciate the autistic child as a person rather than a disease. This is something that we rarely see nowadays.

Bettelheim's problem was that he tried to understand autistic behaviour from the assumption that the autistic child was, deep down, entirely "normal". Hence his interpretation of, say, Joey's fascination with rotating objects as symbolic of his life situation (and also as reflecting the fact that his father was often flying away in propeller-driven aircraft, which he was).

Yet couldn't it be that Joey was just fascinated by spinning fans per se? There's nothing interesting about rotating objects, so they must have a hidden meaning; otherwise it makes no sense - to someone who isn't autistic, that is. But all that means is that trying to understand the autistic child is rather difficult if you don't bear in mind that they are autistic.

Wednesday, September 1, 2010

Marc Hauser's Scapegoat?

The dust is starting to settle after the Hauser-gate scandal which rocked psychology a couple of weeks back.

Harvard Professor Marc Hauser has been investigated by a faculty committee and the verdict was released on the 20th August: Hauser was "found solely responsible... for eight instances of scientific misconduct." He's taking a year's "leave", his future uncertain.

Unfortunately, there has been no official news on what exactly the misconduct was, and how much of Hauser's work is suspect. According to Harvard, only three publications were affected: a 2002 paper in Cognition, which has been retracted; a 2007 paper which has been "corrected" (see below), and another 2007 Science paper, which is still under discussion.

But what happened? Cognition editor Gerry Altmann writes that he was given access to some of the Harvard internal investigation. He concludes that Hauser simply invented some of the crucial data in the retracted 2002 paper.

Essentially, some monkeys were supposed to have been tested on two conditions, X and Y, and their responses were videotaped. The difference in the monkeys' behaviour between the two conditions was the scientifically interesting outcome.

In fact, the videos of the experiment showed them being tested only on condition X. There was no video evidence that condition Y was even tested. The "data" from condition Y, and by extension the differences, were, apparently, simply made up.

If this is true, it is, in Altmann's words, "the worst form of academic misconduct." As he says, it's not quite a smoking gun: maybe tapes of Y did exist, but they got lost somehow. However, this seems implausible. If so, Hauser would presumably have told Harvard so in his defence. Yet they found him guilty - and Hauser retracted the paper.

So it seems that either Hauser never tested the monkeys on condition Y at all, and just made up the data, or he did test them, saw that they weren't behaving the "right" way, deleted the videos... and just made up the data. Either way it's fraud.

Was this a one-off? The Cognition paper is the only one that's been retracted. But another 2007 paper was "replicated", with Hauser & a colleague recently writing:
In the original [2007] study by Hauser et al., we reported videotaped experiments on action perception with free ranging rhesus macaques living on the island of Cayo Santiago, Puerto Rico. It has been discovered that the video records and field notes collected by the researcher who performed the experiments (D. Glynn) are incomplete for two of the conditions.
Luckily, Hauser said, when he and a colleague went back to Puerto Rico and repeated the experiment, they found "the exact same pattern of results" as originally reported. Phew.

This note, however, was sent to the journal in July, several weeks before the scandal broke - back when Hauser's reputation was intact. Was this an attempt by Hauser to pin the blame on someone else - David Glynn, who worked as a research assistant in Hauser's lab for three years, and has since left academia?

As I wrote in my previous post:
Glynn was not an author on the only paper which has actually been retracted [the Cognition 2002 paper that Altmann refers to]... according to his resume, he didn't arrive in Hauser's lab until 2005.
Glynn cannot possibly have been involved in the retracted 2002 paper. And Harvard's investigation concluded that Hauser was "solely responsible", remember. So we're to believe that Hauser, guilty of misconduct, was himself an innocent victim of some entirely unrelated mischief in 2007 - but that it was all OK in the end, because when Hauser checked the data, it was fine.

Maybe that's what happened. I am not convinced.

Personally, if I were David Glynn, I would want to clear my name. He's left science, but still, a letter to a peer-reviewed journal accuses him of having produced "incomplete video records and field notes", which is not a nice thing to say about someone.

Hmm. On August 19th, the Chronicle of Higher Education ran an article about the case, based on a leaked Harvard document. They say that "A copy of the document was provided to The Chronicle by a former research assistant in the lab who has since left psychology."

Hmm. Who could blame them for leaking it? It's worth remembering that it was a research assistant in Hauser's lab who originally blew the whistle on the whole deal, according to the Chronicle.

Apparently, what originally rang alarm bells was that Hauser appeared to be reporting monkey behaviours which had never happened, according to the video evidence. So at least in that case, there were videos, and it was the inconsistency between Hauser's data and the videos that drew attention. This is what makes me suspect that maybe there were videos and field notes in every case, and the "inconvenient" ones were deleted to try to hide the smoking gun. But that's just speculation.

What's clear is that science owes the whistle-blowing research assistant, whoever it is, a huge debt.

Monday, August 30, 2010

Serotonin, Psychedelics and Depression

Note: This post is part of a Nature Blog Focus on hallucinogenic drugs in medicine and mental health, inspired by a recent Nature Reviews Neuroscience paper, The neurobiology of psychedelic drugs: implications for the treatment of mood disorders, by Franz Vollenweider & Michael Kometer. That article will be available, free (once you register), until September 23. For more information on this Blog Focus, see the "Table of Contents" here.

Neurophilosophy is covering the history of psychedelic psychiatry, while Mind Hacks provides a personal look at one particular drug, DMT. The Neurocritic discusses ketamine, an anesthetic with hallucinogenic properties, which is attracting a lot of interest at the moment as a treatment for depression.

Ketamine, however, is not a "classical" psychedelic like the drugs that gave the 60s its unique flavor and left us with psychedelic rock, acid house and colorful artwork. Classical psychedelics are the focus of this post.

The best known are LSD ("acid"), mescaline, found in peyote and a few other species of cactus, and psilocybin, from "magic" mushrooms of the Psilocybe genus. Yet there are literally hundreds of related compounds. Most of them are described in loving detail in the two heroic epics of psychopharmacology, PiHKAL and TiHKAL, written by chemist and trip veteran Alexander Shulgin and his wife Ann.

The chemistry of psychedelics is closely linked with that of depression and antidepressants. All classical psychedelics are 5HT2A receptor agonists. Most of them have other effects on the brain as well, which contribute to the unique effects of each drug, but 5HT2A agonism is what they all have in common.

5HT2A receptors are excitatory receptors expressed throughout the brain, and are especially dense in the key pyramidal cells of the cerebral cortex. They're normally activated by serotonin (5HT), which is the neurotransmitter that's most often thought of as being implicated in depression. The relationship between 5HT and mood is very complicated, and depression isn't simply a disorder of "low serotonin", but there's strong evidence that it is involved.

There's one messy detail, which is that not quite all 5HT2A agonists are hallucinogenic. Lisuride, a drug used in Parkinson's disease, is closely related to LSD, and is a strong 5HT2A agonist, but it has no psychedelic effects. It's recently been shown that LSD and lisuride have different molecular effects on cortical cells, even though they act on the same receptor - in other words, there's more to 5HT2A than simply turning it "on" and "off".

*

How could psychedelics help to treat mental illness? On the face of it, the acute effects of these drugs - hallucinations, altered thought processes and emotions - sound rather like the symptoms of mental illness themselves, and indeed psychedelics have been referred to as "psychotomimetic" - mimicking psychosis.

There are two schools of thought here: psychological and neurobiological.

The psychological approach ruled the first wave of psychedelic psychiatry, in the 50s and 60s. Psychiatry, especially in America, was dominated by Freudian theories of the unconscious. On this view, mental illness was a product of conflicts between unconscious desires and the conscious mind. The symptoms experienced by a particular patient were distressing, of course, but they also provided clues to the nature of their unconscious troubles.

It was tempting to see the action of psychedelics as a weakening of the filters which kept the unconscious, unconscious - allowing repressed material to come into awareness. The only other time this happened, according to Freud, was during dreams. That's why Freud famously called the interpretation of dreams the "royal road to the unconscious".

Psychedelics offered analysts the tantalizing prospect of confronting the unconscious face-to-face, while awake, instead of having to rely on the patient's memory of their previous dreams. To enthusiastic Freudians, this promised to revolutionize therapy, in the same way that the x-ray had done so much for surgery. The "dreamlike" nature of many aspects of the psychedelic experience seemed to confirm this.

Not all psychedelic therapists were orthodox Freudians, however. There were plenty of other theories in circulation, many of them inspired by the theorists' own drug experiences. Stanislav Grof, Timothy Leary and others saw the psychedelic state of consciousness as the key to attaining spiritual, philosophical and even mystical insights, whether one was "ill" or "healthy" - and indeed, they often said that mental "illness" was itself a potential source of spiritual growth.

Like many things, psychiatry has changed since the 60s. Psychotherapy is currently dominated by cognitive-behavioural (CBT) theory, and Freudian ideas have gone distinctly out of fashion. It remains to be seen what CBT would make of LSD, but the basic idea - that carefully controlled use of drugs could help patients to "break through" psychological barriers to treatment - seems likely to remain at the heart of their continued use.

*

The other view is that these drugs could have direct biological effects which lead to improvements in mood. Repeated use of LSD, for example, has been shown to rapidly induce down-regulation of 5HT2A receptors. Presumably, this is the brain's way of "compensating" for prolonged 5HT2A activation. This is probably why tolerance to the effects of psychedelics rapidly develops, something that's long been known (and regretted) by heavy users.

Vollenweider and Kometer note that this is interesting, because 5HT2A blockers are used as antidepressants - the drugs nefazodone and mirtazapine are the best known today, but most of the older tricyclic antidepressants are also 5HT2A antagonists. Atypical antipsychotics, which are also used in depression, are potent 5HT2A antagonists as well.

So indirectly suppressing 5HT2A might be one biological mechanism by which psychedelics improve mood. However, questions remain about how far this could explain any therapeutic effects of these drugs. Psychedelic-induced 5HT2A down-regulation is presumably temporary - and if all we need to do is to knock out 5HT2A, it would surely be easiest to just use an antagonist...

Vollenweider FX, & Kometer M (2010). The neurobiology of psychedelic drugs: implications for the treatment of mood disorders. Nature Reviews Neuroscience, 11(9), 642-651. PMID: 20717121

Thursday, July 8, 2010

The World Turned Upside Down

This map is not “upside down”. It looks that way to us; the sense that north is up is a deeply ingrained one. It's grim up north, Dixie is away down south. Yet this is pure convention. The earth is a sphere in space. It has a north and a south, but no up and down.

There’s a famous experiment involving four guys and a door. An unsuspecting test subject is lured into a conversation with a stranger, actually a psychologist. After a few moments, two people appear carrying a large door, and they walk right between the subject and the experimenter.

Behind the door, the experimenter swaps places with one of the door carriers, who may be quite different in voice and appearance. Most subjects don't notice the swap. Perception is lazy: whenever it can get away with it, it merely tells us that things are as we expect, rather than actually showing us stuff. We often do not really perceive things at all. Did the subject really see the first guy? The second? Either?

The inverted map makes us actually see the Earth's geography, rather than just showing us the expected "countries" and "continents". I was struck by how parochial Europe is – the whole place is little more than a frayed end of the vast Eurasian landmass, no more impressive than the one at the other end, Russia's Chukotski. Africa dominates the scene: it can no longer be written off as that poor place at the bottom.

One of the most common observations in psychotherapy of people with depression or anxiety is that they hold themselves to impossibly high standards, although they have a perfectly sensible evaluation of everyone else. Their own failures are catastrophic; other people's are minor setbacks. Other people's successes are well-deserved triumphs; their own are never good enough, flukes, they don't count.

The first step in challenging these unhelpful patterns of thought is to simply point out the double standard: why are you such a perfectionist about yourself, when you're not when it comes to other people? The idea is to help people think about themselves in something more like the healthy way they already think about others. Turn the map of yourself upside down - what do you actually see?

Wednesday, June 30, 2010

The Fall of Freud

The works of Sigmund Freud were enormously influential in 20th century psychiatry, but they've now been reduced to little more than a fringe belief system. Armed with the latest version of my PubMed history script, and inspired by this classic gnxp post on the death of Marxism, postmodernism, and other stupid academic fads I decided to see how this happened.

As you can see, the number of published scientific papers related to Freud-y search terms like psychoanalytic has flat-lined for the past 50 years. That represents a serious collapse of influence, given the enormous expansion in the amount of research being published over this time.

Since 1960 the number of papers on schizophrenia has risen by a factor of 10 and anxiety by a factor of 80 (sic). The peak of Freud's fame was 1968, when almost as many papers referenced psychoanalytic (721) as did schizophrenia (989), and it was more than half as popular as antidepressants (1372). Today it's just 10% of either. Proportionally speaking, psychoanalysis has gone out with a whimper, though not a bang.

The rise of Cognitive Behavioral Therapy (CBT), however, is even more dramatic. From being almost unheard of until the late 80s, it overtook psychoanalytic in 1993, and it's now more popular than antipsychotics and close on the heels of antidepressants.

What's going to happen in the future? If there is to be a struggle for influence, it looks set to be fought between CBT and biological psychiatry, if only because they're pretty much the only games left in town. Yet one of the reasons behind CBT's widespread appeal is that it hasn't, thus far, overtly challenged biology: it has adopted the methods of medicine (clinical trials etc.) and has presented itself as useful alongside medication rather than instead of it.

One of the few exceptions was Richard Bentall's book Madness Explained (2003) in which he criticized psychiatry and presented a cognitive-behavioural alternative to orthodox biological theories of schizophrenia and bipolar disorder. Bentall remains on the radical wing of the CBT community but in the coming decades this kind of thing may become more common. Only time will tell...

Thursday, May 27, 2010

Do Genes Remember?

Almost all neuroscientists believe that memories are stored in the connections between neurons: synapses. Learning, then, consists of the strengthening of some synapses, the weakening of others, and maybe even the formation of entirely new ones. But a paper from Catherine Miller and colleagues suggests that changes to DNA are also involved: Cortical DNA methylation maintains remote memory.


DNA is a series of bases, and fundamentally there are just four: C, A, T and G. However, Cs (and, in some organisms, As) can be methylated, i.e. modified by the addition of a very simple methyl chemical group. They then stay that way until they are demethylated in the reverse process. Methylating a gene generally reduces its expression.

It's a bit like writing notes in pencil on top of a printed document: it doesn't change the underlying genetic sequence, but it's a semi-permanent change and it can be inherited by dividing cells. Methylation is a classic example of an epigenetic change, and epigenetics is very hot right now.

Miller et al found that learning induces the methylation of a gene called calcineurin (CaN) in the cells of the frontal cortex of rats. These changes appeared within 1 day of the learning event, and they persisted for at least 30 days (the longest time studied - they could well last much longer). Methylation of another gene, reelin, was also increased, but only for a few hours.

When they blocked these changes by injecting a DNA methylation inhibitor into the frontal cortex, it caused amnesia - even if the drug was given 30 days after the learning had taken place. In other words, the methylation inhibitors somehow erased the memory traces. These authors have previously reported that the same kind of learning causes a short-lived increase in methylation in the hippocampus. Taken together with these data, this fits with the well-known theory that memory traces start off being stored in the hippocampus and are then somehow transferred to the cortex later.

This kind of research has a bit of a history. The idea that memories are stored in DNA has led some to theorize that memories can be inherited. It also reminds me of the work of psychologist and Unabomber-victim James McConnell, who claimed that planarian worms can learn information by eating the ground-up remains of other worms who knew something...

These data are very interesting, but they don't imply anything quite so exciting. The pattern of methylation seemed entirely random (except in the sense that it was targeted at certain genes) - so rather than encoding information per se, the DNA changes were acting as a way of reducing CaN gene expression. Most likely, the reduction in CaN was limited to certain cells, and these were the cells that formed the connections that encoded the information.

Miller, C., Gavin, C., White, J., Parrish, R., Honasoge, A., Yancey, C., Rivera, I., Rubio, M., Rumbaugh, G., & Sweatt, J. (2010). Cortical DNA methylation maintains remote memory. Nature Neuroscience, 13(6), 664-666. DOI: 10.1038/nn.2560

Tuesday, May 18, 2010

How to Be A PubMed Historian

Quite a lot of people seem to like those graphs I sometimes make showing the number of papers published about a certain topic in any given year, based on the number of PubMed hits.

But how do I do it? Surely I don't sit there manually searching PubMed for each term, for each year, right? That would mean dozens, maybe hundreds, of manual searches. Well, unfortunately, that is exactly how I've done it in the past. I really am that cool, see.


Actually it doesn't take very long once you get into the swing of it, but I've now worked out a better way. See below for a bash script which repeatedly searches PubMed for a given sequence of years, downloads the first page of the results, picks out the bit where it tells you how many hits you got, and puts it all into a single output text file ready to be pasted into Excel or whatever. This comes with no guarantees whatsoever, but it seems to work. Enjoy...

Edit 29/06/2010: Vastly improved version that searches for multiple different terms sequentially, accepts terms that include spaces, and outputs the data into a sensible format. The search term text file should be a plain text file containing one search term per line, e.g.:
serotonin depression
dopamine depression
GABA depression
This would search for each of those terms and output the data for each year into a single text file - with three data columns in this case - good for comparing the relative popularity of many different terms across time.

---
#!/bin/bash
# 29.06.2010
# PubMedHistory script by Neuroskeptic http://neuroskeptic.blogspot.com
# Counts PubMed hits for each search term, for each year in a given range.

# usage: script (search term text file) (start year) (end year) (output file)
# e.g. script list_of_terms.txt 2000 2005 dope.txt

# First, print the header line of the output file.
printf "YEAR\t" > $4
cat $1 | while read subject
do
    # Pre-format the subject: URL-encode the spaces
    ffa=${subject//' '/%20}
    echo -n "$ffa" >> $4
    printf "\t" >> $4
done
# ...and a newline
printf "\n" >> $4

# Now the real thing. The main loop is a YEAR loop:
for (( yearz=$2; yearz<=$3; yearz++ ))
do
    # For each year, build a temporary file t.txt holding this year's output row.
    # First, the year, then a tab.
    printf "$yearz\t" > t.txt

    # Now, a second loop to go through the list of searches
    cat $1 | while read subject
    do
        one=${subject//' '/%20}
        wget -O $yearz.txt "http://www.ncbi.nlm.nih.gov/sites/entrez?term=$one+$yearz%5BPublication%20Date%5D"
        # Find the line in the output with what we're interested in
        output=`grep ncbi_resultcount $yearz.txt`
        # Get rid of the bit containing the search term,
        # as this will screw up the next step if it contains spaces!
        output=${output/content*publication/LOL}
        # Print to a temp file
        echo $output > temp$one$2$3$4.txt
        # Find the field we want using awk
        output=`awk '{ print $22 }' temp$one$2$3$4.txt`
        rm temp$one$2$3$4.txt
        rm $yearz.txt
        # Trim the quotes surrounding the count
        trimmedout=${output#content=\"}
        trimmedoutB=${trimmedout%\"}
        # Replace "false" with 0, because that's what "false" means
        trimmedoutC=${trimmedoutB/false/0}
        echo "in year $yearz, I got $trimmedoutC. Saving to temp file t.txt"
        # Write the result, and a tab, to the TEMPORARY output file
        printf "$trimmedoutC\t" >> t.txt
    done
    # All search terms done for this year: append the row to the final file...
    cat t.txt >> $4
    # ...and give it a newline
    printf "\n" >> $4
done
rm t.txt
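Incidentally, the script scrapes the result count out of PubMed's HTML, which is fragile if NCBI ever redesigns the page. PubMed also has a machine-readable interface, the E-utilities, whose esearch call returns the hit count directly in a small XML response (a Count tag). Here's a minimal sketch of extracting the count that way - the `extract_count` helper is my own illustration, not part of the script above:

```shell
#!/bin/sh
# Pull the result count out of an E-utilities esearch XML response on stdin.
extract_count() {
    grep -o '<Count>[0-9]*</Count>' | head -n 1 | grep -o '[0-9]\+'
}

# Real use would fetch the XML, e.g.:
#   wget -qO- "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi?db=pubmed&term=serotonin+2005%5BPublication%20Date%5D" | extract_count
# Demonstrated here on a canned response:
sample='<eSearchResult><Count>1372</Count><RetMax>20</RetMax></eSearchResult>'
echo "$sample" | extract_count
```

The same year loop as above would then just accumulate these counts instead of grepping for ncbi_resultcount.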

How to Be A PubMed Historian

Quite a lot of people seem to like those graphs I sometimes make showing the number of papers published about a certain topic in any given year, based on the number of PubMed hits.

But how do I do it? Surely I don't sit there manually searching PubMed for each term, for each year, right? That would mean dozens, maybe hundreds, of manual searches. Well, unfortunately, that is exactly how I've done it in the past. I really am that cool, see.


Actually it doesn't take very long once you get into the swing of it, but I've now worked out a better way. See below for a bash script which repeatedly searches PubMed for a given sequence of years, downloads the first page of the results, picks out the bit where it tells you how many hits you got, and puts it all into a single output text file ready to be pasted into Excel or whatever. This comes with no guarantees whatsoever, but it seems to work. Enjoy...

Edit 29/06/2010: Vastly improved version that searches for multiple different terms sequentially, accepts terms that include spaces, and outputs the data into a sensible format
. The search term text file should be a plain text file containing one search term per line. e.g:
serotonin depression
dopamine depression
GABA depression
This would search for each of those terms and output the data for each year into a single text file (with three data columns in this case), which is good for comparing the relative popularity of many different terms across time.
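The "accepts terms that include spaces" part is just bash parameter expansion: the script URL-encodes each space as %20 before building the query. A minimal sketch of that step, using a made-up search term (note the `//` form replaces every space, where a single `/` would only touch the first one):

```shell
# URL-encode a multi-word search term for use in a PubMed query
subject="serotonin transporter depression"
one=${subject// /%20}   # // means replace ALL spaces, not just the first
echo "$one"             # serotonin%20transporter%20depression
```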

---
#!/bin/bash
# 29/06/2010
#PubMedHistory script by Neuroskeptic http://neuroskeptic.blogspot.com
# script to find out how many PubMed hits for a certain string in a given year range.

# usage: script (search term text file) (start year) (end year) (output file)
# e.g script list_of_terms.txt 2000 2005 dope.txt
#first, print the HEADER line of the output file.

printf "YEAR\t" > $4
cat $1 | while read subject
do
#pre-format the subject, replacing every space with %20
ffa=${subject// /%20}
echo -n "$ffa" >> $4
printf "\t" >> $4
done
#and a newline
printf "\n" >> $4

#Now the real thing. The main loop is a YEAR loop:

for (( yearz=$2; yearz<=$3; yearz++ )); do
#For each year, create a temporary file t.txt containing the output for this year.
#First, the year, then a tab.

printf "$yearz\t" > t.txt

#now, a second loop to go through the list of searches
cat $1 | while read subject
do
one=${subject// /%20}
wget -O $yearz.txt http://www.ncbi.nlm.nih.gov/sites/entrez?term="$one"+"$yearz"'[Publication Date]'
#find the line in the output with what we're interested in
output=`grep ncbi_resultcount $yearz.txt`
#now, change it to get rid of the bit containing the search term
#as this will screw up the next step if it contains spaces!
output=${output/content*publication/LOL}
#print to a temp file
echo $output > temp$one$2$3$4.txt
#find the bit we want using awk
output=`awk '{ print $22 }' temp$one$2$3$4.txt`
rm temp$one$2$3$4.txt
rm $yearz.txt
#trim output
trimmedout=${output#content\=\"}
trimmedoutB=${trimmedout%\"}
#replace "false" with 0 because that's what "false" means
trimmedoutC=${trimmedoutB/'false'/0}
echo "In year $yearz, I got $trimmedoutC. Saving to temp file t.txt"
#write the result, and a tab, to the TEMPORARY output file
printf "$trimmedoutC\t" >> t.txt
done
#Now we've done all the search terms for this YEAR, so send the temporary data to the final file
cat t.txt >> $4
#and give it a newline
printf "\n" >> $4
done
rm t.txt
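For anyone puzzling over the "trim output" steps, they are three bash parameter expansions in a row. Here's a standalone sketch using a made-up token in place of the one awk pulls out of the real results page (a year with zero hits comes back as "false", hence the last substitution):

```shell
# stand-in for the field awk extracts from the downloaded results page
output='content="1234"'
trimmedout=${output#content=\"}     # strip the leading content=" prefix
trimmedoutB=${trimmedout%\"}        # strip the trailing quote
trimmedoutC=${trimmedoutB/false/0}  # "false" means zero hits
echo "$trimmedoutC"                 # 1234
```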