Friday, August 20, 2010

WARM WISHES TO EVERYONE!!!

THANK YOU, MARCIA..

I MISS ALL OF YOU. MY FREE TIME HAS BECOME SO TINY. NOW, BESIDES PHYSIOTHERAPY, I'M GOING TO TAKE AN ENGLISH COURSE. ALL IN THE EVENING, AFTER WORK.
AND THEN... THINGS GET HECTIC... REALLY NO TIME AT ALL... BUT EVEN SO, I WILL KEEP ON LOVING YOU!!!!

AS FOR THE PHOTOS FROM THE BIENAL, I WILL POST THEM OVER THE WEEKEND. I'M PREPARING THE POST. I THANK EVERYONE FOR THEIR AFFECTION. I MISS YOU ALL!!!!
I RECEIVED THIS LOVELY LITTLE AWARD FROM ANNINHA OF PALAVRAS SOLTAS..
I ASK PERMISSION TO PASS IT ON TO YOU, MY DEAR FRIEND..





THANK YOU FOR YOUR COMPANY!!!


Schizophrenia, Genes and Environment

Schizophrenia is generally thought of as the "most genetic" of all psychiatric disorders, and over the past 10 years there have been heroic efforts to find the genes responsible for it, with not much success so far. A new study reminds us that there's more to it than genes alone: Social Risk or Genetic Liability for Psychosis? The authors decided to look at adopted children, because this is one of the best ways of disentangling genes and environment.

If you find that the children of people with schizophrenia are at an increased risk of schizophrenia (they are), that doesn't tell you whether the risk is due to genetics, or environment, because we share both with our parents. Only in adoption is the link between genes and environment broken.

Wicks et al looked at all of the kids born in Sweden and then adopted by another Swedish family, over several decades (births 1955-1984). To make sure genes and environment were independent, they excluded those who were adopted by their own relatives (e.g. grandparents), and those who lived with their biological parents between the ages of 1 and 15. This is the kind of study you can only do in Scandinavia, because only those countries have accessible national records of adoptions and mental illness...

What happened? Here's a little graph I whipped up:

Brighter colors are adoptees at "genetic risk", defined as those with at least one biological parent who was hospitalized for a psychotic illness (including schizophrenia but also bipolar disorder.) The outcome measure was being hospitalized for a non-affective psychosis, meaning schizophrenia or similar conditions but not bipolar.

As you can see, rates are much higher in those with a genetic risk, but were also higher in those adopted into a less favorable environment. Parental unemployment was worst, followed by single parenthood, which was also quite bad. Living in an apartment as opposed to a house, however, had only a tiny effect.

Genetic and environmental risk also interacted. If a biological parent was mentally ill and your adopted parents were unemployed, that was really bad news.

But hang on. Adoption studies have been criticized because children don't get adopted at random (there's a story behind every adoption, and it's rarely a happy one), and also adopting families are not picked at random - you're only allowed to adopt if you can convince the authorities that you're going to be good parents.

So they also looked at the non-adopted population, i.e. everyone else in Sweden, over the same time period. The results were surprisingly similar. The hazard ratio (increased risk) in those with parental mental illness, but no adverse circumstances, was 4.5, very close to the adoption study's 4.7.

For environment, the ratio was 1.5 for unemployment, and slightly lower for the other two. This is a bit less than in the adoption study (2.0 for unemployment). And the two risks interacted, but much less than they did in the adoption sample.
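To make the interaction concrete, here is a toy calculation using the hazard ratios quoted above for the non-adopted cohort (the "observed combined" value is invented purely for illustration; the study reports its own interaction results):

```python
# Hazard ratios quoted above for the non-adopted cohort.
hr_genetic = 4.5       # parental psychosis, no adverse circumstances
hr_unemployment = 1.5  # parental unemployment, no family history

# Under a purely multiplicative (no-interaction) model, the combined
# hazard ratio would just be the product of the two:
expected_combined = hr_genetic * hr_unemployment
print(expected_combined)  # 6.75

# "The two risks interacted" means the observed combined hazard ratio
# exceeded that product. The value below is hypothetical:
observed_combined = 9.0
interaction_ratio = observed_combined / expected_combined
print(interaction_ratio > 1)  # True: worse than the two risks multiplied
```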

However, one big difference was that the total lifetime rate of illness was 1.8% in the adoptees and just 0.8% in the nonadoptees, despite much higher rates of unemployment etc. in the latter. Unfortunately, the authors don't discuss this odd result. It could be that adopted children have a higher risk of psychosis for whatever reason. But it could also be an artefact: rates of adoption massively declined between 1955 and 1984, so most of the adoptees were born earlier, i.e. they're older on average. That gives them more time in which to become ill.

A few more random thoughts:
  • This was Sweden. Sweden is very rich and compared to most other rich countries also very egalitarian with extremely high taxes and welfare spending. In other words, no-one in Sweden is really poor. So the effects of environment might be bigger in other countries.
  • On the other hand this study may overestimate the risk due to environment, because it looked at hospitalizations, not illness per se. Supposing that poorer people are more likely to get hospitalized, this could mean that the true effect of environment on illness is lower than it appears.
  • The outcome measure was hospitalization for "non-affective psychosis". Only 40% of this was diagnosed as "schizophrenia". The rest will have been some kind of similar illness which didn't meet the full criteria for schizophrenia (which are quite narrow, in particular, they require >6 months of symptoms).
  • Parental bipolar disorder was counted as a family history. This does make sense because we know that bipolar disorder and schizophrenia often occur in the same families (and indeed they can be hard to tell apart, many people are diagnosed with both at different times.)
Overall, though, this is a solid study and confirms that genes and environment are both relevant to psychosis. Unfortunately, almost all of the research money at the moment goes on genes, with studying environmental factors being unfashionable.

Wicks S, Hjern A, & Dalman C (2010). Social Risk or Genetic Liability for Psychosis? A Study of Children Born in Sweden and Reared by Adoptive Parents. The American Journal of Psychiatry. PMID: 20686186


Thursday, August 19, 2010

I MISS YOU SO MUCH!!!

MISSING YOU!!!!!


I HAVE NO TIME TO COME BY, BUT AS SOON AS I CAN I'LL BE HERE, MISSING YOU TERRIBLY. BESIDES MY 40-HOUR-A-WEEK JOB, I HAVE PHYSIOTHERAPY IN THE EVENING,
ON TOP OF MY HOUSEHOLD CHORES.

THANK YOU FOR COMING.... I LOVE YOU!!!
THANK YOU FOR YOUR VISIT AND YOUR AFFECTION.


THANK YOU FOR YOUR AFFECTION AND ATTENTION.

THANK YOU FOR YOUR COMPANY!!!


I THANK YOU AND RETURN YOUR VISIT HERE. THANK YOU VERY MUCH!

fMRI Analysis in 1000 Words

Following on from fMRI in 1000 words, which seemed to go down well, here's the next step: how to analyze the data.

There are many software packages available for fMRI analysis, such as FSL, SPM, AFNI, and BrainVoyager. The following principles, however, apply to most. The first step is pre-processing, which involves:
  • Motion Correction aka Realignment – during the course of the experiment subjects often move their heads slightly; during realignment, all of the volumes are automatically adjusted to eliminate motion.
  • Smoothing – all MRI signals contain some degree of random noise. During smoothing, the image of the whole brain is blurred, which tends to average out random fluctuations. The degree of smoothing is given by the “Full Width at Half Maximum” (FWHM) of the smoothing kernel. Between 5 and 8 mm is most common.
  • Spatial Normalization aka Warping – Everyone’s brain has a unique shape and size. In order to compare activations between two or more people, you need to eliminate these differences. Each subject’s brain is warped so that it fits with a standard template (the Montreal Neurological Institute or MNI template is most popular.)
Other techniques are also sometimes used, depending on the user’s preference and the software package.
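As a rough sketch of what smoothing does under the hood (the voxel size, image dimensions, and FWHM here are illustrative; real packages handle this for you):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

# Smoothing is a 3D Gaussian blur. Software asks for the FWHM in mm,
# but the Gaussian kernel itself is parameterized by sigma:
#   sigma = FWHM / (2 * sqrt(2 * ln 2)) ~= FWHM / 2.355
fwhm_mm = 6.0        # a typical choice, between 5 and 8 mm
voxel_size_mm = 3.0  # illustrative isotropic voxel size
sigma_voxels = fwhm_mm / (2 * np.sqrt(2 * np.log(2))) / voxel_size_mm

rng = np.random.default_rng(0)
volume = rng.normal(size=(64, 64, 30))  # one fake brain volume
smoothed = gaussian_filter(volume, sigma=sigma_voxels)

# Blurring averages neighbouring voxels, so purely random
# fluctuations shrink:
print(volume.std(), smoothed.std())
```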

Then the real fun begins: the stats. By far the most common statistical approach for detecting task-related neural activation is that based upon the General Linear Model (GLM), though there are alternatives.

We first need to define a model of what responses we’re looking for, which makes predictions as to what the neural signal should look like. The simplest model would be that the brain is more active at certain times, say, when a picture is on the screen. So our model would be simply a record of when the stimulus was on the screen. This is called a "boxcar" function (guess why):
In fact, we know that the neural response has a certain time lag. So we can improve our model by convolving it with the canonical (meaning “standard”) haemodynamic response function (HRF).
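A minimal sketch of building such a model, assuming an SPM-style double-gamma approximation to the canonical HRF (timings, resolution, and parameters are all illustrative):

```python
import numpy as np
from scipy.stats import gamma

# Canonical double-gamma HRF, an SPM-style approximation: a positive
# peak around 5-6 s and a small undershoot around 15-16 s.
t = np.arange(0, 30)                          # 30 s, 1 s resolution
hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6
hrf /= hrf.sum()                              # normalize to unit sum

# Boxcar: stimulus on from time-point 40 to 80, off elsewhere.
n_timepoints = 120
boxcar = np.zeros(n_timepoints)
boxcar[40:80] = 1.0

# The model's predicted BOLD signal is the boxcar convolved with the
# HRF: the response ramps up after stimulus onset instead of jumping
# instantly.
predicted = np.convolve(boxcar, hrf)[:n_timepoints]
```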
Now consider a single voxel. The MRI signal in this voxel (the brightness) varies over time. If there were no particular neural activation in this area, we’d expect the variation to be purely noise. Now suppose that this voxel was responding to a stimulus present from time-point 40 to 80.
While the signal is on average higher during this period of activation, there’s still a lot of noise, so the data doesn’t fit with the model exactly.
The GLM is a way of asking, for each voxel, how closely it fits a particular model. It estimates a parameter, β, representing the “goodness-of-fit” of the model at that voxel, relative to noise. Higher β, better fit. Note that a model could be more complex than the one above. For example, we could have two kinds of pictures, Faces and Houses, presented on the screen at different times:
In this case, we are estimating two β scores for each voxel, β-faces and β-houses. Each stimulus type is called an explanatory variable (EV). But how do we decide which β scores are high enough to qualify as “activations”? Just by chance, some voxels which contain pure noise will have quite high β scores (even a stopped clock’s right twice per day!)
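The estimation itself is ordinary least squares. A toy single-voxel sketch, with made-up EVs and a simulated time course (real EVs would be convolved with the HRF; plain boxcars keep this short):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200

# Two explanatory variables (EVs): when Faces and Houses were shown.
ev_faces = np.zeros(n)
ev_faces[20:60] = 1.0
ev_houses = np.zeros(n)
ev_houses[100:140] = 1.0

# Design matrix: one column per EV plus a constant (the mean signal).
X = np.column_stack([ev_faces, ev_houses, np.ones(n)])

# Fake time course for a voxel that responds to Faces (true beta = 2)
# but not Houses (true beta = 0), plus Gaussian noise.
y = 2.0 * ev_faces + 0.0 * ev_houses + rng.normal(scale=0.5, size=n)

# The GLM estimate is ordinary least squares:
betas, *_ = np.linalg.lstsq(X, y, rcond=None)
print(betas)  # roughly [2, 0, 0]
```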

The answer is to calculate the t score, which for each voxel is β divided by the standard error of β (the uncertainty of the estimate, given the residual noise). The higher the t score, the more unlikely it is that the model would fit that well by chance alone. It’s conventional to finally convert the t score into the closely-related z score.
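The t and z computation can be sketched for one simulated voxel (all numbers are made up; real packages do this for every voxel at once):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
n = 200
ev = np.zeros(n)
ev[40:80] = 1.0
X = np.column_stack([ev, np.ones(n)])
y = 1.0 * ev + rng.normal(scale=0.8, size=n)  # a true effect is present

betas, *_ = np.linalg.lstsq(X, y, rcond=None)

# The t score divides the beta estimate by its standard error, which
# comes from the residual noise and the design matrix.
residuals = y - X @ betas
dof = n - X.shape[1]
sigma2 = residuals @ residuals / dof
se_beta = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[0, 0])
t_score = betas[0] / se_beta

# Convert t to the closely-related z score via the tail probability:
p = stats.t.sf(t_score, dof)
z_score = stats.norm.isf(p)
```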

We therefore end up with a map of the brain in terms of z. z is a statistical parameter, so fMRI analysis is a form of statistical parametric mapping (even if you don’t use the "SPM" software!) Higher z scores mean more likely activation.

Note also that we are often interested in the difference or contrast between two EVs. For example, we might be interested in areas that respond to Faces more than Houses. In this case, rather than comparing β scores to zero, we compare them to each other – but we still end up with a z score. In fact, even an analysis with just one EV is still a contrast: it’s a contrast between the EV, and an “implicit baseline”, which is that nothing happens.
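In code, a contrast is just a weight vector applied to the betas (the beta values below are made up for one voxel):

```python
import numpy as np

# Betas for one voxel, ordered [faces, houses, constant].
betas = np.array([2.0, 0.5, 100.0])

faces_vs_houses = np.array([1, -1, 0])   # "Faces > Houses"
faces_vs_baseline = np.array([1, 0, 0])  # Faces vs implicit baseline

print(faces_vs_houses @ betas)    # 1.5
print(faces_vs_baseline @ betas)  # 2.0
```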

Now we still need to decide how high of a z score we consider “high enough”, in other words we need to set a threshold. We could use conventional criteria for significance: p less than 0.05. But there are 10,000 voxels in a typical fMRI scan, so that would leave us with 500 false positives.

We could go for a p value 10,000 times smaller, but that would be too conservative. Luckily, real brain activations tend to happen in clusters of connected voxels, especially when you’ve smoothed the data, and clusters are unlikely to occur due to chance. So the solution is to threshold clusters, not voxels.

A typical threshold would be “z greater than 2.3, p less than 0.05”, meaning that you're searching for clusters of voxels, all of which have a z score of at least 2.3, where there's only a 5% chance of finding a cluster that size by chance (based on random field theory.) This is called a cluster corrected analysis. Not everyone uses cluster correction, but they should. This is what happens if you don't.
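The cluster-thresholding logic can be sketched on a simulated z map (here the critical cluster size is simply hard-coded; in a real analysis it is derived from random field theory):

```python
import numpy as np
from scipy import ndimage

rng = np.random.default_rng(3)
z_map = rng.normal(size=(40, 40, 20))  # fake z map: pure noise...
z_map[10:14, 10:14, 5:8] += 4.0        # ...plus one "real" activation

# Step 1: voxel-level cut-off, e.g. z > 2.3.
above = z_map > 2.3

# Step 2: group the surviving voxels into connected clusters.
labels, n_clusters = ndimage.label(above)
sizes = ndimage.sum(above, labels, index=np.arange(1, n_clusters + 1))

# Step 3: keep only clusters above some critical size. Scattered
# false positives form tiny clusters; real activations form big ones.
min_cluster_size = 20
surviving = np.flatnonzero(sizes >= min_cluster_size) + 1
print(f"{n_clusters} raw clusters, {len(surviving)} survive correction")
```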

Thus, after all that, we hopefully get some nice colorful blobs for each subject, each blob representing a cluster and colour representing voxel z scores:

This is called a first-level, or single-subject, analysis. Comparing the activations across multiple subjects is called the second-level or group-level analysis, and it relies on similar principles to find clusters which significantly activate across most people.

This discussion has focused on the most common method of model-based detection of activations. There are other "data driven" or "model free" approaches, such as this. There are also ways of analyzing fMRI data to find connections and patterns rather than just activations. But that's another story...


Tuesday, August 17, 2010

What The Internet Thinks About Antidepressants

Toronto team Rizo et al offer a novel approach to psychopharmacology: trawling the internet for people's opinions. It's a rapid, web-based method for obtaining patient views on effects and side-effects of antidepressants.

They designed a script to Google the names of several antidepressants in contexts suggesting that someone is taking them, and to check whether any side-effects are described.
A large number of URLs were rapidly screened through Google Search™, using one server situated in Ohio, USA. The search strategy used language strings to denote active antidepressant drug usage, such as “I'm on [name of antidepressant]…” or “I have been on [antidepressant] for…”, or “I've started [antidepressant]…”, or “the [antidepressant] is giving me or causing me…”
They then used a thing called OpenCalais™ to read the search hits and decide whether they were mentioning particular diseases or symptoms. OpenCalais is a natural language processor which is meant to be able to automatically extract the meaning from text. However, to make sure it wasn't doing anything silly (natural language processing is quite tricky), they manually checked the results.
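The flavour of that search-and-extract step can be sketched with plain regular expressions (the snippets and the side-effect keyword list below are invented; the actual study used Google and OpenCalais, not this code):

```python
import re

drug = "duloxetine"
# Phrases mirroring the usage strings quoted above.
usage_patterns = [
    rf"i'?m on {drug}",
    rf"i have been on {drug}",
    rf"i'?ve started {drug}",
    rf"the {drug} is (giving|causing) me",
]
side_effects = ["drowsiness", "sleepiness", "tiredness", "insomnia"]

snippets = [
    "I'm on duloxetine and the drowsiness is awful",
    "I've started duloxetine, feeling fine so far",
    "duloxetine pharmacology review",  # not a patient report
]

results = []
for text in snippets:
    lower = text.lower()
    # Does the page sound like someone actually taking the drug?
    is_user = any(re.search(p, lower) for p in usage_patterns)
    # If so, which side-effect words does it mention?
    effects = [s for s in side_effects if s in lower] if is_user else []
    results.append((is_user, effects))

print(results)
```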

What happened? They found about 5,000 hits in total from people taking antidepressants, ranging from 210 for mirtazapine (Remeron) up to 835 for duloxetine (Cymbalta). That doesn't seem like all that many considering they searched on the entire internet, although they only searched English language websites.

Anyway, drowsiness, sleepiness or tiredness was mentioned in from 6.4% (duloxetine) down to 2.9% (fluoxetine) of the hits. Insomnia was noted in from 4% (desvenlafaxine) down to 2.2% (fluoxetine). And so on.

These results are a lot lower than anything previously reported from clinical trials, where the prevalence of drowsiness, for example, is often around 25% (vs. 10% on placebo); with some drugs, it's higher. So there's a big discrepancy, and it's hard to interpret these results. Maybe lots of people are having side effects and just not bothering to write about them. Or they're too embarrassed. Etc.

Still, it's a very clever idea; it would probably be better used for trying to discover which drugs work best. Neuroskeptic readers will know that clinical trials of antidepressants are flawed in several ways. I'd say they're actually better at telling us about side effects (which are probably roughly the same in clinical trials and in real life) than they are at telling us about efficacy (where this assumption doesn't hold)...

Links: There are many websites where people describe their experiences of medical treatments ranging from the fancy to the crude (but much more informative)...

Rizo C, Deshpande A, Ing A, & Seeman N (2010). A rapid, Web-based method for obtaining patient views on effects and side-effects of antidepressants. Journal of Affective Disorders. PMID: 20705344