Sunday, January 24, 2010

A "Severe" Warning for Psychiatry

Imagine there was a nasty disease that affected 1 in 100 people. And imagine that someone invented a drug which treated it reasonably well. Good work, surely.

Now imagine that, for some reason, people decided that 10% of the population need to be taking this drug, instead of 1%. So sales of the drug sky-rocket. Eventually some clever person comes along and asks "This is one of the biggest selling drugs in the world - but does it work?" They look into it, and find that it doesn't work very well at all. For about 9 out of 10 people, it's completely useless! What a crap drug.

Of course the drug hasn't changed, and what's crap was the decision to prescribe it to so many people.
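The arithmetic behind this thought experiment is simple, and can be sketched in a few lines (the numbers are invented, exactly as in the scenario above):

```python
# Thought-experiment arithmetic: if 1% of people have the disease but
# 10% of people take the drug, only a tenth of those taking it can benefit.
# All figures are the made-up ones from the scenario, not real prevalence data.
population = 1_000_000
have_disease = 0.01 * population   # 10,000 people the drug actually treats
taking_drug = 0.10 * population    # 100,000 people prescribed it

fraction_helped = have_disease / taking_drug
print(fraction_helped)  # 0.1 -> useless for about 9 out of 10 takers
```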

*

Back to reality. According to accepted DSM-IV diagnostic criteria, close to 50% of people suffer from a mental illness at some point in their lives, with depression accounting for a large share of those cases. According to the best estimates, 10% of Americans took antidepressants last year.

Guess what? Clever people have started asking "Antidepressants are amongst the biggest selling drugs in the world - but do they work?" And their answer is - not very well. The latest such claim came from Fournier et al and appeared in JAMA a couple of weeks ago: Antidepressant Drug Effects and Depression Severity.

These researchers re-analysed the data from six clinical trials testing antidepressants against placebo pills. The drugs were the tricyclic imipramine and the newer SSRI paroxetine. The total sample size was a respectable 718, and most trials lasted 8 weeks, which is longer than average for this kind of study. Here's what they found -

Grey circles are people on antidepressants, white circles people on placebo. What this shows is that the more severe the patient's depression, the more they improve - whether they're given drugs or placebos. However, because improvement on antidepressants rises more steeply with severity, the benefit of antidepressants over placebos also correlates with severity. The thin blue line marks the minimum severity at which the average effect of the drugs over placebo was "clinically significant" according to NICE criteria (although these criteria are arbitrary).
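The steeper-slope point can be made concrete with a toy linear model. The slopes and intercepts below are invented purely for illustration - they are not fitted to the Fournier et al data:

```python
# Toy illustration: improvement rises with baseline severity on both arms,
# but more steeply on drug, so the drug-over-placebo gap widens with severity.
# Slopes and intercepts are invented, not taken from the paper.

def improvement(baseline_hamd, slope, intercept):
    """Predicted HAMD improvement as a linear function of baseline severity."""
    return intercept + slope * baseline_hamd

def drug_benefit(baseline_hamd, drug_slope=0.60, placebo_slope=0.40,
                 drug_intercept=1.0, placebo_intercept=1.0):
    """Drug-over-placebo benefit at a given baseline severity."""
    return (improvement(baseline_hamd, drug_slope, drug_intercept)
            - improvement(baseline_hamd, placebo_slope, placebo_intercept))

# The gap is small in mild depression and large in severe depression:
print(drug_benefit(10))  # mild:   (0.60 - 0.40) * 10 = 2.0 points
print(drug_benefit(35))  # severe: (0.60 - 0.40) * 35 = 7.0 points
```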

*

So, this study says that antidepressants work better in more severe depression. This is not a new claim - Kirsch et al (2008) famously found the same thing, and long before that so did Khan et al (2002). However, this new analysis has some advantages over previous ones. First, Fournier et al looked at what happened to each patient individually, whereas the previous studies only compared across trials: in trials where the patients were more severely depressed on average, antidepressants worked better.

Second, the patients in this analysis spanned a wide range of severity scores, from 10 points on the Hamilton Scale to nearly 40. In Kirsch et al almost all the trials had average severities in the narrow range of 22 to 29. Finally, none of the trials in the new paper used a placebo run-in period. These are meant to exclude people from the trial if they improve "too well" during an initial week or so of placebo pills. In theory, they bias trials against finding large placebo effects; it's not clear they actually work, but either way, it's good to know it wasn't a factor.

*

Overall, the evidence all seems to point to the idea that people with more serious clinical depression respond better to antidepressants vs. placebos in clinical trials. The exact details are debatable: there's the issue of whether antidepressant clinical trials are realistic, and the question of how clinically effective antidepressants are is also controversial. But I'm not aware of any studies which have contradicted this central claim.

But when you start to think about it, this is a very odd result. Fournier et al say that
The general pattern of results reported in this work is not surprising. As early as the 1950s, researchers conducting controlled investigations of treatments for a wide variety of medical and psychiatric conditions described a phenomenon whereby patients with higher levels of severity showed greater differential (i.e., specific) benefit from the active treatments.
and refer to a couple of papers from the 1960s. But I must admit that I do find this very surprising. We don't wait until someone's nearly dead from a bacterial infection before we give them antibiotics; we give them early, when the disease is still mild. Doctors unfortunately don't tell people "Good news! You've got advanced-stage cancer - just the kind where drugs work best." Why is depression so different?

Look a little closer, and a possible answer emerges. Severity, in all of these studies, was measured using the Hamilton Rating Scale for Depression (HAMD). The HAMD has 17 items, and each asks whether you're suffering from certain symptoms; the more symptoms you have, and the more pronounced they are, the higher your total score. You get 1 point if you have "occasional difficulty falling asleep", 2 points for "nightly difficulty falling asleep", 4 points for "Hand wringing, nail biting, hair-pulling, biting of lips". Here's the whole thing.
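The scoring logic described above is just a sum over item scores. Here's a minimal sketch; the item names and maximum values shown are a simplified subset drawn from the examples in this post, not the full clinician-rated scale:

```python
# Minimal sketch of HAMD-style scoring: each item gets an integer score,
# and "severity" is simply the total across items. Only three of the 17
# items are shown, with maxima taken from the examples quoted in the post.

hamd_item_max = {
    "depressed_mood": 4,
    "insomnia_early": 2,   # 1 = occasional, 2 = nightly difficulty falling asleep
    "agitation": 4,        # 4 = hand wringing, nail biting, hair-pulling
    # ...14 further items in the real scale
}

def hamd_total(item_scores):
    """Sum item scores, checking each stays within its allowed range."""
    total = 0
    for item, score in item_scores.items():
        maximum = hamd_item_max[item]
        if not 0 <= score <= maximum:
            raise ValueError(f"{item}: score {score} outside 0-{maximum}")
        total += score
    return total

print(hamd_total({"depressed_mood": 3, "insomnia_early": 2, "agitation": 4}))  # 9
```

The point is that the total is symptom-counting: two patients can be equally distressed yet score very differently depending on which symptoms they happen to have.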

The HAMD was designed in 1960 by a psychiatrist, Max Hamilton, and it was originally intended for use by psychiatric hospital staff on depressed inpatients. So it's not a measure of severity per se: it's a measure of how well your symptoms match those considered characteristic of severe depression in 1960.

Psychiatry's concept of depression - not to mention the wider culture's - has changed greatly since then. 1960 was a full 20 years before the DSM-III criteria for depression were published, which form the basis of today's DSM-IV criteria. A quick comparison of the DSM-IV alongside the HAMD reveals a lot of differences. It's quite possible to meet DSM-IV criteria for "Major Depressive Disorder" yet score low on the HAMD.

Which brings us back to the imaginary scenario at the start of this post. My personal interpretation of results like those of Fournier et al is this: antidepressants treat classical clinical depression, of the kind that psychiatrists in 1960 would have recognized. This is the kind of depression that they were originally used for, after all: the first antidepressants arrived in the late 1950s, and modern antidepressants like Prozac target the same neurotransmitter systems.

Yet in recent years "clinical depression" has become a much broader term. Many people attribute this to marketing on the part of pharmaceutical companies. Whatever the cause, it's almost certain that many people are now being prescribed antidepressants for emotional and personal issues which wouldn't have been considered medical illnesses until quite recently. (Antidepressants also have a long history of use for other conditions, like OCD, but this is a separate issue.)

My imaginary story used made-up numbers: I'm not saying that only 10% of the people on antidepressants have "classic" depression. I don't know what the percentage is. But apart from that, in my opinion (and I don't think I'm alone), it's far from fantasy.

Fournier, J., DeRubeis, R., Hollon, S., Dimidjian, S., Amsterdam, J., Shelton, R., & Fawcett, J. (2010). Antidepressant Drug Effects and Depression Severity: A Patient-Level Meta-analysis. JAMA: The Journal of the American Medical Association, 303(1), 47-53. DOI: 10.1001/jama.2009.1943


Feliz Cumpleaños Papi!


Today is my Dad's birthday! Happy Birthday Papi! I LOVE you lots!!! And I hope you have a nice day. You're the best Dad ever! I want to give you a big hug and tell you that I love you. And I hope you like the breakfast that I'm making you. :) C

Saturday, January 23, 2010

OUR CARPE DIEM.


WE ARE ALWAYS TALKING ABOUT OUR PRESENT, OUR MOMENT, AND THE PROCESSES OUR LIFE FOLLOWS. ABOUT THE PATHS WE HAVE TO TAKE TO BE SUCCESSFUL PEOPLE, FULL OF ACHIEVEMENTS. WELL, THIS MORNING I READ AN ARTICLE THAT TALKS ABOUT EXACTLY THIS PROCESS. ABOUT KNOWING HOW TO LIVE OUR LIFE, OUR MOMENTS, IN THE NOW - NEITHER IN THE "THERE" NOR IN THE "HERE".

VERY INTERESTING. I WON'T NAME THE MAGAZINE, SO AS NOT TO GIVE THE IMPRESSION THAT I'M DOING ITS MARKETING. BUT I THINK IT'S WORTH QUOTING WHAT IT SAYS ABOUT CARPE DIEM, THROUGH THE AUTHOR OF THE ARTICLE, REINALDO RISK.

FROM THE LATIN "CARPE DIEM", MEANING "LIVE THE PRESENT MOMENT".

HE ALSO SAYS THAT WE MUST: "LIVE OUR MOMENT NOW, IN THE HERE, AND NOT IN THE THERE".

"LIFE CANNOT BE SEEN AS A 'THERE', AS IF OVER THERE LAY PARADISE, PROSPERITY, HAPPINESS. THIS VIEW CREATES A GREAT ILLUSION, AND WE STOP LIVING THE HERE AND THE NOW, THE REAL".

IF I LIVE ONLY TO COMPLAIN, IT WILL ONLY MAKE THINGS WORSE. I MUST NOT FORGET TO LIVE THESE PRESENT MOMENTS. FORGETTING DOESN'T HELP. KNOWING HOW TO FACE THEM IS WHAT CAN HELP US LIVE THIS PROCESS OF LIFE. WE ARE HUMAN BEINGS; WE HAVE THE RIGHT TO GET THINGS WRONG AND TO GET THEM RIGHT. IF WE USE OUR MISTAKES TO IMPROVE AND DO BETTER, WE WILL BE CONTRIBUTING TO OUR JOY IN LIVING THE PRESENT, THE MOMENT. I AM SURE WE ALL WANT THAT FOR OUR LIVES. SO LET'S LIVE OUR CARPE DIEM, WITH MUCH MORE JOY AND LOVE...

LET'S LIVE EACH MOMENT IN THE BEST WAY POSSIBLE. THAT IS WHAT'S WORTHWHILE. BUT WITH RESPONSIBILITY AND LOVE..

LIVING A TOMORROW THAT HASN'T EVEN BEGUN IS UNREAL.
LIVE THE NOW.
BE HAPPY..

AND I LOVE YOU, YOU WHO ARE MY REAL REASON FOR BEING HERE, NOW, SENDING AND LEAVING A BIG HUG FOR EVERYONE.
HAVE A BEAUTIFUL DAY. EVEN WITH THE RAIN.

THIS SMILE BADGE FITS THE TEXT PERFECTLY.
THANK YOU, ANNA.
Agenda da Felicidade
A Smile is the Calling Card of Healthy People!

Take It with You!






I PASS IT ON TO EVERYONE...

BUT IT'S WORTH IT. BECAUSE WE ARE ALIVE. THAT IS THE BEST PRESENT.
A BIG HUG TO YOU, MY FRIEND WHO IS JUST ARRIVING.
I AM HAPPY, BECAUSE YOU ARE MY MOMENT, AMONG SO MANY OTHERS.
WITH MUCH AFFECTION
SANDRA


YOU WHO ARE VISITING FOR THE FIRST TIME, COME AND GET TO KNOW US...

Friday, January 22, 2010

Brain Scanning Software Showdown

You've just finished doing some research using fMRI to measure brain activity. You designed the study, recruited the volunteers, and did all the scans. Phew. Is that it? Can you publish the findings yet?

Unfortunately, no. You still need to do the analysis, and this is often the trickiest stage. The raw data produced during an fMRI experiment are meaningless - in most cases, each scan will give you a few hundred almost-identical grey pictures of the person's brain. Making sense of them requires some complex statistics.

The very first step is choosing which software to use. Just as some people swear by Firefox while others prefer Internet Explorer for browsing the web, neuroscientists have various options to choose from in terms of image analysis software. Everyone's got a favourite. In Britain, the most popular are FSL (developed at Oxford) and SPM (London), while in the USA BrainVoyager sees a lot of use.

These three all do pretty much the same thing, give or take a few minor technical differences, so which one you use ultimately makes little difference. But just as there's more than one way to skin a cat, there's more than one way to analyze a brain. A paper from Fusar-Poli et al compares the results you get with SPM to the results obtained using XBAM, a program which uses a quite different statistical approach.

Here's what happened, according to SPM, when 15 volunteers looked at pictures of faces expressing the emotion of fear, and their brain activity was compared to when they were just looking at a boring "X" on the screen (I think - either that, or it's compared to looking at neutral faces; the paper isn't clear, but given the size of the blobs I doubt it's the latter.)

Various bits of the brain were more activated by the scared face pics, as you can see by the huge, fiery blobs. The activation is mostly at the back of the brain, in occipital cortex areas which deal with vision, which is as you'd expect. The cerebellum was also strongly activated, which is a bit less expected.

Now, here's what happens if you analyze exactly the same data using XBAM, setting the statistical threshold at the same level (i.e. in theory being no more or less "strict") -

You get the same visual system blobs, but you also see activation in a number of other areas. Or as Fusar-Poli et al put it -
Analysis using both programs revealed that during the processing of emotional faces, as compared to the baseline stimulus, there was an increased activation in the visual areas (occipital, fusiform and lingual gyri), in the cerebellum, in the parietal cortex [etc] ... Conversely, the temporal regions, insula and putamen were found to be activated using the XBAM analysis software only.
*

This raises two questions: why the difference, and which way is right?

The difference must be a product of the different methods used. By default SPM uses a technique called statistical parametric mapping (hence the name) based on the assumption of normality; FSL and BrainVoyager do too. XBAM, on the other hand, differs from more orthodox software in a number of ways. The most basic difference is that it uses non-parametric statistics, but this document lists no fewer than five major innovations (Edit: although see the comments below this post):
  1. "not to assume normality but to use permutation testing to construct the null distribution used to make inference about the probability of an "activation" under the null hypothesis."
  2. "recognizing the existence of correlation in the residuals after fitting a statistical model to the data."
  3. using "a mixed effects analysis of group level fMRI data by taking into account both intra and inter subject variances."
  4. using "3D cluster level statistics based on cluster mass (the sum of all the statistical values in the cluster) rather than cluster area (number of voxels)."
  5. using "a wavelet-based time series permutation approach that permitted the handling of complex noise processes in fMRI data rather than simple stationary autocorrelation."
Phew. Which combination of these is responsible for the difference is impossible to say.
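The first innovation - building the null distribution by permutation rather than assuming normality - can be sketched in a few lines. This is a generic two-sample permutation test on toy one-voxel data, nothing like XBAM's actual wavelet-based resampling of whole time series:

```python
import random

def permutation_pvalue(group_a, group_b, n_perm=10000, seed=0):
    """Two-sample permutation test on the difference of means.

    Instead of assuming the null distribution is normal, we build it
    empirically: shuffle the group labels many times and see how often
    a shuffled statistic is at least as extreme as the observed one.
    """
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # random relabelling of the pooled data
        stat = (sum(pooled[:n_a]) / n_a
                - sum(pooled[n_a:]) / (len(pooled) - n_a))
        if abs(stat) >= abs(observed):
            count += 1
    return count / n_perm

# Toy "activation" values for one voxel under two conditions (made up):
faces = [2.1, 2.5, 1.9, 2.8, 2.4]
baseline = [1.0, 1.2, 0.8, 1.1, 0.9]
print(permutation_pvalue(faces, baseline))  # small p: the groups clearly differ
```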

The biggest question, though, is: should we all be using XBAM? Is it "better" than SPM? This is where things get tricky. The truth is that there's no one right way to statistically analyze any data, let alone fMRI data. There are lots of wrong ways, but even if you avoid making any mistakes, there are still various options as to which statistical methods to use, and which method you use depends on which assumptions you're making. XBAM rests on different assumptions than SPM.

Whether XBAM's assumptions are more appropriate than those of SPM is a difficult question. The people who wrote XBAM presumably think so, and they're very smart people. But so are the people who wrote SPM. The point is, it's a very complex issue, the mathematical details of which go far beyond the understanding of most fMRI users (myself included).

My worry about this paper is that the average Joe Neuroscientist will decide that, because XBAM produces more activation than SPM, it must be "better". The authors are careful not to say this, but for fMRI researchers working in the publish-or-perish world of modern science, and whose greatest fear is that they'll run an analysis and end up with no blobs at all, the temptation to think "the more blobs the merrier" is a powerful one.

Fusar-Poli, P., Bhattacharyya, S., Allen, P., Crippa, J., Borgwardt, S., Martin-Santos, R., Seal, M., O'Carroll, C., Atakan, Z., & Zuardi, A. (2010). Effect of image analysis software on neurofunctional activation during processing of emotional human faces. Journal of Clinical Neuroscience. DOI: 10.1016/j.jocn.2009.06.027


GOOD MORNING, MY DEAR FRIENDS!!!


I WANT TO TELL YOU THAT I AM VERY HAPPY WITH YOUR COMPANY. YESTERDAY I SPENT TIME VISITING SOME FRIENDS I HADN'T SEEN IN A WHILE. THEY ARE MY FOLLOWERS.

THIS CURIOUS ONE WENT LOOKING FOR THEM, TO SEE HOW THEY'RE DOING.. SOME SHE FOUND, OTHERS NOT...
BUT WHAT MATTERS IS THAT THEY ALL LIVE IN MY HEART.

SOMETIMES WE NEED TO TAKE A BREAK FROM POSTING AND SEE HOW THESE FRIENDS ARE DOING, THE ONES WHO FOLLOW US AND SOMETIMES NEVER COME BACK.
I'M GLAD THAT I FOUND MOST OF THEM AND THAT THEY WERE WELL.. OTHERS I COULDN'T REACH.

BUT HERE IS MY AFFECTION AND MY WARM WELCOME. YOUR COMPANY IS ALWAYS VERY PLEASANT...

FOR THE WOMEN WHO HAVEN'T YET TAKEN THEIR BADGE, THERE'S STILL TIME. JUST SCROLL DOWN TO THE POST BELOW AND TAKE YOUR TOKEN OF AFFECTION. LEAVE A LITTLE NOTE ON YOUR WAY OUT SO I'LL KNOW YOU TOOK IT. TO THOSE WHO CAME AND TOOK ONE, MY MANY THANKS.

TO THE MEN WHO CONTRIBUTED THEIR AFFECTION, A BIG HUG FROM ME.
AND I'LL SAY MORE... YOU ARE THE REASON FOR OUR MADNESS!!!! WE LOVE EACH ONE OF YOU.. EVERY WOMAN HAS HER GREAT MAN..
AND EVERY MAN HAS A GREAT WOMAN BY HIS SIDE...
SO EACH ONE MATTERS TO THE OTHER.. EACH ONE HAS THEIR OWN WORTH!!!

I AM GRATEFUL FOR THE AFFECTION OF EVERYONE WHO STOPPED BY... HAVE A BEAUTIFUL DAY!!!!


DON'T LEAVE WITHOUT TAKING THE BADGE BELOW, MY DEAR FRIEND!!
POWERFUL WOMEN....


FOR YOU WHO ARE JUST ARRIVING NOW, COME WITH ME, I'LL SHOW YOU MY OTHER HOMES.
IT WILL BE A GREAT PLEASURE TO WELCOME YOU THERE..