Showing posts with label surveys. Show all posts

Monday, March 15, 2010

How to Stop Smoking

1. Don't smoke.
2. See 1.

This is essentially what Simon Chapman and Ross MacKenzie suggest in a provocative PLoS Medicine paper, The Global Research Neglect of Unassisted Smoking Cessation: Causes and Consequences.

Their point is deceptively simple: there is lots of research looking at drugs and other treatments to help people quit smoking tobacco, but little attention is paid to people who quit without any help, despite the fact that the majority (up to 75%) of quitters do just that. This is good news for the pharmaceutical industry and others who sell smoking-cessation aids, but it's not clear that it's good for public health.

As they put it,
despite the pharmaceutical industry’s efforts to promote pharmacologically mediated cessation and numerous clinical trials demonstrating the efficacy of pharmacotherapy, the most common method used by most people who have successfully stopped smoking remains unassisted cessation ... Tobacco use, like other substance use, has become increasingly pathologised as a treatable condition as knowledge about the neurobiology, genetics, and pharmacology of addiction develops. Meanwhile, the massive decline in smoking that occurred before the advent of cessation treatment is often forgotten.
Debates over drugs, or other treatments, tend to revolve around the question of whether they work: is this drug better than placebo for this disorder? Chapman and MacKenzie point out that even to frame an issue in these terms is to concede a lot to the medical or pathological approach, which may not be a good idea. Before asking "do the drugs work?", we should ask: what have drugs got to do with this?

Their argument is not that drugs never help people to quit; nor are they saying that tobacco isn't addictive, or that there is no neurobiology of addiction. Rather, they are saying that the biology is only one aspect of the story. The importance of drugs (and other stop-smoking aids like CBT), and the difficulty of quitting, is systematically exaggerated by the medical literature...
Of the 662 papers [about "smoking cessation" published in 2007 or 2008], 511 were studies of cessation interventions. The other 118 were mainly studies of the prevalence of smoking cessation in whole or special populations. Of the intervention papers, 467 (91.4%) reported the effects of assisted cessation and 44 (8.6%) described the impact of unassisted cessation (Figure 1).... Of the papers describing cessation trends, correlates, and predictors in populations, only 13 (11%) contained any data on unassisted cessation.
And although pharmaceutical industry funding of research plays a part in this, the fact that medical science tends to focus on treatments rather than on untreated individuals is unsurprising since this is fundamentally how science works:
Most tobacco control research is undertaken by individuals trained in positivist scientific traditions. Hierarchies of evidence give experimental evidence more importance than observational evidence; meta-analyses of randomized controlled trials are given the most weight. Cessation studies that focus on discrete proximal variables such as specific cessation interventions provide ‘‘harder’’ causal evidence than those that focus on distal, complex, and interactive influences that coalesce across a smoker’s lifetime to end in cessation.
Overall, it's an excellent paper and well worth reading in full (it's short and it's open access). Of course, it is itself only one side of the story, and many in the tobacco control community will find it controversial. But I think Chapman and MacKenzie make a point that needs to be made, and the point applies to other areas of medicine, especially, although not exclusively, mental health. This week, British social care charity Together told us that
Six out of ten of people have had at least one time in their life where they have found it difficult to cope mentally... stress (70%), anxiety (59%) and depression (55%) were the three most common difficulties encountered by the public
Which was still not quite as good as rivals Turning Point, who last month said
Three quarters of people in the UK experience depression occasionally or regularly yet only a third seek help
These were opinion surveys, not real peer-reviewed science, but they might as well have been: the best available science says that if you go and ask people, 50-70% of the population report suffering at least one diagnosable DSM-IV mental disorder in their lifetime, and that the majority receive no treatment at all. This leads to papers in major journals such as this one warning that "Depression Care in the United States" is "Too Little for Too Few."

But we don't know whether these tens of millions of cases of untreated "mental illness" should be treated, because there is basically no research looking at what happens to such people without treatment. On the other hand, the very fact that they aren't treated, and yet manage to hold down jobs, relationships and so forth, suggests that the situation is not so bad.

Of course we must never forget that depression and anxiety can be crippling diseases, but fortunately, such cases are at least comparatively rare. By using the word "depression" to cover everything from waking-up-at-4-am-in-a-suicidal-panic-melancholia to feeling-a-bit-miserable-because-something-bad-just-happened, it's easy to forget that while clinical depression is a serious matter, feeling a bit miserable is normal and resolves without any help 99% of the time, even though there are no published scientific studies proving this, because it's not the kind of thing scientists study.

Incidentally, this issue is a good reminder that there's no one big bad conspiracy behind everything. With smoking, Big Tobacco find themselves in direct opposition to Big Pharma, like in From Dusk Till Dawn when the psychopaths fight the vampires. With depression, the people who are quickest to decry the widespread use of antidepressants often seem to be the ones who are most keen on the idea that depression is common and under-treated, perhaps because it allows them to recommend their own favorite psychotherapy. Big Pharma hands the baton to Big Couch in the race to medicalize life.

Chapman S, & MacKenzie R (2010). The global research neglect of unassisted smoking cessation: causes and consequences. PLoS Medicine, 7(2). PMID: 20161722

Saturday, January 30, 2010

Is Depression Undertreated?

Neuroskeptic readers will be familiar with the idea that too many people are being treated for mental illness. But not everyone agrees. Many people argue that common mental illnesses, such as depression, are undertreated. Take, for example, a paper just out in the esteemed Archives of General Psychiatry: Depression Care in the United States: Too Little for Too Few.

The authors looked at the results of three large (total N=15,762) surveys designed to measure the prevalence of mental illness in American adults. I've described how these surveys are conducted before: they took a randomly selected representative sample of Americans, and asked them a standardized series of questions (the CIDI interview) about their mood and emotions, in order to try to diagnose mental illness. The interviewers, while trained, were not clinicians.

What did they find? The rate of people experiencing Major Depressive Disorder (MDD), as defined in DSM-IV, in the past year, was 8.3%. When they examined ethnicity, this ranged from 6.7% in African Americans to 11.8% in Puerto Ricans. The average severity of the depression was roughly the same in all ethnic groups.

Of those with MDD, 51% reported that they'd had treatment in the past year, either antidepressants, psychotherapy, or both. This ranged from 53% for Whites down to just 29% of Caribbean Blacks and 33% of Mexican Americans. Therapy was somewhat more popular than drugs in all ethnic groups, although a lot of people used both. However, few of the treatments were classed as "guideline-concordant", i.e. long enough to do any good, which they defined as
use of an antidepressant for at least 60 days with supervision by a psychiatrist, or other prescribing clinician, for at least 4 visits in the past year. For psychotherapy...having at least 4 visits to a mental health professional in the past year lasting on average for at least 30 minutes each.
Only 21% of depressed people were getting such treatment, even though these strike me as very lenient guidelines, especially in the case of psychotherapy - how much good is 2 hours per year going to do?

*

So depression's undertreated, especially in minorities. Too little, for too few. But this rests on an assumption: that we should treat Major Depressive Disorder.

That might not seem like an assumption, but assumptions generally don't. It seems like common sense, almost a tautology - it's a disorder, of course we should treat it! Yet it's not so simple. DSM-IV criteria for MDD require you to have 5 or more out of a list of 9 symptoms, including either depressed mood or a loss of interest in activities, lasting at least 2 weeks, and causing significant distress or impairment in social, occupational, or other important areas of functioning.
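To make the structure of the rule concrete, here is a toy sketch of the DSM-IV counting logic in Python, assuming symptoms have already been coded as simple booleans. The symptom names are my own shorthand for illustration; this is a cartoon of the "5 of 9, including a core symptom, for 2+ weeks" rule, not a diagnostic tool, and it deliberately omits the clinical judgement discussed below.

```python
# Toy sketch of the DSM-IV counting rule for Major Depressive Disorder.
# Symptom names are illustrative shorthand, not official wording.

CORE = {"depressed_mood", "loss_of_interest"}
ALL_NINE = CORE | {
    "weight_change", "sleep_disturbance", "psychomotor_change",
    "fatigue", "worthlessness_or_guilt", "poor_concentration",
    "thoughts_of_death",
}

def meets_mdd_criteria(symptoms, duration_weeks, significant_impairment):
    """True iff the bare symptom-count rule is satisfied."""
    present = set(symptoms) & ALL_NINE
    return (
        len(present) >= 5                 # 5 or more of the 9 symptoms...
        and bool(present & CORE)          # ...including at least one core symptom
        and duration_weeks >= 2           # ...lasting at least 2 weeks
        and significant_impairment        # ...with significant distress/impairment
    )

# Five symptoms including a core one, for 3 weeks, with impairment:
print(meets_mdd_criteria(
    {"depressed_mood", "fatigue", "sleep_disturbance",
     "poor_concentration", "worthlessness_or_guilt"},
    duration_weeks=3, significant_impairment=True))   # True

# Four symptoms -- one short of the cutoff questioned below:
print(meets_mdd_criteria(
    {"depressed_mood", "fatigue", "sleep_disturbance",
     "poor_concentration"},
    duration_weeks=3, significant_impairment=True))   # False
```

Seeing the rule written out this way makes the arbitrariness of the thresholds rather vivid: flip one boolean and the "diagnosis" changes.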

Fair enough. That's quite useful as a way of ensuring that psychiatrists in different countries are talking about the same thing when they talk about depression. But to think that depression is undertreated because only half of people meeting DSM-IV criteria for Major Depressive Disorder are being treated, is to put absolute faith in DSM-IV as a guide to who to treat. This is not what the DSM was meant to be, and there's no evidence it works for that purpose.

Is it really true that people with 5 symptoms need help, and those with 4 don't? Why not 6, or all 9? Why 2 weeks - why not 3 weeks, or 3 months? It's not as if there are loads of studies showing that treating people who have 5 symptoms for 2 weeks, and not treating people who don't, is the best strategy. I'm not aware of any such research. In particular, there's no evidence that people from the general population who meet these criteria when interviewed, but don't seek treatment, would all benefit from treatment as opposed to being left alone. Certainly some would, but they may be a minority.

This is not to say that any other criteria would be better than DSM-IV as guides to treatment, or that there is anything identifiably wrong with the DSM-IV criteria (although there is evidence that antidepressants are not useful in people with relatively "mild" MDD). The point is that doctors don't strictly apply textbook criteria when diagnosing and treating mental illness; they also use clinical judgement.

I don't know any psychiatrist who would prescribe treatment for someone solely on the basis that they met DSM-IV criteria for MDD. They would also want to know about the severity of the symptoms, whether they're related to any stresses or life events, how far they're "out of character" for that individual, etc. In general, they would deploy their training and experience to try to judge whether this person would benefit from treatment. This is why the DSM-IV carries a cautionary statement that "The proper use of these criteria requires specialized clinical training that provides both a body of knowledge and clinical skills."

So, it's far from clear that we should be treating everyone who answers interview questions in such a way that they meet DSM-IV criteria for Major Depressive Disorder. That's an assumption.

This isn't to say that everyone who needs depression treatment gets it. Sadly, there are many sufferers who would benefit from help and don't get any, or don't get it as early as they should. We need to do more to help such people. In this respect, depression is undertreated, although it's hard to know the extent of the problem. Yet it's quite possible that depression is also overtreated at the same time.

H/T to The Neurocritic for drawing my attention to this paper.

Gonzalez, H., Vega, W., Williams, D., Tarraf, W., West, B., & Neighbors, H. (2010). Depression Care in the United States: Too Little for Too Few. Archives of General Psychiatry, 67(1), 37-46. DOI: 10.1001/archgenpsychiatry.2009.168

Saturday, December 12, 2009

That Sinking Feeling?

Sinking and Swimming is a paper just out from the Young Foundation, a British think-tank. It "explores how psychological and material needs are being met and unmet in Britain." I'm not sure how useful their broad concept of "unmet needs" is, but there's some rather interesting data in this report.

On page 238, and prominently in the executive summary, we find the following terrifying graph, which comes with warnings like "anxiety and depression looks set to double during the course of a single generation..."

The % of the population self-reporting suffering from depression or anxiety seems to have been consistently rising since 1990, from less than 6% to almost 10% today. And the line continues ever upwards. Eeek!

Is Britain really becoming more depressed and anxious? No, and that's what makes this graph terrifying. According to the large government Adult Psychiatric Morbidity Survey, the prevalence of self-reported depression and anxiety symptoms rose slightly from 1993 to 2000 (15.5% to 17.5%) and then stayed level up to 2007 (17.6%). Not very scary. Even the Young Foundation note (on page 80) that when you look at "well-being"
analysis of the English health survey that uses a variation of GHQ [General Health Questionnaire] suggested that the proportion of the working age population with poor psychological well-being decreased from 17% in 1997 to 13% in 2006.
On that measure, we're getting happier. And the rate of new diagnoses of clinical depression fell over the past decade.

So what about that ominous line? Well, that graph was based on "self-reported anxiety or depression", but in a specific sense. People were not reporting feeling scared or unhappy (see above for the data on that), but rather reporting having anxiety or depression as medical disorders. Curiously, the % of people reporting every other sort of health problem (except vision) increased from 1991 to 2007 as well:


What seems to be happening is that British people are becoming more willing to label our problems as medical illnesses, although in fact our mental health has not changed much over the past two decades, and may even have improved slightly. This is what's terrifying, because medicalizing emotional issues is a bad idea.

Mental illness does exist, and medicine can help treat it, but medicine can't resolve non-medical problems even if they're labelled as illnesses. Antidepressants, for example, are (imperfectly) effective for severe clinical depression but probably not for "mild depression"; much of what is labelled "mild depression" is probably not, in any meaningful sense, an illness.

Why does this matter? Drugs have side effects, and psychotherapy is expensive. The cost-benefit profile of any treatment is obviously negative when there are no benefits because the treatment is being used inappropriately. My biggest concern, though, is that if someone is unhappy because of tensions in their marriage or because they're in the wrong job, they don't need treatment, they need to do something about it. Labelling a problem as an illness and treating it medically may, in itself, make that problem harder to overcome.

[BPSDB]

Wednesday, November 25, 2009

Mental Illness vs. Suicide

Do countries with more mental illness have more suicides?

At first glance, it seems as though the answer must be "yes". Although not all suicides are related to mental illness, people with mental illness unsurprisingly have a much higher suicide rate than people without. So, all other things being equal, the rate of mental illness in a country should correlate with the suicide rate. Of course, all other things are not equal, and other factors come into play, such as the quality of mental health services. But it still seems as though there should be a correlation, albeit not a perfect one, between mental illness and suicide.

I decided to see whether or not there is such a correlation. The World Health Organization (WHO) provides the relevant data here. There have only ever been three studies attempting to measure rates of common mental illnesses internationally (1,2,3), and all three were run by the WHO. The WHO also collates national suicide rates (here) for most countries, although a few are missing. No-one seems to have published anything looking for a correlation between these two sets of numbers before, or if they did, I've failed to find it.

So what's the story? Take a look -


In short, there's no correlation. The Pearson correlation (unweighted) r = 0.102, which is extremely low. As you can see, both mental illness and suicide rates vary greatly around the world, but there's no relationship. Japan has the second highest suicide rate, but one of the lowest rates of mental illnesses. The USA has the highest rate of mental illness, but a fairly low suicide rate. Brazil has the second highest level of mental illness but the second lowest occurrence of suicide.
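For anyone who wants to check figures like this themselves, here is a minimal sketch of an unweighted Pearson correlation, the statistic quoted above. The function is standard; the illness and suicide numbers below are invented placeholders, not the WHO data.

```python
# Unweighted Pearson correlation between two lists of country-level rates.
def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical example values, for illustration only:
illness = [26.4, 12.2, 18.1, 8.8, 9.1]   # 12-month prevalence (%)
suicide = [11.0, 7.4, 13.5, 24.0, 6.3]   # suicides per 100,000
print(round(pearson_r(illness, suicide), 3))
```

A scatterplot of the real WHO numbers plus this one-liner is all the analysis above amounts to.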
*

Some technical notes: Two of the three surveys, the ICPE (2000) and the WMHS (2004), drew their samples from the general population of each country. The other, and the earliest, the PPGHC (1993), surveyed people attending family doctors. Because this is a slightly different approach, I used the ICPE and the WMHS for the plot above, although the results from the PPGHC are very similar (see below).

The ICPE sampled 7 countries and the WMHS sampled 14, but 4 countries were included in both surveys, so there's a total of 17 countries. I've used the mean of the ICPE and the WMHS for those 4 countries where we have data from both, for the rest I've used whichever is available. For the suicide rates, the WHO gives data for various different years, so I've used 2002, or the nearest available year, since this is between 2000 and 2004. For two countries, Lebanon and Nigeria, the WHO do not report suicide rates. For China, rates of mental illness are given in both Beijing and Shanghai.
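The combination rule just described is simple enough to write down explicitly. This is a sketch of that rule only; the country figures are invented for illustration, not the actual survey estimates.

```python
# Hypothetical prevalence estimates (%) from the two population surveys.
icpe = {"USA": 29.1, "Brazil": 31.0, "Canada": 19.9}
wmhs = {"USA": 26.4, "Japan": 8.8, "Belgium": 12.0}

# Average where a country appears in both surveys; otherwise take
# whichever single estimate is available.
combined = {}
for country in set(icpe) | set(wmhs):
    values = [d[country] for d in (icpe, wmhs) if country in d]
    combined[country] = sum(values) / len(values)

print(combined["USA"])    # mean of the two surveys
print(combined["Japan"])  # only one survey available
```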

The studies used structured diagnostic interviews to try to measure the percentage of people suffering from mental illness in the 12 months before the interview. As I've said previously, this -
attempts to study a random sample of the population of a certain country. In order to establish whether each person is mentally ill or not, they use structured diagnostic interviews. These consist of asking the subject a fixed ("structured") series of questions, and declaring them to have a certain mental disorder if they answer "Yes" to a given number of them.
In this case the structured interview was the CIDI, which used DSM-IV criteria. You can check it out here. Example question:
You mentioned having periods that lasted several days or longer when you felt sad, empty, or depressed most of the day. During episodes of this sort, did you ever feel discouraged about how things were going in your life? (YES, NO, DON’T KNOW, REFUSED)
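The scoring logic described above, fixed yes/no questions with a symptom threshold, can be sketched like this. To be clear, the questions and threshold here are toy examples I've made up, not the actual CIDI items or DSM-IV criteria.

```python
# Toy structured-interview scorer: a diagnosis is declared when the
# number of "YES" answers reaches a fixed threshold.
QUESTIONS = [
    "Felt sad, empty, or depressed most of the day?",
    "Felt discouraged about how things were going in your life?",
    "Lost interest in things you usually enjoy?",
]
THRESHOLD = 2

def meets_criteria(answers):
    """answers: one of 'YES', 'NO', 'DON'T KNOW', 'REFUSED' per question."""
    return sum(a == "YES" for a in answers) >= THRESHOLD

print(meets_criteria(["YES", "YES", "NO"]))  # True
```

The point is that everything downstream, including the international prevalence comparisons, rests on tallies of self-reported yes/no answers like these.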

*

The rates from the population surveys (ICPE & WMHS) don't correlate with suicide but they do correlate with the rates from the PPGHC survey of people attending family doctors. The association here is very strong, with a correlation r = 0.693. The only outlier is the US. This is despite the fact that a decade elapsed between the first survey (1993) and the other two (2000, 2004).

This is important because it shows that the mental illness surveys are measuring something about these countries, something which is stable over time. They're not just producing random junk results. But whatever they're measuring, it's not related to suicide.


*

What does this mean? You leave a comment and tell me. But here's my take.
I've often expressed skepticism of population surveys and their (very high) estimates of mental illness, and of the dubious political conclusions certain people have tried to draw from them, but even so, I was surprised to find no correlation at all with suicide. I'd say that any meaningful measure of mental illness should correlate with suicide. These surveys, using the CIDI, don't, so to me they're not meaningful.

One thing to bear in mind about these numbers is that they deal with "common" mental illnesses like depression, substance abuse and anxiety. They leave out the most severe disorders such as schizophrenia. Also, people in psychiatric hospitals, in prison, and the homeless, will not have been included in the studies because they sample "households". That could be why there's no association with suicide, but if so then these surveys are missing a very important aspect of mental health.

The surveys do seem to measure something, but I don't think it has much to do with mental illness. This is just a guess but I suspect they're measuring willingness to talk about your emotional life to strangers. At least stereotypically, the Chinese and the Japanese are known as more reserved in this regard than Brazilians and Americans.
So it's no surprise that when you ask people a load of personal questions, the "rates of mental illness" seem to be lower in Japan than in America. This doesn't mean Americans are really more ill, just more open.

I've been talking about surveys looking at differences between countries, but if these are flawed, then so are surveys looking at just one country.
For example, many studies have looked at mental illness in the USA using similar methods to these. But can we trust those methods, bearing in mind that the same questions asked in, say, Belgium yield less than half the estimated rate of mental illness, despite Belgium having double the suicide rate? Taken to its logical conclusion, maybe we know little about the prevalence of "common mental illness" anywhere.

Sartorius N, Ustün TB, Costa e Silva JA, Goldberg D, Lecrubier Y, Ormel J, Von Korff M, & Wittchen HU (1993). An international study of psychological problems in primary care. Preliminary report from the World Health Organization Collaborative Project on 'Psychological Problems in General Health Care'. Archives of General Psychiatry, 50 (10), 819-24. PMID: 8215805

WHO (2000). Cross-national comparisons of the prevalences and correlates of mental disorders. WHO International Consortium in Psychiatric Epidemiology. Bulletin of the World Health Organization, 78 (4), 413-26 PMID: 10885160

Demyttenaere K, et al. (2004). Prevalence, severity, and unmet need for treatment of mental disorders in the World Health Organization World Mental Health Surveys. JAMA, 291 (21), 2581-90. PMID: 15173149


Sunday, September 13, 2009

YouGov Reply

On Thursday, I wrote about British polling company YouGov, in a follow-up to an earlier post about modern Britain's fondness for opinion polls. YouGov's Co-Founder, Stephan Shakespeare, has written a response, which I've posted below.

Stephan makes a strong case that YouGov's polling methods are at least as good as, if not better than, those of other polling companies. I don't disagree, and I don't have any suggestions as to how they could be improved. In the political sphere, YouGov are widely regarded as the most credible British pollsters, and as Stephan says, they have an excellent record of accuracy in that area. Their popularity is why I chose them as the focus of my piece.

In my post, I did rashly suggest that YouGov's internet-based panel approach might be less representative than a random phone sampling method. But as Stephan says, such a system has plenty of serious problems of its own: "There’s no such thing as a random sample for any kind of market research or polling. There is only random invitation, but since the overwhelming majority of people decline the invitation (or don’t even receive it because they are out when the phone rings...) the resulting sample cannot be random. And it is clearly skewed against certain types of people ... as well as different temperaments..."

As he goes on to say, what YouGov do is inherently difficult - "It’s very hard to know with certainty what the population as a whole thinks about a particular topic, by any method." And this was my essential point: YouGov polls, like all polls, are not an infallible window into public opinion. They could be perfectly accurate - but we don't have any way of knowing how accurate they are, except when it comes to elections, which is a special case.

My issue was, and is, with those who commission opinion polls as a form of advertising, and those who try to use them to demonstrate things which they simply cannot do. Very often, these are the same people. The example I used in my original post was of a poll conducted by a company who run private health and fitness clubs. The message was that British people are incredibly unfit and lazy. Amongst other things it reported that 64% of parents are "always" too tired to play with their children. I don't believe that. I don't think an opinion poll is a good way of measuring laziness. Physical fitness is a vital public health issue, but this is just silly.

It's not clear if that was a YouGov poll, but this one was: 75% of Britons text or blog while on the toilet, which puts us at risk of haemorrhoids, according to a poll commissioned by the makers of trendy, expensive 'probiotic' yoghurt, Yakult. That got Yakult mentions in The Telegraph, The Scotsman, The Metro and The London Paper. I could go on.

Of course we can't blame polling companies for what their clients do with their data. But a healthy scepticism of such data is part of the reason why I'm so disappointed by the number of newspaper articles based on polls like these, usually following press releases (like Yakult's) very closely. It's not YouGov's fault, and I'm sure most of the research YouGov do is not like this. But it's a problem. It's lazy journalism, and it's a poor substitute for serious, informed debate about health and social issues.

Anyway, here's Stephan Shakespeare's reply:

"As you must realise, there’s no such thing as a random sample for any kind of market research or polling. There is only random invitation, but since the overwhelming majority of people decline the invitation (or don’t even receive it because they are out when the phone rings, or they don’t pick up their phone because they screen calls, etc) the resulting sample cannot be random. And it is clearly skewed against certain types of people (younger people, busier people, etc), as well as different temperaments (most people won’t willingly give up their time to answer surveys: remember that they tend to be quite long, and not usually on very interesting subjects. Would you stop in the street on your way to work for someone with a clipboard? Would you say ‘yes’ when you are called in the middle of making supper for your kids?)

When researchers do manage to talk to someone, there is no way of knowing whether the answers respondents give to the questions reflect their true thinking. Indeed, as a neuroscientist will be quick to point out, it may not be easy to define what their “true thinking” is, because they may never before have thought about the topic they are being asked about. It may well be that ten minutes after the interview, they think differently about it. Or maybe they were lying, either to the interviewer or to themselves. Maybe they were trying to please the interviewer with the answer they thought was wanted. Maybe they want to appear more reasonable than they really are.

So it’s very hard to know with certainty what the population as a whole thinks about a particular topic, by any method. In fact it’s impossible even if one has the latest neuropsychology techniques at one’s disposal. Nowhere in your piece do you discuss any of these issues which apply to all forms of opinion research, under any conditions. Comparison with other methodologies is important, because we must do the best we can when conditions dictate imperfection.

To repeat: all methodologies include selection bias (self-selection to participate in a panel is not essentially different from the overwhelming self-de-selection that applies to random-interruption methods), and all have motivational biases (anyone who wants to spend their time giving opinions is different in some way to people who don’t; why should payment mean a ‘financial interest’ that skews opinions? Are the volunteers used for neuroscience not usually rewarded, often financially? Surely non-payment skews the motivation too?)

For the record, at YouGov, we take a lot of care to recruit people to our panel by a variety of methods. The great majority are proactively recruited, they do not find their own way to the panel. They are recruited from a variety of ‘innocent’ sources to maintain as good a demographic balance as we can. But we do not claim random selection - as stated above, no research agency can possibly enforce participation from a random selection, it’s impossible. It was precisely because of our acknowledgement that true random samples are impossible that we say we ‘model’, we do not merely ‘measure’ – something which most of the industry now agrees with. Because we are explicit about this, and because we have historical data on our respondents, we can model by more variables. In other words, we are more scientific, not less scientific, than the methods which, by implication of your omissions, you prefer. We know more about our sample, so we can compare them with the general population in a more sophisticated way; and we have no interviewer effect; and respondents can think a little longer about their answer. So we think that makes for better data. In fact, wherever our data can be compared to real outcomes, we have a fantastic record.

You say that our record of accuracy in predicting elections does not mean we are accurate in other things. It is true that most areas of public opinion cannot be proved, by any method, and therefore we cannot prove it either. But it’s surely better to use a methodology that has proven its accuracy in areas that can be proven, rather than one that was found to be wrong, no? YouGov has the best record of accuracy in predicting real outcomes; most recently the Euro elections and the London Mayoral election. You may remember other pollsters had Ken Livingstone beating or neck-and-neck with Boris Johnson. We said Johnson would win by 6%. He won by 6%. Would you rather trust a company that gets the provable things right, or a company that gets them wrong? Does your ‘science’ tell you that methodologies which get the wrong political prediction are more likely to be right in other areas? If so, please explain further.

As it happens, the vast majority of the revenue for YouGov comes from market research for companies who do not publish the results in the media, companies which rely on the accuracy of our descriptions and predictions of consumer behaviour for their future planning. You might want to credit them with some kind of quality-control, if only in their self-interest.

Given that we all acknowledge the difficulty of knowing precisely the percentage that think this or that about some topic they may rarely have thought about, what is your suggested better course? As it is ultimately impossible to know what a single person “thinks”, let alone an entire population, maybe we should attempt nothing, report nothing? Would it be better if there were no data available, only the anecdotal publications of bloggers?

We don’t let it rest. We constantly experiment - with, for example, deliberative methodologies to try to measure how people change their thinking when they consider a matter more, when they are given access to more information, etc. Our panel methodology allows us to use very large (20,000+) randomly-split samples where we seek responses from each split to very slightly altered inputs, controlling for all but a single variable. Even you might agree that our methodology here is of a piece with that of your fellow scientists, some of whom we’ve consulted. We are able to do scientific things with our methodology that other, random-digit-dialing methods can’t, or at least can’t do in an affordable way. You might want to credit us with our serious approach to methodology, rather than slag us off in your most unscientific manner.

Stephan Shakespeare, Co-Founder and Chief Innovation Officer, YouGov"
