
Sunday, September 13, 2009

YouGov Reply

On Thursday, I wrote about British polling company YouGov, in a follow-up to an earlier post about modern Britain's fondness for opinion polls. YouGov's Co-Founder, Stephan Shakespeare, has written a response, which I've posted below.

Stephan makes a strong case that YouGov's polling methods are at least as good as, if not better than, those of other polling companies. I don't disagree, and I have no suggestions as to how they could be improved. In the political sphere, YouGov are widely regarded as the most credible British pollsters, and as Stephan says, they have an excellent record of accuracy in that area. Their popularity is why I chose them as the focus of my piece.

In my post, I did rashly suggest that YouGov's internet-based panel approach might be less representative than a random phone sampling method. But as Stephan says, such a system has plenty of serious problems of its own: "There’s no such thing as a random sample for any kind of market research or polling. There is only random invitation, but since the overwhelming majority of people decline the invitation (or don’t even receive it because they are out when the phone rings...) the resulting sample cannot be random. And it is clearly skewed against certain types of people ... as well as different temperaments..."

As he goes on to say, what YouGov do is inherently difficult - "It’s very hard to know with certainty what the population as a whole thinks about a particular topic, by any method." And this was my essential point: YouGov polls, like all polls, are not an infallible window into public opinion. They could be perfectly accurate - but we don't have any way of knowing how accurate they are, except when it comes to elections, which is a special case.

My issue was, and is, with those who commission opinion polls as a form of advertising, and those who try to use them to demonstrate things which they simply cannot do. Very often, these are the same people. The example I used in my original post was of a poll conducted by a company who run private health and fitness clubs. The message was that British people are incredibly unfit and lazy. Amongst other things it reported that 64% of parents are "always" too tired to play with their children. I don't believe that. I don't think an opinion poll is a good way of measuring laziness. Physical fitness is a vital public health issue, but this is just silly.

It's not clear if that was a YouGov poll, but this one was: 75% of Britons text or blog while on the toilet, which puts us at risk of haemorrhoids, according to a poll commissioned by the makers of trendy, expensive 'probiotic' yoghurt, Yakult. That got Yakult mentions in The Telegraph, The Scotsman, The Metro and The London Paper. I could go on.

Of course we can't blame polling companies for what their clients do with their data. But a healthy scepticism of such data is part of the reason why I'm so disappointed at the number of newspaper articles based on polls like these, usually lifted very closely from press releases (like Yakult's). It's not YouGov's fault, and I'm sure most of the research YouGov do is not like this. But it's a problem. It's lazy journalism, and it's a poor substitute for serious, informed debate about health and social issues.

Anyway, here's Stephan Shakespeare's reply:

"As you must realise, there’s no such thing as a random sample for any kind of market research or polling. There is only random invitation, but since the overwhelming majority of people decline the invitation (or don’t even receive it because they are out when the phone rings, or they don’t pick up their phone because they screen calls, etc) the resulting sample cannot be random. And it is clearly skewed against certain types of people (younger people, busier people, etc), as well as different temperaments (most people won’t willingly give up their time to answer surveys: remember that they tend to be quite long, and not usually on very interesting subjects. Would you stop in the street on your way to work for someone with a clipboard? Would you say ‘yes’ when you are called in the middle of making supper for your kids?)

When researchers do manage to talk to someone, there is no way of knowing whether the answers respondents give to the questions reflect their true thinking. Indeed, as a neuroscientist will be quick to point out, it may not be easy to define what their “true thinking” is, because they may never before have thought about the topic they are being asked about. It may well be that ten minutes after the interview, they think differently about it. Or maybe they were lying, either to the interviewer or to themselves. Maybe they were trying to please the interviewer with the answer they thought was wanted. Maybe they want to appear more reasonable than they really are.

So it’s very hard to know with certainty what the population as a whole thinks about a particular topic, by any method. In fact it’s impossible even if one has the latest neuropsychology techniques at one’s disposal. Nowhere in your piece do you discuss any of these issues which apply to all forms of opinion research, under any conditions. Comparison with other methodologies is important, because we must do the best we can when conditions dictate imperfection.

To repeat: all methodologies include selection bias (self-selection to participate in a panel is not essentially different from the overwhelming self-de-selection that applies to random-interruption methods), and all have motivational biases (anyone who wants to spend their time giving opinions is different in some way to people who don’t; why should payment mean a ‘financial interest’ that skews opinions? Are the volunteers used for neuroscience not usually rewarded, often financially? Surely non-payment skews the motivation too?)

For the record, at YouGov, we take a lot of care to recruit people to our panel by a variety of methods. The great majority are proactively recruited, they do not find their own way to the panel. They are recruited from a variety of ‘innocent’ sources to maintain as good a demographic balance as we can. But we do not claim random selection - as stated above, no research agency can possibly enforce participation from a random selection, it’s impossible. It was precisely because of our acknowledgement that true random samples are impossible that we say we ‘model’, we do not merely ‘measure’ – something which most of the industry now agrees with. Because we are explicit about this, and because we have historical data on our respondents, we can model by more variables. In other words, we are more scientific, not less scientific, than the methods which, by implication of your omissions, you prefer. We know more about our sample, so we can compare them with the general population in a more sophisticated way; and we have no interviewer effect; and respondents can think a little longer about their answer. So we think that makes for better data. In fact, wherever our data can be compared to real outcomes, we have a fantastic record.

You say that our record of accuracy in predicting elections does not mean we are accurate in other things. It is true that most areas of public opinion cannot be proved, by any method, and therefore we cannot prove it either. But it’s surely better to use a methodology that has proven its accuracy in areas that can be proven, rather than one that was found to be wrong, no? YouGov has the best record of accuracy in predicting real outcomes; most recently the Euro elections and the London Mayoral election. You may remember other pollsters had Ken Livingstone beating or neck-and-neck with Boris Johnson. We said Johnson would win by 6%. He won by 6%. Would you rather trust a company that gets the provable things right, or a company that gets them wrong? Does your ‘science’ tell you that methodologies which get the wrong political prediction are more likely to be right in other areas? If so, please explain further.

As it happens, the vast majority of the revenue for YouGov comes from market research for companies who do not publish the results in the media, companies which rely on the accuracy of our descriptions and predictions of consumer behaviour for their future planning. You might want to credit them with some kind of quality-control, if only in their self-interest.

Given that we all acknowledge the difficulty of knowing precisely the percentage that think this or that about some topic they may rarely have thought about, what is your suggested better course? As it is ultimately impossible to know what a single person “thinks”, let alone an entire population, maybe we should attempt nothing, report nothing? Would it be better if there were no data available, only the anecdotal publications of bloggers?

We don’t let it rest. We constantly experiment - with, for example, deliberative methodologies to try to measure how people change their thinking when they consider a matter more, when they are given access to more information, etc. Our panel methodology allows us to use very large (20,000+) randomly-split samples where we seek responses from each split to very slightly altered inputs, controlling for all but a single variable. Even you might agree that our methodology here is of a piece with that of your fellow scientists, some of whom we’ve consulted. We are able to do scientific things with our methodology that other, random-digit-dialing methods can’t, or at least can’t do in an affordable way. You might want to credit us with our serious approach to methodology, rather than slag us off in your most unscientific manner.

Stephan Shakespeare, Co-Founder and Chief Innovation Officer, YouGov"

Thursday, September 10, 2009

YouGov're Having A Laugh

A few weeks back I wrote about the surveys-of-2,000-people which form a growing proportion of British news stories. Suppose you're a company or activist group. You commission a survey of 2,000 people, and ask them some questions vaguely relating to your product or cause. You pick the most interesting results, write them up into a publication-ready press release, and send it to journalists. There's a good chance that your press release will appear, with minor alterations, as a news story in the British media. Like these articles (BBC, Daily Telegraph), which bear a striking similarity to this press release.

Which is good news for you. You get your name in the papers, and it doesn't even look like advertising. Journalists get column inches for very little work, and the pollsters who conducted the survey get publicity too (and your money). Everyone wins, except the public, who end up bombarded with usually meaningless statistics in the guise of "research".

The survey doesn't have to be of 2,000 people, but that's the norm, because that's the sample size used by YouGov PLC, who seem (at least to me) to be responsible for the majority of these things.
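As an aside, 2,000 is roughly the point at which the textbook margin of error on a reported percentage falls to about ±2%, which presumably explains the industry convention. A quick back-of-the-envelope sketch (assuming a simple random sample, which, as we'll see, these polls are not):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a proportion p estimated from n respondents,
    under the textbook assumption of a simple random sample."""
    return z * math.sqrt(p * (1 - p) / n)

# The worst case (p = 0.5) for a 2,000-person poll:
print(round(margin_of_error(2000) * 100, 1))  # 2.2 (percentage points)
```

So a "64% of parents" headline from such a poll is, even on the most charitable assumptions, really "somewhere between about 62% and 66%"; with a self-selected panel, the true uncertainty is unknowable.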

YouGov polls are everywhere. I'd always assumed that they were telephone surveys of a random sample of the British population. I assumed that because I thought that they were meant to be representative. How naïve.

In fact YouGov polls work like this: you sign up as a panelist, online, which takes two minutes. You then occasionally get e-mails inviting you to do surveys. If you do one, you get 50p credit. When you've got £50 they send you a cheque. It's a great way of making cash online, according to websites about making cash online. Sign up here to get in on the action.

So, the participants in all YouGov polls are not random people but are both self-selected and financially motivated. Many of them will be just doing it for the cash, in which case they will be trying to answer the survey as quickly as possible.

Worse, the panelists are not representative of the British population - they consist of people who use the internet, have heard about YouGov, and chose to participate. YouGov say they have 200,000 users, out of 60 million British people.

Every day, YouGov sends a survey to a certain sample of their users which collects 2,000 responses. They call this the "Omnibus" poll. It costs £500 to commission one, according to the leaflet. That's chump change in advertising terms; I don't know how much it would cost to run an ad in one or more newspapers, but it would be much more. With a YouGov poll you might even get onto bbc.co.uk, which doesn't do paid advertising at all. You can see why it's so popular.

YouGov defend their methodology against criticisms. Their main argument is that their approach has a track record of being accurate in predicting the outcome of British elections. But political polling is unique. Politics is one of the few things that most people have strong opinions about. And elections are just big polls, after all. Pollsters can learn through trial-and-error the best ways of weighting their results to achieve accurate predictions.
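That weighting step can be sketched in miniature: if a sample over-represents some group, each respondent is weighted by the ratio of their group's population share to its sample share. All the numbers below are invented for illustration, not YouGov's actual figures:

```python
# Post-stratification weighting: reweight respondents so the sample's
# demographic mix matches the population's. Figures are hypothetical.
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}
sample_share     = {"18-34": 0.15, "35-54": 0.35, "55+": 0.50}  # skewed sample

weights = {g: population_share[g] / sample_share[g] for g in population_share}
print(weights)  # {'18-34': 2.0, '35-54': 1.0, '55+': 0.7}

# Weighted estimate of, say, the share answering "yes", given raw group rates:
yes_rate = {"18-34": 0.60, "35-54": 0.50, "55+": 0.40}
weighted = sum(population_share[g] * yes_rate[g] for g in yes_rate)
print(round(weighted, 3))  # 0.495
```

The catch, of course, is that weighting can only correct for variables you know about (age, region, past vote); it can't correct for whatever unmeasured ways panel volunteers differ from everyone else.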

So the fact that YouGov are good at predicting elections doesn't mean that their polls are any good at probing the nation's drinking habits, attitudes to the mentally ill, favourite vegetables, or whatever else. They could be totally wrong. Or they could be perfect. We don't know. It doesn't matter to the people who commission these surveys, of course, because it's publicity either way. It should matter to journalists, but it doesn't seem to.

Bottom line: if you want cheap media exposure, call YouGov. If you want serious news, don't. And if you want to know how journalism got into this sorry state, read Bad Science and Flat Earth News. Really. Bloggers like me are not going to shut up about those books until everyone's read them at least five times.

[BPSDB]

Saturday, August 15, 2009

93% of Surveys are Meaningless

Over at Bad Science, Ben Goldacre decries an article about a spurious "study", lifted straight from a corporate press-release, in his own newspaper The Guardian:
On Monday we printed a news article about a “report” “published” by Nuffield Health, headlined “No sex please, we’re British and we’re lazier than ever”. “This is the damning conclusion of a major new report published today,” says the press release from Nuffield ... I asked Nuffield’s press office for a copy of the new report, but they refused, and explained that the material is all secret. The Guardian journalist can’t have read it either. I don’t really see how this “report” has been “published”, and in all honesty, I wonder whether it even exists, in any meaningful sense, outside of a press release.

Nuffield Health are the people who run private hospitals and clinics which you can’t afford....the Guardian gave free advertising to Nuffield, for their unpublished published “report”, which nobody even read, in exchange for 370 words of content. This is endemic, and it creeps me out.

The Telegraph also reprinted the press release; sorry, wrote an article drawing on the press release amongst many other carefully-researched sources. The other papers probably did too; I'm too lazy to check.

For you see, the alleged study found that British people are monumentally slothful: 73% of couples said that they are "regularly" too tired to have sex while 64% of parents say that they are always "too tired" to play with their children, and so on.

Yes, according to Nuffield Health, only one in three British parents ever play with their own children. The rest are always too exhausted. It's a wonder they found 2,000 people who were awake enough to answer their survey - although, as Goldacre says, maybe they didn't.

This "study" is, obviously, bollocks. It serves only as advertising for Nuffield Health's network of fitness centers, the benefits of which are helpfully listed at the end of the press release. That newspapers regularly reproduce press releases because they can't afford to pay journalists to fill the space any other way is well known nowadays. This is thanks mostly to Nick Davies and his outstanding book Flat Earth News which revealed, in great detail, just how bad things have got.

But the fact that this advert was published in the Health section of The Guardian is more than just a symptom of the decline of British journalism. It also reflects the peculiarly British obsession with "surveys".

Even if the Nuffield data had been fully published in a proper journal, and even if it had been a survey of 200,000 people, it would still be meaningless. Asking people whether they are lazy is not a good way of finding out whether they are, in fact, lazy. All it can tell you is whether people think of themselves as lazy, which is very different. If you wanted to prove that British people really were lazy and getting lazier, you would have to look at actual indicators of activity like, say, gym membership rates, or amateur sports team participation, or swimming pool use, or condom sales if you really think people are too tired to have sex, etc.

Yet surveys like this seem to be almost mandatory if you want to draw attention to your cause in Britain at present. You have to do one, and you have to massively over-interpret the results. The gay rights group Stonewall this week accused British football of being institutionally homophobic. Their basis for this claim was a survey of - guess - 2,000 football fans, finding, amongst other things, that

Only one in six fans said their club was working to tackle anti-gay abuse and 54% believed the Football Association, Premier League and Football League were not doing enough to tackle the issue.

This survey demonstrates, at best, that many football fans think British football is institutionally homophobic. It does not "Sadly demonstrate that football is institutionally homophobic", as a Stonewall spokesman said, unless you think that British football fans are infallible godlike beings.

I have nothing but sympathy for Stonewall, and they may well be right about homophobia in football. But their survey is meaningless. It's advertising, just like Nuffield Health's survey. Attentive Neuroskeptic readers will remember the case of "In The Face of Fear", yet another survey of about 2,000 people, claiming that Britain is in the grip of an epidemic of anxiety disorders (it's not) and serving as advertising for another well-meaning group, the Mental Health Foundation.

A large and growing proportion of British newspaper articles are essentially promotional material for some kind of company, charity, or activist organization. Honestly, newspapers should just go the whole hog and replace half their pages with paid adverts, and use the money earned to pay their journalists to actually do some journalism. There would only be half as much news, but it would at least be news.

[BPSDB]

93% of Surveys are Meaningless

Over at Bad Science, Ben Goldacre decries an article about a spurious "study", lifted straight from a corporate press-release, in his own newspaper The Guardian:
On Monday we printed a news article about a “report” “published” by Nuffield Health, headlined “No sex please, we’re British and we’re lazier than ever”. “This is the damning conclusion of a major new report published today,” says the press release from Nuffield ... I asked Nuffield’s press office for a copy of the new report, but they refused, and explained that the material is all secret. The Guardian journalist can’t have read it either. I don’t really see how this “report” has been “published”, and in all honesty, I wonder whether it even exists, in any meaningful sense, outside of a press release.

Nuffield Health are the people who run private hospitals and clinics which you can’t afford....the Guardian gave free advertising to Nuffield, for their unpublished published “report”, which nobody even read, in exchange for 370 words of content. This is endemic, and it creeps me out.

The Telegraph also reprinted the press release; sorry, wrote an article drawing on the press-release amongst many other carefully-researched sources. The other papers probably did too; I'm too lazy to check.

For you see, the alleged study found that British people are monumentally slothful: 73% of couples said that they are "regularly" too tired to have sex, while 64% of parents said that they are always "too tired" to play with their children, and so on.

Yes, according to Nuffield Health, only one in three British parents ever play with their own children. The rest are always too exhausted. It's a wonder they found 2,000 people who were awake enough to answer their survey - although, as Goldacre says, maybe they didn't.

This "study" is, obviously, bollocks. It serves only as advertising for Nuffield Health's network of fitness centers, the benefits of which are helpfully listed at the end of the press release. That newspapers regularly reproduce press releases because they can't afford to pay journalists to fill the space any other way is well known nowadays. This is thanks mostly to Nick Davies and his outstanding book Flat Earth News which revealed, in great detail, just how bad things have got.

But the fact that this advert was published in the Health section of The Guardian, is more than just a symptom of the decline of British journalism. It also reflects the peculiarly British obsession with "surveys".

Even if the Nuffield data had been fully published in a proper journal, and even if it had been a survey of 200,000 people, it would still be meaningless. Asking people whether they are lazy is not a good way of finding out whether they are, in fact, lazy. All it can tell you is whether people think of themselves as lazy, which is very different. If you wanted to prove that British people really were lazy and getting lazier, you would have to look at actual indicators of activity like, say, gym membership rates, or amateur sports team participation, or swimming pool use, or condom sales if you really think people are too tired to have sex, etc.

Yet surveys like this seem to be almost mandatory if you want to draw attention to your cause in Britain at present. You have to do one, and you have to massively over-interpret the results. The gay rights group Stonewall this week accused British football of being institutionally homophobic. Their basis for this claim was a survey of - guess what - 2,000 football fans, finding, amongst other things, that

Only one in six fans said their club was working to tackle anti-gay abuse and 54% believed the Football Association, Premier League and Football League were not doing enough to tackle the issue.

This survey demonstrates, at best, that many football fans think British football is institutionally homophobic. It does not "Sadly demonstrate that football is institutionally homophobic", as a Stonewall spokesman said, unless you think that British football fans are infallible godlike beings.

I have nothing but sympathy for Stonewall, and they may well be right about homophobia in football. But their survey is meaningless. It's advertising, just like Nuffield Health's survey. Attentive Neuroskeptic readers will remember the case of "In The Face of Fear", yet another survey of about 2,000 people, claiming that Britain is in the grip of an epidemic of anxiety disorders (it's not) and serving as advertising for another well-meaning group, the Mental Health Foundation.

A large and growing proportion of British newspaper articles are essentially promotional material for some kind of company, charity, or activist organization. Honestly, newspapers should just go the whole hog and replace half their pages with paid adverts, and use the money earned to pay their journalists to actually do some journalism. There would only be half as much news, but it would at least be news.

[BPSDB]