Stephan makes a strong case that YouGov's polling methods are at least as good as, if not better than, those of other polling companies. I don't disagree, and I have no suggestions as to how they could be improved. In the political sphere, YouGov are widely regarded as the most credible British pollsters, and as Stephan says, they have an excellent record of accuracy in that area. That prominence is why I chose them as the focus of my piece.
In my post, I did rashly suggest that YouGov's internet-based panel approach might be less representative than a random phone sampling method. But as Stephan says, such a system has plenty of serious problems of its own: "There’s no such thing as a random sample for any kind of market research or polling. There is only random invitation, but since the overwhelming majority of people decline the invitation (or don’t even receive it because they are out when the phone rings...) the resulting sample cannot be random. And it is clearly skewed against certain types of people ... as well as different temperaments..."
As he goes on to say, what YouGov do is inherently difficult - "It’s very hard to know with certainty what the population as a whole thinks about a particular topic, by any method." And this was my essential point: YouGov polls, like all polls, are not an infallible window into public opinion. They could be perfectly accurate - but we don't have any way of knowing how accurate they are, except when it comes to elections, which is a special case.
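To see the force of that point about declined invitations, here is a toy simulation in Python. Every number in it is invented, but it shows the mechanism Stephan describes: if the people who decline (or never receive) the invitation differ systematically from those who respond, a perfectly random invitation still produces a biased sample.

```python
# Toy simulation of nonresponse bias (all numbers invented): suppose "busy"
# people both answer the phone less often and hold a different opinion.
# Even a perfectly random *invitation* then yields a skewed *sample*.

import random

rng = random.Random(0)

population = []
for _ in range(100_000):
    busy = rng.random() < 0.5              # half the population is busy
    supports = rng.random() < (0.40 if busy else 0.60)
    answer_rate = 0.05 if busy else 0.20   # busy people rarely pick up
    population.append((busy, supports, answer_rate))

true_support = sum(s for _, s, _ in population) / len(population)

respondents = [s for _, s, rate in population if rng.random() < rate]
observed_support = sum(respondents) / len(respondents)

print(f"true support:     {true_support:.1%}")      # ~50%
print(f"observed support: {observed_support:.1%}")  # ~56%, skewed by nonresponse
```

Running it gives a "measured" support of roughly 56% against a true figure of 50%, purely because busy people screen their calls. No amount of extra dialling fixes that.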
My issue was, and is, with those who commission opinion polls as a form of advertising, and those who try to use them to demonstrate things which they simply cannot demonstrate. Very often, these are the same people. The example I used in my original post was a poll commissioned by a company that runs private health and fitness clubs. The message was that British people are incredibly unfit and lazy. Amongst other things, it reported that 64% of parents are "always" too tired to play with their children. I don't believe that. I don't think an opinion poll is a good way of measuring laziness. Physical fitness is a vital public health issue, but this is just silly.
It's not clear whether that was a YouGov poll, but this one was: according to a poll commissioned by the makers of trendy, expensive 'probiotic' yoghurt, Yakult, 75% of Britons text or blog while on the toilet, putting us at risk of haemorrhoids. That got Yakult mentions in The Telegraph, The Scotsman, The Metro and The London Paper. I could go on.
Of course we can't blame polling companies for what their clients do with their data. But a healthy scepticism of that data is part of the reason I'm so disappointed by the number of newspaper articles built on such polls, most of them following the press releases (like Yakult's) almost word for word. It's not YouGov's fault, and I'm sure most of the research YouGov do is not like this. But it's a problem: it's lazy journalism, and it's a poor substitute for serious, informed debate about health and social issues.
Anyway, here's Stephan Shakespeare's reply:
"As you must realise, there’s no such thing as a random sample for any kind of market research or polling. There is only random invitation, but since the overwhelming majority of people decline the invitation (or don’t even receive it because they are out when the phone rings, or they don’t pick up their phone because they screen calls, etc) the resulting sample cannot be random. And it is clearly skewed against certain types of people (younger people, busier people, etc), as well as different temperaments (most people won’t willingly give up their time to answer surveys: remember that they tend to be quite long, and not usually on very interesting subjects. Would you stop in the street on your way to work for someone with a clipboard? Would you say ‘yes’ when you are called in the middle of making supper for your kids?)
When researchers do manage to talk to someone, there is no way of knowing whether the answers respondents give to the questions reflect their true thinking. Indeed, as a neuroscientist will be quick to point out, it may not be easy to define what their “true thinking” is, because they may never before have thought about the topic they are being asked about. It may well be that ten minutes after the interview, they think differently about it. Or maybe they were lying, either to the interviewer or to themselves. Maybe they were trying to please the interviewer with the answer they thought was wanted. Maybe they want to appear more reasonable than they really are.
So it’s very hard to know with certainty what the population as a whole thinks about a particular topic, by any method. In fact it’s impossible even if one has the latest neuropsychology techniques at one’s disposal. Nowhere in your piece do you discuss any of these issues which apply to all forms of opinion research, under any conditions. Comparison with other methodologies is important, because we must do the best we can when conditions dictate imperfection.
To repeat: all methodologies include selection bias (self-selection to participate in a panel is not essentially different from the overwhelming self-de-selection that applies to random-interruption methods), and all have motivational biases (anyone who wants to spend their time giving opinions is different in some way to people who don’t; why should payment mean a ‘financial interest’ that skews opinions? Are the volunteers used for neuroscience not usually rewarded, often financially? Surely non-payment skews the motivation too?)
For the record, at YouGov, we take a lot of care to recruit people to our panel by a variety of methods. The great majority are proactively recruited, they do not find their own way to the panel. They are recruited from a variety of ‘innocent’ sources to maintain as good a demographic balance as we can. But we do not claim random selection - as stated above, no research agency can possibly enforce participation from a random selection, it’s impossible. It was precisely because of our acknowledgement that true random samples are impossible that we say we ‘model’, we do not merely ‘measure’ – something which most of the industry now agrees with. Because we are explicit about this, and because we have historical data on our respondents, we can model by more variables. In other words, we are more scientific, not less scientific, than the methods which, by implication of your omissions, you prefer. We know more about our sample, so we can compare them with the general population in a more sophisticated way; and we have no interviewer effect; and respondents can think a little longer about their answer. So we think that makes for better data. In fact, wherever our data can be compared to real outcomes, we have a fantastic record.
You say that our record of accuracy in predicting elections does not mean we are accurate in other things. It is true that most areas of public opinion cannot be proved, by any method, and therefore we cannot prove it either. But it’s surely better to use a methodology that has proven its accuracy in areas that can be proven, rather than one that was found to be wrong, no? YouGov has the best record of accuracy in predicting real outcomes; most recently the Euro elections and the London Mayoral election. You may remember other pollsters had Ken Livingstone beating or neck-and-neck with Boris Johnson. We said Johnson would win by 6%. He won by 6%. Would you rather trust a company that gets the provable things right, or a company that gets them wrong? Does your ‘science’ tell you that methodologies which get the wrong political prediction are more likely to be right in other areas? If so, please explain further.
As it happens, the vast majority of the revenue for YouGov comes from market research for companies who do not publish the results in the media, companies which rely on the accuracy of our descriptions and predictions of consumer behaviour for their future planning. You might want to credit them with some kind of quality-control, if only in their self-interest.
Given that we all acknowledge the difficulty of knowing precisely the percentage that think this or that about some topic they may rarely have thought about, what is your suggested better course? As it is ultimately impossible to know what a single person “thinks”, let alone an entire population, maybe we should attempt nothing, report nothing? Would it be better if there were no data available, only the anecdotal publications of bloggers?
We don’t let it rest. We constantly experiment - with, for example, deliberative methodologies to try to measure how people change their thinking when they consider a matter more, when they are given access to more information, etc. Our panel methodology allows us to use very large (20,000+) randomly-split samples where we seek responses from each split to very slightly altered inputs, controlling for all but a single variable. Even you might agree that our methodology here is of a piece with that of your fellow scientists, some of whom we’ve consulted. We are able to do scientific things with our methodology that other, random-digit-dialing methods can’t, or at least can’t do in an affordable way. You might want to credit us with our serious approach to methodology, rather than slag us off in your most unscientific manner.
Stephan Shakespeare, Co-Founder and Chief Innovation Officer, YouGov"
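A footnote on one of the techniques Stephan mentions. The claim that YouGov "model" rather than merely "measure" presumably involves something like post-stratification weighting, in which each respondent is weighted so that the sample's demographic mix matches the known population. The sketch below is purely illustrative: the age bands, shares and answers are all invented, and it is certainly far simpler than whatever YouGov actually run.

```python
# Illustrative post-stratification: weight a non-random panel sample so its
# demographic mix matches known population proportions. A toy sketch, not
# YouGov's actual model; the categories and numbers are invented.

from collections import Counter

# Hypothetical population shares by age band (e.g. from census data)
population_share = {"18-34": 0.30, "35-54": 0.35, "55+": 0.35}

# A skewed panel sample: each respondent has an age band and a yes/no answer
sample = (
    [("18-34", True)] * 10 + [("18-34", False)] * 10    # 20 young respondents
    + [("35-54", True)] * 20 + [("35-54", False)] * 20  # 40 middle-aged
    + [("55+", True)] * 30 + [("55+", False)] * 10      # 40 older, skews "yes"
)

# Weight each respondent by (population share) / (sample share) of their group
group_counts = Counter(age for age, _ in sample)
n = len(sample)
weight = {g: population_share[g] / (group_counts[g] / n) for g in group_counts}

raw = sum(ans for _, ans in sample) / n
weighted = sum(weight[age] * ans for age, ans in sample) / sum(
    weight[age] for age, _ in sample
)

print(f"raw 'yes' share:      {raw:.1%}")       # 60.0%, biased by the skew
print(f"weighted 'yes' share: {weighted:.1%}")  # 58.8%, corrected toward population
```

The more you know about your respondents, the more variables you can weight on, which is presumably what Stephan means by being able to "model by more variables".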
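And the randomly-split samples he describes at the end are, in design terms, a controlled experiment: hold everything constant except one variable. Again a minimal sketch, with an invented panel size and invented question wordings:

```python
# Illustrative randomised split-sample ("A/B") design of the kind described
# above: one large panel, randomly split in two, each half shown the same
# question with one detail changed. Panel size and wordings are invented.

import random

rng = random.Random(42)          # fixed seed so the split is reproducible

panel = list(range(20_000))      # stand-in IDs for a 20,000-strong panel
rng.shuffle(panel)
half = len(panel) // 2
group_a, group_b = panel[:half], panel[half:]

# Everything is held constant except the single variable under test:
wording = {
    "A": "Do you agree the government should spend more on the NHS?",
    "B": "Do you agree the government should spend more on healthcare?",
}

# In a real study each group answers its own variant; because assignment is
# random, any systematic difference between the two groups' answers can be
# attributed to the wording change alone (plus sampling noise).
print(f"group A: {len(group_a)} panellists -> {wording['A']!r}")
print(f"group B: {len(group_b)} panellists -> {wording['B']!r}")
```

Because assignment is random, any systematic difference between the two groups' answers can only come from the altered wording (or chance), which is exactly the "controlling for all but a single variable" Stephan describes.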