10 Reasons Not to Trust Public Opinion Polls


People are opinionated. The internet at times can seem little more than a maelstrom of porn, adverts, and outspoken viewpoints on everything from politics to fan fiction. Yet for all the multitudes of opinions swirling around in print and through word of mouth, we have yet to come up with a replicable, scientific, consistently reliable way to measure and report them. There is certainly no shortage of opinion polls constantly floating about, but to present their findings as fact is more than a stretch.

Like dietary studies alternatively proving the health benefits and risks of your favorite foods, opinion polls play jump rope with methodology, credibility, and what constitutes “facts” at all. But don’t take our word for it: studies show that lots of people agree, because…

10. Sampling and Non-Response Bias


To reach a truly diverse population, pollsters have to answer the same big questions as reporters: Who, What, When, Where, Why, and How? Only they aren’t telling a story - they are trying to find a variety of people who are actually willing and able to answer poll questions.

The law of large numbers holds that as long as the sample size is big enough, it will trend closer to being truly representative. But that is predicated on samples being random to begin with, and truly random sampling is quite hard to achieve - and it has been getting harder for years. The prevailing theory holds that as people gain more and more ways to communicate, they become less and less willing to spend their time answering poll questions.
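A toy simulation makes the point - a minimal sketch with invented numbers (the 50/50 split and the 30 percent non-response among one group are hypothetical):

```python
import random

random.seed(42)

# Hypothetical population: 50% hold opinion A, 50% hold opinion B.
population = ["A"] * 50_000 + ["B"] * 50_000

def poll(sample_size, reachable_fraction_of_b=1.0):
    """Estimate support for A. If reachable_fraction_of_b < 1, some
    B-holders never pick up the phone (non-response bias)."""
    results = []
    while len(results) < sample_size:
        person = random.choice(population)
        if person == "B" and random.random() > reachable_fraction_of_b:
            continue  # this respondent ignores the pollster
        results.append(person)
    return results.count("A") / sample_size

# A bigger random sample converges on the truth (law of large numbers)...
for n in (100, 1_000, 10_000):
    print(f"random sample, n={n}: A = {poll(n):.1%}")

# ...but a bigger *biased* sample converges on the wrong answer.
for n in (100, 1_000, 10_000):
    print(f"biased sample, n={n}: A = {poll(n, reachable_fraction_of_b=0.7):.1%}")
```

The random samples wander toward the true 50 percent as the sample grows; the biased samples converge just as confidently on roughly 59 percent. More respondents only shrink the error bars around the bias.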

The non-response issue goes beyond broad demographics, because little details also matter: timing (are respondents at work or at home, busy or idle, married or single?), location (a university campus vs a retirement home: both are crowded, but the diversity is relative), and, especially today, medium (phone vs online - there is a lot of self-selection in who answers the calls or clicks the link, even with random-digit dialing).

The subject matter of the poll also changes the nature of the “diversity” that pollsters have to satisfy. Depending on the topic in question, the traits of different demographic groups can manifest in all sorts of variations and confounds, and pollsters have historically struggled to satisfy diversity demands even when the public actually cooperated.

In short, it is more expensive and increasingly difficult to conduct scientifically sound polls, but there is no shortage of demand for polling data. While experts (whose livelihood depends on their ability to conduct polls) assert that this “non-response” problem shouldn’t invalidate opinion polls, they also acknowledge that it may nonetheless be contributing to a decline in the predictive power of polls, especially around political contests.

9. Herding and Bandwagoning


Basically, people would rather be consistent than correct. The logic of herding works like this: If you conduct a poll whose results diverge from most other polls on the same subject, does that mean your poll is more or less accurate than the others?

Many pollsters (and reporters) are afraid of the answer to this question, and as a result, deliberately fudge things to ensure they confirm, rather than contradict, what other polls are saying - without respect to what is accurate or true. Pollsters can achieve this by employing a little healthy selection bias - basically, ask the same sort of people you know have been polled before, and you’ll likely get the same sort of answers they gave before.

Reporters do essentially the same thing by emphasizing only corroborating polls in the news, and simply not incorporating any data that doesn’t contribute to the narrative consensus. It doesn’t matter if they are saying that Macs are more popular than PCs, or how much people trust their government; when a question gets asked enough by different polls, anything that looks like a trend stands a good chance of being reinforced in the reporting and in future polling. The alternative to playing along is to acknowledge that, when it comes to public opinion, the data shows that either people haven’t made up their minds, or there is simply too much volatility to draw any firm conclusions. But ambiguity is anathema to a good story, so herding often prevails.
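There is even a rough statistical smell test for herding, popularized by poll aggregators: if published polls agree with one another more tightly than pure sampling error would allow, someone is probably massaging their numbers. A minimal sketch, with invented polls:

```python
import math
from statistics import mean, stdev

# Hypothetical late-campaign polls: (share for candidate X, sample size).
polls = [(0.52, 900), (0.51, 1_000), (0.52, 850), (0.51, 1_100), (0.52, 950)]

shares = [p for p, _ in polls]
observed_sd = stdev(shares)

# Expected spread from sampling error alone, sqrt(p(1-p)/n),
# averaged across the polls.
expected_sd = mean(math.sqrt(p * (1 - p) / n) for p, n in polls)

print(f"observed spread: {observed_sd:.4f}")
print(f"expected spread: {expected_sd:.4f}")
if observed_sd < expected_sd:
    print("Polls agree more than random sampling allows - possible herding.")
```

Honest, independent polls should disagree with each other a little; suspicious unanimity is its own red flag.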

The herding phenomenon doesn’t just affect pollsters and reporters: it can actually change how people answer poll questions. When it is the respondents who change their answers, we call it the bandwagon effect.

People have a natural tendency to want to agree with others - especially others they identify (or want to be identified) with. The more the results of opinion polls are reported, the more they inform the opinions of those being polled. So if you hear that a movie is bombing at the box office, you are more likely to report having a negative opinion of that movie - regardless of your actual experience watching it.

Unless you happen to be completely ignorant of an issue (in which case your opinion is uninformed), or you are in the first wave of polling and don’t already know what other people think, it is hard to offer an opinion that doesn’t just mirror what you perceive popular opinion to be.

8. Methodology Disclosure


Given how hard it is to assemble a large, diverse, representative sample without bowing to the pressure to conform, you might expect pollsters and reporters to disclose the limitations and strengths of every poll they share. More typically, this is a footnote, if it gets mentioned at all. Studies indicate that researchers often prefer to gloss over the glaring shortcomings of their opinion polls and cut to the chase, rather than admit that they may not have really learned anything.

When it comes to reporting on this kind of grey data, the obfuscation gets even worse. Online media routinely summarize data without linking to its source - a shortcut that lets storytelling and hyperbole overwhelm accuracy. This is routine in reporting on scientific studies generally, but with opinion polls it completely divorces the conclusion - what people seem to think - from the reality of how those opinions were measured. The rise of social media as a primary news source for a majority of people takes the problem further still. Memes quoting survey results seldom name the source, much less qualify the numbers with details about the limitations and flaws in the methodology. Journalistic niceties like naming sources and linking to the original data don’t make for a viral phenomenon.

This problem can actually be overcome to some extent, by taking the average of multiple polls purportedly measuring opinions on the same subject. However, this method only works when you know the original polls’ methodologies, and can select polls that compensate for one another. After all, just because everyone is doing it the same way doesn’t mean they aren’t all doing it wrong. Without this emphasis on methods and possible flaws, even an average of polls may be producing an arbitrary figure, rather than capturing a more reliable snapshot.
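The simplest version of this is a sample-size-weighted average. A minimal sketch with invented numbers (real aggregators also weight by recency, track record, and methodology - exactly the information that so often goes undisclosed):

```python
# Average several polls on the same question, weighting by sample size.
# All figures below are hypothetical.
polls = [
    {"pollster": "Poll A", "approve": 0.47, "n": 1_200},
    {"pollster": "Poll B", "approve": 0.52, "n": 600},
    {"pollster": "Poll C", "approve": 0.49, "n": 1_000},
]

total_n = sum(p["n"] for p in polls)
weighted_avg = sum(p["approve"] * p["n"] for p in polls) / total_n
print(f"sample-size-weighted approval: {weighted_avg:.1%}")  # ~48.8%

# If all three polls share the same flaw (say, online-only panels),
# the average inherits it - garbage in, garbage out.
```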

If you don’t know the methodology, then you don’t know whether two similar-sounding polls are actually subject to the same fundamental mistakes. Between the lack of disclosure by the pollsters themselves and the media’s fixation on punchy numbers over nuance, most opinion polls get reduced to their most attention-grabbing detail, with their intrinsic shortcomings left unmentioned.

7. Margin of Error


Even when the methodology gets some tacit coverage - emphasizing how diverse the sample was, where the poll was taken, or literally reading out the questions being asked (more on that later) - reports rarely put the margin of error in proper context.

There is no perfect poll - error is an intrinsic feature. How much that error matters, and how it might impact the results, varies wildly. Just because a sample wasn’t diverse, for example, doesn’t mean it can’t reveal meaningful data about the particular group.

When a poll takes a long hard look in the mirror and learns to accept all of its imperfections and structural, procedural limitations, it comes up with a range of possible results. Anything within that range very well could be the truth, but the final report just aims for the sweet spot right in the middle. Everything surrounding this bullseye is the margin of error.

Far from describing these numbers for what they are, reports on polling results convey what polling professionals themselves describe as a “false sense of precision.”

So rather than being acknowledged as capturing a multitude of possible outcomes, the margin of error is often reduced to a number representing relative accuracy: we treat it as proof that the numbers we see are trustworthy, rather than a midpoint within a spectrum of possibilities.

Pollsters themselves typically claim only a 95 percent confidence level for their results. In other words, there is at least a five percent chance that even the margin of error isn’t wide enough to include the real figure.
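For the record, that “plus or minus three points, 19 times out of 20” boilerplate comes from a standard formula. A minimal sketch, assuming a simple random sample - which, per entry 10, real polls rarely achieve:

```python
import math

def margin_of_error(p, n, z=1.96):
    """95% margin of error for a proportion p from a simple random
    sample of size n (z = 1.96 for 95% confidence)."""
    return z * math.sqrt(p * (1 - p) / n)

# The familiar "plus or minus 3 points" comes from n ~ 1,000 at p = 0.5:
print(f"n=1,000: +/- {margin_of_error(0.5, 1_000):.1%}")  # ~ +/- 3.1 points

# Halving the error costs four times the sample - precision is expensive.
print(f"n=4,000: +/- {margin_of_error(0.5, 4_000):.1%}")  # ~ +/- 1.5 points

# And in roughly 1 poll in 20, even that interval misses the truth entirely.
```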

Reviews of polling data in situations where opinion has a measurable outcome - like an election - show just how close polls were to the results; they also reveal whether the reported margin of error was wide enough to capture the gulf between the polling data and the electoral data. Not every poll demonstrates reliable predictive power, much less self-awareness about its own accuracy.

The bottom line is there are no hard numbers when it comes to opinion polling. Everything is an approximation of an estimate of an informed guess, and the margin of error itself is only a speculation about just how far off the poll really is.

6. Phrasing is Rarely Neutral


Margin of error captures numerical variables - sample size, statistical variance, core methodological limitations. But pollsters tend to avoid the stickier subject of how their questions could be influencing their respondents.

What a poll calls something changes the associative triggers in the people responding to it. It works the same way as classical conditioning: a bell rings, and a dog’s mouth waters. A poll says “liberal” or “conservative,” and respondents mentally flock to their political tribe. Thanks to the internet and the abundance of media coverage of any remotely controversial issue, people learn to associate certain feelings with particular words and names - associations which can then be activated by ostensibly neutral opinion polls.

The difference in how people feel about American healthcare reform, for example, depends on whether they are asked about Obamacare or the Affordable Care Act. Results varied still more when polls asked about specific elements of the law rather than referring to it by either name. Similar gaps open up when comparing labels like Liberal vs Progressive, or Conservative vs Republican - not just in which labels people prefer, but in how those labels behave as descriptors within polls.

No matter how universal the term, there is as much impassioned debate over what a word really means as there is over whether the cream or the tea goes into the cup first. It gets to the point where, in order to conduct an opinion poll on anything, a series of opinion polls must seemingly first be conducted to determine what different people prefer to call the very thing they are supposed to have an opinion about. Are there more canine people or dog people compared to cat people? Men and women, or ladies and gentlemen? In which context? Split along which demographic lines?

Depending on which terms are used when, opinion polls may not be measuring what they think they are measuring, because respondents are preoccupied with a word or phrase in the question, rather than what the question is really trying to ask.

Of course, even without such associative biases, the questions themselves can be leading to the point of misleading both respondents and pollsters.

5. Order of Questions (and Responses)


Labels aside, even the phrasing of questions can have an outsized impact on what responses a poll generates. The difference between “Do you have a positive opinion of…” compared to “Do you disapprove of…” can swing responses dramatically, as can the difference between Yes or No questions, and those which ask respondents to indicate which of a variety of statements they most agree with or least agree with.

Multiple choice polls have been repeatedly shown to create unequal outcomes based purely on the order of options. In an election, that means that the candidate whose name comes first has a built-in advantage - though it is difficult to predict just how great that advantage will be ahead of time. Different jurisdictions employ different randomization strategies in order to account for this, without any clear foolproof system emerging as a guarantor of fairness.
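The randomization itself is simple; the disagreements are over which scheme to use and what it trades away. A minimal sketch of per-respondent shuffling (the question and options are hypothetical, with the “No preference” choice pinned last, as many questionnaires do):

```python
import random

QUESTION = "Which candidate do you prefer?"
OPTIONS = ["Candidate A", "Candidate B", "Candidate C", "No preference"]

def present_question(rng):
    """Show the options in a fresh random order for each respondent,
    so no single option always enjoys the first-position advantage.
    The catch-all option stays pinned at the end."""
    choices, anchored = OPTIONS[:-1], OPTIONS[-1]
    rng.shuffle(choices)
    return [QUESTION] + choices + [anchored]

rng = random.Random()
for respondent in range(3):
    print(present_question(rng))
```

Averaged over enough respondents, the first-position bonus is spread evenly - but now no two people saw quite the same ballot, which is its own methodological wrinkle.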

But the order of questions, as well as of the possible answers, can be just as confounding to the results of a poll.

This can result from people wanting to appear consistent (once you rate Tom Hanks as a great actor, you will likely want to give his movies higher ratings as well; once you express an opinion on a policy issue, you are more likely to pledge support to any candidate whose view you assume aligns with yours). It can also result from questions leaving a bad taste in a person’s mouth (saying you hated a movie primes you to say you hate other movies; rating a politician as untrustworthy primes you to think of other politicians as untrustworthy).

Anchoring bias - in which you latch onto a particular detail or piece of information, then rely (or over-rely) on it for all future decision-making - can be activated just as easily in an opinion poll as on a used car lot.

Mixing up the order of such things from poll to poll and person to person can help, but it can also complicate the poll and make it much more difficult to conduct. Once again, simplicity prevails more often than not, and it is ultimately up to the pollsters to decide whether they feel they are being objective in how they word and order their questions, because no reporting on the results is likely to delve into such obscure details as methodology.

4. Polls Eliminate Nuance and Force Conclusions


When someone says, “Go on three,” do you automatically assume that means, “One, two, three, go,” or “One, two, and go”? Because as common an expression as that is, people don’t always instinctively agree.

The same goes for any question asking you to rate things on a scale from one to 10, or one to five, or any other arbitrary numerical scale, because you can’t safely assume everyone agrees that one is the worst and the largest number is the best.

This is a problem with the canned responses built into so many opinion polls: respondents can’t share their actual thoughts, because they have to use the standardized response system created for the poll. Just as the wording of questions can wreak havoc on how people interpret and respond, the wording of the available responses confuses and confounds the answers people give.

Looking just at numerical rating scales: thanks to anchoring bias, people tend to hedge their feedback by aiming for the middle, changing their responses depending on whether a scale starts at 0 and counts up or at 10 and counts down. Researchers have also found that respondents tend to prefer odd-numbered scales, because an exact midpoint lets them avoid disclosing a strong opinion either way about anything.

Of course, pollsters can add verbal qualifiers to each item, so that rather than just picking a number, respondents are asserting that they “strongly agree” or “strongly disagree” with a leading statement, or else feel “satisfied” or “unsatisfied” with something.

The only trouble with this approach is that no one actually thinks or speaks this way, and people tend to feel more comfortable agreeing with statements that sound like something they would actually say. Some research therefore recommends that feedback systems be psychologically and sociolinguistically calibrated to the target respondents to ensure they feel comfortable and natural in selecting their responses.

Outside of these canned question and answer systems, there is the prevailing theory that presenting people with a false choice yields meaningful data about their preferences. But there is a huge difference between preferring something and wanting something. When you play a round of Would You Rather, you are forcing a choice, often between two undesirable options. Something similar happens routinely in opinion polls looking to measure favorability: would you prefer to drink Coke, or Pepsi? If you don’t like fizzy brown water, your answer might be, “Neither, I like my soda lemon-lime.” If that isn’t an option, then you’re stuck pretending to care, and the pollsters can enthusiastically claim to have learned something.

No matter what the subject, suggestive wording and rigidly obtuse answer systems quickly and easily create a non-existent narrative by effectively putting words in the mouths of whomever they poll.

3. We Extrapolate Too Much


Even if the polling data were actually accurate and reliable, the way that data is presented very often takes great leaps of creative interpretation. Basically, rather than approaching poll data with a scrutinizing eye, breathless reports point to these numbers as though they are hard, reliable measures of tangible features.

The National Council on Public Polls recommends journalists ask 20 questions every time they encounter an opinion poll before using it as a resource in any reporting. Yet far from exercising restraint and skepticism, media watchdogs find that news reporters are relying more than ever on distinctly unreliable polls. It comes down to urgency, priorities, and culpability. The prevailing attitude among reporters and news organizations in the digital age is that being first is more important than being accurate - although that may not even be an approach unique to the digital age at all.

The declining participation rate in public opinion polls has an inverse relationship with the news media’s reliance on polling data to inform its reporting narrative. People are mistrustful of polling organizations as well as the individuals conducting polls, yet there is no similar decline in how much coverage opinion polls get in assessing the success or popularity of any subject under discussion.

It isn’t just an issue in political reporting: everything from customer satisfaction scores to employee engagement metrics to online movie reviews - every system designed to take the pulse of a group with opinions on a subject - has been shown to be unreliable, fundamentally flawed, and endlessly popular as a basis for measuring success and making future decisions.

Part of the problem is that the number and variety of opinion polls has grown monumentally, to the point that reporting on opinion polls is practically its own beat in the journalistic world. So rather than emphasizing what a margin of error is, stories circulate on how new polls show minute fluctuations in public sentiment.

But there is also an inherent flaw in quantifying something that exists only abstractly and fluctuates over time. In other words: you can’t reliably put a number on opinions, because they change constantly and arbitrarily, and most people aren’t adept at expressing them in discrete terms - yet polls are inherently time-bound, literal instruments for turning ideas into statistics.

2. Opinion Polls Make Stories Easier to Tell


The whole news industry operates on the premise that the news itself is just a series of stories. This is why many journalists end up working in corporate marketing and PR roles: any organization of people tells stories about itself to gain attention, inspire loyalty, and attract support.

Because the nature of modern news reporting and sharing favors simplicity over nuance, and social media allows anyone with an internet connection and an agenda to play journalist to their own social network, opinion polls let everyone tell their own story about the world, and cite polling data as the “fact” on which that story is based.

There are a lot of ways to do this: a major event like an election tends to generate a multitude of polls with widely varying results. This gives reporters the ability to select whichever set of results fits the narrative they prefer, and report it all as fact.

Reporting opinion polls selectively enables either side of an issue or story to heap doubt on the alternative narrative. It doesn’t even have to be malicious - just an impulse to force consensus or rally contrarians is sufficient to feed bias into the narrative, and obscure the data itself beneath layers of storytelling.

Opinion polls provide another way to reinforce whatever narrative reporters (and their followers) are constructing to explain the world’s events. Not only that, but they turn complex, evolving stories into digestible, shareable snapshots that look an awful lot like facts, without necessarily being true.

There are a lot of reasons why bias can creep into reporting - not just by the media, but by anyone on the hook to tell a story about what is happening and why. But with the proliferation of opinion polls, conducted with wildly varying scientific rigor by organizations with their own agendas and biases, even the most earnest reporter can essentially choose which data to reference, and set it loose online to take on a life of its own.

1. People are Idiots


Probably the single biggest, most pervasive problem with public opinion polls is that they assume the people being polled actually understand what they are being asked, and what their answers will be taken to mean.

In reality, failing to understand a question rarely stops people from answering it anyway. And if they understand the question but had no opinion on the subject before being asked, they will usually invent one just so they can respond.

So when you ask people to rate anything from their hotel stay, to a movie they just watched, to their feelings about a political candidate, they will all have a different interpretation of the scale being used to capture their opinion, the meaning of keywords and phrases in both the question and their response options, and different knee-jerk opinions that reflect neither their actual feelings nor any amount of critical thought.

Then, of course, there is the possibility that the environment in which a poll is conducted, and the subject matter itself, drive respondents to change their answers in order to seem more socially acceptable.

You can only provide so much context to gauge what people are really saying in these polls. And as much as we love ranking things - especially through Top 10 lists - the gap between absolute rank and relative merit can make for serious misunderstanding and misrepresentation. But because the dwindling pool of people still willing to take polls will answer questions no matter what, the nature of polling is shifting from gauging what people think to holding up cue cards and recording people reading them.

[Source: Toptenz]


