College Student Attitudes toward Free Speech? 6 Takeaways from Controversial Poll

Last week, the Brookings Institution posted an article by UCLA Professor John Villasenor, who reported results of a Charles Koch Foundation-funded survey that he had conducted about college student attitudes related to the First Amendment.

As conservative news site Breitbart noted, Villasenor’s motivation for the survey was his concern about the “narrowing window of permissible topics” on campus. And sure enough, Villasenor found exactly what he had expected, that “freedom of expression is deeply imperiled on U.S. campuses.”

Many news outlets from the Wall Street Journal to the Washington Post immediately picked up the news story, but almost as quickly, as reported in The Guardian, some polling experts challenged the poll’s validity. Cliff Zukin, former president of the American Association for Public Opinion Research, the professional polling organization, said the way the poll results were presented constituted “malpractice” and “junk science” and “it should never have appeared in the press.”

iMediaEthics investigated. Here are the six most important takeaways from the controversy surrounding the survey:

1. The media overreacted to the poll, parroting the results as though handed down from on high.

The survey’s findings quickly gained legs, as they were immediately picked up by the Washington Post, the New York Times, National Public Radio, FiveThirtyEight and numerous other news publications across the country. Headlines blared that college students don’t understand the First Amendment and believe violence is acceptable if speakers are offensive. As the Washington Post’s Daniel Drezner wrote:

“This study got a lot of traction in the mainstream media. My Washington Post colleague Catherine Rampell described the results as ‘disturbing.’ Eliot Cohen described the findings as ‘scary stuff.’ Commentary’s Noah Rothman was more apocalyptic, warning, ‘America is lurching toward civic crisis. … Subtly or overtly, the message is the same: Violence is coming.’ The Wall Street Journal editorialized that ‘college students are clueless about free speech.’ James Surowiecki bemoaned the fact that, ‘Young liberals have totally lost the plot when it comes to free speech.’ Whether libertarian, centrist or conservative, the basic reaction to this was: ‘Now is the time to panic.’”

He went on to say:

“This is not the first campus hysteria that turned out to be wildly exaggerated this month….

“What concerns me far more right now is the eagerness with which columnists seized on these findings as vindicating their preconceived belief that today’s college students are just the worst.

“They are the ones who failed to look more closely at a result that they so badly wanted to be true.”


2. Two well-respected pollsters found a more positive view of student attitudes toward freedom of speech than Villasenor’s survey did.

When iMediaEthics asked Villasenor by e-mail about the criticisms of his poll, he affirmed that he had never conducted a poll before, and his academic record – showing that he obtained a Ph.D. in electrical engineering – reveals no training in survey research. He also noted that “I am unable to say, and did not say, that it [his survey] was statistically in line with the broader U.S. college population in every possible respect. But, it has some very good attributes despite having been gathered in through an opt-in online approach.”

iMediaEthics asked the Koch Foundation why it funded Villasenor’s project given his lack of experience or training in polls, if the foundation has a response to the criticism of the survey, if the foundation has funded previous polls or surveys, and if it peer reviews poll-related proposals. The foundation didn’t respond to iMediaEthics’ e-mails, phone call, and tweet.

Villasenor acknowledged to Daniel Drezner in the Washington Post that “maybe my sample wasn’t as good” as Gallup’s. He was referring to a survey of college students that Gallup conducted for the Knight Foundation and Newseum Institute in February/March of 2016, which was based on a very carefully constructed scientific sample. Villasenor included one question from the Gallup survey in his own poll, and found significantly different results from Gallup’s.

The question asked:

“If you had to choose, do you think it is more important for colleges to

  • “create a positive learning environment for all students by prohibiting certain speech or expression of viewpoints that are offensive or biased against certain groups of people, (or to)

  • “create an open learning environment where students are exposed to all types of speech and viewpoints, even if it means allowing speech that is offensive or biased against certain groups of people?”

In his report, Villasenor lamented the finding that a majority of students (53%) preferred the environment where speech was restricted, and only 47% preferred the “open” learning environment. His findings, however, are contradicted by Gallup, which last year found a large majority with the opposite view: 78% preferred an open learning environment, and only 22% the restricted environment – a 62-point swing in the open-versus-restricted margin (Gallup’s 78% vs. 22% is +56 points; Villasenor’s 47% vs. 53% is −6 points).

A poll by Hart Research for the Panetta Institute for Public Policy last year, which also included a more scientifically chosen sample of college students, found views similar to those reported by Gallup: “Students overwhelmingly side with protection of freedom of speech [70%] over the concern that some people could be hurt by offensive speech [30%].”

Clearly, there are humongous differences between Villasenor’s findings and conclusions about student attitudes, and the findings and conclusions provided by Gallup and Hart Research. Given the gulf in experience and training between Villasenor and the two widely respected polling organizations, it’s not unreasonable to question the validity of Villasenor’s findings.

iMediaEthics asked the Brookings Institution, which published the results of Villasenor’s survey, how the article got through the review process given the problems with the survey’s methodology, how it ensures studies meet the Brookings Institution’s standards, if it knew Villasenor had no polling experience, and if it stands by the study.

We were provided a statement from Brookings Senior Fellow John Hudak, via a Brookings spokesperson, that didn’t offer details and dodged the serious criticism, stating in part: “His latest blog post was peer reviewed prior to publication, and Dr. Villasenor has been responsive to questions from the media about the survey’s methodology and results. Online survey methodology with weighted responses is a widely used technique among a vast number of survey methodologists and has been shown to be an appropriate way to undertake research.”


3. The controversy over the margin of error provided by Villasenor is a red herring, except to the extent it reaffirms that he is largely unaware of polling practices.

In response to iMediaEthics’ inquiry about his poll, Villasenor raised the issue of the margin of error (MOE) he provided for his sample of 1,500 students (though we had not asked about it). The article in The Guardian cited experts criticizing him for calculating a MOE for a non-probability sample: “Polling experts said this was inappropriate and a basic error.”

This is Villasenor’s rebuttal:

“I’m sure you have seen the Guardian piece so I will comment on that. The main objection voiced in the Guardian piece appears to be the fact that I mentioned margins of error. However, that is a perfectly reasonable thing to mention as long as it is accompanied with a caveat, which I very clearly provided, that for such a computation to be possible a sample needs to be probabilistically representative of the broader population of interest.”

However, the caveat he offered in his article suggests he doesn’t understand what the MOE is supposed to do. He wrote:

“To the extent that the demographics of the survey respondents (after weighting for gender) are probabilistically representative of the broader U.S. college undergraduate population, it is possible to estimate the margin of error in the tables”

So, let’s talk about the MOE: It is a measure, based on statistical theory and methods of designing a probability sample, which tells us how representative that sample is of the general population. Villasenor claims that if his sample is representative, the MOE can be calculated – to tell us how representative his sample might be. That is circular thinking. It makes no sense.

Short point: With a probability sample, one can calculate the MOE to estimate how closely the sample’s results are likely to match those of the general population. With a convenience sample, there is no statistical basis for calculating the MOE.

Period.

The whole issue is really a red herring, however, because in actual practice, the MOE – even when calculated for good probability samples – tells us very little about how closely a sample comes to representing the population. Refusal rates are so high these days that the potential non-response error can completely outweigh the MOE. The only reason to give an MOE is to signal that the pollster is using a probability sample. And Villasenor wasn’t using one. Case closed.
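For readers curious about the arithmetic at issue, here is a minimal sketch (our illustration, not Villasenor’s calculation) of the conventional MOE formula for a simple random sample of 1,500 respondents. The point is that the formula presumes a probability sample; applied to an opt-in panel, it has no statistical meaning.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error for a simple random (probability) sample of size n.

    The formula assumes every member of the population had a known chance of
    selection; it says nothing about how well an opt-in convenience sample
    represents the population.
    """
    return z * math.sqrt(p * (1 - p) / n)

# For a probability sample of 1,500 students, the conventional MOE is roughly 2.5 points.
print(f"{margin_of_error(1500) * 100:.1f}")  # prints 2.5
```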


4. The author overstated the significance of his findings by failing to acknowledge that the public, including students, has long expressed conflicting views about protecting freedom of speech versus restricting hate speech designed to hurt people – a “centuries-old issue.”

Single, one-time polls are generally of little use in understanding a complex culture. What’s important is to provide trends over time, to see how attitudes have evolved along with changes in the culture.

Villasenor provides no context for his findings. He states in his report that “The results of the study should worry the general public. It has been obvious for a while now that as a society we have been becoming less tolerant of freedom in general, particularly freedom of speech.”

But nowhere in his report is there any evidence that “we as a society,” or students in particular, are less tolerant (or more tolerant) today than they used to be. He makes no comparison with previous studies, but instead pronounces each of his stand-alone findings “concerning” or “highly concerning.” Without such comparisons, however, we have no idea whether the trend is positive or negative.

Two years ago, in the wake of a Pew poll that found “40% of Millennials OK with limiting speech [that is] offensive to minorities,” Jesse Singal wrote an extensive analysis in New York Magazine that put the numbers in perspective. His comments are relevant today:

“A bit of digging into past poll results shows that this just wasn’t an unusual result. Yes, broad attitudes over free speech change over time … but there’s a general pattern to how Americans answer these questions:

“They’ve shown over and over again that they favor free speech in theory, when asked about it in the broadest terms, but they also tend to be fairly enthusiastic about government bans on forms of speech they find particularly offensive (what’s considered offensive, of course, changes with the times).

“On this subject, millennials are right in line with reams of past polling, and it would be wrong to hold up last week’s results as an example of anything other than an extremely broad tendency that’s existed for a long time.”

Singal went on to give several examples showing how persistent this conflicted set of attitudes about freedom of speech has been in American culture, and that it is not a new phenomenon. One of the most noteworthy was an article in the Washington Post in March 2015 that reported “Americans’ growing support for free speech.”

That report was based on the long-running General Social Survey of U.S. adults. The article also noted that “Americans tend to pick and choose who should be afforded civil liberties to some degree, a centuries-old issue….”


5. The question wording in Villasenor’s poll was confusing on the “violence” measure, rendering the findings uninterpretable.

One of the most striking findings reported by Villasenor was that one in five students agreed that it was acceptable to use violence to prevent a college-invited speaker from speaking. He prefaced the question by noting that the speaker is “very controversial” and “is known for making offensive and hurtful statements.”

As Drezner noted in his Post article, the Villasenor poll was conducted immediately after the violent clashes in Charlottesville, Virginia, and Villasenor himself “hypothesized that respondents conflated ‘offensive speaker’ with ‘neo-Nazi.’” Thus, students may have responded to the poll thinking the question was asking whether it was “acceptable” to confront violence with violence, rather than to initiate violence against a peaceful speaker.

Given the vagueness of the question and the possibility that respondents may have interpreted the question in a very different context from what Villasenor intended, one cannot help but be skeptical about the usefulness of his finding.


6. Despite the criticisms noted in The Guardian, not all polls that eschew probability samples constitute malpractice.

To characterize Villasenor’s sample as “malpractice” simply because he used a convenience sample is jumping the gun. We need to know more about how the results compare with those from polls asking similar questions.

As background: Since the 1950s, a “scientific” poll has meant one based on a carefully designed probability sample grounded in statistical theory, whether the interviews were conducted in person or, more frequently, by telephone. In the past couple of decades, the response rate for scientifically designed telephone polls has plummeted, leading several polling organizations to find other ways to obtain samples.

Several polling organizations have created panels of respondents, who volunteer to be part of the panel – usually for some remuneration – and to participate in a certain number of online polls each month. These respondents constitute “convenience” samples, rather than the statistically designed “probability” samples (in which every potential respondent has a known probability of being included in the survey, and it is the pollster who reaches out to specific potential respondents).

Such convenience samples by themselves almost certainly do not represent the general public. Not all Americans have access to the Internet, and people who volunteer because they need or desire the small remuneration can be quite different from people who neither need nor want such rewards and who choose to spend their time in other pursuits.

To account for the unrepresentative character of convenience samples, pollsters devise complicated weighting models to make their samples “look” like the general population to which they generalize their findings. Some pollsters are better at this than others, and many provide results that appear to be at least as valid, if not more so, than pollsters using probability samples.
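As a rough illustration of what such weighting involves, here is a minimal sketch with made-up numbers (not any pollster’s actual model): in simple post-stratification, each respondent is weighted by the ratio of their group’s share of the population to its share of the sample, so that over- and under-represented groups are rebalanced before results are tabulated.

```python
# Simplified post-stratification weighting; all figures are hypothetical, for illustration only.
# Weight for a group = (group's share of the target population) / (group's share of the sample).

population_share = {"women": 0.56, "men": 0.44}  # assumed shares of the student population
sample_share = {"women": 0.48, "men": 0.52}      # shares observed in a hypothetical opt-in panel

weights = {g: population_share[g] / sample_share[g] for g in population_share}

# Toy responses: 1 = prefers an "open" learning environment, 0 = prefers restrictions.
responses = [("women", 1), ("women", 0), ("men", 1), ("men", 1)]

weighted_yes = sum(weights[g] * r for g, r in responses)
total_weight = sum(weights[g] for g, _ in responses)
print(round(weighted_yes / total_weight, 2))  # weighted share preferring the "open" environment
```

Real weighting models are far more elaborate, adjusting on many variables at once, but the basic idea is the same: the weights substitute modeling assumptions for the random selection that a probability sample would provide.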

FiveThirtyEight, founded by Nate Silver, rates all public pollsters according to their performance in pre-election polling as well as other criteria related to quality of sampling. iMediaEthics has used SurveyUSA, which uses weighted convenience samples and was given an A rating by FiveThirtyEight. Other pollsters using convenience samples include Ipsos (A-), Angus Reid Global (A-), GfK Group (B+), and YouGov (B).

The major network polls, all of which use probability samples and live callers, perform quite well in this rating: CNN/Opinion Research Corp (A-), CBS/New York Times (A-), ABC/Washington Post (A+), NBC/Wall Street Journal (A-), and Fox News (A-). But numerous pollsters using probability samples get grades of C or worse.

To be clear: The ratings by FiveThirtyEight may not be the only way to judge quality, but they are at least one indicator worth considering. The ratings suggest that some pollsters have been able to devise models that use convenience samples in a useful way.

UPDATE: NPR added an editor’s note to its story on the survey. Read iMediaEthics’ story.