Mike Baselice headed the Trump for President Polling Team; he is the president of Baselice & Associates, Inc., a public and corporate affairs opinion research firm headquartered in Austin, Texas that has operated for the past twenty-eight years. In the fall of 2017, he discussed polling the 2016 election at an event for the Institute of Politics, before which he sat down with Gate reporter Megha Bhattacharya.
The Gate: At the height of election season, how do you spend your time?
Mike Baselice: Well, traditionally, polling survey research, but it can come in a couple of different ways. Traditional polling would still be done by telephone, with a combination of landline and cell phone interviews being conducted.
One of the things we may talk about [at our event] later tonight is what happened in the Hillary Clinton campaign. There have been articles written and discussions about how they stopped their tracking polling, which is nightly polling in the key states. Some in the campaign said they kept doing a name I.D. test and a ballot test (very limited polling), but they weren't doing the full tracking where they asked, for example, “What have you seen, read, or heard about Hillary Clinton?”, “What have you seen, read, or heard about Donald Trump?”, “Why are you voting for Hillary Clinton?”, “Why are you voting for Donald Trump?” They weren't getting the “why” answers. And that would be something that we do in the battleground states.
Now, both campaigns ignored certain states. There was no reason for the Trump campaign to engage in California. And there really wasn't any reason for the Clinton campaign to engage in Texas. Those are two states that have been going one way or the other, for Republicans and Democrats, for some time. So what we do is we look at this data each night and make recommendations for where we are in the race: why we're there, whether we have gained or lost any ground with certain subgroups.
I remember Kellyanne Conway asking me about where we should send Donald Trump in Colorado in the final weeks. Since so much of Colorado falls into one media market, Denver, you can send him to any suburb you want, but it's still going to be picked up by the Denver media. But the bigger question was: why even send him back to Colorado? We could just send him to Michigan and Pennsylvania, states where we maybe had a better shot. We start making those decisions by looking at the data. Should we forget this state and focus more resources on another state? Take Florida, for example, which campaigns would tell you was very close. Where should you put extra resources to get the extra votes? Well, the Democrats would go to Miami. And the Republicans may go more into the Panhandle, upstate: Tallahassee, Gainesville, Jacksonville. You have to look at all the regional and demographic information each day to make assessments as to where you're going to send your resources.
Gate: The New York Times, when they were showing their pre-election polling results, said they had a 99 percent confidence interval. What would you say about somebody putting so much confidence in the polling and how people perceive that?
Baselice: Yes, that figure came from some of the different models that were being done, because there's a margin of error and a confidence interval associated with each poll.
Let's first talk quickly about the margins of error and confidence intervals associated with each poll. If you have a sample of a thousand interviews, whether it's nationwide or in a state, the margin of error on those thousand interviews is plus or minus 3.1 percent at a 95 percent confidence level. Now, what that means is, if you and I do that same survey over the same days with other random samples, nineteen out of twenty times (95 percent of the time) we would be within plus or minus 3.1 percent of the results that we got on the first survey. That's different than the models that were looking at the percentage chance of Hillary winning. We're going to talk about that tonight in this session; a lot of that was based on public polls, and some of those polls weren't that far off. But the percentage leaning towards Clinton in some states, where she maybe had a slight edge, was probably a mistake by those making these predictions, because those leads really weren't outside the margin of error. I mean, you have states, like Ohio, that were pretty close and should have been kept a toss-up until the very end. We did so ourselves in the Trump campaign, breaking that open only in the last days. So I think it was, one, the use of the public polls, and two, not understanding what's in the public polls: relying on some leaning data and saying, “Wow, someone has a two- or three-point lead,” when that lead is still within the margin of error and could go to Clinton or to Trump. It's a dangerous proposition for those making these predictions.
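The plus-or-minus 3.1 percent figure Baselice cites follows from the standard sampling-error formula for a proportion, z · √(p(1 − p)/n). A minimal sketch in Python (the function name and the worst-case assumption p = 0.5 are mine, not from the interview):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Margin of error for a simple random sample of n interviews.

    p=0.5 is the worst case (largest margin); z=1.96 is the critical
    value for a 95 percent confidence level.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A 1,000-interview poll: roughly +/- 3.1 points, as described above.
print(f"{margin_of_error(1000):.1%}")  # -> 3.1%
```

Note that the margin shrinks only with the square root of the sample size: quadrupling the interviews merely halves the margin, which is why most public polls settle around a thousand respondents.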
Gate: Do you think that the polling changed anyone's voting decision?
Baselice: No. In fact, I saw a case in Mexico, where [Felipe] Calderón was running for president, I think in 2006. And his message in the final days became, “Vote for me; I'm ahead in the polls,” and his five-point lead became less than half a point. It almost set off a constitutional crisis with protests in the streets of Mexico City for three days. That's not a message.
So, if anything, you could take something like polling data, misuse it by saying, “Vote for me because I'm ahead,” and have it backfire. When you look at these elections, in particular the last two where Obama was victorious, he had a several-point lead in both. I don't think any polling changes anybody's view of how they're going to vote, and I don't think in this race the polling dictated any portion of the outcome. What really matters in close elections is who has the enthusiasm on their side to go vote, or, perhaps, enthusiasm against one of the opponents.
Gate: How do you phrase questions to avoid bias in the answer?
Baselice: That's a good question, because you can [introduce bias].
Let's say we're talking about how you would [present] the ballot for president in the upcoming election, or, “For which of the following candidates would you vote?” You could rotate the names: Hillary Clinton, the Democrat, and Donald Trump, the Republican. You could say Democrat Hillary Clinton, Republican Donald Trump. That doesn't really change things.
Where it starts to change is if you were to say, “For which of the presidential candidates would you vote? Donald Trump, the Republican, or Hillary Clinton, who had that Benghazi scandal and had an email server in a basement.”
Now, that could actually influence that question. Those types of questions are okay to ask, as long as you get through the straight-up, unbiased question first. That is not push polling. There's a whole definition you can read from the American Association of Political Consultants about how push polling is a real quick call where, oh, somebody says they're undecided, and you throw out something negative about the opponent and hang up.
Both campaigns test information about their opponent. So, if the Trump campaign is going to put out something about Benghazi, they want to see if it works. Or, is it the email server? Or, is it some tax increase that Hillary Clinton may have voted for? Or, is there a position on coal? And on her side, she's got material on Donald Trump: his business past, comments he may have made, and so forth. So both campaigns will use that information. Those things are okay to test just about any way you may want, but you have to back them up.
For example, to ask if one would vote for Joe Blow knowing he voted to increase taxes seventeen times is not biased. But what if he only voted to increase taxes five times? A campaign shouldn't test seventeen just because it sounds like more; it has to back up whatever it uses in an attack ad. It's important to be accurate with these things. So question wording is very important: how you word a question and in what order you place the questions. I'm going to talk a little bit tonight about why I think it's important to ask right away, i.e., early in the survey, “Do you vote Democratic or Republican most of the time?” Get that out of the way. If instead you wait until the end, after respondents have heard questions about the candidates and things they may have seen or heard about them, their declared party affiliation or lean may depend on how many of those questions were pro-Republican or pro-Democrat. They may change their answer because they think, “There were questions about Hillary Clinton, and the answers were not very good; I better not say I'm a Democrat.” So the order of the questions matters, and I would prefer to ask that up front. Even the media does a pretty good job of asking unbiased questions at the early part of the survey. Most of the time.
Gate: During non-election years, are polls still as partisan?
Baselice: You have a governor's race coming up here in Illinois. You have a Republican up for re-election in a Democratic-leaning state. I'd be interested to see a poll that had more Republicans in it than Democrats in Illinois. So if there were a poll floating around out there that's 40 percent Republican and 35 percent Democratic in Illinois, I would say, time out: that's not the way people vote here. You'd have to look at that partisan shift. Now, fewer people typically turn out in a gubernatorial year, maybe 40 percent of registered voters as opposed to 60 or 62 percent in a presidential year, so there are always a lot of votes left on the table. Partisanship in an election always matters. You can even make a claim that in states like Texas, where mayoral and city council races are nonpartisan, there's still partisanship built in, because it's used in the campaign: so-and-so is really hard-right conservative, so-and-so is a liberal Democrat, and it gets thrown in there. There's no party label, just a name on a ballot, but partisanship still finds its way into those races. Those are odd-year elections.
Gate: What about polling when Republican presidential candidates were trying to differentiate themselves from each other during the primaries? How does that polling differ from general election polling?
Baselice: In that type of polling, you want a universe of known Republicans to draw from. If you're polling Hillary Clinton against Bernie Sanders, campaigns look at polls among people who usually vote in a Democratic primary or caucus, and those lists exist. That's the first place to go. The question is, how many people with no previous primary or caucus history do you interview and bring into your fold? Give them a chance to participate, and you can make a case to try to do some of that, particularly in a big old fashion.
Gate: In general, what else do you think was unique about the 2016 election cycle?
Baselice: What was unique about it is that it was the first time we saw two presidential candidates whose negative image was larger than their positive image well before Labor Day. Typically, when you look at Obama, he was just slightly better than one to one, say 50 percent positive, 46 percent negative, and Romney was the other way around, about 45 percent positive and 50 percent negative. That's typically where you end up; that's what we've seen before. This was whole new territory, where both candidates were upside down in terms of their positive/negative image, with the negatives higher than the positives. We'd never seen that before. It was very unusual.
Another thing we saw was different turnout and enthusiasm in different parts of the states. For example, with about two and a half weeks to go, we started to see parts of Florida that looked like they would have a good 2014 Republican turnout. But down in Miami, it looked like 2012 all over again, which was good for Democrats. Well, pollsters don't like that; we'd like to apply one turnout model to the whole state. In Pennsylvania, rural voters went overwhelmingly for Trump: it looked like a 2014, very excited and energized Republican (and deflated Democratic) turnout in the rural parts of the state, while Philadelphia and its suburbs looked like they did for Obama. We were not used to seeing two different types of turnout occur within the same state in the same election. So there were a couple of things that were different.
Gate: Any last things you would want our readers to know about?
Baselice: We have a whole bunch of things to go through here. The thing I would leave your readers with is the advice to look at the methodology on these surveys, especially with a great race like the race for governor coming up next year. Look at the methodology. Ask questions about how Republican versus how Democratic the sample is, and how many African American or Hispanic voters are in the sample. Ask what the electorate of Illinois looks like. All those things make a difference in terms of getting an accurate read. My point is: do not accept a poll on its face. Look inside the demographics first, then the ballot, and the poll will start to make more sense.
I have an example right here in a Fox News poll that just came out. The Fox News headline is, “Storms erode Trump's ratings.” And when you look at his image, his job approval is down to 38 percent. But if you look at the demographics in this poll, it's twelve points more Democratic than Republican. In August, it was only a three-point difference, 42 to 39 percent. Now, did people change their partisan stripes that quickly in two months? I argue not.
Do you really think the erosion of his job approval is due to the storms? There's another part of the article where the writer goes on to say, “The White House receives mixed marks for its response to recent disasters.” So why would you title it “Storms erode his ratings” when maybe, in fact, the demographics eroded his ratings? I think this country is about three points more Democratic than Republican, but I don't think it's twelve points more Democratic than Republican. And if his approval goes back up to 41 or 42 percent in the next Fox poll, look at the demographics in the back. All of that is available for this poll and a number of others.
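Baselice's arithmetic here can be sketched directly: hold each party's approval rate fixed and change only the partisan mix of the sample, and the topline moves on its own. A hedged illustration in Python (every rate and mix below is a hypothetical number for demonstration, not a figure from the Fox News poll):

```python
def weighted_topline(rates_by_party, sample_mix):
    """Overall approval implied by per-party approval rates and a sample's
    partisan composition. Both dicts cover the same party groups, and the
    mix fractions sum to 1."""
    return sum(rates_by_party[p] * sample_mix[p] for p in sample_mix)

# Hypothetical approval rates by party (assumed for illustration).
rates = {"R": 0.85, "D": 0.08, "I": 0.35}

# Two hypothetical sample compositions: roughly D+3 versus D+12.
mix_d_plus_3 = {"R": 0.39, "D": 0.42, "I": 0.19}
mix_d_plus_12 = {"R": 0.33, "D": 0.45, "I": 0.22}

a = weighted_topline(rates, mix_d_plus_3)   # ~0.43
b = weighted_topline(rates, mix_d_plus_12)  # ~0.39
print(f"D+3 sample: {a:.1%}, D+12 sample: {b:.1%}")
```

With identical approval rates by party, the D+12 sample alone knocks roughly four points off the topline, which is the kind of shift Baselice attributes to the sample's demographics rather than to events.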
Megha Bhattacharya is a third-year Political Science and South Asian Languages and Culture double major. This past summer, she interned with R Street, covering policy work. Previously, Megha has worked as the Summer in Washington and Speaker Series Intern at the IOP. In her spare time, she enjoys playing the ukulele and making Snapchat filters.