Thread
Just out @AJPS_Editor w @j_kalla: “When and Why Are Campaigns’ Persuasive Effects Small?” onlinelibrary.wiley.com/doi/10.1111/ajps.12724

The paper helps explain why campaign persuasion fx are often small, esp in general elections & close to Election Day

Some potential lessons for campaigns & scholars 🧵👇
In previous work @j_kalla & I found campaign persuasion effects are often v close to 0 in general elections & close to Election Day doi.org/10.1017/S0003055417000363

Why? 2 possible explanations: (1) voters are “intoxicated partisans” OR (2) they’re processing info like Bayesians. Very different!
We took advantage of 2020 election to test this, where within the same election voters had more information about one candidate (Trump) than the other (Biden & other Dems)

We also tested treatments with varying amounts of informational content

We composed >400 treatments in all
These two explanations yield different predictions for which treatments will work.

This table summarizes key predictions & our findings.

All were consistent with an informational or quasi-Bayesian interpretation. No support for “partisan intoxication” interpretation.
First, persuasive effects were much larger for treatments about Biden than Trump. “Partisan intoxication” would predict limited fx from either. But the data were consistent with what the Bayesian view predicts: weaker priors about Biden than Trump = larger fx from treatments about Biden.
And it’s not just that voters refused to believe info we gave them about Trump. We see voters rate Trump more poorly in specific areas when they see info about that area — but their priors are too strong for this to budge overall evaluations (unlike for Biden).
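The quasi-Bayesian logic here can be sketched with a toy normal-normal updating model (the numbers are hypothetical, for illustration only — they’re not from the paper): the same negative signal moves a weak prior much further than a strong one.

```python
# Toy normal-normal Bayesian update: how far does the posterior mean
# move from the prior mean after one persuasive signal?
# All numbers are hypothetical illustrations, not estimates from the paper.

def posterior_shift(prior_mean, prior_precision, signal, signal_precision):
    """Precision-weighted posterior mean, minus the prior mean."""
    post_mean = (prior_precision * prior_mean + signal_precision * signal) / (
        prior_precision + signal_precision
    )
    return post_mean - prior_mean

signal = -1.0  # the same negative message about each candidate

# Well-known candidate: many prior signals -> high prior precision
strong_prior = posterior_shift(0.0, prior_precision=10.0,
                               signal=signal, signal_precision=1.0)
# Lesser-known candidate: fewer prior signals -> low prior precision
weak_prior = posterior_shift(0.0, prior_precision=1.0,
                             signal=signal, signal_precision=1.0)

print(round(strong_prior, 3))  # -0.091: barely budges
print(round(weak_prior, 3))    # -0.5: moves much more
```

Same message, same message strength; only the precision of the prior differs. That’s the pattern consistent with larger effects for treatments about Biden than about Trump.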
Second, there’s a view that campaigns simply bring partisans home. We don’t find this (similar to other recent research). In fact, the effects we find are driven by partisans crossing the aisle! Campaign effects don’t seem to go down just because partisans have all “gone home” already.
Finally, we tested both specific and vague statements.

Vague statements were largely from real campaigns—broadside attacks that aren’t specific and substantiated.

An incredible number of campaign attacks are super vague. What information can voters really glean from these?
The informational account predicts treatments will be more persuasive if they teach voters information they don’t know.

A lot of CW says this doesn’t matter and “facts don’t change our minds.”

That’s not what we find. More specific & factual treatments were much more effective!
Qualitatively, @j_kalla & I were shocked at how non-specific the existing campaign rhetoric we tried to adapt was.

The more specific messages we wrote indeed performed better.

Voters often don’t know things campaigns assume they do!
In summary: I came away from this data more skeptical that partisan motivated reasoning, identity, or affpol explains variation in campaigns’ persuasive effects & see it more through the lens of quasi-Bayesian information processing.

Campaigns: teach voters information they don’t know!
This echoes findings w @j_kalla & @seanjwestwood that affective polarization doesn’t seem to affect political behavior

osf.io/9btsq/

& work w @j_kalla on partisan media, finding that cross-cutting media moderates strong partisans’ attitudes

osf.io/jrw26/
There are many open questions. Eg, if voters were true Bayesians, why would they believe anything campaigns say? So I wouldn’t subscribe to every tenet of that view. However, I still think the lens of “quasi”-Bayesianism is really helpful for understanding campaign effects.

/end