Deepfake or Cheap Take? Keep Vermont Weird.
By MichaelMacKay - 01 May 2025
The Grass is Always Greener...
In Montpelier this spring, state legislators have held public hearings on a Senate bill that would restrict AI-generated content in elections, but this measure, aimed at “election purity,” would hurt democratic participation in campaigning for public office. Last year, neighboring New Hampshire faced an out-of-state
robocall featuring an AI-generated voice of President Biden discouraging primary voters from going to the polls, which inspired Vermont’s
S.23. Now, 21 states have enacted
legislation against so-called “deepfakes” in elections, and last year, all three of Vermont’s neighbors enacted similar laws to curb AI-generated content. However, Massachusetts, New Hampshire, and New York also tend to view democratic participation differently than the Green Mountain State (e.g., each restores voting rights to felons immediately upon release from prison, whereas Vermont imposes no felon disenfranchisement at all). Hence, Vermont’s approach to “synthetic media” is problematic, even if unoriginal, and like most states, its legislature should avoid adopting such a law because: (1) making political speech appear dubious could violate the First Amendment, (2) the required disclosures would inevitably cause confusion online, and (3) what well-funded parties can afford without AI is simply unattainable for the less resourced.
The First Amendment Issue
First, there is a freedom-of-speech problem: the law requires self-disclosure whenever anyone uses AI “with the intent… to influence the outcome of an election.”
FIRE has already filed an objection on First Amendment grounds, citing Buckley v. Valeo, to argue that Vermont’s proposed “Subchapter 4. Use of Synthetic Media in Elections” would not survive strict scrutiny as a content-based measure. That argument has some judicial support: U.S. District Court Judge Mendez recently granted a
preliminary injunction against California’s similarly worded restriction on “synthetic media” in elections. Industry groups like
TechNet raise no objection, so long as their cloud providers and online services are not held liable. But considering how quickly Meta folded its
fact-checking operations after the 2024 general election, it is hard to say whether such companies’ support actually pertains to “election purity.” Rather, a firm like Reddit would prefer to avoid obligations under Vermont’s S.23, which would chill anonymous speech, protected under
McIntyre v. Ohio Elections Commission and part of the national discourse since at least The Federalist Papers. Ultimately, if candidates choose to communicate through a dice-throwing machine, their political speech should not bear any extra burdens beyond those upheld by the courts.
The Enforcement Issue
Second, even if Vermont could compel such disclosure within 90 days of an election, the maze of online sharing would confound enforcement. S.23 says that any “deceptive and fraudulent synthetic media” used in elections must carry a prominent disclosure indicating that it was developed in whole or in part with AI. But what if a video editor in Vermont merely used AI to master some audio from a campaign rally and shared the results on Facebook? That would require a disclaimer “in a size easily readable by the average viewer,” and, if shared in an audio-only format, “in a clearly spoken manner.” But how clearly? And why, presumably, only in English? Technically speaking, should a dinosaur now appear in that recording from the campaign rally, could the state even discern its origin? Even if the original authors of the AI-enhanced content diligently complied with the bill, scrutinizing existing contracts with vendors already using AI, the difficulty of establishing a digital paper trail to track changes makes even entry-level penalties of $1,000 look onerous and prone to finger-pointing. The easy case is the misrepresentation already addressed under existing law (like impersonation of a former president); the harder case is routing through a labyrinth of lighter touches to find the agenda behind an AI edit.
The Green Issue
Above all, elections are expensive, and S.23 likely disarms minority points of view by removing relatively cheap and effective campaign tools. According to
OpenSecrets, the average successful campaign for the House of Representatives spent
$2.79M in the last midterms, so what is “synthetic” to some with deep pockets may be an “equalizing” force to most without. Last year, candidates increasingly sought
joint fundraising committees, as
McCutcheon struck down aggregate contribution limits, so where the
evenly split FEC also allowed these fundraising entities to run ads without allocating costs, laws like Vermont’s S.23 would only make it harder for less well-off upstarts to compete post-Citizens United. Accordingly, the Vermont Green Party could use AI-generated content to communicate more easily against other parties, or advocates of ending rampant
homelessness could better mobilize on certain platforms and target hard-to-reach constituencies.
The Unafraid Approach
Across the country, in an even smaller state, a candidate ran to become
mayor of Cheyenne through an AI avatar. His campaign last year was unsuccessful, but his positive message on technology was clear (overcoming a now-failed piece of legislation resembling S.23). In the end, no matter how Vermonters, whether activists, campaign managers, or other volunteers, choose to use "AI," they are still human, and there are already laws on the books that govern people. As with any new technology, there are always unknowns, but with this bill, fear appears to form the basis for restricting the use of AI-generated content "90 days" before an election, as
California had also contemplated in 2019. But fear is no friend of the freedom of speech, and Vermont's House should reject the bill that the state senate passed in late March. Surely, if there is an “uncanny valley” in the Green Mountains, then state lawmakers should keep Vermont weird.