We study AI and democracy. We’re worried about 2050, not 2026. Half of humanity lives in countries that held national elections last year. Experts warned that those contests might be derailed by a flood of undetectable, deceptive AI-generated content. Yet what arrived instead was a wave of AI slop: ubiquitous, low quality, and sometimes misleading, but rarely if ever decisive at the polls.

Still, given the outcome of the US presidential election, few observers concerned about democracy felt relief. The immediate, prolonged challenges brought by the second Trump administration make it difficult to do much more than react to crises as they happen. But advances in artificial intelligence do pose risks to democracy. Those risks have less to do with highly persuasive deepfakes in a given election cycle – they exist on a longer time horizon, but are no less insidious.

Unfortunately, policymakers’ attention has been captured by highly resourced groups with competing but equally undemocratic views of AI governance. Some argue that artificial intelligence is the only way to solve civilizational threats like the climate crisis, justifying any expenditure and an unregulated tech sector. Others insist that the United States must not stymie AI development lest it lose the 21st century to rivals in Beijing. These positions are beneficial to the bottom lines of those who hold them, but that has not shaken their hold on the debate.

Other stances misjudge the timeline on which artificial intelligence might make its mark on democracy. Warnings about immediate “catastrophic risk”, for example, distract from less speculative harms with fantastical visions of godlike machines. And while we are sympathetic to the AI reformers focused on already existing harms like bias and discrimination, other aspects of AI’s impact on democracy will take decades to play out.
In a recent article in the Journal of Democracy, we argue that there are three long-term trends associated with AI that could result in serious damage to democracy.

First, AI could supplant communication between elected officials and constituents. Political strategists have already begun experimenting with AI tools to enhance polling, constituent services and campaign outreach. One congressional candidate in Pennsylvania, for instance, used an AI robocaller in a bid to gather votes in the 2024 election. Campaigns and officeholders could take this a step further, using AI to analyze and even predict constituent opinions. Such uses could seriously damage the relationship between politicians and constituents – eroding the foundations of genuine deliberative democracy. Numbers are poor substitutes for people, a lesson that US presidential campaigns relearn every four years when some computational model of the electorate inevitably fails to predict or deliver victory. Such methods may not capture or reproduce the parts of politics that emerge from exchanges between individuals before they have formed firm opinions.

Second, those who control AI technology will grow wealthier and more powerful, and will leverage those resources to dismantle democracy. Technology-related gains in the latter part of the 20th century significantly widened economic disparity, and AI is likely to deepen inequality in the coming decades. Several of the wealthiest tech billionaires have invoked the technology’s potential to argue for replacing human workers and doing away with democratic, participatory forms of governance. Overblown pronouncements about AI’s alleged capability to streamline even the most complex human processes are self-serving for oligarchs looking to dismiss inconvenient calls for justice, dignity and equality.

Finally, AI is poised to transform our already disrupted information landscape in ways that contravene democratic communication.
News outlets have already been challenged and damaged by social media platforms’ approach to business, engagement and information curation. Large language models (LLMs) will only continue to divert web traffic away from the rigorous, ethical and fact-based media outlets that actually do the work of investigative journalism. What will replace them? If the answer relies on LLMs, it will cede even greater power to an even smaller number of technology companies. One only has to consider the tech sector’s already worrying susceptibility to political pressure to see how badly this could damage citizens’ access to information.

To avoid these outcomes, we must reject the argument that any outcome related to artificial intelligence is inevitable, or that AI is too economically and geopolitically important to be developed cautiously. Like the railroad, the lightbulb and other revolutionary technologies before it, the deployment of artificial intelligence will ultimately be decided by humans, and the impact of its diffusion across the economy will take years to assess. The problem is not a lack of policy solutions; there are abundant proposals to tackle challenges like inequality and the decline of public-interest journalism. The problem is a lack of political will, stoked by self-interested parties arguing for inaction.

Samuel Woolley is the author of Manufacturing Consensus: Understanding Propaganda in the Era of Automation and Anonymity and co-author of Bots. He is a professor at the University of Pittsburgh.

Dean Jackson is a senior fellow at the University of Pittsburgh’s CTRL Lab, a contributing editor at Tech Policy Press and the principal of Public Circle LLC, a research consultancy on technology and democracy issues.
We research AI election threats. Here’s what we need to prepare for | Samuel Woolley and Dean Jackson
Artificial intelligence endangers democracy – but it’s less about specific deepfakes and more about a bigger transformation
