[Op-Ed] The Algorithm and the Republic

by Devan Patel
Visiting Fellow, Tech Ethics & Democracy

There is a principle older than both the Constitution and the common law system our founders lived under: that those who govern bear a duty of care to those they govern. That those in authority don’t merely administer, they steward. And that when a new power arises capable of reshaping how people live, work, and raise their children, the response of legitimate government isn’t to wait and see. It is to act. 

Artificial intelligence (AI) is that power right now, and the stewards are behind. Not because they lack the tools, but because they haven’t decided to use them. 

A February 2026 Rainey Center survey of 1,000+ registered voters makes this unmistakably clear. Eighty-one percent of Americans say it is important or essential that a 2028 presidential candidate have a clear, detailed plan for AI regulation and job protection. Forty-three percent call it essential. The partisan breakdown is striking in its uniformity: Republicans 80%, Democrats 82%, Independents 83%. This isn’t a coalition issue, but a civic one. 

The people have registered their judgment; the question is whether their elected representatives will meet it. 

What makes this moment significant isn’t the technology itself, though it is genuinely unprecedented, but the structural vulnerability it is exposing. The institutions that undergird self-governing societies—representative legislatures, independent courts, civic associations, the family, the local school—were built on assumptions about how information, authority, and accountability flow. AI disrupts all of them simultaneously. And the disruption runs deeper still: it is pressing against the moral formation that faith communities have transmitted across centuries, precisely because that formation doesn’t bend to political winds or optimize for engagement metrics. 

Consider what’s already happening. AI-powered companion apps and behavioral systems are embedded in children’s daily lives, with no meaningful notice or recourse for parents. Entry- and mid-level workers are about to be displaced faster than workforce programs can retrain them, putting pressure on the fiscal and social systems that state governments are responsible for managing. State and local agencies are procuring AI surveillance tools—facial recognition, predictive systems, biometric databases—that will define the relationship between citizen and state for a generation, often through low-visibility purchasing decisions that never see a legislative floor. 

These aren’t edge cases. They are the current reality, already landing on state legislators’ desks long before they surface in federal conversations. 

This is precisely why federal delay isn’t a neutral condition. When Washington stalls, it doesn’t hold the line; it cedes the field. And what fills that vacuum is not governance. It’s vendor contract language. It’s default settings designed by engineers, here and in China, whose values may not be your own. It’s algorithmic systems built to keep your kid scrolling at 11 p.m., engineered for retention, not for them. 

The governance gaps aren’t abstract. They cluster around five concrete pressure points already arriving on state leaders’ desks: child and family safety, labor market resilience, civil liberties and ethical tech, adversarial technology exposure from foreign competitors, and AI-driven strain on energy infrastructure. 

These aren’t ideological talking points. They are the issues showing up in school board meetings, state procurement decisions, utility commission hearings, and in church pews, where pastors and priests are already counseling families navigating harms that no regulatory framework yet addresses. They require structured, expert-informed responses at the state level, because that’s where the authority and the accountability actually sit.

That principle, that decisions should be made at the most local level of competent authority, is itself not a modern invention. It is subsidiarity, a concept refined by Catholic social teaching precisely because centralized power, however well-intentioned, consistently fails the people closest to the problem.

The traditions of ordered self-government that formed Western civilization’s democratic institutions and norms—from the Romans who distinguished law from power, to the common law courts that bound the Crown, to the constitutional architects who distributed sovereignty between state and federal governments precisely to keep it accountable—understood that liberty isn’t preserved by passivity but by active, principled, lawful governance. 

The voters already understand this. They are asking whether their elected officials do too. 

States that develop governance capacity now will shape responsible AI policy and capture the economic advantages of the AI boom. States that wait will inherit a set of decisions made without them, by actors with no democratic accountability, optimized for interests other than their citizens’. The political and social liability that follows will be entirely predictable and entirely avoidable.

The window to act wisely is open. Ask anyone who missed the last one how long those tend to last. 

Devan Patel is the Visiting Fellow for Tech Ethics & Democracy at the Rainey Center for Public Policy in Washington, DC, and an adjunct professor at Notre Dame Law School. He has been overusing em dashes since well before AI.