Rethinking “checks and balances” for the AI age

In the late 1780s, shortly after the Industrial Revolution had begun, Alexander Hamilton, James Madison and John Jay wrote a series of 85 spirited essays, collectively known as the Federalist Papers. They argued for ratification of the Constitution and an American system of checks and balances to keep power-hungry “factions” in check.

A new project, orchestrated by Stanford University and published this month, is inspired by the Federalist Papers and contends that today is a broadly similar historical moment of economic and political upheaval that calls for a rethinking of society’s institutional arrangements.

In an introduction to its collection of 12 essays, called the Digitalist Papers, the editors overseeing the project, including Erik Brynjolfsson, director of the Stanford Digital Economy Lab, and Condoleezza Rice, secretary of state in the George W. Bush administration and director of the Hoover Institution, identify their overarching concern.

“A powerful new technology, artificial intelligence,” they write, “explodes onto the scene and threatens to transform, for better or worse, all legacy social institutions.”

The most common theme in the diverse collection of essays: Citizens need to be more involved in determining how to regulate and incorporate AI into their lives. “To build AI for the people, with the people,” as one essay summed it up.

The project is being published as the technology is racing ahead. AI enthusiasts see a future of higher economic growth, increased prosperity and a faster pace of scientific discovery. But the technology is also raising fears of a dystopian alternative — AI chatbots and automated software not only replacing millions of workers, but also generating limitless misinformation and worsening political polarization. How to govern and guide AI in the public interest remains an open question.

“Technologists are pushing the AI frontier, and that’s great,” said Brynjolfsson, who initiated the project. “But there’s been no comparable effort given to the institutional innovation needed for this technology to be used less to fuel misinformation and polarization, and more to empower people more broadly.”

By now, many governments, nonprofits, universities and even a few companies have recommended AI guidelines and guardrails, typically a list of dos and don’ts. The Stanford initiative, subtitled “Artificial Intelligence and Democracy in America,” has a different focus: not so much prescriptive solutions as varied perspectives on the threats AI poses to democracy and on the technology’s potential to revitalize democratic decision-making.

The project’s five editors and 19 essay authors and co-authors span different disciplines and outlooks — economists, political scientists and technologists, liberals and conservatives. Two pillars of the Silicon Valley establishment were invited to contribute essays: Reid Hoffman, co-founder of LinkedIn and a venture capitalist, and Eric Schmidt, former CEO of Google.

Support in funding and staff time for the Digitalist Papers came from Stanford and the Project Liberty Institute, a nonprofit focused on fostering a more human-centered internet.

Most of the Stanford project’s authors share a concern that the economic power of the big tech companies will increasingly result in political power. The essays also look at how to let citizens and consumers, rather than lobbyists and big tech companies, shape AI policy.

“The potential for democratic innovation is there, but the current political economy, shaped by moneyed interests and polarization, does not allow change,” said Lawrence Lessig, a professor at Harvard Law School.

One potential avenue to address the problem is what he calls “protected democratic deliberation” — in which some issues can be debated and moved along outside the legacy political process.

Lessig points to the work of “citizen assemblies” in Ireland. Same-sex marriage and abortion were politically off-limits for the Irish parliament, given the influence of the Roman Catholic Church. Citizen assemblies were freer to debate those issues. They came up with positions that the public overwhelmingly ratified in referendums to legalize same-sex marriage and abortion.

Taiwan is cited repeatedly in the essays as a leader in the practice of digitally enabled outreach to citizens to solicit their views on a range of subjects.

The issues tackled by citizens there have included the rules for admitting Uber to compete with local taxi companies and the priorities for shaping AI policy.

Taiwan uses what it calls “alignment assemblies,” soliciting the ideas and views of thousands of randomly selected citizens. One such assembly on misinformation online this year helped influence anti-fraud legislation that includes stronger reporting and disclosure requirements for big tech social networks.

A key to Taiwan’s success, said Saffron Huang, co-founder of the Collective Intelligence Project, which has worked with the Taiwanese government, is that the citizen views have repeatedly been translated into policy actions, which has built trust in the process.

Audrey Tang, Taiwan’s founding digital minister, said the online forums could be “a very effective way for citizens to contribute to the agenda and guide the trajectory of technology policy instead of the brakes and pedals of traditional regulation.”

The conservative contributors to the project also see a strong ecosystem of civic and other independent institutions — like those in Taiwan — as crucial counterweights to the rising power of the big tech companies. But they regard them as players in a marketplace for ideas best left free of most government controls.

“It is AI regulation, not AI, that threatens democracy,” writes John H. Cochrane, a senior fellow at the Hoover Institution.

The main danger, Cochrane said, is having a government or corporate bureaucracy decide what is and is not appropriate speech. “We’re talking about censorship,” he said.

Regulation, Cochrane said, should come after abuses become clear rather than through preemptively set rules. Who in 2004, when Facebook was founded, could have predicted the problems social networks would later cause, harming teenage girls in particular?

“It’s a process of constant learning and reform,” Cochrane said. “Bit by bit, in a contentious democracy, that’s how we figure out what to do.”

After the publication of the project, its organizers, including Rice and Brynjolfsson, plan to meet with policymakers and make presentations. Their goal, they say, is to encourage analysis and debate, and begin to build a case for optimism.

“We can build new systems of governance and guide technological development with an eye toward supporting and even enhancing democratic principles, rather than undermining them,” the editors wrote.

This article originally appeared in The New York Times.
