AI and accountability: Policymakers risk halting AI innovation

Moving through the California Legislature is Senate Bill 1047, introduced by Sen. Scott Wiener, D-San Francisco, an ambitious proposal to supervise advanced artificial intelligence (AI) models.

AI will eventually transform the economy, communications, and government itself, bringing uncertainty and, for many people, an understandable hesitance. However, legislators must be careful not to empower an unaccountable bureaucracy to fund itself while stunting an industry with enormous potential to develop new medical treatments, achieve energy breakthroughs that address climate change, and improve worker productivity.

Legislators are right to take AI seriously, but to preserve California’s entrepreneurship ecosystem—the envy of the world—in a new era, they must guard against these four pitfalls. 

First is how the bill’s new regulatory agency, the “Frontier Model Division” (FMD), would be funded. Instead of drawing on tax revenues, its staff would assess fees on AI development. One fee would presumably be assessed when a company submits its plan for ensuring its model is safe, but there are no real limits on the number, amount, or purpose of fees. In essence, regulators appear to have a blank check.

Second, while this fee structure poses a particular challenge for smaller firms, it is also a classic formula for “regulatory capture” by larger tech companies. This is a common phenomenon in which a regulator ends up serving the interests of the businesses it oversees rather than the interests of the public. Because the FMD would be funded by the same companies it regulates, the risk here is especially salient.

Indeed, the proposed FMD resembles the Nuclear Regulatory Commission (NRC). Founded in the 1970s with the best of intentions, it’s now widely regarded as an obstacle to fighting climate change. Most NRC funding comes from fees assessed on nuclear power plants. Academics, think tanks, investigative journalists, and even former President Obama have all observed the NRC’s regulatory capture, which many believe relates to its funding.

Third is the requirement for “positive safety determinations,” under which companies self-certify, under penalty of perjury, that their AI models won’t cause “major harms.” The threshold is high: $500 million in estimated damage to critical infrastructure. This sounds reasonable, but in practice it creates problems. AI is a general-purpose technology whose full range of applications is not always knowable in advance. Imagine asking Apple to certify that an iPhone or Mac will not be used to commit a major crime.

That leads to a fourth concern: How would a tech company know this without constant surveillance of users’ activity? Policymakers, including many in California, have taken a stand against this kind of surveillance. Now some legislators may effectively push companies to adopt it.

Imagine a model such as ChatGPT and a cyberattacker whose goal is to hack into an urban wastewater treatment plant by sending a phishing email to steal passwords. The attacker, a skilled coder who speaks broken English, tasks the model with writing a professional-sounding email posing as the plant’s IT director. All the malicious code is his own, but the AI provided the email.

Even with surveillance, how would the model or its developer know the intent behind this seemingly innocuous piece of language? What if the email were instead used as an example to illustrate how to defend against phishing? AI companies and developers in this situation might be investigated for perjury, a criminal offense. It’s like prosecuting Ford because a Mustang was used in a bank heist.

There will be good and bad aspects of the AI transformation, as there are in all technological revolutions, from electricity to vehicles to personal computers. Policymakers can help ensure that we maximize the good and minimize the bad, but that doesn’t require more regulatory capture, privacy concerns, or misplaced criminal liability.

Dean W. Ball is a research fellow with the Mercatus Center at George Mason University.
