When California Gov. Gavin Newsom vetoed SB 1047 — a state bill regulating artificial intelligence technology — last year, Redwood Research CEO Buck Shlegeris was furious and flabbergasted at the governor’s disregard for artificial intelligence’s dangers.
“I think Newsom caved to the interest of his big donors and other business supporters in a way that is quite shameful,” Shlegeris said. “SB 1047 was supported by the majority of Californians who were polled. It was supported by a majority of experts.”
Berkeley-based Redwood Research, a consulting company focused on mitigating the risks of A.I., hopes to have its research implemented throughout the Bay Area’s many A.I. companies. Shlegeris sees A.I. as a technology that appears boundlessly capable, but he also believes it could be existentially dangerous.
The rise of the technology in recent years has led to divergent opinions about how the tech industry should regulate its exponential growth. The Bay Area is ground zero for the debate between those who oppose regulating A.I. and those who believe that, left unchecked, it could condemn humanity to extinction.
Shlegeris hopes Redwood Research can make headway with companies like Google DeepMind and Anthropic before his worst fears are realized.
Q: How would you describe the potential of A.I.?
A: I think that A.I. has the potential to be a really transformative technology, even more so than electricity. Electricity is what economists call a general purpose technology, where you can apply it to heaps and heaps of different things. Like, once you have an electricity setup, it affects basically every job, because electricity is just such a convenient way of moving power around. And similarly, I think that if A.I. companies succeed in building A.I.s that are able to replace human intelligence, this will be very transformative for the world.
The world economy grows every year and the world is getting richer. The world is getting more technologically advanced every year, and this has been true for basically forever. Growth sped up around the Industrial Revolution, and it’s been getting faster since then, mostly. And a big limit on how fast the economy grows is the limit on how much intellectual labor can be done, how much science and technology can be invented, and how effectively organizations can be run. Currently this is bottlenecked on the human population. But if we get the ability to use computers to do the thinking, it’s plausible that we will very quickly get massively accelerated technological growth. This might have extremely good outcomes, but also, I think, poses extreme risks.
Q: What are those risks? What is the worst-case scenario for A.I.?
A: I don’t want to talk about literally the worst-case scenario. But A.I.s whose goals are fundamentally misaligned with humanity’s, becoming powerful enough that they’re able to basically seize control of the world, and then killing everybody in the course of using the world for their own purposes… I think that is a plausible outcome.
Q: That’s certainly scary.
A: I think it’s conceivable that giant robot armies get built, at first by countries that want robot armies for the obvious reason that they’d be really helpful in fighting wars. But then the robot armies are expanded by A.I.s that autonomously want them built, purchasing them autonomously and building factories autonomously, and those armies turn around and kill everyone.
Q: So are we talking about a 1% chance?
A: More than 1%. Another bad outcome, which I think is conceivable, is that someone from an A.I. company seizes control of the world and appoints himself emperor of the world.
Q: Shifting back to the Bay Area’s A.I. industry: San Francisco appears to be a hotbed of emerging behemoths in the tech sector, while Berkeley and Oakland seem to be more of a hub for research and A.I. safety work. How have these disparate factions evolved in the Bay Area?
A: It’s largely a historical accident. There’s just been an A.I. safety community in Berkeley for a long time, basically just because. The Machine Intelligence Research Institute (MIRI), which used to be a big deal in this space, was based in Berkeley from around 2007, and then I think a community just nucleated around it. I know a lot of people who work at MIRI. I used to work there myself; they were in Berkeley, so when I ended up working for them, I moved to Berkeley. Another way of saying this is that Berkeley has been a hub of the rationalist community for a long time, and a lot of people who are interested in A.I. safety research, which I think you’re referring to, are associated with the rationalist community.
Q: I enjoy seeing a historical tie that explains how communities have grown, even with a technology like A.I. that only goes back 30-something years.
A: And the reason why the S.F. stuff is in S.F. is mostly just because that’s where VC startups have been historically. There’s just not very many big tech companies in Berkeley and Oakland.
Q: How does Silicon Valley factor into this division in A.I.?
A: If I were to draw in broad strokes, the big Silicon Valley companies — by which I mean Google and Apple and Meta — the way they look at stuff is ‘How are we going to make huge amounts of money given our vast resources of technical talent and capital?’ In my experience, those companies are just pursuing A.I. capabilities because they think it’ll help them build good products. The A.I. people at Meta, a lot of them are people who just got into it recently. But the people who started OpenAI and Anthropic were true believers who got into this stuff before ChatGPT, before it was obvious that this was going to be a big deal in the near term. And so you do see a difference, where the OpenAI people and Anthropic people are more idealistic. Sam Altman has been saying very extreme things about A.I. on the internet for more than a decade. That’s way less true of the Meta people.
Q: Do you think the hype that is coming out of these A.I. companies is overblown — or are they underselling it?
A: I think that a lot of people, especially tech journalists, have a tendency to be a bit cynical when they hear the A.I. people talk about how powerful they think A.I. might be. But I’m worried that that instinct is misfiring here. I think that the A.I. people are not over-hyping their technology. My sense is that the big A.I. companies, if anything, underhype what they’re actually building, because otherwise they would sound incredibly irresponsible. They sometimes say things about how big a deal they think their technology will be that make it sound crazy that private companies are allowed to develop it. I bet that if you went inside these companies, you would hear them say way crazier stuff than they say publicly.
Buck Shlegeris profile
Title: CEO of Redwood Research
Age: 30
Education: B.S. in Computer Science from the Australian National University
Residence: Berkeley, Calif.
5 things to know about Buck Shlegeris
- He has worked at the Machine Intelligence Research Institute in Berkeley, where he helped research A.I. safety theory.
- He has worked as a teaching assistant at App Academy in San Francisco, and he plans to donate his earnings from programming to charities aimed at improving the future.
- Originally from Australia, Shlegeris immigrated to the United States 10 years ago.
- He is a multi-instrumentalist who plays guitar, bass and saxophone.
- While studying at the Australian National University, he tutored students in programming languages like Python, JavaScript and Haskell.