Opinion: Big tech companies are burning through reservoirs of trust with AI

We live in an interesting time. We are watching some of the big tech companies — though not all — move fast and break their own things.

Google has been burning through its remaining reservoir of trust in search via its new AI overviews, which have occasionally offered up misinformation in reply to searches — for example, that Barack Obama was the first Muslim president of the United States or that staring at the sun for five to 10 minutes per day is safe. (Following public outcry, the company has reduced the number of such overviews presented.)

Microsoft burned through some of its remaining stockpile of trust in cybersecurity with its Recall feature, which was supposed to take screenshots of a computer every few seconds and pool that information into a database for future searches. (After an explosion of articles criticizing the feature as a “security disaster,” Microsoft first announced that Recall would not be enabled by default in Windows and then removed the feature entirely from the launch of the company’s Copilot Plus PCs.)

After publishing research claiming that 67% of the remote workers interviewed “trust their colleagues more when those colleagues have video on” in their Zoom calls, Zoom’s CEO is now aspiring to video meetings populated with AI deepfakes (described by the journalist interviewing him as “digital twins” that can go to Zoom meetings on your behalf and even make decisions for you).

Amazon, meanwhile, is full of AI-generated knockoffs of books — including “an ersatz version of ‘Artificial Intelligence: A Guide for Thinking Humans.’”

Meta, which didn’t really have a reservoir of trust to mine, is inserting AI-generated comments into conversations among Facebook group members (featuring, at times, weird claims of AI parenthood). And X, still trying not to be Twitter and already flooded with bots, just announced an updated policy under which it will allow “consensually produced and distributed adult pornographic content” including “adult nudity or sexual content that is AI-generated” (though not content that is “exploitative … or promotes objectification” — because of course AI-generated content would not do that).

Having launched the generative AI era with its initial release of ChatGPT, OpenAI followed up by opening the GPT Store, a platform through which users can distribute software that builds on ChatGPT to add specific features, creating what the company calls “custom versions of ChatGPT.” In its January announcement of the store, the company said that users had already created more than 3 million such versions. The trustworthiness of those tools will now also affect the trust that users have in OpenAI.

Is generative AI a “personal productivity tool,” as some tech executives have argued, or primarily a destroyer-of-trust-in-tech-companies device?

In the process of rushing to deploy products, however, those companies are not just disrupting themselves. By hyping their generative AI products beyond recognition and pushing for their adoption by people who don’t understand the limitations of those products, they are disrupting our access to accurate information, our privacy and security, our communication with other humans and our perception of all the various organizations (including government agencies and nonprofits) that are adopting and deploying flawed generative AI tools.

Generative AI also has a massive negative environmental impact. According to a recent article published by the World Economic Forum, the “computational power required for sustaining AI’s rise is doubling roughly every 100 days” — and 80% of the environmental impact occurs at the “inference” or usage stage, rather than in the initial training of the algorithms. The “inference” pool includes all the AI-generated overviews in search, the AI-generated comments in social media groups, the AI-generated fake books on Amazon and the AI-generated “adult” content on X. This pool, unlike the reservoirs of trust, is growing by the day.

Many of the companies generating AI tools have internal corporate efforts focused on ESG (environmental, social and governance standards) and RAI (responsible AI). Despite those efforts, however, generative AI is rapidly consuming energy, water and rare elements — including trust.

Irina Raicu is the director of the Internet Ethics program at the Markkula Center for Applied Ethics at Santa Clara University. 
