Artificial intelligence bots are running rampant on the internet as they scour the web for data to train language models. Much of that data is content created by real people, many of whom are unhappy with their work being used that way. In response, companies are building tools to block AI bots from accessing data on their websites and in their products.
Why are people worried about AI bots?
Training generative AI requires significant amounts of data. To collect the information, several companies have AI bots scouring the web for content. Data comes in two forms: public and private. Public data is readily available on the internet for anyone to glean, while private data “includes things like text messages, emails and social media posts made from private accounts,” said The New York Times. The problem is that public data is running out, which is pushing AI companies to build bots that go after private data instead.
“As companies look to train their AI models on data that is protected by privacy laws, they’re carefully rewriting their terms and conditions to include words like ‘artificial intelligence,’ ‘machine learning’ and ‘generative AI,’” said the Times. Essentially, companies including Google and Meta have begun using private user data such as social media posts to train their AI models. People worry that training generative AI on private data could make it capable of replicating content created by humans, especially in areas like art, music and literature. “In three, four, five years’ time, there might not be entire segments of this creative industry because we’ll just be decimated,” Sasha Yanshin, a YouTube personality and co-founder of a travel recommendation site, told the Times.
How are companies fighting back?
Generative AI’s data thirst has presented a lucrative opportunity for companies that have a strong stock of private data. “Thanks to the scarcity of high-quality data and the immense pressure and demand to build even bigger and better models, we’re in a rare moment where data owners actually have some leverage,” said MIT Technology Review. For example, music labels have opted to sue the AI music companies Suno and Udio, claiming the two companies “made use of copyrighted music in their training data ‘at an almost unimaginable scale,’ allowing the AI models to generate songs that ‘imitate the qualities of genuine human sound recordings.’”
In a bigger step, Cloudflare, a content delivery network and cloud security platform, created a tool designed to block AI bots from scraping text from websites. “We hear clearly that customers don’t want AI bots visiting their websites and especially those that do so dishonestly,” said Cloudflare in a blog post. While this is not a surefire solution, because more advanced bots can mimic how a real person uses a website, such a block could nonetheless cut off a significant amount of bot activity.
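Site owners without a service like Cloudflare can apply the same basic idea themselves by refusing requests from crawlers that identify themselves. The sketch below is a minimal, assumption-laden illustration, not Cloudflare's actual tool: it uses a small Flask app and a hypothetical list of AI crawler User-Agent strings, and, as noted above, a bot that disguises its User-Agent would slip past it.

```python
# Minimal sketch: refuse requests from self-identified AI crawlers.
# The bot names below are illustrative assumptions, not an official list,
# and a scraper can evade this check by spoofing its User-Agent header.
from flask import Flask, request, abort

app = Flask(__name__)

# Hypothetical list of AI crawler User-Agent substrings to block.
BLOCKED_AI_BOTS = ["GPTBot", "CCBot", "Google-Extended", "anthropic-ai"]

@app.before_request
def block_ai_bots():
    user_agent = request.headers.get("User-Agent", "")
    if any(bot.lower() in user_agent.lower() for bot in BLOCKED_AI_BOTS):
        abort(403)  # Refuse the request before any content is served

@app.route("/")
def home():
    return "Hello, human readers."

if __name__ == "__main__":
    app.run()
```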
However, several content owners are “torn between their instinct to protect their intellectual property and their eagerness to take money from those AI makers,” said Axios. Platforms like Reddit and Stack Overflow are attempting to balance the use of AI with the protection of data, but the bot “free-for-all over access to web data is just the opening salvo of what will be an increasingly hot war.”