AI Training Data Dilemma: OpenAI, Google, and Meta’s Questionable Use of YouTube Content

Authored by: Ms. Tanima

KEY HIGHLIGHTS

1. Controversial Data Use: Google and OpenAI’s alleged use of YouTube transcripts for AI training sparks ethical debate and copyright concerns.
2. Creators’ Rights: Content creators’ intellectual property rights are jeopardized by the use of their YouTube content without consent, highlighting issues of fair use and transparency.
3. AI Development Challenges: The reliance on unconventional data sources underscores the industry’s struggle to acquire high-quality training data, necessitating ethical considerations in AI advancement.
INTRODUCTION

The New York Times recently reported that some of the biggest tech giants have been using transcripts from YouTube videos to train their powerful AI language models, potentially violating creators’ copyrights. According to the report, OpenAI used its speech recognition tool Whisper to transcribe over a million hours of YouTube content, which was then fed into GPT-4, the AI model that powers ChatGPT Plus, as training data. Google was also accused of doing the same, with teams at the company allegedly scraping YouTube videos to build up datasets for its large language models (LLMs) like Bard (now Gemini).

Google has acknowledged that “unauthorized scraping or downloading of YouTube content” goes against their policies, but the report suggests that the company may have turned a blind eye to OpenAI’s YouTube transcript heist because they were doing similar things themselves. Both companies had reportedly hit limits on the amount of useful training data they could find from more conventional sources like books, websites, and databases. OpenAI exhausted useful supplies back in 2021, for instance. So, these companies started looking at new data streams like videos and podcasts.

OpenAI and Google have defended their practices, claiming they only use public data or content where they have permission. However, the allegations raise some thorny questions about fair use, copyright, and data privacy. After all, most YouTube creators probably didn’t expect their videos would end up transcribed without their knowledge. Training data has become a critical ingredient in AI development: models generally improve with the volume of data they are trained on, and as the technology has advanced, demand for large quantities of high-quality data has surged, pushing companies toward unconventional and sometimes controversial acquisition methods. According to a recent report from The Wall Street Journal, AI companies are running into a wall when it comes to gathering high-quality training data, and The New York Times detailed some of the ways companies have dealt with this, including practices that fall into the hazy gray area of AI copyright law.

HUNGER FOR TRAINING DATA

The story opens on OpenAI, which, desperate for training data, reportedly developed its Whisper audio transcription model to get over the hump, transcribing over a million hours of YouTube videos to train GPT-4, its most advanced large language model (LLM). That’s according to The New York Times, which reports that the company knew this was legally questionable but believed it to be fair use. OpenAI president Greg Brockman was personally involved in collecting the videos that were used, the Times writes.

OpenAI spokesperson Lindsay Held told The Verge in an email that the company curates “unique” datasets for each of its models to “help their understanding of the world” and maintain its global research competitiveness. Held added that the company uses “numerous sources including publicly available data and partnerships for non-public data,” and that it’s looking into generating its own synthetic data. The Times article says that the company exhausted its supplies of useful data in 2021 and discussed transcribing YouTube videos, podcasts, and audiobooks after blowing through other resources. By then, it had trained its models on data that included computer code from GitHub, chess move databases, and schoolwork content from Quizlet.

Google also gathered transcripts from YouTube, according to the Times’ sources. Google spokesperson Matt Bryant said that the company has trained its models “on some YouTube content, per our agreements with YouTube creators.” The Times writes that Google’s legal department asked the company’s privacy team to tweak its policy language to expand what it could do with consumer data from its office tools, such as Google Docs.

Bryant told The Verge in an email that Google has “seen unconfirmed reports” of OpenAI’s activity, adding that “both our robots.txt files and Terms of Service prohibit unauthorized scraping or downloading of YouTube content.” YouTube CEO Neal Mohan said something similar this week about the possibility that OpenAI used YouTube videos to train its Sora video-generating model.
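For readers unfamiliar with the robots.txt files Bryant refers to: robots.txt is a plain-text file that tells crawlers which paths on a site they may fetch, and Python’s standard library can evaluate one. The sketch below uses a hypothetical rule set (illustrative only, not YouTube’s actual file) to show how a compliant scraper would check permission before downloading anything.

```python
from urllib.robotparser import RobotFileParser

# Hypothetical robots.txt rules -- an illustrative assumption,
# NOT the contents of YouTube's real robots.txt file.
rules = [
    "User-agent: *",
    "Disallow: /watch",
]

rp = RobotFileParser()
rp.parse(rules)  # normally populated via rp.set_url(...) and rp.read()

# A compliant crawler consults can_fetch() before every request.
print(rp.can_fetch("MyBot", "https://example.com/watch?v=abc"))  # False
print(rp.can_fetch("MyBot", "https://example.com/about"))        # True
```

A scraper that ignores this check is exactly what Google’s statement describes as “unauthorized scraping,” since robots.txt expresses the site operator’s crawling policy.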

Meta likewise bumped up against the limits of good training data availability, and in recordings the Times heard, its AI team discussed the unpermitted use of copyrighted works while working to catch up to OpenAI. After going through “almost every available English-language book, essay, poem and news article on the internet,” the company apparently considered steps like paying for book licenses or even buying a large publisher outright. It was also limited in how it could use consumer data by privacy-focused changes it made in the wake of the Cambridge Analytica scandal.

CONCLUSION

Google, OpenAI, and the broader AI industry are wrestling with quickly evaporating training data for their models, which get better the more data they absorb. The Journal wrote this week that companies’ demand for data may outpace the supply of new content by 2028. Possible solutions mentioned by the Journal on Monday include training models on “synthetic” data created by AI models themselves, or so-called “curriculum learning,” which involves feeding models high-quality data in an ordered fashion in the hope that they can make “smarter connections between concepts” using far less information. Neither approach is proven yet. The companies’ other option is using whatever they can find, whether they have permission or not, and based on multiple lawsuits filed over the last year or so, that route is, let’s say, more than a little fraught.
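To make the curriculum-learning idea concrete, the sketch below orders a toy corpus from easy to hard before it would be handed to a training loop. The difficulty score used here (raw text length) is purely an illustrative assumption; real curricula use model-based or linguistic difficulty measures, and nothing in the reporting says which measure any company actually uses.

```python
# Minimal sketch of curriculum learning's core data-ordering step.
# Difficulty proxy: text length (an illustrative assumption only).

def curriculum_order(examples, difficulty=len):
    """Return training examples sorted easiest-first by the difficulty proxy."""
    return sorted(examples, key=difficulty)

corpus = [
    "the quick brown fox jumps over the lazy dog",
    "a cat",
    "dogs bark at night",
]

# Shortest (easiest) examples come first; a trainer would then
# feed batches to the model in this order.
for text in curriculum_order(corpus):
    print(text)
```

The hope, as the Journal describes it, is that seeing easy examples first lets a model form “smarter connections between concepts” from less total data; the ordering step itself is cheap, and the open question is whether it actually reduces the data requirement.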

Training data has become a critical component in the development of AI technology, and the difficulty of gathering enough high-quality data has pushed companies toward controversial acquisition methods, including transcribing YouTube videos in ways that potentially violate creators’ copyrights. As AI technology continues to advance, demand for large volumes of high-quality data will only increase, making it essential for companies to find ethical and legal ways to acquire it.
