YouTube will introduce disclosure requirements and rules for content generated using artificial intelligence (AI). The upcoming changes are part of the platform's effort to integrate and adapt to generative AI, wrote Jennifer Flannery O'Connor and Emily Moxley, vice presidents of product management at YouTube, in a blog post. Expressing enthusiasm about the technology's potential, O'Connor and Moxley emphasized the profound impact it will have on creative industries in the years to come. They stated: "We're taking the time to balance these benefits with ensuring the continued safety of our community at this pivotal moment — and we'll work hand-in-hand with creators, artists, and others across the creative industries to build a future that benefits us all."

The disclosure requirements and new content labels will roll out in the coming months, obliging creators to specify whether content has been manipulated or synthetically generated, with or without the use of AI tools. According to the post, the new label will appear in the description panel or, for sensitive topics, more prominently on the video player itself. Creators who fail to disclose this information could face penalties, including content removal. Additionally, YouTube plans to introduce a feature allowing individuals to request the removal of AI-generated or synthetically altered content that mimics a person's face, voice, or other identifiable characteristics. Requests to remove material that is parody or involves well-known individuals will be held to a higher standard.

At the same time, YouTube is working to improve the speed and accuracy of its content moderation systems, which combine human reviewers with machine learning technology. The announcement follows Meta's recent declaration that it will impose new controls on AI-generated ads ahead of the 2024 presidential election.