YouTube Adds New AI-Generated Content Labeling Tool

YouTube has started rolling out a disclosure tool for creators to self-label when their videos contain AI-generated or synthetic material.

The checkbox appears in the upload and posting flow, and creators are required to disclose “altered or synthetic” content that looks realistic. That includes making a real person appear to say or do something they didn’t, altering footage of real events or places, or showing a “realistic-looking scene” that never happened. Examples YouTube offers include showing a fake tornado moving toward a real town or using a deepfake voice to have a real person narrate a video.

However, creators don’t need to use the tool when generative AI is used for productivity purposes, such as generating scripts or content ideas, or when the synthetic media is unrealistic or the changes are inconsequential.

The labels will appear in video descriptions and, for videos on more sensitive topics such as health, news, elections, or finance, more prominently on the video itself. YouTube says it will also add labels on its own when creators fail to disclose, particularly if the altered content could confuse or mislead viewers, and it plans to introduce enforcement measures for creators who don’t disclose.

The new disclosures are meant to prevent users from being duped into believing that a synthetically created video is real, as new generative AI tools are making it harder to differentiate between what’s real and what’s fake. The launch comes as experts have warned that AI and deepfakes will pose a notable risk during the upcoming U.S. presidential election.

Viewers will start seeing the labels across all YouTube formats in the coming weeks, beginning with the YouTube mobile app and expanding to desktop and TV.

In November, YouTube detailed its AI-generated content policy, essentially creating two tiers of rules: strict rules that protect music labels and artists, and looser guidelines for everyone else. Deepfake music, such as Drake singing an Ice Spice song or rapping lyrics written by someone else, can be taken down by the artist’s label if they object.

