YouTube has announced it's working on a new set of tools designed to detect AI-generated content across its platform, tools it says were created to protect the likeness of creators.
In a new post on the official YouTube blog, the company's Vice President of Creator Products, Amjad Hanif, explained how the new tools represent YouTube's commitment to "responsible AI development," part of which is regulating the AI-assisted content on its platform. The blog post reveals the video platform has created a new tool capable of detecting the singing voice of a musician, or the musician's "likeness".
The same principle has been applied to another tool designed to identify AI-generated content showing the faces of actors, creators, athletes, political figures, and more. The tools are meant to serve as guardrails for how YouTube will deal with the influx of AI-generated video content on its platform as AI-generation tools make their way onto more people's everyday devices. That isn't to say AI-generated content isn't already being posted on YouTube, because it certainly is, and at an increasing rate.
These guardrails, along with the many others that will undoubtedly be implemented in the future, will help YouTube viewers distinguish between content created entirely by AI and content created by humans. It will be interesting to see where YouTube draws its distinctions on what constitutes an AI-generated video, and specifically how much of a video needs to be assisted by AI generation to qualify.