Celebrities are facing an enormous and fast-growing problem: Generative AI is fueling the unauthorized use of their name, image and likeness (NIL) in fake content distributed across the internet.
Yet mitigating the problem has become a perpetual game of whack-a-mole, given how fast infringing content is proliferating, as VIP+ illustrated with exclusive data measuring the sheer volume of offending material detected online since late 2022.
Conversations with sources at talent agencies and companies specializing in curbing the rising tide provided deeper insight into the problem itself and the developing approaches to address infringing content.
Proactive mitigation efforts are underway, but deployment is still in its early stages and unevenly applied across talent and their teams. Agency sources also cited legal uncertainty as a partial obstacle to knowing the best course of action.
Traditionally, finding NIL infringements and issuing Digital Millennium Copyright Act takedown requests has largely been a manual process, often undertaken by talent's own reps, which generally include attorneys, managers, publicists and agents. These teams typically send their clients weekly or monthly maintenance reports documenting infringements and takedowns.
But the manual nature of the takedown process may be inadequate amid the sheer scale of oncoming infringements. One agency source told VIP+ some clients have begun hiring cybersecurity companies to help them mitigate the onslaught of NIL deepfakes, but even this line of defense is still slow going. In short, a lot of offending material will be missed — or costs for talent will simply balloon as teams are forced to grow or work more billable hours.
"Even with these legal teams and cybersecurity companies finding these infringements, they're still missing so much because it's a very manual process," one agency source told VIP+. "The amount of infringements we're seeing for clients who spend hundreds of thousands of dollars of legal fees for this very manual process is really baffling."
Scale isn't the only challenge of gen AI fake content; another is the diffuse and evasive ways content is being disseminated across the internet. Sources described varied infringement scenarios becoming more common. For example, one agency source shared how talent likenesses have appeared in fake ads served on porn websites.
Yet another challenge for talent may be the increasing complexity of their own teams, as a growing pipeline of responders is tasked with finding violating content and issuing deepfake takedowns.
As the problem has grown, sources said some talent are beginning to work with emerging automated detection and takedown services offered by companies such as Vermillio and Loti, both of which WME has partnered with so that its clients can opt into the service at an extra cost.
Agency sources expected that more talent would begin to adopt such solutions because they're automated and "more all in one," rather than relying solely on numerous teams working manually against an impossible problem.
Solutions such as Loti and Vermillio fall under an emerging and fast-expanding category of startups offering data-protection services, some specifically catering to entertainment. Agency sources referenced meeting with "dozens" of startup solutions, whether sourced through referral or proactive research.
But not all are equally effective, particularly given what's required to address the scale and multifaceted nature of the problem for talent and IP. Few are capable of not just detecting and positively identifying AI-generated content but also matching it to a specific identity, analyzing whether the content is appropriate and then automatically issuing a takedown request — not to mention having the necessary understanding of the entertainment industry and the ability to work with talent.
RELATED: Exclusive Data on How Gen AI Is Fueling an 'Exponential' Rise in Celebrity NIL Ripoffs
One agency source relayed that vetting solutions meant establishing capability (e.g., being able to show proof of success), capacity to deliver at the needed scale (managing 1,000-plus clients), cybersecurity (ensuring the company is secure if and when client data is shared) and legal compliance (making sure the solution is compliant and fits the client's needs).
Both Vermillio and Loti scan large portions of the public internet to detect infringing content — whether misusing a specific individual's NIL or a rights holder's IP — and automatically issue DMCA takedown requests to have it removed.
These solutions are technically complex, involving a dense tech stack: systems that ingest enormous volumes of content every day, multiple machine learning models to analyze and detect AI-generated content, facial recognition to find and positively identify client faces, and automation to issue takedown requests across numerous platforms.
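The article describes these pipelines only at a high level. As an illustration, here is a minimal Python sketch of the triage logic such a system might use once its scanners, AI-content classifiers and facial-recognition models have scored a piece of content. Every name, threshold and data structure below is hypothetical, not the actual implementation of Vermillio, Loti or any other vendor.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ContentItem:
    """One scanned piece of content, already scored by upstream models."""
    url: str
    ai_score: float              # stubbed AI-content classifier output, 0.0-1.0
    face_match: Optional[str]    # client ID from a stubbed facial-recognition match

def triage(items, ai_threshold=0.9):
    """Queue a DMCA takedown for items that both look AI-generated
    and positively match a represented client's face."""
    takedown_queue = []
    for item in items:
        if item.ai_score >= ai_threshold and item.face_match is not None:
            takedown_queue.append(
                {"url": item.url, "client": item.face_match, "action": "dmca_takedown"}
            )
    return takedown_queue

items = [
    ContentItem("https://example.com/fake-ad", 0.97, "client-123"),
    ContentItem("https://example.com/real-photo", 0.12, "client-123"),
    ContentItem("https://example.com/ai-art", 0.95, None),  # AI-generated, but no client match
]
queue = triage(items)
print(queue)  # only the first item qualifies for a takedown request
```

In a real deployment the two scores would come from large model ensembles, and the queue would feed platform-specific takedown APIs and notice templates; the sketch only shows why both signals (AI detection and identity matching) must agree before a request is issued.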
For agencies, addressing this problem for talent has also meant communicating and collaborating much more directly with social media companies in particular.
"There are ongoing conversations at every agency, speaking with the major platforms, the YouTubes and the Metas," an agency source told VIP+. "We believe the largest player in being able to prevent and detect all of this stuff is the platform. And who's better at doing YouTube takedowns than YouTube?"
For their part, social media companies have strong incentives to build their own detection, personal identification and management tools. Earlier this month, YouTube announced it had developed a synthetic-singing identification technology within Content ID to help partners automatically detect and manage content that replicates their likeness — and said it would be releasing similar tools for people working across other industries, including creators, actors, musicians and athletes.
But even the most vigilant social platform can do only so much. One source described synthetic content commonly snaking across platforms: for example, a link embedded in a social media post directs the user to another platform or site (e.g., Patreon, a product page, an adult content site or an interactive chatbot app), enticing victims to a destination where they can transact.
That cross-platform "path to purchase" means even if a social media company can detect AI content on its own platform, it may not always have enough visibility to know it should take down a piece of content that directs a user to harmful content located off-platform.
As a result, third-party tech solutions such as those offered by Loti and Vermillio, which scan broader swaths of the internet, could prove essential even to social media companies, including by integrating directly into a platform to enhance content moderation in ways the platform couldn't otherwise manage.
Any partnership integration of a third-party service into a social platform can also put more weight behind takedown requests from talent teams and reps, making it likelier that platforms follow through.