
British Trade Org Petitions Against AI Duplication of Models’ Likenesses

Models are increasingly speaking out about the ways artificial intelligence could impact their livelihoods. 

The British Fashion Model Agents Association (BFMA) has launched an initiative called "My Face Is My Own," which urges regulators in the United Kingdom to put up guardrails around entities' ability to use artificial intelligence to manipulate humans' likenesses without their consent. 

The trade group shared a petition, signed by more than 2,300 models, calling on the industry and the government to recognize workers’ right to make decisions related to their own privacy and their own likeness. 


The models said the petition "confirms unequivocally that the model signatories do not (and have not) granted any permission for their likeness, image and/or characteristics, to be used for any artificial intelligence purposes" and noted that clear, written, voluntary consent should be required before any company or entity can leverage a model's digital likeness using AI. 

The signatories said that, because the UK government has not passed laws specifically addressing digital doubles, models must stretch existing laws that were never designed to regulate AI systems. 

“There is currently no single clear legal protection for misuse of an individual’s image (including for AI purposes). This current position, where the protection is so piecemeal, results in unequal bargaining power between commercial stakeholders and leaves individuals exposed to unauthorized use of their image,” the signatories wrote in the petition.

They further note that, in other countries, governments have started to put protections into place for people who find their digital likenesses used without consent. 

The EU AI Act includes specific requirements around disclosing deepfakes, which are images meant to imitate a real person in a falsified situation. 

In Tennessee, legislators passed and signed the Ensuring Likeness, Voice and Image Security Act of 2024 (ELVIS Act), which protects people’s likenesses from unauthorized use, particularly for commercial purposes. 

In May, President Donald Trump signed the federal Take It Down Act into law. That legislation “prohibits the nonconsensual online publication of intimate visual depictions of individuals, both authentic and computer-generated, and requires certain online platforms to promptly remove such depictions upon receiving notice of their existence.” 

While none of these pieces of legislation specifically targets models and their jobs, they offer a jumping-off point for other bills to address the types of issues the BFMA raises in its letter. Some of that advocacy has already proved fruitful overseas; other trade groups have fought for similar protections from AI. 

In the U.S., New York legislators passed the New York Fashion Workers Act, which requires that modeling agencies or brands interested in using a model's digital likeness secure that model's written consent. The law, which took effect in June, also ensures that agencies cannot hold power of attorney over a model's digital likeness, meaning those rights can't be arbitrarily written into contracts at the demand of a brand or retailer. 

The Fashion Workers Act is a first-of-its-kind piece of legislation. Sara Ziff, founder and executive director of advocacy group Model Alliance, told Sourcing Journal that she hopes the protections it extends to models will become an industry standard.

The BFMA urged UK regulators to follow the lead of the states and countries working to offer humans some semblance of protection from exploitation at the hands of AI. 

“Without clear protections, models face their images being used without consent or compensation and there could be a dramatic loss of jobs within the industry,” the signatories said in the letter.

As models struggle to discern whether the benefits of licensing their likenesses to generative AI campaigns outweigh the potential risks of a largely unregulated environment, companies have started experimenting with AI-created models. 

H&M announced earlier this year that it had partnered with several models to create digital doubles of those people; the company received mixed feedback and has been met with a number of questions about how its digital likeness program actually works—as well as the compensation considerations that go along with it. 

Other companies have worked to replace human models with AI-generated models that don’t bear specific resemblance to one particular human; for instance, Eileen Fisher has partnered with Veesual on AI-based imagery and models for its product description pages. 

Criticism of AI-generated models is far from new. Some companies, like Levi's, have faced public backlash over past attempts to use AI in campaigns. More recently, criticism has been directed at companies like Guess, which ran an advertisement featuring an AI-generated model in an issue of American Vogue. 

The BFMA said it's not merely concerned with models' rights, but also worried about the potential impact on others working in the creative industries. The campaign, the trade organization contends, is meant to protect these professionals as well.

“For most fashion shoots, there are teams including photographers, hair and makeup artists, stylists and more. None are protected and, without regulation, may well become victims of the next evolution of digital trends,” the BFMA said.