When it comes to artificial intelligence, both critics and enthusiasts can agree on one thing: Few sectors will remain untouched by the latest breakthroughs in machine intelligence, which can now speak, write and produce creative works as well as — if not better than — humans.
The pros and cons of generative AI’s innovation and inventions run deep, especially for design-driven sectors like fashion. WWD explored the topic last month at Fairchild’s Tech Symposium with panelists Michael Ferraro, executive director of the Fashion Institute of Technology’s Design and Technology Lab; Shelley Niu, senior designer at Stability.ai; and Michael Musandu, chief executive officer and founder of Lalaland.ai. They were in conversation with WWD Paris general assignment editor Rhonda Richford.
Lalaland might ring a bell. As a tech partner for Levi Strauss & Co., the firm found itself in the middle of a controversy last spring when the brand unveiled a new diversity campaign featuring a range of generative AI-created fashion models. The backlash was swift against the notion of using fake people to show diversity instead of hiring actual Black and brown models.
The criticism appeared to catch Musandu, an engineer in computer science and artificial intelligence by training, by surprise. He explained that “the whole mission for Lalaland.ai has never been to replace real models. We need real models, especially from brands that are serious about inclusion efforts.”
As he told it, the project came together as a practical matter with good intentions, and it seemed to make sense from a business perspective. On a personal level, it also hit close to home.
“As a person of color, I felt that pain point that we all commonly do as consumers, which is just not always seeing models that resemble us,” he said.
“[But] in defense of most fashion brands, it’s just way too costly without them having to efficiently increase their price per product — which is something we don’t want at all — to actually showcase, let’s say, nine models or 12 models on one product.”
Lalaland’s stated aim is to supplement traditional photography, he continued, which can offer benefits in different scenarios. When consumers can see product on people of various sizes, ages and ethnicities, it can give them more confidence in their purchasing decisions. It’s also a helpful tool for business-to-business sales.
“So you could have EMEA market versus U.S. market having different types of models that represent those consumers,” Musandu said. Another intriguing use case for Lalaland applies generative AI even earlier, way back in the product design process. At this stage, the tech allows designers to visualize their work as they create it, so they can make adjustments in real time.
This benefit can’t be overstated, according to Niu. Designers can implement ideas more efficiently, saving significant time and resources, whether they work in fashion design or in the advertising and media that represent those brands.
“The tools facilitate agencies and marketers alike to expand their creative pool and brainstorming,” Niu said. What used to take days or weeks and countless mood boards, sketchbooks or Photoshop files can happen lightning fast now. “Those who have already adopted generative AI in their projects have created jaw-dropping content in record time.”
Sometimes, it’s not about speed, but about filling a gap that badly needs filling.
Ferraro described a student AI project prompted by financial institution HSBC that focused on generating inclusive fashion imagery for people with disabilities.
“We really looked at how to use AI and 3D modeling and animation to create an aspirational vision of people with disabilities in the metaverse, loosely speaking,” he explained. When the students researched how this segment of society is represented now, one thing became quite clear: “It was really not particularly inspiring…the AI framework and the database of images that were available for people with disabilities, in sort of elevated fashion shows, was really pretty limited.”
Lacking sufficient real-world imagery, the team generated hundreds of variations to feed into the database. AI was applied in numerous ways, from churning through early ideas to creating new fabric patterns that went into the apparel.
The students used 3D fashion prototyping software to create a small collection, dressed the avatars in the designs and set them against dramatic environments with atmospheric effects, lighting and reimagined touches, like a highly stylized wheelchair. At the center of these generated images were women from their 20s to 70s in a wide range of sizes, ethnicities and skin colors.
For an audience typically faced with dreary wardrobes that prioritize function over fashion, it’s easy to see how the results could feel fresh and groundbreaking. Now that such empowering visuals exist in the world, they could even influence future designs.
“So there is kind of an interesting platform that people in the future can start to use and build on,” Ferraro said. “And the process involves coming up with terms that were appropriate for a [fashion] pageant, where you’re looking for a diverse community of people of color, that are full-figured, and they were at a fashion pageant with designs inspired by Chanel.”
Generative AI is especially good at creating works that never actually existed before. That’s also the specialty of artists and designers — which could mean that fashion’s mission for the foreseeable future may be to explore all the ways they can come together.