Stitch Fix wants its customers to be able to see how an outfit could look before it ever lands on their doorstep.
The company said Monday it will allow users to see how garments look on digital likenesses of themselves, using a feature it calls Stitch Fix Vision. Rather than seeing items on a model, whether digitally generated or real, customers will now be able to see how an item might look directly on their own body.
The company said that, to participate in the beta, users will need to submit one selfie and one full-body photo in the Stitch Fix app, which will be used to create an initial set of images. The items that appear in those images can be shopped, so users can purchase them directly or request that specific items be sent in their next Fix — what the company calls the subscription-based packages it sends users.
Stitch Fix said it will use existing data on customers, paired with human stylists’ input and an algorithm, to select the pieces users will see on their digital likenesses.
Tony Bacos, chief product and technology officer, said the technology is meant to make unique, personalized fashion more accessible to Stitch Fix customers.
“We believe shopping for clothes should be easy and fun, and that everyone should feel confident in what they wear,” Bacos said in a statement. “Vision delivers on both, providing first-of-its-kind personal style visualization and inspiration. It is an entirely new approach to style discovery, unlike any of the existing ‘virtual try on’ experiences that require shoppers to do all the work.”
The company said consumers will be able to see their digital likenesses in a variety of settings. The idea is that people will see not only the clothes, but also the context they might fit into. For instance, if a Vision image includes a pair of leather pants and a white ruffled shirt, the consumer might see their likeness on a city street; if the Vision image shows a floral skirt and a crop top, the consumer might see their likeness on the beach.
Stitch Fix said the Vision experience is powered by generative AI. The company built the technology behind the experience in-house and integrated several leading AI models into the backend.
As users interact with Vision, Stitch Fix can use that information to further personalize consumers’ experiences and give its stylists a clearer picture of the types of items that hit home.
Google announced a similar feature in May; the technology giant allows consumers to use photos of themselves to virtually simulate how they would look in various garments. At the time, it said users would only be able to try on one garment at a time — meaning that if a user was interested in a shirt and a pair of pants, they would have to use the try-on tool to demo each item separately.
That differs from Stitch Fix’s approach; the subscription fashion company has decided to show users full outfits selected by an algorithm.
Matt Baer, chief executive officer of Stitch Fix, said the tool is another way for the company to show its consumers how well it knows them.
“When Stitch Fix launched nearly 15 years ago, we disrupted the retail market and today, we are rewriting the retail playbook once again with our latest suite of innovations, including Stitch Fix Vision,” Baer said in a statement. “We have billions of data points on our clients’ fit and style preferences, and we are using these insights, as well as the latest in gen AI technology, our assortment of leading brands and expertise of our stylists, to deliver ultra personalization at scale and bring to life new ways for clients to discover the styles they will love.”