
Byte-Sized AI: Google Lens Gets an Upgrade; Zyler Launches Customizable Virtual Try-On Tool

Byte-Sized AI is a bi-weekly column that covers all things artificial intelligence—from startup funding, to newly inked partnerships, to just-launched, AI-powered capabilities from major retailers, software providers and supply chain players.

Google enables Lens to show in-store product inventory

Google announced in mid-November that it had added new functionalities to its Google Lens product. The feature already allowed consumers to search for a product by taking a photo of it. For instance, if a shopper loved a sweater they saw their friend wearing, they could upload an image of it; from there, Google would compare the image to others and show relevant results, allowing the consumer to find the sweater they had been looking for.


Now, Google Lens is going one step further. When consumers are in stores, they can use Google Lens to see customer reviews, comparable products, market pricing for the item and more. According to the technology company, the tool represents “major advancements in [its] AI recognition technology.” The shiny new version of Lens is powered by its generative AI tool, Google Gemini; its real-time product information database, Shopping Graph; as well as inventory information from various brands and retailers.

At the moment, Lens is available for beauty products, toys and electronics “at stores of all sizes that share their local inventory with Google.” The tool may later expand to categories with less obvious brand markings, such as apparel and other fashion items that, unlike beauty products, toys or electronics, don’t immediately bear a logo or product name on their packaging.

The tool is meant to improve the in-store experience for consumers. It comes as brands and retailers continue to pursue higher customer retention rates in a competitive retail environment. Many have gone the extra mile to interconnect their in-store experiences with their e-commerce experiences to increase personalization and brand perception.

“Seventy-two percent of Americans say they use their smartphone while shopping in store, and more than half say they’ve left a store empty-handed because they didn’t feel confident enough to buy,” the company wrote in a blog. “This new feature can give shoppers the information and confidence they need to make a decision on the spot.”

In addition to the Google Lens update, the technology company also announced its plans to integrate shopping with Google Maps. During the holiday season, consumers will be matched with stores selling products of interest after typing what they’re looking for into Google Maps, rather than Google’s general search function.

For instance, if a consumer searches, “holiday sweaters” on Google Maps, they may be directed to a nearby Old Navy, J. Crew or Macy’s store that has inventory.

Goddiva partners with Zyler for access to new virtual try-on product

Women’s clothing brand Goddiva has partnered with Zyler for virtual try-on. In tandem with its latest partnership, the technology company announced it had launched a new product, called the AI-Driven Digital Dressing Room.

The new product enables users to enter their measurements, along with a headshot, to create a virtual visualization of their own bodies. They can then use that avatar to browse and understand how items will fit. According to Zyler, Digital Dressing Room can also suggest the best size for each individual consumer, based on how a garment is meant to fit and product details from brands and retailers.

As is the case with most virtual try-on activations, Digital Dressing Room is slated to boost engagement for Goddiva and other brands and retailers leveraging it. Zyler also anticipates the tool will help its clients reduce return rates, which may be particularly useful this season as high return rates have already begun to plague the industry.

Amber Domenech, head of e-commerce at Zyler, said the company hopes the tool will help decrease instances of wardrobing.

“We hope the combination of the virtually generated customer image alongside the accurate sizing recommendation will help improve customer confidence, increase conversions and reduce returns as customers will be less likely to order various styles or sizes to try on at home,” she noted in a statement.

Alexander Berend, CEO of Zyler, said the company looks forward to helping improve customer outcomes in e-commerce.

“We’re proud to launch this exciting partnership with Goddiva and bring our virtual try-on technology to their customers,” Berend said in a statement. “We know how essential it is for shoppers to feel confident and enthusiastic about their online purchases. By enabling customers to see themselves in their chosen outfits, we’re creating a more enjoyable, personalized and inclusive shopping experience that not only encourages informed decisions but also reduces returns and enhances overall satisfaction.”

Zyler’s new tool, Digital Dressing Room, has launched on Goddiva’s site. Photo courtesy of Zyler.

Amazon invests another $4 billion with partner Anthropic

Amazon has continued its love affair with AI company Anthropic. The two announced late last month that the e-commerce giant invested an additional $4 billion in Anthropic, and that Amazon Web Services (AWS) has become Anthropic’s primary training partner. The AI company will use AWS chips to train, and subsequently bring to market, its future foundation models.

Some AWS customers will see the benefits of the continued partnership; in a blog, Amazon noted that it has given AWS customers early customization access to Anthropic’s generative AI model, Claude. According to the two companies, access to the foundation model that underlies Claude, as well as to the generative AI tool itself, has enabled clients to power “everything from customer service chatbots, coding assistants and translation applications, to drug discovery, engineering design and complex business processes.” In short, the models have helped clients’ front-end user experiences and back-end processes, increasing efficiency; in the retail industry at large, consumers see only a fraction of the AI that helps bring their shopping and customer service experiences to life.

Matt Garman, CEO of AWS, said customers’ abilities will only continue to get sharper.

“By continuing to deploy Anthropic models in Amazon Bedrock and collaborating with Anthropic on the development of our custom Trainium chips, we’ll keep pushing the boundaries of what customers can achieve with generative AI technologies. We’ve been impressed by Anthropic’s pace of innovation and commitment to responsible development of generative AI, and look forward to deepening our collaboration.”

Tukatech uses gen AI to allow customers to create hyperspecific models

Tukatech, which provides 2D and 3D technology to the fashion industry, announced Monday it has launched an AI-powered tool it calls EraseID, which helps brands create AI-rendered versions of 3D models for product visualization.

According to a release from the company, users can prompt the system to create an AI-powered model that meets their specifications; customizable traits include age, hair length and color, eye color, facial expression, skin tone and more.

Geoff Taylor, president of Tukaweb, said the technology fills a gap in the company’s existing offerings. Previously, clients could use basic avatars or 3D models in imagery of their products, but those back-end images couldn’t always be used on e-commerce sites because of their quality. The technology can be paired with Tuka3D avatars to refine their general look.

To enable the technology, Tukatech partnered with PiktID, which uses generative AI to process the images at scale in a photorealistic way.

“Our 3D models have always been exceptional for visualizing the look of fabrics, prints, and trims but the facial features were not photorealistic, potentially limiting use for e-commerce. Now that challenge has been solved with EraseID,” Taylor said in a statement. “I was able to generate five different model looks for the same garment in about five minutes, the very first time I used the application. That is the sort of magic AI can deliver with EraseID.”

At left, model on Tukatech’s Tuka3D application. At right, images created with Tukatech’s new tool, EraseID. Photo courtesy of Tukatech.