Just imagine how convenient it would be to try on clothes without stepping into a fitting room, leaving your home, or even having the item shipped.
Well, believe it or not, Google’s AI virtual try-on feature is turning this dream into reality.
Much like Ikea’s augmented reality tool that lets customers visualize furniture in their homes, this groundbreaking tool is here to revolutionize the online shopping experience.
Google’s AI virtual try-on feature lets users see how an article of clothing would look on real models spanning a wide range of body shapes, skin tones, and hair types. Shoppers can now get a realistic picture of how a piece of clothing will look on someone like them, making online shopping a more confident and personalized experience.
So, let’s explore its benefits and challenges in detail.
Google’s virtual try-on feature might be a game-changer in online shopping. But what exactly is it, and how does it work? Let’s break it down.
The virtual try-on tool uses a generative AI model, a type of artificial intelligence that can generate new data from existing data sets. In this case, the AI takes a clothing photo and generates a realistic image of how that clothing would look on different models. It’s like having a digital dressing room right on your screen!
But the magic doesn’t stop there. The AI relies on a technique called a diffusion model, a machine learning method that first corrupts an image by gradually adding ‘noise’ and then learns to reverse the process, removing that noise step by step to produce a new picture. The AI was trained on pairs of images, each showing the same person wearing a garment in two unique poses. This helps the AI understand how clothing drapes, forms wrinkles, and looks on different body shapes.
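To make the noising idea concrete, here is a minimal NumPy sketch of a diffusion model’s forward (noise-adding) process. This is purely illustrative — Google’s production system is a large neural network, and every name and number below is a hypothetical stand-in, not part of their implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# A stand-in "image": in practice this would be a garment photo tensor.
image = rng.uniform(0.0, 1.0, size=(8, 8))

def add_noise(x, t, num_steps=100):
    """Forward diffusion step: blend the image with Gaussian noise.

    At t=0 the image is untouched; near t=num_steps it is almost pure noise.
    """
    alpha = 1.0 - t / num_steps          # fraction of the signal that survives
    noise = rng.standard_normal(x.shape)
    return np.sqrt(alpha) * x + np.sqrt(1.0 - alpha) * noise

slightly_noisy = add_noise(image, t=10)   # still mostly recognizable
very_noisy = add_noise(image, t=95)       # almost pure noise

# A trained denoiser learns to run this process in reverse, recovering a
# clean image (here: the garment on a model) one small step at a time.
print(slightly_noisy.shape)  # (8, 8)
```

Generating a try-on image then amounts to starting from noise and letting the learned model denoise it, guided by the garment photo and the chosen model.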
The initial launch of this feature focuses on women’s tops from brands like H&M, Loft, Everlane, and Anthropologie. So, whether you’re searching for the perfect piece for a night out or a comfortable top for a casual day at home, you can now see how it might look on a model representing your body, skin tone, and hair type. This is a significant step towards fashion inclusivity and a more personalized shopping experience.
Google’s AI-powered virtual try-on feature has some challenges, such as ensuring the accuracy of AI-generated clothing images. The AI must accurately depict how a piece of clothing will form wrinkles, drape, and fit different body shapes. This requires a complex understanding of fabric dynamics, lighting, and human anatomy.
Another challenge is representing a wide range of models in clothing. The AI must generate images that accurately represent different body sizes, skin tones, hairstyles, and body shapes. This is crucial for creating a genuinely inclusive shopping experience.
Furthermore, integrating this feature with existing e-commerce platforms and websites presents another hurdle. Each online retailer’s unique website design and infrastructure can complicate integrating Google’s virtual try-on tool. Retailers may need to adjust their product listings and website design to accommodate this feature, which could require time and resources.
Despite these challenges, Google’s engineers and AI specialists continuously work to improve the virtual try-on feature. Their goal is to create a tool that enhances the online shopping experience and pushes the boundaries of what’s possible with AI in the fashion industry.
Google’s virtual try-on tool is more than just a neat feature—it’s a testament to the power of artificial intelligence and the strides we’ve made in machine learning. But what’s going on behind the scenes? Let’s take a closer look.
At the heart of this tool is a generative AI model. This type of AI is capable of creating new content from existing data. In the case of the virtual try-on feature, the AI takes an image of a clothing item and generates a new picture of that item on a model. It allows shoppers to see how an item of clothing may look on various body shapes, skin tones, and hair types.
But the AI doesn’t just slap the clothing image onto a model. It uses a diffusion model, a machine learning technique that gradually adds ‘noise’ to an image and then learns to remove it step by step. This process helps the AI generate a realistic picture of how the clothing item will look on the model.
The AI was trained using pairs of images, each showing a model wearing a garment in two poses. This helps the AI understand how clothing moves and drapes on a human body, allowing it to generate more accurate and realistic images.
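The training recipe described above can be sketched as a standard denoising objective: corrupt the target photo with known noise, then score how well the network predicts that noise given the conditioning images. The snippet below is a hypothetical illustration — random arrays stand in for the photo pairs, and `toy_denoiser` is a fixed placeholder, not a real learned network.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-ins for one training pair: same person, same garment, two poses.
person_pose_a = rng.uniform(size=(16, 16))   # conditioning image
target_pose_b = rng.uniform(size=(16, 16))   # image the model must reconstruct

# Forward process: corrupt the target with known Gaussian noise.
alpha = 0.7
noise = rng.standard_normal(target_pose_b.shape)
noisy_target = np.sqrt(alpha) * target_pose_b + np.sqrt(1.0 - alpha) * noise

def toy_denoiser(noisy, conditioning):
    """Hypothetical stand-in for the neural network (a fixed linear guess)."""
    return 0.5 * noisy + 0.0 * conditioning  # a real model would be learned

# Training signal: distance between predicted noise and the true noise.
predicted_noise = toy_denoiser(noisy_target, person_pose_a)
loss = float(np.mean((predicted_noise - noise) ** 2))
print(loss)
```

Minimizing this kind of loss across many pose pairs is what teaches the network how garments move, drape, and wrinkle across bodies.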
This combination of generative AI and diffusion models makes Google’s virtual try-on tool powerful. It’s not just about showing shoppers how a piece of clothing might look—it’s about creating a more inclusive and personalized shopping experience.
Lilian Rincon, Google’s Senior Director of Product, Shopping, has shared her insights on the impact of the virtual try-on feature:
“We photographed real people from sizes extra small to 4XL with different skin tones, body shapes, and hair types. And we created a diffusion-based neural network that allows us to take any garment photo plus a photo of a person and visualize how that garment might look on the person in different poses. We even preserve the fabric nuances, including the draping, the wrinkles, and shadows. We’ll roll out this technology broadly across retailers to help us make Google a more helpful and inclusive shopping experience.”
While the virtual try-on feature has been praised for its innovation and inclusivity, it has also faced criticism from some corners of the fashion industry.
There are also concerns over user privacy, as the feature requires users to input personal information such as body size and skin tone. In response to these criticisms, Google emphasizes its rigorous privacy standards and continues to refine the AI models to improve the accuracy and realism of virtual try-on images. The company also plans to expand to other clothing categories and men’s clothing over the next year.
Furthermore, Google’s group product manager, Shyam Sunder, stated that the company has no plans to monetize virtual try-on, as it is more interested in providing a free value add for brands using AI.
Google’s AI-powered virtual try-on feature may soon change online retail as we know it. It offers a solution to the long-standing problem of not being able to try on clothes when shopping online. While it faces some challenges and criticisms, the potential benefits for shoppers and retailers are immense. As with any new technology, it will continue to evolve and improve over time. One thing is for sure: the future of online shopping is here, and it’s more exciting than ever.
For more details on this revolutionary feature, check out the official announcement on Google’s blog.
Disclosure: This post may contain affiliate links, and if you decide to buy any of the promoted products, I may receive a commission at no additional cost to you. By doing this, I might feel more inspired to continue writing on this blog. You can read our affiliate disclosure in our privacy policy.
Editorial process: My reviews always result from real-world experience. Read my Editorial Guidelines to learn more.