This is a demonstration of using NLP to create voice-activated experiences for interacting with products in 3D. It uses NLP to generate product descriptions and intent recognition to turn voice commands into actions. It leverages the Web Speech API for text-to-speech and ThreeJS for 3D rendering, and currently works only in Chrome.
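Below is a minimal sketch of the voice loop, assuming Chrome's prefixed webkitSpeechRecognition and a ThreeJS scene with a single placeholder product mesh; the keyword-to-intent mapping and the sample description are illustrative, not the project's actual logic.

```js
import * as THREE from 'three';

// --- ThreeJS: render a placeholder product mesh ---
const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera(75, innerWidth / innerHeight, 0.1, 100);
camera.position.z = 3;
const renderer = new THREE.WebGLRenderer();
renderer.setSize(innerWidth, innerHeight);
document.body.appendChild(renderer.domElement);

const product = new THREE.Mesh(
  new THREE.BoxGeometry(),
  new THREE.MeshNormalMaterial()
);
scene.add(product);
renderer.setAnimationLoop(() => renderer.render(scene, camera));

// --- Web Speech API: text-to-speech for product descriptions ---
function speak(text) {
  speechSynthesis.speak(new SpeechSynthesisUtterance(text));
}

// --- Web Speech API: speech recognition mapped to simple intents ---
const SpeechRecognition = window.SpeechRecognition || window.webkitSpeechRecognition;
const recognition = new SpeechRecognition();
recognition.lang = 'en-US';
recognition.onresult = (event) => {
  const transcript = event.results[0][0].transcript.toLowerCase();
  // Naive keyword matching stands in for the real intent recognizer.
  if (transcript.includes('rotate')) {
    product.rotation.y += Math.PI / 4;           // "rotate" intent
  } else if (transcript.includes('describe')) {
    speak('A lightweight cotton summer dress.'); // hypothetical description
  }
};
recognition.start();
```

In practice, Chrome asks for microphone permission before recognition starts, which is one reason the demo is currently Chrome-only.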
3D technologies like AR and VR have recently been shown to increase customer engagement and acquisition in the fashion industry. However, fashion brands and designers need to possess, or have access to, 3D modeling skills to create the assets required to take advantage of this trend. This raises the barrier to entry for most fashion brands in emerging markets, leaving them behind. Our team thinks we can solve this problem for them by offering a simple tool that automates the creation of 3D assets.

We built a prototype using ReactJS and DreamFusion that lets users upload images of their clothes. Each image is analyzed and the detected garments are cropped out. We then use Magic123, which synthesizes probabilistic images of the garment from different camera views and uses them to generate a 3D model in glTF format. The model is presented back to the user in the app. We plan to add an Augmented Reality view of the 3D models and a way for users to embed these views in their own online digital properties.
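As a rough sketch of the final step, this shows how the generated glTF asset could be loaded and displayed with ThreeJS's GLTFLoader; the '/models/dress.gltf' path is a hypothetical output location, not the pipeline's actual endpoint.

```js
import * as THREE from 'three';
import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader.js';

function showGeneratedModel(modelUrl) {
  const scene = new THREE.Scene();
  scene.add(new THREE.AmbientLight(0xffffff, 1));
  const camera = new THREE.PerspectiveCamera(50, innerWidth / innerHeight, 0.1, 100);
  camera.position.set(0, 1, 3);
  const renderer = new THREE.WebGLRenderer({ antialias: true });
  renderer.setSize(innerWidth, innerHeight);
  document.body.appendChild(renderer.domElement);

  // Load the glTF produced by the image-to-3D step and add it to the scene.
  new GLTFLoader().load(modelUrl, (gltf) => {
    scene.add(gltf.scene);
  });

  renderer.setAnimationLoop(() => renderer.render(scene, camera));
}

showGeneratedModel('/models/dress.gltf'); // hypothetical path to the generated asset
```

glTF works well here because it is a compact, web-native format that ThreeJS loads directly, which also keeps the door open for the planned AR and embeddable views.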