

iFETCH: Multimodal Conversational Agents for the Online Fashion Marketplace

By Ricardo Sousa
A "Tom Ford" admirer who codes by day and sports practitioner in his spare time.

High-end marketplaces demand first-rate client relationships. Users expect dependable, precise, and timely service, along with best-in-class support throughout the customer journey, so a seamless experience with a high-touch feel is critical to client engagement. As the luxury fashion industry evolves, and with the surge in demand driven by the COVID-19 pandemic, services in the online retail space can easily become strained. Technology, however, can play a crucial role in ensuring that the unrivalled experience we provide our customers continues to exceed their expectations.

Introducing iFETCH


iFetch's challenge is to simulate a fashion professional who understands the customer's needs and delivers fashion advice by leveraging extensive textual and visual data, as well as knowledge acquired from previous interactions with a large number of customers. Our goal is to revolutionise the online high-fashion sector by developing conversational AI technology with multimodal capabilities.



Figure 1: A future solution concept that will be applied in our marketplace. Mockups by Sérgio Pires


Innovative Approach

Our innovation approach has a wide-ranging impact across our organisation. It is common knowledge that humans reason and convey complicated concepts more naturally when using both language and images. iFetch therefore sits at the centre of an emerging technology that will soon enable consumers to access information more naturally and, as a result, make better judgements.
iFetch incorporates a new generation of conversational agents that communicate with users through both textual and visual data. It provides its customer base with guidance and a "physical store"-like experience through interaction, all while retaining user engagement. We contribute to the following components:

  • Extraction of key semantic characteristics from language and visuals that express human intent;
  • Multimodal responses that keep consumers engaged in the dialogue;
  • A connected product knowledge base from which appropriate product lists can be derived.
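To make the retrieval idea behind these components concrete, here is a minimal sketch of ranking catalogue items against a user query. It assumes queries and products have already been embedded into a shared vector space by a multimodal encoder (a CLIP-style model, for instance); the embeddings, product names, and function names below are purely illustrative, not iFetch's actual implementation.

```python
from math import sqrt

def cosine(u, v):
    # Cosine similarity between two equal-length vectors.
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (sqrt(sum(a * a for a in u)) * sqrt(sum(b * b for b in v)))

def rank_products(query_emb, catalogue, top_k=3):
    """Rank catalogue items by similarity to the query embedding.

    Both are assumed to come from the same multimodal encoder, so a
    text query and product images live in one vector space.
    """
    scored = [(pid, cosine(query_emb, emb)) for pid, emb in catalogue]
    scored.sort(key=lambda p: p[1], reverse=True)
    return scored[:top_k]

# Toy 4-dimensional embeddings, for illustration only.
catalogue = [
    ("red-dress",   [1.0, 0.0, 0.0, 0.0]),
    ("black-boots", [0.0, 1.0, 0.0, 0.0]),
    ("red-skirt",   [0.9, 0.1, 0.0, 0.0]),
]
query = [1.0, 0.05, 0.0, 0.0]  # e.g. the user asked for "something red"

for pid, score in rank_products(query, catalogue, top_k=2):
    print(pid, round(score, 3))
```

In a real system the similarity search would run over millions of items via an approximate nearest-neighbour index, but the ranking principle is the same.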

FARFETCH, as the leader of the project, brings together renowned institutions, namely Carnegie Mellon University (CMU), Instituto Superior Técnico (IST) and Universidade Nova de Lisboa (UNL), to address and deliver innovative solutions across the four pillars of a conversational AI ecosystem:

  1. Natural Language Understanding, with an attention-based multimodal utterance processor;
  2. Dialogue Manager, encompassing dialogue state tracking and the dialogue policy;
  3. Product Catalogue and Multimodal Knowledge Base;
  4. Natural Language Generation and Product Retrieval.
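As a rough illustration of the second pillar, the sketch below shows how a dialogue state tracker can accumulate slots across turns while a simple policy decides whether to ask a clarifying question or trigger retrieval. The slot names, data structures, and policy rule are illustrative assumptions, not the project's actual design.

```python
from dataclasses import dataclass, field

@dataclass
class DialogueState:
    """Slot values the tracker accumulates across turns (names illustrative)."""
    slots: dict = field(default_factory=dict)

    def update(self, parsed_turn):
        # Merge non-empty slot values extracted from the latest user utterance.
        self.slots.update({k: v for k, v in parsed_turn.items() if v})

# Slots the toy policy insists on before querying the catalogue.
REQUIRED_SLOTS = ("category", "colour")

def next_action(state):
    """A toy dialogue policy: request any missing required slot, else retrieve."""
    for slot in REQUIRED_SLOTS:
        if slot not in state.slots:
            return ("request", slot)
    return ("retrieve", dict(state.slots))

state = DialogueState()
state.update({"colour": "red"})       # turn 1: "I'd like something red"
print(next_action(state))             # policy asks for the missing category
state.update({"category": "dress"})   # turn 2: "a dress, please"
print(next_action(state))             # enough information: trigger retrieval
```

A production dialogue manager would learn the policy and track far richer state (including visual context), but this captures the tracking-then-policy loop the pillar describes.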

Figure 2: Representation of the iFetch consortium



More information about the project can be found at ifetch-chatbot.github.io

This is the first in a series of blog posts highlighting the progress made thus far, so stay tuned!

