

Search at Farfetch - A glimpse of Semantic Search

By José Marcelino
At Farfetch, our search box enables millions of our users to find and explore their favourite fashion items in our extensive catalogue. In this context, we are constantly looking for ways to better understand our users' needs and intentions in order to match their expectations by providing the highest quality search experience.


Semantic Search Demo - Before (Left) and After (Right)

Any approach to this is technically challenging. Our users expect us to understand their natural language and retrieve the correct list of items, even when there are misspellings, acronyms or out-of-domain words. And as our range of fashion products expands, so do the chances of ambiguity and the need to deal with it.

To give an example, Farfetch works with the well-known designer Valentino, which has a fashion line for young, urban customers named RED Valentino, in some cases known simply as RED. In the query "red dresses from Red Valentino", how should each occurrence of the word "red" be interpreted? Is the first occurrence equal to the second?

Our search solution, Semantic Search, aims to "understand" our users' intentions by resolving ambiguous cases. Before presenting it, however, let's understand how a traditional search engine works.

In a traditional search engine, what happens when I type into the search bar the query 'red dresses'?

As an information retrieval system, we try to fetch items in our catalogue ranked by their relevance to the given query. To measure relevance, we may use different fields to match the query terms. For instance, at Farfetch, each fashion item has a brand, category, colour, and long and short description fields. All of that information can be used to measure the relevance of an item given a certain query. Since such item information is textual, we can compare item-related words to the query terms.

Items with the term 'red' in one of their fields should have a higher score than items that don't. Moreover, those that also contain the term 'dresses' are expected to score even higher. The more often a term appears in an item, the more relevant that item is. The field where the match appears also matters for scoring: a match in the category is obviously more important than one in a long description. The weight of the words themselves depends on how common they are: words like 'is' or 'a' should be taken much more lightly than less frequent ones, such as 'red' or 'blue'.

In a simplistic view, we only need to define a function that weights all the factors mentioned above. We then apply a minimum cut-off value to the resulting scores and we have a reasonable search engine working.
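As a toy illustration of such a weighting function, the sketch below scores items by field-weighted term frequency and down-weights common words with an IDF-style factor. The field names, weights and mini-catalogue are hypothetical, not Farfetch's actual formula:

```python
import math
from collections import Counter

# Illustrative field weights: a match in "category" counts for more
# than one in the description. These numbers are hypothetical.
FIELD_WEIGHTS = {"category": 3.0, "colour": 2.0, "brand": 2.0, "description": 1.0}

def idf(term, items):
    """Rarer terms get a higher weight: 'red' beats 'is'."""
    docs_with_term = sum(
        1 for item in items
        if any(term in text.lower().split() for text in item.values())
    )
    return math.log((1 + len(items)) / (1 + docs_with_term)) + 1

def score(query, item, items):
    """Field-weighted term-frequency score of one item for the query."""
    total = 0.0
    for term in query.lower().split():
        for field, text in item.items():
            tf = Counter(text.lower().split())[term]
            total += FIELD_WEIGHTS.get(field, 1.0) * tf * idf(term, items)
    return total

catalogue = [
    {"category": "dresses", "colour": "red", "description": "a red silk dress"},
    {"category": "sneakers", "colour": "white", "description": "white leather sneakers"},
]
scores = [score("red dresses", item, catalogue) for item in catalogue]
# The red dress outscores the sneakers for the query "red dresses".
```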

As for our engine, low response time is mandatory. To accomplish this, we rely on Elasticsearch to help us with such a complex task.
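In Elasticsearch terms, this kind of field-weighted matching can be expressed as a boosted multi_match query with a minimum score cut-off. The field names and boost values below are illustrative, not our production mapping:

```python
# A hypothetical Elasticsearch request body for the query "red dresses".
# multi_match spreads the query terms over several fields; the ^n suffix
# boosts a field's contribution to the relevance score.
query = {
    "query": {
        "multi_match": {
            "query": "red dresses",
            "fields": ["category^3", "colour^2", "brand^2", "description"],
        }
    },
    "min_score": 5.0,  # the minimum cut-off mentioned above (illustrative value)
}
```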

Why isn’t a traditional search pipeline enough?

After putting such a solution in place, we started bumping into new problems. For instance, we found that users often type "YSL", an acronym for Yves Saint Laurent. This required us to add a list of synonyms to expand such specific terms. We also had to deal with misspellings and empty result pages. At this point, ambiguities started to show up.
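A minimal sketch of this kind of synonym expansion, with a single illustrative entry (the real list is much larger):

```python
# Dictionary-based acronym expansion applied before matching.
SYNONYMS = {
    "ysl": "yves saint laurent",
}

def expand(query: str) -> str:
    """Replace known acronyms with their expanded form."""
    return " ".join(SYNONYMS.get(tok, tok) for tok in query.lower().split())

expand("ysl bags")  # → "yves saint laurent bags"
```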

The query "golden shoes", for example, returns all sorts of jewellery mixed with items from Golden Goose Deluxe Brand. "Golden" is just a word that appears in the item context (in the designer, colour, material, and description fields), and without a clue of what it represents we simply retrieve the items that maximise this word-frequency-based relevance, without any contextual awareness. Identifying "golden" as a colour would, in fact, elevate the whole user experience.

To understand our customers' intentions, we needed to understand each query by recognising the underlying characteristics of the language and its semantics.

How do we understand each query with Semantic Search?

Semantic Search (also presented as a scientific publication at the AI for Fashion workshop at KDD 2018) is a key part of our search engine, being responsible for query understanding.

It is capable of extracting entities, such as colours or categories, from a user query, improving product retrieval from the catalogue. Going back to the example above, Semantic Search identifies "golden" as a colour and "shoes" as a category. As such, all the items presented are pre-filtered on those specific terms for the respective fields. "Golden", now seen as a colour, will no longer trigger unrelated Golden Goose Deluxe Brand items.
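Conceptually, the recognised entities turn a free-text query into structured filters. A minimal sketch, with illustrative entity labels:

```python
def to_filters(entities):
    """Turn (term, entity_type) pairs into field filters."""
    return {etype: term for term, etype in entities}

# "golden shoes" after entity recognition (labels are illustrative):
filters = to_filters([("golden", "colour"), ("shoes", "category")])
# → {"colour": "golden", "category": "shoes"}; "golden" now filters
#   the colour field instead of matching the Golden Goose brand field.
```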

Semantic Search is structurally divided in a set of necessary steps:
  • Word Representation
  • Part-of-Speech Tagging
  • Dependency Parsing
  • Named Entity Recognition
  • Entity Mapping

These steps are sequential, and the information learnt by each one is fed into the next. The following image shows how these components connect, built from common deep learning building blocks.
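The hierarchy can be sketched as a chain of functions, each stage consuming the outputs of the earlier ones. All five stages below are trivial stand-ins for the real models:

```python
# Stand-in stages; in the real system each is a learnt model.
def word_representation(tokens):
    return [[0.0] for _ in tokens]                      # embeddings

def pos_tags(tokens, vecs):
    return ["NOUN" for _ in tokens]                     # syntactic tags

def dependency_parse(tokens, vecs, pos):
    return [(0, "root") for _ in tokens]                # head / relation per token

def named_entities(tokens, vecs, pos, deps):
    colours = {"red", "golden"}                         # toy entity lexicon
    return [(t, "colour" if t in colours else "category") for t in tokens]

def entity_mapping(entities):
    return {etype: term for term, etype in entities}    # catalogue filters

def pipeline(query):
    """Each stage feeds the next, mirroring the hierarchy above."""
    tokens = query.lower().split()
    vecs = word_representation(tokens)
    pos = pos_tags(tokens, vecs)
    deps = dependency_parse(tokens, vecs, pos)
    ents = named_entities(tokens, vecs, pos, deps)
    return entity_mapping(ents)

pipeline("red dresses")  # → {"colour": "red", "category": "dresses"}
```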

Semantic Search AI - Architecture overview

Our Word Representation phase extracts a semantically guided word representation, which in practice is a numerical vector embedding a word's meaning. This layer is responsible for transforming written words into numeric matrices used as inputs to our models. Ideally, semantically similar words will have similar vectors. For example, we know the words "red" and "blue" have related meanings, so we expect their numeric vectors to be close as well. This semantic representation of words is important to ensure the quality and generalisation of any Natural Language Processing model.
We benchmarked the creation of a dense vector representation using several different techniques (GloVe, word2vec and fastText). In our case, all the approaches had similar results. We chose fastText, which enables our models to cope with words that have never been seen before. To increase robustness, we have also taken advantage of a dynamically trained character embedding input layer.
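The key idea behind fastText's handling of unseen words is that a word vector is composed from its character n-gram vectors, so a misspelling such as "dressses" still shares most n-grams with "dresses". A pure-Python sketch of that idea, with hash-based stand-ins for the learnt n-gram vectors:

```python
import hashlib

DIM = 8  # tiny, illustrative dimensionality

def ngrams(word, n_min=3, n_max=5):
    """Character n-grams with boundary markers, as in fastText."""
    w = f"<{word}>"
    return [w[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(w) - n + 1)]

def ngram_vector(ngram):
    """Deterministic pseudo-embedding for one n-gram; a real model
    would look this up in a learnt table."""
    h = hashlib.md5(ngram.encode()).digest()
    return [b / 255.0 for b in h[:DIM]]

def word_vector(word):
    """A word vector is the average of its n-gram vectors, so a
    never-seen word still gets a meaningful representation."""
    vecs = [ngram_vector(g) for g in ngrams(word)]
    return [sum(v[i] for v in vecs) / len(vecs) for i in range(DIM)]
```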

In the Part-of-Speech Tagging phase, we seek to learn the correct syntactic annotation for each token (e.g. verb, noun, adjective). For instance, "golden" in "golden goose" and in "golden shoes" has a different syntactic function in each case (noun and adjective, respectively). Making that distinction helps the system understand whether the term relates to a designer or a colour.

We consider that, on our platform, search queries are a simplification of item descriptions. By correctly learning annotated descriptions, the model should easily generalise the part-of-speech tags to our search queries. This module is an end-to-end sequence labelling solution, based on a Long Short-Term Memory (LSTM) network combined with a Conditional Random Fields (CRF) classifier, inspired by Xuezhe Ma and Eduard Hovy's work.
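The CRF layer on top of the LSTM chooses the best tag *sequence* rather than tagging each token in isolation. A minimal Viterbi decoder over per-token emission scores and tag-transition scores (all numbers here are illustrative, not learnt weights):

```python
def viterbi(emissions, transitions, tags):
    """Best tag sequence given per-token tag scores (from the LSTM)
    and tag-to-tag transition scores (the CRF part)."""
    # best[i][t] = (score of best path ending in tag t at token i, path)
    best = [{t: (emissions[0][t], [t]) for t in tags}]
    for em in emissions[1:]:
        step = {}
        for t in tags:
            prev, (score, path) = max(
                best[-1].items(),
                key=lambda item: item[1][0] + transitions[(item[0], t)],
            )
            step[t] = (score + transitions[(prev, t)] + em[t], path + [t])
        best.append(step)
    return max(best[-1].values(), key=lambda v: v[0])[1]

tags = ["ADJ", "NOUN"]
transitions = {("ADJ", "NOUN"): 1.0, ("NOUN", "ADJ"): -1.0,
               ("ADJ", "ADJ"): -0.5, ("NOUN", "NOUN"): 0.0}
# Illustrative emission scores for "golden shoes":
emissions = [{"ADJ": 1.0, "NOUN": 0.8}, {"ADJ": 0.1, "NOUN": 1.2}]
viterbi(emissions, transitions, tags)  # → ["ADJ", "NOUN"]
```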

The Dependency Parsing module takes part-of-speech tags and word embeddings as features in order to learn the relationships between words. For instance, in the query "red dress from Red Valentino", this module allows us to understand that the first "red" modifies "dress" and is not directly connected to the brand word "Valentino", as the second one is.


Dependency Parsing module output for the query "red dress from Red Valentino"

For the training phase of this module, we use the description corpus and its corresponding annotations. Yet again, it was necessary to train a tailor-made solution from scratch. Inspired by Dyer's Stack-LSTM work, we modified the architecture into an end-to-end sequence labelling problem.
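The parser's output can be represented compactly as one head index and one relation label per token. The arcs below are an illustrative parse of the example query, not our model's literal output:

```python
# Each token points to its head (0 = root of the sentence).
tokens = ["red", "dress", "from", "Red", "Valentino"]
heads  = [2, 0, 2, 5, 3]        # 1-based head index per token
labels = ["amod", "root", "prep", "compound", "pobj"]

def head_of(i):
    """Return the head word of token i (1-based), or None for the root."""
    h = heads[i - 1]
    return tokens[h - 1] if h else None

head_of(1)  # the first "red" attaches to "dress"
head_of(4)  # the second "Red" attaches to "Valentino"
```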

The Named Entity Recognition phase is the most important component of Semantic Search. It is responsible for finding query sub-expressions that represent entity types (category, brand, colour, etc.). This is the step that ultimately dictates the quality of our search filtering. Our end-to-end sequence labelling solution follows an approach similar to the aforementioned Xuezhe Ma and Eduard Hovy work, and uses the Part-of-Speech Tagging and Dependency Parsing features in addition to the sequential word representations of the query. The data produced by each of the previous models is used as input to the entity recogniser, making full use of the hierarchical construction.
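Sequence labelling for entity recognition is commonly expressed in a BIO scheme, after which the per-token tags are collapsed into entity spans. A sketch with an illustrative tag set:

```python
def bio_to_entities(tokens, tags):
    """Collapse B-/I- tags into (entity_type, text) spans."""
    entities, current = [], None
    for tok, tag in zip(tokens, tags):
        if tag.startswith("B-"):
            current = [tag[2:], [tok]]
            entities.append(current)
        elif tag.startswith("I-") and current and current[0] == tag[2:]:
            current[1].append(tok)
        else:
            current = None
    return [(etype, " ".join(words)) for etype, words in entities]

bio_to_entities(
    ["red", "dress", "from", "Red", "Valentino"],
    ["B-COLOUR", "B-CATEGORY", "O", "B-BRAND", "I-BRAND"],
)
# → [("COLOUR", "red"), ("CATEGORY", "dress"), ("BRAND", "Red Valentino")]
```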

All the steps described so far are agnostic to the Farfetch ecosystem. For Semantic Search to be effective, we must find the exact entities users are looking for. We refer to this task as Entity Linking (the Entity Mapping step above). Let's say the Named Entity Recognizer ends up labelling "dress" as a category. We still need to look at Farfetch's categories and pick the relevant products based on similarity and mapping functions (in this case, the "dresses" category would be the best match).
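A minimal stand-in for this mapping step, matching a recognised mention against catalogue names by string similarity (the real similarity and mapping functions are more elaborate, and the category list here is illustrative):

```python
from difflib import get_close_matches

# Illustrative catalogue category names.
CATEGORIES = ["dresses", "shoes", "bags", "coats"]

def link_category(mention):
    """Map a recognised category mention to the closest catalogue
    category by string similarity, or None if nothing is close enough."""
    matches = get_close_matches(mention, CATEGORIES, n=1, cutoff=0.6)
    return matches[0] if matches else None

link_category("dress")  # → "dresses"
```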

What challenges lie ahead?

A number of challenges await us in the future.

First of all, we still need to investigate ways to extend Semantic Search to non-English languages. We feel this is important, as we want all users to be amazed by our search experience.

Understanding 'trending' topics, such as new collections or dressing trends inspired by a global event, is still a challenge to overcome. It requires not only the textual context but also a general awareness of fashion trends.

In conclusion, with the help of state-of-the-art machine learning techniques, the semantic understanding of the query provides a high-quality search experience and takes us one step closer to future trends such as neural information retrieval, voice search, multilingual search, and personalised search.

Special thanks to Luís Baía, Vitor Teixeira, João Faria, Carlos Leite, Pedro Balage, Peter Knox, Ricardo G. Sousa, Rui Silva, Nikola Misic, João Pires, João Santos, Pedro Vale and Hugo Galvão.