
The need for speed - 1: measuring today's web performance

We live in a world that is becoming more and more connected. Data is flowing around us and we just can't get enough of what it helps us find. Not having access to what we want to see annoys us... a lot! When users are searching for a piece of information, they want it fast and they will not wait long to get it. If they have to wait, they will simply move on and look for that information somewhere else. And if users are on the move, the amount of time they are willing to wait shrinks even further. We just don't have time to waste.

Has everyone seen the movie "Her"? We are heading in that direction.


By the time we leave our homes, walk to the subway, get out and reach our work desk, we will have virtually accomplished a ton of things. Data will have been exchanged between systems that "talk" to each other to complete complex business transactions. On top of that, we won't have had to lift a finger. Artificial intelligence, in the form of personal assistants, will have done it all for us. No sweat!

In such a world, we at Farfetch don't have the luxury of sitting back and watching it all happen. We need to evolve and push our services through new media channels that appeal to our always-connected customers. Whatever service people are using, one thing is for sure - it has to be fast! Actually, fast is not enough; it also has to be resilient. Our quality of service needs to be top notch for us to stay relevant.
 
All of these high standards apply to the web today as well, so web performance is unavoidable. Speed is part of our DNA; speed is a feature. Yet not everyone understands how important it is for user engagement and conversion rates.




 
For the majority of users, a website's speed is more important than how good it looks or how easy it is to find what they are looking for. It's funny how the non-functional side of our apps can become more important than the functional side, which is where we tend to spend most of our time.
 
Fast page loads boost user engagement, retention and sales. According to Aberdeen Group research, faster page loads lead to higher conversion rates, while every one-second delay decreases page views by 11%, customer satisfaction by 16% and conversion rates by 7%. Performance can mean the difference between making a sale and losing a customer to the competition. And even objectively good performance may not cut it, as our users may have a different opinion. That's why understanding the human brain helps in delivering experiences that our users perceive as fast.

Perception is key. Perceived performance is rooted in human experience. Customers perceive websites as fast or slow, smooth or clunky, easy to use or not, and then they share those opinions with others. That is part of the game, and as engineers we have to accept that, although we will never control every constraint, what we have to deliver is a website that feels fast. To do our jobs better, we can take lessons from science and psychology. For instance, understanding that time can be both objective and subjective helps in understanding how perception works.



When optimising, we are working to improve the objective time. The thing is, what if users do not notice those optimisations? That's why concepts like the Just Noticeable Difference (JND, from experimental psychology) can guide an engineering team towards making the right impact on objective time. The JND, also known as the difference threshold, "is the minimum level of stimulation that a person can detect 50% of the time". A difference of 20% or less in objective time is imperceptible to users, which is why we need to be more ambitious and shoot for improvements of more than 20%. Making a noticeable impact on subjective time produces better perceived performance and materially improves the engagement of our users.
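
As a back-of-the-envelope illustration (the numbers below are made up for the example, and the 20% threshold is simply the rule of thumb mentioned above), a tiny helper shows whether a planned optimisation is even likely to cross the JND:

    // Rough JND check: is the optimised time at least jnd (default 20%) faster than the baseline?
    function isNoticeable(baselineMs: number, optimisedMs: number, jnd = 0.2): boolean {
      return (baselineMs - optimisedMs) / baselineMs > jnd;
    }

    // Hypothetical numbers: a 6000 ms page load needs to drop below ~4800 ms to be noticed.
    console.log(isNoticeable(6000, 5200)); // false - only ~13% faster
    console.log(isNoticeable(6000, 4500)); // true  - 25% faster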

Naturally, as an engineering team, we need to optimise and measure our progress. We need indicators that tell us whether we have actually achieved the target JND.

Speaking of metrics 

At Farfetch, we've been using Page Load Time (PLT) as the major metric to assess web performance. The importance of PLT goes beyond this assessment as it is also used as an engineering and business driver for overall company goals.

PLT is a somewhat "safe" metric to use because it can be measured accurately across all browsers. However, the web has evolved a lot over the years and has become more and more dynamic. The next example shows that PLT no longer reflects user perception as well as it once did. That's why we have come to realise that PLT alone is not enough anymore.
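
For reference, here is a minimal sketch (not a description of our actual monitoring setup) of how PLT and a couple of related timings can be read in the browser with the standard Navigation Timing API:

    // Minimal sketch using the Navigation Timing Level 2 API.
    window.addEventListener('load', () => {
      // Defer one tick so loadEventEnd is already populated.
      setTimeout(() => {
        const [nav] = performance.getEntriesByType('navigation') as PerformanceNavigationTiming[];
        if (!nav) return;

        const plt = nav.loadEventEnd;          // Page Load Time, ms since navigation start
        const ttfb = nav.responseStart;        // Time To First Byte
        const domInteractive = nav.domInteractive;

        // In a real setup these values would be beaconed to an analytics endpoint.
        console.log({ plt, ttfb, domInteractive });
      }, 0);
    });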


Farfetch homepage loading on mobile - on the left 2.9 seconds (90% rendered). On the right 5.97 seconds (PLT)

If we think about it, PLT accounts for the complete page load, both above and below the fold. We are in a phase where concepts like the critical rendering path and progressive rendering are part of our optimisation arsenal, with the specific purpose of promoting critical content first to increase user engagement. This is because not all bytes are created equal. Users engage with what they see, so we absolutely need to point our monitoring in that direction as well.
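
One lightweight way to point monitoring at what users actually see is the User Timing API. The snippet below is only a sketch, and '#hero-image' is a hypothetical above-the-fold element, not a selector from our codebase:

    // Sketch: record when a hypothetical above-the-fold "hero" image finishes loading.
    // Assumes this script runs before the image has loaded.
    const hero = document.querySelector<HTMLImageElement>('#hero-image');

    hero?.addEventListener('load', () => {
      // User Timing marks are timestamped relative to navigation start.
      performance.mark('hero-image-loaded');

      const [mark] = performance.getEntriesByName('hero-image-loaded');
      console.log('Hero content ready at ~', Math.round(mark.startTime), 'ms');
    });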
 
Choosing the right performance metric is not easy, as there is no silver bullet. Some companies may choose DOMInteractive, as Netflix did a few years back when moving netflix.com to Node.js. Others may use Time To First Byte. However, we can't capture the user experience with a single metric that represents a single point in time.
We need to define our user's journey and mark its different key moments. These key moments may vary, and finding them is also part of experimentation. Performance is a continuous process in which we need to monitor, learn and improve. So, we set out to find some friends for our PLT to provide a broader context of the overall user experience. There are a lot of choices out there, and we can differentiate between Visual Metrics and Interactivity Metrics. Here are just a few (with a small collection sketch after the lists):

Visual Metrics

  • Start Render
  • Time to Title
  • First Contentful Paint
  • First Meaningful Paint
  • Speed Index
  • Hero Rendering Times

Interactivity Metrics

  • Estimated Queuing Time
  • Time to Interactive
  • First CPU Idle
  • First Input Delay
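
Several of these can be observed directly in the browser. The following is only a sketch of how paint and first-input entries could be collected with the standard PerformanceObserver API; it is not a description of Farfetch's actual instrumentation:

    // Visual metrics: First Paint and First Contentful Paint.
    new PerformanceObserver((list) => {
      for (const entry of list.getEntries()) {
        // entry.name is 'first-paint' or 'first-contentful-paint'
        console.log(entry.name, Math.round(entry.startTime), 'ms');
      }
    }).observe({ type: 'paint', buffered: true });

    // Interactivity: First Input Delay is the gap between the input and when its handler starts.
    new PerformanceObserver((list) => {
      const [entry] = list.getEntries() as PerformanceEventTiming[];
      if (entry) {
        console.log('first-input-delay', Math.round(entry.processingStart - entry.startTime), 'ms');
      }
    }).observe({ type: 'first-input', buffered: true });

Metrics such as Speed Index and Hero Rendering Times, on the other hand, typically come from lab tooling like WebPageTest or Lighthouse rather than from in-page APIs.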

How do we evaluate and pick the right metrics to use at Farfetch? We could go into the technical details of how each metric works. However, I would much rather reach for my good old weapon of choice, WebPageTest. I can perform a test run on our site and watch something everyone can relate to: the filmstrip. This gives me a reflection of what our users are actually seeing on our platform.
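
For anyone who wants to automate such test runs, WebPageTest also exposes a REST API. The sketch below assumes the classic runtest.php/jsonresult.php endpoints, a valid API key and the usual JSON response shape; treat it as illustrative and check the current WebPageTest documentation before relying on it:

    // Illustrative sketch: kick off a WebPageTest run and poll for the result JSON.
    const WPT = 'https://www.webpagetest.org';

    async function runTest(url: string, apiKey: string) {
      const start = await fetch(
        `${WPT}/runtest.php?url=${encodeURIComponent(url)}&k=${apiKey}&f=json`
      ).then((r) => r.json());

      const testId = start.data.testId;

      // Poll until the test completes (statusCode 200 in the classic API).
      while (true) {
        const result = await fetch(`${WPT}/jsonresult.php?test=${testId}`).then((r) => r.json());
        if (result.statusCode === 200) return result.data;
        await new Promise((resolve) => setTimeout(resolve, 10_000));
      }
    }

    // Example usage: Speed Index is reported under the median first view in the result data.
    runTest('https://www.farfetch.com', 'YOUR_API_KEY').then((data) => {
      console.log('Speed Index:', data.median.firstView.SpeedIndex);
    });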

By analysing the filmstrip, we can clearly see how the application loads along the way, just as if we were watching a movie. This gives us great insight into the user's perspective. WebPageTest is the cameraman, recording everything on tape. Let's take, for instance, a product detail page and a summarised filmstrip where we can see when the different metrics fire.


These are the most relevant visual changes the page goes through. Naturally, there is other performance data available, such as what the browser's main thread is doing or what is happening with all the requests over the network, but from a visual inspection these are perhaps the most relevant moments for the viewport, with the exception of First Interactive, which is not fired based on visual parameters. Some of these moments may not be spotted by the user because of how close together they sit along the filmstrip, but all of them are important visual checkpoints that we cannot ignore.

So which ones should be chosen? There is no perfect answer, but we can have good reasoning behind each choice and try to extract as much value as possible from it. In the future we may identify other moments that offer value, and we could consider bringing those in as well.
 
We tried to bring the perceived performance concept into the selection process. Thinking about how our users would react is not so strange, because we are all users ourselves and go through the same set of stimuli when looking at a website. So, with our eyes on the filmstrip, we asked ourselves what we are really looking for when we hit that URL.

In the next post, we will find out what these questions are and what kind of performance metrics can help us answer them.

Let's continue!

