Time to First Byte: What it is and why it matters – CSS Wizardry


Table of Contents
  1. What is TTFB?
  2. Demystifying TTFB

I’m currently working on a client project and, as they’re an ecommerce site, there are many facets of performance I’m keen to explore for them: loading times are a good start, start render is key for customers who want to see information quickly (tip: that’s all of them), and custom metrics such as how quickly did the key product image load? can all provide valuable insights.

One metric I feel front-end developers are too quick to overlook is
Time to First Byte (TTFB). This is understandable – almost forgivable – when you consider that TTFB begins to move into back-end territory, but if I was to sum up the problem as succinctly as possible, I’d say this:

While a good TTFB doesn’t necessarily mean you will have a fast website, a bad TTFB almost certainly guarantees a slow one.

Though you may not be able to move the needle on TTFB yourself as a front-end developer, it’s important to know that any problems with a high TTFB put you on the back foot from the outset: any efforts you make to optimise images, clear the critical path, and load your web fonts asynchronously will all be made in the spirit of playing catch-up. That’s not to say these more front-end-oriented optimisations should be abandoned, but there is, unfortunately, a feeling of closing the stable door after the horse has bolted. You really want to squash any TTFB problems as soon as you can.

What is TTFB?

The TTFB timing is not very insightful. See full size / quality (375 KB)

TTFB is a little opaque, to say the least. It comprises so many different things that I often feel we tend to just gloss over it. Many people assume that TTFB is simply time spent on the server, but that is only a small part of the true picture.

The first thing – and often the most surprising for people to learn – that I want to draw your attention to is that TTFB includes one whole round trip of latency. TTFB isn’t just time spent on the server; it’s also the time spent getting from our device to the server and back again (carrying, that’s right, the first byte of data!).

Armed with this knowledge, we can quickly understand why TTFB can often increase so dramatically on mobile. You’ve probably wondered before: the server doesn’t know I’m on a mobile device – how can it increase its TTFB?! The reason is that mobile networks are, as a rule, high-latency connections. If, for example, your Round Trip Time (RTT) from your phone to a server and back again is 250ms, you’ll see an immediate, corresponding increase in TTFB.
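The latency floor is simple arithmetic. A minimal sketch (the function name and figures are illustrative assumptions, not a real measurement) of how a high-RTT connection pushes TTFB up before the server has done any extra work:

```typescript
// Back-of-the-envelope sketch: the floor that network latency alone
// puts under TTFB, before server-side work is even considered.
function ttfbFloorMs(rttMs: number, serverProcessingMs: number): number {
  // One full round trip (request out, first byte back) plus server time.
  return rttMs + serverProcessingMs;
}

// Same origin, same server-side work; only the connection differs:
const onFibre = ttfbFloorMs(28, 100);   // → 128ms
const onMobile = ttfbFloorMs(250, 100); // → 350ms
```

The server-side cost is identical in both cases, yet the mobile visitor’s TTFB is nearly three times higher – which is why the server “doesn’t need to know” you’re on mobile for your TTFB to suffer.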

If there’s one thing I’d like you to take away from this post, it’s that
TTFB is affected by latency.

But what else is TTFB? Strap yourselves in; here’s a non-exhaustive list, presented in no particular order:

  • Latency: As above, we’re counting a round trip to and from the server. A trip from a device in London to a server in New York has a theoretical best-case speed of 28ms over fibre, but that figure makes many very optimistic assumptions. Expect something closer to 75ms.
    • This is why it’s so important to serve your content from a CDN: even in the internet age, it pays to be geographically closer to your customers.
  • Routing: If you’re using a CDN – and you should be! – a customer in Leeds might get routed to the MAN data centre only to find that the resource they’re requesting isn’t in that PoP’s cache. Consequently, they’ll be routed all the way back to your origin server to fetch it from there. If your origin is in, say, Virginia, that’s a huge, invisible increase in TTFB.
  • Filesystem reads: Even the server simply reading static files, such as images or stylesheets, from the filesystem carries a cost. It all adds to your TTFB.
  • Prioritisation: HTTP/2 has a (re)prioritisation mechanism whereby the server can choose to pause lower-priority responses while it sends higher-priority ones. H/2 prioritisation issues aside, even when H/2 is behaving itself, these intended delays will contribute to your TTFB.
  • Application runtime: It’s a bit obvious, but the time it takes to run your actual application code is going to be a big contributor to your TTFB.
  • Database queries: Pages that need data from a database will incur the cost of looking it up. More TTFB.
  • API calls: If you need to call any APIs (internal or otherwise) to populate a page, the overhead will be counted in your TTFB.
  • Server-side rendering: The cost of server-rendering a page could well be trivial, but it will still contribute to your TTFB.
  • Cheap hosting: Hosting that is optimised for cost rather than performance usually means you’re sharing a server with any number of other sites, so expect degraded server performance, which may affect your server’s ability to fulfil requests, or may simply mean underpowered hardware trying to run your application.
  • DDoS or heavy load: As with the previous point, increased load without the ability to auto-scale your application will lead to degraded performance as you begin to hit the limits of your infrastructure.
  • WAFs and load balancers: Services such as web application firewalls or load balancers that sit in front of your application will also contribute to your TTFB.
  • CDN features: Although a CDN is a huge net win, in certain scenarios some of its features can lead to additional TTFB (e.g. request collapsing, edge-side includes, etc.).
  • Last-mile latency: When we think of a computer in London visiting a server in New York, we tend to simplify that journey quite drastically, almost imagining the two were connected directly. The reality is that there’s a much more complex series of intermediaries, from our own router to our ISP, from a mobile tower to an undersea cable. Last-mile latency deals with the disproportionate complexity toward the terminus of a connection.
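In the browser, the sum of all of the above surfaces as a single number: the `responseStart` mark of the Navigation Timing API. As a minimal sketch (the interface below is a hand-narrowed stand-in for `PerformanceNavigationTiming`, kept pure so it can be exercised outside a browser):

```typescript
// Narrowed, assumed stand-in for PerformanceNavigationTiming so the
// helper stays a pure function.
interface NavTimingLike {
  startTime: number;     // when the navigation began
  requestStart: number;  // when the request was sent
  responseStart: number; // when the first byte arrived
}

// TTFB measured from the start of the navigation: redirects, DNS, TCP,
// TLS, the round trip and all server-side work are inside this number.
function ttfbMs(entry: NavTimingLike): number {
  return entry.responseStart - entry.startTime;
}

// In a real page you would feed it the live entry, e.g.:
//   const [nav] = performance.getEntriesByType("navigation");
//   console.log(ttfbMs(nav as PerformanceNavigationTiming));
```

Note how opaque that one number is: nothing in it tells you which of the bullet points above is to blame.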

It’s impossible to have a 0ms TTFB, so it’s important to note that the list above doesn’t represent things that are necessarily bad or that are slowing your TTFB down. Rather, your TTFB is made up of any number of the items above. My aim here isn’t to point the finger at any one part of the stack, but to help you understand what exactly TTFB can entail. And with so much potentially going on in our TTFB phase, it’s almost a miracle that websites load at all!

So. Much. Stuff!

Demystifying TTFB

Thankfully, the whole thing doesn’t have to be so opaque any more! With a little extra work spent implementing the Server Timing API, we can begin to measure and surface intricate back-end timings, allowing web developers to identify and debug potential bottlenecks that were previously obscured from view.

The Server Timing API allows developers to augment their responses with an additional Server-Timing HTTP header, which carries timing information that the application itself has measured.
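To make the header concrete, here’s a small sketch of serialising some measurements into a spec-shaped Server-Timing value. The helper and its parameter names are my own illustration, not part of any library:

```typescript
// One timed aspect of the response, e.g. a database query.
interface ServerTimingMetric {
  name: string;  // short metric token, e.g. "db"
  desc?: string; // optional human-readable description
  dur?: number;  // duration in milliseconds
}

// Serialise metrics into a Server-Timing header value, e.g.
//   db;dur=53, app;desc="Rendering";dur=47.2
function buildServerTiming(metrics: ServerTimingMetric[]): string {
  return metrics
    .map(({ name, desc, dur }) => {
      let entry = name;
      if (desc !== undefined) entry += `;desc="${desc}"`;
      if (dur !== undefined) entry += `;dur=${dur}`;
      return entry;
    })
    .join(", ");
}
```

In a Node-style server you would then attach the result before sending the body, e.g. `res.setHeader("Server-Timing", buildServerTiming(metrics))`.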

This is exactly what we did on BBC iPlayer last year:

The newly available Server-Timing header can be added to any response. See full size / quality (533 KB)

N.B. Server Timing doesn’t come for free: you do actually have to measure the aspects above yourself, and then populate your Server-Timing header with the relevant data. All the browser does is surface that data in the relevant tooling, making it accessible on the front end:
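Roughly speaking, the browser parses the header back into named metrics before exposing them in DevTools and on `PerformanceResourceTiming.serverTiming`. A simplified sketch of that parsing (it ignores edge cases such as quoted commas, so treat it as illustrative only):

```typescript
interface ParsedMetric {
  name: string;
  desc?: string;
  dur?: number;
}

// Naive parse of a Server-Timing header value such as
//   db;dur=53, cache;desc="HIT";dur=0.3
function parseServerTiming(header: string): ParsedMetric[] {
  return header.split(",").map((part) => {
    const [name, ...params] = part.trim().split(";");
    const metric: ParsedMetric = { name: name.trim() };
    for (const param of params) {
      const [key, raw = ""] = param.split("=");
      const value = raw.replace(/^"|"$/g, ""); // strip surrounding quotes
      if (key.trim() === "dur") metric.dur = Number(value);
      if (key.trim() === "desc") metric.desc = value;
    }
    return metric;
  });
}
```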

Now we can see right there in the browser how long certain aspects of our TTFB took. See full size / quality (419 KB)

To help you get started, Christopher Sidebottom wrote up his implementation of the Server Timing API during our time optimising iPlayer.

It’s vital that we understand just what TTFB can cover and exactly how critical it can be to overall performance. TTFB has knock-on effects, which can be a good thing or a bad thing depending on whether it starts out low or high.

If you are slow out of the gate, you will spend the rest of the race playing catch-up.
