Improving our PageSpeed

How I reduced the Daemon homepage load time from 10 seconds to 2 seconds.

Written by Liv Rennie


I joined Daemon (pronounced Dee-mun) as a recent graduate in July, and since then I have been working in the Performance Engineering team. I have been lucky enough to be involved with a very interesting project: the release of our new Pagespeed Leaderboard, where we’re measuring and reporting on the web page performance of over 200 companies.

As a part of this, we are monitoring the performance of our own website and noticed that our homepage didn’t perform well on mobile, taking over 10 seconds to load on a fast 3G connection. To tackle this we decided to test our tooling and our recommendations, and I started to work on improving our homepage performance.

I’ve never worked on anything like this before, so it was a great opportunity for me to get stuck in and test the tooling’s recommendations; having no web development experience meant that I really had to rely on them. Having looked through the recommendations, videos and a couple of other features, I noticed that there were four main problem areas (images, caching, fonts and the cookie script), and tackled them one by one.



Images

The main issues flagged by our tooling were to do with images. The total size of these images was 1.2MB, accounting for 85% of the entire page transfer size. The main cause of this was sending images far larger than required: multiple images had to be scaled down by more than 100 pixels in the browser, and some were up to 1000px too wide. Oversized images like these made the total page weight far too heavy.

As well as the recommendation to better size images, I also noted that they were not being served in ‘next gen’ formats, e.g. WebP rather than JPEG or PNG. WebP is Google’s own image format, which compresses each image by a further few kB. The final image recommendation was to shrink the company favicon, which was also being scaled down to fit, from 16kB to 1kB.


The first step was to look up popular viewport sizes so that I could use Inspect to check how wide each image was at these resolutions. Two of the larger images on the page scaled to different sizes depending on the viewport, so I resized each of them to multiple widths matching popular viewports. This let me store several versions of each image; modern browsers are smart enough to choose and load only the correct one. The widths I chose were 380px, 640px, 840px and 1020px, as these covered the most common desktop, tablet and mobile viewports.

These images were still quite large, so I converted them to WebP. However, since not all browsers support WebP, I needed to be able to serve these images in both WebP and JPEG/PNG formats. To do this, and to serve the multiple image sizes, I used the picture element with srcset attributes. At first, the images were not being resized correctly on all viewports, so I added the sizes attribute to specify which image to serve at each viewport width. For browsers that don’t support picture, srcset or sizes, there is a fallback to the original image, so that the correct image will always be displayed. You can see an example of the code I used below:
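The original snippet isn’t preserved here, but the pattern looks like this (the filenames and media-query breakpoints are illustrative, based on the widths described above):

```html
<!-- Serve WebP where supported, with a JPEG fallback, at the width
     closest to the viewport. Filenames are illustrative. -->
<picture>
  <source type="image/webp"
          srcset="hero-380.webp 380w, hero-640.webp 640w,
                  hero-840.webp 840w, hero-1020.webp 1020w"
          sizes="(max-width: 420px) 380px, (max-width: 768px) 640px,
                 (max-width: 1100px) 840px, 1020px">
  <source type="image/jpeg"
          srcset="hero-380.jpg 380w, hero-640.jpg 640w,
                  hero-840.jpg 840w, hero-1020.jpg 1020w"
          sizes="(max-width: 420px) 380px, (max-width: 768px) 640px,
                 (max-width: 1100px) 840px, 1020px">
  <!-- Fallback for browsers without picture/srcset support -->
  <img src="hero.jpg" alt="Daemon homepage hero image">
</picture>
```

The browser reads the `srcset` widths and the `sizes` hints, then downloads only the one candidate that best matches the viewport.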

The sizes of the logos on Daemon stayed the same regardless of device, so there was a much easier fix: I simply reduced the logos to 150px (some of these were originally over 1000px wide) and partner logos to 250px. As the images were already small, I didn’t convert them to WebP.

For our favicon, I researched favicon sizes and found that the optimal size is 16px x 16px, so I reduced the size of ours to that from 48px x 48px. This reduced the weight of the favicon to just 1.1kB, almost exactly what I was aiming for.
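For reference, a 16×16 icon declared in the page head looks like this (the filename is illustrative):

```html
<link rel="icon" type="image/x-icon" sizes="16x16" href="/favicon.ico">
```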


Following these fixes, the total page weight was reduced from ~1.4MB to less than 1MB. This reduction in page weight had the biggest effect on page load times of all of the changes: on a fast 3G connection, the image changes alone cut the page load time from 10 seconds to 5 seconds.



Caching

Caching items in the end user’s browser can save a huge amount of page load time on subsequent visits to a page. Text, images and other static assets don’t change often, so they should be cached in the browser rather than downloaded every time a user visits. Our tooling flagged that over 30 Daemon homepage resources had no caching policy, despite rarely changing, so browsers had to download them every time.


To improve this, I added Cache-Control headers to the following asset types across our website: js|svg|png|css|jpg|webp|csv|ico. The header we added was max-age=31557600 (the number of seconds in one year), which tells each browser to keep these assets in its cache for a year.
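The post doesn’t say which web server serves the site; as an illustration of the rule, in nginx it would look something like this:

```nginx
# Cache static assets in the browser for one year (31557600 seconds).
location ~* \.(js|svg|png|css|jpg|webp|csv|ico)$ {
    add_header Cache-Control "max-age=31557600";
}
```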


Although the impact isn’t visible through the page load time on our tooling, as this always acts as a ‘first-time hit’ to the site, the speed difference would be notable to end users who hit the site more than once. Our Daemon Pagespeed Index score also improved with this change. Cache fixes like this are recommended best practice because they vastly improve the experience for anyone who regularly visits the site.



Fonts

Our tooling showed that we were calling multiple Google Fonts. This is a big problem for speed: it often meant fonts didn’t appear until after the images, shifting into place at the last second.

From the waterfall chart on the left below, you can see that the fonts weren’t being called until all assets from Daemon were loaded, and then it took a long time to connect to Google fonts before the final fonts loaded.


I added preconnect and dns-prefetch hints for the third-party calls in the head of index.html. This fix meant that the site connected to Google Fonts before rendering images, so the fonts would load much faster.
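The hints themselves are a handful of link elements in the page head; for Google Fonts they look like this:

```html
<!-- Open connections to the Google Fonts origins before the CSS
     requests them; dns-prefetch acts as a fallback for browsers
     that don't support preconnect. -->
<link rel="preconnect" href="https://fonts.googleapis.com">
<link rel="preconnect" href="https://fonts.gstatic.com" crossorigin>
<link rel="dns-prefetch" href="https://fonts.googleapis.com">
<link rel="dns-prefetch" href="https://fonts.gstatic.com">
```

Note that the font files themselves are served from fonts.gstatic.com, which is why it gets its own hint (with `crossorigin`, as font requests are made in anonymous mode).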


Looking at the new waterfall chart on the right above, you can see that the fonts are requested early in the page loading process and downloaded later. As we had already reduced the image sizes, the time it took for everything to load was greatly reduced, so the fonts came in even faster.

This fix actually caused the page to be marked down in other areas, as we had introduced more into the head of the HTML. Ultimately I made the decision that it was worth losing some points in a couple of areas, as the improvement in load time, and user experience was so large. The impact of this was felt instantly. It improved how quickly fonts loaded, which was a visible change, making the overall UX much better.

Cookie Script


When examining the pagespeed video, I could see that the cookies prompt banner didn’t load until long after everything else in the viewport had loaded. This meant that although the page had finished loading, it still took another 2 seconds for the Cookie consent pop-up to appear. As such, the website took 2 seconds longer to be fully usable. I wanted to make the Cookie script load much earlier in the process, to make the website available as early as possible.


The fix was to change our Cookie consent script so that the banner loads whilst the page is loading, rather than after. This made the banner visible to users far earlier, instead of waiting until the entire page had loaded.
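The exact change isn’t shown in this post, but a common way to achieve this is to load the consent script asynchronously from the head, so it downloads in parallel with the rest of the page rather than after it (the script URL here is a placeholder):

```html
<!-- async: fetch and run the consent script in parallel with page
     rendering, instead of after everything else has loaded.
     The src URL is a placeholder. -->
<script async src="https://example.com/cookie-consent.js"></script>
```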


The impact of this small fix was significant. Users no longer have to wait for the Cookie banner, creating a much better experience. This fix took almost 2 seconds off the total load time, reducing our final complete load to less than 2.5 seconds.

Changes mean results

Once I’d made all of these changes, I documented them and handed them over to a web developer, who made the same changes on our production site. They made a huge impact, reducing the load time from ~11 seconds to ~2 seconds. The homepage also received one of the highest DPSI scores out of all the domains we were monitoring, up from the low-seventies to the mid-nineties.

This project was a great way for me to learn about pagespeed optimisation and our tooling. I learnt how to use the tooling, how to interpret the results, and how to use these to make meaningful changes which improve the end user’s experience. It also gave me great insight into the iterative process that is key to performance engineering. For each change I made, I ran a new test to see the impact: whether I’d fixed an issue, whether this made the page faster, and whether the score improved. This isn’t just a one-time change: in the future, users will expect pages to load even faster, and there will be new methods we can use to meet that demand.

If you’ve got a slow-performing page, there’s a lot that you can do quickly to get it to perform noticeably faster, and greatly improve user experience. Just be sure to tackle one problem at a time, and to measure the effect of your changes as you go.
