Customers hate slow websites and the long waits while pages load. Research has shown that customers on a mobile device are even more impatient than those using a desktop. One explanation is that mobile users are on the road and don’t have much time. Another possible explanation is that most desktop users are at work, where they do have time 😉

Contrary to what most people believe, users prefer a website that is always slow to one that is sometimes slow. With a website that is always slow, they know what to expect and take the delay for granted. On a website that is fast most of the time, they get frustrated much sooner, and more of them leave the site during a slow phase.

Many companies set fixed alert levels in their web monitoring tools: “Send me an alert when this page takes more than 10 seconds to load.” Other companies monitor the average response time per day. Neither method will identify a high variance in response times.
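To see why both approaches miss instability, consider a minimal sketch (the function name and the coefficient-of-variation threshold are our own illustrative choices, not a real monitoring API): a page that is usually fast but occasionally very slow can stay well under a fixed 10-second alert level on average, while its variance tells a very different story.

```python
import statistics

def variance_alert(samples_ms, cv_threshold=0.5):
    """Illustrative helper: flag unstable response times that a fixed
    threshold or a daily average would miss.

    Returns (alert, mean, coefficient_of_variation)."""
    mean = statistics.mean(samples_ms)
    stdev = statistics.stdev(samples_ms)
    cv = stdev / mean  # coefficient of variation: stdev relative to mean
    return cv > cv_threshold, mean, cv

# A page that is usually fast but occasionally very slow:
samples = [800, 900, 850, 7000, 820, 880, 6500, 870]
alert, mean, cv = variance_alert(samples)
# The daily average (~2.3 s) never trips a 10 s alert level,
# yet the coefficient of variation (> 1) reveals the instability.
```

With these sample numbers the mean stays around 2.3 seconds, so neither a 10-second threshold nor a daily-average chart would raise an alarm, while the coefficient of variation immediately exposes the spikes.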

Many of the websites that we have analyzed over time show a significant degree of variance. In some cases, this is caused by periods of peak traffic that put pressure on the back-end. Sometimes housekeeping on the back-end causes delays; sometimes the variance is caused by network issues or the performance of third-party providers. All in all, for most websites, stability is the result of hard work and does not come out of the box.

The chart above shows an example of a hotel website with a fast back-end, but the page is far too big, with too many images, video, and 82 JavaScript requests. The home page always takes more than 10 seconds to become interactive, and the total page load time is always over 20 seconds. The spikes are significant and clearly visible.

The second example is a site that we are currently working on. At the end of June we improved caching and optimized images, which reduced both time to interactive and time to complete by roughly 40%. The page also became much more stable (except for a short outage caused by back-end maintenance on the 8th of July). More improvements will follow in the next releases.


When we work on optimization projects, our first goal is to stabilize page load times; we then gradually improve performance until we have reached the customer’s objectives. Depending on the root causes of the instability, we can improve results by using a Content Delivery Network (we actively partner with 7 different providers and have working experience with many more). A CDN reduces the latency caused by a long distance between customers and origin, lowers the load on origin (and with it the impact of a slow back-end), and forms a first line of defense against Denial of Service attacks. With Dynalight FLEX we can actively cache dynamic content to achieve even better results, and with Dynalight ELIOS we can add a Web Application Firewall to block unwanted traffic (bad bots, crawlers, traffic from unwanted countries, and more).
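As a generic illustration of what caching dynamic content at the edge involves (this is a sketch of the kind of rule a CDN applies, not the actual Dynalight FLEX logic; the paths and rules are hypothetical):

```python
# Hypothetical edge rule: decide per request whether a CDN may cache
# the response, and with which standard Cache-Control header.
def cache_policy(path, has_session_cookie):
    """Return (cacheable, Cache-Control header) for a request."""
    if path.startswith("/api/prices") and not has_session_cookie:
        # Dynamic but user-independent content: cache briefly at the
        # edge and serve stale while revalidating against origin.
        return True, "public, max-age=30, stale-while-revalidate=60"
    if path.endswith((".js", ".css", ".jpg", ".png")):
        # Static assets: cache long, rely on versioned filenames.
        return True, "public, max-age=31536000, immutable"
    # Personalized pages: never cache at the edge.
    return False, "private, no-store"
```

Rules like these are what shift load from a slow back-end to the edge: the more responses the CDN can legitimately serve without contacting origin, the less the origin’s variance shows up in page load times.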

This combination will usually reduce the variance in page load times by 80% and make the whole site much faster, something your customers will greatly appreciate.


A request control service which modifies HTTP requests to guide traffic based on application requirements.

Let’s discuss how Lighthouse can improve your stability.

We would love to stay in touch with you!
Please subscribe to our MAILING LIST