Do you remember the last time you were waiting to be served at a restaurant? Or the last time you were queueing at an amusement park? We bet these weren’t the most pleasant experiences you’ve ever had. Neither is waiting for a website to load – and yet, users still have to put up with poorly optimized web apps whose performance leaves a lot to be desired.
Luckily, this situation can be fixed. As always, we can’t offer you one solution that will magically do away with all the performance roadblocks your product is facing. What we can do, however, is present you with a set of best practices for improving web app performance which, when carefully matched with your needs, will surely help your business flourish.
Why should you care about web app performance?
First things first: let’s start by demonstrating how important web performance optimization – which, by the way, is a never-ending job – is for the long-term success of your venture.
Optimal UX lowers customer churn
“Content is where I expect much of the real money will be made on the Internet” – remember this line from Bill Gates’s Content is King essay? A lot has changed since 1996, but the immense value of content, especially when it’s engaging and useful to customers, remains unaltered to this day. The same goes for matching your services to the needs of a given target audience. Nevertheless, even the best-optimized content and the most relevant offering won’t do if your web app performance isn’t satisfactory.
Just take a look at two real-life cases. When working on their website, the BBC found that for every additional second it took to load, 10% of users would leave. DoubleClick, on the other hand, saw 53% of users abandon a page if it took more than 3 seconds to load. The explanation for this phenomenon is simple: with the ever-increasing amount of stimuli around, we’ve grown extremely impatient, and all we care about online is instant gratification. According to Harry Shum of Microsoft, your users are likely to head to a competitor’s website if it outspeeds yours by as little as 250 milliseconds. And if a video within your app stalls while loading, 80% of users are going to click away – and likely never come back.
All the above-mentioned statistics prove that flawless performance is fundamental to positive user experience. And speaking of users being more eager to interact with your web application...
Improved web app performance drives conversions
User retention driven by satisfactory web app performance is fundamental to increasing both conversion rates and revenue. Once again, to fact-check this claim, let’s quote a couple of real-life cases of businesses profiting from website optimization:
- Walmart noticed a 2% increase in conversions for every 1 second of improvement in load time. What’s more, for every 100 milliseconds of performance improvement, the company’s revenue grew by 1%.
- When AliExpress reduced its load time by 36%, conversions for new customers grew by 27% and the number of orders increased by 10.5%.
- For Mobify, reducing the homepage’s load time by 100 milliseconds resulted in a 1.11% increase in conversion rates, which translated into an annual revenue increase of USD 380,000 in 2016.
Looking at these figures, you can’t help but notice the interdependence of performance and profit. If you’re curious about the impact site speed has on your business, you can quickly satisfy your thirst for knowledge with Google’s Test My Site: a tool that will assess your website’s current performance, benchmark it against the competition, and correlate future improvements with a potential increase in revenue.
Source: Raidboxes.io
Visibility and SERP position
Last but not least, improving web app performance is vital for enhancing your business’s online visibility. Of course, there are at least a couple of search engines used around the globe. Nonetheless, accounting for over 85% of the global search market, Google is definitely the front runner – which is why we’re going to focus on its ranking algorithm in this section.
Worldwide desktop market share of leading search engines from January 2010 to January 2020
The digital giant remains quite secretive about the factors it takes into consideration when deciding where in the SERPs a given website ends up. What we do know, however, is that Google uses around 200 ranking signals, including domain-related, on-page, and off-page factors – and one of them is page speed. How fast is fast enough, you may ask? Once again, there is no definite answer; however, it is generally assumed that load times of three seconds for an ordinary website and two seconds for an e-commerce platform are satisfactory. If your web app is slower than that, Google’s algorithm is likely to penalize you by lowering its SERP position, regardless of how well it is optimized for SEO.
How do you measure web app performance?
Now that you have fully realized how important the smooth functioning of your application is to the long-term success of your venture, you might be tempted to jump headlong into speed optimization – especially if you’ve heard about Google’s plans to mark slow websites with a “badge of shame”. The thing is, however, that there are no silver bullets when it comes to improving web app performance.
In the next section, you’ll see that there are a number of steps you can take to increase web app speed. Nonetheless, no action undertaken in the IT world exists in a vacuum, and employing one of the solutions listed below can radically affect the remaining aspects of your app. For this reason alone, we encourage you to start by subjecting your web app to a thorough performance evaluation with one of the following tools:
- Google’s Lighthouse and PageSpeed Insights
The former is an open-source automated webpage auditing tool. The report it generates includes information about performance, PWA optimization, accessibility, best practices, and SEO. The latter is a more focused tool, since it concentrates exclusively on performance. As opposed to Lighthouse, PageSpeed Insights bases its analysis not only on lab data but also on field data – which makes tailoring the performance of your web app to different devices and real-world network conditions much easier.
- WebPageTest
Don’t be fooled by the first impression: although its interface screams old-fashioned, WebPageTest is a robust tool for running both simple and advanced speed tests. According to the official website, as part of an in-depth analysis of web app performance, WebPageTest provides “diagnostic information including resource loading waterfall charts, Page Speed optimization checks, and suggestions for improvements”. On top of that, the tool allows you to test performance with a wide range of locations, browsers, and devices in mind.
- Pingdom
Simpler than WebPageTest and boasting a clear UI, Pingdom is a free website speed test tool suited to the needs of less technical users. It offers comprehensive summary tables for the analyzed metrics and in-depth waterfall charts, and allows you to run tests from 7 different locations – all while remaining highly user-friendly.
Most relevant web app performance metrics
It’s enough to take a quick look at the reports generated by each of the above-mentioned tools to see how many factors actually influence the speed of any application. To help you understand this complex issue, let us now focus on some of the most popular web app performance metrics.
- Time to first byte
Let’s start with time to first byte, a responsiveness indicator defined as the amount of time needed for the browser to receive the first byte of information from the server. The recommended score is below 200 milliseconds, although only 2% of websites examined by the Web Almanac in 2019 proved to do that well. It’s also worth noting that, by definition, we can’t speak of a fast first contentful paint (more on that below) if the time to first byte is longer than one second.
The average time to first byte in 2019
- Time to first paint
First paint, first contentful paint, and first meaningful paint are all user-centric perceived performance metrics or, simply put, metrics referring to the actual experience of your web app users. First paint indicates how long it takes for the first pixels to appear on the screen and inform the visitor that the content is about to load. First contentful paint measures how much time passes before any content defined in the DOM, be it an image or a text, is rendered. First meaningful paint, in turn, shows when the page’s primary content is finally loaded and ready to – as the name suggests – provide users with meaningful information.
The difference between first paint, first contentful paint, and first meaningful paint
As suggested by Google, there are essentially two ways of improving the first paint metrics: shortening the time needed for downloading resources and doing less work that would hinder the browser from rendering the DOM content.
- Speed index
Speed index is yet another metric referring to user experience – in particular, to what the visitor sees and when. Defined as an indicator of how quickly above-the-fold content is displayed on a screen, speed index is sensitive not only to the quality of the web connection but also to the size of the viewport, which makes it invaluable in optimizing web applications for varying screen sizes. Although Google recommends that the speed index not exceed 4.3 seconds, according to Backlinko’s research, the average speed index score as of 2019 was 4.782 seconds for desktop and 11.455 seconds for mobile.
- Time to interactive
The third user-centric metric discussed in this article is time to interactive: an indicator measuring how much time passes before a page becomes fully interactive – meaning that first contentful paint has occurred, event handlers are registered for the majority of visible elements, and the website responds to user actions such as button presses in less than 50 milliseconds. The recommended time to interactive doesn’t exceed 5.2 seconds, and the score your app achieves can be lowered by optimizing JavaScript. One real-life example of the tremendous impact that TTI can exert on your business is that of Pinterest which, having reduced the score from 23 to 5.6 seconds, observed a 60% increase in user engagement.
Time to interactive, as compared with other performance metrics
How to go about improving web app performance?
Now that you know that web app performance can make or break a business, and which metrics you should take into consideration when determining whether the speed of your product is satisfactory, we are ready to move from theory to practice. In the following part of the article, we’ll present you with speed-related problems and real-life solutions that can be implemented throughout the software modernization process on the client or server side of the application. As these two sides continuously influence each other, let’s start with the tips and tricks that can work both on the frontend and the backend.
Spotting bottlenecks
We’ve already established that when trying to optimize web app performance, the first thing you must do is to identify problems and bottlenecks that are slowing your app down. And you can trust us when we say that doing so without proper tools is an almost impossible task.
As far as spotting bottlenecks on the client side is concerned, we’ve already discussed some simple solutions that don’t require extensive technical knowledge, such as the developer tools shipped with your web browser. For instance, if you want to quickly find out whether your web app is well optimized or what curbs its speed, just open the Audits (now Lighthouse) tab in Chrome DevTools and run an audit.
If, on the other hand, you’re a more tech-savvy CTO whose app bundle has turned out too big on the verge of deployment, we advise you to try webpack-bundle-analyzer. Working with applications bundled with webpack, this particular tool generates an interactive treemap showing the contents of your app. Using this information, you’re one step closer to a minimized bundle size and optimized performance.
Treemap visualization of the contents of all bundles created by webpack-bundle-analyzer
When it comes to identifying bottlenecks on the backend, the selection of tools will correspond to the framework used in the project. If, for instance, your framework of choice is Django, the Django Debug Toolbar can come in handy. It displays many different metrics, such as the time needed to load the page, along with detailed information about the SQL queries performed – you can see exactly which and how many queries were run to load a page or resource, and how long each of them took. In other words, Django Debug Toolbar exposes redundant or slow queries, thus giving you a chance to refactor the code.
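As a rough sketch of how the toolbar is wired in – based on its standard installation steps, with placeholders where your project’s own apps and routes go – the setup might look like this (development only):

```python
# settings.py – a development-only Django Debug Toolbar setup
INSTALLED_APPS = [
    # ... your apps ...
    "debug_toolbar",
]

MIDDLEWARE = [
    # The toolbar's middleware should come as early as possible
    "debug_toolbar.middleware.DebugToolbarMiddleware",
    # ... the rest of your middleware ...
]

INTERNAL_IPS = ["127.0.0.1"]  # the toolbar only renders for these addresses

# urls.py – expose the toolbar's own endpoints
from django.urls import include, path

urlpatterns = [
    # ... your routes ...
    path("__debug__/", include("debug_toolbar.urls")),
]
```

With this in place, every page you open in development gains a side panel listing, among other things, each SQL query and its execution time.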
Browser and server-side caching
If you were now asked to recall where you spent a vacation in 2003, how fast would you be able to do it? What if you were asked about it again 10 minutes later? This time it wouldn’t take you that long, because the mental image of the sunny resort would have been extracted from the deepest depths of your memory and brought closer to the here and now. Why are we talking about recollection? Because that’s just how caching works, whether it happens on the client or server side.
Let’s start with the type of caching that many users are already familiar with: browser caching, which is a way of storing website resources like images or CSS files on a local machine for a definite amount of time. When the user visits a website for the very first time, their browser downloads all the files needed to display the content properly. Such files can be saved in the browser cache and used to minimize the number of HTTP requests later on. Think of the header with your venture's logo, for example – once downloaded, it is served instantly upon the user’s next visit to your website, which decreases the number of requests sent to the server.
Source: Reseller Club
How do you implement browser caching? It’s as simple as adding a Cache-Control header to your server’s responses. This header lets you control how, under what conditions, and for how long browsers cache the response.
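To make the mechanism concrete, here is a deliberately simplified, plain-Python sketch (function and parameter names are ours) of the freshness check a browser applies to the max-age directive of a cached response:

```python
import time

def is_fresh(cached_at, cache_control, now=None):
    """Decide whether a cached response may be reused, based on a
    simplified reading of its Cache-Control header."""
    now = time.time() if now is None else now
    directives = [d.strip() for d in cache_control.lower().split(",")]
    if "no-store" in directives:
        return False  # caching explicitly forbidden
    for d in directives:
        if d.startswith("max-age="):
            # Fresh as long as the copy's age is below max-age (seconds)
            return (now - cached_at) < int(d.split("=", 1)[1])
    return False  # no freshness info – fall back to revalidating

# A copy cached 60 s ago with max-age=3600 is still fresh:
print(is_fresh(cached_at=0, cache_control="public, max-age=3600", now=60))  # True
```

In Django, for example, you don’t have to set the header by hand – the `django.views.decorators.cache.cache_control` decorator does it for you.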
Server-side caching differs from browser caching in that it stores content not on the user’s machine but rather on the website’s server. If you implement this type of caching, it’s only the first visitor to your website who requests content from the origin server – all subsequent users are served data from the same cache, which relieves the server and, as a result, improves page load time.
As far as specific backend solutions are concerned, let us take Django as an example once again. The framework is quite flexible, allowing you to cache the entire dynamic site, selected pieces of it, or specific views only. One step you mustn’t forget about when setting up the cache in Django is pointing to the place where the cached data lives: the database, the filesystem, or memory. One example of an in-memory caching solution used in Django-based web apps is Redis which, thanks to storing data in RAM and offering fast data access, boasts exceptionally high performance.
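By way of illustration, a typical (hypothetical) configuration might look like the sketch below – Django ships a built-in Redis backend since version 4.0, and individual views can be cached with a decorator:

```python
# settings.py – point Django's cache framework at a Redis instance
# (adjust LOCATION to your setup)
CACHES = {
    "default": {
        "BACKEND": "django.core.cache.backends.redis.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379",
    }
}

# views.py – cache a single (illustrative) view's output for 15 minutes
from django.views.decorators.cache import cache_page

@cache_page(60 * 15)
def product_list(request):
    ...
```

With this setup, only the first request within each 15-minute window does the full rendering work; subsequent ones are served straight from Redis.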
Ways to increase your app’s speed on the frontend
On the client side, improving web app performance boils down to reducing page load time. Be careful, though – achieving the goal of shortening load time is no picnic. On the contrary, it calls for undertaking a set of deliberate actions, the most effective of which we present below.
Choosing your tools wisely
There are many reasons to challenge someone and various types of combat, most of which involve the use of weapons. To win, you need to not only choose the arms wisely but also use them to your advantage. In this regard, building a high-performing web app is just like an old-fashioned duel: to do it right, you need to begin by cherry-picking the tools that will suit your needs best.
- Speed and size
The three most popular frontend frameworks available on the market are React, Angular, and Vue. Wondering which one achieves the shortest first contentful paint time (which, as we said above, is one of the most important performance metrics)? Check out this video:
According to Inian Parameshwaran, Vue and Preact achieve the best results. Yes, you read that right: Preact – a fast alternative to React which weighs only 3 kB. React comes third and Angular is not even mentioned in the results. In addition to speed, it’s worth considering the framework’s size as well. After all, the smaller the file, the faster we can download it.
Want to get to know frontend frameworks better? Take a moment to read these articles:
🔸 Angular vs. React - choosing the right technology for your next project
🔸 What is Preact and when should you consider using it?
🔸 Next.js and React Server-Side Rendering integration
- Static vs server rendering
The method of delivering content to the user and rendering the page varies depending on the technology. Striving for the best performance, you should consider serving static sites, meaning plain HTML served directly to the browser. In this case, there is no code executed while loading – loading a page involves only downloading the index.html file. If you decide on building a static site – which, by the way, is really hard to beat in terms of speed and performance – we recommend trying out Gatsby.js, a React-based open-source framework.
Sometimes, however, you can’t predict all the requests that visitors may make, or the response will change depending on the user – in which case you simply can’t afford for the content to be static. If you still care about performance but need to manage the content dynamically, consider using a framework that supports server-side rendering, such as Next.js. Server rendering also serves an HTML file of your page that is ready to be displayed; however, with Next.js running on a Node.js server, every new request for page content triggers a new rendering process, so you can fetch fresh data from your API.
- Size of third-party libraries
Imagine your users having a really poor internet connection, where every bit of data makes a difference. In this scenario, lowering load time will surely help you achieve significantly higher conversion rates. With that in mind, while adding new npm or Yarn packages to your project, you should consider their impact on your web app’s size. To simplify this task, it’s a good idea to use a bundle analyzer that will let you keep track of your dependencies.
Lazy loading
You might have heard about lazy loading with respect to images and videos. Now, however, let’s dig a bit deeper into the topic and look at lazy loading from the perspective of code splitting. How are the two connected? It’s fairly simple: if you split the code into smaller portions, you most likely do it with the thought of lazy loading those bundles later on.
Code splitting is the ability to split an entire app into smaller chunks called modules, which are interconnected. This practice decreases the size of the main bundle downloaded on the first visit. But what happens when one needs the functionality of the module that’s not located in the main bundle?
That’s when lazy loading steps in as a solution that helps save both data and time. But what happens to the modules that are not needed right away, you may ask? Lazy loading lets you fetch those bundles later on, at a more appropriate time – when the user actually needs them, for example. You still get access to their functionalities, but you don’t need to load the modules when the app initializes, which – yes, you guessed it right – helps reduce loading time. As far as specific lazy loading techniques are concerned, it’s quite popular to use IntersectionObserver to keep track of which element should be downloaded next, as well as to split modules by route, so that the required modules are only fetched when a user navigates to a specific route.
Optimizing content for loading speed
Since we have mentioned assets when introducing lazy loading, it would be a shame not to say a word or two about the optimization of visual content as well. To begin with, make sure you go for the right format. Sure, PNG does ensure high quality – this, however, comes at the cost of size, and it might turn out that JPEG, although less detailed, will be enough for you. If you’re uploading a logo or an icon, consider using the vector SVG format, which ensures detailed rendering of geometric shapes at all resolutions while remaining lightweight. And if you’re in need of an animation, a looped video may prove a better choice than a heavy GIF.
Here’s a decision tree by Google’s Web Fundamentals to help you choose the best image format for your needs
The other thing to remember when optimizing assets for load speed is serving scaled images. If your web app displays visual content at 800 x 600 pixels, there is no point in uploading an image ten times larger, since it will be downscaled anyway while remaining just as heavy. This, in turn, brings us to another performance-related practice: lossless and lossy compression. The former reduces file size without discarding any pixel data, thus maintaining the full quality of the uploaded image; the latter, on the other hand, eliminates selected pixel data. While lossy compression sounds like a rather radical practice, when balanced right with its lossless counterpart, it doesn’t have to be noticeable to your users.
Asset subjected to lossy compression as compared to the original image
A tool that may come in handy when optimizing assets for improved web app performance is Gzip, a data compression program that can compress and decompress files on the fly. It works on all types of files, though it is most effective on text-based assets such as HTML, CSS, and JavaScript.
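To see the effect, here is a tiny stdlib-only Python sketch compressing a repetitive HTML payload (the markup itself is made up for illustration):

```python
import gzip

# A repetitive HTML payload – typical of real markup
html = b"<html>" + b"<p>Hello, performance!</p>" * 200 + b"</html>"

compressed = gzip.compress(html)
print(len(html), "bytes ->", len(compressed), "bytes")

# Decompression restores the exact original bytes
assert gzip.decompress(compressed) == html
```

Repetitive text like this shrinks dramatically, which is why web servers routinely gzip HTML, CSS, and JavaScript before sending them – the browser decompresses the response transparently.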
Minimizing API calls
Last but not least, when discussing improving web app performance on the frontend, we need to mention minimizing API calls as well. Imagine making three different requests to get all the data necessary to complete your view. It doesn’t seem like the most effective way of running things, does it? In such a scenario, consider using GraphQL, a query language for executing queries against a type system you define for your data. Unlike REST, GraphQL can fetch all the data in a single request, without the over-fetching that may happen with REST APIs. At the end of the day, the fewer API calls made, the greater the speed of your app.
Tips and tricks to improve web app performance on the backend
Now that we have covered client-side performance optimization techniques, it’s high time we moved on to the backend, which is said to be the beating heart of any application.
Optimizing your code
Once you identify slow and redundant queries made to the SQL database, you are ready to rise to the challenge of optimizing your code. As is the case with the frontend, the ultimate choice of tools and techniques on the backend will depend on the server-side tech stack your product is based on. In this paragraph, we’ll take Django as an example.
Speaking of Django, you may be interested to learn how it differs from another well-known Python framework, Flask.
In Django, queries are made lazily, which means that the framework won’t actually run the query against the database until the query’s result is evaluated in the code. Although this approach has its perks, it can also cause some performance issues, especially when we’re talking about using foreign keys and many-to-many relationships in the database.
One common problem is getting an object from a database and then accessing its related objects by iterating through them in a loop. This creates a new SQL query for every iteration to fetch the related object and, as a result, can lead to hundreds or even thousands of additional, unnecessary queries, slowing down the entire app. You can overcome this obstacle in two ways. Firstly, you can reduce the number of queries to just one and cache the result by using the select_related method when fetching an object with foreign keys. Secondly, employing prefetch_related for many-to-many relationships will give you two queries, whose results will be joined in Python rather than by the database.
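The effect is easy to see in miniature. The sketch below – plain Python, no Django required, with all names purely illustrative – counts simulated queries for the naive per-row loop versus a single JOIN-style fetch, which is essentially what select_related boils down to:

```python
# A toy in-memory "database": authors, and books that reference them by id.
AUTHORS = {1: "Alice", 2: "Bob"}
BOOKS = [("Django in Action", 1), ("Fast Queries", 2), ("Lazy ORMs", 1)]

queries = 0  # counts simulated SQL round trips

def fetch_books():
    global queries
    queries += 1  # SELECT * FROM books
    return list(BOOKS)

def fetch_author(author_id):
    global queries
    queries += 1  # SELECT * FROM authors WHERE id = %s
    return AUTHORS[author_id]

def fetch_books_with_authors():
    global queries
    queries += 1  # one JOINed SELECT – the select_related approach
    return [(title, AUTHORS[aid]) for title, aid in BOOKS]

# Naive loop: 1 query for the books + 1 per book for its author (N+1)
naive = [(title, fetch_author(aid)) for title, aid in fetch_books()]
print("naive:", queries, "queries")   # 4 queries for just 3 books

# JOIN-style fetch: a single query, regardless of the number of books
queries = 0
joined = fetch_books_with_authors()
print("joined:", queries, "query")    # 1 query
```

Both approaches return the same data; the difference is that the naive version’s query count grows linearly with the number of rows.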
Speeding up data search
The way you retrieve data from an SQL database is another critical factor behind server-side performance, especially when the dataset is big and complicated or you want to perform smooth full-text searches – in which case, a search engine such as Elasticsearch can be your key to success.
Elasticsearch is an Apache Lucene-based distributed search and analytics engine first released in 2010. Although not a database itself, it can be used alongside an SQL database to speed up searching, filtering and sorting of large amounts of data in places where retrieving data from an SQL database may be slow.
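To give a flavour of what a full-text search looks like in practice, here is a minimal query body – expressed as a plain Python dict – of the kind you might POST to Elasticsearch’s _search endpoint (index and field names are made up for illustration):

```python
import json

# Full-text search for "fast web apps" in the title field,
# restricted to published articles, newest first.
query = {
    "query": {
        "bool": {
            "must": [{"match": {"title": "fast web apps"}}],
            "filter": [{"term": {"status": "published"}}],
        }
    },
    "sort": [{"published_at": {"order": "desc"}}],
    "size": 10,
}

# This JSON would be sent to e.g. http://localhost:9200/articles/_search
print(json.dumps(query, indent=2))
```

The match clause performs scored full-text matching, while the term filter is an exact, unscored constraint – the kind of query that stays fast even over very large datasets.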
Interested in technicalities? Read our guide on how to use Elasticsearch with Django and REST Framework.
Since the topic of web app performance is closely related to that of web app scalability, we also need to mention that Elasticsearch has been designed to support horizontal scalability. Using the engine, you can continue adding more nodes so that the increased amount of data does not slow down the performance of your application. If you’re curious about the way Elasticsearch handles changes in index size, cluster size, and throughput, you can read more on that topic in this article.
Wrapping it up
As you can see, improving web app performance is a complex task, and the list of optimization techniques is by no means a finished one. You can use the tips and tricks described above as a starting point, but don’t hesitate to supplement this checklist with other steps that will allow you to speed up your application. And, above all, don’t stop analyzing your web app’s performance and testing your assumptions – for optimization is a never-ending process.
Is poor performance dragging your business down? Read our whitepaper to find out how to prepare for and execute a modernization that’ll help you address this issue!