You Don’t Need SSR (frameworks)
When over-engineering makes you forget about what really matters
NB: I’ve been working on a new JavaScript/TypeScript framework called Perseid, which lets developers prototype and build amazing full-stack web apps in minutes, so feel free to check it out and give me your feedback! 🫶
Introduction
My team and I recently worked on optimizing the performance of one of our SaaS applications, in order to reduce infrastructure costs and improve the overall user experience by cutting down loading and rendering times as much as possible. We explored many options and tested different frameworks. Server-Side Rendering was one of the techniques we focused on in particular. Here is our feedback on that solution. As the title suggests, this article is quite one-sided; however, I tried to rely on factual data as much as possible, so that it is not just a debate of beliefs.
What is Server-Side Rendering?
Nowadays, when we talk about web apps, it usually means an application written in JavaScript, using one of the popular frontend frameworks out there (VueJS, React, Angular or Svelte, to mention the most famous ones). In that case, everything is generated on the client side (i.e. in the browser), once all the essential assets have been loaded. Most of the time, the web page returned by the server looks like:
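A typical SPA shell (a generic illustration, not any specific framework's output) boils down to an empty container and a script tag:

```html
<!DOCTYPE html>
<html>
  <head>
    <link rel="stylesheet" href="/styles.css" />
  </head>
  <body>
    <!-- Empty container that the framework fills at runtime -->
    <div id="root"></div>
    <script src="/bundle.js"></script>
  </body>
</html>
```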
And that’s pretty much it. The whole DOM is built dynamically. This is a fundamental difference from older methods, where pages were 100% statically generated on the server side, and the browser just had to render the raw output. This trade-off between server and client has several important implications:
- Computation is now mostly done in the browser, taking a lot of load off the back end, reducing its costs and increasing the amount of traffic it can handle.
- It is possible to create fully interactive pages, with a much richer user experience than with static pages, while avoiding successive full-page reloads.
- As pages are no longer sent fully generated, search engines and crawlers won’t index anything valuable (although this statement may be obsolete now, as all modern crawlers are able to execute JavaScript and wait for the page to be rendered before indexing its content).
- Performance/UX: since pages are not sent back ready to be displayed to the end user, they potentially take longer to load, especially when the network connection is bad or the device’s computing resources are limited. We’ll come back to that point later.
In order to get the best of both worlds, developers came up with a hybrid system called Server-Side Rendering (SSR). As its name suggests, the principle is to render the page on the server first, send it, and then bring back all its interactivity through a process called “hydration”. This way, crawlers see plain HTML that is easy to index, and users get a page that is ready to be displayed while still benefiting from all the interactivity reactive frameworks bring.
Challenges brought by SSR
Theoretically, it sounds like a great idea. But in practice, while trying to implement this system, you will find yourself facing a bunch of new challenges that can keep entire teams busy for days and cause many headaches. In his article, stereobooster makes a pretty accurate list of the concerns you will have to deal with. To sum up, you will have to ensure that the following also works seamlessly on the back end:
- Dynamic data fetching from an API or any web service before rendering
- Using a front-end router, or internationalization (i18n) utilities
- Using HMR for development
- Lazy loading components
- Using a state manager like redux or diox
For each one of these, libraries, hacks or workarounds are available. They nonetheless have a lot of limitations, and you will probably spend a huge amount of time adapting them to your own use case. So, is implementing Server-Side Rendering in your apps really worth it? To answer this question, let’s dig further into the different problems SSR tries to solve.
SEO
As I previously mentioned, without SSR, your website’s indexing won’t be impacted on Google or Bing, but it most likely will be on other, less advanced crawlers. Before trying to fix that point, the most important question you should answer is: do you really need SEO?
If you are building a web app, odds are that this app sits behind an authentication system, displays user info that must remain private, and its content will always vary from user to user. In that case, the latter question is a no-brainer: you don’t even have to think about indexing your pages (and probably don’t even want to).
In more specific cases, you may want to index your website (e.g. e-commerce, a list of public profiles, …) while providing a great experience to end users. Even then, implementing SSR yourself is not the only option. Several easy back-end solutions are available out there, for free, like Puppeteer. A simple trick consists in checking whether the current visitor is a robot and, if so, pre-rendering the app before sending full, raw HTML. SaaS services such as prerender.io can also help you with this task. It can be worth having a look at them, depending on your needs.
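That trick can be sketched as follows. Everything below is illustrative: the user-agent patterns are a small sample (real lists are much longer), and the `prerender` and `serveSpa` helpers are hypothetical placeholders for, respectively, a Puppeteer-based or prerender.io-based rendering step, and the usual SPA shell response.

```javascript
// A small sample of common crawler user-agent substrings (illustrative only).
const BOT_PATTERNS = [
  /googlebot/i,
  /bingbot/i,
  /yandex/i,
  /duckduckbot/i,
  /baiduspider/i,
  /twitterbot/i,
  /facebookexternalhit/i,
];

// Returns true when the user agent matches a known crawler pattern.
function isBot(userAgent = '') {
  return BOT_PATTERNS.some((pattern) => pattern.test(userAgent));
}

// In an Express-like server, a middleware would route crawlers to the
// pre-rendering step and everyone else to the regular SPA shell:
function crawlerMiddleware(prerender, serveSpa) {
  return async (req, res) => {
    if (isBot(req.headers['user-agent'])) {
      res.send(await prerender(req.url)); // full, raw HTML for the crawler
    } else {
      serveSpa(req, res); // usual index.html + JS bundles
    }
  };
}
```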
From a more philosophical perspective, being able to index dynamically rendered pages is now the way to go for a crawler, and almost a prerequisite. More and more websites are built this way, for convenience, productivity and all the other benefits it brings. Search engines will have to adapt to that global trend in order to stay competitive. So, spending a huge amount of time trying to improve support for less advanced robots feels a bit like keeping your site IE8-compatible, doesn’t it?
Performance & User Experience
This part is a bit more complex to assess, as it varies a lot from device to device, and depends on both network quality and device performance. The Firebase team made a very interesting series of videos about how essential SSR is and how dramatically it improves your app’s performance and reactivity. But if we think about it from another angle, what do we learn? When it comes to improving page loading times, only 3 KPIs matter. FP, or First Paint, is the time it takes before the browser actually displays something to the user. FCP, or First Contentful Paint, is the delay after which the page AND meaningful data are displayed to the user. And finally, TTFI, or Time To First Interaction, is the moment from which the user is able to interact with the page (e.g. clicking buttons, navigating, …).
Let’s illustrate these KPIs with a blog example. As soon as the user navigates to that blog, the browser displays a white screen until the minimal page (HTML) and its assets (CSS) are fully loaded. This duration corresponds to the First Paint. It does not mean the user can see all the blog’s articles at that point, however. The site only displays a small loader for now, suggesting to the user that those articles are still loading. Shortly after, the JavaScript has been processed, the DOM has been rendered, and the user can now interact with the menu, for instance. This is the Time To First Interaction. Once the articles are eventually loaded, they show up on the page. This is the First Contentful Paint, that is to say the moment the user sees the content they came for.
As we can see, improving UX on page load basically consists in improving each one of those KPIs as much as possible, so that the end user can see and interact with the application without waiting.
Now let’s go further in the demonstration and run a quick simulation of two identical apps, one with SSR, the other without. To clearly emphasize the differences, let’s say a one-way network trip takes 1 second (so each request takes 2 seconds to be fulfilled — in real life it’s going to be closer to 10ms, or you have a deeper problem and SSR is probably not your priority ;)). The browser’s rendering time (parsing JavaScript, mounting the DOM) is 0.5s. Our apps are composed of 1 CSS file and 3 JavaScript assets (retrieved in parallel thanks to HTTP/2), and make 2 synchronous API calls to retrieve relevant data (user info and their blog posts).
From the diagrams above, we can notice that:
- With SSR, the page takes much more time to load, because the API calls are performed on the back end before any response is sent to the client. During that time, the end user is just waiting in front of a white screen. However, once everything is loaded, useful content is displayed sooner than without SSR. Regarding TTFI, since all assets are still required and the DOM must be hydrated, the user sees a complete page but cannot interact with it for 2.5 seconds. It makes the app feel cumbersome and laggy, and can generate friction.
- Without SSR, the page loads way faster, but only displays a loader / splash screen / whatever is in the HTML page at first, for 2.5s, until the JavaScript has been completely loaded and interpreted. At that moment, the user can actually interact with the UI, but no interesting content is displayed yet. Meaningful content appears 2.5 seconds later than with SSR. Nonetheless, it gives you a window to let your users know the app is still waiting for the payload, by displaying relevant UI elements.
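The simulation above can be sketched with some back-of-the-envelope arithmetic. These timings are the deliberately exaggerated ones from the simulation, not real measurements, and they assume server-side API calls cost the same round trip as client-side ones:

```javascript
const ROUND_TRIP = 2;  // seconds per request (1s each way, as in the simulation)
const RENDER = 0.5;    // seconds of JS parsing + DOM mounting
const API_CALLS = 2;   // synchronous data requests (user info, blog posts)

function withoutSsr() {
  const fp = ROUND_TRIP;                      // HTML + CSS arrive, loader shows
  const ttfi = fp + ROUND_TRIP + RENDER;      // JS bundles fetched (in parallel), then executed
  const fcp = ttfi + API_CALLS * ROUND_TRIP;  // data fetched one call after another, content shows
  return { fp, ttfi, fcp };
}

function withSsr() {
  // The server performs both API calls before answering the first request,
  // so the very first response already contains the full content.
  const fp = ROUND_TRIP + API_CALLS * ROUND_TRIP;
  const fcp = fp;                             // content is already in the HTML
  const ttfi = fp + ROUND_TRIP + RENDER;      // JS still fetched, then hydration
  return { fp, ttfi, fcp };
}

console.log('no SSR :', withoutSsr()); // { fp: 2, ttfi: 4.5, fcp: 8.5 }
console.log('SSR    :', withSsr());    // { fp: 6, ttfi: 8.5, fcp: 6 }
```

Under these assumptions, the non-SSR app paints first and becomes interactive sooner, while the SSR app shows meaningful content 2.5 seconds earlier but stays frozen until hydration completes, 2.5 seconds after it appears.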
UX-wise, this example shows pretty clearly that SSR by itself is not especially better, nor does it bring any invaluable improvement. Moreover, you can do better without SSR. How? By optimizing your UI components and asset sizes, by lazy-loading non-critical elements, and so on.
One last thing to keep in mind is that with SSR, the server’s HTML response carries significant additional overhead, which can noticeably increase loading time. Indeed, while containing the fully generated DOM, the HTML will also contain the same information in JSON format, needed to perform DOM hydration. Going back to the blog example, it means that the page returned by the first request will contain both the HTML-formatted blog posts and the JSON list of those posts, ready to be parsed by JavaScript. And the same goes if you are using a state management framework.
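A minimal sketch of that duplication, assuming a hand-rolled SSR endpoint for the blog example (the markup is hypothetical, and `window.__INITIAL_STATE__` is just a common convention for the serialized state — each framework has its own scheme):

```javascript
// Posts already fetched by the server before responding.
const posts = [
  { id: 1, title: 'Hello SSR' },
  { id: 2, title: 'Goodbye SSR' },
];

// Builds the SSR response: the same posts appear twice in the payload,
// once as rendered HTML and once as JSON for client-side hydration.
function renderPage(posts) {
  const articles = posts
    .map((post) => `<article><h2>${post.title}</h2></article>`)
    .join('');
  return `<!DOCTYPE html>
<html>
  <body>
    <div id="root">${articles}</div>
    <script>
      // Duplicate of the data above, serialized for hydration:
      window.__INITIAL_STATE__ = ${JSON.stringify({ posts })};
    </script>
    <script src="/bundle.js"></script>
  </body>
</html>`;
}
```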
Costs & Complexity
Besides the two main concerns we previously discussed, there is another one developers don’t really talk about when promoting SSR: the costs that inevitably come with the increased complexity.
- Enabling SSR requires rewriting a large part of your app so it is 100% isomorphic (although you never completely achieve that). This takes a huge amount of time that you could have spent improving other aspects of the app, or focusing on other ways to increase performance.
- You will definitely need a more powerful and complex infrastructure, as running an app on the back end has many implications in terms of security and features.
- Your servers will be able to handle much less traffic than before (because of the computational resources it takes to render pages), increasing your costs.
Conclusion — About simplicity
I am personally very fond of the saying YAGNI, or “You Aren’t Gonna Need It”, because it makes total sense, especially in software engineering. Developers love finding new problems, and coding awesome stuff and frameworks that beautifully solve them, for the sake of the technological achievement itself, more than for the benefits it actually brings to end users. I believe SSR is one of those problems. It makes you lose focus on what really matters: your users. It’s another technical challenge that brings endless discussions and costs a lot of time for a very small ROI.
You may strongly disagree with the conclusion of this article, and again, you could be right. Maybe we missed some interesting points, and it would be great to highlight them, so please feel free to add any comments! As a matter of fact, the debate is still not settled in the industry. Big companies use SSR for their platforms (such as Netflix, Facebook or Quora). Others do not (YouTube, Twitter, Slack). Some even simply keep relying on good old static rendering, adding touches of JavaScript only where necessary (like Amazon or Wikipedia), which can also be a perfectly valid solution in some cases.
In any case, if you are interested in exploring Server-Side Rendering for your apps, I recommend checking out the most popular open-source frameworks out there: