Depending on how long you have been using the web, you may remember a time when all functionality was handled by servers. You loaded a page, decided what you were going to do next, clicked (this was several years before smartphones, so no tapping… yet) and waited while the server processed your choice. After the server had built the next representation of the state of your session and sent it back down the wire to be rendered by Internet Explorer - your flavour may vary, but I first used IE 3 - you repeated the whole process until you got bored (or your housemate tried to use the dial-up too and dropped your connection).
Search engines were built around this request-response model and did not execute JavaScript: content loaded via XMLHttpRequest was invisible to their bots and left out of the index.
In 2009, Google recognised the rising trend for web applications built using AJAX and client-side rendering and announced their AJAX Crawling Scheme. In a nutshell, this scheme provided a way for developers to tell Google where to find an HTML snapshot which mirrored the client-side state seen by a user: pages opted in with `#!` URLs (or a `<meta name="fragment" content="!">` tag), and the server returned the snapshot whenever a crawler requested the `_escaped_fragment_=` form of the URL.
The end of Google’s AJAX Crawling Scheme
In October 2015, Google deprecated the AJAX Crawling Scheme, stating that their crawlers are now generally able to render and understand web pages like modern browsers. To try and understand if this is the case in practice, I specifically asked whether API-driven sites would be penalised and received a response from John Mueller - Webmaster Trends Analyst at Google:
Ian Thomas - How well does the crawler handle SPAs that compose their content from further XHR calls on load? I’ve seen good results with apps that bundle their content into their JS payload but nothing to suggest that those backed by an API with data loaded after DOM Ready will be as well crawled.
John Mueller - I’d use the Fetch and Render tool to double-check. Loading data via AJAX/JSON shouldn’t be a problem, as long as everything’s crawlable.
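To make the distinction in the question above concrete, here is a minimal plain-JavaScript sketch - no framework, and the content and function names are purely illustrative - of the two content strategies, and of why a crawler that snapshots the first paint may only ever see the loading state of an API-backed page:

```javascript
// Strategy 1: content bundled into the JS payload - present on first render.
const bundledContent = { headline: 'Football - Match Betting' };

function renderBundled() {
  // The full content is available as soon as the script executes.
  return `<h1>${bundledContent.headline}</h1>`;
}

// Strategy 2: content fetched over XHR after DOM ready - the first render
// can only show a loading state until the response arrives.
function renderFromApi(state) {
  return state.data ? `<h1>${state.data.headline}</h1>` : '<p>Loading…</p>';
}

const state = { data: null };
console.log(renderFromApi(state));   // first paint: '<p>Loading…</p>'

// Later, once the (stubbed) API call completes, a real user sees the content -
// but a crawler that captured the first paint has already moved on.
state.data = { headline: 'Football - Match Betting' };
console.log(renderFromApi(state));   // '<h1>Football - Match Betting</h1>'
```

Fetch and Render should show whether Googlebot waits for the second case to resolve, which is exactly what our proof of concept tests.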
Why should we favour client-side rendering?
Web applications which implement client-side rendering can feel faster than more traditional websites. Looking specifically at our own Skybet mobile site, we see significant performance gains from not requiring a full page load on every customer action. Additionally, the introduction of transition feedback and persistent UI elements during content loads makes browsing the site feel slicker.
There are product features that only become possible with client-side rendering - an example being our recently released video player which can be docked to the top of the screen allowing uninterrupted stream viewing across page views.
A client-only stack can take advantage of advanced tooling designed to optimise developer productivity and UI performance, such as Webpack or Browserify. It’s also possible to separate out data dependencies without needing to understand a component tree ahead of sending a response back from a server - allowing us to deliver a library of components which can be plugged in and re-used across applications.²
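As an illustrative sketch of that idea - all names and endpoints here are hypothetical, and in the real application this would be a React component using its lifecycle methods - a “smart” component that owns its own data fetching might look like:

```javascript
// A self-contained component that knows how to fetch its own state from a
// dedicated endpoint, modelled without React so it runs standalone.
class SmartComponent {
  constructor(fetchData) {
    this.fetchData = fetchData;       // injected call to a dedicated endpoint
    this.state = { odds: null };
  }
  componentDidMount() {
    // In React this fires after the first render; the component then fetches
    // and updates its own state without any server-side coordination.
    return this.fetchData().then(odds => { this.state = { odds }; });
  }
  render() {
    return this.state.odds === null ? 'Loading…' : `Odds: ${this.state.odds}`;
  }
}

// Usage with a stubbed endpoint:
const component = new SmartComponent(() => Promise.resolve('10/1'));
console.log(component.render());                 // 'Loading…' - first paint
component.componentDidMount().then(() => {
  console.log(component.render());               // 'Odds: 10/1'
});
```

Because each component carries its own data dependency, it can be dropped into any page of any application without that application knowing about the endpoint.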
An additional benefit arising from client-side rendering is the requirement to power the front-end through well defined APIs. This helps decouple our platform and allows us to develop new products from our data or even open up our service to third-parties to build new applications as part of a wider Sky Betting & Gaming ecosystem.
Why shouldn’t we favour client-side rendering?
Are there any non-functional concerns about removing server rendering?
Choosing to go with a third-party framework could be risky: we don’t have ownership of its development roadmap and there are no guarantees of long-term support. We can make educated decisions to mitigate this risk, and given that the main contributor to React and Redux is Facebook, it seems to be a risk we can afford to take. That said, once committed, if we ever wanted to experiment with a new technology for a single part of the site, that would be very difficult indeed.
We know the performance profile of our servers and can scale appropriately; moving more processing to the client removes any control we have over the execution environment. Initial performance tests show increased device CPU usage (which is to be expected), so customers on older or less powerful devices may see greater performance penalties than with a server-rendered approach, and we may cause excessive battery drain if we aren’t careful.
There’s also an increased reliance on monitoring real-user data to ensure products are working as expected. We have very detailed logging from our servers, which gives clear visibility of their health and makes debugging issues less painful. When a significant amount of processing happens on a customer’s device we do not have the same level of control or visibility, so triage and debugging could be much harder. Equally, the sheer variety of devices and software versions makes it difficult to pinpoint problems precisely.
How can we put this approach to the test?
SEO and rich links from third parties are critical to our marketing and ongoing acquisition strategy; we need to be certain that this approach won’t incur SEO penalties. We decided that the simplest way to test this theory was to build a lightweight website which is entirely client-side rendered and see what gets indexed. This blog post is our organic way of linking to the proof of concept so search engines can find it (and, of course, so that you can view our thinking)!
In addition to organic indexing, we can hook the site up to Google’s Webmaster Tools and use Fetch and Render to test what Google sees - this is the most immediate way to get a feel for how crawlers might see the site.
How did we build it?
It’s always exciting to work on a completely greenfield project, so we used this spike to review several front-end technologies that we’ve had our eye on for a while:
- React¹
- React Router
- ES2015 transpiled via Babel
- CSS modules
- Hot Module Reloading
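The build setup tying these together can be sketched roughly as follows - a hedged example only, with loader names and options reflecting Webpack 1-era conventions rather than our exact configuration:

```javascript
// Webpack bundling ES2015 via babel-loader, CSS modules, and hot reloading.
const webpack = require('webpack');

module.exports = {
  entry: ['webpack-hot-middleware/client', './src/index.js'],
  output: {
    path: __dirname + '/dist',
    filename: 'bundle.js',
    publicPath: '/'
  },
  module: {
    loaders: [
      // Transpile ES2015 (and JSX) down to ES5 with Babel.
      { test: /\.js$/, exclude: /node_modules/, loader: 'babel' },
      // CSS modules: class names are locally scoped per component.
      { test: /\.css$/, loader: 'style!css?modules' }
    ]
  },
  plugins: [new webpack.HotModuleReplacementPlugin()]
};
```

In development this pairs with webpack-dev-middleware (or webpack-dev-server) so that edited modules are swapped into the running page without a full reload.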
If you’re interested to see what we built, it’s available to view at www.skybet-nextgen.com.
So, what have we found so far?
Well, it’s a bit early to say, but the early signs are not good for getting our content into Google. Using the Fetch and Render feature shows a beautifully accurate representation of the loading state of the demo website, with none of the API requests having completed at the time the image was captured. Whether the actual indexing behaviour also misses the XHR-fetched data remains to be seen; we’ll have to wait and see how organic crawling performs.
Keep checking for a follow-up post containing the full results of this test.
1. We are specifically looking at the way a component-based UI could work using a technology like React, but the other frameworks mentioned are equally capable of working in this way. ↩
2. As we are specifically looking at React, the component lifecycle could be used to power smart components which know how to fetch and update their own state from dedicated endpoints - an approach that is inefficient and difficult to implement cleanly when using React’s server-side rendering capability. ↩