Datatables and Rails, server side or not?

When using DataTables, one of the big questions is whether to go with server-side or client-side processing.

With client-side processing, all the data is retrieved in one go and held in memory in the browser.  Paging and filtering are done locally by the browser.
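
To make that concrete, here's a minimal sketch of what the Rails side might look like in client-side mode.  The Thing model and its columns are placeholders, not anything from a real app – the point is just that the whole dataset goes down in a single JSON response:

  # Client-side mode: every row is returned in one go, and DataTables
  # pages/filters in the browser.  Thing is a hypothetical model.
  class ThingsController < ApplicationController
    def index
      respond_to do |format|
        format.html  # render the page shell straight away
        format.json { render json: Thing.all.as_json(only: [:id, :name]) }
      end
    end
  end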

With server-side processing, the data is retrieved in pages.  A subset of the data (typically one page) is returned to the browser; when the user pages down, changes the sort order or changes the filter criteria, a new request is made to the server for a fresh page of data.
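
For comparison, here's a rough sketch of a server-side endpoint, again with a hypothetical Thing model.  The draw, start, length and search[value] parameter names are the ones the DataTables protocol sends:

  # Server-side mode: DataTables asks for one page at a time.  We echo draw
  # back so it can match responses to requests, and return just one page.
  class ThingsController < ApplicationController
    def index
      scope = Thing.order(:name)
      term  = params.dig(:search, :value)
      scope = scope.where("name LIKE ?", "%#{term}%") if term.present?

      page = scope.offset(params[:start].to_i).limit(params[:length].to_i)

      render json: {
        draw:            params[:draw].to_i,   # request/response matching token
        recordsTotal:    Thing.count,          # all rows, before filtering
        recordsFiltered: scope.count,          # rows matching the current filter
        data:            page.as_json(only: [:id, :name])
      }
    end
  end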

We’ve spent a bit of time thinking about which we’d want for our application, and this post goes through the pros and cons as I see them.

Let’s consider this from the client-side processing point of view, as that’s the default.  The advantages of client-side (and therefore the disadvantages of server-side) are:

  • It’s the default, and it’s easier to code.  Server-side processing takes a bit of trickiness to maintain state (if I have a sort applied, then I filter, and I’m on page 3 of the data, where do I expect to end up?  Is my sort maintained?);
  • There is only one server call – when the data is first retrieved.  All subsequent work is done by the browser without calling upon server resources.  This could be useful in a cloud-based app – less server traffic can mean a lower cost-to-serve.  It’s a trade-off, though: if you return thousands of rows of data and the user only looks at the first 3, that was wasted server time;
  • Filtering and sorting are more responsive, and can do more flexible things without a lot of server code.  For example, the default DataTables filter searches across all columns at once, rather than needing a structured “query-builder” type approach – this can be more user-friendly

The disadvantages are:

  • If you have a lot of data, that first server call can be slow, hurting the user experience.  This can be somewhat mitigated by rendering the page and the data separately (the format.html / format.json split shown earlier), which gives the user a better-feeling app (they can see something happening) whilst not actually changing the end performance.  The bottom line is that you can end up with slow queries if you don’t optimise carefully;
  • If you have a lot of data, the browser can get overwhelmed, running out of memory or CPU;
  • If you have a lot of data, the network traffic can get expensive, particularly if you’re not using a compact interchange format like JSON;
  • If other users are updating the data whilst this user is interacting with it, the copy in the browser can go stale.  This matters if you allow update functionality – the user would hit optimistic locking errors (see the sketch just after this list)
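
That last point is worth a concrete illustration.  Here's a minimal sketch of how it surfaces with Rails’ built-in optimistic locking, assuming the (hypothetical) things table has the integer lock_version column that enables it:

  # Two copies of the same row, as if loaded by two different users.
  mine   = Thing.find(1)
  theirs = Thing.find(1)

  mine.update!(name: "fresh")     # first save succeeds; lock_version increments

  begin
    theirs.update!(name: "stale") # second save runs against the old lock_version...
  rescue ActiveRecord::StaleObjectError
    # ...and raises: this is the optimistic locking error the user would see.
  end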

As can be seen, a key differentiator here is “lots of data”.  The question is what “lots of data” means – is the cutoff 100 rows, 1,000 rows or 10,000 rows?  This is a harder question than it seems: it depends on how complex your queries are, how powerful your end-users’ devices are, and what sort of end-user experience your users are OK with.

I ran some tests to understand the timings; refer to this blog post for some detail on the tuning of the queries.  The bottom line for me is that on low-spec hardware in a test environment I’m seeing about 1.4 seconds of full render time for 1,000 rows.  My browser showed no ill effects, and the memory usage looked OK.  So, here’s a small table of results, showing the time for the server to provide the JSON (the database time is discussed below):

Rows     Time to provide the JSON (server side)
------   ---------------------------------------
 1,000    818ms
 2,000   1705ms
 3,000   2541ms
 4,000   3131ms
 5,000   4001ms
 6,000   4777ms
 7,000   5485ms
 8,000   6542ms
 9,000   8224ms
10,000   9641ms

We can also see that the database query time for the last row (10,000 rows) is only 82ms (reflecting that the data is all in cache – in the real world it’d be slower), indicating that the limiting factor is how long it takes to format the data as JSON and send it to the client.  The JSON payload is about 5MB, so transfer time is probably a limiting factor too.
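
One way to confirm where the time goes is to time the query and the serialisation separately with Ruby’s Benchmark module – a rough sketch, with the same placeholder Thing model:

  require 'benchmark'

  rows = nil
  query_time  = Benchmark.realtime { rows = Thing.limit(10_000).to_a }  # to_a forces the query to run
  json = nil
  render_time = Benchmark.realtime { json = rows.to_json }              # serialisation only

  puts "query:     #{(query_time * 1000).round}ms"
  puts "serialise: #{(render_time * 1000).round}ms for #{json.bytesize / 1024}KB of JSON"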

Following this post, we can compress the response by changing our config.ru file to include:

  # config.ru
  use Rack::Deflater

This reduces the payload from about 5MB to about 140KB.  The overall render time appears to have stayed the same, however.
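
If you want to check the compression is actually being applied, you can request a gzipped response by hand and look at the Content-Encoding header – a quick sketch, with a hypothetical endpoint URL:

  require 'net/http'

  uri = URI('http://localhost:3000/things.json')   # hypothetical endpoint
  req = Net::HTTP::Get.new(uri)
  req['Accept-Encoding'] = 'gzip'                  # set explicitly so Net::HTTP returns the raw body

  res = Net::HTTP.start(uri.host, uri.port) { |http| http.request(req) }
  puts res['Content-Encoding']                     # => "gzip" once Rack::Deflater is in place
  puts "#{res.body.bytesize / 1024}KB on the wire"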

I think the short answer of this post is that client-side processing is easier, and sufficiently fast up to about 1,000 records.  If you’re going much beyond that, you’ll start to find the response times are beyond what’s reasonable for a normal web application.
