Working with large Backbone collections

We're designing a backbone application, in which each server-side collection has the potential to contain tens of thousands of records. As an analogy - think of going into the 'Sent Items' view of an email application.

In the majority of Backbone examples I've seen, the collections involved are at most 100-200 records, and therefore fetching the whole collection and working with it in the client is relatively easy. I don't believe this would be the case with a much larger set.

Has anyone done any work with Backbone on large server-side collections?

  • Have you encountered performance issues (especially on mobile devices) at a particular collection size?
  • What decision(s) did you take around how much to fetch from the server?
  • Do you download everything or just a subset?
  • Where do you put the logic around any custom mechanism (Collection prototype for example?)


  1. Yes, at about 10,000 items, older browsers could not handle the display well. We thought it was a bandwidth issue, but even locally, with as much bandwidth as a high-performance machine could throw at it, Javascript just kinda passed out. This was true on Firefox 2 and IE7; I haven't tested it on larger systems since.

  2. We were trying to fetch everything. This didn't work for large datasets. It was especially pernicious with Android's browser.

  3. Our data was in a tree structure, with other data depending on the presence of data in the tree. The data could change due to actions from other users or other parts of the program. Eventually, we made the tree structure fetch only the currently visible nodes, and the other parts of the system independently verified the validity of the datasets they depended on. This is a race condition, but in actual deployment we never saw any problems. I would have liked to use socket.io here, but management didn't understand or trust it. (A rough sketch of the visible-only fetching idea appears after this list.)

  4. Since I use CoffeeScript, I just inherited from Backbone.Collection and created my own superclass, which also defined a custom sync(). The syntax for invoking a superclass's method is really useful here:

    class Dataset extends BaseAccessClass
        initialize: (attributes, options) ->
            Dataset.__super__.initialize.apply(@, arguments)
            # Customizations go here.
    

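A rough sketch of points 3 and 4 combined: a shared base collection whose fetch is always scoped to the nodes the user can actually see. The class names, the /api/nodes endpoint and the parent query parameter are made up for illustration and are not from the original project:

    class BaseAccessCollection extends Backbone.Collection
        url: '/api/nodes'                 # assumed endpoint, for illustration only

        # Only fetch the children of the node the user has expanded.
        fetchVisible: (parentId, options = {}) ->
            options.data ?= {}
            options.data.parent = parentId
            options.remove = false        # merge into the collection instead of resetting it
            @fetch(options)

    class Dataset extends BaseAccessCollection
        initialize: (attributes, options) ->
            Dataset.__super__.initialize.apply(@, arguments)
            # Customizations go here.

Calling fetchVisible(expandedNodeId) whenever a node is expanded keeps the in-memory collection limited to roughly what is on screen.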

Like Elf said, you should really paginate loading data from the server. You'd save a lot of load on the server and avoid downloading items you may not need. Just creating a collection with 10k models locally in Chrome takes half a second. It's a huge load.
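
A rough sketch of what paginated fetching can look like, in CoffeeScript to match the snippet above. The endpoint and the page / per_page parameter names are assumptions, not anything Backbone dictates; use whatever your API expects:

    class SentItems extends Backbone.Collection
        url: '/api/sent-items'            # assumed endpoint, for illustration only

        initialize: ->
            @page = 0
            @perPage = 50

        # Fetch the next page and append it to what is already in memory.
        fetchNextPage: ->
            @page += 1
            @fetch(data: { page: @page, per_page: @perPage }, remove: false)

The remove: false option makes fetch merge the new page into the collection instead of resetting it, so an infinite-scroll view can keep appending pages as the user moves through them.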

You can move the work onto another physical CPU thread by using a Web Worker, then send transient plain objects back to the main thread to be rendered into the DOM.
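
A minimal sketch of that split, with made-up file names and an archived flag standing in for whatever heavy filtering you actually do. The worker does the parsing and filtering, and only plain objects cross back to the main thread:

    # worker.coffee (loaded as worker.js): runs off the main thread
    self.onmessage = (e) ->
        rows = JSON.parse(e.data)         # heavy parse happens off the UI thread
        visible = (r for r in rows when not r.archived)
        self.postMessage(visible)         # plain objects are structured-cloned across

    # main thread: assumes collection and rawJsonString already exist
    worker = new Worker('worker.js')
    worker.onmessage = (e) ->
        collection.reset(e.data)          # hand the transient objects to Backbone
    worker.postMessage(rawJsonString)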

Once you have a collection that big rendering in the DOM, lazy rendering will only get you so far. Memory will slowly increase until it crashes the browser (and that will happen quickly on tablets). You should use object pooling on the elements; it lets you set a small maximum memory footprint and keep it there.
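
A bare-bones element pool, just to illustrate the idea; the pool size and element type are arbitrary:

    # Recycle a fixed number of row elements as the user scrolls.
    RowPool =
        maxSize: 100
        free: []
        # Reuse a pooled element if one is available, otherwise create a new one.
        acquire: ->
            @free.pop() or document.createElement('li')
        # Wipe the element and keep it for the next row, up to maxSize.
        release: (el) ->
            el.textContent = ''
            @free.push(el) if @free.length < @maxSize

When a row scrolls out of view, release its element back into the pool; when a new row scrolls in, acquire one instead of creating a fresh node. The number of live DOM nodes, and with it the memory footprint, stays roughly constant no matter how many models are in the collection.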

I'm building a PerfView for Backbone that can render 1,000,000 models and scroll at 120 FPS in Chrome. The code is all up on GitHub: https://github.com/puppybits/BackboneJS-PerfView. It's commented, so it covers a lot of the other optimizations you'd need to display large data sets.
