
Ajax loads are big, slow #15

Open
kevinburke opened this issue Mar 12, 2012 · 6 comments

@kevinburke

Hi,
I love the design of dochub, but it's currently too slow to be my day-to-day reference. I appreciate the search bar appearing early, but it's still tough to use. This could be mitigated either with deep links or by pulling down less JavaScript on every search.

Would be happy to help work on this; I appreciate your thoughts.

Kevin

@vecter
Collaborator

vecter commented Mar 19, 2012

Hi Kevin,

Thanks for the notice. Raf and I are very busy with our main jobs right now, but I'd be interested to hear your thoughts on how to make DocHub faster, especially if this is something you'd like to implement.

As it stands now, DocHub pulls all of its documentation for a particular language/framework when that tab is clicked. The search is then entirely client-side, and there are no other JSON requests (if that's what you mean by pulling down less JavaScript on every search). What do you mean by deep links?

Kevin

@kevinburke
Author

Maybe you've thought of this, but I was thinking about one or more of the following optimizations:

  • when you request #css or #html, you return an array containing only the titles, so you can populate the sidebar and the user can keep navigating.
  • if the distribution of resources is skewed, start loading the full text of the most popular resources, after the titles are populated
  • if the user requests a specific page, download just that resource (I think this is what I meant by deep links above).

It would require splitting the .json files into more pieces, but that could probably be achieved in the scraper.
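
Roughly, I'm imagining a post-processing step like this in the scraper (the `id` and `title` fields and file names are just assumptions about the scrape format, not the actual layout):

```js
// Hypothetical post-processing step for the scraper: split one section's
// full scrape (css.json) into a small title index plus one file per entry.
// Field names (id, title) are assumptions about the scrape format.
const fs = require('fs');

const entries = JSON.parse(fs.readFileSync('css.json', 'utf8'));

// Titles-only index: enough to populate the sidebar and drive search.
const titles = entries.map(e => ({ id: e.id, title: e.title }));
fs.writeFileSync('css-titles.json', JSON.stringify(titles));

// One file per entry, so a deep link can fetch just the page it needs.
fs.mkdirSync('css', { recursive: true });
for (const e of entries) {
  fs.writeFileSync('css/' + e.id + '.json', JSON.stringify(e));
}
```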

I also noticed you're loading a lot of small JavaScript files on the frontend, which means they can only be downloaded six at a time (the same goes for the images from mozilla.org). This is something you'd have to configure, but if you could split resource loading across multiple subdomains, it would help the page load faster.
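
Something like this hypothetical helper could do the sharding (hostnames are made up, and it assumes the static assets are mirrored on each shard):

```js
// Hypothetical helper: deterministically spread static asset URLs across
// a few subdomains so the browser's per-host connection limit (6 at the
// time) applies to each shard separately. Hostnames are made up.
const SHARDS = ['static1.dochub.io', 'static2.dochub.io', 'static3.dochub.io'];

function shardUrl(path) {
  // Simple string hash so the same path always maps to the same shard,
  // which keeps browser caches effective across page loads.
  let hash = 0;
  for (let i = 0; i < path.length; i++) {
    hash = (hash * 31 + path.charCodeAt(i)) | 0;
  }
  return 'http://' + SHARDS[Math.abs(hash) % SHARDS.length] + path;
}

// shardUrl('/js/app.js') -> e.g. "http://static2.dochub.io/js/app.js"
```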

@jeffbyrnes

Kevin,

You might also consider running dochub locally, rather than using http://dochub.io.

@b1rdex

b1rdex commented Jul 14, 2012

Running locally isn't the solution: the JSON is cached after the first visit to a section and loads very fast.
The problem is the ≈1 min of JSON parsing and DOM building when you open the Python tab.
I'm running Chrome 21 on a Phenom 945. Is that fast? I think not.

And the cause is over 4000 child nodes in ul#toc-results just to show 20 or 30 of them.
Inserting all of those nodes is simply the wrong approach.

I think it should work like this:

  1. load the JSON for the chosen section
  2. show its first 100 entries

Then there are two cases:

  • If the user scrolls to around the 80th element, just append another 100 to the list.
  • If the user starts typing in the search box:
    1. search for the term and cache the search results
    2. clear the child nodes of the #toc-results list
    3. put the first 100 results into it.

The same scrolling logic applies in the 'searching' case too (but we're not going to re-run the search on every scroll; we can cache the search results in memory).

100 and 80 are just examples; I think even 500 nodes would render really fast (needs testing).
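
A rough sketch of that flow (assuming jQuery, with `allEntries` standing in for the parsed section JSON; all names are illustrative, not DocHub's actual code):

```js
// Rough sketch: render the section list incrementally instead of
// inserting all ~4000 nodes at once. Assumes jQuery, that #toc-results
// is the scrollable sidebar list, and that allEntries is the parsed
// section JSON (loaded elsewhere).
const PAGE_SIZE = 100;
let visible = [];      // entries currently eligible for display
let rendered = 0;      // how many of them are in the DOM
let cachedQuery = null;

function renderMore() {
  const $list = $('#toc-results');
  const next = visible.slice(rendered, rendered + PAGE_SIZE);
  next.forEach(e => $list.append($('<li>').text(e.title)));
  rendered += next.length;
}

function showSection(entries) {
  visible = entries;
  rendered = 0;
  $('#toc-results').empty();
  renderMore(); // only the first 100 nodes hit the DOM
}

function search(query) {
  if (query === cachedQuery) return; // cached: don't re-search on scroll
  cachedQuery = query;
  showSection(allEntries.filter(e => e.title.indexOf(query) !== -1));
}

// Append the next page when the user nears the bottom of the list.
$('#toc-results').on('scroll', function () {
  if (this.scrollTop + this.clientHeight >= this.scrollHeight - 200) {
    renderMore();
  }
});
```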

The other problem, with the same root cause, is huge memory consumption.
If you open dochub.io, click through all the tabs in turn, and then look at Chrome's task manager (Shift+Esc), you'll see the problem.

[screenshot: Chrome task manager showing dochub's memory usage]

As you can see, dochub eats about as much memory as all my other tabs and plugins together.
But all those JSON objects held purely in memory shouldn't be that big, and the solution is the same.

@jeffbyrnes

Ah, ok, I gotcha. I've not opened the Python tab, though the jQuery & PHP ones take a good bit of time to load up. Your solution definitely sounds like a valid way forward; a sort of "lazy loading" as it's sometimes called.

@b1rdex

b1rdex commented Jul 16, 2012

Yes, it is.
And some final thoughts:

  • if the distribution of resources is skewed, start loading the full text of the most popular resources, after the titles are populated
  • if the user requests a specific page, download just that resource (I think this is what I meant by deep links above)

This is totally the wrong way: there would be too many Ajax requests and too much traffic. The doc base isn't modified in real time, so it can be effectively cached as a single JSON file for each docs section (PHP, jQuery, HTML, and so on).
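
Roughly, I mean something like this (assuming an Express-style server; routes, section names, and file paths are illustrative, not DocHub's actual layout):

```js
// Sketch only: serve each section's docs as one JSON file with long-lived
// cache headers, so repeat visits hit the browser cache instead of making
// many small Ajax requests.
const express = require('express');
const path = require('path');

const app = express();
const SECTIONS = ['php', 'jquery', 'html', 'css', 'python'];

app.get('/data/:section.json', (req, res) => {
  const section = req.params.section;
  if (SECTIONS.indexOf(section) === -1) return res.sendStatus(404);
  // The doc base changes rarely, so let browsers and proxies keep it a day.
  res.set('Cache-Control', 'public, max-age=86400');
  res.sendFile(path.join(__dirname, 'data', section + '.json'));
});

app.listen(3000);
```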

I could code up my ideas, but I have no time for now.
