Use snoopcompile to more aggressively precompile Gadfly #1280
base: master
Conversation
This gives a marginal improvement on speed and memory use during first plot, but the biggest benefit is that it makes the code easier to read and the profiling significantly easier to grok.
Comparing nothing by value (==, !=) rather than by identity (===, !==) or by type (isnothing) can have negative consequences for code speed. This commit changes all nothing comparisons to use the isnothing utility added in Julia 1.1 and Compat 2.1.
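For illustration, a minimal sketch of the pattern this commit changes throughout the codebase; the helper names here are hypothetical, not from the Gadfly source:

```julia
# Illustrative only; the helper names are hypothetical.
# Before: comparing against `nothing` by value goes through generic `==` methods.
fallback_old(x, default) = x == nothing ? default : x

# After: `isnothing` (Julia ≥ 1.1, or via Compat 2.1) is just `x === nothing`,
# an identity check the compiler can resolve cheaply.
fallback_new(x, default) = isnothing(x) ? default : x

fallback_new(nothing, 0)  # 0
fallback_new(42, 0)       # 42
```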
@timholy do you know if the segfault seen here in CI on Julia 1.0.3 is a known problem with the SnoopCompile/Julia 1.0 combination? I must admit that the precompile statements generated are a bit of a black box to me, and I didn't see any mention of this problem in the SnoopCompile repo.
After further evaluation, the wins here aren't nearly as clear-cut as I had hoped. The gains to drawing time are almost entirely offset. So if you're not using DataFrames, your time to first plot improves by a relatively significant margin with this PR. If you are, it actually gets worse. :/
The segfault is definitely not expected, but that would seem to be a Julia bug. I'd consider trying to binary-search-comment out blocks until you identify the line that's causing it.

A couple of usage tips: when I first wrote SnoopCompile, I didn't fully understand that the relevant time to measure was inference time rather than the core compiler's time (only type-inferred code gets cached, not native code). So the strategy I use now is to measure inference time (requires Julia 1.2) with some cutoff. Also, some kinds of function signatures aren't worth emitting precompile statements for.

The gains won't be very large until something like JuliaLang/julia#31466 gets merged. There are just too many methods that fail to get cached with the way Julia currently does this.
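As a rough sketch of that workflow (using SnoopCompile's @snoopi macro; the workload script, cutoff, and output path below are placeholders, not taken from this PR):

```julia
# Sketch only: measure inference time while running a representative workload,
# keeping methods that take at least 10 ms to infer (placeholder cutoff).
using SnoopCompile

inf_timing = @snoopi tmin=0.01 include("examples/snoop_workload.jl")

# Group the results per package and write out precompile files.
pc = SnoopCompile.parcel(inf_timing)
SnoopCompile.write("/tmp/precompile", pc)
```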
worth noting that we used to precompile but stripped it out as it didn't make that much difference. definitely worth revisiting though, especially in light of tim's new strategy to focus on inference time. should also reference #921
is this still set to be done?
I assume it still needs someone to tackle it. A few tips I've learned in the interim:
For reference, CategoricalArrays 0.9 should trigger far fewer invalidations, see JuliaData/CategoricalArrays.jl#310.
the invalidations are now down to 375 (from ~1850) on Gadfly and Compose master using julia 1.6-rc1. this is with no Gadfly-specific effort to actually reduce them. all of the "backedges" (as indicated with "superseding") are some other package invalidating Base. i don't think there's anything we can change in Gadfly to fix those (am i wrong about that?). not sure how to look into "mt_backedges" (as indicated with "signature").

fresh session:
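For context, a hedged sketch of the usual way such numbers are collected in a fresh session with SnoopCompileCore/SnoopCompile (circa Julia 1.6); this is not the exact snippet from the thread:

```julia
# Must run in a fresh session, before Gadfly has been loaded.
using SnoopCompileCore
invalidations = @snoopr using Gadfly    # record invalidations triggered by loading

using SnoopCompile                      # load the analysis code only after recording
println(length(uinvalidated(invalidations)))  # unique invalidated method instances

trees = invalidation_trees(invalidations)
show(trees[end])                        # most-consequential invalidator is listed last
```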
exactly
https://timholy.github.io/SnoopCompile.jl/stable/snoopr/#mt_backedges-invalidations
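Per the linked docs, a sketch of how the "mt_backedges" entries can be inspected, assuming `invalidations` was recorded as in the sketch above and that each entry is a signature => node pair as documented:

```julia
using SnoopCompile
trees = invalidation_trees(invalidations)
methinv = trees[end]               # one MethodInvalidations summary

# Each mt_backedges entry pairs a signature whose lookup was invalidated with
# the root of the call tree that depended on it.
for (sig, root) in methinv.mt_backedges
    println(sig)
end
```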
Contributor checklist:
- Added an entry to NEWS.md
- squash'ed or fixup'ed junk commits with git-rebase

This PR builds on #1278 primarily so that I can benchmark more accurately; please only look at e9733d6 as part of this PR. The precompile.jl script was generated mostly using https://github.com/timholy/SnoopCompile.jl with some manual intervention on my part when it had trouble because of Requires.jl usage. I think the results below speak for themselves. We may be able to get a similar speedup by applying a similar procedure to Compose.

There are two clusters because we can't apply SnoopCompile to tests using DataFrames very well, since it's behind Requires.jl.
That's over a 15% reduction overall, and obviously it's much more significant for the tests where it's most effective.
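For readers unfamiliar with the mechanism, a hedged sketch of how a SnoopCompile-generated precompile.jl is typically wired into a package; the single precompile statement is a made-up placeholder, not one from this PR:

```julia
# --- src/precompile.jl (the general shape of a generated file) ---
function _precompile_()
    # Only do the work when a *.ji precompile file is actually being generated.
    ccall(:jl_generating_output, Cint, ()) == 1 || return nothing
    # One statement per method signature worth caching; placeholder example:
    precompile(Tuple{typeof(Base.show), IOBuffer, String})
    return nothing
end

# --- near the end of src/Gadfly.jl ---
# include("precompile.jl")
# _precompile_()
```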