EdgePopulation.get_attribute with significant number of non-consecutive edge ids #225
Thank you for quantifying this - I've always assumed it's a problem, but never got around to looking at it. Dispatching many small reads vs. coalescing them makes a huge difference; I will have to give that a try sometime.
I couldn't help but try:
I think this could need the same treatment as was done for the report API: close consecutive ids/elements could be merged into chunks so that reads are aggregated and the file system isn't hit as often. Noting this as it may be the underlying issue we see in SPIND-235.
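A minimal sketch of that idea, in Python for illustration only (the merge_into_chunks helper and the max_gap parameter are hypothetical, not part of the libsonata API): nearby ids are collapsed into contiguous ranges so the file is hit with a few large reads instead of many tiny ones.

```python
import numpy as np

def merge_into_chunks(ids, max_gap=1_000_000):
    """Collapse sorted, unique ids into [start, stop) ranges, merging
    neighbours whose gap is at most max_gap (illustrative sketch only)."""
    ids = np.unique(np.asarray(ids))
    chunks = []
    start = prev = int(ids[0])
    for i in ids[1:]:
        i = int(i)
        if i - prev > max_gap:        # gap too large: close the current chunk
            chunks.append((start, prev + 1))
            start = i
        prev = i
    chunks.append((start, prev + 1))
    return chunks

# Each chunk becomes one contiguous read; unwanted rows inside a chunk are
# dropped afterwards, trading some over-read for far fewer I/O requests:
#   for start, stop in merge_into_chunks(edge_ids):
#       block = dataset[start:stop]
#       keep = np.isin(np.arange(start, stop), edge_ids)
#       values.append(block[keep])
```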
I did a quick and dirty version here: https://github.com/BlueBrain/libsonata/tree/try-chunked-pop-read. Can you see if that helps with your use case, @matz-e? It doesn't work exactly how I'd want it to yet, but it seems faster-ish.
Thanks, @mgeplf! I'll try to test that soon™
It's pretty hacky, but at least it gives an idea of how things could be better. If one sets GAP, one can change the maximum chunk size. A quick scan of values gives 1e6 as a sweet spot:
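A rough sketch of how such a scan might be driven from Python (purely an assumption for illustration: that the branch picks GAP up from the environment, and that edges.h5 / default / conductance are the file, population and attribute at hand; the real mechanism and names may differ):

```python
import os
import time

import numpy as np
import libsonata

# Hypothetical benchmark: scattered ids, then time get_attribute per GAP value.
edge_ids = np.sort(np.random.choice(10_000_000, size=200_000, replace=False))
selection = libsonata.Selection(edge_ids.astype(np.uint64))

for gap in (1e3, 1e4, 1e5, 1e6, 1e7):
    os.environ["GAP"] = str(int(gap))   # assumption: the branch reads GAP from the env
    pop = libsonata.EdgeStorage("edges.h5").open_population("default")
    t0 = time.perf_counter()
    pop.get_attribute("conductance", selection)
    print(f"GAP={gap:.0e}: {time.perf_counter() - t0:.2f} s")
```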
I stumbled on a case in which I have a big number of scattered (i.e., generally non-consecutive) edge ids. If I wrap the selection in libsonata.Selection, the call to EdgePopulation.get_attribute(...) takes a significant amount of time. For example, passing the scattered ids directly as a Selection performs significantly worse than reading the whole attribute via select_all and indexing the result afterwards.
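Roughly, the two access patterns being compared look like this (a sketch only; the file, population and attribute names are placeholders, not the original test case):

```python
import numpy as np
import libsonata

pop = libsonata.EdgeStorage("edges.h5").open_population("default")
edge_ids = np.sort(np.random.choice(pop.size, size=500_000, replace=False))

# Slow for many scattered ids: each small run of ids turns into its own read.
values = pop.get_attribute("conductance", libsonata.Selection(edge_ids.astype(np.uint64)))

# Much faster, but reads (and holds in memory) the whole attribute column.
all_values = np.asarray(pop.get_attribute("conductance", pop.select_all()))
values = all_values[edge_ids]
```

The second form is the select_all workaround mentioned below; it trades I/O time for memory proportional to the full edge count.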
There's a test case to demonstrate this effect in:

It can be run with the run.sh script. Example of the run output:

For smaller circuits, I guess doing select_all is not an issue, but for bigger ones, there might be concerns such as memory usage.