[GraphBolt] modify logic for HeteroItemSet indexing #7428
base: master
Conversation
To trigger regression tests:
Do you have a benchmark comparing the new approach to the old one for different K values?
data[key] = self._itemsets[key][
    index[mask] - self._offsets[key_id]
]
if len(index) < self._threshold:
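For context, the existing O(N*K) path looks roughly like this standalone numpy sketch (names and data here are hypothetical; the real HeteroItemSet stores torch tensors):

```python
import numpy as np

# Hypothetical per-type data mirroring the snippet above.
itemsets = {"A": np.arange(10), "B": np.arange(10, 30)}
offsets = {"A": 0, "B": 10}

def getitem(index):
    """O(N*K) indexing: one full boolean mask over `index` per key."""
    data = {}
    for key, offset in offsets.items():
        size = len(itemsets[key])
        # This per-key masking over the whole index is the O(N*K) cost.
        mask = (index >= offset) & (index < offset + size)
        if mask.any():
            data[key] = itemsets[key][index[mask] - offset]
    return data

print(getitem(np.array([1, 12, 3])))  # keys 'A' -> [1, 3], 'B' -> [12]
```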
I think we need a benchmark before setting such a threshold based only on runtime complexity.
You're right. I'll do it right away.
Let's use K values 1, 2, 4, 8, 16, 32, etc.
This is a little bit complex. I think the optimal complexity should be O(N*logK), with 2 additional helper arrays: offsets = [0, 10, 30, 60], etypes = ["A", "B", "C", "D"]
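The O(N*logK) idea can be sketched with a binary search against the cumulative offsets (a hypothetical numpy illustration using the example arrays from this comment; the real code would operate on torch tensors):

```python
import numpy as np

# Helper arrays from the comment above (example values).
offsets = np.array([0, 10, 30, 60])
etypes = ["A", "B", "C", "D"]

def type_of(index):
    """Map each global index to its etype via binary search: O(N*logK)."""
    # side="right" yields buckets in 1..K, so subtract 1 for 0-based ids.
    bucket = np.searchsorted(offsets, index, side="right") - 1
    return [etypes[b] for b in bucket]

print(type_of(np.array([0, 9, 10, 29, 30, 59])))
# ['A', 'A', 'B', 'B', 'C', 'C']
```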
Never mind, I see the point. To do it the way I suggested, we would need to implement our own C++ kernel with parallel optimization.
Are any of the ops used in the new implementation single-threaded?
It seems not. Maybe the sorting op itself is a bit slow even though it's multi-threaded?
How did you verify?
I'm not sure, I'm just guessing. Did you find out anything from the benchmarking code?
@mfbalin @frozenbugs See benchmark results in the description. The new implementation does not seem to be as efficient as we thought. Maybe we should keep it as is?
Let me take a look at the code to see if we missed anything. Thank you for the benchmark.
if len(index) < self._threshold:
    # Say N = len(index), and K = num_types.
    # If logN < K, we use the algo with time complexity O(N*logN).
    sorted_index, indices = index.sort()
Can we try numpy.argsort here to get indices and index[indices] to get sorted_index? It looks like numpy might have a more efficient sorting implementation. When benchmarking, we should ensure that we have a recent version of numpy installed. It looks like numpy uses this efficient sorting implementation by intel: https://github.com/intel/x86-simd-sort
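The suggested argsort pattern, as a minimal numpy sketch (the real code works on torch tensors, so a conversion or torch-to-numpy view would be needed):

```python
import numpy as np

index = np.array([42, 7, 19, 3], dtype=np.int64)

# argsort gives the permutation; fancy indexing recovers the sorted values.
indices = np.argsort(index)
sorted_index = index[indices]

print(indices.tolist())       # [3, 1, 2, 0]
print(sorted_index.tolist())  # [3, 7, 19, 42]
```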
This is assuming that the sort is the bottleneck for this code.
Thanks for your info!
When benchmarking, we should ensure that we have a recent version of numpy installed.
How recent does the version need to be? Because it seems that we just disabled numpy>=2.0.0 in #7479.
numpy/numpy#22315
They added faster sort in this PR. Looks like the version number is 1.25 or later.
https://github.com/search?q=repo%3Anumpy%2Fnumpy%20%2322315&type=code
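A quick way to confirm which numpy is active before benchmarking (a simple sketch; the 1.25 cutoff is the version mentioned above, not something this check can prove):

```python
import numpy as np

# The x86-simd-sort based sorting reportedly landed around numpy 1.25
# (see the PR linked above), so check the running version first.
major, minor = (int(part) for part in np.__version__.split(".")[:2])
has_fast_sort = (major, minor) >= (1, 25)
print(np.__version__, has_fast_sort)
```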
Let's use the latest 1.x version and see how the performance is.
Though the code changes were committed in numpy/numpy#22315, where the version is 1.25, this improvement was not officially announced until the NumPy 2.0.0 release notes. Therefore, it is likely that they did not integrate the changes until version 2.0.0.
I plan to propose that we offer full support for numpy>=2.0.0 at the Monday meeting, and perform the benchmark after we do so.
numpy>=2 is compatible with DGL. I think we can perform this benchmark. I just ran the graphbolt tests with numpy>=2 installed and they all passed.
See https://docs.google.com/document/d/1Bbmp8gMekiGIYYxEMVbmXSANRZlZ_nTNbhpWul4RaKA/edit?usp=sharing . The results seem to have changed little.
Thank you for the updated numbers. I will profile the benchmark code and see if there is a potential improvement we can make.
continue
current_indices, _ = indices[
    index_offsets[key_id] : index_offsets[key_id + 1]
].sort()
We could use np.sort here as well.
Description
First, let's take a look at the current code for indexing a HeteroItemSet (this occurs in HeteroItemSet.__getitem__). Say the length of indices is N and the number of etypes/ntypes is K. Then the time complexity of the current indexing implementation is O(N * K), which is mainly introduced by the per-type masking line. If there are a lot of etypes, that line can easily become the bottleneck.
This draft PR proposes an alternative to the current logic, whose time complexity is O(N * logN), where the log factor is introduced by the sorting operation. This will improve performance when there are many etypes, but may cost more time when there are few. The key consideration is striking a balance between the two approaches.
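The sort-based alternative can be sketched as follows (a standalone numpy illustration with hypothetical data; the actual PR operates on torch tensors and also restores per-type input order, which this sketch omits):

```python
import numpy as np

# Hypothetical data: two types, cumulative offsets [0, 10, 30).
keys = ["A", "B"]
offsets = np.array([0, 10, 30])
itemsets = {"A": np.arange(10), "B": np.arange(10, 30)}

def getitem_sorted(index):
    """O(N*logN) indexing: sort once, then slice contiguous per-type runs."""
    order = np.argsort(index)      # the O(N*logN) step
    sorted_index = index[order]
    # Each type occupies one contiguous run of sorted_index; find the
    # run boundaries with a K-element binary search.
    bounds = np.searchsorted(sorted_index, offsets)
    data = {}
    for key_id, key in enumerate(keys):
        lo, hi = bounds[key_id], bounds[key_id + 1]
        if lo < hi:
            data[key] = itemsets[key][sorted_index[lo:hi] - offsets[key_id]]
    return data

print(getitem_sorted(np.array([12, 1, 3])))  # keys 'A' -> [1, 3], 'B' -> [12]
```

Note that the per-type masking of the original algorithm is replaced by one global sort plus cheap slicing, which is why the trade-off depends on N and K.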
Update on June 18
Benchmark: https://docs.google.com/document/d/1Bbmp8gMekiGIYYxEMVbmXSANRZlZ_nTNbhpWul4RaKA/edit?usp=sharing
The results show that the original algorithm is faster than the new algorithm (theoretical time complexity N*logN) for almost all values of batch_size and num_types.
Checklist
Please feel free to remove inapplicable items for your PR.
Changes