`mapreducedim!` is not implemented for `AnyROCArray` types #234
Hello,

`GPUArrays.mapreducedim!` should be implemented for `AnyROCArray` instead of the more specific type `ROCArray`. Is there a way for `AnyROCArray` to be adapted to `ROCArray`, so that this function is implemented for `AnyROCArray`?

Thanks
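A minimal reproducer of the request, assuming a working ROCm device; the array values are arbitrary, and the exact failure mode (an error vs. a slow scalar fallback) depends on the AMDGPU.jl version:

```julia
using AMDGPU

A = ROCArray(rand(Float32, 4, 4))
sum(A)                # works: dispatches to the ROCArray-only method
sum(view(A, 1:2, :))  # fails or falls back to scalar iteration, because
                      # the SubArray wrapper is not itself a ROCArray
```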
As a stop-gap we can just convert to a dense …
Why not implement …
@maleadt should changing the function prototype to accept …
I think so, but I'm not very familiar with AMDGPU.jl or its mapreduce implementation. Try it out and see what breaks?
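A minimal sketch of what that prototype change could look like, assuming the existing `ROCArray`-only method stays in place. This is not AMDGPU.jl's real implementation: as a stop-gap it just materializes wrapped inputs into dense `ROCArray`s and forwards to the existing method, along the lines suggested above:

```julia
using AMDGPU, GPUArrays

# Hypothetical: widen the input argument from ROCArray to the AnyROCArray
# union so views/reshapes dispatch here. Plain ROCArrays still hit the
# more specific existing method, which avoids infinite recursion.
function GPUArrays.mapreducedim!(f, op, R::ROCArray, A::AnyROCArray;
                                 init=nothing)
    # Stop-gap: `copy` materializes the wrapper into a dense ROCArray
    # (via `similar`/`copyto!`), assuming that copy path is GPU-friendly.
    GPUArrays.mapreducedim!(f, op, R, copy(A); init=init)
end
```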
@maleadt @jpsamaroo I've tried it before, but the problem is that the AMDGPU implementation queries the ROCm device associated with the `ROCArray` using its `buffer` field here: Line 133 in 99967b7.
I have to look into it, but I don't think it will be as simple as changing the function prototype.
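To illustrate why that lookup breaks for wrapped arrays, here is a hypothetical stand-in for the device query at the linked line; the `buffer`/`device` field names are assumptions based on the description above, not verbatim AMDGPU.jl code:

```julia
using AMDGPU

device_of(x::ROCArray) = x.buffer.device   # assumed field layout

A = ROCArray(rand(Float32, 4, 4))
V = view(A, 1:2, :)     # a SubArray backed by a ROCArray

device_of(A)            # fine: a plain ROCArray carries its buffer
# device_of(V)          # MethodError: SubArray has no such method/field

# A wrapper-aware lookup has to unwrap first:
device_of(x::AbstractArray) = device_of(parent(x))
device_of(V)            # now recurses through the SubArray to the parent
```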
Then JuliaGPU/Adapt.jl#52 would probably be useful.
@maleadt it looks good, but do you have an ETA on when it will merge?
No ETA on that change. You should probably use an …
It's currently up to the user to make sure to use the correct array on its respective device. I don't think I'll make it automatic, since it's not clear what to do if multiple devices are involved. It's also possible to create arrays whose memory is accessible from other devices.
@maleadt @jpsamaroo are `SubArray`s supposed to be a type of `AnyROCArray` or `AnyCuArray`? One thing I noticed is that this function doesn't get dispatched with a type of `SubArray` backed by a `ROCArray` parent.
Ah yeah, we disabled those subtypes in Adapt.jl (which the `Any*` types are based on): https://github.com/JuliaGPU/Adapt.jl/blob/d9f852a61ee42258e543f5ff3b28e1f3abbb48bc/src/wrappers.jl#L86-L90. That's because Adapt.jl used to be used only by CUDA.jl, where `CuArray` itself is used to encode views and reshapes (i.e., without using the array wrappers). Either we make it 'mandatory' for users of the `Any*` union types to handle `SubArray`/`ReinterpretArray`/`ReshapedArray` themselves, or we should generalize Adapt.jl.
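For context, the `Any*` unions are built on Adapt.jl's `WrappedArray`; a simplified version, renamed here to avoid clashing with AMDGPU.jl's own definition and modeled on CUDA.jl's `AnyCuArray`, looks roughly like:

```julia
using AMDGPU, Adapt

# Union of the plain array type and every Adapt.jl wrapper around it.
# Per the linked lines, SubArray/ReshapedArray/ReinterpretArray were
# disabled in WrappedArray at the time, which is why views did not match.
const MyAnyROCArray{T,N} =
    Union{ROCArray{T,N}, Adapt.WrappedArray{T,N,ROCArray,ROCArray{T,N}}}
```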
@maleadt do `CuSubArray`s currently count as `AnyCuArray`s? Can I do the same thing CUDA.jl is doing to make AMDGPU support this type hierarchy? Ideally I want it in Adapt.jl because it seems useful, but for now I need the workaround for our project.
There is no `CuSubArray`; `CuArray` itself is used to represent contiguous views (it encodes an offset argument for that), reshapes, and reinterpretations. That would probably be useful for AMDGPU.jl too (because it simplifies dispatching to vendor libraries), but it would be up to @jpsamaroo to decide.
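This is observable in CUDA.jl: a contiguous view is re-encoded as a plain `CuArray` (the type stores an offset into its buffer), while a strided view falls back to Base's `SubArray`. The exact behavior depends on the CUDA.jl version:

```julia
using CUDA

A = CUDA.rand(Float32, 4, 4)
view(A, :, 2:3) isa CuArray    # true: whole columns are contiguous
view(A, 1:2, :) isa SubArray   # true: row slices are strided
```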
Yeah, that sounds like the reasonable approach to me.
@jpsamaroo @maleadt not sure I'm following. What exactly is the preferred course of action here?
@jpsamaroo and @maleadt could we do a quick Zoom call to sync up on this? I think it's a quick fix.
@jpsamaroo I've created a pull request for this: …
Mapreduce is now defined for …
@pxl-th yes, I think I implemented it myself and made the PR but forgot about this issue :D