LinearInterpolation: Fix scalar indexing and generalize for ndims(u) > 2 #335

Merged: 6 commits into SciML:master, Sep 24, 2024

Conversation

@ashutosh-b-b (Contributor) commented Sep 23, 2024

Checklist

  • Appropriate tests were added
  • Any code changes were done in a way that does not break public API
  • All documentation related to code changes were updated
  • The new code follows the contributor guidelines, in particular the SciML Style Guide and COLPRAC.
  • Any new documentation only uses public API


@sathvikbhagavan (Member)

What is the context? Any MWE for this?

@ashutosh-b-b (Contributor, Author) commented Sep 23, 2024

> What is the context? Any MWE for this?

Yes, the current implementation fails on the GPU. I also realized we can generalize LinearInterpolation to N-dimensional arrays rather than restricting it to a Matrix:

julia> x = collect(0.0:0.1:10.0);

julia> y = [sin.(x) cos.(x)]' |> collect |> cu
2×101 CuArray{Float64, 2, CUDA.DeviceMemory}:
 0.0  0.0998334  0.198669  0.29552   0.389418  0.479426  …  -0.174327  -0.271761  -0.366479  -0.457536  -0.544021
 1.0  0.995004   0.980067  0.955337  0.921061  0.877583     -0.984688  -0.962365  -0.930426  -0.889191  -0.839072

julia> li = LinearInterpolation(y, x)
ERROR: Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations *do not* execute on the GPU, but very slowly on the CPU,
and therefore should be avoided.

If you want to allow scalar iteration, use `allowscalar` or `@allowscalar`
to enable scalar iteration globally or for the operations in question.
Stacktrace:
  [1] error(s::String)
    @ Base ./error.jl:35
  [2] errorscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/GMsgk/src/GPUArraysCore.jl:155
  [3] _assertscalar(op::String, behavior::GPUArraysCore.ScalarIndexing)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/GMsgk/src/GPUArraysCore.jl:128
  [4] assertscalar(op::String)
    @ GPUArraysCore ~/.julia/packages/GPUArraysCore/GMsgk/src/GPUArraysCore.jl:116
  [5] getindex
    @ ~/.julia/packages/GPUArrays/qt4ax/src/host/indexing.jl:50 [inlined]
  [6] scalar_getindex
    @ ~/.julia/packages/GPUArrays/qt4ax/src/host/indexing.jl:36 [inlined]
  [7] _getindex
    @ ~/.julia/packages/GPUArrays/qt4ax/src/host/indexing.jl:19 [inlined]
  [8] getindex
    @ ~/.julia/packages/GPUArrays/qt4ax/src/host/indexing.jl:17 [inlined]
  [9] getindex
    @ ~/.julia/packages/ReadOnlyArrays/I7g0C/src/ReadOnlyArrays.jl:63 [inlined]
 [10] (::DataInterpolations.var"#1#2"{ReadOnlyArrays.ReadOnlyMatrix{Float64, CuArray{…}}, Int64})(j::Int64)
    @ DataInterpolations ./none:0
 [11] iterate
    @ ./generator.jl:47 [inlined]
 [12] collect(itr::Base.Generator{UnitRange{Int64}, DataInterpolations.var"#1#2"{ReadOnlyArrays.ReadOnlyMatrix{…}, Int64}})
    @ Base ./array.jl:834
 [13] linear_interpolation_parameters(u::ReadOnlyArrays.ReadOnlyMatrix{…}, t::ReadOnlyArrays.ReadOnlyVector{…}, idx::Int64)
    @ DataInterpolations ~/.julia/packages/DataInterpolations/cSfd8/src/parameter_caches.jl:17
 [14] _broadcast_getindex_evalf
    @ ./broadcast.jl:709 [inlined]
 [15] _broadcast_getindex
    @ ./broadcast.jl:682 [inlined]
 [16] getindex
    @ ./broadcast.jl:636 [inlined]
 [17] macro expansion
    @ ./broadcast.jl:1004 [inlined]
 [18] macro expansion
    @ ./simdloop.jl:77 [inlined]
 [19] copyto!
    @ ./broadcast.jl:1003 [inlined]
 [20] copyto!
    @ ./broadcast.jl:956 [inlined]
 [21] copy
    @ ./broadcast.jl:928 [inlined]
 [22] materialize
    @ ./broadcast.jl:903 [inlined]
 [23] DataInterpolations.LinearParameterCache(u::ReadOnlyArrays.ReadOnlyMatrix{…}, t::ReadOnlyArrays.ReadOnlyVector{…})
    @ DataInterpolations ~/.julia/packages/DataInterpolations/cSfd8/src/parameter_caches.jl:6
 [24] LinearInterpolation(u::CuArray{Float64, 2, CUDA.DeviceMemory}, t::Vector{Float64}; extrapolate::Bool, safetycopy::Bool)
    @ DataInterpolations ~/.julia/packages/DataInterpolations/cSfd8/src/interpolation_caches.jl:33
 [25] LinearInterpolation(u::CuArray{Float64, 2, CUDA.DeviceMemory}, t::Vector{Float64})
    @ DataInterpolations ~/.julia/packages/DataInterpolations/cSfd8/src/interpolation_caches.jl:31
 [26] top-level scope
    @ REPL[26]:1
Some type information was truncated. Use `show(err)` to see complete types.

@ashutosh-b-b marked this pull request as ready for review September 23, 2024 10:45
@ashutosh-b-b (Contributor, Author) commented Sep 24, 2024

There is one breaking change here. The output of LinearInterpolation in the case of u <: AbstractMatrix is no longer a vector, i.e.

julia> A(0.1)
2×1 Matrix{Float64}:
 0.19999999999999996
 0.2999999999999998

This was a vector before. But since we disallowed scalar indexing, the slope for an AbstractArray is no longer a Vector{Vector} but a Vector{AbstractArray} instead.

This will fail the interpolation tests for 2D matrices.

@ChrisRackauckas (Member)

Run the formatter

@ChrisRackauckas merged commit beba54c into SciML:master Sep 24, 2024
10 checks passed
@adrhill commented Oct 1, 2024

> There is one breaking change here.

This indeed broke my package extension tests in SparseConnectivityTracer.

> The output of LinearInterpolation in the case of u <: AbstractMatrix is no longer a vector.

Is this now inconsistent across AbstractInterpolations?

@adrhill commented Oct 1, 2024

Related issue: #328

@ChrisRackauckas (Member)

That should now be consistent?

@adrhill commented Oct 1, 2024

No, as described by @ashutosh-b-b, LinearInterpolation now returns a matrix, whereas other methods return vectors:

julia> u = rand(5, 5)
5×5 Matrix{Float64}:
 0.668608   0.874591   0.971824     0.878126  0.413327
 0.291351   0.0898619  0.000914161  0.299064  0.936915
 0.359392   0.67281    0.0165996    0.519808  0.130723
 0.0474623  0.902087   0.216814     0.691552  0.118745
 0.271747   0.846124   0.937015     0.815366  0.257844

julia> t = 0:4
0:4

julia> ConstantInterpolation(u, t)(1.0)
5-element Vector{Float64}:
 0.8745907971604465
 0.08986185939550206
 0.6728097691137314
 0.9020867050011258
 0.8461239822546711

julia> LinearInterpolation(u, t)(1.0)
5×1 Matrix{Float64}:
 0.8745907971604465
 0.08986185939550206
 0.6728097691137314
 0.9020867050011258
 0.8461239822546711

julia> QuadraticInterpolation(u, t)(1.0)
5-element Vector{Float64}:
 0.8745907971604465
 0.08986185939550206
 0.6728097691137314
 0.9020867050011258
 0.8461239822546711

@ChrisRackauckas (Member)

Oh wait, you weren't talking about an array-of-matrices input? Or a higher-order tensor?

@adrhill commented Oct 1, 2024

This PR makes interpolations inconsistent in the sense that LinearInterpolation returns a matrix when called with a scalar, while all other methods return vectors.

For SCT and #328, I would like to be able to tell the state dimensionality from the type.
Ideally, all AbstractInterpolation{T,N} would either return an N×1 Matrix or a vector with N entries when called with a scalar. As long as the interface is consistent, I don't mind which one.

@ChrisRackauckas (Member)

That should definitely be a vector. I thought this was a case where it was interpolating arrays of matrices, not data whose output is a vector of vectors.

@sathvikbhagavan (Member)

Yeah, to get a matrix, we can always do A([t])
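
A minimal sketch of that workaround, assuming A = LinearInterpolation(u, t) with the matrix-valued u and t from the examples above, and scalar evaluation returning a vector again once the follow-up fix lands:

A(1.0)    # scalar time: interpolated state as a vector
A([1.0])  # one-element vector of times: interpolated states as a matrix, as suggested above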

@ChrisRackauckas (Member)

@sathvikbhagavan can you handle it?

@ashutosh-b-b (Contributor, Author)

I have a potential fix open: #339.
This PR removed scalar indexing from the slope calculations, so

x[:, idx + 1] - x[:, idx]
# became
x[:, (idx + 1):(idx + 1)] - x[:, idx:idx]

which prompted the Matrix change.
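
For illustration (plain Julia, independent of DataInterpolations), the shape difference behind that change:

u = [0.0 0.1 0.2; 1.0 0.9 0.8]
u[:, 2] - u[:, 1]      # scalar column index drops the dimension: 2-element Vector (old slope/output shape)
u[:, 2:2] - u[:, 1:1]  # range index keeps the dimension: 2×1 Matrix (new, GPU-friendly shape)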

@sathvikbhagavan (Member)

Nice! I will tackle #328
