Merge pull request #70 from SciML/ChrisRackauckas-patch-2
Fix GPU Tests

Drop the ArrayInterfaceCore dependency from the GPU test file and use SciMLBase.parameterless_type for the GPU-array check instead.
ChrisRackauckas authored Jun 22, 2023
2 parents e22c0c6 + 6af0692 commit 83250da
Showing 1 changed file with 3 additions and 4 deletions.
7 changes: 3 additions & 4 deletions test/gpu_all.jl
@@ -1,6 +1,5 @@
 using LinearAlgebra,
-    OrdinaryDiffEq, Test, PreallocationTools, CUDA, ForwardDiff,
-    ArrayInterfaceCore
+    OrdinaryDiffEq, Test, PreallocationTools, CUDA, ForwardDiff

 # upstream
 OrdinaryDiffEq.DiffEqBase.anyeltypedual(x::FixedSizeDiffCache, counter = 0) = Any
@@ -16,8 +15,8 @@ tmp_du_CUA = get_tmp(cache_CU, u0_CU)
 tmp_dual_du_CUA = get_tmp(cache_CU, dual_CU)
 tmp_du_CUN = get_tmp(cache_CU, 0.0f0)
 tmp_dual_du_CUN = get_tmp(cache_CU, dual_N)
-@test ArrayInterfaceCore.parameterless_type(typeof(cache_CU.dual_du)) ==
-      ArrayInterfaceCore.parameterless_type(typeof(u0_CU)) #check that dual cache array is a GPU array for performance reasons.
+@test SciMLBase.parameterless_type(typeof(cache_CU.dual_du)) ==
+      SciMLBase.parameterless_type(typeof(u0_CU)) #check that dual cache array is a GPU array for performance reasons.
 @test size(tmp_du_CUA) == size(u0_CU)
 @test typeof(tmp_du_CUA) == typeof(u0_CU)
 @test eltype(tmp_du_CUA) == eltype(u0_CU)
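The anyeltypedual method marked "# upstream" above appears to be a workaround so that DiffEqBase's dual-number detection does not attempt to infer a ForwardDiff.Dual eltype from the cache object itself.

The @test hunk asserts that the cache's dual buffer lives on the GPU, just like the state vector, by comparing their unparameterized array types. A minimal sketch of what that check does, assuming a CUDA-capable GPU and using a DiffCache as a stand-in for the cache_CU constructed earlier in the test file:

    using SciMLBase, CUDA, PreallocationTools

    u0 = cu(rand(Float32, 4))   # state vector on the GPU: a CuArray
    cache = DiffCache(u0)       # preallocated cache built from the GPU array

    # parameterless_type strips all type parameters, so the Float32 state array
    # and the cache's dual buffer both reduce to the bare CuArray type.
    SciMLBase.parameterless_type(typeof(u0))            # CuArray
    SciMLBase.parameterless_type(typeof(cache.dual_du)) # CuArray as well

Swapping ArrayInterfaceCore.parameterless_type for SciMLBase.parameterless_type keeps that assertion identical while letting the test file drop ArrayInterfaceCore from its imports, which is exactly what the first hunk does.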
