
MeanPool doesn't support complex CUDA arrays #981

Open
salbert83 opened this issue Oct 17, 2024 · 4 comments
Labels
enhancement New feature or request

Comments

@salbert83

Since several layers support complex types (e.g., Dense, Conv, Bilinear), I expected the same from MeanPool and AdaptiveMeanPool. The example below exercises MeanPool (copied and pasted from JupyterLab). I've seen the same issue on Linux.

using CUDA
using LinearAlgebra
using Lux
using LuxCUDA
using Pkg
using Random

versioninfo(), Pkg.status()
*********************************** output *******************************
Julia Version 1.11.0
Commit 501a4f25c2 (2024-10-07 11:40 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: 8 × Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz
WORD_SIZE: 64
LLVM: libLLVM-16.0.6 (ORCJIT, icelake-client)
Threads: 1 default, 0 interactive, 1 GC (on 8 virtual cores)
Status C:\Users\salbe\OneDrive\Documents\Research\JuliaBugs\Project.toml
[052768ef] CUDA v5.5.2
[7073ff75] IJulia v1.25.0
[b2108857] Lux v1.1.0
[d0bbae9a] LuxCUDA v0.3.3
(nothing, nothing)


CUDA.allowscalar(false)
X = randn(ComplexF64, 64, 64, 3, 10);
mp = MeanPool((5, 5); stride=(1, 1))
ps, st = Lux.setup(Xoshiro(), mp)
Y = mp(X, ps, st)[1]
size(Y), typeof(Y)
*********************************** output *******************************
((60, 60, 3, 10), Array{ComplexF64, 4})


X_ = CuArray{ComplexF64}(X)
ps_ = Lux.gpu_device()(ps)
st_ = Lux.gpu_device()(st)
************************** output *******************************
NamedTuple()


Y_ = mp(X_, ps_, st_)
*********************** error message ********************************
Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations do not execute on the GPU, but very slowly on the CPU,
and therefore should be avoided.

If you want to allow scalar iteration, use allowscalar or @allowscalar
to enable scalar iteration globally or for the operations in question.

Stacktrace:
[1] error(s::String)
@ Base .\error.jl:35
[2] errorscalar(op::String)
@ GPUArraysCore C:\Users\salbe.julia\packages\GPUArraysCore\GMsgk\src\GPUArraysCore.jl:155
[3] _assertscalar(op::String, behavior::GPUArraysCore.ScalarIndexing)
@ GPUArraysCore C:\Users\salbe.julia\packages\GPUArraysCore\GMsgk\src\GPUArraysCore.jl:128
[4] assertscalar(op::String)
@ GPUArraysCore C:\Users\salbe.julia\packages\GPUArraysCore\GMsgk\src\GPUArraysCore.jl:116
[5] getindex
@ C:\Users\salbe.julia\packages\GPUArrays\qt4ax\src\host\indexing.jl:50 [inlined]
[6] scalar_getindex
@ C:\Users\salbe.julia\packages\GPUArrays\qt4ax\src\host\indexing.jl:36 [inlined]
[7] _getindex
@ C:\Users\salbe.julia\packages\GPUArrays\qt4ax\src\host\indexing.jl:19 [inlined]
[8] getindex
@ C:\Users\salbe.julia\packages\GPUArrays\qt4ax\src\host\indexing.jl:17 [inlined]
[9] meanpool_direct!(y::CuArray{ComplexF64, 5, CUDA.DeviceMemory}, x::CuArray{ComplexF64, 5, CUDA.DeviceMemory}, pdims::PoolDims{3, 3, 3, 6, 3}, ::Val{(5, 5, 1)}, ::Val{3}, ::Val{(0, 0, 0, 0, 0, 0)}, ::Val{(1, 1, 1)}, ::Val{(1, 1, 1)}; alpha::Int64, beta::Int64, kwargs::@kwargs{})
@ NNlib C:\Users\salbe.julia\packages\NNlib\CkJqS\src\impl\pooling_direct.jl:96
[10] meanpool_direct!(y::CuArray{ComplexF64, 5, CUDA.DeviceMemory}, x::CuArray{ComplexF64, 5, CUDA.DeviceMemory}, pdims::PoolDims{3, 3, 3, 6, 3}; alpha::Int64, beta::Int64, kwargs::@kwargs{})
@ NNlib C:\Users\salbe.julia\packages\NNlib\CkJqS\src\impl\pooling_direct.jl:7
[11] meanpool_direct!
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\impl\pooling_direct.jl:4 [inlined]
[12] #meanpool!#410
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:41 [inlined]
[13] meanpool!
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:38 [inlined]
[14] #meanpool!#425
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:73 [inlined]
[15] meanpool!
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:70 [inlined]
[16] meanpool(x::CuArray{ComplexF64, 4, CUDA.DeviceMemory}, pdims::PoolDims{2, 2, 2, 4, 2}; kwargs::@kwargs{})
@ NNlib C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:119
[17] meanpool
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:114 [inlined]
[18] MeanPoolOp
@ C:\Users\salbe.julia\packages\Lux\VkHFW\src\layers\pooling.jl:39 [inlined]
[19] PoolingLayer
@ C:\Users\salbe.julia\packages\Lux\VkHFW\src\layers\pooling.jl:81 [inlined]
[20] apply
@ C:\Users\salbe.julia\packages\LuxCore\IBKvY\src\LuxCore.jl:155 [inlined]
[21] (::MeanPool{Lux.PoolingLayer{Lux.GenericPoolMode{Tuple{Int64, Int64}, Tuple{Int64, Int64}, NTuple{4, Int64}, Tuple{Int64, Int64}}, Lux.MeanPoolOp}})(x::CuArray{ComplexF64, 4, CUDA.DeviceMemory}, ps::@NamedTuple{}, st::@NamedTuple{})
@ LuxCore C:\Users\salbe.julia\packages\LuxCore\IBKvY\src\LuxCore.jl:266
[22] top-level scope
@ In[41]:1
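One possible forward-pass workaround until this is supported upstream, assuming only that mean pooling is a linear operation: pool the real and imaginary parts separately and recombine. A sketch (untested; `mp`, `ps_`, `st_` as defined above):

```julia
using CUDA, Lux, LuxCUDA, Random

CUDA.allowscalar(false)
mp = MeanPool((5, 5); stride=(1, 1))
ps, st = Lux.setup(Xoshiro(), mp)

X_  = CuArray(randn(ComplexF64, 64, 64, 3, 10))
ps_ = Lux.gpu_device()(ps)
st_ = Lux.gpu_device()(st)

# Mean pooling is linear, so pooling the real and imaginary parts
# separately and recombining is mathematically equivalent to pooling
# the complex array directly.
Y_ = mp(real(X_), ps_, st_)[1] .+ im .* mp(imag(X_), ps_, st_)[1]
```

As the next comment shows, this split still runs into trouble once the expression is differentiated with Zygote.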

@avik-pal avik-pal added the enhancement New feature or request label Oct 17, 2024
@salbert83
Author

Even when one attempts a work-around, there is an issue with Zygote.

using CUDA
using LinearAlgebra
using Lux
using LuxCUDA
using Pkg
using Random
using Statistics
using Zygote
versioninfo(), Pkg.status()

Julia Version 1.11.0
Commit 501a4f25c2 (2024-10-07 11:40 UTC)
Build Info:
Official https://julialang.org/ release
Platform Info:
OS: Windows (x86_64-w64-mingw32)
CPU: 8 × Intel(R) Core(TM) i7-1065G7 CPU @ 1.30GHz
WORD_SIZE: 64
LLVM: libLLVM-16.0.6 (ORCJIT, icelake-client)
Threads: 1 default, 0 interactive, 1 GC (on 8 virtual cores)
Status C:\Users\salbe\OneDrive\Documents\Research\JuliaBugs\Project.toml
[052768ef] CUDA v5.5.2
[7073ff75] IJulia v1.25.0
[b2108857] Lux v1.1.0
[d0bbae9a] LuxCUDA v0.3.3
[e88e6eb3] Zygote v0.6.72
(nothing, nothing)

f(X, ps, st) = mean(abs2, mp(real(X), ps, st)[1] + im*mp(imag(X), ps, st)[1])
f(X_, ps_, st_) # X_, ps_, st_ defined in code segment previously posted
output --> 0.040390380456779694
Zygote.gradient(z -> f(X_, z, st_), ps_)
*********************** error message ********************************
Scalar indexing is disallowed.
Invocation of getindex resulted in scalar indexing of a GPU array.
This is typically caused by calling an iterating implementation of a method.
Such implementations do not execute on the GPU, but very slowly on the CPU,
and therefore should be avoided.

If you want to allow scalar iteration, use allowscalar or @allowscalar
to enable scalar iteration globally or for the operations in question.

Stacktrace:
[1] error(s::String)
@ Base .\error.jl:35
[2] errorscalar(op::String)
@ GPUArraysCore C:\Users\salbe.julia\packages\GPUArraysCore\GMsgk\src\GPUArraysCore.jl:155
[3] _assertscalar(op::String, behavior::GPUArraysCore.ScalarIndexing)
@ GPUArraysCore C:\Users\salbe.julia\packages\GPUArraysCore\GMsgk\src\GPUArraysCore.jl:128
[4] assertscalar(op::String)
@ GPUArraysCore C:\Users\salbe.julia\packages\GPUArraysCore\GMsgk\src\GPUArraysCore.jl:116
[5] getindex
@ C:\Users\salbe.julia\packages\GPUArrays\qt4ax\src\host\indexing.jl:50 [inlined]
[6] scalar_getindex
@ C:\Users\salbe.julia\packages\GPUArrays\qt4ax\src\host\indexing.jl:36 [inlined]
[7] _getindex
@ C:\Users\salbe.julia\packages\GPUArrays\qt4ax\src\host\indexing.jl:19 [inlined]
[8] getindex
@ C:\Users\salbe.julia\packages\GPUArrays\qt4ax\src\host\indexing.jl:17 [inlined]
[9] ∇meanpool_direct!(dx::CuArray{Float64, 5, CUDA.DeviceMemory}, dy::CuArray{ComplexF64, 5, CUDA.DeviceMemory}, y::CuArray{Float64, 5, CUDA.DeviceMemory}, x::CuArray{Float64, 5, CUDA.DeviceMemory}, pdims::PoolDims{3, 3, 3, 6, 3}, ::Val{(5, 5, 1)}; alpha::Int64, beta::Int64, kwargs::@kwargs{})
@ NNlib C:\Users\salbe.julia\packages\NNlib\CkJqS\src\impl\pooling_direct.jl:232
[10] ∇meanpool_direct!(dx::CuArray{Float64, 5, CUDA.DeviceMemory}, dy::CuArray{ComplexF64, 5, CUDA.DeviceMemory}, y::CuArray{Float64, 5, CUDA.DeviceMemory}, x::CuArray{Float64, 5, CUDA.DeviceMemory}, pdims::PoolDims{3, 3, 3, 6, 3}, ::Val{(5, 5, 1)})
@ NNlib C:\Users\salbe.julia\packages\NNlib\CkJqS\src\impl\pooling_direct.jl:183
[11] ∇meanpool_direct!(dx::CuArray{Float64, 5, CUDA.DeviceMemory}, dy::CuArray{ComplexF64, 5, CUDA.DeviceMemory}, y::CuArray{Float64, 5, CUDA.DeviceMemory}, x::CuArray{Float64, 5, CUDA.DeviceMemory}, pdims::PoolDims{3, 3, 3, 6, 3}; kwargs::@kwargs{})
@ NNlib C:\Users\salbe.julia\packages\NNlib\CkJqS\src\impl\pooling_direct.jl:176
[12] ∇meanpool_direct!
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\impl\pooling_direct.jl:172 [inlined]
[13] #∇meanpool!#413
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:57 [inlined]
[14] ∇meanpool!
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:53 [inlined]
[15] #∇meanpool!#426
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:90 [inlined]
[16] ∇meanpool!
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:86 [inlined]
[17] ∇meanpool(dy::CuArray{ComplexF64, 4, CUDA.DeviceMemory}, y::CuArray{Float64, 4, CUDA.DeviceMemory}, x::CuArray{Float64, 4, CUDA.DeviceMemory}, pdims::PoolDims{2, 2, 2, 4, 2}; kwargs::@kwargs{})
@ NNlib C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:129
[18] ∇meanpool
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:123 [inlined]
[19] meanpool_pullback
@ C:\Users\salbe.julia\packages\NNlib\CkJqS\src\pooling.jl:207 [inlined]
[20] ZBack
@ C:\Users\salbe.julia\packages\Zygote\NRp5C\src\compiler\chainrules.jl:212 [inlined]
[21] MeanPoolOp
@ C:\Users\salbe.julia\packages\Lux\VkHFW\src\layers\pooling.jl:39 [inlined]
[22] PoolingLayer
@ C:\Users\salbe.julia\packages\Lux\VkHFW\src\layers\pooling.jl:81 [inlined]
[23] apply
@ C:\Users\salbe.julia\packages\LuxCore\IBKvY\src\LuxCore.jl:155 [inlined]
[24] AbstractLuxWrapperLayer
@ C:\Users\salbe.julia\packages\LuxCore\IBKvY\src\LuxCore.jl:266 [inlined]
[25] f
@ .\In[23]:1 [inlined]
[26] (::Zygote.Pullback{Tuple{typeof(f), CuArray{ComplexF64, 4, CUDA.DeviceMemory}, @NamedTuple{}, @NamedTuple{}}, …})(Δ::Float64) [remaining pullback type parameters elided]
@ Zygote C:\Users\salbe.julia\packages\Zygote\NRp5C\src\compiler\interface2.jl:0
[27] #3
@ .\In[26]:1 [inlined]
[28] (::Zygote.var"#78#79"{Zygote.Pullback{Tuple{var"#3#4", @NamedTuple{}}, …}})(Δ::Float64) [pullback type parameters elided]
@ Zygote C:\Users\salbe.julia\packages\Zygote\NRp5C\src\compiler\interface.jl:91
[29] gradient(f::Function, args::@NamedTuple{})
@ Zygote C:\Users\salbe.julia\packages\Zygote\NRp5C\src\compiler\interface.jl:148
[30] top-level scope
@ In[26]:1
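The failing frame shows a complex `dy::CuArray{ComplexF64, …}` being written into a real `dx` by `∇meanpool_direct!`. A hedged sketch of a loss reformulation that sidesteps that promotion entirely, using the identity |a + ib|² = a² + b² together with the linearity of pooling (untested; `mp`, `X_`, `ps_`, `st_` as defined above):

```julia
# Equivalent to f above: abs2.(mp(re) .+ im .* mp(im)) == mp(re).^2 .+ mp(im).^2
# elementwise, so the mean of abs2 splits into two real-valued terms whose
# cotangents never become complex on the way back through meanpool.
g(X, ps, st) = mean(abs2, mp(real(X), ps, st)[1]) +
               mean(abs2, mp(imag(X), ps, st)[1])

Zygote.gradient(z -> g(X_, z, st_), ps_)
```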


@avik-pal
Member

This is not really a Zygote issue. Something is promoting the inputs to the pullbacks to a complex array.

@salbert83
Author

salbert83 commented Oct 18, 2024 via email

@avik-pal
Member

avik-pal commented Oct 22, 2024

This should be handled in NNlib.jl (Posted in FluxML/NNlib.jl#610)
