Differentiability tests for time stepping #656

Open
wants to merge 65 commits into base: main
Changes from 61 commits
Commits
65 commits
94f202d
diff tests in extra folder
maximilian-gelbrecht Jan 3, 2025
986086f
more WIP
maximilian-gelbrecht Jan 7, 2025
90c95b1
new timestepping
maximilian-gelbrecht Jan 9, 2025
b657098
Merge branch 'main' into mg/barotropic-differentiability
maximilian-gelbrecht Jan 9, 2025
757dfaa
changelog diff tests timestepping
maximilian-gelbrecht Jan 9, 2025
f4d3ca8
Merge branch 'main' into mg/barotropic-differentiability
milankl Jan 9, 2025
b354516
more wip
maximilian-gelbrecht Jan 9, 2025
8935181
prognosticvariables one, fill, zero
maximilian-gelbrecht Jan 10, 2025
8026ec5
WIP
maximilian-gelbrecht Jan 10, 2025
ed522fa
more VIP step vor
maximilian-gelbrecht Jan 10, 2025
21e1500
changelog
maximilian-gelbrecht Jan 10, 2025
2db1583
more WIP
maximilian-gelbrecht Jan 13, 2025
ad43ef7
fix old spectral gradient test
maximilian-gelbrecht Jan 19, 2025
be8d08c
PrognosticVariables compatible with FiniteDifferences
maximilian-gelbrecht Jan 19, 2025
684aef6
cleanup timestepping tests a bit, test currenlty failing
maximilian-gelbrecht Jan 19, 2025
c96bd74
small fixes and WIP
maximilian-gelbrecht Jan 19, 2025
9701b28
make_zero works for model
maximilian-gelbrecht Jan 20, 2025
055f496
updated finitedifferences compat
maximilian-gelbrecht Jan 22, 2025
e0d44ac
FiniteDifferences now working with Diagn and Progn
maximilian-gelbrecht Jan 22, 2025
8ff6b3a
minor confusion about finitedifference correctness
maximilian-gelbrecht Jan 22, 2025
5e414aa
working towards timestepping diff PrimitiveModel
maximilian-gelbrecht Jan 23, 2025
6a421c0
Merge branch 'main' into mg/barotropic-differentiability
maximilian-gelbrecht Jan 23, 2025
28372ae
minor test update
maximilian-gelbrecht Jan 24, 2025
c6561c5
copy! for Clock
maximilian-gelbrecht Jan 25, 2025
5158af7
avoid ringgrids broadcasting
maximilian-gelbrecht Jan 25, 2025
92e42fe
include one differentiability test in main test set
maximilian-gelbrecht Jan 27, 2025
52aad60
leapfrog correctness passing
maximilian-gelbrecht Jan 27, 2025
4d5b0ef
fix FD ext, towards transfrom diag correctnes test
maximilian-gelbrecht Jan 27, 2025
ea96381
first doc sketch
maximilian-gelbrecht Jan 28, 2025
60e28fa
more correctness tests
maximilian-gelbrecht Jan 28, 2025
141dc28
one for diag based on to_vec
maximilian-gelbrecht Jan 28, 2025
6a9f6cc
finish correctness test individual functions barotropic model
maximilian-gelbrecht Jan 28, 2025
922f0c1
cleanup barotropic diff tests
maximilian-gelbrecht Jan 28, 2025
cd3e837
more work on the correctness tests
maximilian-gelbrecht Jan 28, 2025
5889981
less hardcoded to_vec
maximilian-gelbrecht Jan 29, 2025
bf1f842
remove flatten function
maximilian-gelbrecht Jan 29, 2025
130de65
Merge remote-tracking branch 'origin/main' into mg/barotropic-differe…
maximilian-gelbrecht Jan 29, 2025
26141d9
adjust new surface fluxes, less type instabilities
maximilian-gelbrecht Jan 29, 2025
50dd2ec
adjust show and copy! progn
maximilian-gelbrecht Jan 29, 2025
a3f1638
fix copy progn
maximilian-gelbrecht Jan 29, 2025
4dc8944
remove merge conflict message from changelog
maximilian-gelbrecht Jan 29, 2025
ece4449
adjust CI test
maximilian-gelbrecht Jan 29, 2025
65823f0
fix CI test (again)
maximilian-gelbrecht Jan 29, 2025
1375887
copy! progn avoids ringgrids broadcasting
maximilian-gelbrecht Jan 29, 2025
67e0dac
barotropic correctness tests done
maximilian-gelbrecht Jan 30, 2025
b27609e
WIP tests
maximilian-gelbrecht Jan 30, 2025
525ef09
WIP primtive wet tests
maximilian-gelbrecht Jan 30, 2025
21af75d
Merge remote-tracking branch 'origin/main' into mg/barotropic-differe…
maximilian-gelbrecht Jan 31, 2025
36f9160
adjust progn zero to new land
maximilian-gelbrecht Jan 31, 2025
a968050
adjust zero for progn
maximilian-gelbrecht Jan 31, 2025
0f472fd
actually fix it...
maximilian-gelbrecht Jan 31, 2025
af14a22
replace NaNs in to_vec for FD
maximilian-gelbrecht Jan 31, 2025
d05eb7d
WIP correctness tests
maximilian-gelbrecht Jan 31, 2025
2635431
Implement dataids for array types (fixes broadcasting diff issue)
maximilian-gelbrecht Jan 31, 2025
3375674
adjust to now functioning ringgrids broadcast
maximilian-gelbrecht Jan 31, 2025
796b331
reduce resolution of CI test
maximilian-gelbrecht Jan 31, 2025
6a7767e
copy all structs for FD
maximilian-gelbrecht Feb 4, 2025
06de07d
parameter test in timestepping as well
maximilian-gelbrecht Feb 4, 2025
16dc412
doc update
maximilian-gelbrecht Feb 5, 2025
cff60e1
remove to-do note
maximilian-gelbrecht Feb 5, 2025
7bda523
add enzyme to docs env
maximilian-gelbrecht Feb 5, 2025
42ce046
remove Enzyme dep from docs again
maximilian-gelbrecht Feb 5, 2025
7a49401
Merge branch 'main' into mg/barotropic-differentiability
maximilian-gelbrecht Feb 13, 2025
f82883a
full diff test only in Julia 1.10
maximilian-gelbrecht Feb 13, 2025
926d699
actually restrict Julia version correclty ...
maximilian-gelbrecht Feb 13, 2025
1 change: 1 addition & 0 deletions CHANGELOG.md
@@ -2,6 +2,7 @@

## Unreleased

- Differentiability tests for timestepping added [#656](https://github.com/SpeedyWeather/SpeedyWeather.jl/pull/656)
- Interfaces for interpolation of AbstractGridArray [#671](https://github.com/SpeedyWeather/SpeedyWeather.jl/pull/671)
- Test folder sorted into subfolders [#671](https://github.com/SpeedyWeather/SpeedyWeather.jl/pull/671)
- Land model modularised + land netCDF output [#671](https://github.com/SpeedyWeather/SpeedyWeather.jl/pull/671)
1 change: 1 addition & 0 deletions docs/Project.toml
@@ -1,6 +1,7 @@
[deps]
CairoMakie = "13f3f980-e62b-5c42-98c6-ff1f3baf88f0"
Documenter = "e30172f5-a6a5-5a46-863b-614d45cd2de4"
Enzyme = "7da242da-08ed-463a-9acd-ee780be4f1d9"
@maximilian-gelbrecht (Member, Author) commented Feb 5, 2025:

Adding Enzyme to the docs and actually running the example code makes the whole docs CI process much longer, so long in fact that the process timed out. We may also just hardcode the Enzyme example in the docs.

Member replied:

Happy with that for now! We can later check how to automate these tests if that's currently infeasible.

GeoMakie = "db073c08-6b98-4ee5-b6a4-5efafb3259c6"
NCDatasets = "85f8d34a-cbdd-5861-8df4-14fed0d494ab"
UnicodePlots = "b8865327-cd53-5732-bb35-84acbb429228"
1 change: 1 addition & 0 deletions docs/make.jl
@@ -20,6 +20,7 @@ makedocs(
"Stochastic physics" => "stochastic_physics.md",
"Analysis"=>"analysis.md",
"Tree structure"=>"structure.md",
"Differentiability and Adjoint Model"=>"differentiability.md",
"NetCDF output"=>"output.md",
],
"Extending SpeedyWeather" => [
57 changes: 57 additions & 0 deletions docs/src/differentiability.md
@@ -0,0 +1,57 @@
# Differentiability and Adjoint Model

SpeedyWeather.jl is written with differentiability in mind. This means that our model is differentiable by automatic differentiation (AD). If you are interested in machine learning (ML), you can integrate our model directly into your ML models without the need to train your ANNs separately first. For atmospheric modellers it means that you get an adjoint model for free: it is generated automatically, so we don't need to maintain it separately. You can therefore calibrate SpeedyWeather.jl in a fully automatic, data-driven way.

!!! warning "Work in progress"
    The differentiability of SpeedyWeather.jl is still work in progress and some parts of this documentation might not always reflect the latest state. We will extend this documentation over time. Don't hesitate to contact us via GitHub issues or mail if you have questions or want to collaborate.

For the differentiability of our model we rely on [Enzyme.jl](https://github.com/EnzymeAD/Enzyme.jl). If you've used Enzyme before, just go ahead and try to differentiate the model! It should work. We have checked the correctness of the gradients extensively against finite differences computed with [FiniteDifferences.jl](https://github.com/JuliaDiff/FiniteDifferences.jl/). In the following we present a simple example of how to take the gradient of a single timestep of the primitive equation model with respect to one of the model parameters.

!!! warning "Enzyme with Julia 1.11"
    Currently there are still some issues with Enzyme in Julia 1.11, so we recommend using Julia 1.10 for the following.

First we initialize the model as usual:

```@example autodiff
using SpeedyWeather, Enzyme

spectral_grid = SpectralGrid(trunc=23, nlayers=3)
model = PrimitiveWetModel(; spectral_grid)
simulation = initialize!(model)
initialize!(simulation)
run!(simulation, period=Day(10)) # spin-up the model a bit
```

Then, we get all variables we need from our `simulation`:

```@example autodiff
(; prognostic_variables, diagnostic_variables, model) = simulation
(; Δt, Δt_millisec) = model.time_stepping
dt = 2Δt

progn = prognostic_variables
diagn = diagnostic_variables
```

Next, we prepare to use Enzyme. Enzyme stores the gradient information in a shadow of the original input. For the inputs this shadow is initialized to zero, whereas for the output the shadow is used as the seed of the AD. In other words, since we are doing reverse-mode AD, the shadow of the output is the value that is backpropagated by the reverse-mode AD. Ok, let's initialize everything:

```@example autodiff
dprogn = one(progn) # shadow for the progn values
ddiagn = make_zero(diagn) # shadow for the diagn values
dmodel = make_zero(model) # here, we'll accumulate all parameter derivatives
```

Then, we can already do the differentiation with Enzyme:

```@example autodiff
autodiff(Reverse, SpeedyWeather.timestep!, Const, Duplicated(progn, dprogn), Duplicated(diagn, ddiagn), Const(dt), Duplicated(model, dmodel))
```

The derivatives are accumulated in the `dmodel` shadow. So if, for example, we want to know the derivative with respect to the planet's gravity, we just have to inspect:

```@example autodiff
dmodel.planet.gravity
```
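Because the seed `dprogn = one(progn)` fills the output shadow with ones, this number can be read as the derivative of the sum of all prognostic coefficients after the step with respect to gravity (an interpretation sketched here under that seeding choice, not a statement from the page above). The same lookup works for any other parameter in the model tree, for example (field paths assumed for illustration):

```julia
dmodel.atmosphere.R_dry    # assumed field: sensitivity to the dry gas constant
dmodel.planet.rotation     # assumed field: sensitivity to the planet's rotation rate
```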

Doing a full sensitivity analysis through a long integration is computationally much more demanding, and is something that we are currently working on.

29 changes: 22 additions & 7 deletions ext/SpeedyWeatherEnzymeExt.jl
@@ -2,6 +2,8 @@ module SpeedyWeatherEnzymeExt

using SpeedyWeather
using Enzyme
using Enzyme.EnzymeCore
using SpeedyWeather.ProgressMeter
import .EnzymeRules: reverse, augmented_primal
using .EnzymeRules

@@ -17,23 +19,23 @@ function adjoint_scale(S::SpectralTransform)
(; nlat_half, nlons, rfft_plans) = S
nfreqs = [rfft_plan.osz[1] for rfft_plan in rfft_plans] # TODO: This works with FFTW, but does it with cuFFT as well?

scale = zeros(Int, maximum(nfreqs), nlat_half)
scale = zeros(Int, maximum(nfreqs), 1, nlat_half) # the scratch memory is (Freq x lvl x lat), so we insert
# an additional dimension here for easier matrix multiply

for i=1:nlat_half
scale[1:nfreqs[i],i] = rfft_adjoint_scale(nfreqs[i], nlons[i])
scale[1:nfreqs[i],1,i] = rfft_adjoint_scale(nfreqs[i], nlons[i])
end

# TODO: transfer array to GPU in case we are on GPU
return reshape(scale, maximum(nfreqs), 1, nlat_half) # the scratch memory is (Freq x lvl x lat), so we insert
# an additional dimension here for easier matrix multiply
# TODO: transfer array to GPU in case we are on GPU?
return scale
end

# Computes the scale for the adjoint/pullback of a real discrete fourier transform.
function rfft_adjoint_scale(n_freq::Int, n_real::Int)
if iseven(n_real)
return [1; [2 for i=2:(n_freq-1)]; 1]
return [1 < i < n_freq ? 2 : 1 for i=1:n_freq]
else
return [1; [2 for i=2:n_freq]]
return [1 < i ? 2 : 1 for i=1:n_freq]
end
end
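To see what these weights do, here is a small illustration (added for clarity, not part of the PR): the rfft of a real signal keeps only the non-negative frequencies, so every interior bin stands for a conjugate pair and carries weight 2 in the adjoint, while the DC bin, and for even lengths also the Nyquist bin, appears only once.

```julia
# n_real = 8 keeps n_freq = 8÷2+1 = 5 bins; n_real = 7 keeps n_freq = 7÷2+1 = 4 bins
rfft_adjoint_scale(5, 8)   # == [1, 2, 2, 2, 1]  (DC and Nyquist counted once)
rfft_adjoint_scale(4, 7)   # == [1, 2, 2, 2]     (odd length: no Nyquist bin)
```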

@@ -112,4 +114,17 @@ function reverse(config::EnzymeRules.RevConfigWidth{1}, func::Const{typeof(_four
return (nothing, nothing, nothing, nothing)
end

###
# implement make_zero where the default one fails

# ProgressCore contains a lock; the ProgressMeter it belongs to is part of the Feedback of all models
@inline function Enzyme.make_zero(
::Type{ProgressMeter.ProgressCore},
seen::IdDict,
prev::ProgressMeter.ProgressCore,
::Val{copy_if_inactive} = Val(false),
)::ProgressMeter.ProgressCore where {copy_if_inactive}
return prev
end

end
87 changes: 87 additions & 0 deletions ext/SpeedyWeatherFiniteDifferencesExt.jl
@@ -27,4 +27,91 @@ function FiniteDifferences.to_vec(x::LTA) where LTA <: LowerTriangularArray
return x_vec, LowerTriangularArray_from_vec
end

# Vector{Particle} needs an extra modification because an empty vector yields Any[] with to_vec for Particle (which isn't the case for number types)
function FiniteDifferences.to_vec(x::Vector{Particle{NF}}) where NF
if isempty(x)
return NF[], identity
else # the else statement is the unmodified to_vec(::DenseVector)
x_vecs_and_backs = map(to_vec, x)
x_vecs, backs = first.(x_vecs_and_backs), last.(x_vecs_and_backs)
function Vector_from_vec(x_vec)
sz = cumsum(map(length, x_vecs))
x_Vec = [backs[n](x_vec[sz[n] - length(x_vecs[n]) + 1:sz[n]]) for n in eachindex(x)]
return oftype(x, x_Vec)
end
# handle empty x
x_vec = isempty(x_vecs) ? eltype(eltype(x_vecs))[] : reduce(vcat, x_vecs)
return x_vec, Vector_from_vec
end
end

# A version of the generic fallback from FiniteDifferences that excludes some of the fields
# that we don't want to be varied for our big data structures
# also replaces NaNs that are expected in land and ocean variables
function FiniteDifferences.to_vec(x::T) where {T <: Union{PrognosticVariables, PrognosticVariablesOcean, PrognosticVariablesLand, DiagnosticVariables, Tendencies, GridVariables, DynamicsVariables, PhysicsVariables, ParticleVariables}}

excluded_fields_pre, included_fields, excluded_fields_post = determine_included_fields(T)

val_vecs_and_backs = map(name -> to_vec(getfield(x, name)), included_fields)
vals = first.(val_vecs_and_backs)
backs = last.(val_vecs_and_backs)

vals_excluded_pre = map(name -> getfield(x, name), excluded_fields_pre)
vals_excluded_post = map(name -> getfield(x, name), excluded_fields_post)

v, vals_from_vec = to_vec(vals)
v = replace_NaN(x, v)

function structtype_from_vec(v::Vector{<:Real})
val_vecs = vals_from_vec(v)
values = map((b, v) -> b(v), backs, val_vecs)

T(vals_excluded_pre..., values..., vals_excluded_post...)
end
return v, structtype_from_vec
end

function determine_included_fields(T::Type)
names = fieldnames(T)

included_field_types = Union{SpeedyWeather.AbstractDiagnosticVariables,
SpeedyWeather.AbstractPrognosticVariables, SpeedyWeather.ColumnVariables,
NTuple, Dict{Symbol, <:Tuple}, Dict{Symbol, <:AbstractArray}, AbstractArray}

excluded_fields_pre = []
included_fields = []
excluded_fields_post = []

for name in names
if fieldtype(T, name) <: included_field_types
push!(included_fields, name)
else
if isempty(included_fields)
push!(excluded_fields_pre, name)
else
push!(excluded_fields_post, name)
end
end
end

return excluded_fields_pre, included_fields, excluded_fields_post
end

# in the ocean and land variables we have NaNs, FiniteDifferences can't deal with those, so we replace them
function replace_NaN(x_type::T, vec) where {T <: Union{PrognosticVariablesOcean, PrognosticVariablesLand, PhysicsVariables}}
nan_indices = isnan.(vec)
vec[nan_indices] .= 0
return vec
end

# fallback, we really only want to replace the NaNs in ocean and land variables
replace_NaN(type, vec) = vec

# By default FiniteDifferences doesn't include this, even though Integers can't be varied.
# there's an old GitHub issue and PR about this
function FiniteDifferences.to_vec(x::Integer)
Integer_from_vec(v) = x
return Bool[], Integer_from_vec
end
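A quick illustration of the behaviour this method defines (added here for clarity, not part of the PR):

```julia
using SpeedyWeather, FiniteDifferences   # loading both packages activates this extension

v, back = FiniteDifferences.to_vec(3)
v          # Bool[]: an integer contributes nothing that can be perturbed
back(v)    # 3: the closure simply restores the original value
```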

end
2 changes: 2 additions & 0 deletions src/LowerTriangularMatrices/lower_triangular_array.jl
@@ -123,6 +123,8 @@ function Base.array_summary(io::IO, L::LowerTriangularMatrix{T}, inds::Tuple{Var
print(io, Base.dims2string(length.(inds)), ", $(mn[1])x$(mn[2]) LowerTriangularMatrix{$T}")
end

@inline Base.dataids(L::LowerTriangularArray) = Base.dataids(L.data)
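A sketch of what this enables (added for illustration; the reasoning about `Base.unalias` is an assumption about Base's broadcasting machinery): aliasing checks consult `dataids`, so forwarding to the wrapped `.data` lets a `LowerTriangularArray` be recognised as sharing memory with its parent array, just like a plain `Array` would be.

```julia
using SpeedyWeather.LowerTriangularMatrices

L = rand(LowerTriangularMatrix{Float64}, 5, 5)
Base.dataids(L) == Base.dataids(L.data)   # true: both report the same underlying buffer
Base.mightalias(L, L.data)                # true: broadcasting treats them as aliased
```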

# CREATE INSTANCES (ZEROS, ONES, UNDEF)
for f in (:zeros, :ones, :rand, :randn)
@eval begin
8 changes: 5 additions & 3 deletions src/RingGrids/general.jl
@@ -22,6 +22,9 @@ nonparametric_type(grid::AbstractGridArray) = nonparametric_type(typeof(grid))
# also needed for other array types, defined in extensions
nonparametric_type(::Type{<:Array}) = Array

# needed for unalias
@inline Base.dataids(grid::AbstractGridArray) = Base.dataids(grid.data)

"""$(TYPEDSIGNATURES) Full grid array type for `grid`. Always returns the N-dimensional `*Array`
not the two-dimensional (`N=1`) `*Grid`. For reduced grids the corresponding full grid that
share the same latitudes."""
@@ -427,7 +430,6 @@
## BROADCASTING
# following https://docs.julialang.org/en/v1/manual/interfaces/#man-interfaces-broadcasting
import Base.Broadcast: BroadcastStyle, Broadcasted, DefaultArrayStyle
import LinearAlgebra: isstructurepreserving, fzeropreserving

# {1} as grids are <:AbstractVector, Grid here is the non-parameteric Grid type!
struct AbstractGridArrayStyle{N, Grid} <: Broadcast.AbstractArrayStyle{N} end
@@ -440,7 +442,7 @@ Base.BroadcastStyle(::Type{Grid}) where {Grid<:AbstractGridArray{T, N, ArrayType

# allocation for broadcasting, create a new Grid with undef of type/number format T
function Base.similar(bc::Broadcasted{AbstractGridArrayStyle{N, Grid}}, ::Type{T}) where {N, Grid, T}
return Grid(Array{T}(undef, size(bc)...))
return Grid(Array{T}(undef, size(bc)))
end

# ::Val{0} for broadcasting with 0-dimensional, ::Val{1} for broadcasting with vectors, etc
@@ -485,7 +487,7 @@ function Base.similar(
::Type{T},
) where {N, ArrayType, Grid, T}
ArrayType_ = nonparametric_type(ArrayType)
return Grid(ArrayType_{T}(undef, size(bc)...))
return Grid(ArrayType_{T}(undef, size(bc)))
end

function Adapt.adapt_structure(to, grid::Grid) where {Grid <: AbstractGridArray}
12 changes: 12 additions & 0 deletions src/dynamics/clock.jl
@@ -38,6 +38,18 @@ function Base.show(io::IO, C::Clock)
print_fields(io, C, keys)
end

# copy!
function Base.copy!(clock::Clock, clock_old::Clock)
clock.time = clock_old.time
clock.start = clock_old.start
clock.period = clock_old.period
clock.timestep_counter = clock_old.timestep_counter
clock.n_timesteps = clock_old.n_timesteps
clock.Δt = clock_old.Δt

return nothing
end
Comment on lines +42 to +51
Member:

Looks good, but why is that needed? I suspect it's to restart a simulation? Because in that case we'd need to check whether all of them are supposed to just be copied over or whether some of them should be set differently afterwards.

Member Author (@maximilian-gelbrecht):

As you see in the code of the test, we really need to copy the structs quite often. I also sometimes use out-of-place forms of the functions, so my intent was to have them really identical if I copy them. Intuitively, that's also what I would expect from a copy! function.
I am happy to revert the change though; I also realise now that it's a bit more directly related to the integration. But maybe we want to have it like this? Before a new integration is started, set_period and initialize!(::Clock) are called again anyway, so this doesn't break anything as far as I can see.

Member:

No, I agree that's what I'd expect a copy! function to do, but you know that deepcopy is always defined for all types?

julia> struct S
           a
       end

julia> s = S(1)
S(1)

julia> deepcopy(s)
S(1)

Member Author (@maximilian-gelbrecht):

Yeah, deepcopy is very generic and works for all types; copy! has to be specifically defined.
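For context, a usage sketch of the copy! semantics discussed in this thread (added for illustration; the `prognostic_variables.clock` field path is assumed):

```julia
# Save the clock state, advance the simulation, then restore the clock in place.
clock_backup = deepcopy(simulation.prognostic_variables.clock)
run!(simulation, period=Day(1))                              # advances the clock
copy!(simulation.prognostic_variables.clock, clock_backup)   # field-by-field restore
```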


"""
$(TYPEDSIGNATURES)
Initialize the clock with the time step `Δt` in the `time_stepping`."""
8 changes: 4 additions & 4 deletions src/dynamics/implicit.jl
@@ -209,12 +209,12 @@
function initialize!(
implicit::ImplicitPrimitiveEquation,
dt::Real, # the scaled time step radius*dt
diagn::DiagnosticVariables,
diagn::DiagnosticVariables{NF},
geometry::AbstractGeometry,
geopotential::AbstractGeopotential,
atmosphere::AbstractAtmosphere,
adiabatic_conversion::AbstractAdiabaticConversion,
)
) where NF

(; trunc, nlayers, α, temp_profile, S, S⁻¹, L, R, U, W, L0, L1, L2, L3, L4) = implicit
(; σ_levels_full, σ_levels_thick) = geometry
@@ -267,10 +267,10 @@ function initialize!(

for r in 1:nlayers
L1[k, r] = ΔT_below*σ_levels_thick[r]*σₖ # vert advection operator below
L1[k, r] -= k>=r ? σ_levels_thick[r] : 0
L1[k, r] -= k>=r ? σ_levels_thick[r] : zero(NF)

L1[k, r] += ΔT_above*σ_levels_thick[r]*σₖ_above # vert advection operator above
L1[k, r] -= (k-1)>=r ? σ_levels_thick[r] : 0
L1[k, r] -= (k-1)>=r ? σ_levels_thick[r] : zero(NF)
end

# _sum_above operator itself
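The switch from the literal `0` to `zero(NF)` above keeps both branches of the ternary at one concrete number format. A minimal sketch of the difference (added for illustration, not part of the PR):

```julia
# With a Float32 layer thickness the literal 0 makes the expression a Union type,
# whereas zero(Float32) keeps it concretely Float32.
f_literal(k, r, dσ::Float32) = k >= r ? dσ : 0             # inferred as Union{Float32, Int64}
f_typed(k, r, dσ::Float32)   = k >= r ? dσ : zero(Float32) # inferred as Float32
```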
3 changes: 3 additions & 0 deletions src/dynamics/particles.jl
@@ -48,6 +48,9 @@ function Base.zeros(ArrayType::Type{<:AbstractArray{P}}, n::Int...) where {P<:Pa
fill!(z, zero(P))
end

Base.eltype(::Type{Particle{NF}}) where NF = NF
Base.eltype(::Particle{NF}) where NF = NF

Base.rand(rng::Random.AbstractRNG, ::Random.Sampler{Particle}) = rand(rng, Particle{DEFAULT_NF,true})
Base.rand(rng::Random.AbstractRNG, ::Random.Sampler{Particle{NF}}) where NF = rand(rng, Particle{NF,true})
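A small usage sketch for the `eltype` definitions added above (for illustration; constructing the particle with `rand` relies on the `rand` methods shown here):

```julia
using SpeedyWeather

p = rand(Particle{Float32})   # a particle at a random position
eltype(p)                     # Float32: the number format of its coordinates
eltype(Particle{Float32})     # Float32: also works on the type itself
```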
