[BUG] BlockArraysExtensions breaks TensorAlgebra.contract #57

All tests introduced in ITensor/TensorAlgebra.jl#34 pass when `BlockSparseArrays.jl` is not loaded (with the exception of the zero-dim output type). I find it surprising that the behavior of `TensorAlgebra.contract(::BlockArray)` changes depending on whether or not `BlockSparseArrays` is loaded. I believe we should refactor the package extensions to avoid this, probably by creating a new `TensorAlgebra`/`BlockArraysExtension`. We may then make `TensorAlgebra` a strong dependency of `BlockSparseArrays`, so that the extension is always loaded to be used with a `BlockSparseArray`.
---

Investigating. The following reproduces the error:

```julia
using BlockArrays: BlockArray, blockedrange
using TensorAlgebra: fusedims, FusionStyle

d = blockedrange([2, 3])
a1 = BlockArray(ones((d, d, d, d)))
FusionStyle(a1)  # TensorAlgebra.ReshapeFusion()
fusedims(a1, (1, 2), (3, 4)) isa Base.ReshapedArray{Float64, 2, <:BlockedArray{Float64,4}}

using BlockSparseArrays: BlockSparseArray
FusionStyle(a1)  # TensorAlgebra.BlockReshapeFusion()
fusedims(a1, (1, 2), (3, 4))  # BoundsError
```

```
ERROR: BoundsError: attempt to access Int64 at index [2]
Stacktrace:
  [1] getindex
    @ ./number.jl:98 [inlined]
  [2] #13
    @ ~/.julia/packages/BlockSparseArrays/RsnSQ/src/BlockArraysExtensions/BlockArraysExtensions.jl:281 [inlined]
  [3] map
    @ ./tuple.jl:356 [inlined]
  [4] blockreshape(a::BlockArray{Float64, 4, Array{…}, NTuple{…}}, axes::Tuple{BlockedOneTo{…}, BlockedOneTo{…}})
    @ BlockSparseArrays ~/.julia/packages/BlockSparseArrays/RsnSQ/src/BlockArraysExtensions/BlockArraysExtensions.jl:281
  [5] fusedims
    @ ~/.julia/packages/BlockSparseArrays/RsnSQ/ext/BlockSparseArraysTensorAlgebraExt/BlockSparseArraysTensorAlgebraExt.jl:19 [inlined]
  [6] fusedims
    @ ~/Documents/itensor/TensorAlgebra.jl/src/fusedims.jl:34 [inlined]
  [7] fusedims
    @ ~/Documents/itensor/TensorAlgebra.jl/src/fusedims.jl:43 [inlined]
  [8] fusedims
    @ ~/Documents/itensor/TensorAlgebra.jl/src/fusedims.jl:68 [inlined]
  [9] fusedims(a::BlockArray{…}, blockedperm::TensorAlgebra.BlockedPermutation{…})
    @ TensorAlgebra ~/Documents/itensor/TensorAlgebra.jl/src/fusedims.jl:73
 [10] fusedims(::BlockArray{Float64, 4, Array{…}, NTuple{…}}, ::Tuple{Int64, Int64}, ::Tuple{Int64, Int64})
    @ TensorAlgebra ~/Documents/itensor/TensorAlgebra.jl/src/fusedims.jl:51
 [11] top-level scope
    @ REPL[21]:1
Some type information was truncated. Use `show(err)` to see complete types.
```

I see 2 possibilities: preserving […]
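For context on the `BoundsError` in the stack trace above (this is generic Julia behavior, not specific to these packages): a scalar number is indexable like a zero-dimensional container, but only at index 1, so indexing an `Int64` at `[2]` throws exactly this error:

```julia
x = 5
x[1]  # 5: numbers support one-based scalar indexing
x[2]  # ERROR: BoundsError: attempt to access Int64 at index [2]
```

So the anonymous function at `BlockArraysExtensions.jl:281` (frame `[2]`) appears to index into an `Int` as if it were a collection with more than one element.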
---

I think […]

---

The […] Edit: for non-sparse arrays, […]

---

Maybe we should have […] Then, […]
---

Here is how I see things:

```julia
# === TensorAlgebraGradedUnitRangesExt, extension of TensorAlgebra.jl ===
function TensorAlgebra.:⊗(a1::AbstractBlockedUnitRange, a2::AbstractBlockedUnitRange)
  return tensor_product(a1, a2)
end

# === TensorAlgebraBlockArraysExt, extension of TensorAlgebra.jl ===
FusionStyle(::AbstractBlockedUnitRange) = BlockReshapeFusion()
fusedims(::BlockReshapeFusion, a::AbstractArray, axes::AbstractUnitRange...) = blockreshape(a, axes)
blockreshape(a::AbstractBlockArray, axes::AbstractUnitRange) = ...

# === BlockSparseArraysTensorAlgebraExt, extension of BlockSparseArrays.jl ===
TensorAlgebra.blockreshape(a::AbstractBlockSparseArray, axes::AbstractUnitRange) = ...
```

Then we would not need a […] I am unsure about […]

```julia
# not loading GradedUnitRanges
d = blockedrange([2, 3])
a = BlockedArray(ones((d, d, d, d)))
axes(fusedims(a, (d ⊗ d, d ⊗ d)))  # (Base.OneTo(25), Base.OneTo(25))

using GradedUnitRanges: GradedUnitRanges
axes(fusedims(a, (d ⊗ d, d ⊗ d)))  # (BlockedOneTo([4, 10, 16, 25]), BlockedOneTo([4, 10, 16, 25]))
```
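The blocked result in the last line comes from fusing block lengths pairwise: `[2, 3] ⊗ [2, 3]` gives block lengths `[4, 6, 6, 9]`, i.e. block ends `[4, 10, 16, 25]`. A minimal sketch of that computation, where `blocked_tensor_product` is a hypothetical helper name (not an existing function in these packages):

```julia
using BlockArrays: blockedrange, blocklengths

# Hypothetical sketch: the fused axis keeps one block per pair of input
# blocks, with length equal to the product of the pair's lengths.
function blocked_tensor_product(a1::AbstractUnitRange, a2::AbstractUnitRange)
    return blockedrange(vec([l1 * l2 for l2 in blocklengths(a2), l1 in blocklengths(a1)]))
end

d = blockedrange([2, 3])
blocklengths(blocked_tensor_product(d, d))  # [4, 6, 6, 9]
```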
---

The thing I don't like about that plan is that it would be weird for […] Regarding […]
---

Here is a new proposal:

```julia
# === TensorAlgebra.jl ===
FusionStyle(a::AbstractArray) = FusionStyle(a, axes(a))
FusionStyle(::AbstractArray, ::Tuple{Vararg{AbstractUnitRange}}) = ReshapeFusion()  # default
# no more combine_fusion_styles
fusedims(::ReshapeFusion, a::AbstractArray, axes::AbstractUnitRange...) = ...  # default
⊗(a1::AbstractUnitRange, a2::AbstractUnitRange) = sectorless_tensor_product(a1, a2)
sectorless_tensor_product(a1::AbstractUnitRange, a2::AbstractUnitRange) = Base.OneTo(prod(length.((a1, a2))))  # default

# === TensorAlgebraGradedUnitRangesExt, extension of TensorAlgebra.jl ===
TensorAlgebra.:⊗(a1::AbstractGradedUnitRange, a2::AbstractGradedUnitRange) = GradedUnitRanges.tensor_product(a1, a2)

# === TensorAlgebraBlockArraysExt, extension of TensorAlgebra.jl ===
sectorless_tensor_product(a1::AbstractBlockedUnitRange, a2::AbstractBlockedUnitRange) = ...
# maybe remove and use default ReshapeFusion
FusionStyle(::AbstractBlockArray, ::Tuple{Vararg{AbstractUnitRange}}) = BlockReshapeFusion()
fusedims(::BlockReshapeFusion, a::AbstractArray, axes::AbstractUnitRange...) = ...

# === BlockSparseArraysTensorAlgebraExt, extension of BlockSparseArrays.jl ===
TensorAlgebra.FusionStyle(a::AbstractBlockSparseArray, ::Tuple{Vararg{AbstractUnitRange}}) = SparseBlockReshapeFusion()
TensorAlgebra.FusionStyle(a::AbstractBlockSparseArray, ::Tuple{Vararg{AbstractGradedUnitRange}}) = MergeFusion()
fusedims(::SparseBlockReshapeFusion, a::AbstractArray, axes::AbstractUnitRange...) = ...
fusedims(::MergeFusion, a::AbstractArray, axes::AbstractUnitRange...) = ...

# === GradedUnitRanges.jl ===
using TensorAlgebra: sectorless_tensor_product  # now a dependency of GradedUnitRanges
tensor_product(a1::AbstractUnitRange, a2::AbstractUnitRange) = sectorless_tensor_product(a1, a2)
tensor_product(a1::AbstractGradedUnitRange, a2::AbstractGradedUnitRange) = ...
```

EDIT: we can even remove […]

I don't really like how […] More generally, I think we need to rethink package dependencies and get a clear picture of who depends on whom, who has test dependencies on whom, and who has package dependencies on whom.
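To make the two-argument `FusionStyle` dispatch in this proposal concrete, here is a self-contained, runnable mini-version; `FakeBlockArray` and the style types are local stand-ins for illustration, not the real packages' types:

```julia
abstract type FusionStyle end
struct ReshapeFusion <: FusionStyle end
struct BlockReshapeFusion <: FusionStyle end

# A stand-in "block array" type, for demonstration only.
struct FakeBlockArray{T,N} <: AbstractArray{T,N}
    data::Array{T,N}
end
Base.size(a::FakeBlockArray) = size(a.data)
Base.getindex(a::FakeBlockArray{T,N}, I::Vararg{Int,N}) where {T,N} = a.data[I...]

# The style is selected from the array type and the axis types together.
FusionStyle(a::AbstractArray) = FusionStyle(a, axes(a))
FusionStyle(::AbstractArray, ::Tuple{Vararg{AbstractUnitRange}}) = ReshapeFusion()  # default
FusionStyle(::FakeBlockArray, ::Tuple{Vararg{AbstractUnitRange}}) = BlockReshapeFusion()

FusionStyle(ones(2, 2))                  # ReshapeFusion()
FusionStyle(FakeBlockArray(ones(2, 2)))  # BlockReshapeFusion()
```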
---

Maybe this warrants defining an interface package (say called TensorProducts.jl) where we define `⊗` and `tensor_product`.
---

Indeed, creating a new package […]

```julia
# === TensorProducts.jl ===
⊗(args...) = tensor_product(args...)
tensor_product(a1, a2, as...) = tensor_product(a1, tensor_product(a2, as...))  # no type constraint, to accept AbstractSector later
tensor_product(a1::AbstractUnitRange, a2::AbstractUnitRange) = Base.OneTo(prod(length.((a1, a2))))  # default

# === TensorProductsBlockArraysExt, extension of TensorProducts.jl ===
tensor_product(a1::AbstractBlockedUnitRange, a2::AbstractBlockedUnitRange) = ...

# === GradedUnitRanges.jl ===
using TensorProducts: TensorProducts  # now a dependency of GradedUnitRanges
TensorProducts.tensor_product(a1::AbstractGradedUnitRange, a2::AbstractGradedUnitRange) = ...

# === SymmetrySectors.jl ===
using TensorProducts: TensorProducts  # now a dependency of SymmetrySectors
TensorProducts.tensor_product(s1::AbstractSector, s2::AbstractSector) = ...

# === TensorAlgebra.jl ===
FusionStyle(a::AbstractArray) = FusionStyle(a, axes(a))
FusionStyle(::AbstractArray, ::Tuple{Vararg{AbstractUnitRange}}) = ReshapeFusion()  # default
# no more combine_fusion_styles
fusedims(::ReshapeFusion, a::AbstractArray, axes::AbstractUnitRange...) = ...  # default
# remove unneeded TensorAlgebraGradedUnitRangesExt
# later, we may define TensorAlgebraBlockArraysExt to optimize fusedims(::BlockArray)

# === BlockSparseArraysTensorAlgebraExt, extension of BlockSparseArrays.jl ===
TensorAlgebra.FusionStyle(a::AbstractBlockSparseArray, ::Tuple{Vararg{AbstractUnitRange}}) = SparseBlockReshapeFusion()
TensorAlgebra.fusedims(::SparseBlockReshapeFusion, a::AbstractArray, axes::AbstractUnitRange...) = ...

# === BlockSparseArraysGradedUnitRangesExt, extension of BlockSparseArrays.jl ===
TensorAlgebra.FusionStyle(a::AbstractBlockSparseArray, ::Tuple{Vararg{AbstractGradedUnitRange}}) = MergeFusion()
TensorAlgebra.fusedims(::MergeFusion, a::AbstractArray, axes::AbstractUnitRange...) = ...
```

We may even add […]
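A note on the varargs definition at the top of this sketch: because `tensor_product(a1, a2, as...)` right-folds into pairwise calls, extensions only ever need to implement the two-argument method. For example, under the default `Base.OneTo` fallback above:

```julia
# ⊗(a, b, c) folds as tensor_product(a, tensor_product(b, c)),
# so three OneTo axes fuse into one axis of the product length.
⊗(Base.OneTo(2), Base.OneTo(3), Base.OneTo(4))  # Base.OneTo(24)
```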
---

That plan looks good to me. Defining […] What do you think of renaming […]

I'd be a bit worried about getting rid of `combine_fusion_styles`; maybe we can keep something like:

```julia
FusionStyle(a::AbstractArray) = FusionStyle(a, axes(a))
FusionStyle(a::AbstractArray, ax::Tuple{Vararg{AbstractUnitRange}}) = FusionStyle(a, combine_fusion_styles(FusionStyle.(ax)...))
FusionStyle(ax::AbstractUnitRange) = ReshapeFusion()  # default
FusionStyle(a::AbstractArray, ax_style::FusionStyle) = ReshapeFusion()  # default
```

so then you can decide the fusion style based on the array type and the combined fusion style of the axes.
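`combine_fusion_styles` itself is not shown anywhere in this thread; one plausible definition (an assumption on my part, modeled on how `Base.Broadcast` resolves styles: identical styles are kept, mixed styles fall back to the generic default), reusing the style types sketched earlier:

```julia
# Hypothetical sketch: combine pairwise, keeping a style only when all agree.
combine_fusion_styles(s::FusionStyle) = s
combine_fusion_styles(s1::S, s2::S) where {S<:FusionStyle} = S()
combine_fusion_styles(s1::FusionStyle, s2::FusionStyle) = ReshapeFusion()  # mixed styles: generic fallback
combine_fusion_styles(s1, s2, ss...) = combine_fusion_styles(combine_fusion_styles(s1, s2), ss...)
```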