[BlockSparseArrays] BlockSparseArray functionality #1336

Open
48 of 59 tasks
ogauthe opened this issue Feb 16, 2024 · 70 comments
Labels
BlockSparseArrays, enhancement (New feature or request), NDTensors (Requires changes to the NDTensors.jl library)

Comments

@ogauthe
Contributor

ogauthe commented Feb 16, 2024

This issue lists functionalities and feature requests for BlockSparseArray.

using LinearAlgebra
using NDTensors.BlockSparseArrays: BlockSparseArray

a = BlockSparseArray{Float64}([2, 3], [2, 3]);

Bugs

Feature requests

  • Allow the syntax BlockSparseArray{Float64}([U1(0) => 2, U1(1) => 3], [U1(0) => 2, U1(1) => 3]), which could implicitly create axes with GradedUnitRange internally (see the sketch after this list).
  • Define RecursivePermutedDimsArrays, use instead of SparsePermutedDimsArrayBlocks (similar to how we are removing SparseAdjointBlocks/SparseTransposeBlocks in [BlockSparseArrays] Simplifications of blocks for blocksparse Adjoint and Transpose #1580).
  • If slices are just blockwise, like b = @view a[Block.(1:2), [Block(2), Block(1)]], define blocks(b) as @view blocks(a)[1:2, [2, 1]], as opposed to using the more general SparseSubArrayBlocks in those cases. Like the proposed RecursivePermutedDimsArrays, in principle SparseSubArrayBlocks could be replaced by a NestedSubArray type that defines both the slicing behavior of the array storing the blocks and the slicing of the blocks themselves, but that might be overkill since the concept is very particular to block arrays. Maybe SubArray of the blocks could still be used to simplify the code logic in SparseSubArrayBlocks.
  • Constructor from Dictionary should check block sizes ([BlockSparseArrays] BlockSparseArray functionality #1336 (comment)).
  • Blockwise linear algebra operations like svd, qr, etc. See [BlockSparseArrays] Blockwise matrix factorizations #1515. These are well defined if the block sparse matrix has a block structure (i.e. the sparsity pattern of blocks(a), the sparse array of arrays) corresponding to a generalized permutation matrix. Probably they should be called something like block_svd, block_eigen, block_qr, etc. to indicate that they are meant to be used on block sparse matrices with that structure (and error if they don't have it). See footnote 1 for a prototype of a blockwise QR. See also BlockDiagonals.jl for an example in Julia of blockwise factorizations; it uses the naming scheme svd_blockwise, eigen_blockwise, etc. The slicing operation introduced in [BlockSparseArrays] Sub-slices of multiple blocks #1489 will be useful for performing blockwise truncated factorizations.
  • Define Strided.@strided for view(::BlockSparseArray, ::Block). As a workaround, it will work if you use view!/@view! introduced in [BlockSparseArrays] Define in-place view that may instantiate blocks #1498. (EDIT: We will have to decide what this should do; maybe it takes a strided view of the block data if the block exists, but errors if the block doesn't exist.)
  • Change the behavior of slicing with non-blocked ranges (such as a[1:2, 1:2]) to output non-blocked arrays, and define @blocked a[1:2, 1:2] to explicitly preserve blocking. See the discussion in Functionality for slicing with unit ranges that preserves block information JuliaArrays/BlockArrays.jl#347.
  • Reconsider the design of how duality is stored in graded unit ranges (graded axes), for example storing it at the level of the sector labels with a new SectorDual type and/or as a boolean flag.
  • Display non-initialized blocks differently from zero-initialized blocks, currently they both print as zeros.
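As a sketch of what the first feature request above would mean, assuming the gradedrange constructor used elsewhere in this thread (this desugaring is an illustration, not the planned implementation):

using NDTensors.BlockSparseArrays: BlockSparseArray
using NDTensors.GradedAxes: gradedrange
using NDTensors.Sectors: U1

# The proposed call
#   BlockSparseArray{Float64}([U1(0) => 2, U1(1) => 3], [U1(0) => 2, U1(1) => 3])
# would be sugar for building the graded axes explicitly:
r = gradedrange([U1(0) => 2, U1(1) => 3])
a = BlockSparseArray{Float64}(r, r)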

Fixed

Footnotes

  1. https://github.com/ITensor/ITensors.jl/blob/v0.3.57/NDTensors/src/lib/BlockSparseArrays/src/backup/LinearAlgebraExt/qr.jl

  2. https://github.com/JuliaGPU/GPUArrays.jl/blob/v11.1.0/lib/GPUArraysCore/src/GPUArraysCore.jl#L27, https://github.com/JuliaGPU/CUDA.jl/blob/v5.4.2/src/array.jl#L396

  3. https://github.com/ITensor/ITensors.jl/pull/1452, https://github.com/JuliaArrays/BlockArrays.jl/pull/255

  4. [BlockSparseArrays] Fix adjoint and transpose #1470

  5. [BlockSparseArrays] More general broadcasting and slicing #1332

@ogauthe added the enhancement (New feature or request) and NDTensors (Requires changes to the NDTensors.jl library) labels Feb 16, 2024
@mtfishman
Member

mtfishman commented Feb 16, 2024

Thanks. This currently works:

julia> using NDTensors.BlockSparseArrays: Block, BlockSparseArray, blocks

julia> using LinearAlgebra: I

julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0

julia> a[Block(2, 2)] = I(3)
3×3 Diagonal{Bool, Vector{Bool}}:
 1  ⋅  ⋅
 ⋅  1  ⋅
 ⋅  ⋅  1

julia> a
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  1.0  0.0
 0.0  0.0  │  0.0  1.0

julia> using NDTensors.SparseArrayInterface: stored_indices

julia> stored_indices(blocks(a))
1-element Dictionaries.MappedDictionary{CartesianIndex{2}, CartesianIndex{2}, NDTensors.SparseArrayInterface.var"#1#2"{NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}}, Tuple{Dictionaries.Indices{CartesianIndex{2}}}}
 CartesianIndex(2, 2) │ CartesianIndex(2, 2)

though using this alternative syntax is currently broken:

julia> a = BlockSparseArray{Float64}([2, 2], [2, 2])
2×2-blocked 4×4 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{Int64}}, BlockArrays.BlockedUnitRange{Vector{Int64}}}}:
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0
 ──────────┼──────────
 0.0  0.0  │  0.0  0.0
 0.0  0.0  │  0.0  0.0

julia> a[Block(2), Block(2)] = I(3)
ERROR: DimensionMismatch: tried to assign (3, 3) array to (2, 2) block
Stacktrace:
 [1] setindex!(::BlockSparseArray{…}, ::Diagonal{…}, ::Block{…}, ::Block{…})
   @ BlockArrays ~/.julia/packages/BlockArrays/L5yjb/src/abstractblockarray.jl:165
 [2] top-level scope
   @ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.

I would have to think about whether it makes sense to support a[Block(2, 2)] .= 1.0 or a[Block(2), Block(2)] .= 1.0 if Block(2, 2) isn't initialized yet (probably we should support that).

@mtfishman
Member

mtfishman commented Feb 16, 2024

In terms of svd and qr of a BlockSparseMatrix, it is well defined to perform them block-wise (i.e. map the problem to factorizing the individual non-zero blocks) as long as there is only one block per row or column, i.e. the matrix is diagonal up to block permutations of the rows and columns. So I was thinking we would have specialized block_svd and block_qr functions that check whether the matrix has that structure and perform svd or qr block-wise, and otherwise error (they can check whether blocks(a) is a generalized permutation matrix).

I have a prototype of a QR decomposition of a BlockSparseMatrix here: https://github.com/ITensor/ITensors.jl/blob/v0.3.57/NDTensors/src/lib/BlockSparseArrays/src/backup/LinearAlgebraExt/qr.jl but it may be outdated since it was written based on an earlier version of BlockSparseArrays.
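To illustrate the blockwise idea concretely, here is a minimal self-contained sketch of the concept (not the package implementation): for a block-diagonal matrix, QR can be computed by factorizing each stored block independently, and the Q and R of the full matrix are block-diagonal with the per-block factors. The dictionary-of-blocks representation here is just for illustration; a real block_qr on a BlockSparseMatrix would also check that blocks(a) is a generalized permutation matrix.

using LinearAlgebra: qr

# Diagonal blocks of a block-diagonal matrix, keyed by block position.
blocks_of_a = Dict((1, 1) => rand(2, 2), (2, 2) => rand(3, 3))

qblocks = Dict{Tuple{Int,Int},Matrix{Float64}}()
rblocks = Dict{Tuple{Int,Int},Matrix{Float64}}()
for (I, b) in blocks_of_a
  F = qr(b)                 # factorize each stored block independently
  qblocks[I] = Matrix(F.Q)
  rblocks[I] = F.R
end

# Q and R of the full matrix are block-diagonal with these factors:
@assert all(qblocks[I] * rblocks[I] ≈ blocks_of_a[I] for I in keys(blocks_of_a))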

@mtfishman changed the title from "[NDTensors] [ENHANCEMENT] BlockSparseArray functionalities" to "[BlockSparseArrays][ENHANCEMENT] BlockSparseArray functionalities" Feb 16, 2024
@mtfishman
Member

mtfishman commented Feb 18, 2024

Also note that slicing like this should work right now:

a[Block(1, 1)[1:2, 1:2]]

i.e. you can take slices within a specified block. See BlockArrays.jl for a reference on that slicing notation.

@ogauthe
Contributor Author

ogauthe commented Feb 26, 2024

new feature request: Base.:*(::BlockSparseArray, x::Number) and Base.:/(::BlockSparseArray, x::Number)

I updated the first comment.

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented Feb 26, 2024

new issue: 1im * a does not change the element type of a when a is empty, and it crashes when a contains data.

Edit: FIXED
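For reference, a minimal repro of what was reported, using the setup from the top of the issue (behavior as described above; since fixed):

a = BlockSparseArray{Float64}([2, 3], [2, 3])
eltype(1im * a)  # expected ComplexF64; the bug left the eltype unchanged when a stored no blocks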

@ogauthe
Contributor Author

ogauthe commented Mar 1, 2024

new issue: similar(a::BlockSparseArray, eltype::Type) does not set the new eltype.
The behavior of similar(a::BlockSparseArray, eltype::Type, size) depends on the type of size:

  • For size::NTuple{N,Int}, it returns a dense Julia Array with the correct eltype.
  • For size::NTuple{N,AbstractUnitRange}, it returns a BlockSparseArray{Float64} and the eltype is not correct.

edit: FIXED
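The expected behavior, written as a test sketch using a from the setup at the top of the issue:

using Test: @test
b = similar(a, ComplexF64)
@test eltype(b) == ComplexF64  # was not updated before the fix
c = similar(a, ComplexF64, axes(a))
@test eltype(c) == ComplexF64  # used to stay Float64 for AbstractUnitRange sizes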

@mtfishman
Member

@ogauthe a number of these issues were fixed by #1332; I've updated the list in the first post accordingly. I added regression tests in #1360 for ones that still need to be fixed, and additionally added placeholder tests that I've marked as broken in the BlockSparseArrays tests. Please continue to update this post with new issues you find, and/or make PRs with broken behavior marked with @test_broken, or with bug fixes and regression tests. I think with the reorganization I did in #1332 these kinds of issues will be easier to fix going forward, though I have to admit it may be a bit tough to jump into that code right now: there are many abstraction layers involved, since the goal is for that code to be as minimal and general as possible.

@ogauthe
Contributor Author

ogauthe commented Apr 24, 2024

Feature request: display(::Adjoint). The function Base.adjoint is well defined and returns an Adjoint type, but displaying it throws an error.
Closely related, it is not clear to me how GradedAxes.dual and BlockSparseArray should work together. The current behavior is to only accept a GradedUnitRange as an axis and to throw for a UnitRangeDual input. Base.adjoint does not dual the axes. I do not have an opinion yet on whether this should be modified.

Edit: FIXED

@mtfishman
Member

mtfishman commented Apr 24, 2024

I think ideally BlockSparseArray would accept any AbstractUnitRange.

Alternatively, BlockArrays.jl introduced an AbstractBlockedUnitRange in JuliaArrays/BlockArrays.jl#348, we could define a BlockedUnitRangeDual <: AbstractBlockedUnitRange type which wraps another AbstractBlockedUnitRange. I think it will require some experimentation to see which one leads to simpler code.

Good question about whether or not the axes should get dualed if Base.adjoint(::BlockSparseMatrix) is called. Probably we should dual the axes. For example, if we want operations like a' * a to work for a::BlockSparseMatrix and GradedUnitRange axes, I think we need to dual the axes.

@ogauthe
Contributor Author

ogauthe commented May 2, 2024

The solution to accept any AbstractUnitRange is to replace the definition of Axes with

  Axes<:Tuple{Vararg{<:AbstractUnitRange,N}},

Then one can construct a BlockSparseArray with a dual axis. For a reason I do not really understand, the resulting array can be printed but not displayed. The error is not the same as when one tries to display a transpose or adjoint.

g1 = gradedrange([U1(0) => 1])
m1 = BlockSparseArray{Float64}(dual(g1), g1,)

outputs

1×1-blocked 1×1 BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}:
Error showing value of type BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}, BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}:
ERROR: MethodError: no method matching NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(::Int64)

Closest candidates are:
  (::Type{NDTensors.LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{T})(::BigFloat) where T<:Integer
   @ Base mpfr.jl:378
  ...

Stacktrace:
  [1] convert(::Type{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, x::Int64)
    @ Base ./number.jl:7
  [2] cvt1
    @ ./essentials.jl:468 [inlined]
  [3] ntuple
    @ ./ntuple.jl:49 [inlined]
  [4] convert(::Type{Tuple{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Int64}}, x::Tuple{Int64, Int64})
    @ Base ./essentials.jl:470
  [5] push!(a::Vector{Tuple{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Int64}}, item::Tuple{Int64, Int64})
    @ Base ./array.jl:1118
  [6] alignment(io::IOContext{Base.TTY}, X::AbstractVecOrMat, rows::Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, cols::Vector{Int64}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:76
  [7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
    @ Base ./arrayshow.jl:207
  [8] print_matrix(io::IOContext{…}, X::BlockSparseArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
    @ Base ./arrayshow.jl:171
  [9] print_matrix
    @ ./arrayshow.jl:171 [inlined]
 [10] print_array
    @ ./arrayshow.jl:358 [inlined]
 [11] show(io::IOContext{Base.TTY}, ::MIME{Symbol("text/plain")}, X::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ Base ./arrayshow.jl:399
 [12] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
 [13] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [14] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
 [15] display
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [16] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [17] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [18] invokelatest
    @ ./essentials.jl:889 [inlined]
 [19] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{Nothing, AbstractDisplay})
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
 [20] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
 [21] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [22] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
 [23] (::REPL.var"#do_respond#80"{Bool, Bool, REPL.var"#93#103"{REPL.LineEditREPL, REPL.REPLHistoryProvider}, REPL.LineEditREPL, REPL.LineEdit.Prompt})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
 [24] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [25] invokelatest
    @ ./essentials.jl:889 [inlined]
 [26] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2656
 [27] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
 [28] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

Edit: FIXED

@mtfishman
Member

Thanks for investigating. That seems like the right move to generalize the axes in that way. Hopefully that error is easy enough to circumvent.

@ogauthe
Contributor Author

ogauthe commented May 2, 2024

I am continuing to explore the effects of dual. eachindex is broken on such an array, which affects ==.

julia> eachindex(m1)
ERROR: MethodError: no method matching AbstractUnitRange{…}(::NDTensors.GradedAxes.UnitRangeDual{…})

Closest candidates are:
  AbstractUnitRange{T}(::AbstractUnitRange{T}) where T
   @ Base range.jl:1308
  AbstractUnitRange{T}(::NDTensors.LabelledNumbers.LabelledUnitRange) where T
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledunitrange.jl:17
  AbstractUnitRange{Int64}(::Static.OptionallyStaticUnitRange)
   @ Static ~/.julia/packages/Static/6JeQI/src/ranges.jl:258
  ...

Stacktrace:
 [1] OrdinalRange{…}(r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base ./range.jl:1313
 [2] convert(::Type{…}, r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base ./range.jl:265
 [3] (::Base.IteratorsMD.var"#3#4")(r::NDTensors.GradedAxes.UnitRangeDual{…})
   @ Base.IteratorsMD ./multidimensional.jl:259
 [4] map
   @ ./tuple.jl:292 [inlined]
 [5] CartesianIndices(inds::Tuple{NDTensors.GradedAxes.UnitRangeDual{…}, BlockArrays.BlockedUnitRange{…}})
   @ Base.IteratorsMD ./multidimensional.jl:259
 [6] eachindex(::IndexCartesian, A::BlockSparseArray{…})
   @ Base.IteratorsMD ./multidimensional.jl:392
 [7] eachindex(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
   @ Base ./abstractarray.jl:378
 [8] top-level scope
   @ REPL[37]:1
Some type information was truncated. Use `show(err)` to see complete types.

Edit: FIXED

@mtfishman changed the title from "[BlockSparseArrays][ENHANCEMENT] BlockSparseArray functionalities" to "[BlockSparseArrays] [ENHANCEMENT] BlockSparseArray functionalities" May 10, 2024
@ogauthe
Contributor Author

ogauthe commented May 10, 2024

issue: I cannot write a slice of a block

a[BlockArrays.Block(1,1)][1:2,1:2] = ones((2,2))

does not write to a. This is probably related to "Fix initializing a block with broadcasting syntax".

EDIT: consistent with Julia's slicing conventions (a[Block(1, 1)] returns a copy), nothing to fix; see the workaround sketch below.
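Since a[Block(1, 1)] returns a copy, the slice has to be written through a view instead. A minimal sketch, assuming the block is already initialized (see the view!/@view! discussion above for uninitialized blocks):

using BlockArrays: Block
v = @view a[Block(1, 1)]
v[1:2, 1:2] = ones((2, 2))  # writes into a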

@mtfishman changed the title from "[BlockSparseArrays] [ENHANCEMENT] BlockSparseArray functionalities" to "[BlockSparseArrays] BlockSparseArray functionality" May 10, 2024
@ogauthe
Contributor Author

ogauthe commented May 11, 2024

issue: LinearAlgebra.norm(a) triggers AssertionError when a contains NaN

a[BlockArrays.Block(1,1)] = ones((2,2))
println(LinearAlgebra.norm(a))  # 2.0
a[BlockArrays.Block(1,1)][1, 1] = NaN
println(LinearAlgebra.norm(a[BlockArrays.Block(1,1)]))  # NaN
println(LinearAlgebra.norm(a))  # AssertionError

outputs

ERROR: AssertionError: op(output, (eltype(output))(f_notstored)) == output
Stacktrace:
  [1] #sparse_mapreduce#16
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:118 [inlined]
  [2] sparse_mapreduce
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:115 [inlined]
  [3] #mapreduce#29
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:82 [inlined]
  [4] mapreduce
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81 [inlined]
  [5] generic_normInf
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:453 [inlined]
  [6] normInf
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:527 [inlined]
  [7] generic_norm2(x::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ LinearAlgebra ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:463
  [8] norm2
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:529 [inlined]
  [9] norm(itr::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, p::Int64)
    @ LinearAlgebra ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/LinearAlgebra/src/generic.jl:598
 [10] _norm
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:297 [inlined]
 [11] norm
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:298 [inlined]
 [12] norm(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:298
 [13] top-level scope
    @ REPL[1220]:1
Some type information was truncated. Use `show(err)` to see complete types.

I checked that with Inf in place of NaN, the norm is computed correctly (it returns Inf).

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented May 17, 2024

issue: a block can be written with an invalid shape. An error should be raised.

a = BlockSparseArray{Float64}([2, 3], [2, 3])
println(size(a))  # (5,5)
b = BlockArrays.Block(1,1)
println(size(a[b]))  # (2,2)
a[b] = ones((3,3))
println(size(a))  # (5,5)
println(size(a[b]))  # (3,3)

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented May 28, 2024

Thanks to #1467, I can now initialize a BlockSparseArray with a dual axis. However, contracting such arrays is broken:

using NDTensors.GradedAxes: GradedAxes, dual, gradedrange
using NDTensors.Sectors: U1
using NDTensors.BlockSparseArrays: BlockSparseArray

g1 = gradedrange([U1(0) => 1, U1(1) => 2, U1(2) => 3])
g2 = gradedrange([U1(0) => 2, U1(1) => 2, U1(3) => 1])

m1 = BlockSparseArray{Float64}(g1, GradedAxes.dual(g2));  # display crash
m2 = BlockSparseArray{Float64}(g2, GradedAxes.dual(g1));  # display crash

m12 = m1 * m2;  # MethodError
m21 = m2 * m1;  # MethodError
ERROR: MethodError: no method matching AbstractUnitRange{…}(::NDTensors.GradedAxes.UnitRangeDual{…})

Closest candidates are:
  AbstractUnitRange{T}(::AbstractUnitRange{T}) where T
   @ Base range.jl:1308
  AbstractUnitRange{T}(::NDTensors.LabelledNumbers.LabelledUnitRange) where T
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledunitrange.jl:17
  AbstractUnitRange{Int64}(::Static.OptionallyStaticUnitRange)
   @ Static ~/.julia/packages/Static/6JeQI/src/ranges.jl:258
  ...

Stacktrace:
  [1] OrdinalRange{…}(r::NDTensors.GradedAxes.UnitRangeDual{…})
    @ Base ./range.jl:1313
  [2] convert(::Type{…}, r::NDTensors.GradedAxes.UnitRangeDual{…})
    @ Base ./range.jl:265
  [3] (::Base.IteratorsMD.var"#3#4")(r::NDTensors.GradedAxes.UnitRangeDual{…})
    @ Base.IteratorsMD ./multidimensional.jl:259
  [4] map
    @ ./tuple.jl:292 [inlined]
  [5] CartesianIndices(inds::Tuple{BlockArrays.BlockedUnitRange{…}, NDTensors.GradedAxes.UnitRangeDual{…}})
    @ Base.IteratorsMD ./multidimensional.jl:259
  [6] eachindex(::IndexCartesian, A::BlockSparseArray{…})
    @ Base.IteratorsMD ./multidimensional.jl:392
  [7] eachindex
    @ ./abstractarray.jl:378 [inlined]
  [8] fill!(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}}, x::Float64)
    @ Base ./multidimensional.jl:1113
  [9] zero!(::BlockArrays.BlockLayout{…}, A::BlockSparseArray{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:289
 [10] zero!(A::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/ArrayLayouts.jl:288
 [11] _fill_copyto!(dest::BlockSparseArray{…}, C::FillArrays.Zeros{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:80
 [12] copyto!
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:82 [inlined]
 [13] copy(M::ArrayLayouts.MulAdd{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:77
 [14] copy
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:130 [inlined]
 [15] materialize
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:127 [inlined]
 [16] mul
    @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:128 [inlined]
 [17] *(A::BlockSparseArray{…}, B::BlockSparseArray{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:213
 [18] top-level scope
    @ REPL[13]:1
Some type information was truncated. Use `show(err)` to see complete types.

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented May 28, 2024

When no dual axis is involved, m' * m and m * m' trigger a StackOverflowError.
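A repro sketch with non-dual graded axes, following the setup from the previous comment (since fixed):

g = gradedrange([U1(0) => 1, U1(1) => 2, U1(2) => 3])
m = BlockSparseArray{Float64}(g, g)
m' * m  # used to trigger the StackOverflowError below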

Stacktrace:
     [1] muladd!(α::Float64, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, β::Float64, C::BlockSparseArray{…}; kw::@Kwargs{})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:75
     [2] mul!(dest::BlockSparseArray{…}, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, α::Float64, β::Float64)
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:134
     [3] mul!(dest::BlockSparseArray{…}, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, α::Float64, β::Float64)
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:271
     [4] materialize!(m::ArrayLayouts.MulAdd{…})
       @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/arraylayouts.jl:14
--- the last 4 lines are repeated 10900 more times ---
 [43605] muladd!(α::Float64, A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…}, β::Float64, C::BlockSparseArray{…}; kw::@Kwargs{…})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:75
 [43606] copyto!
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:82 [inlined]
 [43607] copy(M::ArrayLayouts.MulAdd{…})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/muladd.jl:77
 [43608] copy
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:130 [inlined]
 [43609] materialize
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:127 [inlined]
 [43610] mul
       @ ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:128 [inlined]
 [43611] *(A::BlockSparseArray{…}, B::LinearAlgebra.Adjoint{…})
       @ ArrayLayouts ~/.julia/packages/ArrayLayouts/i2w0I/src/mul.jl:290
Some type information was truncated. Use `show(err)` to see complete types.

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented Jun 4, 2024

issue: display error when writing a block

using BlockArrays: BlockArrays
using NDTensors.BlockSparseArrays: BlockSparseArrays
using NDTensors.GradedAxes: GradedAxes
using NDTensors.Sectors: U1

g = GradedAxes.gradedrange([U1(0) => 1])
m = BlockSparseArrays.BlockSparseArray{Float64}(g, g)
m[BlockArrays.Block(1,1)] .= 1
1×1 view(::NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}, BlockSlice(Block(1),1:1), BlockSlice(Block(1),1:1)) with eltype Float64 with indices NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0])×NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(1, U(1)[0]):
Error showing value of type SubArray{Float64, 2, NDTensors.BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Tuple{BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, BlockArrays.BlockedUnitRange{Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}, Tuple{BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}, BlockArrays.BlockSlice{BlockArrays.Block{1, Int64}, UnitRange{Int64}}}, false}:
ERROR: MethodError: no method matching NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}(::Int64)

Closest candidates are:
  (::Type{NDTensors.LabelledNumbers.LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{IntT})(::NDTensors.Block{1}) where IntT<:Integer
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/blocksparse/block.jl:63
  ...

Stacktrace:
  [1] convert(::Type{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}, x::Int64)
    @ Base ./number.jl:7
  [2] cvt1
    @ ./essentials.jl:468 [inlined]
  [3] ntuple
    @ ./ntuple.jl:49 [inlined]
  [4] convert(::Type{Tuple{…}}, x::Tuple{Int64, Int64})
    @ Base ./essentials.jl:470
  [5] push!(a::Vector{Tuple{…}}, item::Tuple{Int64, Int64})
    @ Base ./array.jl:1118
  [6] alignment(io::IOContext{…}, X::AbstractVecOrMat, rows::Vector{…}, cols::Vector{…}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:76
  [7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
    @ Base ./arrayshow.jl:207
  [8] print_matrix(io::IOContext{…}, X::SubArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
    @ Base ./arrayshow.jl:171
  [9] print_matrix
    @ ./arrayshow.jl:171 [inlined]
 [10] print_array
    @ ./arrayshow.jl:358 [inlined]
 [11] show(io::IOContext{…}, ::MIME{…}, X::SubArray{…})
    @ Base ./arrayshow.jl:399
 [12] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
 [13] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [14] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
 [15] display
    @ ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [16] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [17] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [18] invokelatest
    @ ./essentials.jl:889 [inlined]
 [19] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{…})
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
 [20] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
 [21] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [22] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
 [23] (::REPL.var"#do_respond#80"{})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
 [24] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [25] invokelatest
    @ ./essentials.jl:889 [inlined]
 [26] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2656
 [27] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
 [28] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
    @ REPL ~/.julia/juliaup/julia-1.10.3+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

This looks like the same error as previously triggered by dual axes.

Edit: FIXED

@mtfishman
Member

Thanks for the report; it looks like it is more generally a problem with printing views of blocks of a BlockSparseArray with GradedUnitRange axes:

using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray
using NDTensors.GradedAxes: gradedrange
using NDTensors.Sectors: U1
r = gradedrange([U1(0) => 1])
a = BlockSparseArray{Float64}(r, r)
@view a[Block(1, 1)]

@ogauthe
Contributor Author

ogauthe commented Jul 4, 2024

Overall, I find the interface defined by block_stored_indices unsatisfying: the output type is hard to manipulate. As discussed, returning a Vector{Block} would be much more convenient. I finally found out how to recover the indices from the Block itself:

b = BlockArrays.Block(1,1)
inds = Int.(Tuple(b))  # (1,1)

Using the indices from the Block directly, CartesianIndex is no longer needed.

@ogauthe
Contributor Author

ogauthe commented Jul 8, 2024

issue: LinearAlgebra.Diagonal as a block type is brittle. It is possible to construct such a BlockSparseArray, but any operation on it will cast the blocks back to Matrix:

using LinearAlgebra: LinearAlgebra

using BlockArrays: BlockArrays
using Dictionaries: Dictionaries

using NDTensors.BlockSparseArrays: BlockSparseArrays

sdic = Dictionaries.Dictionary{
  BlockArrays.Block{2,Int},LinearAlgebra.Diagonal{Float64,Vector{Float64}}
}()
Dictionaries.set!(sdic, BlockArrays.Block(1, 1), LinearAlgebra.Diagonal([1.0, 2.0]))
s = BlockSparseArrays.BlockSparseArray(sdic, (1:2, 1:2))
println(typeof(s))  # BlockSparseArrays.BlockSparseArray{Float64, 2, LinearAlgebra.Diagonal...}
println(typeof(similar(s)))  # BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix...}
println(typeof(s * s))  # BlockSparseArrays.BlockSparseArray{Float64, 2, Matrix...}

Edit: I guess this is what "Support for blocks that are DiagonalArrays and SparseArrayDOKs" refers to.

@ogauthe
Contributor Author

ogauthe commented Jul 8, 2024

issue: cannot take a block subslice of a BlockSparseArray with a dual axis. I guess this can be solved by making UnitRangeDual a subtype of BlockArrays.AbstractBlockedUnitRange.

g = GradedAxes.gradedrange([U1(1) => 2])
m1 = BlockSparseArrays.BlockSparseArray{Float64}(g,g)
m2 = BlockSparseArrays.BlockSparseArray{Float64}(GradedAxes.dual(g), g)

m1[BlockArrays.Block(1,1)] = ones((2,2))
m2[BlockArrays.Block(1,1)] = ones((2,2))   # need to initialize a block to trigger the error

I = [BlockArrays.Block(1)[1:1]]
m1[I,I] # Ok
m1[I,:] # Ok
m1[:, I] # Ok

m2[I,I] #   first axis lost its label
m2[I, :] #  first axis lost its label
m2[:, I] ;  # MethodError
ERROR: MethodError: no method matching to_blockindexrange(::Base.Slice{NDTensors.GradedAxes.UnitRangeDual{…}}, ::BlockArrays.Block{1, Int64})

Closest candidates are:
  to_blockindexrange(::Base.Slice{<:BlockArrays.BlockedOneTo}, ::BlockArrays.Block{1})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:194
  to_blockindexrange(::NDTensors.BlockSparseArrays.BlockIndices{var"#s50", T} where {var"#s50"<:(BlockArrays.BlockArray{var"#s49", 1, var"#s11", BS} where {var"#s49"<:(BlockArrays.BlockIndex{1, TI, Tα} where {TI<:Tuple{Integer}, Tα<:Tuple{Integer}}), var"#s11"<:(Vector{<:BlockArrays.BlockIndexRange{1, R, I} where {R<:Tuple{AbstractUnitRange{<:Integer}}, I<:Tuple{Integer}}}), BS<:Tuple{AbstractUnitRange{<:Integer}}}), T<:Integer}, ::BlockArrays.Block{1})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:186

Stacktrace:
  [1] (::NDTensors.BlockSparseArrays.var"#49#50"{SubArray{}, Tuple{}})(dim::Int64)
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:208
  [2] ntuple
    @ ./ntuple.jl:19 [inlined]
  [3] viewblock
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:208 [inlined]
  [4] viewblock
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:147 [inlined]
  [5] view
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/views.jl:125 [inlined]
  [6] getindex
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/blocksparsearrayinterface.jl:249 [inlined]
  [7] (::NDTensors.BlockSparseArrays.var"#70#73"{Tuple{SubArray{}}, Tuple{BlockArrays.BlockIndexRange{}}})(i::Int64)
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:78
  [8] ntuple
    @ ./ntuple.jl:19 [inlined]
  [9] sparse_map!(::NDTensors.BlockSparseArrays.BlockSparseArrayStyle{…}, f::NDTensors.BroadcastMapConversion.MapFunction{…}, a_dest::NDTensors.BlockSparseArrays.BlockSparseArray{…}, a_srcs::SubArray{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:77
 [10] sparse_map!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93 [inlined]
 [11] copyto!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/broadcast.jl:37 [inlined]
 [12] materialize!
    @ ./broadcast.jl:914 [inlined]
 [13] materialize!
    @ ./broadcast.jl:911 [inlined]
 [14] sub_materialize
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/arraylayouts.jl:21 [inlined]
 [15] sub_materialize
    @ ~/.julia/packages/ArrayLayouts/3byqH/src/ArrayLayouts.jl:131 [inlined]
 [16] sub_materialize
    @ ~/.julia/packages/ArrayLayouts/3byqH/src/ArrayLayouts.jl:132 [inlined]
 [17] layout_getindex
    @ ~/.julia/packages/ArrayLayouts/3byqH/src/ArrayLayouts.jl:138 [inlined]
 [18] getindex(A::NDTensors.BlockSparseArrays.BlockSparseArray{…}, kr::Colon, jr::Vector{…})
    @ ArrayLayouts ~/.julia/packages/ArrayLayouts/3byqH/src/ArrayLayouts.jl:155
 [19] top-level scope
    @ REPL[296]:1
Some type information was truncated. Use `show(err)` to see complete types.

EDIT: fixed by #1531

@ogauthe
Contributor Author

ogauthe commented Jul 8, 2024

Feature request: LinearAlgebra.Adjoint{T, BlockSparseMatrix} to behave the same way as BlockSparseMatrix under slicing operations.
Currently slicing an Adjoint with a Block returns a BlockedArray

typeof(m1[BlockArrays.Block(1),:])  # BlockSparseArray
typeof(adjoint(m1)[BlockArrays.Block(1),:])  # Matrix

EDIT: I wonder if the design of LinearAlgebra.Adjoint{T, BlockSparseMatrix} is the best one. Maybe adjoint should return a BlockSparseMatrix where each block would be a LinearAlgebra.Adjoint{Matrix}? That would solve slicing issues and should be similar to supporting DiagonalArrays as block type.

@mtfishman
Member

mtfishman commented Jul 27, 2024

@ogauthe thanks for the issue reports.

  1. One reason to not use Vector{<:Block} for the return type of block_stored_indices is that lookup wouldn't be O(1), that's why it is a Dictionary. You are free to convert to Vector{<:Block} if you want to.
  2. Regarding block_stored_indices(::Adjoint), I can look into that when I have time but it may be faster for you to do it yourself, it should be an easy fix. Feel free to make a PR.
  3. I'm aware of the issues using LinearAlgebra.Diagonal as blocks of a BlockSparseMatrix, I definitely plan to support that but didn't have time to look into it. I've clarified that in the issue list in the first post. For now you'll just have to use Array blocks instead and accept the performance penalties with that. Mostly I think the issue is that as a shortcut I hardcoded some return types of certain operations (for example in some similar definitions) to be BlockSparseArray with Array blocks instead of inferring the block type from the input BlockSparseArray. Feel free to make PRs fixing issues you come across with that.
  4. Those issues with slicing block sparse arrays with dual axes and also adjointed block sparse arrays look more subtle, feel free to look into them and make PRs if you have time.
  5. I don't think it is a good idea to make adjoint of BlockSparseArray not return Adjoint, I think that would make some generic code harder to write since it would be more difficult to get this kind of behavior:
julia> v = [1, 2, 3];

julia> v' * v
14

if v was a BlockSparseVector rather than Adjoint wrapping a BlockSparseVector. More generally, the Adjoint wrapper is helpful for performing optimizations and dispatch. We should just focus on fixing issues with BlockSparseArray wrapped in Adjoint as they come up. I think the challenge with slicing is that it makes a SubArray wrapping an Adjoint wrapping a BlockSparseArray, handling multiple wrapper types is a bit tricky but I'm sure we can make it work. Again, feel free to make a PR with fixes for issues you come across.

@ogauthe
Contributor Author

ogauthe commented Jul 31, 2024

Thanks for the detailed answer.

Concerning block_stored_indices, I get your point about complexity. I still feel like the duplication is not needed and should be avoided: the issue arises precisely because the keys duplicated at NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/interface_optional.jl:27 are only partially transposed at NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/blocksparsearrayinterface.jl:221. Preserving a Dictionaries.Indices (the type before duplication) or a Set, for instance, would avoid reaching such an inconsistent state.

@ogauthe
Contributor Author

ogauthe commented Sep 11, 2024

issue: display error when dual is involved
main at 5d4111e3f0d720b17b58de137f633225094fb379

g1 = gradedrange([U1(0) => 1])
m1 = BlockSparseArrays.BlockSparseArray{Float64}(g1, g1)
m2 = BlockSparseArrays.BlockSparseArray{Float64}(g1, dual(g1))
display(m1[:,:])  # Ok
display(m2)  # Ok
display(m2[:,:])  # MethodError
ERROR: MethodError: no method matching LabelledInteger{Int64, U1}(::Int64)

Closest candidates are:
  (::Type{LabelledInteger{Value, Label}} where {Value<:Integer, Label})(::Any, ::Any)
   @ NDTensors ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/LabelledNumbers/src/labelledinteger.jl:2
  (::Type{T})(::T) where T<:Number
   @ Core boot.jl:792
  (::Type{IntT})(::NDTensors.Block{1}) where IntT<:Integer
   @ NDTensors ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/blocksparse/block.jl:63
  ...

Stacktrace:
  [1] convert(::Type{LabelledInteger{Int64, U1}}, x::Int64)
    @ Base ./number.jl:7
  [2] cvt1
    @ ./essentials.jl:468 [inlined]
  [3] ntuple
    @ ./ntuple.jl:49 [inlined]
  [4] convert(::Type{Tuple{Int64, LabelledInteger{Int64, U1}}}, x::Tuple{Int64, Int64})
    @ Base ./essentials.jl:470
  [5] push!(a::Vector{Tuple{Int64, LabelledInteger{Int64, U1}}}, item::Tuple{Int64, Int64})
    @ Base ./array.jl:1118
  [6] alignment(io::IOContext{…}, X::AbstractVecOrMat, rows::Vector{…}, cols::Vector{…}, cols_if_complete::Int64, cols_otherwise::Int64, sep::Int64, ncols::Int64)
    @ Base ./arrayshow.jl:76
  [7] _print_matrix(io::IOContext{…}, X::AbstractVecOrMat, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64, rowsA::UnitRange{…}, colsA::UnitRange{…})
    @ Base ./arrayshow.jl:207
  [8] print_matrix(io::IOContext{…}, X::NDTensors.BlockSparseArrays.BlockSparseArray{…}, pre::String, sep::String, post::String, hdots::String, vdots::String, ddots::String, hmod::Int64, vmod::Int64)
    @ Base ./arrayshow.jl:171
  [9] print_matrix
    @ ./arrayshow.jl:171 [inlined]
 [10] print_array
    @ ./arrayshow.jl:358 [inlined]
 [11] show(io::IOContext{…}, ::MIME{…}, X::NDTensors.BlockSparseArrays.BlockSparseArray{…})
    @ Base ./arrayshow.jl:399
 [12] #blocksparse_show#11
    @ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:120 [inlined]
 [13] blocksparse_show
    @ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:112 [inlined]
 [14] #show#12
    @ ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:130 [inlined]
 [15] show(io::IOContext{…}, mime::MIME{…}, a::NDTensors.BlockSparseArrays.BlockSparseArray{…})
    @ NDTensors.BlockSparseArrays.BlockSparseArraysGradedAxesExt ~/Documents/Recherche/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/ext/BlockSparseArraysGradedAxesExt/src/BlockSparseArraysGradedAxesExt.jl:127
 [16] (::OhMyREPL.var"#15#16"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::IOContext{Base.TTY})
    @ OhMyREPL ~/.julia/packages/OhMyREPL/HzW5x/src/output_prompt_overwrite.jl:23
 [17] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [18] display
    @ ~/.julia/packages/OhMyREPL/HzW5x/src/output_prompt_overwrite.jl:6 [inlined]
 [19] display
    @ ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [20] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [21] top-level scope
    @ REPL[30]:1
Some type information was truncated. Use `show(err)` to see complete types.

This is the same error as in #1336 (comment), in a different context. That previous case was fixed and no longer errors.

This is another case that should be fixed by refactoring UnitRangeDual (I am currently working on it).

EDIT: fixed by #1531 (was due to missing axes(Base.Slice(<:UnitRangeDual)})

@ogauthe
Contributor Author

ogauthe commented Sep 12, 2024

I realize there are other issues with a[:,:]. The axes are cast to Base.IdentityUnitRange wrapping the original axes(a). This causes the display error above, but also creates issues with blocklabels and blocklengths. So when calling a[:,:], the array itself is correct but the axes have yet another type that does not behave as expected.

Should we change the behavior of a[:,:] to preserve the axes or should we define blocklabels and blocklengths for Base.IdentityUnitRange{GradedUnitRange}? I currently favor fixing the axes types to avoid adding more code to support exotic axes types.

EDIT: fixed by #1531 (was due to missing axes(Base.Slice(<:UnitRangeDual)})

@mtfishman
Member

I think a[:, :] should always be exactly the same as copy(a), if I understand correctly, so if it doesn't act like that then we should change it to act like that.
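As a test sketch of that expectation (the axis-type check is an assumption about what "exactly the same" should include):

using Test: @test
@test a[:, :] == copy(a)
@test typeof.(axes(a[:, :])) == typeof.(axes(a))  # axis types should round-trip too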

@ogauthe
Contributor Author

ogauthe commented Sep 12, 2024

issue: it is still possible to create a GradedAxes.GradedUnitRange, a.k.a. BlockArrays.BlockedUnitRange{<:LabelledInteger}, by slicing a BlockedOneTo. Using such an axis for a BlockSparseArray creates many issues, e.g.

r = gradedrange([U1(1) => 2, U1(2) => 2])[1:3]
a = BlockSparseArray{Float64}(r,r)
a[1:2,1:2]  # MethodError
ERROR: MethodError: no method matching to_blockindices(::BlockArrays.BlockedUnitRange{…}, ::UnitRange{…})

Closest candidates are:
  to_blockindices(::UnitRangeDual, ::UnitRange{<:Integer})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/unitrangedual.jl:54
  to_blockindices(::Base.OneTo, ::UnitRange{<:Integer})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/blockedunitrange.jl:186
  to_blockindices(::BlockedOneTo, ::UnitRange{<:Integer})
   @ NDTensors ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/blockedunitrange.jl:170

Stacktrace:
 [1] blocksparse_to_indices(a::BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
   @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearrayinterface/blocksparsearrayinterface.jl:32
 [2] to_indices(a::BlockSparseArray{…}, inds::Tuple{…}, I::Tuple{…})
   @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:26
 [3] to_indices
   @ ./indices.jl:344 [inlined]
 [4] view
   @ ./subarray.jl:183 [inlined]
 [5] layout_getindex
   @ ~/.julia/packages/ArrayLayouts/31idh/src/ArrayLayouts.jl:138 [inlined]
 [6] getindex(::BlockSparseArray{…}, ::UnitRange{…}, ::UnitRange{…})
   @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/wrappedabstractblocksparsearray.jl:92
 [7] top-level scope
   @ REPL[57]:1
Some type information was truncated. Use `show(err)` to see complete types.

main at 5d4111e3f0d720b17b58de137f633225094fb379

EDIT: fixed by #1531

@ogauthe
Contributor Author

ogauthe commented Sep 16, 2024

issue: copy(adjoint) does not preserve dual axes

r = gradedrange([U1(0) => 2, U1(1) => 2])
a = BlockSparseArray{Float64}(r, r)

@test isdual.(axes(a)) == (false, false)
@test isdual.(axes(adjoint(a))) == (true, true)
@test_broken isdual.(axes(copy(adjoint(a)))) == (true, true)

main at 5d4111e3f0d720b17b58de137f633225094fb379

EDIT: I got confused about adjoint; the only issue here is with copy.

Edit: FIXED

@ogauthe
Contributor Author

ogauthe commented Oct 22, 2024

issue: permutedims crashes for some arrays

g1 = blockedrange([1, 1, 1])
g2 = blockedrange([1, 2, 3])
g3 = blockedrange([2, 2, 1])
g4 = blockedrange([1, 2, 1])
bsa = BlockSparseArray{Float64}(g1, g2, g3, g4);
bsa[Block(3, 2, 2, 3)] .= 1.0

bsat = permutedims(bsa, (2, 3, 4, 1))
ERROR: BoundsError: attempt to access 3×1×2×1 PermutedDimsArray(::Array{Float64, 4}, (2, 3, 4, 1)) with eltype Float64 at index [1:2, 1:2, 1:1, 1:1]
Stacktrace:
  [1] throw_boundserror(A::PermutedDimsArray{Float64, 4, (2, 3, 4, 1), (4, 1, 2, 3), Array{…}}, I::NTuple{4, UnitRange{…}})
    @ Base ./abstractarray.jl:737
  [2] checkbounds
    @ ./abstractarray.jl:702 [inlined]
  [3] view
    @ ./subarray.jl:184 [inlined]
  [4] (::NDTensors.BlockSparseArrays.var"#71#74"{Tuple{}, Tuple{}})(i::Int64)
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81
  [5] ntuple
    @ ./ntuple.jl:19 [inlined]
  [6] sparse_map!(::NDTensors.BlockSparseArrays.BlockSparseArrayStyle{…}, f::Function, a_dest::BlockSparseArray{…}, a_srcs::PermutedDimsArray{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81
  [7] sparse_map!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93 [inlined]
  [8] sparse_copyto!(dest::BlockSparseArray{…}, src::PermutedDimsArray{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:8
  [9] sparse_permutedims!(dest::BlockSparseArray{…}, src::BlockSparseArray{…}, perm::NTuple{…})
    @ NDTensors.SparseArrayInterface ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:13
 [10] permutedims!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:135 [inlined]
 [11] permutedims(A::BlockSparseArray{…}, perm::NTuple{…})
    @ Base.PermutedDimsArrays ./permuteddimsarray.jl:145
 [12] top-level scope
    @ REPL[227]:1
Some type information was truncated. Use `show(err)` to see complete types.

I guess there is a mismatch between permuting the array structure and permuting the inner blocks.

Edit: FIXED
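The expected semantics, as a test sketch against the example above: permuting the array permutes both the block structure and each stored block.

using Test: @test
bsat = permutedims(bsa, (2, 3, 4, 1))
@test size(bsat) == (6, 5, 4, 3)
@test bsat[Block(2, 2, 3, 3)] == permutedims(bsa[Block(3, 2, 2, 3)], (2, 3, 4, 1))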

@ogauthe
Contributor Author

ogauthe commented Oct 22, 2024

issue: surprising display error in a very specific context. The error is different from previous display errors mentioned here.

g0 = gradedrange([U1(0) => 1])
g1 = dual(gradedrange([U1(-2) => 1, U1(-1) => 2, U1(0) => 1]))
g2 = dual(gradedrange([U1(-2) => 2, U1(-1) => 2, U1(0) => 1]))
bsa1 = BlockSparseArray{Float64}(g0, g1)
@show bsa1  # Ok
@show bsa1, 1   # Ok
bsa2 = BlockSparseArray{Float64}(g0, g2)
@show bsa2  # Ok
@show bsa2, 1   # BoundsError
(bsa2, 1) = ([0.0 0.0 0.0 0.0 0.0], 1)
([0.0 0.0 … 0.0Error showing value of type Tuple{BlockSparseArray{Float64, 2, Matrix{Float64}, NDTensors.SparseArrayDOKs.SparseArrayDOK{Matrix{Float64}, 2, NDTensors.BlockSparseArrays.BlockZero{Tuple{BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, NDTensors.GradedAxes.UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}}, Tuple{BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}, NDTensors.GradedAxes.UnitRangeDual{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, BlockArrays.BlockedOneTo{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}, Vector{NDTensors.LabelledNumbers.LabelledInteger{Int64, U1{Int64}}}}}}}, Int64}:
ERROR: BoundsError: attempt to access 2-blocked 2-element BlockArrays.BlockedUnitRange{Int64, Vector{Int64}} at index [5]
Stacktrace:
  [1] throw_boundserror(A::BlockArrays.BlockedUnitRange{Int64, Vector{Int64}}, I::Int64)
    @ Base ./abstractarray.jl:737
  [2] getindex
    @ ./range.jl:948 [inlined]
  [3] getindex(a::BlockArrays.BlockedUnitRange{NDTensors.LabelledNumbers.LabelledInteger{…}, Vector{…}}, index::Int64)
    @ NDTensors.GradedAxes ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/gradedunitrange.jl:269
  [4] iterate
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/gradedunitrange.jl:221 [inlined]
  [5] iterate
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/GradedAxes/src/unitrangedual.jl:95 [inlined]
  [6] _show_nonempty(io::IOContext{…}, X::AbstractMatrix, prefix::String, drop_brackets::Bool, axs::Tuple{…})
    @ Base ./arrayshow.jl:447
  [7] _show_nonempty(io::IOContext{…}, X::BlockSparseArray{…}, prefix::String)
    @ Base ./arrayshow.jl:413
  [8] show
    @ ./arrayshow.jl:491 [inlined]
  [9] show_delim_array(io::IOContext{…}, itr::Tuple{…}, op::Char, delim::Char, cl::Char, delim_one::Bool, i1::Int64, n::Int64)
    @ Base ./show.jl:1378
 [10] show_delim_array
    @ ./show.jl:1363 [inlined]
 [11] show
    @ ./show.jl:1396 [inlined]
 [12] show(io::IOContext{Base.TTY}, ::MIME{Symbol("text/plain")}, x::Tuple{BlockSparseArray{…}, Int64})
    @ Base.Multimedia ./multimedia.jl:47
 [13] (::REPL.var"#55#56"{REPL.REPLDisplay{REPL.LineEditREPL}, MIME{Symbol("text/plain")}, Base.RefValue{Any}})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:273
 [14] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [15] display(d::REPL.REPLDisplay, mime::MIME{Symbol("text/plain")}, x::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:259
 [16] display
    @ ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:278 [inlined]
 [17] display(x::Any)
    @ Base.Multimedia ./multimedia.jl:340
 [18] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [19] invokelatest
    @ ./essentials.jl:889 [inlined]
 [20] print_response(errio::IO, response::Any, show_value::Bool, have_color::Bool, specialdisplay::Union{…})
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:315
 [21] (::REPL.var"#57#58"{REPL.LineEditREPL, Pair{Any, Bool}, Bool, Bool})(io::Any)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:284
 [22] with_repl_linfo(f::Any, repl::REPL.LineEditREPL)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:569
 [23] print_response(repl::REPL.AbstractREPL, response::Any, show_value::Bool, have_color::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:282
 [24] (::REPL.var"#do_respond#80"{})(s::REPL.LineEdit.MIState, buf::Any, ok::Bool)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:911
 [25] (::REPL.var"#98#108"{})(::REPL.LineEdit.MIState, ::Any, ::Vararg{…})
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1248
 [26] #invokelatest#2
    @ ./essentials.jl:892 [inlined]
 [27] invokelatest
    @ ./essentials.jl:889 [inlined]
 [28] (::REPL.LineEdit.var"#27#28"{REPL.var"#98#108"{}, String})(s::Any, p::Any)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:1612
 [29] prompt!(term::REPL.Terminals.TextTerminal, prompt::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2749
 [30] run_interface(terminal::REPL.Terminals.TextTerminal, m::REPL.LineEdit.ModalInterface, s::REPL.LineEdit.MIState)
    @ REPL.LineEdit ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/LineEdit.jl:2651
 [31] run_frontend(repl::REPL.LineEditREPL, backend::REPL.REPLBackendRef)
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:1312
 [32] (::REPL.var"#62#68"{REPL.LineEditREPL, REPL.REPLBackendRef})()
    @ REPL ~/.julia/juliaup/julia-1.10.5+0.x64.linux.gnu/share/julia/stdlib/v1.10/REPL/src/REPL.jl:386
Some type information was truncated. Use `show(err)` to see complete types.

EDIT: fixed by #1531

@ogauthe (Contributor Author) commented Oct 22, 2024

issue: cannot create a zero-dimensional BlockSparseArray. The empty tuple matches both Vararg constructor candidates, hence the ambiguity below.

zerodim = BlockSparseArray{Float64}(())
ERROR: MethodError: (BlockSparseArray{Float64, N, A, Blocks} where {N, A<:AbstractArray{Float64, N}, Blocks<:AbstractArray{A, N}})(::Tuple{}) is ambiguous.

Candidates:
  (BlockSparseArray{T, N, A, Blocks} where {N, A<:AbstractArray{T, N}, Blocks<:AbstractArray{A, N}})(dims::Tuple{Vararg{Vector{Int64}}}) where T
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearray/blocksparsearray.jl:61
  (BlockSparseArray{T, N, A, Blocks} where {N, A<:AbstractArray{T, N}, Blocks<:AbstractArray{A, N}})(axes::Tuple{Vararg{AbstractUnitRange}}) where T
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/blocksparsearray/blocksparsearray.jl:65

Possible fix, define
  (BlockSparseArray{T, N, A, Blocks} where {N, A<:AbstractArray{T, N}, Blocks<:AbstractArray{A, N}})(::Tuple{}) where T

Stacktrace:
 [1] top-level scope
   @ REPL[75]:1

Edit: FIXED

@mtfishman (Member) commented:
@ogauthe are all of the open issues you've listed in recent comments also listed in the first post?

@ogauthe (Contributor Author) commented Oct 30, 2024

A few were missing; I updated the first post and added links.

@mtfishman (Member) commented:
> issue: I cannot write to a slice of a block
>
> a[BlockArrays.Block(1,1)][1:2,1:2] = ones((2,2))
>
> does not write to a. This is probably associated with "Fix initializing a block with broadcasting syntax".

I don't think that this should write to a since a[Block(1,1)] returns a copy of the block, following the convention of slicing in Julia:

julia> a = randn(4, 4)
4×4 Matrix{Float64}:
  0.461072  0.8415    -0.25594   -0.0362716
  1.64976   0.325521  -0.174059  -1.27251
  0.676818  0.705131   0.909353  -0.295874
 -0.159376  0.27667    0.949735   0.135925

julia> a[2:4, 2:4][1:2, 1:2] = zeros(2, 2)
2×2 Matrix{Float64}:
 0.0  0.0
 0.0  0.0

julia> a
4×4 Matrix{Float64}:
  0.461072  0.8415    -0.25594   -0.0362716
  1.64976   0.325521  -0.174059  -1.27251
  0.676818  0.705131   0.909353  -0.295874
 -0.159376  0.27667    0.949735   0.135925

You could use a view, or use a[Block(1,1)[1:2,1:2]] = ones(2, 2); both are sketched below.
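Both alternatives in a minimal sketch, assuming @view! from #1498 is available to instantiate a block that is not stored yet:

using BlockArrays: Block
using NDTensors.BlockSparseArrays: BlockSparseArray, @view!

a = BlockSparseArray{Float64}([2, 3], [2, 3])

# write through a view; @view! instantiates the block if needed, and
# writes through the view propagate back into a
v = @view! a[Block(1, 1)]
v[1:2, 1:2] = ones(2, 2)

# or assign to a sub-slice of the block in a single indexing expression
a[Block(1, 1)[1:2, 1:2]] = ones(2, 2)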

@ogauthe (Contributor Author) commented Nov 5, 2024

Many axes-related errors were fixed by #1531. I updated the first comment.

@ogauthe (Contributor Author) commented Nov 13, 2024

Feature request: block sizes are not checked in the constructor from a Dictionary. When they do not match the axes, the resulting object is inconsistent and the failures are hard to diagnose. The constructor should raise a clear error; a sketch of such a check is given after the stack trace below.

using BlockArrays: Block, blockedrange
using Dictionaries: Dictionary, set!
using NDTensors.BlockSparseArrays: BlockSparseArray

rg = blockedrange([2, 3])
cg = blockedrange([1])
m1 = ones((2, 1))
m2 = ones((1, 1))  # too small: Block(2, 1) should be 3×1
mdic = Dictionary{Block{2,Int64},Matrix{Float64}}()
set!(mdic, Block(1, 1), m1)
set!(mdic, Block(2, 1), m2)
m = BlockSparseArray(mdic, (rg, cg))
copy(m)  # example; many operations will fail
ERROR: BoundsError: attempt to access 1×1 Matrix{Float64} at index [1:3, 1:1]
Stacktrace:
  [1] throw_boundserror(A::Matrix{Float64}, I::Tuple{UnitRange{Int64}, UnitRange{Int64}})
    @ Base ./abstractarray.jl:737
  [2] checkbounds
    @ ./abstractarray.jl:702 [inlined]
  [3] view
    @ ./subarray.jl:184 [inlined]
  [4] #73
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81 [inlined]
  [5] ntuple
    @ ./ntuple.jl:19 [inlined]
  [6] sparse_map!(::NDTensors.BlockSparseArrays.BlockSparseArrayStyle{…}, f::Function, a_dest::BlockSparseArray{…}, a_srcs::BlockSparseArray{…})
    @ NDTensors.BlockSparseArrays ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:81
  [7] sparse_map!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/map.jl:93 [inlined]
  [8] sparse_copyto!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/SparseArrayInterface/src/sparsearrayinterface/copyto.jl:8 [inlined]
  [9] copyto!
    @ ~/Documents/itensor/ITensors.jl/NDTensors/src/lib/BlockSparseArrays/src/abstractblocksparsearray/map.jl:116 [inlined]
 [10] copymutable
    @ ./abstractarray.jl:1201 [inlined]
 [11] copy(a::BlockSparseArray{Float64, 2, Matrix{…}, NDTensors.SparseArrayDOKs.SparseArrayDOK{…}, Tuple{…}})
    @ Base ./abstractarray.jl:1144
 [12] top-level scope
    @ REPL[293]:1
Some type information was truncated. Use `show(err)` to see complete types.
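A sketch of the kind of check the constructor could perform (check_block_sizes is a hypothetical helper, not part of the library):

using BlockArrays: Block
using Dictionaries: Dictionary

# compare each stored block against the block lengths implied by the axes;
# b.n is the tuple of integer block indices stored in a BlockArrays.Block
function check_block_sizes(block_data::Dictionary{<:Block,<:AbstractArray}, axs)
  for (b, data) in pairs(block_data)
    expected = ntuple(d -> length(axs[d][Block(b.n[d])]), length(axs))
    size(data) == expected ||
      error("block $b has size $(size(data)), expected $expected")
  end
end

check_block_sizes(mdic, (rg, cg))  # throws for the example above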

@mtfishman (Member) commented:
Related to that, it would also be nice to automatically determine the blocked axes from the blocks that are passed. However, if not enough blocks are passed, there may not be enough information to determine all of the sizes; see the sketch below.
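For illustration, a rough sketch of that inference for the matrix case, assuming every block row and block column appears in at least one stored block (infer_blocked_axes is hypothetical, and a real implementation would also need to check that blocks sharing a block row or column agree on sizes):

using BlockArrays: blockedrange
using Dictionaries: Dictionary

function infer_blocked_axes(block_data::Dictionary)
  row_lengths = Dict{Int,Int}()
  col_lengths = Dict{Int,Int}()
  for (b, data) in pairs(block_data)
    i, j = b.n  # integer block indices stored in a BlockArrays.Block
    row_lengths[i] = size(data, 1)
    col_lengths[j] = size(data, 2)
  end
  nrow, ncol = maximum(keys(row_lengths)), maximum(keys(col_lengths))
  (issetequal(keys(row_lengths), 1:nrow) && issetequal(keys(col_lengths), 1:ncol)) ||
    error("not enough blocks to determine every block size")
  return (blockedrange([row_lengths[i] for i in 1:nrow]),
          blockedrange([col_lengths[j] for j in 1:ncol]))
end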
