diff --git a/dev/AdvancedUsageGuide.html b/dev/AdvancedUsageGuide.html index f0290eb1ab..84b9cef17d 100644 --- a/dev/AdvancedUsageGuide.html +++ b/dev/AdvancedUsageGuide.html @@ -461,4 +461,4 @@ myscale2! (generic function with 1 method) julia> @btime myscale2!(A, 2) setup = (A = random_itensor(i)); - 3.549 μs (2 allocations: 112 bytes)

A very efficient function is written for the Tensor type. The ITensor version then simply wraps the Tensor function: it converts the ITensor to a Tensor (without any copying) with the tensor function and calls the Tensor version. This is the basis for the design of all performance-critical ITensors.jl functions.
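
For illustration, here is a minimal sketch of that pattern, modeled on the myscale2! benchmark above (the function names are just examples, not part of the library):

using ITensors
using ITensors.NDTensors: Tensor, tensor

function myscale!(T::Tensor, x)
  # Fast, low-level loop over the Tensor, which is an AbstractArray
  for n in eachindex(T)
    T[n] = x * T[n]
  end
  return T
end

function myscale2!(A::ITensor, x)
  # Convert the ITensor to a Tensor without copying, then dispatch
  # to the performance-critical Tensor method
  myscale!(tensor(A), x)
  return A
end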

diff --git a/dev/CodeTiming.html b/dev/CodeTiming.html index 3749905828..41ba21ab33 100644 --- a/dev/CodeTiming.html +++ b/dev/CodeTiming.html @@ -1,2 +1,2 @@ -Timing and profiling · ITensors.jl

Timing and Profiling your code

It is very important to time and profile your code to make sure it is running as fast as possible. Here are some tips on timing and profiling your code.

If you are concerned about the performance of your code, a good place to start is Julia's performance tips.

Timing and benchmarking

Julia has many nice timing tools available. Tools like @time and TimerOutputs can be used to measure the time of specific lines of code. For microbenchmarking, we recommend the BenchmarkTools package. For profiling your code, see the Julia documentation on profiling.
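
For example, here is a minimal sketch of timing an ITensor contraction with both tools (the tensors are made up for illustration):

using ITensors
using BenchmarkTools

i = Index(100)
A = random_itensor(i, i')
B = random_itensor(i', i'')

@time A * B # one-shot timing; the first call also measures compilation
@btime $A * $B # repeated benchmark; interpolate globals with $ for reliable results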

diff --git a/dev/ContractionSequenceOptimization.html b/dev/ContractionSequenceOptimization.html index 8cfea36dd3..2fa816a98c 100644 --- a/dev/ContractionSequenceOptimization.html +++ b/dev/ContractionSequenceOptimization.html @@ -1,5 +1,5 @@ -Contraction sequence optimization · ITensors.jl

Contraction sequence optimization

When contracting a tensor network, the sequence of contractions makes a big difference in the computational cost. The complexity of determining the optimal sequence grows exponentially with the number of tensors, but there are many heuristic algorithms available for computing optimal sequences for small networks[1][2][3][4][5][6]. ITensors.jl provides some functionality for helping you find the optimal contraction sequence for small tensor networks, as we will show below.

The algorithm in ITensors.jl currently uses a modified version of [1], with simplifications for outer product contractions similar to those used in TensorOperations.jl.

Functions

ITensors.ContractionSequenceOptimization.contraction_cost - Function
contraction_cost(A; sequence)

Return the cost of contracting the collection of ITensors according to the specified sequence, where the cost is measured in the number of floating point operations that would need to be performed to contract dense tensors of the dimensions specified by the indices of the tensors (so for now, sparsity is ignored in computing the costs). Pairwise costs are returned in a vector (contracting N tensors requires N-1 pairwise contractions). You can use sum(contraction_cost(A; sequence)) to get the total cost of the contraction.

If no sequence is specified, left-associative contraction is used; in other words, the sequence is equivalent to [[[[1, 2], 3], 4], …].

source
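
For example, here is a minimal sketch of computing the cost of contracting a small chain of tensors (the indices and dimensions are made up):

using ITensors
using ITensors: contraction_cost

i, j, k, l = Index.((10, 20, 30, 40))
A = random_itensor(i, j)
B = random_itensor(j, k)
C = random_itensor(k, l)

costs = contraction_cost([A, B, C]) # pairwise costs of the default left-associative sequence
total = sum(costs)
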
NDTensors.contract - Function
contract(ψ::MPS, A::MPO; kwargs...) -> MPS
 *(::MPS, ::MPO; kwargs...) -> MPS
 
 contract(A::MPO, ψ::MPS; kwargs...) -> MPS
@@ -9,7 +9,7 @@
 # Bring the indices back to pairs of primed and unprimed
 C = apply(A, B; alg="naive", truncate=false)

Keywords

  • cutoff::Float64=1e-14: the cutoff value for truncating the density matrix eigenvalues. Note that the default is somewhat arbitrary and subject to change; in general you should set a cutoff value.
  • maxdim::Int=maxlinkdim(A) * maxlinkdim(B): the maximal bond dimension of the resulting MPS.
  • mindim::Int=1: the minimal bond dimension of the resulting MPS.
  • alg="zipup": Either "zipup" or "naive". "zipup" contracts pairs of site tensors and truncates with SVDs in a sweep across the sites, while "naive" first contracts pairs of tensors exactly and then truncates at the end if truncate=true.
  • truncate=true: Enable or disable truncation. If truncate=false, ignore other truncation parameters like cutoff and maxdim. This is most relevant for the "naive" version, if you just want to contract the tensors pairwise exactly. This can be useful if you are contracting MPOs that have diverging norms, such as MPOs originating from sums of local operators.

See also apply for details about the arguments available.
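
As a minimal sketch of applying an MPO to an MPS (the Hamiltonian here is a made-up nearest-neighbor Sz-Sz coupling, chosen just for illustration):

using ITensors, ITensorMPS

s = siteinds("S=1/2", 10)
os = OpSum()
for n in 1:9
  os += "Sz", n, "Sz", n + 1
end
H = MPO(os, s)
psi = random_mps(s; linkdims=4)

Hpsi = apply(H, psi; cutoff=1e-12) # handles the prime-level bookkeeping
Hpsi2 = contract(H, psi; cutoff=1e-12) # leaves the site indices of the result primed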

*(As::ITensor...; sequence = default_sequence(), kwargs...)
 *(As::Vector{<: ITensor}; sequence = default_sequence(), kwargs...)
contract(As::ITensor...; sequence = default_sequence(), kwargs...)

Contract the set of ITensors according to the contraction sequence.

The default sequence is "automatic" if ITensors.using_contraction_sequence_optimization() is true, otherwise it is "left_associative" (the ITensors are contracted from left to right).

You can change the default with ITensors.enable_contraction_sequence_optimization() and ITensors.disable_contraction_sequence_optimization().

For a custom sequence, the sequence should be provided as a binary tree where the leaves are integers n specifying the ITensor As[n] and branches are accessed by indexing with 1 or 2, i.e. sequence = Any[Any[1, 3], Any[2, 4]].

source
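
For example, a minimal sketch of contracting four made-up tensors with an explicit sequence, and then with sequence optimization enabled:

using ITensors

i, j, k, l, m = Index.((2, 3, 4, 5, 6))
A1 = random_itensor(i, j)
A2 = random_itensor(j, k)
A3 = random_itensor(k, l)
A4 = random_itensor(l, m)

sequence = Any[Any[1, 2], Any[3, 4]] # contract (A1*A2) with (A3*A4)
out = contract(A1, A2, A3, A4; sequence=sequence)

ITensors.enable_contraction_sequence_optimization()
out_auto = contract(A1, A2, A3, A4) # sequence="automatic" is now the default
ITensors.disable_contraction_sequence_optimization()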

Examples

In the following example we show how to compute the contraction sequence cost of a tensor network.

using ITensors
 using Symbolics
 
 using ITensors: contraction_cost
@@ -103,4 +103,4 @@
 # Fix d to a certain value (such as 4 for a Hubbard site)
 @variables d
 var_sub = Dict(d => 4)
display(substitute.(sum.(getindex.(sequence_costs, :symbolic_cost)), (var_sub,)))

A future direction will be to allow optimizing over contraction sequences with the dimensions specified symbolically, so that the optimal sequence in limits of certain dimensions can be found. In addition, we plan to implement more algorithms that work for larger networks, as well as algorithms like [2] which take an optimal sequence for a closed network and generate optimal sequences for environments of each tensor in the network, which is helpful for computing gradients of tensor networks.

diff --git a/dev/DMRG.html b/dev/DMRG.html index d122538d1c..802655c5cd 100644 --- a/dev/DMRG.html +++ b/dev/DMRG.html @@ -2,4 +2,4 @@ DMRG · ITensors.jl

DMRG

ITensorMPS.dmrg - Function
dmrg(H::MPO, psi0::MPS; kwargs...)
 dmrg(H::MPO, psi0::MPS, sweeps::Sweeps; kwargs...)

Use the density matrix renormalization group (DMRG) algorithm to optimize a matrix product state (MPS) such that it is the eigenvector of lowest eigenvalue of a Hermitian matrix H, represented as a matrix product operator (MPO).

dmrg(Hs::Vector{MPO}, psi0::MPS; kwargs...)
 dmrg(Hs::Vector{MPO}, psi0::MPS, sweeps::Sweeps; kwargs...)

Use the density matrix renormalization group (DMRG) algorithm to optimize a matrix product state (MPS) such that it is the eigenvector of lowest eigenvalue of a Hermitian matrix H. This version of dmrg accepts a representation of H as a Vector of MPOs, Hs = [H1, H2, H3, ...] such that H is defined as H = H1 + H2 + H3 + ... Note that this sum of MPOs is not actually computed; rather the set of MPOs [H1,H2,H3,..] is efficiently looped over at each step of the DMRG algorithm when optimizing the MPS.
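
For example (a sketch, where H1 and H2 stand for any two MPOs defined on the same sites as psi0):

energy, psi = dmrg([H1, H2], psi0; nsweeps=5, maxdim=100, cutoff=1e-10)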

dmrg(H::MPO, Ms::Vector{MPS}, psi0::MPS; weight=1.0, kwargs...)
dmrg(H::MPO, Ms::Vector{MPS}, psi0::MPS, sweeps::Sweeps; weight=1.0, kwargs...)

Use the density matrix renormalization group (DMRG) algorithm to optimize a matrix product state (MPS) such that it is the eigenvector of lowest eigenvalue of a Hermitian matrix H, subject to the constraint that the MPS is orthogonal to each of the MPS provided in the Vector Ms. The orthogonality constraint is approximately enforced by adding to H terms of the form w|M1><M1| + w|M2><M2| + ... where Ms=[M1, M2, ...] and w is the "weight" parameter, which can be adjusted through the optional weight keyword argument.

Note

dmrg will report the energy of the operator H + w|M1><M1| + w|M2><M2| + ..., not the operator H. If you want the expectation value of the MPS eigenstate with respect to just H, you can compute it yourself with an observer or after DMRG is run with inner(psi', H, psi).

The MPS psi0 is used to initialize the MPS to be optimized.

The number of sweeps of the DMRG algorithm is controlled by passing the nsweeps keyword argument. The keyword arguments maxdim, cutoff, noise, and mindim can also be passed to control the cost versus accuracy of the algorithm - see below for details.

Alternatively the number of sweeps and accuracy parameters can be passed through a Sweeps object, though this interface is no longer preferred.

Returns:

  • energy::Number - eigenvalue of the optimized MPS
  • psi::MPS - optimized MPS

Keyword arguments:

  • nsweeps::Int - number of "sweeps" of DMRG to perform

Optional keyword arguments:

  • maxdim - integer or array of integers specifying the maximum size allowed for the bond dimension or rank of the MPS being optimized.
  • cutoff - float or array of floats specifying the truncation error cutoff or threshold to use for truncating the bond dimension or rank of the MPS.
  • eigsolve_krylovdim::Int = 3 - maximum dimension of Krylov space used to locally solve the eigenvalue problem. Try setting to a higher value if convergence is slow or the Hamiltonian is close to a critical point. [krylovkit]
  • eigsolve_tol::Number = 1e-14 - Krylov eigensolver tolerance. [krylovkit]
  • eigsolve_maxiter::Int = 1 - number of times the Krylov subspace can be rebuilt. [krylovkit]
  • eigsolve_verbosity::Int = 0 - verbosity level of the Krylov solver. Warning: enabling this will lead to a lot of outputs to the terminal. [krylovkit]
  • ishermitian=true - boolean specifying if dmrg should assume the MPO (or more general linear operator) represents a Hermitian matrix. [krylovkit]
  • noise - float or array of floats specifying strength of the "noise term" to use to aid convergence.
  • mindim - integer or array of integers specifying the minimum size of the bond dimension or rank, if possible.
  • outputlevel::Int = 1 - larger outputlevel values make DMRG print more information and 0 means no output.
  • observer - object implementing the Observer interface which can perform measurements and stop DMRG early.
  • write_when_maxdim_exceeds::Int - when the allowed maxdim exceeds this value, begin saving tensors to disk to free RAM in large calculations
  • write_path::String = tempdir() - path to use to save files to disk (to save RAM) when maxdim exceeds the write_when_maxdim_exceeds option, if set
  • [krylovkit]: The dmrg function in ITensors.jl currently uses the eigsolve function in KrylovKit.jl as the internal eigensolver. See the KrylovKit.jl documentation on the eigsolve function for more details: KrylovKit.eigsolve.
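
Putting these pieces together, here is a minimal sketch of a complete DMRG run for the spin-1 Heisenberg chain (the model and parameters are chosen just for illustration):

using ITensors, ITensorMPS

N = 20
sites = siteinds("S=1", N)

os = OpSum()
for n in 1:(N - 1)
  os += "Sz", n, "Sz", n + 1
  os += 1 / 2, "S+", n, "S-", n + 1
  os += 1 / 2, "S-", n, "S+", n + 1
end
H = MPO(os, sites)

psi0 = random_mps(sites; linkdims=10)
energy, psi = dmrg(H, psi0; nsweeps=5, maxdim=[10, 20, 100, 100, 200], cutoff=1e-10)
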
diff --git a/dev/DMRGObserver.html b/dev/DMRGObserver.html index 38235904a8..4450e93eb6 100644 --- a/dev/DMRGObserver.html +++ b/dev/DMRGObserver.html @@ -11,4 +11,4 @@ sites::Vector{<:Index}; energy_tol=0.0, minsweeps=2, - energy_type=Float64)

Construct a DMRGObserver by providing an array ops of operator names, which are strings recognized by the op function. Each of these operators will be measured on every site during every step of DMRG and the results recorded inside the DMRGObserver for later analysis. The array sites is the basis of sites used to define the MPS and MPO for the DMRG calculation.

Optionally, one can provide an energy tolerance used for early stopping, and a minimum number of sweeps that must be done.

Optional keyword arguments:

Methods

ITensorMPS.measurements - Method
measurements(o::DMRGObserver)

After using a DMRGObserver object o within a DMRG calculation, retrieve a dictionary of measurement results, with the keys being operator names and values being DMRGMeasurement objects.

ITensorMPS.DMRGMeasurement - Type

A DMRGMeasurement object is an alias for Vector{Vector{Float64}}, in other words an array of arrays of real numbers.

Given a DMRGMeasurement M, the result for the measurement on sweep n and site i is given by M[n][i].

ITensorMPS.energies - Method
energies(o::DMRGObserver)

After using a DMRGObserver object o within a DMRG calculation, retrieve an array of the energy after each sweep.
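
For example, a minimal sketch tying the observer functions together (the model is a made-up Ising-like chain, chosen just for illustration):

using ITensors, ITensorMPS

N = 10
sites = siteinds("S=1/2", N)
os = OpSum()
for n in 1:(N - 1)
  os += "Sz", n, "Sz", n + 1
end
H = MPO(os, sites)
psi0 = random_mps(sites; linkdims=4)

# Measure "Sz" on every site each sweep; stop early once the energy
# changes by less than 1e-7 between sweeps (after at least 2 sweeps)
obs = DMRGObserver(["Sz"], sites; energy_tol=1e-7, minsweeps=2)
energy, psi = dmrg(H, psi0; nsweeps=10, maxdim=50, cutoff=1e-10, observer=obs)

Sz = measurements(obs)["Sz"] # Sz[n][i]: measurement on sweep n, site i
Es = energies(obs) # energy after each sweep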

diff --git a/dev/DeveloperGuide.html b/dev/DeveloperGuide.html index b235fc3357..19786240d7 100644 --- a/dev/DeveloperGuide.html +++ b/dev/DeveloperGuide.html @@ -29,4 +29,4 @@ # Call fA like this: fA(my_callback, psi; callback_args = (; a, b)) -
  • External (non-ITensor) Functions: Though it requires judgment in each case, if the keyword arguments an external (non-ITensor) function accepts are small in number, not expected to change, and known ahead of time, try to list them explicitly if possible (rather than forwarding with kwargs...). Possible exceptions could be if you want to make use of defaults defined for keyword arguments of an external function.

diff --git a/dev/Einsum.html b/dev/Einsum.html index 3864534117..1626d43958 100644 --- a/dev/Einsum.html +++ b/dev/Einsum.html @@ -1,23 +1,23 @@ -ITensor indices and Einstein notation · ITensors.jl

    ITensor Index identity: dimension labels and Einstein notation

    Many tensor contraction libraries use Einstein notation, such as NumPy's einsum function, ncon, and various Julia packages such as TensorOperations.jl, Tullio.jl, OMEinsum.jl, and Einsum.jl, among others.

    ITensor also uses Einstein notation; however, the labels are stored inside the tensor and carried around with it during various operations. In addition, the labels that determine if tensor indices match with each other, and therefore automatically contract when doing * or match when adding or subtracting, are more sophisticated than simple characters or strings. ITensor indices are given a unique random ID number when they are constructed, and users can additionally set information like prime levels and tags which uniquely determine an Index. This is in contrast to simpler implementations of the same idea, such as the NamedDims.jl package, which only allows symbols as the metadata for uniquely identifying a tensor/array dimension.

    Index identity

    Here is an illustration of how the different types of Index metadata (random ID, prime level, and tags) work for Index identity:

    julia> i = Index(2)(dim=2|id=134)
    julia> j = Index(2)(dim=2|id=38)
    julia> i == jfalse
    julia> id(i)0x33d6f1d868a47016
    julia> id(j)0x26a417a1d4f0a3de
    julia> ip = i'(dim=2|id=134)'
    julia> ip == ifalse
    julia> plev(i) == 0true
    julia> plev(ip) == 1true
    julia> noprime(ip) == itrue
    julia> ix = addtags(i, "x")(dim=2|id=134|"x")
    julia> ix == ifalse
    julia> removetags(ix, "x") == itrue
    julia> ixyz = addtags(ix, "y,z")(dim=2|id=134|"x,y,z")
    julia> ixyz == addtags(i, "z,y,x")true

    The different metadata that are stored inside of ITensor indices that determine their identity are useful in different contexts. The random ID is particularly useful in the case when a new Index needs to be generated internally by ITensor, such as when performing a matrix factorization. In the case of a matrix factorization, we want to make sure that the new Index will not accidentally clash with an existing one, for example:

    julia> i = Index(2, "i")(dim=2|id=114|"i")
    julia> j = Index(2, "j")(dim=2|id=905|"j")
    julia> A = random_itensor(i, j)ITensor ord=2 (dim=2|id=114|"i") (dim=2|id=905|"j") NDTensors.Dense{Float64, Vector{Float64}}
    julia> U, S, V = svd(A, i; lefttags="i", righttags="j");
    julia> inds(U)((dim=2|id=114|"i"), (dim=2|id=781|"i"))
    julia> inds(S)((dim=2|id=781|"i"), (dim=2|id=39|"j"))
    julia> inds(V)((dim=2|id=905|"j"), (dim=2|id=39|"j"))
    julia> norm(U * S * V - A)2.7128146828510983e-15

    You can see that it would have been a problem here if there wasn't a new ID assigned to the Index, since it would have clashed with the original index. In this case, it could be avoided by giving the new indices different tags (with the keyword arguments lefttags and righttags), but in more complicated examples where it is not practical to do that (such as a case where many new indices are being introduced, for example for a tensor train (TT)/matrix product state (MPS)), it is convenient to not force users to come up with unique prime levels or tags themselves. It can also help to avoid accidental contractions in more complicated tensor network algorithms where there are many indices that can potentially have the same prime levels or tags.

    In contrast, using multiple indices with the same Index ID but different prime levels and tags can be useful in situations where there is a more fundamental relationship between the spaces. For example, in the case of an ITensor corresponding to a Hermitian operator, it is helpful to make the bra space and ket spaces the same up to a prime level:

    i = Index(2, "i")
     j = Index(3, "j")
     A = random_itensor(i', j', dag(i), dag(j))
     H = 0.5 * (A + swapprime(dag(A), 0 => 1))
     v = random_itensor(i, j)
     Hv = noprime(H * v)
     vH = dag(v)' * H
    norm(Hv - dag(vH))

    Note that we have added dag in a few places, which is superfluous in this case since the tensors are real and dense, but becomes important when the tensors are complex and/or have symmetries. You can see that in this case, it is very useful to relate the bra and ket spaces by prime levels, since it makes it much easier to perform operations that map from one space to another. We could have created A from 4 entirely different indices with different ID numbers, but it would make the operations a bit more cumbersome, as shown below:

    julia> i = Index(2, "i")(dim=2|id=748|"i")
    julia> j = Index(3, "j")(dim=3|id=532|"j")
    julia> ip = Index(2, "i")(dim=2|id=931|"i")
    julia> jp = Index(3, "jp")(dim=3|id=855|"jp")
    julia> A = random_itensor(ip, jp, dag(i), dag(j))ITensor ord=4 (dim=2|id=931|"i") (dim=3|id=855|"jp") (dim=2|id=748|"i") (dim=3|id=532|"j") NDTensors.Dense{Float64, Vector{Float64}}
    julia> H = 0.5 * (A + swapinds(dag(A), (i, j), (ip, jp)))ITensor ord=4 (dim=2|id=931|"i") (dim=3|id=855|"jp") (dim=2|id=748|"i") (dim=3|id=532|"j") NDTensors.Dense{Float64, Vector{Float64}}
    julia> v = random_itensor(i, j)ITensor ord=2 (dim=2|id=748|"i") (dim=3|id=532|"j") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Hv = replaceinds(H * v, (ip, jp) => (i, j))ITensor ord=2 (dim=2|id=748|"i") (dim=3|id=532|"j") NDTensors.Dense{Float64, Vector{Float64}}
    julia> vH = replaceinds(dag(v), (i, j) => (ip, jp)) * HITensor ord=2 (dim=2|id=748|"i") (dim=3|id=532|"j") NDTensors.Dense{Float64, Vector{Float64}}
    julia> norm(Hv - dag(vH))0.0

    Relationship to other Einstein notation-based libraries

    Here we show examples of different ways to perform the contraction "ab,bc,cd->ad" in ITensor.

    julia> da, dc = 2, 3;
    julia> db, dd = da, dc;
    julia> tags = ("a", "b", "c", "d");
    julia> dims = (da, db, dc, dd);
    julia> a, b, c, d = Index.(dims, tags);
    julia> Aab = random_itensor(a, b)ITensor ord=2 (dim=2|id=841|"a") (dim=2|id=164|"b") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Bbc = random_itensor(b, c)ITensor ord=2 (dim=2|id=164|"b") (dim=3|id=720|"c") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Ccd = random_itensor(c, d) # "ab,bc,cd->ad"ITensor ord=2 (dim=3|id=720|"c") (dim=3|id=836|"d") NDTensors.Dense{Float64, Vector{Float64}}
    julia> out1 = Aab * Bbc * CcdITensor ord=2 (dim=2|id=841|"a") (dim=3|id=836|"d") NDTensors.Dense{Float64, Vector{Float64}}
    julia> @show hassameinds(out1, (a, d)) # # "ba,bc,dc->ad"hassameinds(out1, (a, d)) = true true
    julia> Aba = replaceinds(Aab, (a, b) => (b, a))ITensor ord=2 (dim=2|id=164|"b") (dim=2|id=841|"a") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Cdc = replaceinds(Ccd, (c, d) => (d, c))ITensor ord=2 (dim=3|id=836|"d") (dim=3|id=720|"c") NDTensors.Dense{Float64, Vector{Float64}}
    julia> out2 = Aba * Bbc * CdcITensor ord=2 (dim=2|id=841|"a") (dim=3|id=836|"d") NDTensors.Dense{Float64, Vector{Float64}}
    julia> @show hassameinds(out2, (a, d)) # # since it doesn't check if the indices # are compatible in dimension, # so is not recommended in general.hassameinds(out2, (a, d)) = true true
    julia> using ITensors: setinds
    julia> Aba = setinds(Aab, (b, a))ITensor ord=2 (dim=2|id=164|"b") (dim=2|id=841|"a") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Cdc = setinds(Ccd, (d, c))ITensor ord=2 (dim=3|id=836|"d") (dim=3|id=720|"c") NDTensors.Dense{Float64, Vector{Float64}}
    julia> out2 = Aba * Bbc * CdcITensor ord=2 (dim=2|id=841|"a") (dim=3|id=836|"d") NDTensors.Dense{Float64, Vector{Float64}}
    julia> @show hassameinds(out2, (a, d)) # # the indices were made with these # prime levels in the first place) #hassameinds(out2, (a, d)) = true true
    julia> a = Index(da, "a")(dim=2|id=118|"a")
    julia> c = Index(dc, "c")(dim=3|id=954|"c")
    julia> b, d = a', c'((dim=2|id=118|"a")', (dim=3|id=954|"c")')
    julia> Aab = random_itensor(a, b)ITensor ord=2 (dim=2|id=118|"a") (dim=2|id=118|"a")' NDTensors.Dense{Float64, Vector{Float64}}
    julia> Bbc = random_itensor(b, c)ITensor ord=2 (dim=2|id=118|"a")' (dim=3|id=954|"c") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Ccd = random_itensor(c, d)ITensor ord=2 (dim=3|id=954|"c") (dim=3|id=954|"c")' NDTensors.Dense{Float64, Vector{Float64}}
    julia> out1 = Aab * Bbc * CcdITensor ord=2 (dim=2|id=118|"a") (dim=3|id=954|"c")' NDTensors.Dense{Float64, Vector{Float64}}
    julia> @show hassameinds(out1, (a, d))hassameinds(out1, (a, d)) = true true
    julia> Aba = swapprime(Aab, 0 => 1)ITensor ord=2 (dim=2|id=118|"a")' (dim=2|id=118|"a") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Cdc = swapprime(Ccd, 0 => 1)ITensor ord=2 (dim=3|id=954|"c")' (dim=3|id=954|"c") NDTensors.Dense{Float64, Vector{Float64}}
    julia> out2 = Aba * Bbc * CdcITensor ord=2 (dim=2|id=118|"a") (dim=3|id=954|"c")' NDTensors.Dense{Float64, Vector{Float64}}
    julia> @show hassameinds(out2, (a, d)) # # the indices were made with these # tags in the first place) #hassameinds(out2, (a, d)) = true true
    julia> a = Index(da, "a")(dim=2|id=85|"a")
    julia> c = Index(dc, "c")(dim=3|id=606|"c")
    julia> b, d = settags(a, "b"), settags(c, "d")((dim=2|id=85|"b"), (dim=3|id=606|"d"))
    julia> Aab = random_itensor(a, b)ITensor ord=2 (dim=2|id=85|"a") (dim=2|id=85|"b") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Bbc = random_itensor(b, c)ITensor ord=2 (dim=2|id=85|"b") (dim=3|id=606|"c") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Ccd = random_itensor(c, d)ITensor ord=2 (dim=3|id=606|"c") (dim=3|id=606|"d") NDTensors.Dense{Float64, Vector{Float64}}
    julia> out1 = Aab * Bbc * CcdITensor ord=2 (dim=2|id=85|"a") (dim=3|id=606|"d") NDTensors.Dense{Float64, Vector{Float64}}
    julia> @show hassameinds(out1, (a, d))hassameinds(out1, (a, d)) = true true
    julia> Aba = swaptags(Aab, "a", "b")ITensor ord=2 (dim=2|id=85|"b") (dim=2|id=85|"a") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Cdc = swaptags(Ccd, "c", "d")ITensor ord=2 (dim=3|id=606|"d") (dim=3|id=606|"c") NDTensors.Dense{Float64, Vector{Float64}}
    julia> out2 = Aba * Bbc * CdcITensor ord=2 (dim=2|id=85|"a") (dim=3|id=606|"d") NDTensors.Dense{Float64, Vector{Float64}}
    julia> @show hassameinds(out2, (a, d))
    -1.26482 0.972584 0.0521089
    julia> C = randn(dc, dd)3×3 Matrix{Float64}: 0.073576 -0.572066 0.237091 -0.355748 -0.244972 0.662303 1.91204 1.69045 0.713253
    julia> tags = ("a", "b", "c", "d")("a", "b", "c", "d")
    julia> dims = (da, db, dc, dd)(2, 2, 3, 3)
    julia> a, b, c, d = Index.(dims, tags)((dim=2|id=688|"a"), (dim=2|id=507|"b"), (dim=3|id=890|"c"), (dim=3|id=789|"d"))
    julia> Aab = itensor(A, a, b)ITensor ord=2 (dim=2|id=688|"a") (dim=2|id=507|"b") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Bbc = itensor(B, b, c)ITensor ord=2 (dim=2|id=507|"b") (dim=3|id=890|"c") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Ccd = itensor(C, c, d)ITensor ord=2 (dim=3|id=890|"c") (dim=3|id=789|"d") NDTensors.Dense{Float64, Vector{Float64}}
    julia> out1 = Aab * Bbc * CcdITensor ord=2 (dim=2|id=688|"a") (dim=3|id=789|"d") NDTensors.Dense{Float64, Vector{Float64}}
    julia> @show hassameinds(out1, (a, d))hassameinds(out1, (a, d)) = true true
    julia> Aba = itensor(A, b, a)ITensor ord=2 (dim=2|id=507|"b") (dim=2|id=688|"a") NDTensors.Dense{Float64, Vector{Float64}}
    julia> Cdc = itensor(C, d, c)ITensor ord=2 (dim=3|id=789|"d") (dim=3|id=890|"c") NDTensors.Dense{Float64, Vector{Float64}}
    julia> out2 = Aba * Bbc * CdcITensor ord=2 (dim=2|id=688|"a") (dim=3|id=789|"d") NDTensors.Dense{Float64, Vector{Float64}}
    julia> @show hassameinds(out2, (a, d)) # # #out2 = A[b, a] * B[b, c] * C[d, c] #@show hassameinds(out2, (a, d))hassameinds(out2, (a, d)) = true true
    diff --git a/dev/HDF5FileFormats.html b/dev/HDF5FileFormats.html index 033cfb31cc..8ea163b2a4 100644 --- a/dev/HDF5FileFormats.html +++ b/dev/HDF5FileFormats.html @@ -1,2 +1,2 @@ -HDF5 File Formats · ITensors.jl

    HDF5 File Formats

    This page lists the formats for the HDF5 representations of various types in the ITensors module.

    HDF5 is a portable file format which has a directory structure similar to a file system. In addition to containing "groups" (= directories) and "datasets" (= files), groups can have "attributes" appended to them, which are similar to 'tags' or 'keywords'. Unless otherwise specified, integers are 64-bit and signed (H5T_STD_I64LE). (For example, the "id" field of the Index type is stored as an unsigned 64-bit integer (H5T_STD_U64LE).)

    Each type in ITensor which is writeable to HDF5 is written to its own group, with the name of the group either specified by the user or specified to some default value when it is a subgroup of another ITensor type (for example, the Index type saves its TagSet in a subgroup named "tags").

    Each group corresponding to an ITensors type always carries the following attributes:

    • "type" –- a string such as Index or TagSet specifying the information necessary to determine the type of the object saved to the HDF5 group
    • "version" –- an integer specifying the file format version used to store the data. This version is in general different from the release version of ITensors.jl. The purpose of the version number is to aid in maintaining backwards compatibility, while allowing the format to be occasionally changed.

    The C++ version of ITensor uses exactly the same file formats listed below, for the purpose of interoperability with the Julia version of ITensor, even though conventions such as the "type" field values are Julia-centric.
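
    For example, here is a minimal sketch of round-tripping an ITensor through an HDF5 file with the HDF5.jl package (the file and group names are arbitrary):

    using ITensors, HDF5

    i = Index(2, "i")
    T = random_itensor(i, i')

    h5open("data.h5", "w") do f
      write(f, "T", T) # creates a group "T" with the attributes and datasets described below
    end

    T2 = h5open("data.h5", "r") do f
      read(f, "T", ITensor) # pass the type so the object can be reconstructed
    end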

    TagSet

    HDF5 file format for the ITensors.TagSet type.

    Attributes:

    • "version" = 1
    • "type" = "TagSet"

    Datasets and Subgroups:

    • "tags" [string] = a comma separated string of the tags in the TagSet

    QN

    HDF5 file format for the ITensors.QN type.

    Attributes:

    • "version" = 1
    • "type" = "QN"

    Datasets and Subgroups:

    • "names" [group] = array of strings (length 4) of names of quantum numbers
    • "vals" [group] = array of integers (length 4) of quantum number values
    • "mods" [group] = array of integers (length 4) of moduli of quantum numbers

    QNBlocks

    HDF5 file format for the ITensors.QNBlocks type. (Note: QNBlocks is equivalent to Vector{Pair{QN, Int64}}.)

    Attributes:

    • "version" = 1
    • "type" = "QNBlocks"

    Datasets and Subgroups:

    • "length" [integer] = the number of blocks (length of Vector)
    • "dims" [group] = array of (integer) dimensions of each block
    • "QN[n]" [group] = these groups "QN[1]", "QN[2]", etc. correspond to the QN of each block

    Index

    HDF5 file format for the ITensors.Index type.

    Attributes:

    • "version" = 1
    • "type" = "Index"
    • "space_type" = "Int" if the Index is a regular, dense Index or "QNBlocks" if the Index is a QNIndex (carries QN subspace information)

    Datasets and Subgroups:

    • "id" [unsigned integer] = id number of the Index
    • "dim" [integer] = dimension of the Index
    • "dir" [integer] = arrow direction of the Index, +1 for ITensors.Out and -1 for ITensors.In
    • "plev" [integer] = prime level of the Index
    • "tags" [group] = the TagSet of the Index

    Optional Datasets and Subgroups:

    • "space" [group] = if the "space_type" attribute is "QNBlocks", this group is present and represents a QNBlocks object

    IndexSet

    HDF5 file format for types in the Union type ITensors.Indices which includes IndexSet and tuples of Index objects.

    Attributes:

    • "version" = 1
    • "type" = "IndexSet"

    Datasets and Subgroups:

    • "length" [integer] = number of indices
    • "index_n" [group] = for n=1 to n=length each of these groups contains an Index

    ITensor

    HDF5 file format for the ITensors.ITensor type.

    Attributes:

    • "version" = 1
    • "type" = "ITensor"

    Datasets and Subgroups:

    • "inds" [group] = indices of the ITensor
    • "storage" [group] = storage of the ITensor (note that some earlier versions of ITensors.jl may call this group "store")

    NDTensors.Dense

    HDF5 file format for objects which are subtypes of ITensors.NDTensors.Dense.

    Attributes:

    • "version" = 1
    • "type" = "Dense{Float64}" or "Dense{ComplexF64}"

    Datasets and Subgroups:

    • "data" = array of either real or complex values (in the same dataset format used by the HDF5.jl library for storing Vector{Float64} or Vector{ComplexF64})

    NDTensors.BlockSparse

    HDF5 file format for objects which are subtypes of ITensors.NDTensors.BlockSparse.

    Attributes:

    • "version" = 1
    • "type" = "BlockSparse{Float64}" or "BlockSparse{ComplexF64}"

    Datasets and Subgroups:

    • "ndims" [integer] = number of dimensions (order) of the tensor
    • "offsets" = block offset data flattened into an array of integers
    • "data" = array of either real or complex values (in the same dataset format used by the HDF5.jl library for storing Vector{Float64} or Vector{ComplexF64})

    MPS

    HDF5 file format for ITensors.MPS

    Attributes:

    • "version" = 1
    • "type" = "MPS"

    Datasets and Subgroups:

    • "length" [integer] = number of tensors of the MPS
    • "rlim" [integer] = right orthogonality limit
    • "llim" [integer] = left orthogonality limit
    • "MPS[n]" [group,ITensor] = each of these groups, where n=1,...,length, stores the nth ITensor of the MPS

    MPO

    HDF5 file format for ITensors.MPO

    Attributes:

    • "version" = 1
    • "type" = "MPO"

    Datasets and Subgroups:

    • "length" [integer] = number of tensors of the MPO
    • "rlim" [integer] = right orthogonality limit
    • "llim" [integer] = left orthogonality limit
    • "MPO[n]" [group,ITensor] = each of these groups, where n=1,...,length, stores the nth ITensor of the MPO
    diff --git a/dev/ITensorType.html b/dev/ITensorType.html index c064b4271a..7aa6646ffd 100644 --- a/dev/ITensorType.html +++ b/dev/ITensorType.html @@ -60,20 +60,20 @@ NDTensors.Dense{Float64,Array{Float64,1}} 2×2 -0.3674957028513448 1.6904886171664615 1.2579101497658178 -1.3559959053693322 source

    Dense Constructors

    ITensors.ITensor - Method
    ITensor([::Type{ElT} = Float64, ]inds)
     ITensor([::Type{ElT} = Float64, ]inds::Index...)

    Construct an ITensor filled with zeros having indices inds and element type ElT. If the element type is not specified, it defaults to Float64.

    The storage will have NDTensors.Dense type.

    Examples

    i = Index(2,"index_i")
     j = Index(4,"index_j")
     k = Index(3,"index_k")
     
     A = ITensor(i,j)
    B = ITensor(ComplexF64,k,j)
    source
    ITensors.ITensor - Method
    ITensor([::Type{ElT} = Float64, ]::UndefInitializer, inds)
     ITensor([::Type{ElT} = Float64, ]::UndefInitializer, inds::Index...)

    Construct an ITensor filled with undefined elements having indices inds and element type ElT. If the element type is not specified, it defaults to Float64. One purpose for using this constructor is that initializing the elements in an undefined way is faster than initializing them to a set value such as zero.

    The storage will have NDTensors.Dense type.

    Examples

    i = Index(2,"index_i")
     j = Index(4,"index_j")
     k = Index(3,"index_k")
     
     A = ITensor(undef,i,j)
    B = ITensor(ComplexF64,undef,k,j)
    source
    ITensors.ITensor - Method
    ITensor([ElT::Type, ]x::Number, inds)
    ITensor([ElT::Type, ]x::Number, inds::Index...)

    Construct an ITensor with all elements set to x and indices inds.

    If x isa Int or x isa Complex{Int} then the elements will be set to float(x) unless specified otherwise by the first input.

    The storage will have NDTensors.Dense type.

    Examples

    i = Index(2,"index_i"); j = Index(4,"index_j"); k = Index(3,"index_k");

    A = ITensor(1.0, i, j)
    A = ITensor(1, i, j) # same as above
    B = ITensor(2.0+3.0im, j, k)

    Warning

    In future versions this may not automatically convert integer inputs with float, and in that case the particular element type should not be relied on.

    source
    ITensors.ITensor - Method
    ITensor([ElT::Type, ]A::AbstractArray, inds)
    +B = ITensor(ComplexF64,undef,k,j)
    source
    ITensors.ITensorMethod
    ITensor([ElT::Type, ]x::Number, inds)
    +ITensor([ElT::Type, ]x::Number, inds::Index...)

    Construct an ITensor with all elements set to x and indices inds.

    If x isa Int or x isa Complex{Int} then the elements will be set to float(x) unless specified otherwise by the first input.

    The storage will have NDTensors.Dense type.

    Examples

i = Index(2,"index_i")
 j = Index(4,"index_j")
 k = Index(3,"index_k")
 
 A = ITensor(1.0, i, j)
 A = ITensor(1, i, j) # same as above
 B = ITensor(2.0+3.0im, j, k)

Warning

In future versions this may not automatically convert integer inputs with float, and in that case the particular element type should not be relied on.

    source
    ITensors.ITensorMethod
    ITensor([ElT::Type, ]A::AbstractArray, inds)
     ITensor([ElT::Type, ]A::AbstractArray, inds::Index...)
     
     itensor([ElT::Type, ]A::AbstractArray, inds)
     T = ITensor(M, i, j)
     T[i => 1, j => 1] = 3.3
     M[1, 1] == 3.3
T[i => 1, j => 1] == 3.3
    Warning

    In future versions this may not automatically convert Int/Complex{Int} inputs to floating point versions with float (once tensor operations using Int/Complex{Int} are natively as fast as floating point operations), and in that case the particular element type should not be relied on. To avoid extra conversions (and therefore allocations) it is best practice to directly construct with itensor([0. 1; 1 0], i', dag(i)) if you want a floating point element type. The conversion is done as a performance optimization since often tensors are passed to BLAS/LAPACK and need to be converted to floating point types compatible with those libraries, but future projects in Julia may allow for efficient operations with more general element types (for example see https://github.com/JuliaLinearAlgebra/Octavian.jl).

    source
    ITensors.random_itensorMethod
    random_itensor([rng=Random.default_rng()], [ElT=Float64], inds)
     random_itensor([rng=Random.default_rng()], [ElT=Float64], inds::Index...)

    Construct an ITensor with type ElT and indices inds, whose elements are normally distributed random numbers. If the element type is not specified, it defaults to Float64.

    Examples

    i = Index(2,"index_i")
     j = Index(4,"index_j")
     k = Index(3,"index_k")
     
     A = random_itensor(i,j)
B = random_itensor(ComplexF64,k,j)
    source
    ITensors.onehotFunction
    onehot(ivs...)
     setelt(ivs...)
     onehot(::Type, ivs...)
     setelt(::Type, ivs...)

    Create an ITensor with all zeros except the specified value, which is set to 1.

    Examples

    i = Index(2,"i")
     
     j = Index(3,"j")
     B = onehot(i=>1,j=>3)
# B[i=>1,j=>3] == 1, all other elements zero
    source

    Dense View Constructors

    ITensors.itensorMethod
    itensor(args...; kwargs...)

Like the ITensor constructor, but it attempts to make a view of the input data when possible.

    source
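A small sketch of the aliasing behavior; whether a view is actually made depends on the storage and element types, so treat this as illustrative rather than guaranteed:

i = Index(2, "i")
A = [1.0 2.0; 3.0 4.0]
T = itensor(A, i', dag(i)) # may wrap A's data without copying
A[1, 1] = 5.0
T[i' => 1, i => 1]         # may now also be 5.0, since T can alias A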

    QN BlockSparse Constructors

    ITensors.ITensorMethod
    ITensor([::Type{ElT} = Float64, ][flux::QN = QN(), ]inds)
     ITensor([::Type{ElT} = Float64, ][flux::QN = QN(), ]inds::Index...)

    Construct an ITensor with BlockSparse storage filled with zero(ElT) where the nonzero blocks are determined by flux.

    If ElT is not specified it defaults to Float64.

    If flux is not specified, the ITensor will be empty (it will contain no blocks, and have an undefined flux). The flux will be set by the first element that is set.

    Examples

    julia> i
     (dim=3|id=212|"i") <Out>
      1: QN(0) => 1
      2  0
     
     julia> flux(A)
QN(-1)
    source
    ITensors.ITensorMethod
    ITensor([ElT::Type, ]A::AbstractArray, inds)
     ITensor([ElT::Type, ]A::AbstractArray, inds::Index...)
     
     itensor([ElT::Type, ]A::AbstractArray, inds)
     T = ITensor(M, i, j)
     T[i => 1, j => 1] = 3.3
     M[1, 1] == 3.3
T[i => 1, j => 1] == 3.3
    Warning

    In future versions this may not automatically convert Int/Complex{Int} inputs to floating point versions with float (once tensor operations using Int/Complex{Int} are natively as fast as floating point operations), and in that case the particular element type should not be relied on. To avoid extra conversions (and therefore allocations) it is best practice to directly construct with itensor([0. 1; 1 0], i', dag(i)) if you want a floating point element type. The conversion is done as a performance optimization since often tensors are passed to BLAS/LAPACK and need to be converted to floating point types compatible with those libraries, but future projects in Julia may allow for efficient operations with more general element types (for example see https://github.com/JuliaLinearAlgebra/Octavian.jl).

    source
    ITensor([ElT::Type, ]::AbstractArray, inds; tol=0.0, checkflux=true)

Create a block sparse ITensor from the input Array and a collection of QN indices. Zeros are dropped and nonzero blocks are determined from the nonzero values of the array.

    Optionally, you can set a tolerance such that elements less than or equal to the tolerance are dropped.

    By default, this will check that the flux of the nonzero blocks are consistent with each other. You can disable this check by setting checkflux=false.

    Examples

    julia> i = Index([QN(0)=>1, QN(1)=>2], "i");
     
     julia> A = [1e-9 0.0 0.0;
                 0.0 2.0 3.0;
     Block: (2, 2)
      [2:3, 2:3]
      2.0  3.0
 0.0  4.0
    source
    ITensors.ITensorMethod
    ITensor([::Type{ElT} = Float64,] ::UndefInitializer, flux::QN, inds)
     ITensor([::Type{ElT} = Float64,] ::UndefInitializer, flux::QN, inds::Index...)

    Construct an ITensor with indices inds and BlockSparse storage with undefined elements of type ElT, where the nonzero (allocated) blocks are determined by the provided QN flux. One purpose for using this constructor is that initializing the elements in an undefined way is faster than initializing them to a set value such as zero.

    The storage will have NDTensors.BlockSparse type.

    Examples

    i = Index([QN(0)=>1, QN(1)=>2], "i")
     A = ITensor(undef,QN(0),i',dag(i))
     B = ITensor(Float64,undef,QN(0),i',dag(i))
C = ITensor(ComplexF64,undef,QN(0),i',dag(i))
    source

    Diagonal constructors

    ITensors.diag_itensorMethod
    diag_itensor([::Type{ElT} = Float64, ]inds)
diag_itensor([::Type{ElT} = Float64, ]inds::Index...)

    Make a sparse ITensor of element type ElT with only elements along the diagonal stored. Defaults to having zero(T) along the diagonal.

    The storage will have NDTensors.Diag type.

    source
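A minimal usage sketch:

i = Index(3, "i")
D = diag_itensor(i', dag(i))              # 3×3 diagonal ITensor, zeros on the diagonal
Dc = diag_itensor(ComplexF64, i', dag(i)) # same, with a complex element type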
    ITensors.diag_itensorMethod
    diag_itensor([ElT::Type, ]v::AbstractVector, inds...)
diagitensor([ElT::Type, ]v::AbstractVector, inds...)

    Make a sparse ITensor with non-zero elements only along the diagonal. In general, the diagonal elements will be those stored in v and the ITensor will have element type eltype(v), unless specified explicitly by ElT. The storage will have NDTensors.Diag type.

    In the case when eltype(v) isa Union{Int, Complex{Int}}, by default it will be converted to float(v). Note that this behavior is subject to change in the future.

    The version diag_itensor will never output an ITensor whose storage data is an alias of the input vector data.

    The version diagitensor might output an ITensor whose storage data is an alias of the input vector data in order to minimize operations.

    source
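A sketch contrasting the copying and aliasing versions:

i = Index(3, "i")
v = [1.0, 2.0, 3.0]
D = diag_itensor(v, i', dag(i)) # always copies v
E = diagitensor(v, i', dag(i))  # may alias v's data to avoid a copy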
    ITensors.diag_itensorMethod
    diag_itensor([ElT::Type, ]x::Number, inds...)
diagitensor([ElT::Type, ]x::Number, inds...)

    Make a sparse ITensor with non-zero elements only along the diagonal. In general, the diagonal elements will be set to the value x and the ITensor will have element type eltype(x), unless specified explicitly by ElT. The storage will have NDTensors.Diag type.

    In the case when x isa Union{Int, Complex{Int}}, by default it will be converted to float(x). Note that this behavior is subject to change in the future.

    source
    ITensors.deltaMethod
    delta([::Type{ElT} = Float64, ]inds)
delta([::Type{ElT} = Float64, ]inds::Index...)

    Make a uniform diagonal ITensor with all diagonal elements one(ElT). Only a single diagonal element is stored.

    This function has an alias δ.

    source
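A common use of delta is to replace one Index with another of the same dimension, as in this sketch:

i = Index(2, "i")
j = Index(2, "j")
A = random_itensor(i)
B = A * delta(i, j) # B has index j in place of i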

    QN Diagonal constructors

    ITensors.diag_itensorMethod
    diag_itensor([::Type{ElT} = Float64, ][flux::QN = QN(), ]is)
diag_itensor([::Type{ElT} = Float64, ][flux::QN = QN(), ]is::Index...)

    Make an ITensor with storage type NDTensors.DiagBlockSparse with elements zero(ElT). The ITensor only has diagonal blocks consistent with the specified flux.

If the element type is not specified, it defaults to Float64. If the flux is not specified, it defaults to QN().

    source
    ITensors.deltaMethod
    delta([::Type{ElT} = Float64, ][flux::QN = QN(), ]is)
delta([::Type{ElT} = Float64, ][flux::QN = QN(), ]is::Index...)

    Make an ITensor with storage type NDTensors.DiagBlockSparse with uniform elements one(ElT). The ITensor only has diagonal blocks consistent with the specified flux.

If the element type is not specified, it defaults to Float64. If the flux is not specified, it defaults to QN().

    source
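A minimal sketch of the QN versions, assuming a QN-conserving Index:

i = Index([QN(0) => 1, QN(1) => 2], "i")
D = diag_itensor(QN(0), i', dag(i)) # diagonal blocks consistent with flux QN(0), filled with zeros
d = delta(QN(0), i', dag(i))        # uniform diagonal blocks with elements one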

    Convert to Array

    Core.ArrayMethod
Array{ElT, N}(T::ITensor, i::Index...)
 Array{ElT}(T::ITensor, i::Index...)
 Array(T::ITensor, i::Index...)
 
 Matrix{ElT}(T::ITensor, row_i::Index, col_i::Index)
 Matrix(T::ITensor, row_i::Index, col_i::Index)
     
     Vector{ElT}(T::ITensor)
Vector(T::ITensor)

    Given an ITensor T with indices i..., returns an Array with a copy of the ITensor's elements. The order in which the indices are provided indicates the order of the data in the resulting Array.

    source
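For example, as a sketch (the index order determines the layout of the result):

i = Index(2, "i")
j = Index(3, "j")
T = random_itensor(i, j)
A = Array(T, i, j)  # 2×3 Array
M = Matrix(T, j, i) # 3×2 Matrix, the transposed layout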
    NDTensors.arrayMethod
    array(T::ITensor, inds...)

    Convert an ITensor T to an Array.

The ordering of the elements in the Array is specified by the input indices inds. This tries to avoid copying if possible (i.e. it may return a view of the original data), for example if the ITensor's storage is Dense and the indices are already in the specified ordering so that no permutation is required.

    Warning

    Note that in the future we may return specialized AbstractArray types for certain storage types, for example a LinearAlgebra.Diagonal type for an ITensor with Diag storage. The specific storage type shouldn't be relied upon.

    See also matrix, vector.

    source
    NDTensors.matrixMethod
    matrix(T::ITensor, inds...)

    Convert an ITensor T to a Matrix.

The ordering of the elements in the Matrix is specified by the input indices inds. This tries to avoid copying if possible (i.e. it may return a view of the original data), for example if the ITensor's storage is Dense and the indices are already in the specified ordering so that no permutation is required.

    Warning

    Note that in the future we may return specialized AbstractArray types for certain storage types, for example a LinearAlgebra.Diagonal type for an ITensor with Diag storage. The specific storage type shouldn't be relied upon.

    See also array, vector.

    source
    NDTensors.vectorMethod
    vector(T::ITensor, inds...)

Convert an ITensor T to a Vector.

The ordering of the elements in the Vector is specified by the input indices inds. This tries to avoid copying if possible (i.e. it may return a view of the original data), for example if the ITensor's storage is Dense and the indices are already in the specified ordering so that no permutation is required.

    Warning

    Note that in the future we may return specialized AbstractArray types for certain storage types, for example a LinearAlgebra.Diagonal type for an ITensor with Diag storage. The specific storage type shouldn't be relied upon.

    See also array, matrix.

    source
    NDTensors.arrayMethod
    array(T::ITensor)

Given an ITensor T, returns an Array with a copy of the ITensor's elements, or a view in the case that the ITensor's storage is Dense.

    The ordering of the elements in the Array, in terms of which Index is treated as the row versus column, depends on the internal layout of the ITensor.

    Warning

    This method is intended for developer use only and not recommended for use in ITensor applications unless you know what you are doing (for example you are certain of the memory ordering of the ITensor because you permuted the indices into a certain order).

    See also matrix, vector.

    source
    NDTensors.matrixMethod
    matrix(T::ITensor)

Given an ITensor T with two indices, returns a Matrix with a copy of the ITensor's elements, or a view in the case that the ITensor's storage is Dense.

    The ordering of the elements in the Matrix, in terms of which Index is treated as the row versus column, depends on the internal layout of the ITensor.

    Warning

    This method is intended for developer use only and not recommended for use in ITensor applications unless you know what you are doing (for example you are certain of the memory ordering of the ITensor because you permuted the indices into a certain order).

    See also array, vector.

    source
    NDTensors.vectorMethod
    vector(T::ITensor)

Given an ITensor T with one index, returns a Vector with a copy of the ITensor's elements, or a view in the case that the ITensor's storage is Dense.

    See also array, matrix.

    source

    Getting and setting elements

    Base.getindexMethod
    getindex(T::ITensor, ivs...)

    Get the specified element of the ITensor, using a list of IndexVals or Pair{<:Index, Int}.

    Example

    i = Index(2; tags = "i")
     A = ITensor(2.0, i, i')
A[i => 1, i' => 2] # 2.0, same as: A[i' => 2, i => 1]
    source
    Base.setindex!Method
    setindex!(T::ITensor, x::Number, ivs...)
     
     setindex!(T::ITensor, x::Number, I::Integer...)
     
     
     # Some simple slicing is also supported
     A[i => 2, i' => :] = [2.0 3.0]
A[2, :] = [2.0 3.0]
    source

    Properties

    NDTensors.indsMethod
    inds(T::ITensor)

    Return the indices of the ITensor as a Tuple.

    source
    NDTensors.indMethod
    ind(T::ITensor, i::Int)

    Get the Index of the ITensor along dimension i.

    source
    ITensors.dirMethod
    dir(A::ITensor, i::Index)

    Return the direction of the Index i in the ITensor A.

    source
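A small sketch of these accessors:

i = Index(2, "i")
j = Index(3, "j")
T = random_itensor(i, j)
inds(T)   # (i, j)
ind(T, 2) # j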

    Priming and tagging

    ITensors.primeMethod
    prime[!](A::ITensor, plinc::Int = 1; <keyword arguments>) -> ITensor
     
prime(inds, plinc::Int = 1; <keyword arguments>) -> IndexSet

    Increase the prime level of the indices of an ITensor or collection of indices.

    Optionally, only modify the indices with the specified keyword arguments.

    Arguments

    • tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
    • plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.

    The ITensor functions come in two versions, f and f!. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).

    source
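A sketch of priming, including selective priming via the keyword arguments:

i = Index(2, "i")
j = Index(3, "j")
A = random_itensor(i, j)
Ap = prime(A)                # both indices primed: (i', j')
Aj = prime(A, 2; tags = "j") # only j is primed, by two levels: (i, j'')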
    ITensors.setprimeMethod
    setprime[!](A::ITensor, plev::Int; <keyword arguments>) -> ITensor
     
setprime(inds, plev::Int; <keyword arguments>) -> IndexSet

    Set the prime level of the indices of an ITensor or collection of indices.

    Optionally, only modify the indices with the specified keyword arguments.

    Arguments

    • tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
    • plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.

    The ITensor functions come in two versions, f and f!. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).

    source
    ITensors.noprimeMethod
    noprime[!](A::ITensor; <keyword arguments>) -> ITensor
     
noprime(inds; <keyword arguments>) -> IndexSet

    Set the prime level of the indices of an ITensor or collection of indices to zero.

    Optionally, only modify the indices with the specified keyword arguments.

    Arguments

    • tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
    • plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.

    The ITensor functions come in two versions, f and f!. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).

    source
    ITensors.mapprimeMethod
    replaceprime[!](A::ITensor, plold::Int, plnew::Int; <keyword arguments>) -> ITensor
     replaceprime[!](A::ITensor, plold => plnew; <keyword arguments>) -> ITensor
     mapprime[!](A::ITensor, <arguments>; <keyword arguments>) -> ITensor
     
     replaceprime(inds, plold::Int, plnew::Int; <keyword arguments>)
     replaceprime(inds::IndexSet, plold => plnew; <keyword arguments>)
mapprime(inds, <arguments>; <keyword arguments>)

    Set the prime level of the indices of an ITensor or collection of indices with prime level plold to plnew.

    Optionally, only modify the indices with the specified keyword arguments.

    Arguments

    • tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
    • plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.

    The ITensor functions come in two versions, f and f!. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).

    source
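For example, a sketch of mapping one prime level to another:

i = Index(2, "i")
A = random_itensor(i'', i)
B = replaceprime(A, 2 => 1) # B has indices (i', i)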
    ITensors.swapprimeMethod
    swapprime[!](A::ITensor, pl1::Int, pl2::Int; <keyword arguments>) -> ITensor
     swapprime[!](A::ITensor, pl1 => pl2; <keyword arguments>) -> ITensor
     
     swapprime(inds, pl1::Int, pl2::Int; <keyword arguments>)
swapprime(inds, pl1 => pl2; <keyword arguments>)

    Set the prime level of the indices of an ITensor or collection of indices with prime level pl1 to pl2, and those with prime level pl2 to pl1.

    Optionally, only modify the indices with the specified keyword arguments.

    Arguments

    • tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
    • plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.

    The ITensor functions come in two versions, f and f!. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).

    source
    ITensors.TagSets.addtagsMethod
    addtags[!](A::ITensor, ts::String; <keyword arguments>) -> ITensor
     
addtags(inds, ts::String; <keyword arguments>)

    Add the tags ts to the indices of an ITensor or collection of indices.

    Optionally, only modify the indices with the specified keyword arguments.

    Arguments

    • tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
    • plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.

    The ITensor functions come in two versions, f and f!. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).

    source
    ITensors.TagSets.removetagsMethod
    removetags[!](A::ITensor, ts::String; <keyword arguments>) -> ITensor
     
removetags(inds, ts::String; <keyword arguments>)

    Remove the tags ts from the indices of an ITensor or collection of indices.

    Optionally, only modify the indices with the specified keyword arguments.

    Arguments

    • tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
    • plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.

    The ITensor functions come in two versions, f and f!. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).

    source
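A sketch of adding and removing tags, using plev to restrict which indices are modified:

i = Index(2, "i")
A = random_itensor(i, i')
A = addtags(A, "Left"; plev = 1) # only i' receives the tag "Left"
A = removetags(A, "Left")        # the tag is removed wherever it appears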
    ITensors.TagSets.replacetagsMethod
    replacetags[!](A::ITensor, tsold::String, tsnew::String; <keyword arguments>) -> ITensor
     
replacetags(is::IndexSet, tsold::String, tsnew::String; <keyword arguments>) -> IndexSet

    Replace the tags tsold with tsnew for the indices of an ITensor.

    Optionally, only modify the indices with the specified keyword arguments.

    Arguments

    • tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
    • plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.

    The ITensor functions come in two versions, f and f!. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).

    source
    ITensors.settagsMethod
    settags[!](A::ITensor, ts::String; <keyword arguments>) -> ITensor
     
settags(is::IndexSet, ts::String; <keyword arguments>) -> IndexSet

    Set the tags of the indices of an ITensor or IndexSet to ts.

    Optionally, only modify the indices with the specified keyword arguments.

    Arguments

    • tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
    • plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.

    The ITensor functions come in two versions, f and f!. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).

    source
    ITensors.swaptagsMethod
    swaptags[!](A::ITensor, ts1::String, ts2::String; <keyword arguments>) -> ITensor
     
swaptags(is::IndexSet, ts1::String, ts2::String; <keyword arguments>) -> IndexSet

    Swap the tags ts1 with ts2 for the indices of an ITensor.

    Optionally, only modify the indices with the specified keyword arguments.

    Arguments

    • tags = nothing: if specified, only modify Index i if hastags(i, tags) == true.
    • plev = nothing: if specified, only modify Index i if hasplev(i, plev) == true.

    The ITensor functions come in two versions, f and f!. The latter modifies the ITensor in-place. In both versions, the ITensor storage is not modified or copied (so it returns an ITensor with a view of the original storage).

    source

    Index collections set operations

    ITensors.commonindsFunction
    commoninds(A, B; kwargs...)

    Return a Vector with indices that are common between the indices of A and B (the set intersection, similar to Base.intersect).

    Optional keyword arguments:

    • tags::String - a tag name or comma separated list of tag names that the returned indices must all have
    • plev::Int - common prime level that the returned indices must all have
    • inds - Index or collection of indices. Returned indices must come from this set of indices.
    source
    ITensors.commonindFunction
    commonind(A, B; kwargs...)

    Return the first Index common between the indices of A and B.

    See also commoninds.

    Optional keyword arguments:

    • tags::String - a tag name or comma separated list of tag names that the returned indices must all have
    • plev::Int - common prime level that the returned indices must all have
    • inds - Index or collection of indices. Returned indices must come from this set of indices.
    source
    ITensors.uniqueindsFunction
    uniqueinds(A, B; kwargs...)

Return a Vector with indices that are unique to the set of indices of A and not in B (the set difference, similar to Base.setdiff).

    Optional keyword arguments:

    • tags::String - a tag name or comma separated list of tag names that the returned indices must all have
    • plev::Int - common prime level that the returned indices must all have
    • inds - Index or collection of indices. Returned indices must come from this set of indices.
    source
    ITensors.uniqueindFunction
    uniqueind(A, B; kwargs...)

    Return the first Index unique to the set of indices of A and not in B.

    See also uniqueinds.

    Optional keyword arguments:

    • tags::String - a tag name or comma separated list of tag names that the returned indices must all have
    • plev::Int - common prime level that the returned indices must all have
    • inds - Index or collection of indices. Returned indices must come from this set of indices.
    source
    ITensors.noncommonindsFunction
    noncommoninds(A, B; kwargs...)

    Return a Vector with indices that are not common between the indices of A and B (the symmetric set difference, similar to Base.symdiff).

    Optional keyword arguments:

    • tags::String - a tag name or comma separated list of tag names that the returned indices must all have
    • plev::Int - common prime level that the returned indices must all have
    • inds - Index or collection of indices. Returned indices must come from this set of indices.
    source
    ITensors.noncommonindFunction
    noncommonind(A, B; kwargs...)

    Return the first Index not common between the indices of A and B.

    See also noncommoninds.

    Optional keyword arguments:

    • tags::String - a tag name or comma separated list of tag names that the returned indices must all have
    • plev::Int - common prime level that the returned indices must all have
    • inds - Index or collection of indices. Returned indices must come from this set of indices.
    source
    ITensors.unionindsFunction
    unioninds(A, B; kwargs...)

    Return a Vector with indices that are the union of the indices of A and B (the set union, similar to Base.union).

    Optional keyword arguments:

    • tags::String - a tag name or comma separated list of tag names that the returned indices must all have
    • plev::Int - common prime level that the returned indices must all have
    • inds - Index or collection of indices. Returned indices must come from this set of indices.
    source
    ITensors.unionindFunction
    unionind(A, B; kwargs...)

    Return the first Index in the union of the indices of A and B.

    See also unioninds.

    Optional keyword arguments:

    • tags::String - a tag name or comma separated list of tag names that the returned indices must all have
    • plev::Int - common prime level that the returned indices must all have
    • inds - Index or collection of indices. Returned indices must come from this set of indices.
    source
    ITensors.hascommonindsFunction
    hascommoninds(A, B; kwargs...)
     
hascommoninds(B; kwargs...) -> f::Function

    Check if the ITensors or sets of indices A and B have common indices.

    If only one ITensor or set of indices B is passed, return a function f such that f(A) = hascommoninds(A, B; kwargs...)

    source
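A sketch of these set operations on two ITensors that share one index:

i = Index(2, "i")
j = Index(2, "j")
k = Index(2, "k")
A = random_itensor(i, j)
B = random_itensor(j, k)
commoninds(A, B)    # [j]
uniqueinds(A, B)    # [i]
noncommoninds(A, B) # [i, k]
hascommoninds(A, B) # true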

    Index Manipulations

    ITensors.replaceindMethod
    replaceind[!](A::ITensor, i1::Index, i2::Index) -> ITensor

    Replace the Index i1 with the Index i2 in the ITensor.

    The indices must have the same space (i.e. the same dimension and QNs, if applicable).

    source
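For example, a sketch of swapping in a new Index of the same dimension:

i = Index(2, "i")
j = Index(2, "j")
A = random_itensor(i)
B = replaceind(A, i, j) # B has index j in place of i, sharing A's storage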
    ITensors.replaceindsMethod
    replaceinds(A::ITensor, inds1, inds2) -> ITensor
     
replaceinds!(A::ITensor, inds1, inds2)

    Replace the Index inds1[n] with the Index inds2[n] in the ITensor, where n runs from 1 to length(inds1) == length(inds2).

    The indices must have the same space (i.e. the same dimension and QNs, if applicable).

    The storage of the ITensor is not modified or copied (the output ITensor is a view of the input ITensor).

    source
    ITensors.swapindMethod
    swapind(A::ITensor, i1::Index, i2::Index) -> ITensor
     
swapind!(A::ITensor, i1::Index, i2::Index)

    Swap the Index i1 with the Index i2 in the ITensor.

    The indices must have the same space (i.e. the same dimension and QNs, if applicable).

    source
    ITensors.swapindsMethod
    swapinds(A::ITensor, inds1, inds2) -> ITensor
     
swapinds!(A::ITensor, inds1, inds2)

    Swap the Index inds1[n] with the Index inds2[n] in the ITensor, where n runs from 1 to length(inds1) == length(inds2).

    The indices must have the same space (i.e. the same dimension and QNs, if applicable).

    The storage of the ITensor is not modified or copied (the output ITensor is a view of the input ITensor).

    source

    Math operations

    Base.:*Method
    A::ITensor * B::ITensor
     contract(A::ITensor, B::ITensor)

    Contract ITensors A and B to obtain a new ITensor. This contraction * operator finds all matching indices common to A and B and sums over them, such that the result will have only the unique indices of A and B. To prevent indices from matching, their prime level or tags can be modified such that they no longer compare equal - for more information see the documentation on Index objects.

    Examples

    i = Index(2,"index_i"); j = Index(4,"index_j"); k = Index(3,"index_k")
     
     A = random_itensor(i,j)
     
     A = random_itensor(i,j,k)
     B = random_itensor(k,i,j)
C = A * B # inner product of A and B, all indices contracted
    source
    ITensors.dagMethod
    dag(T::ITensor; allow_alias = true)

    Complex conjugate the elements of the ITensor T and dagger the indices.

    By default, an alias of the ITensor is returned (i.e. the output ITensor may share data with the input ITensor). If allow_alias = false, an alias is never returned.

    source
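A sketch of the aliasing behavior:

i = Index(2, "i")
A = random_itensor(ComplexF64, i, i')
Ad = dag(A)                      # may share data with A
Ac = dag(A; allow_alias = false) # guaranteed to be an independent copy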
    ITensors.directsumMethod
    directsum(A::Pair{ITensor}, B::Pair{ITensor}, ...; tags)
     
     directsum(output_inds, A::Pair{ITensor}, B::Pair{ITensor}, ...; tags)

    Given a list of pairs of ITensors and indices, perform a partial direct sum of the tensors over the specified indices. Indices that are not specified to be summed must match between the tensors.

    (Note: Pair{ITensor} in Julia is short for Pair{ITensor,<:Any} which means any pair T => x where T is an ITensor.)

    If all indices are specified then the operation is equivalent to creating a block diagonal tensor.

    Returns the ITensor representing the partial direct sum as well as the new direct summed indices. The tags of the direct summed indices are specified by the keyword arguments.

    Optionally, pass the new direct summed indices of the output tensor as the first argument (either a single Index or a collection), which must be proper direct sums of the input indices that are specified to be direct summed.

    See Section 2.3 of https://arxiv.org/abs/1405.7786 for a definition of a partial direct sum of tensors.

    Examples

    x = Index(2, "x")
     i1 = Index(3, "i1")
     S, s = directsum(A1 => (i1, j1), A2 => (i2, j2); tags = ["sum_i", "sum_j"])
     length(s) == 2
     dim(s[1]) == dim(i1) + dim(i2)
    -dim(s[2]) == dim(j1) + dim(j2)
    source
    Base.expMethod
    exp(A::ITensor, Linds=Rinds', Rinds=inds(A,plev=0); ishermitian = false)

    Compute the exponential of the tensor A by treating it as a matrix $A_{lr}$ with the left index l running over all indices in Linds and r running over all indices in Rinds.

    Only accepts index lists Linds,Rinds such that: (1) length(Linds) + length(Rinds) == length(inds(A)) (2) length(Linds) == length(Rinds) (3) For each pair of indices (Linds[n],Rinds[n]), Linds[n] and Rinds[n] represent the same Hilbert space (the same QN structure in the QN case, or just the same length in the dense case), and appear in A with opposite directions.

    When ishermitian=true the exponential of Hermitian(A_{lr}) is computed internally.

    source
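
As a minimal sketch of the default index convention (primed indices as rows, unprimed as columns):

using ITensors

s = Index(2, "s")
A = random_itensor(s', s)
H = 0.5 * (A + swapprime(dag(A), 0, 1))  # Hermitian part of A

# By default, Linds are the primed indices and Rinds the unprimed ones
expH = exp(H; ishermitian = true)
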
    LinearAlgebra.nullspaceMethod
    nullspace(T::ITensor, left_inds...; tags="n", atol=1E-12, kwargs...)

    Viewing the ITensor T as a matrix with the provided left_inds viewed as the row space and remaining indices viewed as the right indices or column space, the nullspace function computes the right null space. That is, it will return a tensor N acting on the right indices of T such that T*N is zero. The returned tensor N will also have a new index with the label "n" which indexes through the 'vectors' in the null space.

    For example, if T has the indices i,j,k, calling N = nullspace(T,i,k) returns N with index j such that

           ___       ___
       i --|   |     |   |
           | T |--j--| N |--n  ≈ 0
       k --|   |     |   |
       ---       ---

    The index n can be obtained by calling n = uniqueindex(N,T)

    Note that the implementation of this function is subject to change in the future, in which case the precise atol value that gives a certain null space size may change in future versions of ITensor.

    Keyword arguments:

    • atol::Float64=1E-12 - singular values of T†*T below this value define the null space
    • tags::String="n" - choose the tags of the index selecting elements of the null space
    source
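
For instance, a minimal sketch (with j chosen larger than dim(i)*dim(k) so the null space is nontrivial):

using ITensors

i = Index(2, "i")
j = Index(6, "j")
k = Index(2, "k")
T = random_itensor(i, j, k)

N = nullspace(T, i, k)  # right null space, acting on the j index
n = uniqueind(N, T)     # the new "n" Index labeling the null space vectors
norm(T * N)             # approximately zero
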

    Decompositions

    LinearAlgebra.svdMethod
    svd(A::ITensor, inds::Index...; <keyword arguments>)

    Singular value decomposition (SVD) of an ITensor A, computed by treating the "left indices" provided collectively as a row index, and the remaining "right indices" as a column index (matricization of a tensor).

    The first three return arguments are U, S, and V, such that A ≈ U * S * V.

Whether or not the SVD performs a truncation depends on the keyword arguments provided.

    If the left or right set of indices are empty, all input indices are put on V or U respectively. To specify an empty set of left indices, you must explicitly use svd(A, ()) (svd(A) is currently undefined).

    Examples

    Computing the SVD of an order-three ITensor, such that the indices i and k end up on U and j ends up on V

    i = Index(2)
     j = Index(5)
     k = Index(2)
     A = random_itensor(i, j, k)
     U, S, V = svd(A, i, k);
     @show norm(A - U * S * V) <= 10 * eps() * norm(A)

The following code will truncate the last 2 singular values, since the total number of singular values is 4. The norm of the difference with the original tensor will be the square root of the sum of the squares of the truncated singular values.

Utrunc, Strunc, Vtrunc = svd(A, i, k; maxdim=2);
     @show norm(A - Utrunc * Strunc * Vtrunc) ≈ sqrt(S[3, 3]^2 + S[4, 4]^2)

Alternatively we can specify that we want to truncate the weights of the singular values up to a certain cutoff, so that the total truncation error is no larger than the cutoff.

    Utrunc2, Strunc2, Vtrunc2 = svd(A, i, k; cutoff=1e-10);
@show norm(A - Utrunc2 * Strunc2 * Vtrunc2) <= 1e-10

    Keywords

    • maxdim::Int: the maximum number of singular values to keep.
    • mindim::Int: the minimum number of singular values to keep.
    • cutoff::Float64: set the desired truncation error of the SVD, by default defined as the sum of the squares of the smallest singular values.
    • lefttags::String = "Link,u": set the tags of the Index shared by U and S.
    • righttags::String = "Link,v": set the tags of the Index shared by S and V.
    • alg::String = "divide_and_conquer". Options:
    • "divide_and_conquer" - A divide-and-conquer algorithm. LAPACK's gesdd. Fast, but may lead to some innacurate singular values for very ill-conditioned matrices. Also may sometimes fail to converge, leading to errors (in which case "qr_iteration" or "recursive" can be tried).
      • "qr_iteration" - Typically slower but more accurate for very ill-conditioned matrices compared to "divide_and_conquer". LAPACK's gesvd.
      • "recursive" - ITensor's custom svd. Very reliable, but may be slow if high precision is needed. To get an svd of a matrix A, an eigendecomposition of $A^{\dagger} A$ is used to compute U and then a qr of $A^{\dagger} U$ is used to compute V. This is performed recursively to compute small singular values.
    • use_absolute_cutoff::Bool = false: set if all probability weights below the cutoff value should be discarded, rather than the sum of discarded weights.
    • use_relative_cutoff::Bool = true: set if the singular values should be normalized for the sake of truncation.
    • min_blockdim::Int = 0: for SVD of block-sparse or QN ITensors, require that the number of singular values kept be greater than or equal to this value when possible

    See also: factorize, eigen

    source
    LinearAlgebra.eigenMethod
    eigen(A::ITensor[, Linds, Rinds]; <keyword arguments>)

    Eigendecomposition of an ITensor A, computed by treating the "left indices" Linds provided collectively as a row index, and remaining "right indices" Rinds as a column index (matricization of a tensor).

    If no indices are provided, pairs of primed and unprimed indices are searched for, with Linds taken to be the primed indices and Rinds taken to be the unprimed indices.

    The return arguments are the eigenvalues D and eigenvectors U as tensors, such that A * U ∼ U * D (more precisely they are approximately equal up to proper replacements of indices, see the example for details).

Whether or not eigen performs a truncation depends on the keyword arguments provided. Note that truncation is only well defined for positive semidefinite matrices.

    Arguments

• maxdim::Int: the maximum number of eigenvalues to keep.
• mindim::Int: the minimum number of eigenvalues to keep.
• cutoff::Float64: set the desired truncation error of the eigenvalues, by default defined as the sum of the squares of the smallest eigenvalues.
     D, U = eigen(A, Linds, Rinds)
     dl, dr = uniqueind(D, U), commonind(D, U)
     Ul = replaceinds(U, (Rinds..., dr) => (Linds..., dl))
A * U ≈ Ul * D # true

    See also: svd, factorize

    source
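
The definitions of A, Linds, and Rinds were elided above; a self-contained sketch of the same check reads:

using ITensors

i = Index(2, "i")
j = Index(3, "j")
Linds = (i', j')
Rinds = (i, j)
A = random_itensor(Linds..., Rinds...)

D, U = eigen(A, Linds, Rinds)
dl, dr = uniqueind(D, U), commonind(D, U)
Ul = replaceinds(U, (Rinds..., dr) => (Linds..., dl))
A * U ≈ Ul * D # true
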
    LinearAlgebra.factorizeMethod
    factorize(A::ITensor, Linds::Index...; <keyword arguments>)

    Perform a factorization of A into ITensors L and R such that A ≈ L * R.

    Arguments

    • ortho::String = "left": Choose orthogonality properties of the factorization.
      • "left": the left factor L is an orthogonal basis such that L * dag(prime(L, commonind(L,R))) ≈ I.
      • "right": the right factor R forms an orthogonal basis.
      • "none", neither of the factors form an orthogonal basis, and in general are made as symmetrically as possible (depending on the decomposition used).
    • which_decomp::Union{String, Nothing} = nothing: choose what kind of decomposition is used.
      • nothing: choose the decomposition automatically based on the other arguments. For example, when nothing is chosen and ortho = "left" or "right", and a cutoff is provided, svd or eigen is used depending on the provided cutoff (eigen is only used when the cutoff is greater than 1e-12, since it has a lower precision). When no truncation is requested qr is used for dense ITensors and svd for block-sparse ITensors (in the future qr will be used also for block-sparse ITensors in this case).
      • "svd": L = U and R = S * V for ortho = "left", L = U * S and R = V for ortho = "right", and L = U * sqrt.(S) and R = sqrt.(S) * V for ortho = "none". To control which svd algorithm is choose, use the svd_alg keyword argument. See the documentation for svd for the supported algorithms, which are the same as those accepted by the alg keyword argument.
      • "eigen": L = U and $R = U^{\dagger} A$ where U is determined from the eigendecompositon $A A^{\dagger} = U D U^{\dagger}$ for ortho = "left" (and vice versa for ortho = "right"). "eigen" is not supported for ortho = "none".
      • "qr": L=Q and R an upper-triangular matrix when ortho = "left", and R = Q and L a lower-triangular matrix when ortho = "right" (currently supported for dense ITensors only). In the future, other decompositions like QR (for block-sparse ITensors), polar, cholesky, LU, etc. are expected to be supported.

    For truncation arguments, see: svd

    source
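
A minimal sketch of the default usage (the decomposition used internally depends on the keyword arguments):

using ITensors

i = Index(2, "i")
j = Index(4, "j")
k = Index(3, "k")
A = random_itensor(i, j, k)

# L carries the indices i and k plus a new link index shared with R
L, R = factorize(A, i, k; ortho = "left")
A ≈ L * R  # true
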

    Memory operations

    ITensors.permuteMethod
    permute(T::ITensor, inds...; allow_alias = false)

    Return a new ITensor T with indices permuted according to the input indices inds. The storage of the ITensor is permuted accordingly.

    If called with allow_alias = true, it avoids copying data if possible. Therefore, it may return an alias of the input ITensor (an ITensor that shares the same data), such as if the permutation turns out to be trivial.

    By default, allow_alias = false, and it never returns an alias of the input ITensor.

    Examples

    i = Index(2, "index_i"); j = Index(4, "index_j"); k = Index(3, "index_k");
     T = random_itensor(i, j, k)
     
     pT_1 = permute(T, k, i, j)
     
     pT_alias = permute(T, i, j, k; allow_alias = true)
     pT_alias[1, 1, 1] = 12
T[1, 1, 1] == pT_alias[1, 1, 1]
    source
    NDTensors.denseMethod
    dense(T::ITensor)

    Make a new ITensor where the storage is the closest Dense storage, avoiding allocating new data if possible. For example, an ITensor with Diag storage will become Dense storage, filled with zeros except for the diagonal values.

    source
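
For instance, a small sketch using a diagonal ITensor (assuming the diag_itensor constructor with a vector of diagonal values):

using ITensors

i = Index(3, "i")
D = diag_itensor([1.0, 2.0, 3.0], i, i')  # Diag storage

T = dense(D)        # Dense storage, zeros off the diagonal
T[i => 2, i' => 2]  # 2.0
T[i => 1, i' => 2]  # 0.0
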
    NDTensors.denseblocksMethod
    denseblocks(T::ITensor)

    Make a new ITensor where any blocks which have a sparse format, such as diagonal sparsity, are made dense while still preserving the outer block-sparse structure. This method avoids allocating new data if possible.

    For example, an ITensor with DiagBlockSparse storage will have BlockSparse storage afterwards.

    source
diff --git a/dev/IncludedSiteTypes.html b/dev/IncludedSiteTypes.html index 27031978f4..031443e775 100644 --- a/dev/IncludedSiteTypes.html +++ b/dev/IncludedSiteTypes.html
sites = siteinds("Electron",N)

    Available keyword arguments for enabling and customizing quantum numbers (QN) subspaces:

    For example:

    sites = siteinds("Electron",N; conserve_nfparity=true)

    "Electron" States

    The available state names for "Electron" sites are:

    "Electron" Operators

    Operators associated with "Electron" sites can be made using the op function, for example

    Cup = op("Cup",s)
     Cup4 = op("Cup",sites[4])

    Single-fermion operators:

Non-fermionic single-particle operators (these do not have a Jordan-Wigner string attached, so they commute within systems such as OpSum or the apply function):

    "tJ" SiteType

    "tJ" sites are similar to electron sites, but cannot be doubly occupied The states of site indices with the "tJ" SiteType correspond to $|0\rangle$, $|\!\uparrow\rangle$, $|\!\downarrow\rangle$.

    Making a single "tJ" site or collection of N "tJ" sites

    s = siteind("tJ")
     sites = siteinds("tJ",N)

    Available keyword arguments for enabling and customizing quantum numbers (QN) subspaces:

    For example:

    sites = siteinds("tJ",N; conserve_nfparity=true)

    "tJ" States

    The available state names for "tJ" sites are:

    "tJ" Operators

    Operators associated with "tJ" sites can be made using the op function, for example

    Cup = op("Cup",s)
Cup4 = op("Cup",sites[4])

    Single-fermion operators:

Non-fermionic single-particle operators (these do not have a Jordan-Wigner string attached, so they commute within systems such as OpSum or the apply function):
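
As a rough sketch of how these operators combine in practice (the couplings and terms here are illustrative, not a canonical t-J Hamiltonian):

using ITensors, ITensorMPS

N = 6
sites = siteinds("tJ", N; conserve_qns = true)

t, J = 1.0, 0.4
os = OpSum()
for b in 1:(N - 1)
  # Hopping terms (Jordan-Wigner strings are handled by the "tJ" SiteType)
  os += -t, "Cdagup", b, "Cup", b + 1
  os += -t, "Cdagup", b + 1, "Cup", b
  os += -t, "Cdagdn", b, "Cdn", b + 1
  os += -t, "Cdagdn", b + 1, "Cdn", b
  # Spin exchange
  os += J / 2, "S+", b, "S-", b + 1
  os += J / 2, "S-", b, "S+", b + 1
  os += J, "Sz", b, "Sz", b + 1
end
H = MPO(os, sites)
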

diff --git a/dev/IndexSetType.html b/dev/IndexSetType.html index 6209325767..a653193f4c 100644 --- a/dev/IndexSetType.html +++ b/dev/IndexSetType.html
Index collections · ITensors.jl

    Index collections

Collections of Index are used throughout ITensors.jl to represent the dimensions of tensors. In general, collections that are recognized and returned by ITensors.jl functions are either Vector of Index or Tuple of Index, depending on the context. For example, internally an ITensor has a static number of indices, so it stores a Tuple of Index, while set operations like commoninds((i, j, k), (j, k, l)) will return a Vector [j, k], since the operation is inherently dynamic: the number of indices in the intersection can't, in general, be known before running the code. Vector of Index and Tuple of Index can usually be used interchangeably, but one or the other may be faster depending on the operation being performed.
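
For example, a small sketch:

using ITensors

i, j, k, l = Index(2, "i"), Index(3, "j"), Index(4, "k"), Index(5, "l")

# The intersection is inherently dynamic, so a Vector is returned
commoninds((i, j, k), (j, k, l)) == [j, k]  # true
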

    Priming and tagging

    Documentation for priming and tagging collections of Index can be found in the ITensor Priming and tagging section.

    Set operations

    Documentation for set operations involving Index collections can be found in the ITensor Index collections set operations section.

    Subsets

    ITensors.getfirstMethod
    getfirst(f::Function, is::Indices)

    Get the first Index matching the pattern function, return nothing if not found.

    source
    ITensors.getfirstMethod
    getfirst(is::Indices)

    Return the first Index in the Indices. If the Indices is empty, return nothing.

    source

    Iterating

    ITensors.eachvalMethod
    eachval(is::Index...)
eachval(is::Tuple{Vararg{Index}})

    Create an iterator whose values correspond to a Cartesian indexing over the dimensions of the provided Index objects.

    source
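
For instance, a small sketch:

using ITensors

i = Index(2)
j = Index(3)

vals = collect(eachval(i, j))
length(vals) == dim(i) * dim(j)  # true: one value per Cartesian combination
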
    ITensors.eachindvalMethod
    eachindval(is::Index...)
     eachindval(is::Tuple{Vararg{Index}})

    Create an iterator whose values are Index=>value pairs corresponding to a Cartesian indexing over the dimensions of the provided Index objects.

    Example

    i = Index(3; tags="i")
     j = Index(2; tags="j")
     T = random_itensor(j, i)
     for iv in eachindval(i, j)
       @show T[iv...]
end
    source
    ITensors.dirMethod
    dir(is::Indices, i::Index)

    Return the direction of the Index i in the Indices is.

    source
diff --git a/dev/IndexType.html b/dev/IndexType.html index 3c40116829..b8b0e006e9 100644 --- a/dev/IndexType.html +++ b/dev/IndexType.html
Index · ITensors.jl

    Index

    Description

    ITensors.IndexType

    An Index represents a single tensor index with fixed dimension dim. Copies of an Index compare equal unless their tags are different.

    An Index carries a TagSet, a set of tags which are small strings that specify properties of the Index to help distinguish it from other Indices. There is a special tag which is referred to as the integer tag or prime level which can be incremented or decremented with special priming functions.

Internally, an Index has a fixed id number, which is how the ITensor library knows two indices are copies of a single original Index. Index objects must have the same id, as well as the same tags, to compare equal.

    source
    ITensors.QNIndexType

    A QN Index is an Index with QN block storage instead of just an integer dimension. The QN block storage is a vector of pairs of QNs and block dimensions. The total dimension of a QN Index is the sum of the dimensions of the blocks of the Index.

    source

    Constructors

    ITensors.IndexMethod
    Index(dim::Int; tags::Union{AbstractString, TagSet} = "",
                     plev::Int = 0)

    Create an Index with a unique id, a TagSet given by tags, and a prime level plev.

    Examples

    julia> i = Index(2; tags="l", plev=1)
     (dim=2|id=818|"l")'
     
     1
     
     julia> tags(i)
    -"l"
    source
    ITensors.IndexMethod
    Index(dim::Integer, tags::Union{AbstractString, TagSet}; plev::Int = 0)

    Create an Index with a unique id and a tagset given by tags.

    Examples

    julia> i = Index(2, "l,tag")
    +"l"
    source
    ITensors.IndexMethod
    Index(dim::Integer, tags::Union{AbstractString, TagSet}; plev::Int = 0)

    Create an Index with a unique id and a tagset given by tags.

    Examples

    julia> i = Index(2, "l,tag")
     (dim=2|id=58|"l,tag")
     
     julia> dim(i)
     0
     
     julia> tags(i)
    -"l,tag"
    source
    ITensors.IndexMethod
    Index(qnblocks::Pair{QN, Int64}...; dir::Arrow = Out,
                                         tags = "",
                                    plev::Integer = 0)

    Construct a QN Index from a list of pairs of QN and block dimensions.

    Example

    Index(QN("Sz", -1) => 1, QN("Sz", 1) => 1; tags = "i")
    source
    ITensors.IndexMethod
    Index(qnblocks::Vector{Pair{QN, Int64}}; dir::Arrow = Out,
                                              tags = "",
                                         plev::Integer = 0)

    Construct a QN Index from a Vector of pairs of QN and block dimensions.

    Note: in the future, this may enforce that all blocks have the same QNs (which would allow for some optimizations, for example when constructing random QN ITensors).

    Example

    Index([QN("Sz", -1) => 1, QN("Sz", 1) => 1]; tags = "i")
    source
    ITensors.IndexMethod
    Index(qnblocks::Vector{Pair{QN, Int64}}, tags; dir::Arrow = Out,
                                               plev::Integer = 0)

    Construct a QN Index from a Vector of pairs of QN and block dimensions.

    Example

    Index([QN("Sz", -1) => 1, QN("Sz", 1) => 1], "i"; dir = In)
    source

    Properties

    ITensors.idMethod
    id(i::Index)

Obtain the id of an Index, which is a unique 64-bit integer.

    source
    ITensors.hasidMethod
    hasid(i::Index, id::ITensors.IDType)

    Check if an Index i has the provided id.

    Examples

    julia> i = Index(2)
     (dim=2|id=321)
     
     julia> hasid(i, id(i))
     (dim=2|id=17)
     
     julia> hasid(i, id(j))
false
    source
    ITensors.TagSets.set_strict_tags!Method
    set_strict_tags!(enable::Bool) -> Bool

    Enable or disable checking for overflow of the number of tags of a TagSet or the number of characters of a tag. If enabled (set to true), an error will be thrown if overflow occurs, otherwise the overflow will be ignored and the extra tags or tag characters will be dropped. This could cause unexpected bugs if tags are being used to distinguish Index objects that have the same ids and prime levels, but that is generally discouraged and should only be used if you know what you are doing.

    See also ITensors.using_strict_tags.

    source
    ITensors.TagSets.hastagsMethod
    hastags(i::Index, ts::Union{AbstractString,TagSet})

    Check if an Index i has the provided tags, which can be a string of comma-separated tags or a TagSet object.

    Examples

    julia> i = Index(2, "SpinHalf,Site,n=3")
     (dim=2|id=861|"Site,SpinHalf,n=3")
     
     julia> hastags(i, "SpinHalf,Site")
     true
     
     julia> hastags(i, "Link")
false
    source
    ITensors.hasplevMethod
    hasplev(i::Index, plev::Int)

    Check if an Index i has the provided prime level.

    Examples

    julia> i = Index(2; plev=2)
     (dim=2|id=543)''
     
     julia> hasplev(i, 2)
     true
     
     julia> hasplev(i, 1)
false
    source
    NDTensors.dimMethod
    dim(i::Index)

    Obtain the dimension of an Index.

    For a QN Index, this is the sum of the block dimensions.

    source
    Base.:==Method
==(i1::Index, i2::Index)

Compare indices for equality. First the ids are compared, then the prime levels are compared, and finally the tags are compared.

    source
    ITensors.dirMethod
    dir(i::Index)

    Return the direction of an Index (ITensors.In, ITensors.Out, or ITensors.Neither).

    source

    Priming and tagging methods

    ITensors.primeMethod
    prime(i::Index, plinc::Int = 1)

Return a copy of Index i with its prime level incremented by the amount plinc.

    source
    Base.:^Method
    ^(i::Index, pl::Int)

    Prime an Index using the notation i^3.

    source
    ITensors.setprimeMethod
    setprime(i::Index, plev::Int)

Return a copy of Index i with its prime level set to plev.

    source
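
A few illustrative one-liners tying these priming methods together:

using ITensors

i = Index(2, "i")

plev(prime(i)) == 1       # prime increments the prime level by 1
plev(prime(i, 3)) == 3    # ...or by a specified amount
plev(i^2) == 2            # ^ notation for priming
plev(setprime(i, 5)) == 5 # setprime sets the prime level outright
prime(i) == i'            # ' is shorthand for prime
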
    ITensors.settagsMethod
    settags(i::Index, ts)

Return a copy of Index i with tags replaced by the ones given. The ts argument can be a comma-separated string of tags or a TagSet.

    Examples

    julia> i = Index(2, "SpinHalf,Site,n=3")
     (dim=2|id=543|"Site,SpinHalf,n=3")
     
     julia> hastags(i, "Link")
     true
     
     julia> hastags(j, "n=4,Link")
true
    source
    ITensors.TagSets.addtagsMethod
    addtags(i::Index,ts)

    Return a copy of Index i with the specified tags added to the existing ones. The ts argument can be a comma-separated string of tags or a TagSet.

    source
    ITensors.TagSets.removetagsMethod
    removetags(i::Index, ts)

    Return a copy of Index i with the specified tags removed. The ts argument can be a comma-separated string of tags or a TagSet.

    source
    ITensors.TagSets.replacetagsMethod
    replacetags(i::Index, tsold, tsnew)
     
     replacetags(i::Index, tsold => tsnew)

    If the tag set of i contains the tags specified by tsold, replaces these with the tags specified by tsnew, preserving any other tags. The arguments tsold and tsnew can be comma-separated strings of tags, or TagSet objects.

    Examples

    julia> i = Index(2; tags="l,x", plev=1)
     (dim=2|id=83|"l,x")'
     (dim=2|id=83|"m,x")'
     
     julia> replacetags(i, "l" => "m")
(dim=2|id=83|"m,x")'
    source

    Methods

    NDTensors.simMethod
    sim(i::Index; tags = tags(i), plev = plev(i), dir = dir(i))

    Produces an Index with the same properties (dimension or QN structure) but with a new id.

    source

    Iterating

    ITensors.eachvalMethod
    eachval(i::Index)

    Create an iterator whose values range over the dimension of the provided Index.

    source
    ITensors.eachindvalMethod
    eachindval(i::Index)

    Create an iterator whose values are Pairs of the form i=>n with n from 1:dim(i). This iterator is useful for accessing elements of an ITensor in a loop without needing to know the ordering of the indices. See also eachindval(is::Index...).

    source
diff --git a/dev/MPSandMPO.html b/dev/MPSandMPO.html index 0e946341b6..822530db60 100644 --- a/dev/MPSandMPO.html +++ b/dev/MPSandMPO.html
y = convert(MPS, Y)
outer(x, y) # Incorrect! Site indices must be unique.
outer(x', y) # Incorrect! Site indices must be unique.
outer(addtags(x, "Out"), addtags(y, "In")) # This performs a proper outer product.

    The keyword arguments determine the truncation, and accept the same arguments as contract(::MPO, ::MPO; kwargs...).

    See also apply, contract.

    ITensorMPS.projectorMethod
    projector(x::MPS; <keyword argument>) -> MPO

    Computes the projector onto the state x. In Dirac notation, this is the operation |x⟩⟨x|/|⟨x|x⟩|².

    Use keyword arguments to control the level of truncation, which are the same as those accepted by contract(::MPO, ::MPO; kw...).

    Keywords

• normalize::Bool=true: whether or not to normalize the input MPS before forming the projector. If normalize==false and the input MPS is not already normalized, this function will not output a proper projector, and simply outputs outer(x, x) = |x⟩⟨x|, i.e. the projector scaled by norm(x)^2.
    • truncation keyword arguments accepted by contract(::MPO, ::MPO; kw...).

    See also outer, contract.
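
For instance, a minimal sketch (the MPS and truncation settings are arbitrary):

using ITensors, ITensorMPS

s = siteinds("S=1/2", 4)
x = random_mps(s; linkdims = 2)

P = projector(x)   # MPO approximation of |x⟩⟨x|/⟨x|x⟩
Px = apply(P, x)   # projecting x onto itself returns x up to normalization
inner(x, Px) ≈ inner(x, x)  # true
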

diff --git a/dev/Multithreading.html b/dev/Multithreading.html index 669810dcca..11f9fc0cb1 100644 --- a/dev/Multithreading.html +++ b/dev/Multithreading.html
$ julia -t 4
$ JULIA_NUM_THREADS=4 julia

    In addition, we have found that it is best to disable BLAS and Strided multithreading when using block sparse multithreading. You can do that with the commands using LinearAlgebra; BLAS.set_num_threads(1) and ITensors.Strided.disable_threads().

    See also: ITensors.enable_threaded_blocksparse, ITensors.disable_threaded_blocksparse, ITensors.using_threaded_blocksparse.

    source
    enable_threaded_blocksparse(enable::Bool)

    enable_threaded_blocksparse(true) enables threaded block sparse operations (equivalent to enable_threaded_blocksparse()).

enable_threaded_blocksparse(false) disables threaded block sparse operations (equivalent to disable_threaded_blocksparse()).

    source

    Here is a simple example of using block sparse multithreading to speed up a sparse tensor contraction:

    using BenchmarkTools
     using ITensors, ITensorMPS
     using LinearAlgebra
     using Strided
     Threaded contract:
       5.934 ms (446 allocations: 7.37 MiB)
     
C_contract ≈ C_threaded_contract = true

In addition, we plan to add more threading to other parts of the code beyond contraction (such as SVD) and improve composability with other forms of threading like BLAS and Strided, so stay tuned!

diff --git a/dev/Observer.html b/dev/Observer.html index d2043e54c5..e2d138fc30 100644 --- a/dev/Observer.html +++ b/dev/Observer.html
energy, psi = dmrg(H,psi0; nsweeps, cutoff, maxdim, observer=obs, outputlevel=1)
return
end

diff --git a/dev/OpSum.html b/dev/OpSum.html index 9631da2843..8b4d344ee7 100644 --- a/dev/OpSum.html +++ b/dev/OpSum.html
OpSum · ITensors.jl

    OpSum

    Description

    ITensors.Ops.OpSumType

    An OpSum represents a sum of operator terms.

Often it is used to create a matrix product operator (MPO) approximation of the sum of the terms in the OpSum object. Each term is a product of local operators specified by names such as "Sz" or "N", times an optional coefficient which can be real or complex.

    Which local operator names are available is determined by the function op associated with the TagType defined by special Index tags, such as "S=1/2", "S=1", "Fermion", and "Electron".

    source

    Methods

    ITensorMPS.add!Function
    add!(opsum::OpSum,
          op1::String, i1::Int)
     
     add!(opsum::OpSum,
     sites = siteinds("S=1/2",4)
     H = MPO(os,sites)
     H = MPO(Float32,os,sites)
H = MPO(os,sites; splitblocks=false)
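
The construction of os was elided above; a self-contained sketch of the full workflow (with illustrative Heisenberg-style terms) reads:

using ITensors, ITensorMPS

os = OpSum()
for j in 1:3
  os += "Sz", j, "Sz", j + 1
  os += 0.5, "S+", j, "S-", j + 1
  os += 0.5, "S-", j, "S+", j + 1
end

sites = siteinds("S=1/2", 4)
H = MPO(os, sites)
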
diff --git a/dev/ProjMPO.html b/dev/ProjMPO.html index c3012e3705..126cd3b837 100644 --- a/dev/ProjMPO.html +++ b/dev/ProjMPO.html
(P::ProjMPO)(v::ITensor)

    Efficiently multiply the ProjMPO P by an ITensor v in the sense that the ProjMPO is a generalized square matrix or linear operator and v is a generalized vector in the space where it acts. The returned ITensor will have the same indices as v. The operator overload P(v) is shorthand for product(P,v).

    ITensorMPS.position!Method
    position!(P::ProjMPO, psi::MPS, pos::Int)

    Given an MPS psi, shift the projection of the MPO represented by the ProjMPO P such that the set of unprojected sites begins with site pos. This operation efficiently reuses previous projections of the MPO on sites that have already been projected. The MPS psi must have compatible bond indices with the previous projected MPO tensors for this operation to succeed.

    ITensorMPS.noisetermMethod
    noiseterm(P::ProjMPO,
               phi::ITensor,
          ortho::String)

    Return a "noise term" or density matrix perturbation ITensor as proposed in Phys. Rev. B 72, 180403 for aiding convergence of DMRG calculations. The ITensor phi is the contracted product of MPS tensors acted on by the ProjMPO P, and ortho is a String which can take the values "left" or "right" depending on the sweeping direction of the DMRG calculation.

    Properties

    Base.lengthMethod
    length(P::ProjMPO)

The length of a ProjMPO is the same as the length of the MPO used to construct it.

    Base.eltypeMethod
    eltype(P::ProjMPO)

    Deduce the element type (such as Float64 or ComplexF64) of the tensors in the ProjMPO P.

    Base.sizeMethod
    size(P::ProjMPO)

The size of a ProjMPO is its dimensions (d,d) when viewed as a matrix or linear operator acting on a space of dimension d.

For example, if a ProjMPO maps from a space with indices (a,s1,s2,b) to the space (a',s1',s2',b') then the size is (d,d) where d = dim(a)*dim(s1)*dim(s2)*dim(b)
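
As a minimal sketch of the typical workflow (the Hamiltonian, MPS, and site positions here are illustrative):

using ITensors, ITensorMPS

N = 8
s = siteinds("S=1/2", N)
os = OpSum()
for j in 1:(N - 1)
  os += "Sz", j, "Sz", j + 1
end
H = MPO(os, s)
psi = random_mps(s; linkdims = 4)

P = ProjMPO(H)
position!(P, psi, 3)   # unprojected sites start at site 3
phi = psi[3] * psi[4]  # two-site "vector" (ProjMPO defaults to two open sites)
Hphi = P(phi)          # has the same indices as phi
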

diff --git a/dev/ProjMPOSum.html b/dev/ProjMPOSum.html index 76c2e540a6..dcaf76a0e4 100644 --- a/dev/ProjMPOSum.html +++ b/dev/ProjMPOSum.html
(P::ProjMPOSum)(v::ITensor)

    Efficiently multiply the ProjMPOSum P by an ITensor v in the sense that the ProjMPOSum is a generalized square matrix or linear operator and v is a generalized vector in the space where it acts. The returned ITensor will have the same indices as v. The operator overload P(v) is shorthand for product(P,v).

    ITensorMPS.position!Method
    position!(P::ProjMPOSum, psi::MPS, pos::Int)

    Given an MPS psi, shift the projection of the MPO represented by the ProjMPOSum P such that the set of unprojected sites begins with site pos. This operation efficiently reuses previous projections of the MPOs on sites that have already been projected. The MPS psi must have compatible bond indices with the previous projected MPO tensors for this operation to succeed.

    ITensorMPS.noisetermMethod
    noiseterm(P::ProjMPOSum,
               phi::ITensor,
          ortho::String)

    Return a "noise term" or density matrix perturbation ITensor as proposed in Phys. Rev. B 72, 180403 for aiding convergence of DMRG calculations. The ITensor phi is the contracted product of MPS tensors acted on by the ProjMPOSum P, and ortho is a String which can take the values "left" or "right" depending on the sweeping direction of the DMRG calculation.

    Properties

    Base.eltypeMethod
    eltype(P::ProjMPOSum)

    Deduce the element type (such as Float64 or ComplexF64) of the tensors in the ProjMPOSum P.

    Base.sizeMethod
    size(P::ProjMPOSum)

The size of a ProjMPOSum is its dimensions (d,d) when viewed as a matrix or linear operator acting on a space of dimension d.

For example, if a ProjMPOSum maps from a space with indices (a,s1,s2,b) to the space (a',s1',s2',b') then the size is (d,d) where d = dim(a)*dim(s1)*dim(s2)*dim(b)

diff --git a/dev/QN.html b/dev/QN.html index 33232d2e53..f631faa235 100644 --- a/dev/QN.html +++ b/dev/QN.html
QN · ITensors.jl

    QN

    Description

    ITensors.QuantumNumbers.QNType

    A QN object stores a collection of up to four named values such as ("Sz",1) or ("N",0). These values can include a third integer "m" which makes them obey addition modulo m, for example ("P",1,2) for a value obeying addition mod 2. (The default is regular integer addition).

    Adding or subtracting pairs of QN objects performs addition and subtraction element-wise on each of the named values. If a name is missing from the collection, its value is treated as zero.

    source
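
For example, a small sketch of the addition rules:

using ITensors

QN("Sz", 1) + QN("Sz", -1) == QN("Sz", 0)                # element-wise addition
QN(("N", 1)) + QN(("Sz", 2)) == QN(("N", 1), ("Sz", 2))  # missing names count as zero
QN("P", 1, 2) + QN("P", 1, 2) == QN("P", 0, 2)           # addition modulo 2
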

    Constructors

    ITensors.QuantumNumbers.QNMethod
    QN(qvs...)

    Construct a QN from a set of up to four named value tuples.

    Examples

    q = QN(("Sz",1))
     q = QN(("N",1),("Sz",-1))
    -q = QN(("P",0,2),("Sz",0)).
    source
    ITensors.QuantumNumbers.QNType
    QN(name,val::Int,modulus::Int=1)

    Construct a QN with a single named value by providing the name, value, and optional modulus.

    source
    ITensors.QuantumNumbers.QNType
    QN(val::Int,modulus::Int=1)

    Construct a QN with a single unnamed value (equivalent to the name being the empty string) with optional modulus.

    source

    Properties

    ITensors.valMethod
    val(q::QN,name)

Get the value within the QN q corresponding to the string name.

    source
    Base.zeroMethod
    zero(q::QN)

    Returns a QN object containing the same names as q, but with all values set to zero.

    source
diff --git a/dev/QNTricks.html b/dev/QNTricks.html index 0b556de4df..4c25048cec 100644 --- a/dev/QNTricks.html +++ b/dev/QNTricks.html
Symmetric (QN conserving) tensors: background and usage · ITensors.jl

    Symmetric (QN Conserving) Tensors: Background and Usage

Here is a collection of background material and example code for understanding how symmetric tensors (tensors with conserved quantum numbers) work in ITensors.jl.

    Combiners and Symmetric Tensors

In ITensors.jl, combiners are special sparse tensors that represent the action of taking the tensor product of one or more indices. They generalize the idea of reshaping and permuting. For dense ITensors, a combiner is just the action of permuting and reshaping the data of the tensor. For symmetric tensors (quantum number conserving tensors represented as block sparse tensors), the combiner also fuses symmetry sectors together. They can be used for various purposes. Generally they are used internally in the library, for example in order to reshape a high-order ITensor into an order-2 ITensor to perform a matrix decomposition like an SVD or eigendecomposition.

    For example:

julia> using ITensors

       # This is a short code showing how a combiner
       # can be used to combine and fuse indices

julia> i = Index([QN(0) => 2, QN(1) => 3], "i")
(dim=5|id=201|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3

julia> j = Index([QN(0) => 2, QN(1) => 3], "j")
(dim=5|id=386|"j") <Out>
 1: QN(0) => 2
 2: QN(1) => 3

julia> A = random_itensor(i, dag(j))
ITensor ord=2
(dim=5|id=201|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3
(dim=5|id=386|"j") <In>
 1: QN(0) => 2
 2: QN(1) => 3
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}

julia> C = combiner(i, dag(j); tags = "c", dir = dir(i))
ITensor ord=3
(dim=25|id=78|"c") <Out>
 1: QN(-1) => 6
 2: QN(0) => 13
 3: QN(1) => 6
(dim=5|id=201|"i") <In>
 1: QN(0) => 2
 2: QN(1) => 3
(dim=5|id=386|"j") <Out>
 1: QN(0) => 2
 2: QN(1) => 3
NDTensors.Combiner

julia> inds(A)
((dim=5|id=201|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3, (dim=5|id=386|"j") <In>
 1: QN(0) => 2
 2: QN(1) => 3)

julia> inds(A * C)
((dim=25|id=78|"c") <Out>
 1: QN(-1) => 6
 2: QN(0) => 13
 3: QN(1) => 6,)

You can see that the combiner reshapes the indices of A into a single Index that contains the tensor product of the two input spaces. The fused sectors have sizes QN(-1) => 2 * 3, QN(0) => 2 * 2 + 3 * 3, and QN(1) => 3 * 2 (determined from all combinations of the sectors of the different indices, where the QNs are added and the block dimensions are multiplied). The ordering of the sectors is determined internally by ITensors.jl.

    You can also use a combiner on a single Index, which can be helpful for changing the direction of an Index or combining multiple sectors of the same symmetry into a single sector:

julia> using ITensors

# This is a short code showing how a combiner
# can be used to "flip" the direction of an Index

julia> i = Index([QN(0) => 2, QN(1) => 3], "i")
(dim=5|id=161|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3

julia> j = dag(Index([QN(0) => 2, QN(1) => 3], "j"))
(dim=5|id=1|"j") <In>
 1: QN(0) => 2
 2: QN(1) => 3

julia> A = random_itensor(i, j)
ITensor ord=2
(dim=5|id=161|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3
(dim=5|id=1|"j") <In>
 1: QN(0) => 2
 2: QN(1) => 3
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}

julia> C = combiner(j; tags = "jflip", dir = -dir(j))
ITensor ord=2
(dim=5|id=22|"jflip") <Out>
 1: QN(-1) => 3
 2: QN(0) => 2
(dim=5|id=1|"j") <Out>
 1: QN(0) => 2
 2: QN(1) => 3
NDTensors.Combiner

julia> inds(A)
((dim=5|id=161|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3, (dim=5|id=1|"j") <In>
 1: QN(0) => 2
 2: QN(1) => 3)

julia> inds(A * C)
((dim=5|id=22|"jflip") <Out>
 1: QN(-1) => 3
 2: QN(0) => 2, (dim=5|id=161|"i") <Out>
 1: QN(0) => 2
 2: QN(1) => 3)

Unless you are writing very specialized custom code with symmetric tensors, this is generally not needed.

Block Sparsity and Quantum Numbers

In general, not all blocks that are allowed according to the flux will actually exist in the tensor (which helps in many cases for efficiency). Usually this happens when the tensor is first constructed and not all blocks are explicitly set:

julia> using ITensors

julia> i = Index([QN(0) => 1, QN(1) => 1])
(dim=2|id=715) <Out>
 1: QN(0) => 1
 2: QN(1) => 1
    julia> A = ITensor(i', dag(i));
    julia> A[2, 2] = 1.0;
julia> @show A;
A = ITensor ord=2
Dim 1: (dim=2|id=715)' <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=2|id=715) <In>
 1: QN(0) => 1
 2: QN(1) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×2
Block(2, 2)
 [2:2, 2:2]
  1.0

julia> D, U = eigen(A; ishermitian=true);

julia> @show D;
D = ITensor ord=2
Dim 1: (dim=1|id=750|"Link,eigen")' <Out>
 1: QN(1) => 1
Dim 2: (dim=1|id=750|"Link,eigen") <In>
 1: QN(1) => 1
NDTensors.DiagBlockSparse{Float64, Vector{Float64}, 2}
 1×1
Block(1, 1)
 [1:1, 1:1]
  1.0

julia> @show U;
U = ITensor ord=2
Dim 1: (dim=2|id=715) <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=1|id=750|"Link,eigen") <In>
 1: QN(1) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×1
Block(2, 1)
 [2:2, 1:1]
  1.0

If we had set A[1, 1] = 0.0 as well, then all of the allowed blocks (according to the flux QN(0)) would exist and would be included in the eigendecomposition:

julia> using ITensors

julia> i = Index([QN(0) => 1, QN(1) => 1])
(dim=2|id=205) <Out>
 1: QN(0) => 1
 2: QN(1) => 1
    julia> A = ITensor(i', dag(i));
    julia> A[2, 2] = 1.0;
    julia> A[1, 1] = 0.0;
julia> @show A;
A = ITensor ord=2
Dim 1: (dim=2|id=205)' <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=2|id=205) <In>
 1: QN(0) => 1
 2: QN(1) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×2
Block(2, 2)
 [2:2, 2:2]
  1.0
Block(1, 1)
 [1:1, 1:1]
  0.0

julia> D, U = eigen(A; ishermitian=true);

julia> @show D;
D = ITensor ord=2
Dim 1: (dim=2|id=612|"Link,eigen")' <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=2|id=612|"Link,eigen") <In>
 1: QN(0) => 1
 2: QN(1) => 1
NDTensors.DiagBlockSparse{Float64, Vector{Float64}, 2}
 2×2
Block(1, 1)
 [1:1, 1:1]
  0.0
Block(2, 2)
 [2:2, 2:2]
  1.0

julia> @show U;
U = ITensor ord=2
Dim 1: (dim=2|id=205) <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=2|id=612|"Link,eigen") <In>
 1: QN(0) => 1
 2: QN(1) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×2
Block(1, 1)
 [1:1, 1:1]
  1.0
Block(2, 2)
 [2:2, 2:2]
  1.0

"Missing" blocks can also occur with tensor contractions, since the final blocks of the output tensor are made from combinations of contractions of blocks from the input tensors, and there is no guarantee that all flux-consistent blocks will end up in the result:

julia> using ITensors

julia> i = Index([QN(0) => 1, QN(1) => 1])
(dim=2|id=227) <Out>
 1: QN(0) => 1
 2: QN(1) => 1

julia> j = Index([QN(0) => 1])
(dim=1|id=616) <Out>
 1: QN(0) => 1
    julia> A = ITensor(i, dag(j));
    julia> A[2, 1] = 1.0;
julia> @show A;
A = ITensor ord=2
Dim 1: (dim=2|id=227) <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=1|id=616) <In>
 1: QN(0) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×1
Block(2, 1)
 [2:2, 1:1]
  1.0

julia> A2 = prime(A, i) * dag(A);

julia> @show A2;
A2 = ITensor ord=2
Dim 1: (dim=2|id=227)' <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=2|id=227) <In>
 1: QN(0) => 1
 2: QN(1) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×2
Block(2, 2)
 [2:2, 2:2]
  1.0

julia> D, U = eigen(A2; ishermitian=true);

julia> @show D;
D = ITensor ord=2
Dim 1: (dim=1|id=896|"Link,eigen")' <Out>
 1: QN(1) => 1
Dim 2: (dim=1|id=896|"Link,eigen") <In>
 1: QN(1) => 1
NDTensors.DiagBlockSparse{Float64, Vector{Float64}, 2}
 1×1
Block(1, 1)
 [1:1, 1:1]
  1.0

julia> @show U;
U = ITensor ord=2
Dim 1: (dim=2|id=227) <Out>
 1: QN(0) => 1
 2: QN(1) => 1
Dim 2: (dim=1|id=896|"Link,eigen") <In>
 1: QN(1) => 1
NDTensors.BlockSparse{Float64, Vector{Float64}, 2}
 2×1
Block(2, 1)
 [2:2, 1:1]
  1.0
diff --git a/dev/RunningOnGPUs.html b/dev/RunningOnGPUs.html

Bmtl = mtl(B)

# Perform tensor operations on Apple GPU
Amtl * Bmtl

Note that we highly recommend using these new package extensions as opposed to ITensorGPU.jl, which is ITensor's previous CUDA backend. The package extensions are better integrated into the main library, so they are more reliable and better supported right now. We plan to deprecate ITensorGPU.jl in the future.

    GPU backends

ITensor currently provides package extensions for the following GPU backends:

• CUDA.jl (NVIDIA GPUs)
• cuTENSOR.jl (NVIDIA GPUs, accelerated tensor contractions)
• AMDGPU.jl (AMD GPUs, ROCm)
• Metal.jl (Apple GPUs)

    Our goal is to support all GPU backends which are supported by the JuliaGPU organization.

Notice that cuTENSOR.jl is an extension of CUDA.jl that provides new functionality for accelerated binary tensor contractions. If the cuTENSOR.jl library is loaded, ITensors with CuArray data are contracted using cuTENSOR. If cuTENSOR.jl is not loaded but CUDA.jl is, binary tensor contractions are mapped to matrix multiplications and performed using cuBLAS.
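For example, here is a minimal sketch of contracting two ITensors on an NVIDIA GPU (assuming a working CUDA.jl setup; loading cuTENSOR.jl as well would route the contraction through cuTENSOR instead of cuBLAS):

using ITensors
using CUDA

i, j, k = Index(100), Index(100), Index(100)
A = random_itensor(i, j)
B = random_itensor(j, k)

# Transfer the tensor data to the GPU
Acu = cu(A)
Bcu = cu(B)

# The contraction now runs on the GPU
Acu * Bcu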

    Some important caveats to keep in mind related to the ITensor GPU backends are:

    The table below summarizes each backend's current capabilities.

                            CUDA            cuTENSOR        ROCm            Metal           oneAPI
Contractions (dense)        ✓ (cuBLAS)      ✓ (cuTENSOR)    ✓               ✓               N/A[oneapi]
QR (dense)                  ✓ (cuSOLVER)    ✓ (cuSOLVER)    On CPU[linalg]  On CPU[linalg]  N/A[oneapi]
SVD (dense)                 ✓ (cuSOLVER)    ✓ (cuSOLVER)    On CPU[linalg]  On CPU[linalg]  N/A[oneapi]
Eigendecomposition (dense)  ✓ (cuSOLVER)    ✓ (cuSOLVER)    On CPU[linalg]  On CPU[linalg]  N/A[oneapi]
Double precision (Float64)  ✓               ✓               ✓               N/A[metal]      N/A[oneapi]
Block sparse                ✓[blocksparse]  ✓[blocksparse]  ✓[blocksparse]  ✓[blocksparse]  N/A[oneapi]
diff --git a/dev/SiteType.html b/dev/SiteType.html

 0.0  0.0  0.0  0.0
 0.0  1.0  0.0  0.0
 0.0  0.0  0.0  0.0
 0.0  0.0  0.0  1.0


    Many operators are available, for example:

    • SiteType"S=1/2": "Sz", "Sx", "Sy", "S+", "S-", ...
    • SiteType"Electron": "Nup", "Ndn", "Nupdn", "Ntot", "Cup", "Cdagup", "Cdn", "Cdagdn", "Sz", "Sx", "Sy", "S+", "S-", ...
    • ...

    You can view the source code for the internal SiteType definitions and operators that are defined here.

    source

    Methods

    ITensors.SiteTypes.opFunction
    op(opname::String, s::Index; kwargs...)

    Return an ITensor corresponding to the operator named opname for the Index s. The operator is constructed by calling an overload of either the op or op! methods which take a SiteType argument that corresponds to one of the tags of the Index s and an OpName"opname" argument that corresponds to the input operator name.

    Operator names can be combined using the "*" symbol, for example "S+*S-" or "Sz*Sz*Sz". The result is an ITensor made by forming each operator then contracting them together in a way corresponding to the usual operator product or matrix multiplication.

    The op system is used by the OpSum system to convert operator names into ITensors, and can be used directly such as for applying operators to MPS.

    Example

    s = Index(2, "Site,S=1/2")
Sz = op("Sz", s)
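Combined operator names work the same way; for example (a small sketch reusing s from above):

SpSm = op("S+*S-", s)  # forms S+ and S-, then contracts them as in a matrix product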

    To see all of the operator names defined for the site types included with ITensor, please view the source code for each site type. Note that some site types such as "S=1/2" and "Qubit" are aliases for each other and share operator definitions.

    source
    op(X::AbstractArray, s::Index...)
     op(M::Matrix, s::Index...)

    Given a matrix M and a set of indices s,t,... return an operator ITensor with matrix elements given by M and indices s, s', t, t'

    Example

julia> s = siteind("S=1/2")
(dim=2|id=575|"S=1/2,Site")

julia> Sz = op([0.5   0.0
                0.0  -0.5], s)
ITensor ord=2 (dim=2|id=575|"S=1/2,Site")' (dim=2|id=575|"S=1/2,Site")
NDTensors.Dense{Float64, Vector{Float64}}
    source
    op(opname::String,sites::Vector{<:Index},n::Int; kwargs...)

    Return an ITensor corresponding to the operator named opname for the n'th Index in the array sites.

    Example

    s = siteinds("S=1/2", 4)
Sz2 = op("Sz", s, 2)
    source
    ITensors.SiteTypes.stateFunction
    state(s::Index, name::String; kwargs...)

    Return an ITensor corresponding to the state named name for the Index s. The returned ITensor will have s as its only index.

    The terminology here is based on the idea of a single-site state or wavefunction in physics.

    The state function is implemented for various Index tags by overloading either the state or state! methods which take a SiteType argument corresponding to one of the tags of the Index s and an StateName"name" argument that corresponds to the input state name.

    The state system is used by the MPS type to construct product-state MPS and for other purposes.

    Example

    s = Index(2, "Site,S=1/2")
     sup = state(s,"Up")
     sdn = state(s,"Dn")
     sxp = state(s,"X+")
sxm = state(s,"X-")
    source
    ITensors.valFunction
    val(q::QN,name)

    Get the value within the QN q corresponding to the string name

    source
    val(s::Index, name::String)

    Return an integer corresponding to the name of a certain value the Index s can take. In other words, the val function maps strings to specific integer values within the range 1:dim(s).

    The val function is implemented for various Index tags by overloading methods named val which take a SiteType argument corresponding to one of the tags of the Index s and an ValName"name" argument that corresponds to the input name.

    Example

    s = Index(2, "Site,S=1/2")
     val(s,"Up") == 1
     val(s,"Dn") == 2
     
     s = Index(2, "Site,Fermion")
     val(s,"Emp") == 1
val(s,"Occ") == 2
    source
    ITensors.spaceFunction
    space(::SiteType"Qubit";
           conserve_qns = false,
           conserve_parity = conserve_qns,
           conserve_number = false,
           qnname_parity = "Parity",
      qnname_number = "Number")

    Create the Hilbert space for a site of type "Qubit".

    Optionally specify the conserved symmetries and their quantum number labels.

    source
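These space definitions are what siteind and siteinds call under the hood. As a usage sketch (the keyword argument is forwarded through siteind to the overload above):

using ITensors, ITensorMPS

s = siteind("Qubit"; conserve_parity=true)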
    space(::SiteType"S=1/2";
           conserve_qns = false,
           conserve_sz = conserve_qns,
           conserve_szparity = false,
           qnname_sz = "Sz",
      qnname_szparity = "SzParity")

    Create the Hilbert space for a site of type "S=1/2".

    Optionally specify the conserved symmetries and their quantum number labels.

    source
    space(::SiteType"S=1";
           conserve_qns = false,
           conserve_sz = conserve_qns,
      qnname_sz = "Sz")

    Create the Hilbert space for a site of type "S=1".

    Optionally specify the conserved symmetries and their quantum number labels.

    source
    space(::SiteType"Fermion";
           conserve_qns=false,
           conserve_nf=conserve_qns,
           conserve_nfparity=conserve_qns,
           qnname_nf = "Nf",
           qnname_nfparity = "NfParity",
           qnname_sz = "Sz",
      conserve_sz = false)

    Create the Hilbert space for a site of type "Fermion".

    Optionally specify the conserved symmetries and their quantum number labels.

    source
    space(::SiteType"Electron";
           conserve_qns = false,
           conserve_sz = conserve_qns,
           conserve_nf = conserve_qns,
           conserve_nfparity = conserve_qns,
           qnname_sz = "Sz",
           qnname_nf = "Nf",
      qnname_nfparity = "NfParity")

    Create the Hilbert space for a site of type "Electron".

    Optionally specify the conserved symmetries and their quantum number labels.

    source
    space(::SiteType"tJ";
           conserve_qns = false,
           conserve_sz = conserve_qns,
           conserve_nf = conserve_qns,
           conserve_nfparity = conserve_qns,
           qnname_sz = "Sz",
           qnname_nf = "Nf",
      qnname_nfparity = "NfParity")

    Create the Hilbert space for a site of type "tJ".

    Optionally specify the conserved symmetries and their quantum number labels.

    source
    space(::SiteType"Qudit";
           dim = 2,
           conserve_qns = false,
           conserve_number = false,
      qnname_number = "Number")

    Create the Hilbert space for a site of type "Qudit".

    Optionally specify the conserved symmetries and their quantum number labels.

    source
    space(::SiteType"Boson";
           dim = 2,
           conserve_qns = false,
           conserve_number = false,
      qnname_number = "Number")

    Create the Hilbert space for a site of type "Boson".

    Optionally specify the conserved symmetries and their quantum number labels.

source

diff --git a/dev/Sweeps.html b/dev/Sweeps.html

4 cutoff = 1.0E-12, maxdim = 400, mindim = 20, noise = 0.0E+00
5 cutoff = 1.0E-12, maxdim = 800, mindim = 20, noise = 1.0E-11
6 cutoff = 1.0E-12, maxdim = 800, mindim = 20, noise = 0.0E+00

    Modifying Sweeps Objects

    ITensorMPS.setmaxdim!Function
    maxdim!(sw::Sweeps,maxdims::Int...)

    Set the maximum MPS bond dimension for each sweep by providing up to nsweep(sw) values. If fewer values are provided, the last value is repeated for the remaining sweeps.

    ITensorMPS.setcutoff!Function
cutoff!(sw::Sweeps, cutoffs::Real...)

    Set the MPS truncation error used for each sweep by providing up to nsweep(sw) values. If fewer values are provided, the last value is repeated for the remaining sweeps.

    ITensorMPS.setnoise!Function
noise!(sw::Sweeps, noises::Real...)

    Set the noise-term coefficient used for each sweep by providing up to nsweep(sw) values. If fewer values are provided, the last value is repeated for the remaining sweeps.

    ITensorMPS.setmindim!Function
mindim!(sw::Sweeps, mindims::Int...)

    Set the minimum MPS bond dimension for each sweep by providing up to nsweep(sw) values. If fewer values are provided, the last value is repeated for the remaining sweeps.
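For example (a short sketch using the setters above; values beyond those provided repeat the last one):

using ITensors, ITensorMPS

sweeps = Sweeps(6)
maxdim!(sweeps, 20, 60, 100, 400, 800)  # sweep 6 reuses 800
cutoff!(sweeps, 1E-12)                  # same cutoff for every sweep
noise!(sweeps, 1E-7, 1E-8, 0.0)
mindim!(sweeps, 10)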

    Getting Sweeps Object Data

    ITensorMPS.nsweepMethod
    nsweep(sw::Sweeps)
length(sw::Sweeps)

    Obtain the number of sweeps parameterized by this sweeps object.

    NDTensors.maxdimMethod
    maxdim(sw::Sweeps,n::Int)

    Maximum MPS bond dimension allowed by the Sweeps object sw during sweep n

    ITensorMPS.cutoffMethod
    cutoff(sw::Sweeps,n::Int)

    Truncation error cutoff setting of the Sweeps object sw during sweep n

    ITensorMPS.noiseMethod
    noise(sw::Sweeps,n::Int)

    Noise term coefficient setting of the Sweeps object sw during sweep n

    NDTensors.mindimMethod
    mindim(sw::Sweeps,n::Int)

    Minimum MPS bond dimension allowed by the Sweeps object sw during sweep n
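A small sketch of reading these settings back from a Sweeps object like the one constructed above:

for n in 1:nsweep(sweeps)
  @show maxdim(sweeps, n), cutoff(sweeps, n), noise(sweeps, n), mindim(sweeps, n)
end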

diff --git a/dev/UpgradeGuide_0.1_to_0.2.html b/dev/UpgradeGuide_0.1_to_0.2.html

 2: QN("Sz",-1) => 1
(dim=2|id=810|"S=1/2,Site,n=4") <Out>
 1: QN("Sz",1) => 1
 2: QN("Sz",-1) => 1


    This shouldn't affect end users in general. The new convention is a bit more intuitive since the quantum number can be thought of as counting the total number of 1 bits in the state, though the conventions can be mapped to each other with a constant.

    maxlinkdim for MPS/MPO with no indices

maxlinkdim(::MPS/MPO) returns a minimum of 1 (previously it returned 0 for an MPS/MPO without any link indices) (PR #663).

diff --git a/dev/examples/DMRG.html b/dev/examples/DMRG.html

After sweep 4, |psi| = 2.863 MiB, |PH| = 7.246 MiB
After sweep 4 energy=-44.127710946536645 maxlinkdim=56 maxerr=9.99E-09 time=0.445
After sweep 5, |psi| = 3.108 MiB, |PH| = 7.845 MiB
After sweep 5 energy=-44.127736798226536 maxlinkdim=57 maxerr=9.98E-09 time=0.564

diff --git a/dev/examples/ITensor.html b/dev/examples/ITensor.html

T = random_itensor(k,m)
@show T
    T = ITensor ord=2
Dim 1: (dim=4|id=719|"index_k")
Dim 2: (dim=2|id=729|"index_m")
     NDTensors.Dense{Float64, Vector{Float64}}
      4×2
      -0.037025446544394686   0.4967800795298839
     T .= myf.(T)

    Making an ITensor with a Single Non-Zero Element

    It is often useful to make ITensors with all elements zero except for a specific element that is equal to 1.0. Use cases can include making product-state quantum wavefunctions or contracting single-element ITensors with other ITensors to set their indices to a fixed value.

    To make such an ITensor, use the onehot function. Borrowing terminology from engineering, a "one hot" vector or tensor has a single element equal to 1.0 and the rest zero. (In previous versions of ITensor this function was called setelt.)

    The ITensor function onehot takes one or more Index-value Pairs such as i=>2 and j=>1 and returns an ITensor with a 1.0 in the location specified by the Index values:

    i = Index(2)
     O1 = onehot(i=>1)
     println(O1)
    ITensor ord=1
Dim 1: (dim=2|id=4)
     NDTensors.Dense{Float64, Vector{Float64}}
      2-element
      1.0
      0.0
    O2 = onehot(i=>2)
     println(O2)
    ITensor ord=1
Dim 1: (dim=2|id=749)
     NDTensors.Dense{Float64, Vector{Float64}}
      2-element
      0.0
      1.0
    j = Index(3)
     T = onehot(i=>2,j=>3)
     println(T)
    ITensor ord=2
Dim 1: (dim=2|id=970)
Dim 2: (dim=3|id=868)
     NDTensors.Dense{Float64, Vector{Float64}}
      2×3
      0.0  0.0  0.0
  0.0  0.0  1.0
     @show norm(U*S*V-T)
     @show (norm(U*S*V - T)/norm(T))^2

    QR Factorization

    Computing the QR factorization of an ITensor works in a similar way as for the SVD. In addition to passing the ITensor you want to factorize, you must also pass the indices you want to end up on the tensor Q, in other words to be treated as the "row" indices for the purpose of defining the QR factorization.

    Say we want to compute the QR factorization of an ITensor T with indices i,j,k, putting the indices i and k onto Q and the remaining indices onto R. We can do this as follows:

    T = random_itensor(i,j,k)
     Q,R = qr(T,(i,k);positive=true)

    Note the use of the optional positive=true keyword argument, which ensures that the diagonal elements of R are non-negative. With this option, the QR factorization is unique, which can be useful in certain cases.

    Combining Multiple Indices into One Index

    It can be very useful to combine or merge multiple indices of an ITensor into a single Index. Say we have an ITensor with indices i,j,k and we want to combine Index i and Index k into a new Index. This new Index (call it c) will have a dimension whose size is the dimension of i times the dimension of k.

    To carry out this procedure we can make a special kind of ITensor: a combiner. To make a combiner, call the function combiner, passing the indices you want to combine:

    C = combiner(i,k; tags="c")

    Then if we have an ITensor

    T = random_itensor(i,j,k)
@show inds(T)
    ((dim=4|id=262|"i"), (dim=3|id=284|"j"), (dim=2|id=301|"k"))

    we can combine indices i and k by contracting with the combiner:

    CT = C * T

    Printing out the indices of the new ITensor CT we can see that it has only two indices:

    @show inds(CT)
    ((dim=8|id=629|"c"), (dim=3|id=284|"j"))

    The first is the newly made combined Index, which was made for us by the combiner function and the second is the j Index of T which was not part of the combining process. To access the combined Index you can call the combinedind function on the combiner:

    ci = combinedind(C)
    (dim=8|id=629|"c")

    We can visualize all of the steps above as follows:

    Combining is not limited to two indices and you can combine any number of indices, in any order, using a combiner.

    To undo the combining process and uncombine the Index c back into i,k, just contract with the conjugate of the combiner ITensor dag(C).

    UT = dag(C) * CT
@show inds(UT)
    ((dim=4|id=262|"i"), (dim=2|id=301|"k"), (dim=3|id=284|"j"))

    Write and Read an ITensor to Disk with HDF5

    Info

    Make sure to install the HDF5 package to use this feature. (Run julia> ] add HDF5 in the Julia REPL console.)

Saving ITensors to disk can be very useful. For example, you might encounter a bug in your own code, and by reading the ITensors involved from disk you can shortcut the process of re-running a lengthy algorithm many times to reproduce the bug. Or you can save the output of an expensive calculation, such as a DMRG calculation, and use it as a starting point for multiple follow-up calculations, such as computing time-dependent properties.

    ITensors can be written to files using the HDF5 format. HDF5 offers many benefits such as being portable across different machine types, and offers a standard interface across various libraries and languages.

    Writing an ITensor to an HDF5 File

    Let's say you have an ITensor T which you have made or obtained from a calculation. To write it to an HDF5 file named "myfile.h5" you can use the following pattern:

    using HDF5
     f = h5open("myfile.h5","w")
     write(f,"T",T)
     close(f)

    Above, the string "T" can actually be any string you want such as "ITensor T" or "Result Tensor" and doesn't have to have the same name as the reference T. Closing the file f is optional and you can also write other objects to the same file before closing it.

    Reading an ITensor from an HDF5 File

    Say you have an HDF5 file "myfile.h5" which contains an ITensor stored as a dataset with the name "T". (Which would be the situation if you wrote it as in the example above.) To read this ITensor back from the HDF5 file, use the following pattern:

    using HDF5
     f = h5open("myfile.h5","r")
     T = read(f,"T",ITensor)
close(f)

    Note the ITensor argument to the read function, which tells Julia which read function to call and how to interpret the data stored in the HDF5 dataset named "T". In the future we might lift the requirement of providing the type and have it be detected automatically from the data stored in the file.

diff --git a/dev/examples/MPSandMPO.html b/dev/examples/MPSandMPO.html

H = MPO(os,sites)

# Compute <psi|H|psi>
energy_psi = inner(psi',H,psi)


    Note the MPS argument to the read function, which tells Julia which read function to call and how to interpret the data stored in the HDF5 dataset named "psi". In the future we might lift the requirement of providing the type and have it be detected automatically from the data stored in the file.

    Writing and Reading MPOs

    To write or read MPOs to or from HDF5 files, just follow the examples above but use the type MPO when reading an MPO from the file instead of the type MPS.

diff --git a/dev/examples/Physics.html b/dev/examples/Physics.html

      0   0   0  -3/2]

As you can see, the function is passed two objects: an OpName and a SiteType. The strings "Sz" and "S=3/2" are also part of the type of these objects, and indicate which operator name we are defining and which site type these operators are defined for.

    The body of this overload of ITensors.op constructs and returns a Julia matrix which gives the matrix elements of the operator we are defining.
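For reference, the kind of overload being described looks like the following sketch (consistent with the matrix shown above):

using ITensors

function ITensors.op(::OpName"Sz", ::SiteType"S=3/2")
  return [3/2  0    0    0
          0    1/2  0    0
          0    0   -1/2  0
          0    0    0   -3/2]
end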

    Once this function is defined, and if you have an Index such as

    s = Index(4,"S=3/2")

    then, for example, you can get the "Sz" operator for this Index and print it out by doing:

    using ITensors, ITensorMPS
     Sz = op("Sz",s)
     println(Sz)
    ITensor ord=2
Dim 1: (dim=4|id=760|"S=3/2")'
Dim 2: (dim=4|id=760|"S=3/2")
     NDTensors.Dense{Float64, Vector{Float64}}
      4×4
      1.5  0.0   0.0   0.0
 0.0  0.5   0.0   0.0
 0.0  0.0  -0.5   0.0
 0.0  0.0   0.0  -1.5
     

    Now let's look at each part of the code above.

    The space function

In the previous code example, we discussed how the function space tells the ITensor library the basic information about how to construct an Index associated with a special Index tag, in this case the tag "S=3/2". As in that code listing, if the user does not request that quantum numbers be included (the case conserve_qns=false), then all the space function returns is the number 4, indicating that an "S=3/2" Index should have dimension 4.

    But if the conserve_qns keyword argument gets set to true, the space function we defined above returns an array of QN=>Int pairs. (The notation a=>b in Julia constructs a Pair object.) Each pair in the array denotes a subspace. The QN part of each pair says what quantum number the subspace has, and the integer following it indicates the dimension of the subspace.
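Here is a sketch of such a space overload, returning a plain dimension when quantum numbers are off and an array of QN=>Int pairs when they are on:

using ITensors

function ITensors.space(::SiteType"S=3/2"; conserve_qns=false)
  if conserve_qns
    return [QN("Sz",3)=>1, QN("Sz",1)=>1, QN("Sz",-1)=>1, QN("Sz",-3)=>1]
  end
  return 4
end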

    After defining the space function this way, you can write code like:

    using ITensors, ITensorMPS
     s = siteind("S=3/2"; conserve_qns=true)

    to obtain a single "S=3/2" Index which carries quantum number information. The siteind function built into ITensor relies on your custom space function to ask how to construct a "S=3/2" Index but also includes some other Index tags which are conventional for all site indices.

    You can now also call code like:

    using ITensors, ITensorMPS
     N = 100
sites = siteinds("S=3/2",N; conserve_qns=true)

    to obtain an array of N "S=3/2" indices which carry quantum numbers.

    The op Function in the Quantum Number Case

    Note that the op function overloads are exactly the same as for the more basic case of defining an "S=3/2" Index type that does not carry quantum numbers. There is no need to upgrade any of the op functions for the QN-conserving case. The reason is that all QN, block-sparse information about an ITensor is deduced from the indices of the tensor, and setting elements of such tensors does not require any other special code.

However, only operators which have a well-defined QN flux, meaning they always change the quantum number of a state they act on by a well-defined amount, can be used in practice in the case of QN conservation. Attempting to build an operator, or any ITensor, without a well-defined QN flux out of QN-conserving indices will result in a run-time error. An example of an operator that would lead to such an error is the "Sx" spin operator, since it alternately increases or decreases $S^z$ depending on the state it acts on and thus does not have a well-defined QN flux. But it is perfectly fine to define an op overload for the "Sx" operator and to make this operator when working with dense, non-QN-conserving ITensors or when $S^z$ is not conserved.

diff --git a/dev/faq/DMRG.html b/dev/faq/DMRG.html

end
hterms += "Sz",1,"Sz",N  # term 'wrapping' around the ring

H = MPO(hterms,sites)

    For two-dimensional DMRG calculations, where the most common approach is to use periodic boundary conditions in the y-direction only, and not in the x-direction, you do a similar step in making your OpSum input to ITensor DMRG: you include terms wrapping around the periodic cylinder in the y direction but not in the x direction.

    However, fully periodic boundary conditions are only recommended for small systems when absolutely needed, and in general are not recommended. For a longer discussion of alternatives to using fully periodic boundaries, see the next section below.

    The reason fully periodic boundary conditions (periodic in x in 1D, and periodic in both x and y in 2D) are not recommended in general is that the DMRG algorithm, as we are defining it here, optimizes an open-boundary MPS. So if you input a periodic-boundary Hamiltonian, there is a kind of "mismatch" that happens where you can still get the correct answer, but it requires much more resources (a larger bond dimension and more sweeps) to get good accuracy. There has been some research into "truly" periodic DMRG, [Pippan] that is DMRG that optimizes an MPS with a ring-like topology, but it is not widely used, is still an open area of algorithm development, and is not currently available in ITensor.

    What boundary conditions should I choose: open, periodic, or infinite?

    One of the weaknesses of the density matrix renormalization group (DMRG), and its time-dependent or finite-temperature extensions, is that it works poorly with periodic boundary conditions. This stems from the fact that conventional DMRG optimizes over open-boundary matrix product state (MPS) wavefunctions whether or not the Hamiltonian includes periodic interactions.

    But this begs the question, when are periodic boundary conditions (PBC) really needed? DMRG offers some compelling alternatives to PBC:

    However, there are a handful of cases where PBC remains preferable despite the extra overhead. A few such cases are:

    (Note that in the remaining discussion, by PBC I mean fully periodic boundary conditions in all directions. For the case of DMRG applied to quasi-two-dimensional systems, it remains a good practice to use periodic boundaries in the shorter direction, while still using open (or infinite) boundaries in the longer direction along the DMRG/MPS path.)

    Below I discuss more about the problems with using PBC, as well as some misconceptions about when PBC seems necessary even though there are better alternatives.

    Drawbacks of Periodic Boundary Conditions

    Periodic boundary conditions are straightforward to implement in conventional DMRG. The simplest approach is to include a "long bond" directly connecting site 1 to site N in the Hamiltonian. However this naive approach has a major drawback: if open-boundary DMRG achieves a given accuracy when keeping $m$ states (bond dimension of size $m$), then to reach the same accuracy with PBC one must keep closer to $m^2$ states! The reason is that now every bond of the MPS not only carries local entanglement as with OBC, but also the entanglement between the first and last sites. (There is an alternative DMRG algorithm[Pippan] for periodic systems which may have better scaling than the above approach but has not been widely applied and tested, as far as I am aware, especially for 2D or critical systems .)

    The change in scaling from $m$ to $m^2$ is a severe problem. For example, many gapped one-dimensional systems only require about $m=100$ to reach good accuracy (truncation errors of less than 1E-9 or so). To reach the same accuracy with naive PBC would then require using 10,000 states, which can easily fill the RAM of a typical desktop computer for a large enough system, not to mention the extra time needed to work with larger matrices.

    But poor scaling is not the only drawback of PBC. Systems that exhibit spontaneous symmetry breaking are simple to work with under OBC, where one has the additional freedom of applying edge pinning terms to drive the bulk into a specific symmetry sector. Using edge pinning reduces the bulk entanglement and makes measuring order parameters straightforward. Similarly one can use infinite DMRG to directly observe symmetry breaking effects.

    But under PBC, order parameters remain equal to zero and can only be accessed through correlation functions. Though using correlation functions is often presented as the "standard" or "correct" approach, such reasoning pre-supposes that PBC is the best choice. Recent work in the quantum Monte Carlo community demonstrates that open boundaries with pinning fields can actually be a superior approach.[Assaad]

    Cases Where Periodic BC Seems Necessary, But Open/Infinite BC Can be Better

    Below are some cases where periodic boundary conditions seem to be necessary at a first glance. But in many of these cases, not only can open or infinite boundaries be just as successful, they can even be the better choice.

    In conclusion, consider carefully whether you really need to use periodic boundary conditions, as they impose a steep computational cost within DMRG. Periodic BC can actually be worse for the very types of measurements where they are often presented as the best or "standard" choice. Many of the issues periodic boundaries circumvent can be avoided more elegantly by using infinite DMRG, or when that is not applicable, by using open boundary conditions with sufficient care.

    +H = MPO(hterms,sites)

    For two-dimensional DMRG calculations, where the most common approach is to use periodic boundary conditions in the y-direction only, and not in the x-direction, you do a similar step in making your OpSum input to ITensor DMRG: you include terms wrapping around the periodic cylinder in the y direction but not in the x direction.
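As a sketch of what that looks like for Sz-Sz couplings on an Nx × Ny cylinder (lin here is a hypothetical helper mapping (x, y) coordinates to the 1D MPS site ordering):

using ITensors, ITensorMPS

# hypothetical helper mapping (x, y) on an Nx × Ny cylinder to the 1D MPS ordering
lin(x, y, Ny) = (x - 1) * Ny + y

function cylinder_szsz(Nx, Ny)
  os = OpSum()
  for x in 1:Nx, y in 1:Ny
    # periodic in y: the bond at y = Ny wraps around to y = 1
    os += "Sz", lin(x, y, Ny), "Sz", lin(x, mod1(y + 1, Ny), Ny)
    # open in x: no wrapping bond
    if x < Nx
      os += "Sz", lin(x, y, Ny), "Sz", lin(x + 1, y, Ny)
    end
  end
  return os
end

Nx, Ny = 8, 4
sites = siteinds("S=1/2", Nx * Ny)
H = MPO(cylinder_szsz(Nx, Ny), sites)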

    However, fully periodic boundary conditions are only recommended for small systems when absolutely needed, and in general are not recommended. For a longer discussion of alternatives to using fully periodic boundaries, see the next section below.

The reason fully periodic boundary conditions (periodic in x in 1D, and periodic in both x and y in 2D) are not recommended in general is that the DMRG algorithm, as we are defining it here, optimizes an open-boundary MPS. So if you input a periodic-boundary Hamiltonian, there is a kind of "mismatch": you can still get the correct answer, but it requires much more resources (a larger bond dimension and more sweeps) to get good accuracy. There has been some research into "truly" periodic DMRG,[Pippan] that is, DMRG that optimizes an MPS with a ring-like topology, but it is not widely used, is still an open area of algorithm development, and is not currently available in ITensor.

    What boundary conditions should I choose: open, periodic, or infinite?

    One of the weaknesses of the density matrix renormalization group (DMRG), and its time-dependent or finite-temperature extensions, is that it works poorly with periodic boundary conditions. This stems from the fact that conventional DMRG optimizes over open-boundary matrix product state (MPS) wavefunctions whether or not the Hamiltonian includes periodic interactions.

    But this begs the question, when are periodic boundary conditions (PBC) really needed? DMRG offers some compelling alternatives to PBC:

    However, there are a handful of cases where PBC remains preferable despite the extra overhead. A few such cases are:

    (Note that in the remaining discussion, by PBC I mean fully periodic boundary conditions in all directions. For the case of DMRG applied to quasi-two-dimensional systems, it remains a good practice to use periodic boundaries in the shorter direction, while still using open (or infinite) boundaries in the longer direction along the DMRG/MPS path.)

    Below I discuss more about the problems with using PBC, as well as some misconceptions about when PBC seems necessary even though there are better alternatives.

    Drawbacks of Periodic Boundary Conditions

Periodic boundary conditions are straightforward to implement in conventional DMRG. The simplest approach is to include a "long bond" directly connecting site 1 to site N in the Hamiltonian. However, this naive approach has a major drawback: if open-boundary DMRG achieves a given accuracy when keeping $m$ states (bond dimension of size $m$), then to reach the same accuracy with PBC one must keep closer to $m^2$ states! The reason is that now every bond of the MPS not only carries local entanglement as with OBC, but also the entanglement between the first and last sites. (There is an alternative DMRG algorithm[Pippan] for periodic systems which may have better scaling than the above approach but has not been widely applied and tested, as far as I am aware, especially for 2D or critical systems.)

    The change in scaling from $m$ to $m^2$ is a severe problem. For example, many gapped one-dimensional systems only require about $m=100$ to reach good accuracy (truncation errors of less than 1E-9 or so). To reach the same accuracy with naive PBC would then require using 10,000 states, which can easily fill the RAM of a typical desktop computer for a large enough system, not to mention the extra time needed to work with larger matrices.

    But poor scaling is not the only drawback of PBC. Systems that exhibit spontaneous symmetry breaking are simple to work with under OBC, where one has the additional freedom of applying edge pinning terms to drive the bulk into a specific symmetry sector. Using edge pinning reduces the bulk entanglement and makes measuring order parameters straightforward. Similarly one can use infinite DMRG to directly observe symmetry breaking effects.

    But under PBC, order parameters remain equal to zero and can only be accessed through correlation functions. Though using correlation functions is often presented as the "standard" or "correct" approach, such reasoning pre-supposes that PBC is the best choice. Recent work in the quantum Monte Carlo community demonstrates that open boundaries with pinning fields can actually be a superior approach.[Assaad]

    Cases Where Periodic BC Seems Necessary, But Open/Infinite BC Can be Better

    Below are some cases where periodic boundary conditions seem to be necessary at a first glance. But in many of these cases, not only can open or infinite boundaries be just as successful, they can even be the better choice.

    In conclusion, consider carefully whether you really need to use periodic boundary conditions, as they impose a steep computational cost within DMRG. Periodic BC can actually be worse for the very types of measurements where they are often presented as the best or "standard" choice. Many of the issues periodic boundaries circumvent can be avoided more elegantly by using infinite DMRG, or when that is not applicable, by using open boundary conditions with sufficient care.

diff --git a/dev/faq/Development.html b/dev/faq/Development.html

ITensor Development FAQs · ITensors.jl

    ITensor Development Frequently Asked Questions

    What are the steps to contribute code to ITensor?

    1. Please contact us (support at itensor.org) if you are planning to submit a major contribution (more than a few lines of code, say). If so, we would like to discuss your plan and design before you spend significant time on it, to increase the chances we will merge your pull request.

    2. Fork the ITensors.jl Github repo, create a new branch and make changes (commits) on that branch. ITensor imposes code formatting for contributions. Please run using JuliaFormatter; format(".") in the project directory to ensure formatting. As an alternative you may also use pre-commit. Install pre-commit with e.g. pip install pre-commit, then run pre-commit install in the project directory in order for pre-commit to run automatically before any commit.

    3. Run the ITensor unit tests by going into the test/ folder and running julia runtests.jl. To run individual test scripts, start a Julia REPL (interactive terminal) session and include each script, such as include("itensor.jl").

    4. Push your new branch and changes to your forked repo. Github will give you the option to make a pull request (PR) out of your branch that will be submitted to us, and which you can view under the list of ITensors.jl pull requests. If your PR's tests pass and we approve your changes, we will merge it or ask you to merge it. If you merge your PR, please use the Squash and Merge option. We may also ask you to make more changes to bring your PR in line with our design goals or technical requirements.

    diff --git a/dev/faq/HPC.html b/dev/faq/HPC.html index 7c707ac8b1..afc0d8e8bb 100644 --- a/dev/faq/HPC.html +++ b/dev/faq/HPC.html @@ -1,2 +1,2 @@ -High-Performance Computing FAQs · ITensors.jl

    High Performance Computing (HPC) Frequently Asked Questions

    My code is using a lot of RAM - what can I do about this?

Tensor network algorithms can often use a large amount of RAM. On top of this essential fact, the Julia programming language is "garbage collected", which means that unused memory isn't given back to the operating system right away, but only when the Julia runtime dynamically reclaims it. When your code allocates memory very rapidly, this can lead to high memory usage overall.

    Fortunately there are various steps you can take to keep the memory usage of your code under control.

    1. Avoid Repeatedly Allocating, Especially in Fast or "Hot" Loops

    More memory gets used whenever your code "allocates", which happens most commonly when you use dynamic storage types like Vector and Matrix. If you have a code pattern where you allocate or resize an array or vector inside a 'hot' loop, meaning a loop that iterates quickly very many times, the memory from the previous allocations may pile up very quickly before the next garbage collector run.

    To avoid this, allocate the array once before the loop begins if possible, then overwrite its contents during each iteration. More generally, try as much as possible to estimate the sizes of dynamic resources ahead of time. Or do one allocation that creates a large enough "workspace" that dynamic algorithms can reuse part of without reallocating the whole workspace (i.e. making a large array once then using portions of it when smaller arrays are needed).
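As a minimal sketch of this pattern (plain Julia, with illustrative names), compare allocating a buffer once outside the loop to allocating a fresh array on every iteration:

# Allocate `buf` once, then overwrite it in place on every pass of the loop.
# Writing `buf .= x .^ 2` reuses the existing memory, whereas `buf = x .^ 2`
# would allocate a brand new array on each iteration.
function total_of_squares!(buf::Vector{Float64}, xs::Vector{Vector{Float64}})
  total = 0.0
  for x in xs
    buf .= x .^ 2
    total += sum(buf)
  end
  return total
end

xs = [rand(1000) for _ in 1:10_000]
buf = similar(xs[1])   # the single allocation, reused by every iteration
total_of_squares!(buf, xs)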

    2. Use the --heap-size-hint Flag

    A simple step you can take to help with overall memory usage is to pass the --heap-size-hint flag to the Julia program when you start it. For example, you can call Julia as:

    julia --heap-size-hint=60G

    When you pass this heap size, Julia will try to keep the memory usage at or below this value if possible.

    In cases where this does not work, your code simply may be allocating too much memory. Be sure not to allocate over and over again inside of "hot" loops which execute many times.

    Another possibility is that you are simply working with a tensor network with large bond dimensions, which may fundamentally use a lot of memory. In those cases, you can try to use features such as "write to disk mode" of the ITensor DMRG code or other related techniques. (See the write_when_maxdim_exceeds keyword of the ITensor dmrg function.)

3. In Rare Cases, Force a Garbage Collection Run

    In some rare cases, such as when your code cannot be optimized to avoid any more allocations or when the --heap-size-hint provided above is not affecting the behavior of the Julia garbage collector, you can force the garbage collector (GC) to run at a specific point in your code by calling:

    GC.gc()

Alternatively, you can call GC.gc(true) to force a "full run" rather than just collecting the "young" generation of recent allocations.

    While this approach works well to reduce memory usage, it can have the unfortunate downside of slowing down your code each time the garbage collector runs, which can be especially harmful to multithreaded or parallel algorithms. Therefore, if this approach must be used try calling GC.gc() as infrequently as possible and ideally only in the outermost functions and loops of your code (highest levels of your code).
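As a minimal sketch of this advice (plain Julia, with an illustrative stand-in for real work), the collection happens once per outer iteration rather than inside the allocation-heavy inner code:

# `work` stands in for an allocation-heavy inner computation.
work() = sum(abs2, rand(10^6))

for sweep in 1:5
  work()
  GC.gc()   # collect once per outer iteration, not inside the hot inner code
end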

    Can Julia Be Used to Perform Parallel, Distributed Calculations on Large Clusters?

    Yes. The Julia ecosystem offers multiple approaches to parallel computing across multiple machines including on large HPC clusters and including GPU resources.

    For an overall view of some of these options, the Julia on HPC Clusters website is a good resource.

    Some of the leading approaches to parallelism in Julia are:

    • MPI, through the MPI.jl package. Has the advantage of optionally using an MPI backend that is optimized for a particular cluster and possibly using fast interconnects like Infiniband.
    • Dagger, a framework for parallel computing across all kinds of resources, like CPUs and GPUs, and across multiple threads and multiple servers.
    • Distributed. Part of the base Julia library, giving tools to perform calculations distributed across multiple machines (see the sketch below).
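As a minimal sketch of the Distributed approach mentioned above (run locally here; on a cluster you would typically launch the workers through a cluster manager rather than addprocs):

using Distributed
addprocs(4)   # start four local worker processes

# Define the work function on all workers; here it is just an illustrative
# stand-in for a real computation.
@everywhere heavy(x) = sum(abs2, rand(10^6)) + x

results = pmap(heavy, 1:16)   # distribute the 16 tasks over the workers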

    Does My Cluster Admin Have to Install Julia for Me? What are the Best Practices for Installing Julia on Clusters?

    The most common approach to installing and using Julia on clusters is for users to install their own Julia binary and dependencies, which is quite easy to do. However, for certain libraries like MPI.jl, there may be MPI backends that are preferred by the cluster administrator. Fortunately, it is possible for admins to set global defaults for such backends and other library preferences.

    For more information on best practices for installing Julia on clusters, see the Julia on HPC Clusters website.

    diff --git a/dev/faq/JuliaAndCpp.html b/dev/faq/JuliaAndCpp.html index f9d82e303f..66188cf373 100644 --- a/dev/faq/JuliaAndCpp.html +++ b/dev/faq/JuliaAndCpp.html @@ -1,2 +1,2 @@ -Programming Language (Julia, C++, ...) FAQs · ITensors.jl

    Programming Language (Julia, C++) Frequently Asked Questions

    Should I use the Julia or C++ version of ITensor?

    We recommend the Julia version of ITensor for most people, because:

    • Julia ITensor has more and newer features than C++ ITensor, and we are developing it more rapidly
    • Julia is a more productive language than C++ with more built-in features, such as linear algebra, iteration tools, etc.
    • Julia is a compiled language with performance rivaling C++ (see next question below for a longer discussion)
    • Julia has a rich ecosystem with a package manager, many well-designed libraries, and helpful tutorials

    Even if Julia is not available by default on your computer cluster, it is easy to set up your own local install of Julia on a cluster.

    However, some good reasons to use the C++ version of ITensor are:

    • using ITensor within existing C++ codes
    • you already have expertise in C++ programming
    • multithreading support in C++, such as with OpenMP, offers certain sophisticated features compared to Julia multithreading (though Julia's support for multithreading has other benefits such as composability and is rapidly improving)
    • you need other specific features of C++, such as control over memory management or instant start-up times

    Which is faster: Julia or C++ ?

    Julia and C++ offer about the same performance.

Each language is compiled to optimized assembly code, and both offer arrays and containers that can be stored and iterated over efficiently. Well-written Julia code can even be faster than comparable C++ code in many cases.

    The longer answer is of course that it depends:

    • Julia is a more productive language than C++, with many highly-optimized libraries for numerical computing tasks, and excellent tools for profiling and benchmarking. These features help significantly to tune Julia codes for optimal performance.
    • C++ offers much more fine-grained control over memory management, which can enhance performance in certain applications and control memory usage.
    • Julia codes can slow down significantly during refactoring or when introducing new code if certain best practices are not followed. The most important of these is writing type-stable code (see the sketch after this list). For more details see the Performance Tips section of the Julia documentation.
    • C++ applications start instantly, while Julia codes can be slow to start. However, once this start-up time is subtracted, the remaining run time of a Julia application is similar to that of C++.
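As a minimal sketch of the type-stability point above (plain Julia): in the first version the accumulator starts as an Int and may be promoted to Float64 mid-loop, forcing Julia to generate slower, more generic code; the second version fixes the type from the start.

function unstable_sum(xs)
  acc = 0              # Int, later promoted if xs contains floats
  for x in xs
    acc += x
  end
  return acc
end

function stable_sum(xs)
  acc = zero(eltype(xs))   # matches the element type from the start
  for x in xs
    acc += x
  end
  return acc
end

# Running @code_warntype unstable_sum(rand(10)) highlights the instability.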

    Why did you choose Julia over Python for ITensor?

    Julia offers much better performance than Python, while still having nearly all of Python's benefits. One consequence is that ITensor can be written purely in Julia, whereas to write high-performance Python libraries it is necessary to implement many parts in C or C++ (the "two-language problem").

    The main reasons Julia codes can easily outperform Python codes are:

    1. Julia is a (just-in-time) compiled language with functions specialized for the types of the arguments passed to them
    2. Julia arrays and containers are specialized to the types they contain, and perform similarly to C or C++ arrays when all elements have the same type
    3. Julia has sophisticated support for multithreading while Python has significant problems with multithreading

    Of course there are some drawbacks of Julia compared to Python, including a less mature ecosystem of libraries (though it is simple to call Python libraries from Julia using PyCall), and less widespread adoption.

    Is Julia ITensor a wrapper around the C++ version?

    No. The Julia version of ITensor is a complete, ground-up port of the ITensor library to the Julia language and is written 100% in Julia.

    diff --git a/dev/faq/JuliaPkg.html b/dev/faq/JuliaPkg.html index 08cabefeff..fbc42f67c1 100644 --- a/dev/faq/JuliaPkg.html +++ b/dev/faq/JuliaPkg.html @@ -1,3 +1,3 @@ Julia Package Manager FAQs · ITensors.jl

    Julia Package Manager Frequently Asked Questions

    What if I can't upgrade ITensors.jl to the latest version?

Sometimes you may find that doing ] update ITensors, or equivalently ] up ITensors, within Julia package manager mode doesn't result in the ITensors package actually being upgraded. You may see that the version you have remains stuck at one lower than the latest release, which you can check here.

    What is most likely going on is that you have other packages installed which are blocking ITensors from being updated.

To get more information about which packages may be doing this, and what versions they are requiring, you can do the following. First look up the latest version of ITensors.jl. Let's say for this example that it is v0.3.0.

    Next, input the following command while in package manager mode:

julia> ]
pkg> add ITensors@v0.3.0

    If the package manager cannot update to this version, it will list all of the other packages that are blocking this from happening and give information about why. To go into a little more depth, each package has a compatibility or "compat" entry in its Project.toml file which says which versions of the ITensors package it is compatible with. If these versions do not include the latest one, perhaps because the package has not been updated, then it can block the ITensors package from being updated on your system.

Generally the solution is to just update each of these packages, then try again to update ITensors. If that does not work, then check the following:

    • Are any of the blocking packages in "dev mode" meaning you called dev PackageName on them in the past? Try doing free PackageName if so to bring them out of dev mode.
    • Are any of the blocking packages unregistered packages that were installed through a GitHub repo link? If so, you may need to do something like add https://github.com/Org/PackageName#main to force update that package to the latest code available on its main branch.

If you still can't get the ITensors package updated, feel free to post a question or contact us for help.

    diff --git a/dev/faq/QN.html b/dev/faq/QN.html index 5caa38eac5..473487038a 100644 --- a/dev/faq/QN.html +++ b/dev/faq/QN.html @@ -1,4 +1,4 @@ Quantum Number (QN) FAQs · ITensors.jl

    Quantum Number Frequently Asked Questions

    Can I mix different types of quantum numbers within the same system?

    Yes, you can freely mix quantum numbers (QNs) of different types. For example, you can make the sites of your systems alternate between sites carrying spin "Sz" QNs and fermion sites carrying particle number "Nf" QNs. The QNs will not mix with each other and will separately be conserved to the original value you set for your initial wavefunction.
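As a minimal sketch of such a mixed system (illustrative, not from this FAQ), alternating spin-1/2 sites conserving "Sz" with fermion sites conserving "Nf":

using ITensors, ITensorMPS

N = 10
sites = [isodd(n) ? siteind("S=1/2", n; conserve_qns=true) :
                    siteind("Fermion", n; conserve_qns=true)
         for n in 1:N]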

    How can I separately conserve QNs which have the same name?

    If you have two physically distinct types of sites, such as "Qudit" sites, but which carry identically named QNs called "Number", and you want the qudit number to be separately conserved within each type of site, you must make the QN names different for the two types of sites.

    For example, the following line of code will make an array of site indices with the qudit number QN having the name "Number_odd" on odd sites and "Number_even" on even sites:

    sites = [isodd(n) ? siteind("Qudit", n; dim=10, conserve_qns=true, qnname_number="Number_odd")
                       : siteind("Qudit", n; dim=2, conserve_qns=true, qnname_number="Number_even")
                  for n in 1:2*L]

    (You may have to collapse the above code into a single line for it to run properly.)

    diff --git a/dev/faq/RelationshipToOtherLibraries.html b/dev/faq/RelationshipToOtherLibraries.html index 766165f017..226f5d783f 100644 --- a/dev/faq/RelationshipToOtherLibraries.html +++ b/dev/faq/RelationshipToOtherLibraries.html @@ -1,2 +1,2 @@ -Relationship of ITensor to other tensor libraries FAQs · ITensors.jl

    Relationship of ITensor to other tensor libraries

    Here we will describe the relationship of ITensor to more traditional Julia Arrays or deep learning libraries like TensorFlow and PyTorch. There are a few things that distinguish ITensor from those approaches:

1. ITensors have dimensions with labels that get passed around, which makes it simple to perform certain operations like contraction, addition, and tensor decompositions with a high level interface, independent of memory layout (points 1-3 are illustrated in the sketch after this list). This is along the same lines as Julia packages like NamedDims.jl and AxisArrays.jl and libraries in Python like xarray; however, I would argue that the ITensor approach is a little more sophisticated (the dimensions have more metadata which makes them easier to manipulate for different situations, random ids to help avoid name clashes, etc.). This design was inspired by the needs of tensor network algorithms, where there are many tensor dimensions in the computation (many of which are dynamically created during the calculation), but it would be helpful for writing other algorithms too.

    2. The ITensor type has a dynamic high level interface, where the type itself is mutable and the data can be swapped out. This allows for conveniently allocating the data of an ITensor on the fly "as needed", which makes for a nicer, more flexible interface (like initializing an empty ITensor before a loop, and filling it with the correct data type when the first value is set), at the expense of a small overhead for accessing data in the ITensor. We have found this tradeoff is worth it, since we expect ITensors to be used for medium to large scale calculations where operations on the tensors like contraction, addition, and tensor decomposition dominate the cost of the calculation, and code can be designed with function barriers to speed up operations when data is being accessed repeatedly.

3. Another feature that ITensor has that goes beyond what is available in standard Julia, TensorFlow, and PyTorch is tensors which are symmetric under a group action. The physical interpretation of these tensors is that they carry a conserved quantity (like a quantum state with a conserved number of particles), so that feature is more physics-oriented, but it could have applications in other areas like machine learning as well. In practice, these tensors are block sparse, and have extra metadata on the dimensions labeling representations of the group.

4. Based on the features above, the ITensor library provides high level implementations of tensor network algorithms (algebraic operations on very high dimensional tensors, such as addition, multiplication, and finding dominant eigenvectors). In general these algorithms can be (and have been) written on top of other libraries like standard Julia Arrays/AD, PyTorch, or TensorFlow, but those might have various downsides (a less convenient interface for dealing with tensor operations, no support for the types of symmetric tensors we often need, limited support for tensors with complex numbers in the case of libraries like PyTorch, though perhaps that has improved since I last checked, etc.).
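Points 1-3 above can be illustrated with a minimal sketch (assuming the current ITensors.jl interface; all index sizes, tags, and values here are illustrative):

using ITensors

# 1. Label-based contraction: the shared Index j is matched by identity,
#    not by axis position, so no dimension bookkeeping is needed.
i = Index(2, "i"); j = Index(3, "j"); k = Index(4, "k")
A = random_itensor(i, j)
B = random_itensor(j, k)
C = A * B                 # contracts over j automatically; C has indices (i, k)

# 2. Dynamic allocation: an ITensor can be constructed without storage,
#    which is then created when the first element is set.
T = ITensor(i, k)         # no element data allocated yet
T[i => 1, k => 2] = 3.0   # storage is allocated now, with element type Float64

# 3. A QN-carrying Index: each sector of the dimension is labeled by a
#    quantum number, and tensors built from such indices are block sparse.
s = Index(QN("Sz", 1) => 1, QN("Sz", -1) => 1; tags="S=1/2,Site")
Op = random_itensor(QN("Sz", 0), s', dag(s))   # a flux-zero operator on s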

    Although ITensor has primarily focused on quantum physics and quantum computing applications, there is work using ITensor for machine learning applications (so far focused on applications of tensor networks to machine learning, so no neural network calculations yet as far as I know). In general, these different libraries (ITensor, Flux, PyTorch, TensorFlow) are biased towards their specific methods and application areas that they are used for the most: ITensor is more biased towards tensor network calculations and quantum physics/quantum computing applications, based on the available features and interface, while PyTorch and TensorFlow are more biased towards neural network calculations. However, our goal would be to provide more features to ITensor that would make it useful for neural network applications as well, such as better support for slicing operations.

    diff --git a/dev/getting_started/DebugChecks.html b/dev/getting_started/DebugChecks.html index e8bfeb0b34..7e3a4979be 100644 --- a/dev/getting_started/DebugChecks.html +++ b/dev/getting_started/DebugChecks.html @@ -35,4 +35,4 @@ [8] noprime(::ITensor) @ ITensors ~/.julia/packages/ITensors/cu9Bo/src/itensor.jl:1211 [9] top-level scope - @ REPL[7]:1

    You can track where debug checks are located in the code here, and add your own debug checks to your own code by wrapping your code with the macro ITensors.@debug_check.

    diff --git a/dev/getting_started/Installing.html b/dev/getting_started/Installing.html index b572cc1bb3..0e86637ee0 100644 --- a/dev/getting_started/Installing.html +++ b/dev/getting_started/Installing.html @@ -3,4 +3,4 @@ $ mkdir -p bin $ wget https://julialang-s3.julialang.org/bin/linux/x64/1.7/julia-1.7.2-linux-x86_64.tar.gz $ tar xvzf julia-1.7.2-linux-x86_64.tar.gz -$ ln -s julia-1.7.2/bin/julia bin/julia

If you want to install Julia 1.6.6, you would change 1.7 to 1.6 and 1.7.2 to 1.6.6. In general we recommend using the current stable release of Julia, which you can find by going to the Julia Downloads page. We also don't recommend using versions of Julia below 1.6, which are no longer compatible with ITensors.jl as of ITensors 0.3.

    After these steps, you should be able to type julia from your terminal to run Julia in interactive mode. If that works, then you have the Julia language and can run it in all the usual ways. If it does not work, you may need to log out and back in, and check that the bin directory is in your program execution path (PATH environment variable).

    Explanation of the sample commands above:

    Installing ITensor (ITensors.jl Package)

    Installing the Julia version of ITensor is easy once you have the Julia language installed. For more information about installing Julia, please see the Julia language downloads page.

    Once you have installed Julia on your machine,

    1. Enter the command julia to launch an interactive Julia session (a.k.a. the Julia "REPL")
    2. Type ] to enter the package manager (pkg> prompt should now show)
    3. Enter the command add ITensors
    4. After installation completes, press backspace to return to the normal julia> prompt
5. Optional but Recommended: Enter the command julia> using ITensors; ITensors.compile() to compile a large fraction of the ITensor library code, and follow the instructions afterward to make an alias for loading a pre-built ITensor system image with Julia. This step can take up to 10 minutes to complete but only has to be done once for each version of ITensor. See the section on compiling ITensor for more information.

    Sample screenshot:

    diff --git a/dev/getting_started/NextSteps.html b/dev/getting_started/NextSteps.html index 80bb45bccc..ab89ccf4be 100644 --- a/dev/getting_started/NextSteps.html +++ b/dev/getting_started/NextSteps.html @@ -1,2 +1,2 @@ -Next Steps · ITensors.jl
    diff --git a/dev/getting_started/RunningCodes.html b/dev/getting_started/RunningCodes.html index bb82a8e5c5..67f0e6bfae 100644 --- a/dev/getting_started/RunningCodes.html +++ b/dev/getting_started/RunningCodes.html @@ -20,4 +20,4 @@ end main(; d1 = 4, d2 = 5)

    which can be useful in interactive mode, particularly if you might want to run your code with a variety of different arguments.

    Running a Script

    Now say you put the above code into a file named code.jl. Then you can run this code on the command line as follows

    $ julia code.jl

    This script-like mode of running Julia is convenient for running longer jobs, such as on a cluster.

    Running Interactively

However, sometimes you want to do rapid development when first writing and testing a code. For this kind of work, the long startup and compilation times currently incurred by the Julia compiler can be a nuisance. Fortunately a nice solution is to alternate between modifying your code and running it by loading it into an already running Julia session.

    To set up this kind of session, take the following steps:

    1. Enter the interactive mode of Julia, by inputting the command julia on the command line. You will now be in the Julia "REPL" (read-eval-print loop) with the prompt julia> on the left of your screen.

    2. To run a code such as the code.jl file discussed above, input the command

      julia> include("code.jl")

      Note that you must be in the same folder as code.jl for this to work; otherwise input the entire path to the code.jl file. The code will run and you will see its output in the REPL.

    3. Now say you want to modify and re-run the code. To do this, just edit the file in an editor in another window, without closing your Julia session. Now run the command

      julia> include("code.jl")

      again and your updated code will run, but this time skipping any of the precompilation overhead incurred on previous steps.

The above steps for running code interactively have the big advantage that you only pay the startup cost of compiling ITensor and the other libraries you are using once. Further changes to your code only incur very small extra compilation times, facilitating rapid development.

    Compiling an ITensor System Image

The above strategy of running code in the Julia REPL (interactive mode) works well, but still incurs a large start-up penalty for the first run of your code. Fortunately there is a nice way around this issue too: compiling ITensors.jl into a system image using the PackageCompiler.jl library.

    To use this approach, we have provided a convenient one-line command:

    julia> using ITensors; ITensors.compile()

    Once ITensors.jl is installed, you can just run this command in an interactive Julia session. It can take a few minutes to run, but you only have to run it once for a given version of ITensors.jl. When it is done, it will create a file sys_itensors.so in the directory ~/.julia/sysimages/.

    To use the compiled system image together with Julia, run the julia command (for interactive mode or scripts) in the following way:

    $ julia --sysimage ~/.julia/sysimages/sys_itensors.so

    A convenient thing to do is to make an alias in your shell for this command. To do this, edit your .bashrc or .zshrc or similar file for the shell you use by adding the following line:

    alias julia_itensors="julia --sysimage ~/.julia/sysimages/sys_itensors.so -e \"using ITensors\" -i "

    where of course you can use the command name you like when defining the alias. Now running commands like julia_itensors code.jl or julia_itensors to start an interactive session will have the ITensor system image pre-loaded and you will notice significantly faster startup times. The arguments -e \"using ITensors\" -i make it so that running julia_itensors also loads the ITensor library as soon as Julia starts up, so that you don't have to type using ITensors every time.

    Using a Compiled Sysimage in Jupyter or VS Code

    If you have compiled a sysimage for ITensor as shown above, you can use it in Jupyter by running the following code:

using IJulia
installkernel("julia_ITensors","--sysimage=~/.julia/sysimages/sys_itensors.so")

    in the Julia REPL (Julia console).

    To load the ITensor sysimage in VS Code, you can add

    "--sysimage ~/.julia/sysimages/sys_itensors.so"

    as an argument under the julia.additionalArgs setting in your Settings.json file.

    For more information on the above, see the following Julia Discourse post.

    diff --git a/dev/index.html b/dev/index.html index 5bdec4d17c..ac3859d76e 100644 --- a/dev/index.html +++ b/dev/index.html @@ -161,4 +161,4 @@ After sweep 3 energy=-138.940080155429 maxlinkdim=92 maxerr=1.00E-10 time=4.522 After sweep 4 energy=-138.940086009318 maxlinkdim=100 maxerr=1.05E-10 time=11.644 After sweep 5 energy=-138.940086058840 maxlinkdim=96 maxerr=1.00E-10 time=12.771 -Final energy = -138.94008605883985

    You can find more examples of running dmrg and related algorithms here.

    diff --git a/dev/search.html b/dev/search.html index ffc83db401..251c688677 100644 --- a/dev/search.html +++ b/dev/search.html @@ -1,2 +1,2 @@ -Search · ITensors.jl

        diff --git a/dev/tutorials/DMRG.html b/dev/tutorials/DMRG.html index 57383a96c9..d3c8a98a53 100644 --- a/dev/tutorials/DMRG.html +++ b/dev/tutorials/DMRG.html @@ -23,7 +23,7 @@ return end

        Steps of The Code

        The first two lines

using ITensors, ITensorMPS
N = 100
sites = siteinds("S=1",N)

        tells the function siteinds to make an array of ITensor Index objects which have the properties of $S=1$ spins. This means their dimension will be 3 and they will carry the "S=1" tag, which will enable the next part of the code to know how to make appropriate operators for them.

        Try printing out some of these indices to verify their properties:

        @show sites[1]
        (dim=3|id=597|"S=1,Site,n=1")

        The next part of the code builds the Hamiltonian:

os = OpSum()
for j=1:N-1
  os += "Sz",j,"Sz",j+1
  os += 1/2,"S+",j,"S-",j+1
  os += 1/2,"S-",j,"S+",j+1
end
H = MPO(os,sites)

        An OpSum is an object which accumulates Hamiltonian terms such as "Sz",1,"Sz",2 so that they can be summed afterward into a matrix product operator (MPO) tensor network. The line of code H = MPO(os,sites) constructs the Hamiltonian in the MPO format, with physical indices given by the array sites.

        The line

        psi0 = random_mps(sites;linkdims=10)

        constructs an MPS psi0 which has the physical indices sites and a bond dimension of 10. It is made by a random quantum circuit that is reshaped into an MPS, so that it will have as generic and unbiased properties as an MPS of that size can have. This choice can help prevent the DMRG calculation from getting stuck in a local minimum.

        The lines

nsweeps = 5
maxdim = [10,20,100,100,200]
cutoff = [1E-10]

        define the number of DMRG sweeps (five) we will instruct the code to do, as well as the parameters that will control the speed and accuracy of the DMRG algorithm within each sweep. The array maxdim limits the maximum MPS bond dimension allowed during each sweep and cutoff defines the truncation error goal of each sweep (if fewer values are specified than sweeps, the last value is used for all remaining sweeps).

        Finally the call

        energy,psi = dmrg(H,psi0;nsweeps,maxdim,cutoff)

        runs the DMRG algorithm included in ITensor, using psi0 as an initial guess for the ground state wavefunction. The optimized MPS psi and its eigenvalue energy are returned.

        After the dmrg function returns, you can take the returned MPS psi and do further calculations with it, such as measuring local operators or computing entanglement entropy.
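For example, a minimal sketch of such follow-up measurements (using the psi returned above; expect and correlation_matrix are provided by ITensorMPS):

Sz = expect(psi, "Sz")                    # ⟨Sz⟩ on every site
zz = correlation_matrix(psi, "Sz", "Sz")  # matrix of ⟨Sz_i Sz_j⟩ correlators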

        diff --git a/dev/tutorials/MPSTimeEvolution.html b/dev/tutorials/MPSTimeEvolution.html index 8510cd7fbb..1ff06ce9ec 100644 --- a/dev/tutorials/MPSTimeEvolution.html +++ b/dev/tutorials/MPSTimeEvolution.html @@ -51,4 +51,4 @@ end

        Steps of The Code

First we set some parameters, such as the system size N and the time step $\tau$ to use.

        The line s = siteinds("S=1/2",N;conserve_qns=true) defines an array of spin 1/2 tensor indices (Index objects) which will be the site or physical indices of the MPS.

Next we make an empty array gates = ITensor[] that will hold the ITensors that will be our Trotter gates. Inside the for n=1:N-1 loop that follows, the lines

        hj =      op("Sz",s1) * op("Sz",s2) +
             1/2 * op("S+",s1) * op("S-",s2) +
             1/2 * op("S-",s1) * op("S+",s2)

call the op function, which reads the "S=1/2" tag on our site indices (sites j and j+1) and then knows that we want the spin-1/2 versions of the "Sz", "S+", and "S-" operators. The op function returns these operators as ITensors, and we tensor product and add them together to compute the operator $h_{j,j+1}$ defined as

        \[h_{j,j+1} = S^z_j S^z_{j+1} + \frac{1}{2} S^+_j S^-_{j+1} + \frac{1}{2} S^-_j S^+_{j+1}\]

        which we call hj in the code.

        To make the corresponding Trotter gate Gj we exponentiate hj times a factor $-i \tau/2$ and then append or push this onto the end of the gate array gates.

Gj = exp(-im * tau/2 * hj)
push!(gates,Gj)

Having made the gates for bonds (1,2), (2,3), (3,4), etc., we still need to append the gates in reverse order to complete the correct Trotter formula. Here we can conveniently do that by calling the Julia append! function and supplying a reversed version of the array of gates we have made so far. This can be done in a single line of code: append!(gates,reverse(gates)).

        The line of code psi = MPS(s, n -> isodd(n) ? "Up" : "Dn") initializes our MPS psi as a product state of alternating up and down spins.

        To carry out the time evolution we loop over the range of times from 0.0 to ttotal in steps of tau, using the Julia range notation 0.0:tau:ttotal to easily set up this loop as for t in 0.0:tau:ttotal.

        Inside the loop, we use the expect function to measure the expected value of the "Sz" operator on the center site.
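A minimal sketch of that measurement (assuming psi and N from the code above; the sites keyword restricts expect to the chosen site):

c = div(N, 2)                       # the center site
Sz_c = expect(psi, "Sz"; sites=c)   # returns ⟨Sz⟩ on site c as a number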

        To evolve the MPS to the next time, we call the function

        psi = apply(gates, psi; cutoff)

        which applies the array of ITensors called gates to our current MPS psi, truncating the MPS at each step using the truncation error threshold supplied as the variable cutoff.

        The apply function is smart enough to determine which site indices each gate has, and then figure out where to apply it to our MPS. It automatically handles truncating the MPS and can even handle non-nearest-neighbor gates, though that feature is not used in this example.

        diff --git a/dev/tutorials/QN_DMRG.html b/dev/tutorials/QN_DMRG.html index a9582fb603..0559526767 100644 --- a/dev/tutorials/QN_DMRG.html +++ b/dev/tutorials/QN_DMRG.html @@ -36,4 +36,4 @@ energy, psi = dmrg(H,psi0; nsweeps, maxdim, cutoff) return -end +end