On the road to minimizing allocations and reusing memory when possible, I'd like to convert most of the functions to work in place where possible. This goes from higher-level functions like `apply` down to lower-level functions like the various implementations of `factorize`.
Where to start
Most of the time these functions need to group indices, so contracting a `CombinerTensor` with a `DenseTensor` is very common. As currently implemented, though, this always creates a new tensor, even when that is not necessary. For this reason I created a macro `@Combine!` that handles such a product: it overwrites the `ITensor` with dense storage and returns a new `ITensor` with the combined indices, but sharing the same storage as the starting tensor. In the future, this macro could be used every time an N-dimensional tensor must be reduced to a 2-dimensional one to perform low-level calculations.
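For reference, here is a minimal sketch of the standard (allocating) combiner contraction in ITensors.jl that the macro targets. The index names and dimensions are just illustrative, and the final comment only describes the intended behavior of `@Combine!`, not its actual implementation.

```julia
using ITensors

i = Index(2, "i")
j = Index(3, "j")
k = Index(4, "k")

A = randomITensor(i, j, k)

# Standard combiner contraction: groups (i, j) into a single combined index.
# As currently implemented, this allocates new storage for the result.
C = combiner(i, j)
Ac = A * C                 # Ac has indices (combinedind(C), k)

# The idea behind @Combine! is to produce an ITensor with the same combined
# indices, but sharing the dense storage of A instead of allocating a copy.
```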
My Implementation
I have started this project on a personal fork of the library; for the moment I have only created the `@Combine!` macro. This is the repo I'm working on.
I wanted to ask for advice both about the implementation and about how to manage the repo: can I rebase the branch here, or is it better to keep it on a fork, and when would it be appropriate to open a pull request? Thank you in advance for the suggestions.
We are in the process of a full rewrite of NDTensors.jl/ITensors.jl. We were keeping track of the plan and progress here: #1250. That is pretty outdated now, but it could give you an idea of the changes we have planned. See also this Pluto notebook I made that demonstrates the current state of the rewrite: https://github.com/ITensor/itensor-deconstructed. I mention that as something to keep in mind since I see you are making a lot of changes to NDTensors.jl, and we probably won't accept large PRs to NDTensors.jl at this point, since it would be a lot of work for us to review them and NDTensors.jl will be replaced anyway once the rewrite is complete.
Thank you very much for the heads-up. https://github.com/ITensor/ITensors.jl/tree/nameddimsarrays_rewrite should be the branch, right? Since #1250 is outdated, could you be more specific about the current state of the rewrite? Can it be used, or would it be better to stick with the current version?
It is not meant for end users to use right now. We are making good progress (for example, we have dense DMRG running with the new system, and are close to having block sparse/abelian symmetric DMRG running) but we can't give estimates on when it might be ready since it has to be coordinated with other big changes we are making, like adding support for non-abelian symmetries.