Improving infinite mpo matrix #77
Conversation
Co-authored-by: github-actions[bot] <41898282+github-actions[bot]@users.noreply.github.com>
Fixing tests and including Matt's comment.
All tests pass on my branch without any measurable slowdown. A lot of simplification/optimization can probably still be done. It's up to you whether you integrate it right now, or whether we wait a bit until I can get an overview of possible changes.
@LHerviou I'm fine waiting to merge this if you think there are simplifications or optimizations to make once you have the time, unless @JanReimers needs this for something and therefore it's useful to have merged.
I have a branch with …
On my computer, this runs nicely. To tackle some missing methods in NDTensors, I explicitly defined an addition formula for EmptyITensors. I would be interested in someone having a look at it for type stability, as this touches a part of the package I am not familiar with.
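For context, here is a hedged sketch of the kind of "addition formula for EmptyITensors" mentioned above. The helper names and the eltype-based emptiness check are illustrative assumptions, not the PR's actual definition:

using ITensors
using ITensors: NDTensors

# Assumption: an ITensor with unallocated (empty) storage reports eltype
# NDTensors.EmptyNumber, so we can detect it and treat it as an additive zero.
is_empty_block(T::ITensor) = eltype(T) === NDTensors.EmptyNumber

function add_blocks(A::ITensor, B::ITensor)
  is_empty_block(A) && return copy(B)
  is_empty_block(B) && return copy(A)
  return A + B  # both blocks carry data: ordinary ITensor addition
end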
Sounds perfect. Does it also allow removing the extension of datatype I had to do?
Thanks a lot, Matt.
…On Tue, Jun 20, 2023 at 8:23 PM Matt Fishman wrote:
In src/infinitempomatrix.jl <#77 (comment)>:
> + new_tags = get(kwargs, :left_tags, tags(old_ind))
+ top, middle, bottom = build_three_projectors_from_index(old_ind; tags=new_tags)
+ vector = fill(op("Zero", local_sit), 3, 1)
+ else
+ new_tags = get(kwargs, :right_tags, tags(old_ind))
+ top, middle, bottom = build_three_projectors_from_index(old_ind; tags=new_tags)
+ vector = fill(op("Zero", local_sit), 1, 3)
+ end
+ for (idx, proj) in enumerate([top, middle, bottom])
+ vector[idx] = proj * tensor
+ end
+ return vector
+end
+
+#Matrix multiplication for Matrix of ITensors
+function Base.:*(A::Matrix{ITensor}, B::Matrix{ITensor})
@LHerviou I have a simple fix for this issue which makes sure the contraction code outputs the correct types, so then there is no issue in subsequent operations like addition. I'll make a PR to NDTensors/ITensors directly.
I can also add the definition Base.zero(T::ITensor) = false * T so that with those two fixes Matrix{ITensor} * Matrix{ITensor} will "just work".
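To make the quoted suggestion concrete, here is a small hedged sketch (the indices and randomITensor blocks are illustrative; whether Julia's generic matrix product covers every case end to end depends on the NDTensors fixes mentioned above):

using ITensors
using LinearAlgebra

# The suggested definition: an additive identity carrying the same indices as T.
Base.zero(T::ITensor) = false * T

# Illustrative block matrices: a 1×2 block row times a 2×1 block column.
i, j, k = Index(2, "i"), Index(2, "j"), Index(2, "k")
A = [randomITensor(i, j) randomITensor(i, j)]                  # 1×2 Matrix{ITensor}
B = reshape([randomITensor(j, k), randomITensor(j, k)], 2, 1)  # 2×1 Matrix{ITensor}
C = A * B            # generic matmul over ITensor blocks; uses zero(::ITensor) and +
@show inds(C[1, 1])  # the single output block has indices (i, k)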
I don't seem to need to overload NDTensors.datatype for Matrix{ITensor} * Matrix{ITensor} to work. Is it possible another operation needed that?
I will check again tomorrow. It is possible it came from something else.
…On Tue, Jun 20, 2023 at 11:23 PM Matt Fishman wrote:
I don't seem to need to overload NDTensors.datatype for Matrix{ITensor} * Matrix{ITensor} to work. Is it possible another operation needed that?
@LHerviou the issues we discussed have been implemented in the latest version of ITensors.jl (ITensor/ITensors.jl#1135).
Apparently, there is still one definition missing with how you implemented zero. Adding Base.fill!(::NDTensors.NoData, ::Any) = 0 is enough to solve the issue.
Adding the small fix here; it should eventually just be integrated into ITensors.
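For completeness, the stopgap from the comment above as it would be loaded in a user script until the fix lands upstream (this is method piracy on NDTensors internals, so it is only meant as a temporary workaround):

using ITensors
using ITensors: NDTensors

# Make fill! on the data of an empty storage a harmless no-op instead of a
# MethodError; NoData carries no elements, so there is nothing to fill.
Base.fill!(::NDTensors.NoData, ::Any) = 0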
…icant overhead in some cases for translatecell and index access
Ideally we would come up with a name that works well, so we won't have to change it going forward. We could also do that name change in a future PR.
Temp improving infinite mpo matrix
I think InfiniteBlockMPO is a good name. It is an MPO stored block by block, in the same sense that block matrices exist. I have no idea how many people (beyond me and some of my collaborators) used the InfiniteMPOMatrix anyway.
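As a concrete, textbook illustration of the "MPO by block" picture (not taken from this PR): the bulk tensor of a nearest-neighbour MPO such as the transverse-field Ising model, H = -J Σ Sz_i Sz_{i+1} - h Σ Sx_i, is naturally a lower-triangular matrix whose entries are local operators, so storing it as a matrix of ITensors is the same bookkeeping as an ordinary block matrix:

using ITensors

s = siteind("S=1/2")
J, h = 1.0, 0.5
Id, Sz, Sx = op("Id", s), op("Sz", s), op("Sx", s)
Z = 0 * Id   # zero operator block with the right site indices

# 3×3 lower-triangular block structure of the bulk MPO tensor
W = [ Id          Z           Z
      Sz          Z           Z
      (-h * Sx)   (-J * Sz)   Id ]

In this simplest case each block is a single local operator; in general the blocks carry link indices as well. With a product defined on Matrix{ITensor}, contracting consecutive W tensors then reduces to ordinary block-matrix algebra.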
@LHerviou this looks good to me now. Do you think this is ready to merge?
Hi,
I believe so.
…On Tue, 5 Sept 2023, 19:28 Matt Fishman wrote:
@LHerviou this looks good to me now. Do you think this is ready to merge?
For the future: I guess I am letting @JanReimers finally do his own updates, then I will add all the iDMRG and so on using his version of compression.
Thanks again!
@LHerviou not sure what's going on, but it looks like this PR broke some tests on main (https://github.com/ITensor/ITensorInfiniteMPS.jl/commits/main), could you take a look?
I will have a look either tonight or tomorrow.
Ah, we have been changing some internals of NDTensors/ITensors. We are in the process of refactoring and simplifying that code.
@mtfishman Ok, I found the source of the bug: l269 and l272 of [tensor_operations/tensor_algebra.jl](https://github.com/ITensor/ITensors.jl/blob/main/src/tensor_operations/tensor_algebra.jl). In the end, the problem is that one(EmptyNumber) is not defined. I am testing some fixes right now.
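To make the failure mode concrete, a minimal reproduction sketch (assuming, as at the time of this discussion, that no one method exists for NDTensors.EmptyNumber, the element type reported by empty ITensors):

using ITensors
using ITensors: NDTensors

T = ITensor(Index(2), Index(2))   # empty (unallocated) storage
@show eltype(T)                   # NDTensors.EmptyNumber
one(eltype(T))                    # MethodError at the time of this discussion: no one(::Type{EmptyNumber})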
Ok interesting, thanks for looking into it. Does that mean an EmptyNumber shows up somewhere in your code?
Yes, when I am fusing some indices.
For info: I am not sure it is a good workaround though, as the projectors are not projectors anymore, and could be used at some point later.
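For orientation, here is a hypothetical sketch of the kind of index-splitting projectors discussed in this thread; the names and the top/middle/bottom layout are illustrative, the PR's build_three_projectors_from_index may differ, QN conservation is ignored, and dim(l) >= 3 is assumed:

using ITensors

function three_projectors(l::Index)
  d = dim(l)
  lt, lm, lb = Index(1, "top"), Index(d - 2, "middle"), Index(1, "bottom")
  top = ITensor(l, lt)
  top[l => 1, lt => 1] = 1.0            # keep the first slot
  middle = ITensor(l, lm)
  for n in 2:(d - 1)
    middle[l => n, lm => n - 1] = 1.0   # keep the interior slots
  end
  bottom = ITensor(l, lb)
  bottom[l => d, lb => 1] = 1.0         # keep the last slot
  return top, middle, bottom
end

Applying proj * tensor with each of the three projectors then splits the tensor along l into its top, middle, and bottom blocks, which is how the snippet quoted earlier fills its vector of ITensors.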
I'm not sure that is something we want to define, since conversion shouldn't lose information (i.e. …). I wonder if …
We are replacing the …
As described, this adds some redefinitions and improvements to the form of InfiniteMPOMatrix, and the modifications that follow from them.
The core idea: …
All of that should significantly simplify the manipulation and the optimization of the InfiniteMPOMatrix and the corresponding InfiniteMPO.
I did not have time to fully debug, so I will keep updating over the next few days and will confirm when it can be merged.