Support mixed states #39
OK, short version: yes, but it is unlikely that I will work on it before I graduate from my Ph.D., unless there is a project that actually needs this functionality, since I think this package probably won't lead to any publication (in physics) in the short term.

Long version, for whoever wants to work in this direction:

First of all, for clarification, this package is still very much WIP. It aims to solve a few problems we currently have inside Yao that need a complete rework, so not everything works perfectly yet. I wouldn't recommend using it for serious purposes right now; there could be bugs, and performance issues that are not yet battle-tested. The registered version only contains the major implementation of single-threaded subspace matrix multiplication, which has a speedup compared to the current stable Yao simulation routines.

The routines in this package are actually not trying to handle a collection of pure states, but to implement the more fundamental routine: subspace matrix multiplication. The Kraus operator evaluation can be done with either of the following equivalent groupings:

```
U * (rho * U')
U * (U * rho')'
```

For the integral version of the Kraus operator, I think you can also benefit from this fundamental numerical routine

```
\int_s U(s) * rho * U(s)'
```

since the underlying integrator will also rely on this type of tensor contraction: the integrator has to implement some kind of discrete steps on top of it, so we don't actually need to worry about that part here. This means what matters is how you apply a matrix to a density matrix in a subspace, e.g. how to do the following tensor contraction efficiently:

```
U_{ai} * U_{bj} * U_{ck} * rho'_{123...a...b...c...n}
```

Now the problem becomes how to accelerate a general tensor contraction that has the pattern of a small operator applied to a large tensor. And how to do this kind of tensor contraction efficiently? I have a blog post explaining the simple version: https://blog.rogerluo.dev/2020/03/31/yany/. But to achieve the best performance, one actually needs to do more specialization for specific operators, such as Paulis etc. But I know there are some potential issues on the IR side.
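To make the two groupings and the subspace contraction concrete, here is a small NumPy sketch (an illustration only, not this package's actual Julia implementation): it checks that `U * (rho * U')` and `U * (U * rho')'` agree, and applies a 2x2 gate to one qubit of a density matrix via `einsum` on a reshaped tensor instead of building the full `kron` matrix.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random 2-qubit unitary U and a pure-state density matrix rho.
n = 4
A = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
U, _ = np.linalg.qr(A)                      # QR makes U unitary
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

dag = lambda M: M.conj().T                  # the ' (adjoint) above

# The two groupings from the comment evaluate the same map U rho U'.
lhs = U @ (rho @ dag(U))
rhs = U @ dag(U @ dag(rho))
assert np.allclose(lhs, rhs)

# Subspace version: apply a 2x2 gate g to qubit 0 of rho without
# building kron(g, I). Reshape rho into a rank-4 tensor and contract
# g with the row index and conj(g) with the column index:
#   g_{ai} * rho_{ijkl} * conj(g)_{ck} -> out_{ajcl}
g = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard as an example
rho_t = rho.reshape(2, 2, 2, 2)
out = np.einsum('ai,ijkl,ck->ajcl', g, rho_t, g.conj()).reshape(n, n)

# Cross-check against the dense construction.
G = np.kron(g, np.eye(2))
assert np.allclose(out, G @ rho @ dag(G))
print("both checks pass")
```

The point of the `einsum` form is that the work scales with the size of the state tensor times the small operator, rather than with the full `2^n x 2^n` matrix product.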
And the current YaoBlocks is certainly not a good place to have the IR support for noisy channels, since it wasn't designed for channels and thus has problems. I hope to address this in the development of YaoCompiler, but it is not currently supported or implemented.
One addition: doing more specialization on specific operators means a small compiler will be needed to do this automatically, which is the idea behind that scary metaprogramming in the codegen dir of this package. I hope to improve the quality of that code by reworking it using https://github.com/Roger-luo/Expronicon.jl and MLStyle. Metatheory could also be a good option, but that will depend on how complicated the simplification rules become, as currently they are quite simple.
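As a toy example of what "specialization on specific operators" buys you (a hypothetical NumPy sketch, not the codegen in this package): applying a Pauli X to a state vector reduces to a pure index permutation, with no matrix multiplication at all, which is exactly the kind of rewrite a small compiler could emit automatically.

```python
import numpy as np

def apply_x(state, k, nqubits):
    """Apply Pauli X on qubit k (0 = most significant) to a state
    vector of length 2**nqubits, by flipping one tensor index
    instead of multiplying by a 2**n x 2**n matrix."""
    t = state.reshape((2,) * nqubits)
    return np.flip(t, axis=k).reshape(-1)

rng = np.random.default_rng(1)
n = 3
state = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)

# Dense reference for qubit k = 1: kron(I, X, I).
X = np.array([[0, 1], [1, 0]])
dense = np.kron(np.eye(2), np.kron(X, np.eye(2)))
assert np.allclose(apply_x(state, 1, n), dense @ state)
print("specialized X matches dense matmul")
```

The dense version costs O(4^n) per gate; the specialized version is O(2^n) data movement, and similar shortcuts exist for Z (sign flips) and other structured operators.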
This package is great, and has amazing performance.
However, I'd like to use it to perform noisy simulations, where my state is a density matrix.
This does not work right now, as a matrix is interpreted as a collection of pure states.
Do you have plans to support this use-case in the future?