Release v0.7.0
New features
- Added a new interface for backends, as well as a `numpy` backend (which is now the default). Users can run all the functions in `utils`, `math`, `physics`, and `lab` with both backends, while `training` requires the `tensorflow` backend (see the sketch after this list). The `numpy` backend provides significant improvements in both import time and runtime. (#301)
- Added the classes and methods to create, contract, and draw tensor networks with `mrmustard.math`. (#284)
- Added functions in `physics.bargmann` to join and contract (A, b, c) triples. (#295)
- Added an `Ansatz` abstract class and `PolyExpAnsatz` concrete implementation. This is used in the Bargmann representation. (#295)
- Added `complex_gaussian_integral` and `real_gaussian_integral` methods. (#295)
- Added `Bargmann` representation (parametrized by Abc). Supports all algebraic operations and CV (exact) inner product. (#296)
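A minimal sketch of selecting a backend with the new interface, assuming the module-level `math` API described in the breaking changes below; the `math.astensor` call is only an illustrative placeholder for any backend function.

```python
import mrmustard.math as math

# The numpy backend is active by default, so functions from utils, math,
# physics, and lab run on numpy out of the box.
vec = math.astensor([1.0, 2.0, 3.0])  # illustrative call; function name assumed

# Optimization with the training module requires the tensorflow backend:
math.change_backend("tensorflow")
```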
Breaking changes
- Removed circular dependencies by:
  - Removing `graphics.py`; `ProgressBar` moved to `training` and `mikkel_plot` to `lab`.
  - Moving `circuit_drawer` and `wigner` to `physics`.
  - Moving `xptensor` to `math`.
  (#289)
- Created `settings.py` file to host `Settings`. (#289)
- Moved `settings.py`, `logger.py`, and `typing.py` to `utils`. (#289)
- Removed the `Math` class. To use the mathematical backend, replace `from mrmustard.math import Math ; math = Math()` with `import mrmustard.math as math` in your scripts (see the migration sketch after this list). (#301)
- The `numpy` backend is now the default. To switch to the `tensorflow` backend, add the line `math.change_backend("tensorflow")` to your scripts. (#301)
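Taken together, the two backend-related changes above amount to a short migration; a before/after sketch is below (the `change_backend` call is only needed if your script relies on the `tensorflow` backend).

```python
# Before v0.7.0:
# from mrmustard.math import Math
# math = Math()

# From v0.7.0 on:
import mrmustard.math as math

# numpy is now the default backend; opt back into tensorflow explicitly,
# e.g. when using the training module:
math.change_backend("tensorflow")
```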
Improvements
- Calculating Fock representations and their gradients is now more numerically stable (i.e., numerical blowups that result from repeatedly applying the recurrence relation are postponed to higher cutoff values). This holds for both the "vanilla strategy" (#274) and for the "diagonal strategy" and "single leftover mode strategy" (#288). This is done by representing Fock amplitudes with a higher precision than `complex128` (countering floating-point errors). We run Julia code via PyJulia (where Numba was used before) to keep the code fast. The precision is controlled by `settings.PRECISION_BITS_HERMITE_POLY`. The default value is `128`, which uses the old Numba code; setting a higher value runs the new Julia code (see the sketch after this list).
- Replaced parameters in `training` with `Constant` and `Variable` classes. (#298)
- Improved how states, transformations, and detectors deal with parameters by replacing the `Parametrized` class with `ParameterSet`. (#298)
- Included the Julia dependencies in the Python packaging for downstream installation reproducibility. Removed the dependency on `tomli` for loading `pyproject.toml` version info; `importlib.metadata` is used instead. (#303) (#304)
- Improved the algorithms implemented in `vanilla` and `vanilla_vjp` to achieve a speedup. Specifically, the improved algorithms work on flattened arrays (which are reshaped before being returned) as opposed to multi-dimensional arrays. (#312) (#318)
- Added the functions `hermite_renormalized_batch` and `hermite_renormalized_diagonal_batch` to speed up calculating Hermite polynomials over a batch of B vectors. (#308)
- Added a suite to filter undesired warnings, and used it to filter tensorflow's `ComplexWarning`s. (#332)
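A minimal sketch of raising the Hermite-polynomial precision described in the first item of this list, assuming the `settings` object is importable from the top-level `mrmustard` package and that 256 is an accepted value above the default.

```python
from mrmustard import settings

# Default is 128, which keeps the original Numba code path.
# Any higher value switches to the Julia (PyJulia) code path;
# 256 is an assumed example value.
settings.PRECISION_BITS_HERMITE_POLY = 256
```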
Bug fixes
- Added the missing `shape` input parameters to all methods `U` in the `gates.py` file. (#291)
- Fixed inconsistent use of `atol` in purity evaluation for Gaussian states. (#294)
- Fixed the documentation for the `loss_XYd` and `amp_XYd` functions for Gaussian channels. (#305)
- Replaced all instances of `np.empty` with `np.zeros` to fix instabilities (see the note after this list). (#309)
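As a note on the last fix: `np.empty` allocates without initializing, so any entry read before being written holds arbitrary memory contents, which is the kind of instability the switch to `np.zeros` removes.

```python
import numpy as np

a = np.empty(3)  # uninitialized: contents are whatever was in memory
b = np.zeros(3)  # always [0., 0., 0.], a well-defined starting state
```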
Documentation
Tests
- Added tests for calculating Fock amplitudes with a higher precision than `complex128`.
Contributors
@elib20 @rdprins @SamFerracin @jan-provaznik @sylviemonet @ziofil