Releases: XanaduAI/MrMustard
Release 0.7.3
Release 0.7.2
New features
- Added functions to generate the (A, b, c) triples for the Fock-Bargmann representation of several states and gates. (#338)
Release v0.7.0
New features
- Added a new interface for backends, as well as a `numpy` backend (which is now default). Users can run all the functions in `utils`, `math`, `physics`, and `lab` with both backends, while `training` requires using `tensorflow`. The `numpy` backend provides significant improvements both in import time and runtime. (#301)
- Added the classes and methods to create, contract, and draw tensor networks with `mrmustard.math`. (#284)
- Added functions in `physics.bargmann` to join and contract (A, b, c) triples. (#295)
- Added an `Ansatz` abstract class and a `PolyExpAnsatz` concrete implementation. This is used in the Bargmann representation. (#295)
- Added `complex_gaussian_integral` and `real_gaussian_integral` methods. (#295)
- Added `Bargmann` representation (parametrized by Abc). Supports all algebraic operations and CV (exact) inner product. (#296)
Breaking changes
- Removed circular dependencies by: (#289)
  - Removing `graphics.py`; moved `ProgressBar` to `training` and `mikkel_plot` to `lab`.
  - Moving `circuit_drawer` and `wigner` to `physics`.
  - Moving `xptensor` to `math`.
- Created a `settings.py` file to host `Settings`. (#289)
- Moved `settings.py`, `logger.py`, and `typing.py` to `utils`. (#289)
- Removed the `Math` class. To use the mathematical backend, replace `from mrmustard.math import Math; math = Math()` with `import mrmustard.math as math` in your scripts. (#301)
- The `numpy` backend is now default. To switch to the `tensorflow` backend, add the line `math.change_backend("tensorflow")` to your scripts, as shown below. (#301)
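For example (a minimal sketch based on the calls quoted above):

```python
import mrmustard.math as math

# numpy is the default backend; switch to tensorflow, e.g. for training
math.change_backend("tensorflow")
```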
Improvements
- Calculating Fock representations and their gradients is now more numerically stable (i.e. numerical blowups that result from repeatedly applying the recurrence relation are postponed to higher cutoff values). This holds for both the "vanilla strategy" (#274) and for the "diagonal strategy" and "single leftover mode strategy" (#288). This is done by representing Fock amplitudes with a higher precision than `complex128` (countering floating-point errors). We run Julia code via PyJulia (where Numba was used before) to keep the code fast. The precision is controlled by `settings.PRECISION_BITS_HERMITE_POLY`. The default value is `128`, which uses the old Numba code; when set to a higher value, the new Julia code is run (see the sketch after this list).
- Replaced parameters in `training` with `Constant` and `Variable` classes. (#298)
- Improved how states, transformations, and detectors deal with parameters by replacing the `Parametrized` class with `ParameterSet`. (#298)
- Includes Julia dependencies in the Python packaging for downstream installation reproducibility. Removes the dependency on tomli for loading pyproject.toml version info, using importlib.metadata instead. (#303) (#304)
- Improves the algorithms implemented in `vanilla` and `vanilla_vjp` to achieve a speedup. Specifically, the improved algorithms work on flattened arrays (which are reshaped before being returned) as opposed to multi-dimensional arrays. (#312) (#318)
- Adds functions `hermite_renormalized_batch` and `hermite_renormalized_diagonal_batch` to speed up calculating Hermite polynomials over a batch of B vectors. (#308)
- Added a suite to filter undesired warnings, and used it to filter tensorflow's `ComplexWarning`s. (#332)
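As a sketch (assuming the `settings` object importable from the package root, as in earlier releases):

```python
from mrmustard import settings

# 128 (the default) keeps the Numba code path;
# a higher value switches to the Julia implementation
settings.PRECISION_BITS_HERMITE_POLY = 256
```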
Bug fixes
- Added the missing `shape` input parameters to all methods `U` in the `gates.py` file. (#291)
- Fixed inconsistent use of `atol` in purity evaluation for Gaussian states. (#294)
- Fixed the documentation for the loss_XYd and amp_XYd functions for Gaussian channels. (#305)
- Replaced all instances of `np.empty` with `np.zeros` to fix instabilities. (#309)
Tests
- Added tests for calculating Fock amplitudes with a higher precision than `complex128`.
Contributors
@elib20 @rdprins @SamFerracin @jan-provaznik @sylviemonet @ziofil
Release 0.6.1-post1
Release v0.6.0
New features
- Added a new method to discretize Wigner functions that relies on Clenshaw summations. This method is expected to be fast and reliable for systems with a high number of excitations, for which the pre-existing iterative method is known to be unstable. Users can select their preferred method by setting the value of `Settings.DISCRETIZATION_METHOD` to either `iterative` (default) or `clenshaw`, as in the sketch after this list. (#280)
- Added the `PhaseNoise(phase_stdev)` gate (non-Gaussian). Output is a mixed state in Fock representation. It is not based on a Choi operator, but on a nonlinear transformation of the density matrix, as shown below. (#275)
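A minimal sketch of both features; the `Coherent` input state is an illustrative assumption:

```python
from mrmustard import settings
from mrmustard.lab import Coherent, PhaseNoise

# opt in to the Clenshaw-based Wigner discretization
settings.DISCRETIZATION_METHOD = "clenshaw"

# PhaseNoise outputs a mixed state in Fock representation
noisy_state = Coherent(x=1.0) >> PhaseNoise(phase_stdev=0.2)
```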
Breaking changes
- The value of `hbar` can no longer be specified outside of `Settings`. All the classes and methods that allowed specifying its value as an input now retrieve it directly from `Settings` (see the sketch after this list). (#278)
- Certain attributes of `Settings` can no longer be changed after their value is queried for the first time. (#278)
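For instance (a sketch, assuming `hbar` is exposed as `settings.HBAR`):

```python
from mrmustard import settings

# set hbar once, before any class or method queries it
settings.HBAR = 1.0
```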
Improvements
- Tensorflow bumped to v2.14, with poetry installation working out of the box on Linux and Mac. (#281)
Bug fixes
- Fixed a bug with the variable names in the functions `apply_kraus_to_ket`, `apply_kraus_to_dm`, `apply_choi_to_ket`, and `apply_choi_to_dm`. (#271)
- Fixed a bug that was leading to an error when computing the Choi representation of a unitary transformation. (#283)
- Fixed the internal function that calculates the ABC of the Bargmann representation (it now corresponds to the literature), along with other fixes to get the correct Fock tensor. (#255)
Release v0.5.0
New features
- Optimization callback functionality has been improved. A dedicated `Callback` class has been added which is able to access the optimizer, the cost function, the parameters, as well as gradients, during the optimization. In addition, multiple callbacks can be specified. This opens up endless possibilities for customizing the optimization progress with schedulers, trackers, heuristics, tricks, etc. (#219)
- Tensorboard-based optimization tracking is added as a builtin `Callback` class: `TensorboardCallback`. It can automatically track costs as well as all trainable parameters during optimization in realtime. Tensorboard can be most conveniently viewed from VScode. (#219)

```python
import numpy as np
from mrmustard.training import Optimizer, TensorboardCallback

def cost_fn():
    ...

def as_dB(cost):
    delta = np.sqrt(np.log(1 / (abs(cost) ** 2)) / (2 * np.pi))
    cost_dB = -10 * np.log10(delta**2)
    return cost_dB

tb_cb = TensorboardCallback(cost_converter=as_dB, track_grads=True)

opt = Optimizer(euclidean_lr=0.001)
opt.minimize(cost_fn, max_steps=200, by_optimizing=[...], callbacks=tb_cb)

# Logs will be stored in `tb_cb.logdir` which defaults to `./tb_logdir/...` but can be customized.
# VScode can be used to open the Tensorboard frontend for live monitoring.
# Or, in command line: `tensorboard --logdir={tb_cb.logdir}` and open link in browser.
```
- Gaussian states support a `bargmann` method for returning the Bargmann representation. (#235)
- The `ket` method of `State` now supports new keyword arguments `max_prob` and `max_photons`. Use them to speed up the filling of a ket array up to a certain probability or total photon number. (#235)

```python
from mrmustard.lab import Gaussian

# Fills the ket array up to 99% probability or up to the |0,3>, |1,2>, |2,1>, |3,0> subspace, whichever is reached first.
# The array has the autocutoff shape, unless the cutoffs are specified explicitly.
ket = Gaussian(2).ket(max_prob=0.99, max_photons=3)
```
- Gaussian transformations support a `bargmann` method for returning the Bargmann representation. (#239)
- `BSgate.U` now supports `method='vanilla'` (default) and `method='schwinger'` (slower, but stable to any cutoff), as sketched below. (#248)
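A sketch of the new option; the exact `U` signature (here assumed to accept a `cutoffs` argument) may differ:

```python
import numpy as np
from mrmustard.lab import BSgate

# Fock-space unitary of a 50/50 beamsplitter via the Schwinger method
U = BSgate(theta=np.pi / 4).U(cutoffs=[20, 20], method="schwinger")
```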
Breaking Changes
- The previous `callback` argument to `Optimizer.minimize` is now `callbacks`, since we can now pass multiple callbacks to it (see the sketch after this list). (#219)
- The `opt_history` attribute of `Optimizer` no longer has the placeholder at the beginning. (#235)
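For example, a sketch passing several callbacks at once (`cost_fn`, `cb1`, and `cb2` are hypothetical):

```python
from mrmustard.training import Optimizer

# cost_fn is a hypothetical cost function; cb1 and cb2 are hypothetical Callback instances
opt = Optimizer(euclidean_lr=0.001)
opt.minimize(cost_fn, max_steps=200, by_optimizing=[...], callbacks=[cb1, cb2])
```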
Improvements
- The math module now has a submodule `lattice` for constructing recurrence relation strategies in the Fock lattice. There are a few predefined strategies in `mrmustard.math.lattice.strategies` (see the sketch after this list). (#235)
- Gradients in the Fock lattice are now computed using the vector-Jacobian product. This saves a lot of memory and speeds up the optimization process by roughly 4x. (#235)
- Tests of the compact_fock module now use Hypothesis. (#235)
- Faster implementation of the Fock representation of `BSgate`, `Sgate`, and `SqueezedVacuum`, ranging from 5x to 50x. (#239)
- More robust implementation of cutoffs for States. (#239)
- Dependencies and versioning are now managed using Poetry. (#257)
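A hypothetical sketch of calling a predefined strategy directly; the exact names and signatures inside `strategies` are assumptions:

```python
import numpy as np
from mrmustard.math.lattice import strategies

# Fock amplitudes of a single-mode Gaussian object from its Bargmann (A, b, c)
# triple, filled up to the given shape by the vanilla recurrence strategy
A = np.array([[0.5 + 0j]])
b = np.array([0.1 + 0j])
c = 0.8 + 0j
G = strategies.vanilla((10,), A, b, c)
```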
Bug fixes
- Fixed a bug that would make two progress bars appear during an optimization. (#235)
- The displacement of the dual of an operation had the wrong sign. (#239)
- When projecting a Gaussian state onto a Fock state, the upper limit of the autocutoff now respects the Fock projection. (#246)
- Fixed a bug in the algorithms that allow faster PNR sampling from Gaussian circuits using density matrices. When the cutoff of the first detector is equal to 1, the resulting density matrix is now correct.
Release v0.4.1
Release v0.4.0
New features
- Ray-based distributed trainer is now added to `training.trainer`. It acts as a replacement for `for` loops and enables the parallelization of running many circuits as well as their optimizations. To install the extra dependencies: `pip install .[ray]`. (#194)

```python
from mrmustard.lab import Vacuum, Dgate, Ggate, Gaussian
from mrmustard.physics import fidelity
from mrmustard.training.trainer import map_trainer

def make_circ(x=0.):
    return Ggate(num_modes=1, symplectic_trainable=True) >> Dgate(x=x, x_trainable=True, y_trainable=True)

def cost_fn(circ=make_circ(0.1), y_targ=0.):
    target = Gaussian(1) >> Dgate(-1.5, y_targ)
    s = Vacuum(1) >> circ
    return -fidelity(s, target)

# Use case 0: Calculate the cost of a randomly initialized circuit 5 times without optimizing it.
results_0 = map_trainer(
    cost_fn=cost_fn,
    tasks=5,
)

# Use case 1: Run circuit optimization 5 times on randomly initialized circuits.
results_1 = map_trainer(
    cost_fn=cost_fn,
    device_factory=make_circ,
    tasks=5,
    max_steps=50,
    symplectic_lr=0.05,
)

# Use case 2: Run circuit optimization 2 times on randomly initialized circuits with custom parameters.
results_2 = map_trainer(
    cost_fn=cost_fn,
    device_factory=make_circ,
    tasks=[
        {'x': 0.1, 'euclidean_lr': 0.005, 'max_steps': 50, 'HBAR': 1.},
        {'x': -0.7, 'euclidean_lr': 0.1, 'max_steps': 2, 'HBAR': 2.},
    ],
    y_targ=0.35,
    symplectic_lr=0.05,
    AUTOCUTOFF_MAX_CUTOFF=7,
)
```
- Sampling for homodyne measurements is now integrated in Mr Mustard: when no measurement outcome value is specified by the user, a value is sampled from the reduced state probability distribution and the conditional state on the remaining modes is generated. (#143)

```python
import numpy as np
from mrmustard.lab import Homodyne, TMSV, SqueezedVacuum

# conditional state from measurement
conditional_state = TMSV(r=0.5, phi=np.pi)[0, 1] >> Homodyne(quadrature_angle=np.pi/2)[1]

# measurement outcome
measurement_outcome = SqueezedVacuum(r=0.5) >> Homodyne()
```
- The optimizer `minimize` method now accepts an optional callback function, which will be called at each step of the optimization and passed the step number, the cost value, and the value of the trainable parameters. The result is added to the `callback_history` attribute of the optimizer. (#175)
- The Math interface now supports linear system solving via `math.solve` (see the sketch after this list). (#185)
- We introduce the tensor wrapper `MMTensor` (available in `math.mmtensor`) that allows for very easy handling of tensor contractions. Internally MrMustard performs lots of tensor contractions and this wrapper allows one to label each index of a tensor and perform contractions using the `@` symbol as if it were a simple matrix multiplication (the indices with the same name get contracted). (#185) (#195)

```python
import numpy as np
from mrmustard.math.mmtensor import MMTensor

# define two tensors
A = MMTensor(np.random.rand(2, 3, 4), axis_labels=["foo", "bar", "contract"])
B = MMTensor(np.random.rand(4, 5, 6), axis_labels=["contract", "baz", "qux"])

# perform a tensor contraction
C = A @ B
C.axis_labels  # ["foo", "bar", "baz", "qux"]
C.shape  # (2, 3, 5, 6)
C.tensor  # extract actual result
```
- MrMustard's settings object (accessible via `from mrmustard import settings`) now supports `SEED` (an int). This will give reproducible results whenever randomness is involved. The seed is assigned randomly by default, and it can be reassigned by setting it to `None`: `settings.SEED = None`. If one desires, the seeded random number generator is accessible directly via `settings.rng` (e.g. `settings.rng.normal()`). (#183)
- The `Circuit` class now has an ASCII representation, which can be accessed via the repr method. It looks great in Jupyter notebooks! There is a new option at `settings.CIRCUIT_DECIMALS` which controls the number of decimals shown in the ASCII representation of the gate parameters. If `None`, only the name of the gate is shown. (#196)
- PNR sampling from Gaussian circuits using density matrices can now be performed faster. When all modes are detected, this is done by replacing `math.hermite_renormalized` by `math.hermite_renormalized_diagonal`. If all but the first mode are detected, `math.hermite_renormalized_1leftoverMode` can be used. The complexity of these new methods is equal to performing a pure state simulation. The methods are differentiable, so that they can be used for defining a cost function. (#154)
- The MrMustard repo now provides a fully furnished vscode development container and a Dockerfile. To find out how to use dev containers for development, check the documentation here. (#214)
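A sketch of `math.solve`, using the `Math` backend object as it existed at this release (the exact signature is an assumption):

```python
import numpy as np
from mrmustard.math import Math

math = Math()

A = np.array([[2.0, 0.0], [0.0, 4.0]])
b = np.array([1.0, 2.0])
x = math.solve(A, b)  # solves A @ x = b
```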
Improvements
- The `Dgate` is now implemented directly in MrMustard (instead of in The Walrus) to calculate the unitary and gradients of the displacement gate in Fock representation, providing better numerical stability for larger cutoff and displacement values. (#147) (#211)
- The Wigner function is now implemented in its own module and uses Numba for speed. (#171)

```python
from mrmustard.utils.wigner import wigner_discretized

W, Q, P = wigner_discretized(dm, q, p)  # dm is a density matrix
```
- Calculate marginals independently from the Wigner function, thus ensuring that the marginals are physical even though the Wigner function might not contain all the features of the state within the defined window. Also, expose some plot parameters and return the figure and axes. (#179)
- Allows for full cutoff specification (index-wise rather than mode-wise) for subclasses of `Transformation`. This allows for a more compact Fock representation where needed. (#181)
- The `mrmustard.physics.fock` module now provides convenience functions for applying Kraus operators and Choi operators to kets and density matrices. (#180)

```python
from mrmustard.physics.fock import apply_kraus_to_ket, apply_kraus_to_dm, apply_choi_to_ket, apply_choi_to_dm

ket_out = apply_kraus_to_ket(kraus, ket_in, indices)
dm_out = apply_choi_to_dm(choi, dm_in, indices)
dm_out = apply_kraus_to_dm(kraus, dm_in, indices)
dm_out = apply_choi_to_ket(choi, ket_in, indices)
```
- Replaced norm with probability in the repr of `State`. This improves consistency over the old behaviour (norm was the sqrt of prob if the state was pure and prob if the state was mixed). (#182)
- Added two new modules (`physics.bargmann` and `physics.husimi`) to host the functions related to those representations, which have been refactored and moved out of `physics.fock`. (#185)
- The internal type system in MrMustard has been beefed up with much clearer types, like `ComplexVector`, `RealMatrix`, etc., as well as a generic type `Batch`, which can be parametrized using the other types, like `Batch[ComplexTensor]`. This will allow for better type checking and better error messages. (#199)
- Added multiple tests and improved the use of Hypothesis. (#191)
- The `fock.autocutoff` function now uses the new diagonal methods for calculating a probability-based cutoff. Use `settings.AUTOCUTOFF_PROBABILITY` to set the probability threshold, as in the sketch after this list. (#203)
- The unitary group optimization (for the interferometer) and the orthogonal group optimization (for the real interferometer) have been added. The symplectic matrix that describes an interferometer belongs to the intersection of the orthogonal group and the symplectic group, which is the unitary group, so we needed both. (#208)
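For example (a minimal sketch of adjusting the threshold):

```python
from mrmustard import settings

# raise the probability threshold used by fock.autocutoff
settings.AUTOCUTOFF_PROBABILITY = 0.9999
```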
Bug fixes
- The `Dgate` and the `Rgate` now correctly parse the case when a single scalar is intended as the same parameter for a number of gates in parallel. (#180)
- The trace function in the fock module was giving incorrect results when called with certain choices of modes. This is now fixed. (#180)
- The purity function for Fock states no longer normalizes the density matrix before computing the purity. (#180)
- The function `dm_to_ket` no longer normalizes the density matrix before diagonalizing it. (#180)
- The internal Fock representation of states returns the correct cutoffs in all cases (solves an issue when a pure dm was converted to a ket). (#184)
Release v0.3.0
New features
- Can switch the progress bar on and off (default is on) from the settings via `settings.PROGRESSBAR = True/False`. (#128)
- States in Gaussian and Fock representation can now be concatenated.

```python
from mrmustard.lab.states import Gaussian, Fock
from mrmustard.lab.gates import Attenuator

# concatenate pure states
fock_state = Fock(4)
gaussian_state = Gaussian(1)
pure_state = fock_state & gaussian_state

# also can concatenate mixed states
mixed1 = fock_state >> Attenuator(0.8)
mixed2 = gaussian_state >> Attenuator(0.5)
mixed_state = mixed1 & mixed2

mixed_state.dm()
```
- Parameter passthrough allows one to use custom variables and/or functions as parameters. For example, we can use parameters of other gates:

```python
import numpy as np
from mrmustard.lab.gates import Sgate, BSgate

BS = BSgate(theta=np.pi/4, theta_trainable=True)[0, 1]
S0 = Sgate(r=BS.theta)[0]
S1 = Sgate(r=-BS.theta)[1]
circ = S0 >> S1 >> BS
```

Another possibility is with functions:

```python
# `math` is the mrmustard math backend and `opt` an Optimizer instance
def my_r(x):
    return x**2

x = math.new_variable(0.5, bounds=(None, None), name="x")

def cost_fn():
    # note that my_r needs to be in the cost function
    # in order to track the gradient
    S = Sgate(r=my_r(x), theta_trainable=True)[0, 1]
    return ...  # some function of S

opt.Optimize(cost_fn, by_optimizing=[x])
```
- Adds the new trainable gate `RealInterferometer`: an interferometer that doesn't mix the q and p quadratures. (#132)
- Marginals can now be iterated over:

```python
for mode in state:
    print(mode.purity)
```
Breaking changes
- The Parametrized and Training classes have been refactored: now trainable tensors are wrapped in an instance of the `Parameter` class. To define a set of parameters do:

```python
from mrmustard.training import Parametrized

params = Parametrized(
    magnitude=10, magnitude_trainable=False, magnitude_bounds=None,
    angle=0.1, angle_trainable=True, angle_bounds=(-0.1, 0.1)
)
```

which will automatically define the properties `magnitude` and `angle` on the `params` object. To access the backend tensor defining the values of such parameters, use the `value` property:

```python
params.angle.value
params.angle.bounds
params.magnitude.value
```

Gates will automatically be an instance of the `Parametrized` class, for example:

```python
from mrmustard.lab import BSgate

bs = BSgate(theta=0.3, phi=0.0, theta_trainable=True)

# access params
bs.theta.value
bs.theta.bounds
bs.phi.value
```
Improvements
- The Parametrized and Training classes have been refactored. The new training module has been added and with it the new `Parameter` class: trainable tensors are now wrapped in an instance of `Parameter`. (#133), with patches (#144) and (#158).
- The string representations of the `Circuit` and `Transformation` objects have been improved: the `Circuit.__repr__` method now produces a string that can be used to generate a circuit in an identical state (same gates and parameters); `Transformation.__str__` (and objects inheriting from it) now prints the name and memory location of the object, as well as the modes of the circuit on which the transformation acts. The `_markdown_repr_` has been implemented, and on a Jupyter notebook it produces a table with valuable information about the Transformation objects. (#141)
- Add the argument `modes` to the `Interferometer` operation to indicate which modes the Interferometer is applied to, as sketched below. (#121)
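A sketch of the new argument (the sizes and mode choice are illustrative, and `num_modes` is assumed to be the size argument):

```python
from mrmustard.lab.gates import Interferometer

# apply a 2-mode interferometer to modes 0 and 2 of a larger circuit
interferometer = Interferometer(num_modes=2, modes=[0, 2])
```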
Bug fixes
- Fixed a bug in the `State.ket()` method. An attribute was called with a typo in its name. (#135)
- The `math.dagger` function, which applies the Hermitian conjugate to an operator, was incorrectly transposing the indices of the input tensor. Now `math.dagger` appropriately calculates the Hermitian conjugate of an operator. (#156)
Documentation
- The centralized Xanadu Sphinx Theme is now used to style the Sphinx documentation. (#126)
- The documentation now contains the `mm.training` section. The optimization examples in the README and the Basic API Reference section have been updated to use the latest API. (#133)
Contributors
This release contains contributions from (in alphabetical order):
Mikhail Andrenkov (@Mandrenkov), Sebastian Duque Mesa (@sduquemesa), Filippo Miatto (@ziofil), Yuan Yao (@sylviemonet)
Release v0.2.0
New features since last release
- Fidelity can now be calculated between two mixed states (see the sketch after this list). (#115)
- A configurable logger module is added. (#107)

```python
from mrmustard.logger import create_logger

logger = create_logger(__name__)
logger.warning("Warning message")
```
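A sketch of the mixed-state fidelity; the `Thermal` states are illustrative assumptions:

```python
from mrmustard.lab import Thermal
from mrmustard.physics import fidelity

# fidelity between two mixed (thermal) states
f = fidelity(Thermal(nbar=1.0), Thermal(nbar=2.0))
```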
Improvements
- The tensorflow and torch backends adhere to `MathInterface`. (#103)
Bug fixes
- Setting the modes on which detectors and states act using the `modes` kwarg or `__getitem__` gives consistent results. (#114)
- Lists are used instead of generators for indices in fidelity calculations. (#117)
- A raised `KeyboardInterrupt` while in an optimization loop now stops the execution of the program. (#105)
Documentation
- Basic API reference is updated to use the latest Mr Mustard API. (#119)
Contributors
This release contains contributions from (in alphabetical order):