- COSMIT: accepted Black as the code style of choice, introduced pre-commit hooks for developers
- FIX: having a dunder-version in the root of the package is the standard (issue #24)
- FIX: set the minimal Python version to `3.7`, as pointed out in issue #24
- UPD: bumped the base version of torch to at least `1.8`
- FIX: upgraded `.utils.spectrum` to the new native torch complex backend (`torch>=1.8`)
- FIX: ensured ONNX support in PR #14
- ENH: implemented modulus-based maxpooling, requested in issue #17
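A minimal sketch of the modulus-based pooling idea, in plain torch on separate real/imaginary components; the helper name `cplx_max_pool1d` and its signature are illustrative, not the layer this release actually exports:

```python
import torch
import torch.nn.functional as F


def cplx_max_pool1d(real, imag, kernel_size, stride=None):
    # pool the modulus of a (B, C, L) complex signal, keeping the
    # indices of the entries that attain the maximal magnitude
    _, idx = F.max_pool1d(
        torch.sqrt(real * real + imag * imag),
        kernel_size, stride, return_indices=True,
    )
    # gather the winning complex entries along the length dimension
    return real.gather(-1, idx), imag.gather(-1, idx)


re, im = torch.randn(8, 3, 16), torch.randn(8, 3, 16)
re2, im2 = cplx_max_pool1d(re, im, kernel_size=2)  # two (8, 3, 8) tensors
```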
- FIX: made `.Cplx` instances `deepcopy`-able, fixing issue #18
- DOC: improved the docs for `.nn.ModReLU`, indicating the sign deviation from the original paper proposing it (issue #22); the convention is sketched just below
- DOC: added a basic TOC to the main README docs
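For reference, the activation in the original paper is $\mathrm{modReLU}(z) = \mathrm{ReLU}(|z| + b)\,z/|z|$; the sign convention of the learned threshold $b$ is precisely where implementations can deviate. A plain-torch sketch on native complex tensors, not the library's `.nn.ModReLU` itself:

```python
import torch


def modrelu(z: torch.Tensor, b: float) -> torch.Tensor:
    # relu(|z| + b) rescales the modulus; the phase z/|z| is preserved
    modulus = z.abs()
    return z * torch.relu(modulus + b) / modulus.clamp_min(1e-8)


z = torch.randn(4, 5, dtype=torch.complex64)
out = modrelu(z, b=-0.1)  # negative b suppresses small-modulus entries
```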
- misnamed VD and misplaced ARD layers in `.nn.relevance`
- sparsity stats badly placed in `.utils.stats`
- misnamed $\ell_0$ probabilistic pruning layer in `.nn.relevance.extensions.real`, since it had nothing to do with the Automatic Relevance Determination Bayesian approach
- FIX: fixed a shape mismatch in `.nn.init.cplx_trabelsi_independent_`, which prevented it from working properly #11
- ENH: Hendrik Schröter implemented Complex Transposed Convolutions #8, squeeze/unsqueeze methods for `Cplx` #7, and added support for `.view` and `.view_as` methods for `Cplx` #6
- ENH: introduced converters for the special torch format of complex tensors (the last dim is exactly 2), see `torch.fft`
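The `(..., 2)` layout packs the real and imaginary parts into the last dimension. A sketch of the round trip, assuming the `Cplx(real, imag)` constructor and its `.real`/`.imag` attributes; the dedicated converter layers live in `.nn.casting` and their exact names are not repeated here:

```python
import torch
from cplxmodule import Cplx

x = torch.randn(8, 16, 2)          # old torch.fft layout: last dim is (re, im)
z = Cplx(x[..., 0], x[..., 1])     # (..., 2) real tensor -> Cplx of shape (8, 16)
y = torch.stack([z.real, z.imag], dim=-1)  # Cplx -> back to the (..., 2) layout
assert torch.equal(x, y)
```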
- ENH: `Cplx` now also has a `.size()` method, which mimics `torch.Tensor.size()`
- DOC: improved the documentation of the `.nn.casting` modules
- the structure of the `.nn.relevance` was simplified
- importing from `nn.relevance.ard` has been deprecated, and ARD layers have been moved to `.real` or `.complex`, depending on their type
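A sketch of the migration, with `LinearARD` and `CplxLinearARD` as assumed names for the real- and complex-valued ARD layers:

```python
# before: from cplxmodule.nn.relevance.ard import ...   (now deprecated)
from cplxmodule.nn.relevance.real import LinearARD         # real-valued ARD layer
from cplxmodule.nn.relevance.complex import CplxLinearARD  # complex-valued ARD layer
```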
- changed the relevance layers' class hierarchy in `.relevance.real` and `.relevance.complex`:
  - factored out Gaussian Local Reparameterization into pure `*Gaussian` layers, which reside in `.real.base` and `.complex.base`
  - subclassed Variational Dropout layers (`*VD`) from `*Gaussian` with an improper prior KL mixin
  - subclassed ARD layers (`*ARD`) from Variational Dropout layers (`*VD`) with an ARD Gaussian prior KL mixin
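Schematically, the hierarchy reads as below; the class and method names are stand-ins for the library's actual ones, and the bodies are stubs:

```python
import torch.nn as nn


class LinearGaussian(nn.Linear):
    """Pure Gaussian Local Reparameterization (lives in `.real.base`)."""


class ImproperPriorKL:
    """Mixin: KL penalty w.r.t. the improper log-uniform prior."""
    def penalty(self): ...


class ARDGaussianPriorKL:
    """Mixin: KL penalty w.r.t. the ARD Gaussian prior."""
    def penalty(self): ...


class LinearVD(ImproperPriorKL, LinearGaussian):
    """Variational Dropout = Gaussian layer + improper-prior KL."""


class LinearARD(ARDGaussianPriorKL, LinearVD):
    """ARD = Variational Dropout layer + ARD-prior KL (overrides the mixin)."""
```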
- The structure of the `.nn` sub-module now more closely resembles that of `torch`:
  - `.base`: `CplxToCplx` and the parameter type `CplxParameter`
  - `.casting`: real-`Cplx` tensor conversion layers
  - `.linear`, `.conv`, `.activation`: essential layers and activations
  - `.container`: sequential container which explicitly checks the types of internal layers
  - `.extra`: 1-dim Bernoulli Dropout for complex-valued tensors (`Cplx`)
- `CplxToCplx` can now promote torch's univariate functions to split-complex activations, e.g. use `CplxToCplx[AvgPool1d]` instead of `CplxAvgPool1d`
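For example, with `torch.nn.AvgPool1d` promoted to a split-complex module that acts on the real and imaginary parts independently; treat the constructor arguments as the usual `AvgPool1d` ones, which is an assumption here:

```python
import torch
from cplxmodule import Cplx
from cplxmodule.nn import CplxToCplx

# promote torch's AvgPool1d to a split-complex module
CplxAvgPool1d = CplxToCplx[torch.nn.AvgPool1d]
pool = CplxAvgPool1d(kernel_size=2)

z = Cplx(torch.randn(8, 3, 16), torch.randn(8, 3, 16))
out = pool(z)  # a Cplx of shape (8, 3, 8)
```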
- Niche complex-valued containers were removed, and dedicated activations like `CplxLog` and `CplxExp` were dropped
- misnamed Bayesian layers in `.nn.relevance` were moved around and corrected:
  - layers in `.real` and `.complex` were renamed to Var Dropout, with deprecation warnings for the old names
  - `.ard` implements the Bayesian Dropout methods with Automatic Relevance Determination priors
- the `.extensions` submodule contains relaxations, approximations, and related but non-Bayesian layers:
  - the $\ell_0$ stochastic regularization layer was moved to `.real`
  - `Lasso` was kept to illustrate extensibility, but was similarly moved to `.real`
  - Variational Dropout approximations and speed-ups were moved to `.complex`
- `CplxParameter` now supports real-to-complex promotion during `.load_state_dict`
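A sketch of what the promotion enables; the key layout of the real checkpoint and the initialization of the imaginary part are assumptions here, not documented behaviour:

```python
import torch
from cplxmodule.nn import CplxLinear

lin = CplxLinear(4, 3)

# a checkpoint saved from a plain real-valued torch.nn.Linear
real_state = torch.nn.Linear(4, 3).state_dict()  # keys: 'weight', 'bias'

# real tensors found under a CplxParameter's key are promoted to complex
# parameters on load (assumption: the tensor becomes the real part)
lin.load_state_dict(real_state)
```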
- added submodule-specific READMEs, explaining typical use cases and peculiarities
The prior version used a different version numbering scheme, and although the layers are backwards compatible, their locations within the library differed substantially.