Releases: pytorchbearer/torchbearer

Version 0.2.6.1


[0.2.6.1] - 2019-02-25

Fixed

  • Fixed a bug where predictions would multiply when predict was called more than once

Version 0.2.6


[0.2.6] - 2018-12-19

Changed

  • Y_PRED, Y_TRUE and X can now equivalently be accessed as PREDICTION, TARGET and INPUT respectively
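The aliasing above can be sketched in plain Python (a hypothetical illustration, not torchbearer's actual key implementation): two module-level names bound to the same key object index the same state entry.

```python
# Hypothetical sketch (not torchbearer's implementation): aliased state
# keys are simply two names bound to one underlying key object.
class StateKey:
    def __init__(self, name):
        self.name = name

    def __repr__(self):
        return self.name

Y_PRED = StateKey("y_pred")
PREDICTION = Y_PRED  # alias: same object, so lookups are equivalent

state = {Y_PRED: [0.2, 0.8]}
assert state[PREDICTION] is state[Y_PRED]  # identical entry either way
```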

Fixed

  • Fixed a bug where the LiveLossPlot callback would trigger an error if run and evaluate were called separately
  • Fixed a bug where state key errors would report to the wrong stack level
  • Fixed a bug where the user would wrongly get a state key error in some cases

Version 0.2.5


[0.2.5] - 2018-12-19

Added

  • Added a flag to replay that replays only a single batch per epoch
  • Added support for PyTorch 1.0.0 and Python 3.7
  • MetricTree can now unpack dictionaries from the root; this is useful if you want the mean of a metric. However, this should be used with caution, as it extracts only the first value in the dict and ignores the rest.
  • Added a callback for the livelossplot visualisation tool for notebooks

Changed

  • All error / accuracy metrics can now optionally take state keys for predictions and targets as arguments

Fixed

  • Fixed a bug with the EpochLambda metric which required y_true / y_pred to have specific forms

Version 0.2.4


[0.2.4] - 2018-11-16

Added

  • Added metric functionality to state keys so that they can be used as metrics if desired
  • Added customizable precision to the printer callbacks
  • Added threshold to binary accuracy. Now it will appropriately handle any values in [0, 1]
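The thresholded binary accuracy described above can be sketched in plain Python (a hypothetical illustration, not torchbearer's actual metric code): probabilities in [0, 1] are binarised at a configurable threshold before comparison with the 0/1 targets.

```python
# Hypothetical sketch of thresholded binary accuracy (not torchbearer's code):
# predictions in [0, 1] are binarised at `threshold`, then compared
# element-wise with the binary targets.
def binary_accuracy(y_pred, y_true, threshold=0.5):
    """Fraction of predictions whose thresholded value matches the target."""
    correct = sum((p >= threshold) == bool(t) for p, t in zip(y_pred, y_true))
    return correct / len(y_true)

print(binary_accuracy([0.9, 0.3, 0.6, 0.2], [1, 0, 0, 1]))  # 0.5
```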

Changed

  • Changed the default printer precision to 4 significant figures
  • Tqdm on_epoch now shows metrics immediately when resuming

Fixed

  • Fixed a bug which would incorrectly trigger version warnings when loading in models
  • Fixed bugs where the Trial would not fail gracefully if required objects were not in state
  • Fixed a bug where a None criterion didn't work with the add_to_loss callback
  • Fixed a bug where tqdm on_epoch always started at 0

Version 0.2.3


[0.2.3] - 2018-10-12

Added

  • Added string representation of Trial to give summary
  • Added option to log Trial summary to TensorboardText
  • Added a callback point ('on_checkpoint') which can be used for model checkpointing after the history is updated

Changed

  • When resuming training, checkpointers no longer delete the state file the trial was loaded from
  • Changed the metric eval to include a data_key which tells us what data we are evaluating on

Fixed

  • Fixed a bug where callbacks weren't handled correctly in the predict and evaluate methods of Trial
  • Fixed a bug where the history wasn't updated when new metrics were calculated with the evaluate method of Trial
  • Fixed a bug where tensorboard writers couldn't be reused
  • Fixed a bug where the none criterion didn't require gradient
  • Fixed a bug where tqdm wouldn't get the correct iterator length when evaluating on the test generator
  • Fixed a bug where evaluating before training tried to update history before it existed
  • Fixed a bug where the metrics would output 'val_acc' even if evaluating on test or train data
  • Fixed a bug where roc metric didn't detach y_pred before sending to numpy
  • Fixed a bug where resuming from a checkpoint saved with one of the callbacks didn't populate the epoch number correctly

Version 0.2.2


[0.2.2] - 2018-09-18

Added

  • The default_for_key metric decorator can now be used to pass arguments to the init of the inner metric
  • The default metric for the key 'top_10_acc' is now the TopKCategoricalAccuracy metric with k set to 10
  • Added global verbose flag for trial that can be overridden by run, evaluate, predict
  • Added an LR metric which retrieves the current learning rate from the optimizer, default for key 'lr'
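The default_for_key behaviour described above can be sketched with a minimal registry decorator (a hypothetical illustration, not torchbearer's actual implementation): the decorator records a metric class under a string key along with constructor arguments to bake in, so 'top_10_acc' can map to the same class as 'top_5_acc' with a different k.

```python
# Hypothetical sketch of a default_for_key-style registry (not torchbearer's
# implementation): each key stores a metric class plus the init arguments
# to construct it with.
METRIC_REGISTRY = {}

def default_for_key(key, *args, **kwargs):
    def decorator(cls):
        METRIC_REGISTRY[key] = (cls, args, kwargs)  # remember class + init args
        return cls
    return decorator

@default_for_key("top_10_acc", k=10)
@default_for_key("top_5_acc", k=5)
class TopKCategoricalAccuracy:
    def __init__(self, k=5):
        self.k = k

def get_default(key):
    cls, args, kwargs = METRIC_REGISTRY[key]
    return cls(*args, **kwargs)

print(get_default("top_10_acc").k)  # 10
```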

Fixed

  • Fixed a bug where the DefaultAccuracy metric would not put the inner metric in eval mode if the first call to reset was after the call to eval
  • Fixed a bug where trying to load a state dict in a different session to where it was saved didn't work properly
  • Fixed a bug where the empty criterion would trigger an error if no Y_TRUE was put in state

Version 0.2.1


[0.2.1] - 2018-09-11

Added

  • Evaluation and prediction can now be done on any data using the data_key keyword argument
  • Text tensorboard/visdom logger that writes epoch/batch metrics to text

Changed

  • TensorboardX, Numpy, Scikit-learn and Scipy are no longer dependencies and are only required when using the tensorboard callbacks or the roc metric

Fixed

  • Fixed the Model class setting the generator incorrectly, leading to StopIteration errors
  • Made argument ordering consistent in Trial.with_generators and Trial.__init__
  • Added a state dict for the early stopping callback
  • Fixed visdom parameters not getting set in some cases

Version 0.2.0


See [NEW!] in README.md for new key features

[0.2.0] - 2018-08-21

Added

  • Added the ability to pass custom arguments to the tqdm callback
  • Added an ignore_index flag to the categorical accuracy metric, similar to nn.CrossEntropyLoss. Usage: metrics=[CategoricalAccuracyFactory(ignore_index=0)]
  • Added TopKCategoricalAccuracy metric (default for key: top_5_acc)
  • Added BinaryAccuracy metric (default for key: binary_acc)
  • Added MeanSquaredError metric (default for key: mse)
  • Added DefaultAccuracy metric (use with 'acc' or 'accuracy') - infers accuracy from the criterion
  • New Trial API (torchbearer.Trial) to replace the Model API. The Trial API is more atomic and uses the fluent pattern to allow method chaining.
  • torchbearer.Trial has with_x_generator and with_x_data methods to add training/validation/testing generators to the trial. There is a with_generators method to allow passing of all generators in one call.
  • torchbearer.Trial has for_x_steps and for_steps to allow running of trials without explicit generators or data tensors
  • torchbearer.Trial keeps a history of run calls which tracks the number of epochs run and the final metrics at each epoch. This allows seamless resuming of trial running.
  • torchbearer.Trial.state_dict now returns the trial history and callback list state allowing for full resuming of trials
  • torchbearer.Trial has a replay method that can replay training (with callbacks and display) from the history. This is useful when loading trials from state.
  • The backward call can now be passed args by setting state[torchbearer.BACKWARD_ARGS]
  • torchbearer.Trial implements the forward pass, loss calculation and backward call as an optimizer closure
  • Metrics are now explicitly calculated with no gradient
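The fluent pattern mentioned above can be sketched with a minimal stand-in Trial class (method names here are illustrative, not torchbearer's exact API): each configuration method returns self, so calls chain naturally, and each run call appends to a history that can later drive resume or replay.

```python
# Sketch of the fluent pattern (illustrative stand-in, not torchbearer's API):
# configuration methods return self to enable chaining, and run() records
# history so training can be resumed or replayed.
class Trial:
    def __init__(self, model):
        self.model = model
        self.train_data = None
        self.val_data = None
        self.history = []

    def with_train_data(self, data):
        self.train_data = data
        return self  # returning self is what makes the calls chainable

    def with_val_data(self, data):
        self.val_data = data
        return self

    def run(self, epochs=1):
        for epoch in range(epochs):
            self.history.append({"epoch": epoch})  # one record per epoch
        return self.history

history = Trial(model=None).with_train_data([1, 2]).with_val_data([3]).run(epochs=2)
print(len(history))  # 2
```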

Changed

  • Callback decorators can now be chained to allow construction with multiple methods filled
  • Callbacks can now implement state_dict and load_state_dict to allow callbacks to resume with state
  • The state dictionary now accepts StateKey objects, which are unique and generated through torchbearer.state.get_state
  • The state dictionary now warns when accessed with strings, since string keys allow collisions
  • Checkpointer callbacks will now resume from a state dict when resume=True in Trial
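The StateKey behaviour described above can be sketched in plain Python (a hypothetical illustration, not torchbearer's actual implementation): generated key objects are made unique so they cannot collide, while raw string access still works but emits a warning.

```python
import warnings

# Hypothetical sketch (not torchbearer's implementation): unique StateKey
# objects cannot collide, while string access emits a collision warning.
class StateKey:
    _count = 0

    def __init__(self, name):
        StateKey._count += 1
        self.name = f"{name}_{StateKey._count}"  # suffix keeps every key unique

class State(dict):
    def __getitem__(self, key):
        if isinstance(key, str):
            warnings.warn("string state keys may collide; use StateKey objects")
        return super().__getitem__(key)

LOSS = StateKey("loss")
state = State({LOSS: 0.25, "loss": 0.5})
print(state[LOSS])  # 0.25, no warning
```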

Deprecated

  • torchbearer.Model has been deprecated in favour of the new torchbearer.Trial api

Removed

  • Removed the MetricFactory class. Decorators still work in the same way but the Factory is no longer needed.

Version 0.1.7


[0.1.7] - 2018-08-14

Added

  • Added visdom logging support to tensorboard callbacks
  • Added option to choose tqdm module (tqdm, tqdm_notebook, ...) to Tqdm callback
  • Added some new decorators to simplify custom callbacks that must only run under certain conditions (or even just once).

Changed

  • Instantiation of Model will now trigger a warning pending the new Trial API in the next version
  • TensorboardX dependency now version 1.4

Fixed

  • Mean and standard deviation calculations now work correctly for network outputs with many dimensions
  • Callback list no longer shared between fit calls, now a new copy is made each fit

Version 0.1.6


[0.1.6] - 2018-08-10

Added

  • Added a verbose level (options are now 0, 1 and 2) which will print progress for the entire fit call, updating every epoch. Useful when doing dynamic programming with little data.
  • Added support for dictionary outputs of dataloader
  • Added abstract superclass for building TensorBoardX based callbacks

Changed

  • Timer callback can now also be used as a metric which allows display of specified timings to printers and has been moved to metrics.
  • The loss_criterion is renamed to criterion in torchbearer.Model arguments.
  • The criterion in torchbearer.Model is now optional and will provide a zero loss tensor if it is not given.
  • TensorBoard callbacks refactored to be based on a common super class
  • TensorBoard callbacks refactored to use a common SummaryWriter for each log directory

Fixed

  • Standard deviation calculation now returns 0 instead of complex value when given very close samples