
Releases: BindsNET/bindsnet

Multicompartment Connection

18 Oct 17:36
1b97861

What's New

In this version, along with our usual performance improvements (thanks to @C-Earl and @SimonInParis!), we've introduced the MulticompartmentConnection class for more complex connections. This allows spikes to pass through different features like weights, biases, and boolean masks in a specified order. You can create custom pipelines with these features, and they’ll execute in the order you define. For example, you can use a pipeline that first multiplies spikes by weights, then adds a bias. You can even add more features before, after, or between these steps!
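Below is a minimal sketch of how such a pipeline might be set up. The feature class names (Weight, Bias), the topology_features module path, and the constructor arguments shown here are assumptions based on the description above; consult the MulticompartmentConnection documentation for the authoritative API.

```python
# Hypothetical sketch of a MulticompartmentConnection pipeline.
# Class names, module paths, and arguments are assumptions; see the
# BindsNET documentation for the exact API.
import torch

from bindsnet.network import Network
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import MulticompartmentConnection
from bindsnet.network.topology_features import Bias, Weight

network = Network()
source = Input(n=100)
target = LIFNodes(n=10)
network.add_layer(source, name="X")
network.add_layer(target, name="Y")

# Features run in list order: incoming spikes are first multiplied by the
# weights, then the bias is added. More features could be placed before,
# after, or between these two steps.
connection = MulticompartmentConnection(
    source=source,
    target=target,
    device="cpu",
    pipeline=[
        Weight(name="weight", value=0.1 * torch.rand(100, 10)),
        Bias(name="bias", value=0.05 * torch.ones(100, 10)),
    ],
)
network.add_connection(connection, source="X", target="Y")
```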

For more details, check out our documentation.

General Updates:

  • Added MulticompartmentConnection #695 by @C-Earl
  • Various package updates and reformatting #542
  • Python updated to 3.11 #550
  • PyTorch updated to 2.0 #630
  • Maintenance updates, dependency bumps, and more! #593, #602

Bug Fixes:

  • Fixed an issue with net.run for unconnected layers #547
  • Corrected plotting aspect ratios #651
  • Fixed small bugs in environment_pipeline #579
  • Addressed typo errors #661

Performance Enhancements:

  • Added vector learning rates for all UpdateRules #552
  • CUDA updates and performance fixes #655

Code Refactoring:

  • Made the code more "Pythonic" #570
  • Removed debug print statements #566

Examples & Documentation:

  • Updated MNIST examples #557
  • Addressed issues with Read the Docs #642

Dependency Updates:

  • Bumped various dependencies, including jupyter-server, pillow, requests, and more.

New Contributors

A warm welcome to our new contributors:

Full Changelog: Compare changes from 0.3.1 to 0.3.3


Enjoy the new features, and as always, thanks for your contributions!

0.3.1

13 Feb 22:57
821fbc9

This release summarizes the changes and improvements in 0.3.1.

  1. Fixed the WeightDependentPostPre post-synaptic update #534
  2. Added Conv1D and Conv3D connections and improved Conv2D #526
  3. Fixed the MaxPool batch size issue #526
  4. Added three local connection classes (1D, 2D, and 3D) supporting multi-channel inputs, along with MNIST example files #536
  5. Changed the execution order in the Izhikevich neuron and added Vmem injection in the network forward pass #530
  6. Improved the installation scripts with Poetry #518, #520
  7. Code improvements #515
  8. Documentation improvements #517, #532

Thanks to everyone involved with this release!
@danielgafni, @ArefAz, @hafezgh, @amirHossein-Ebrahimi

0.3.0

20 Sep 01:16
3c9b9ba

This release summarizes the changes and improvements in 0.3.0.

Changes:

  1. New environment for RL experiments: dot tracing #507
  2. Improved encoding performance #484
  3. Improved node trace values and the network input spike format assertion #501
  4. Added the ability to use tensors for wmin/wmax on synaptic connections #509
  5. Fixed issues with the demos:
    • Fixed an issue with accuracy reporting #482
    • Fixed dimension issues with layers of different shapes #488
    • Fixed a dimension size issue in the Breakout baseline network #489
    • Improved the reservoir documentation #492
    • Fixed typos #501, #492
  6. Updated to PyTorch 1.9 #499
  7. Switched to Poetry for installation (#513, #517)
  8. Added isort and autoflake to the commit workflow #518

Thanks to everyone involved with this release!
@het-25 @mahbodnr @petermarathas @cearlUmass @kamue1a @SimonInParis @danielgafni

0.2.9

26 Apr 00:41
ead5521

This release summarizes the changes and improvements in 0.2.9.

Changes:

  1. Performance optimization of the Monitors object (#446)
  2. Optimized variables in connection and neuron objects (#428)
  3. Performance improvements to the PostPre update rule and BoostedLIF (#429)
  4. Implemented Cumulative Spike Response Model nodes (#443)
  5. Imported code from a sister project (#438)
  6. Updated to PyTorch 1.8.1 (#477, #478)
  7. Fixed miscellaneous issues with the BindsNET examples (#437, #457, #458, #474, #478)

Thanks to all the contributors!

0.2.8

25 Oct 15:40
e69b975

This release summarizes the changes and improvements in 0.2.8.

Changes:

  1. Runtime optimization of core functions (#384).
  2. Installation scripts: added Python 3.8 and PyTorch 1.6 support (#392, #400, #404).
  3. Examples: improved code readability, graphs, and reproducibility (#386, #387, #396, #411).
  4. When using the GPU, some variables (Gym, reward STDP, and graph related) accidentally stayed on the CPU; they are now moved to the GPU (#388, #403, #406, #409, #412, #420).
  5. More flexibility when building a network: added the ability to build a Network without a designated input layer (#416). Every layer can now receive external input via voltage injection or spike injection, as sketched below.
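As a rough illustration of the new input-injection path, the sketch below drives a layer that is not a designated Input layer. The injects_v keyword of Network.run and the broadcastable scalar value are assumptions based on later BindsNET versions, not necessarily the exact interface introduced in #416.

```python
# Illustrative sketch: driving a layer without a designated Input layer.
# The injects_v keyword and the broadcastable scalar value are assumptions;
# see PR #416 for the actual interface.
import torch

from bindsnet.network import Network
from bindsnet.network.nodes import LIFNodes

network = Network()
network.add_layer(LIFNodes(n=50), name="L")

# Inject a small constant voltage into layer "L" at every timestep.
network.run(inputs={}, time=100, injects_v={"L": torch.tensor(0.5)})
```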

We know we have some open issues; feel free to lend a hand.

Improve performance, stability and examples

10 Jul 23:47
70d9160

This release emphasizes performance enhancements, reorganized examples, and several bug fixes.

Frontiers in Neuroinformatics release with minor fix (0.2.1)

28 Oct 12:46
da730bc

This release accompanies our draft submission to Frontiers in Neuroinformatics. It features a number of bug fixes and example scripts used in drafting the paper.

Refactoring code and cleaning up

12 Jun 00:43

This small release features:

  • A current-based leaky integrate-and-fire neuron model (CurrentLIFNodes)
  • Lots of code refactoring to conform (a little bit closer) to PEP standards
  • Making things look and read nicer

Open-sourcing BindsNET

05 Jun 11:45

Notes

After a few missteps in the PyPI distribution process, we are proud to announce the release of BindsNET v0.1! We will likely follow up with a series of incremental releases (v0.1.x) to address bugs found by users or to add small-scale features that we may have missed.

Features

This release features the core network functionality of the package, which enables the construction and simulation of spiking neural networks (SNNs). The Network object may be composed of any number of Nodes, Connections, and/or Monitors, of which there are several varieties. Learning on Connection objects is implemented by specifying functions from the learning module. Popular machine learning (ML) datasets may be loaded using the datasets module, and the data can be converted into spike trains (like any other numerical data) with the encoding module.
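For orientation, here is a minimal sketch of that workflow. Layer sizes, learning parameters, and the Poisson-encoded random input are arbitrary choices for illustration, and exact argument names may differ slightly across BindsNET versions.

```python
# Minimal sketch of the core workflow: build a Network from Nodes,
# Connections, and Monitors, encode data as spikes, and simulate.
# Sizes and parameters are arbitrary; argument names may vary by version.
import torch

from bindsnet.encoding import poisson
from bindsnet.learning import PostPre
from bindsnet.network import Network
from bindsnet.network.monitors import Monitor
from bindsnet.network.nodes import Input, LIFNodes
from bindsnet.network.topology import Connection

time = 250  # number of simulation timesteps

network = Network()
network.add_layer(Input(n=784), name="X")
network.add_layer(LIFNodes(n=100), name="Y")
network.add_connection(
    Connection(
        source=network.layers["X"],
        target=network.layers["Y"],
        update_rule=PostPre,  # STDP-like learning rule from the learning module
        nu=(1e-4, 1e-2),
    ),
    source="X",
    target="Y",
)
network.add_monitor(
    Monitor(network.layers["Y"], state_vars=["s"], time=time), name="Y_spikes"
)

# Encode arbitrary numerical data as a Poisson spike train and simulate.
datum = 128 * torch.rand(784)  # per-input firing intensities
spikes = poisson(datum, time=time)
network.run(inputs={"X": spikes}, time=time)

spike_record = network.monitors["Y_spikes"].get("s")
```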

An interface to the OpenAI Gym reinforcement learning (RL) library is implemented in the environments module, allowing, for the first time, easy experimentation with SNNs on RL problems.

To eliminate messy implementation details, a Pipeline object is provided (in the pipeline module) that handles the entire interaction between a spiking neural network and a dataset or environment. This saves users from having to write long scripts to run experiments on supported datasets or RL environments.

Plotting functionality is available in the analysis.plotting and analysis.visualization modules. The former is typically used for plotting "online" during simulation, and the latter, "offline", for studying long-term network behavior or making figures.

Other modules exist in a developmental or low-user / low-priority state.

Future work?

This depends largely on the users and in particular the needs of the BINDS lab. Some things we would personally like to see include:

  • Tighter integration with PyTorch. This likely means using more functionality from the torch.nn.functional module (e.g., convolution, pooling, activation functions, etc.) or conforming our network API to torch's neural network API.
  • Automatic conversion to SNNs: recent work has shown that it's possible to convert trained deep learning NNs to SNNs without much loss in accuracy. Conversion of PyTorch models or models specified in the ONNX format may be supported in BindsNET in the future!
  • More features! Nodes (neuron) types, Connection types, Datasets, learning functions, and more. In particular, we want to take steps towards making SNNs robust for ML / RL.

Cheers,
@djsaunde