Trying to recreate SGD network from frontiers bindsnet article #524
Replies: 2 comments
-
RealInput is not available anymore. According to the 2018 paper, it was "for simulating constant current injection with the RealInput object", meaning that the real-valued dataset provides a current injection (a constant pressure on the neuron), eventually making it spike.
You can still replicate this by constantly adding the voltage to the input layer neurons via the injects_v keyword of the model's run() function, so that they fire at a rate corresponding to the pixel brightness. You have to use IF or LIF neurons for the input layer as well.
I think the code snippet you're referring to was only a POC, nothing really useful. Ideally, if you're looking for gradient descent in an SNN design with BindsNET, you will probably have to compute your own continuous differentiable function to simulate the spikes occurring in each layer. For example, see: https://arxiv.org/pdf/1901.09948.pdf
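The current-injection idea can be sketched without BindsNET at all (injects_v itself would be passed to the model's run() function): a bare integrate-and-fire neuron driven by a constant input current fires at a rate proportional to that current, which is how a pixel brightness becomes a spike rate. A minimal toy sketch, with arbitrary threshold and step counts:

```python
def if_spike_count(current, thresh=1.0, steps=250):
    """Drive a toy integrate-and-fire neuron with a constant current
    and count the spikes it emits over `steps` time steps."""
    v, count = 0.0, 0
    for _ in range(steps):
        v += current          # constant current injection each step
        if v >= thresh:       # threshold crossing -> spike
            count += 1
            v = 0.0           # reset membrane potential on spike
    return count

# Brighter "pixels" (larger currents) yield proportionally higher rates.
rates = [if_spike_count(c) for c in (0.02, 0.04, 0.08)]
```

Doubling the injected current halves the number of steps needed to reach threshold, so the spike count doubles as well, which is the rate coding the reply describes.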
-
Hi Simon,
Thank you for your response. I am not necessarily interested in SGD but in a complete example of supervised training in BindsNet.
I will keep trying to build a simple SNN example in BindsNet.
Thanx
Dennis
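For the record, the surrogate-gradient approach Simon points to (keep the non-differentiable spike on the forward pass, substitute a smooth derivative on the backward pass) can be sketched in a few lines of PyTorch. The sigmoid surrogate and the steepness constant 5.0 are illustrative choices, not anything prescribed by BindsNET:

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike on the forward pass; sigmoid-derivative surrogate
    gradient on the backward pass."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()          # binary spike: 1 if above threshold

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        sig = torch.sigmoid(5.0 * v)    # 5.0 = arbitrary steepness choice
        return grad_output * 5.0 * sig * (1 - sig)

# Membrane potentials relative to threshold; gradients flow despite the step.
v = torch.tensor([-0.2, 0.1, 0.7], requires_grad=True)
s = SurrogateSpike.apply(v)
s.sum().backward()
```

With this in place, an SNN layer becomes end-to-end differentiable and ordinary SGD (or any torch optimizer) applies; this is the idea behind the paper linked above.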
From: Simon CABY
Sent: Monday, November 15, 2021 10:28 PM
Subject: Re: [BindsNET/bindsnet] Trying to recreate SGD network from frontiers bindsnet article (Discussion #524)
-
I am trying to recreate the network in Figure 5 of the 2018 Frontiers article titled "BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python" (doi: 10.3389/fninf.2018.00089).
I had to make a number of changes to allow it to run. The code is below. It runs, but it is not learning: the accuracy at each step in each epoch is exactly the same, even though the weights are updated.
Changes:
I have the following questions:
According to the paper, this network should reach an accuracy of about 85%, yet it only hits 10%.
Does anyone know what is wrong here?
Model:
import torch
from bindsnet.network import Network
from bindsnet.datasets import FashionMNIST
from bindsnet.network.monitors import Monitor
from bindsnet.network.topology import Connection
from bindsnet.network.nodes import Input, IFNodes

# Network building.
network = Network()
input_layer = Input(n=784, sum_input=True)
output_layer = IFNodes(n=10, sum_input=True)
network.add_layer(input_layer, name='X')
network.add_layer(output_layer, name='Y')
input_connection = Connection(input_layer, output_layer, norm=150, wmin=-1, wmax=1)
network.add_connection(input_connection, source='X', target='Y')

# State variable monitoring.
time = 25  # No. of simulation time steps per example.
for l in network.layers:
    m = Monitor(network.layers[l], state_vars=['s'], time=time)
    network.add_monitor(m, name=l)

# Load Fashion-MNIST data.
ds = FashionMNIST(root='.', download=True, train=True)
images = ds.train_data
labels = ds.train_labels

# Run training.
grads = {}
lr, lr_decay = 1e-2, 0.95
criterion = torch.nn.CrossEntropyLoss()
spike_ims, spike_axes, weights_im = None, None, None
for epoch in range(4):
    correct = 0
    for i, (image, label) in enumerate(zip(images.view(-1, 784) / 255, labels)):
        # Run simulation for a single datum.
        inpts = {'X': image.repeat(time, 1), 'Y_b': torch.ones(time, 1)}
        network.run(inputs=inpts, time=time)

        # Retrieve spikes and summed inputs from both layers.
        label = torch.tensor(label).long()
        spikes = {l: network.monitors[l].get('s') for l in network.layers}
        summed_inputs = {l: network.layers[l].summed for l in network.layers}

        # Compute softmax of output activity, get predicted label.
        output = spikes['Y'].sum(-1).float().softmax(0).view(1, -1)
        predicted = output.argmax(1).item()

        # Compute gradient of loss and do SGD update.
        grads['dl/df'] = summed_inputs['Y'].softmax(0)[0]
        grads['dl/df'][label] -= 1
        grads['dl/dw'] = torch.ger(summed_inputs['X'][0], grads['dl/df'])
        network.connections[('X', 'Y')].w -= lr * grads['dl/dw']
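The update rule at the bottom of the snippet is the standard softmax cross-entropy gradient: dl/df = softmax(f) - one_hot(label), and dl/dw is the outer product of the presynaptic summed input with dl/df (that is what torch.ger computes). A small NumPy check with toy sizes and made-up values, independent of BindsNET, confirms the analytic gradient against a finite difference:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.random(4)      # stand-in for summed_inputs['X'][0] (toy size)
f = rng.random(3)      # stand-in for the output layer's summed input (logits)
label = 1

# Analytic gradient: dl/df = softmax(f) - one_hot(label).
p = np.exp(f - f.max())
p /= p.sum()
dl_df = p.copy()
dl_df[label] -= 1.0

# dl/dw = outer(x, dl/df), matching the torch.ger call above.
dl_dw = np.outer(x, dl_df)

# Finite-difference check of dl/df against the cross-entropy loss.
def loss(fv):
    q = np.exp(fv - fv.max())
    q /= q.sum()
    return -np.log(q[label])

eps = 1e-6
num = np.array([
    (loss(f + eps * np.eye(3)[k]) - loss(f - eps * np.eye(3)[k])) / (2 * eps)
    for k in range(3)
])
```

If the analytic and numerical gradients agree, the SGD step itself is sound, which suggests the learning failure lies elsewhere (e.g. in what summed_inputs actually contains after the run).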