Best practices for training latency-coded SNNs on TSC task with supervised STDP #475
ElisaNguyen started this conversation in General
Replies: 1 comment 4 replies
-
Thanks Elisa for describing your task. Searching the hyperparameters is never fun and never a short task.
Let me know how it goes.
-
Dear All,
I am currently working on training a latency-coded SNN (i.e., classification by the earliest output spike) on a time-series classification task with supervised STDP (voltage clamp). Since there are several hyperparameters involved, I was wondering whether you have any best practices to share for this case.
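For readers unfamiliar with the learning rule mentioned above, a minimal pair-based STDP update can be sketched as follows. All parameter values (`a_plus`, `a_minus`, `tau`, the weight bounds) are illustrative assumptions, not values from this thread; in the supervised voltage-clamp setting, the post-synaptic spike time would come from the clamped teacher neuron.

```python
import numpy as np

def stdp_update(w, pre_t, post_t, a_plus=0.01, a_minus=0.012,
                tau=20.0, w_min=0.0, w_max=1.0):
    """Pair-based STDP weight update for one pre/post spike pair.

    Parameter values are illustrative guesses. If the pre-synaptic
    spike precedes the post-synaptic one, the weight is potentiated;
    otherwise it is depressed. The result is clipped to [w_min, w_max].
    """
    dt = post_t - pre_t
    if dt >= 0:   # pre before post: potentiate
        dw = a_plus * np.exp(-dt / tau)
    else:         # post before pre: depress
        dw = -a_minus * np.exp(dt / tau)
    return float(np.clip(w + dw, w_min, w_max))

# Causal pair (pre at 10 ms, teacher-forced post at 15 ms): weight grows.
w = stdp_update(0.5, pre_t=10.0, post_t=15.0)
```

The hard weight bounds `w_min`/`w_max` here correspond to the weight boundaries asked about later in the post.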
Currently, I am working with the UCI Binary ADL dataset, where I have transformed the data to binary spike trains with a timestep of 1 second.
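The binarization described above could look roughly like the sketch below: sensor event timestamps are binned into a binary spike train with one time step per second. This is only an assumed reading of the preprocessing; the actual pipeline may differ.

```python
import numpy as np

def bin_events(event_times, duration, dt=1.0):
    """Convert sensor event timestamps (in seconds) into a binary
    spike train with one time step per `dt` seconds.

    Sketch only; the actual preprocessing in this thread may differ.
    """
    n_steps = int(np.ceil(duration / dt))
    train = np.zeros(n_steps, dtype=np.uint8)
    idx = np.floor(np.asarray(event_times) / dt).astype(int)
    train[idx[idx < n_steps]] = 1  # a bin with >= 1 event emits a spike
    return train

# Events at 0.2 s, 0.7 s and 3.5 s over a 5 s recording:
spikes = bin_events([0.2, 0.7, 3.5], duration=5.0, dt=1.0)
# bins 0 and 3 contain spikes
```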
This is the configuration of my network right now:
I am feeding the data to the network in time windows of 1 second, without resetting the state variables in between (i.e., continuous prediction). Due to the size of the dataset, training takes quite long, which makes trying out different hyperparameters expensive. As I am quite inexperienced, I wanted to ask if you have any recommendations for learning rates, weight decay, weight boundaries, etc.
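For anyone facing the same search, one cheap approach is random search over the hyperparameters named above, evaluated first on a small subset of the data. The sampling ranges below are purely illustrative assumptions, not recommendations from this thread.

```python
import random

def sample_config(rng):
    """Randomly sample one hyperparameter configuration.

    All ranges are illustrative guesses, not values from the thread:
    log-uniform sampling for learning rate and weight decay, and a
    fixed lower weight bound with a sampled upper bound.
    """
    return {
        "lr": 10 ** rng.uniform(-4, -1),             # in [1e-4, 1e-1]
        "weight_decay": 10 ** rng.uniform(-6, -3),   # in [1e-6, 1e-3]
        "w_min": 0.0,                                # lower weight bound
        "w_max": rng.uniform(0.5, 2.0),              # upper weight bound
    }

rng = random.Random(0)  # fixed seed for reproducibility
configs = [sample_config(rng) for _ in range(5)]
# Evaluate each config on a small training subset first,
# then retrain only the best few on the full dataset.
```

Screening configurations on a subset before committing to full training runs is a common way to keep the wall-clock cost of the search manageable.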
Thank you very much!
Elisa