Relation between this repository and the original C++ implementation #104
Does this PyTorch implementation rely on the original C++ version, or is it an independent re-implementation in PyTorch?

Comments
Hi @zhangsen-hit, this implementation uses some of the accelerated components from the original C++ implementation. This version is more feature-rich. You might want to look at Lava-DL SLAYER for an even more feature-rich version.
Thank you very much! I have noticed that Lava-DL SLAYER provides more neuron models and other useful features, which is highly beneficial. Here I have another question. In both this repository and Lava-DL SLAYER, the input and output data types are tensors in the format [NCHWT] or [NCT], where 'T' represents the time dimension and is placed at the end. So, during forward propagation, is the computation performed layer by layer, rather than time step by time step? Can we obtain the result of any time step before the entire forward propagation is finished?
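As an illustration of the layout being asked about (a minimal sketch; the layer, kernel, and shapes are invented for this example and are not slayerPytorch's actual API): a spike tensor in [NCT] format passes through a temporal layer that produces the outputs for all T timesteps in a single call.

```python
import torch
import torch.nn.functional as F

# A spike tensor in [N, C, T] format: batch, channel, and time as the
# trailing dimension, as in slayerPytorch and Lava-DL SLAYER.
N, C, T = 8, 16, 100
spikes = (torch.rand(N, C, T) < 0.1).float()

# An illustrative causal temporal kernel (not the library's actual kernel).
L = 25
kernel = torch.rand(C, 1, L)

# One call produces the layer's output for all T timesteps at once:
# the computation proceeds layer by layer, not timestep by timestep.
out = F.conv1d(F.pad(spikes, (L - 1, 0)), kernel, groups=C)
print(out.shape)  # torch.Size([8, 16, 100])
```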
@zhangsen-hit slayerPytorch computes the output of each layer for all the timesteps at once. Lava-dl SLAYER allows for both options. By default, it calculates the outputs of all timesteps at once, but all neuron models also have an option to carry their state across calls, so the output can be computed one timestep at a time.
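A generic sketch of the two options described above (a hypothetical toy neuron, not lava-dl's API): the state either persists across single-step calls, or the whole [N, C, T] tensor is processed in one call.

```python
import torch

class ToyLIF:
    """Hypothetical leaky integrate-and-fire neuron, for illustration only."""

    def __init__(self, decay=0.9, threshold=1.0):
        self.decay = decay
        self.threshold = threshold
        self.voltage = None  # persists across calls when stepping

    def step(self, x_t):
        # x_t: [N, C] input for a single timestep; returns spikes for that step.
        if self.voltage is None:
            self.voltage = torch.zeros_like(x_t)
        self.voltage = self.decay * self.voltage + x_t
        spikes = (self.voltage >= self.threshold).float()
        self.voltage = self.voltage * (1.0 - spikes)  # reset on spike
        return spikes

    def forward(self, x):
        # x: [N, C, T]; computes every timestep in one call, so the full
        # output is available only after the loop finishes.
        return torch.stack([self.step(x[..., t]) for t in range(x.shape[-1])], dim=-1)

neuron = ToyLIF()
x = torch.rand(4, 8, 50)
all_steps = neuron.forward(x)         # option 1: all timesteps at once
neuron.voltage = None                 # clear state before restarting
first_step = neuron.step(x[..., 0])   # option 2: one timestep at a time
```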
I could not find the term you mentioned. Furthermore, upon reviewing the source code of lava-dl, I did not find the SRM neuron that is described in the original SLAYER paper and implemented in slayerPytorch. Could you please clarify why this neuron type was abandoned? Or was the name 'SRM' replaced with another term?
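For reference, the SRM in the SLAYER paper computes the membrane potential by convolving incoming spike trains with a spike response kernel and weighting the result. A minimal sketch follows (the kernel matches the form given in the paper, while tau, the shapes, and the weights are illustrative):

```python
import torch
import torch.nn.functional as F

def srm_kernel(tau=5.0, length=50):
    # Spike response kernel eps(t) = (t / tau) * exp(1 - t / tau) for t >= 0,
    # the form used in the SLAYER paper.
    t = torch.arange(length, dtype=torch.float32)
    return (t / tau) * torch.exp(1.0 - t / tau)

# Membrane potential u(t) = sum_i w_i * (eps * s_i)(t): each input spike
# train is convolved with the response kernel, then weighted and summed.
N, C_in, T = 1, 4, 100
spikes = (torch.rand(N, C_in, T) < 0.05).float()

eps = srm_kernel()
L = eps.shape[0]
# Flip the kernel so F.conv1d (a cross-correlation) realizes a causal convolution.
weight = eps.flip(0).view(1, 1, L).repeat(C_in, 1, 1)
psp = F.conv1d(F.pad(spikes, (L - 1, 0)), weight, groups=C_in)  # [N, C_in, T]

w = torch.rand(1, C_in)  # synaptic weights of one output neuron (illustrative)
u = torch.einsum('oc,nct->not', w, psp)  # membrane potential, [N, 1, T]
```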