
Relation between this repository and the original C++ implementation #104

Open

zhangsen-hit opened this issue May 11, 2023 · 4 comments
@zhangsen-hit

Does this PyTorch implementation depend on the original C++ version, or is it an independent re-implementation in PyTorch?

@bamsumit
Owner

Hi @zhangsen-hit, this implementation reuses some of the accelerated components from the original C++ implementation, and this version is more feature-rich. You might want to look at Lava-DL SLAYER for an even more feature-rich version.

@zhangsen-hit
Author

Thank you very much! I have noticed that Lava-DL SLAYER provides more neuron models and other useful features, which is very helpful.

I have another question. In both this repository and Lava-DL SLAYER, the input and output tensors use the format [NCHWT] or [NCT], where 'T' is the time dimension and is placed last. Does this mean that during forward propagation the computation is performed layer by layer, rather than time-step by time-step? Can we obtain the result of an intermediate time-step before the entire forward pass has finished?
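To make the layout concrete, here is a minimal sketch of the [NCT] convention (plain PyTorch; the shapes are purely illustrative and not taken from either repository):

```python
import torch

# [N, C, T] layout: batch N, channels C, time T as the trailing axis.
x = torch.rand(8, 128, 100)   # 8 samples, 128 channels, 100 time-steps

# A layer-wise forward pass consumes the full time axis in one call,
# so an individual time-step can only be sliced out afterwards:
x_t = x[..., 10:11]           # response at time-step 10, shape [8, 128, 1]
```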

@bamsumit
Owner

@zhangsen-hit slayerPyTorch computes the output of each layer for all time-steps at once. Lava-dl SLAYER allows both options: by default it computes the outputs for all time-steps at once, but all neuron models have a persistent_mode flag. When you set it, you can execute the entire network one time-step at a time if needed.

@zhangsen-hit
Author

I could not find the term persistent_mode in the source code, but I did find persistent_state. Are you referring to persistent_state?
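For reference, here is how I imagine step-by-step execution with persistent_state would look (a sketch based on my reading of the lava-dl tutorials; the exact block API and parameter keys are assumptions on my part and may differ across versions):

```python
import torch
import lava.lib.dl.slayer as slayer

# Assumed CUBA neuron parameters; keys and values are illustrative only.
neuron_params = {
    'threshold': 1.25,
    'current_decay': 0.25,
    'voltage_decay': 0.03,
    'persistent_state': True,  # keep neuron state across forward calls
}
block = slayer.block.cuba.Dense(neuron_params, 200, 100)

x = torch.rand(1, 200, 16)  # [N, C, T] input with 16 time-steps

# Feed one time-step at a time; with persistent_state=True the membrane
# state should carry over between calls, mimicking online execution.
outputs = [block(x[..., t:t + 1]) for t in range(x.shape[-1])]
y = torch.cat(outputs, dim=-1)  # [1, 100, 16]
```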

Furthermore, upon reviewing the source code of lava-dl, I did not find the SRM neuron described in the original SLAYER paper and implemented in slayerPytorch. Could you please clarify why this neuron type was dropped? Or was the name 'SRM' replaced with another term?
