In the paper, the layer is written as sin(\omega_0 W x + b).

But in the implementation (the explore_siren notebook as well as modules.py), the output of the linear layer is multiplied by \omega_0, i.e. sin(\omega_0 (W x + b)).

I find this difference drastically changes the network's convergence behavior: the network as actually implemented performs much better.

Could you clarify this issue?
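To make the difference concrete, here is a minimal numpy sketch of the two formulations (function and variable names are mine, not from the SIREN codebase). The only difference is whether \omega_0 also scales the bias:

```python
import numpy as np

def siren_layer_paper(x, W, b, omega0=30.0):
    # Formulation as written in the paper: sin(omega0 * W x + b).
    # omega0 scales only the pre-activation Wx; the bias is unscaled.
    return np.sin(omega0 * (W @ x) + b)

def siren_layer_impl(x, W, b, omega0=30.0):
    # Formulation as implemented: sin(omega0 * (W x + b)).
    # omega0 scales the whole affine output, bias included.
    return np.sin(omega0 * ((W @ x) + b))
```

The two coincide only when b = 0 (or \omega_0 = 1); for a nonzero bias the implemented variant effectively uses a bias \omega_0 b, which plausibly accounts for the different convergence behavior.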