Hi! Very nice and clean implementation. The paper states that the model preserves fast inference with constant memory, but I don't quite see how that is possible given the attention and local convolution.
Would it be difficult to add an example?
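
For concreteness, here is roughly the kind of mechanism I imagine (this is just my own sketch, not code from this repo): if the attention admits a kernelized/recurrent form, decoding can keep a fixed-size running state instead of a growing KV cache, and a local (depthwise, causal) convolution only needs a rolling buffer of the last `kernel_size` inputs. The class name `ConstantMemoryDecoder`, the ELU+1 feature map, and all shapes below are placeholders/assumptions, not the actual API.

```python
import torch
import torch.nn.functional as F


class ConstantMemoryDecoder:
    """Hypothetical per-token decoder whose memory does not grow with sequence length."""

    def __init__(self, dim: int, kernel_size: int):
        self.dim = dim
        self.kernel_size = kernel_size
        # Linear-attention state: outer-product accumulator and normalizer.
        self.S = torch.zeros(dim, dim)
        self.z = torch.zeros(dim)
        # Rolling buffer holding only the last `kernel_size` inputs for the local conv.
        self.conv_buf = torch.zeros(kernel_size, dim)
        # Depthwise conv weights (placeholder initialization).
        self.conv_w = torch.randn(kernel_size, dim) / kernel_size

    def step(self, x, q, k, v):
        """One decoding step; only S, z and conv_buf are carried between steps."""
        # Local convolution: shift the fixed-size buffer, append the new input, apply the kernel.
        self.conv_buf = torch.roll(self.conv_buf, shifts=-1, dims=0)
        self.conv_buf[-1] = x
        conv_out = (self.conv_buf * self.conv_w).sum(dim=0)  # (dim,)

        # Linear attention: update running sums instead of storing all past keys/values.
        phi_k = F.elu(k) + 1  # one common positive feature map (assumption)
        phi_q = F.elu(q) + 1
        self.S = self.S + torch.outer(phi_k, v)  # accumulates sum_t phi(k_t) v_t^T
        self.z = self.z + phi_k                  # accumulates sum_t phi(k_t)
        attn_out = (phi_q @ self.S) / (phi_q @ self.z + 1e-6)

        return conv_out + attn_out


# No matter how long the loop runs, the decoder only stores S, z and conv_buf.
dec = ConstantMemoryDecoder(dim=64, kernel_size=4)
for _ in range(1000):
    x = torch.randn(64)
    y = dec.step(x, q=torch.randn(64), k=torch.randn(64), v=torch.randn(64))
```

Is this close to what the paper means, or does the actual model achieve constant memory differently? An example along these lines in the README would help a lot.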