It works for me, but there is an issue with conservation. When setting zero_params="bias" (for SafeEpsilon and SafeGamma), conservation holds throughout the network up to the first block, but breaks at the input layer. As we discussed in person, this is most likely because the classifier token absorbs some relevance that never reaches the input.
Also, just FYI, the implementation suffers from the issue described in #148: a combination of low gamma, low epsilon, and zero_params="bias" leads to unstable/exploding relevance values.
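For reference, both effects can be reproduced outside the full ViT with a hand-rolled LRP-epsilon rule on a single linear layer. This is a minimal NumPy sketch, not the repo's SafeEpsilon implementation: `zero_bias` here stands in for what zero_params="bias" presumably does (treat the bias as zero inside the rule), and the second snippet shows how a near-zero pre-activation combined with a tiny epsilon blows up the input relevances, as in #148.

```python
import numpy as np

def lrp_epsilon(x, W, b, R_out, eps=1e-6, zero_bias=False):
    # Pre-activations; with zero_bias the bias is treated as zero
    # inside the rule (analogous to zero_params="bias").
    z = x @ W + (0.0 if zero_bias else b)
    z = z + eps * np.sign(z)   # epsilon stabiliser in the denominator
    s = R_out / z              # relevance per unit of pre-activation
    return x * (W @ s)         # redistribute relevance to the inputs

rng = np.random.default_rng(0)
x = rng.normal(size=4)
W = rng.normal(size=(4, 3))
b = rng.normal(size=3)
R_out = rng.normal(size=3)

# With the bias zeroed, the denominator is exactly the sum of the
# input contributions, so relevance is conserved (up to eps).
R_in = lrp_epsilon(x, W, b, R_out, zero_bias=True)
print(R_in.sum(), R_out.sum())  # nearly equal

# But when a pre-activation is close to zero and eps is small, the
# factor R_out / z explodes -- the instability described in #148.
x_bad = np.array([1.0, -1.0])
W_bad = np.array([[1.0], [1.0 - 1e-8]])  # z ~ 1e-8
R_bad = lrp_epsilon(x_bad, W_bad, np.zeros(1), np.ones(1),
                    eps=1e-12, zero_bias=True)
print(np.abs(R_bad).max())  # huge, even though the sum is still conserved
```

The same reasoning suggests why a larger epsilon (or gamma) masks the problem: it keeps the denominator bounded away from zero at the cost of absorbing some relevance.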
Hi Christopher,
Here is my prototype of LRP+ViT: https://colab.research.google.com/drive/1EWjPV3FiDZIAp0gZCyTjdB9OnQD8-m8W?usp=sharing