Thanks for the great work! I wonder if there is any documentation or tutorial on how to access the underlying quantities (e.g., importance) and draw plots in a customized way (like Fig. 9 in the paper)?

The GUI is very friendly for specific case studies, but if I want to calculate the importance of attention heads over a whole dataset (e.g., a reasoning task), is there a simple class/interface for accessing the related quantities and computing them in batches? (A rough sketch of what I have in mind is at the end of this issue.)

I did notice, though, that there is an argument called "preloaded_dataset_filename":
llm-transparency-tool/llm_transparency_tool/server/app.py
Lines 91 to 94 in ee81984
I think this is just a convenience for loading multiple cases at once, as in sample_input.txt, rather than a batch-processing interface.
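To make the request concrete, below is roughly the kind of batch computation I mean. It is written directly against TransformerLens (which the tool wraps), not against the tool's own API, and it uses the norm of each head's residual-stream contribution only as a crude proxy for the paper's importance metric; the names and shapes are my assumptions, not something taken from this repo.

```python
# Rough sketch, NOT the tool's API: a TransformerLens-based proxy for
# per-head "importance" (mean norm of each head's contribution to the
# residual stream) over a small batch of prompts.
import torch
from transformer_lens import HookedTransformer

model = HookedTransformer.from_pretrained("gpt2")
prompts = [
    "If all birds can fly and a penguin is a bird, then",
    "The capital of France is",
]

tokens = model.to_tokens(prompts)        # [batch, pos]
_, cache = model.run_with_cache(tokens)

n_layers, n_heads = model.cfg.n_layers, model.cfg.n_heads
importance = torch.zeros(n_layers, n_heads)

for layer in range(n_layers):
    # z: per-head attention outputs before W_O, [batch, pos, head, d_head]
    z = cache["z", layer]
    # Project each head into the residual stream: [batch, pos, head, d_model]
    head_out = torch.einsum("bphd,hdm->bphm", z, model.W_O[layer])
    # Proxy importance: mean L2 norm of each head's residual contribution
    importance[layer] = head_out.norm(dim=-1).mean(dim=(0, 1)).cpu()

print(importance)  # [n_layers, n_heads], ready to plot as a heatmap like Fig. 9
```

If there is an officially supported way to get the actual contribution/importance values the GUI shows (so they can be aggregated over a dataset and plotted offline), a pointer to that class or module would already be very helpful.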