Hi,
Thank you for your excellent work!
I'm running into some issues loading my fine-tuned GPT2-XL model locally. I referred to #28, but I'm still unclear on how to configure the `local.json` file properly.

For the GPT2-XL model, I understand the JSON file should contain an entry like `"gpt2-xl": null`. How can I modify this entry so that it points to my local model? My current guess is sketched below.
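For concreteness, this is roughly what I imagine the modified entry might look like; the path is just a placeholder for wherever my fine-tuned checkpoint is saved, and I'm not sure whether a plain path string is the expected value:

```json
{
    "gpt2-xl": "/path/to/my-finetuned-gpt2-xl"
}
```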
#28 also mentions passing two strings, `model_name` and `supported_model_name`. Could you clarify where I should pass these strings?

Thank you for your help!