Thanks for the great work.
When I fine-tune the tiny model on my dataset, it raises:
ValueError: DeepseekVLV2ForCausalLM does not support Flash Attention 2.0 yet. Please request to add support where the model is hosted, on its model hub page: https://huggingface.co/deepseek-ai/deepseek-vl2-tiny/discussions/new or in the Transformers GitHub repo: https://github.com/huggingface/transformers/issues/new
Are there any fine-tuning tutorials available?
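
One possible workaround, as a minimal sketch: this error is raised when the model is loaded with `attn_implementation="flash_attention_2"` (often set by a training script or config), so explicitly requesting a supported attention backend at load time should avoid it. The `attn_implementation` kwarg is the generic transformers option, nothing DeepSeek-specific, and the loading style below is an assumption, not a confirmed fine-tuning recipe:

```python
# Sketch of a workaround: fall back to an attention backend the model
# class supports instead of Flash Attention 2. "eager" and "sdpa" are
# the standard transformers values for attn_implementation.
from transformers import AutoModelForCausalLM

model_path = "deepseek-ai/deepseek-vl2-tiny"

model = AutoModelForCausalLM.from_pretrained(
    model_path,
    trust_remote_code=True,        # DeepseekVLV2ForCausalLM is a custom model class
    attn_implementation="eager",   # or "sdpa"; avoids the unsupported flash_attention_2 path
)
```

If your training script exposes a flash-attention flag instead of calling `from_pretrained` directly, disabling that flag should have the same effect.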