I see that @xenova has successfully exported Janus-1.3B and Janus-Pro-1B to ONNX, presumably using some version of scripts/convert.py. We are interested in exporting Janus-Pro-7B to ONNX as well, but have not been able to do so with this script (or by any other route). Attempting to convert either of the two smaller models runs into the same errors, so hopefully whatever steps were taken to convert those will also work for the 7B version.
The initial error was:
ValueError: The checkpoint you are trying to load has model type `multi_modality` but Transformers does not recognize this architecture. This could be because of an issue with the checkpoint, or because your version of Transformers is out of date.

This was fixed by installing https://github.com/deepseek-ai/Janus and adding

from janus.models import MultiModalityCausalLM

to convert.py.
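For anyone hitting the same thing, the change amounts to something like the snippet below. This is only a sketch: I'm assuming that importing the Janus model class is what registers the `multi_modality` architecture with the Transformers Auto classes, and the sanity check at the end is my own addition.

```python
# Sketch of the workaround in scripts/convert.py.
# Assumes the Janus repo has been installed, e.g.:
#   pip install git+https://github.com/deepseek-ai/Janus.git
# Importing the model class appears to register the `multi_modality` model type
# with Transformers, which resolves the ValueError above.
from janus.models import MultiModalityCausalLM  # noqa: F401  (imported for its side effects)

from transformers import AutoConfig

# Optional sanity check that the architecture is now recognized:
config = AutoConfig.from_pretrained("deepseek-ai/Janus-Pro-7B")
print(config.model_type)  # should print "multi_modality" if registration worked
```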
The error that I'm now stuck at is:

KeyError: "Unknown task: any-to-any. Possible values are: `audio-classification` for AutoModelForAudioClassification, `audio-frame-classification` for AutoModelForAudioFrameClassification, `audio-xvector` for AutoModelForAudioXVector, `automatic-speech-recognition` for ('AutoModelForSpeechSeq2Seq', 'AutoModelForCTC'), `depth-estimation` for AutoModelForDepthEstimation, `feature-extraction` for AutoModel, `fill-mask` for AutoModelForMaskedLM, `image-classification` for AutoModelForImageClassification, `image-segmentation` for ('AutoModelForImageSegmentation', 'AutoModelForSemanticSegmentation'), `image-to-image` for AutoModelForImageToImage, `image-to-text` for AutoModelForVision2Seq, `mask-generation` for AutoModel, `masked-im` for AutoModelForMaskedImageModeling, `multiple-choice` for AutoModelForMultipleChoice, `object-detection` for AutoModelForObjectDetection, `question-answering` for AutoModelForQuestionAnswering, `semantic-segmentation` for AutoModelForSemanticSegmentation, `text-to-audio` for ('AutoModelForTextToSpectrogram', 'AutoModelForTextToWaveform'), `text-generation` for AutoModelForCausalLM, `text2text-generation` for AutoModelForSeq2SeqLM, `text-classification` for AutoModelForSequenceClassification, `token-classification` for AutoModelForTokenClassification, `zero-shot-image-classification` for AutoModelForZeroShotImageClassification, `zero-shot-object-detection` for AutoModelForZeroShotObjectDetection"
I can't find anything about optimum supporting this task, so it is unclear to me how @xenova was able to get around this.
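In case the answer is that optimum's task machinery was bypassed entirely, I'd also be happy with pointers toward a manual per-module export. The kind of thing I have in mind is sketched below; the attribute names and dummy input shape are my guesses from reading the Janus code, not something I've verified:

```python
# Hypothetical fallback: export submodules one at a time with torch.onnx.export,
# sidestepping optimum's task registry ("any-to-any" not being a supported task).
import torch
from janus.models import MultiModalityCausalLM

model = MultiModalityCausalLM.from_pretrained("deepseek-ai/Janus-Pro-7B")
model.eval()

# Example: the vision encoder only. The language model, aligners, and generation
# heads would each need their own export with suitable dummy inputs.
dummy_pixel_values = torch.randn(1, 3, 384, 384)  # 384x384 input size is an assumption
torch.onnx.export(
    model.vision_model,          # attribute name is a guess at the Janus module layout
    (dummy_pixel_values,),
    "vision_encoder.onnx",
    input_names=["pixel_values"],
    output_names=["image_features"],
    dynamic_axes={"pixel_values": {0: "batch_size"}},
    opset_version=17,
)
```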
Any insight or assistance would be greatly appreciated.