🚀 The feature, motivation and pitch

To adapt the model to a TTS task, I need to perform the embedding mapping outside the model and insert a speaker embedding, then pass the embeddings (carrying the speaker information) directly into the model instead of token ids. How should I modify vLLM to do this?
Alternatives
No response
Additional context
No response
Before submitting a new issue...
Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
I guess you are using vLLM only for the Llama model? You can pass `--task embedding` to run it as an embedding model and call `LLM.encode` to obtain the embeddings. Note that this is only supported on the latest commit of vLLM; we will publish a release in the next few days.
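A minimal offline-inference sketch of the suggestion above, assuming a recent vLLM build that supports `task="embedding"`; the model name is a placeholder, and running this requires GPU hardware and downloaded weights:

```python
from vllm import LLM

# Load the model in embedding mode instead of generation mode
# (equivalent to passing --task embedding on the CLI).
llm = LLM(model="meta-llama/Llama-3.1-8B", task="embedding")

# encode() returns one result per prompt; each carries a pooled
# embedding vector rather than generated tokens.
outputs = llm.encode(["Hello, world"])
embedding = outputs[0].outputs.embedding  # list[float]
```

Note that this produces the model's *output* embeddings (pooled hidden states), which, as the follow-up below points out, is not the same as feeding custom *input* embeddings to a generative model.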
Thank you for your reply.
It seems that when I use `--task embedding`, a discriminative model is called internally, and the model's input is still token ids. My model is a generative model that requires a custom embedding as the prompt for autoregressive generation; the embedding is merely the result of an `nn.Embedding` lookup, not the output of the Llama model. In short, since Llama can accept embeddings directly as input, I would like to replace token ids with embeddings entirely. However, the path from text to the model_runner is somewhat complex, so I'm looking for some tips.
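The mapping-and-insertion step being described can be sketched outside any framework. This is an illustration only: the table sizes and token ids are made up, and the real speaker embedding would come from a speaker encoder rather than random noise:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes, for illustration only.
vocab_size, hidden_size = 32, 8

# Stand-in for the model's nn.Embedding weight matrix.
embedding_table = rng.normal(size=(vocab_size, hidden_size))

# Text prompt as token ids (placeholder values).
token_ids = np.array([3, 17, 5, 9])

# 1. Map token ids to embeddings outside the model
#    (what nn.Embedding does: a row lookup).
token_embeds = embedding_table[token_ids]  # shape (4, hidden_size)

# 2. Insert a speaker embedding at the front of the sequence.
speaker_embed = rng.normal(size=(1, hidden_size))
inputs_embeds = np.concatenate([speaker_embed, token_embeds], axis=0)

# inputs_embeds (shape (5, hidden_size)) is what would be fed to the
# model in place of token ids, e.g. via the `inputs_embeds` argument
# that Hugging Face Llama models accept.
print(inputs_embeds.shape)
```

The open question in this issue is where to thread such an `inputs_embeds` tensor through vLLM's text-to-model_runner pipeline, which normally assumes token ids end to end.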