[Feature]: To adapt to the TTS task, I need to directly pass in the embedding. How should I modify it? #10323

Open
1nlplearner opened this issue Nov 14, 2024 · 5 comments

@1nlplearner

🚀 The feature, motivation and pitch

"To adapt to the TTS task, I need to perform embedding mapping outside the model and insert the speaker embedding, then pass the embedding with speaker information directly into the model instead of the token_id. How should I modify it?"

Alternatives

No response

Additional context

No response

Before submitting a new issue...

- [x] Make sure you already searched for relevant issues, and asked the chatbot living at the bottom right corner of the documentation page, which can answer lots of frequently asked questions.
@DarkLight1337 (Member)

Which model are you using?

@1nlplearner (Author)

> Which model are you using?

ChatTTS, and the LLM is Llama.

@DarkLight1337 (Member) commented Nov 15, 2024

I guess you are using vLLM only for the Llama model? You can pass --task embedding to use it as an embedding model and call LLM.encode to obtain the embeddings. Note that this is only supported on the latest commit of vLLM; we will publish a release in the next few days.
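For reference, a minimal sketch of that flow, assuming a vLLM build recent enough to support the embedding task (the model name is illustrative):

```python
from vllm import LLM

# Load Llama as an embedding model; equivalent to passing
# --task embedding on the CLI.
llm = LLM(model="meta-llama/Llama-3.1-8B", task="embedding")

# LLM.encode returns one EmbeddingRequestOutput per prompt.
outputs = llm.encode(["Hello, my name is"])
for output in outputs:
    # A flat list of floats with the model's hidden size.
    print(len(output.outputs.embedding))
```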

@1nlplearner (Author)

> I guess you are using vLLM only for the Llama model? You can pass --task embedding to use it as an embedding model and call LLM.encode to obtain the embeddings. Note that this is only supported on the latest commit of vLLM; we will publish a release in the next few days.

Thank you for your reply.
It seems that when I use --task embedding, a discriminative model is invoked internally and the model's input is still token ids. My model is generative: it needs a custom embedding as the prompt for autoregressive generation, and that embedding is merely the result of an nn.Embedding lookup, not the output of Llama. In short, since Llama can accept embeddings directly as input, I would like to replace token ids with embeddings entirely. However, the path from text to model_runner is somewhat complex, so I'm looking for some tips.
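To illustrate the pipeline being described, here is a plain-PyTorch sketch; the shapes, names, and insertion point are hypothetical stand-ins, not vLLM API:

```python
import torch
import torch.nn as nn

# Stand-in for the LLM's token embedding table (sizes are illustrative).
embed_tokens = nn.Embedding(num_embeddings=32000, embedding_dim=4096)

token_ids = torch.tensor([[1, 2345, 6789]])   # (batch, seq_len), dummy ids
text_embeds = embed_tokens(token_ids)         # (batch, seq_len, hidden)

# Speaker embedding produced outside the model, already projected to the
# same hidden size; a random placeholder here.
speaker_embed = torch.randn(1, 1, 4096)

# Splice the speaker embedding in after the BOS position. The result is
# what would be fed to Llama in place of token ids.
prompt_embeds = torch.cat(
    [text_embeds[:, :1], speaker_embed, text_embeds[:, 1:]], dim=1
)                                             # (batch, seq_len + 1, hidden)
```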

@DarkLight1337 (Member)

Oh, so you want to input embeddings directly into the model? You can try out #6869 (comment).
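For context, later vLLM releases did grow direct prompt-embedding input. A sketch under that assumption follows; the enable_prompt_embeds flag and the "prompt_embeds" prompt field are from vLLM versions newer than the one discussed here, so treat them as assumptions and defer to the linked #6869 comment for the approach that applied at the time:

```python
import torch
from vllm import LLM

# Assumption: a vLLM version that supports prompt-embedding inputs via
# enable_prompt_embeds and the "prompt_embeds" prompt field. Verify
# against your installed version before relying on this.
llm = LLM(model="meta-llama/Llama-3.1-8B", enable_prompt_embeds=True)

# A (seq_len, hidden_size) tensor built outside the model, e.g. the
# spliced speaker + text embeddings from the sketch above.
prompt_embeds = torch.randn(8, 4096, dtype=torch.bfloat16)

outputs = llm.generate([{"prompt_embeds": prompt_embeds}])
print(outputs[0].outputs[0].text)
```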
