blockformer #1504
base: main
Conversation
I think it's better if we add the experiment results on AISHELL-1 and LibriSpeech, to show that we can get a consistent and solid gain by using this model.
def forward(self, x: torch.Tensor) -> torch.Tensor:
    b, c, _, _ = x.size()
    y = self.avg_pool(x).view(b, c)
Shouldn't the avg_pool over the T and D dims take the pad_mask into account?
Thanks for the reminder; we will add the pad_mask to the code and retrain.
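As a minimal sketch of what the masked pooling could look like, assuming x is (B, C, T, D) and pad_mask is a (B, T) boolean tensor with True on valid frames (these names and shapes are assumptions for illustration, not the PR's actual code):

    import torch

    def masked_avg_pool(x: torch.Tensor, pad_mask: torch.Tensor) -> torch.Tensor:
        # x: (B, C, T, D); pad_mask: (B, T), True for non-padded frames (assumed shapes).
        mask = pad_mask[:, None, :, None].to(x.dtype)  # broadcastable to x: (B, 1, T, 1)
        summed = (x * mask).sum(dim=(2, 3))            # zero out padded frames, then sum: (B, C)
        count = mask.sum(dim=(2, 3)) * x.size(3)       # number of valid (t, d) elements: (B, 1)
        return summed / count.clamp(min=1)             # masked mean, safe against all-padded rows

Compared with self.avg_pool(x), this excludes padded frames from both the sum and the divisor, so the channel statistics are not diluted by padding.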
@robin1001 Results on AISHELL-1 have been added.
Hi, I ran Blockformer on a 3080 and it only used 30%-40% of the GPU. I increased the batch size and num_workers, but it didn't help. What should I do to make better use of the GPU?
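For what it's worth, a minimal sketch of the usual DataLoader-side knobs when the GPU sits idle waiting for data (generic PyTorch names, not WeNet's actual config keys; the dataset here is a dummy stand-in):

    import torch
    from torch.utils.data import DataLoader, TensorDataset

    # Dummy dataset standing in for the real training set; shapes are illustrative.
    dataset = TensorDataset(torch.randn(1000, 80), torch.randint(0, 10, (1000,)))

    loader = DataLoader(
        dataset,
        batch_size=32,            # raise until GPU memory becomes the limit
        num_workers=8,            # scale with CPU cores; data loading is often the bottleneck
        pin_memory=True,          # faster host-to-device copies
        prefetch_factor=4,        # batches prefetched per worker (needs num_workers > 0)
        persistent_workers=True,  # avoid re-spawning workers every epoch
    )

If utilization is still low after this, the bottleneck may be on-the-fly feature extraction or disk I/O rather than the loader settings.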
This PR is about the implementation of Blockformer in WeNet.
(Original paper: https://arxiv.org/abs/2207.11697)
In the main branch, extracting features with torchaudio gives slightly worse results than the paper. I will push a branch that uses Kaldi features for the AISHELL recipe, which can reproduce the results in the paper.
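As a rough sketch of Kaldi-compatible fbank extraction through torchaudio's compliance API (the file name and parameters such as dither=0.1 are assumptions for illustration, not necessarily the recipe's settings):

    import torchaudio
    import torchaudio.compliance.kaldi as kaldi

    # kaldi.fbank follows Kaldi's 16-bit PCM sample scale, hence the * (1 << 15).
    waveform, sample_rate = torchaudio.load("example.wav")
    waveform = waveform * (1 << 15)

    # 80-dim Kaldi-style fbank; dither and energy handling are common sources
    # of small mismatches between torchaudio and a real Kaldi pipeline.
    feats = kaldi.fbank(
        waveform,
        num_mel_bins=80,
        frame_length=25.0,
        frame_shift=10.0,
        dither=0.1,
        energy_floor=0.0,
        sample_frequency=sample_rate,
    )
    print(feats.shape)  # (num_frames, 80)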