
Would it be possible to use VOSK instead of wav2vec in order to force alignment? #463

Open
BlueNebulaDev opened this issue Sep 6, 2023 · 1 comment

@BlueNebulaDev

I've tried using whisperX to get accurate timestamps for some speech. It's definitely a big improvement over Whisper's own output, but it's still far from ideal.

Before trying Whisper, I had been playing with VOSK, and its ability to timestamp words is impeccable. Unfortunately, it's not as accurate at understanding speech.

I'm wondering whether it would be possible to use VOSK as a backend for WhisperX.
The idea is fairly simple: we transcribe the audio file with both VOSK and Whisper, map the words output by the two tools onto each other, and then keep Whisper's words with VOSK's timestamps.
When VOSK and Whisper agree on the transcription, the task is easy. It's much harder when the outputs differ, but since both outputs are sorted, it shouldn't be too hard to craft good heuristics that take into account both tools' timestamps, plus the phonemes of the words each tool detected, to decide which words to map and which to drop.
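To make the "easy" case concrete, here is a minimal sketch of the matching step using Python's standard-library `difflib.SequenceMatcher` on the two sorted word lists. The data shapes (`whisper_words` as plain strings, `vosk_words` as `(word, start, end)` tuples) are assumptions for illustration; the harder heuristic for disagreements is left as `None` timestamps to be filled in later.

```python
from difflib import SequenceMatcher

def merge_transcripts(whisper_words, vosk_words):
    """Keep Whisper's word text, but take the timestamp from VOSK
    wherever the two sorted transcripts agree on a word.

    whisper_words: list of str (Whisper's output, in order)
    vosk_words:    list of (word, start, end) tuples (VOSK's output, in order)
    Returns a list of (word, start, end); Whisper words with no VOSK
    match get None timestamps, to be resolved by a later heuristic.
    """
    vosk_texts = [w.lower() for w, _, _ in vosk_words]
    matcher = SequenceMatcher(a=[w.lower() for w in whisper_words],
                              b=vosk_texts, autojunk=False)
    merged = [(w, None, None) for w in whisper_words]
    # get_matching_blocks() yields maximal runs where both lists agree,
    # in order -- exactly the "easy" case described above.
    for block in matcher.get_matching_blocks():
        for k in range(block.size):
            word = whisper_words[block.a + k]
            _, start, end = vosk_words[block.b + k]
            merged[block.a + k] = (word, start, end)
    return merged

whisper = ["hello", "beautiful", "world"]
vosk = [("hello", 0.0, 0.4), ("bootiful", 0.5, 1.0), ("world", 1.1, 1.5)]
print(merge_transcripts(whisper, vosk))
# → [('hello', 0.0, 0.4), ('beautiful', None, None), ('world', 1.1, 1.5)]
```

A phoneme-aware heuristic (e.g. comparing "beautiful" vs. "bootiful") could then fill the `None` gaps, since the surrounding matched timestamps bound where the missing word must lie.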

Would something like this fit in the scope of this project, or should I create a brand new project for this?

@finnnnnnnnnnnnnnnnn

VOSK has an aligner branch; it might be tricky to get working, though.

alphacep/vosk-api#756
