Performance improvements for transcription (up to 20% faster transcription on CPU) #2516
Implements a suite of optimizations for the Whisper ASR system targeting memory efficiency, tensor initialization, and model loading, improving transcription speed on CPU by up to 20%.
These changes reduce memory usage, improve code quality, and make model loading more flexible and reliable while maintaining functional equivalence.
Changes:
Gradient Checkpointing Implementation:
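A common pattern for this kind of change (a sketch under assumptions, not necessarily the exact diff in this PR) is to route activations through `torch.utils.checkpoint` only when a backward pass is actually possible, so inference skips the recompute-on-backward machinery entirely. The `Block` module and `run_block` helper below are hypothetical stand-ins for a Whisper encoder layer:

```python
import torch
from torch.utils.checkpoint import checkpoint


class Block(torch.nn.Module):
    """Toy residual block standing in for a Whisper encoder layer (illustrative)."""

    def __init__(self, dim: int = 8):
        super().__init__()
        self.linear = torch.nn.Linear(dim, dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + torch.relu(self.linear(x))


def run_block(block: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    # Pay the checkpointing cost only when gradients can flow; under
    # torch.no_grad() (i.e. transcription) the block runs directly,
    # avoiding the extra bookkeeping that slows down inference.
    if torch.is_grad_enabled() and x.requires_grad:
        return checkpoint(block, x, use_reentrant=False)
    return block(x)
```

During transcription the `no_grad` path is taken every time, so checkpointing adds zero overhead; during fine-tuning the same code transparently trades compute for memory.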
Tensor initialization:
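One typical tensor-initialization optimization (shown here as an illustration; the PR's exact change may differ) is to allocate with `torch.empty` instead of `torch.zeros` when every element is overwritten immediately afterwards, skipping a redundant zero-fill. The sinusoidal positional-embedding helper below mirrors the shape of Whisper's `sinusoids` function:

```python
import torch


def sinusoids(length: int, channels: int, max_timescale: int = 10000) -> torch.Tensor:
    """Sinusoidal positional embeddings (illustrative version of Whisper's helper).

    The output buffer is allocated uninitialized with torch.empty: since the
    sin/cos writes below cover every element, zero-filling it first would be
    wasted work.
    """
    assert channels % 2 == 0, "channels must be even"
    log_timescale_increment = (
        torch.log(torch.tensor(float(max_timescale))) / (channels // 2 - 1)
    )
    inv_timescales = torch.exp(-log_timescale_increment * torch.arange(channels // 2))
    scaled_time = torch.arange(length)[:, None] * inv_timescales[None, :]
    out = torch.empty(length, channels)  # uninitialized; fully overwritten below
    out[:, : channels // 2] = torch.sin(scaled_time)
    out[:, channels // 2 :] = torch.cos(scaled_time)
    return out
```

The saving per allocation is small, but it is free, and it adds up for buffers that are rebuilt on every model instantiation.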
Model loading:
Before:
After:
Impact:
Testing:
python3 -m pytest --durations=0 -vv -k 'not test_transcribe or test_transcribe[tiny] or test_transcribe[tiny.en]' -m 'not requires_cuda'