forked from SYSTRAN/faster-whisper
Commit 5216d52 (0 parents)
Showing 9 changed files with 658 additions and 0 deletions.
@@ -0,0 +1,21 @@
MIT License

Copyright (c) 2023 Guillaume Klein

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -0,0 +1,63 @@
# Faster Whisper transcription with CTranslate2

This repository demonstrates how to implement Whisper transcription using [CTranslate2](https://github.com/OpenNMT/CTranslate2/), which is a fast inference engine for Transformer models.

This implementation is about 4 times faster than [openai/whisper](https://github.com/openai/whisper) for the same accuracy while using less memory. The efficiency can be further improved with 8-bit quantization on both CPU and GPU.

## Installation

```bash
pip install -e .[conversion]
```

The model conversion requires the modules `transformers` and `torch`, which are installed by the `[conversion]` requirement. Once a model is converted, these modules are no longer needed and the installation can be simplified to:

```bash
pip install -e .
```
## Usage

### Model conversion

A Whisper model must first be converted into the CTranslate2 format. For example, the command below converts the "medium" Whisper model and saves the weights in FP16:

```bash
ct2-transformers-converter --model openai/whisper-medium --output_dir whisper-medium-ct2 --quantization float16
```

If needed, models can also be converted from code. See the [conversion API](https://opennmt.net/CTranslate2/python/ctranslate2.converters.TransformersConverter.html).
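For reference, the equivalent conversion from Python might look like the sketch below (an illustrative snippet based on the conversion API linked above; the model name and output directory simply mirror the CLI example):

```python
from ctranslate2.converters import TransformersConverter

# Convert the Hugging Face "openai/whisper-medium" checkpoint to the
# CTranslate2 format and save the weights in FP16, mirroring the
# ct2-transformers-converter command above.
converter = TransformersConverter("openai/whisper-medium")
converter.convert("whisper-medium-ct2", quantization="float16")
```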
### Transcription

```python
from faster_whisper import WhisperModel

model_path = "whisper-medium-ct2/"

# Run on GPU with FP16
model = WhisperModel(model_path, device="cuda", compute_type="float16")

# or run on GPU with INT8
# model = WhisperModel(model_path, device="cuda", compute_type="int8_float16")
# or run on CPU with INT8
# model = WhisperModel(model_path, device="cpu", compute_type="int8")

segments, info = model.transcribe("audio.mp3", beam_size=5)

print("Detected language '%s' with probability %f" % (info.language, info.language_probability))

for segment in segments:
    print("[%ds -> %ds] %s" % (segment.start, segment.end, segment.text))
```
## Comparing performance against openai/whisper

If you are comparing the performance against [openai/whisper](https://github.com/openai/whisper), you should make sure to use the same settings in both frameworks. In particular:

* In openai/whisper, `model.transcribe` uses a beam size of 1 by default. A different beam size has a significant impact on performance, so make sure to use the same value in both frameworks (see the sketch after this section).
* When running on CPU, make sure to set the same number of threads. Both frameworks read the environment variable `OMP_NUM_THREADS`, which can be set when running your script:

```bash
OMP_NUM_THREADS=4 python3 my_script.py
```
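For example, a side-by-side run that keeps the beam size aligned could look like the following (an illustrative sketch, not part of this commit; it assumes both packages are installed and that `whisper-medium-ct2/` was produced by the conversion step above):

```python
import whisper  # openai/whisper
from faster_whisper import WhisperModel

beam_size = 5  # use the same value in both frameworks

# Reference run with openai/whisper.
reference = whisper.load_model("medium")
reference_result = reference.transcribe("audio.mp3", beam_size=beam_size)

# Run with faster-whisper and the converted model.
model = WhisperModel("whisper-medium-ct2/", device="cpu", compute_type="int8")
segments, info = model.transcribe("audio.mp3", beam_size=beam_size)
faster_result = "".join(segment.text for segment in segments)
```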
@@ -0,0 +1 @@
from faster_whisper.transcribe import WhisperModel
@@ -0,0 +1,36 @@
import av
import numpy as np


def decode_audio(input_file, sampling_rate=16000):
    """Decodes the audio.
    Args:
      input_file: Path to the input file or a file-like object.
      sampling_rate: Resample the audio to this sample rate.
    Returns:
      A float32 Numpy array.
    """
    fifo = av.audio.fifo.AudioFifo()
    resampler = av.audio.resampler.AudioResampler(
        format="s16",
        layout="mono",
        rate=sampling_rate,
    )

    with av.open(input_file) as container:
        # Decode and resample each audio frame.
        for frame in container.decode(audio=0):
            frame.pts = None
            for new_frame in resampler.resample(frame):
                fifo.write(new_frame)

        # Flush the resampler.
        for new_frame in resampler.resample(None):
            fifo.write(new_frame)

    frame = fifo.read()

    # Convert s16 back to f32.
    return frame.to_ndarray().flatten().astype(np.float32) / 32768.0
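As a quick sanity check, `decode_audio` can be exercised on its own (a hedged sketch, not part of this commit; it assumes this module is importable as `faster_whisper.audio` and that an `audio.mp3` file exists):

```python
from faster_whisper.audio import decode_audio  # assumed module path

# Decode and resample to 16 kHz mono, the rate expected by the feature extractor.
audio = decode_audio("audio.mp3", sampling_rate=16000)

print(audio.dtype, audio.shape)  # float32, (num_samples,)
print("duration: %.2f seconds" % (audio.shape[0] / 16000))
```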
@@ -0,0 +1,163 @@
import numpy as np


# Adapted from https://github.com/huggingface/transformers/blob/main/src/transformers/models/whisper/feature_extraction_whisper.py
class FeatureExtractor:
    def __init__(
        self,
        feature_size=80,
        sampling_rate=16000,
        hop_length=160,
        chunk_length=30,
        n_fft=400,
    ):
        self.n_fft = n_fft
        self.hop_length = hop_length
        self.chunk_length = chunk_length
        self.n_samples = chunk_length * sampling_rate
        self.nb_max_frames = self.n_samples // hop_length
        self.time_per_frame = hop_length / sampling_rate
        self.sampling_rate = sampling_rate
        self.mel_filters = self.get_mel_filters(
            sampling_rate, n_fft, n_mels=feature_size
        )

    def get_mel_filters(self, sr, n_fft, n_mels=128, dtype=np.float32):
        # Initialize the weights
        n_mels = int(n_mels)
        weights = np.zeros((n_mels, int(1 + n_fft // 2)), dtype=dtype)

        # Center freqs of each FFT bin
        fftfreqs = np.fft.rfftfreq(n=n_fft, d=1.0 / sr)

        # 'Center freqs' of mel bands - uniformly spaced between limits
        min_mel = 0.0
        max_mel = 45.245640471924965

        mels = np.linspace(min_mel, max_mel, n_mels + 2)

        mels = np.asanyarray(mels)

        # Fill in the linear scale
        f_min = 0.0
        f_sp = 200.0 / 3
        freqs = f_min + f_sp * mels

        # And now the nonlinear scale
        min_log_hz = 1000.0  # beginning of log region (Hz)
        min_log_mel = (min_log_hz - f_min) / f_sp  # same (Mels)
        logstep = np.log(6.4) / 27.0  # step size for log region

        # If we have vector data, vectorize
        log_t = mels >= min_log_mel
        freqs[log_t] = min_log_hz * np.exp(logstep * (mels[log_t] - min_log_mel))

        mel_f = freqs

        fdiff = np.diff(mel_f)
        ramps = np.subtract.outer(mel_f, fftfreqs)

        for i in range(n_mels):
            # lower and upper slopes for all bins
            lower = -ramps[i] / fdiff[i]
            upper = ramps[i + 2] / fdiff[i + 1]

            # .. then intersect them with each other and zero
            weights[i] = np.maximum(0, np.minimum(lower, upper))

        # Slaney-style mel is scaled to be approx constant energy per channel
        enorm = 2.0 / (mel_f[2 : n_mels + 2] - mel_f[:n_mels])
        weights *= enorm[:, np.newaxis]

        return weights
    def fram_wave(self, waveform, center=True):
        """
        Transform a raw waveform into a list of smaller waveforms.
        The window length defines how much of the signal is
        contained in each frame (smaller waveform), while the hop length defines the step
        between the beginning of each new frame.
        Centering is done by reflecting the waveform which is first centered around
        `frame_idx * hop_length`.
        """
        frames = []
        for i in range(0, waveform.shape[0] + 1, self.hop_length):
            half_window = (self.n_fft - 1) // 2 + 1
            if center:
                start = i - half_window if i > half_window else 0
                end = (
                    i + half_window
                    if i < waveform.shape[0] - half_window
                    else waveform.shape[0]
                )

                frame = waveform[start:end]

                if start == 0:
                    padd_width = (-i + half_window, 0)
                    frame = np.pad(frame, pad_width=padd_width, mode="reflect")

                elif end == waveform.shape[0]:
                    padd_width = (0, (i - waveform.shape[0] + half_window))
                    frame = np.pad(frame, pad_width=padd_width, mode="reflect")

            else:
                frame = waveform[i : i + self.n_fft]
                frame_width = frame.shape[0]
                if frame_width < waveform.shape[0]:
                    frame = np.lib.pad(
                        frame,
                        pad_width=(0, self.n_fft - frame_width),
                        mode="constant",
                        constant_values=0,
                    )

            frames.append(frame)
        return np.stack(frames, 0)
    def stft(self, frames, window):
        """
        Calculates the complex Short-Time Fourier Transform (STFT) of the given framed signal.
        Should give the same results as `torch.stft`.
        """
        frame_size = frames.shape[1]
        fft_size = self.n_fft

        if fft_size is None:
            fft_size = frame_size

        if fft_size < frame_size:
            raise ValueError("FFT size must be greater or equal to the frame size")
        # number of FFT bins to store
        num_fft_bins = (fft_size >> 1) + 1

        data = np.empty((len(frames), num_fft_bins), dtype=np.complex64)
        fft_signal = np.zeros(fft_size)

        for f, frame in enumerate(frames):
            if window is not None:
                np.multiply(frame, window, out=fft_signal[:frame_size])
            else:
                fft_signal[:frame_size] = frame
            data[f] = np.fft.fft(fft_signal, axis=0)[:num_fft_bins]
        return data.T
    def __call__(self, waveform):
        """
        Compute the log-Mel spectrogram of the provided audio. Gives similar results
        to Whisper's original torch implementation with a 1e-5 tolerance.
        """
        window = np.hanning(self.n_fft + 1)[:-1]

        frames = self.fram_wave(waveform)
        stft = self.stft(frames, window=window)
        magnitudes = np.abs(stft[:, :-1]) ** 2

        filters = self.mel_filters
        mel_spec = filters @ magnitudes

        log_spec = np.log10(np.clip(mel_spec, a_min=1e-10, a_max=None))
        log_spec = np.maximum(log_spec, log_spec.max() - 8.0)
        log_spec = (log_spec + 4.0) / 4.0

        return log_spec
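To show how the extractor fits together end to end, here is a small illustrative snippet (not part of this commit; the import path `faster_whisper.feature_extractor` is an assumption based on the package layout above):

```python
import numpy as np

from faster_whisper.feature_extractor import FeatureExtractor  # assumed module path

# One second of 16 kHz audio (silence here, just to exercise the pipeline).
waveform = np.zeros(16000, dtype=np.float32)

extractor = FeatureExtractor()
features = extractor(waveform)

# 80 Mel bins x one frame per hop_length (160 samples) -> (80, 100)
print(features.shape)
```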