+++ News: We released SCOREQ, a new speech quality metric inspired by NOMAD and trained with MOS labels. SCOREQ can be used in no-reference, full-reference, and non-matching reference modes. Link here.
While NOMAD is useful for training quality metrics without MOS, we recommend using SCOREQ for more accurate quality predictions.
NOMAD is a perceptual audio similarity metric trained without using human quality scores, e.g., mean opinion scores (MOS).
NOMAD embeddings can be used:
- To measure quality with any clean reference, e.g., both paired and unpaired speech
- As a loss function to improve speech enhancement models
NOMAD is hosted on PyPi. It can be installed in your Python environment with the following command:
```
pip install nomad_audio
```
The model works at a 16 kHz sampling rate. If your wav files have a different sampling rate, they are automatically downsampled or upsampled.
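NOMAD performs this resampling for you; if you prefer to prepare 16 kHz files offline, here is a minimal sketch using torchaudio (the file names are placeholders):

```python
import torchaudio

# Load a wav file and resample it to 16 kHz if needed (hypothetical file names)
waveform, sr = torchaudio.load("input.wav")
if sr != 16000:
    waveform = torchaudio.functional.resample(waveform, sr, 16000)
torchaudio.save("input_16k.wav", waveform, 16000)
```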
NOMAD was built with Python 3.9.16.
Data wav files can be passed in two modes:
- In mode `dir`, you indicate two directories for the reference and the degraded .wav files.
- In mode `csv`, you indicate two csv files for the reference and the degraded .wav files.
Reference files can be any clean speech.
To predict perceptual similarity of all .wav files between two directories:
```
python -m nomad_audio --mode dir --nmr_path /path/to/dir/non-matching-references --test_path /path/to/dir/degraded
```
To predict perceptual similarity of all .wav files between two csv files:
```
python -m nomad_audio --mode csv --nmr_path /path/to/csv/non-matching-references.csv --test_path /path/to/csv/degraded.csv
```
Both csv files should include a column `filename` with the absolute path of each wav file.
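For example, a minimal csv could look like this (the paths below are hypothetical):

```
filename
/data/degraded/sample_01.wav
/data/degraded/sample_02.wav
```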
In both modes, the script creates two csv files in `results-csv`, named with the current date and time:
- `DD-MM-YYYY_hh-mm-ss_nomad_avg.csv` includes the average NOMAD scores with respect to all the references in `nmr_path`.
- `DD-MM-YYYY_hh-mm-ss_nomad_scores.csv` includes the pairwise scores between the degraded speech samples in `test_path` and the references in `nmr_path`.
You can choose where the csv files are saved by setting `results_path`.
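For example, assuming the CLI exposes this parameter as a `--results_path` flag mirroring the other arguments (this README does not spell out the flag name):

```
python -m nomad_audio --mode dir --nmr_path data/nmr-data --test_path data/test-data --results_path my-results
```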
You can run this example using some .wav files that are provided in the repo:
```
python -m nomad_audio --mode dir --nmr_path data/nmr-data --test_path data/test-data
```
The resulting csv file `DD-MM-YYYY_hh-mm-ss_nomad_avg.csv` shows the mean computed using the 4 non-matching reference files:

| Test File | NOMAD |
| --- | --- |
| 445-123860-0012_NOISE_15 | 1.587 |
| 6563-285357-0042_OPUS_64k | 0.294 |
The other csv file `DD-MM-YYYY_hh-mm-ss_nomad_scores.csv` includes the pairwise scores between the degraded and the non-matching reference files:

| Test File | MJ60_10 | FL67_01 | FI53_04 | MJ57_01 |
| --- | --- | --- | --- | --- |
| 445-123860-0012_NOISE_15 | 1.627 | 1.534 | 1.629 | 1.561 |
| 6563-285357-0042_OPUS_64k | 0.23 | 0.414 | 0.186 | 0.346 |
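Each value in the averaged file is the row mean of these pairwise scores, e.g., (1.627 + 1.534 + 1.629 + 1.561) / 4 ≈ 1.587 for the first row.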
NOMAD can be imported as a Python module. Here is an example:
```python
from nomad_audio import nomad

# Directories with the non-matching references and the degraded files
nmr_path = 'data/nmr-data'
test_path = 'data/test-data'

# Returns the average scores and the pairwise scores
nomad_avg_scores, nomad_scores = nomad.predict('dir', nmr_path, test_path)
```
NOMAD has been evaluated as a loss function to improve speech enhancement models.
NOMAD loss can be used as a PyTorch loss function as follows:
```python
from nomad_audio import nomad

# Inside your training loop: add NOMAD as a perceptual auxiliary term
loss = mse_loss(estimate, clean) + weight * nomad.forward(estimate, clean)
```
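For context, here is a minimal sketch of how this might sit inside a PyTorch training loop. The model, data loader, and weight below are placeholders, not the repo's actual training code:

```python
import torch
import torch.nn.functional as F
from nomad_audio import nomad

# Hypothetical one-layer "enhancer" standing in for a real model such as a U-Net
model = torch.nn.Conv1d(1, 1, kernel_size=9, padding=4)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
weight = 0.001  # auxiliary loss weight; tune for your setup

for noisy, clean in train_loader:  # train_loader is assumed to yield (noisy, clean) 16 kHz batches
    estimate = model(noisy)
    # Reconstruction loss plus NOMAD as a perceptual auxiliary term
    loss = F.mse_loss(estimate, clean) + weight * nomad.forward(estimate, clean)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```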
We provide a full example of how to use the NOMAD loss for speech enhancement with a wave U-Net architecture, see `src/nomad_audio/nomad_loss_test.py`.
In this example we show that using NOMAD as an auxiliary loss yields a quality improvement:
- MSE -> PESQ = 2.39
- MSE + NOMAD loss -> PESQ = 2.60
Steps to reproduce this experiment:
- Download the Valentini speech enhancement dataset here
- In `src/nomad_audio/se_config.yaml`, change the following parameters (see the sketch after this list):
  - `noisy_train_dir`: path to noisy_trainset_28spk_wav
  - `clean_train_dir`: path to clean_trainset_28spk_wav
  - `noisy_valid_dir`: path to noisy_validset_28spk_wav
  - `clean_valid_dir`: path to clean_validset_28spk_wav
  - `noisy_test_dir`: path to noisy_testset_wav
  - `clean_test_dir`: path to clean_testset_wav
Notice that the Valentini dataset does not explicitly provide a validation partition. We created one using speech samples from speakers `p286` and `p287` of the training set.
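After these edits, the relevant part of `src/nomad_audio/se_config.yaml` might look like the sketch below; the dataset root is a hypothetical path:

```yaml
# Hypothetical paths; point these at your local copy of the Valentini dataset
noisy_train_dir: /data/valentini/noisy_trainset_28spk_wav
clean_train_dir: /data/valentini/clean_trainset_28spk_wav
noisy_valid_dir: /data/valentini/noisy_validset_28spk_wav
clean_valid_dir: /data/valentini/clean_validset_28spk_wav
noisy_test_dir: /data/valentini/noisy_testset_wav
clean_test_dir: /data/valentini/clean_testset_wav
```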
See the paper for more details on speech enhancement results using the DEMUCS model, evaluated with subjective listening tests.
We recommend tuning the weight of the NOMAD loss. The paper results with the DEMUCS model use a weight of `0.1`. The U-Net model provided in this repo uses a weight of `0.001`.
After cloning the repo, you can either `pip install nomad_audio` as above or install the required packages from `requirements.txt`. Installing the pip package also gives you the `nomad_audio` module, which is not needed to train NOMAD, only to use it.
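For the second option:

```
pip install -r requirements.txt
```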
NOMAD is trained on degraded samples from the Librispeech dataset.
Download the dataset to train the model.
In addition, we provide instructions to generate the dataset above. Notice that the process can be time-consuming; we recommend downloading the dataset from the link.
The following steps are required to train the model:
- Download wav2vec from this link and save it into `pt-models`. You can skip this step if you installed the pip package with `pip install nomad_audio` in your working directory.
- In `src/config/train_triplet.yaml`, set `root` to the path of the degraded Librispeech dataset.
- From the working directory, run:
```
python main.py --config_file train_triplet.yaml
```
This will create a directory `out-models/train-triplet/dd-mm-yyyy_hh-mm-ss` in your working directory that includes the best model and the configuration parameters used to train it.
We evaluated NOMAD on ranking degradation intensity, on speech quality assessment, and as a loss function for speech enhancement. See the paper for more details. As clean non-matching references, we extracted 899 samples from the TSP speech database.
Here we show the scatter plot between NOMAD scores (computed with unpaired speech) and MOS quality labels. For each database, we mapped NOMAD scores to MOS using a third-order polynomial. Notice that performance in the paper is reported without this mapping.
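For reference, such a mapping can be fit with numpy; a minimal sketch with hypothetical score arrays:

```python
import numpy as np

# Hypothetical per-database arrays of NOMAD scores and MOS labels
nomad_scores = np.array([1.59, 0.85, 0.52, 0.29])
mos = np.array([1.8, 2.9, 3.6, 4.2])

# Fit a third-order polynomial mapping NOMAD scores to MOS ...
coeffs = np.polyfit(nomad_scores, mos, deg=3)
# ... and apply it to project scores onto the MOS scale
mapped_mos = np.polyval(coeffs, nomad_scores)
```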
If you use NOMAD or the training corpus for your research, please cite our ICASSP 2024 paper.
```bibtex
@INPROCEEDINGS{10448028,
  author={Ragano, Alessandro and Skoglund, Jan and Hines, Andrew},
  booktitle={ICASSP 2024 - 2024 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
  title={NOMAD: Unsupervised Learning of Perceptual Embeddings For Speech Enhancement and Non-Matching Reference Audio Quality Assessment},
  year={2024},
  volume={},
  number={},
  pages={1011-1015},
  keywords={Degradation;Speech enhancement;Signal processing;Predictive models;Feature extraction;Acoustic measurements;Loss measurement;Perceptual measures of audio quality;objective and subjective quality assessment;speech enhancement},
  doi={10.1109/ICASSP48485.2024.10448028}}
```
The NOMAD code is licensed under the MIT license.
Copyright © 2023 Alessandro Ragano