torchaudio: an audio library for PyTorch


The aim of torchaudio is to apply PyTorch to the audio domain. By supporting PyTorch, torchaudio follows the same philosophy of providing strong GPU acceleration, having a focus on trainable features through the autograd system, and having consistent style (tensor names and dimension names). Therefore, it is primarily a machine learning library and not a general signal processing library. The benefits of PyTorch can be seen in torchaudio through all computations being expressed as PyTorch operations, which makes the library easy to use and feel like a natural extension.
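
As a small illustration of the autograd point above (a minimal sketch using a randomly generated waveform rather than a real file), gradients flow back through a torchaudio transform just as they do through any other PyTorch operation:

import torch
import torchaudio

# Hypothetical input: one channel of random audio, shaped (channel, time).
waveform = torch.randn(1, 16000, requires_grad=True)

# Spectrogram is composed of PyTorch operations, so it is differentiable end to end.
specgram = torchaudio.transforms.Spectrogram()(waveform)
specgram.sum().backward()

print(waveform.grad.shape)  # gradients reach the input waveform: torch.Size([1, 16000])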

Dependencies

  • pytorch (nightly version needed for development)
  • libsox v14.3.2 or above (only required when building from source)
  • [optional] vesis84/kaldi-io-for-python commit cb46cb1f44318a5d04d4941cf39084c5b021241e or above

Installation

Binary Distributions

To install the latest version using anaconda, run:

conda install -c pytorch torchaudio

To install the latest pip wheels, run:

pip install torchaudio -f https://download.pytorch.org/whl/torch_stable.html

(If you do not have torch already installed, this will default to installing torch from PyPI. If you need a different torch configuration, preinstall torch before running this command.)

Nightly build

Note that the nightly build of torchaudio is built against PyTorch's nightly build. Therefore, you need to install the latest PyTorch nightly when you use the nightly build of torchaudio.

pip

pip install numpy
pip install --pre torchaudio -f https://download.pytorch.org/whl/nightly/torch_nightly.html

conda

conda install -y -c pytorch-nightly torchaudio

From Source

If your system configuration is not among the supported configurations above, you can build torchaudio from source.

This will require libsox v14.3.2 or above.

Here are examples of how to install SoX:

OSX (Homebrew):

brew install sox

Linux (Ubuntu):

sudo apt-get install sox libsox-dev libsox-fmt-all

Anaconda:

conda install -c conda-forge sox

Once SoX is installed, build and install torchaudio from the source tree:

# Linux
python setup.py install

# OSX
MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install

Alternatively, the build process can build SoX (and codecs such as libmad, lame, and flac) statically, and torchaudio can link against them, by setting the environment variable BUILD_SOX=1. The build process will fetch and build lame, libmad, flac, and SoX before building the extension.

# Linux
BUILD_SOX=1 python setup.py install

# OSX
BUILD_SOX=1 MACOSX_DEPLOYMENT_TARGET=10.9 CC=clang CXX=clang++ python setup.py install

This is known to work on Linux distributions such as Ubuntu and CentOS 7, as well as on macOS. If you try this on a new system and find a solution that makes it work, feel free to share it by opening an issue.

Troubleshooting

checking build system type... ./config.guess: unable to guess system type

The configuration files bundled with the codecs are old and cannot correctly detect newer environments, such as Jetson (AArch64). You need to replace the config.guess file in ./third_party/tmp/lame-3.99.5/config.guess and/or ./third_party/tmp/libmad-0.15.1b/config.guess with the latest one.

See also: #658

Undefined reference to `tgetnum' when using `BUILD_SOX`

If you come across errors similar to the following while building from within an Anaconda environment:

../bin/ld: console.c:(.text+0xc1): undefined reference to `tgetnum'

Install ncurses from conda-forge before running python setup.py install:

# Install ncurses from conda-forge
conda install -c conda-forge ncurses

Quick Usage

import torchaudio

waveform, sample_rate = torchaudio.load('foo.wav')  # load tensor from file
torchaudio.save('foo_save.wav', waveform, sample_rate)  # save tensor to file

Backend Dispatch

By default, on OSX and Linux, torchaudio uses SoX as a backend to load and save files. The backend can be changed to SoundFile as follows. See SoundFile for installation instructions.

import torchaudio
torchaudio.set_audio_backend("soundfile")  # switch backend

waveform, sample_rate = torchaudio.load('foo.wav')  # load tensor from file, as usual
torchaudio.save('foo_save.wav', waveform, sample_rate)  # save tensor to file, as usual

Unlike SoX, SoundFile does not currently support mp3.

API Reference

API Reference is located here: http://pytorch.org/audio/

Conventions

As a machine learning library built on top of PyTorch, torchaudio is standardized around the following naming conventions. Tensors are assumed to have channel as the first dimension and time as the last dimension (when applicable), which keeps them consistent with PyTorch's dimensions. For size names, the prefix n_ is used (e.g. "a tensor of size (n_freq, n_mel)"), whereas dimension names do not have this prefix (e.g. "a tensor of dimension (channel, time)").

  • waveform: a tensor of audio samples with dimensions (channel, time)
  • sample_rate: the rate of the audio (samples per second)
  • specgram: a spectrogram tensor with dimensions (channel, freq, time)
  • mel_specgram: a mel spectrogram with dimensions (channel, mel, time)
  • hop_length: the number of samples between the starts of consecutive frames
  • n_fft: the number of Fourier bins
  • n_mel, n_mfcc: the number of mel and MFCC bins
  • n_freq: the number of bins in a linear spectrogram
  • min_freq: the lowest frequency of the lowest band in a spectrogram
  • max_freq: the highest frequency of the highest band in a spectrogram
  • win_length: the length of the STFT window
  • window_fn: a function that creates a window, e.g. torch.hann_window

Transforms expect and return the following dimensions.

  • Spectrogram: (channel, time) -> (channel, freq, time)
  • AmplitudeToDB: (channel, freq, time) -> (channel, freq, time)
  • MelScale: (channel, freq, time) -> (channel, mel, time)
  • MelSpectrogram: (channel, time) -> (channel, mel, time)
  • MFCC: (channel, time) -> (channel, mfcc, time)
  • MuLawEncoding: (channel, time) -> (channel, time)
  • MuLawDecoding: (channel, time) -> (channel, time)
  • Resample: (channel, time) -> (channel, time)
  • Fade: (channel, time) -> (channel, time)
  • Vol: (channel, time) -> (channel, time)
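
As a small sketch of these shape conventions (the parameter values below are arbitrary examples, not required defaults), a Spectrogram and a MelSpectrogram applied to a (channel, time) tensor produce the shapes listed above:

import torch
import torchaudio

waveform = torch.randn(1, 16000)  # (channel, time); random stand-in for real audio

# Linear spectrogram: (channel, time) -> (channel, freq, time), where freq = n_fft // 2 + 1
spectrogram = torchaudio.transforms.Spectrogram(
    n_fft=400, win_length=400, hop_length=200, window_fn=torch.hann_window
)
print(spectrogram(waveform).shape)  # torch.Size([1, 201, 81])

# Mel spectrogram: (channel, time) -> (channel, mel, time)
mel_spectrogram = torchaudio.transforms.MelSpectrogram(
    sample_rate=16000, n_fft=400, hop_length=200, n_mels=64
)
print(mel_spectrogram(waveform).shape)  # torch.Size([1, 64, 81])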

Complex numbers are supported via tensors of dimension (..., 2), and torchaudio provides complex_norm and angle to convert such a tensor into its magnitude and phase. Here, and in the documentation, we use an ellipsis "..." as a placeholder for the rest of the dimensions of a tensor, e.g. optional batching and channel dimensions.
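
For example (a minimal sketch, assuming the complex_norm and angle helpers live in torchaudio.functional, and using a random stand-in tensor rather than a real STFT output):

import torch
import torchaudio.functional as F

# A "complex" tensor in the (..., 2) convention: the last dimension holds (real, imaginary).
complex_specgram = torch.randn(1, 201, 81, 2)  # e.g. (channel, freq, time, 2)

magnitude = F.complex_norm(complex_specgram)  # (channel, freq, time)
phase = F.angle(complex_specgram)             # (channel, freq, time)

print(magnitude.shape, phase.shape)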

Contributing Guidelines

Please let us know if you encounter a bug by filing an issue.

We appreciate all contributions. If you are planning to contribute back bug-fixes, please do so without any further discussion.

If you plan to contribute new features, utility functions or extensions to the core, please first open an issue and discuss the feature with us. Sending a PR without discussion might end up resulting in a rejected PR, because we might be taking the core in a different direction than you might be aware of.

Disclaimer on Datasets

This is a utility library that downloads and prepares public datasets. We do not host or distribute these datasets, vouch for their quality or fairness, or claim that you have license to use the dataset. It is your responsibility to determine whether you have permission to use the dataset under the dataset's license.

If you're a dataset owner and wish to update any part of it (description, citation, etc.), or do not want your dataset to be included in this library, please get in touch through a GitHub issue. Thanks for your contribution to the ML community!
