https://github.com/iver56/audiomentations
A Python library for audio data augmentation. Useful for making audio ML models work well in the real world, not just in the lab.
- Host: GitHub
- URL: https://github.com/iver56/audiomentations
- Owner: iver56
- License: mit
- Created: 2019-02-12T16:36:24.000Z (about 6 years ago)
- Default Branch: main
- Last Pushed: 2025-04-19T20:57:00.000Z (9 days ago)
- Last Synced: 2025-04-19T22:11:42.918Z (8 days ago)
- Topics: audio, audio-data-augmentation, audio-effects, augmentation, data-augmentation, deep-learning, dsp, machine-learning, music, python, sound, sound-processing
- Language: Python
- Homepage: https://iver56.github.io/audiomentations/
- Size: 10.7 MB
- Stars: 2,014
- Watchers: 19
- Forks: 198
- Open Issues: 52
Metadata Files:
- Readme: README.md
- Funding: .github/FUNDING.yml
- License: LICENSE
Awesome Lists containing this project
- Awesome-Speech-Enhancement - audiomentations
- awesome-python-audio - audiomentations
- awesome-list - Audiomentations - A Python library for audio data augmentation. (Data Processing / Data Pre-processing & Loading)
README
# Audiomentations
[Build status](https://circleci.com/gh/iver56/audiomentations)
[Code coverage](https://codecov.io/gh/iver56/audiomentations)
[Code style: black](https://github.com/psf/black)
[License: MIT](https://github.com/iver56/audiomentations/blob/main/LICENSE)
[DOI](https://doi.org/10.5281/zenodo.15056865)

Audiomentations is a Python library for audio data augmentation, built to be fast and easy to use - its API is inspired by
[albumentations](https://github.com/albu/albumentations). It's useful for making audio deep learning models work well in the real world, not just in the lab.
Audiomentations runs on CPU, supports mono audio and multichannel audio and integrates well in training pipelines,
such as those built with TensorFlow/Keras or PyTorch. It has helped users achieve
world-class results in Kaggle competitions and is trusted by companies building next-generation audio products with AI.

Need a PyTorch-specific alternative with GPU support? Check out [torch-audiomentations](https://github.com/asteroid-team/torch-audiomentations)!
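To give an idea of the pipeline integration mentioned above, here is a minimal PyTorch sketch that applies a `Compose` pipeline inside a `Dataset`, so every epoch sees freshly perturbed audio. The `AugmentedAudioDataset` class, the dummy clips, the sample rate and the parameter values are illustrative assumptions, not part of audiomentations itself:

```python
import numpy as np
import torch
from torch.utils.data import DataLoader, Dataset

from audiomentations import AddGaussianNoise, Compose, PitchShift


class AugmentedAudioDataset(Dataset):
    """Applies a randomized augmentation every time an item is fetched."""

    def __init__(self, waveforms, sample_rate=16000):
        self.waveforms = waveforms  # list of 1D float32 NumPy arrays
        self.sample_rate = sample_rate
        # Both transforms preserve length, so the default collate can stack the clips
        self.augment = Compose([
            AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
            PitchShift(min_semitones=-4, max_semitones=4, p=0.5),
        ])

    def __len__(self):
        return len(self.waveforms)

    def __getitem__(self, idx):
        augmented = self.augment(samples=self.waveforms[idx], sample_rate=self.sample_rate)
        return torch.from_numpy(augmented)


# Dummy data: ten 2-second clips of noise, just to keep the sketch self-contained
clips = [np.random.uniform(-0.2, 0.2, 32000).astype(np.float32) for _ in range(10)]
loader = DataLoader(AugmentedAudioDataset(clips), batch_size=4, shuffle=True)
```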
# Setup

[audiomentations on PyPI](https://pypi.org/project/audiomentations/)
`pip install audiomentations`
# Usage example
```python
from audiomentations import Compose, AddGaussianNoise, TimeStretch, PitchShift, Shift
import numpy as np

augment = Compose([
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=0.5),
    TimeStretch(min_rate=0.8, max_rate=1.25, p=0.5),
    PitchShift(min_semitones=-4, max_semitones=4, p=0.5),
    Shift(p=0.5),
])

# Generate 2 seconds of dummy audio for the sake of example
samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)

# Augment/transform/perturb the audio data
augmented_samples = augment(samples=samples, sample_rate=16000)
```
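The example above augments mono audio. Multichannel audio is passed the same way, as a 2D array; the sketch below assumes the `(channels, samples)` float32 layout described in the documentation, with random stereo noise standing in for real audio:

```python
import numpy as np

from audiomentations import AddGaussianNoise, Compose, PitchShift

augment = Compose([
    AddGaussianNoise(min_amplitude=0.001, max_amplitude=0.015, p=1.0),
    PitchShift(min_semitones=-4, max_semitones=4, p=1.0),
])

# 2 seconds of dummy stereo audio at 16 kHz: shape (channels, samples) = (2, 32000)
stereo_samples = np.random.uniform(low=-0.2, high=0.2, size=(2, 32000)).astype(np.float32)
augmented_stereo = augment(samples=stereo_samples, sample_rate=16000)
```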
# Documentation

The API documentation, along with guides, example code, illustrations and example sounds, is available at [https://iver56.github.io/audiomentations/](https://iver56.github.io/audiomentations/).
# Transforms
* [AddBackgroundNoise](https://iver56.github.io/audiomentations/waveform_transforms/add_background_noise/): Mixes in another sound to add background noise
* [AddColorNoise](https://iver56.github.io/audiomentations/waveform_transforms/add_color_noise/): Adds noise with specific color
* [AddGaussianNoise](https://iver56.github.io/audiomentations/waveform_transforms/add_gaussian_noise/): Adds Gaussian noise to the audio samples
* [AddGaussianSNR](https://iver56.github.io/audiomentations/waveform_transforms/add_gaussian_snr/): Injects Gaussian noise using a randomly chosen signal-to-noise ratio
* [AddShortNoises](https://iver56.github.io/audiomentations/waveform_transforms/add_short_noises/): Mixes in various short noise sounds
* [AdjustDuration](https://iver56.github.io/audiomentations/waveform_transforms/adjust_duration/): Trims or pads the audio to fit a target duration
* [AirAbsorption](https://iver56.github.io/audiomentations/waveform_transforms/air_absorption/): Applies frequency-dependent attenuation simulating air absorption
* [Aliasing](https://iver56.github.io/audiomentations/waveform_transforms/aliasing/): Produces aliasing artifacts by downsampling without low-pass filtering and then upsampling
* [ApplyImpulseResponse](https://iver56.github.io/audiomentations/waveform_transforms/apply_impulse_response/): Convolves the audio with a randomly chosen impulse response
* [BandPassFilter](https://iver56.github.io/audiomentations/waveform_transforms/band_pass_filter/): Applies band-pass filtering within randomized parameters
* [BandStopFilter](https://iver56.github.io/audiomentations/waveform_transforms/band_stop_filter/): Applies band-stop (notch) filtering within randomized parameters
* [BitCrush](https://iver56.github.io/audiomentations/waveform_transforms/bit_crush/): Applies bit reduction without dithering
* [Clip](https://iver56.github.io/audiomentations/waveform_transforms/clip/): Clips audio samples to specified minimum and maximum values
* [ClippingDistortion](https://iver56.github.io/audiomentations/waveform_transforms/clipping_distortion/): Distorts the signal by clipping a random percentage of samples
* [Gain](https://iver56.github.io/audiomentations/waveform_transforms/gain/): Multiplies the audio by a random gain factor
* [GainTransition](https://iver56.github.io/audiomentations/waveform_transforms/gain_transition/): Gradually changes the gain over a random time span
* [HighPassFilter](https://iver56.github.io/audiomentations/waveform_transforms/high_pass_filter/): Applies high-pass filtering within randomized parameters
* [HighShelfFilter](https://iver56.github.io/audiomentations/waveform_transforms/high_shelf_filter/): Applies a high shelf filter with randomized parameters
* [Lambda](https://iver56.github.io/audiomentations/waveform_transforms/lambda/): Applies a user-defined transform
* [Limiter](https://iver56.github.io/audiomentations/waveform_transforms/limiter/): Applies dynamic range compression limiting the audio signal
* [LoudnessNormalization](https://iver56.github.io/audiomentations/waveform_transforms/loudness_normalization/): Applies gain to match a target loudness
* [LowPassFilter](https://iver56.github.io/audiomentations/waveform_transforms/low_pass_filter/): Applies low-pass filtering within randomized parameters
* [LowShelfFilter](https://iver56.github.io/audiomentations/waveform_transforms/low_shelf_filter/): Applies a low shelf filter with randomized parameters
* [Mp3Compression](https://iver56.github.io/audiomentations/waveform_transforms/mp3_compression/): Compresses the audio to lower the quality
* [Normalize](https://iver56.github.io/audiomentations/waveform_transforms/normalize/): Applies gain so that the highest signal level becomes 0 dBFS
* [Padding](https://iver56.github.io/audiomentations/waveform_transforms/padding/): Replaces a random part of the beginning or end with padding
* [PeakingFilter](https://iver56.github.io/audiomentations/waveform_transforms/peaking_filter/): Applies a peaking filter with randomized parameters
* [PitchShift](https://iver56.github.io/audiomentations/waveform_transforms/pitch_shift/): Shifts the pitch up or down without changing the tempo
* [PolarityInversion](https://iver56.github.io/audiomentations/waveform_transforms/polarity_inversion/): Flips the audio samples upside down, reversing their polarity
* [RepeatPart](https://iver56.github.io/audiomentations/waveform_transforms/repeat_part/): Repeats a subsection of the audio a number of times
* [Resample](https://iver56.github.io/audiomentations/waveform_transforms/resample/): Resamples the signal to a randomly chosen sampling rate
* [Reverse](https://iver56.github.io/audiomentations/waveform_transforms/reverse/): Reverses the audio along its time axis
* [RoomSimulator](https://iver56.github.io/audiomentations/waveform_transforms/room_simulator/): Simulates the effect of a room on an audio source
* [SevenBandParametricEQ](https://iver56.github.io/audiomentations/waveform_transforms/seven_band_parametric_eq/): Adjusts the volume of 7 frequency bands
* [Shift](https://iver56.github.io/audiomentations/waveform_transforms/shift/): Shifts the samples forwards or backwards
* [SpecChannelShuffle](https://iver56.github.io/audiomentations/spectrogram_transforms/): Shuffles channels in the spectrogram
* [SpecFrequencyMask](https://iver56.github.io/audiomentations/spectrogram_transforms/): Applies a frequency mask to the spectrogram
* [TanhDistortion](https://iver56.github.io/audiomentations/waveform_transforms/tanh_distortion/): Applies tanh distortion to distort the signal
* [TimeMask](https://iver56.github.io/audiomentations/waveform_transforms/time_mask/): Makes a random part of the audio silent
* [TimeStretch](https://iver56.github.io/audiomentations/waveform_transforms/time_stretch/): Changes the speed without changing the pitch
* [Trim](https://iver56.github.io/audiomentations/waveform_transforms/trim/): Trims leading and trailing silence from the audio
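Each transform can also be used on its own, outside `Compose`: a transform instance is callable in the same way. A minimal sketch, reusing parameters from the usage example above with a dummy waveform, and `p=1.0` so the transform always applies:

```python
import numpy as np

from audiomentations import PitchShift

# p=1.0 means the transform is applied every time instead of with 50% probability
transform = PitchShift(min_semitones=-4, max_semitones=4, p=1.0)

samples = np.random.uniform(low=-0.2, high=0.2, size=(32000,)).astype(np.float32)
shifted = transform(samples=samples, sample_rate=16000)
```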
# Changelog

## [0.40.0] - 2025-03-20
### Added
* Add support for scipy>=1.13
### Changed
* Lay the groundwork for NumPy 2.x support (version constraint update coming in the next release)
* Speed up `LoudnessNormalization` by ~20%
* Improve test coverage and documentation
* Bump min `python-stretch` version and remove the limitation on the number of channels in `PitchShift`
* Bump min numpy version to 1.22
* Bump min pyroomacoustics version to 0.7.4

### Fixed
* Fix a bug where `TimeMask` could raise an exception if the fade length became 0
* Disallow `min_cutoff_freq` <= 0 in `HighPassFilter`
* Make `AdjustDuration` picklable (useful for multiprocessing)

### Removed
* Remove support for Python 3.8
For the full changelog, including older versions, see [https://iver56.github.io/audiomentations/changelog/](https://iver56.github.io/audiomentations/changelog/)
# Acknowledgements
Thanks to [Nomono](https://nomono.co/) for backing audiomentations.
Thanks to [all contributors](https://github.com/iver56/audiomentations/graphs/contributors) who help improve audiomentations.