<!--Copyright 2023 The HuggingFace Team. All rights reserved.

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with
the License. You may obtain a copy of the License at

http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.

-->
# UnivNet

## Overview
The UnivNet model was proposed in [UnivNet: A Neural Vocoder with Multi-Resolution Spectrogram Discriminators for High-Fidelity Waveform Generation](https://arxiv.org/abs/2106.07889) by Won Jang, Dan Lim, Jaesam Yoon, Bongwan Kim, and Juntae Kim.
The UnivNet model is a generative adversarial network (GAN) trained to synthesize high-fidelity speech waveforms. The UnivNet model shared in `transformers` is the *generator*, which maps a conditioning log-mel spectrogram and an optional noise sequence to a speech waveform (that is, it acts as a vocoder). Only the generator is required for inference. The *discriminator* used to train the generator is not implemented.

The abstract from the paper is the following:

*Most neural vocoders employ band-limited mel-spectrograms to generate waveforms. If full-band spectral features are used as the input, the vocoder can be provided with as much acoustic information as possible. However, in some models employing full-band mel-spectrograms, an over-smoothing problem occurs as part of which non-sharp spectrograms are generated. To address this problem, we propose UnivNet, a neural vocoder that synthesizes high-fidelity waveforms in real time. Inspired by works in the field of voice activity detection, we added a multi-resolution spectrogram discriminator that employs multiple linear spectrogram magnitudes computed using various parameter sets. Using full-band mel-spectrograms as input, we expect to generate high-resolution signals by adding a discriminator that employs spectrograms of multiple resolutions as the input. In an evaluation on a dataset containing information on hundreds of speakers, UnivNet obtained the best objective and subjective results among competing models for both seen and unseen speakers. These results, including the best subjective score for text-to-speech, demonstrate the potential for fast adaptation to new speakers without a need for training from scratch.*

Tips:

- The `noise_sequence` argument for [`UnivNetModel.forward`] should be standard Gaussian noise (such as from `torch.randn`) of shape `([batch_size], noise_length, model.config.model_in_channels)`, where `noise_length` should match the length dimension (dimension 1) of the `input_features` argument. If it is not supplied, it will be generated randomly; a `torch.Generator` can be passed to the `generator` argument so that the forward pass is reproducible (see the sketch after the usage example below). Note that [`UnivNetFeatureExtractor`] returns generated noise by default, so it usually isn't necessary to create `noise_sequence` manually.
- Padding added by [`UnivNetFeatureExtractor`] can be removed from the [`UnivNetModel`] output through the [`UnivNetFeatureExtractor.batch_decode`] method, as shown in the usage example below.
- Padding the end of each waveform with silence can reduce artifacts at the end of the generated audio sample. This can be done by supplying `pad_end=True` to [`UnivNetFeatureExtractor.__call__`]. See [this issue](https://github.com/seungwonpark/melgan/issues/8) for more details.

Usage Example:

```python
import torch
from scipy.io.wavfile import write
from datasets import Audio, load_dataset

from transformers import UnivNetFeatureExtractor, UnivNetModel

model_id_or_path = "dg845/univnet-dev"
model = UnivNetModel.from_pretrained(model_id_or_path)
feature_extractor = UnivNetFeatureExtractor.from_pretrained(model_id_or_path)

ds = load_dataset("hf-internal-testing/librispeech_asr_dummy", "clean", split="validation")
# Resample the audio to the model and feature extractor's sampling rate.
ds = ds.cast_column("audio", Audio(sampling_rate=feature_extractor.sampling_rate))
# Pad the end of the converted waveforms to reduce artifacts at the end of the output audio samples.
inputs = feature_extractor(
    ds[0]["audio"]["array"], sampling_rate=ds[0]["audio"]["sampling_rate"], pad_end=True, return_tensors="pt"
)

with torch.no_grad():
    audio = model(**inputs)

# Remove the extra padding at the end of the output.
audio = feature_extractor.batch_decode(**audio)[0]
# Convert to wav file
write("sample_audio.wav", feature_extractor.sampling_rate, audio)
```
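Following the first tip above, the forward pass can be made reproducible either by passing a seeded `torch.Generator` or by supplying the Gaussian `noise_sequence` yourself. Below is a minimal sketch reusing `model` and `inputs` from the example above; the shapes follow the description in the tips.

```python
import torch

# Reuses `model` and `inputs` from the usage example above.

# Option 1: let the model draw its own noise, but from a seeded generator.
generator = torch.Generator(device="cpu").manual_seed(0)
with torch.no_grad():
    audio = model(input_features=inputs["input_features"], generator=generator)

# Option 2: supply the standard Gaussian noise explicitly. Its shape is
# (batch_size, noise_length, model.config.model_in_channels), where noise_length
# matches the length dimension (dimension 1) of `input_features`.
batch_size, noise_length = inputs["input_features"].shape[0], inputs["input_features"].shape[1]
noise_sequence = torch.randn(
    (batch_size, noise_length, model.config.model_in_channels),
    generator=torch.Generator(device="cpu").manual_seed(0),
)
with torch.no_grad():
    audio = model(input_features=inputs["input_features"], noise_sequence=noise_sequence)
```

Both calls only illustrate the arguments described in the tips; for end-to-end decoding (including removal of the extra padding), follow the full example above.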
This model was contributed by [dg845](https://huggingface.co/dg845).
To the best of my knowledge, there is no official code release, but an unofficial implementation can be found at [maum-ai/univnet](https://github.com/maum-ai/univnet) with pretrained checkpoints [here](https://github.com/maum-ai/univnet#pre-trained-model).

## UnivNetConfig

[[autodoc]] UnivNetConfig

## UnivNetFeatureExtractor

[[autodoc]] UnivNetFeatureExtractor
    - __call__

## UnivNetModel

[[autodoc]] UnivNetModel
    - forward