
Vocoder with HiFIGAN trained on Luganda CommonVoice

This repository provides all the necessary tools for using a HiFIGAN vocoder trained on Luganda speech from CommonVoice (single female speaker).

The pre-trained model takes a spectrogram as input and produces a waveform as output. Typically, a vocoder is used after a TTS model that converts input text into a spectrogram.

The sampling frequency is 22050 Hz.

Install SpeechBrain

pip install speechbrain

Please note that we encourage you to read our tutorials and learn more about SpeechBrain.

Using the Vocoder

  • Basic Usage:
import torch
from speechbrain.inference.vocoders import HIFIGAN

# Load the pretrained vocoder
hifi_gan = HIFIGAN.from_hparams(source="Nick256/tts-hifigan-commonvoice-single-female", savedir="pretrained_models/tts-hifigan-commonvoice-single-female")

# A dummy batch of 2 mel spectrograms (80 mel bins, 298 frames)
mel_specs = torch.rand(2, 80, 298)

# Decode the spectrograms into waveforms of shape (batch, 1, time)
waveforms = hifi_gan.decode_batch(mel_specs)
  • Convert a Spectrogram into a Waveform:
import torchaudio
from speechbrain.inference.vocoders import HIFIGAN
from speechbrain.lobes.models.FastSpeech2 import mel_spectogram

# Load a pretrained HIFIGAN Vocoder
hifi_gan = HIFIGAN.from_hparams(source="Nick256/tts-hifigan-commonvoice-single-female", savedir="pretrained_models/tts-hifigan-commonvoice-single-female")

# Load an audio file (an example file can be found in this repository)
# Ensure that the audio signal is sampled at 22050 Hz; refer to the provided link for a 16 kHz Vocoder.
signal, rate = torchaudio.load('Nick256/tts-hifigan-commonvoice-single-female/example.wav')
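
# Optional (a sketch, not part of the original example): if your file is not
# already sampled at 22050 Hz, resample it before computing the mel spectrogram.
if rate != 22050:
    signal = torchaudio.functional.resample(signal, orig_freq=rate, new_freq=22050)
    rate = 22050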

# Compute the mel spectrogram.
# IMPORTANT: Use these specific parameters to match the Vocoder's training settings for optimal results.
spectrogram, _ = mel_spectogram(
    audio=signal.squeeze(),
    sample_rate=22050,
    hop_length=256,
    win_length=None,
    n_mels=80,
    n_fft=1024,
    f_min=0.0,
    f_max=8000.0,
    power=1,
    normalized=False,
    min_max_energy_norm=True,
    norm="slaney",
    mel_scale="slaney",
    compression=True
)

# Convert the spectrogram to waveform
waveforms = hifi_gan.decode_batch(spectrogram)

# Save the reconstructed audio as a waveform
torchaudio.save('waveform_reconstructed.wav', waveforms.squeeze(1), 22050)

# If everything is set up correctly, the original and reconstructed audio should be nearly indistinguishable.
# Keep in mind that this Vocoder is trained for a single speaker; for multi-speaker Vocoder options, refer to the provided links.
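
As a quick sanity check, you can inspect the tensor shapes (a sketch, assuming the variables from the example above; the factor of 256 is the hop length, i.e. an assumption about this vocoder's upsampling rate):

print(spectrogram.shape)  # (80, n_frames): mel bins x frames
print(waveforms.shape)    # (1, 1, n_samples), with n_samples = n_frames * 256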

Using the Vocoder with the TTS

import torchaudio
from speechbrain.inference.TTS import Tacotron2
from speechbrain.inference.vocoders import HIFIGAN

# Initialize TTS (Tacotron2) and Vocoder (HiFIGAN)
tacotron2 = Tacotron2.from_hparams(source="Nick256/tts-tacotron2-commonvoice-single-female", savedir="pretrained_models/tts-tacotron2-commonvoice-single-female")
hifi_gan = HIFIGAN.from_hparams(source="Nick256/tts-hifigan-commonvoice-single-female", savedir="pretrained_models/tts-hifigan-commonvoice-single-female")

# Running the TTS ("osiibye otya leero" is a Luganda greeting, roughly "how has your day been?")
mel_output, mel_length, alignment = tacotron2.encode_text("osiibye otya leero")

# Running Vocoder (spectrogram-to-waveform)
waveforms = hifi_gan.decode_batch(mel_output)

# Save the waveform
torchaudio.save('example_TTS.wav', waveforms.squeeze(1), 22050)
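
To synthesize several sentences in one shot, you can use encode_batch instead of encode_text (a sketch assuming the models loaded above; the repeated sentence is just a placeholder):

items = [
    "osiibye otya leero",
    "osiibye otya leero",
]
mel_outputs, mel_lengths, alignments = tacotron2.encode_batch(items)
waveforms = hifi_gan.decode_batch(mel_outputs)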

Inference on GPU

To perform inference on the GPU, add run_opts={"device":"cuda"} when calling the from_hparams method.
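
For example, a minimal sketch with the vocoder (the same option works for the TTS model):

import torch
from speechbrain.inference.vocoders import HIFIGAN

# Load the vocoder directly on the GPU
hifi_gan = HIFIGAN.from_hparams(
    source="Nick256/tts-hifigan-commonvoice-single-female",
    savedir="pretrained_models/tts-hifigan-commonvoice-single-female",
    run_opts={"device": "cuda"},
)

# Keep the input on the same device as the model
mel_specs = torch.rand(2, 80, 298).to("cuda")
waveforms = hifi_gan.decode_batch(mel_specs)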

Training

The model was trained with SpeechBrain. To train it from scratch, follow these steps:

  1. Clone SpeechBrain:
git clone https://github.com/speechbrain/speechbrain/
  2. Install it:
cd speechbrain
pip install -r requirements.txt
pip install -e .
  3. Run Training:
cd recipes/LJSpeech/TTS/vocoder/hifi_gan/
python train.py hparams/train.yaml --data_folder /path/to/LJSpeech
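
To train on a GPU, you can pass the device as a run option (a sketch; --device is a standard SpeechBrain command-line option, applied to this recipe as an assumption):

python train.py hparams/train.yaml --data_folder /path/to/LJSpeech --device cuda:0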

You can find our training results (models, logs, etc.) here.
