coqui-tts/TTS/vocoder
README.md

Mozilla TTS Vocoders (Experimental)

This folder contains vocoder model implementations that can be combined with the other TTS models.

Currently, the following models are implemented:

  • Melgan
  • MultiBand-Melgan
  • ParallelWaveGAN
  • GAN-TTS (Discriminator Only)

It is also easy to adapt other vocoder models, since we provide a flexible and modular (but not too modular) framework.

Training a model

An example Colab notebook training MelGAN on the LJSpeech dataset will be linked here soon.

In order to train a new model, you need to gather all your wav files into a folder and set that folder as `data_path` in `config.json`.
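As a minimal sketch, this is what setting up the dataset folder and `data_path` could look like. Only `data_path` comes from this README; a real config needs many more model-specific fields, so copy a complete example from `tts/vocoder/configs/` rather than writing one from scratch:

```shell
# Hypothetical sketch: collect wavs into one folder and point
# data_path at it. The paths below are placeholders; only the
# "data_path" key itself is documented in this README.
mkdir -p /tmp/vocoder_dataset
# cp /path/to/your/wavs/*.wav /tmp/vocoder_dataset/

cat > /tmp/vocoder_config.json <<'EOF'
{
  "data_path": "/tmp/vocoder_dataset/"
}
EOF

cat /tmp/vocoder_config.json
```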

You need to define the other relevant parameters in your `config.json` and then start training with the following command.

CUDA_VISIBLE_DEVICES='0' python tts/bin/train_vocoder.py --config_path path/to/config.json

Example config files can be found under tts/vocoder/configs/ folder.

You can continue a previous training run with the following command.

CUDA_VISIBLE_DEVICES='0' python tts/bin/train_vocoder.py --continue_path path/to/your/model/folder

You can fine-tune a pre-trained model with the following command.

CUDA_VISIBLE_DEVICES='0' python tts/bin/train_vocoder.py --restore_path path/to/your/model.pth

Restoring a model starts a new training run in a new folder; it only loads the model weights from the given checkpoint file. Continuing a training run, on the other hand, resumes in the same directory where the previous run left off, picking up its existing state.

You can also follow your training runs on TensorBoard, as you do with our TTS models.

Acknowledgement

Thanks to @kan-bayashi for his repository, which was the starting point of our work.