coqui-tts/recipes/ljspeech
align_tts/
fast_pitch/
fast_speech/
glow_tts/
hifigan/
multiband_melgan/
speedy_speech/
tacotron2-DCA/
tacotron2-DDC/
univnet/
vits_tts/
wavegrad/
wavernn/
README.md
download_ljspeech.sh

README.md

🐸💬 TTS LJSpeech Recipes

To run the recipes:

  1. Download the LJSpeech dataset, either manually from its official website or by running download_ljspeech.sh (a manual-download sketch follows this list).

  2. Go to your desired model folder and run the training.

    Running Python files (choose the GPU ID for your run and set CUDA_VISIBLE_DEVICES accordingly):

    CUDA_VISIBLE_DEVICES="0" python train_modelX.py
    

    Running bash scripts (for recipes that provide a run.sh):

    bash run.sh
    
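
For reference, a manual setup might look like the sketch below, which is roughly what download_ljspeech.sh automates. The hosting URL, the extraction location, and the train_glowtts.py script name are assumptions used for illustration: verify the URL against the dataset's official website, check the dataset path configured inside the recipe you pick, and check the model folder for the actual script name.

    # Download and extract LJSpeech (roughly what download_ljspeech.sh automates).
    # The URL is an assumption; verify it against the dataset's official website.
    wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
    tar -xjf LJSpeech-1.1.tar.bz2

    # Pick a model folder and start a run on GPU 0.
    # train_glowtts.py is an example script name; check the folder for the actual file.
    cd glow_tts
    CUDA_VISIBLE_DEVICES="0" python train_glowtts.py
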

💡 Note that these recipes are just templates to help you start training your first model. They are not optimized for the best results. Double-check the configurations and feel free to share your experiments to find better parameters together 💪.
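
If you want to experiment with parameters without editing a recipe file, one option worth trying is the Coqui Trainer's command-line config overrides. The --coqpit.* form below is an assumption about the Trainer version installed with your checkout; if it is not recognized, edit the config object inside the recipe script instead.

    # Hypothetical example: override a config field from the command line
    # (assumes the installed Trainer supports --coqpit.* arguments).
    CUDA_VISIBLE_DEVICES="0" python train_modelX.py --coqpit.batch_size 16
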