Mirror of https://github.com/coqui-ai/TTS.git
# 🐸💬 TTS LJspeech Recipes

This directory contains one recipe folder per model:

- align_tts
- delightful_tts
- fast_pitch
- fast_speech
- fastspeech2
- glow_tts
- hifigan
- multiband_melgan
- neuralhmm_tts
- overflow
- speedy_speech
- tacotron2-Capacitron
- tacotron2-DCA
- tacotron2-DDC
- univnet
- vits_tts
- wavegrad
- wavernn
- xtts_v1
- xtts_v2

It also contains `download_ljspeech.sh`, a helper script for fetching the dataset, and this README.md.
For running the recipes:

1. Download the LJSpeech dataset, either manually from [its official website](https://keithito.com/LJ-Speech-Dataset/) or using `download_ljspeech.sh`.
2. Go to your desired model folder and run the training, using one of the commands below.

Running Python files (choose the desired GPU ID for your run and set `CUDA_VISIBLE_DEVICES`):

```bash
CUDA_VISIBLE_DEVICES="0" python train_modelX.py
```

Running bash scripts:

```bash
bash run.sh
```
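Putting the steps together, a full session might look like the sketch below. A few assumptions to note: the download URL is LJSpeech's official one, `glow_tts` is used only as an example folder, and the script name `train_glowtts.py` is illustrative, since each recipe folder names its training script differently; check the folder contents before running.

```bash
# Fetch and extract LJSpeech-1.1 (download_ljspeech.sh in this
# directory automates roughly the same steps).
wget https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2
tar -xjf LJSpeech-1.1.tar.bz2

# Pick a recipe folder and launch training on GPU 0.
# "train_glowtts.py" is illustrative; use the script that the
# chosen folder actually ships.
cd glow_tts
CUDA_VISIBLE_DEVICES="0" python train_glowtts.py
```

If the chosen recipe ships a `run.sh` instead of a Python entry point, replace the last line with `bash run.sh`.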
💡 Note that these runs are just templates to help you start training your first model. They are not optimized for the best result. Double-check the configurations and feel free to share your experiments to find better parameters together 💪.