# 🐸💬 TTS LJspeech Recipes

Recipe folders in this directory:

- align_tts
- fast_pitch
- fast_speech
- fastspeech2
- glow_tts
- hifigan
- multiband_melgan
- neuralhmm_tts
- overflow
- speedy_speech
- tacotron2-Capacitron
- tacotron2-DCA
- tacotron2-DDC
- univnet
- vits_tts
- wavegrad
- wavernn

The folder also contains `download_ljspeech.sh` and this `README.md`.
For running the recipes:

1. Download the LJSpeech dataset, either manually from its official website or by running `download_ljspeech.sh`.
2. Go to your desired model folder and run the training.

   To run a Python training script, choose the desired GPU ID for your run and set `CUDA_VISIBLE_DEVICES`:

   ```bash
   CUDA_VISIBLE_DEVICES="0" python train_modelX.py
   ```

   To run a bash script:

   ```bash
   bash run.sh
   ```
💡 Note that these runs are just templates to help you start training your first model. They are not optimized for the best result. Double-check the configurations and feel free to share your experiments to find better parameters together 💪.
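For orientation, below is a minimal sketch of the structure shared by the Python recipes, illustrated with GlowTTS. The concrete field values are illustrative template defaults, and exact config field names can differ slightly between TTS versions, so treat this as a sketch and compare it against the actual `train_*.py` file in the model folder before relying on it.

```python
# Minimal sketch of an LJSpeech recipe script (GlowTTS shown as an example).
# Values below are illustrative; the real recipe in glow_tts/ is the reference.
import os

from trainer import Trainer, TrainerArgs

from TTS.tts.configs.glow_tts_config import GlowTTSConfig
from TTS.tts.configs.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor

output_path = os.path.dirname(os.path.abspath(__file__))

# Point the dataset config at your extracted LJSpeech folder (the path here is an assumption).
dataset_config = BaseDatasetConfig(
    formatter="ljspeech",  # older TTS versions name this field `name` instead of `formatter`
    meta_file_train="metadata.csv",
    path=os.path.join(output_path, "../LJSpeech-1.1/"),
)

# Model and training configuration; tweak these as the note above suggests.
config = GlowTTSConfig(
    batch_size=32,
    eval_batch_size=16,
    num_loader_workers=4,
    run_eval=True,
    epochs=1000,
    text_cleaner="phoneme_cleaners",
    use_phonemes=True,
    phoneme_language="en-us",
    phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
    print_step=25,
    mixed_precision=True,
    output_path=output_path,
    datasets=[dataset_config],
)

# Audio processor and tokenizer are initialized from the config.
ap = AudioProcessor.init_from_config(config)
tokenizer, config = TTSTokenizer.init_from_config(config)

# Load the training and evaluation samples described by the dataset config.
train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True)

# Build the model and hand everything to the Trainer.
model = GlowTTS(config, ap, tokenizer, speaker_manager=None)
trainer = Trainer(
    TrainerArgs(),
    config,
    output_path,
    model=model,
    train_samples=train_samples,
    eval_samples=eval_samples,
)
trainer.fit()
```

A script like this is launched exactly as shown in step 2 above, e.g. `CUDA_VISIBLE_DEVICES="0" python train_glowtts.py` from inside the corresponding model folder (the script name varies per recipe).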