# 🐸💬 TTS LJspeech Recipes
To run the recipes:

1. Download the LJSpeech dataset, either manually from its official website or by using `download_ljspeech.sh`.
2. Go to your desired model folder and run the training.

   Running Python files (choose the desired GPU ID for your run and set `CUDA_VISIBLE_DEVICES`):

   ```bash
   CUDA_VISIBLE_DEVICES="0" python train_modelX.py
   ```

   Running bash scripts:

   ```bash
   bash run.sh
   ```
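
Putting the two steps together, an end-to-end run looks roughly like the sketch below. The `fastspeech2` folder and `train_fastspeech2.py` script names are only illustrative; each recipe folder ships its own training script, so check the folder contents for the exact name.

```bash
# Illustrative end-to-end run from this directory; the recipe folder and
# training-script names below are examples, not a fixed convention.
bash download_ljspeech.sh                              # download and extract the LJSpeech dataset
cd fastspeech2                                         # pick any recipe folder
CUDA_VISIBLE_DEVICES="0" python train_fastspeech2.py   # train on GPU 0
```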
💡 Note that these runs are just templates to help you start training your first model. They are not optimized for the best result. Double-check the configurations and feel free to share your experiments to find better parameters together 💪.