mirror of https://github.com/coqui-ai/TTS.git
Directory contents:

- align_tts
- fast_pitch
- fast_speech
- glow_tts
- hifigan
- multiband_melgan
- overflow
- speedy_speech
- tacotron2-Capacitron
- tacotron2-DCA
- tacotron2-DDC
- univnet
- vits_tts
- wavegrad
- wavernn
- README.md
- download_ljspeech.sh
# 🐸💬 TTS LJspeech Recipes
For running the recipes:

1. Download the LJSpeech dataset, either manually from its official website or using `download_ljspeech.sh` (a Python alternative is sketched right after this list).
2. Go to your desired model folder and run the training.

   Running Python files (choose the desired GPU ID for your run and set `CUDA_VISIBLE_DEVICES`):

   ```bash
   CUDA_VISIBLE_DEVICES="0" python train_modelX.py
   ```

   Running bash scripts:

   ```bash
   bash run.sh
   ```
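If you prefer to handle step 1 from Python instead of the bash script, the sketch below downloads and extracts the corpus. The archive URL and target folder are assumptions (the URL reflects the dataset's usual hosting location), so verify them against `download_ljspeech.sh` or the official LJSpeech page before relying on them.

```python
import tarfile
import urllib.request
from pathlib import Path

# Assumed location of the LJSpeech 1.1 archive; double-check against
# download_ljspeech.sh or the dataset's official website.
LJSPEECH_URL = "https://data.keithito.com/data/speech/LJSpeech-1.1.tar.bz2"
target_dir = Path(".")  # adjust so the extracted folder matches your recipe's dataset path

archive_path = target_dir / "LJSpeech-1.1.tar.bz2"
if not archive_path.exists():
    print(f"Downloading {LJSPEECH_URL} ...")
    urllib.request.urlretrieve(LJSPEECH_URL, archive_path)

# Unpack the archive; the recipes expect to find an LJSpeech-1.1/ folder.
with tarfile.open(archive_path, "r:bz2") as tar:
    tar.extractall(path=target_dir)
print("Extracted to", target_dir / "LJSpeech-1.1")
```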
💡 Note that these runs are just templates to help you start training your first model. They are not optimized for the best result. Double-check the configurations and feel free to share your experiments to find better parameters together 💪.
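To give an idea of where those configurations live, here is a minimal sketch of what a recipe script such as `train_modelX.py` typically looks like, loosely modeled on the GlowTTS LJSpeech recipe. The config class, its fields, and the constructor arguments are assumptions that differ between models and 🐸TTS versions, so always refer to the actual script in the recipe folder you chose.

```python
import os

from trainer import Trainer, TrainerArgs

from TTS.tts.configs.glow_tts_config import GlowTTSConfig
from TTS.tts.configs.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor

output_path = os.path.dirname(os.path.abspath(__file__))

# Point the formatter at the extracted LJSpeech folder.
# NOTE: older TTS versions call this field `name` instead of `formatter`.
dataset_config = BaseDatasetConfig(
    formatter="ljspeech",
    meta_file_train="metadata.csv",
    path=os.path.join(output_path, "../LJSpeech-1.1/"),
)

# These are the kinds of fields the note above asks you to double-check;
# defaults and available options depend on the chosen model's config class.
config = GlowTTSConfig(
    batch_size=32,
    eval_batch_size=16,
    num_loader_workers=4,
    num_eval_loader_workers=4,
    run_eval=True,
    epochs=1000,
    text_cleaner="phoneme_cleaners",
    use_phonemes=True,
    phoneme_language="en-us",
    phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
    print_step=25,
    mixed_precision=True,
    output_path=output_path,
    datasets=[dataset_config],
)

# Audio processor and tokenizer are initialized from the config.
ap = AudioProcessor.init_from_config(config)
tokenizer, config = TTSTokenizer.init_from_config(config)

# Load train/eval samples as defined by the dataset formatter.
train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True)

model = GlowTTS(config, ap, tokenizer, speaker_manager=None)

trainer = Trainer(
    TrainerArgs(), config, output_path,
    model=model, train_samples=train_samples, eval_samples=eval_samples,
)
trainer.fit()
```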