mirror of https://github.com/coqui-ai/TTS.git
1. Use a single `GradScaler` for all the optimizers.
2. Save terminal logs to a file. In DDP mode, each worker creates `trainer_N_log.txt`.
3. Fix TensorBoard logging so that only the main worker (`rank == 0`) writes to it.
4. Pass only the parameters owned by the target optimizer to `grad_clip_norm`.
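The items above can be sketched in a minimal training step. This is an illustration of the described pattern, not the trainer's actual code: the models, optimizers, and loss functions are placeholders, and it assumes PyTorch's AMP `GradScaler` API.

```python
import os
import torch

# 2./3. In DDP, each worker knows its rank; each writes its own log file,
# and only the main worker (rank == 0) writes to TensorBoard.
rank = int(os.environ.get("RANK", 0))
log_file = f"trainer_{rank}_log.txt"
write_tensorboard = rank == 0

torch.manual_seed(0)
gen = torch.nn.Linear(4, 4)   # placeholder "generator" model
disc = torch.nn.Linear(4, 1)  # placeholder "discriminator" model
opt_gen = torch.optim.SGD(gen.parameters(), lr=0.1)
opt_disc = torch.optim.SGD(disc.parameters(), lr=0.1)

# 1. One GradScaler shared by all optimizers (a no-op without CUDA).
scaler = torch.cuda.amp.GradScaler(enabled=torch.cuda.is_available())

x = torch.randn(8, 4)
for opt, model, loss_fn in [
    (opt_gen, gen, lambda: gen(x).pow(2).mean()),
    (opt_disc, disc, lambda: disc(x).pow(2).mean()),
]:
    opt.zero_grad()
    scaler.scale(loss_fn()).backward()
    scaler.unscale_(opt)
    # 4. Clip only the parameters owned by the target optimizer,
    # not all model parameters.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
    scaler.step(opt)
    scaler.update()
```

Sharing one scaler keeps the loss-scale state consistent across the optimizers, and clipping per-optimizer avoids mixing gradients from unrelated parameter groups into one norm.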
Files:

__init__.py
compute_attention_masks.py
compute_embeddings.py
compute_statistics.py
convert_melgan_tflite.py
convert_melgan_torch_to_tf.py
convert_tacotron2_tflite.py
convert_tacotron2_torch_to_tf.py
distribute.py
extract_tts_spectrograms.py
find_unique_chars.py
resample.py
synthesize.py
train_encoder.py
train_tts.py
train_vocoder.py
tune_wavegrad.py