mirror of https://github.com/coqui-ai/TTS.git
* Rename the Speaker encoder module to encoder
* Add a generic emotion dataset formatter
* Transform the Speaker Encoder dataset into a generic dataset and create an emotion encoder config
* Add a class map to the emotion config
* Add a Base encoder config
* Add an encoder evaluation script
* Fix the bug in plot_embeddings
* Enable weight decay for encoder training
* Add an argument to disable storage
* Add the Perfect Sampler and remove storage (see the sketch after this list)
* Add evaluation during encoder training
* Fix lint checks
* Remove a useless config parameter
* Activate evaluation in the speaker encoder test and use a multispeaker dataset for this test
* Fix unit tests
* Remove useless tests to speed up the aux_tests
* Use get_optimizer in the Encoder
* Add a BaseEncoder class
* Fix the unit tests
* Add a Perfect Batch Sampler unit test
* Add a function to compute encoder accuracy
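The commit list above mentions a Perfect (Batch) Sampler for encoder training. As a rough illustration of the idea, the sketch below yields batches that always contain a fixed number of classes with a fixed number of samples per class, which is the kind of balanced batch that speaker/emotion-encoder losses such as GE2E expect. The class name `PerfectBatchSampler`, its parameters, and its structure here are assumptions for illustration, not the repository's actual implementation.

```python
import random
from collections import defaultdict


class PerfectBatchSampler:
    """Illustrative sketch: yield index batches that always contain
    `classes_per_batch` distinct classes with `samples_per_class`
    samples each (hypothetical API, not the repository's own class)."""

    def __init__(self, labels, classes_per_batch, samples_per_class, seed=0):
        self.classes_per_batch = classes_per_batch
        self.samples_per_class = samples_per_class
        self.rng = random.Random(seed)
        # Group dataset indices by their class label (e.g. speaker or emotion).
        self.index_by_class = defaultdict(list)
        for idx, label in enumerate(labels):
            self.index_by_class[label].append(idx)

    def __iter__(self):
        # Work on copies so the sampler can be iterated more than once.
        pools = {cls: list(idxs) for cls, idxs in self.index_by_class.items()}
        active = list(pools)
        while len(active) >= self.classes_per_batch:
            batch = []
            for cls in self.rng.sample(active, self.classes_per_batch):
                pool = pools[cls]
                if len(pool) < self.samples_per_class:
                    active.remove(cls)  # class exhausted; drop its leftovers
                    continue
                chosen = self.rng.sample(pool, self.samples_per_class)
                for i in chosen:
                    pool.remove(i)
                batch.extend(chosen)
            # Only emit complete, perfectly balanced batches.
            if len(batch) == self.classes_per_batch * self.samples_per_class:
                yield batch
```

For example, with `classes_per_batch=4` and `samples_per_class=2`, every yielded batch holds eight dataset indices covering exactly four classes, so each training step sees the same class balance instead of whatever a random shuffle happens to produce.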
common_voice.tsv
example_1.wav
language_ids.json
scale_stats.npy
server_config.json
test_align_tts.json
test_config.json
test_glow_tts.json
test_speaker_encoder_config.json
test_speedy_speech.json
test_tacotron2_config.json
test_tacotron_bd_config.json
test_tacotron_config.json
test_vocoder_audio_config.json
test_vocoder_multiband_melgan_config.json
test_vocoder_wavegrad.json
test_vocoder_wavernn_config.json