🐸💬 TTS Training Recipes

TTS recipes are intended to host scripts that run all the necessary steps to train a TTS model on a particular dataset.

You only need to download each dataset once. After that, run the training script for the model you want.

Run each script from the root TTS folder as follows.

$ sh ./recipes/<dataset>/download_<dataset>.sh
$ python recipes/<dataset>/<model_name>/train.py
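
For example, to train GlowTTS on LJSpeech you would run something like the following. The script names are illustrative; check the recipe folder for the exact file names, since some train scripts are named after the model.

$ sh ./recipes/ljspeech/download_ljspeech.sh
$ python recipes/ljspeech/glow_tts/train_glowtts.py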

For some datasets you might need to resample the audio files first. For example, the VCTK dataset can be resampled to 22050 Hz as follows.

python TTS/bin/resample.py --input_dir recipes/vctk/VCTK/wav48_silence_trimmed --output_sr 22050 --output_dir recipes/vctk/VCTK/wav48_silence_trimmed --n_jobs 8 --file_ext flac
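
Since the input and output directories above are the same, the original 48 kHz files are replaced by the 22050 Hz versions. If you prefer to keep the originals, you can point --output_dir at a separate folder so the resampled copies land there instead (the output path below is only an illustration).

python TTS/bin/resample.py --input_dir recipes/vctk/VCTK/wav48_silence_trimmed --output_dir recipes/vctk/VCTK/wav48_silence_trimmed_22k --output_sr 22050 --n_jobs 8 --file_ext flac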

If you train a new model with TTS, feel free to share your training script to expand the list of recipes.
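
A recipe is usually a single train.py that builds the model config, loads the dataset samples, and hands everything to the 👟 Trainer. Below is a minimal sketch loosely following the structure of the GlowTTS LJSpeech recipe; treat the import paths, class names, and argument names as assumptions that can differ between TTS versions, and use the scripts in this folder as the reference.

import os

from trainer import Trainer, TrainerArgs

from TTS.tts.configs.glow_tts_config import GlowTTSConfig
from TTS.tts.configs.shared_configs import BaseDatasetConfig
from TTS.tts.datasets import load_tts_samples
from TTS.tts.models.glow_tts import GlowTTS
from TTS.tts.utils.text.tokenizer import TTSTokenizer
from TTS.utils.audio import AudioProcessor

output_path = os.path.dirname(os.path.abspath(__file__))

# Point the dataset config at the downloaded dataset
# (the `formatter` field is called `name` in older TTS versions).
dataset_config = BaseDatasetConfig(
    formatter="ljspeech",
    meta_file_train="metadata.csv",
    path=os.path.join(output_path, "../LJSpeech-1.1/"),
)

# Model and training hyperparameters live in a single config object.
config = GlowTTSConfig(
    batch_size=32,
    eval_batch_size=16,
    run_eval=True,
    epochs=1000,
    text_cleaner="phoneme_cleaners",
    use_phonemes=True,
    phoneme_language="en-us",
    phoneme_cache_path=os.path.join(output_path, "phoneme_cache"),
    print_step=25,
    mixed_precision=True,
    output_path=output_path,
    datasets=[dataset_config],
)

# The audio processor and tokenizer are built from the same config.
ap = AudioProcessor.init_from_config(config)
tokenizer, config = TTSTokenizer.init_from_config(config)

# Load the train/eval splits produced by the dataset formatter.
train_samples, eval_samples = load_tts_samples(dataset_config, eval_split=True)

model = GlowTTS(config, ap, tokenizer, speaker_manager=None)

# 👟 Trainer handles the training loop, logging, and checkpointing.
trainer = Trainer(
    TrainerArgs(),
    config,
    output_path,
    model=model,
    train_samples=train_samples,
    eval_samples=eval_samples,
)
trainer.fit()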

You can also open a new discussion and share your progress with the 🐸 community.