From 6c408ac85e2bf8c1d8cfe45cf416735dea2909c7 Mon Sep 17 00:00:00 2001
From: Tsai Meng-Ting
Date: Tue, 21 Nov 2023 13:43:26 +0800
Subject: [PATCH] Update xtts.md

fix a wrong link
---
 docs/source/models/xtts.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/source/models/xtts.md b/docs/source/models/xtts.md
index 03e44af1..8c7cab77 100644
--- a/docs/source/models/xtts.md
+++ b/docs/source/models/xtts.md
@@ -175,7 +175,7 @@ torchaudio.save("xtts_streaming.wav", wav.squeeze().unsqueeze(0).cpu(), 24000)
 
 ### Training
 
-A recipe for `XTTS_v2` GPT encoder training using `LJSpeech` dataset is available at https://github.com/coqui-ai/TTS/tree/dev/recipes/ljspeech/xtts_v1/train_gpt_xtts.py
+A recipe for `XTTS_v2` GPT encoder training using `LJSpeech` dataset is available at https://github.com/coqui-ai/TTS/tree/dev/recipes/ljspeech/xtts_v2/train_gpt_xtts.py
 
 You need to change the fields of the `BaseDatasetConfig` to match your dataset and then update `GPTArgs` and `GPTTrainerConfig` fields as you need. By default, it will use the same parameters that XTTS v1.1 model was trained with. To speed up the model convergence, as default, it will also download the XTTS v1.1 checkpoint and load it.
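
The `BaseDatasetConfig` adjustment that the patched paragraph refers to can be sketched roughly as follows. The import path matches the coqui-ai/TTS package, but the field values below are placeholders for illustration, not the recipe's actual defaults; `GPTArgs` and `GPTTrainerConfig` would be tuned in the same recipe script afterwards.

```python
# Sketch only: point BaseDatasetConfig at your own dataset before running
# the train_gpt_xtts.py recipe. All values here are example placeholders.
from TTS.tts.configs.shared_configs import BaseDatasetConfig

config_dataset = BaseDatasetConfig(
    formatter="ljspeech",            # metadata parser matching your dataset layout
    dataset_name="ljspeech",         # arbitrary label for this dataset
    path="/path/to/LJSpeech-1.1/",   # root folder containing the audio files
    meta_file_train="metadata.csv",  # transcript file, relative to `path`
    language="en",
)
```

After this, the recipe's `GPTArgs` (model/checkpoint settings) and `GPTTrainerConfig` (training hyperparameters) can be left at their defaults to reproduce the XTTS v1.1 training setup the paragraph describes.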