# 🐸 TTS demo server

Before you use the server, make sure you install 🐸TTS properly. Then, you can follow the steps below.

Note: If you install 🐸TTS using pip, you can also use the `tts-server` entry point from the terminal.
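
As a minimal sketch, assuming the `tts-server` entry point accepts the same arguments as `server.py`, the calls below should mirror the `python TTS/server/server.py ...` examples that follow:

```bash
# Assumption: tts-server exposes the same CLI flags as TTS/server/server.py.
# List the officially released models.
tts-server --list_models

# Start the server with a released model (model names here are illustrative).
tts-server --model_name tts_models/en/ljspeech/tacotron2-DCA \
           --vocoder_name vocoder_models/en/ljspeech/multiband-melgan
```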

Example runs:

List officially released models:

```bash
python TTS/server/server.py --list_models
```

Run the server with the official models:

```bash
python TTS/server/server.py --model_name tts_models/en/ljspeech/tacotron2-DCA --vocoder_name vocoder_models/en/ljspeech/multiband-melgan
```

Run the server with the official models on a GPU:

```bash
CUDA_VISIBLE_DEVICES="0" python TTS/server/server.py --model_name tts_models/en/ljspeech/tacotron2-DCA --vocoder_name vocoder_models/en/ljspeech/multiband-melgan --use_cuda True
```

Run the server with custom models:

```bash
python TTS/server/server.py --tts_checkpoint /path/to/tts/model.pth --tts_config /path/to/tts/config.json --vocoder_checkpoint /path/to/vocoder/model.pth --vocoder_config /path/to/vocoder/config.json
```
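
Once the server is running, it serves a small demo page in the browser and an HTTP synthesis endpoint. As a rough sketch, assuming the default port `5002` and an `/api/tts` route that takes a `text` query parameter (both based on the default `server.py` setup, not guaranteed for every version), you can request a WAV file directly:

```bash
# Hypothetical request: the port (5002) and the /api/tts route with a `text`
# query parameter are assumptions about the default server.py configuration.
curl -G "http://localhost:5002/api/tts" \
     --data-urlencode "text=Hello from the demo server." \
     -o hello.wav
```

If the assumptions hold, the synthesized audio is written to `hello.wav`, while opening `http://localhost:5002` in a browser shows the demo page.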