* Added basic Mary-TTS API endpoints to the server:
  - Imported `parse_qs` from `urllib.parse` to parse HTTP POST parameters.
  - Imported `render_template_string` from `flask` to return text as an endpoint result.
  - Added new routes (see the example requests after this list):
    - `/locales` - returns the list of locales (currently the locale of the active model).
    - `/voices` - returns the list of voices (currently the locale and name of the active model).
    - `/process` - accepts a synthesis request (GET and POST) with the parameter `INPUT_TEXT` (other parameters are ignored since only one model is active).
* Better log messages for the Mary-TTS API, plus smaller tweaks to the log output.
* Use an f-string in the log print to please the linter.
* Updated `server.py` to match the `make style` result.
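For illustration, here is a sketch of how a client could exercise these Mary-TTS-compatible routes with `curl`. It assumes the demo server is running locally on its default port (5002; adjust the host and port to your setup) and that `/process` returns the synthesized audio in the response body:

```bash
# List the locales and voices exposed by the active model.
curl "http://localhost:5002/locales"
curl "http://localhost:5002/voices"

# Request synthesis via POST; only INPUT_TEXT is honored while a single model is active.
curl --data-urlencode "INPUT_TEXT=Hello from the demo server." \
     "http://localhost:5002/process" -o output.wav
```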
# 🐸 TTS demo server
Before you use the server, make sure you install 🐸TTS properly. Then, you can follow the steps below.
Note: If you install 🐸TTS using `pip`, you can also use the `tts-server` command-line entry point on the terminal.
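For example, assuming the `pip` install put `tts-server` on your `PATH` and that it accepts the same flags as `server.py` (an assumption worth verifying with `tts-server --help`), the runs below could also be started as:

```bash
# Hypothetical equivalents of the python invocations below, using the pip entry point.
tts-server --list_models
tts-server --model_name tts_models/en/ljspeech/tacotron2-DCA --vocoder_name vocoder_models/en/ljspeech/multiband-melgan
```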
Example runs:
List officially released models.

```bash
python TTS/server/server.py --list_models
```
Run the server with the official models.

```bash
python TTS/server/server.py --model_name tts_models/en/ljspeech/tacotron2-DCA --vocoder_name vocoder_models/en/ljspeech/multiband-melgan
```
Run the server with the official models on a GPU.

```bash
CUDA_VISIBLE_DEVICES="0" python TTS/server/server.py --model_name tts_models/en/ljspeech/tacotron2-DCA --vocoder_name vocoder_models/en/ljspeech/multiband-melgan --use_cuda True
```
Run the server with a custom model.

```bash
python TTS/server/server.py --tts_checkpoint /path/to/tts/model.pth --tts_config /path/to/tts/config.json --vocoder_checkpoint /path/to/vocoder/model.pth --vocoder_config /path/to/vocoder/config.json
```
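Once the server is up, you can also synthesize speech over plain HTTP. A minimal sketch, assuming the server's default port of 5002 and a `/api/tts` GET endpoint taking a `text` query parameter (neither is documented in this README, so verify against your `server.py` and any `--port` override):

```bash
# Assumes the default port 5002 and the /api/tts endpoint; adjust if your setup differs.
curl -G "http://localhost:5002/api/tts" --data-urlencode "text=Hello world." -o hello.wav
```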