From c16ad3893007005eea5ebaddd2928fe6443342f0 Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eren=20G=C3=B6lge?=
Date: Mon, 8 Mar 2021 14:05:59 +0100
Subject: [PATCH] update server README

---
 TTS/server/README.md | 24 +++++++++++-------------
 1 file changed, 11 insertions(+), 13 deletions(-)

diff --git a/TTS/server/README.md b/TTS/server/README.md
index 54c85bd6..51cedc05 100644
--- a/TTS/server/README.md
+++ b/TTS/server/README.md
@@ -1,15 +1,13 @@
-## TTS example web-server
+
 
-#### Development server:
+# :frog: TTS demo server
+Before you use the server, make sure you [install](https://github.com/coqui-ai/TTS/tree/dev#install-tts) :frog: TTS properly. Then, you can follow the steps below.
 
-##### Using server.py
-If you have the environment set already for TTS, then you can directly call ```server.py```.
-
-**Note:** After installing TTS as a package you can use ```tts-server``` to call the commands below.
+**Note:** If you install :frog: TTS using ```pip```, you can also use the ```tts-server``` entry point on the terminal.
 
 Example runs:
 
@@ -25,7 +23,7 @@ Run the server with the official models on a GPU.
 Run the server with a custom model.
 ```python TTS/server/server.py --tts_checkpoint /path/to/tts/model.pth.tar --tts_config /path/to/tts/config.json --vocoder_checkpoint /path/to/vocoder/model.pth.tar --vocoder_config /path/to/vocoder/config.json```
 
-##### Using .whl
+
 
-#### Running with nginx/uwsgi:
+
 
-#### Creating a server package with an embedded model
+
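
For anyone trying this change locally, here is a minimal sketch of the ```pip``` install path the new note describes. It assumes the package name ```TTS``` on PyPI and the ```--list_models```, ```--model_name```, and ```--vocoder_name``` options referenced elsewhere in this README; the angle-bracket names are placeholders, not real model identifiers.

```bash
# Install :frog: TTS from PyPI; per the note above, this also registers
# the `tts-server` entry point on the terminal.
pip install TTS

# List the officially released models (assumes the --list_models flag
# referenced elsewhere in this README).
tts-server --list_models

# Start the demo server with a released model. Both names below are
# placeholders; substitute entries from the --list_models output.
tts-server --model_name "<tts_model_name>" \
           --vocoder_name "<vocoder_model_name>"
```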
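Once the server is up, a quick end-to-end smoke test looks like the sketch below. The default port (5002) and the ```/api/tts``` endpoint are assumptions based on ```TTS/server/server.py``` at this commit; verify them against the source if your version differs.

```bash
# Request synthesized speech from the running demo server and save the
# result as a WAV file. Port 5002 and /api/tts are assumptions here;
# check TTS/server/server.py for the values your checkout actually uses.
curl -sG "http://localhost:5002/api/tts" \
     --data-urlencode "text=Hello from the demo server." \
     -o out.wav
```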