mirror of https://github.com/coqui-ai/TTS.git
update server README
This commit is contained in:
parent
426bdbd9cc
commit
c16ad38930
<!-- ## TTS example web-server
You'll need a model package (a zip file that includes the TTS Python wheel, model files, server configuration, and optional nginx/uwsgi configs). Publicly available models are listed [here](https://github.com/mozilla/TTS/wiki/Released-Models#simple-packaging---self-contained-package-that-runs-an-http-api-for-a-pre-trained-tts-model).
Instructions below are based on an Ubuntu 18.04 machine, but it should be simple to adapt the package names to other distros if needed. Python 3.6 is recommended, as some of the dependencies' versions predate Python 3.7 and will force building from source, which requires extra dependencies and is not guaranteed to work. -->
# :frog: TTS demo server
Before you use the server, make sure you [install](https://github.com/coqui-ai/TTS/tree/dev#install-tts) :frog: TTS properly. Then, you can follow the steps below.
**Note:** If you install :frog: TTS using ```pip```, you can also use the ```tts-server``` endpoint on the terminal.
Example runs:
Run the server with the official models on a GPU.
Run the server with custom models.
```python TTS/server/server.py --tts_checkpoint /path/to/tts/model.pth.tar --tts_config /path/to/tts/config.json --vocoder_checkpoint /path/to/vocoder/model.pth.tar --vocoder_config /path/to/vocoder/config.json```
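Once the server is running (default port 5002), clients can request speech over plain HTTP. Below is a minimal client sketch; the `/api/tts` endpoint name, the `text` query parameter, and the output filename are assumptions for illustration, not taken from this README:

```python
from urllib.parse import urlencode

# Build the request URL for the demo server's synthesis endpoint
# (assumed here to be /api/tts with a `text` query parameter).
base = "http://localhost:5002/api/tts"
params = {"text": "Hello, world!"}
url = f"{base}?{urlencode(params)}"
print(url)

# With the server running, fetching this URL returns a WAV payload, e.g.:
#   import urllib.request
#   with open("hello.wav", "wb") as f:
#       f.write(urllib.request.urlopen(url).read())
```

Using `urlencode` keeps punctuation and spaces in the input text safely escaped for the query string.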
<!-- ##### Using .whl
1. apt-get install -y espeak libsndfile1 python3-venv
2. python3 -m venv /tmp/venv
3. source /tmp/venv/bin/activate
5. pip install -U https://example.com/url/to/python/package.whl
6. python -m TTS.server.server
You can now open http://localhost:5002 in a browser -->
<!-- #### Running with nginx/uwsgi:
**Note:** This method uses an old TTS model, so quality might be low.
10. uwsgi --ini uwsgi.ini
You can now open http://localhost:80 in a browser (edit the port in /etc/nginx/sites-enabled/tts_site_nginx).
Configure the number of workers (the number of requests processed in parallel) by editing the `uwsgi.ini` file, specifically the `processes` setting. -->
<!-- #### Creating a server package with an embedded model
[setup.py](../setup.py) was extended with two new parameters when running the `bdist_wheel` command:
To create a package, run `python setup.py bdist_wheel --checkpoint /path/to/checkpoint --model_config /path/to/config.json`.
A Python `.whl` file will be created in the `dist/` folder with the checkpoint and config embedded in it. -->