🐸 TTS demo server

Before you use the server, make sure you install 🐸 TTS properly. Then, you can follow the steps below.

Note: If you install 🐸TTS using pip, you can also use the tts-server command on the terminal.
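For instance, a minimal sketch assuming the tts-server entry point accepts the same arguments as server.py:

```bash
# Assumption: tts-server mirrors server.py's CLI flags (e.g. --model_name, --list_models).
tts-server --model_name tts_models/en/ljspeech/tacotron2-DCA
```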

Example runs:

List officially released models:
python TTS/server/server.py --list_models

Run the server with the official models:
python TTS/server/server.py --model_name tts_models/en/ljspeech/tacotron2-DCA --vocoder_name vocoder_models/en/ljspeech/multiband-melgan

Run the server with the official models on a GPU:
CUDA_VISIBLE_DEVICES="0" python TTS/server/server.py --model_name tts_models/en/ljspeech/tacotron2-DCA --vocoder_name vocoder_models/en/ljspeech/multiband-melgan --use_cuda True

Run the server with custom models:
python TTS/server/server.py --tts_checkpoint /path/to/tts/model.pth --tts_config /path/to/tts/config.json --vocoder_checkpoint /path/to/vocoder/model.pth --vocoder_config /path/to/vocoder/config.json
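Once the server is running, you can open the demo page in a browser or synthesize speech over HTTP. A minimal sketch, assuming the default port (5002) and an /api/tts endpoint that takes a text query parameter and returns WAV audio:

```bash
# Request synthesized speech from the running demo server and save it as a WAV file.
# Assumptions: default port 5002, /api/tts endpoint, "text" query parameter.
curl -G --output out.wav \
  --data-urlencode "text=Hello from the demo server." \
  http://localhost:5002/api/tts
```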