build: make dependencies for server optional

Enno Hermann 2024-03-10 20:16:00 +01:00
parent 68680ef508
commit 7673f282be
8 changed files with 21 additions and 12 deletions


@@ -156,7 +156,7 @@ If you plan to code or train models, clone 🐸TTS and install it locally.
 ```bash
 git clone https://github.com/coqui-ai/TTS
-pip install -e .[all,dev,notebooks]  # Select the relevant extras
+pip install -e .[all,dev,notebooks,server]  # Select the relevant extras
 ```
 If you are on Ubuntu (Debian), you can also run following commands for installation.


@@ -1,5 +1,8 @@
 # :frog: TTS demo server
-Before you use the server, make sure you [install](https://github.com/coqui-ai/TTS/tree/dev#install-tts)) :frog: TTS properly. Then, you can follow the steps below.
+Before you use the server, make sure you
+[install](https://github.com/coqui-ai/TTS/tree/dev#install-tts)) :frog: TTS
+properly and install the additional dependencies with `pip install
+TTS[server]`. Then, you can follow the steps below.
 **Note:** If you install :frog:TTS using ```pip```, you can also use the ```tts-server``` end point on the terminal.


@@ -9,7 +9,10 @@ from threading import Lock
 from typing import Union
 from urllib.parse import parse_qs
-from flask import Flask, render_template, render_template_string, request, send_file
+try:
+    from flask import Flask, render_template, render_template_string, request, send_file
+except ImportError as e:
+    raise ImportError("Server requires flask, use `pip install TTS[server]`.") from e
 from TTS.config import load_config
 from TTS.utils.manage import ModelManager
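The guarded import added to `server.py` above follows a general optional-dependency pattern: attempt the import, and on failure re-raise with a message naming the pip extra that provides it. A minimal standalone sketch of that pattern (the `require_optional` helper is hypothetical, for illustration only; the actual code guards the flask import inline):

```python
import importlib


def require_optional(module_name: str, extra: str):
    """Import an optional dependency, or fail with a hint naming the pip extra.

    Hypothetical helper mirroring the try/except ImportError guard in the
    diff above.
    """
    try:
        return importlib.import_module(module_name)
    except ImportError as e:
        raise ImportError(
            f"Server requires {module_name}, use `pip install TTS[{extra}]`."
        ) from e


# A stdlib module imports fine; a missing module raises the hinted error.
json = require_optional("json", "server")
```

Failing at import time with an actionable install hint is friendlier than letting a bare `ModuleNotFoundError` surface deep inside the server startup.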


@@ -84,8 +84,10 @@ tts --model_name "voice_conversion/<language>/<dataset>/<model_name>"
 <!-- <img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/demo_server.gif" height="56"/> -->
 ![server.gif](https://github.com/coqui-ai/TTS/raw/main/images/demo_server.gif)
-You can boot up a demo 🐸TTS server to run an inference with your models. Note that the server is not optimized for performance
-but gives you an easy way to interact with the models.
+You can boot up a demo 🐸TTS server to run an inference with your models (make
+sure to install the additional dependencies with `pip install TTS[server]`).
+Note that the server is not optimized for performance but gives you an easy way
+to interact with the models.
 The demo server provides pretty much the same interface as the CLI command.


@@ -1,6 +1,6 @@
 # Installation
-🐸TTS supports python >=3.7 <3.11.0 and tested on Ubuntu 18.10, 19.10, 20.10.
+🐸TTS supports python >=3.9 <3.12.0 and tested on Ubuntu 18.10, 19.10, 20.10.
 ## Using `pip`
@@ -30,4 +30,4 @@ make install
 ```
 ## On Windows
-If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/
+If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/


@@ -112,8 +112,9 @@ $ tts --list_models # list the available models.
 ![cli.gif](https://github.com/coqui-ai/TTS/raw/main/images/tts_cli.gif)
-You can call `tts-server` to start a local demo server that you can open it on
-your favorite web browser and 🗣️.
+You can call `tts-server` to start a local demo server that you can open on
+your favorite web browser and 🗣️ (make sure to install the additional
+dependencies with `pip install TTS[server]`).
 ```bash
 $ tts-server -h # see the help


@@ -13,8 +13,6 @@ pyyaml>=6.0
 fsspec[http]>=2023.6.0 # <= 2023.9.1 makes aux tests fail
 packaging>=23.1
 mutagen==1.47.0
-# deps for examples
-flask>=2.0.1
 # deps for inference
 pysbd>=0.3.4
 # deps for notebooks


@@ -66,7 +66,8 @@ with open(os.path.join(cwd, "requirements.dev.txt"), "r") as f:
     requirements_dev = f.readlines()
 with open(os.path.join(cwd, "requirements.ja.txt"), "r") as f:
     requirements_ja = f.readlines()
-requirements_all = requirements_dev + requirements_notebooks + requirements_ja
+requirements_server = ["flask>=2.0.1"]
+requirements_all = requirements_dev + requirements_notebooks + requirements_ja + requirements_server
 with open("README.md", "r", encoding="utf-8") as readme_file:
     README = readme_file.read()
@@ -115,6 +116,7 @@ setup(
         "all": requirements_all,
         "dev": requirements_dev,
         "notebooks": requirements_notebooks,
+        "server": requirements_server,
         "ja": requirements_ja,
     },
     python_requires=">=3.9.0, <3.12",
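The `setup.py` change amounts to this: the server requirements become their own extra and are also folded into `all`, so `pip install TTS[server]` pulls in flask and `pip install TTS[all]` keeps working. A minimal sketch of the extras assembly, with placeholder requirement lists standing in for the files `setup.py` reads (only `requirements_server` matches the diff; the other lists are illustrative):

```python
# Placeholder lists standing in for the requirements.*.txt files read in
# setup.py; the real contents come from f.readlines().
requirements_dev = ["black"]
requirements_notebooks = ["jupyter"]
requirements_ja = ["cutlet"]

# The new extra introduced by this commit, also aggregated into "all".
requirements_server = ["flask>=2.0.1"]
requirements_all = (
    requirements_dev + requirements_notebooks + requirements_ja + requirements_server
)

# Mirrors the extras_require mapping passed to setup().
extras_require = {
    "all": requirements_all,
    "dev": requirements_dev,
    "notebooks": requirements_notebooks,
    "server": requirements_server,
    "ja": requirements_ja,
}
```

With this mapping, `pip install TTS[server]` resolves to exactly the server list, while a plain `pip install TTS` no longer installs flask at all, since it also left `requirements.txt` in the diff above.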