mirror of https://github.com/coqui-ai/TTS.git
Doc update docker (#2153)
* Complete Dockerignore to keep context manageable
* Add documentation on readme
* Match pip and docker cuda version
* Use pip3 consistently
parent 4114136717
commit 3191c5f1fe
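To inspect this change locally, the abbreviated hashes above can be used with plain git; a minimal sketch, assuming a fresh clone of the mirrored repository (the `TTS` directory name is just git's default):

```bash
# Clone the mirrored repository and show this commit's diff and stats.
# The abbreviated hashes come from the page; git may ask for more
# characters if they happen to be ambiguous in your clone.
git clone https://github.com/coqui-ai/TTS.git
cd TTS
git show --stat 3191c5f1fe      # summary of the files touched here
git diff 4114136717 3191c5f1fe  # full diff against the parent commit
```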
.dockerignore
@@ -1,2 +1,9 @@
 .git/
 Dockerfile
+build/
+dist/
+TTS.egg-info/
+tests/outputs/*
+tests/train_outputs/*
+__pycache__/
+*.pyc
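The effect of the longer ignore list is simply a smaller build context; a quick sanity check is sketched below, assuming the command is run from the repository root and that `tts-dev` is only a throwaway local tag:

```bash
# Build from the repository root; with the expanded .dockerignore,
# build/, dist/, caches and test outputs are no longer sent to the
# Docker daemon, so the reported build context stays small.
docker build -t tts-dev .

# Optional: confirm the ignored paths did not end up in the image.
docker run --rm --entrypoint /bin/ls tts-dev -a /root
```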
Dockerfile
@@ -2,11 +2,11 @@ ARG BASE=nvidia/cuda:11.8.0-base-ubuntu22.04
 FROM ${BASE}
 RUN apt-get update && apt-get upgrade -y
 RUN apt-get install -y --no-install-recommends gcc g++ make python3 python3-dev python3-pip python3-venv python3-wheel espeak-ng libsndfile1-dev && rm -rf /var/lib/apt/lists/*
-RUN pip install llvmlite --ignore-installed
+RUN pip3 install llvmlite --ignore-installed
 
 WORKDIR /root
 COPY . /root
-RUN pip3 install torch torchaudio --extra-index-url https://download.pytorch.org/whl/cu117
+RUN pip3 install torch torchaudio --extra-index-url https://download.pytorch.org/whl/cu118
 RUN make install
 ENTRYPOINT ["tts"]
 CMD ["--help"]
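With this change the cu118 wheel index matches the CUDA 11.8 base image declared by `ARG BASE`, and every install goes through `pip3`. A minimal usage sketch, assuming local tag names chosen for illustration and the NVIDIA container toolkit for the GPU run (`ubuntu:22.04` is only an example override, not something this commit ships):

```bash
# Default build: CUDA 11.8 base image paired with the cu118 PyTorch wheels.
docker build -t tts-gpu .

# BASE can be overridden at build time, e.g. for a CPU-only variant.
docker build --build-arg BASE=ubuntu:22.04 -t tts-cpu-local .

# ENTRYPOINT is "tts" and CMD is "--help", so running the image with no
# arguments prints the CLI help; --gpus all exposes the host GPUs.
docker run --rm --gpus all tts-gpu
```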
README.md | 16 ++++++++++++++++
@@ -146,6 +146,22 @@ $ make install
 
 If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system).
 
+more details about docker like GPU support, etc. can be found [here](https://
+
+## Docker Image
+You can also try TTS without install with the docker image.
+Simply run the following command and you will be able to run TTS without installing it.
+
+```bash
+docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
+python3 TTS/server/server.py --list_models #To get the list of available models
+python3 TTS/server/server.py --model_name tts_models/en/vctk/vits # To start a server
+```
+
+You can then enjoy the TTS server [here](http://[::1]:5002/)
+More details about the docker images (like GPU support) can be found [here](https://tts.readthedocs.io/en/latest/docker.html)
+
+
 ## Use TTS
 
 ### Single Speaker Models
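Beyond the server workflow the new README section shows, the image's `tts` entrypoint can also be called directly; a small sketch, assuming the published `ghcr.io/coqui-ai/tts-cpu` image, a host `output/` directory created for the run, and a model name that is simply one entry from `--list_models`:

```bash
# Synthesize a sentence with the CPU image and keep the wav on the host.
mkdir -p output
docker run --rm -v "$PWD/output:/output" ghcr.io/coqui-ai/tts-cpu \
    --text "Hello from a container." \
    --model_name tts_models/en/ljspeech/tacotron2-DDC \
    --out_path /output/hello.wav
```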