mirror of https://github.com/coqui-ai/TTS.git
docs: improve README with better organization and comprehensive documentation
This commit is contained in:
parent
dbf1a08a0d
commit
f3c81e0b88
## 🐸Coqui.ai News

- 📣 ⓍTTSv2 is here with 16 languages and better performance across the board.
- 📣 ⓍTTS fine-tuning code is out. Check the [example recipes](https://github.com/coqui-ai/TTS/tree/dev/recipes/ljspeech).
- 📣 ⓍTTS can now stream with <200ms latency.
- 📣 ⓍTTS, our production TTS model that can speak 13 languages, is released: [Blog Post](https://coqui.ai/blog/tts/open_xtts), [Demo](https://huggingface.co/spaces/coqui/xtts), [Docs](https://tts.readthedocs.io/en/dev/models/xtts.html)
- 📣 [🐶Bark](https://github.com/suno-ai/bark) is now available for inference with unconstrained voice cloning. [Docs](https://tts.readthedocs.io/en/dev/models/bark.html)
- 📣 You can use [~1100 Fairseq models](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS.
- 📣 🐸TTS now supports 🐢Tortoise with faster inference. [Docs](https://tts.readthedocs.io/en/dev/models/tortoise.html)
# 🐸 Coqui TTS - Advanced Text-to-Speech Toolkit

<div align="center">
<img src="https://static.scarf.sh/a.png?x-pxid=cf317fe7-2188-4721-bc01-124bb5d5dbb2" />
<img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/coqui-log-green-TTS.png" height="56"/>

**🐸TTS is a library for advanced Text-to-Speech generation.**

🚀 Pretrained models in +1100 languages.

🛠️ Tools for training new models and fine-tuning existing models in any language.

📚 Utilities for dataset analysis and curation.

______________________________________________________________________

[Discord](https://discord.gg/5eXr5seRrv)
[License: MPL-2.0](https://opensource.org/licenses/MPL-2.0)
[PyPI version](https://badge.fury.io/py/TTS)
[Code of Conduct](https://github.com/coqui-ai/TTS/blob/master/CODE_OF_CONDUCT.md)
[Downloads](https://pepy.tech/project/tts)
[DOI](https://zenodo.org/badge/latestdoi/265612440)
[Documentation](https://tts.readthedocs.io/en/latest/)

</div>

______________________________________________________________________
## 📑 Table of Contents

- [Latest Updates](#-latest-updates)
- [Quick Start](#-quick-start)
- [Features](#-features)
- [Installation](#-installation)
- [Basic Usage](#-basic-usage)
- [Available Models](#-available-models)
- [Advanced Usage](#-advanced-usage)
- [Performance Optimization](#-performance-optimization)
- [Deployment](#-deployment)
- [Contributing](#-contributing)
- [Community & Support](#-community--support)
- [Security](#-security)
- [Citation](#-citation)
## 💬 Where to ask questions

Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.

| Type                            | Platforms                         |
| ------------------------------- | --------------------------------- |
| 🚨 **Bug Reports**              | [GitHub Issue Tracker]            |
| 🎁 **Feature Requests & Ideas** | [GitHub Issue Tracker]            |
| 👩‍💻 **Usage Questions**          | [GitHub Discussions]              |
| 🗯 **General Discussion**        | [GitHub Discussions] or [Discord] |

[github issue tracker]: https://github.com/coqui-ai/tts/issues
[github discussions]: https://github.com/coqui-ai/TTS/discussions
[discord]: https://discord.gg/5eXr5seRrv
[Tutorials and Examples]: https://github.com/coqui-ai/TTS/wiki/TTS-Notebooks-and-Tutorials

## 🔥 Latest Updates

- 📣 ⓍTTSv2 released with 16 languages and improved performance
- 📣 ⓍTTS fine-tuning code available
- 📣 ⓍTTS now supports streaming with <200ms latency
- 📣 Support for ~1100 Fairseq models
- 📣 Integration with 🐶Bark and 🐢Tortoise

[View all updates](https://github.com/coqui-ai/TTS/releases)

## 🚀 Quick Start

```bash
# Install TTS
pip install TTS

# Quick text-to-speech generation
python -c "from TTS.api import TTS; tts = TTS('tts_models/multilingual/multi-dataset/xtts_v2'); tts.tts_to_file(text='Hello, this is a test!', file_path='output.wav')"
```

## 🔗 Links and Resources

| Type                   | Links                                                                                                                                                  |
| ---------------------- | ------------------------------------------------------------------------------------------------------------------------------------------------------ |
| 💼 **Documentation**   | [ReadTheDocs](https://tts.readthedocs.io/en/latest/)                                                                                                   |
| 💾 **Installation**    | [TTS/README.md](https://github.com/coqui-ai/TTS/tree/dev#installation)                                                                                 |
| 👩‍💻 **Contributing**    | [CONTRIBUTING.md](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md)                                                                           |
| 📌 **Road Map**        | [Main Development Plans](https://github.com/coqui-ai/TTS/issues/378)                                                                                   |
| 🚀 **Released Models** | [TTS Releases](https://github.com/coqui-ai/TTS/releases) and [Experimental Models](https://github.com/coqui-ai/TTS/wiki/Experimental-Released-Models) |
| 📰 **Papers**          | [TTS Papers](https://github.com/erogol/TTS-papers)                                                                                                     |
## ✨ Features

- 🌟 High-performance Deep Learning models for Text2Speech tasks.
    - Text2Spec models (Tacotron, Tacotron2, Glow-TTS, SpeedySpeech).
    - Speaker Encoder to compute speaker embeddings efficiently.
    - Vocoder models (MelGAN, Multiband-MelGAN, GAN-TTS, ParallelWaveGAN, WaveGrad, WaveRNN).
- 🌍 Support for 1100+ languages.
- 🎯 Production-ready performance.
- 🔧 Easy-to-use API and support for multi-speaker TTS.
- 🛠️ Fast and efficient model training with detailed logs on the terminal and Tensorboard.
- Efficient, flexible, lightweight but feature-complete `Trainer API`.
- Released and ready-to-use models.
- 📚 Comprehensive documentation and tools to curate Text2Speech datasets under `dataset_analysis`.
- Utilities to use and test your models.
- Modular (but not too much) codebase enabling easy implementation of new ideas.

## 🥇 TTS Performance

<p align="center"><img src="https://raw.githubusercontent.com/coqui-ai/TTS/main/images/TTS-performance.png" width="800" /></p>

Underlined "TTS*" and "Judy*" are **internal** 🐸TTS models that are not released open-source. They are here to show the potential. Models prefixed with a dot (.Jofish, .Abe, and .Janice) are real human voices.
## Model Implementations

### Spectrogram models
- Tacotron: [paper](https://arxiv.org/abs/1703.10135)
- Tacotron2: [paper](https://arxiv.org/abs/1712.05884)
- Glow-TTS: [paper](https://arxiv.org/abs/2005.11129)
- Speedy-Speech: [paper](https://arxiv.org/abs/2008.03802)
- Align-TTS: [paper](https://arxiv.org/abs/2003.01950)
- FastPitch: [paper](https://arxiv.org/pdf/2006.06873.pdf)
- FastSpeech: [paper](https://arxiv.org/abs/1905.09263)
- FastSpeech2: [paper](https://arxiv.org/abs/2006.04558)
- SC-GlowTTS: [paper](https://arxiv.org/abs/2104.05557)
- Capacitron: [paper](https://arxiv.org/abs/1906.03402)
- OverFlow: [paper](https://arxiv.org/abs/2211.06892)
- Neural HMM TTS: [paper](https://arxiv.org/abs/2108.13320)
- Delightful TTS: [paper](https://arxiv.org/abs/2110.12612)

### End-to-End Models
- ⓍTTS: [blog](https://coqui.ai/blog/tts/open_xtts)
- VITS: [paper](https://arxiv.org/pdf/2106.06103)
- 🐸 YourTTS: [paper](https://arxiv.org/abs/2112.02418)
- 🐢 Tortoise: [orig. repo](https://github.com/neonbjb/tortoise-tts)
- 🐶 Bark: [orig. repo](https://github.com/suno-ai/bark)

### Attention Methods
- Guided Attention: [paper](https://arxiv.org/abs/1710.08969)
- Forward Backward Decoding: [paper](https://arxiv.org/abs/1907.09006)
- Graves Attention: [paper](https://arxiv.org/abs/1910.10288)
- Double Decoder Consistency: [blog](https://erogol.com/solving-attention-problems-of-tts-models-with-double-decoder-consistency/)
- Dynamic Convolutional Attention: [paper](https://arxiv.org/pdf/1910.10288.pdf)
- Alignment Network: [paper](https://arxiv.org/abs/2108.10447)

### Speaker Encoder
- GE2E: [paper](https://arxiv.org/abs/1710.10467)
- Angular Loss: [paper](https://arxiv.org/pdf/2003.11982.pdf)

### Vocoders
- MelGAN: [paper](https://arxiv.org/abs/1910.06711)
- MultiBandMelGAN: [paper](https://arxiv.org/abs/2005.05106)
- ParallelWaveGAN: [paper](https://arxiv.org/abs/1910.11480)
- GAN-TTS discriminators: [paper](https://arxiv.org/abs/1909.11646)
- WaveRNN: [origin](https://github.com/fatchord/WaveRNN/)
- WaveGrad: [paper](https://arxiv.org/abs/2009.00713)
- HiFiGAN: [paper](https://arxiv.org/abs/2010.05646)
- UnivNet: [paper](https://arxiv.org/abs/2106.07889)

### Voice Conversion
- FreeVC: [paper](https://arxiv.org/abs/2210.15418)

You can also help us implement more models.
## 💻 Installation

🐸TTS is tested on Ubuntu 18.04 with **Python >= 3.9, < 3.12**.

If you are only interested in [synthesizing speech](https://tts.readthedocs.io/en/latest/inference.html) with the released 🐸TTS models, installing from PyPI is the easiest option.

### Requirements
- Python >= 3.9, < 3.12
- Operating systems: Ubuntu 18.04+ (primary), Windows, macOS
- GPU (optional, but recommended for training)

### Basic Installation

```bash
pip install TTS
```

### Development Installation

If you plan to code or train models, clone 🐸TTS and install it locally.

```bash
git clone https://github.com/coqui-ai/TTS
pip install -e .[all,dev,notebooks]  # Select the relevant extras
```

If you are on Ubuntu (Debian), you can also run the following commands for installation.

```bash
$ make system-deps  # intended to be used on Ubuntu (Debian). Let us know if you have a different OS.
$ make install
```

### Docker Installation

```bash
docker run --rm -it -p 5002:5002 ghcr.io/coqui-ai/tts-cpu
```

If you are on Windows, 👑@GuyPaddock wrote installation instructions [here](https://stackoverflow.com/questions/66726331/how-can-i-run-mozilla-tts-coqui-tts-training-with-cuda-on-a-windows-system).

[Detailed Installation Guide](https://tts.readthedocs.io/en/latest/installation.html)
## Docker Image

You can also try out 🐸TTS without installing it by using the Docker image. Simply run the following commands to start a TTS server:

```bash
docker run --rm -it -p 5002:5002 --entrypoint /bin/bash ghcr.io/coqui-ai/tts-cpu
python3 TTS/server/server.py --list_models  # To get the list of available models
python3 TTS/server/server.py --model_name tts_models/en/vctk/vits  # To start a server
```

You can then enjoy the TTS server [here](http://[::1]:5002/).
More details about the Docker images (like GPU support) can be found [here](https://tts.readthedocs.io/en/latest/docker_images.html).
## 📖 Basic Usage

### 🐍 Python API

#### Running a multi-speaker and multi-lingual model

```python
import torch
from TTS.api import TTS

# Get device
device = "cuda" if torch.cuda.is_available() else "cpu"

# List available 🐸TTS models
print(TTS().list_models())

# Init TTS
tts = TTS("tts_models/multilingual/multi-dataset/xtts_v2").to(device)

# Run TTS
# ❗ Since this is a multi-lingual voice cloning model, we must set the target speaker_wav and language
# Text to speech, with a list of amplitude values as output
wav = tts.tts(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en")
# Text to speech to a file
tts.tts_to_file(text="Hello world!", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
```
#### Running a single speaker model

```python
# Initialize TTS with a single speaker English model
tts = TTS("tts_models/en/ljspeech/tacotron2-DDC")
# Generate speech
tts.tts_to_file("Hello world!", file_path="output.wav")

# Init TTS with the target model name (a German single speaker model)
tts = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False).to(device)
# Run TTS
tts.tts_to_file(text="Ich bin eine Testnachricht.", file_path=OUTPUT_PATH)

# Example voice cloning with YourTTS in English, French and Portuguese
tts = TTS(model_name="tts_models/multilingual/multi-dataset/your_tts", progress_bar=False).to(device)
tts.tts_to_file("This is voice cloning.", speaker_wav="my/cloning/audio.wav", language="en", file_path="output.wav")
tts.tts_to_file("C'est le clonage de la voix.", speaker_wav="my/cloning/audio.wav", language="fr-fr", file_path="output.wav")
tts.tts_to_file("Isso é clonagem de voz.", speaker_wav="my/cloning/audio.wav", language="pt-br", file_path="output.wav")
```
#### Example voice conversion

Converting the voice in `source_wav` to the voice of `target_wav`:

```python
tts = TTS(model_name="voice_conversion_models/multilingual/vctk/freevc24", progress_bar=False).to("cuda")
tts.voice_conversion_to_file(source_wav="my/source.wav", target_wav="my/target.wav", file_path="output.wav")
```
#### Example voice cloning together with the voice conversion model

This way, you can clone voices by using any model in 🐸TTS.

```python
tts = TTS("tts_models/de/thorsten/tacotron2-DDC")
tts.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)
```
#### Example text to speech using **Fairseq models in ~1100 languages** 🤯

For Fairseq models, use the following name format: `tts_models/<lang-iso_code>/fairseq/vits`.
You can find the language ISO codes [here](https://dl.fbaipublicfiles.com/mms/tts/all-tts-languages.html)
and learn about the Fairseq models [here](https://github.com/facebookresearch/fairseq/tree/main/examples/mms).

```python
# TTS with on-the-fly voice conversion
api = TTS("tts_models/deu/fairseq/vits")
api.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",
    file_path="output.wav"
)
```

## 🎯 Available Models

### Text-to-Speech Models

| Model        | Languages | Speed | Quality | GPU Memory |
|--------------|-----------|-------|---------|------------|
| ⓍTTS v2      | 16        | ⭐⭐⭐   | ⭐⭐⭐⭐⭐   | 4GB+       |
| YourTTS      | 13        | ⭐⭐⭐⭐  | ⭐⭐⭐⭐    | 2GB+       |
| Tacotron 2   | Any       | ⭐⭐    | ⭐⭐⭐     | 1GB+       |
| FastSpeech 2 | Any       | ⭐⭐⭐⭐⭐ | ⭐⭐⭐     | 1GB+       |

[Complete Model List](https://tts.readthedocs.io/en/latest/models.html)
## 🚄 Performance Optimization

### Hardware Requirements
- Training: NVIDIA GPU with 8GB+ VRAM recommended
- Inference: CPU or GPU (2GB+ VRAM)
- RAM: 8GB minimum, 16GB recommended

### Optimization Tips
- Use batch processing for multiple inputs
- Enable GPU acceleration when available
- Implement caching for repeated phrases
- Use quantized models for faster inference
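One way to apply several of these tips with the Python API is sketched below. It is illustrative only: the model name, cache size, and input texts are arbitrary examples, and model quantization is not shown.

```python
import torch
from functools import lru_cache
from TTS.api import TTS

# Enable GPU acceleration when available
device = "cuda" if torch.cuda.is_available() else "cpu"

# Load the model once and reuse it for every request instead of re-loading per call
tts = TTS("tts_models/en/ljspeech/tacotron2-DDC").to(device)

@lru_cache(maxsize=256)
def synthesize(text: str):
    # Cache repeated phrases so identical requests skip re-synthesis
    return tuple(tts.tts(text=text))

# Process a batch of inputs with the already-loaded model
texts = ["Hello world!", "Welcome back.", "Hello world!"]  # the third item is served from the cache
waveforms = [synthesize(t) for t in texts]
```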
## 🌐 Deployment

### Production Setup
1. Load models during initialization
2. Implement proper error handling
3. Set up monitoring and logging
4. Use appropriate scaling strategies
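As a rough illustration of steps 1-3, a service wrapper might load the model once and log failures instead of crashing. This is a hypothetical sketch (the `TTSService` class and the model name are examples, not part of 🐸TTS); scaling (step 4) is left to your process manager or orchestrator.

```python
import logging
from TTS.api import TTS

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("tts-service")

class TTSService:
    def __init__(self, model_name: str = "tts_models/en/ljspeech/tacotron2-DDC"):
        # Step 1: load the model during initialization, not per request
        self.tts = TTS(model_name)
        log.info("Loaded %s", model_name)

    def synthesize(self, text: str, out_path: str) -> bool:
        # Steps 2-3: handle errors and emit logs around each request
        try:
            self.tts.tts_to_file(text=text, file_path=out_path)
            log.info("Synthesized %d characters to %s", len(text), out_path)
            return True
        except Exception:
            log.exception("Synthesis failed for input of length %d", len(text))
            return False

if __name__ == "__main__":
    TTSService().synthesize("Hello world!", "output.wav")
```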
### Docker Deployment

```bash
docker run -d --gpus all -p 5002:5002 ghcr.io/coqui-ai/tts-gpu
```
### Command-line `tts`

<!-- begin-tts-readme -->

Synthesize speech on the command line.

You can either use your trained model or choose a model from the provided list.

If you don't specify any models, then it uses the LJSpeech-based English model.

#### Single Speaker Models

- List provided models:

```
$ tts --list_models
```

- Get model info (for both tts_models and vocoder_models):
- Query by type/name:
  The model_info_by_name uses the name as it appears in the output of --list_models.

```
$ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
```

For example:

```
$ tts --model_info_by_name tts_models/tr/common-voice/glow-tts
$ tts --model_info_by_name vocoder_models/en/ljspeech/hifigan_v2
```

- Query by type/idx:
  The model_query_idx uses the corresponding idx from --list_models.
```
$ tts --model_info_by_idx "<model_type>/<model_query_idx>"
```

For example:

```
$ tts --model_info_by_idx tts_models/3
```

- Query info for model info by full name:

```
$ tts --model_info_by_name "<model_type>/<language>/<dataset>/<model_name>"
```
- Run TTS with default models:

```
$ tts --text "Text for TTS" --out_path output/path/speech.wav
```

- Run TTS and pipe out the generated TTS wav file data:

```
$ tts --text "Text for TTS" --pipe_out --out_path output/path/speech.wav | aplay
```

- Run a TTS model with its default vocoder model:

```
$ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
```

For example:

```
$ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --out_path output/path/speech.wav
```

- Run with specific TTS and vocoder models from the list:

```
$ tts --text "Text for TTS" --model_name "<model_type>/<language>/<dataset>/<model_name>" --vocoder_name "<model_type>/<language>/<dataset>/<model_name>" --out_path output/path/speech.wav
```

For example:

```
$ tts --text "Text for TTS" --model_name "tts_models/en/ljspeech/glow-tts" --vocoder_name "vocoder_models/en/ljspeech/univnet" --out_path output/path/speech.wav
```

- Run your own TTS model (using the Griffin-Lim Vocoder):

```
$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
```

- Run your own TTS and Vocoder models:

```
$ tts --text "Text for TTS" --model_path path/to/model.pth --config_path path/to/config.json --out_path output/path/speech.wav
    --vocoder_path path/to/vocoder.pth --vocoder_config_path path/to/vocoder_config.json
```
#### Multi-speaker Models

- List the available speakers and choose a <speaker_id> among them:

```
$ tts --model_name "<language>/<dataset>/<model_name>" --list_speaker_idxs
```

- Run the multi-speaker TTS model with the target speaker ID:

```
$ tts --text "Text for TTS." --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --speaker_idx <speaker_id>
```

- Run your own multi-speaker TTS model:

```
$ tts --text "Text for TTS" --out_path output/path/speech.wav --model_path path/to/model.pth --config_path path/to/config.json --speakers_file_path path/to/speaker.json --speaker_idx <speaker_id>
```
### Voice Conversion Models

```
$ tts --out_path output/path/speech.wav --model_name "<language>/<dataset>/<model_name>" --source_wav <path/to/speaker/wav> --target_wav <path/to/reference/wav>
```

<!-- end-tts-readme -->

## 🛠 Contributing

### Development Setup
1. Fork the repository
2. Set up the development environment
3. Run tests: `pytest tests/`
4. Submit a PR with a detailed description

[Contributing Guidelines](CONTRIBUTING.md)

## 🤝 Community & Support

### Get Help
- [Discord Community](https://discord.gg/5eXr5seRrv)
- [GitHub Discussions](https://github.com/coqui-ai/TTS/discussions)
- [Documentation](https://tts.readthedocs.io/)

### Commercial Support
- [Contact Coqui](https://coqui.ai/contact)

## 🔒 Security

### Best Practices
- Keep models and dependencies updated
- Use environment variables for sensitive data
- Implement proper API authentication
- Monitor for unusual usage patterns
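If you expose synthesis behind your own API, the middle two points might look like the following hypothetical snippet (the variable and function names are examples, not part of 🐸TTS).

```python
import hmac
import os

# Read secrets from the environment instead of hard-coding them
API_TOKEN = os.environ.get("TTS_API_TOKEN", "")

def is_authorized(request_token: str) -> bool:
    # Reject requests unless they present the shared token (constant-time comparison)
    return bool(API_TOKEN) and hmac.compare_digest(request_token, API_TOKEN)
```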
[Security Policy](SECURITY.md)

## 📚 Citation

```bibtex
@misc{coqui-ai-tts,
  author = {Eren Gölge and others},
  title = {🐸TTS - a deep learning toolkit for Text-to-Speech},
  year = {2021},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/coqui-ai/TTS}},
}
```
## 🌍 Language Support

- 16 primary languages with ⓍTTS v2
- 1100+ languages via Fairseq models
- Support for custom language training

[Language Documentation](https://tts.readthedocs.io/en/latest/languages.html)
## 📁 Directory Structure

```
|- notebooks/       (Jupyter Notebooks for model evaluation, parameter selection and data analysis.)
|- utils/           (common utilities.)
|- TTS
    |- bin/             (folder for all the executables.)
      |- train*.py          (train your target model.)
      |- ...
    |- tts/             (text to speech models)
        |- layers/          (model layer definitions)
        |- models/          (model definitions)
        |- utils/           (model specific utilities.)
    |- speaker_encoder/ (Speaker Encoder models.)
        |- (same)
    |- vocoder/         (Vocoder models.)
        |- (same)
```

For more detailed information, visit our [Documentation](https://tts.readthedocs.io/).