mirror of https://github.com/coqui-ai/TTS.git
README update
commit c45666f417 (parent f2c541222d) · README.md

# TTS (Work in Progress...)

TTS targets a Text2Speech engine that is lightweight in computation, with high-quality speech synthesis.

Here we have a PyTorch implementation of Tacotron: [A Fully End-to-End Text-To-Speech Synthesis Model](https://arxiv.org/abs/1703.10135) as the starting point. We plan to improve the model over time with new architectural changes.

You can find [here](http://www.erogol.com/speech-text-deep-learning-architectures/) a brief note on possible TTS architectures and their comparisons.

## Requirements

It is highly recommended to use [miniconda](https://conda.io/miniconda.html) for easier installation.

* python 3.6
* pytorch 0.4
* librosa
* tensorboard
* tensorboardX
* matplotlib
* unidecode

## Checkpoints and Audio Samples

Check out [here](https://mycroft.ai/blog/available-voices/#the-human-voice-is-the-most-perfect-instrument-of-all-arvo-part) to compare the samples (except the first) below.

| Models | Commit | Audio Sample |
| ------------- |:-----------------:|:-------------|
| [iter-62410](https://drive.google.com/open?id=1pjJNzENL3ZNps9n7k_ktGbpEl6YPIkcZ) | [99d56f7](https://github.com/mozilla/TTS/tree/99d56f7e93ccd7567beb0af8fcbd4d24c48e59e9) | [link](https://soundcloud.com/user-565970875/99d56f7-iter62410) |
| Best: [iter-170K](https://drive.google.com/open?id=16L6JbPXj6MSlNUxEStNn28GiSzi4fu1j) | [e00bc66]() | [link](https://soundcloud.com/user-565970875/april-13-2018-07-06pm-e00bc66-iter170k) |
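
To inspect one of these checkpoints locally, a minimal sketch like the one below should be enough. The file name and the ```"model"``` key are assumptions; the actual layout depends on how the checkpoint was saved:

```
import torch

# Load a downloaded checkpoint on CPU; the file name here is hypothetical.
checkpoint = torch.load("checkpoint_62410.pth.tar", map_location="cpu")

# Inspect what the checkpoint actually stores before restoring weights.
print(checkpoint.keys())

# Assumption: the model weights live under a "model" key; adapt to whatever
# the print() above reveals, e.g. model.load_state_dict(checkpoint["model"])
```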

## Data

Currently TTS provides data loaders for ...

You can also enjoy Tensorboard with a couple of good training logs, if you point its ```--logdir``` argument to the experiment folder.

Example ```config.json```:
```
{
  // Data loading parameters
  "num_mels": 80,
  "num_freq": 1025,
  "sample_rate": 22050,
  "frame_length_ms": 50,
  "frame_shift_ms": 12.5,
  "preemphasis": 0.97,
  "min_level_db": -100,
  "ref_level_db": 20,
  "embedding_size": 256,
  "text_cleaner": "english_cleaners",

  // Training parameters
  "epochs": 200,
  "lr": 0.002,
  "warmup_steps": 4000,
  "batch_size": 32,
  "eval_batch_size": 32,
  "r": 5,                 // number of decoder outputs for Tacotron
  "mk": 0.0,              // guided attention loss weight; if 0, it is unused
  "priority_freq": true,  // freq range emphasis

  "griffin_lim_iters": 60,
  "power": 1.2,

  // Dataset parameters
  "dataset": "TWEB",
  "meta_file_train": "transcript_train.txt",
  "meta_file_val": "transcript_val.txt",
  "data_path": "/data/shared/BibleSpeech/",
  "min_seq_len": 0,
  "num_loader_workers": 8,

  // Experiment logging parameters
  "checkpoint": true,  // if true, save a checkpoint every save_step
  "save_step": 200,
  "output_path": "/path/to/my_experiment"
}
```
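
Note that the example above is JSON with ```//``` comments, which a strict JSON parser rejects. Below is a minimal loading sketch, assuming comments only ever appear as ```//``` line comments outside string values; ```load_config``` is a hypothetical helper, not the repo's actual loader, and the derived STFT values assume the usual conventions:

```
import json
import re

def load_config(path):
    # Parse a config.json that may contain // line comments.
    # Naive stripping: assumes "//" never occurs inside a string value.
    with open(path, "r") as f:
        text = re.sub(r"//.*", "", f.read())
    return json.loads(text)

config = load_config("config.json")

# Some audio parameters are derived from the values above:
n_fft = (config["num_freq"] - 1) * 2                                        # 1025 bins -> 2048-point FFT
hop_length = int(config["frame_shift_ms"] / 1000 * config["sample_rate"])   # 12.5 ms -> 275 samples
win_length = int(config["frame_length_ms"] / 1000 * config["sample_rate"])  # 50 ms -> 1102 samples
print(n_fft, hop_length, win_length)
```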

Best way to test your pretrained network is to use the Notebook under the ```notebooks``` folder.

Any kind of contribution is highly welcome, as we are propelled by the open-source spirit. If you would like to add or edit things in the code, please also consider writing tests to verify your changes, so that we can be sure things stay on track as this repo gets bigger.

## TODO

Check out the issues and the Projects page.

## References
- [Efficient Neural Audio Synthesis](https://arxiv.org/pdf/1802.08435.pdf)
- [Attention-Based models for speech recognition](https://arxiv.org/pdf/1506.07503.pdf)
- [Generating Sequences With Recurrent Neural Networks](https://arxiv.org/pdf/1308.0850.pdf)
- [Char2Wav: End-to-End Speech Synthesis](https://openreview.net/pdf?id=B1VWyySKx)
- [VoiceLoop: Voice Fitting and Synthesis via a Phonological Loop](https://arxiv.org/pdf/1707.06588.pdf)
- [WaveRNN](https://arxiv.org/pdf/1802.08435.pdf)
- [Faster WaveNet](https://arxiv.org/abs/1611.09482)
- [Parallel WaveNet](https://arxiv.org/abs/1711.10433)

### Precursor implementations
- https://github.com/keithito/tacotron (Dataset and Test processing)