diff --git a/README.md b/README.md
index f9605355..d577d142 100644
--- a/README.md
+++ b/README.md
@@ -1,5 +1,4 @@
-
## 🐸Coqui.ai News
- 📣 [🐶Bark](https://github.com/suno-ai/bark) is now available for inference with unconstrained voice cloning. [Docs](https://tts.readthedocs.io/en/dev/models/bark.html)
- 📣 You can use [~1100 Fairseq models](https://github.com/facebookresearch/fairseq/tree/main/examples/mms) with 🐸TTS.
@@ -10,11 +9,20 @@
- 📣 Voice generation with fusion - **Voice fusion** - is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).
- 📣 Voice cloning is live on [**Coqui Studio**](https://app.coqui.ai/auth/signin).
+
+

+
##

-🐸TTS is a library for advanced Text-to-Speech generation. It's built on the latest research, was designed to achieve the best trade-off among ease-of-training, speed and quality.
-🐸TTS comes with pretrained models, tools for measuring dataset quality and already used in **20+ languages** for products and research projects.
+**🐸TTS is a library for advanced Text-to-Speech generation.**
+
+🚀 Pretrained models in 1100+ languages.
+
+🛠️ Tools for training new models and fine-tuning existing models in any language.
+
+📚 Utilities for dataset analysis and curation.
+______________________________________________________________________
[](https://discord.gg/5eXr5seRrv)
[](https://opensource.org/licenses/MPL-2.0)
@@ -36,13 +44,9 @@

[![Docs]()](https://tts.readthedocs.io/en/latest/)
-📰 [**Subscribe to 🐸Coqui.ai Newsletter**](https://coqui.ai/?subscription=true)
+
-📢 [English Voice Samples](https://erogol.github.io/ddc-samples/) and [SoundCloud playlist](https://soundcloud.com/user-565970875/pocket-article-wavernn-and-tacotron2)
-
-📄 [Text-to-Speech paper collection](https://github.com/erogol/TTS-papers)
-
-
+______________________________________________________________________
## 💬 Where to ask questions
Please use our dedicated channels for questions and discussion. Help is much more valuable if it's shared publicly so that more people can benefit from it.
@@ -68,6 +72,8 @@ Please use our dedicated channels for questions and discussion. Help is much mor
| 👩‍💻 **Contributing** | [CONTRIBUTING.md](https://github.com/coqui-ai/TTS/blob/main/CONTRIBUTING.md)|
| 📌 **Road Map** | [Main Development Plans](https://github.com/coqui-ai/TTS/issues/378)
| 🚀 **Released Models** | [TTS Releases](https://github.com/coqui-ai/TTS/releases) and [Experimental Models](https://github.com/coqui-ai/TTS/wiki/Experimental-Released-Models)|
+| 📰 **Papers** | [TTS Papers](https://github.com/erogol/TTS-papers)|
+
## 🥇 TTS Performance

@@ -88,7 +94,7 @@ Underlined "TTS*" and "Judy*" are **internal** 🐸TTS models that are not relea
- Utilities to use and test your models.
- Modular (but not too much) code base enabling easy implementation of new ideas.
-## Implemented Models
+## Model Implementations
### Spectrogram models
- Tacotron: [paper](https://arxiv.org/abs/1703.10135)
- Tacotron2: [paper](https://arxiv.org/abs/1712.05884)
@@ -136,7 +142,7 @@ Underlined "TTS*" and "Judy*" are **internal** 🐸TTS models that are not relea
You can also help us implement more models.
-## Install TTS
+## Installation
🐸TTS is tested on Ubuntu 18.04 with **Python >= 3.7, < 3.11**.
If you are only interested in [synthesizing speech](https://tts.readthedocs.io/en/latest/inference.html) with the released 🐸TTS models, installing from PyPI is the easiest option.
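As a quick orientation, here is a minimal sketch of that PyPI route (the model name and output path below are illustrative placeholders, not prescribed by this README):

```python
# Install from PyPI first:  pip install TTS
from TTS.api import TTS

# Load a released model (placeholder name; run `tts --list_models`
# to see which models are actually available for your language).
tts = TTS(model_name="tts_models/en/ljspeech/tacotron2-DDC", progress_bar=False, gpu=False)

# Synthesize a sentence straight to a WAV file.
tts.tts_to_file(text="Hello from Coqui TTS.", file_path="hello.wav")
```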
@@ -259,7 +265,7 @@ api.tts_with_vc_to_file(
)
```
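The hunk above only shows the tail of the `api.tts_with_vc_to_file(...)` example. For context, a hedged sketch of what a complete call can look like (the model name, sentence, and file paths are assumptions for illustration, not confirmed by this diff):

```python
from TTS.api import TTS

# Load a single-speaker model (placeholder name), then synthesize the text and
# convert the result to the voice found in `speaker_wav` before writing it out.
api = TTS(model_name="tts_models/de/thorsten/tacotron2-DDC", progress_bar=False, gpu=False)
api.tts_with_vc_to_file(
    "Wie sage ich auf Italienisch, dass ich dich liebe?",
    speaker_wav="target/speaker.wav",  # reference voice to clone (placeholder path)
    file_path="output.wav",            # where the converted audio is written
)
```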
-### Command line `tts`
+### Command-line `tts`
#### Single Speaker Models
- List provided models: