From fe38c26b86efaf1b3a1e0e85afc27344993b436e Mon Sep 17 00:00:00 2001
From: =?UTF-8?q?Eren=20G=C3=B6lge?=
Date: Tue, 10 Sep 2019 13:32:37 +0300
Subject: [PATCH] Update README.md

---
 README.md | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/README.md b/README.md
index 39e507e1..50b62059 100644
--- a/README.md
+++ b/README.md
@@ -50,7 +50,7 @@ Below you see Tacotron model state after 16K iterations with batch-size 32 with
 
 Audio examples: [https://soundcloud.com/user-565970875](https://soundcloud.com/user-565970875)
 
-![example_model_output](images/example_model_output.png?raw=true)
+example_output
 
 ## Runtime
 The most time-consuming part is the vocoder algorithm (Griffin-Lim) which runs on CPU. By setting its number of iterations lower, you might have faster execution with a small loss of quality. Some of the experimental values are below.
@@ -176,4 +176,4 @@ Please feel free to offer new changes and pull things off. We are happy to discu
 
 ### References
 - https://github.com/keithito/tacotron (Dataset pre-processing)
-- https://github.com/r9y9/tacotron_pytorch (Initial Tacotron architecture)
\ No newline at end of file
+- https://github.com/r9y9/tacotron_pytorch (Initial Tacotron architecture)
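
A note on the Griffin-Lim remark carried as unchanged context in the first hunk: the README states that lowering the vocoder's iteration count trades a small amount of audio quality for faster synthesis. The snippet below is a minimal sketch of that trade-off, not code from this repository; it assumes librosa's griffinlim as a stand-in vocoder and uses a synthetic tone so it runs offline.

# Illustrative sketch only -- not part of this patch or repository.
# Shows how the Griffin-Lim iteration count affects runtime when inverting
# a magnitude spectrogram back to a waveform.
import time

import librosa
import numpy as np

# 3 seconds of a 440 Hz tone as a stand-in for model output.
sr = 22050
t = np.linspace(0, 3.0, int(3.0 * sr), endpoint=False)
y = 0.5 * np.sin(2 * np.pi * 440.0 * t)

# Magnitude spectrogram to invert; phase is discarded and estimated by Griffin-Lim.
S = np.abs(librosa.stft(y, n_fft=1024, hop_length=256))

for n_iter in (60, 30, 10):  # fewer iterations -> faster, slightly rougher audio
    start = time.time()
    y_hat = librosa.griffinlim(S, n_iter=n_iter, n_fft=1024, hop_length=256)
    print(f"n_iter={n_iter:>2}: {time.time() - start:.2f}s for {len(y_hat)} samples")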