Utilities to Convert Models to Tensorflow2

These are experimental utilities for converting trained Torch models to TensorFlow (>=2.2).

Converting Torch models to TF makes the full TF toolkit available for deployment and device-specific optimizations.

Note that we do not plan to share training scripts for TensorFlow in the near future, but any contribution in that direction would be more than welcome.

To see how you can use the TF model at inference, check the notebook.

This is an experimental release. If you encounter an error, please open an issue or, better yet, send a PR; but you are mostly on your own.

Converting a Model

  • Run convert_tacotron2_torch_to_tf.py --torch_model_path /path/to/torch/model.pth.tar --config_path /path/to/model/config.json --output_path /path/to/output/tf/model, substituting the paths to your trained Torch model, its config, and the desired TF output location.
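For instance, for a Tacotron2 training run whose artifacts live under ~/runs/tacotron2 (hypothetical paths; substitute your own), the invocation could look like:

```shell
# Hypothetical paths -- point these at your own checkpoint, config, and output location.
python convert_tacotron2_torch_to_tf.py \
    --torch_model_path ~/runs/tacotron2/best_model.pth.tar \
    --config_path ~/runs/tacotron2/config.json \
    --output_path ~/runs/tacotron2_tf/model
```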

Known issues and limitations

  • We use a custom model load/save mechanism that lets us store model-related information together with the model weights (similar to Torch). However, it is prone to random errors.
  • The current TF model implementation is slightly slower than the Torch model. Hopefully, it will get better with improving TF support for eager mode and tf.function.
  • The TF implementation of Tacotron2 only supports regular Tacotron2 as in the paper.
  • You can only convert models trained after the TF implementation was added, since the model layers in the Torch model have been updated.