Enno Hermann | 6ecdf7fc4e | style: fix | 2024-07-25 15:00:17 +02:00
Daniel Walmsley | 2d146bb6ec | Remove unused code | 2024-07-08 15:57:39 -07:00
Daniel Walmsley | e8663dd3f8 | Comment out hack for now | 2024-07-08 15:55:33 -07:00
Daniel Walmsley | 61ec4322d4 | Merge branch 'dev' of github.com:idiap/coqui-ai-TTS into fix/macos-stream-generator | 2024-07-08 15:17:24 -07:00
Daniel Walmsley | bf9a38fabd | Fix for latest TF | 2024-07-08 14:40:35 -07:00
Enno Hermann | da82d55329 | refactor: use load_fsspec from trainer | 2024-06-29 15:07:10 +02:00
    Made automatically with:
    rg "from TTS.utils.io import load_fsspec" --files-with-matches | xargs sed -i 's/from TTS.utils.io import load_fsspec/from trainer.io import load_fsspec/g'
Enno Hermann | 4bd3df2607 | refactor: remove duplicate get_padding | 2024-06-26 11:54:36 +02:00
Enno Hermann | c30fb0f56b | chore: remove duplicate init_weights | 2024-06-26 11:46:37 +02:00
Enno Hermann | c5241d71ab | chore: address pytorch deprecations | 2024-06-26 11:38:25 +02:00
    torch.range(a, b) == torch.arange(a, b+1)
    meshgrid indexing: https://github.com/pytorch/pytorch/issues/50276
    checkpoint use_reentrant: https://dev-discuss.pytorch.org/t/bc-breaking-update-to-torch-utils-checkpoint-not-passing-in-use-reentrant-flag-will-raise-an-error/1745
    optimizer.step() before scheduler.step(): https://pytorch.org/docs/stable/optim.html#how-to-adjust-learning-rate
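The deprecation notes in that commit can be sketched as follows; this is a minimal illustration of each replacement, not the commit's actual diff:

```python
import torch

# torch.range is deprecated and includes the endpoint;
# torch.arange excludes it, so shift the upper bound by one.
old_style = torch.arange(0, 4 + 1)  # replaces torch.range(0, 4)

# torch.meshgrid now warns without an explicit indexing argument;
# indexing="ij" reproduces the old default behaviour.
rows, cols = torch.meshgrid(torch.arange(3), torch.arange(4), indexing="ij")

# torch.utils.checkpoint.checkpoint now expects use_reentrant explicitly
# (use_reentrant=False is the recommended non-reentrant path):
#   out = torch.utils.checkpoint.checkpoint(forward_fn, x, use_reentrant=False)

# And per the linked optim docs, call optimizer.step() before
# scheduler.step() inside the training loop, not the other way around.
```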
Enno Hermann | f8df19a10c | refactor: remove duplicate convert_pad_shape | 2024-06-26 10:17:04 +02:00
Enno Hermann | 4d9e18ea7d | chore(stream_generator): address lint issues | 2024-06-17 09:52:35 +02:00
Enno Hermann | 2a281237d7 | refactor(stream_generator): update code for transformers>=4.41.1 | 2024-06-17 09:52:35 +02:00
    In line with eed9ed6798/src/transformers/generation/utils.py
Enno Hermann | 4b6da4e7ba | refactor(stream_generator): update special tokens for transformers>=4.41.1 | 2024-06-17 09:52:35 +02:00
    Fixes #31. The handling of special tokens in `transformers` was changed in
    https://github.com/huggingface/transformers/pull/30624 and
    https://github.com/huggingface/transformers/pull/30746. This updates the XTTS
    streaming code accordingly.
Daniel Walmsley | 6696abfa52 | Implement custom tensor.isin | 2024-06-15 11:46:06 -07:00
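torch.isin was not implemented for the MPS backend at the time, and the usual workaround, likely the shape of this commit's helper, is a broadcast-and-compare fallback. A NumPy sketch of the idea (the helper name is illustrative, not the repo's):

```python
import numpy as np

def custom_isin(elements, test_elements):
    # Compare every element against every test element via broadcasting,
    # then reduce over the test axis -- same semantics as torch.isin.
    elements = np.asarray(elements)
    test_elements = np.asarray(test_elements)
    return (elements[..., None] == test_elements.ravel()).any(axis=-1)
```

For example, `custom_isin([1, 2, 3, 4], [2, 4])` yields `[False, True, False, True]`; the same expression written with `unsqueeze(-1)` and `.any(-1)` works on torch tensors on any device.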
Daniel Walmsley | bd2f992e7e | Make it work on mps | 2024-06-15 09:18:13 -07:00
Daniel Walmsley | f5b81c9767 | Fix Stream Generator on MacOS | 2024-06-14 16:01:26 -07:00
Enno Hermann | df088e99df | Merge pull request #19 from idiap/toml | 2024-05-27 08:59:09 +01:00
    Move from setup.py to pyproject.toml, simplify requirements
Enno Hermann | 018f1e6453 | docs(bark): update docstrings and type hints | 2024-05-15 22:56:55 +02:00
Enno Hermann | 6d563af623 | chore: remove obsolete code for torch<2 | 2024-05-08 18:08:40 +02:00
    Minimum torch version is 2.1 now.
Enno Hermann | 865a48156d | fix: make korean g2p deps optional | 2024-05-08 18:08:40 +02:00
Enno Hermann | 55ed162f2a | fix: make chinese g2p deps optional | 2024-05-08 18:08:40 +02:00
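The optional-dependency pattern behind these two g2p commits is the standard guarded import: fail at use time with an actionable message, not at import time. A minimal sketch, assuming a pinyin-based body for illustration rather than the repo's exact g2p code:

```python
try:
    import pypinyin  # only needed for Chinese text
except ImportError:
    pypinyin = None

def chinese_text_to_phonemes(text):
    # Raise only when the Chinese path is actually used, so users who
    # never synthesize Chinese need not install the dependency.
    if pypinyin is None:
        raise ImportError("Chinese g2p requires: pip install pypinyin")
    return " ".join(syl[0] for syl in pypinyin.pinyin(text))
```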
Enno Hermann | 2ad790d169 | Merge pull request #4 from idiap/hindi | 2024-04-11 16:49:44 +02:00
    feat(xtts): support Hindi for sentence-splitting and fine-tuning
Enno Hermann | d41686502e | feat(xtts): support hindi for sentence-splitting and fine-tuning | 2024-04-08 15:57:56 +02:00
    The XTTS model itself already supports Hindi; it was just missing in these components.
Enno Hermann | b6ab85a050 | fix: use logging instead of print statements | 2024-04-03 15:19:45 +02:00
    Fixes #1691
Enno Hermann | 309f39a45f | fix(xtts_manager): name_to_id() should return dict | 2024-03-08 14:47:00 +01:00
    This is how the other embedding managers work.
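For the name_to_id fix, the other embedding managers expose a dict mapping speaker names to indices; a hypothetical sketch of the corrected shape (class and attribute names are illustrative):

```python
class XttsSpeakerManager:
    """Illustrative stand-in for the xtts_manager speaker manager."""

    def __init__(self, speakers):
        self.speakers = speakers  # e.g. {"Claribel Dervla": embedding, ...}

    @property
    def name_to_id(self):
        # Return a dict (name -> index), matching the other embedding
        # managers, rather than just the list of speaker names.
        return {name: i for i, name in enumerate(self.speakers)}
```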
Enno Hermann | efdafd5a7f | style: run black | 2024-03-07 11:46:51 +01:00
Aarni Koskela | d6ea806469 | Run `make style` | 2023-12-13 14:56:41 +02:00
Aarni Koskela | bd172dabbf | xtts/stream_generator: remove duplicate import + code | 2023-12-13 14:56:41 +02:00
Aarni Koskela | 32abb1a7c4 | xtts/perceiver_encoder: Delete duplicate exists() | 2023-12-13 14:56:41 +02:00
Aarni Koskela | 33b69c6c09 | Add some noqa directives (for now) | 2023-12-13 14:56:41 +02:00
Aarni Koskela | 00f8f4892a | Ruff autofix unnecessary passes | 2023-12-13 14:56:41 +02:00
Aarni Koskela | bc2cf296a3 | Ruff autofix PLW3301 | 2023-12-13 14:56:41 +02:00
Aarni Koskela | 64bb41f4fa | Ruff autofix C41 | 2023-12-13 14:56:41 +02:00
Aarni Koskela | 90991e89b4 | Ruff autofix unused imports and import order | 2023-12-13 14:56:41 +02:00
Eren Gölge | 8c1a8b522b | Merge pull request #3405 from coqui-ai/studio_speakers | 2023-12-12 16:10:09 +01:00
    Add studio speakers to open source XTTS!
WeberJulian | 5cd750ac7e | Fix API and CI | 2023-12-11 20:21:53 +01:00
WeberJulian | a5c0d9780f | rename manager | 2023-12-11 18:48:31 +01:00
WeberJulian | 36143fee26 | Add basic speaker manager | 2023-12-11 15:25:46 +01:00
Aaron-Li | b6e929696a | support multiple GPU training | 2023-12-08 16:55:32 +08:00
Eren Gölge | e49c512d99 | Merge pull request #3351 from aaron-lii/chinese-puncs | 2023-12-04 15:57:42 +01:00
    fix pause problem of Chinese speech
Edresson Casanova | 5f900f156a | Add XTTS Fine tuning gradio demo (#3296) | 2023-12-01 23:52:23 +01:00
    * Add XTTS FT demo data processing pipeline
    * Add training and inference columns
    * Use tabs instead of columns
    * Fix demo freezing issue
    * Convert stereo to mono
    * Fix bug in XTTS inference
    * Update gradio demo
    * Add parameters so they can be set in the colab demo
    * Add intuitive error messages
    * Add max_audio_length parameter
    * Add XTTS fine-tuner docs
    * Update XTTS fine-tuner docs
    * Delete trainer to free memory
    * Delete unused variables
    * Add gc.collect()
    * Update xtts.md
    Co-authored-by: Eren Gölge <erogol@hotmail.com>
Aaron-Li | 7b8808186a | fix pause problem of Chinese speech | 2023-12-01 23:30:03 +08:00
Eren Gölge | 3b8894a3dd | Make style | 2023-11-27 14:15:50 +01:00
Eren Gölge | 32065139e7 | Simple text cleaner for "hi" | 2023-11-24 15:14:34 +01:00
Edresson Casanova | 11283fce07 | Ensure that only the GPT model is in training mode during XTTS GPT training (#3241) | 2023-11-17 15:13:46 +01:00
    * Ensure that only the GPT model is in training mode during training
    * Fix parallel wavegan unit test
Eren Gölge | 44880f09ed | Make style | 2023-11-17 13:43:34 +01:00
Eren Gölge | 26efdf6ee7 | Make k_diffusion optional | 2023-11-17 13:42:33 +01:00
Julian Weber | fbc18b8c34 | Fix zh bug (#3238) | 2023-11-16 17:51:37 +01:00
Julian Weber | 675f983550 | Add sentence splitting (#3227) | 2023-11-16 11:01:11 +01:00
    * Add sentence splitting
    * Update requirements
    * Update default args v2
    * Add Spanish
    * Fix return gpt_latents
    * Fix requirements
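The sentence-splitting PR feeds shorter chunks to the model instead of one long input. The actual code presumably uses a proper segmenter, but the core idea can be sketched with a naive regex splitter (assumptions: English-style punctuation, whitespace between sentences):

```python
import re

def split_sentences(text):
    # Naive split after sentence-ending punctuation followed by whitespace.
    # Real segmenters also handle abbreviations, quotes, CJK punctuation, etc.
    parts = re.split(r"(?<=[.!?])\s+", text.strip())
    return [p for p in parts if p]
```

Each returned chunk would then be synthesized in turn, keeping every input to the model comfortably short.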
Edresson Casanova | 73a5bd08c0 | Fix XTTS GPT padding and inference issues (#3216) | 2023-11-15 14:02:05 +01:00
    * Fix end artifact for fine-tuned models
    * Bug fix on zh-cn inference
    * Remove unused code