Commit Graph

257 Commits

Author SHA1 Message Date
Eren Golge f450fe3571 use stop token again 2019-04-18 15:20:19 +02:00
Eren Golge 3c2d500f53 Changes at windowing and some comments 2019-04-12 16:13:40 +02:00
Eren Golge 312a539a0e Enable optional forward attention with transition agent 2019-04-10 16:41:30 +02:00
Eren Golge e3647fa7b3 bug fix for prenet setup 2019-04-08 18:28:19 +02:00
Eren Golge 961af0f5cd setup_model externally based on model selection. Make forward attention and prenet type configurable in config.json 2019-04-05 17:49:18 +02:00
Eren Golge 043e49f367 active windowing 2019-04-05 08:40:09 +02:00
Eren Golge 6e8b66aa89 smaller windowing range 2019-04-02 10:35:13 +02:00
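The windowing commits here constrain attention to a band around the previous alignment peak, which keeps decoding monotonic. A minimal sketch (the window sizes and function name are illustrative, not taken from the repo):

```python
import math

def windowed_attention(scores, prev_max, back=1, ahead=1):
    """Softmax over attention scores, masked to a window around the
    previous attention peak. Scores outside [prev_max-back, prev_max+ahead]
    are set to -inf so they receive zero weight."""
    masked = [
        s if prev_max - back <= i <= prev_max + ahead else float("-inf")
        for i, s in enumerate(scores)
    ]
    m = max(masked)
    exps = [math.exp(s - m) for s in masked]   # exp(-inf) == 0.0
    total = sum(exps)
    return [e / total for e in exps]

# Peak was at index 1, so only indices 0..2 stay reachable.
weights = windowed_attention([0.1, 2.0, 0.3, 1.5, 4.0], prev_max=1)
```

Shrinking `back`/`ahead` (as in the "smaller windowing range" commit) tightens the band and forbids larger attention jumps between decoder steps.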
Eren Golge 68f8ef730d stop conditioning with padding for inference_truncated 2019-04-01 14:10:38 +02:00
Eren Golge 5212a11836 longer stop_token padding 2019-04-01 12:07:24 +02:00
Eren Golge e1cd253d65 change stop conditioning 2019-03-31 16:44:17 +02:00
Eren Golge fdca8402c7 config updates 2019-03-26 15:46:26 +01:00
Eren Golge 0a92c6d5a7 Set attention norm method by config.json 2019-03-26 00:48:12 +01:00
Eren Golge 786510cd6a loss functions updates 2019-03-23 17:33:47 +01:00
Eren Golge 82cde95cfa add bias to attention v 2019-03-19 12:21:36 +01:00
Eren Golge 1b68d3cb4e control synthesis length as an additional stop condition 2019-03-15 14:01:43 +01:00
Eren Golge 4f89029577 Merge branch 'state-pass' into dev-tacotron2 2019-03-12 09:52:15 +01:00
Eren Golge 3128378bdf bug fix for stop token prediction 2019-03-12 00:20:57 +01:00
Eren Golge 527567d7ce renaming 2019-03-12 00:20:43 +01:00
Eren Golge b9b79fcf0f inference truncated NEEDS TO BE TESTED 2019-03-11 17:40:09 +01:00
Eren Golge 4ffda89c42 reshape input vectors before and after bn layer 2019-03-11 13:03:43 +01:00
Eren Golge a144acf466 use BN for prenet 2019-03-07 16:28:50 +01:00
Eren Golge 4b116a2a88 Look for the last two attention values for the stop condition and attend to the first encoder vector if it is the first decoder iteration 2019-03-06 23:46:02 +01:00
Eren Golge b031a65677 compute sequence mask in model, add tacotron2 related files 2019-03-06 13:14:58 +01:00
Eren Golge a4474abd83 tacotron parse output bug fix 2019-03-06 13:10:54 +01:00
Eren Golge 4326582bb1 TTSDataset formatting and batch sorting to use pytorch pack for rnns 2019-03-06 13:10:05 +01:00
Eren Golge 44c66c6e3e remove comments 2019-03-05 13:34:33 +01:00
Eren Golge 1e8fdec084 Modularize functions in Tacotron 2019-03-05 13:25:50 +01:00
Eren Golge 1c99be2ffd Change window size for attention 2019-02-18 13:06:26 +01:00
Eren Golge 6ea31e47df Constant queue size for autoregression window 2019-02-16 03:18:49 +01:00
Eren Golge 90f0cd640b memory queueing 2019-02-12 15:27:42 +01:00
Eren Golge c5b6227848 init with embedding layers 2019-02-06 16:54:33 +01:00
Eren Golge d28bbe09fb Attention bias setting reverted to old 2019-02-06 16:23:01 +01:00
Eren Golge e12bbc2a5c init decoder states with a function 2019-01-22 18:25:55 +01:00
Eren Golge 66f8d0e260 Attention bias changed 2019-01-22 18:18:21 +01:00
Eren Golge 562d73d3d1 Some bug fixes 2019-01-17 15:48:22 +01:00
Eren Golge 4431e04b48 use sigmoid for attention 2019-01-16 16:26:05 +01:00
Eren Golge 7e020d4084 Bug fixes 2019-01-16 16:23:04 +01:00
Eren Golge af22bed149 set bias 2019-01-16 15:53:24 +01:00
Eren Golge f4fa155cd3 Make attn windowing optional 2019-01-16 12:33:07 +01:00
Eren Golge 8969d59902 Use the last attention value as a threshold to stop decoding, since stop token prediction is not precise enough to synthesize at the right time. 2019-01-16 12:32:40 +01:00
Eren Golge ed1f648b83 Enable attention windowing and make it configurable at model level. 2019-01-16 12:32:40 +01:00
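The stop rule described in the commit above (halt when attention mass reaches the final encoder frame, backed by a hard length limit as in later commits) could be sketched like this; the threshold value and all names here are hypothetical:

```python
def should_stop(attn_weights, step, max_steps, threshold=0.6):
    """Hypothetical stop check: halt when the attention weight on the
    last encoder frame exceeds a threshold, or when a hard length
    limit is reached. Complements an unreliable stop-token predictor."""
    return attn_weights[-1] > threshold or step >= max_steps
```

The length limit guards against the attention never cleanly reaching the final frame, while the attention check stops decoding earlier than a fixed cap would.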
Eren Golge 916f5df5f9 config update and increase dropout p to 0.5 2019-01-16 12:14:58 +01:00
Eren Golge 84814db73f reduce dropout ratio 2019-01-02 12:52:17 +01:00
Eren Golge 4826e7db9c remove intermediate tensor asap 2018-12-28 14:22:41 +01:00
Eren Golge 3a72d75ecd Merge branch 'master' of github.com:mozilla/TTS 2018-12-12 17:08:39 +01:00
Eren Golge 8d865629a0 partial model initialization 2018-12-12 15:43:58 +01:00
Eren Golge 703be04993 bug fix 2018-12-12 12:02:10 +01:00
Eren Golge dc3d09304e Cache attention annot vectors for the whole sequence. 2018-12-11 16:06:02 +01:00
Eren Golge cdaaff9dbb Modularize memory reshaping in decoder layer 2018-11-28 16:31:29 +01:00
Eren Golge 4838d16fec config and comment updates 2018-11-13 12:10:12 +01:00
Eren Golge 440f51b61d correct import statements 2018-11-03 23:19:23 +01:00
Eren Golge 0b6a9995fc change import statements 2018-11-03 19:15:06 +01:00
Eren Golge d96690f83f Config updates and add sigmoid to mel network again 2018-11-02 17:27:31 +01:00
Eren Golge c8a552e627 Batch update after data-loss 2018-11-02 16:13:51 +01:00
Eren 1ae3b3e442 Enable CPU training and fix restore_epoch 2018-10-25 14:05:27 +02:00
Eren 006354320e Merge branch 'attention-smoothing' into attn-smoothing-bgs-sigmoid-wd 2018-09-26 16:51:42 +02:00
Eren f60e4497a6 apply sigmoid to outputs 2018-09-19 15:49:42 +02:00
Eren f2ef1ca36a Smoothed attention as in https://arxiv.org/pdf/1506.07503.pdf 2018-09-19 15:47:24 +02:00
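The cited paper (arXiv:1506.07503) smooths the alignment by replacing the exponential in the softmax with a logistic sigmoid before normalizing, so no single score can dominate as sharply. A sketch:

```python
import math

def smoothed_attention(scores):
    """Smoothed alignment from arXiv:1506.07503: apply a logistic
    sigmoid to each score, then normalize to sum to one."""
    sig = [1.0 / (1.0 + math.exp(-s)) for s in scores]
    total = sum(sig)
    return [s / total for s in sig]
```

Because the sigmoid saturates at 1, the resulting weights are flatter than a softmax over the same scores.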
Eren 67df385275 Explicit padding for unbalanced padding sizes 2018-09-19 14:20:02 +02:00
Eren fd830c6416 Attention convolution padding correction for TF "SAME" 2018-09-19 14:16:21 +02:00
Eren 2bcd7dbb6f Change functional padding with padding layer 2018-09-18 16:00:47 +02:00
Eren 00c0c9cde6 Padding with functional interface to match TF "SAME" 2018-09-17 20:19:09 +02:00
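TF "SAME" padding can be asymmetric when the total padding is odd, which is why a symmetric `padding=` argument alone cannot reproduce it and these commits compute it explicitly. The arithmetic for the 1-D case (function name is illustrative):

```python
def same_padding(input_len, kernel, stride=1):
    """TF 'SAME' padding for a 1-D convolution: output length is
    ceil(input_len / stride); when the required total padding is odd,
    TF puts the extra element on the right."""
    out_len = -(-input_len // stride)  # ceiling division
    total = max((out_len - 1) * stride + kernel - input_len, 0)
    left = total // 2
    right = total - left
    return left, right
```

For an even kernel the split is unbalanced (e.g. kernel 4 on length 10 gives 1 left, 2 right), matching the "Explicit padding for unbalanced padding sizes" commit above.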
Eren f377cd3cb8 larger attention filter size and more filters 2018-09-06 15:27:15 +02:00
Eren 7d66bdc5f4 update model size to paper 2018-09-06 14:37:19 +02:00
Eren a15b3ec9a1 pytorch 0.4.1 update 2018-08-13 15:02:17 +02:00
Eren 3b2654203d fixing size mismatch 2018-08-10 18:48:43 +02:00
Eren e0bce1d2d1 Pass mask instead of length to model 2018-08-10 17:45:17 +02:00
Eren G d5febfb187 Setting up network size according to the reference paper 2018-08-08 12:34:44 +02:00
Eren G f5537dc48f pep8 format all 2018-08-02 16:34:17 +02:00
Eren G 25b6769246 Some bug fixes 2018-07-26 13:33:05 +02:00
Eren G fce6bd27b8 Smaller attention filters 2018-07-19 17:36:31 +02:00
Eren G 4ef3ecf37f loc sens attn fix 2018-07-17 17:43:51 +02:00
Eren G 4e6596a8e1 Loc sens attention 2018-07-17 17:01:40 +02:00
Eren G ddaf414434 attention update 2018-07-17 16:24:39 +02:00
Eren G 90d7e885e7 Add attention-cum 2018-07-17 15:59:18 +02:00
Eren G adbe603af1 Bug fixes 2018-07-13 15:24:50 +02:00
Eren G dac8fdffa9 Attn masking 2018-07-13 14:50:55 +02:00
Eren G 9f52833151 Merge branch 'loc-sens-attn' into loc-sens-attn-new and attention without attention-cum 2018-07-13 14:27:51 +02:00
Eren Golge f791f4e5e7 Use MSE loss instead of L1 Loss 2018-06-06 07:42:51 -07:00
Eren Golge 1b8d0f5b26 Master merge 2018-05-28 01:24:06 -07:00
Eren Golge ad943120ae Do not avg cumulative attention weight 2018-05-25 15:01:16 -07:00
Eren Golge 24644b20d4 Merge branch 'master' of https://github.com/Mozilla/TTS
Conflicts:
	README.md
	best_model_config.json
	datasets/LJSpeech.py
	layers/tacotron.py
	notebooks/TacotronPlayGround.ipynb
	notebooks/utils.py
	tests/layers_tests.py
	tests/loader_tests.py
	tests/tacotron_tests.py
	train.py
	utils/generic_utils.py
2018-05-25 05:14:04 -07:00
Eren Golge ff245a16cb bug fix 2018-05-25 04:28:40 -07:00
Eren Golge 256ed6307c More comments for new layers 2018-05-25 03:25:26 -07:00
Eren Golge 4127b66359 remove redundant arguments 2018-05-25 03:25:01 -07:00
Eren Golge a5f66b58e0 Remove empty lines 2018-05-25 03:24:45 -07:00
Eren Golge fe99baec5a Add a missing class variable to attention class 2018-05-23 06:20:04 -07:00
Eren Golge 8ffc85008a Comment StopNet arguments 2018-05-23 06:18:27 -07:00
Eren Golge 14f9d06b31 Add a constant attention model type to attention class 2018-05-23 06:18:09 -07:00
Eren Golge 819011e1a2 Remove deprecated comment 2018-05-23 06:17:48 -07:00
Eren Golge 0f933106ca Configurable alignment method 2018-05-23 06:17:01 -07:00
Eren Golge d8c460442a Commenting the attention code 2018-05-23 06:16:39 -07:00
Eren Golge 7acf4eab94 A major bug fix for location sensitive attention. 2018-05-23 06:04:28 -07:00
Eren Golge 6bcec24d13 Remove flatten_parameters due to a bug at pytorch 0.4 2018-05-18 06:00:16 -07:00
Eren Golge 288a6b5b1d add location attn to decoder 2018-05-18 03:34:07 -07:00
Eren Golge 243204bc3e Add location sens attention 2018-05-18 03:33:41 -07:00
Eren Golge 7b9fd63649 Correct comment 2018-05-18 03:32:17 -07:00
Eren Golge d43b4ccc1e Update decoder accepting location sensitive attention 2018-05-17 08:03:40 -07:00
Eren Golge 3348d462d2 Add location sensitive attention 2018-05-17 08:03:16 -07:00
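Location-sensitive attention (arXiv:1506.07503) extends content-based attention with features obtained by convolving the previous alignment, which the commits above add to the decoder. A minimal sketch of those location features (kernel values are illustrative; the real layer learns them):

```python
def location_features(prev_attn, kernel):
    """1-D convolution of the previous alignment (zero-padded at both
    ends), producing the location feature that is added into each
    attention score at the next decoder step."""
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + list(prev_attn) + [0.0] * pad
    return [sum(kernel[j] * padded[i + j] for j in range(k))
            for i in range(len(prev_attn))]
```

Feeding these features into the score lets the attention see where it attended last step, encouraging a monotonic sweep over the encoder states.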
Eren Golge 70beccf328 Add a length constraint for test time stop signal, to avoid stopage at a mid point stop sign 2018-05-16 04:00:59 -07:00
Eren Golge 40f1a3d3a5 RNN stop-token prediction 2018-05-15 16:12:47 -07:00
Eren Golge a31e60e928 bug fix, average mel spec validation loss 2018-05-15 07:13:46 -07:00
Eren Golge 02d72ccbfe predict stop token from rnn out + mel 2018-05-14 19:00:50 -07:00
Eren Golge d629dafb20 Update stopnet with more layers 2018-05-14 07:02:24 -07:00
Eren Golge cac7e9ca5b Stop test time model with stop_token 2018-05-13 06:31:59 -07:00
Eren Golge 8ed9f57a6d bug fix 2018-05-11 04:19:28 -07:00
Eren Golge 3ea1a5358d Stop token layer on decoder 2018-05-11 04:14:27 -07:00
Eren Golge 2c1f66a0fc remove Variable from models/tacotron.py 2018-05-10 16:35:38 -07:00
Eren Golge 4ab8cbb016 remove Variable from tacotron.py 2018-05-10 16:30:43 -07:00
Eren Golge a856f76791 remove Variable from losses.py 2018-05-10 16:27:55 -07:00
Eren Golge 14c9e9cde9 Loss bug fix - target_flat vs target 2018-05-10 15:59:05 -07:00
Eren Golge f8d5bbd5d2 Perform stop token prediction to stop the model. 2018-05-03 05:56:06 -07:00
Eren Golge a4561c5096 config 2018-05-02 04:56:35 -07:00
Eren Golge 82ffe819eb thweb finetune 2018-04-29 06:12:14 -07:00
Eren Golge 7bfdc32b7b remove requires_grad_() 2018-04-26 05:27:08 -07:00
Eren Golge 07f71b1761 Remove variables 2018-04-25 08:00:48 -07:00
Eren Golge e257bd7278 bug fix loss 2018-04-25 08:00:30 -07:00
Eren Golge 52b4bc6bed Remove variables 2018-04-25 08:00:19 -07:00
Eren Golge 154ec7ba24 Test notebook update 2018-04-21 04:23:54 -07:00
Eren Golge 0c5d0b98d8 threshold changed 2018-04-16 11:54:49 -07:00
Eren Golge 4762569c95 a new hacky way to stop generation and test notebook update 2018-04-13 05:09:14 -07:00
Eren Golge 06d4b231e9 bug fix 2018-04-12 05:59:40 -07:00
Eren Golge bc90050ee9 bug fix on training avg loss printing and computing 2018-04-12 05:57:52 -07:00
Eren Golge a9eadd1b8a pep8 check 2018-04-03 03:24:57 -07:00
Eren Golge af6fd9b941 loss bug fix 2018-03-28 18:20:56 -07:00
Eren Golge a68487f0b8 Data loader bug fix 3 2018-03-26 11:08:15 -07:00
Eren Golge 1ff8d6d2b7 Data loader bug fix 2 2018-03-26 11:07:15 -07:00
Eren Golge 3c084177c6 Data loader bug fix and Attention bug fix 2018-03-26 10:43:36 -07:00
Eren Golge 632c08a638 normal attention 2018-03-25 12:01:41 -07:00
Eren Golge 1dbc51c6b5 convert loss to layer and add test 2018-03-24 19:22:45 -07:00
Eren Golge df4a644326 bug fix 2018-03-23 05:18:51 -07:00
Eren Golge 2617518d91 masked loss 2018-03-22 21:13:33 -07:00
Eren Golge 32d9c734b2 masked loss 2018-03-22 14:35:02 -07:00
Eren Golge e4a0eec77e masked loss 2018-03-22 14:06:54 -07:00
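The masked loss in these commits averages the reconstruction error over valid timesteps only, so zero-padded frames beyond each sequence's true length do not dilute the gradient. A sketch for an L1 criterion on batches of 1-D sequences (names are illustrative):

```python
def masked_l1_loss(pred, target, lengths):
    """Mean absolute error over valid timesteps only: frames past each
    sequence's length (padding) contribute nothing to the loss."""
    total, count = 0.0, 0
    for p_seq, t_seq, length in zip(pred, target, lengths):
        for t in range(length):
            total += abs(p_seq[t] - t_seq[t])
            count += 1
    return total / count
```

Note the later "Use MSE loss instead of L1 Loss" commit swaps the per-frame criterion; the masking scheme is unchanged.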
Eren Golge a925c9c75c remove stop token prediction 2018-03-22 12:47:54 -07:00
Eren Golge 4e4f876bc4 Stop token prediction - does not train yet 2018-03-22 12:34:31 -07:00
Eren Golge 5750090fcd Stop token prediction - does not train yet 2018-03-22 12:34:16 -07:00
Eren Golge cb48406383 Dont use teacher forcing at test time 2018-03-19 10:38:47 -07:00
Eren Golge 9b4aa92667 Adding harmonized teacher-forcing 2018-03-19 09:27:19 -07:00
Eren Golge 3071e7f6f6 remove attention mask 2018-03-19 08:26:16 -07:00
Eren Golge b4032e8dff best model ever changes 2018-03-07 06:58:51 -08:00
Eren Golge 1d684ea0e8 ReadMe update 2018-02-27 06:25:28 -08:00
Eren Golge 56f8b2d19f Harmonized teacher-forcing 2018-02-26 05:33:54 -08:00
Eren Golge b5f2181e04 teacher forcing with combining 2018-02-23 08:35:53 -08:00
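One plausible reading of "teacher forcing with combining" is feeding the decoder a convex mix of the ground-truth frame and the model's own last prediction, instead of choosing one or the other. This sketch is an assumption, not the repo's actual scheme:

```python
def mix_decoder_input(ground_truth, prediction, ratio=0.5):
    """Hypothetical 'combined' teacher forcing: the next decoder input
    is ratio * ground-truth frame + (1 - ratio) * model prediction.
    ratio=1.0 is full teacher forcing, ratio=0.0 is free-running."""
    return [ratio * g + (1.0 - ratio) * p
            for g, p in zip(ground_truth, prediction)]
```

At test time ("Dont use teacher forcing at test time") the decoder runs free, i.e. `ratio=0.0`, since no ground truth exists.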
Eren Golge 1bfd8f73e7 Remove DataParallel from the model state before saving 2018-02-21 07:03:53 -08:00
Eren Golge a3d8059d06 More layer tests 2018-02-13 08:08:23 -08:00
Eren Golge 56697ac8cf updates and debugs 2018-02-13 01:45:52 -08:00
Eren Golge 7d5bcd6ca4 Testing of layers and documentation 2018-02-08 10:10:11 -08:00
Eren Golge 3cafc6568c Update attention module Possible BUG FIX2 2018-02-05 08:22:30 -08:00
Eren Golge b6c5771a6f Update attention module Possible BUG FIX 2018-02-05 06:37:40 -08:00