Replies: 5 comments
>>> negenentwintig
[June 2, 2019, 2:33pm]
The script:
CHECK=~/scratch/checkpoint-$(date +%s)
cp -r deepspeech-0.4.1-checkpoint/ $CHECK
# cd ../DeepSpeech
echo $CHECK
for i in {1..1}
do
python3 ../DeepSpeech/DeepSpeech.py --n_hidden 2048 --epoch -1 \
--train_files libri/librivox-train-clean-100.csv \
--dev_files libri/librivox-dev-clean.csv \
--test_files libri/librivox-test-clean.csv \
--checkpoint_dir $CHECK \
--train_batch_size 24 \
--dev_batch_size 24 \
--test_batch_size 48 \
--validation_step 1 \
--checkpoint_step 1 \
--learning_rate 0.0001 \
--dropout_rate 0.15 \
--lm_alpha 0.75 \
--lm_beta 1.85 \
--export_dir $CHECK/export \
--alphabet_config_path ~/asr/models/alphabet.txt \
--lm_binary_path ~/asr/models/lm.binary \
--lm_trie_path ~/asr/models/trie \
--beam_width 1024 | tee training-$i.out
done
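For context on the --epoch -1 flag: in the DeepSpeech 0.4.x series a negative --epoch value appears to be relative, meaning "train that many additional epochs beyond the epoch counter stored in the checkpoint". Here is a minimal sketch of that assumed convention, using a hypothetical helper target_epoch (illustrative only, not DeepSpeech source code):
# Sketch (assumed convention, not DeepSpeech source): a negative --epoch
# value is relative to the checkpoint's epoch counter; a non-negative
# value is an absolute target epoch.
def target_epoch(epoch_flag, checkpoint_epoch):
    if epoch_flag < 0:
        return checkpoint_epoch + abs(epoch_flag)
    return epoch_flag

# If the 0.4.1 release checkpoint carries an epoch counter of 377, then
# --epoch -1 resumes at epoch 378, matching "Training epoch 378" in the
# log below.
print(target_epoch(-1, 377))  # 378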
The relevant output:
/home/bderuiter/scratch/checkpoint-1559468775
100% (595 of 595) |######################| Elapsed Time: 0:04:32 Time: 0:04:32
100% (56 of 56) |########################| Elapsed Time: 0:00:19 Time: 0:00:19
100% (54 of 54) |########################| Elapsed Time: 0:01:55 Time: 0:01:55
100% (54 of 54) |########################| Elapsed Time: 0:05:11 Time: 0:05:11
Preprocessing ['libri/librivox-train-clean-100.csv']
Preprocessing done
Preprocessing ['libri/librivox-dev-clean.csv']
Preprocessing done
I STARTING Optimization
I Training epoch 378...
I Training of Epoch 378 - loss: 150.271789
I Validating epoch 378...
I Validation of Epoch 378 - loss: 108.798860
I FINISHED Optimization - training time: 0:04:52
Preprocessing ['libri/librivox-test-clean.csv']
Preprocessing done
Computing acoustic model predictions...
Decoding predictions...
Test - WER: 0.699878, CER: 48.738426, loss: 148.851822
If --epoch is set to 0 (no further training; the checkpoint is only evaluated on the test set), the WER is 0.08, which is about what the pretrained model is expected to score. Why would the WER shoot up to 0.7 when training for just one more epoch?
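For reference, the WER being compared here is the word-level edit distance divided by the number of reference words. A minimal, self-contained sketch of that computation (illustrative only, not DeepSpeech's implementation):
# Word error rate = word-level Levenshtein distance / reference length.
def wer(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = edit distance between ref[:i] and hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,        # deletion
                           dp[i][j - 1] + 1,        # insertion
                           dp[i - 1][j - 1] + cost) # substitution
    return dp[len(ref)][len(hyp)] / len(ref)

print(wer("the cat sat", "the cat sat"))  # 0.0
print(wer("the cat sat", "the bat"))      # 2/3, approx. 0.667
By that definition, a WER of 0.7 means roughly seven out of every ten reference words need an edit, versus fewer than one in ten at 0.08.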
[This is an archived discussion thread from discourse.mozilla.org/t/wer-shoots-up-when-retraining-the-pretrained-model-for-an-additional-epoch-on-libri]