Replies: 3 comments
>>> Mo3geza
[July 21, 2019, 9:06am]
I am working on a specific dataset containing numbers plus the characters . and ', so I made alphabet.txt contain the characters a->z, 0->9, . and '. Then I tried to run the pre-trained model, but it gave me the output below with this command:
python DeepSpeech.py --train_files .../tts/train.csv \
    --train_batch_size 24 --test_files .../tts/test.csv --test_batch_size 48 \
    --dev_files .../tts/dev.csv --dev_batch_size 48 \
    --checkpoint_dir .../models/checkpoint/ --export_dir models/ --epoch -3 \
    --learning_rate 0.0001 --dropout_rate 0.15 --lm_alpha 0.75 --lm_beta 1.85
Output:
InvalidArgumentError (see above for traceback): Restoring from checkpoint failed. This is most likely due to a mismatch between the current graph and the graph from the checkpoint. Please ensure that you have not altered the graph expected based on the checkpoint. Original error:
E
E Assign requires shapes of both tensors to match. lhs shape= [2048,40] rhs shape= [2048,29]
E [[node save/Assign_32 (defined at DeepSpeech.py:418)]]
E [[node save/restore_all/NoOp_1 (defined at DeepSpeech.py:418)]]
E
E The checkpoint in .../models/checkpoint/model.v0.5.1 does not match the shapes of the model. Did you change alphabet.txt or the --n_hidden parameter between train runs using the same checkpoint dir? Try moving or removing the contents of .../models/checkpoint/model.v0.5.1.
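[Editor's note: the 40-vs-29 shape mismatch above is consistent with the alphabet change. A minimal sketch of the arithmetic, assuming (based on the error message, not on anything stated in the thread) that DeepSpeech's final layer has one output per alphabet character plus one CTC blank:]

```python
# Assumption: the width of the final layer is len(alphabet) + 1 (the +1 is the CTC blank).
# Changing alphabet.txt therefore changes this width, and the old checkpoint no longer fits.

def ctc_output_dim(alphabet):
    """Output width of the final layer: one unit per character + 1 CTC blank."""
    return len(alphabet) + 1

# Released English model: 26 letters + space + apostrophe = 28 characters -> 29 outputs,
# matching the checkpoint side of the error, rhs shape [2048,29].
english = [chr(c) for c in range(ord('a'), ord('z') + 1)] + [' ', "'"]
print(ctc_output_dim(english))  # 29

# A custom alphabet with a-z, 0-9, '.', "'" and space = 39 characters -> 40 outputs,
# matching the new graph side of the error, lhs shape [2048,40].
custom = english + [str(d) for d in range(10)] + ['.']
print(ctc_output_dim(custom))  # 40
```

[So fine-tuning the released checkpoint with a larger alphabet cannot restore the final layer as-is; the layer sizes genuinely differ.]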
So I removed --checkpoint_dir from the command and tried to run it again, and it gave me this error:
Output:
WARNING:tensorflow:From /home/reasearch/anaconda3/envs/tf13/lib/python3.6/site-packages/tensorflow/python/data/ops/dataset_ops.py:429: py_func (from tensorflow.python.ops.script_ops) is deprecated and will be removed in a future version.
Instructions for updating:
tf.py_func is deprecated in TF V2. Instead, use tf.py_function, which takes a python function which manipulates tf eager tensors instead of numpy arrays. It's easy to convert a tf eager tensor to an ndarray (just call tensor.numpy()) but having access to eager tensors means tf.py_functions can use accelerators such as GPUs as well as being differentiable using a gradient tape.
WARNING:tensorflow:From /home/reasearch/anaconda3/envs/tf13/lib/python3.6/site-packages/tensorflow/python/data/ops/iterator_ops.py:358: colocate_with (from tensorflow.python.framework.ops) is deprecated and will be removed in a future version.
Instructions for updating:
Colocations handled automatically by placer.
WARNING:tensorflow:From /home/reasearch/anaconda3/envs/tf13/lib/python3.6/site-packages/tensorflow/contrib/rnn/python/ops/lstm_ops.py:696: to_int64 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
I Initializing variables...
I STARTING Optimization
Epoch 0 | Training | Elapsed Time: 0:00:00 | Steps: 0 | Loss: 0.000000
Segmentation fault (core dumped)
I don't know why this is happening. I have tried debugging it more than once and I feel lost.
Any ideas?
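[Editor's note: for reference, here is a sketch of generating an alphabet.txt like the one described in the question. The format assumption — one character per line, with lines starting with # treated as comments — follows the convention of DeepSpeech's released alphabet files; the exact character set below is illustrative, not copied from the thread.]

```python
# Assumption: DeepSpeech reads alphabet.txt as one character per line,
# ignoring lines that start with '#'. Space is a line containing a single space.
import string

chars = [' '] + list(string.ascii_lowercase) + list(string.digits) + ['.', "'"]
with open('alphabet.txt', 'w') as f:
    f.write('# Each line is one character; must cover every symbol in the transcripts.\n')
    for c in chars:
        f.write(c + '\n')

print(len(chars))  # 39 characters -> final layer width 40 with the CTC blank
```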
[This is an archived TTS discussion thread from discourse.mozilla.org/t/fine-tuning-on-a-small-dataset-containing-different-alphabet]