Replies: 2 comments
>>> watercress20
[August 13, 2019, 9:28pm]
Hi, I'm looking to use DeepSpeech to pick up my own accent better and
learn a bit about speech recognition systems.
But upon setting everything up, it gets to this point:
    Use standard file APIs to check for files with this prefix.
    I Restored variables from most recent checkpoint at /home/----------/SpeechDemo/deepspeech-0.5.1-checkpoint/model.v0.5.1, step 467356
    I STARTING Optimization
    Epoch 0 | Training | Elapsed Time: 0:00:00 | Steps: 0 | Loss: 0.000000
and at the end of the traceback:

    tensorflow.python.framework.errors_impl.InvalidArgumentError: WAV data chunk 'data' is too large: 2147483648 bytes, but the limit is 2147483647
         [[{{node DecodeWav}}]]
         [[{{node tower_0/IteratorGetNext}}]]
That byte count is in the gigabytes, and the reported 2147483648 bytes is exactly one byte over the 2147483647 limit (the maximum signed 32-bit integer), which seemed strange to me. Could it be a WAV file size limit?
I'm using the 0.5.1 checkpoint and the 0.5.1 source, on the CPU. I was wondering if anyone has come across this and could explain it.
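In case it helps with digging, here is a minimal sketch I put together to look for the offending file. It is my own code, not part of DeepSpeech, and it assumes the training CSV uses the usual wav_filename column. It walks the RIFF chunks of each listed WAV and flags any file whose 'data' chunk header declares a size at or beyond that 32-bit limit:

    import csv
    import struct
    import sys

    INT32_MAX = 2**31 - 1  # the limit DecodeWav reports in the error

    def data_chunk_size(path):
        # Walk the RIFF chunks and return the declared size of the
        # 'data' chunk, or None if the file ends before one is found.
        with open(path, 'rb') as f:
            riff, _, wave_tag = struct.unpack('<4sI4s', f.read(12))
            if riff != b'RIFF' or wave_tag != b'WAVE':
                raise ValueError('not a RIFF/WAVE file: %s' % path)
            while True:
                header = f.read(8)
                if len(header) < 8:
                    return None
                chunk_id, size = struct.unpack('<4sI', header)
                if chunk_id == b'data':
                    return size
                # Chunks are word-aligned: skip the payload plus any pad byte.
                f.seek(size + (size & 1), 1)

    def main(csv_path):
        with open(csv_path, newline='') as f:
            for row in csv.DictReader(f):
                path = row['wav_filename']  # assumed CSV column name
                size = data_chunk_size(path)
                if size is None or size > INT32_MAX:
                    print('suspect header: %s (data chunk = %r bytes)'
                          % (path, size))

    if __name__ == '__main__':
        main(sys.argv[1])

Running it over the train/dev/test CSVs should point at whichever file has a corrupt or oversized header.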
Thanks a lot for the project and the models, it's amazing.
[This is an archived TTS discussion thread from discourse.mozilla.org/t/checkpoint-wav-data-chunk-data-is-too-large]