Replies: 11 comments
>>> Sushantmkarande
[June 17, 2019, 4:45am]
@kdavis
I was training DeepSpeech 0.5.0 on data I scraped from YouTube, using its closed captions (the VTT subtitle files) as transcripts, when I got this error:

Not enough time for target transition sequence (required: 102, available: 0). You can turn this error into a warning by using the flag ignore_longer_outputs_than_inputs

I passed `ignore_longer_outputs_than_inputs=True` to `tf.nn.ctc_loss` and the model started training again, but I need some clarification:

- What does this error mean?
- Why am I getting it? It may be true that my transcripts don't match the audio 100%, but I remember giving this model a completely wrong transcript and it still trained on it.
- How can I tell how many training samples it is ignoring once this flag is set? What if it is skipping all of the samples? I am not seeing even the slightest effect on the model after training all day.
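(A note on the "required vs. available" numbers in the error: a CTC alignment needs at least as many input frames as target labels, plus one extra frame for every pair of adjacent identical labels, since a blank must separate repeats. The sketch below, using hypothetical sample data rather than the DeepSpeech pipeline, shows how you could pre-count how many (frames, transcript) pairs the flag would silently drop:)

```python
# Sketch: estimate how many samples CTC loss would skip when
# ignore_longer_outputs_than_inputs=True. A valid CTC path needs
# len(labels) + (# adjacent repeats) frames, because consecutive
# identical labels must be separated by a blank.
def min_ctc_frames(labels):
    repeats = sum(1 for a, b in zip(labels, labels[1:]) if a == b)
    return len(labels) + repeats

def count_skipped(samples):
    """samples: list of (num_frames, label_ids) tuples (hypothetical format)."""
    return sum(1 for t, labels in samples if t < min_ctc_frames(labels))

samples = [
    (50, [1, 2, 3]),     # plenty of frames -> kept
    (4,  [1, 1, 2, 3]),  # needs 5 frames (one repeat) -> skipped
    (0,  [7] * 102),     # the "available: 0" case from the error -> skipped
]
print(count_skipped(samples))  # -> 2
```

Running a check like this over your own feature/transcript pairs before training would answer the "how many is it skipping?" question directly.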
[This is an archived TTS discussion thread from discourse.mozilla.org/t/i-need-some-clarification-on-ignore-longer-outputs-than-inputs-flag]