Negative loss value #22
Comments
Hey! Super weird. Could you provide more details?
I agree :) The only changes are the ones stated above. I used the pipeline from basic.py and the README page; I only modified the audio loading part to use soundfile. The loss is frozen at -0.6921 in the last try. Data loaded from LibriSpeech:
I have the same problem. I tried training on my own dataset: `476/476 [==============================] - 802s 2s/step - loss: 2.6511 - val_loss: -0.6931`. After the first epoch, val_loss was negative, and the second epoch also had a negative loss. I used your environment-gpu.yml to create a conda environment. My English is not great, but I did my best.
Hi, I had the same problem. For me, it turned out that the `pipeline.fit()` method receives an empty string instead of the correct transcript, so the model learns to predict it. I used the following code and it works: `dataset = pipeline.wrap_preprocess(dataset, False, None)`
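Following up on the empty-transcript symptom described above: a quick way to rule it out is to scan the training pairs before calling fit. This is a generic sketch, not this repo's API; the `find_empty_transcripts` helper and the `(audio, transcript)` pair layout are assumptions for illustration.

```python
def find_empty_transcripts(samples):
    """Return indices of samples whose transcript is empty or whitespace.

    `samples` is assumed to be an iterable of (audio, transcript) pairs;
    a model trained on blank transcripts will happily learn to predict ''.
    """
    bad = []
    for i, (_, transcript) in enumerate(samples):
        if not transcript or not transcript.strip():
            bad.append(i)
    return bad

# Toy check: the second and third samples have effectively empty transcripts.
samples = [([0.1, 0.2], "hello world"), ([0.3], ""), ([0.4], "  ")]
print(find_empty_transcripts(samples))  # → [1, 2]
```

If this returns anything for your real dataset, the preprocessing step (not the model) is the thing to fix.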
I guess there has been no improvement in this regard, because the system still produces negative values.
I had a similar issue even when I just ran the example (basic.py) on TF v2.1. The negative loss value might be acceptable (see keras-team/keras#9369), but when I used predict on test.csv (the same file as for training), the output was empty: `['']`. That doesn't look reasonable. `Epoch 1/5`
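For context on why a stuck negative value is suspicious: if the quantity being reported really is a plain negative log-likelihood (as CTC and cross-entropy losses are), it is bounded below by zero, because a probability never exceeds 1. A loss frozen near -0.6931 would imply a "probability" of about e^0.6931 ≈ 2, which is impossible, so it points at malformed inputs (e.g. the empty transcripts mentioned above) rather than genuine convergence. A minimal sanity check of that bound:

```python
import math

# -log(p) is non-negative for any valid probability p in (0, 1].
for p in (1.0, 0.5, 1e-6):
    assert -math.log(p) >= 0.0

# A loss of -0.6931 would require log-likelihood +0.6931,
# i.e. a "probability" of roughly 2 - not possible.
print(round(math.exp(0.6931), 3))  # → 2.0
```

Note this reasoning assumes the reported number is the raw negative log-likelihood; some implementations report transformed or shifted values, which is what the linked Keras issue discusses.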
Hi,
I am trying to run the sample training on LibriSpeech clean 100h. After a few hours of training with batch size = 10, the printed loss value becomes negative. It happens in the first epoch.

The only thing I changed is the `read_audio` function, to use `soundfile` for reading FLAC files instead of WAVs with `wavfile.read`. Both give the same output when reading files, so it shouldn't make a difference.

Are you familiar with this issue? The loss seems to decrease too fast. Any guess what is going wrong?
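One difference worth double-checking in the audio-loading change (an assumption on my part, not something confirmed in this thread): `soundfile.read` returns float samples scaled to [-1, 1) by default, while `scipy.io.wavfile.read` returns raw int16 PCM for 16-bit files; the two only match if you pass `dtype='int16'` to soundfile. A numpy-only sketch of the scaling involved:

```python
import numpy as np

# Raw 16-bit PCM samples, as scipy.io.wavfile.read would return them.
pcm_int16 = np.array([0, 16384, -16384, 32767], dtype=np.int16)

# soundfile.read returns float data scaled by 1 / 2**15 by default;
# request dtype='int16' from soundfile to get the raw values instead.
as_float = pcm_int16.astype(np.float64) / 2**15

print(as_float)
```

If the rest of the pipeline (e.g. feature extraction) expects one of these ranges, silently switching to the other can change the loss behaviour without any code "error".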