Hello, I used your graph construction and graph aggregation modules with my own data for training. During training, I found that increasing the size of the input data causes the loss to become NaN. If the batch size is set to 1, the loss returns to normal. I can guarantee that my input data is correct and contains no NaN values. What could be causing this? Could backpropagation through the imtopatch operation be making the gradients explode? Is there any solution?
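For what it's worth, a common way to narrow this down is to enable PyTorch's autograd anomaly detection (which reports the first op whose backward pass produces NaN) and to clip gradient norms as a mitigation. Below is a minimal sketch; `train_step` and `max_norm` are illustrative names, not part of this repo, and the actual model/loss would be whatever you are training:

```python
import torch

# Report the first backward op that produces NaN/Inf gradients
# (slows training; use for debugging only).
torch.autograd.set_detect_anomaly(True)

def train_step(model, optimizer, loss_fn, x, y, max_norm=1.0):
    # Hypothetical training step; adapt to your own loop.
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    if not torch.isfinite(loss):
        raise RuntimeError(f"Non-finite loss: {loss.item()}")
    loss.backward()
    # Bound the update in case the backward pass explodes.
    torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm)
    optimizer.step()
    return loss.item()
```

If clipping makes the NaN disappear, that points to exploding gradients (possibly through imtopatch at larger input sizes); lowering the learning rate is another quick check.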