Fix F.clip_grad_by_norm in the case of small norm
takuseno committed Jan 11, 2020
1 parent a16573f commit 788b9ef
Showing 1 changed file with 1 addition and 1 deletion.
src/nbla/function/generic/clip_grad_by_norm.cpp (2 changes: 1 addition & 1 deletion)
@@ -56,7 +56,7 @@ template <typename T, bool accum>
 void clip_grad_by_norm_backward_cpu(int size, T clip_norm_grad, T *dx,
                                     const T *dy, const T *m) {
   for (int s = 0; s < size; ++s) {
-    T _dx = clip_norm_grad * dy[s] / std::sqrt(m[s]);
+    T _dx = clip_norm_grad * dy[s] / std::max(std::sqrt(m[s]), clip_norm_grad);
     accum ? dx[s] += _dx : dx[s] = _dx;
   }
 }
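
For context, the one-line change floors the denominator at clip_norm_grad, so when sqrt(m[s]) is smaller than the clip norm the upstream gradient dy[s] passes through unscaled instead of being amplified by a division by a near-zero norm. Below is a minimal standalone sketch of that before/after behavior; it is not part of the repository, and the values (clip_norm, m, dy) are illustrative only.

#include <algorithm>
#include <cmath>
#include <cstdio>

int main() {
  // Illustrative values (not from the commit): clip_norm is the clipping
  // threshold, m is a very small squared norm, dy is an upstream gradient.
  const float clip_norm = 1.0f;
  const float m = 1e-8f;
  const float dy = 0.5f;

  // Old formula: dividing by the tiny sqrt(m) blows the gradient up.
  float old_dx = clip_norm * dy / std::sqrt(m);                       // 0.5 / 1e-4 = 5000
  // Fixed formula: the denominator is floored at clip_norm, so a gradient
  // whose norm is below the threshold is left unscaled.
  float new_dx = clip_norm * dy / std::max(std::sqrt(m), clip_norm);  // 0.5

  std::printf("old: %g, fixed: %g\n", old_dx, new_dx);
  return 0;
}

With the fix, clipping only rescales gradients whose norm exceeds the threshold, matching the usual definition of clip-by-norm: dy * min(1, clip_norm / norm).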
