After running for 24 hours, the result is always:
valid error 100.000%, best valid error 100.000%
ubgpu@ubgpu:~/github/trainingRNNs$ sudo python RNN.py
[sudo] password for ubgpu:
Using gpu device 0: GeForce GTX 970
/usr/local/lib/python2.7/dist-packages/theano/scan_module/scan_perform_ext.py:133: RuntimeWarning: numpy.ndarray size changed, may indicate binary incompatibility
from scan_perform.scan_perform import *
Starting to train
Iter 0000020 : train nnl 1.760, valid error 100.000%, best valid error 100.000%, average gradient norm 1.672, rho_Whh 1.06, Omega 0.02, alpha 2.000, steps in the past 1.050
Iter 0000040 : train nnl 1.285, valid error 100.000%, best valid error 100.000%, average gradient norm 2.910, rho_Whh 1.22, Omega 0.17, alpha 2.000, steps in the past 1.000
Iter 0000060 : train nnl 0.897, valid error 100.000%, best valid error 100.000%, average gradient norm 2.465, rho_Whh 1.34, Omega 0.38, alpha 2.000, steps in the past 1.000
Iter 0000080 : train nnl 0.584, valid error 100.000%, best valid error 100.000%, average gradient norm 1.567, rho_Whh 1.44, Omega 0.54, alpha 2.000, steps in the past 1.000
Iter 0000100 : train nnl 0.484, valid error 100.000%, best valid error 100.000%, average gradient norm 0.788, rho_Whh 1.50, Omega 0.65, alpha 2.000, steps in the past 1.000
Iter 0000120 : train nnl 0.429, valid error 100.000%, best valid error 100.000%, average gradient norm 0.564, rho_Whh 1.52, Omega 0.69, alpha 2.000, steps in the past 1.000
Iter 0000140 : train nnl 0.395, valid error 100.000%, best valid error 100.000%, average gradient norm 0.464, rho_Whh 1.53, Omega 0.71, alpha 2.000, steps in the past 1.000
....................
Iter 2131680 : train nnl 0.150, valid error 100.000%, best valid error 099.990%, average gradient norm 0.036, rho_Whh 1.84, Omega 0.02, alpha 2.000, steps in the past 1.000
Iter 2131700 : train nnl 0.138, valid error 100.000%, best valid error 099.990%, average gradient norm 0.034, rho_Whh 1.84, Omega 0.02, alpha 2.000, steps in the past 1.000
Iter 2131720 : train nnl 0.127, valid error 100.000%, best valid error 099.990%, average gradient norm 0.033, rho_Whh 1.84, Omega 0.02, alpha 2.000, steps in the past 1.000
Iter 2131740 : train nnl 0.111, valid error 100.000%, best valid error 099.990%, average gradient norm 0.027, rho_Whh 1.84, Omega 0.01, alpha 2.000, steps in the past 1.000
Iter 2131760 : train nnl 0.115, valid error 100.000%, best valid error 099.990%, average gradient norm 0.029, rho_Whh 1.84, Omega 0.01, alpha 2.000, steps in the past 1.000
Iter 2131780 : train nnl 0.128, valid error 100.000%, best valid error 099.990%, average gradient norm 0.031, rho_Whh 1.84, Omega 0.01, alpha 2.000, steps in the past 1.000
--- forced to stop after 24 hours.
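In the log above the train nnl keeps dropping while the validation error stays pinned at 100.000%, i.e. every validation example is counted as wrong. For reference, below is a minimal sketch of how a "valid error" percentage like the one printed is typically computed; the function and variable names are hypothetical and are not taken from RNN.py:

import numpy as np

def valid_error_pct(predicted_labels, true_labels):
    # Percentage of validation examples whose predicted label differs
    # from the true label; 100.000% means every example is wrong.
    predicted_labels = np.asarray(predicted_labels)
    true_labels = np.asarray(true_labels)
    return 100.0 * np.mean(predicted_labels != true_labels)

# Example: three wrong predictions out of three prints "100.000%",
# formatted the same way as the values in the log above.
print("valid error %07.3f%%" % valid_error_pct([0, 1, 0], [1, 0, 1]))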