result_one_column_input.log
2017-11-13 11:16:53.297976: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.1 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-13 11:16:53.298008: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use SSE4.2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-13 11:16:53.298013: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX instructions, but these are available on your machine and could speed up CPU computations.
2017-11-13 11:16:53.298016: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use AVX2 instructions, but these are available on your machine and could speed up CPU computations.
2017-11-13 11:16:53.298019: W tensorflow/core/platform/cpu_feature_guard.cc:45] The TensorFlow library wasn't compiled to use FMA instructions, but these are available on your machine and could speed up CPU computations.
2017-11-13 11:16:53.712371: I tensorflow/core/common_runtime/gpu/gpu_device.cc:955] Found device 0 with properties:
name: Tesla K80
major: 3 minor: 7 memoryClockRate (GHz) 0.8235
pciBusID 0000:06:00.0
Total memory: 11.17GiB
Free memory: 11.11GiB
2017-11-13 11:16:53.712416: I tensorflow/core/common_runtime/gpu/gpu_device.cc:976] DMA: 0
2017-11-13 11:16:53.712425: I tensorflow/core/common_runtime/gpu/gpu_device.cc:986] 0: Y
2017-11-13 11:16:53.712437: I tensorflow/core/common_runtime/gpu/gpu_device.cc:1045] Creating TensorFlow device (/gpu:0) -> (device: 0, name: Tesla K80, pci bus id: 0000:06:00.0)
2017-11-13 11:16:53.734203: E tensorflow/stream_executor/cuda/cuda_driver.cc:924] failed to allocate 11.17G (11996954624 bytes) from device: CUDA_ERROR_OUT_OF_MEMORY
and away we go
(20520, 240, 240, 3)
./Trimmed1/trim1_resized.mp4
4187
./Trimmed2/trim2_resized.mp4
2267
./Trimmed3/trim3_resized.mp4
4570
./Trimmed4/trim4_resized.mp4
5036
./Trimmed5/trim5_resized.mp4
4460
./Data1/E8D2.csv
./Data1/E84F.csv
./Data1/E91B.csv
./Data1/E863.csv
./Data1/E887.csv
./Data1/E906.csv
./Data1/E912.csv
(4187, 56)
./Data2/E8D2.csv
./Data2/E84F.csv
./Data2/E91B.csv
./Data2/E863.csv
./Data2/E887.csv
./Data2/E906.csv
./Data2/E912.csv
(6454, 56)
./Data3/E8D2.csv
./Data3/E84F.csv
./Data3/E91B.csv
./Data3/E863.csv
./Data3/E887.csv
./Data3/E906.csv
./Data3/E912.csv
(11024, 56)
./Data4/E8D2.csv
./Data4/E84F.csv
./Data4/E91B.csv
./Data4/E863.csv
./Data4/E887.csv
./Data4/E906.csv
./Data4/E912.csv
(16060, 56)
./Data5/E8D2.csv
./Data5/E84F.csv
./Data5/E91B.csv
./Data5/E863.csv
./Data5/E887.csv
./Data5/E906.csv
./Data5/E912.csv
(20520, 56)
Index(['Time Stamp', 'Time Stamp Unix', 'Low Noise Accelerometer X',
'Low Noise Accelerometer Y', 'Low Noise Accelerometer Z', 'Gyroscope X',
'Gyroscope Y', 'Gyroscope Z', 'Time Stamp', 'Time Stamp Unix',
'Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z', 'Time Stamp', 'Time Stamp Unix',
'Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z', 'Time Stamp', 'Time Stamp Unix',
'Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z', 'Time Stamp', 'Time Stamp Unix',
'Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z', 'Time Stamp', 'Time Stamp Unix',
'Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z', 'Time Stamp', 'Time Stamp Unix',
'Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z'],
dtype='object')
Index(['Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z', 'Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z', 'Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z', 'Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z', 'Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z', 'Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z', 'Low Noise Accelerometer X', 'Low Noise Accelerometer Y',
'Low Noise Accelerometer Z', 'Gyroscope X', 'Gyroscope Y',
'Gyroscope Z'],
dtype='object')
(15000, 240, 240, 3)
(5520, 240, 240, 3)
(15000, 42)
(5520, 42)
[4186 0]
nan
[4186 1]
nan
[4186 2]
nan
[4186 3]
nan
[4186 4]
nan
[4186 5]
nan
[6453 0]
nan
[6453 1]
nan
[6453 2]
nan
[6453 3]
nan
[6453 4]
nan
[6453 5]
nan
0
variance of y_train
15895.7985824
variance of y_test
6324.96204399
trial = 1/25
64
128
256
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 3, d2: 2, filters: 32, nodes: 16, stopping: 15, drop: 0.1
train_err 14783.0448927
val_err 19552.6411499
test_err 6286.70264254
trial = 2/25
64
128
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 2, d2: 1, filters: 64, nodes: 64, stopping: 10, drop: 0.5
train_err 14687.0703807
val_err 19541.8168046
test_err 6271.40399671
trial = 3/25
128
256
512
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 3, d2: 2, filters: 64, nodes: 128, stopping: 10, drop: 0.5
train_err 14053.3428452
val_err 19698.396166
test_err 6689.59951613
trial = 4/25
32
64
128
256
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 4, d2: 2, filters: 16, nodes: 16, stopping: 15, drop: 0.5
train_err 14831.8866063
val_err 19573.0794764
test_err 6302.51278411
trial = 5/25
32
64
128
256
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 4, d2: 2, filters: 16, nodes: 64, stopping: 10, drop: 0.5
train_err 14490.647507
val_err 19694.4467988
test_err 6297.29223165
trial = 6/25
128
256
512
1024
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 4, d2: 3, filters: 64, nodes: 64, stopping: 10, drop: 0.5
train_err 14929.3520578
val_err 19574.6138577
test_err 6311.12155728
trial = 7/25
128
256
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 2, d2: 3, filters: 64, nodes: 64, stopping: 10, drop: 0.1
train_err 14360.9543656
val_err 19514.5382781
test_err 6421.31349971
trial = 8/25
64
128
256
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 3, d2: 2, filters: 32, nodes: 64, stopping: 10, drop: 0.5
train_err 14450.8434993
val_err 19474.4333611
test_err 6362.1367046
trial = 9/25
64
128
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 2, d2: 2, filters: 32, nodes: 16, stopping: 15, drop: 0.5
train_err 14859.0677216
val_err 19551.2914916
test_err 6290.36175941
trial = 10/25
64
128
256
512
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 4, d2: 1, filters: 64, nodes: 32, stopping: 15, drop: 0.5
train_err 14756.7484678
val_err 19567.8192107
test_err 6294.75783778
trial = 11/25
16
32
64
128
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 4, d2: 1, filters: 16, nodes: 32, stopping: 15, drop: 0.5
train_err 14578.1325432
val_err 19535.0815427
test_err 6279.94908433
trial = 12/25
16
32
64
128
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 4, d2: 1, filters: 16, nodes: 128, stopping: 15, drop: 0.5
train_err 13854.0355089
val_err 19685.3447035
test_err 6262.25472367
trial = 13/25
64
128
256
512
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 4, d2: 2, filters: 32, nodes: 32, stopping: 15, drop: 0.1
train_err 14705.4951868
val_err 20085.7259839
test_err 6966.90973959
trial = 14/25
16
32
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 2, d2: 1, filters: 16, nodes: 64, stopping: 15, drop: 0.1
train_err 14274.768407
val_err 19475.2359456
test_err 6292.02787239
trial = 15/25
16
32
64
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 3, d2: 1, filters: 16, nodes: 32, stopping: 15, drop: 0.1
train_err 14634.5476082
val_err 19546.2877218
test_err 6285.24575113
trial = 16/25
64
128
256
512
1024
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 5, d2: 3, filters: 32, nodes: 16, stopping: 15, drop: 0.1
train_err 14910.3445116
val_err 19677.8902044
test_err 6360.34477632
trial = 17/25
64
128
256
512
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 4, d2: 3, filters: 32, nodes: 16, stopping: 10, drop: 0.1
train_err 14909.8064153
val_err 19563.8959394
test_err 6298.63740563
trial = 18/25
64
128
256
512
1024
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 5, d2: 2, filters: 32, nodes: 128, stopping: 10, drop: 0.1
train_err 14552.3873324
val_err 24855.8913208
test_err 8283.40666998
trial = 19/25
64
128
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 2, d2: 3, filters: 32, nodes: 128, stopping: 10, drop: 0.1
train_err 13913.7527412
val_err 19574.3092225
test_err 6289.89118886
trial = 20/25
128
256
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 2, d2: 2, filters: 64, nodes: 128, stopping: 10, drop: 0.5
train_err 13670.1121445
val_err 19503.3298124
test_err 6299.13031091
trial = 21/25
64
128
256
512
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 4, d2: 2, filters: 32, nodes: 32, stopping: 10, drop: 0.1
train_err 14690.0660062
val_err 19553.2607491
test_err 6297.94079688
trial = 22/25
64
128
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 2, d2: 2, filters: 32, nodes: 64, stopping: 15, drop: 0.1
train_err 13994.9834937
val_err 19606.9130595
test_err 6306.53068183
trial = 23/25
32
64
128
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 3, d2: 2, filters: 16, nodes: 128, stopping: 15, drop: 0.5
train_err 13798.111061
val_err 19595.9111388
test_err 6351.64221672
trial = 24/25
128
256
512
1024
2048
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 5, d2: 2, filters: 64, nodes: 128, stopping: 10, drop: 0.5
train_err 14651.1442259
val_err 19530.9028042
test_err 6277.68379316
trial = 25/25
Using TensorFlow backend.
32
64
128
256
512
keys
dict_keys(['val_loss', 'val_acc', 'loss', 'acc'])
d1: 5, d2: 3, filters: 16, nodes: 128, stopping: 10, drop: 0.1
train_err 14739.8075813
val_err 19500.7470632
test_err 6290.16243502