Hi~ I have tested your pre-trained Bi-Real Net (18-layer) on a Huawei Mate 30 Pro and got an inference time of 20 ms, which was nice. However, when I built dabnn for the Hi3516dv300 (armv7, with NEON enabled) and ran the same model with net_test, I got an inference time of about 2000 ms. Could you please help me with this "bad performance"?
To reproduce this result:
1. Use the corresponding toolchain file to build dabnn. Here is my CMAKE_CXX_FLAGS (a rough sketch of the full toolchain file follows the test log below):
SET(CMAKE_CXX_FLAGS " -mfloat-abi=softfp -mfpu=neon-vfpv4 -mcpu=cortex-a7 ${CMAKE_CXX_FLAGS}" )
2. Test the generated net_test on the Hi3516dv300. Here is what I got:
/mnt/output/tianlin/dabnn_test/bin # ./net_test
Running main() from /home/tianlin/dabnn/third_party/googletest/googletest/src/gtest_main.cc
[==========] Running 4 tests from 1 test suite.
[----------] Global test environment set-up.
[----------] 4 tests from net
[ RUN ] net.bireal18imagenet_comparison
[ OK ] net.bireal18imagenet_comparison (6526 ms)
[ RUN ] net.bireal18imagenet
[INFO] bireal18imagenet's latency is 2219.43 ms
[ OK ] net.bireal18imagenet (2323 ms)
[ RUN ] net.bireal18imagenetstem_comparison
[ OK ] net.bireal18imagenetstem_comparison (6047 ms)
[ RUN ] net.bireal18imagenetstem
[INFO] bireal18imagenetstem's latency is 2045.26 ms
[ OK ] net.bireal18imagenetstem (2232 ms)
[----------] 4 tests from net (17129 ms total)
[----------] Global test environment tear-down
[==========] 4 tests from 1 test suite ran. (17130 ms total)
[ PASSED ] 4 tests.
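
For reference, a minimal sketch of the toolchain file I mean in step 1, assuming the arm-himix200-linux cross compiler prefix from the HiSilicon SDK (substitute whatever cross compiler your board's SDK actually ships):

# himix200.cmake -- hypothetical armv7 (Cortex-A7) cross toolchain sketch for dabnn.
# The arm-himix200-linux prefix is an assumption; adjust it to your SDK.
SET(CMAKE_SYSTEM_NAME Linux)
SET(CMAKE_SYSTEM_PROCESSOR arm)
SET(CMAKE_C_COMPILER arm-himix200-linux-gcc)
SET(CMAKE_CXX_COMPILER arm-himix200-linux-g++)
# softfp + neon-vfpv4 turns on the NEON support that dabnn's armv7 path relies on.
SET(CMAKE_C_FLAGS " -mfloat-abi=softfp -mfpu=neon-vfpv4 -mcpu=cortex-a7 ${CMAKE_C_FLAGS}" )
SET(CMAKE_CXX_FLAGS " -mfloat-abi=softfp -mfpu=neon-vfpv4 -mcpu=cortex-a7 ${CMAKE_CXX_FLAGS}" )
# Standard cross-compiling boilerplate: don't pick up host libraries or headers.
SET(CMAKE_FIND_ROOT_PATH_MODE_PROGRAM NEVER)
SET(CMAKE_FIND_ROOT_PATH_MODE_LIBRARY ONLY)
SET(CMAKE_FIND_ROOT_PATH_MODE_INCLUDE ONLY)

Invoked as, e.g.: cmake -DCMAKE_TOOLCHAIN_FILE=himix200.cmake .. && make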