DE1-SoC Performance Different from Example Given #143
Comments
Referring to this issue: #46 (comment), the total kernel runtime of 149.988 ms on the DE1-SoC board also seems to have been achieved with VEC_SIZE=8 and LANE_NUM=8.
@doonny This time I managed to get a total kernel runtime of 157.928 ms. Just wondering, why does the latest version of PipeCNN on GitHub give slower inference performance on the DE1-SoC board?
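For reference, the two parameters in question are compile-time macros in PipeCNN's hardware configuration, and both the .aocx and the host binary have to be rebuilt after changing them so that host and kernel agree. A minimal sketch is below; the file name `hw_param.cl` and the explanatory comments are assumptions about the layout, so check your own checkout:

```c
/* Hypothetical excerpt of the hardware-parameter header (e.g. hw_param.cl).
 * The exact file name and the meaning described in the comments are
 * assumptions, not verified project documentation. */
#define VEC_SIZE   8   /* roughly: vectorization width over input features */
#define LANE_NUM   8   /* roughly: number of parallel output lanes (kernel duplication) */
```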
May I ask which version of the SDK you are using for compilation?
@mingyi136 Where did you download the BSP for the DE1-SoC board?
@doonny I compiled conv.aocx using OpenCL SDK 17.1 on Windows, whereas run.exe was compiled using OpenCL SDK 16.1 on the DE1-SoC board (Linux).
Ahh okay, I tried compiling using OpenCL SDK 17.1, but I get this error. Do you have a license for the 16.1 SDK?
@sergio14890 I downloaded the Linux SD card image (which includes OpenCL 16.1) from here: Meanwhile, conv.aocx was compiled using OpenCL 17.1, pointing to the DE1-SoC BSP (OpenCL 16.0), which is available here:
The latest code is optimized for SDK v19.1 and uses some features that are not supported by older versions such as v16.1 and v17.1. We suggest upgrading the SDK to v19.1.
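Since the .aocx and the host binary here were built with different SDK versions, it can help to confirm what the runtime on the board actually reports. The standalone snippet below is a generic OpenCL query, not part of PipeCNN; on the board it would typically be compiled against the Intel FPGA OpenCL runtime (for example via the flags printed by `aocl compile-config` and `aocl link-config`):

```c
/* Standalone sanity check (not part of PipeCNN): print the OpenCL platform
 * and device versions reported by the runtime installed on the board. */
#include <stdio.h>
#include <CL/cl.h>

int main(void) {
    cl_platform_id platform;
    cl_device_id device;
    char buf[256];

    if (clGetPlatformIDs(1, &platform, NULL) != CL_SUCCESS) {
        fprintf(stderr, "No OpenCL platform found\n");
        return 1;
    }
    clGetPlatformInfo(platform, CL_PLATFORM_VERSION, sizeof(buf), buf, NULL);
    printf("Platform version: %s\n", buf);

    if (clGetDeviceIDs(platform, CL_DEVICE_TYPE_ALL, 1, &device, NULL) == CL_SUCCESS) {
        clGetDeviceInfo(device, CL_DEVICE_VERSION, sizeof(buf), buf, NULL);
        printf("Device version:   %s\n", buf);
    }
    return 0;
}
```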
Hi @doonny, I have run inference on the DE1-SoC board with VEC_SIZE=8 and LANE_NUM=8 (other parameters unchanged).
However, the total kernel runtime is 236.344 ms instead of the 149.988 ms given in the example. Are there any additional parameter or code changes compared to the old version?
Here is my inference result:
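For anyone comparing these numbers: a "total kernel runtime" figure like this is usually obtained from OpenCL event profiling on the host side. The sketch below shows that generic technique and is not necessarily PipeCNN's exact measurement code; it assumes the command queue was created with CL_QUEUE_PROFILING_ENABLE.

```c
/* Generic sketch of per-kernel timing with OpenCL event profiling
 * (not PipeCNN's exact host code). Returns elapsed time in milliseconds,
 * or -1.0 if the enqueue fails. */
#include <CL/cl.h>

static double time_kernel_ms(cl_command_queue queue, cl_kernel kernel,
                             cl_uint work_dim, const size_t *global_size,
                             const size_t *local_size) {
    cl_event ev;
    cl_ulong t_start = 0, t_end = 0;

    cl_int err = clEnqueueNDRangeKernel(queue, kernel, work_dim, NULL,
                                        global_size, local_size, 0, NULL, &ev);
    if (err != CL_SUCCESS)
        return -1.0;

    clWaitForEvents(1, &ev);  /* block until the kernel finishes */
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_START,
                            sizeof(t_start), &t_start, NULL);
    clGetEventProfilingInfo(ev, CL_PROFILING_COMMAND_END,
                            sizeof(t_end), &t_end, NULL);
    clReleaseEvent(ev);

    return (double)(t_end - t_start) * 1e-6;  /* nanoseconds -> milliseconds */
}
```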