8 procs not used #456
-
Dear Miles, I have a dataset of 2000 training points. I have set the number of iterations to 100, procs=8 (equal to cpu.count()), and multithreading=True. When I look at the CPU usage, I see all 8 cores are at 50-70% during PySR's training. Even if I raise procs to 32, the issue remains. What should I change here? Although I get the answer in less than 2 minutes even with these settings, I want to make use of all 8 CPUs. If I run a larger job with iterations=500 and populations=3*32 on a PBS cluster with procs=32, I see the same issue there: only one CPU is being used. The settings there were procs=32, cluster_manager="pbs".
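For concreteness, here is a minimal sketch of the setup being described; X and y are placeholder data, and the parameter names follow the PySRRegressor API of the PySR version discussed in this thread:

```python
import numpy as np
from pysr import PySRRegressor

rng = np.random.default_rng(0)
X = rng.standard_normal((2000, 5))  # 2000 training points, 5 features
y = X[:, 0] ** 2 - 2.0 * X[:, 1]    # hypothetical target

model = PySRRegressor(
    niterations=100,      # "number of iterations to 100"
    procs=8,              # equal to cpu.count() on this machine
    multithreading=True,  # use threads rather than worker processes
)
model.fit(X, y)
```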
-
Does it say the head worker occupation %? The lower that is, the better. If that number is high, the head worker basically can't hand out work quickly enough. To improve this you should increase the population size and/or ncyclesperiteration. Alternatively, you can use multiprocessing instead of multithreading by setting multithreading=False.
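A sketch of both suggestions, assuming the PySRRegressor API; the specific values are illustrative, not tuned recommendations:

```python
from pysr import PySRRegressor

# Suggestion 1: give the head worker bigger chunks of work per dispatch,
# so it spends less time handing out jobs (values are illustrative):
model = PySRRegressor(
    niterations=100,
    procs=8,
    multithreading=True,
    population_size=100,       # larger populations -> fewer, larger jobs
    ncyclesperiteration=5000,  # more evolution per job before reporting back
)

# Suggestion 2: switch from multithreading to multiprocessing:
model = PySRRegressor(
    niterations=100,
    procs=8,
    multithreading=False,  # spawn separate Julia worker processes instead
)
```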
-
What version of PBS are you using? I know that some people have run into issues with multi-node processing on PBS because ClusterManagers.jl does not yet support the latest qsub command-line arguments. See JuliaParallel/ClusterManagers.jl#179. So one current option is to run on a single node without the cluster manager, i.e., with cluster_manager=None, multithreading=False, procs=32.
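A sketch of that single-node fallback, reusing the settings from the question above; this assumes it runs inside a job script on one PBS node:

```python
from pysr import PySRRegressor

# Single-node fallback: skip the PBS cluster manager entirely and use
# 32 local worker processes on one node instead.
model = PySRRegressor(
    niterations=500,
    populations=3 * 32,    # as in the original run
    cluster_manager=None,  # do not go through ClusterManagers.jl / qsub
    multithreading=False,  # multiprocessing...
    procs=32,              # ...with 32 worker processes
)
```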
-
@Nakul9621 also, this issue is highly relevant! #419