
Performance issue debugging #518

Closed · Answered by MilesCranmer
eelregit asked this question in Q&A

Hey @eelregit,

Sorry, I forget whether you already solved this. Is this issue still present?

One quick tip: the batch size is very large. I usually use a batch size of 50 or even less. With a batch size of 1000 out of 7000 total points, you might as well run on the full dataset, because non-contiguous slicing can be expensive.

Also, weight_optimize=0.01 is a bit high. Constant optimization is generally the bottleneck, and at that setting you are running it more frequently than normal (I usually use 0.001 or less, even for multi-node runs). Combined with a large maxsize and large batches, it is perhaps not too surprising that the search is quite slow.
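To make the suggested settings concrete, here is a hedged sketch of how they could be expressed as `PySRRegressor` keyword arguments. The `batch_size`, `weight_optimize`, `batching`, and `maxsize` parameters are real PySR options; the `maxsize` value below is purely illustrative, since the original question's value is not shown here.

```python
# Sketch of the tuning advice above, as PySRRegressor keyword arguments.
# The maxsize value is illustrative, not taken from the original question.
pysr_kwargs = dict(
    batching=True,          # evaluate expressions on mini-batches
    batch_size=50,          # "a batch size of 50 or even less"
    weight_optimize=0.001,  # optimize constants less often than 0.01
    maxsize=30,             # keep maxsize moderate; very large values slow the search
)

# Usage (assuming pysr is installed):
#   from pysr import PySRRegressor
#   model = PySRRegressor(**pysr_kwargs)
```

The trade-off being tuned here: a smaller `batch_size` makes each fitness evaluation cheaper, while a smaller `weight_optimize` reduces how often the (expensive) constant-optimization step runs.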

Also, what objective are you using here?

Cheers,

Answer selected by eelregit
Category: Q&A
Labels: PySR, SymbolicRegression.jl
2 participants