Instrument
Light curve fitting (Stages 4-6)
What is your suggestion?
Currently, Stage 5 can parallelize each individual fit. For large fits (e.g. shared fits) that is clearly the best approach, but for smaller fits it might be faster to run multiple fits simultaneously rather than one parallelized fit at a time. The trade-off likely depends on how long the likelihood function takes to evaluate: for emcee, I've heard that parallelizing a single fit only pays off once a likelihood evaluation takes roughly 0.1 seconds or more, though I have no idea whether a similar timescale applies to something like dynesty. If the overheads of parallelizing a single fit are too large, we could still speed up Stage 5 overall by running different channels' fits independently but in parallel (not as a shared fit, but by spawning multiple workers that each run an independent fit, making fuller use of the available CPU resources). This probably wouldn't be hard to implement, but we'd have to be careful with memory management, and it may not provide a significant improvement over the current parallelization technique. Perhaps we can wait for Yoni's profiling investigation in Issue #309 to determine whether this is worth pursuing.
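To illustrate the "multiple independent fits in parallel" idea, here is a minimal sketch using `concurrent.futures.ProcessPoolExecutor`. Note that `fit_channel` is a hypothetical stand-in for whatever Stage 5 does for one channel — it is not Eureka!'s actual API — and the dummy workload just burns CPU where a real emcee/dynesty run would go:

```python
# Sketch: run several independent per-channel fits side by side, instead of
# parallelizing inside a single fit. Processes (not threads) are used so that
# CPU-bound likelihood evaluations aren't serialized by the GIL.
from concurrent.futures import ProcessPoolExecutor


def fit_channel(channel):
    # Hypothetical placeholder for a real single-channel fit (e.g. an emcee
    # or dynesty run). Each worker runs its fit serially; the parallelism
    # comes from running different channels at the same time.
    result = sum(i * i for i in range(10_000))  # dummy CPU-bound "fit"
    return channel, result


def fit_all_channels(channels, max_workers=4):
    # max_workers is the knob for the memory-management concern above:
    # each worker process holds its own copy of that channel's data, so the
    # cap bounds peak memory use as well as CPU use.
    with ProcessPoolExecutor(max_workers=max_workers) as pool:
        return dict(pool.map(fit_channel, channels))


if __name__ == "__main__":
    results = fit_all_channels(range(8))
    print(sorted(results))
```

Whether this beats the current within-fit parallelization would depend on the likelihood evaluation time, which is exactly what the Issue #309 profiling should reveal.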
Error traceback output
No response
What operating system are you using?
No response
What version of Python are you running?
No response
What Python packages do you have installed?
No response
Code of Conduct
I agree to follow this project's Code of Conduct