I think that, in a sense, it should be enough to extrapolate: running models in parallel should give a speed-up equal to the number of available cores (provided memory usage does not exceed what is available).
Certainly, if things are not managed optimally by each framework, the speed-up could be less than the number of cores, and it would be interesting to check this anyway.
The discussion here https://discourse.julialang.org/t/ann-vahana-jl-framework-for-large-scale-agent-based-models/102024 made me realize: Agents.jl allows distributed computing straightforwardly when e.g. scanning parameters or running a model several times with different seeds to get statistical convergence.
Yet, none of the comparisons here utilize this. Is this fair to us? Probably not. Should we modify one of the existing examples so that, instead of running a model once, it runs 1000 models, each with a different RNG seed? Each framework could then use whatever (API-declared) tools it offers to accelerate this computation.
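To make the proposal concrete, here is a minimal sketch in plain Python (deliberately framework-agnostic; `run_model` is a hypothetical stand-in for one benchmark run, not an actual model from the comparison): run the same toy model once per seed, distributing the ensemble over a pool of worker processes. Results are determined by the seed alone, so the parallel ensemble is reproducible regardless of the number of workers.

```python
import multiprocessing as mp
import random

def run_model(seed):
    """Stand-in for a single ABM run: any deterministic function of the seed.

    A real benchmark would construct the model with this seed and step it;
    here we just draw from a seeded RNG so the example is self-contained.
    """
    rng = random.Random(seed)
    return sum(rng.random() for _ in range(10_000))

def run_ensemble(seeds, processes=None):
    """Run one model per seed, spread across a pool of worker processes.

    With `processes=None` the pool uses all available cores, which is
    where the up-to-number-of-cores speed-up would come from.
    """
    with mp.Pool(processes=processes) as pool:
        return pool.map(run_model, seeds)

if __name__ == "__main__":
    seeds = range(1000)  # 1000 models, each with a different RNG seed
    results = run_ensemble(seeds)
    # Same seeds => same results, however many workers were used.
    assert results[:3] == [run_model(s) for s in range(3)]
```

Each framework would replace `run_model` with its own model and `run_ensemble` with whatever parallel machinery its API declares (e.g. Julia's `Distributed` for Agents.jl); the benchmark would then time the whole ensemble rather than a single run.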
@Tortar thoughts?