pyflamegpu comparison #9
Draft: wants to merge 20 commits into base: FLAMEGPU2

Conversation

@Robadob Robadob commented Nov 20, 2023

  • python rtc
    • Schelling
    • Boids
    • benchmark.py
  • agent python
    • Schelling
    • Boids
    • benchmark.py
  • repast4py?
    • Schelling
    • Boids
    • benchmark.py
      • mpi version
  • Seed benchmark models
    • fgpu2
    • pyflamegpu
    • pyflamegpu-agentpython
    • repast4py
    • agents.jl (untested)
    • mesa (untested)
  • Update bench script in root
    • pyflamegpu
    • pyflamegpu-agentpython
    • repast4py
  • Update docker in root
    • pyflamegpu
    • repast4py
  • Rebase vs upstream?
  • Resolve (and test the agent-python version of) the potential div0 in all FGPU Schelling models' isHappy = (float(same_type_neighbours) / (same_type_neighbours + diff_type_neighbours)) > THRESHOLD (see the sketch after this list)
  • Resolve repast4py, initialising all ranks with the same random seed
  • Re-benchmark on current hardware (3090/V100/A100/H100pcie)
  • Version pinning / requirements files for pyflamegpu?
  • Update flamegpu to 2.0.0-rc.1
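
As flagged in the div0 checklist item above, the happiness ratio divides by the total neighbour count, which can be zero for an isolated agent. A minimal, framework-neutral Python sketch of the guard (the no-neighbour convention chosen here is an assumption, not taken from this PR):

def is_happy(same_type_neighbours, diff_type_neighbours, threshold):
    # Guard against the div0 that occurs when an agent has no neighbours at all.
    total = same_type_neighbours + diff_type_neighbours
    if total == 0:
        # Assumed convention: an agent with no neighbours counts as happy.
        return True
    return (same_type_neighbours / total) > threshold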

Added RTC init time to outputs.

This requires a means to disable the RTC cache, but JitifyCache is not currently exposed via SWIG.
@Robadob Robadob self-assigned this Nov 20, 2023
@Robadob Robadob changed the title from "Port Schelling to pyflamegpu" to "pyflamegpu comparison" Nov 20, 2023

@Robadob Robadob commented Nov 27, 2023

Py 3.8, CUDA 12.0, Mavericks (using the FLAMEGPU/FLAMEGPU2#1153 branch, as it contains a fix required for Schelling)

(venv) (base) rob@mavericks:~/ABM_Framework_Comparisons/pyflamegpu-agentpy$ python benchmark.py
FLAMEGPU2 flocking prepop times (s)  : [5.970654, 5.822776, 5.788271, 5.792181, 5.838711, 5.813876, 5.782237, 5.79098, 5.749717, 5.782655]
FLAMEGPU2 flocking popgen times (s)  : [0.99894, 1.028624, 0.960488, 0.984972, 1.005758, 1.071515, 1.020886, 1.006786, 1.016327, 0.983596]
FLAMEGPU2 flocking simulate times (s): [0.038535, 0.03893, 0.03791, 0.038728, 0.04037, 0.038438, 0.038292, 0.038916, 0.038361, 0.038519]
FLAMEGPU2 flocking rtc times (s): [5.559, 5.481, 5.461, 5.46, 5.474, 5.466, 5.45, 5.456, 5.428, 5.441]
FLAMEGPU2 flocking main times (s)    : [7.088199, 6.965024, 6.856667, 6.887598, 6.959482, 6.995701, 6.917973, 6.905403, 6.876439, 6.881605]
FLAMEGPU2 flocking prepop (mean ms)  : 5813.205800000001
FLAMEGPU2 flocking popgen (mean ms)  : 1007.7891999999999
FLAMEGPU2 flocking simulate (mean ms): 38.6999
FLAMEGPU2 flocking rtc (mean ms): 5467.6
FLAMEGPU2 flocking main (mean ms)    : 6933.409100000001
FLAMEGPU2 schelling prepop times (s)  : [22.553871, 22.610452, 22.580342, 22.582618, 22.758561, 22.62143, 22.792131, 22.569158, 22.629332, 22.652648]
FLAMEGPU2 schelling popgen times (s)  : [1.149062, 1.079772, 1.11006, 1.081574, 1.113666, 1.10193, 1.090191, 1.143826, 1.076933, 1.107323]
FLAMEGPU2 schelling simulate times (s): [0.083654, 0.083567, 0.082175, 0.083943, 0.083326, 0.082924, 0.082782, 0.082495, 0.083176, 0.084518]
FLAMEGPU2 schelling rtc times (s): [7.689, 7.773, 7.715, 7.754, 7.785, 7.745, 7.772, 7.772, 7.767, 7.738]
FLAMEGPU2 schelling main times (s)    : [23.851035, 23.846701, 23.84451, 23.825024, 24.027995, 23.872736, 24.035152, 23.863517, 23.855344, 23.930243]
FLAMEGPU2 schelling prepop (mean ms)  : 22635.0543
FLAMEGPU2 schelling popgen (mean ms)  : 1105.4337
FLAMEGPU2 schelling simulate (mean ms): 83.256
FLAMEGPU2 schelling rtc (mean ms): 7751.0
FLAMEGPU2 schelling main (mean ms)    : 23895.225700000003
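
Reading these numbers together (an interpretation of the output, not something the script reports): the flocking mean main time (6933 ms) is roughly prepop (5813 ms) + popgen (1008 ms) + simulate (39 ms) plus ~73 ms of residual overhead, and the rtc mean (5468 ms) accounts for most of prepop; the Schelling means decompose the same way (23895 ≈ 22635 + 1105 + 83, plus ~72 ms).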

I've stripped the repeated "Warning: Unknown argument '--disable-rtc-cache' passed to Simulation will be ignored" lines out of the output above.

Could set up an alternative bench script which forces RTC to use the cache, to show the difference.
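
As a hedged sketch (not the PR's actual change), one way to keep the benchmark's own --disable-rtc-cache flag out of the arguments handed to the simulation, so FLAME GPU stops warning about an unknown argument; it assumes the usual pyflamegpu pattern of passing an argv-style list to CUDASimulation.initialise():

import sys

def split_custom_args(argv):
    # Separate the benchmark's own flag from the argv list that will be handed
    # to the FLAME GPU simulation.
    disable_rtc_cache = "--disable-rtc-cache" in argv
    remaining = [a for a in argv if a != "--disable-rtc-cache"]
    return disable_rtc_cache, remaining

# Usage (sim is assumed to be a pyflamegpu.CUDASimulation):
#   disable_rtc_cache, argv = split_custom_args(sys.argv)
#   sim.initialise(argv)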

@Robadob Robadob commented Dec 6, 2023

Trying to implement repast Schelling in the FLAME GPU style isn't working; repast isn't well suited to agents communicating over a distance without moving. Hence I need to rewrite the model to instead have static cell agents and mobile bidding agents: the latter will move to bid for the cells, but will only hold a state/team. A sketch of this split follows.
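
A framework-neutral Python sketch (not repast4py or pyflamegpu code, and all names hypothetical) of the split described above: cells stay fixed in space and record which team occupies them, while bidders carry only a team and bid for cells instead of moving through space themselves.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Cell:
    # Static grid cell: fixed position plus the team of its current occupant.
    x: int
    y: int
    occupant_team: Optional[int] = None  # None => empty cell

@dataclass
class Bidder:
    # Mobile bidding agent: holds only a state/team and the cell it bids for.
    team: int
    current_cell: Optional[int] = None  # index of the cell it currently holds
    bid_cell: Optional[int] = None      # index of the cell bid for this step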

Robadob and others added 12 commits December 11, 2023 15:50
Runs to completion on Waimu, but need to decide how to validate it.
+minor tweaks to pyfgpu boids
Remove space and store xy pos as an agent variable

Init and print timing info only at rank 0

Write output csv to split log file
Note: the julia script does not log the time of the first run, where the model is compiled.
@ptheywood

Not doing MPI - treating it as a separate thing.

Comment on lines +6 to 17

 using Random

-# Does not use @bencmark, due to jobs being OOM killed for long-running models, with a higher maximum runtime to allow the required repetitions.
+# Does not use @benchmark, due to jobs being OOM killed for long-running models, with a higher maximum runtime to allow the required repetitions.
 # enabling the gc between samples did not resolve this BenchmarkTools.DEFAULT_PARAMETERS.gcsample = false
-# Runs each model SAMPLE_COUNT + 1 times, discarding hte first timing (which includes compilation)
+# Runs each model SAMPLE_COUNT + 1 times, discarding the first timing (which includes compilation)
 SAMPLE_COUNT = 10
 SEED = 12

 # Boids
 Random.seed!(SEED)
 times = []
 for i in 0:SAMPLE_COUNT

I think the changes to this file can be removed when #10 is merged
