
Extremely high latency when enabling shared memory #525

Open
OscarMrZ opened this issue Jan 13, 2025 · 3 comments

OscarMrZ commented Jan 13, 2025

Bug report

Required Info:

  • Operating System:
    • Ubuntu 22.04
  • Installation type:
    • From Humble binaries
  • Version or commit hash:
    • Latest Humble
  • Client library (if applicable):
    • rclcpp
  • Relevant hardware info:
    • memory: 15 GiB system memory
    • processor: 11th Gen Intel(R) Core(TM) i7-11800H @ 2.30GHz, 16 cores
    • display: GA107M [GeForce RTX 3050 Mobile]

Summary

I’ve been testing the performance of CycloneDDS with shared memory (SHM) enabled to demonstrate the performance improvements it should provide. I already encountered latencies I was not expecting with just CycloneDDS + iceoryx standalone, which I have reported in a separate issue in the CycloneDDS repository.

I've encountered two further issues that only happen with rmw_cyclonedds:

  1. The performance gain is almost non-existent. The SHM transport seems about as performant as the COPY one, while in my CycloneDDS standalone tests the latency for SHARED was half that of COPY.
  2. There is a latency explosion when increasing the number of subscribers, going well over 200 ms. On my testing machine this happens at around 18 subs. Please see the next section for the exact details of the test.

Steps to reproduce issue

I used the Apex.AI performance_test package and the Dockerfile they provide (you can follow the instructions there to spin up the container directly).

I defined the following test:

---
experiments:
  -
    process_configuration: INTER_PROCESS
    execution_strategy: INTER_THREAD
    sample_transport: 
      - BY_COPY
      - SHARED_MEMORY
      - LOANED_SAMPLES
    msg: Array8m
    pubs: 1
    subs: 
     - 1
     - 2
     - 4
     - 6
     - 8
     - 10
     - 12
     - 14
     - 16
     - 18
     - 20
     - 22
     - 24
     - 26
     - 28
     - 30
     - 32
    rate: 30
    reliability: BEST_EFFORT
    durability: VOLATILE
    history: KEEP_LAST
    history_depth: 5
    max_runtime: 30
    ignore_seconds: 5

After installing the performance package, you can run it with:

ros2 run performance_report runner --configs definition.yaml

This test measures the average latency between a publisher publishing an 8 MB payload (around the size of a typical point cloud or a Full HD image) at 30 Hz and an increasing number of subscribers, for the three transports available for Cyclone: UDP (copy), SHM, and SHM with loaned samples. The QoS settings are the ones specified in the YAML.
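For context, this is roughly what the LOANED_SAMPLES path looks like at the rclcpp level. This is only a minimal sketch, not the Apex.AI harness: it assumes rmw_cyclonedds_cpp with iceoryx SHM enabled and RouDi running, and it uses std_msgs::msg::Float64 instead of the 8 MB Array8m for brevity (loaning requires a fixed-size message type).

// Minimal sketch of a zero-copy ("loaned samples") publisher in rclcpp.
// Assumes an RMW with loan support (e.g. rmw_cyclonedds_cpp with iceoryx
// SHM enabled); falls back to an ordinary by-copy publish otherwise.
#include <chrono>
#include <memory>
#include <utility>

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/float64.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("loan_demo");

  // QoS matching the YAML above: BEST_EFFORT, VOLATILE, KEEP_LAST with depth 5.
  auto qos = rclcpp::QoS(rclcpp::KeepLast(5)).best_effort().durability_volatile();
  auto pub = node->create_publisher<std_msgs::msg::Float64>("chatter", qos);

  // Publish at roughly 30 Hz, as in the experiment.
  auto timer = node->create_wall_timer(
    std::chrono::milliseconds(33),
    [pub]() {
      if (pub->can_loan_messages()) {
        // Zero-copy path: the sample is allocated in middleware-owned shared memory.
        auto loaned = pub->borrow_loaned_message();
        loaned.get().data = 42.0;
        pub->publish(std::move(loaned));
      } else {
        // By-copy fallback.
        std_msgs::msg::Float64 msg;
        msg.data = 42.0;
        pub->publish(msg);
      }
    });

  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}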

Expected results

Quite low latency, as expected from iceoryx, mostly independent of the number of subscribers. Alternatively, if this is indeed an issue, at least the same results as those obtained in the standalone tests. For reference:

[Plot: reference latency vs. number of subscribers from the standalone CycloneDDS + iceoryx tests, for BY_COPY, SHARED_MEMORY, and LOANED_SAMPLES]

Actual results

A negligible performance gain for fewer than 10 subs, which keeps worsening until the latency explodes at around 24 subs.

[Plot: measured latency vs. number of subscribers with rmw_cyclonedds, showing the negligible SHM gain and the explosion around 24 subs]

I also ran the test on one of our robots, and the explosion happens even sooner, so it seems to be related to CPU capabilities (both machines have the same 16 GB of RAM):

[Plot: the same test run on the robot, where the latency explosion appears at an even lower subscriber count]

Both the reference plot and the results plots were produced on the same machine under the same conditions, just one after the other, with the same memory pool configuration. At first I thought it might be running out of RAM or starting to swap, but I can confirm that doesn't seem to be the case.

Do you guys have any ideas why this may happen and why it only seems to happen in the RMW integration of Cyclone?

Many thanks in advance!


OscarMrZ commented Jan 14, 2025

Also, to add further info: the issue is related to the history depth value. I ran an additional experiment measuring the latency with 1, 16, and 32 subs for different depth values. Keep in mind that durability is set to VOLATILE, so there is no impact from that side, only from the size of the subscriber queues.

Here is the evolution of the latency with COPY:

[Plot: latency vs. history depth for 1, 16, and 32 subscribers with BY_COPY]

As you can see, the latency depends only on the number of subscribers.

Here is the evolution of the latency with SHARED:

[Plot: latency vs. history depth for 1, 16, and 32 subscribers with SHARED_MEMORY]

As you can see, the latency grows far larger as the history depth increases for the same number of subscribers. With depth = 1, the latency is similar to COPY (which is strange enough on its own), but it explodes with higher values.

As far as I know, this value effectively increases the amount of memory the subscribers need to keep track of the different chunks, but I don't see why that should result in such an explosion. It does use far more chunks with higher values.
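For reference, here is a minimal subscriber sketch (not the test harness itself) using the same QoS, to make explicit what the depth means on the subscriber side. The assumption, which I cannot fully verify, is that with a zero-copy transport each sample still held by the subscription queue pins an iceoryx chunk until its shared_ptr is released, so the depth should only bound per-subscriber chunk usage rather than add latency by itself.

// Minimal sketch of a subscriber with the experiment's QoS
// (BEST_EFFORT, VOLATILE, KEEP_LAST depth 5). With a zero-copy transport the
// received message may live in shared memory; the underlying chunk is
// released once the callback's shared_ptr goes out of scope.
#include <memory>

#include "rclcpp/rclcpp.hpp"
#include "std_msgs/msg/float64.hpp"

int main(int argc, char ** argv)
{
  rclcpp::init(argc, argv);
  auto node = std::make_shared<rclcpp::Node>("shm_sub_demo");

  auto qos = rclcpp::QoS(rclcpp::KeepLast(5)).best_effort().durability_volatile();
  auto sub = node->create_subscription<std_msgs::msg::Float64>(
    "chatter", qos,
    [](std_msgs::msg::Float64::ConstSharedPtr msg) {
      // Work directly on the (possibly shared-memory-backed) sample.
      (void)msg;
    });

  rclcpp::spin(node);
  rclcpp::shutdown();
  return 0;
}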


sloretz commented Jan 24, 2025

The SHM transport seems as performant as the COPY one, while in my CycloneDDS standalone tests, the latency for SHARED was half of the one for COPY.

Mind sharing the CycloneDDS standalone tests? It might help show differences between rmw_cyclonedds and pure CycloneDDS.

@eboasson FYI

@OscarMrZ

Hello @sloretz, thanks for your answer!

This is the test with just Cyclone:

---
experiments:
  -
    com_mean: CycloneDDS
    process_configuration: INTER_PROCESS
    execution_strategy: INTER_THREAD
    sample_transport: 
      - BY_COPY
      - SHARED_MEMORY
      - LOANED_SAMPLES
    msg: Array8m
    pubs: 1
    subs: 
     - 1
     - 2
     - 4
     - 6
     - 8
     - 10
     - 12
     - 14
     - 16
     - 18
     - 20
     - 22
     - 24
     - 26
     - 28
     - 30
     - 32
    rate: 30
    reliability: BEST_EFFORT
    durability: VOLATILE
    history: KEEP_LAST
    history_depth: 5
    max_runtime: 30
    ignore_seconds: 5

Also, for reference, I opened a related issue in the Cyclone repo, where you can find a lot of additional info and more complete performance results.
