When using FastDDS to transfer images (~4 MB) between participants, the latency I measure (~60 ms) is worse than with other tools like Zenoh or ZeroMQ (~40-50 ms). This should not happen, since FastDDS leverages SHM and the others don't.
Setup
I am using FastDDS Docker images, similar to #134.
I am using eProsima_Fast-DDS-v2.13.3-Linux.tgz and https://raw.githubusercontent.com/eProsima/Fast-DDS-python/main/fastdds_python.repos as the base resources for the Docker image.
I deploy the containers as such:
docker run -d --ipc shareable --name ipc_share_container alpine sleep infinity
docker run -itd --network host --ipc container:ipc_share_container --name app1
docker run -itd --network host --ipc container:ipc_share_container --name app2
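As a sanity check (my addition, not part of the original setup), one can verify from inside app1/app2 that the shared IPC namespace really exposes the same /dev/shm, which is where the Fast DDS SHM transport backs its segments:

```python
import os

# Fast DDS backs SHM segments with files under /dev/shm. Because app1 and
# app2 join ipc_share_container's IPC namespace, a segment created by one
# container should be listed identically in the other. Segment file names
# vary across versions (e.g. a fastrtps_/fastdds_ prefix), so match loosely.
segments = [f for f in os.listdir("/dev/shm") if "fast" in f.lower()]
print(f"{len(segments)} Fast DDS segment file(s): {segments}")
```

Running this in both app containers while a publisher is up should list the same files. Note also that Docker caps /dev/shm at 64 MB by default; it can be raised with --shm-size on the container that owns the IPC namespace.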
Participants
I am using the following XML file:
<?xml version="1.0" encoding="utf-8" ?>
<profiles xmlns="http://www.eprosima.com/XMLSchemas/fastRTPS_Profiles">
    <!-- Descriptors for new transports -->
    <transport_descriptors>
        <!-- UDP new transport -->
        <transport_descriptor>
            <transport_id>standard_udp_transport</transport_id>
            <type>UDPv4</type>
            <TTL>250</TTL>
        </transport_descriptor>
        <!-- Create a descriptor for the new transport -->
        <transport_descriptor>
            <transport_id>shm_transport</transport_id>
            <type>SHM</type> <!-- REQUIRED -->
            <maxMessageSize>524288</maxMessageSize> <!-- OPTIONAL uint32, valid for all transports -->
            <segment_size>1048576</segment_size> <!-- OPTIONAL uint32, SHM only -->
            <port_queue_capacity>1024</port_queue_capacity> <!-- OPTIONAL uint32, SHM only -->
            <healthy_check_timeout_ms>250</healthy_check_timeout_ms> <!-- OPTIONAL uint32, SHM only -->
            <default_reception_threads> <!-- OPTIONAL -->
                <scheduling_policy>-1</scheduling_policy>
                <priority>0</priority>
                <affinity>0</affinity>
                <stack_size>-1</stack_size>
            </default_reception_threads>
            <reception_threads> <!-- OPTIONAL -->
                <reception_thread port="12345">
                    <scheduling_policy>-1</scheduling_policy>
                    <priority>0</priority>
                    <affinity>0</affinity>
                    <stack_size>-1</stack_size>
                </reception_thread>
            </reception_threads>
        </transport_descriptor>
    </transport_descriptors>
    <participant profile_name="large_data_builtin_transports_options" is_default_profile="true">
        <rtps>
            <sendSocketBufferSize>1048576</sendSocketBufferSize>
            <listenSocketBufferSize>4194304</listenSocketBufferSize>
            <builtinTransports max_msg_size="310KB" sockets_size="310KB" non_blocking="true" tcp_negotiation_timeout="50">LARGE_DATA</builtinTransports>
        </rtps>
    </participant>
    <participant profile_name="video_publisher_qos">
        <rtps>
            <!-- Link the Transport Layer to the Participant -->
            <userTransports>
                <transport_id>shm_transport</transport_id>
                <transport_id>standard_udp_transport</transport_id>
            </userTransports>
            <useBuiltinTransports>false</useBuiltinTransports>
        </rtps>
    </participant>
</profiles>
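For reference, a minimal sketch of how these profiles would be picked up through the Python binding, which mirrors the C++ DomainParticipantFactory API (the file name profiles.xml is my assumption):

```python
import fastdds

factory = fastdds.DomainParticipantFactory.get_instance()
# Make the XML profiles above known to the factory.
factory.load_XML_profiles_file("profiles.xml")

# is_default_profile="true" makes large_data_builtin_transports_options apply
# to participants created without an explicit profile; the SHM+UDP profile
# has to be requested by name.
participant = factory.create_participant_with_profile(0, "video_publisher_qos")
assert participant is not None, "profile not found or transports failed to start"
```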
We have tested participants with both the large_data_builtin_transports_options and the video_publisher_qos profiles, and the results were about the same. The publisher and subscriber are built around a SensorMsgOperations class and a SensorMsg type generated from an IDL file; both are sketched below.
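The original post included the actual publisher sample, which did not survive extraction here; the following is a minimal reconstruction, not the author's code. It assumes a fastddsgen-generated SensorMsg module with a nanosecond stamp field and an octet-sequence data field, a topic named VideoTopic, and the profiles saved as profiles.xml:

```python
import time

import fastdds
import SensorMsg  # hypothetical module generated by `fastddsgen -python` from the IDL

# Load the profiles and create the participant, as in the sketch above.
factory = fastdds.DomainParticipantFactory.get_instance()
factory.load_XML_profiles_file("profiles.xml")
participant = factory.create_participant_with_profile(0, "video_publisher_qos")

# Register the generated type and create topic / publisher / writer.
topic_data_type = SensorMsg.SensorMsgPubSubType()
topic_data_type.setName("SensorMsg")
type_support = fastdds.TypeSupport(topic_data_type)  # keep a reference alive
participant.register_type(type_support)
topic = participant.create_topic("VideoTopic", topic_data_type.getName(),
                                 fastdds.TopicQos())
publisher = participant.create_publisher(fastdds.PublisherQos())
writer = publisher.create_datawriter(topic, fastdds.DataWriterQos())

# Publish ~4 MB samples, stamping each one so the subscriber can compute
# latency. The field setters below are assumptions about the generated binding.
payload = [0] * (4 * 1024 * 1024)  # ~4 MB of octets
for _ in range(100):
    sample = SensorMsg.SensorMsg()
    sample.stamp(time.time_ns())
    sample.data(payload)
    writer.write(sample)
    time.sleep(0.1)
```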
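A matching subscriber sketch under the same assumptions, computing one-way latency from the embedded timestamp (meaningful here because both containers share the host clock):

```python
import time

import fastdds
import SensorMsg  # same hypothetical generated module as on the publisher side

class ReaderListener(fastdds.DataReaderListener):
    def __init__(self):
        super().__init__()

    def on_data_available(self, reader):
        info = fastdds.SampleInfo()
        sample = SensorMsg.SensorMsg()
        reader.take_next_sample(sample, info)
        if info.valid_data:
            # One-way latency from the publisher's send timestamp.
            latency_ms = (time.time_ns() - sample.stamp()) / 1e6
            print(f"received {len(sample.data())} bytes, "
                  f"latency {latency_ms:.1f} ms")

factory = fastdds.DomainParticipantFactory.get_instance()
factory.load_XML_profiles_file("profiles.xml")
participant = factory.create_participant_with_profile(0, "video_publisher_qos")

topic_data_type = SensorMsg.SensorMsgPubSubType()
topic_data_type.setName("SensorMsg")
type_support = fastdds.TypeSupport(topic_data_type)
participant.register_type(type_support)
topic = participant.create_topic("VideoTopic", topic_data_type.getName(),
                                 fastdds.TopicQos())
subscriber = participant.create_subscriber(fastdds.SubscriberQos())
listener = ReaderListener()  # must stay alive while the reader exists
reader = subscriber.create_datareader(topic, fastdds.DataReaderQos(), listener)

while True:
    time.sleep(1)
```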
Is there any flag I am missing, or is there a known issue when using FastDDS Python inside Docker containers?
I would assume these values are not normal and that I should be achieving much better latencies.
Thanks in advance for the help 😄