This repository is a Proof-of-Concept (PoC) implementation of zero-copy ROSIDL message serialization and deserialization in ROS 2. It serves as a reference implementation for REP-0157, demonstrating the feasibility and performance benefits of zero-copy communication in ROS 2.
Further reading:
Set the ROS domain ID and launch the development container:

```bash
export ROS_DOMAIN_ID=<your_domain_id>
./docker/run.sh [--build] [-- COMMAND [ARGS...]]
```

Use `--build` to rebuild the image. Commands after `--` execute inside the container.
Modifications are applied during the Docker build when ROS core packages are compiled from source. The script-based approach allows for arbitrary changes beyond simple patches, including removing packages, adding COLCON_IGNORE files, or managing git submodules.
- Create a shell script `docker/files/patches_ros/<package_name>.sh` with your modifications
- For simple patches, reference a corresponding `.patch` file:

```bash
#!/bin/bash
script_location=$(dirname "$(readlink -f "$0")")
script_name=$(basename "$0")
script_name_minus_extension="${script_name%.*}"
git apply "${script_location}/${script_name_minus_extension}.patch"
```

- For other modifications, implement custom logic directly in the script
- Make the script executable:

```bash
chmod +x docker/files/patches_ros/<package_name>.sh
```
The script executes automatically during the build phase when processing the corresponding ROS core package. The same mechanism applies to external dependencies using docker/files/patches_ext/.
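For a modification that is not a simple patch, the script can carry arbitrary logic. As a minimal sketch, the following hypothetical `docker/files/patches_ros/<package_name>.sh` excludes a package from the build by dropping a `COLCON_IGNORE` marker; it assumes (not stated in the source) that the build harness invokes each script with the package's source directory as the working directory:

```shell
#!/bin/bash
# Hypothetical non-patch modification script. Assumption: the build phase runs
# this script from the corresponding package's source directory.
set -euo pipefail

# A COLCON_IGNORE file in a package directory makes colcon skip it entirely.
touch COLCON_IGNORE
```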
To open a shell in a running container:

```bash
docker exec -it <username>_zero_rosidl_devel bash
```

Replace `<username>` with your system username (e.g., `ekumen_zero_rosidl_devel`).
This project includes iRobot's ros2-performance benchmark suite to measure ROS 2 communication performance.
The benchmark packages are built automatically during the Docker image build. To rebuild the workspace inside a running container:
```bash
# Inside the container, rebuild the workspace
cd $USER_WS
rm -rf build/* install/*
source /opt/ros/jazzy/setup.bash
colcon build
```

To run the default Sierra Nevada topology benchmark:
```bash
# Inside the container, run the benchmark
cd $USER_WS
source $USER_WS/install/setup.bash
cd ./install/irobot_benchmark/lib/irobot_benchmark
./irobot_benchmark topology/sierra_nevada.json
```

Results are saved in the `sierra_nevada_log/` directory:
- `latency_total.txt`: Summary statistics (mean latency, late messages, lost messages)
- `latency_all.txt`: Detailed per-message latency data
- `metadata.txt`: Benchmark configuration parameters
- `resources.txt`: CPU and memory usage during the test
To view the summary results:

```bash
cat $USER_WS/install/irobot_benchmark/lib/irobot_benchmark/sierra_nevada_log/latency_total.txt
```

Example output:

```
received_msgs mean_us late_msgs late_perc too_late_msgs too_late_perc lost_msgs lost_perc
5278          131     0         0         0             0             0         0
```
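When comparing runs, it can help to extract a single field rather than read the whole file. A small sketch, assuming `latency_total.txt` uses the header/value layout shown in the example output above (the sample file below just reproduces that example for demonstration):

```shell
# Locate the mean_us column by name in the header, then print that field
# from the data row. The sample file mirrors the example output above.
log=latency_total.txt
printf '%s\n' \
  'received_msgs mean_us late_msgs late_perc too_late_msgs too_late_perc lost_msgs lost_perc' \
  '5278 131 0 0 0 0 0 0' > "$log"

awk 'NR==1 { for (i=1; i<=NF; i++) if ($i == "mean_us") c = i; next } { print $c }' "$log"
# → 131
```

Keying on the column name rather than a hard-coded position keeps the one-liner working if the column order ever changes.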
The benchmark includes several predefined topologies in the topology/ directory:
- `sierra_nevada.json` - Default topology (10 nodes, complex graph)
- `mont_blanc.json` - Large topology
- `cedar.json` - Medium complexity
- `white_mountain.json` - High complexity
- `debug_*.json` - Smaller topologies for testing
To run a different topology:

```bash
./irobot_benchmark topology/<topology_name>.json
```