If you are a ROS developer/user and you blog about it, ROS wants those contributions on this page! All you need for that to happen is:
have an RSS/Atom blog (no Twitter/Facebook/Google+ posts)
open a pull request on the planet.ros tracker indicating your name and your RSS/Atom feed URL. (You can just edit the file and click "Propose file change" to open a pull request.)
tag your ROS-related posts with any of the following categories: "ROS", "R.O.S.", "ros", "r.o.s."
Warnings
For security reasons, HTML iframes, embeds, objects, and JavaScript will be stripped out. Only YouTube videos in object and embed tags will be kept.
Guidelines
Planet ROS is one of the public faces of ROS and is read by users and potential contributors. The content remains the opinion of the bloggers but Planet ROS reserves the right to remove offensive posts.
Blogs should be related to ROS, but that does not mean they should be devoid of personal subjects and opinions: those are encouraged, since Planet ROS is a chance to learn more about ROS developers.
Posts can be positive and promote ROS, or constructive and describe issues, but should not contain gratuitous flaming. We want to keep ROS welcoming :)
ROS covers a wide variety of people and cultures. Profanities, prejudice, lewd comments and content likely to offend are to be avoided. Do not make personal attacks or attacks against other projects on your blog.
Suggestions ?
If you find any bug or have any suggestion, please file a bug on the planet.ros tracker.
We're curious: does a dedicated working group (or similar community) already exist for maintaining and operating ROS 2-based robots in industrial environments? If not, maybe it's time to build one.
At Siemens, our ROS 2 efforts are focused on four key challenges:
Shipping: How do you bring ROS 2-based systems into real industrial deployments?
Debugging: How do you quickly find bugs when your machine runs not just ROS 2, but also PLCs, HMIs, network switches, safety sensors, and more?
Updates: How do you keep your software reliably up to date?
Fleet health: How do you detect critical bugs locally and across your entire fleet?
We'd love to connect with the community and learn what's already out there!
We're actively looking to engage with others working in this space, whether you're building solutions, facing the same challenges, or have already found answers we haven't discovered yet.
Here are some data points weāve gathered so far:
Exciting tools that just dropped
The community has been busy! A few noteworthy new tools:
Big shoutout to @doisyg for sharing impressive insights on how they manage upgrades across a large fleet of robots in the field!
And I am sure there is a vast range of further open-source tools out there that can help all of us.
What Siemens has shared so far (all talks in English)
We've been open about our own challenges and learnings:
We're strong believers in open collaboration. Whether you're a researcher, developer, or industry practitioner, let's align our efforts and avoid reinventing the wheel.
A few things we'd especially love to hear about:
Open EU calls related to these topics: always happy to explore funding opportunities and collaborative projects
Bachelor & Master thesis requests from EU students: if you're looking for a meaningful, real-world topic in this space, reach out! We'd love to support the next generation of robotics engineers
Cheers from Germany
Florian
Update as of Tue, Mar 3, 2026 11:00 PM UTC
Let's try to pinpoint when a potential virtual meeting could happen:
(Please also vote if the day does not fit; right now I am more interested in finding the right time of day.)
We are pleased to officially announce ROS Meetup Medellín 2026, a space designed to bring together the robotics, ROS, and autonomous systems community in Colombia.
April 29: Universidad EIA (Poster Session)
April 30: Parque Explora (Talk Session)
Medellín, recognized for its strong innovation and technology ecosystem, will be the perfect setting to connect academia, industry, and the open-source community around ROS and robotics.
Call for Speakers: open
Call for Posters: open
Attendee registration: available
If you are developing ROS-based projects, conducting robotics research, or building AI-driven and autonomous systems solutions, we invite you to share your work and actively participate in the event.
MAHE Mobility Challenge 2026 is a national-level hybrid hackathon hosted by CEAM and the Department of ECE at Manipal Institute of Technology (MIT), Bengaluru.
This challenge is designed for B.Tech students passionate about autonomous and connected mobility systems, offering an opportunity to ideate, design, and build real working prototypes addressing next-generation mobility challenges.
Total Prize Pool: ₹3 Lakhs
Challenge Tracks
• AI in Mobility
Intelligent perception systems, predictive modeling, adaptive routing, autonomy stacks
• Robotics & Control
Embedded systems, actuator integration, simulation workflows, control architecture design
• Cybersecurity for Mobility
Secure V2X communication, threat modeling, safety-focused system hardening for connected vehicles
Format & Timeline
Registrations Close: 15 March 2026
Round 1: Online technical proposal submission (Deadline: 31 March 2026)
Round 2: Offline prototype demonstration at MIT Bengaluru (17 April 2026)
Shortlisted teams will build and demonstrate working prototypes during the final round.
Participants are encouraged to leverage open-source robotics frameworks (ROS), simulation environments, and modular autonomy architectures where relevant.
We welcome engagement from students and robotics enthusiasts interested in contributing to secure and intelligent mobility systems.
SIPA (Spatial Intelligence Physical Audit) is a trajectory-level physical consistency diagnostic. It does not require source code access or internal simulator states and directly audits 7-DoF CSV trajectories. By design, SIPA is compatible with any system that produces spatial motion data. Its principle is based on the Non-Associative Residual Hypothesis (NARH).
1. What SIPA Can Audit
SIPA operates on the final motion output, enabling post-hoc physical forensics for:
Physics Simulators: NVIDIA Isaac Sim, MuJoCo, PyBullet, Gazebo.
Neural World Models: World Labs Marble, OpenAI Sora, Runway Gen-3 (via pose extraction).
Robotic Foundation Models: Any system outputting 7-DoF trajectories.
Real-World Capture: OptiTrack, Vicon, or SLAM-based motion sequences.
Supported Data Pathways:
Tier 1: Native Spatial Intelligence (Recommended). High-fidelity data from Isaac Sim, MuJoCo, or robot telemetry.
Tier 2: Structured World Generators. Emerging models like World Labs Marble, where 3D states are programmable and exportable.
Tier 3: Pixel Video Models (Experimental). Pure video generators (Sora, Kling). This requires an additional pose-lifting step (Video → Pose → SIPA) and is currently research-grade due to vision uncertainty.
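As a concrete illustration of the Tier 1 pathway, a 7-DoF trajectory can be handled as a plain array. The column layout below (position x, y, z plus a unit quaternion) is an assumption for illustration only; consult the SIPA documentation for its actual CSV schema:

```python
import numpy as np

# Assumed layout: one row per timestep, columns = [x, y, z, qw, qx, qy, qz].
# A simulator export would be loaded with:
#   traj = np.loadtxt("trajectory.csv", delimiter=",")
# Here we synthesize a short stand-in trajectory instead.
rng = np.random.default_rng(1)
traj = rng.standard_normal((100, 7))

positions, quaternions = traj[:, :3], traj[:, 3:]
# Normalize quaternions so orientation checks operate on unit rotations
quaternions /= np.linalg.norm(quaternions, axis=1, keepdims=True)

print(positions.shape, quaternions.shape)  # (100, 3) (100, 4)
```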
2. The Logic: Non-Associative Residual Hypothesis (NARH)
NARH posits that physical inconsistency stems from discrete solver ordering rather than just algebraic error.
(1) Setting
Consider a rigid-body simulation system defined by:
State space S \subset \mathbb{R}^n
Associative update operator \Phi_{\Delta t} : S \to S
Parallel constraint resolution composed of sub-operators `\{\Psi_i\}_{i=1}^k`
The simulator implements a discrete update:
s_{t+1} = \Phi_{\Delta t}\left( \Psi_{\sigma(k)} \circ \cdots \circ \Psi_{\sigma(1)} \right)(s_t)
where \sigma is an execution order induced by:
constraint partitioning
thread scheduling
contact batching
solver splitting
Each \Psi_i is individually well-defined, but their composition order may vary.
(2) Order Sensitivity
Although each operator \Psi_i belongs to an associative algebra (e.g., matrix multiplication, quaternion composition), the composition of numerically approximated operators may satisfy:
(\Psi_a \circ \Psi_b) \circ \Psi_c \neq \Psi_a \circ (\Psi_b \circ \Psi_c)
(3) The Non-Associative Residual
We define the Non-Associative Residual (NAR) at state s_t as:
R_t = \lVert A(a,b,c; s_t) \rVert
for a chosen triple (a, b, c) of sub-operators representative of contact or constraint updates, where A(a,b,c; s_t) is the associator: the difference between evaluating the triple in its two composition orders at state s_t.
This residual measures path-dependence induced by discrete solver ordering, not algebraic non-associativity of the state representation.
(4) Hypothesis (NARH)
In high-interaction-density regimes (e.g., contact-rich robotics, high-speed manipulation), the Non-Associative Residual R_t becomes non-negligible relative to scalar stability metrics, and accumulates over time as a structured drift term.
Formally, there exists a regime such that:
\sum_{t=0}^{T} R_t \not\approx 0
even when:
\Vert s_{t+1} - s_t \Vert remains bounded.
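The residual above has a simple miniature analogue: matrix multiplication is associative in exact arithmetic, yet the two evaluation orders of a float32 product chain differ by rounding. This illustrative sketch (not SIPA's actual audit code) measures such a floating-point associator norm:

```python
import numpy as np

def associator_norm(a, b, c, compose):
    # || (a∘b)∘c - a∘(b∘c) || : zero in exact arithmetic,
    # non-zero under finite-precision evaluation
    left = compose(compose(a, b), c)
    right = compose(a, compose(b, c))
    return np.linalg.norm(left - right)

rng = np.random.default_rng(0)
a, b, c = (rng.standard_normal((64, 64)).astype(np.float32) for _ in range(3))

r = associator_norm(a, b, c, np.matmul)
print(r)  # small but strictly positive rounding residual
```

Accumulating such residuals along a trajectory is analogous to the drift term \sum_t R_t that NARH claims becomes non-negligible in contact-rich regimes.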
(5) Interpretation
This hypothesis does not claim:
that simulators are mathematically invalid,
that associative algebras are incorrect,
or that hardware tiling causes topological inconsistency.
Instead, it asserts:
Discrete parallel constraint resolution introduces a measurable order-dependent residual that is not explicitly encoded in the state space.
This residual may contribute to:
sim-to-real divergence,
policy brittleness,
instability under reordering of equivalent control inputs.
(6) Falsifiability
NARH is falsified if:
R_t remains within numerical noise across interaction densities.
Scalar metrics (e.g., kinetic energy norm, velocity norm) detect instability earlier or equally compared to any associator-derived signal.
(7) Research Implication
If validated, NARH suggests that:
Order sensitivity is a structural property of discrete solvers.
Additional diagnostic signals (e.g., associator magnitude) may serve as early-warning indicators.
Embodied AI training in simulation may implicitly depend on hidden order-stability assumptions.
If invalidated, the experiment establishes an empirically order-invariant regime: a valuable boundary characterization of solver behavior.
3. Physical Integrity Rating (PIR)
SIPA introduces the Physical Integrity Rating (PIR), a heuristic composite indicator designed to quantify the causal reliability of motion trajectories. PIR evaluates whether a world model is "physically solvent" or accumulating "kinetic debt."
The Metric
PIR = Q_{\text{data}} \times (1 - D_{\text{phys}})
D_{\text{phys}} (Physical Debt): Log-normalized residual derived from the Octonion Associator, testing the NARH limits.
PIR \in [0, 1]: Higher indicates higher physical fidelity.
Credit Rating Scale
PIR Score
Rating
Label
Operational Meaning
Ć¢ā°Ā„ 0.85
A
High Integrity
Reliable for industrial simulation and safety-critical AI.
Ć¢ā°Ā„ 0.70
B
Acceptable
Generally consistent; minor numerical drift detected.
Ć¢ā°Ā„ 0.50
C
Speculative
Ć¢ā¬ÅVisual plausibility maintained, but causal logic is shaky.Ć¢ā¬ļæ½
Ć¢ā°Ā„ 0.30
D
High Risk
Ć¢ā¬ÅElevated physical debt; prone to Ć¢ā¬ÅĆ¢ā¬ÅhallucinationsĆ¢ā¬ļæ½Ć¢ā¬ļæ½ under stress.Ć¢ā¬ļæ½
< 0.30
F
Critical
Physical bankruptcy; trajectory violates fundamental causality.
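Under the formula above, the score computation and rating lookup reduce to a few lines. This sketch assumes Q_data and D_phys are already normalized to [0, 1]; the function names are illustrative, not part of SIPA's API:

```python
def pir(q_data: float, d_phys: float) -> float:
    # PIR = Q_data * (1 - D_phys), both inputs assumed in [0, 1]
    return q_data * (1.0 - d_phys)

def rating(score: float) -> str:
    # Thresholds from the credit rating scale above
    for threshold, grade in ((0.85, "A"), (0.70, "B"), (0.50, "C"), (0.30, "D")):
        if score >= threshold:
            return grade
    return "F"

print(rating(pir(0.95, 0.05)))  # → A (score 0.9025: high integrity)
```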
Note on Early Adoption: Since its initialization, we've observed a unique anomaly: 120 institutional entities cloned the repo via CLI with near-zero web UI traffic. This suggests that the industry (Sim-to-Real teams and Tech DD leads) is already utilizing NARH for internal audits. View Traffic Evidence
Call to Action
We invite the ROS community to stress-test their simulators and world models using SIPA. Any questions can be discussed under this topic!
OpenClaw, a popular open-source project, has become a highlight of the robotic arm control field thanks to its intuitive operation and strong adaptability. It enables full end-to-end linkage between AI commands and device execution, greatly lowering the barrier to robotic arm control. This article focuses on practical implementation: combined with the pyAgxArm SDK, we will guide you through downloading, installing, and configuring OpenClaw to achieve efficient control of the NERO 7-axis robotic arm.
Locate the Quick Start tab and execute the one-click installation script
Start Configuring OpenClaw
It is recommended to select the QWEN model
Select all hooks
Open the web interface
Teach OpenClaw the Skill and Rules for Controlling the Robotic Arm
Create an agx_arm_codegen directory in the skill folder, then create the following skill files:
SKILLS.md
---
name: agx-arm-codegen
description: Guide OpenClaw to generate pyAgxArm-based robotic arm control code from user natural language. When users describe robotic arm movements with prompts and existing scripts cannot directly meet the requirements, automatically organize and generate executable Python scripts based on the APIs and examples provided by this skill.
metadata:
  {
    "openclaw": {
      "emoji": "ē",
      "requires": { "bins": ["python3", "pip3"] }
    }
  }
---
## Function Overview
- This skill is used to **guide OpenClaw to generate** executable pyAgxArm control code (Python scripts) based on user natural language descriptions, rather than just calling existing CLIs.
- Reference SDK: pyAgxArm ([GitHub](https://github.com/agilexrobotics/pyAgxArm)); Reference example: `pyAgxArm/demos/nero/test1.py`.
## When to Use This Skill
- Users say "Write code to control the robotic arm", "Generate a control script based on my description", "Make the robotic arm perform multiple actions in sequence", etc.
- Users explicitly request to "generate Python code" or "provide a runnable script" to control AgileX robotic arms such as Nero/Piper.
## Generate Code Using This Skill
- Based on user prompts, combine the APIs and templates in `references/pyagxarm-api.md` of this skill to generate a complete, runnable Python script.
- After generation, explain: the script needs to run in an environment with pyAgxArm and python-can installed, and CAN must be activated and the robotic arm powered on; remind users to pay attention to safety (no one in the workspace, small-scale testing first is recommended).
## Rules for Generating Code
1. **Connection and Configuration**
- Use `create_agx_arm_config(robot="nero", comm="can", channel="can0", interface="socketcan")` to create a configuration (Nero example; Piper can use `robot="piper"`).
- Use `AgxArmFactory.create_arm(robot_cfg)` to create a robotic arm instance, then `robot.connect()` to establish a connection.
2. **Enabling and Pre-Motion**
- CRITICAL: The robot MUST BE ENABLED before switching modes. If the robot is in a disabled state, you cannot switch modes.
- Switch to normal mode before movement, then enable: `robot.set_normal_mode()`, then poll `robot.enable()` until successful; you can set `robot.set_speed_percent(100)`.
- Motion modes: Whenever using move_* or needing to switch to * mode, explicitly set `robot.set_motion_mode(robot.MOTION_MODE.J)` (Joint), `P` (Point-to-Point), `L` (Linear), `C` (Circular), `JS` (Joint Quick Response, use with caution).
3. **Motion Interfaces and Units**
- Joint motion: `robot.move_j([j1, j2, ..., j7])`, unit is **radians**, Nero has 7 joints.
- Cartesian: `robot.move_p(pose)` / `robot.move_l(pose)`, pose is `[x, y, z, roll, pitch, yaw]`, position unit is **meters**, attitude is **radians**.
- Circular: `robot.move_c(start_pose, mid_pose, end_pose)`, each pose is 6 floating-point numbers.
- CRITICAL: All movement commands (move_j, move_js, move_mit, move_c, move_l, move_p) must be used in normal mode
- After motion completion, poll `robot.get_arm_status().msg.motion_status == 0` or encapsulate `wait_motion_done(robot, timeout=...)` before executing the next step.
4. **Mode Switching**
- Switching modes (master, slave, normal) requires 1s delay before and after the mode switch
- Use `robot.set_normal_mode()` to set normal mode
- Use `robot.set_master_mode()` to set master mode
- Use `robot.set_slave_mode()` to set slave mode
- CRITICAL: Enable the robot FIRST with `robot.enable()` BEFORE switching modes
5. **Safety and Conclusion**
- In the generated script, note: confirm workspace safety before execution; small-scale movement is recommended for the first time; use physical emergency stop or `robot.electronic_emergency_stop()` / `robot.disable()` in case of emergency.
- If the user requests "disable after completion", call `robot.disable()` at the end of the script.
6. **Implementation Details**
- When waiting for motion to complete, use shorter timeout (2-3 seconds)
- After each mechanical arm operation, add a small sleep (0.01 seconds)
- Motion completion detection: `robot.get_arm_status().msg.motion_status == 0` (not == 1)
## Reference Files
- **API and Minimal Runnable Template**: `references/pyagxarm-api.md`
When generating code, refer to the interfaces and code snippets in this file to ensure consistency with pyAgxArm and test1.py usage.
## Safety Notes
- The generated code will drive a physical robotic arm. Users must be reminded: confirm no personnel or obstacles in the workspace before execution; it is recommended to test with small movements and low speeds first.
- High-risk modes (such as `move_js`, `move_mit`) should be marked with risks in code comments or user explanations, and it is recommended to use them only after understanding the consequences.
- This skill is only responsible for "guiding code generation" and does not directly execute movements; users need to prepare the actual running environment, CAN activation, and pyAgxArm installation by themselves (refer to environment preparation in the agx-arm skill).
pyagxarm-api.md
# pyAgxArm API Quick Reference & Minimal Runnable Template
For reference when OpenClaw generates robotic arm control code from user natural language. SDK source: pyAgxArm ([GitHub](https://github.com/agilexrobotics/pyAgxArm)); Example reference: `pyAgxArm/demos/nero/test1.py`.
## 1. Connection and Configuration
```python
from pyAgxArm import create_agx_arm_config, AgxArmFactory
# Configuration: robot options - nero / piper / piper_h / piper_l / piper_x; channel e.g. can0
robot_cfg = create_agx_arm_config(
robot="nero",
comm="can",
channel="can0",
interface="socketcan",
)
robot = AgxArmFactory.create_arm(robot_cfg)
robot.connect()
```

- `create_agx_arm_config(robot, comm="can", channel="can0", interface="socketcan", **kwargs)`: Create the configuration dictionary; CAN-related parameters are passed via kwargs (e.g. `channel`, `interface`).
- `AgxArmFactory.create_arm(config)`: Return a robotic arm driver instance.
- `robot.connect()`: Establish the CAN connection and start the reading thread.

## 2. Enabling and Modes
```python
robot.set_normal_mode()  # Normal mode (single arm control)

# Enable: poll until successful
while not robot.enable():
    time.sleep(0.01)

robot.set_speed_percent(100)  # Motion speed percentage 0-100

# Disable
while not robot.disable():
    time.sleep(0.01)
```

MIT impedance/torque control (advanced): `robot.set_motion_mode(robot.MOTION_MODE.MIT)`, `robot.move_mit(joint_index, p_des, v_des, kp, kd, t_ff)`; refer to the SDK for parameter ranges and use with caution.
## 6. Minimal Runnable Template (Extend based on this when generating code)

```python
#!/usr/bin/env python3
import time

from pyAgxArm import create_agx_arm_config, AgxArmFactory


def wait_motion_done(robot, timeout: float = 3.0, poll_interval: float = 0.1) -> bool:
    # Prefer a shorter timeout (2-3 s)
    time.sleep(0.5)
    start_t = time.monotonic()
    while True:
        status = robot.get_arm_status()
        if status is not None and getattr(status.msg, "motion_status", None) == 0:
            return True
        if time.monotonic() - start_t > timeout:
            return False
        time.sleep(poll_interval)


def main():
    robot_cfg = create_agx_arm_config(
        robot="nero",
        comm="can",
        channel="can0",
        interface="socketcan",
    )
    robot = AgxArmFactory.create_arm(robot_cfg)
    robot.connect()

    # Mode switching requires a 1 s delay before and after
    time.sleep(1)
    robot.set_normal_mode()
    time.sleep(1)

    # CRITICAL: the robot must be enabled before switching motion modes
    while not robot.enable():
        time.sleep(0.01)
    robot.set_speed_percent(80)

    # CRITICAL: all movement commands must be used in normal mode
    robot.set_motion_mode(robot.MOTION_MODE.J)
    robot.move_j([0.05, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])
    time.sleep(0.01)  # small sleep after each arm operation
    wait_motion_done(robot, timeout=3.0)

    # Optional: disable before exit
    # while not robot.disable():
    #     time.sleep(0.01)


if __name__ == "__main__":
    main()
```

When generating code, replace or add motion steps (`move_j` / `move_p` / `move_l` / `move_c`, etc.) according to the user's description, and keep the connection, enabling, `wait_motion_done`, and units (radians/meters) consistent.
Next, teach OpenClaw this skill
After configuring the robotic arm CAN communication and Python environment, OpenClaw can automatically call the SDK driver to generate control code and control the robotic arm
LinkForge v1.3.0 was just released! But more than a release announcement, I want to take a moment to share the bigger vision of where this project is going, because it's grown far beyond a Blender plugin.
The Vision
LinkForge is not just a URDF exporter for Blender. The architecture is intentionally built as a Hexagonal Core, fully decoupled from any single 3D host or output format.
The mission is simple:
Bridge the gap between creative 3D design and high-fidelity robotics engineering.
Design Systems (Blender, FreeCAD, Fusion 360) → LinkForge Core → Simulation & Production (ROS 2, MuJoCo, Gazebo, Isaac Sim)
Because in robotics, Physics is Truth. Every inertia tensor, every joint limit, every sensor placement should be mathematically correct before it ever reaches a simulator.
Where: Artisans Asylum, 96 Holton Street, Boston, MA 02135
When: Thursday March 5, 7:00-9:00pm
Speaker: Tom Ryden of Mass Robotics
If you are into Robotics (and, by definition, you are, given you are reading this!) this promises to be a very interesting talk! Tom will start with an overview of MassRobotics and then get into what the current trends are in the robotics market: what problems are start-ups addressing, how the fundraising market is today, and where the investment dollars are going.
As you may have seen in the recent blog post, Intrinsic is joining Google as a distinct robotics and AI unit. Specifically, Intrinsic's platform will provide a new "infrastructural bridge" between Google's frontier AI research (such as the AI coming from teams at Gemini and DeepMind) and the practical, high-stakes requirements of industrial manufacturing, which is Intrinsic's focus. This decision will allow our team to continue building the Intrinsic platform and to operate in a very similar way to before. Our commercial mandate remains the same, as does our focus on delivering intelligent solutions for our customers.
Intrinsic remains dedicated to the commitments we've made to the open source community, to ROS, Gazebo, and Open-RMF (including the Lyrical and Kura release roadmaps), and to deepening our platform integrations with ROS over time. We're also very excited about the AI for Industry Challenge this year, which is organized with the team at Open Robotics and has thousands of registrants so far.
From the community's perspective we are expecting minimal disruption, if any, and we look forward to showing and sharing more news at ROSCon in Toronto later this year.
Different options that the solution can provide for backward compatibility
Support in rclpy
Support in rclc
Break ABI/API
Remember that the meeting is happening every week to push this feature into Lyrical Luth. Please check the Open Source Robotics Foundation official events to join the next meeting.
We had the first meeting in the Accelerated Memory Transport WG. The meeting focused on discussing a new prototype presented by Karsten and CY from NVIDIA.
We discussed some topics:
How to handle large messages between ROS nodes, particularly for image data, tensors, and point clouds, by introducing custom buffer types that can be mapped to non-CPU memory.
Backend loading and compatibility, leading to a discussion about whether the feature should be opt-in or opt-out, with Michael suggesting it could be made default in a future release after initial adoption.
The group also discussed wire compatibility concerns and potential workarounds, including using maximum length values or special annotations in message types.
Working with nine different robotics companies over the course of 10 years has taught us a thing or two about designing full-stack robotics architectures. All this experience went into the design of Transitive, the open-source framework for full-stack robotics. We've started a new mini-series of blog posts in which I dive into the three core concepts of the framework. Too often we see robotics startups fall into the same pitfalls when designing their full-stack architecture (robot + cloud + web). It is therefore important to us to share our experience and explain why we built Transitive the way we did.
In this first post you'll learn about the need for cross-device code encapsulation, how we addressed this need in Transitive via full-stack packages, and what benefits result from this approach for growing your fleet and functionality without increasing complexity.
During the integration of our hardware, we (inmach.de) encountered some shortcomings in the ros2_canopen package, which we worked around or fixed in our fork of ros2_canopen. We'd like to get these changes into the upstream repo so that everyone can profit from them.
The major shortcomings we found and think should and could be improved are:
The CiA402 driver only supports 1 axis, but the standard allows up to 8 axes (our hardware supports 4)
The canopen_ros2_control systems cannot handle different types of CAN devices on the same CAN bus, so one has to write one's own system in this case. This gets even worse if a node with a custom API is to be used.
Because ros2_control just reuses the normal Node implementation there are always also the ROS topics/services available which can easily bypass the ros2_control controllers.
The use of a template for ros2_canopen::node_interfaces::NodeCanopenDriver to handle the case of Node and LifecycleNode has probably historic reasons. The current proposed ROS way to handle this case is to use rclcpp::node_interfaces.
With this post I'd like to start a discussion with the ROS community and the maintainers (@c_h_s, @ipa-vsp) of ros2_canopen about other possible shortcomings and about what needs to be done, and can be done, to improve the ros2_canopen stack, so that together we can make it even better in the years to come.
Hey! I'm looking to improve my ROS 2 code performance by using zero-copy transfer for large messages.
I've been under the impression that simply composing any composable node into a container and setting "use_intra_process_comms" to True would lead to zero-copy transfer. But after experimenting and going through multiple tutorials, design docs, and discussions, that doesn't seem to be the case.
I wanted to create this thread to write down some of my questions, in the hopes of them being helpful for improving the documentation, and to get a better understanding of the zero-copy edge cases. I'm also curious to hear if there are already ways to easily verify that zero-copy transfer is happening.
To my understanding, it looks like there are a bunch of different things that can have an influence if zero-copy happens or not:
Pointer type: The choice of SharedPtr, UniquePtr, etc. seems to affect whether zero-copy really happens or not [1], [2]
Number of subscribers: If we have many subscriptions to the same topic, some of the subscriptions might be actually creating a copy of the message [1]
QoS: Some of the quality of service types have been at least in the past unsupported [3]
RMW implementation: At least in the past, the middleware choice has played a role. How is it nowadays? How about with Zenoh? [3]
ROS Distribution version: Are there differences between existing distros (Humble to Rolling?)
Component container type: Based on my past experimentation, there seems to be a difference between the container type: component_container vs. _mt vs. _isolated.
A new inter-process subscriber outside of the composable container: What happens if we have a new inter-process subscription, outside of the composable container?
Publisher outside of the composable container: How does zero-copy behave when, for example, the publisher node is outside the composable container? Can multiple subscribers still benefit from zero-copy? From my past experimenting, it seems that they can.
Is there something else that can have an influence?
I'm looking to understand in which cases zero-copy transfer really happens, and in which cases ROS just quietly falls back to copying the messages.
Many of these questions also boil down to a bigger question: how can I verify whether zero-copy happens, and what kind of performance benefits I'm getting from using it? All the demos I've seen until now simply print the memory address of the message to confirm that zero-copy happens. I think it would be highly beneficial to have a better way directly in ROS 2 to see if zero-copy pub-sub is actually happening. Is there already a way to do that, or do you see how this could be implemented? Maybe through the ros2 topic CLI?
In addition to the above questions, the tutorials and other resources still left me wondering about these ones:
What are all the different ways of achieving zero-copy transfer? Via loaned messages? What are the benefits of it compared to intra-process communication (IPC)? In Jazzy, the loaned messages tutorial mentions "Currently using Loaned Messages is not safe on subscription" [4]
What are the performance gains of zero-copy? In which situations the serialization is completely avoided, and in which situations the middleware layer is skipped completely?
What is the role of the "use_intra_process_comms" parameter? I've sometimes observed zero-copy happening even when this parameter is set to false. What are the benefits of having it as "false" (which it is by default when nodes are composed in a launch file)?
We are building Ajime (https://ajime.io) to provide zero-config pipeline building: Ajime is a drag-and-drop CI/CD experience for edge computing and robotics. Just link your GitHub repository; we handle the build and deployment of CUDA-ready containers, manage your cloud/on-prem databases and compute resources (we also provide fast hosting), and provide secure fleet connectivity over the cloud. Easy, like building Lego.
Whether you're deploying to an NVIDIA Jetson, a Raspberry Pi, or any other Linux-based SOM, Ajime automates the entire pipeline, from LLM-generated Dockerfiles with sensor drivers to NVIDIA Isaac Sim validation. We're in private beta and looking for engineers to help us kill the "dependency hell" of robotics DevOps. Check out the demo and join the waitlist at ajime.io.
Please come and join us for this coming meeting on Wed, Feb 25, 2026, 4:00 PM to 5:00 PM UTC, where we plan to deploy an example Canonical Observability Stack instance based on information from the tutorials and documentation.
We did originally plan to host this session on 2026-02-11, but unfortunately had to cancel, so the session has been moved back.
In the previous meeting, the CRWG invited Guillaume Beuzeboc from Canonical to present on the Canonical Observability Stack (COS). COS is a general observability stack for devices such as drones, robots, and IoT devices. It operates on telemetry data, and the COS team has extended it to support robot-specific use cases. If you're interested in watching the talk, it is available on YouTube.
I built something to bridge the gap between AI agents and ROS robots. Instead of writing custom interfaces for every LLM integration, this gives you a universal bridge with zero boilerplate.
**Key features:**
- Auto-generates Python classes from .msg/.srv files
Once a year, we take a moment to evaluate the health, growth, and general well-being of the ROS community. Our goal with this annual report is to provide a relative estimate of the community's evolution and composition to better help us plan for the future and allocate resources.
As an open-source project, we prioritize user privacy above all else. We do not track our users, and as such, this report relies on aggregate statistics from services like GitHub, Google Analytics, and download data from our various servers. While this makes data collection difficult, and the results don't always capture the information we would like, we are happy to report that the data we have captured clearly show a thriving and rapidly growing ROS ecosystem!
The ROS 2 GitHub organization saw an 11.2% increase in contributors and a 37.59% increase in the number of pull requests.
Discourse posts have increased by 24% and viewership has increased by 29.7%.
Our newest ROS 2 paper (Macenski et al., 2022) had 1,929 citations, representing 90% growth year over year.
92.14% of Gazebo downloads are now for modern versions of Gazebo.
A Landmark Year for Community Growth
The 2025 metrics highlight a massive surge in users across almost all of our websites and servers. In October 2025, ROS 2 package downloads saw a staggering 284% increase over the previous year. ROS 2 package downloads now make up the overwhelming majority of ROS package downloads (91.2% of all downloads in October 2025). This growth isn't just from users transitioning from ROS 1 to ROS 2; most of it appears to be explosive growth in the number of ROS 2 users overall. The number of unique users / IPs downloading ROS packages grew from 843,959 in October 2024 to 1,315,867 in October 2025, an increase of just shy of 56%!
Meanwhile, ROS 1 downloads declined slightly from 12,206,979 packages in October 2024 to 11,590,884 in October 2025, a decrease of slightly over 5%. The ROS Wiki, which is now at End-of-Life, saw an 8.5% decrease in users, a trend we view positively as the community migrates to modern documentation platforms and away from ROS 1. Similarly, there were only 5 questions tagged "ROS1" on Robotics Stack Exchange in 2025, in contrast to the 1,449 questions tagged "ROS2." On every platform, and by every metric, ROS 2 is now the dominant platform for ROS development.
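As a quick sanity check, the year-over-year percentages quoted above can be reproduced from the raw figures in the report:

```python
# Reproduce the year-over-year figures quoted in the report.
def yoy_change(old: int, new: int) -> float:
    """Percentage change from old to new."""
    return (new - old) / old * 100

# Unique users/IPs downloading ROS packages, Oct 2024 -> Oct 2025
unique_users = yoy_change(843_959, 1_315_867)
# ROS 1 package downloads, Oct 2024 -> Oct 2025
ros1_downloads = yoy_change(12_206_979, 11_590_884)

print(f"unique users: {unique_users:+.1f}%")      # just shy of +56%
print(f"ROS 1 downloads: {ros1_downloads:+.1f}%")  # just over -5%
```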
Our discussion platforms are also busier than ever. Annual topics on ROS Discourse rose by 40% (to 1,472), and annual posts increased by 24% (to 4,901). Overall viewership of Discourse grew by nearly 30%. Similarly, our community on LinkedIn has grown by 23.9% and hovers at just shy of 200,000 followers. The only notable decrease in any ROS metric was on Robotics Stack Exchange, which saw a 42.49% decrease in the number of questions asked. This decrease mirrors larger industry-wide trends as developers turn to large language models to answer their technical questions.
ROS 2 Adoption and Industry Momentum
The shift to ROS 2 has reached a definitive milestone, with package downloads now overwhelmingly centered on ROS 2 and likely surpassing one billion per year. This massive download volume is a testament to ROS's utility and widespread adoption. We are especially encouraged by the growing health of the ecosystem, which now features 34,614 unique ROS packages available via Apt (an increase of 9.15% over the previous year). This growth in package availability directly translates into greater functionality and choice for our users.
The dedication of the developer community is evident in the flourishing number of public repositories on GitHub: 3,848 repositories are tagged with "#ROS2" (a 39% increase in 2025), alongside 8,744 public repositories tagged with "#ROS" (up 4.73% since Jan 2025), demonstrating increasing development activity. Furthermore, the relevance of ROS in industry is undeniable: our private list of ROS companies grew 26% this year to 1,579 companies, showing strong commercial validation. In the academic sphere, our canonical ROS 2 paper continues to demonstrate explosive growth with 1,929 citations (up 89.9% in 2025), confirming the platform's role in cutting-edge research. Collectively, these metrics confirm ROS 2's status as the established platform for the next generation of robotics development, driving significant growth across both commercial and research sectors.
Conclusion and Feedback
The data from 2025 depicts a thriving, maturing ecosystem that is increasingly centered on modern ROS 2 and modern Gazebo tools. We are immensely proud of this community's growth and its successful shift toward next-generation robotics software!
We encourage you to dive into the full report for a more detailed breakdown of these metrics. We also encourage you to take a look at the ROS project contributor metrics published by our colleagues at the Linux Foundation for a detailed breakdown of project contribution statistics. As always, we would love to hear your thoughts on what metrics you would like to see included in future reports.
A Note on 2025 Data
Our goal with the ROS metrics report is to develop an understanding of the magnitude and direction of changes in the ROS open source community so we can make better decisions about where we allocate our time and resources. As such, we're looking for ballpark estimates to help guide decision making, not necessarily exacting figures. This year, due to circumstances beyond our control, we've had to fill in some gaps in our data as explained below. We believe the numbers reported here paint a reasonable lower bound on various phenomena in the ROS community.
Our ROS package download statistics are culled from an AWStats instance running on our OSU OSL servers. In July 2025 we moved our AWStats host at OSU OSL and upgraded AWStats ahead of its imminent deprecation. Unfortunately, this migration had two negative side effects that impacted our results for 2025. First, it caused us to lose most of our AWStats data for the month of July 2025. Second, the upgrade did not provide a migration utility for existing log data, and our AWStats summary page for 2025 only presents data for the six months after the migration. Thankfully, we still have the raw log data for the preceding six months (with the exception of July), and we were able to manually re-calculate the results for most metrics, albeit missing some data from the month of July.
For our Gazebo download metrics we rely upon the Apache logs available on an OSRF AWS instance and AWStats download data from the OSU OSL servers. For privacy reasons we do not retain the Apache log data in perpetuity; instead we rely on a logging buffer that periodically rolls over. In prior years this buffer was sufficient to capture well over a month's worth of Gazebo download data. Gazebo downloads have grown significantly over the past year, and when we evaluated our logs, we found that only a little over two weeks' worth of data was available. As such, we decided to evaluate the download data over the two-week period from January 13th until January 27th and extrapolate those results out to the entire month.
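The extrapolation described above is a simple linear scaling; a sketch with illustrative (not actual) counts:

```python
# Sketch of the extrapolation described above: scale a 14-day download
# count (Jan 13-27) up to a full 31-day month, assuming a constant
# daily rate. The input count below is illustrative, not real data.
def extrapolate_to_month(two_week_count: int, days_in_month: int = 31) -> int:
    daily_rate = two_week_count / 14          # average downloads per day
    return round(daily_rate * days_in_month)  # projected monthly total

projected = extrapolate_to_month(140_000)  # e.g. 140k over two weeks
```

This assumes download traffic is roughly uniform across the month; a holiday-heavy or release-heavy window would bias the projection.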
I've opened an issue proposing to add an ADOPTERS file to the ROS documentation: a centralized, community-maintained list of organizations using ROS in production.
Please have a look at the issue and share your feedback:
Does this seem valuable to the community?
What fields or structure would you find most useful?
Would your organization be willing to be listed?
If there's interest, I'm happy to submit an initial PR to get things started. Please share your thoughts here or on the GitHub issue.
I want to test analytics software I'm developing on a wide variety of movements, ROS 2 frameworks (e.g. MoveIt, Nav2, etc.), and sensor types, and I'm looking for recommendations on robots that strike a good balance between low cost and a broad range of functionality. For example, I'm thinking of a combination of a TurtleBot 4 for a mobile robot and a Waveshare RoArm M3 for a robot arm with some Gen AI capabilities. I'm sure a lot of people have experience with the TurtleBot here, but I'm curious what your recommendations would be in general.
By the way, I'm new here and wasn't sure what category to post this in. Please let me know if there's a better place for this discussion. Thanks in advance.
I was discussing this topic with a colleague and am interested in some other opinions. He was proposing using the GetParameters service to get the robot description from the robot_state_publisher node. I was suggesting we subscribe to /robot_description. We are working in a single-robot environment.
What do you prefer and why?
The way I see it, writing the service call makes the code using the robot description clearer, as you can see it's only received once, and the wait for the response is explicit. In contrast, the topic-subscription code looks like you might be receiving it periodically.
On the other hand, you now have to specify the node and parameter name, so if for some reason robot_state_publisher isn't there or doesn't have the robot_description parameter, and instead some other node is publishing it, it won't work. But to be honest, I've never seen this be the case in any ROS 2 system I've worked with.
Maybe it would be the best of both worlds if robot_state_publisher had a /get_robot_description service? Or maybe rclpy needs some built-in helpers to make getting the robot description, or latched topics in general, cleaner? Or maybe these things already exist and I'm unaware of them.
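For reference, the topic-based approach hinges on matching robot_state_publisher's TRANSIENT_LOCAL durability so a late-joining subscriber still receives the latched URDF. A minimal rclpy sketch (the function and node names here are my own, and it assumes a running ROS 2 system):

```python
# Hedged sketch: fetch the URDF from the latched /robot_description topic.
# robot_state_publisher publishes the description with TRANSIENT_LOCAL
# durability, so a late-joining subscriber receives the last message,
# provided its own QoS also requests TRANSIENT_LOCAL.

def fetch_robot_description(timeout_sec: float = 5.0) -> str:
    """Return the URDF XML from /robot_description, or raise on timeout."""
    import rclpy  # imported lazily so this sketch loads without a ROS install
    from rclpy.node import Node
    from rclpy.qos import QoSProfile, DurabilityPolicy
    from std_msgs.msg import String

    rclpy.init()
    node = Node("urdf_fetcher")
    received = []

    qos = QoSProfile(depth=1, durability=DurabilityPolicy.TRANSIENT_LOCAL)
    node.create_subscription(
        String, "/robot_description", lambda msg: received.append(msg.data), qos
    )

    # Spin until the latched message arrives or we time out.
    deadline = node.get_clock().now().nanoseconds + int(timeout_sec * 1e9)
    while not received and node.get_clock().now().nanoseconds < deadline:
        rclpy.spin_once(node, timeout_sec=0.1)

    node.destroy_node()
    rclpy.shutdown()
    if not received:
        raise TimeoutError("no /robot_description message received")
    return received[0]
```

The equivalent service-based route would call robot_state_publisher's standard get_parameters service instead, which ties the code to that specific node name.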
Looking forward to hearing from others on this topic!
As we begin the planning phase for the ROS 2 Lyrical release, the PMC is considering an upgrade to our core language requirements. Specifically, we are looking at making C++20 the default standard for the Lyrical distribution.
Why now?
The PMC has reviewed our intended Tier 1 target platforms for this cycle, and they all appear to support modern toolchains with mature C++20 implementations. These targets include:
Ubuntu 26.04 (Resolute)
Windows 11
RHEL 10
Debian 13 (Trixie)
We Need Your Feedback
While the infrastructure seems ready, the PMC wants to make sure we do not inadvertently break any critical workflows or orphan embedded environments that might be constrained by older compilers.
We would like to hear from you if:
You are targeting an LTS embedded platform or an RTOS that lacks a C++20-compliant compiler.
You maintain a core package that would face significant architectural hurdles by incrementing the standard.
You have specific concerns regarding binary compatibility or cross-compilation with existing C++17 libraries.
The goal is to move the ecosystem forward without leaving anyone behind. If you anticipate any friction, please share your thoughts below.
The 13th ROS-Industrial Europe Conference 2025 took place on 17-18 November 2025 in Strasbourg, co-located with ROSCon FR&DE. The event brought together industrial practitioners, researchers, and technology providers to share practical experience with deploying ROS 2 in production environments, discussing both proven approaches and remaining challenges.
Hosted at the CCI Campus Alsace - Site de Strasbourg, the program covered robotics market insights, vendor perspectives, and technical topics such as driver development and real-time control. Further sessions addressed humanoid safety, modular application frameworks, and industrial expectations regarding determinism and long-term maintainability. Updates from the different regional ROS-Industrial consortia provided a broader international perspective.
The event concluded with a hands-on company visit to ENGLAB, allowing participants to see robotics solutions in action beyond the conference hall.
The event page, with links to the slides and presentation videos, is available here.
Day 1 Highlights: From Market Momentum to "ROS 2 Going Industrial"
Werner Kraus opened the conference with an introduction to Fraunhofer IPA and a global robotics market overview. He highlighted strong growth trends, particularly in medical and humanoid robotics, and emphasized that safety in humanoid systems remains a critical research and engineering frontier.
Felix Exner from Universal Robots presented ongoing development of ROS interfaces for robot controllers, including motion-primitive-based approaches. He addressed a recurring industry challenge: maintaining a stable ROS ecosystem across multiple distributions while balancing documentation quality, development agility, and long-term support strategies.
Robert Wilbrandt from the FZI Research Center for Information Technology shared insights into RSI integration, asynchronous control strategies, and the practical integration challenges that arise when transitioning research prototypes into industrial systems. His talk also highlighted key software-architecture considerations such as driver lifecycles, memory management, and allocation tracking, turning "robustness" into measurable engineering practices.
Alexander Mühlens from igus GmbH showcased several ROS-powered innovations and real-world deployments, with particular focus on the RBTX marketplace and the value of ecosystems in reducing cost, risk, and complexity for robotics adoption. His examples demonstrated how accessible, composable solutions can accelerate industrial uptake.
Adolfo Suarez Roos from IRT Jules Verne discussed Yaskawa drivers and industrial applications ranging from medical finishing processes to offshore welding automation. A key message was that successful deployments depend on tight integration decisions (including controller capabilities, communication frequency, and compatibility constraints) tailored to the realities of the shop floor.
Lukasz Pietrasik from Intrinsic presented a practical approach to integrating ROS with broader AI and software platforms. Topics included developer workflows, digital-twin environments, behavior-tree-based task composition, and bridging ROS data and services into higher-level orchestration platforms.
Afternoon Focus: Safety, Resilience, and Industrial Expectations
Florian Weißhardt from Synapticon GmbH addressed the unique safety challenges of humanoid robots, where unpredictability, balance loss, and autonomy make traditional "safe state" concepts insufficient. His session reinforced a central theme of the day: as robots move into unstructured environments, safety becomes a system-level design challenge rather than a single-component feature.
Florian Gramß from Siemens AG explored the tension between traditional deterministic automation and the flexibility offered by ROS-based systems. He advocated for hybrid architectures, deterministic where required and flexible where possible, as a realistic path forward for modern industrial automation.
Riddhesh Pradeep More presented his work on semantic discovery and rich descriptive models for reusable ROS software components, demonstrating how knowledge graphs and vector-based semantic search can significantly improve the identification, understanding, and reuse of ROS packages across domains such as navigation, perception, SLAM, and manipulation.
Dennis Borger showcased applied ROS 2 research projects including robotic bin-picking and automated post-processing, highlighting how modular architectures, hybrid vision approaches, and AI-supported workflows enable flexible automation solutions for small-batch and customized industrial production scenarios.
Denis Stogl and Nikola Banović from b-robotized GmbH shared practical experiences in bringing ROS 2 into real industrial environments, emphasizing the role of ros2_control, hardware abstraction, diagnostics, and seamless integration with industrial communication protocols such as EtherCAT, CANopen, and Modbus to achieve production-ready robotic systems.
The first day concluded with a Gala Dinner, where informal discussions and networking often proved as valuable as the scheduled presentations.
Day 2 Highlights: Consortium Alignment and Advanced Applications
Consortium Updates Across Regions
The second day began with updates from across the global ROS-Industrial network:
- Vishnuprasad Prachandabhanu and Yasmine Makkaoui on ROS-Industrial Europe initiatives
- Maria Vergo and Glenn Tan on Asia-Pacific ecosystem orchestration, sandboxes, and large-scale deployments
- Paul Evans from the Southwest Research Institute on ROS-Industrial Americas roadmap priorities, technical progress, and improvements in usability and tooling
Louis-Romain Joly from SNCF introduced nav4rail, a navigation stack tailored specifically for railway maintenance robots. His key insight was that in constrained domains, such as effectively one-dimensional rail movement, simpler, model-driven solutions can outperform general-purpose navigation frameworks in both clarity and engineering efficiency.
Mario Prats from PickNik Robotics closed the conference with advancements in mobile manipulation workflows and the continued evolution of MoveIt toward professional-grade tooling, highlighting behaviour-tree composition, real-time control capabilities, and an AI-oriented roadmap.
Closing Takeaway: Industrial ROS Maturing Through Engineering Reality
The conference confirmed that ROS 2 is steadily gaining ground in real industrial environments. A wide range of practical use cases, improved interoperability through ros2_control and fieldbus integration, and increasing adoption of behavior-tree-based architectures demonstrate clear technical progress.
At the same time, challenges remain, particularly in documentation quality and real-time performance. Safety, AI integration, and driver development continue to shape the technical agenda, while expectations for new collaborative initiatives such as a potential ROSin 2.0 underline the need for sustained ecosystem support.
Software is the invisible thread that weaves the fabric of robotics: it turns sensors into perception, models into decisions, and hardware into reliable behavior in the real world. As our systems scale from demos to deployment, robust engineering practices (architecture, testing, tooling, debugging, benchmarking, and reproducibility) often determine success.
With that in mind, we're inviting submissions to
RoSE'26 (Robotics Software Engineering) Workshop @ ICRA Vienna
Submission deadline: March 8 (20 days to go)
What we're looking for
We welcome contributions that share actionable software engineering insights for robotics, including (but not limited to):
ROS/ROS 2 system & package architecture patterns (and lessons learned)