February 19, 2026
2025 ROS Metrics Report

2025 ROS Metrics Report.pdf (3.7 MB)
For comparison, here’s the 2024 Metrics Report.

Once a year, we take a moment to evaluate the health, growth, and general well-being of the ROS community. Our goal with this annual report is to provide a relative estimate of the community’s evolution and composition to better help us plan for the future and allocate resources.

As an open-source project, we prioritize user privacy above all else. We do not track our users, and as such, this report relies on aggregate statistics from services like GitHub, Google Analytics, and download data from our various servers. While this makes data collection difficult, and the results don’t always capture the information we would like, we are happy to report that the data we have captured clearly show a thriving and rapidly growing ROS ecosystem! :rocket:

2025 Report Highlights


The full report is available for download here (3.7 MB). If you would like just the highlights, we’ve summarized the top-line results below.

  • 984,135,185 ROS packages were downloaded in 2025, representing an 85.18% increase over 2024 (this is despite missing data for July; see the note below).
  • Over 1,300,000 individuals / unique IPs downloaded ROS packages in October 2025.
  • ROS 2 now makes up 91.2% of all ROS downloads.
  • ROS Humble currently makes up 48.53% of all ROS downloads.
  • ROS Jazzy currently makes up 24.45% of all ROS downloads.
  • ROS Index visitors have increased by 63.3%.
  • The ROS 2 GitHub organization saw an 11.2% increase in contributors and a 37.59% increase in the number of pull requests.
  • Discourse posts have increased by 24% and viewership has increased by 29.7%.
  • Our newest ROS 2 paper (Macenski et al., 2022) had 1,929 citations, representing 90% growth year over year.
  • 92.14% of Gazebo downloads are now for modern versions of Gazebo.

A Landmark Year for Community Growth

The 2025 metrics highlight a massive surge in users across almost all of our websites and servers. In October 2025, ROS 2 package downloads saw a staggering 284% increase over the previous year, and ROS 2 now makes up the overwhelming majority of ROS package downloads (91.2% of all downloads in October 2025). This growth isn’t just users transitioning from ROS 1 to ROS 2; most of it appears to be explosive growth in the number of ROS 2 users overall. The number of unique users / IPs downloading ROS packages grew from 843,959 in October 2024 to 1,315,867 in October 2025, an increase of just shy of 56%!

Meanwhile, ROS 1 downloads declined slightly, from 12,206,979 packages in October 2024 to 11,590,884 in October 2025, a decrease of slightly over 5%. The ROS Wiki, which is now at End-of-Life, saw an 8.5% decrease in users, a trend we view positively as the community migrates to modern documentation platforms and away from ROS 1. Similarly, there were only 5 questions tagged “ROS1” on Robotics Stack Exchange in 2025, in contrast to the 1,449 questions tagged “ROS2.” On every platform, and by every metric, ROS 2 is now the dominant platform for ROS development.

Our discussion platforms are also busier than ever. Annual topics on ROS Discourse rose by 40% (to 1,472), and annual posts increased by 24% (to 4,901). Overall viewership of Discourse grew by nearly 30%. Similarly, our community on LinkedIn has grown by 23.9% and hovers at just shy of 200,000 followers. The only notable decrease in any ROS metric was on Robotics Stack Exchange, which saw a 42.49% decrease in the number of questions asked. This decrease mirrors broader industry-wide trends as developers turn to large language models to answer their technical questions.

ROS 2 Adoption and Industry Momentum

The shift to ROS 2 has reached a definitive milestone, with package downloads now overwhelmingly centered on ROS 2 and likely surpassing one billion per year. This massive download volume is a testament to ROS’s utility and widespread adoption. We are especially encouraged by the growing health of the ecosystem, which now features 34,614 unique ROS packages available via Apt (an increase of 9.15% over the previous year). This growth in package availability directly translates into greater functionality and choice for our users.

The dedication of the developer community is evident in the flourishing number of public repositories on GitHub: 3,848 repositories are tagged with “#ROS2” (a 39% increase in 2025), alongside 8,744 public repositories tagged with “#ROS” (up 4.73% since January 2025), demonstrating increasing development activity. Furthermore, the relevance of ROS in industry is undeniable: our private list of ROS companies grew 26% this year to 1,579 companies, showing strong commercial validation. In the academic sphere, our canonical ROS 2 paper continues to demonstrate explosive growth with 1,929 citations (up 89.9% in 2025), confirming the platform’s role in cutting-edge research. Collectively, these metrics confirm ROS 2’s status as the established platform for the next generation of robotics development, driving significant growth across both commercial and research sectors.

Conclusion and Feedback

The data from 2025 depicts a thriving, maturing ecosystem that is increasingly centered on modern ROS 2 and modern Gazebo tools. We are immensely proud of this community’s growth and its successful shift toward next-generation robotics software! :rocket:

We encourage you to dive into the full report for a more detailed breakdown of these metrics. We also encourage you to take a look at the ROS project contributor metrics published by our colleagues at the Linux Foundation for a detailed breakdown of project contribution statistics. As always, we would love to hear your thoughts on what metrics you would like to see included in future reports.

A Note on 2025 Data

Our goal with the ROS metrics report is to develop an understanding of the magnitude and direction of changes in the ROS open source community so we can make better decisions about where we allocate our time and resources. As such, we’re looking for ballpark estimates to help guide decision making, not necessarily exact figures. This year, due to circumstances beyond our control, we’ve had to fill in some gaps in our data, as explained below. We believe the numbers reported here represent a reasonable lower bound on various phenomena in the ROS community.

Our ROS package download statistics are culled from an AWStats instance running on our OSU OSL servers. In July of 2025 we moved our AWStats host at OSU OSL and upgraded AWStats ahead of its imminent deprecation. Unfortunately, this migration had two negative side effects that impacted our results for 2025. First, it caused us to lose most of our AWStats data for July 2025. Second, the upgrade did not provide a migration utility for existing log data, and our AWStats summary page for 2025 only presents data for the six months after the migration. Thankfully, we still have the raw log data for the preceding six months (with the exception of July), and we were able to manually re-calculate the results for most metrics, albeit missing some data from the month of July.

For our Gazebo download metrics we rely upon the Apache logs available on an OSRF AWS instance and AWStats download data from the OSU OSL servers. For privacy reasons we do not retain the Apache log data in perpetuity; instead we rely on a logging buffer that periodically rolls over. In prior years this buffer was sufficient to capture well over a month’s worth of Gazebo download data. Gazebo downloads have grown significantly over the past year, however, and when we evaluated our logs, we found that only a little over two weeks’ worth of data was available. As such, we decided to evaluate the download data over a two-week period, from January 13th to January 27th, and extrapolate those results out to the entire month.

2 posts - 1 participant

Read full topic

by Katherine_Scott on February 19, 2026 05:26 PM

Proposal: Add ADOPTERS to showcase ROS 2 production users

Hi :waving_hand:

I’ve opened an issue proposing to add an ADOPTERS file to the ROS documentation: a centralized, community-maintained list of organizations using ROS in production.

Please have a look at the issue and give me your feedback :person_bowing:

  • Does this seem valuable to the community?
  • What fields or structure would you find most useful?
  • Would your organization be willing to be listed?
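
As a concrete starting point for the structure question above, one illustrative entry (all values below are hypothetical) might carry fields like:

  • Organization: Example Robotics (hypothetical)
  • Industry / domain: Warehouse logistics
  • ROS distro(s): Humble, Jazzy
  • Scale: ~200 AMRs in production
  • Use case: Autonomous mobile robot navigation with Nav2
  • Link: https://example.com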

If there’s interest, I’m happy to submit an initial PR to get things started. Please share your thoughts here or on the GitHub issue.

thanks,
tomoya

1 post - 1 participant

Read full topic

by tomoyafujita on February 19, 2026 12:38 AM

February 18, 2026
Research grade robot recommendations in 2026

I want to test analytics software I’m developing on a wide variety of movements, ROS 2 frameworks (e.g., MoveIt or Nav2), and sensor types, and I’m looking for recommendations on robots that strike a good balance between low cost and a broad range of functionality. For example, I’m thinking of a combination of a TurtleBot 4 for a mobile robot and a Waveshare RoArm M3 for a robot arm with some Gen AI capabilities. I’m sure a lot of people here have experience with the TurtleBot, but I’m curious what your recommendations would be in general.

By the way, I’m new here and wasn’t sure what category to post this in. Please let me know if there’s a better place for this discussion. Thanks in advance.

4 posts - 3 participants

Read full topic

by kristoph_robotforest on February 18, 2026 09:42 PM

Service for robot description?

I was discussing this topic with a colleague and am interested in some other opinions. He was proposing using the GetParameters service to get the robot description from the robot_state_publisher node. I was suggesting we subscribe to /robot_description. We are working in a single-robot environment.

What do you prefer and why?

The way I see it, writing the service call makes the code using the robot description clearer, as you can see it’s only received once, and waiting for the response is explicit. In contrast, the topic-subscribing code looks like you might be receiving it periodically.

On the other hand, you now have to specify the node and parameter name, so if for some reason RSP isn’t there or doesn’t have the robot_description parameter, and some other node is publishing it instead, the service approach won’t work. But to be honest, I’ve never seen this happen in any ROS 2 system I’ve worked with.

Maybe it would be the best of both worlds if robot_state_publisher had a /get_robot_description service? Or maybe rclpy needs some built-in helpers to make getting the robot description, or latched topics in general, cleaner? Or maybe these things already exist and I am unaware :eyes:
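
For concreteness, here is a minimal rclpy sketch of the two approaches being compared (assuming the default robot_state_publisher node name and its latched /robot_description topic; purely illustrative):

import rclpy
from rclpy.node import Node
from rclpy.qos import QoSProfile, DurabilityPolicy
from std_msgs.msg import String
from rcl_interfaces.srv import GetParameters

class DescriptionReader(Node):
    def __init__(self):
        super().__init__('description_reader')
        # Approach 1: subscribe to the latched topic. robot_state_publisher
        # publishes /robot_description with TRANSIENT_LOCAL durability, so a
        # late-joining subscriber must request the same durability.
        qos = QoSProfile(depth=1, durability=DurabilityPolicy.TRANSIENT_LOCAL)
        self.sub = self.create_subscription(
            String, '/robot_description', self.on_description, qos)
        # Approach 2: read the parameter from robot_state_publisher directly.
        self.cli = self.create_client(
            GetParameters, '/robot_state_publisher/get_parameters')

    def on_description(self, msg):
        self.get_logger().info('Got URDF via topic (%d bytes)' % len(msg.data))

    def get_via_service(self):
        # Blocks until robot_state_publisher answers the parameter query
        req = GetParameters.Request(names=['robot_description'])
        future = self.cli.call_async(req)
        rclpy.spin_until_future_complete(self, future)
        return future.result().values[0].string_value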

Looking forward to hearing from others on this topic!

2 posts - 2 participants

Read full topic

by PeterMitrano on February 18, 2026 08:51 AM

February 17, 2026
ROS 2 Lyrical C++ Version

As we begin the planning phase for the ROS 2 Lyrical release, the PMC is considering an upgrade to our core language requirements. Specifically, we are looking at making C++20 the default standard for the Lyrical distribution.

Why now?

The PMC has reviewed our intended Tier 1 target platforms for this cycle, and they all appear to support modern toolchains with mature C++20 implementations. These targets include:

  • Ubuntu 26.04 (Resolute)
  • Windows 11
  • RHEL 10
  • Debian 13 (Trixie)

We Need Your Feedback

While the infrastructure seems ready, the PMC wants to make sure we do not inadvertently break any critical workflows or orphan embedded environments that might be constrained by older compilers.

We would like to hear from you if:

  1. You are targeting an LTS embedded platform or an RTOS that lacks a C++20-compliant compiler.
  2. You maintain a core package that would face significant architectural hurdles by incrementing the standard.
  3. You have specific concerns regarding binary compatibility or cross-compilation with existing C++17 libraries.

The goal is to move the ecosystem forward without leaving anyone behind. If you anticipate any friction, please share your thoughts below.

1 post - 1 participant

Read full topic

by mjcarroll on February 17, 2026 04:52 PM

ROS 2 in Industry: Key Takeaways from the ROS-Industrial Conference 2025

The 13th ROS-Industrial Europe Conference 2025 took place on 17–18 November 2025 in Strasbourg, co-located with ROSCon FR&DE. The event brought together industrial practitioners, researchers, and technology providers to share practical experience with deploying ROS 2 in production environments, discussing both proven approaches and remaining challenges.

Hosted at the CCI Campus Alsace – Site de Strasbourg, the program covered robotics market insights, vendor perspectives, and technical topics such as driver development and real-time control. Further sessions addressed humanoid safety, modular application frameworks, and industrial expectations regarding determinism and long-term maintainability. Updates from the different regional ROS-Industrial consortia provided a broader international perspective.

The event concluded with a hands-on company visit to ENGLAB, allowing participants to see robotics solutions in action beyond the conference hall.

The event page, with links to the slides and presentation videos, is here.

Day 1 Highlights: From Market Momentum to “ROS 2 Going Industrial”

Werner Kraus opened the conference with an introduction to Fraunhofer IPA and a global robotics market overview. He highlighted strong growth trends, particularly in medical and humanoid robotics, and emphasized that safety in humanoid systems remains a critical research and engineering frontier.

Felix Exner from Universal Robots presented ongoing development of ROS interfaces for robot controllers, including motion-primitive-based approaches. He addressed a recurring industry challenge: maintaining a stable ROS ecosystem across multiple distributions while balancing documentation quality, development agility, and long-term support strategies.

Robert Wilbrandt from the FZI Research Center for Information Technology shared insights into RSI integration, asynchronous control strategies, and the practical integration challenges that arise when transitioning research prototypes into industrial systems. His talk also highlighted key software-architecture considerations such as driver lifecycles, memory management, and allocation tracking—turning “robustness” into measurable engineering practices.

Alexander Mühlens from igus GmbH showcased several ROS-powered innovations and real-world deployments, with particular focus on the RBTX marketplace and the value of ecosystems in reducing cost, risk, and complexity for robotics adoption. His examples demonstrated how accessible, composable solutions can accelerate industrial uptake.

Adolfo Suarez Roos from IRT Jules Verne discussed Yaskawa drivers and industrial applications ranging from medical finishing processes to offshore welding automation. A key message was that successful deployments depend on tight integration decisions—including controller capabilities, communication frequency, and compatibility constraints—tailored to the realities of the shop floor.

Lukasz Pietrasik from Intrinsic presented a practical approach to integrating ROS with broader AI and software platforms. Topics included developer workflows, digital-twin environments, behavior-tree-based task composition, and bridging ROS data and services into higher-level orchestration platforms.

Afternoon Focus: Safety, Resilience, and Industrial Expectations

Florian Weißhardt from Synapticon GmbH addressed the unique safety challenges of humanoid robots, where unpredictability, balance loss, and autonomy make traditional “safe state” concepts insufficient. His session reinforced a central theme of the day: as robots move into unstructured environments, safety becomes a system-level design challenge rather than a single-component feature.

Florian Gramß from Siemens AG explored the tension between traditional deterministic automation and the flexibility offered by ROS-based systems. He advocated for hybrid architectures—deterministic where required, flexible where possible—as a realistic path forward for modern industrial automation.

Riddhesh Pradeep More presented his work on semantic discovery and rich descriptive models for reusable ROS software components, demonstrating how knowledge graphs and vector-based semantic search can significantly improve the identification, understanding, and reuse of ROS packages across domains such as navigation, perception, SLAM, and manipulation.

Dennis Borger showcased applied ROS 2 research projects including robotic bin-picking and automated post-processing, highlighting how modular architectures, hybrid vision approaches, and AI-supported workflows enable flexible automation solutions for small-batch and customized industrial production scenarios.

Denis Stogl and Nikola Banović from b-robotized GmbH shared practical experiences in bringing ROS 2 into real industrial environments, emphasizing the role of ros2_control, hardware abstraction, diagnostics, and seamless integration with industrial communication protocols such as EtherCAT, CANOpen, and Modbus to achieve production-ready robotic systems.

The first day concluded with a Gala Dinner, where informal discussions and networking often proved as valuable as the scheduled presentations.

Day 2 Highlights: Consortium Alignment and Advanced Applications

Consortium Updates Across Regions

The second day began with updates from across the global ROS-Industrial network:

  • Vishnuprasad Prachandabhanu and Yasmine Makkaoui on ROS-Industrial Europe initiatives
  • Maria Vergo and Glenn Tan on Asia-Pacific ecosystem orchestration, sandboxes, and large-scale deployments
  • Paul Evans from the Southwest Research Institute on ROS-Industrial Americas roadmap priorities, technical progress, and improvements in usability and tooling

Louis-Romain Joly from SNCF introduced nav4rail, a navigation stack tailored specifically for railway maintenance robots. His key insight was that in constrained domains—such as effectively one-dimensional rail movement—simpler, model-driven solutions can outperform general-purpose navigation frameworks in both clarity and engineering efficiency.

Mario Prats from PickNik Robotics closed the conference with advancements in mobile manipulation workflows and the continued evolution of MoveIt toward professional-grade tooling, highlighting behaviour-tree composition, real-time control capabilities, and an AI-oriented roadmap.

Closing Takeaway: Industrial ROS Maturing Through Engineering Reality

The conference confirmed that ROS 2 is steadily gaining ground in real industrial environments. A wide range of practical use cases, improved interoperability through ros2_control and fieldbus integration, and increasing adoption of behavior-tree-based architectures demonstrate clear technical progress.

At the same time, challenges remain, particularly in documentation quality and real-time performance. Safety, AI integration, and driver development continue to shape the technical agenda, while expectations for new collaborative initiatives such as a potential ROSin 2.0 underline the need for sustained ecosystem support.

[Photo gallery: impressions from the ROS-Industrial Europe Conference 2025]

by Yasmine Makkaoui on February 17, 2026 11:25 AM

February 16, 2026
[Call for Papers] RoSE’26 (Robotics Software Engineering) @ ICRA Vienna -- due March 8

Hi all,

Software is the invisible thread that weaves the fabric of robotics: it turns sensors into perception, models into decisions, and hardware into reliable behavior in the real world. As our systems scale from demos to deployment, robust engineering practices (architecture, testing, tooling, debugging, benchmarking, and reproducibility) often determine success.

With that in mind, we’re inviting submissions to

RoSE’26 (Robotics Software Engineering) Workshop @ ICRA Vienna

:spiral_calendar: Submission deadline: March 8 (20 days to go)

What we’re looking for

We welcome contributions that share actionable software engineering insights for robotics, including (but not limited to):

  • ROS/ROS 2 system & package architecture patterns (and lessons learned)
  • Testing & quality: CI, simulation + HIL, regression testing, reproducibility
  • Tooling: build/release workflows, dependency management, static analysis
  • Runtime robustness: logging, introspection, monitoring, debugging, recovery
  • Benchmarking & evaluation practices for robotics software
  • Deployment at scale: updates, configuration, fleet/edge deployment practices
  • Maintenance realities: migrations, long-lived systems, technical debt management

If you’ve built something others could reuse, or learned something the hard way, RoSE is a great venue to share it.

Examples of ROS-focused RoSE papers from previous editions

To give a sense of the kinds of ROS/ROS 2 topics that have fit well at RoSE:

Submission details and workshop info

Website (CFP + instructions):

Questions about fit or format? Feel free to reply here.
We’d also appreciate it if you shared this with your peers :dizzy:

Hope to see many ROS-flavored software engineering lessons represented at RoSE’26!


Ricardo Caldas
on behalf of the RoSE’26 Organizing Committee

1 post - 1 participant

Read full topic

by rdcal on February 16, 2026 07:00 PM

Cornea: Image Segmentation Skills from the Telekinesis Agentic Skill Library

Introducing Cornea: Image Segmentation Skills from the Telekinesis Agentic Skill Library.

Cornea is a module in the Telekinesis Agentic Skill Library containing skills for 2D image segmentation: https://docs.telekinesis.ai/

It provides segmentation capabilities using classical computer vision techniques and deep learning models, allowing developers to extract structured visual information from images for robotics applications.

What Does Cornea Provide?

  • Color-based segmentation: RGB, HSV, LAB, YCrCb
  • Region-based segmentation: Focus region, Watershed, Flood fill
  • Deep learning segmentation: BiRefNet (foreground), SAM
  • Graph-based segmentation: GrabCut
  • Superpixel segmentation: Felzenszwalb, SLIC
  • Filtering: Filter by area, color, mask
  • Thresholding: Global threshold, Otsu, Local, Yen, Adaptive, Laplacian-based
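
As a rough illustration of the classical color-based technique listed above, here is a minimal sketch in plain OpenCV (generic code for the approach, not Cornea’s own API):

import cv2
import numpy as np

image = cv2.imread('scene.png')                  # BGR image from a camera
hsv = cv2.cvtColor(image, cv2.COLOR_BGR2HSV)     # convert to HSV color space
# Keep pixels whose hue/saturation/value fall inside the target range
lower = np.array([35, 60, 60])                   # e.g. greenish objects
upper = np.array([85, 255, 255])
mask = cv2.inRange(hsv, lower, upper)            # binary segmentation mask
# Area-based filtering, similar in spirit to Cornea's filter-by-area skill
contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
large = [c for c in contours if cv2.contourArea(c) > 500]
segmented = cv2.bitwise_and(image, image, mask=mask)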

When to Use Cornea?

Use Cornea for robotics applications that require pixel-level understanding of images, such as:

  • Vision-guided pick-and-place pipelines
  • Palletizing and bin organization
  • Object isolation for manipulation and grasp planning
  • Obstacle detection in camera-based navigation
  • Scene understanding for Physical AI agents





2 posts - 1 participant

Read full topic

by suman_pal on February 16, 2026 02:04 PM

February 14, 2026
[Release] LinkForge v1.2.3: 100% Type Safety, Parser Hardening & ROS-Agnostic Assets

Hi everyone! :waving_hand:

I’m excited to announce the release of LinkForge v1.2.3: Professional URDF & XACRO Bridge for Blender.

This release marks a major stability milestone, achieving 100% static type safety across the Blender codebase and introducing significant robustness improvements to the core parser.

Key Highlights

  • :globe_showing_europe_africa: ROS-Agnostic Asset Resolution: We’ve introduced a hybrid package resolver that allows you to import complex robot descriptions (with package:// URIs) on any OS, without needing a local ROS installation. This effectively bridges the gap between design teams (on Windows/macOS) and engineering teams (on Linux).
  • :shield: Parser Hardening: The import logic is now much more resilient to edge cases, malformed XML, and unusual file paths.
  • :locked: 100% Type Safety: A complete refactor of the Blender integration ensures maximum stability and fewer runtime errors.
  • :right_arrow_curving_left: DAE Support Restored: Full support for Collada meshes has been restored for legacy robot compatibility.

LinkForge enables a true “Sim-Ready” workflow: Model in Blender, configure physics/sensors/ros2_control visually, and export valid URDF/XACRO code directly.
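
For readers unfamiliar with package:// URIs, here is a rough sketch of what hybrid resolution can look like in principle (purely illustrative, not LinkForge’s actual code): try the ROS package index when an environment is available, and fall back to user-supplied search paths otherwise.

import os

def resolve_package_uri(uri: str, search_paths: list) -> str:
    # Resolve 'package://<pkg>/<relative path>' to an absolute file path
    assert uri.startswith('package://')
    package, _, rel_path = uri[len('package://'):].partition('/')
    try:
        # Works when a ROS environment is sourced
        from ament_index_python.packages import get_package_share_directory
        return os.path.join(get_package_share_directory(package), rel_path)
    except Exception:
        # ROS-agnostic fallback: look for <root>/<package>/<rel_path>
        for root in search_paths:
            candidate = os.path.join(root, package, rel_path)
            if os.path.exists(candidate):
                return candidate
        raise FileNotFoundError(uri)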

:link: Links:

Happy forging! :hammer_and_wrench:

8 posts - 4 participants

Read full topic

by arounamounchili on February 14, 2026 01:46 PM

February 13, 2026
Proposal: Reproducible actuator-boundary safety (SSC + conformance harness)

Hi all — I’m looking for feedback on a design question around actuator-boundary safety in ROS-based systems.

Once a planner (or LLM-backed stack) can issue actuator commands, failures become motion. Most safety work in ROS focuses on higher layers (planning, perception, behavior trees), but there’s less shared infrastructure around deterministic enforcement at the actuator interface itself.

I’m prototyping a small hardware interposer plus a draft “Safety Contract” spec (SSC v1.1) with three components:

  1. A machine-readable contract defining caps (velocity / acceleration / effort), modes (development vs field), and stop semantics

  2. A conformance harness (including malformed traffic handling + fuzzing / anti-wedge tests)

  3. “Evidence packs” (machine-readable logs with wedge counts, latency distributions, and verifier tooling)

The goal is narrow:

Not “this makes robots safe.”

But: if someone claims actuator-boundary enforcement works, there should be a reproducible way to test and audit that claim.
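
For concreteness, here is a hypothetical sketch of the kind of cap check a machine-readable contract could drive (all names here are illustrative, not taken from the SSC v1.1 draft):

from dataclasses import dataclass

@dataclass(frozen=True)
class CommandCaps:
    max_velocity: float      # rad/s
    max_acceleration: float  # rad/s^2
    max_effort: float        # N*m

# Two modes with different caps, mirroring the development-vs-field idea
DEV_MODE = CommandCaps(max_velocity=0.5, max_acceleration=1.0, max_effort=5.0)
FIELD_MODE = CommandCaps(max_velocity=1.5, max_acceleration=3.0, max_effort=20.0)

def command_allowed(velocity, acceleration, effort, caps):
    # A real enforcer would trigger the contract's stop semantics on
    # violation (and log evidence) rather than silently rejecting.
    return (abs(velocity) <= caps.max_velocity
            and abs(acceleration) <= caps.max_acceleration
            and abs(effort) <= caps.max_effort)

assert command_allowed(0.3, 0.5, 2.0, DEV_MODE)
assert not command_allowed(2.0, 0.5, 2.0, DEV_MODE)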

Some concrete design questions I’m unsure about:

  • Does ROS 2 currently have a standard place where actuator-boundary invariants should live?

  • Should this layer sit at the driver level, as a node wrapper, or outside ROS entirely?

  • What would make a conformance harness credible to you?

  • Are there prior art efforts I should be aware of?

I’m happy to share more technical detail if useful. Mostly, I’m interested in whether this layer actually provides leverage or whether I’m solving the wrong problem.

4 posts - 1 participant

Read full topic

by robozilla on February 13, 2026 06:55 PM

Ament (and CMake) understanding

I always add new packages to existing CMakeLists.txt files, but I just follow the previous structure of the code. Adding something to it is not that hard, but it is always trial and error.

Reading the official docs is of course useful, but it doesn’t stick. I forget it immediately.

Is this just a skill issue, and should I invest more time in CMake and “building”? How important is it to know every line of a CMakeLists.txt? Does it make sense to concentrate on it specifically?

Thank you!

5 posts - 3 participants

Read full topic

by rainingmp3 on February 13, 2026 04:37 PM

February 12, 2026
Piper Arm Kinematics Implementation

Abstract

This chapter implements forward kinematics (FK) and Jacobian-based inverse kinematics (IK) for the AgileX PIPER robotic arm using the Eigen linear algebra library, as well as custom interactive markers via interactive_marker_utils.

Tags

Forward Kinematics, Jacobian-based Inverse Kinematics, RVIZ Simulation, Robotic Arm DH Parameters, Interactive Markers, AgileX PIPER

Function Demonstration

Code Repository

GitHub Link: https://github.com/agilexrobotics/Agilex-College.git


1. Preparations Before Use

Reference Videos:

1.1 Hardware Preparation

  • AgileX Robotics Piper robotic arm

1.2 Software Environment Configuration

  1. For Piper arm driver deployment, refer to: https://github.com/agilexrobotics/piper_sdk/blob/1_0_0_beta/README(ZH).MD
  2. For Piper arm ROS control node deployment, refer to: https://github.com/agilexrobotics/piper_ros/blob/noetic/README.MD
  3. Install the Eigen linear algebra library:
sudo apt install libeigen3-dev

1.3 Prepare DH Parameters and Joint Limits for AgileX PIPER

The modified DH parameter table and joint limits of the PIPER arm can be found in the AgileX PIPER user manual:


2. Forward Kinematics (FK) Calculation

The FK calculation process essentially converts angle values of each joint into the pose of a specific joint of the robotic arm in 3D space. This chapter takes joint6 (the last rotary joint of the arm) as an example.

2.1 Prepare DH Parameters

  1. Build the FK calculation program based on the PIPER DH parameter table. From the modified DH parameter table of AgileX PIPER in Section 1.3, we obtain:

// Modified DH parameters [alpha, a, d, theta_offset]
dh_params_ = {
    {0,         0,          0.123,      0},                     // Joint 1
    {-M_PI/2,   0,          0,          -172.22/180*M_PI},      // Joint 2 
    {0,         0.28503,    0,          -102.78/180*M_PI},      // Joint 3
    {M_PI/2,    -0.021984,  0.25075,    0},                     // Joint 4
    {-M_PI/2,   0,          0,          0},                     // Joint 5
    {M_PI/2,    0,          0.091,      0}                      // Joint 6
};

For conversion to Standard DH parameters, refer to the following rules:

Standard DH ↔ Modified DH Conversion Rules:

  • Modified DH → Standard DH:
    αᵢ (Standard) = αᵢ₊₁ (Modified)
    aᵢ (Standard) = aᵢ₊₁ (Modified)
    dᵢ (Standard) = dᵢ (Modified)
    θᵢ (Standard) = θᵢ (Modified)

  • Standard DH → Modified DH:
    αᵢ (Modified) = αᵢ₋₁ (Standard)
    aᵢ (Modified) = aᵢ₋₁ (Standard)
    dᵢ (Modified) = dᵢ (Standard)
    θᵢ (Modified) = θᵢ (Standard)

The converted Standard DH parameters are:

// Standard DH parameters [alpha, a, d, theta_offset]
dh_params_ = {
    {-M_PI/2,   0,          0.123,      0},                     // Joint 1
    {0,         0.28503,    0,          -172.22/180*M_PI},      // Joint 2 
    {M_PI/2,    -0.021984,  0,          -102.78/180*M_PI},      // Joint 3
    {-M_PI/2,   0,          0.25075,    0},                     // Joint 4
    {M_PI/2,    0,          0,          0},                     // Joint 5
    {0,         0,          0.091,      0}                      // Joint 6
};
  2. Prepare the DH transformation matrices.
    • Modified DH transformation matrix, rewritten using Eigen:
T << cos(theta),            -sin(theta),            0,             a,
     sin(theta)*cos(alpha), cos(theta)*cos(alpha), -sin(alpha), -sin(alpha)*d,
     sin(theta)*sin(alpha), cos(theta)*sin(alpha),  cos(alpha),  cos(alpha)*d,
     0,                     0,                      0,             1;
    • Standard DH transformation matrix, rewritten using Eigen:
T << cos(theta), -sin(theta)*cos(alpha),  sin(theta)*sin(alpha), a*cos(theta),
     sin(theta),  cos(theta)*cos(alpha), -cos(theta)*sin(alpha), a*sin(theta),
     0,           sin(alpha),             cos(alpha),            d,
     0,           0,                      0,                     1;
  3. Implement the core function computeFK() for FK calculation. See the complete code in the repository: https://github.com/agilexrobotics/Agilex-College.git
Eigen::Matrix4d computeFK(const std::vector<double>& joint_values) {
    // Check if the number of input joint values is sufficient (at least 6)
    if (joint_values.size() < 6) {
        throw std::runtime_error("Piper arm requires at least 6 joint values for FK");
    }

    // Initialize identity matrix as the initial transformation
    Eigen::Matrix4d T = Eigen::Matrix4d::Identity();

    // For each joint:
    //    Calculate actual joint angle = input value + offset
    //    Get fixed parameter d
    //    Calculate the transformation matrix of the current joint and accumulate to the total transformation
    for (size_t i = 0; i < 6; ++i) {
        double theta = joint_values[i] + dh_params_[i][3];  // θ = joint_value + θ_offset
        double d = dh_params_[i][2];                       // d = fixed d value (for rotary joints)

        T *= computeTransform(
            dh_params_[i][0],  // alpha
            dh_params_[i][1],  // a
            d,                 // d
            theta              // theta
            );
    }

    // Return the final transformation matrix
    return T;
}

2.2 Verify FK Calculation Accuracy

  1. Launch the FK verification program:
ros2 launch piper_kinematics test_fk.launch.py
  2. Launch the RVIZ simulation program, enable TF tree display, and check whether the pose of link6_from_fk (the arm end-effector calculated by FK) coincides with the original link6 (calculated by joint_state_publisher):
ros2 launch piper_description display_piper_with_joint_state_pub_gui.launch.py

The two frames coincide closely, and the error between link6_from_fk and link6 is within about four decimal places.


3. Inverse Kinematics (IK) Calculation

The IK calculation process essentially determines the joint angles required to move the arm’s end-effector to a given target pose.

3.1 Confirm Joint Limits

  • Joint limits of the PIPER arm must be defined to ensure the IK solution path does not exceed physical constraints (preventing arm damage or safety hazards).
  • From Section 1.3, the joint limits of the PIPER arm are:

  • The joint limit matrix is defined as:
std::vector<std::pair<double, double>> limits = {
    {-154.0/180*M_PI, 154.0/180*M_PI},    // Joint 1
    {0.0,             195.0/180*M_PI},    // Joint 2
    {-175.0/180*M_PI, 0.0},               // Joint 3
    {-102.0/180*M_PI, 102.0/180*M_PI},    // Joint 4
    {-75.0/180*M_PI,  75.0/180*M_PI},     // Joint 5
    {-120.0/180*M_PI, 120.0/180*M_PI}     // Joint 6
};
// Note: floating-point literals (e.g. 154.0/180) are required here;
// 154/180 would be integer division and evaluate to 0.

3.2 Step-by-Step Implementation of Jacobian Matrix Method for IK

Solution Process:

  1. Calculate the error e: the difference between the current pose and the target pose (a 6-dimensional vector: 3 for position + 3 for orientation).
  2. Is the error e below tolerance?
    • Yes: Return the current θ as the solution.
    • No: Proceed to iterative optimization.
  3. Calculate the Jacobian matrix J: a 6×6 matrix.
  4. Calculate the damped pseudoinverse, where λ is the damping coefficient (it avoids numerical instability in singular configurations):

J⁺ = Jᵀ(JJᵀ + λ²I)⁻¹

  5. Calculate the joint angle increment, adjusting the joint angles using the error e and the pseudoinverse:

Δθ = J⁺e

  6. Update the joint angles, applying the adjustment to the current values:

θ = θ + Δθ

  7. Apply joint limits.
  8. Normalize joint angles.
  9. Maximum iterations reached?
    • No: Return to Step 2 for further iteration.
    • Yes: Throw a non-convergence error.

Core Function computeIK():

std::vector<double> computeIK(const std::vector<double>& initial_guess, 
                                 const Eigen::Matrix4d& target_pose,
                                 bool verbose = false,
                                 Eigen::VectorXd* final_error = nullptr) {
    // Initialize with initial guess pose
    if (initial_guess.size() < 6) {
        throw std::runtime_error("Initial guess must have at least 6 joint values");
    }

    std::vector<double> joint_values = initial_guess;
    Eigen::Matrix4d current_pose;
    Eigen::VectorXd error(6);
    bool success = false;

    // Start iterative calculation
    for (int iter = 0; iter < max_iterations_; ++iter) {
        // Compute FK for the current joint values to get the current pose
        current_pose = fk_.computeFK(joint_values);
        // Compute the error between the current pose and the target pose
        error = computePoseError(current_pose, target_pose);

        if (verbose) {
            std::cout << "Iteration " << iter << ": error norm = " << error.norm() 
                      << " (pos: " << error.head<3>().norm() 
                      << ", orient: " << error.tail<3>().norm() << ")\n";
        }

        // Check if error is below tolerance (separate for position and orientation)
        if (error.head<3>().norm() < position_tolerance_ && 
            error.tail<3>().norm() < orientation_tolerance_) {
            success = true;
            break;
        }

        // Calculate Jacobian matrix (analytical by default)
        Eigen::MatrixXd J = use_analytical_jacobian_ ? 
            computeAnalyticalJacobian(joint_values, current_pose) :
            computeNumericalJacobian(joint_values);

        // Use Levenberg-Marquardt (damped least squares)
        // Δθ = Jᵀ(JJᵀ + λ²I)⁻¹e
        // θ_new = θ + Δθ
        Eigen::MatrixXd Jt = J.transpose();
        Eigen::MatrixXd JJt = J * Jt;
        // lambda_: damping coefficient (default 0.1) to avoid numerical instability in singular configurations
        JJt.diagonal().array() += lambda_ * lambda_;
        Eigen::VectorXd delta_theta = Jt * JJt.ldlt().solve(error);

        // Update joint angles
        for (int i = 0; i < 6; ++i) {
            // Apply adjustment to current joint angle
            double new_value = joint_values[i] + delta_theta(i);
            // Ensure updated θ is within physical joint limits
            joint_values[i] = std::clamp(new_value, joint_limits_[i].first, joint_limits_[i].second);
        }

        // Normalize joint angles to [-π, π] (avoid unnecessary multi-turn rotation)
        normalizeJointAngles(joint_values);
    }

    // Throw exception if no solution is found within max iterations (100)
    if (!success) {
        throw std::runtime_error("IK did not converge within maximum iterations");
    }

    // Calculate final error (if required)
    if (final_error != nullptr) {
        current_pose = fk_.computeFK(joint_values);
        *final_error = computePoseError(current_pose, target_pose);
    }

    return joint_values;
}

3.3 Publish 3D Target Points for the Arm Using Interactive Markers

  1. Install ROS2 dependency packages:
sudo apt install ros-${ROS_DISTRO}-interactive-markers ros-${ROS_DISTRO}-tf2-ros
  2. Launch interactive_marker_utils to publish 3D target points:
ros2 launch interactive_marker_utils marker.launch.py 
  3. Launch RVIZ2 to observe the marker:

  4. Drag the marker and use ros2 topic echo to verify that the published target point updates:

3.4 Verify IK Correctness in RVIZ via Interactive Markers

  1. Launch the AgileX PIPER RVIZ simulation demo (the model will not display correctly without joint_state_publisher):
ros2 launch piper_description display_piper.launch.py 

  2. Launch the IK node and interactive_marker node (in the same launch file). The arm will display correctly after a successful launch:
ros2 launch piper_kinematics piper_ik.launch.py

  3. Control the arm for IK calculation using the interactive_marker:

  4. Drag the interactive_marker to see the IK solver calculate joint angles in real time:

  5. If the interactive_marker is dragged to an unsolvable position, an exception will be thrown:


4. Verify IK on the Physical PIPER Arm

  1. First, launch the script for CAN communication with the PIPER arm:
cd piper_ros
./find_all_can_port.sh 
./can_activate.sh 

  2. Launch the physical PIPER control node:
ros2 launch piper my_start_single_piper_rviz.launch.py 
  3. Launch the IK node and interactive_marker node (in the same launch file). The arm will move to the HOME position:
ros2 launch piper_kinematics piper_ik.launch.py
  4. Drag the interactive_marker and observe the movement of the physical PIPER arm.

1 post - 1 participant

Read full topic

by Agilex_Robotics on February 12, 2026 03:03 AM

February 11, 2026
ROS2 news & updates wanted

Hi, I’m the admin of a robotics group with 412K members.

My group includes all kinds of robotics enthusiasts including some ROS enthusiasts.

I’d like to encourage you to post in my group. No strings attached. You get exposure worldwide.

Commercial robotics posts are allowed, as long as they are not spammy.

facebook / groups/351994643955858

For your understanding, I don’t get paid for moderating the group. I just want to make it a better place for robotics enthusiasts.

Thank you for reading.

3 posts - 3 participants

Read full topic

by iliao on February 11, 2026 11:23 PM

Building and Integrating AI Agents for Robotics — Experiences, Tools, and Best Practices

I’ve been exploring how AI agents are being developed and used to interact with robotic systems, especially within ROS and ROS 2 environments. There are some exciting projects in the community — from NASA JPL’s open‑source ROSA agent that lets you query and command ROS systems with natural language, to community efforts building AI agents for TurtleSim and TurtleBot3 using LangChain and other agent frameworks.

I’d love to start a discussion around AI agent design, implementation, and real‑world use in robotics:

  1. Which AI agent frameworks have you experimented with for robotics?
    For example, have you used ROSA, RAI, LangChain‑based agents, or custom solutions? What worked well and what limitations did you encounter?

  2. How do you handle multi‑modal inputs and outputs in your agents?
    (e.g., combining natural language, sensor data, and robot commands)

  3. What strategies do you use for planning and action execution with your agent?
    Do you integrate RL policies, behavior trees, skill libraries, or other reasoning approaches?

  4. What tooling or libraries do you recommend for scalable agent performance?
    Have you found certain profiling tools, API integrations, or frameworks particularly helpful?

  5. What are the biggest challenges you’ve faced when deploying your AI agent on real robots?
    (e.g., latency, safety, unexpected robot behavior, or integration issues)

  6. Are there any resources, examples, or papers that helped you with agent development?
    I’m keen to share references and compare experiences.

Let’s share our experiences and recommendations — whether you’re just starting to explore AI agents or you’ve already built something that interacts with real robotic systems!

1 post - 1 participant

Read full topic

by aartijangid on February 11, 2026 05:06 PM

What are the recommended learning resources for mastering ROS quickly?

What are the recommended learning resources for mastering ROS quickly?

4 posts - 4 participants

Read full topic

by Suheb on February 11, 2026 05:05 PM

🔥 The ROS2 competition is your ticket to the world of robotics, where a job, a university, and a real robot await!

Do you want to do more than just “play” with robots—do you want to become a sought-after specialist, invited to join teams, companies, and labs?
Then the ROS2 competition is your perfect start. Here’s why:
:white_check_mark: It’s not just a game—it’s a real skill that employers value.
ROS2 is the industry standard in robotics. It’s required for job openings from Sber’s Robotics Center to Boston Dynamics.
By participating in the competition, you’re not learning from a textbook—you’re solving a real problem: a robot must navigate autonomously, recognize objects, and manipulate them, just like in service and industrial robotics.
:money_bag: Everything is within reach, even for a student
You can build a robot for just 100–300 dollars.
Divide it among a team—it’s less than a gym membership.
And finding a sponsor for that amount? It’s easy—especially when you present a real project, not just an idea.
:hammer_and_wrench: You’ll have a working robot—not a toy, but a tool for future projects.
After the competition, you’ll have a fully functional autonomous robot capable of:

  • Navigating the room
  • Grasping objects
  • Working in the real world

This is your personal portfolio project, which will open doors to internships and research groups.
:bullseye: Everything is already there—you’re not starting from scratch.
Playing field? Just order a Charuco banner from any advertising agency—matte, inexpensive, ready to go.
Robot? There’s a baseline configuration on GitHub—a basic robot you can start with.
Training? A free, public course on ROS2 on Stepik with step-by-step instructions—everything from installation to running algorithms.
:brain: You’ll learn more than just ROS2—you’ll master the entire engineering stack.

  • Linux
  • Configuring local networks
  • C++, Python
  • Computer vision
  • Path planning
  • Manipulator control

And much more—all in one project!
:graduation_cap: And starting in 2027, winning the ROS2 competition will earn you extra points when applying to universities and graduate schools!
You’re not just participating—you’re investing in your future.
Today—training. Tomorrow—an advantage over other applicants.
:joystick: And yes—it’s fun!
You’ll work in a team, solve puzzles, see how your code brings hardware to life…
It’s adrenaline, excitement, and pleasure—everything that made us fall in love with robotics in the first place. :rocket: Don’t wait for the “perfect moment.” The perfect moment is now.
Register for the ROS2 competition—and in one month, you won’t be dreaming about a career in robotics…
You’ll already be there.
:backhand_index_pointing_right: Click “Register”—while others are thinking, you’re already building the future.
:man_dancing:Get inspired by this song!

An article explaining the competition in detail.

Competition regulations and rules.

Video explanation of the competition regulations and rules.

We invite teams from all countries. The competition’s lead organizers can provide English translation.

The competition is being held as part of the ROS Meetup conference on robotics and artificial intelligence in robotics, March 21-22, 2026, in Moscow.

We’d like to host an international ROS2 competition in Moscow every year. If you’d like to help us with this, please let us know.

2 posts - 1 participant

Read full topic

by amburkoff on February 11, 2026 10:17 AM

🚀 Invitation to the scientific section at the ROS Meetup 2026 conference in Moscow. Remote participation is possible

CFP-ROSRM2026 - Eng.pdf (585.6 KB)

The Robot Operating System Research Meetup will be held for the first time in 2026 as part of the scientific track of the annual ROS Meetup. This is an international scientific forum dedicated to the discussion of artificial intelligence methods in robotics.

:loudspeaker: Attention, scientists and researchers!

:globe_showing_europe_africa: Registration for scientific papers is now OPEN! Papers can be submitted to the ROSRM 2026 scientific section, which is dedicated to artificial intelligence methods in robotics and will be held at the ROS Meetup conference on March 20-22, 2026, in Moscow. Don’t miss your chance to present your work and be published in a Scopus-listed journal!

:mechanical_arm: Article Topics: Intelligent robotics, robotic algorithms, deep learning, reinforcement learning, agents in robotics, computer vision, navigation and control.

:memo: Submission Procedure:

  • Submit your abstract or full article before the ROS Meetup conference.
  • If your abstract is accepted, you will present your paper in the scientific section at the conference, receive feedback, and receive recommendations for improvement. Address: Moscow Institute of Physics and Technology, Dolgoprudny, Russia, or remotely via videoconference.
  • After the conference, revise your article based on the comments received and resubmit it.
  • Receive publication of your article in the journal Optical Memory and Neural Networks (indexed in Scopus, WoS, Q3, and included in the White List of Journals)!

:link: Article Submission:
Articles must be submitted through the service OpenReview. Details on preparing materials and the registration fee will be published on the conference website: rosmeetup.ru/science-eng

:books: Accepted articles will be published in the journal Optical Memory and Neural Networks, indexed in Scopus, ensuring your work is visible internationally! Publications also count toward master’s and doctoral programs.

:red_question_mark: Any questions? Ask Dmitry Yudin.

:fire: Don’t miss the chance to advance your scientific career and get published in an international journal! Submit your article and join this important scientific event!

:wrench: You can also provide a link to the source code of your ROS2 package (this is optional). This way, we support open source and the ROS philosophy of reusing software components across different robots!

IMPORTANT DATES
March 9, 2026 — Abstract submission deadline
March 16, 2026 — Program committee decision on paper acceptance
March 19, 2026 — Participant registration
March 21–22, 2026 — Conference
April 20, 2026 — Full article submission deadline
May 25, 2026 — Notification of article acceptance

:memo: Fill out the OpenReview submission form. Right now, just the abstract is enough! :rocket::robot:

1 post - 1 participant

Read full topic

by amburkoff on February 11, 2026 09:53 AM

NVIDIA Isaac ROS 4.1 for Thor has arrived

NVIDIA Isaac ROS 4.1 for Thor is now live. This open-source collection of accelerated ROS 2 packages and reference applications adds more flexibility for building and deploying on Jetson AGX Thor.

This release introduces a Docker-optional development and deployment workflow, with new Virtual Environment and Bare Metal modes that make it easier to integrate Isaac ROS packages into your existing setup.

We’ve also made several key updates across the stack. Isaac ROS Nvblox now supports improved dynamics with LiDAR and motion compensation, and Isaac ROS Visual SLAM adds support for RGB-D cameras. There’s a new 3D-printable multi-camera rig for mounting RealSense cameras directly to Jetson AGX Thor, along with canonical URDF poses to get you started quickly.

On the sim-to-real side, a new gear assembly tutorial walks through training a reach policy in simulation and deploying it to a UR10e arm. And for data movement, you can now send and receive point clouds using the CUDA with NITROS API.

Check out the full details :right_arrow: here and let us know what you build with 4.1 :rocket:

1 post - 1 participant

Read full topic

by HemalShahNV on February 11, 2026 03:00 AM

February 10, 2026
Henki ROS 2 Best Practices - For People and AI

Hi all!

We’ve decided to write down and publish some of our best practices for ROS 2 development at Henki Robotics! The list has been compiled from years of experience developing ROS applications, and we wanted to make this advice freely available, as we believe that some of these simple tips can have a huge impact on a project’s architecture and maintainability.

In addition to making this advice available to developers, we built the repository so that the best practices can be directly integrated with coding agents to support modern AI-driven development. You can generate quality code automatically, or review your current project. We’ve tested this using Claude, and the difference in generated code is noticeable - we added examples in the repo to showcase the impact of these best practices.

More info in the repository. We’d love to hear which practices you find useful, and which ones we are still missing from our listing.

2 posts - 1 participant

Read full topic

by jak on February 10, 2026 04:01 PM

February 09, 2026
Ouster Acquires StereoLabs Creating a World-Leading Physical AI Sensing and Perception Company

Ouster asserts its position in Physical AI by acquiring StereoLabs :tada:

https://investors.ouster.com/news-releases/news-release-details/ouster-acquires-stereolabs-creating-world-leading-physical-ai

2 posts - 2 participants

Read full topic

by Samahu on February 09, 2026 05:43 PM

February 08, 2026
Working prototype of native buffers / accelerated memory transport

Hello ROS community,

As promised in our previous Discourse post, we have uploaded our current version of the accelerated memory transport prototype to GitHub, and it is available for testing.

Note on code quality and demo readiness

At this stage, we would consider this to be an early preview. The code is still somewhat rough around the edges, and still needs a thorough cleanup and review. However, all core pieces should be in place, and can be shown working together.

The current demo is an integration test that connects a publisher and a subscriber through Zenoh, and exchanges messages using a demo backend for the native buffers. It will show a detailed trace of the steps being taken.

The test at this point is not a visual demo, and it does not exercise CUDA, Torch or any other more sophisticated flows. We are working on a more integrated demo in parallel, and expect to add those shortly.

Also note that the structure is currently a proposal, detail of which will be discussed in the Accelerated Memory Transport Working Group, so some of the concepts may still change over time.

Getting started

In order to get started, we recommend installing Pixi first for an isolated and reproducible environment:

curl -fsSL https://pixi.sh/install.sh | sh

Then, clone the ros2 meta repo that contains the links to all modified repositories:

git clone https://github.com/nvcyc/ros2.git && cd ros2

Lastly, run the following command to set up the environment, clone the sources, build, and run the primary test to showcase functionality:

pixi run test test_rcl_buffer test_1pub_1sub_demo_to_demo

You can run pixi task list for additional commands available, or simply do pixi shell if you prefer to use colcon directly.

Details on changes

Overview

The rolling-native-buffer branch adds a proof-of-concept native buffer feature to ROS 2, allowing uint8[] message fields (e.g., image data) to be backed by vendor-specific memory (CPU, GPU, etc.) instead of always using std::vector. A new rcl_buffer::Buffer<T> type replaces std::vector for these fields while remaining backward-compatible. Buffer backends are discovered at runtime via pluginlib, and the serialization and middleware layers are extended so that when a publisher and subscriber share a common non-CPU backend, data can be transferred via a lightweight descriptor rather than copying through CPU memory. When backends are incompatible, the system gracefully falls back to standard CPU serialization.

Per package changes

rcl_buffer (new)

Core Buffer<T> container class — a drop-in std::vector<T> replacement backed by a polymorphic BufferImplBase<T> with CpuBufferImpl<T> as the default.

rcl_buffer_backend (new)

Abstract BufferBackend plugin interface that vendors implement to provide custom memory backends (descriptor creation, serialization registration, endpoint lifecycle hooks).

rcl_buffer_backend_registry (new)

Singleton registry using pluginlib to discover and load BufferBackend plugins at runtime.

demo_buffer_backend, demo_buffer, demo_buffer_backend_msgs (new)

A reference demo backend plugin with its buffer implementation and descriptor message, used for testing the plugin system end-to-end.

test_rcl_buffer (new)

Integration tests verifying buffer transfer for both CPU and demo backends.

rosidl_generator_cpp (modified)

Code generator now emits rcl_buffer::Buffer<uint8_t> instead of std::vector<uint8_t> for uint8[] fields.

rosidl_runtime_cpp (modified)

Added trait specializations for Buffer<T> and a dependency on rcl_buffer.

rosidl_typesupport_fastrtps_cpp (modified)

Extended the type support callbacks struct with has_buffer_fields flag and endpoint-aware serialize/deserialize function pointers.

Added buffer_serialization.hpp with global registries for backend descriptor operations and FastCDR serializers, plus template helpers for Buffer serialization.

Updated code generation templates to detect Buffer fields and emit endpoint-aware serialization code.

rmw_zenoh_cpp (modified)

Added buffer_backend_loader module to initialize/shutdown backends during RMW lifecycle.

Extended liveliness key-expressions to advertise each endpoint’s supported backends.

Added graph cache discovery callbacks so buffer-aware publishers and subscribers detect each other dynamically.

Buffer-aware publishers create per-subscriber Zenoh endpoints and check per-endpoint backend compatibility before serialization.

Buffer-aware subscribers create per-publisher Zenoh subscriptions and pass publisher endpoint info into deserialization for correct backend reconstruction.

Interpreting the log output

test_1pub_1sub_demo_to_demo produces detailed log output, highlighting the steps taken so that you can follow what happens as a buffer flows through the native buffer infrastructure.

Below are the key points to watch out for, which also provide good starting points for more detailed exploration of the code.

Note that if you used the Pixi setup above, the code base will have compile_commands.json available everywhere, and code navigation is available seamlessly through your favorite LSP server.

Backend Initialization

Each ROS 2 process discovers and loads buffer backend plugins via pluginlib, then registers their FastCDR serializers.

Discovered 1 buffer backend plugin(s) / Loaded buffer backend plugin: demo
Demo buffer descriptor registered with FastCDR

Buffer-Aware Publisher Creation

The RMW detects at creation time that sensor_msgs::msg::Image contains Buffer fields, and registers a discovery callback to be notified when subscribers appear.

Creating publisher for topic '/test_image' ... has_buffer_fields: '1'
Registered subscriber discovery callback for publisher on topic: '/test_image'

Buffer-Aware Subscriber Creation

The subscription is created in buffer-aware mode: no Zenoh subscriber is created yet; instead, it waits for publisher discovery and creates per-publisher endpoints dynamically.

has_buffer_fields: 1, is_buffer_aware: 1
Initialized buffer-aware subscription ... (endpoints created dynamically)

Mutual Discovery

Both sides discover each other through liveliness key-expressions that include backends:demo:version=1.0, confirm backend compatibility, and create per-peer Zenoh endpoints.

Discovered endpoint supports 'demo' backend
Creating endpoint for key='...'

Buffer-Aware Publishing

The publisher serializes the buffer via the demo backend’s descriptor path instead of copying raw bytes, and routes the result to the per-subscriber endpoint.

Serializing buffer (backend: demo)
Descriptor created: size=192, data_hash=1406612371034480997

Buffer-Aware Deserialization

The subscriber uses endpoint-aware deserialization to reconstruct the buffer from the descriptor, restoring the demo backend implementation.

Deserialized backend_type: 'demo'
from_descriptor() called, size=192 elements, data_hash=1406612371034480997

Application-Level Validation

The subscriber confirms the data arrived through the demo backend path with correct content.

Received message using 'demo' backend - zero-copy path!
Image #1 validation: PASSED (backend: demo, size: 192)

What’s next

The code base will serve as a baseline for discussions in the Accelerated Memory Transport Working Group, where the overall concept as well as its details will be discussed and agreed upon.

In parallel, we are working on integrating fully featured CUDA and Torch backends into the system, which will enable more visually appealing demos and provide a blueprint for how more realistic vendor backends would be implemented.

rclpy support is another high-priority item to integrate, ideally allowing for seamless tensor exchange between C++ and Python nodes.

Lastly, since Zenoh will not become the default middleware for the ROS 2 Lyrical release, we will restart efforts to integrate the backend infrastructure into Fast DDS.

2 posts - 2 participants

Read full topic

by karsten-nvidia on February 08, 2026 08:04 PM

February 06, 2026
Turning Davos into a Robot City this July

Hi all!

I am helping to organize the Davos Tech Summit July 1st-4th this year: https://davostechsummit.com/

Rather than keeping it as a typical fair or tech conference behind closed doors, we had the idea of turning Davos into a robot city. For this, we need the help of robotics companies to actually deploy their robots around the city, which we can help set up and coordinate. Some of the companies that have already confirmed they are joining the Robot City concept are:

  • Ascento with their security robot
  • Tethys Robotics
  • Deep Robotics
  • Loki Robotics
  • Astral

Humanoid robots:

  • Agibot - A2 + X2 - tasks TBD
  • Droidup - task TBD
  • Galbot G1 - doing pick and place at a shop
  • Unitree - Various robots and tasks
  • Booster Robotics - K1
  • Limx Dynamics - Olli
  • Devanthro

There are also ongoing talks with companies that are open to bringing autonomous excavators, various inspection robots, and a drone show, as well as setting up a location for people to pilot racing drones. We are also working on bringing autonomous cars and shuttles to drive around the city.

We were in Davos during WEF to promote this event and got some media coverage: Davos Tech Summit 2026 | Touching Intelligence

If you are interested in speaking at the event, please reach out! We are building the program during this month.

We are also looking into organizing a ROS Meetup during the event.
Let us know if you’d like to join.

Cheers!

3 posts - 2 participants

Read full topic

by jopequ on February 06, 2026 04:32 PM

ROS 2 Rust Meeting: February 2026

The next ROS 2 Rust Meeting will be on Mon, Feb 9, 2026, at 2:00 PM UTC

The meeting room will be at https://meet.google.com/rxr-pvcv-hmu

In the unlikely event that the room needs to change, we will update this thread with the new info!

1 post - 1 participant

Read full topic

by jhdcs on February 06, 2026 02:46 PM

Canonical Observability Stack Tryout | Cloud Robotics WG Meeting 2026-02-11

Please come and join us for this coming meeting from Wed, Feb 11, 2026 4:00 PM UTC to Wed, Feb 11, 2026 5:00 PM UTC, where we plan to deploy an example Canonical Observability Stack instance based on information from the tutorials and documentation.

Last meeting, the CRWG invited Guillaume Beuzeboc from Canonical to present on the Canonical Observability Stack (COS). COS is a general observability stack for devices such as drones, robots, and IoT devices. It operates on telemetry data, and the COS team has extended it to support robot-specific use cases. If you’re interested in watching the talk, it is available on YouTube.

The meeting link for the next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications, or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

2 posts - 1 participant

Read full topic

by mikelikesrobots on February 06, 2026 10:36 AM

February 05, 2026
Is there / could there be a standard robot package structure?

Hi all! I imagine this might be one of those recurring noob questions that keep popping up every few months; please excuse my naivety…

I am currently working on a ROS 2 mobile robot (diff drive, with a main goal of easy hardware reconfigurability). Initial development took place in a tangled monolithic package, and we are now working on breaking it up into logically separate packages: common files, simulation, physical robot implementation, navigation stack, example apps, hardware extensions, etc.

To my understanding, there is no official document that recommends a project structure for this, yet “established” robots (e.g. turtlebot4, UR, rosbot) seem to follow a similar convention along the lines of:

  • xyz_description – URDFs, meshes, visuals
  • xyz_bringup – Launch and configuration for “real” physical implementation
  • xyz_gazebo / _simulation – Launch and configuration for a simulated equivalent robot
  • xyz_navigation – Navigation stack

None seem to be exactly the same, though. My understanding is that this is a rough convention that the community converged to over time, and not something well defined.

My question is thus twofold:

  1. Is there a standard for splitting up a robot’s codebase into packages, which I’m unaware of?
  2. If not, would there be any value in writing up such a recommendation?

Cheers!

3 posts - 3 participants

Read full topic

by trupples on February 05, 2026 11:49 PM


Powered by the awesome: Planet