May 06, 2026
Where does latency in WebRTC video streaming come from? An analysis

We analyzed the glass-to-glass latency of streaming video from robots to the web using WebRTC. Typical total latency for remote streaming is 150-180 ms, but how does this break down?

Tl;dr:

  • The vast majority of latency actually comes from the camera itself and the USB bus (~100 ms).
  • H264 encoding and decoding add around 10 ms each (or less).
  • WebRTC only adds around 10 ms of latency for remote streaming (jitter buffers).
  • The rest is due to static network delay (“ping timing”, speed of light).
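
Summed as a back-of-the-envelope budget, the rounded figures above reproduce the observed total. Note the 20-50 ms network-delay range below is my own illustrative placeholder, not a figure from the analysis:

```python
# Back-of-the-envelope glass-to-glass latency budget (ms).
# Component figures are the rounded estimates from the post; the
# network delay range is an illustrative assumption, not a measurement.
budget = {
    "camera + USB bus": 100,
    "H264 encode": 10,
    "H264 decode": 10,
    "WebRTC jitter buffer": 10,
}
fixed = sum(budget.values())           # fixed pipeline latency before network delay
network_ms = (20, 50)                  # assumed static network delay range (ms)
total = (fixed + network_ms[0], fixed + network_ms[1])
print(f"{fixed} ms fixed + {network_ms[0]}-{network_ms[1]} ms network "
      f"= {total[0]}-{total[1]} ms total")
```

With those assumptions the budget lands exactly in the 150-180 ms range quoted above.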

Read the full analysis here:

1 post - 1 participant

Read full topic

by chfritz on May 06, 2026 05:40 PM

ROS2 + Gazebo Harmonic on macOS 26

Hello,

It seems ROS2 on macOS 26 installation guide is this: Installing ROS 2 on macOS — ROS 2 Documentation: Crystal documentation
Gazebo Harmonic on macOS 26 installation guide is this: Binary Installation on macOS — Gazebo harmonic documentation

I read online that there might be some incompatibility issues. I would like to understand the current state of ROS 2 + Gazebo Harmonic on macOS. Would those two guides work, and are there any known issues?

1 post - 1 participant

Read full topic

by MT1234 on May 06, 2026 05:36 PM

Camera streaming made easy [NEW Release]

Greetings fellow roboticists,

You might know me from other projects such as ros_babel_fish, the QML ROS 2 module, or RQml.
Maybe as the dude who ends all their posts with an AI-generated image.

TLDR: ros_camera_server out now (link below). Efficient streaming to ROS, plus a robust low-bandwidth feed to the operator station over H.264/H.265 using RTP/SRT/WebRTC. Simple config, automatically optimized pipelines.

Camera streaming in ROS is kind of cumbersome, even though it’s always the same issue.
On the robot, you want raw (or compressed between PCs) ROS images for processing, and on the remote operator station, you want a low-latency, ideally low-bandwidth live video feed.
In academia, many setups I have encountered use compressed images for remote operator setups because it’s simple and, in good network conditions, works well enough.
Using usb_cam or gscam with compressed images is also what Opus 4.7 would suggest when asked.
While this does work, it’s quite resource- and bandwidth-intensive and is one of the roadblocks that need to be addressed to bring research solutions to the field, especially in rescue robotics, where bandwidth is a limited resource.

To save bandwidth, you can use something like gst_bridge to receive the ROS image, encode it in H.264, and send it to your remote operator station.
This will be approximately 1/10 to 1/30 of the bandwidth for comparable quality.

If your camera is a high-resolution USB camera, it will most likely stream JPEG-encoded data for the high-resolution, high-FPS modes.
So your pipeline becomes:
Camera (jpeg) → usb_cam (decodes jpeg) → image_transport (re-encodes as jpeg for compressed) → gst_bridge → handwritten gstreamer pipeline (requires some technical knowledge to get right) → stream
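
To make the last step concrete, a handwritten GStreamer pipeline of this kind usually amounts to assembling a launch string like the following. This is an illustrative sketch only: the element names (jpegdec, x264enc, rtph264pay, ...) are standard GStreamer plugins, but the source element, caps, bitrate, and host are placeholders I chose, not anything from ros_camera_server or gst_bridge:

```python
# Illustrative gst-launch-style pipeline for the decode/re-encode path above.
# All concrete values (bitrate, host, port) are placeholders.
elements = [
    "appsrc name=ros_image_in",               # stand-in for the gst_bridge source element
    "jpegdec",                                 # decode the (re-encoded) JPEG frames
    "videoconvert",
    "x264enc tune=zerolatency bitrate=2000",   # H.264 encode for low-latency streaming
    "rtph264pay config-interval=1 pt=96",      # packetize for RTP
    "udpsink host=192.168.1.10 port=5000",     # send to the operator station
]
pipeline = " ! ".join(elements)
print(pipeline)
```

Getting the caps, codec options, and payloader settings of such a string right is exactly the "requires some technical knowledge" part.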

It doesn’t take an expert to see that this is not optimal.
Here’s where my new ros_camera_server comes in.
You specify one yaml file with your cameras, each with one input and as many outputs as you want (and your compute can handle).
The outputs can differ in resolution and framerate.
Currently supported are ROS 2, RTP, SRT, and WebRTC.
The camera server will automatically create and optimize GStreamer pipelines based on your available hardware acceleration; these produce your ROS output and streams in parallel, applying scaling and framerate limiting as necessary.
This cuts the decode and re-encode overhead and significantly reduces latency and CPU usage.
JPEG camera input can also be published directly as a ROS-compressed image or forced to be decoded if needed.
Check the plots from my benchmark in the comments to see that the much easier configuration does not come at the cost of higher latency or overhead; it beats the alternatives in both.
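
To give a feel for what "one yaml file with your cameras" means, here is a purely hypothetical sketch of such a config (one input, several outputs differing in type, resolution, and framerate). The key names are mine for illustration and are not the package's actual schema; check the repository for the real format:

```yaml
# HYPOTHETICAL sketch of a camera-server config.
# Key names are illustrative, not the package's actual schema.
cameras:
  front:
    input: /dev/video0            # one input per camera
    outputs:                      # as many outputs as your compute can handle
      - type: ros2                # image on a ROS topic for on-robot processing
        topic: /front/image_raw
      - type: webrtc              # low-bandwidth live feed for the operator
        codec: h264
        width: 1280
        height: 720
        framerate: 15
```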

Here’s the repo:

If you can’t comply with the AGPL, you can contact me to see if we can find a suitable license for your use.

PS: The ros_camera_server preserves the image capture/header timestamp as a custom RTP header extension and can restore it from ros_camera_server H.264/H.265 streams.
So you can stream from the robot over RTP/SRT/WebRTC and restore it to ROS on the operator station or another robot, and the timestamp will be preserved in the ROS image output.

Benchmarks



I hope this helps groups without video streaming experts to create more robust remote control setups.
If you read this far and this was not of interest to you, I’m sorry, here’s your AI picture:

1 post - 1 participant

Read full topic

by StefanFabian on May 06, 2026 01:02 PM

ALERT: do NOT visit plotjuggler.com

Hi,

avoid the page “plotjuggler DOT com”

I did not create this page and I have no idea who did.

Please do not download any files from there; there is a high risk of malware / phishing.

I will try to have a solution to this ASAP.

Davide

3 posts - 2 participants

Read full topic

by facontidavide on May 06, 2026 12:56 PM

Rclnodejs 2.0.0 beta — ROS 2 Lyrical (beta) and Node.js 26 support

Hi all,

For those new to the project: rclnodejs is the Node.js client library for ROS 2, maintained under the Robot Web Tools umbrella. It lets you write ROS 2 nodes — publishers, subscribers, services, actions, parameters, lifecycle, etc. — in plain JavaScript or TypeScript, with a native binding to rcl/rmw so messages stay zero-copy where possible. It’s a good fit for web dashboards, Electron desktop apps, browser bridges, scripting, and rapid prototyping.

I’m happy to announce that rclnodejs 2.0.0-beta.0 is out — the first preview of the 2.x line, with first-class support for the upcoming ROS 2 Lyrical Luth release on Ubuntu 26.04 and the latest Node.js 26.

What’s new in 2.0.0-beta.0

  • ROS 2 Lyrical Luth (Ubuntu 26.04) is supported in addition to existing distros (Humble / Jazzy / Kilted / Rolling).

  • Node.js 26.x is supported, with Linux x64 and arm64 prebuilds so npm install works without a local toolchain.

New: rosocket — talk to ROS 2 from a browser, no JS library

This release ships a lightweight WebSocket bridge called rosocket plus an end-to-end demo. The point: a browser tab can subscribe and publish to topics (and call services) using only the built-in WebSocket and JSON APIs — no rosbridge or roslibjs needed.

URL convention:


ws://<host>:<port>/topic/<name>

ws://<host>:<port>/service/<name>

Browser-side pub/sub in a few lines:


const BRIDGE = 'ws://localhost:9000';

// Subscribe — every published message arrives as onmessage.
const sub = new WebSocket(`${BRIDGE}/topic/chatter`);
sub.onmessage = (ev) => console.log('recv:', ev.data); // {"data":"hi"}

// Publish — open, send JSON, close.
const pub = new WebSocket(`${BRIDGE}/topic/chatter`);
pub.onopen = () => {
  pub.send(JSON.stringify({ data: 'hello from browser' }));
  setTimeout(() => pub.close(), 200);
};

Live walkthrough:

rclnodejs rosocket demo

Code: demo/rosocket.

Electron desktop visualization demos

For richer UIs, the same library powers cross-platform desktop apps via Electron — HTML/CSS/Three.js/WebGL on the front end, with real ROS 2 nodes running in the Electron main process. The demos cover topics, a car controller, a manipulator, and a turtle TF2 visualizer.

rclnodejs electron manipulator demo

Code: demo/electron.

Cheers,

Minggang

1 post - 1 participant

Read full topic

by minggangw on May 06, 2026 10:06 AM

Is FastDDS (default config) totally unusable for simulation with rclpy?

I was fighting some performance issues in our simulation for a subject I teach.

It boiled down to one slight performance improvement in rclpy which I started discussing, but the larger surprise was how super-bad FastDDS is with distributing the 250 Hz /clock topic.

I did an experiment with our full autonomy stack (but on a Turtlebot with a 2D lidar, so nothing heavy). Most nodes run rclpy with MultiThreadedExecutor (more on that later). In total, 16 nodes subscribe to the /clock topic, and Gazebo runs at ~70% RTF, so the real frequency of /clock is more like 200 Hz.

Pink is FastDDS, blue is Zenoh.

Let me explain the plot:

The vertical axis shows dt (delta time), i.e. the time difference between two consecutive /clock messages received by the node (measured by instrumenting the default clock callback of a node).
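
The metric itself is simple to reproduce: log the receive time of each /clock message in the callback and look at gaps between consecutive arrivals. A sketch of the computation (names are mine, not the original benchmark code):

```python
# Sketch of the dt metric: given the receive timestamps logged in the
# /clock callback, compute the gaps between consecutive messages.
# Variable names are illustrative, not from the original benchmark.
def clock_dts(recv_times):
    """Given monotonically increasing receive timestamps (seconds),
    return the deltas between consecutive /clock messages."""
    return [b - a for a, b in zip(recv_times, recv_times[1:])]

# At an effective ~200 Hz clock rate, messages should arrive every ~5 ms;
# middleware stalls show up as large outliers in this series.
times = [0.000, 0.005, 0.010, 0.512, 0.517]   # synthetic example with one ~0.5 s stall
dts = clock_dts(times)
print(max(dts))   # worst-case gap between consecutive /clock messages
```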

With FastDDS, some nodes sometimes do not receive any /clock message for more than 0.5 s!!! With Zenoh, the worst case is about 0.18 s.

Even more interesting is the best case: FastDDS achieves a dt of 0.004 s (the expected value) for only a small fraction of the dataset, around 5%. Zenoh achieves this dt most of the time (the almost solid blue line at the bottom). (No, this blue line does not hide the pink dots; there are almost none underneath it.)

This is the histogram of dt values (notice logarithmic axis, Zenoh left, FastDDS right):

I understand that the combination of rclpy and MultiThreadedExecutor is one of the worst things you can do in ROS 2, but still, this huge difference in usability between FastDDS and Zenoh strikes me.

9 posts - 4 participants

Read full topic

by peci1 on May 06, 2026 02:33 AM

May 05, 2026
[Release] emcon_gz_hardware_interface: A DDS-bypassing hardware interface for Gazebo Harmonic

Hi everyone,

When scaling up multi-robot simulations or running complex network-isolated state estimators in Gazebo Harmonic, relying on the standard gz_ros2_control shared memory architecture can become a bottleneck or present domain isolation challenges.

To solve this, I’ve open-sourced emcon_gz_hardware_interface.

It is a standard ros2_control SystemInterface, but instead of routing simulation traffic through ROS 2 DDS, it acts as a “data diode” and subscribes/publishes directly on native gz-transport topics.

Key Features:

  • Total DDS Bypass: Keeps your ROS_DOMAIN_ID completely isolated from high-frequency simulation traffic.

  • Strict Real-Time Safety: Uses realtime_tools::RealtimeBuffer to ensure the control loop never blocks.

  • Fully Parameterized: Configurable directly via URDF <hardware> tags.

Repository: https://github.com/yenode/emcon_gz_hardware_interface

I’ve already opened an RFC with the ros-controls team to see if this fits upstream, but the standalone package is fully CI-tested and ready to use for ROS 2 Jazzy! I’d love to hear feedback from anyone running massive fleets or strict network simulations.

1 post - 1 participant

Read full topic

by Aditya_Pachauri on May 05, 2026 11:50 PM

"ROS 2 in a Nutshell: A Survey" is now published in ACM Computing Surveys

Hi everyone,

I have been a silent observer here since 2019, and this is my first post on Discourse. I hope you will excuse the sudden intrusion :grinning_face_with_smiling_eyes:

I am pleased to share that our survey paper, “ROS 2 in a Nutshell: A Survey,” has been published in ACM Computing Surveys.

Paper DOI: https://doi.org/10.1145/3815113

ACM Computing Surveys is recognized with an Impact Factor of 28.0 and is ranked #1 out of 147 journals in Computer Science, Theory & Methods.

The goal of this survey is to provide a broad and systematic overview of the ROS 2 ecosystem. In particular, the paper covers:

  • the evolution of ROS 2
  • the motivations behind ROS 2 and its architectural redesign
  • middleware and RMW evolution
  • a taxonomy of ROS 2 literature and research directions
  • frameworks, simulators, community packages, and the broader ROS 2 software ecosystem

The paper is organized around three main research questions:

  1. How does ROS 2 improve upon ROS 1, and what new limitations arise?
  2. What advances address redesign challenges and enable deployment?
  3. Which frameworks and tools shape the ROS 2 ecosystem, and where are the remaining gaps?

A major outcome of the work is an open-access companion database that organizes ROS-related literature, tools, and ecosystem resources:

ROS 2 survey database: ROS 2 in a Nutshell: A Survey

We also welcome community contributions to improve and extend the database:

Contribution guide: awesome-ros/CONTRIBUTING.md at main · asmbatati/awesome-ros · GitHub

This work was carried out by:

We would also like to express our appreciation to:

for their valuable support.

We are also grateful for the support of:

I hope the survey and the companion database will be useful to researchers, developers, and students working with ROS 2. :folded_hands:

Feedback, corrections, and suggestions for additions to the database are very welcome.

1 post - 1 participant

Read full topic

by asmbatati on May 05, 2026 01:27 AM

May 04, 2026
Guidance on learning ROS 2, focusing on motion planning and perception

Hello guys, I am a beginner learning ROS 2 and I want guidance on learning it from scratch; right now I am stuck on where to start. I have been watching a couple of YouTube tutorials but am struggling to understand certain concepts. I am here to learn from people who have prior knowledge of ROS.

Thanks

1 post - 1 participant

Read full topic

by Hariharan_Karthikeya on May 04, 2026 04:13 PM

Ouster unveils the first native color lidar sensor

5 posts - 3 participants

Read full topic

by Samahu on May 04, 2026 02:50 PM

RVizSplat: Visualize 3D Gaussian splats in RViz!

Hey everyone!

We are happy to share Release 1.0.0 of RVizSplat!
In this release, we provide an RViz display plugin that renders 3D Gaussian splats alongside other conventional markers in the scene. Along with this, we also provide an interface to stream GSplats over ROS topics and the ability to read .ply files which are in the 3DGS INRIA format directly from the plugin.

In case you wish to render Gaussian-splatted scenes in a resource-constrained environment, we also provide OIT-based (order-independent transparency) implementations that bypass sorting the Gaussians at the cost of rendering quality.

On an Nvidia RTX 3060 or better, we render a large scene with more than 6 million splats at 40-60 FPS, and the same scene at about 20 FPS on a modern integrated GPU. On comparatively smaller scenes (1-3 million splats), we achieve 100+ FPS. We also provide the ability to sort on an Nvidia GPU (radix sort) and on the CPU (PDQ sort).
Additional sorting techniques can be implemented through a simple interface if they suit your needs better.

The package is currently well tested on ROS 2 Rolling and is experimental on Jazzy and Kilted.
Please try out our work, let us know what you think, and star the repository to support the project :grin:

Team members that have made this project possible: Videh Patel, Akash Chikhalikar, Aditya Mathur, and Suchetan Saravanan.


4 posts - 2 participants

Read full topic

by suchetanrs on May 04, 2026 04:43 AM

May 03, 2026
I tested URDF format conversion on NASA Valkyrie, UR5, and Franka Panda: results and a free tool

I validated and converted URDFs from 5 popular robots to both Gazebo SDF and MuJoCo MJCF formats.

The tool is free to try (14-day Pro trial, no credit card):

  • Validate: POST /api/urdf/validate
  • Convert to SDF: POST /api/urdf/convert-format?target=sdf
  • Convert to MJCF: POST /api/urdf/convert-format?target=mjcf

Python SDK: pip install roboinfra-sdk
GitHub Action: roboinfra/validate-urdf-action@v1
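
Since only the endpoint paths are listed above, here is a sketch of what a conversion call might look like. The base URL is a placeholder I made up (the post does not give the host), and the XML content type is my assumption; only the path and ?target= query parameter come from the post:

```python
import urllib.request

# Build (but do not send) a URDF-to-SDF conversion request.
# BASE_URL is a placeholder, not the service's real host.
BASE_URL = "https://api.example.com"

urdf_xml = b"<robot name='demo'><link name='base_link'/></robot>"
req = urllib.request.Request(
    f"{BASE_URL}/api/urdf/convert-format?target=sdf",   # path from the post
    data=urdf_xml,
    headers={"Content-Type": "application/xml"},        # assumed content type
    method="POST",
)
print(req.get_method(), req.full_url)
```

Swapping `target=sdf` for `target=mjcf` would address the MJCF endpoint, per the list above.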

1 post - 1 participant

Read full topic

by Robotic on May 03, 2026 12:02 PM

April 30, 2026
ROS Lyrical Luth Beta and Call for Testing

We’re in the “beta” phase of development for ROS 2 Lyrical Luth! We have binary packages available for Ubuntu Resolute and RHEL 10, and rosdistro is open for newly released packages for Lyrical.

Testing the Lyrical beta

We published Installation instructions for Lyrical here. However, binary packages for Lyrical are only available in the testing repository.

Follow the pre-release testing instructions to use the ros-testing repository so that you can install the ros-lyrical-* packages.

For those using the comprehensive archive for installation, download the archive from the artifacts of this pre-release tag. If you’re building from source, use the ros2.repos file at that release.

If you think you’ve discovered a bug, please:

  • check the open issues and PRs on the related repository, or
  • discuss the issue in this thread, or
  • open a new issue

We’ll triage the issue or PR and decide when and how it should be fixed in Lyrical.

Releasing your packages

If you are a package maintainer, please follow this guide to release your package in Lyrical.

Reminder that the tutorial party starts Thu, Apr 30, 2026 7:00 AM UTC. Find all the details in this post.

Thanks!

The ROS 2 Team

2 posts - 1 participant

Read full topic

by sloretz on April 30, 2026 09:36 PM

ROS 2 Lyrical Luth Release Illustration and Swag 🎸

Hi Everyone,

It is my pleasure to present you with the illustration for ROS 2 Lyrical Luth! This release illustration is the work of our illustrator Ryan Hungerford. Ryan is an illustrator based in the Bay Area and his AI (actual intelligence) makes for some wonderful illustrations.

Lyrical Swag Sale

We’re also happy to announce that the ROS 2 Lyrical Luth swag sale is now live. We’re now using Fourth Wall for all of our ROS swag sales, as the platform supports a wide array of items and allows us to produce merch on demand and ship it almost anywhere on Earth. We’ve also created a permanent URL for ROS swag at store.openrobotics.org so it is easy to find. For this release we are offering eight different items for sale including:

  • :t_shirt: Men’s, women’s, and kids’ shirts (we’re big fans of the tri-blend shirts)
  • :baby: Baby onesies
  • :coat: Hoodies and long sleeve shirts
  • :bed: Throw pillows
  • :hot_beverage: Mugs
  • :framed_picture: Decorative prints

All profits from the Lyrical swag sale go directly to the Open Source Robotics Foundation and help support the ROS, Gazebo, ROS Control, and Open-RMF projects. If you order today you might just receive your swag by release day on May 22nd, 2026. If you would like to earn Lyrical swag by contributing to the project, please consider contributing to the Lyrical Test and Tutorial party that is currently taking place. The top twenty test contributors will be sent a code to our swag store.

6 posts - 3 participants

Read full topic

by Katherine_Scott on April 30, 2026 05:06 PM

Lyrical Luth Test and Tutorial Party Instructions

:tada: Update: Lyrical board has been updated and is live! Docs are live too. A recording of the kickoff meeting can be found here.

Hi Everyone,

As mentioned previously, we’re conducting a testing and tutorial party for the next ROS release, Lyrical Luth. If you happened to miss the kickoff of the Lyrical Luth Testing and Tutorial party this morning I have put together some written instructions that should let everyone participate, no matter their time zone. Here are the slides from the kickoff meeting.

TL;DR

We need your help to test the next ROS Distro before its release on Friday, May 22nd. We’re asking the community to pick a particular system setup, a combination of host operating system, CPU architecture, RMW vendor, and build type (source, debian, binary), and run through a set of ROS tutorials to make sure everything is working smoothly. Depending on the outcome of your tutorials, you can either close the ticket as completed or report the errors you found. If you can’t assign the ticket to yourself, leave a comment, and an admin will take care of it for you. Please do not sign up for more than one ticket at any given time. Everything you need to know about this process can be found in this Github repository.

As a thank you for your help, we’re planning to provide the top twenty contributors to the T&T party with their choice of either ROS Lyrical swag or OSRA membership. :warning: To be eligible to receive swag, you must register using this short Google Form so we can match email addresses to GitHub usernames and count the total tickets closed.:warning:

The testing and tutorial party will close on May 14, 2026, but we’re asking everyone to get started right away! We have 10,000 tickets to work through and with Lyrical’s transition to C++20, we fully anticipate that we’ll need to update a few tutorials and fix some broken source builds.

Full Instructions

We’re planning to release ROS 2 Lyrical Luth on May 22, 2026, and we need the community’s help to make sure that we’ve thoroughly tested the distro on a variety of platforms before we make the final release. What do we mean by testing? Well, lots of things, but in the context of the testing and tutorial party, we are talking about the package-level ROS unit tests and anything else you want to test. What do we mean by tutorials? We also want to make sure all our docs.ros.org tutorials are in working order before the release.

The difficulty in testing a ROS release is that people have lots of different ways they use ROS, and we can’t possibly test all of those combinations. For the testing and tutorial party we have created what we call “a setup.” A setup is a combination of:

  • RMW vendor: FASTDDS, CYCLONEDDS, CONNEXTDDS or ZENOH
  • BuildType: binary, debian or source
  • OS: Ubuntu Resolute 26.04, Windows 11, or RHEL 10
  • Chipset: Amd64 or Arm64

If you already have a particular system setup that you work with, we suggest that you roll with that; otherwise, feel free to create a new system setup just for testing purposes. If you normally use Windows or RHEL (or binary compatible distributions to RHEL like Rocky Linux / Alma Linux) we would really appreciate your help as we don’t have a ton of internal resources to test these distributions.

Here are the steps for participating in the testing and tutorial party:

  • Before you begin, please fill out the Google form so we have your contact information. We can’t send you swag if we don’t have both your email address and your GitHub user name.
  • First go to the Tutorial Party Github repo (bit.ly/LyricalBoard) and read the README.md.
  • Figure out your setup!
  • Once you’ve got your “setup” all figured out, you can use the bottom of the Lyrical Tutorial Party README file to filter by setup. There should be a set of tickets for your “setup”. Click on the links and review the available tickets. If you want to test something other than the available tickets, feel free to open a new ticket and describe exactly what you are testing.
  • Pick a single ticket for your setup and use the assignees option to assign it to yourself. If you can’t assign yourself, leave a comment and an admin will assign the ticket to you.
  • Take a look at the ticket and do as it asks in the “Links” section. For example, in this ticket, its links section points you to this tutorial. You should use your new ROS Lyrical Luth setup to run through that tutorial.
    • :warning: Please note that we’re using the Rolling documentation. If you see instructions to install a Rolling package, you’ll need to modify those to point to Lyrical.
  • Once you complete the links section things will either go smoothly or you will run into problems. Please report your results using the check boxes in the “Checks” section of your Github issue.
    1. If everything goes well, note as such in your ticket’s comment section. We ask that you attach your terminal’s output as a code block or as a gist file or include a screenshot. At this point feel free to close the ticket by clicking “close as completed.”
    2. If something went poorly please note it in your ticket’s comment section. Try to include a full stack trace or other debug output if possible. Please also run ros2 doctor --report and dump the output in your ticket.

The testing and tutorial party wraps up on May 14, 2026, but we’re asking everyone to get started early as we will need some lead time to address any bugs.

New for Lyrical: Pull Requests and Reviews

For 2026’s Test and Tutorial Party, we’re piloting a new feature: Lyrical Bug Fixes and PR Reviews. We’re looking for community members to help us out by lending us their eyes and expertise. For the T&T party we’re allowing participants to gain one extra point for each completed bug fix and PR review from the Lyrical board. We anticipate the majority of these issues will be documentation related so they should be fairly straightforward to fix.

For the T&T party we will provide you with one extra point if you do one of the following:

  • Create a pull request for a bug fix that addresses documented issues listed in the Lyrical issues board.
  • Review one of the pull requests or bug fixes listed in the Lyrical issues board.
  • We also have a limited number of general ROS pull request reviews that are also in scope for the T&T party. You can find those here (bit.ly/Lyrical-PR-Reviews)

To help us track and tabulate scores, you must fill out this short form every time you complete a review or a PR.

For Lyrical pull requests and bug fixes:

  • Ask to be assigned the issue in the Lyrical Tracking Board (bit.ly/LyricalTrackingBoard).
  • Write the relevant code or documentation. Remember to use the correct branch!
  • Build your solution and run the necessary tests and linters. This step is key to getting your PR approved.
  • Submit your PR. You must include a brief description of the issue and the issue number from the tracking board.
  • You must work with the reviewers to address all necessary feedback until the PR is accepted and merged.
  • If you use AI for your pull request you must report it in a manner consistent with OSRF policy.
  • Report your work using the form (bit.ly/LyricalPR).

For Lyrical reviews:

  • Request to be assigned to the pull request from the Lyrical Tracking Board (bit.ly/LyricalTrackingBoard). You can be assigned to one pull request at a time.
  • Once you are assigned to the pull request you must do the following:
    • Verify the fix by checking out the PR, building it, and replicating the bug conditions. For documentation this means checking out the PR and running make html.
    • Take one or more screenshots of the result.
  • Perform a realistic review of the pull request. There are two potential outcomes for your review:
    • You find no issues.
      • If that’s the case you must briefly list the steps you took to verify that the PR works and attach a screenshot.
    • You find an issue and request changes.
      • Changes should use the format: “Nit:” (minor change, usually a matter of preference, non-blocking), “Issue:” (major issue, blocking), “Suggestion:” (friendly suggestion, non-blocking), “Question:” (clarification, non-blocking), or “Chore:” (generally formatting issue, non-blocking)
      • For issues and chores the feedback in the pull request should include the following:
        • What specifically needs attention.
        • Why this change is necessary.
        • A suggestion on how to fix it.
      • You must follow up with the PR author to make sure their changes fix your issue. We suggest using the “suggest changes” feature liberally to expedite the process.
  • Generative AI should not be used for pull request reviews.
  • Report your work using the form (bit.ly/LyricalPR).

12 posts - 5 participants

Read full topic

by Katherine_Scott on April 30, 2026 04:37 PM

I'm done manually tuning DDS parameters!

Raise your hand if this sounds familiar:

  1. You just want better latency or higher throughput for your ROS2 app — but DDS throws hundreds of parameters at you and you have no idea where to start.
  2. So you end up spending hours (or days) manually tweaking, re-running benchmarks, and tweaking again… only to end up with “good enough” instead of actually good.

If that’s you, I have something that might help: GitHub - qualcomm-qrb-ros/ROS2-DDSConfig-Optimizer: An AI-driven tool that automatically tunes DDS configuration for ROS2 applications.

It’s an AI-driven tool that automatically tunes FastDDS configuration for your ROS 2 application. All you need to provide is:

  1. Your performance targets — latency, throughput, reliability, CPU/memory limits, whatever matters to you — in a simple XML file
  2. An initial DDS config as the baseline

That’s it. You’ll get back the best DDS configuration tailored to your application. :sparkles:

Would love to hear your feedback, bug reports, or feature ideas — issues and PRs are very welcome!

4 posts - 2 participants

Read full topic

by NaSong on April 30, 2026 06:50 AM

April 29, 2026
New ROS controller app

https://play.google.com/store/apps/details?id=com.jax.roscontroller

My app was finally approved on the play store. I have been using this app to control my quadruped running ROS2 on a Pi. This release has the fundamentals working. I will be adding additional features soon.

2 posts - 2 participants

Read full topic

by Tdp378 on April 29, 2026 05:12 PM

RobotCAD 10.8.1 new AI tool - Generate robot from primitives by text

Improvements:

  1. Reforged the Explode View tool: it now supports robot link position states (with memory) and an explode offset slider.

Features:

  1. New AI Tool: “Generate primitive robot by text”.
    Creates a robot from primitives based on your text description. Supports various LLM providers.

  2. New Tool: “Manage Link Display”.
    Toggles the display of the Visual, Collision, and Real elements of robot links. It has a “Set Placement Mode” that quickly enables visibility of the Real elements and disables the others.

Added a Sponsorship Block to the Settings Window. It can feature your company or your ads.

Explode View tool

AI Generator of Primitive Robots tool

Sponsorship Block

1 post - 1 participant

Read full topic

by fenixionsoul on April 29, 2026 12:09 PM

micro-ROS public infrastructure transition

As part of an OSRA-led clarification of governance boundaries within the ROS ecosystem, the public infrastructure of micro-ROS is transitioning to the Vulcanexus ecosystem.

This change is limited to public infrastructure and hosting. The micro-ROS project itself, its goals, roadmap, APIs, and technical direction remain unchanged. micro-ROS continues to be fully aligned with ROS 2 and supports standard ROS 2 workflows.

The new canonical website for micro-ROS is:
https://micro.vulcanexus.org

During the transition period, micro.ros.org will display a notice page indicating the new location of the project and guiding users to update bookmarks and references.

We recommend updating any existing links, documentation, or automation to point to the new domain.

Further updates will be shared as the transition progresses.

1 post - 1 participant

Read full topic

by Jaime_Martin_Losa on April 29, 2026 05:41 AM

April 28, 2026
Participants wanted for a survey on tooling and AI use in the ROS community!

Do you have opinions on the available ROS tooling? Are you using AI in your ROS development workflow? Or maybe you refuse to use AI and want to tell us why?

We want to hear from you!

We are a group of software engineering researchers at Carnegie Mellon University, VORTEX Collab, and the University of Lisbon investigating how ROS developers find and use information, what tools they rely on across different development tasks, and how AI-powered tools fit into the development workflow.

We are conducting a research survey to better understand the information needs, tooling gaps, and the role of AI in the ROS development process. This survey is estimated to take ~20 minutes to complete.

The research survey is open to ROS developers who are at least 18 years old and have at least one year of experience. If you are interested in sharing your experiences, please visit the SURVEY LINK to complete the survey.

Responses are anonymous and will be used solely for research purposes. This research survey is part of a study (STUDY2026_00000158) conducted by Claire Le Goues and Christopher Timperley at Carnegie Mellon University. If you have any questions about the study, please contact Andrea Miller (PhD student) at andreami@andrew.cmu.edu.

1 post - 1 participant

Read full topic

by Andrea_Miller on April 28, 2026 04:21 PM

Custom Capabilities in Transitive Robotics - Again | Cloud Robotics WG Meeting 2026-05-04

Please come and join us for this coming meeting on Mon, May 4, 2026, 4:00–5:00 PM UTC, where we plan to continue our Transitive Robotics tryout by trying one of the more advanced features: writing and deploying a custom capability. This feature allows customers to write their own custom code and deploy it to their robots alongside the features available directly from Transitive Robotics.

We did attempt this tryout last session (hence why the title might be familiar!), but as I used an unsupported system for setting up the development environment, most of the session was spent on the initial setup. Hence, we’re repeating the session using a supported operating system. If you’re interested in watching the meeting anyway, it is available on YouTube.

The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

2 posts - 2 participants

Read full topic

by mikelikesrobots on April 28, 2026 09:45 AM

April 27, 2026
Request for testing on pyspacemouse

Do you use a SpaceMouse to control your robot? Then you might depend on `pyspacemouse`. If so, please test this merge request and let me know if it has any effect on your usage!

1 post - 1 participant

Read full topic

by PeterMitrano on April 27, 2026 07:10 PM

[ACM Computing Surveys] ROS 2 in a Nutshell: A Survey

I am delighted to share that our manuscript, “ROS 2 in a Nutshell: A Survey,” has been officially accepted for publication in ACM Computing Surveys — one of the leading journals for high-impact review articles in computer science.

With a high impact factor and a top ranking among journals in Computer Science Theory & Methods, this journal is recognized among the most prestigious venues for high-impact survey research.

This work is the result of a valuable collaboration between researchers from the RIOTU Lab at Prince Sultan University and the AlfaisalX Center at Alfaisal University.
Co-authors from both institutions contributed more than two years of consistent work.

Our paper presents one of the most comprehensive and systematic reviews of ROS 2, covering its architecture, ecosystem, advances, challenges, and future directions in modern robotics.

:bar_chart: Survey Highlights
:star: 8,033 papers surveyed
:star: 960 ROS 2 publications analyzed
:star: 176 community packages reviewed
:star: 2009–2025 research timeline covered

The survey explores:
:star: Evolution from ROS 1 to ROS 2
:star: Middleware architecture and DDS
:star: Real-time systems and hardware acceleration
:star: Security and safety
:star: Multi-robot and distributed robotics
:star: Simulators, frameworks, and open-source ecosystems
:star: Applications in autonomous vehicles, healthcare, aerospace, logistics, agriculture, and public safety

:globe_with_meridians: Companion website and open-access database:

We also provide an open-access companion database for the ROS research community.

I sincerely thank my co-authors for their perseverance, and the editors and reviewers for their valuable feedback and support throughout this journey.

Excited to see this contribution help researchers, students, and engineers advance the future of robotics.

1 post - 1 participant

Read full topic

by Anis_Koubaa on April 27, 2026 05:51 PM

Feel like TurtleBot4_Navigation is a "House of Cards" for my robot

My Raspberry Pi 5 powered TurtleBot4 (clone) TB5-WaLI has been “alive” for 11108 hours since January 9, 2025.

It has undocked, navigated around my home 6226 meters (almost 4 miles), and redocked by itself 2137 times (well, I did have to help it 5 times after safety shutdowns).

It has built a good map of the house, and with LIDAR seems to never lose localization anywhere in the map.

I have found a set of Turtlebot4_navigation parameters that gives 90% reliable navigation to 10 carefully chosen goals in my home. But don’t try to check the battery_state from the command line, or write a node that subscribes to BT Log Events to collect statistics, and don’t expect reliable navigation in complex, tight areas of the house (even when there appears to be a very clear path available).

My “Raspberry Pi home/educational robot” philosophy has always been, “I’m not in a hurry, so my robot should be able to do anything (that fits in 8GB of memory), albeit the bot may need to go very slow or think for a while before acting or answering.”

My hope for ROS TurtleBot4_navigation has been along this same line - WaLI may need to go slow, or even stop to rethink a plan, but nothing short of a localization failure should prevent him from navigating.

I understand there are several path planners, several plan critics, and several recovery planners, each with a bevy of parameters to tweak, but for the life of me I cannot find a set that provides robust, reliable navigation in my home.

From the start my hope was that there are some nav2 parameters that would make nav2 robust and reliable.

Many times navigation decides to fail because WaLI happens to be in a tight spot, and it seems like it doesn’t wait for recoveries. I see “spin failed” but the bot never rotated, or “Spin 1.57”, which is far more than needed.

A simple DDS discovery server query can cause turtlebot_navigation to fail. Shouldn’t such “resource starvation” events be handled by stopping the bot or pausing the planning?

Introducing anything external that monitors the navigation BT Events, or my node collecting statistics during navigation, should not kill everything. If the control loops can only run at 10 Hz, can’t all the nodes slow their expectations? Can’t the bot crawl slowly enough, or stop while it is thinking?
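The behavior the post asks for, nodes slowing their expectations under load, can at least be approximated on the monitoring side by rate-limiting the statistics callback so it never competes with the control loop for CPU. A minimal, ROS-free sketch of such a throttle (all names here are illustrative, not part of Nav2 or the TurtleBot 4 stack):

```python
import time

class ThrottledCallback:
    """Wrap a callback so it runs at most once per `min_period` seconds.

    Extra invocations are dropped (and counted), keeping the monitoring
    subscriber cheap even when the topic is very chatty.
    """

    def __init__(self, callback, min_period):
        self.callback = callback
        self.min_period = min_period
        self._last = float("-inf")
        self.dropped = 0

    def __call__(self, msg):
        now = time.monotonic()
        if now - self._last >= self.min_period:
            self._last = now
            self.callback(msg)
        else:
            self.dropped += 1  # skipped messages, kept for diagnostics

# Example: collect at most ~10 samples per second from a simulated burst
# of 100 behavior-tree log events arriving at roughly 1 kHz.
samples = []
on_bt_event = ThrottledCallback(samples.append, min_period=0.1)

for i in range(100):
    on_bt_event(i)
    time.sleep(0.001)

print(f"kept {len(samples)}, dropped {on_bt_event.dropped}")
```

The same idea applies to a real subscription: the wrapped callback does the expensive work only occasionally, so the statistics node stays lightweight regardless of the publish rate.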

For all the flexibility of Nav2 and the infinite parameterization, suite of planners and critics, and adaptable sensor inputs, TurtleBot4_navigation just seems to be a “house of cards” for me.

ROS 2 Jazzy, Turtlebot_navigation (Derivative of Nav2), Raspberry Pi 5 8GB, Ubuntu 24.04
(No temp or voltage throttling ever, Idle CPU 35%, navigating CPU 75%)

1 post - 1 participant

Read full topic

by RobotDreams on April 27, 2026 01:07 AM

April 25, 2026
What are you using for containerization in ROS deployments?

I saw this post recently, arguing that Docker is not the right tool for robotics. It sparked a good discussion and got me thinking.

I am curious to know what people are actually using in real-world deployments and for over-the-air updates. Docker, Podman, Snap, Yocto, something else entirely? And does it change between dev and production?

There is a nice blog on this as well: https://blog.robotair.io/best-way-to-ship-your-ros-app-a53927186c35
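For reference, the Docker end of that spectrum often looks something like the following. This is a generic sketch only; the ROS distro, package name, and launch file are placeholders, not taken from any of the linked posts:

```dockerfile
# Start from an official ROS 2 base image (Jazzy shown as an example).
FROM ros:jazzy-ros-base

# Copy the application workspace into the image and build it with colcon.
COPY ./src /ws/src
WORKDIR /ws
RUN . /opt/ros/jazzy/setup.sh && \
    apt-get update && \
    rosdep update && \
    rosdep install --from-paths src -y --ignore-src && \
    colcon build && \
    rm -rf /var/lib/apt/lists/*

# Source the overlay and start a (placeholder) launch file on container start.
CMD ["bash", "-c", ". /ws/install/setup.sh && ros2 launch my_robot_pkg bringup.launch.py"]
```

The trade-offs the linked post debates (host networking for DDS, device access, image size, update granularity) all sit on top of a recipe like this, which is part of why alternatives such as Snap or Yocto come up for production fleets.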

I am still forming my own view on this, and would really like to know what people are using.

12 posts - 8 participants

Read full topic

by Sakshay_Mahna on April 25, 2026 01:02 PM


Powered by the awesome: Planet