April 30, 2026
ROS Lyrical Luth Beta and Call for Testing

We’re in the “beta” phase of development for ROS 2 Lyrical Luth! We have binary packages available for Ubuntu Resolute and RHEL 10, and rosdistro is open for newly released packages for Lyrical.

Testing the Lyrical beta

We have published installation instructions for Lyrical here. However, binary packages for Lyrical are only available in the testing repository.

Follow the pre-release testing instructions to use the ros-testing repository so that you can install the ros-lyrical-* packages.
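As a hedged sketch of what that switch looks like (the authoritative steps are in the linked pre-release testing instructions; this operates on a local copy of the apt source line rather than the real /etc/apt/sources.list.d/ file, and the distro codename is assumed):

```shell
# Write a local stand-in for the ROS 2 apt source line, then point it
# at the ros2-testing repository by editing the repo path.
echo 'deb http://packages.ros.org/ros2/ubuntu resolute main' > ros2.list
sed -i 's|ros2/ubuntu|ros2-testing/ubuntu|' ros2.list
cat ros2.list   # now references packages.ros.org/ros2-testing/ubuntu
```

After the real source file is updated this way, an `apt update` makes the ros-lyrical-* packages installable.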

For those using the comprehensive archive for installation, download the archive from the artifacts of this pre-release tag. If you’re building from source, use the ros2.repos file at that release.

If you think you’ve discovered a bug, please:

  • check the open issues and PRs on the related repository, or
  • discuss the issue in this thread, or
  • open a new issue

We’ll triage the issue or PR and decide when and how it should be fixed in Lyrical.

Releasing your packages

If you are a package maintainer, please follow this guide to release your package in Lyrical.

Reminder: the tutorial party starts Thu, Apr 30, 2026, 7:00 AM UTC. Find all the details in this post.

Thanks!

The ROS 2 Team

1 post - 1 participant

Read full topic

by sloretz on April 30, 2026 09:36 PM

ROS 2 Lyrical Luth Release Illustration and Swag 🎸

Hi Everyone,

It is my pleasure to present you with the illustration for ROS 2 Lyrical Luth! This release illustration is the work of our illustrator Ryan Hungerford. Ryan is an illustrator based in the Bay Area and his AI (actual intelligence) makes for some wonderful illustrations.

Lyrical Swag Sale

We’re also happy to announce that the ROS 2 Lyrical Luth swag sale is now live. We’re now using Fourth Wall for all of our ROS swag sales, as the platform supports a wide array of items, lets us produce merch on demand, and ships almost anywhere on Earth. We’ve also created a permanent URL for ROS swag at store.openrobotics.org so it is easy to find. For this release we are offering eight different items for sale, including:

  • :t_shirt: Men’s, women’s, and kids’ shirts (we’re big fans of the tri-blend shirts)
  • :baby: Baby onesies
  • :coat: Hoodies and long sleeve shirts
  • :bed: Throw pillows
  • :hot_beverage: Mugs
  • :framed_picture: Decorative prints

All profits from the Lyrical swag sale go directly to the Open Source Robotics Foundation and help support the ROS, Gazebo, ROS Control, and Open-RMF projects. If you order today, you might just receive your swag by release day on May 22nd, 2026. If you would like to earn Lyrical swag by contributing to the project, please consider contributing to the Lyrical Test and Tutorial party that is currently taking place. The top twenty test contributors will be sent a code to our swag store.

4 posts - 1 participant

Read full topic

by Katherine_Scott on April 30, 2026 05:06 PM

Lyrical Luth Test and Tutorial Party Instructions


:tada: Update: the Lyrical board has been updated and is live! Docs are live too.

Hi Everyone,

As mentioned previously, we’re conducting a testing and tutorial party for the next ROS release, Lyrical Luth. If you happened to miss the kickoff of the Lyrical Luth Testing and Tutorial party this morning I have put together some written instructions that should let everyone participate, no matter their time zone. Here are the slides from the kickoff meeting.

TL;DR

We need your help to test the next ROS distro before its release on Friday, May 22nd. We’re asking the community to pick a particular system setup (a combination of host operating system, CPU architecture, RMW vendor, and build type: source, debian, or binary) and run through a set of ROS tutorials to make sure everything is working smoothly. Depending on the outcome of your tutorials, you can either close the ticket as completed or report the errors you found. If you can’t assign the ticket to yourself, leave a comment, and an admin will take care of it for you. Please do not sign up for more than one ticket at any given time. Everything you need to know about this process can be found in this GitHub repository.

As a thank you for your help, we’re planning to provide the top twenty contributors to the T&T party with their choice of either ROS Lyrical swag or OSRA membership. :warning: To be eligible to receive swag, you must register using this short Google Form so we can match email addresses to GitHub usernames and count the total tickets closed.:warning:

The testing and tutorial party will close on May 14, 2026, but we’re asking everyone to get started right away! We have 10,000 tickets to work through, and with Lyrical’s transition to C++20 we fully anticipate that we’ll need to update a few tutorials and fix some broken source builds.

Full Instructions

We’re planning to release ROS 2 Lyrical Luth on May 22, 2026, and we need the community’s help to make sure that we’ve thoroughly tested the distro on a variety of platforms before we make the final release. What do we mean by testing? Well, lots of things, but in the context of the testing and tutorial party, we are talking about the package-level ROS unit tests and anything else you want to test. What do we mean by tutorials? We also want to make sure all our docs.ros.org tutorials are in working order before the release.

The difficulty in testing a ROS release is that people have lots of different ways they use ROS, and we can’t possibly test all of those combinations. For the testing and tutorial party we have created what we call “a setup.” A setup is a combination of:

  • RMW vendor: FASTDDS, CYCLONEDDS, CONNEXTDDS or ZENOH
  • BuildType: binary, debian or source
  • OS: Ubuntu Resolute 26.04, Windows 11 and RHEL-10
  • Chipset: Amd64 or Arm64
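For the RMW axis of a setup, the vendor is selected at runtime with the standard ROS 2 environment variable; a quick sketch (identifiers are the usual RMW implementation names, and availability varies by OS and install type):

```shell
# Select the RMW vendor for this shell session; alternatives include
# rmw_fastrtps_cpp, rmw_connextdds, and rmw_zenoh_cpp.
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp
echo "$RMW_IMPLEMENTATION"
```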

If you already have a particular system setup that you work with, we suggest that you roll with that; otherwise, feel free to create a new system setup just for testing purposes. If you normally use Windows or RHEL (or binary compatible distributions to RHEL like Rocky Linux / Alma Linux) we would really appreciate your help as we don’t have a ton of internal resources to test these distributions.

Here are the steps for participating in the testing and tutorial party:

  • Before you begin, please fill out the Google form so we have your contact information. We can’t send you swag if we don’t have both your email address and your GitHub username.
  • First, go to the Tutorial Party GitHub repo (bit.ly/LyricalBoard) and read the README.md.
  • Figure out your setup!
    • Note your computer’s host operating system (either Ubuntu Resolute 26.04, Windows 11, or RHEL-10)
    • Note your chipset, either AMD64 or ARM64; if you don’t know, it is probably AMD64.
    • Note your installed RMW / DDS Vendor (this varies by host OS).
    • Figure out how you want to install the ROS Lyrical Luth Beta, your options are:
      1. Binaries
      2. Debian installation
      3. Source installation
  • Once you’ve got your “setup” figured out, use the bottom of the Lyrical Tutorial Party README file to filter by setup. There should be a set of tickets for your “setup”. Click on the links and review the available tickets. If you want to test something other than the available tickets, feel free to open a new ticket and describe exactly what you are testing.
  • Pick a single ticket for your setup and use the assignees option to assign it to yourself. If you can’t assign yourself, leave a comment and an admin will assign the ticket to you.
  • Take a look at the ticket and do as it asks in the “Links” section. For example, in this ticket, its links section points you to this tutorial. You should use your new ROS Lyrical Luth setup to run through that tutorial.
    • :warning: Please note that we’re using the Rolling documentation. If you see instructions to install a rolling package you’ll need to modify those to point to lyrical.
  • Once you complete the links section, things will either go smoothly or you will run into problems. Please report your results using the check boxes in the “Checks” section of your GitHub issue.
    1. If everything goes well, note as such in your ticket’s comment section. We ask that you attach your terminal’s output as a code block or as a gist file or include a screenshot. At this point feel free to close the ticket by clicking “close as completed.”
    2. If something went poorly please note it in your ticket’s comment section. Try to include a full stack trace or other debug output if possible. Please also run ros2 doctor --report and dump the output in your ticket.
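On the Rolling-vs-Lyrical documentation caveat above: the substitution is mechanical, as in this minimal sketch (the package name is just an example):

```shell
# Any "rolling" in an install command from the docs becomes "lyrical".
cmd='sudo apt install ros-rolling-demo-nodes-cpp'
echo "${cmd//rolling/lyrical}"   # -> sudo apt install ros-lyrical-demo-nodes-cpp
```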

The testing and tutorial party wraps up on May 14, 2026, but we’re asking everyone to get started early as we will need some lead time to address any bugs.

New for Lyrical: Pull Requests and Reviews

For 2026’s Test and Tutorial Party, we’re piloting a new feature: Lyrical Bug Fixes and PR Reviews. We’re looking for community members to help us out by lending us their eyes and expertise. For the T&T party we’re allowing participants to gain one extra point for each completed bug fix and PR review from the Lyrical board. We anticipate the majority of these issues will be documentation related so they should be fairly straightforward to fix.

For the T&T party we will provide you with one extra point if you do one of the following:

  • Create a pull request for a bug fix that addresses documented issues listed in the Lyrical issues board.
  • Review one of the pull requests or bug fixes listed in the Lyrical issues board.
  • We also have a limited number of general ROS pull request reviews that are also in scope for the T&T party. You can find those here (bit.ly/Lyrical-PR-Reviews)

To help us track and tabulate scores, you must fill out this short form every time you complete a review or a PR.

For Lyrical pull requests and bug fixes:

  • Ask to be assigned the issue in the Lyrical Tracking Board (bit.ly/LyricalTrackingBoard).
  • Write the relevant code or documentation. Remember to use the correct branch!
  • Build your solution and run the necessary tests and linters. This step is key to getting your PR approved.
  • Submit your PR. You must include a brief description of the issue and the issue number from the tracking board.
  • You must work with the reviewers to address all necessary feedback until the PR is accepted and merged.
  • If you use AI for your pull request you must report it in a manner consistent with OSRF policy.
  • Report your work using the form (bit.ly/LyricalPR).

For Lyrical reviews:

  • Request to be assigned to the pull request from the Lyrical Tracking Board (bit.ly/LyricalTrackingBoard). You can be assigned to one pull request at a time.
  • Once you are assigned to the pull request you must do the following:
    • Verify the fix by checking out the PR, building it, and replicating the bug conditions. For documentation this means checking out the PR and running make html.
    • Take one or more screenshots of the result.
  • Perform a realistic review of the pull request. There are two potential outcomes for your review:
    • You find no issues.
      • If that’s the case you must briefly list the steps you took to verify that the PR works and attach a screenshot.
    • You find an issue and request changes.
      • Changes should use the format: “Nit:” (minor change, usually a matter of preference, non-blocking), “Issue:” (major issue, blocking), “Suggestion:” (friendly suggestion, non-blocking), “Question:” (clarification, non-blocking), or “Chore:” (generally formatting issue, non-blocking)
      • For issues and chores the feedback in the pull request should include the following:
        • What specifically needs attention.
        • Why this change is necessary.
        • A suggestion on how to fix it.
      • You must follow up with the PR author to make sure their changes fix your issue. We suggest using the “suggest changes” feature liberally to expedite the process.
  • Generative AI should not be used for pull request reviews.
  • Report your work using the form (bit.ly/LyricalPR).

7 posts - 3 participants

Read full topic

by Katherine_Scott on April 30, 2026 04:37 PM

I'm done manually tuning DDS parameters!

Raise your hand if this sounds familiar:

  1. You just want better latency or higher throughput for your ROS2 app — but DDS throws hundreds of parameters at you and you have no idea where to start.
  2. So you end up spending hours (or days) manually tweaking, re-running benchmarks, and tweaking again… only to end up with “good enough” instead of actually good.

If that’s you, I have something that might help: GitHub - qualcomm-qrb-ros/ROS2-DDSConfig-Optimizer: An AI-driven tool that automatically tunes DDS configuration for ROS2 applications.

It’s an AI-driven tool that automatically tunes FastDDS configuration for your ROS 2 application. All you need to provide is:

  1. Your performance targets — latency, throughput, reliability, CPU/memory limits, whatever matters to you — in a simple XML file
  2. An initial DDS config as the baseline

That’s it. You’ll get back the best DDS configuration tailored to your application. :sparkles:
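To make the targets file concrete, here is a hypothetical sketch; the tool's actual schema lives in its repository, and every element and attribute name below is illustrative rather than the tool's real vocabulary:

```xml
<!-- Hypothetical performance-targets file (illustrative names only). -->
<performance_targets>
  <latency max_ms="5"/>
  <throughput min_mbps="100"/>
  <reliability policy="RELIABLE"/>
  <resources max_cpu_percent="60" max_memory_mb="512"/>
</performance_targets>
```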

Would love to hear your feedback, bug reports, or feature ideas — issues and PRs are very welcome!

3 posts - 2 participants

Read full topic

by NaSong on April 30, 2026 06:50 AM

April 29, 2026
New ROS controller app

https://play.google.com/store/apps/details?id=com.jax.roscontroller

My app was finally approved on the play store. I have been using this app to control my quadruped running ROS2 on a Pi. This release has the fundamentals working. I will be adding additional features soon.

2 posts - 2 participants

Read full topic

by Tdp378 on April 29, 2026 05:12 PM

RobotCAD 10.8.1 released!

Improvements:

  1. Reworked the Explode View tool: it now supports robot link position states (with memory) and an explode offset slider.

Features:

  1. New AI tool, “Generate primitive robot by text”.
    Creates a robot from primitives based on your description. Supports various LLM providers.

  2. New tool, “Manage Link Display”.
    Toggles the display of the Visual, Collision, and Real elements of robot links. It has a “Set Placement Mode” to quickly enable visibility of the Real elements and disable the others.

Added a Sponsorship Block to the Settings window. It can feature your company or your ads.

Explode View tool

AI Generator of Primitive Robots tool

Sponsorship Block

1 post - 1 participant

Read full topic

by fenixionsoul on April 29, 2026 12:09 PM

micro-ROS public infrastructure transition

As part of an OSRA-led clarification of governance boundaries within the ROS ecosystem, the public infrastructure of micro-ROS is transitioning to the Vulcanexus ecosystem.

This change is limited to public infrastructure and hosting. The micro-ROS project itself, its goals, roadmap, APIs, and technical direction remain unchanged. micro-ROS continues to be fully aligned with ROS 2 and supports standard ROS 2 workflows.

The new canonical website for micro-ROS is:
https://micro.vulcanexus.org

During the transition period, micro.ros.org will display a notice page indicating the new location of the project and guiding users to update bookmarks and references.

We recommend updating any existing links, documentation, or automation to point to the new domain.

Further updates will be shared as the transition progresses.

1 post - 1 participant

Read full topic

by Jaime_Martin_Losa on April 29, 2026 05:41 AM

April 28, 2026
Participants wanted for a survey on tooling and AI use in the ROS community!

Do you have opinions on the available ROS tooling? Are you using AI in your ROS development workflow? Or maybe you refuse to use AI and want to tell us why?

We want to hear from you!

We are a group of software engineering researchers at Carnegie Mellon University, VORTEX Collab, and the University of Lisbon investigating how ROS developers find and use information, what tools they rely on across different development tasks, and how AI-powered tools fit into the development workflow.

We are conducting a research survey to better understand the information needs, tooling gaps, and the role of AI in the ROS development process. This survey is estimated to take ~20 minutes to complete.

The research survey is open to ROS developers who are at least 18 years old and with at least one year of experience. If you are interested in sharing your experiences, please visit the SURVEY LINK to complete the survey.

Responses are anonymous and will be used solely for research purposes. This research survey is part of a study (STUDY2026_00000158) conducted by Claire Le Goues and Christopher Timperley at Carnegie Mellon University. If you have any questions about the study, please contact Andrea Miller (PhD student) at andreami@andrew.cmu.edu.

1 post - 1 participant

Read full topic

by Andrea_Miller on April 28, 2026 04:21 PM

Custom Capabilities in Transitive Robotics - Again | Cloud Robotics WG Meeting 2026-05-04

Please come and join us for this coming meeting on Mon, May 4, 2026, from 4:00 PM to 5:00 PM UTC, where we plan to continue our Transitive Robotics tryout by trying one of the more advanced features: writing and deploying a custom capability. This feature allows customers to write their own custom code and deploy it to their robots alongside the features available directly from Transitive Robotics.

We did attempt this tryout last session (hence the familiar title!), but as I used an unsupported system for setting up the development environment, most of the session was spent on the initial setup. Hence, we’re repeating the session using a supported operating system. If you’re interested in watching that meeting anyway, it is available on YouTube.

The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

2 posts - 2 participants

Read full topic

by mikelikesrobots on April 28, 2026 09:45 AM

April 27, 2026
Request for testing on pyspacemouse

Do you use a SpaceMouse to control your robot? Then you might depend on `pyspacemouse`. If so, please test this merge request and let me know if it has any effect on your usage!

1 post - 1 participant

Read full topic

by PeterMitrano on April 27, 2026 07:10 PM

[ACM Computing Surveys] ROS 2 in a Nutshell: A Survey

I am delighted to share that our manuscript, “ROS 2 in a Nutshell: A Survey,” has been officially accepted for publication in ACM Computing Surveys, one of the leading journals for high-impact review articles in computer science.

The journal is recognized among the most prestigious venues for high-impact survey research.

This work is the result of a valuable collaboration between researchers from the RIOTU Lab at Prince Sultan University and the AlfaisalX Center at Alfaisal University, and represents more than two years of consistent work by the co-authors.

Our paper presents one of the most comprehensive and systematic reviews of ROS 2, covering its architecture, ecosystem, advances, challenges, and future directions in modern robotics.

:bar_chart: Survey Highlights
:star: 8,033 papers surveyed
:star: 960 ROS 2 publications analyzed
:star: 176 community packages reviewed
:star: 2009–2025 research timeline covered

The survey explores:
:star: Evolution from ROS 1 to ROS 2
:star: Middleware architecture and DDS
:star: Real-time systems and hardware acceleration
:star: Security and safety
:star: Multi-robot and distributed robotics
:star: Simulators, frameworks, and open-source ecosystems
:star: Applications in autonomous vehicles, healthcare, aerospace, logistics, agriculture, and public safety

:globe_with_meridians: Companion website and open-access database:

We also provide an open-access companion database for the ROS research community.

I sincerely thank my co-authors for their perseverance, and the editors and reviewers for their valuable feedback and support throughout this journey.

Excited to see this contribution help researchers, students, and engineers advance the future of robotics.

1 post - 1 participant

Read full topic

by Anis_Koubaa on April 27, 2026 05:51 PM

Feel like TurtleBot4_Navigation is a "House of Cards" for my robot

My Raspberry Pi 5 powered TurtleBot4 (clone) TB5-WaLI has been “alive” for 11108 hours since January 9, 2025.

It has undocked, navigated around my home 6226 meters (almost 4 miles), and redocked by itself 2137 times (well I did have to help it 5 times after safety shutdowns).

It has built a good map of the house, and with LIDAR seems to never lose localization anywhere in the map.

I have found a set of Turtlebot4_navigation parameters that give 90% reliable navigation to 10 carefully chosen goals in my home. But don’t try to check the battery_state from the command line, or write a node that subscribes to BT log events to collect statistics, and don’t expect reliable navigation in complex, tight areas of the house (even when there appears to be a very clear path available).

My “Raspberry Pi home/educational robot” philosophy has always been, “I’m not in a hurry, so my robot should be able to do anything (that fits in 8GB of memory), albeit the bot may need to go very slow or think for a while before acting or answering.”

My hope for ROS TurtleBot4_navigation has been along this same line - WaLI may need to go slow, or even stop to rethink a plan, but nothing short of a localization failure should prevent him from navigating.

I understand there are several path planners, several plan critics, and several recovery planners, each with a bevy of parameters to tweak, but for the life of me I cannot find a set that provide robust, reliable navigation in my home.

From the start my hope was that there are some nav2 parameters that would make nav2 robust and reliable.

Many times navigation decides to fail because WaLI happens to be in a tight spot, and it seems like it doesn’t wait for recoveries. I see “spin failed” but the bot never rotated. Or “Spin 1.57”, which is far more rotation than needed.

A simple DDS discovery server query can cause turtlebot_navigation to fail. Shouldn’t such “resource starvation” events be handled by stopping the bot or pausing the planning?

Introducing anything external that monitors the navigation BT events, or running my statistics-collecting node during navigation, should not kill everything. If the control loops can only run at 10 Hz, can’t all the nodes slow their expectations? Can’t the bot crawl slowly enough, or stop while it is thinking?

For all the flexibility of Nav2 and the infinite parameterization, suite of planners and critics, and adaptable sensor inputs, TurtleBot4_navigation just seems to be a “house of cards” for me.

ROS 2 Jazzy, Turtlebot_navigation (Derivative of Nav2), Raspberry Pi 5 8GB, Ubuntu 24.04
(No temp or voltage throttling ever, Idle CPU 35%, navigating CPU 75%)

1 post - 1 participant

Read full topic

by RobotDreams on April 27, 2026 01:07 AM

April 25, 2026
What are you using for containerization in ROS deployments?

Saw this post recently, debating that Docker is not the right tool for robotics. It sparked a good discussion and got me thinking.

I am curious to know what people are actually using in real world deployments and over-the-air updates. Docker, Podman, Snap, Yocto, something else entirely? And does it change between dev and production?

There is a nice blog on this as well: https://blog.robotair.io/best-way-to-ship-your-ros-app-a53927186c35

But still forming my own view on this, would really like to know what people are using.

11 posts - 7 participants

Read full topic

by Sakshay_Mahna on April 25, 2026 01:02 PM

April 24, 2026
URDF Validation + Kinematic Analysis API with Browser-Based Preview (Tested on Robonaut 2)

I’ve been working on a developer-focused URDF validation and analysis API and recently added a browser-based 3D preview layer.

To stress test the pipeline, I used the NASA Robonaut 2 (R2) URDF:

URDF: https://github.com/gkjohnson/nasa-urdf-robots/blob/master/r2_description/robots/r2c6.urdf

Observations

  • URDF passed structural validation (100+ joints, consistent hierarchy)

  • Kinematic analysis produced expected:

    • DOF (~74, computed from non-fixed joints)

    • chain depth (~19)

    • multiple end effectors (hands, fingers, sensors)

  • Tree reconstruction yielded a valid single-root structure (no cycles/orphans)

  • Browser preview correctly reflects:

    • joint origins (xyz / rpy)

    • parent-child relationships

    • joint axis orientation
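The DOF computation noted above (counting non-fixed joints) can be sketched with plain shell tools on a toy URDF. This is a minimal hand-written example, not the Robonaut 2 file:

```shell
# Write a tiny three-joint URDF, then count non-fixed joints.
cat > demo.urdf <<'EOF'
<robot name="demo">
  <link name="base"/><link name="a"/><link name="b"/><link name="c"/>
  <joint name="j1" type="revolute"><parent link="base"/><child link="a"/></joint>
  <joint name="j2" type="fixed"><parent link="a"/><child link="b"/></joint>
  <joint name="j3" type="prismatic"><parent link="b"/><child link="c"/></joint>
</robot>
EOF
# Extract each <joint ...> opening tag, drop fixed joints, count: DOF.
grep -o '<joint [^>]*' demo.urdf | grep -vc 'type="fixed"'   # prints 2
```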

Pipeline

Upload → Validate → Analyze → Preview

  • No ROS environment required

  • No RViz

  • Validation on server (file deleted after response)

  • 3D is client-side only

Motivation

Most URDF tooling focuses on XML/schema validation.

In practice, failures appear later in:

  • TF tree inconsistencies

  • Incorrect joint transforms

  • Misconfigured DOF

  • Broken kinematic chains

This tool aims to surface those earlier via:

  • structural validation

  • kinematic introspection

  • immediate visual feedback

Live demo

https://roboinfra-dashboard.azurewebsites.net/validator

Feedback request

Would appreciate input from others working with URDF pipelines, especially around:

  • additional validation rules worth enforcing

  • kinematic analysis gaps

  • CI/CD use cases (pre-merge URDF checks, regression detection)

Live test Screen Shots Below,

1 post - 1 participant

Read full topic

by Robotic on April 24, 2026 06:45 PM

Community Group Proposal: Automated ROS Upgrade Assistant

Hello everyone,

I would like to start a community discussion around building an AI-assisted ROS migration and upgrade tool focused on ROS 2 LTS transitions such as:

  • Humble → Jazzy

  • Jazzy → Lyrical

As ROS 2 adoption grows, many developers face challenges upgrading between releases due to API changes, deprecated interfaces, package compatibility issues, launch system updates, Nav2 migration challenges, QoS behavior differences, build system adaptations, and CI/CD validation changes.

The goal is not only syntax conversion, but practical engineering support for production migrations.

I would like to explore:

  • Upgrade analysis tooling

  • Automated migration suggestions

  • Compatibility validation

  • Package dependency checking

  • Launch file migration assistance

  • Nav2-specific upgrade support
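As a toy illustration of what "upgrade analysis tooling" could look like at its simplest, here is a hedged sketch that greps a source file for one API pattern; a real tool would need a proper rule set, and the pattern shown (`declare_parameter` called without a default value, whose requirements tightened in post-Foxy releases) is only one example:

```shell
# Write a two-line sample source file, then flag declare_parameter
# calls that pass a name but no default value.
cat > node.cpp <<'EOF'
node->declare_parameter("rate", 10.0);
node->declare_parameter("frame_id");
EOF
grep -n 'declare_parameter("[^"]*")' node.cpp
```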

I am looking for contributors interested in ROS core development, Nav2, tooling, developer experience, and AI-assisted code migration.

If there is enough interest, we can organize a community group discussion and eventually work toward a formal proposal.

I would especially appreciate feedback from @smac , @mjcarroll, and @katherine_Scott, along with others working on ROS 2 migration workflows and developer tooling.

Would love to hear thoughts from the community.

Best,
Darshil

3 posts - 2 participants

Read full topic

by Darshil_Arora on April 24, 2026 04:28 PM

Analysis on FusionCore vs robot_localization

A few days ago I shared a benchmark where FusionCore beat robot_localization EKF on a single NCLT sequence. Fair enough… people called out that one sequence can easily be cherry-picked. Someone also mentioned that the particular sequence I used is known to be rough for GPS-based filters. Others asked if RL was just badly tuned, or how FusionCore could outperform it that much if both are just nonlinear Kalman filters… etc


All good questions.

So I went back and ran six sequences across different weather conditions. Same config for everything. No parameter tweaks between runs. The config is in fusioncore_datasets/config/nclt_fusioncore.yaml, committed along with the results so anyone can check.


FusionCore wins 5 of 6. RL-UKF diverged with NaN on all six.


Now, the obvious question: what happened with November 2012? That’s the one where RL wins.

That sequence has sustained GPS degradation… this isn’t just occasional noise. The NCLT authors themselves mention elevated GPS noise in that session. Both filters are seeing the exact same data, so the difference really comes down to how they handle it.

Here’s what’s going on:

FusionCore has a gating mechanism. When GPS looks bad, it rejects those measurements. That’s usually a good thing… but in this case, the degradation is continuous. So FusionCore rejects a few GPS fixes → the state drifts → the next GPS measurement looks even worse relative to that drifted state → it gets rejected again → and this repeats. It kind of traps itself rejecting the very data it needs to recover.

RL, on the other hand, just accepts every GPS update. No gating, no rejection. That means it gets pulled around by noisy GPS, but it also re-anchors itself as soon as the signal improves. So in this specific case, that “always accept” behavior actually helps.

After discussing this with some hardware folks here in Kingston, we decided to add something we’re calling an inertial coast mode. The idea is simple:

  • If FusionCore sees N consecutive GPS rejections, it increases the position process noise (Q)

  • That causes the covariance (P) to grow

  • As P grows, the Mahalanobis gate naturally becomes less strict

  • Eventually, incoming GPS measurements are no longer “too far” and get accepted again

  • Once GPS is accepted, Q resets back to normal

Basically, instead of getting stuck rejecting everything, the filter “loosens up” over time and lets itself recover.
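The coast-mode loop described above can be sketched numerically. This is illustrative only: FusionCore's real gate is a Mahalanobis distance over the covariance, not the integer stand-in used here.

```shell
# Toy coast mode: after N consecutive rejections, inflate Q so the
# gate loosens until a measurement is accepted again.
Q=1; rejections=0; N=3
for innovation in 9 9 9 9 9; do
  gate=$((Q * 4))                     # gate grows as Q (and thus P) grows
  if [ "$innovation" -gt "$gate" ]; then
    rejections=$((rejections + 1))
    [ "$rejections" -ge "$N" ] && Q=$((Q * 2))   # coast mode: inflate process noise
  else
    result="accepted with Q=$Q"
    echo "$result"                    # prints: accepted with Q=4
    Q=1; rejections=0                 # reset once a fix is accepted
  fi
done
```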

On the November 2012 sequence, this drops the error from 61.4 m → 28.7 m. RL still wins, but the gap is much smaller now, and everything is documented in the repo.

If your robot drives through tunnels, underpasses, agricultural land, and/or urban canyons with brief GPS dropouts, FC’s gate is a strength… it doesn’t get corrupted by the bad fixes during the outage. If you have GPS that is consistently mediocre (cheap receiver, cheap module, always noisy but never totally wrong), RL’s accept-everything approach is probably safer at least until coast mode gets smarter?

If you’ve got ideas on improving this… especially around re-acquisition or better fallback behavior… I’m all ears. Suggestions, config tweaks, PRs… all welcome.

Reproducing a run is straightforward

git clone https://github.com/manankharwar/fusioncore.git
# Download NCLT sequence from http://robots.engin.umich.edu/nclt/
ros2 launch fusioncore_datasets nclt_benchmark.launch.py \
  data_dir:=/path/to/nclt/2012-01-08 \
  output_bag:=./bag
python3 tools/evaluate.py --gt ground_truth.tum \
  --fusioncore fusioncore.tum --rl rl_ekf.tum \
  --sequence 2012-01-08

Full pipeline in benchmarks/README.md. Results per sequence under benchmarks/nclt/*/results/BENCHMARK.md.

November 2012 is an open problem. Coast mode cuts the error by 53% but RL’s no-gate approach still wins under sustained GPS degradation. Fully closing the gap requires either a smarter re-acquisition strategy or a tunable fallback threshold. Pull requests are welcome.

If you’ve got a dataset you want me to try, just send it over (or drop a link), and I’ll run it and share the results.

FusionCore accepts nav_msgs/Odometry from any source including slam_toolbox, MOLA, ORB-SLAM3, and even VINS-Mono. Same interface as wheel odometry.

-> GitHub - manankharwar/fusioncore: ROS 2 sensor fusion SDK: UKF, 3D native, proper GNSS, zero manual tuning. Apache 2.0.

Happy Building!

6 posts - 3 participants

Read full topic

by manankharwar on April 24, 2026 02:36 PM

April 22, 2026
ROS (2) M Name Brainstorming

With ROS Lyrical Luth branching off Rolling this week, it’s an excellent time to start thinking about where we’ll be a year from now: in the midst of preparing the ROS M release! To get there, the first thing we need to do is pick a name.

Following tradition, the next ROS release name will be an adjective starting with M followed by a turtle-related word or name, also starting with M.

Here are the existing ROS 2 names.

  • Ardent Apalone
  • Bouncy Bolson
  • Crystal Clemmys
  • Dashing Diademata
  • Eloquent Elusor
  • Foxy Fitzroy
  • Galactic Geochelone
  • Humble Hawksbill
  • Iron Irwini
  • Jazzy Jalisco
  • Kilted Kaiju
  • Lyrical Luth

ROS 1 used the following names, which means they cannot be reused.

  • Boxturtle
  • C Turtle
  • Diamondback
  • Electric Emys
  • Fuerte
  • Groovy Galapagos
  • Hydro Medusa
  • Indigo Igloo
  • Jade Turtle
  • Kinetic Kame
  • Lunar Loggerhead
  • Melodic Morenia
  • Noetic Ninjemys

Here are the usual lists to help your namestorming.

For a blast from the mailing list past, here is the namestorming thread for ROS Melodic Morenia.

Please share your suggestions and comments. There are no rules to this part of the process so be creative!

48 posts - 41 participants

Read full topic

by gbiggs on April 22, 2026 11:40 PM

April 21, 2026
How to generate a perfect URDF for a cobot (robotic arm) for ROS 2 using SolidWorks, for gravity compensation mode with Cyclic Synchronous Torque mode?

We have a cobot. We are making a URDF for ROS 2 and for the real hardware, targeting gravity compensation mode and CST (Cyclic Synchronous Torque) mode over EtherCAT. The links and joints are arranged so that if link X houses a motor, the motor sits entirely inside link X, and its rotating output side is attached to the flange (half inside link X and half inside link Y).

The flange is fixed to link Y with screws, so it rotates with link Y; the flange’s other face is attached to the output (the rotating part of link X).

The motor has a strain wave gear (harmonic drive) attached to its rotor.

Problem: I want to know which parts of the motor should be counted with link X in its <inertial> tag, and likewise for link Y.

We are going to model these masses separately in SolidWorks and then generate the URDF from it. So, which components belong to which link? For example, do we fix the stator to link X in SolidWorks so its center of mass is included in that link’s <inertial> tag? The components in question:

  • rotor mass
  • strain wave gear input side mass
  • strain wave gear output side mass
  • stator mass

Please guide me on how to do this correctly, the way it is done in industry. Also, how much error in mass is acceptable?
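A common convention, though not the only valid one, is to lump into each link’s <inertial> everything that is mechanically fixed to that link’s body: stator and gear circular spline with link X; flange, gear output side, and link Y body with link Y. The rotor and wave generator spin at motor speed, so their reflected inertia is often handled in the transmission/actuator model rather than in either link’s inertia. A sketch with made-up numbers (compute the real mass/COM/inertia in SolidWorks):

```xml
<!-- Illustrative values only. -->
<link name="link_x">
  <inertial>
    <!-- link X body + motor stator + gear circular spline (all fixed to X) -->
    <origin xyz="0 0 0.05" rpy="0 0 0"/>
    <mass value="2.40"/>
    <inertia ixx="0.010" iyy="0.010" izz="0.008"
             ixy="0" ixz="0" iyz="0"/>
  </inertial>
</link>

<link name="link_y">
  <inertial>
    <!-- link Y body + output flange + gear output side (all fixed to Y) -->
    <origin xyz="0 0 0.12" rpy="0 0 0"/>
    <mass value="1.80"/>
    <inertia ixx="0.007" iyy="0.007" izz="0.005"
             ixy="0" ixz="0" iyz="0"/>
  </inertial>
</link>
```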

2 posts - 2 participants

Read full topic

by Zed on April 21, 2026 04:01 PM

April 20, 2026
:guitar: ROS 2 Lyrical Luth Testing Kicks Off on April 30th

As many of you are already aware, the ROS 2 Lyrical Luth release is just around the corner: Friday, May 22nd, to be exact (World Turtle Day falls on a Saturday this year)! We want this to be our best ROS 2 release yet, and to get there we need to make sure that we thoroughly test Lyrical Luth before it is released to the general public. We also want to make sure that the ROS documentation on docs.ros.org continues to be clear, concise, and correct. That’s where we need your help! We’re looking for community volunteers to join us for our Lyrical Luth Testing and Tutorial Party. If you are looking to start dipping your toes into contributing to the ROS project, this is a great place to start.

So, what is a Testing and Tutorial Party, you may ask? Well, it is a chance for the community to meet with our core team, systematically review all of the current ROS tutorials, and test the latest ROS release. Right now our ROS Boss, @sloretz, is working to generate early release binary and source packages for ROS 2 Lyrical Luth. On April 30th, we’ll release those binaries to the public and start the process of systematically testing them.

During the Testing and Tutorial Party, we’ll provide a GitHub repository with a long list of tests we would like to run on the Lyrical Luth beta. These tests will first ask developers to pick a particular release setup, and then run the test suite along with one or more of the existing ROS 2 tutorials on docs.ros.org. When we say setup, we mean a specific combination of RMW vendor (Zenoh / FastDDS / Cyclone DDS / Connext DDS), build type (binary / debian / source), host operating system (Ubuntu / RHEL / Windows / MacOS), and chip architecture (amd64 / aarch64). For each setup, we’ll perform a number of tests to validate our tutorials, core ROS functionality, and new features. With dozens of possible setup configurations, testing each and every one internally isn’t feasible, which is why we need your help!
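To put “dozens of possible setup configurations” in numbers, the raw cross product of the dimensions above is easy to enumerate (many combinations are not actually offered, e.g. not every RMW ships binaries on every OS, so the real matrix is smaller):

```python
from itertools import product

rmws = ["Zenoh", "FastDDS", "CycloneDDS", "ConnextDDS"]
builds = ["binary", "debian", "source"]
oses = ["Ubuntu", "RHEL", "Windows", "macOS"]
arches = ["amd64", "aarch64"]

# Every (RMW, build type, OS, architecture) combination.
setups = list(product(rmws, builds, oses, arches))
print(len(setups))  # 96 raw combinations before filtering unsupported ones
```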

During the tutorial party, participants will be asked to sign up for particular tests and report back the results. If you happen to find an issue or bug while participating, you’ll need to report it to us so it can get corrected before the Lyrical release.

We are planning to kick off the tutorial party with a virtual kickoff meeting on Thu, Apr 30, 2026 4:00 PM UTC. During this kickoff meeting, we’ll explain the whole Testing and Tutorial Party process. We’ll record the meeting and post instructions on Open Robotics Discourse for those who can’t make it. To help motivate participants, we’ll be giving away ROS Lyrical Luth swag to the testers who complete the most tests during the event. The testers with the most closed issues will receive a credit to our Fourth Wall shop to pick out some Lyrical swag.

Here are the key dates you’ll want to remember:

  • Thu, Apr 30, 2026 4:00 PM UTC Tutorial & Testing Party begins
  • Thu, May 14, 2026 7:00 AM UTC Tutorial & Testing Party ends
  • Fri, May 22, 2026 7:00 AM UTC ROS 2 Lyrical Luth released

We’ll add these events to the official ROS events calendar, but the big one that you won’t want to miss is the kickoff event on Thu, Apr 30, 2026 4:00 PM UTC. In the meantime, we would like your help spreading the word about the Testing and Tutorial Party.

Finally, if you can’t make it to the T&T Party but would like to help support the next ROS release, consider making a donation via the OSRF’s DonorBox account or joining the OSRA. Our open source contributors, OSRF donors, and OSRA members are the people making our ROS ecosystem possible! :heart:

4 posts - 2 participants

Read full topic

by Katherine_Scott on April 20, 2026 05:28 PM

I benchmarked my ROS 2 localization filter (FusionCore) against robot_localization on real-world data. Here's what happened

I ran FusionCore head-to-head against robot_localization (the standard ROS sensor fusion package) on the NCLT dataset from the University of Michigan… a real robot driving around a campus for 10 minutes. Mixed urban/suburban environment with tree cover, buildings, and open quads: the kind of GPS conditions where multipath is real, not a lab with clear sky view. Ground truth is RTK GPS, sub-10cm accuracy.

Equal comparison, no tricks: same raw IMU + wheel odometry + GPS fed to every filter simultaneously. No tuning advantage. This is strictly equal-config performance on identical sensor data.

The dashed line is RTK GPS ground truth. That’s where the robot actually was.

Left: robot_localization EKF. Right: FusionCore.

Accuracy over 600 s (Absolute Trajectory Error (ATE) RMSE; lower is better):

  • FusionCore: 5.5 m

  • robot_localization EKF: 23.4 m (4.2× worse)

The difference comes down to one thing: robot_localization trusts every GPS fix equally and uses fixed noise values you set manually in a config file. FusionCore continuously estimates IMU bias and adapts its noise model in real time… so it knows when a measurement doesn’t fit and how much to trust it.

FusionCore tracks position, velocity, orientation, plus gyro bias and accelerometer bias as live states. RL-EKF has no bias estimation; gyro drift compounds silently into heading error.
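To see why live bias states matter, a toy integration shows how a small constant gyro bias compounds linearly into heading error over a 600 s run (illustrative numbers only, not either filter’s internals):

```python
import numpy as np

dt = 0.01                  # 100 Hz IMU
bias = np.deg2rad(0.5)     # 0.5 deg/s constant gyro bias
t = np.arange(0, 600, dt)  # a 600 s run, as in the benchmark

true_rate = np.zeros_like(t)  # robot driving straight: zero yaw rate
measured = true_rate + bias   # gyro output includes the bias

heading_no_est = np.cumsum(measured) * dt           # bias ignored
heading_with_est = np.cumsum(measured - bias) * dt  # bias estimated away

# Without bias estimation the heading drifts ~300 deg over 10 minutes;
# with it, the drift from this term is zero.
print(np.rad2deg(heading_no_est[-1]))
print(np.rad2deg(heading_with_est[-1]))
```

Since position error grows with heading error times distance travelled, even a fraction of a degree per second of unestimated bias is enough to explain meters of trajectory drift.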

I also ran robot_localization’s UKF mode. It diverged numerically at t = 31 seconds: the covariance matrix hit NaN, and every output was invalid for the remaining 9 minutes. FusionCore ran stably for the full 600 seconds on the same data; it turns out to be numerically stable even at high IMU rates, which is why RL-UKF hit NaN at 100 Hz and FusionCore didn’t.

Dataset: NCLT (University of Michigan).

GitHub repo: https://github.com/manankharwar/fusioncore

ROS Discourse: https://discourse.ros.org/t/fusioncore-which-is-a-ros-2-jazzy-sensor-fusion-package-robot-localization-replacement

Currently testing on physical hardware. If you’d like to try it, the repo is open… raise an issue, open a PR, or just DM me. Happy to answer any questions… I respond to everything within 24 hours. Happy building!

6 posts - 3 participants

Read full topic

by manankharwar on April 20, 2026 01:08 PM

Oxide GNSS — a Rust-based ROS 2 driver for u-blox ZED-F9P with NTRIP and integrity monitoring

Hi everyone,

I’d like to announce the first release of “oxide_gnss”, a ROS 2 driver for u-blox ZED-F9P receivers.

Its focus is on providing a clean, simple way to get GNSS position, velocity and optional heading data from an F9P device with minimal effort, while also providing some safety integrity monitoring.

Built on ros2-rust (rclrs). MIT licensed.

Repo: GitHub - greenforge-labs/oxide_gnss: A Rust-based ROS2 GNSS driver for u-blox ZED-F9P devices with integrated NTRIP client

Highlights:

  • Mode-based config for standalone / rover (NTRIP or radio) / moving base / moving-base-rover / static base setups — no need to hand-edit UBX CFG-VAL keys.
  • Integrated NTRIP client (including VRS via GGA uplink).
  • Optional safety integrity monitoring: protection levels (NAV-PL), jamming/spoofing detection (SEC-SIG), antenna status, etc., aggregated into a ~/integrity topic and a simple ~/operational go/no-go Bool.
  • CI against Humble / Jazzy / Kilted on amd64 and arm64.
  • Includes a small admin CLI (oxide_gnss_assign_serial) for setting the F9P USB device serial string (useful when operating in moving base + rover mode).

A minimal rover with NTRIP config looks like:

mode: rover_ntrip

features:
  high_precision: true
  integrity: true

device:
  port: "/dev/gnss_f9p_rover"
  baud_rate: 460800
  frame: ENU
  navigation:
    rate_hz: 5

ntrip:
  host: "ntrip.data.gnss.ga.gov.au"
  port: 2101
  mountpoint: "SFLD00AUS0"
  username: "${NTRIP_USERNAME}"
  password: "${NTRIP_PASSWORD}"
  send_gga: true

Launch it and the node logs a startup summary of the active configuration.

Released as-is from internal automation use — plans for continued feature development are limited, but bug-fix PRs and forks are welcome.

Feedback / issues / PRs: Issues · greenforge-labs/oxide_gnss · GitHub

1 post - 1 participant

Read full topic

by geoffs on April 20, 2026 07:22 AM

Colorful ROS2 Command Line!

Hi everyone,

I am adding color support to ros2cli. Currently it is optional and controlled via ROS_COLORIZED_OUTPUT=1. Do you like it?

GitHub PR: https://github.com/ros2/ros2cli/pull/1223
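The actual implementation lives in the ros2cli PR above; as a generic illustration of opt-in ANSI coloring gated by an environment variable (the function name and color code here are mine, not from the PR):

```python
import os
import sys

def colorize(text: str, ansi_code: str = "36") -> str:
    """Wrap text in an ANSI color only when the user opted in.

    Mirrors the idea of an opt-in env var like ROS_COLORIZED_OUTPUT=1;
    also skips coloring when stdout is not a terminal (e.g. piped output).
    """
    if os.environ.get("ROS_COLORIZED_OUTPUT") == "1" and sys.stdout.isatty():
        return f"\033[{ansi_code}m{text}\033[0m"
    return text

print(colorize("/chatter"))  # colored only when opted in on a TTY
```

Gating on both the env var and `isatty()` keeps scripts that parse `ros2` output from having to strip escape codes.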

1 post - 1 participant

Read full topic

by penwang on April 20, 2026 02:24 AM

April 19, 2026
URDF Validator (catches real robot failures, privacy-first, with xacro support)

Hi all,

I built a URDF validator aimed at catching real-world issues in robot descriptions, not just syntax errors.

Why this matters

Many URDF tools will accept files that still fail later in simulation, motion planning, or TF. The goal here is to catch those problems before runtime.

What it does

  • Validates URDF structure and semantics

  • Detects broken links, joints, and invalid references

  • Flags issues seen in real robot models

  • Supports .xacro (with guided upgrade hints)

Proof (real-world failures)

  • Valkyrie: leftover xacro artifacts → correctly flagged

  • Fetch: invalid XML prefix → caught immediately

These are real bugs in widely used robot models — not synthetic test cases.
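As an illustration of the cheapest check in this family, a joint referencing an undefined link, the stdlib XML parser is enough (a toy sketch; this is not RoboInfra’s implementation):

```python
import xml.etree.ElementTree as ET

def broken_link_refs(urdf_xml: str) -> list[str]:
    """Return names of links referenced by joints but never defined."""
    root = ET.fromstring(urdf_xml)
    defined = {link.get("name") for link in root.findall("link")}
    missing = []
    for joint in root.findall("joint"):
        for tag in ("parent", "child"):
            el = joint.find(tag)
            if el is not None and el.get("link") not in defined:
                missing.append(el.get("link"))
    return missing

urdf = """
<robot name="demo">
  <link name="base_link"/>
  <joint name="arm_joint" type="revolute">
    <parent link="base_link"/>
    <child link="arm_link"/>  <!-- never defined below -->
  </joint>
</robot>
"""
print(broken_link_refs(urdf))  # ['arm_link']
```

A dangling reference like this parses fine as XML but fails later when the kinematic tree is built, which is exactly the class of “valid file, broken robot” problem described above.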

Quick check (no login required)

Free URDF Validator RoboInfra

API example

curl -X POST "https://roboinfra-api.azurewebsites.net/api/urdf/validate?include_urdf=true" \
  -H "x-api-key: YOUR_KEY" \
  -F "file=@robot.urdf"

Privacy-first

  • Files are not stored

  • No training on user data

  • Stateless validation

Extras

  • Clear upgrade hints for .xacro

  • Human-readable error explanations (not just parser output)

I’d really appreciate feedback, especially edge cases or robot models that break it.

1 post - 1 participant

Read full topic

by Robotic on April 19, 2026 08:30 PM

Upcoming Lyrical Feature Freeze

Hi all,

On Tue, Apr 21, 2026 6:59 AM UTC, we will freeze all core ROS 2 packages to prepare for the upcoming Lyrical Luth release on Fri, May 22, 2026 7:00 AM UTC.

Once this freeze takes effect, we will not accept new features to the core packages until Lyrical branches from ROS Rolling. This restriction applies to the packages and vendor packages appearing in the ros2.repos file: ros2/ros2.repos at rolling · ros2/ros2 · GitHub

We still welcome bug fixes after the freeze date.

Find more information on the Lyrical Luth release timeline here: ROS 2 Lyrical Luth (codename ‘lyrical’; May, 2026).

2 posts - 2 participants

Read full topic

by mjcarroll on April 19, 2026 11:27 AM

April 18, 2026
Unibotics Robot Programming Challenge, April 2026

After gauging interest in a robot programming tournament, we at the JdeRobot org are launching the Unibotics Robot Programming Challenge :pushpin:

  • Online asynchronous competition (from April 15th to April 30th)
  • Python language
  • Robot programming from your web browser
  • Free, just for fun
  • Based on ROS, on Gazebo simulator and Unibotics web platform

The April 2026 challenge is to program a Formula 1 car to follow the red line drawn on the floor along several race circuits :racing_car: . The car is endowed with a front camera, a steering wheel (W), and an accelerator pedal (V). You can use either the available SimpleAPI or the ROS topics directly for camera images and robot control. The car has Ackermann dynamics, and Montmeló is the test circuit, although your solution should also work in other circuits such as Montreal or SimpleCircuit.

[Unibotics] RoboticsAcademy - Follow Line
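The control loop itself can be tiny: find the red line’s horizontal offset in the image and steer proportionally toward it. A sketch on a synthetic frame, using plain numpy in place of the SimpleAPI (the function name, thresholds, and gain are illustrative, not Unibotics API):

```python
import numpy as np

def steering_from_image(rgb, kp=0.005):
    """Proportional steering from the red line's centroid in the lower rows.

    rgb : (H, W, 3) uint8 camera image. Returns a steering command W,
    positive meaning 'turn left' under the sign convention assumed here.
    """
    h, w, _ = rgb.shape
    roi = rgb[int(0.7 * h):, :, :]          # look only near the car
    r, g, b = roi[..., 0], roi[..., 1], roi[..., 2]
    mask = (r > 150) & (g < 80) & (b < 80)  # crude 'red' threshold
    if not mask.any():
        return 0.0                          # line lost: go straight
    cx = np.mean(np.nonzero(mask)[1])       # centroid column of the line
    error = (w / 2.0) - cx                  # + when line is left of center
    return kp * error

# Synthetic frame: a red stripe ~100 px left of center -> positive steering.
img = np.zeros((480, 640, 3), dtype=np.uint8)
img[:, 220:230, 0] = 255
print(steering_from_image(img))  # ~0.48 (steer toward the line)
```

A real entry would add speed control and a derivative term to damp oscillation, but this captures the perception-to-actuation loop the challenge asks for.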

The “rules” are available here. All interactions will be held on the Unibotics forum.

Cheers and good luck! :slight_smile:

JoseMaria

1 post - 1 participant

Read full topic

by jmplaza on April 18, 2026 04:51 PM


Powered by the awesome: Planet