February 11, 2026
🔥 The ROS2 competition is your ticket to the world of robotics, where a job, a university, and a real robot await!

Do you want to do more than just “play” with robots—do you want to become a sought-after specialist, invited to join teams, companies, and labs?
Then the ROS2 competition is your perfect start. Here’s why:
:white_check_mark: It’s not just a game—it’s a real skill that employers value.
ROS2 is the industry standard in robotics. It’s required for job openings from Sber’s Robotics Center to Boston Dynamics.
By participating in the competition, you’re not learning from a textbook—you’re solving a real problem: a robot must navigate autonomously, recognize objects, and manipulate them, just like in service and industrial robotics.
:money_bag: Everything is within reach, even for a student
You can build a robot for just 100–300 dollars.
Divide it among a team—it’s less than a gym membership.
And finding a sponsor for that amount? It’s easy—especially when you present a real project, not just an idea.
:hammer_and_wrench: You’ll have a working robot—not a toy, but a tool for future projects.
After the competition, you’ll have a fully functional autonomous robot capable of:

  • Navigating the room
  • Grasping objects
  • Working in the real world

This is your personal portfolio project, which will open doors to internships and research groups.
:bullseye: Everything is already there—you’re not starting from scratch.
Playing field? Just order a Charuco banner from any advertising agency—matte, inexpensive, ready to go.
Robot? There’s a baseline configuration on GitHub—a basic robot you can start with.
Training? A free, public course on ROS2 on Stepik with step-by-step instructions—everything from installation to running algorithms.
:brain: You’ll learn more than just ROS2—you’ll master the entire engineering stack.

  • Linux
  • Configuring local networks
  • C++, Python
  • Computer vision
  • Path planning
  • Manipulator control

And much more—all in one project!
:graduation_cap: And starting in 2027, winning the ROS2 competition will earn you extra points when applying to universities and graduate schools!
You’re not just participating—you’re investing in your future.
Today—training. Tomorrow—an advantage over other applicants.
:joystick: And yes—it’s fun!
You’ll work in a team, solve puzzles, see how your code brings hardware to life…
It’s adrenaline, excitement, and pleasure—everything that made us fall in love with robotics in the first place. :rocket: Don’t wait for the “perfect moment.” The perfect moment is now.
Register for the ROS2 competition—and in one month, you won’t be dreaming about a career in robotics…
You’ll already be there.
:backhand_index_pointing_right: Click “Register”—while others are thinking, you’re already building the future.
:man_dancing: Get inspired by this song!

An article explaining the competition in detail.

Competition regulations and rules.

Video explanation of the competition regulations and rules.

We invite teams from all countries; the competition’s leading organizers can provide English translation.

The competition is being held as part of the ROS Meetup conference on robotics and artificial intelligence, March 21–22, 2026, in Moscow.

We’d like to host an international ROS2 competition in Moscow every year. If you’d like to help us with this, please let us know.

1 post - 1 participant

Read full topic

by amburkoff on February 11, 2026 10:17 AM

🚀 Invitation to the scientific section at the ROS Meetup 2026 conference in Moscow. Remote participation is possible

CFP-ROSRM2026 - Eng.pdf (585.6 KB)

The Robot Operating System Research Meetup will be held for the first time in 2026 as part of the scientific track of the annual ROS Meetup. This is an international scientific forum dedicated to the discussion of artificial intelligence methods in robotics.

:loudspeaker: Attention, scientists and researchers!

:globe_showing_europe_africa: Registration for scientific papers is now OPEN! Papers can be submitted to the ROSRM 2026 scientific section, which is dedicated to artificial intelligence methods in robotics and will be held at the ROS Meetup conference on March 20-22, 2026, in Moscow. Don’t miss your chance to present your work and be published in a Scopus-listed journal!

:mechanical_arm: Article Topics: Intelligent robotics, robotic algorithms, deep learning, reinforcement learning, agents in robotics, computer vision, navigation and control.

:memo: Submission Procedure:

  • Submit your abstract or full article before the ROS Meetup conference.
  • If your abstract is accepted, you will present your paper in the scientific section at the conference and receive feedback and recommendations for improvement. Address: Moscow Institute of Physics and Technology, Dolgoprudny, Russia, or remotely via videoconference.
  • After the conference, revise your article based on the comments received and resubmit it.
  • Your article will then be published in the journal Optical Memory and Neural Networks (indexed in Scopus and WoS, Q3, and included in the White List of Journals)!

:link: Article Submission:
Articles must be submitted through the OpenReview service. Details on preparing materials and the registration fee will be published on the conference website: rosmeetup.ru/science-eng

:books: Accepted articles will be published in the journal Optical Memory and Neural Networks, indexed in Scopus, ensuring your work is visible internationally! Publications also count toward master’s and doctoral programs.

:red_question_mark: Any questions? Ask Dmitry Yudin.

:fire: Don’t miss the chance to advance your scientific career and get published in an international journal! Submit your article and join this important scientific event!

:wrench: You can also provide a link to the source code of your ROS2 package (this is optional). This way, we support open source and the ROS philosophy of reusing software components across different robots!

IMPORTANT DATES
March 9, 2026 — Abstract submission deadline
March 16, 2026 — Program committee decision on paper acceptance
March 19, 2026 — Participant registration
March 21–22, 2026 — Conference
April 20, 2026 — Full article submission deadline
May 25, 2026 — Notification of article acceptance

:memo: Fill out the submission form on OpenReview. Right now, just the abstract is enough! :rocket::robot:

1 post - 1 participant

Read full topic

by amburkoff on February 11, 2026 09:53 AM

NVIDIA Isaac ROS 4.1 for Thor has arrived


NVIDIA Isaac ROS 4.1 for Thor is now live.

NVIDIA Isaac ROS 4.1 is now available. This open-source collection of accelerated ROS 2 packages and reference applications adds more flexibility for building and deploying on Jetson AGX Thor.

This release introduces a Docker-optional development and deployment workflow, with new Virtual Environment and Bare Metal modes that make it easier to integrate Isaac ROS packages into your existing setup.

We’ve also made several key updates across the stack. Isaac ROS Nvblox now supports improved dynamics with LiDAR and motion compensation, and Isaac ROS Visual SLAM adds support for RGB-D cameras. There’s a new 3D-printable multi-camera rig for mounting RealSense cameras directly to Jetson AGX Thor, along with canonical URDF poses to get you started quickly.

On the sim-to-real side, a new gear assembly tutorial walks through training a reach policy in simulation and deploying it to a UR10e arm. And for data movement, you can now send and receive point clouds using the CUDA with NITROS API.

Check out the full details :right_arrow: here and let us know what you build with 4.1 :rocket:

1 post - 1 participant

Read full topic

by HemalShahNV on February 11, 2026 03:00 AM

February 10, 2026
Henki ROS 2 Best Practices - For People and AI

Hi all!

We’ve decided to write down and publish some of our best practices for ROS 2 development at Henki Robotics! The list of best practices has been compiled from years of experience in developing ROS applications, and we wanted to make this advice freely available, as we believe that some of these simple tips can have a huge impact on a project’s architecture and maintainability.

In addition to having this advice available for developers, we built the repository so that the best practices can be directly integrated with coding agents to support modern AI-driven development. You can generate quality code automatically, or review your current project. We’ve tested this using Claude, and the difference in generated code is noticeable - we added examples in the repo to showcase the impact of these best practices.

More info in the repository. We’d love to hear which practices you find useful, and which ones we are still missing from our listing.

1 post - 1 participant

Read full topic

by jak on February 10, 2026 04:01 PM

February 09, 2026
Ouster Acquires StereoLabs Creating a World-Leading Physical AI Sensing and Perception Company

Ouster asserts its position in Physical AI by acquiring StereoLabs :tada:

https://investors.ouster.com/news-releases/news-release-details/ouster-acquires-stereolabs-creating-world-leading-physical-ai

2 posts - 2 participants

Read full topic

by Samahu on February 09, 2026 05:43 PM

February 08, 2026
Working prototype of native buffers / accelerated memory transport

Hello ROS community,

As promised in our previous Discourse post, we have uploaded our current version of the accelerated memory transport prototype to GitHub, and it is available for testing.

Note on code quality and demo readiness

At this stage, we would consider this to be an early preview. The code is somewhat rough around the edges and still needs a thorough cleanup and review. However, all core pieces should be in place, and can be shown working together.

The current demo is an integration test that connects a publisher and a subscriber through Zenoh, and exchanges messages using a demo backend for the native buffers. It will show a detailed trace of the steps being taken.

The test at this point is not a visual demo, and it does not exercise CUDA, Torch or any other more sophisticated flows. We are working on a more integrated demo in parallel, and expect to add those shortly.

Also note that the structure is currently a proposal, the details of which will be discussed in the Accelerated Memory Transport Working Group, so some of the concepts may still change over time.

Getting started

In order to get started, we recommend installing Pixi first for an isolated and reproducible environment:

curl -fsSL https://pixi.sh/install.sh | sh

Then, clone the ros2 meta repo that contains the links to all modified repositories:

git clone https://github.com/nvcyc/ros2.git && cd ros2

Lastly, run the following command to set up the environment, clone the sources, build, and run the primary test to showcase functionality:

pixi run test test_rcl_buffer test_1pub_1sub_demo_to_demo

You can run pixi task list to see the additional commands available, or simply run pixi shell if you prefer to use colcon directly.

Details on changes

Overview

The rolling-native-buffer branch adds a proof-of-concept native buffer feature to ROS 2, allowing uint8[] message fields (e.g., image data) to be backed by vendor-specific memory (CPU, GPU, etc.) instead of always using std::vector. A new rcl_buffer::Buffer<T> type replaces std::vector for these fields while remaining backward-compatible. Buffer backends are discovered at runtime via pluginlib, and the serialization and middleware layers are extended so that when a publisher and subscriber share a common non-CPU backend, data can be transferred via a lightweight descriptor rather than copying through CPU memory. When backends are incompatible, the system gracefully falls back to standard CPU serialization.

Per package changes

rcl_buffer (new)

Core Buffer<T> container class — a drop-in std::vector<T> replacement backed by a polymorphic BufferImplBase<T> with CpuBufferImpl<T> as the default.

rcl_buffer_backend (new)

Abstract BufferBackend plugin interface that vendors implement to provide custom memory backends (descriptor creation, serialization registration, endpoint lifecycle hooks).

rcl_buffer_backend_registry (new)

Singleton registry using pluginlib to discover and load BufferBackend plugins at runtime.

demo_buffer_backend, demo_buffer, demo_buffer_backend_msgs (new)

A reference demo backend plugin with its buffer implementation and descriptor message, used for testing the plugin system end-to-end.

test_rcl_buffer (new)

Integration tests verifying buffer transfer for both CPU and demo backends.

rosidl_generator_cpp (modified)

Code generator now emits rcl_buffer::Buffer<uint8_t> instead of std::vector<uint8_t> for uint8[] fields.

rosidl_runtime_cpp (modified)

Added trait specializations for Buffer<T> and a dependency on rcl_buffer.

rosidl_typesupport_fastrtps_cpp (modified)

Extended the type support callbacks struct with a has_buffer_fields flag and endpoint-aware serialize/deserialize function pointers.

Added buffer_serialization.hpp with global registries for backend descriptor operations and FastCDR serializers, plus template helpers for Buffer serialization.

Updated code generation templates to detect Buffer fields and emit endpoint-aware serialization code.

rmw_zenoh_cpp (modified)

Added buffer_backend_loader module to initialize/shutdown backends during RMW lifecycle.

Extended liveliness key-expressions to advertise each endpoint’s supported backends.

Added graph cache discovery callbacks so buffer-aware publishers and subscribers detect each other dynamically.

Buffer-aware publishers create per-subscriber Zenoh endpoints and check per-endpoint backend compatibility before serialization.

Buffer-aware subscribers create per-publisher Zenoh subscriptions and pass publisher endpoint info into deserialization for correct backend reconstruction.

Interpreting the log output

test_1pub_1sub_demo_to_demo produces detailed log output that highlights the steps taken, making it easy to follow what happens when a buffer flows through the native buffer infrastructure.

Below are the key points to watch out for, which also provide good starting points for more detailed exploration of the code.

Note that if you used the Pixi setup above, the code base will have compile_commands.json available everywhere, and code navigation is available seamlessly through your favorite LSP server.

Backend Initialization

Each ROS 2 process discovers and loads buffer backend plugins via pluginlib, then registers their FastCDR serializers.

Discovered 1 buffer backend plugin(s) / Loaded buffer backend plugin: demo
Demo buffer descriptor registered with FastCDR

Buffer-Aware Publisher Creation

The RMW detects at creation time that sensor_msgs::msg::Image contains Buffer fields, and registers a discovery callback to be notified when subscribers appear.

Creating publisher for topic '/test_image' ... has_buffer_fields: '1'
Registered subscriber discovery callback for publisher on topic: '/test_image'

Buffer-Aware Subscriber Creation

The subscription is created in buffer-aware mode — no Zenoh subscriber is created yet; it waits for publisher discovery to create per-publisher endpoints dynamically.

has_buffer_fields: 1, is_buffer_aware: 1
Initialized buffer-aware subscription ... (endpoints created dynamically)

Mutual Discovery

Both sides discover each other through liveliness key-expressions that include backends:demo:version=1.0, confirm backend compatibility, and create per-peer Zenoh endpoints.

Discovered endpoint supports 'demo' backend
Creating endpoint for key='...'

Buffer-Aware Publishing

The publisher serializes the buffer via the demo backend’s descriptor path instead of copying raw bytes, and routes to the per-subscriber endpoint.

Serializing buffer (backend: demo)
Descriptor created: size=192, data_hash=1406612371034480997

Buffer-Aware Deserialization

The subscriber uses endpoint-aware deserialization to reconstruct the buffer from the descriptor, restoring the demo backend implementation.

Deserialized backend_type: 'demo'
from_descriptor() called, size=192 elements, data_hash=1406612371034480997

Application-Level Validation

The subscriber confirms the data arrived through the demo backend path with correct content.

Received message using 'demo' backend - zero-copy path!
Image #1 validation: PASSED (backend: demo, size: 192)

What’s next

The code base will serve as a baseline for discussions in the Accelerated Memory Transport Working Group, where the overall concept as well as its details will be discussed and agreed upon.

In parallel, we are working on integrating fully featured CUDA and Torch backends into the system, which will allow for more visually appealing demos, as well as a blueprint for how more realistic vendor backends would be implemented.

rclpy support is another high priority item to integrate, ideally allowing for seamless tensor exchange between C++ and Python nodes.

Lastly, since Zenoh will not become the default middleware for the ROS2 Lyrical release, we will restart efforts to integrate the backend infrastructure into Fast DDS.

2 posts - 2 participants

Read full topic

by karsten-nvidia on February 08, 2026 08:04 PM

February 06, 2026
Turning Davos into a Robot City this July

Hi all!

I am helping to organize the Davos Tech Summit July 1st-4th this year: https://davostechsummit.com/

Rather than keeping it a typical trade fair or tech conference behind closed doors, we had the idea of turning Davos into a robot city. For this, we need the help of robotics companies to actually deploy their robots around the city, which we can help set up and coordinate. Some of the companies that have already confirmed they are joining the Robot City concept are:

  • Ascento with their security robot
  • Tethys robotics
  • Deep robotics
  • Loki Robotics
  • Astral

Humanoid robots:

  • Agibot - in
  • Droidup - task TBD
  • Galbot G1 - doing pick and place at a shop
  • Unitree - Various robots
  • Booster Robotics - K1
  • Limx Dynamics - Olli
  • Devanthro

There are also ongoing talks with companies that are open to bringing autonomous excavators, various inspection robots, a drone show, and setting up a location for people to pilot racing drones. We are also working on bringing autonomous cars and shuttles to drive around the city.

We were in Davos during WEF to promote this event and got some media coverage: Davos Tech Summit 2026 | Touching Intelligence

If you are interested in speaking at the event, please reach out! We are building the program during this month.

We are also looking into organizing a ROS Meetup during the event.
Let us know if you’d like to join.

Cheers!

2 posts - 2 participants

Read full topic

by jopequ on February 06, 2026 04:32 PM

ROS 2 Rust Meeting: February 2026

The next ROS 2 Rust Meeting will be Mon, Feb 9, 2026 2:00 PM UTC

The meeting room will be at https://meet.google.com/rxr-pvcv-hmu

In the unlikely event that the room needs to change, we will update this thread with the new info!

1 post - 1 participant

Read full topic

by jhdcs on February 06, 2026 02:46 PM

Canonical Observability Stack Tryout | Cloud Robotics WG Meeting 2026-02-11

Please come and join us for this coming meeting from Wed, Feb 11, 2026 4:00 PM UTC to 5:00 PM UTC, where we plan to deploy an example Canonical Observability Stack instance based on information from the tutorials and documentation.

Last meeting, the CRWG invited Guillaume Beuzeboc from Canonical to present on the Canonical Observability Stack (COS). COS is a general observability stack for devices such as drones, robots, and IoT devices. It operates from telemetry data, and the COS team has extended it to support robot-specific use cases. If you’re interested in watching the talk, it is available on YouTube.

The meeting link for the next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications, or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

2 posts - 1 participant

Read full topic

by mikelikesrobots on February 06, 2026 10:36 AM

February 05, 2026
Is there / could there be a standard robot package structure?

Hi all! I imagine this might be one of those recurring noob questions that keep popping up every few months, please excuse my naivety...

I am currently working on a ROS 2 mobile robot (diff drive, with a main goal of easy hardware reconfigurability). Initial development took place as a tangled monolithic package, and we are now working on breaking it up into logically separate packages for: common files, simulation, physical robot implementation, navigation stack, example apps, hardware extensions, etc.

To my understanding, there is no official document that recommends a project structure for this, yet still, “established” robots (e.g. turtlebot4, UR, rosbot) seem to follow a similar convention along the lines of:

  • xyz_description – URDFs, meshes, visuals
  • xyz_bringup – Launch and configuration for “real” physical implementation
  • xyz_gazebo / _simulation – Launch and configuration for a simulated equivalent robot
  • xyz_navigation – Navigation stack

None seem to be exactly the same, though. My understanding is that this is a rough convention that the community converged to over time, and not something well defined.

My question is thus twofold:

  1. Is there a standard for splitting up a robot’s codebase into packages, which I’m unaware of?
  2. If not, would there be any value in writing up such a recommendation?

Cheers!

3 posts - 3 participants

Read full topic

by trupples on February 05, 2026 11:49 PM

[Release] GerdsenAI's Depth Anything 3 ROS2 Wrapper with Real-time TensorRT for Jetson

Update: TensorRT Optimization, 7x Performance Improvement Over Previous PyTorch Release!

Great news for everyone following this project! We’ve successfully implemented TensorRT 10.3 acceleration, and the results are significant:

Performance Improvement

Metric            Before (PyTorch)   After (TensorRT)   Improvement
FPS               6.35               43+                6.8x faster
Inference Time    153ms              ~23ms              6.6x faster
GPU Utilization   35-69%             85%+               More efficient

Test Platform: Jetson Orin NX 16GB (Seeed reComputer J4012), JetPack 6.2, TensorRT 10.3

Key Technical Achievement: Host-Container Split Architecture

We solved a significant Jetson deployment challenge - TensorRT Python bindings are broken in current Jetson container images (dusty-nv/jetson-containers#714). Our solution:

HOST (JetPack 6.x)
+--------------------------------------------------+
|  TRT Inference Service (trt_inference_shm.py)    |
|  - TensorRT 10.3, ~15ms inference                |
+--------------------------------------------------+
                    ↑
                    | /dev/shm/da3 (shared memory, ~8ms IPC)
                    ↓
+--------------------------------------------------+
|  Docker Container (ROS2 Humble)                  |
|  - Camera drivers, depth publisher               |
+--------------------------------------------------+

This architecture enables real-time TensorRT inference while keeping ROS2 in a clean container environment.
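
To make the handoff concrete, here is a minimal sketch of what the container-side reader could look like, assuming the host-side TensorRT service writes a fixed-size float32 depth frame into the /dev/shm/da3 file shown above; the resolution, file layout, and function names are illustrative assumptions, not the repository’s actual protocol:

python

import numpy as np

SHM_PATH = "/dev/shm/da3"   # shared-memory file used by the host-side TRT service (from the post)
HEIGHT, WIDTH = 518, 924    # assumed depth-map resolution; adjust to the real layout

def read_latest_depth() -> np.ndarray:
    # np.memmap maps the file directly, so no copy crosses the container boundary
    buf = np.memmap(SHM_PATH, dtype=np.float32, mode="r", shape=(HEIGHT, WIDTH))
    return np.array(buf)    # copy out so the mapping can be released and reused

depth = read_latest_depth()
print(depth.shape, float(depth.min()), float(depth.max()))

In practice this would be paired with framing metadata (a frame counter and timestamp) so the ROS2 depth publisher only publishes complete frames.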

One-Click Demo

git clone https://github.com/GerdsenAI/GerdsenAI-Depth-Anything-3-ROS2-Wrapper.git
cd GerdsenAI-Depth-Anything-3-ROS2-Wrapper
./run.sh

The first run takes ~15-20 minutes (Docker build + TensorRT engine build). Subsequent runs start in ~10 seconds.

Compared to Other Implementations

We’re aware of ika-rwth-aachen/ros2-depth-anything-v3-trt, which achieves 50 FPS on a desktop RTX 6000. Our focus is different:

  • Embedded-first: Optimized for Jetson deployment challenges
  • Container-friendly: Works around broken TRT bindings in Jetson images
  • Production-ready: One-click deployment, auto-dependency installation

Call for Contributors

We’re looking for help with:

  • Test coverage for SharedMemory/TensorRT code paths
  • Validation on other Jetson platforms (AGX Orin, Orin Nano)
  • Point cloud generation (currently depth-only)

Repo: GitHub - GerdsenAI/GerdsenAI-Depth-Anything-3-ROS2-Wrapper: ROS2 wrapper for Depth Anything 3 (https://github.com/ByteDance-Seed/Depth-Anything-3)
License: MIT

@Phocidae @AljazJus - the TensorRT optimization should help significantly with your projects! Let me know if you run into any issues.

1 post - 1 participant

Read full topic

by GerdsenAI on February 05, 2026 05:07 PM

Getting started with Pixi and RoboStack

Hi all,

I noticed a lot of users get stuck when trying Pixi and RoboStack, simply because it’s too hard to do the initial setup.

To help you out, we’ve created a little tool called pixi-ros that maps your package.xml files to a pixi.toml.

It basically does what rosdep does, but since the logic of the rosdep installation doesn’t translate well to a Pixi workspace, this was always complex to implement.

Instead of staying in waiting mode until we figured out a “clean” way of doing that, I just wanted to get something out that helps you get started today.

Here is a small video to get you started:

Pixi ROS extension; think rosdep for Pixi!

I’m very open to contributions or improvement ideas!

P.S. I hope pixi-ros will become obsolete ASAP thanks to our development of pixi-build-ros, which can read package.xml files directly, but today (5 Feb 2026) I would only advise that workflow for advanced users due to its experimental nature.

1 post - 1 participant

Read full topic

by ruben-arts on February 05, 2026 02:23 PM

Accelerated Memory Transport Working Group Announcement

Hi

I’m pleased to announce that the Accelerated Memory Transport Working Group was officially approved by the ROS PMC on January 27th 2026.

This group will focus on extending the ROS transport utilities to enable better memory management through the pipeline and take efficient advantage of available hardware accelerators, while providing fallbacks in cases where the whole system cannot handle the advanced transport.

The work may involve code in repositories including but not limited to:

The first meeting of the Accelerated Memory Transport Working Group will be held on Wed, Feb 11, 2026, from 4:00 PM to 5:00 PM UTC.

If you have any questions about the process, please reach out to me: acordero@honurobotics.com

Thank you

2 posts - 2 participants

Read full topic

by ahcorde on February 05, 2026 12:09 PM

February 04, 2026
🚀 Update: LinkForge v1.2.1 is now live on Blender Extensions!

We just pushed a critical stability update for anyone importing complex robot models.

What’s New in v1.2.1?

:white_check_mark: Fixed “Floating Parts”: We resolved a transform baking issue where imported meshes would drift from their parent links. Your imports are now 1:1 accurate.

:white_check_mark: Smarter XACRO: Use complex math expressions in your property definitions? We now parse mixed-type arguments robustly.

:white_check_mark: Implemented a native high-fidelity XACRO parser

If you are building robots in Blender for ROS 2 or Gazebo, this is the most stable version yet.

:link: Get it on GitHub: linkforge-github

:link: Get it on Blender Extensions: linkforge-blender

1 post - 1 participant

Read full topic

by arounamounchili on February 04, 2026 05:34 PM

Lessons learned migrating directly to ROS 2 Kilted Kaiju with pixi

Hi everyone,

We recently completed our full migration from ROS 1 Noetic directly to ROS 2 Kilted Kaiju. We decided to skip the intermediate LTS releases (Humble/Jazzy) to land directly on bleeding-edge features and be prepared for the next LTS, Lyrical, in May 2026.

Some of you might have seen our initial LinkedIn post about the strategy, which was kindly picked up by OSRF. Since then, we’ve had time to document the actual execution.

You can see the full workflow (including a video of the “trash bin” migration :robot:) in my follow-up post here: :backhand_index_pointing_right: Watch the Migration Workflow on LinkedIn

I wanted to share the technical breakdown here on Discourse, specifically regarding our usage of Pixi, Executors, and the RMW.

1. The Environment Strategy: Pixi & Conda

We bypassed the system-level install entirely. Since we were already using Pixi and Conda for our legacy ROS 1 stack, we leveraged this to make the transition seamless.

  • Side-by-Side Development: This allowed us to run ROS 1 Noetic and ROS 2 Kilted environments on the same machines without environment variable conflicts.
  • The “Disposable” Workspace: We treated workspaces as ephemeral. We could wipe a folder, resolve, and install the full Kilted stack from scratch in <60 seconds (installing user-space dependencies only).

Pixi Gotchas:

  • Versioning: We found we needed to remove the pixi.lock file when pulling the absolute latest build numbers (since we were re-publishing packages with increasing build numbers rapidly during the migration).
  • File Descriptors: On large workspaces, Pixi occasionally ran out of file descriptors during the install phase. A simple retry (or ulimit bump) always resolved this.
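
For anyone who hits the same file-descriptor limit, a typical bump before running the install looks like this (the exact value is arbitrary):

ulimit -n 65536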

2. Observability & AI

We relied heavily on Claude Code to handle the observability side of the migration. Instead of maintaining spreadsheets and bash scripts, we had Claude generate “throw-away” web dashboards to visualize:

  • Build orders
  • Real-time CI status
  • Package porting progress

(See the initial LinkedIn post for examples of these dashboards)

3. The Workflow

Our development loop looked like this: Feature Branch → CI Success → Publish to Staging → pixi install (on Robot) → Test

Because we didn’t rely on baking Docker images for every test, the iteration loop (Code → Robot) was extremely fast.

4. Technical Pain Points & Findings

This is where we spent most of our debugging time:

Executors (Python Nodes):

  • SingleThreadedExecutor: Great for speed, but lacked the versatility we needed (e.g., relying on callbacks within callbacks for certain nodes; see the sketch after this list).
  • MultiThreadedExecutor: This is what we are running mostly now. We noticed some performance overhead, so we pushed high-frequency topic subscriptions (e.g., tf and joint_states) to C++ nodes to compensate.
  • ExperimentalEventExecutor: We tried to implement this but couldn’t get it stable enough for production yet.
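
For context, the callbacks-within-callbacks pattern mentioned above is the classic case where a MultiThreadedExecutor plus a ReentrantCallbackGroup is needed in rclpy. The sketch below only illustrates that pattern; the node, topic, and service names are made up, not taken from our codebase:

python

import rclpy
from rclpy.node import Node
from rclpy.executors import MultiThreadedExecutor
from rclpy.callback_groups import ReentrantCallbackGroup
from std_msgs.msg import String
from std_srvs.srv import Trigger


class NestedCallbackNode(Node):
    """Illustrative node: a subscription callback that makes a blocking service call."""

    def __init__(self):
        super().__init__("nested_callback_demo")
        # A reentrant group lets the nested service call be serviced while the
        # subscription callback is still running.
        group = ReentrantCallbackGroup()
        self.client = self.create_client(Trigger, "do_something", callback_group=group)
        self.sub = self.create_subscription(
            String, "trigger_topic", self.on_msg, 10, callback_group=group)

    def on_msg(self, msg: String) -> None:
        # A synchronous call from inside a callback stalls a SingleThreadedExecutor;
        # with a MultiThreadedExecutor and the reentrant group it can complete.
        if self.client.wait_for_service(timeout_sec=1.0):
            response = self.client.call(Trigger.Request())
            self.get_logger().info(f"nested call result: {response.success}")


def main():
    rclpy.init()
    node = NestedCallbackNode()
    executor = MultiThreadedExecutor()
    executor.add_node(node)
    executor.spin()


if __name__ == "__main__":
    main()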

RMW Implementation:

  • We started with the default FastRTPS but encountered stability and latency issues in our specific setup.
  • We switched to CycloneDDS and saw an immediate improvement in stability.
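
For anyone wanting to reproduce the switch, the RMW is selected per process via an environment variable before launching, assuming the rmw_cyclonedds_cpp package is installed in the environment:

export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp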

Questions for the Community:

  1. Has anyone put the new Zenoh RMW through its paces in Kilted/Rolling yet? We are eyeing that as a potential next step.
  2. Are others testing Kilted in production contexts yet, or have you had better luck with the Event Executor stability?

Related discussion on tooling: Pixi as a co-official way of installing ROS on Linux

3 posts - 3 participants

Read full topic

by daenny on February 04, 2026 04:57 PM

February 03, 2026
Stop rewriting legacy URDFs by hand 🛑

Migrating robots from ROS 1 to ROS 2 is usually a headache of XML editing and syntax checking.

In this video, I demonstrate how ��������� solves this in minute using the ��-���100.

The Workflow:
:one: Import: Load legacy ROS 1 URDFs directly into Blender with LinkForge.
:two: Interact: Click links and joints to visualize properties instantly.
:three: Modernize: Auto-generate ros2_control interfaces from existing joints with one click.
:four: Export: Output a clean, fully compliant ROS 2 URDF ready for Jazzy.

LinkForge handles the inertia matrices, geometry offsets, and tag upgrades automatically.

LinkForge - Import URDF file: Stop rewriting legacy URDFs by hand. 🛑

3 posts - 2 participants

Read full topic

by arounamounchili on February 03, 2026 07:26 PM

Stable Distance Sensing for ROS-Based Platforms in Low-Visibility Environments

In nighttime, foggy conditions, or complex terrain environments, many ROS-based platforms
(UAVs, UGVs, or fixed installations) struggle with reliable distance perception when relying
solely on vision or illumination-dependent sensors.

In our recent projects, we’ve been focusing on stable, continuous distance sensing as a
foundational capability for:

  • ground altitude estimation
  • obstacle distance measurement
  • terrain-aware navigation

Millimeter-wave radar has shown strong advantages in these scenarios due to its independence
from lighting conditions and robustness in fog, dust, or rain. We are currently working with
both 24GHz and 77GHz mmWave radar configurations, targeting:

  • mid-to-long-range altitude sensing
  • close-range, high-stability distance measurement

We’re interested in discussing with the ROS community:

  • How others integrate mmWave radar data into ROS (ROS1 / ROS2)

  • Message formats or filtering strategies for distance output

  • Fusion approaches with vision or IMU for terrain-following or obstacle detection

Any shared experience, references, or best practices would be greatly appreciated.
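
To make the message-format question concrete, one minimal option is publishing each beam as a sensor_msgs/Range message; the rclpy sketch below is only an illustration of that option (topic name, frame, rate, and field values are placeholders):

python

import rclpy
from rclpy.node import Node
from sensor_msgs.msg import Range


class RadarRangePublisher(Node):
    """Illustrative single-beam radar distance publisher."""

    def __init__(self):
        super().__init__("mmwave_range_publisher")
        self.pub = self.create_publisher(Range, "radar/range", 10)
        self.timer = self.create_timer(0.05, self.publish_range)  # 20 Hz placeholder

    def publish_range(self):
        msg = Range()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = "radar_link"
        # Range has no dedicated RADAR radiation_type constant, so a convention must be picked
        msg.radiation_type = Range.INFRARED
        msg.field_of_view = 0.26   # rad, placeholder beam width
        msg.min_range = 0.2        # m
        msg.max_range = 50.0       # m
        msg.range = 12.3           # replace with the measured distance
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(RadarRangePublisher())


if __name__ == "__main__":
    main()

For multi-target returns, publishing a sensor_msgs/PointCloud2 with per-point velocity and intensity fields is the other widely used option.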

1 post - 1 participant

Read full topic

by hexsoon2026 on February 03, 2026 04:55 PM

Developing an autonomous weeding robot for orchards using ROS2 Jazzy

I’m developing an autonomous weeding robot for orchards using ROS2 Jazzy. The robot needs to navigate tree rows and weed close to trunks (20cm safety margin).
My approach:

  • GPS (RTK ideally) for global path planning and navigation between rows
  • Visual-inertial SLAM for precision control when working near trees - GPS accuracy isn’t sufficient for safe 20cm clearances
  • Robust sensor fusion to hand off between the two modes

The interesting challenge is transitioning smoothly between GPS-based navigation and VIO-based precision maneuvering as the robot approaches trees.

Questions:

  • What VIO SLAM packages work reliably with ROS2 Jazzy in outdoor agricultural settings?
  • How have others handled the handoff between GPS and visual odometry for hybrid localization?
  • Any recommendations for handling challenging visual conditions (varying sunlight, repetitive tree textures)?

Currently working in simulation - would love to hear from anyone who’s taken similar systems to hardware.

1 post - 1 participant

Read full topic

by Ilyes_Saadna on February 03, 2026 12:50 AM

February 02, 2026
Error reviewing Action Feedback messages in MCAP files

Hello,

We are using Kilted and record mcap bags with a command line approximately like this:

ros2 bag record -o <filename> --all-topics --all-services --all-actions --include-hidden-topics

When we open the MCAP files in Lichtblick or Foxglove we get this error in Problems panel and we can’t review the feedback messages:

Error in topic <redacted>/_action/feedback (channel 6)
Message encoding cdr with schema encoding '' is not supported (expected "ros2msg" or "ros2idl" or "omgidl")

At this point we are at a loss as to how to resolve this - do we need to publish the schema encoding somewhere?

Thanks.

3 posts - 2 participants

Read full topic

by jbcpollak on February 02, 2026 05:13 PM

MINT Protocol - ROS 2 node for robots to earn crypto for task execution

Built a ROS 2 package that lets robots earn MINT tokens on Solana for task execution.

Repo: GitHub - FoundryNet/ros-mint

How it works

The node subscribes to the /mint/task_start and /mint/task_end topics. The duration between the two events gets settled on-chain as MINT tokens.

# Launch
ros2 run mint_ros settler --ros-args -p keypair_path:=/path/to/keypair.json

# Your task node publishes:
/mint/task_start  # String: task_id
/mint/task_end    # String: task_id

# MINT settles automatically
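
For illustration, a minimal task node that wraps one unit of work with these start/end events could look like the sketch below, assuming std_msgs/String carries the task_id as the topic list above suggests; the node and task names are made up:

python

import uuid

import rclpy
from rclpy.node import Node
from std_msgs.msg import String


class MintTaskDemo(Node):
    """Hypothetical task node reporting work duration to the MINT settler."""

    def __init__(self):
        super().__init__("mint_task_demo")
        self.start_pub = self.create_publisher(String, "/mint/task_start", 10)
        self.end_pub = self.create_publisher(String, "/mint/task_end", 10)

    def run_task(self):
        task_id = str(uuid.uuid4())
        # In practice, give discovery a moment so the settler sees the start event.
        self.start_pub.publish(String(data=task_id))
        # ... perform the actual work here ...
        self.end_pub.publish(String(data=task_id))


def main():
    rclpy.init()
    node = MintTaskDemo()
    node.run_task()
    node.destroy_node()
    rclpy.shutdown()


if __name__ == "__main__":
    main()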

Rate

0.005 MINT per second of work. Oracle pays gas - robots pay nothing.

Task Duration   MINT Earned
1 minute        0.30 MINT
10 minutes      3.00 MINT
1 hour          18.00 MINT

Links

Machines work. Machines should earn.

3 posts - 2 participants

Read full topic

by FoundryNet on February 02, 2026 04:57 PM

Optimization of Piper Robotic Arm Motion Control via lerobot Transplantation


Author: VA11Hall
Link: https://zhuanlan.zhihu.com/p/1946636125415401016
Source: Zhihu

I. Introduction

We have successfully transplanted lerobot to the Piper robotic arm, enabling smooth execution of task workflows including remote control, data collection, training, and inference. The current goal is to optimize the Piper’s operational performance—such as improving success rates and motion stability. Key optimization measures will focus on two aspects: enhancing the quality and scale of datasets, and further refining motion control algorithms. For the former, we plan to conduct experiments on reducing light interference, deploying cameras in more reasonable positions (e.g., on the arm itself), and improving the consistency of teaching actions during data collection. For the latter, we will directly modify the code to enhance motion control.

This article introduces a code-based approach to optimizing Piper’s motion control, inspired by the following Bilibili video: LeRobot ACT Algorithm Introduction and Tuning

The video author not only provides optimization ideas and a demonstration of the results, but also shares the source code. This article analyzes and explains the ideas and corresponding code implementations from the video, and presents the results of transplanting this code to the Piper robotic arm for practical testing.

II. Limitations of Motion Control in lerobot’s Official Code

Robots trained with lerobot often exhibit severe jitter during inference and validation. This is because lerobot relies on imitation learning—during data collection, human demonstrators inevitably introduce unnecessary jitter into the dataset due to unfamiliarity with the master arm. Additionally, even for similar grasping tasks, demonstrators may adopt different action strategies. Given the current limitations of small dataset sizes and immature network architectures, these factors lead to unstable motion control (there are also numerous other contributing factors).

For a given pre-trained model, developers can directly improve data collection quality to provide the model with high-quality task demonstrations—analogous to “compensating for a less capable student with a more competent teacher.” Furthermore, developers can embed critical knowledge that the robot struggles to learn into the code through explicit programming.

To reduce jitter during robotic arm movement without compromising the model’s generalization ability, two classic motion control optimization strategies can be adopted: motion filtering and interpolation.

III. Interpolation and Filtering of Action Sequences Generated by ACT

The default model used in lerobot workflows is ACT, with relevant code located in the policies directory. The lerobot project has transplanted the original ACT code and implemented wrapper functions for robot control.

Using VS Code’s indexing feature, we can directly locate the select_action function in lerobot’s ACT-related code:

python

def select_action(self, batch: dict[str, Tensor]) -> Tensor:
    """Select a single action given environment observations.

    This method wraps `select_actions` in order to return one action at a time for execution in the
    environment. It works by managing the actions in a queue and only calling `select_actions` when the
    queue is empty.
    """
    self.eval()  # keeping the policy in eval mode as it could be set to train mode while queue is consumed

    if self.config.temporal_ensemble_coeff is not None:
        actions = self.predict_action_chunk(batch)
        action = self.temporal_ensembler.update(actions)
        return action

    # Action queue logic for n_action_steps > 1. When the action_queue is depleted, populate it by
    # querying the policy.
    if len(self._action_queue) == 0:
        actions = self.predict_action_chunk(batch)[:, : self.config.n_action_steps]

        # `self.model.forward` returns a (batch_size, n_action_steps, action_dim) tensor, but the queue
        # effectively has shape (n_action_steps, batch_size, *), hence the transpose.
        self._action_queue.extend(actions.transpose(0, 1))
    return self._action_queue.popleft()

The core logic here is: if the action queue is empty, the model predicts and generates a new sequence of actions. An unavoidable limitation of this logic is that the end of one action cluster (a sequence of consecutive actions) and the start of the next generated cluster often lack continuity. This causes the robotic arm to exhibit sudden jumps during inference (more severe than jitter, similar to convulsions).

To address this, linear interpolation can be used to generate a series of intermediate actions, smoothing the transition between discontinuous action clusters. Subsequently, applying mean filtering to the entire action sequence can further mitigate jitter.

P.S.: While writing this, I suddenly wondered if slower demonstration actions during data collection would result in more stable operation.

Based on the above ideas, the select_action function was modified as follows:

python

def select_action(self, batch: dict[str, Tensor]) -> Tensor:
    """Select a single action given environment observations.

    This method wraps `select_actions` in order to return one action at a time for execution in the
    environment. It works by managing the actions in a queue and only calling `select_actions` when the
    queue is empty.
    """
    self.eval()  # keeping the policy in eval mode as it could be set to train mode while queue is consumed

    if self.config.temporal_ensemble_coeff is not None:
        actions = self.predict_action_chunk(batch)
        action = self.temporal_ensembler.update(actions)
        return action

    # vkrobot: Model prediction generates a sequence of n_action_steps, which is stored in the queue.
    # The robotic arm is controlled based on the actions in the sequence.
    if len(self._action_queue) == 1:
        self.last_action = self._action_queue[0].cpu().tolist()[0]

    # Action queue logic for n_action_steps > 1. When the action_queue is depleted, populate it by
    # querying the policy.
    if len(self._action_queue) == 0:
        actions = self.predict_action_chunk(batch)[:, : self.config.n_action_steps]

        # `self.model.forward` returns a (batch_size, n_action_steps, action_dim) tensor, but the queue
        # effectively has shape (n_action_steps, batch_size, *), hence the transpose.
        # vkrobot: Linear interpolation for jump points
        self.begin_mutation_filter(actions)
        self._action_queue.extend(actions.transpose(0, 1))
        # vkrobot: Mean filtering
        self.actions_mean_filtering()
    return self._action_queue.popleft()

Key modifications include:

python

if len(self._action_queue) == 1:

When only one action remains in the queue (indicating the end of the previously predicted action cluster), this action is recorded. For clarification: an “action” here refers to a set of joint angles.

Thus, when generating the next prediction, linear interpolation can be used to smooth the transition from the last action of the previous cluster to the first action of the new cluster. Additionally, mean filtering is applied to all newly generated action sequences:

python

self.begin_mutation_filter(actions)
self._action_queue.extend(actions.transpose(0, 1))
# vkrobot: Mean filtering
self.actions_mean_filtering()

The interpolation and filtering functions need to be implemented separately, as they are not included in the original lerobot code.
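
For reference, one possible shape of the two helpers, sketched as methods on the ACT policy class. This only follows the idea described above (blend the first few new actions toward self.last_action, then moving-average the queued actions) and is not the video author's exact code; it assumes actions has shape (batch_size, n_action_steps, action_dim) and that torch is already imported in the module:

python

def begin_mutation_filter(self, actions, num_interp: int = 5):
    """Linearly blend the start of the new chunk toward the last executed action."""
    if getattr(self, "last_action", None) is None:
        return
    last = torch.tensor(self.last_action, device=actions.device, dtype=actions.dtype)
    steps = min(num_interp, actions.shape[1])
    for i in range(steps):
        alpha = (i + 1) / (steps + 1)   # small alpha -> close to last_action
        actions[:, i] = (1 - alpha) * last + alpha * actions[:, i]

def actions_mean_filtering(self, window: int = 5):
    """Apply a moving average over the queued actions to suppress jitter."""
    queued = list(self._action_queue)        # items of shape (batch_size, action_dim)
    if len(queued) < window:
        return
    stacked = torch.stack(queued)            # (n_steps, batch_size, action_dim)
    smoothed = stacked.clone()
    for i in range(len(queued)):
        lo = max(0, i - window // 2)
        hi = min(len(queued), i + window // 2 + 1)
        smoothed[i] = stacked[lo:hi].mean(dim=0)
    self._action_queue.clear()
    self._action_queue.extend(smoothed)      # re-fill the deque with smoothed actions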

IV. Adding Smooth Loss to the Loss Function

The video author also proposes another method to reduce jitter: incorporating smooth loss into the total loss function. This is a common technique in machine learning—an ingenious idea, though its practical effectiveness may vary depending on the scenario.

python

# Mean filtering (smoothness) loss (vkrobot); requires torch.nn.functional imported as F
kernel_size = 11
padding = kernel_size // 2
x = actions_hat.transpose(1, 2)  # (batch, action_dim, chunk_size) layout for conv1d
weight = torch.ones(6, 1, kernel_size, device=actions_hat.device) / kernel_size  # 6 = action dimension
filtered_x = F.conv1d(x, weight, padding=padding, groups=6)  # depthwise moving average per action dimension
filtered_tensor = filtered_x.transpose(1, 2)
mean_loss = torch.abs(actions_hat - filtered_tensor).mean()
loss += mean_loss
loss_dict["mean_loss"] = mean_loss.item()

V. Other Optimization Attempts

The video also mentions modifying model inference parameters to improve grasping success rates. We tested this method on the Piper: setting the model to infer 100 steps and execute the first 50 steps resulted in the robot entering a hesitant state, failing to proceed. Adjusting to 70 steps also led to similar issues. Thus, parameter modification may require scenario-specific tuning.

Additionally, the video suggests introducing mean filtering during data collection—a method that should be effective. We plan to test this in future research focused on data collection optimization.

After integrating interpolation and filtering, we ran the previously trained model. A comparison of the operational performance before and after optimization can be viewed in the following video: Piper lerobot Transplantation: Motion Control Optimization Demo

Overall, the Piper robotic arm’s motion during inference has become significantly smoother, with a moderate improvement in grasping success rates.

1 post - 1 participant

Read full topic

by Agilex_Robotics on February 02, 2026 09:59 AM

February 01, 2026
URDF Kitchen Beta 2 Released

Hello everyone,

I have developed a tool to support the creation of URDF files.

URDF Kitchen is a GUI-based tool that allows you to load mesh files for robot parts, mark connection points, and assemble robots by connecting nodes. It is especially useful when your CAD or 3D modeling tools cannot directly export URDF files, or when existing URDFs need to be modified.

The tool also supports exporting MJCF for use with MuJoCo.

Key features:

  • Robot assembly via node-based connections

  • Supports STL, OBJ, and DAE mesh files

  • Export to URDF and MJCF

  • Import URDF, xacro, SDF, and MJCF

  • Automatic mirroring to generate the right side from a left-side assembly

  • GUI-based configuration of connection points and colliders

  • Supports setting only the minimum required joint parameters

  • Available on Windows, macOS, and Ubuntu

  • Free to use (GPL v3.0)

  • Written in Python, making it easy to extend or modify features with AI-assisted coding

This is the Beta 2 release, with significant feature updates since the previous version.

Please give it a try, and I would really appreciate any feedback or bug reports.

YouTube (30s overview video):

URDF kitchen Beta2

GitHub:
https://github.com/Ninagawa123/URDF_kitchen/tree/beta2

5 posts - 4 participants

Read full topic

by Ninagawa123 on February 01, 2026 11:06 PM

Toio meets navigation2

I published a ROS 2 package for using navigation2 with toio, so users can study navigation2 with toio.
You can watch the demo movie.

1 post - 1 participant

Read full topic

by dandelion1124 on February 01, 2026 04:12 AM

Space ROS Jazzy 2026.01.0 Release

Hello ROS community!

The Space ROS team is excited to announce that Space ROS Jazzy 2026.01.0 was released last week and is available as osrf/space-ros:jazzy-2026.01.0 on DockerHub. Additionally, MoveIt 2 and Navigation 2 builds on the jazzy-2026.01.0 underlay are available to accelerate work using these systems, as osrf/space-ros-moveit2:jazzy-2026.01.0 and osrf/space-ros-nav2:jazzy-2026.01.0 on DockerHub respectively.

For an exhaustive list of all the issues addressed and PRs merged, check out the GitHub Project Board for this release here.

Code

Current versions of all packages released with Space ROS are available at:

What’s Next

This release comes 3 months after the last release. The next release is planned for April 30, 2026. If you want to contribute to features, tests, demos, or documentation of Space ROS, get involved on the Space ROS GitHub issues and discussion board.

All the best,

The Space ROS Team

2 posts - 1 participant

Read full topic

by bkempa on February 01, 2026 12:26 AM

January 30, 2026
Abandoned joystick_drivers package

I noticed that the joystick drivers repository has not had any recent changes and there are several open pull requests which have not been addressed by the maintainers. Has this package been replaced or is it abandoned?

7 posts - 4 participants

Read full topic

by ethanholter on January 30, 2026 05:18 PM


Powered by the awesome: Planet