January 27, 2026
Stop SSH-ing into robots to find the right rosbag. We built a visual Rolling Buffer for ROS2

Hi everyone,

I’m back with an update on INSAION, the observability platform my co-founder and I are building. Last time, we discussed general fleet monitoring, but today I want to share a specific feature we just released that targets a massive pain point we faced as roboticists: Managing local recordings without filling up the disk.

We’ve all been there: A robot fails in production, you SSH in, navigate to the log directory, and start playing “guess the timestamp” to find the right bag file. It’s tedious, and usually, you either missed the data or the disk is already full.

So, we built a smart Rolling Buffer to solve this.

How it actually works (It’s more than just a loop):

It’s not just a simple circular buffer. We built a storage management system directly into the agent. You allocate a specific amount of storage (e.g., 10GB) and select a policy via the Web UI (no config files!):

  • FIFO: Oldest data gets evicted automatically when the limit is reached.

  • HARD: Recording stops when the limit is reached to preserve exact history.

  • NONE: Standard recording until disk saturation.
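To make the three policies concrete, here is a rough, illustrative sketch of the bookkeeping involved (plain Python with hypothetical names like budget_bytes and bag_dir; this is not INSAION’s actual implementation, just the idea behind each policy):

import os
import shutil

def may_keep_recording(policy, budget_bytes, bag_dir):
    """Return True if the recorder may keep writing under the given policy."""
    bags = sorted((os.path.join(bag_dir, f) for f in os.listdir(bag_dir)),
                  key=os.path.getmtime)  # oldest first
    used = sum(os.path.getsize(b) for b in bags)

    if policy == "FIFO":
        # Evict the oldest recordings until we are back under the budget.
        while used > budget_bytes and bags:
            oldest = bags.pop(0)
            used -= os.path.getsize(oldest)
            os.remove(oldest)
        return True
    if policy == "HARD":
        # Preserve exact history: stop recording once the budget is reached.
        return used < budget_bytes
    if policy == "NONE":
        # Keep recording until the disk itself runs out of space.
        return shutil.disk_usage(bag_dir).free > 0
    raise ValueError(f"unknown policy: {policy}")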

The “No-SSH” Workflow:

As you can see in the video attached, we visualized the timeline.

  1. The Timeline: You see exactly where the Incidents (red blocks) happened relative to the Recordings (yellow/green blocks).

  2. Visual correlation: No need to grep logs or match timestamps manually. You can see at a glance if you have data covering the crash.

  3. Selective Sync: You don’t need to upload terabytes of data. You just select the relevant block from the timeline and click “Sync.” The heavy sensor data (Lidar, Images, Costmaps) is then uploaded to the cloud for analysis.

Closing the Loop:

Our goal is to give you the full picture. We start with lightweight telemetry for live monitoring, which triggers alerts. Then, we close the loop by letting you easily grab the high-fidelity, heavy data stored locally—only when you actually need it.

We’re trying to build the tool we wish we had in our previous robotics jobs. I’d love to hear your thoughts on this “smart recording” approach—does this sound like something that would save you time debugging?

I’d love to hear your feedback on it

Check it out at app.insaion.com if you want to dig deeper. It’s free to get started.

Cheers!

1 post - 1 participant

Read full topic

by vicmassy on January 27, 2026 05:02 PM

Implementation of UR Robotic Arm Teleoperation with PIKA SDK

Demo

Pika Teleoperation of UR Robotic Arm Demo Video

Getting Started with PIKA Teleoperation (UR Edition)

We recommend reading [Methods for Teleoperating Any Robotic Arm with PIKA] before you begin.

Once you understand the underlying principles, let’s guide you through writing a teleoperation program step by step. To quickly implement teleoperation functionality, we will use the following tools:

  • PIKA SDK: Enables fast access to all PIKA Sense data and out-of-the-box gripper control capabilities
  • Various transformation tools: Such as converting XYZRPY to 4x4 homogeneous transformation matrices, converting XYZ and quaternions to 4x4 homogeneous transformation matrices, and converting RPY angles (rotations around X/Y/Z axes) to rotation vectors
  • UR Robotic Arm Control Interface: This interface is primarily built on the ur-rtde library. It enables real-time control by sending target poses (XYZ and rotation vectors), speed, acceleration, control interval (frequency), lookahead time, and proportional gain

Environment Setup

  1. Clone the code
git clone --recursive https://github.com/RoboPPN/pika_remote_ur.git

  2. Install Dependencies

cd pika_remote_ur/pika_sdk

pip3 install -r requirements.txt  

pip3 install -e .

pip3 install ur-rtde

UR Control Interface

Let's start with the control interface. To implement teleoperation, you first need to develop a proper control interface. For instance, the native control interface of UR robots accepts XYZ coordinates and rotation vectors as inputs, while teleoperation code typically outputs XYZRPY data. This requires a coordinate transformation, which can be implemented either in the control interface or the main teleoperation program. Here, we perform the transformation in the main teleoperation program.
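To make these conversions concrete, here is a minimal sketch using numpy and scipy (the repository’s tools module provides its own equivalent helpers; the extrinsic XYZ Euler convention shown here is an assumption and must match the convention actually used in the repo):

import numpy as np
from scipy.spatial.transform import Rotation as R

def xyzrpy_to_mat(x, y, z, roll, pitch, yaw):
    # XYZ position + RPY angles -> 4x4 homogeneous transformation matrix
    T = np.eye(4)
    T[:3, :3] = R.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

def quaternion_to_rpy(qx, qy, qz, qw):
    # Quaternion (x, y, z, w) -> roll, pitch, yaw
    return R.from_quat([qx, qy, qz, qw]).as_euler("xyz")

def rpy_to_rotvec(roll, pitch, yaw):
    # RPY angles -> rotation vector (axis * angle), the format the UR interface expects
    return R.from_euler("xyz", [roll, pitch, yaw]).as_rotvec()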

The UR robotic arm control interface code is located at pika_remote_ur/ur_control.py:

import rtde_control
import rtde_receive

class URCONTROL:
    def __init__(self,robot_ip):
        # Connect to the robot
        self.rtde_c = rtde_control.RTDEControlInterface(robot_ip)
        self.rtde_r = rtde_receive.RTDEReceiveInterface(robot_ip)
        if not self.rtde_c.isConnected():
            print("Failed to connect to the robot control interface.")
            return
        if not self.rtde_r.isConnected():
            print("Failed to connect to the robot receive interface.")
            return
        print("Connected to the robot.")
            
        # Define servoL parameters
        self.speed = 0.15  # m/s
        self.acceleration = 0.1  # m/s^2
        self.dt = 1.0/50  # 50Hz control interval (use 1.0/125 for 125Hz)
        self.lookahead_time = 0.1  # s
        self.gain = 300  # proportional gain
        
    def sevol_l(self, target_pose):
        self.rtde_c.servoL(target_pose, self.speed, self.acceleration, self.dt, self.lookahead_time, self.gain)
        
    def get_tcp_pose(self):
        return self.rtde_r.getActualTCPPose()
    
    def disconnect(self):
        if self.rtde_c:
            self.rtde_c.disconnect()
        if self.rtde_r:
            self.rtde_r.disconnect()
        print("Disconnected from UR robot")

# example
# if __name__ == "__main__":
#     ur = URCONTROL("192.168.1.15")
#     target_pose = [0.437, -0.1, 0.846, -0.11019068574221307, 1.59479642933605, 0.07061926626169934]
    
#     ur.sevol_l(target_pose)

The code defines a Python class named URCONTROL for communicating with and controlling UR robots. This class encapsulates the functionality of the rtde_control and rtde_receive libraries, providing methods for connecting to the robot, disconnecting, sending servoL commands, and retrieving TCP poses.
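As a quick usage illustration (hypothetical IP address and motion, using only the methods defined above), a caller could stream small position updates at the 50Hz interval configured in the class:

import math
import time

from ur_control import URCONTROL

ur = URCONTROL("192.168.1.15")   # replace with your robot's IP
start = ur.get_tcp_pose()        # [x, y, z, Rx, Ry, Rz], rotation-vector format

for i in range(250):             # roughly 5 seconds at 50Hz
    target = list(start)
    target[2] = start[2] + 0.02 * math.sin(i / 25.0)  # +/- 2 cm around the start height
    ur.sevol_l(target)
    time.sleep(1.0 / 50)

ur.disconnect()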

Core Teleoperation Code

The teleoperation code is located at `pika_remote_ur/teleop_ur.py`

As outlined in [Methods for Teleoperating Any Robotic Arm with PIKA], the teleoperation principle can be summarized in four key steps:

  1. Obtain 6D Pose data
  2. Coordinate System Alignment
  3. Incremental Control
  4. Map 6D Pose data to the robotic arm

Obtaining Pose Data

The code is as follows:
# Get pose data of the tracker device
def get_tracker_pose(self):
    logger.info(f"Starting to obtain pose data of {self.target_device}...")
    while True:
        # Get pose data
        pose = self.sense.get_pose(self.target_device)
        if pose:
            # Extract position and rotation data for further processing
            position = pose.position  # [x, y, z]
            rotation = self.tools.quaternion_to_rpy(pose.rotation[0],pose.rotation[1],pose.rotation[2],pose.rotation[3])  # [x, y, z, w] quaternion

            self.x,self.y,self.z,   self.roll, self.pitch, self.yaw = self.adjustment(position[0],position[1],position[2],
                                                                                      rotation[0],rotation[1],rotation[2])                                                                           
        else:

            logger.warning(f"Failed to obtain pose data for {self.target_device}, retrying in the next cycle...")

        time.sleep(0.02)  # Obtain data every 0.02 seconds (50Hz)

This code retrieves the pose information of the tracker named “T20” every 0.02 seconds. There are two types of tracker device names: those starting with WM and those starting with T. When connecting trackers to the computer via a wired connection, the first connected tracker is named T20, the second T21, and so on. For wireless connections, the first connected tracker is named WM0, the second WM1, and so forth.

The acquired pose data requires further processing. The adjustment function is used to adjust the coordinates to match the coordinate system of the UR robotic arm’s end effector, achieving alignment between the two systems.

Coordinate System Alignment

The code is as follows:
# Coordinate transformation adjustment function
def adjustment(self,x,y,z,Rx,Ry,Rz):
    transform = self.tools.xyzrpy2Mat(x,y,z,   Rx, Ry, Rz)

    r_adj = self.tools.xyzrpy2Mat(self.pika_to_arm[0],self.pika_to_arm[1],self.pika_to_arm[2],
                                  self.pika_to_arm[3],self.pika_to_arm[4],self.pika_to_arm[5],)   # Adjust coordinate axis direction: Pika ---> Robotic Arm End Effector

    transform = np.dot(transform, r_adj)

    x_,y_,z_,Rx_,Ry_,Rz_ = self.tools.mat2xyzrpy(transform)

    return x_,y_,z_,Rx_,Ry_,Rz_

The function implements coordinate transformation and adjustment with the following steps:

  1. Convert the input pose (x,y,z,Rx,Ry,Rz) into a transformation matrix.
  2. Obtain the adjustment matrix for transforming the Pika coordinate system to the robotic arm’s end effector coordinate system.
  3. Combine the two transformations through matrix multiplication.
  4. Convert the final transformation matrix back to pose parameters and return the result.

The adjusted pose parameters matching the robotic arm’s coordinate system can be obtained through this function.

Incremental Control

In teleoperation, the pose data provided by Pika Sense is absolute. However, we do not want the robotic arm to jump directly to this absolute pose. Instead, we want the robotic arm to follow the relative movements of the operator starting from its current position. In simple terms, this involves converting the absolute pose changes of the control device into relative pose commands for the robotic arm.

The code is as follows:

# Incremental control
def calc_pose_incre(self,base_pose, pose_data):
    begin_matrix = self.tools.xyzrpy2Mat(base_pose[0], base_pose[1], base_pose[2],
                                                base_pose[3], base_pose[4], base_pose[5])
    zero_matrix = self.tools.xyzrpy2Mat(self.initial_pose_rpy[0],self.initial_pose_rpy[1],self.initial_pose_rpy[2],
                                        self.initial_pose_rpy[3],self.initial_pose_rpy[4],self.initial_pose_rpy[5])
    end_matrix = self.tools.xyzrpy2Mat(pose_data[0], pose_data[1], pose_data[2],
                                            pose_data[3], pose_data[4], pose_data[5])
    result_matrix = np.dot(zero_matrix, np.dot(np.linalg.inv(begin_matrix), end_matrix))
    xyzrpy = self.tools.mat2xyzrpy(result_matrix)
    return xyzrpy   

This function uses transformation matrix arithmetic to implement incremental control. Let’s break down the code step by step:

Input Parameters:

  • base_pose: The reference pose at the start of teleoperation. When teleoperation is triggered, the system records the current pose of the control device and stores it as self.base_pose. This pose serves as the “starting point” or “reference zero point” for calculating all subsequent increments.
  • pose_data: The real-time pose data received from the control device (Pika Sense) at the current moment.

Matrix Transformation: The function first converts three key poses (represented in [x, y, z, roll, pitch, yaw] format) into 4x4 homogeneous transformation matrices, typically implemented by the tools.xyzrpy2Mat function.

  • begin_matrix: Converted from base_pose, representing the pose matrix of the control device at the start of teleoperation (denoted as T_begin).
  • zero_matrix: Converted from self.initial_pose_rpy, representing the pose matrix of the robotic arm’s end effector at the start of teleoperation. This is the “starting point” for the robotic arm’s movement (denoted as T_zero).
  • end_matrix: Converted from pose_data, representing the pose matrix of the control device at the current moment (denoted as T_end).

Core Calculation: This is the critical line of code:

result_matrix = np.dot(zero_matrix, np.dot(np.linalg.inv(begin_matrix), end_matrix))

Let’s analyze it using matrix multiplication. The formula can be expressed as: Result = T_zero * (T_begin)⁻¹ * T_end

  • np.linalg.inv(begin_matrix): Calculates the inverse matrix of begin_matrix, i.e., (T_begin)⁻¹. In robotics, the inverse of a transformation matrix represents the reverse transformation.
  • np.dot(np.linalg.inv(begin_matrix), end_matrix): This calculates (T_begin)⁻¹ * T_end, which physically represents the transformation required to convert from the begin coordinate system to the end coordinate system. In other words, it accurately describes the relative pose change (increment) of the control device from the start of teleoperation to the current moment (denoted as ΔT).
  • np.dot(zero_matrix, ...): This calculates T_zero * ΔT, which physically applies the calculated relative pose change (ΔT) to the initial pose of the robotic arm (T_zero).

Result Conversion and Return:

  • xyzrpy = self.tools.mat2xyzrpy(result_matrix): Converts the calculated 4x4 target pose matrix result_matrix back to the [x, y, z, roll, pitch, yaw] format that the robot controller can interpret.
  • return xyzrpy: Returns the calculated target pose.
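A quick numpy sanity check (with made-up numbers) helps build intuition for the formula: if the control device translates by +0.10 m in x between the start of teleoperation and the current moment, the computed target is simply the robotic arm’s initial pose shifted by the same +0.10 m.

import numpy as np

def translation(x, y, z):
    T = np.eye(4)
    T[:3, 3] = [x, y, z]
    return T

T_begin = translation(0.50, 0.20, 0.30)   # device pose when teleoperation started
T_end   = translation(0.60, 0.20, 0.30)   # device pose now (+0.10 m in x)
T_zero  = translation(0.40, 0.00, 0.50)   # arm TCP pose when teleoperation started

result = T_zero @ np.linalg.inv(T_begin) @ T_end
print(result[:3, 3])   # [0.5 0.  0.5] -> the arm's initial position moved +0.10 m in x

Rotations compose the same way: the increment ΔT carries both the translational and rotational change of the control device.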

Teleoperation Triggering

There are various ways to trigger teleoperation:
  1. Voice Trigger: The operator can trigger teleoperation using a wake word.
  2. Server Request Trigger: Teleoperation is triggered via a server request.

However, both methods have usability limitations. Voice triggering requires an additional voice input module and may suffer from low wake word recognition accuracy—you might have to repeat the wake word multiple times before successful triggering, leaving you frustrated before even starting teleoperation. Server request triggering requires sending a request from the control computer, which works well with two-person collaboration but becomes cumbersome when operating alone.

Instead, we use Pika Sense’s state transition detection to trigger teleoperation. The operator simply holds the Pika Sense and double-clicks it to reverse the state, thereby initiating teleoperation. The code is as follows:

# Teleoperation trigger
def handle_trigger(self):
    current_value = self.sense.get_command_state()

    if self.last_value is None:
        self.last_value = current_value
    if current_value != self.last_value: # Detect state change
        self.bool_trigger = not self.bool_trigger # Reverse bool_trigger
        self.last_value =  current_value # Update last_value
        # Perform corresponding operations based on the new bool_trigger value
        if self.bool_trigger :
            self.base_pose = [self.x, self.y, self.z, self.roll, self.pitch, self.yaw]
            self.flag = True
            print("Teleoperation started")

        elif not self.bool_trigger :
            self.flag = False

            #-------------------------------------------------Option 1: Robotic arm stops at current pose after teleoperation ends; resumes from current pose in next teleoperation---------------------------------------------------

            self.initial_pose_rotvec = self.ur_control.get_tcp_pose()

            temp_rotvec = [self.initial_pose_rotvec[3], self.initial_pose_rotvec[4], self.initial_pose_rotvec[5]]

            #  Convert rotation vector to Euler angles
            roll, pitch, yaw = self.tools.rotvec_to_rpy(temp_rotvec)

            self.initial_pose_rpy = self.initial_pose_rotvec[:]
            self.initial_pose_rpy[3] = roll
            self.initial_pose_rpy[4] = pitch
            self.initial_pose_rpy[5] = yaw

            self.base_pose = self.initial_pose_rpy # Desired target pose data
            print("Teleoperation stopped")

            #-------------------------------------------------Option 2: Robotic arm returns to initial pose after teleoperation ends; starts from initial pose in next teleoperation---------------------------------------------------

            # # Get current pose of the robotic arm
            # current_pose = self.ur_control.get_tcp_pose()

            # # Define interpolation steps
            # num_steps = 100  # Adjust steps as needed; more steps result in smoother transition

            # for i in range(1, num_steps + 1):
            #     # Calculate interpolated pose at current step
            #     # Assume initial_pose_rotvec and current_pose are both in [x, y, z, Rx, Ry, Rz] format
            #     interpolated_pose = [
            #         current_pose[j] + (self.initial_pose_rotvec[j] - current_pose[j]) * i / num_steps
            #         for j in range(6)
            #     ]
            #     self.ur_control.sevol_l(interpolated_pose)
            #     time.sleep(0.01)  # Short delay between interpolations to control speed

            # # Ensure the robotic arm reaches the initial position
            # self.ur_control.sevol_l(self.initial_pose_rotvec)


            # self.base_pose = [self.x, self.y, self.z, self.roll, self.pitch, self.yaw]
            # print("Teleoperation stopped")

The code continuously retrieves the current state of Pika Sense using self.sense.get_command_state(), which outputs either 0 or 1. When the program starts, bool_trigger defaults to False. On the first state reversal, bool_trigger is set to True—the tracker’s pose is set as the zero point, self.flag is set to True, and control data is sent to the robotic arm for motion control.

To stop teleoperation, double-click the Pika Sense again to reverse the state. The robotic arm will then stop at its current pose and resume from this pose in the next teleoperation session (Option 1). Option 2 allows the robotic arm to return to its initial pose after teleoperation stops and start from there in subsequent sessions. You can choose the appropriate option based on your specific needs.

Mapping Pika Pose Data to the Robotic Arm

The code for this section is as follows:
def start(self):
    self.tracker_thread.start() # Start the thread        
    # Main thread continues with other tasks
    while self.running:
        self.handle_trigger()
        self.control_gripper()
        current_pose = [self.x, self.y, self.z, self.roll, self.pitch, self.yaw]
        increment_pose = self.calc_pose_incre(self.base_pose,current_pose)

        finally_pose  = self.tools.rpy_to_rotvec(increment_pose[3], increment_pose[4], increment_pose[5])

        increment_pose[3:6] = finally_pose

        # Send pose to robotic arm
        if self.flag:
            self.ur_control.sevol_l(increment_pose)

        time.sleep(0.02) # Update at 50Hz

This section of code converts the RPY rotation data of the calculated increment_pose into a rotation vector and sends it to the robotic arm (UR robots accept XYZ coordinates and rotation vectors for control). Control data is only sent to the robotic arm when self.flag is set to True.

Practical Operation

The teleoperation code is located at: `pika_remote_ur/teleop_ur.py`
  1. Power on the UR robotic arm and enable the joint motors. If the robotic arm’s end effector is equipped with a gripper or other actuators, enter the corresponding load parameters.

  2. Configure the robotic arm’s IP address on the tablet.

  3. Configure the Tool Coordinate System.

The end effector coordinate system must be set with the Z-axis pointing forward, X-axis pointing downward, and Y-axis pointing left. In the code, we rotate the Pika coordinate system 90° counterclockwise around the Y-axis, resulting in the Pika coordinate system having the Z-axis forward, X-axis downward, and Y-axis left. Therefore, the robotic arm’s end effector (tool) coordinate system must be aligned with this configuration; otherwise, the control will malfunction.
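If you are unsure whether the frames line up, a small numerical check can help (numpy/scipy sketch; the exact adjustment angles used in the repo come from self.pika_to_arm, so the +90° rotation below is only illustrative):

import numpy as np
from scipy.spatial.transform import Rotation as R

# Rotation applied to the Pika frame (illustrative value; see self.pika_to_arm)
r_adj = R.from_euler("xyz", [0.0, np.deg2rad(90.0), 0.0]).as_matrix()

# Print where each Pika axis ends up after the adjustment
for name, axis in zip("XYZ", np.eye(3)):
    print(f"Pika {name} axis -> {np.round(r_adj @ axis, 3)}")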

  4. For first-time use, set the speed to 20-30% and enable remote control of the robotic arm.

  5. Connect the tracker to the computer via a USB cable and calibrate the tracker and base station.

Navigate to the ~/pika_ros/scripts directory and run:

bash calibration.bash 

Once positioning calibration is complete, close the program.

  6. Connect Pika Sense and Pika Gripper to the computer using USB 3.0 cables. Note: Connect Pika Sense first (it should be assigned the port /dev/ttyUSB0), then connect the Pika Gripper (which requires a 24V DC power supply, and should be assigned the port /dev/ttyUSB1).

  7. Run the code:

cd pika_remote_ur

python3 teleop_ur.py

The terminal will output numerous logs, with the most common one being:

teleop_ur - WARNING - Failed to obtain pose data for T20, retrying in the next cycle...

Once the warning above stops appearing and is replaced by:

pika.vive_tracker - INFO -  Detected new device update: T20

you can start teleoperation by double-clicking the Pika Sense.

1 post - 1 participant

Read full topic

by Agilex_Robotics on January 27, 2026 03:38 AM

January 26, 2026
Should feature-adding/deprecating changes to core repos define feature flags?

When a new feature or deprecation is added to a C++ repo, it would be useful to have an easy way of detecting whether this feature is available.

Currently, it’s possible to use has_include (if the feature added a whole new header file), or you’re left with try_compile in CMake. Or version checks, which very quickly get complicated.

Take for example Add imu & mag support in `tf2_sensor_msgs` (#800) by roncapat · Pull Request #813 · ros2/geometry2 · GitHub which added support for transforming IMU messages. If my package uses this feature and has a fallback for the releases where it’s missing, I need a reliable way for detecting the presence of the feature. I went with try_compile and it works.

However, imagine that the tf2_sensor_msgs::tf2_sensor_msgs target automatically added a compile definition like `-DTF2_SENSOR_MSGS_HAS_IMU_SUPPORT`. That would make things much easier for downstream packages.

As long as it’s feasible, I very much want to have single-branch ROS2 packages for all distros out there, and packages like these would benefit a lot.

Another example: ament_index_cpp added std::filesystem interface recently. For downstream packages that want to work with both the old and new interfaces, there are some ifdefs needed in the implementation. But it doesn’t make sense to me for each package using ament_index_cpp to do the try_compile check…

What do you think about adding such feature flags to packages? Would it be maintainable? Would there be any drawbacks?

1 post - 1 participant

Read full topic

by peci1 on January 26, 2026 04:15 PM

PlotJuggler 2026: it needs your help

Why PlotJuggler 2026

Soon it will be the 10th anniversary of my first commit in the PlotJuggler repo.

I built this to help people visualize their data faster, and almost everyone I know in the ROS community uses it (Bubble? False confirmation bias? We will never know).

What I do know is that in an era where we have impressive (and VC-backed) visualization tools, PlotJuggler is still used and loved by thousands of roboticists.

I believe the reason is that it is not just a “visualization” tool, but a debugging one; fast, nimble and effective, like vi… if you are into that.

I decided that PJ deserves better… my users do! And I have big plans to make that happen… with your help (mostly, your company’s help).

Crowdfunding: PlotJuggler’s future, shaped together

This is the reason why I am launching a crowdfunding campaign, targeted to companies, not individuals.

This worked for me quite well 3 years ago, when I partially financed the development of Groot2, the BehaviorTree.CPP editor. But this time is different: if I reach my goals, 100% of the result will be open source and truly free, forever.

This is my roadmap: PlotJuggler 2026 - Google Slides

  1. Extension Marketplace — discover and install plugins with one click.
  2. Connectors to data in the cloud — access your logs wherever they are.
  3. Connect to your robot from any computer, with or without ROS.
  4. New data transform editor — who needs Matlab?
  5. Efficient data storage for “big data”.
  6. Images and videos (at last?).
  7. PlotJuggler on the web? I want to believe. You do too.

Contact me at dfaconti@aurynrobotics.com if you want to know more.

Why you should join

  1. You or your team already uses PlotJuggler. Invest in the tool that saves you debugging hours every week.
  2. Shape the roadmap. Backers get a voice in prioritizing features that matter to your workflow.
  3. Public recognition. Your company logo in the documentation and release announcements.
  4. Be the change you want to see in the open-source world. We all like a good underdog story. Davide VS Goliath (pun intended), open-source vs closed-source (reference intended). Yes, you can make that happen.

FAQ

What if I want another feature?

Contact me and tell me more.

What if I am not interested in all these features, but only 1 or 2?

We will find a way and negotiate a contribution that is proportional to what you care about.

How much should a backer contribute?

I am not giving you an upper limit :wink: , but use €5,000 as the smallest “quantum”. This is the reason why this is targeted to companies, not individuals.

How will you use that money?

I plan to hire 1-3 full-time employees for 1 year. The more budget I can obtain, the more I can build.

“I think it is great, but I am not in charge of making this decision at my company”

Give me the email of the decision maker I need to bother, and I will do it for you!

2 posts - 2 participants

Read full topic

by facontidavide on January 26, 2026 10:57 AM

January 25, 2026
Ros2cli, meet fzf

ros2cli Interactive Selection: Fuzzy Finding

ros2cli just got a UX upgrade: fuzzy finding!

Tab completion is nice, but it still requires you to remember how the name starts. Now you can just type any part you remember and see everything that matches. Think “search” not “autocomplete”.

Tab Completion vs. Fuzzy Finding

Tab completion:


$ ros2 topic echo /rob<TAB>

# Shows: /robot/

$ ros2 topic echo /robot/<TAB><TAB><TAB>...

# Cycles through: base_controller, cmd_vel, diagnostics, joint_states...

# Was it under /robot? Or /robot/sensors? Or was it /sensors/robot?

# Start over: Ctrl+C

Fuzzy finding (new):


$ ros2 topic echo

# Type: "lidar"

# Instantly see ALL topics with "lidar" anywhere in the name:

/robot/sensors/lidar/scan

/front_lidar/points

/safety/lidar_monitor

# Pick with arrows, done!

What Works Now?

  • ros2 topic echo / info / hz / bw / type - Find topics by any part of their name

  • ros2 node info - Browse all nodes, filter as you type

  • ros2 param get - Pick node, then browse its parameters

  • ros2 run - Find packages/executables without remembering exact names

There are plenty more opportunities where we could integrate fzf, not only in more verbs of ros2cli (e.g. ros2 service) but also in other tools in the ROS ecosystem (e.g. colcon).

I’d love to see this practice propagate, but for this I need the help of the community!

Links

  • PR: #1151 (currently only available on rolling)

  • Powered by: fzf

5 posts - 3 participants

Read full topic

by tnajjar on January 25, 2026 08:02 PM

January 24, 2026
LinkForge 1.2.0: Centralized ros2_control Dashboard & Inertial Precision

Hi Everyone,

Following the initial announcement of LinkForge, I’m appreciative of the feedback. Today I’m releasing v1.2.0, focused on internal stability and better diagnostic visuals.

Key Technical Changes:

  • Centralized ros2_control Dashboard: We’ve consolidated all hardware interfaces and transmissions into a single dashboard. This makes managing complex actuators much faster and prevents property-hunting across panels.

  • Inertial Origins & CoM Editing: We’ve exposed the inertial origin in the UI and added a GPU-based overlay showing a persistent Center of Mass sphere. This allows for manual fine-tuning and immediate visual verification of your physics model directly in the 3D viewport.

  • Hexagonal Architecture: The core logic is now decoupled from the Blender API, making the codebase more testable (now with near-full core coverage) and future-proof.

We also fixed several bugs related to Xacro generation and mesh cloning for export robustness. Getting the physics right in the editor is the best way to prevent “exploding robots” in simulation.

:hammer_and_wrench: Download (Blender Extensions): LinkForge — Blender Extensions
:open_book: Documentation: https://linkforge.readthedocs.io/
:laptop: Source Code: GitHub - arounamounchili/linkforge: Build simulation-ready robots in Blender. Professional URDF/XACRO exporter with validation, sensors, and ros2_control support.

Feedback on the new dashboard workflow is very welcome!

1 post - 1 participant

Read full topic

by arounamounchili on January 24, 2026 10:34 PM

January 21, 2026
Native ROS 2 Jazzy Debian packages for Raspberry Pi OS / Debian Trixie (arm64)

After spending some time trying to get ROS 2 Jazzy working reliably on Raspberry Pi via Docker and Conda (and losing several rounds to OpenGL, Gazebo plugins, and cross-arch issues), I eventually concluded:

On Raspberry Pi, ROS really only behaves when it’s installed natively.

So I built the full ROS 2 Jazzy stack as native Debian packages for Raspberry Pi OS / Debian Trixie (arm64), using a reproducible build pipeline:

  • bloom → dpkg-buildpackage → sbuild → reprepro

  • signed packages

  • rosdep-compatible

The result:

  • Native ROS 2 Jazzy on Pi OS / Debian Trixie

  • Uses system Mesa / OpenGL

  • Gazebo plugins load correctly

  • Cameras, udev, and ros2_control behave

  • Installable via plain apt

Public APT repository

:backhand_index_pointing_right: GitHub - rospian/rospian-repo: ROS2 Jazzy on Raspberry OS Trixie debian repo

Build farm (if you want to reproduce or extend it)

:backhand_index_pointing_right: GitHub - rospian/rospian-buildfarm: ROS 2 Jazzy debs for Raspberry Pi OS Trixie with full Debian tooling

Includes the full mini build-farm pipeline.

This was motivated mainly by reliability on embedded systems and multi-machine setups (Gazebo on desktop, control on Pi).

Feedback, testing, or suggestions very welcome.

2 posts - 2 participants

Read full topic

by ebodev on January 21, 2026 03:58 PM

Ros2_yolos_cpp High-Performance ROS 2 Wrappers for YOLOs-CPP [All models + All tasks]

Hi everyone! :waving_hand:

I’m the author of ros2_yolos_cpp and YOLOs-CPP. I’m excited to share the first public release of this ROS 2 package!

:link: Repository: ros2_yolos_cpp


:brain: What Is ros2_yolos_cpp?

ros2_yolos_cpp is a production-ready ROS 2 interface for the YOLOs-CPP inference engine — a high-performance, unified C++ library for YOLO models (v5 through v12 and YOLO26) built on ONNX Runtime and OpenCV.

This package provides composable and lifecycle-managed ROS 2 nodes for real-time:

  • Object Detection
  • Instance Segmentation
  • Pose Estimation
  • Oriented Bounding Boxes (OBB)
  • Image Classification

All powered through ONNX models and optimized for both CPU and GPU inference.


:gear: Key Features

  • :check_mark: ROS 2 Lifecycle Nodes
    Full support for ROS 2 managed node lifecycle (configure, activate, etc.)

  • :check_mark: Composable Nodes
    Efficient multi-model, multi-node setups in a single process

  • :check_mark: Zero-Copy Image Transport
    Optimized subscription for high-throughput video pipelines

  • :check_mark: All Major Vision Tasks
    Detection, segmentation, pose, OBB, and classification in one stack

  • :check_mark: Standardized ROS 2 Messages
    Uses vision_msgs and custom OBB types for interoperability

  • :check_mark: Production-Ready
    CI/CD workflows, strict parameters, and reusable launch configurations

Regards,

1 post - 1 participant

Read full topic

by Geekgineer on January 21, 2026 03:58 PM

The Canonical Observability Stack with Guillaume Beuzeboc | Cloud Robotics WG Meeting 2026-01-28

For this coming session on Wed, Jan 28, 2026, 4:00–5:00 PM UTC, the CRWG has invited Guillaume Beuzeboc from Canonical to present on the Canonical Observability Stack (COS). COS is a general observability stack for devices such as drones, robots, and IoT devices. It operates from telemetry data, and the COS team has extended it to support robot-specific use cases. Guillaume, a software engineer at Canonical, previously presented COS at ROSCon 2025 and has kindly agreed to join this meeting to discuss additional technical details with the CRWG.

At the previous meeting, the CRWG continued its review of the ROSCon 2025 talks, focusing on identifying the sessions most relevant to Logging and Observability. A blog post summarizing our findings will be published in the coming weeks. If you would like to watch the latest review meeting, it is available on YouTube.

The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

1 post - 1 participant

Read full topic

by mikelikesrobots on January 21, 2026 03:52 PM

Deployment and Implementation of RDA_planner

We reproduce the RDA Planner project from the IEEE paper RDA: An Accelerated Collision-Free Motion Planner for Autonomous Navigation in Cluttered Environments. We provide a step-by-step guide to help you quickly reproduce the RDA path planning algorithm in this paper, enabling efficient obstacle avoidance for autonomous navigation in complex environments.

Abstract

RDA Planner is a high-performance, optimization-based Model Predictive Control (MPC) motion planner designed for autonomous navigation in complex and cluttered environments. By leveraging the Alternating Direction Method of Multipliers (ADMM), RDA decomposes complex optimization problems into several simple subproblems.

This project is an open-source deployment of the RDA_ROS autonomous navigation project, which was proposed by researchers from the University of Hong Kong, Southern University of Science and Technology, University of Macau, Shenzhen Institutes of Advanced Technology of the Chinese Academy of Sciences, and Hong Kong University of Science and Technology (Guangzhou). It is developed based on the AgileX Limo simulator. Relevant papers have been published in IEEE Robotics and Automation Letters and IEEE Transactions on Mechatronics.

RDA planner: GitHub - hanruihua/RDA-planner: [RA-Letter 2023] RDA: An Accelerated Collision Free Motion Planner for Autonomous Navigation in Cluttered Environments
RDA_ROS: GitHub - hanruihua/rda_ros: ROS Wrapper of RDA planner

Tags

limo, RDA_planner, path planning

Repositories

Environment Requirements

System: Ubuntu 20.04

ROS Version: Noetic

Python Version: 3.9

Deployment Process

1. Download and Install Conda

Download Link

Choose Anaconda or Miniconda based on your system storage capacity

After downloading, run the following commands to install:

  • Miniconda:

    bash Miniconda3-latest-Linux-x86_64.sh
    
  • Anaconda:

    bash Anaconda-latest-Linux-x86_64.sh
    

2. Create and Activate Conda Environment

conda create -n rda python=3.9
conda activate rda

3. Download RDA_planner

mkdir -p ~/rda_ws/src
cd ~/rda_ws/src
git clone https://github.com/hanruihua/RDA_planner
cd RDA_planner
pip install -e .  

4. Download Simulator

pip install ir-sim

5. Run Examples in RDA_planner

cd RDA_planner/example/lidar_nav
python lidar_path_track_diff.py

The running effect is consistent with the official README.

Deployment Process of rda_ros

1. Install Dependencies in Conda Environment

conda activate rda
sudo apt install python3-empy
sudo apt install ros-noetic-costmap-converter
pip install empy==3.3.4
pip install rospkg
pip install catkin_pkg

2. Download Code

cd ~/rda_ws/src
git clone https://github.com/hanruihua/rda_ros
cd ~/rda_ws && catkin_make
cd ~/rda_ws/src/rda_ros 
sh source_setup.sh && source ~/rda_ws/devel/setup.sh && rosdep install rda_ros 

3. Download Simulation Components

This step will download two repositories: limo_ros and rvo_ros

limo_ros: Robot model for simulation

rvo_ros: Cylindrical obstacles used in the simulation environment

cd rda_ros/example/dynamic_collision_avoidance
sh gazebo_example_setup.sh

4. Run Gazebo Simulation

Run via Script

cd rda_ros/example/dynamic_collision_avoidance
sh run_rda_gazebo_scan.sh

Run via Individual Commands

Launch the simulation environment:

roslaunch rda_ros gazebo_limo_env10.launch

Launch RDA_planner:

roslaunch rda_ros rda_gazebo_limo_scan.launch

1 post - 1 participant

Read full topic

by Agilex_Robotics on January 21, 2026 08:12 AM

January 19, 2026
iRobot's ROS benchmarking suite now available!

We’ve just open-sourced our ROS benchmarking suite! Built on top of iRobot’s ros2-performance framework, this is a containerized environment for simulating arbitrary ROS2 systems and graph configurations both simple and complex, comparing the performance of various RMW implementations, and identifying performance issues and bottlenecks.

  • Support for jazzy, kilted and rolling
  • Fully containerized, with experimental support for ARM64 builds through docker bake
  • Container includes fastdds, cyclonedds and zenoh out of the box.
  • In-depth statistical analysis / performance graphs, wrapped up in a pretty PDF report (example attached, 3.8 MB).

Are you building a custom RMW or ROS executor not included in this tooling, and want to compare against the existing implementations? We provide instructions and examples for how to add them to this suite.

Huge shoutout to Leonardo Neumarkt Fernandez for owning and driving the development of this benchmarking suite!

Check it out here: GitHub - irobot-ros/ros2-benchmark-container: A Dockerized performance benchmarking suite for ROS 2 that automates testing, comparative analysis, and report generation across multiple RMW implementations and system topologies.

4 posts - 4 participants

Read full topic

by skye.galaxy on January 19, 2026 10:45 PM

Can anyone recommend a C++ GUI framework where I can embed or integrate a 3D engine?

I know that Qt gives an opportunity to do it natively with Qt3D, but I didn’t find any examples demonstrating that I can rotate and view models in Qt3D with the mouse. There are also a lot of 3D engines which provide integration with Qt. They are listed here. But I don’t want to try each of them; maybe someone already knows which one is suitable for me.

I am using C++ for everything, so it is better to use C++ for easier integration, but Rust and Python are also acceptable.

I am a big fan of Open3D, so if somebody knows how to integrate it with some GUI framework, I will be glad.

3 posts - 2 participants

Read full topic

by vdovetzi on January 19, 2026 09:43 AM

January 18, 2026
Announcement: rclrs 0.7.0 Release

We’re happy to announce the release of rclrs v0.7.0!

Just like v0.6.0 landed right before ROSCon in Singapore, this release is arriving just in time for FOSDEM at the end of the month. Welcome to Conference-Driven Development (CDD)!

If you’re attending FOSDEM, come check out my talk on ros2-rust in the Robotics & Simulation devroom.

What’s New

Dynamic Messages

This release adds support for dynamic message publishers and subscribers. You can now work with ROS 2 topics without compile-time knowledge of message types, enabling tools like rosbag recorders, topic inspection utilities, and message bridges to be written entirely in Rust.

Best Available QoS

Added support for best available QoS profiles. Applications can now automatically negotiate quality of service settings when connecting to existing publishers or subscribers.

Other Changes

  • Fixed mismatched lifetime syntax warnings

  • Fixed duplicate typesupport extern declarations

Breaking Changes

  • Minimum Rust version is now 1.85

For the next release, we are planning to switch to Rust 2024, but wanted to give enough notice.

Contributors

A huge thank you to everyone who contributed to this release! Your contributions make ros2-rust better for the entire community.

  • Esteve Fernández
  • Geoff Sokoll
  • Jacob Hassold
  • Kimberly N. McGuire
  • Luca Della Vedova
  • Michael X. Grey
  • Nikolai Morin
  • Sam Privett

Links

As always, we welcome feedback and contributions!

1 post - 1 participant

Read full topic

by esteve on January 18, 2026 08:24 PM

LinkForge: Robot modeling does not have to be complicated

I recorded a short video to show how easy it is to build a simple mobile robot with LinkForge, a Blender extension designed to bridge the gap between 3D modeling and robotics simulation.

All in a few straightforward steps.

LinkForge: Robot modeling does not have to be complicated.

The goal is simple: remove friction from robot modeling so engineers can focus on simulation, control, and behavior, not file formats and repetitive setup.

If you are working with ROS or robot simulation and want a faster, cleaner workflow, this is worth a look.

Blender Extensions: https://extensions.blender.org/add-ons/linkforge/

GitHub: https://github.com/arounamounchili/linkforge

Documentation: https://linkforge.readthedocs.io/

1 post - 1 participant

Read full topic

by arounamounchili on January 18, 2026 02:59 AM

January 16, 2026
First of 2026 ROS-I Developers' Meeting Looks at Upcoming Releases and Collaboration

The ROS-Industrial Developers’ Meeting provided updates on open-source robotics tools, with a focus on advancements in Tesseract, support for developers still using MoveIt2, and Trajopt. These updates underscore the global push to innovate motion planning, perception, and tooling systems for industrial automation. Key developments revolved around stabilizing existing frameworks, improving performance, and leveraging modern technologies like GPUs for acceleration.

The Tesseract project, designed to address traditional motion planning tools' limitations, is moving steadily toward a 1.0 release. With about half of the work complete, remaining tasks include API polishing, unit test enhancements, and transitioning the motion planning pipeline to a plugin-based architecture. Tesseract is also integrating improved collision checkers and tools like the Task Composer, which supports modular backends, making it more adaptable for high-complexity manufacturing tasks.

On the MoveIt 2 front, ongoing community support will be critical as the prior support team shifts to supporting the commercial MoveItPro. To ensure Tesseract maintainability, updates include the migration of documentation directly into repositories via GitHub. This step simplifies synchronization between code and documentation, helping developers maintain robust, open-source solutions. There are plans to provide migration tutorials for those wanting to investigate Tesseract if MoveIt2 is not meeting their development needs and they are not ready to move to MoveItPro. The ability to utilize MoveIt2 components within Tesseract is also being investigated.

Trajopt, another critical component of the Tesseract ecosystem, is undergoing a rewrite to better handle complex trajectories and cost constraints. The new version, expected within weeks, will enable better time parameterization and overall performance improvements. Discussions also explored GPU acceleration, focusing on opportunities to optimize constraint and cost calculations using emerging GPU libraries, though some modifications will be needed to fully realize this potential.

Toolpath optimization also gained attention, with updates on the noether repository, which supports industrial toolpath generation and reconstruction. While still a work in progress, noether is set to play a pivotal role in enabling advanced workflows once the planned updates are implemented.

As the meeting concluded, contributors emphasized the importance of community engagement to further modernize and refine these tools. Upcoming events across Europe and Asia will foster collaboration and showcase advancements in the ROS-Industrial ecosystem. This collective effort promises to drive a smarter, more adaptable industrial automation landscape, ensuring open-source solutions stay at the forefront of global manufacturing innovation.

The next Developers' Meeting is slated to be hosted by the ROS-I Consortium EU. You can find all the info for Developers' Meetings over at the Developer Meeting page.

by Matthew Robinson on January 16, 2026 07:52 PM

Simple status webpage for a robot in localhost?

Hi, I’m just collecting info on how you’re building simple status pages that run locally on robots and show basic information like battery status, driver status, sensor health etc. But nothing fancy like camera streaming, teleoperation and such. No cloud, everything local!

The use-case is just being able to quickly connect to a robot AP and see the status of important things. This can of course be done via rqt or remote desktop, but a status webpage is much more accessible from phones, tablets etc.

I’ve seen statically generated pages with autoreload (easiest to implement, but very custom).

I guess some people have something on top of rosbridge/RobotWebTools, right? But I haven’t found much info about this.

Introducing Robotics UI: A Web Interface Solution for ROS 2 Robots - sciota robotics seemed interesting, but it never got past 8 commits…

So what do you use?

Is there some automatic /diagnostics_agg → HTML+JS+WS framework? :slight_smile: And no, I don’t count Foxglove, because self-hosted costs… who knows what :slight_smile:

12 posts - 6 participants

Read full topic

by peci1 on January 16, 2026 03:48 PM

January 15, 2026
Tbai - towards better athletic intelligence

Introducing tbai, a framework designed to democratize robotics and embodied AI and to help us move towards better athletic intelligence.

Drawing inspiration from Hugging Face (more specifically lerobot :hugs:), tbai implements and makes fully open-source countless state-of-the-art methods for controlling various sorts of robots, including quadrupeds, humanoids, and industrial robotic arms.

With its well-established API and levels of abstraction, users can easily add new controllers while reusing the rest of the infrastructure, including utilities for time synchronization, visualization, config interaction, and state estimation, to name a few.

Everything is built out of lego-like components that can be seamlessly combined into a single, high-performing robot controller pipeline. Its wide pool of already implemented state-of-the-art controllers (many from Robotic Systems Lab), state estimators, and robot interfaces, together with simulation or real-robot deployment abstractions, allows anyone using tbai to easily start playing around and working on novel methods, using the existing framework as a baseline, or to change one component while keeping the rest, thus accelerating the iteration cycle.

No more starting from scratch, no more boilerplate code. Tbai takes care of all of that.

Tbai seeks to support as many robotic platforms as possible. Currently, there are nine robots that have at least one demo prepared, with many more to come. Specifically, we have controllers readily available for ANYmal B, ANYmal C, and ANYmal D from ANYbotics; Go2, Go2W, and G1 from Unitree Robotics; Franka Emika from Franka Robotics; and finally, Spot and Spot with arm from Boston Dynamics.

Tbai is an ongoing project that will continue making strides towards democratizing robotics and embodied AI. If you are a researcher or a tinkerer who is building cool controllers for a robot, be it an already supported robot or a completely new one, please do consider contributing to tbai so that as many people can benefit from your work as possible.

Finally, a huge thanks goes to all researchers and tinkerers who do robotics and publish papers together with their code for other people to learn from. Tbai would not be where it is now if it weren’t for the countless open-source projects it has drawn inspiration from. I hope tbai becomes an inspiration for other projects too.

Thank you all!

Link: https://github.com/tbai-lab/tbai

Link: https://github.com/tbai-lab/tbai_ros

3 posts - 2 participants

Read full topic

by lnotspotl on January 15, 2026 08:58 AM

January 14, 2026
[Humble] Upcoming behavior change: Improved log file flushing in rcl_logging_spdlog

Summary

The ROS PMC has approved backporting an improved log file flushing behavior to ROS 2 Humble. This change will be included in an upcoming Humble sync and affects how rcl_logging_spdlog flushes log data to the filesystem.

What’s Changing?

Previously, rcl_logging_spdlog did not explicitly configure flushing behavior, which could result in:

  • Missing log messages when an application crashes
  • Empty or incomplete log files during debugging sessions

After this update, the logging behavior will:

  • Flush log files every 5 seconds (periodic flush)
  • Immediately flush on ERROR level messages (flush on error)

This provides a much better debugging experience, especially when investigating crashes or unexpected application terminations.

Compatibility

  • :white_check_mark: API/ABI compatible — No rebuild of your packages is required
  • :warning: Behavior change — Log files will be flushed more frequently

How to Revert to the Old Behavior

If you need to restore the previous flushing behavior (no explicit flushing), you can set the following environment variable:

export RCL_LOGGING_SPDLOG_EXPERIMENTAL_OLD_FLUSHING_BEHAVIOR=1

Note: This environment variable is marked as EXPERIMENTAL and is intended as a temporary measure. It may be removed in future ROS 2 releases when full logging configuration file support is implemented. Please do not rely on this variable being available in future versions.

Related Links

Questions or Concerns?

If you experience any issues with this change or have feedback, please:

Thanks,
Tomoya

2 posts - 2 participants

Read full topic

by tomoyafujita on January 14, 2026 02:49 AM

January 13, 2026
Guidance on next steps after ROS 2 Jazzy fundamentals for a hospitality robot project

I’m keenly working on a hospitality robot project driven by personal interest and a genuine enthusiasm for robotics, and I’m seeking guidance on what to focus on next.

I currently have a solid grasp of ROS 2 Jazzy fundamentals, including nodes, topics, services, actions, lifecycle nodes, URDF/Xacro, launch files, and executors. I’m comfortable bringing up a robot model and understanding how the ROS 2 system fits together.

My aim is to build a simulation-first MVP for a lobby scenario (greeter, wayfinding, and escort use cases). I’m deliberately keeping the scope practical and do not plan to add arms initially unless they become necessary.

At this stage, I would really value direction from more experienced practitioners on how to progress from foundational ROS knowledge toward a real, working robot.

In particular, I’d appreciate insights on:

  • What are the most important areas to focus on after mastering ROS 2 basics?

  • Which subsystems are best tackled first, and in what sequence?

  • What level of completeness is typically expected in simulation before transitioning to physical hardware?

  • Are there recommended ROS 2 packages, example bringups, or architectural patterns well suited for this type of robot?

Any advice, lessons learned, or references that could help shape the next phase of development would be greatly appreciated.

1 post - 1 participant

Read full topic

by robo_tbt_ua on January 13, 2026 04:42 PM

[Announcing] LinkForge: A Native Blender Extension for Visual URDF/Xacro Editing (ROS 2 Support)

Hi everyone,

I’d like to share a tool I’ve been working on: LinkForge. It was just approved on the Blender Extensions Platform (v1.1.1).

The Problem

We all know the workflow: export meshes from CAD, write URDFs by hand, guess inertia tensors, launch Gazebo, realize a link is rotated 90 degrees, kill Gazebo, edit XML, repeat. It separates the “design” from the “engineering.”

The Solution

LinkForge allows you to rig, configure, and export simulation-ready robots directly inside Blender. It is not just a mesh exporter; it manages the entire URDF/Xacro structure.

Key Features for Roboticists:

  • Visual Editor: Import/Export URDF & Xacro files seamlessly
  • Physics: Auto-calculates mass & inertia tensors
  • ROS2 Control Support: Automatically generates hardware interface configurations for ros2_control
  • Complete Sensor Suite: Integrated support for Camera, Depth Camera, LiDAR, IMU, GPS, and Force/Torque sensors with configurable noise models
  • Xacro Support: Preserves macros and properties where possible.

Workflow

  1. Import your existing .urdf or .xacro.
  2. Edit joints and limits visually in the viewport.
  3. Add collision geometry (convex hulls/primitives).
  4. Export valid XML.

Links

This is an open-source project. I’m actively looking for feedback on the “Round-trip” capability and Xacro support.

Happy forging!

4 posts - 3 participants

Read full topic

by arounamounchili on January 13, 2026 04:42 PM

January 12, 2026
Update on ROS native buffers

Hello ROS community,

as you may have heard, NVIDIA has been working on proposing and prototyping a mechanism to add support for native buffer types to ROS2, allowing ROS2 to work natively and efficiently with accelerated buffers like CUDA or Torch tensors through its APIs. We had briefly touched on this in a previous discourse post. Since then, a lot of design discussion in the SIG PAI, as well as prototyping on our side, has happened to turn that outline into a full-fledged proposal and prototype.

Below is a rundown of our current status, as well as an outlook of where the work is heading. We are looking forward to discussions and feedback on the proposal.

Native Buffers in ROS 2

Problem statement

Modern robots use advanced, high-resolution sensors to perceive their environment. Whether it’s cameras, LIDARs, time-of-flight sensors or tactile sensor arrays, data rates to be processed are ever-increasing.

Processing of those data streams has for the most part moved onto accelerated hardware that can exploit the parallel nature of the data. Whether that is GPUs, DSPs, NPUs/TPUs, ASICs or other approaches, those hardware engines have some common properties:

  • They are inherently parallel, and as such well suited to processing many small samples at the same time
  • They are dedicated hardware with dedicated interfaces and often dedicated memory

The second property of dedicated memory regions is problematic in ROS2, as the framework currently does not have a way to handle non-CPU memory.

Consider for example the sensor_msgs/PointCloud2 message, which stores data like this:

uint8[] data         # Actual point data, size is (row_step*height)

A similar approach is used by sensor_msgs/Image. In rclcpp, this will map to a member like

std::vector<uint8_t> data;

This is problematic for large pieces of data that are never going to be touched by the CPU. It forces the data to be present in CPU memory whenever the framework handles it, in particular for message transport, and every time it crosses a node boundary.

For truly efficient, fully accelerated pipelines, this is undesirable. In cases where there are one or more hardware engines handling the data, it is preferable for the data to stay resident in the accelerator, and never be copied into CPU memory unless a node specifically requests to do so.

We are therefore proposing to add the notion of pluggable memory backends to ROS2 by introducing a concept of buffers that share a common API, but are implemented with vendor-specific plugins to allow efficient storage and transport with vendor-native, optimized facilities.

Specifically, we are proposing to map uint8[] in rosidl to a custom buffer type in rclcpp that behaves like a std::vector<uint8_t> if used from CPU code, but otherwise automatically keeps the data resident in the vendor’s accelerator memory. This buffer type is also integrated with rmw to allow the backend to move the buffer between nodes using vendor-specific side channels, allowing for transparent zero-copy transport of the data if implemented by the vendor.

Architecture overview

Message encoding

The following diagram shows the overview of a message containing a uint8[] array, and how it is mapped to C++, and then serialized:

It shows the following parts, which we will discuss in more detail later:

  • Declaration of a buffer using uint8[] in a message definition as before
  • Mapping onto a custom buffer type in rclcpp, called Buffer<T> here
  • The internals of the Buffer<T> type, in particular its std::vector<T>-compatible interface, as well as a pointer to a vendor-specific implementation
  • A vendor-specific backend providing serialization, as well as custom APIs

  • The message being encoded into a vendor-specific buffer descriptor message, which is serialized in place of the raw byte array in the message

Choice of uint8[] as trigger

It is worth noting the choice to utilize uint8[] as a trigger to generate Buffer<T> instances. An alternative approach would have been to add a new Buffer type to the IDL, and to translate that into Buffer<T>. However, this would not only introduce a break in compatibility of the IDL, but also force the introduction of a sensor_msgs/PointCloud3 and similar data types, fracturing the message ecosystem further.

We believe the cost of maintaining a std::vector compatible interface and the slight loss of semantics is outweighed by the benefit of being drop-in compatible with both existing messages and existing code bases.

Integration with rclcpp (and rclpy and rclrs)

rclcpp exposes all uint8[] fields as rosidl_runtime_cpp::Buffer<T> members in their respective generated C++ structs.

rosidl_runtime_cpp::Buffer<T> has a fully compatible interface to std::vector<T>, such as size(), operator[](size_type pos), etc. If any of the std::vector<T> APIs are used, the data is copied onto the CPU as necessary, and all members work as expected. This maintains full compatibility with existing code: any code that expects a std::vector<T> in the message can use the corresponding fields as such without any code changes.
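To make that compatibility story more concrete, below is a minimal sketch of what such a buffer type could look like, assuming a lazy copy-to-CPU strategy. All names except std::vector are illustrative and follow the description above; they are not an existing API.

#include <cstddef>
#include <memory>
#include <vector>

// Opaque interface a vendor backend (CPU, CUDA, ROCm, ...) would implement.
// Purely illustrative; these names do not correspond to an existing API.
template<typename T>
class BufferBackendImpl
{
public:
  virtual ~BufferBackendImpl() = default;
  virtual std::size_t size() const = 0;
  // Copy the accelerator-resident payload into CPU memory on demand.
  virtual std::vector<T> to_cpu() const = 0;
};

template<typename T>
class Buffer
{
public:
  // std::vector-compatible surface, so existing code keeps compiling.
  std::size_t size() const { return impl_ ? impl_->size() : cpu_data_.size(); }
  T & operator[](std::size_t pos) { return cpu_view()[pos]; }
  T * data() { return cpu_view().data(); }

private:
  // Materialize a CPU copy only when a std::vector-style API is actually used.
  std::vector<T> & cpu_view()
  {
    if (impl_ && cpu_data_.empty()) {
      cpu_data_ = impl_->to_cpu();
    }
    return cpu_data_;
  }

  std::vector<T> cpu_data_;                     // CPU-resident copy, if any
  std::shared_ptr<BufferBackendImpl<T>> impl_;  // vendor-specific backend, if any
};

The key design point is that the accelerator-resident path never pays for the CPU copy: only code that actually calls a std::vector-style member triggers the transfer.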

In order to access the underlying hardware buffers, the vendor-specific APIs are being used. Suppose a vendor backend named vendor_buffer_backend exists, then the backend would usually contain a static method to convert a buffer to the native type. Our hypothetical vendor backend may then be used as follows:

void topic_callback(const msg::MessageWithTensor & input_msg) {
  // extract the native buffer handle from the incoming message
  vendor_native_handle input_h = vendor_buffer_backend::from_buffer(input_msg.data);

  // allocate the output message through the vendor backend
  msg::MessageWithTensor output_msg =
    vendor_buffer_backend::allocate<msg::MessageWithTensor>();

  // get the native handle backing the output buffer
  vendor_native_handle output_h =
    vendor_buffer_backend::from_buffer(output_msg.data);

  // perform a vendor-native operation; the result lands in the output buffer
  output_h = input_h.some_operation();

  // publish as usual; rmw handles vendor-specific serialization
  publisher_.publish(output_msg);
}

This code snippet does the following:

First, it extracts the native buffer handle from the message using a static method provided by the vendor backend. Vendors are free to expose this conversion however they choose, but are encouraged to provide a static method interface for ease of use.

It then allocates the output message to be published using another vendor-specific interface. Note that this allocation creates an empty buffer: it only sets up the relationship between output_msg.data and the vendor_buffer_backend by creating an instance of the backend buffer and registering it in the impl field of the rosidl_runtime_cpp::Buffer<T> class.

The native handle from the output message is also extracted, so it can be used with the native interfaces provided.

Afterwards, it performs some native operations on the input data, and assigns the result of that operation to the output data. Note that this is happening on the vendor native data types, but since the handles are linked to the buffers, the results show up in the output message without additional code.

Finally, the output message is published the same as any other ROS2 message. rmw then takes care of vendor-specific serialization, see the following sections on details of that process.

This design keeps any vendor-specific code completely out of rclcpp. All that rclcpp sees and links against is the generic rosidl_runtime_cpp::Buffer<T> class, which has no direct ties to any specific vendor. Hence there is no need for rclcpp to even know about all vendor backends that exist.

It also allows vendors to provide specific interfaces for their respective platforms, allowing them to implement allocation and handling schemes particular to their underlying systems.

A similar type would exist for rclpy and rclrs. We anticipate both of these to be easier to implement thanks to the duck-typing facilities in rclpy and the trait-based object system in rclrs, respectively, which make it much easier to build drop-in compatible types.

Backends as plugins

Backends are implemented as plugins using ROS’s pluginlib. On startup, each rmw instance scans for available backend-compatible plugins on the system, and registers them through pluginlib.

A standard implementation of a backend using CPU memory to offer std::vector<T> compatibility is provided by default through the ROS2 distribution, to ensure that there is always a CPU implementation available.

Additional vendor-specific plugins are implemented by the respective hardware vendors. For example, NVIDIA would implement and provide a CUDA backend, while AMD might implement and provide a ROCm backend.

Backends can either be distributed as individual packages, or be pre-installed on the target hardware. As an example, the NVIDIA Jetson systems would likely have a CUDA backend pre-installed as part of their system image.

Instances of rosidl_runtime_cpp::Buffer<T> are tied to a particular backend at allocation time, as illustrated in the section above.
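As a rough sketch of how such a plugin could be registered, assuming a hypothetical rosidl_runtime_cpp::BufferBackend base class; the base class, its header and the package name are assumptions taken from the description above, and only the pluginlib macro itself is an existing API:

// cpu_buffer_backend.cpp - sketch of the default CPU backend as a plugin.
#include <pluginlib/class_list_macros.hpp>

#include "rosidl_runtime_cpp/buffer_backend.hpp"  // hypothetical base class header

namespace cpu_buffer_backend
{

class CpuBufferBackend : public rosidl_runtime_cpp::BufferBackend
{
  // Plain heap allocation keeps the payload in CPU memory, providing the
  // std::vector-compatible fallback described above.
  // ... allocation, serialization and deserialization overrides go here ...
};

}  // namespace cpu_buffer_backend

// Register the backend so that rmw can discover it at startup via pluginlib.
PLUGINLIB_EXPORT_CLASS(cpu_buffer_backend::CpuBufferBackend,
  rosidl_runtime_cpp::BufferBackend)

As with any pluginlib plugin, a plugin description XML and the corresponding export in package.xml would make the backend discoverable at startup.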

Integration with rmw

rmw implementations can choose to integrate with vendor backends to provide accelerated transports. Implementations that do not integrate with backends, including any existing legacy rmw implementations, automatically fall back to converting all data to CPU memory and continue working without any changes.

An rmw implementation that chooses to integrate with vendor backends does the following: at graph startup, when publishers and subscribers are created, each endpoint shares its list of installed backends alongside the vendor-specific data needed to establish any required side channels, and sets up dedicated channels for passing backend-enabled messages based on four data points:

  • The message type for determining if it contains any buffer-typed fields
  • The list of backends supported by the current endpoint
  • The list of backends supported by the associated endpoint on the other side
  • The distance between the two endpoints (same process, different process, across a network etc.)

rmw can choose any mechanism it wants to perform this task, since this step is happening entirely internal to the currently loaded rmw implementation. Side channel creation is entirely hidden inside the vendor plugins, and not visible to rmw.

When publishing a message type that contains buffer-typed fields, if the publisher and the subscriber(s) share a supported backend, and that backend implements a serialization method matching the distance to the subscriber(s), the backend can serialize the buffer with a custom method rather than bytewise.

The backend is then free to serialize into a ROS message type of its choice. This backend-custom message type is called a descriptor. It should contain all information the backend needs to deserialize the message at the subscriber side, and reconstruct the buffer. This descriptor message may contain pointer values, virtual memory handles, IPC handles or even the raw payload if the backend chooses to send that data through rmw.

The descriptor message can be inspected as usual if desired since it is just a normal ROS2 message, but deserializing requires the matching backend. However, since the publisher knows the backends available to the subscriber(s), it is guaranteed that a subscriber only receives a descriptor message if it is able to deserialize it.
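For illustration, the generated C++ struct for such a descriptor could carry fields along these lines. The message, its fields and the 64-byte handle size (sized to fit, e.g., a CUDA IPC memory handle) are a hypothetical example of one possible backend, not part of the proposal:

// Sketch of a hypothetical descriptor message, shown here as its generated
// C++ struct. It is an ordinary ROS2 message and can be introspected, but
// only the matching backend can reconstruct the buffer from it.
#include <array>
#include <cstdint>
#include <vector>

namespace vendor_buffer_backend { namespace msg {

struct BufferDescriptor
{
  uint64_t size_bytes;                  // payload size in bytes
  int32_t device_id;                    // accelerator that holds the data
  std::array<uint8_t, 64> ipc_handle;   // e.g. a CUDA IPC memory handle
  std::vector<uint8_t> inline_payload;  // raw bytes, if the backend chooses
                                        // to ship the payload through rmw
};

}}  // namespace vendor_buffer_backend::msg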

Integration with rosidl

While the above sections show the implications visible in rclcpp, most of the changes needed to make this happen go into rosidl. It is rosidl that generates the C++ message structures, and hence rosidl that maps uint8[] to the Buffer type instead of std::vector; the bulk of the implementation work therefore lands in rosidl, not in rclcpp.

Layering semantics on top

Having only a buffer is not very useful, as most robotics data has higher-level semantics, like images, tensors, point clouds, etc.

However, all of those data types ultimately map to one or more large, contiguous regions of memory, in CPU or accelerator memory.

We also observe that a healthy ecosystem of higher-level abstractions already exists: PCL for point clouds, Torch for tensor handling, and so on. Hence, we propose to not try to replicate those ecosystems in ROS, but instead allow those ecosystems to bridge into ROS and use the buffer abstraction as their backend for storage and transport.

As a demonstration of this, we are providing a Torch backend that allows linking (Py)Torch tensors to the ROS buffers. This allows users to use the rich ecosystem of Torch to perform tensor operations, while relying on the ROS buffers to provide accelerator-native storage and zero-copy transport between nodes, even across processes and chips if supported by the backend.

The Torch backend does not provide a raw buffer type itself, but relies on vendors implementing backends for their platforms (CUDA, ROCm, TPUs, etc.). The Torch backend then depends on the vendor-specific backends and provides the binding of the low-level buffers to the Torch tensors. The coupling between the Torch backend and the hardware vendor buffer types is loose: it is not visible from the node’s code, but is established after the fact.

From a developer’s perspective, all of this is hidden. All a developer writing a Node does is interact with a Torch tensor, and it maps to the correct backend available on the current hardware automatically. An example of such code could look like this:

void topic_callback(const msg::MessageWithTensor & input_msg) {
  // extract tensor from input message
  torch::Tensor input_tensor =
    torch_backend::from_buffer(input_msg.tensor);

  // allocate output message
  msg::MessageWithTensor output_msg =
    torch_backend::allocate<msg::MessageWithTensor>();

  // get handle to the allocated output tensor
  torch::Tensor & output_tensor =
    torch_backend::from_buffer(output_msg.tensor);

  // perform some torch operations
  output_tensor = torch::abs(input_tensor);

  // publish message as usual
  publisher_.publish(output_msg);
}

Note how this code segment uses Torch-native datatypes (torch::Tensor) and performs Torch-native operations on the tensors (in this case, torch::abs). There is no mention of any hardware backend in the code.

By keeping the coupling loose, this node can run unmodified on NVIDIA, AMD, TPU or even CPU hardware, with the framework (in this case Torch) mapped to the correct hardware and receiving locally available acceleration for free.

Prior work

NITROS

https://docs.nvidia.com/learning/physical-ai/getting-started-with-isaac-ros/latest/an-introduction-to-ai-based-robot-development-with-isaac-ros/05-what-is-nitros.html

NITROS is NVIDIA’s implementation of a similar design based on Type Negotiation. It is specific to NVIDIA and not broadly compatible, nor is it currently possible to layer hardware-agnostic frameworks like Torch on top.

AgnoCast

https://github.com/tier4/agnocast

AgnoCast creates a zero-copy regime for CPU data. However, it is limited to CPU data, and does not have a plugin architecture for accelerator memory regions. It also requires kernel modifications, which some may find intrusive.

Future work

NVIDIA has been working on this proposal, alongside a prototype implementation that implements support for the mechanisms described above. We are working on CPU, CUDA and Torch backends, as well as integration with the Zenoh rmw implementation.

The prototype will move into a branch on the respective ROS repositories in the next two weeks, and development will continue in public toward a full-fledged implementation.

In parallel, a dedicated working group tasked with formalizing this effort is being formed, with the goal of reaching consensus on the design, and getting the required changes into ROS2 Lyrical.

5 posts - 4 participants

Read full topic

by karsten-nvidia on January 12, 2026 05:25 PM

Pixi as a co-official way of installing ROS on Linux

It’s that time of the year when someone with too much spare time on their hands proposes a radical change to the way ROS is distributed and built. This time, it’s my turn.

So let me start by acknowledging that without all the tooling the ROS community has developed over the years (rosdep, bloom, the buildfarm - donate if you can, I did! -, colcon, etc.) we wouldn’t be here. Twenty, even ten years ago it was almost impossible to run a multi-language, federated, distributed project; nothing like these tools existed! So I’m really grateful for all that.

However, the landscape is different now. We now have projects like Pixi, conda-forge and so on.

As per the title of my post, I’m proposing that Pixi become the recommended way of installing ROS 2 not only on Windows but also on Linux, or at least a co-recommended option, for ROS 2 Lyrical Luth and onwards.

One of the first challenges new ROS users face is learning a ROS-specific build tool and development workflow. Although historically we really needed to develop all the tools I’ve mentioned, having our own build tool and package management system doesn’t help with the perception some users still have of ROS as a silo that doesn’t play nice with the outside world.

The two main tools that a user can replace with Pixi are colcon and rosdep, and to some extent bloom.

  • colcon has noble goals (becoming the one build tool for multi-language workspaces), and as someone who has contributed to it (e.g. extensions to support Gradle and Cargo) I appreciate having it all under one tool. However, it hasn’t achieved widespread adoption outside ROS.
  • rosdep makes it easy to install multi-language dependencies, however it still lacks some long-standing features (Add support for version_eq · Issue #803 · ros-infrastructure/rosdep · GitHub) that are taken for granted in other package managers. And because of our distribution model, ROS packages are installed at the system level, not everything is available via APT, etc.
  • bloom works great for submitting packages to the buildfarm. Pixi provides rattler-build, where the process only requires a YAML recipe and can publish not only to prefix.dev, but also to Anaconda.org and JFrog Artifactory.

I’ve been using Pixi for over a year for my own projects, some use ROS some don’t, and the experience couldn’t have been better:

  • No need for vendor packages thanks to conda-forge and robostack (over 43k packages available!)
  • No need for root access, all software is installed in a workspace, and workspaces are reproducible thanks to lockfiles, so I have the same environment on my CI as on my computer.
  • Distro-independent. I’m running AlmaLinux and Debian, I no longer have to worry whether ROS supports my distro or not.
  • Pixi can replace colcon thanks to the pixi build backends ( Building a ROS Package - Pixi )
  • Pixi is fast! It’s written in Rust :wink:

Also, from the ROS side, this would reduce the burden of maintaining the buildfarm, the infrastructure, all the tools, etc., but that’s probably too far in the future, and realistically it’d take a while even if there’s consensus to replace it with something else.

Over the years, like the good open-source citizens we are, we have collaborated with other projects outside the ROS realm. For example, instead of rolling our own transport like we had in ROS 1, we’ve worked with FastDDS, OpenSplice, CycloneDDS and now Zenoh. I’d say this has been quite symbiotic and we’ve helped each other. I believe collaborating with the Pixi and RoboStack projects would be extremely beneficial for everyone involved.

@ruben-arts can surely say more about the benefits of using Pixi for ROS

21 posts - 9 participants

Read full topic

by esteve on January 12, 2026 11:47 AM

January 11, 2026
Ferronyx – Real-Time ROS2 Observability & Automated RCA

We’ve been building robots with ROS2 for years, and we hit the same wall every time a robot fails in production:

The debugging process:

  • SSH into the machine

  • Grep through logs

  • Check ROS2 topics (which ones stopped publishing?)

  • Replay bag files

  • Cross-reference with deployment changes

  • Try to correlate infrastructure issues with ROS state

This takes 3-4 hours. Every time.

The problem: ROS gives you raw telemetry, but zero intelligence connecting infrastructure metrics + ROS topology + deployment history. You’re manually stitching pieces together.

So we built Ferronyx to be that intelligence layer.

What we did:

  • Real-time monitoring of ROS2 topics, nodes, actions + infrastructure (CPU, GPU, memory, network)

  • When something breaks, AI analyzes the incident chain and suggests probable root causes

  • Deployment markers show exactly which release caused the failure

  • Track sensor health degradation before failures happen

Real results from our beta customers:

  • MTTR: 3-4 hours → 12-15 minutes

  • One customer caught sensor drift they couldn’t see manually

  • Another correlated a specific firmware version with navigation failures

We’re looking for 8-12 more teams to beta test and help us refine this. We want teams that:

  • Run ROS2 in production (warehouses, humanoids, autonomous vehicles)

  • Actually deal with downtime/reliability issues

  • Will give honest feedback

Free beta access. You help shape the product, we learn what breaks.

If you’re dealing with robot reliability headaches, reply here or send a DM. Would genuinely love to hear your toughest debugging stories.

Links:
https://ferronyx.com/

3 posts - 2 participants

Read full topic

by Haarvish on January 11, 2026 11:49 PM

January 09, 2026
ROS 2 Rust Meeting: January 2026

The next ROS 2 Rust Meeting will be Mon, Jan 12, 2026 2:00 PM UTC

The meeting room will be at https://meet.google.com/rxr-pvcv-hmu

In the unlikely event that the room needs to change, we will update this thread with the new info!

Agenda:

  1. Changes to generated message consumption (https://github.com/ros2-rust/ros2_rust/pull/556)
  2. Upgrade to Rust 1.85 (build!: require rustc 1.85 and Rust 2024 edition by esteve · Pull Request #566 · ros2-rust/ros2_rust · GitHub)
  3. Migration from Element to Zulip chat (Open Robotics launches Zulip chat server)

2 posts - 2 participants

Read full topic

by maspe36 on January 09, 2026 05:13 PM

January 08, 2026
Easier Protobuf and ROS 2 Integration

For anyone integrating ROS 2 with Protobuf-based systems, we at the RAI Institute want to highlight one of our open-source tools: proto2ros!

proto2ros generates ROS 2 message definitions and bi-directional conversion code directly from .proto files, reducing boilerplate and simplifying integration between Protobuf-based systems and ROS 2 nodes.

Some highlights:

  • Automatic ROS 2 message generation from Protobuf

  • C++ and Python conversion utilities

  • Supports Protobuf v2 and v3

It is currently available for both Humble and Jazzy and can be installed with
apt install ros-<distro>-proto2ros

Check out the full repo here: https://github.com/bdaiinstitute/proto2ros

Thanks to everyone who has contributed to this project including @hidmic @khughes1 @jbarry !
As always, feedback and contributions are welcome!

The RAI Institute

1 post - 1 participant

Read full topic

by tcapp on January 08, 2026 06:08 PM


Powered by the awesome: Planet