February 05, 2026
[Release] GerdsenAI's Depth Anything 3 ROS2 Wrapper with Real-time TensorRT for Jetson

Update: TensorRT Optimization, 7x Performance Improvement Over Previous PyTorch Release!

Great news for everyone following this project! We’ve successfully implemented TensorRT 10.3 acceleration, and the results are significant:

Performance Improvement

Metric            Before (PyTorch)   After (TensorRT)   Improvement
FPS               6.35               43+                6.8x faster
Inference Time    153 ms             ~23 ms             6.6x faster
GPU Utilization   35-69%             85%+               More efficient

Test Platform: Jetson Orin NX 16GB (Seeed reComputer J4012), JetPack 6.2, TensorRT 10.3

Key Technical Achievement: Host-Container Split Architecture

We solved a significant Jetson deployment challenge - TensorRT Python bindings are broken in current Jetson container images (dusty-nv/jetson-containers#714). Our solution:

HOST (JetPack 6.x)
+--------------------------------------------------+
|  TRT Inference Service (trt_inference_shm.py)    |
|  - TensorRT 10.3, ~15ms inference                |
+--------------------------------------------------+
                    ↑
                    | /dev/shm/da3 (shared memory, ~8ms IPC)
                    ↓
+--------------------------------------------------+
|  Docker Container (ROS2 Humble)                  |
|  - Camera drivers, depth publisher               |
+--------------------------------------------------+

This architecture enables real-time TensorRT inference while keeping ROS2 in a clean container environment.
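
For readers curious what the shared-memory handoff might look like in practice, here is a minimal sketch (the /dev/shm/da3 path comes from the diagram above; the resolution and frame layout are assumptions, and the real protocol lives in trt_inference_shm.py in the repo):

import numpy as np

SHM_PATH = "/dev/shm/da3"   # path from the diagram above
H, W = 518, 518             # assumed network output resolution

# Host side (TensorRT service): write the latest depth map
depth = np.zeros((H, W), dtype=np.float32)   # would come from the TRT engine
shm = np.memmap(SHM_PATH, dtype=np.float32, mode="w+", shape=(H, W))
shm[:] = depth
shm.flush()

# Container side (ROS 2 node): map the same file read-only and copy out a frame
frame = np.array(np.memmap(SHM_PATH, dtype=np.float32, mode="r", shape=(H, W)))
# A real implementation also needs a frame counter or lock so the reader never
# sees a half-written frame.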

One-Click Demo

git clone https://github.com/GerdsenAI/GerdsenAI-Depth-Anything-3-ROS2-Wrapper.git
cd GerdsenAI-Depth-Anything-3-ROS2-Wrapper
./run.sh

First run takes ~15-20 minutes (Docker build + TensorRT engine). Subsequent runs start in ~10 seconds.

Compared to Other Implementations

We’re aware of ika-rwth-aachen/ros2-depth-anything-v3-trt which achieves 50 FPS on desktop RTX 6000. Our focus is different:

  • Embedded-first: Optimized for Jetson deployment challenges
  • Container-friendly: Works around broken TRT bindings in Jetson images
  • Production-ready: One-click deployment, auto-dependency installation

Call for Contributors

We’re looking for help with:

  • Test coverage for SharedMemory/TensorRT code paths
  • Validation on other Jetson platforms (AGX Orin, Orin Nano)
  • Point cloud generation (currently depth-only)

Repo: GitHub - GerdsenAI/GerdsenAI-Depth-Anything-3-ROS2-Wrapper: ROS2 wrapper for Depth Anything 3 (https://github.com/ByteDance-Seed/Depth-Anything-3)
License: MIT

@Phocidae @AljazJus - the TensorRT optimization should help significantly with your projects! Let me know if you run into any issues.

1 post - 1 participant

Read full topic

by GerdsenAI on February 05, 2026 05:07 PM

Getting started with Pixi and RoboStack

Hi all,

I noticed a lot of users get stuck when trying Pixi and RoboStack, simply because the initial setup is too hard.

To help you out we’ve created a little tool called pixi-ros to help you map your package.xml files to a pixi.toml.

It basically does what rosdep does, but since the logic of the rosdep installation doesn’t translate well to a Pixi workspace this was always complex to implement.
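
Conceptually, the mapping looks something like the sketch below. This is not pixi-ros’s actual code: ROS packages on RoboStack follow the ros-<distro>-<name-with-dashes> naming, while plain system dependencies map to ordinary conda package names and are not covered here.

import xml.etree.ElementTree as ET

def ros_deps_to_conda(package_xml: str, distro: str = "humble") -> list[str]:
    """Collect <depend>-style entries and emit RoboStack-style conda names."""
    root = ET.parse(package_xml).getroot()
    tags = ("depend", "build_depend", "exec_depend", "test_depend")
    deps = {el.text.strip() for tag in tags for el in root.findall(tag) if el.text}
    return sorted(f"ros-{distro}-{dep.replace('_', '-')}" for dep in deps)

# e.g. <depend>rclcpp</depend> becomes "ros-humble-rclcpp", which you would then
# add under [dependencies] in pixi.toml.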

Instead of staying in waiting mode until we figure out a “clean” way of doing that, I just wanted to get something out that helps you get started today.

Here is a small video to get you started:

Pixi ROS extension; think rosdep for Pixi!

I’m very open to contributions or improvement ideas!

ps. I hope pixi-ros will be obsolete ASAP due to our development of pixi-build-ros, which can read package.xml files directly, but today (5-feb-2026) I would only advise that workflow to advanced users due to its experimental nature.

1 post - 1 participant

Read full topic

by ruben-arts on February 05, 2026 02:23 PM

Accelerated Memory Transport Working Group Announcement

Hi

I’m pleased to announce that the Accelerated Memory Transport Working Group was officially approved by the TGC on January 27th 2026.

This group will focus on extending the ROS transport utilities to enable better memory management through the pipeline to take advantage of available hardware accelerators efficiently, while providing fallbacks in cases where the whole system can not handle the advanced transport.

The work may involve code in repositories including, but not limited to:

The first meeting of the Accelerated Memory Transport Working Group will be on Wed, Feb 11, 2026, from 4:00 PM to 5:00 PM UTC.

If you have any questions about the process, please reach out to me: acordero@honurobotics.com

Thank you

1 post - 1 participant

Read full topic

by ahcorde on February 05, 2026 12:09 PM

February 04, 2026
🚀 Update: LinkForge v1.2.1 is now live on Blender Extensions!

We just pushed a critical stability update for anyone importing complex robot models.

What’s New in v1.2.1?

:white_check_mark: Fixed “Floating Parts”: We resolved a transform baking issue where imported meshes would drift from their parent links. Your imports are now 1:1 accurate.

:white_check_mark: Smarter XACRO: Use complex math expressions in your property definitions? We now parse mixed-type arguments robustly.

:white_check_mark: Native XACRO Parser: We implemented a native, high-fidelity XACRO parser.

If you are building robots in Blender for ROS 2 or Gazebo, this is the most stable version yet.

:link: Get it on GitHub: linkforge-github

:link: Get it on Blender Extensions: linkforge-blender

1 post - 1 participant

Read full topic

by arounamounchili on February 04, 2026 05:34 PM

Lessons learned migrating directly to ROS 2 Kilted Kaiju with pixi

Hi everyone,

We recently completed our full migration from ROS 1 Noetic directly to ROS 2 Kilted Kaiju. We decided to skip the intermediate LTS releases (Humble/Jazzy) to land directly on the bleeding edge features and be prepared for the next LTS Lyrical in May 2026.

Some of you might have seen our initial LinkedIn post about the strategy, which was kindly picked up by OSRF. Since then, we’ve had time to document the actual execution.

You can see the full workflow (including a video of the “trash bin” migration :robot:) in my follow-up post here: :backhand_index_pointing_right: Watch the Migration Workflow on LinkedIn

I wanted to share the technical breakdown here on Discourse, specifically regarding our usage of Pixi, Executors, and the RMW.

1. The Environment Strategy: Pixi & Conda

We bypassed the system-level install entirely. Since we were already using Pixi and Conda for our legacy ROS 1 stack, we leveraged this to make the transition seamless.

  • Side-by-Side Development: This allowed us to run ROS 1 Noetic and ROS 2 Kilted environments on the same machines without environment variable conflicts.
  • The “Disposable” Workspace: We treated workspaces as ephemeral. We could wipe a folder, resolve, and install the full Kilted stack from scratch in <60 seconds (installing user-space dependencies only).

Pixi Gotchas:

  • Versioning: We found we needed to remove the pixi.lock file when pulling the absolute latest build numbers (since we were re-publishing packages with increasing build numbers rapidly during the migration).
  • File Descriptors: On large workspaces, Pixi occasionally ran out of file descriptors during the install phase. A simple retry (or ulimit bump) always resolved this.

2. Observability & AI

We relied heavily on Claude Code to handle the observability side of the migration. Instead of maintaining spreadsheets and bash scripts, we had Claude generate “throw-away” web dashboards to visualize:

  • Build orders
  • Real-time CI status
  • Package porting progress

(See the initial LinkedIn post for examples of these dashboards)

3. The Workflow

Our development loop looked like this: Feature Branch → CI Success → Publish to Staging → pixi install (on Robot) → Test

Because we didn’t rely on baking Docker images for every test, the iteration loop (Code → Robot) was extremely fast.

4. Technical Pain Points & Findings

This is where we spent most of our debugging time:

Executors (Python Nodes):

  • SingleThreadedExecutor: Great for speed, but lacked the versatility we needed (e.g., relying on callbacks within callbacks for certain nodes; see the sketch after this list).
  • MultiThreadedExecutor: This is what we are running mostly now. We noticed some performance overhead, so we pushed high-frequency topic subscriptions (e.g., tf and joint_states) to C++ nodes to compensate.
  • ExperimentalEventExecutor: We tried to implement this but couldn’t get it stable enough for production yet.
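
For readers unfamiliar with the “callbacks within callbacks” pattern mentioned in the first bullet, here is a minimal rclpy sketch (not from the original post): a reentrant callback group plus a MultiThreadedExecutor lets a timer callback block on a service call without deadlocking. The service name and node are illustrative.

import rclpy
from rclpy.node import Node
from rclpy.executors import MultiThreadedExecutor
from rclpy.callback_groups import ReentrantCallbackGroup
from std_srvs.srv import Trigger

class Supervisor(Node):
    def __init__(self):
        super().__init__("supervisor")
        group = ReentrantCallbackGroup()
        self.client = self.create_client(Trigger, "self_check", callback_group=group)
        self.timer = self.create_timer(1.0, self.tick, callback_group=group)

    def tick(self):
        if not self.client.service_is_ready():
            return
        # Synchronous call from inside a callback: safe only because another
        # executor thread is free to deliver the service response.
        result = self.client.call(Trigger.Request())
        self.get_logger().info(f"self_check ok: {result.success}")

def main():
    rclpy.init()
    node = Supervisor()
    executor = MultiThreadedExecutor(num_threads=2)
    executor.add_node(node)
    executor.spin()

if __name__ == "__main__":
    main()

With a SingleThreadedExecutor the same blocking call would deadlock, which is the limitation the first bullet refers to.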

RMW Implementation:

  • We started with the default FastRTPS but encountered stability and latency issues in our specific setup.
  • We switched to CycloneDDS and saw an immediate improvement in stability.

Questions for the Community:

  1. Has anyone put the new Zenoh RMW through its paces in Kilted/Rolling yet? We are eyeing that as a potential next step.
  2. Are others testing Kilted in production contexts yet, or have you had better luck with the Event Executor stability?

Related discussion on tooling: Pixi as a co-official way of installing ROS on Linux

3 posts - 3 participants

Read full topic

by daenny on February 04, 2026 04:57 PM

February 03, 2026
Stop rewriting legacy URDFs by hand 🛑

Migrating robots from ROS 1 to ROS 2 is usually a headache of XML editing and syntax checking.

In this video, I demonstrate how LinkForge solves this in minutes.

The Workflow:
:one: Import: Load legacy ROS 1 URDFs directly into Blender with LinkForge.
:two: Interact: Click links and joints to visualize properties instantly.
:three: Modernize: Auto-generate ros2_control interfaces from existing joints with one click.
:four: Export: Output a clean, fully compliant ROS 2 URDF ready for Jazzy.

LinkForge handles the inertia matrices, geometry offsets, and tag upgrades automatically.

LinkForge - Import URDF file: Stop rewriting legacy URDFs by hand. 🛑

3 posts - 2 participants

Read full topic

by arounamounchili on February 03, 2026 07:26 PM

Stable Distance Sensing for ROS-Based Platforms in Low-Visibility Environments

In nighttime, foggy conditions, or complex terrain environments, many ROS-based platforms
(UAVs, UGVs, or fixed installations) struggle with reliable distance perception when relying
solely on vision or illumination-dependent sensors.

In our recent projects, we’ve been focusing on stable, continuous distance sensing as a
foundational capability for:

  • ground altitude estimation
  • obstacle distance measurement
  • terrain-aware navigation

Millimeter-wave radar has shown strong advantages in these scenarios due to its independence
from lighting conditions and robustness in fog, dust, or rain. We are currently working with
both 24GHz and 77GHz mmWave radar configurations, targeting:

  • mid-to-long-range altitude sensing
  • close-range, high-stability distance measurement

We’re interested in discussing with the ROS community:

  • How others integrate mmWave radar data into ROS (ROS1 / ROS2)

  • Message formats or filtering strategies for distance output

  • Fusion approaches with vision or IMU for terrain-following or obstacle detection

Any shared experience, references, or best practices would be greatly appreciated.

1 post - 1 participant

Read full topic

by hexsoon2026 on February 03, 2026 04:55 PM

developing an autonomous weeding robot for orchards using ROS2 Jazzy

I’m developing an autonomous weeding robot for orchards using ROS2 Jazzy. The robot needs to navigate tree rows and weed close to trunks (20cm safety margin).

My approach:

  • GPS (RTK ideally) for global path planning and navigation between rows
  • Visual-inertial SLAM for precision control when working near trees - GPS accuracy isn’t sufficient for safe 20cm clearances
  • Robust sensor fusion to hand off between the two modes

The interesting challenge is transitioning smoothly between GPS-based navigation and VIO-based precision maneuvering as the robot approaches trees.

Questions:

  • What VIO SLAM packages work reliably with ROS2 Jazzy in outdoor agricultural settings?
  • How have others handled the handoff between GPS and visual odometry for hybrid localization?
  • Any recommendations for handling challenging visual conditions (varying sunlight, repetitive tree textures)?

Currently working in simulation - would love to hear from anyone who’s taken similar systems to hardware.

1 post - 1 participant

Read full topic

by Ilyes_Saadna on February 03, 2026 12:50 AM

February 02, 2026
Error reviewing Action Feedback messages in MCAP files

Hello,

We are using Kilted and record mcap bags with a command line approximately like this:

ros2 bag record -o <filename> --all-topics --all-services --all-actions --include-hidden-topics

When we open the MCAP files in Lichtblick or Foxglove we get this error in Problems panel and we can’t review the feedback messages:

Error in topic <redacted>/_action/feedback (channel 6)
Message encoding cdr with schema encoding '' is not supported (expected "ros2msg" or "ros2idl" or "omgidl")

At this point we are at a loss as to how to resolve this - do we need to publish the schema encoding somewhere?

Thanks.

3 posts - 2 participants

Read full topic

by jbcpollak on February 02, 2026 05:13 PM

MINT Protocol - ROS 2 node for robots to earn crypto for task execution

Built a ROS 2 package that lets robots earn MINT tokens on Solana for task execution.

Repo: GitHub - FoundryNet/ros-mint

How it works

Node subscribes to /mint/task_start and /mint/task_end topics. Duration between events gets settled on-chain as MINT tokens.

# Launch
ros2 run mint_ros settler --ros-args -p keypair_path:=/path/to/keypair.json

# Your task node publishes:
/mint/task_start  # String: task_id
/mint/task_end    # String: task_id

# MINT settles automatically
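
A task node on the robot side could report work to the settler with a few lines of rclpy. This is a minimal sketch based only on the topic names and String payload described above; the node and helper names are illustrative:

import uuid
import rclpy
from rclpy.node import Node
from std_msgs.msg import String

class TaskReporter(Node):
    def __init__(self):
        super().__init__("task_reporter")
        self.start_pub = self.create_publisher(String, "/mint/task_start", 10)
        self.end_pub = self.create_publisher(String, "/mint/task_end", 10)

    def run_task(self):
        task_id = str(uuid.uuid4())
        self.start_pub.publish(String(data=task_id))
        # ... perform the actual work here ...
        self.end_pub.publish(String(data=task_id))  # settler pays for the elapsed time

def main():
    rclpy.init()
    node = TaskReporter()
    node.run_task()
    node.destroy_node()
    rclpy.shutdown()

if __name__ == "__main__":
    main()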

Rate

0.005 MINT per second of work. Oracle pays gas - robots pay nothing.

Task Duration   MINT Earned
1 minute        0.30 MINT
10 minutes      3.00 MINT
1 hour          18.00 MINT

Links

Machines work. Machines should earn.

3 posts - 2 participants

Read full topic

by FoundryNet on February 02, 2026 04:57 PM

Optimization of Piper Robotic Arm Motion Control via lerobot Transplantation

Optimization of Piper Robotic Arm Motion Control via lerobot Transplantation

Author: VA11Hall
Link: https://zhuanlan.zhihu.com/p/1946636125415401016
Source: Zhihu

I. Introduction

We have successfully transplanted lerobot to the Piper robotic arm, enabling smooth execution of task workflows including remote control, data collection, training, and inference. The current goal is to optimize the Piper’s operational performance—such as improving success rates and motion stability. Key optimization measures will focus on two aspects: enhancing the quality and scale of datasets, and further refining motion control algorithms. For the former, we plan to conduct experiments on reducing light interference, deploying cameras in more reasonable positions (e.g., on the arm itself), and improving the consistency of teaching actions during data collection. For the latter, we will directly modify the code to enhance motion control.

This article introduces a code-based approach to optimize Piper’s motion control, inspired by the following Bilibili video: LeRobot ACT Algorithm Introduction and Tuning

The video author not only provides optimization ideas and a demonstration of the results but also shares the source code. This article analyzes and explains the ideas and corresponding code implementations from the video, and presents the results of transplanting this code to the Piper robotic arm for practical testing.

II. Limitations of Motion Control in lerobot’s Official Code

Robots trained with lerobot often exhibit severe jitter during inference and validation. This is because lerobot relies on imitation learning—during data collection, human demonstrators inevitably introduce unnecessary jitter into the dataset due to unfamiliarity with the master arm. Additionally, even for similar grasping tasks, demonstrators may adopt different action strategies. Given the current limitations of small dataset sizes and immature network architectures, these factors lead to unstable motion control (there are also numerous other contributing factors).

For a given pre-trained model, developers can directly improve data collection quality to provide the model with high-quality task demonstrations—analogous to “compensating for a less capable student with a more competent teacher.” Furthermore, developers can embed critical knowledge that the robot struggles to learn into the code through explicit programming.

To reduce jitter during robotic arm movement without compromising the model’s generalization ability, two classic motion control optimization strategies can be adopted: motion filtering and interpolation.

III. Interpolation and Filtering of Action Sequences Generated by ACT

The default model used in lerobot workflows is ACT, with relevant code located in the policies directory. The lerobot project has transplanted the original ACT code and implemented wrapper functions for robot control.

Using VS Code’s indexing feature, we can directly locate the select_action function in lerobot’s ACT-related code:

python

def select_action(self, batch: dict[str, Tensor]) -> Tensor:
    """Select a single action given environment observations.

    This method wraps `select_actions` in order to return one action at a time for execution in the
    environment. It works by managing the actions in a queue and only calling `select_actions` when the
    queue is empty.
    """
    self.eval()  # keeping the policy in eval mode as it could be set to train mode while queue is consumed

    if self.config.temporal_ensemble_coeff is not None:
        actions = self.predict_action_chunk(batch)
        action = self.temporal_ensembler.update(actions)
        return action

    # Action queue logic for n_action_steps > 1. When the action_queue is depleted, populate it by
    # querying the policy.
    if len(self._action_queue) == 0:
        actions = self.predict_action_chunk(batch)[:, : self.config.n_action_steps]

        # `self.model.forward` returns a (batch_size, n_action_steps, action_dim) tensor, but the queue
        # effectively has shape (n_action_steps, batch_size, *), hence the transpose.
        self._action_queue.extend(actions.transpose(0, 1))
    return self._action_queue.popleft()

The core logic here is: if the action queue is empty, the model predicts and generates a new sequence of actions. An unavoidable limitation of this logic is that the end of one action cluster (a sequence of consecutive actions) and the start of the next generated cluster often lack continuity. This causes the robotic arm to exhibit sudden jumps during inference (more severe than jitter, similar to convulsions).

To address this, linear interpolation can be used to generate a series of intermediate actions, smoothing the transition between discontinuous action clusters. Subsequently, applying mean filtering to the entire action sequence can further mitigate jitter.

P.S.: While writing this, I suddenly wondered if slower demonstration actions during data collection would result in more stable operation.

Based on the above ideas, the select_action function was modified as follows:

python

def select_action(self, batch: dict[str, Tensor]) -> Tensor:
    """Select a single action given environment observations.

    This method wraps `select_actions` in order to return one action at a time for execution in the
    environment. It works by managing the actions in a queue and only calling `select_actions` when the
    queue is empty.
    """
    self.eval()  # keeping the policy in eval mode as it could be set to train mode while queue is consumed

    if self.config.temporal_ensemble_coeff is not None:
        actions = self.predict_action_chunk(batch)
        action = self.temporal_ensembler.update(actions)
        return action

    # vkrobot: Model prediction generates a sequence of n_action_steps, which is stored in the queue.
    # The robotic arm is controlled based on the actions in the sequence.
    if len(self._action_queue) == 1:
        self.last_action = self._action_queue[0].cpu().tolist()[0]

    # Action queue logic for n_action_steps > 1. When the action_queue is depleted, populate it by
    # querying the policy.
    if len(self._action_queue) == 0:
        actions = self.predict_action_chunk(batch)[:, : self.config.n_action_steps]

        # `self.model.forward` returns a (batch_size, n_action_steps, action_dim) tensor, but the queue
        # effectively has shape (n_action_steps, batch_size, *), hence the transpose.
        # vkrobot: Linear interpolation for jump points
        self.begin_mutation_filter(actions)
        self._action_queue.extend(actions.transpose(0, 1))
        # vkrobot: Mean filtering
        self.actions_mean_filtering()
    return self._action_queue.popleft()

Key modifications include:

python

if len(self._action_queue) == 1:

When only one action remains in the queue (indicating the end of the previously predicted action cluster), this action is recorded. For clarification: an “action” here refers to a set of joint angles.

Thus, when generating the next prediction, linear interpolation can be used to smooth the transition from the last action of the previous cluster to the first action of the new cluster. Additionally, mean filtering is applied to all newly generated action sequences:

python

self.begin_mutation_filter(actions)
self._action_queue.extend(actions.transpose(0, 1))
# vkrobot: Mean filtering
self.actions_mean_filtering()

The interpolation and filtering functions need to be implemented separately, as they are not included in the original lerobot code.
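
For reference, one possible shape of these two helpers is sketched below. This is not the video author's code: the interpolation length and window size are illustrative, and it assumes the queue holds (batch, action_dim) tensors, as in select_action above.

python

import torch

def begin_mutation_filter(self, actions, n_interp: int = 10):
    # Blend the first n_interp steps of the new chunk away from the last executed
    # action, removing the jump between consecutive action clusters.
    if getattr(self, "last_action", None) is None:
        return
    last = torch.tensor(self.last_action, device=actions.device, dtype=actions.dtype)
    n = min(n_interp, actions.shape[1])
    for i in range(n):
        alpha = (i + 1) / n
        actions[:, i] = (1 - alpha) * last + alpha * actions[:, i]

def actions_mean_filtering(self, window: int = 5):
    # Moving-average smoothing over the queued (batch, action_dim) actions.
    acts = list(self._action_queue)
    smoothed = []
    for i in range(len(acts)):
        lo, hi = max(0, i - window // 2), min(len(acts), i + window // 2 + 1)
        smoothed.append(torch.stack(acts[lo:hi]).mean(dim=0))
    self._action_queue.clear()
    self._action_queue.extend(smoothed)  # keeps the deque's maxlen intact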

IV. Adding Smooth Loss to the Loss Function

The video author also proposes another method to reduce jitter: incorporating smooth loss into the total loss function. This is a common technique in machine learning—an ingenious idea, though its practical effectiveness may vary depending on the scenario.

python

# # # Mean filtering loss vkrobot
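# (Assumes the surrounding lerobot module already imports torch and torch.nn.functional as F,
#  and that the action dimension is 6, matching the conv1d weight shape and groups below.)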
kernel_size = 11
padding = kernel_size // 2
x = actions_hat.transpose(1, 2)
weight = torch.ones(6, 1, kernel_size, device=actions_hat.device) / kernel_size
filtered_x = F.conv1d(x, weight, padding=padding, groups=6)
filtered_tensor = filtered_x.transpose(1, 2)
mean_loss = torch.abs(actions_hat - filtered_tensor).mean()
loss += mean_loss
loss_dict["mean_loss"] = mean_loss.item()

V. Other Optimization Attempts

The video also mentions modifying model inference parameters to improve grasping success rates. We tested this method on the Piper: setting the model to infer 100 steps and execute the first 50 steps resulted in the robot entering a hesitant state, failing to proceed. Adjusting to 70 steps also led to similar issues. Thus, parameter modification may require scenario-specific tuning.

Additionally, the video suggests introducing mean filtering during data collection—a method that should be effective. We plan to test this in future research focused on data collection optimization.

After integrating interpolation and filtering, we ran the previously trained model. A comparison of the operational performance before and after optimization can be viewed in the following video: Piper lerobot Transplantation: Motion Control Optimization Demo

Overall, the Piper robotic arm’s motion during inference has become significantly smoother, with a moderate improvement in grasping success rates.

1 post - 1 participant

Read full topic

by Agilex_Robotics on February 02, 2026 09:59 AM

February 01, 2026
URDF Kitchen Beta 2 Released

Hello everyone,

I have developed a tool to support the creation of URDF files.

URDF Kitchen is a GUI-based tool that allows you to load mesh files for robot parts, mark connection points, and assemble robots by connecting nodes. It is especially useful when your CAD or 3D modeling tools cannot directly export URDF files, or when existing URDFs need to be modified.

The tool also supports exporting MJCF for use with MuJoCo.

Key features:

  • Robot assembly via node-based connections

  • Supports STL, OBJ, and DAE mesh files

  • Export to URDF and MJCF

  • Import URDF, xacro, SDF, and MJCF

  • Automatic mirroring to generate the right side from a left-side assembly

  • GUI-based configuration of connection points and colliders

  • Supports setting only the minimum required joint parameters

  • Available on Windows, macOS, and Ubuntu

  • Free to use (GPL v3.0)

  • Written in Python, making it easy to extend or modify features with AI-assisted coding

This is the Beta 2 release, with significant feature updates since the previous version.

Please give it a try, and I would really appreciate any feedback or bug reports.

YouTube (30s overview video):

URDF kitchen Beta2

GitHub:
https://github.com/Ninagawa123/URDF_kitchen/tree/beta2

4 posts - 3 participants

Read full topic

by Ninagawa123 on February 01, 2026 11:06 PM

Toio meets navigation2

I have published a ROS 2 package for using Navigation2 with toio, so users can study Navigation2 using toio.
You can watch the demo video.

1 post - 1 participant

Read full topic

by dandelion1124 on February 01, 2026 04:12 AM

Space ROS Jazzy 2026.01.0 Release

Hello ROS community!

The Space ROS team is excited to announce that Space ROS Jazzy 2026.01.0 was released last week and is available as osrf/space-ros:jazzy-2026.01.0 on DockerHub. Additionally, builds of MoveIt 2 and Navigation 2 on the jazzy-2026.01.0 underlay are available to accelerate work using these systems, as osrf/space-ros-moveit2:jazzy-2026.01.0 and osrf/space-ros-nav2:jazzy-2026.01.0 on DockerHub respectively.

For an exhaustive list of all the issues addressed and PRs merged, check out the GitHub Project Board for this release here.

Code

Current versions of all packages released with Space ROS are available at:

What’s Next

This release comes 3 months after the last release. The next release is planned for April 30, 2026. If you want to contribute to features, tests, demos, or documentation of Space ROS, get involved on the Space ROS GitHub issues and discussion board.

All the best,

The Space ROS Team

2 posts - 1 participant

Read full topic

by bkempa on February 01, 2026 12:26 AM

January 30, 2026
Abandoned joystick_drivers package

I noticed that the joystick drivers repository has not had any recent changes and there are several open pull requests which have not been addressed by the maintainers. Has this package been replaced or is it abandoned?

7 posts - 4 participants

Read full topic

by ethanholter on January 30, 2026 05:18 PM

January 29, 2026
RealSense D435 mounted vertically (90° rotation) - What should camera_link and camera_depth_optical_frame TF orientations be?

Hi everyone,

I’m using an Intel RealSense D435 camera with ROS2 Jazzy and MoveIt2. My camera is mounted in a non-standard orientation: Vertically rather than horizontally. More specifically it is rotated 90° counterclockwise (USB port facing up) and tilted 8° downward.

I’ve set up my URDF with a camera_link joint that connects to my robot, and the RealSense ROS2 driver automatically publishes the camera_depth_optical_frame.

My questions:

Does camera_link need to follow a specific orientation convention? (I’ve read REP-103 says X=forward, Y=left, Z=up, but does this still apply when the camera is physically rotated?)

What should camera_depth_optical_frame look like in RViz after the 90° rotation? The driver creates this automatically - should I expect the axes to look different than a standard horizontal mount?

If my point cloud visually appears correctly aligned with reality (floor is horizontal, objects in correct positions), does the TF frame orientation actually matter? Or is it purely cosmetic at that point?

Is there a “correct” RPY for a vertically-mounted D435, or do I just need to ensure the point cloud aligns with my robot’s world frame?

Any guidance from anyone who has mounted a RealSense camera vertically would be really appreciated!

Thanks!

4 posts - 2 participants

Read full topic

by vs02 on January 29, 2026 04:14 PM

[Update] ros2_unbag: Fast, organized data extraction

Hi everyone,

I wanted to share an update on ros2_unbag, a tool I authored to simplify the process of getting data out of ROS 2 bags and into organized formats.

ros2_unbag is the ultimate “un-packer” for your robot’s messy suitcase. Since the initial release last year, the focus has been on making extraction as “painless” as possible, even for large-scale datasets. If you find yourself writing repetitive scripts to dump images or point clouds, this might save you some time.

Current Capabilities:

  • Quick Export: Direct conversion to images (.png, .jpg), videos (.mp4, .avi), and point clouds (.pcd, .xyz), plus structured text (.json, .yaml, .csv) for any other message type.

  • User-friendly GUI: Recently updated to make the workflow as intuitive as possible.

  • Flexible Processors: Define your own routines to handle virtually any message type or custom export logic. We currently use this for cloudini support and specialized automated driving message types.

  • Organized Output: Automatically sorts data into clear directory structures for easy downstream use.

  • High Performance: Optimized through parallelization; the bottleneck is often your drive speed, not your compute.

I’ve been maintaining this since July 2025 and would love to hear if there are specific message types or features the community is still struggling to “unbag.”

https://github.com/ika-rwth-aachen/ros2_unbag

1 post - 1 participant

Read full topic

by lostendo on January 29, 2026 02:50 PM

January 28, 2026
Panthera-HT —— A Fully Open-Source 6-Axis Robotic Arm

Panthera-HT —— A Fully Open-Source 6-Axis Robotic Arm

Hello Developers,

Compact robotic arms play a vital role for individual developers, data acquisition centers, and task execution in small-scale scenarios. Now, after a long period of development, the full 6-DOF (six degrees of freedom) robotic arm industry welcomes a new player: the Panthera-HT from hightorque.

The Panthera-HT currently offers control interfaces in C++, Python, and ROS2, featuring capabilities including:

  • Position/Velocity/Torque control

  • Impedance control

  • Gravity compensation mode

  • Gravity and friction compensation mode

  • Master-slave teleoperation (dual-arm/bimanual)

  • Hand-guided teaching / Drag-to-teach

Additionally, it supports data collection and inference within the LeRobot framework. For additional runtime scripts and implementation details, please refer to the SDK documentation.

:sparkles: Project Origin

This started as Ragtime-LAB/Ragtime_Panthera’s open-source project, and we’ve since taken it further with improvements and polish. Huge thanks to the original author wEch1ng (芝士榴莲肥牛) for sharing their work so generously with the community!

:card_file_box: Repository

To quickly access our project, here are the links, listed intuitively:

Repository            License   Description
Panthera-HT_Main      MIT       Main project repository, including project introduction, repository links, and feature requests.
Panthera-HT_Model     MIT       SolidWorks original design files, sheet metal unfolding diagrams, 3D printing files, and Bill of Materials (BOM).
Panthera-HT_SDK       MIT       Python SDK development package, providing quick-start example code and development toolchain.
Panthera-HT_ROS2      MIT       ROS2 development package providing robotic arm drivers, control, and simulation support.
Panthera-HT_lerobot   MIT       LeRobot integration package, supporting imitation learning and robot learning algorithms.

:gear: Control Examples

Let’s get into the project and walk through the quick start of the Panthera-HT robot. You will find lots of interesting functions waiting for you, and here is a preview of the control examples :slight_smile:

Position and Speed Control:

Master-Slave Teleoperation:

Master-Slave Teleoperated Grasping:

Epilogue

We sincerely thank you for your time in reviewing the content above, and extend our gratitude to all developers visiting the project on GitHub. Wishing you smooth development workflows and outstanding project performance!

2 posts - 2 participants

Read full topic

by MT-gao965 on January 28, 2026 05:58 PM

January 27, 2026
Stop SSH-ing into robots to find the right rosbag. We built a visual Rolling Buffer for ROS2

Hi everyone,

I’m back with an update on INSAION, the observability platform my co-founder and I are building. Last time, we discussed general fleet monitoring, but today I want to share a specific feature we just released that targets a massive pain point we faced as roboticists: Managing local recordings without filling up the disk.

We’ve all been there: A robot fails in production, you SSH in, navigate to the log directory, and start playing “guess the timestamp” to find the right bag file. It’s tedious, and usually, you either missed the data or the disk is already full.

So, we built a smart Rolling Buffer to solve this.

How it actually works (It’s more than just a loop):

It’s not just a simple circular buffer. We built a storage management system directly into the agent. You allocate a specific amount of storage (e.g., 10GB) and select a policy via the Web UI (no config files!):

  • FIFO: Oldest data gets evicted automatically when the limit is reached (a conceptual sketch follows this list).

  • HARD: Recording stops when the limit is reached to preserve exact history.

  • NONE: Standard recording until disk saturation.
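
To make the FIFO policy concrete, here is a conceptual sketch of size-based eviction. This is not INSAION's implementation; the directory, file pattern, and 10 GB budget are illustrative.

from pathlib import Path

def enforce_fifo_limit(record_dir: str, max_bytes: int) -> None:
    # Delete the oldest recordings until the directory fits within the budget.
    files = sorted(Path(record_dir).glob("*.mcap"), key=lambda p: p.stat().st_mtime)
    total = sum(p.stat().st_size for p in files)
    while files and total > max_bytes:
        oldest = files.pop(0)
        total -= oldest.stat().st_size
        oldest.unlink()

# Example: cap local recordings at 10 GB
# enforce_fifo_limit("/data/recordings", 10 * 1024**3)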

The “No-SSH” Workflow:

As you can see in the video attached, we visualized the timeline.

  1. The Timeline: You see exactly where the Incidents (red blocks) happened relative to the Recordings (yellow/green blocks).

  2. Visual correlation: No need to grep logs or match timestamps manually. You can see at a glance if you have data covering the crash.

  3. Selective Sync: You don’t need to upload terabytes of data. You just select the relevant block from the timeline and click “Sync.” The heavy sensor data (Lidar, Images, Costmaps) is then uploaded to the cloud for analysis.

Closing the Loop:

Our goal is to give you the full picture. We start with lightweight telemetry for live monitoring, which triggers alerts. Then, we close the loop by letting you easily grab the high-fidelity, heavy data stored locally—only when you actually need it.

We’re trying to build the tool we wish we had in our previous robotics jobs. I’d love to hear your thoughts on this “smart recording” approach—does this sound like something that would save you time debugging?

I’d love to hear your feedback on it

Check it out at app.insaion.com if you want to dig deeper. It’s free to get started

Cheers!

1 post - 1 participant

Read full topic

by vicmassy on January 27, 2026 05:02 PM

Implementation of UR Robotic Arm Teleoperation with PIKA SDK

Implementation of UR Robotic Arm Teleoperation with PIKA SDK

Demo Demonstration

Pika Teleoperation of UR Robotic Arm Demo Video

Getting Started with PIKA Teleoperation (UR Edition)

We recommend reading [Methods for Teleoperating Any Robotic Arm with PIKA] before you begin.

Once you understand the underlying principles, let’s guide you through writing a teleoperation program step by step. To quickly implement teleoperation functionality, we will use the following tools:

  • PIKA SDK: Enables fast access to all PIKA Sense data and out-of-the-box gripper control capabilities
  • Various transformation tools: Such as converting XYZRPY to 4x4 homogeneous transformation matrices, converting XYZ and quaternions to 4x4 homogeneous transformation matrices, and converting RPY angles (rotations around X/Y/Z axes) to rotation vectors
  • UR Robotic Arm Control Interface: This interface is primarily built on the ur-rtde library. It enables real-time control by sending target poses (XYZ and rotation vectors), speed, acceleration, control interval (frequency), lookahead time, and proportional gain

Environment Setup

  1. Clone the code
git clone --recursive https://github.com/RoboPPN/pika_remote_ur.git

  2. Install Dependencies

cd pika_remote_ur/pika_sdk

pip3 install -r requirements.txt  

pip3 install -e .

pip3 install ur-rtde

UR Control Interface

Let's start with the control interface. To implement teleoperation, you first need to develop a proper control interface. For instance, the native control interface of UR robots accepts XYZ coordinates and rotation vectors as inputs, while teleoperation code typically outputs XYZRPY data. This requires a coordinate transformation, which can be implemented either in the control interface or the main teleoperation program. Here, we perform the transformation in the main teleoperation program.

The UR robotic arm control interface code is located at pika_remote_ur/ur_control.py:

import rtde_control
import rtde_receive

class URCONTROL:
    def __init__(self,robot_ip):
        # Connect to the robot
        self.rtde_c = rtde_control.RTDEControlInterface(robot_ip)
        self.rtde_r = rtde_receive.RTDEReceiveInterface(robot_ip)
        if not self.rtde_c.isConnected():
            print("Failed to connect to the robot control interface.")
            return
        if not self.rtde_r.isConnected():
            print("Failed to connect to the robot receive interface.")
            return
        print("Connected to the robot.")
            
        # Define servoL parameters
        self.speed = 0.15  # m/s
        self.acceleration = 0.1  # m/s^2
        self.dt = 1.0/50  # 50Hz control period (use 1.0/125 for 125Hz)
        self.lookahead_time = 0.1  # s
        self.gain = 300  # proportional gain
        
    def sevol_l(self, target_pose):
        self.rtde_c.servoL(target_pose, self.speed, self.acceleration, self.dt, self.lookahead_time, self.gain)
        
    def get_tcp_pose(self):
        return self.rtde_r.getActualTCPPose()
    
    def disconnect(self):
        if self.rtde_c:
            self.rtde_c.disconnect()
        if self.rtde_r:
            self.rtde_r.disconnect()
        print("Disconnected from UR robot")

# example
# if __name__ == "__main__":
#     ur = URCONTROL("192.168.1.15")
#     target_pose = [0.437, -0.1, 0.846, -0.11019068574221307, 1.59479642933605, 0.07061926626169934]
    
#     ur.sevol_l(target_pose)

The code defines a Python class named URCONTROL for communicating and controlling UR robots. This class encapsulates the functionalities of the rtde_control and rtde_receive libraries, providing methods for connecting to the robot, disconnecting, sending servoL commands, and retrieving TCP poses.

Core Teleoperation Code

The teleoperation code is located at `pika_remote_ur/teleop_ur.py`

As outlined in [Methods for Teleoperating Any Robotic Arm with PIKA], the teleoperation principle can be summarized in four key steps:

  1. Obtain 6D Pose data
  2. Coordinate System Alignment
  3. Incremental Control
  4. Map 6D Pose data to the robotic arm

Obtaining Pose Data

The code is as follows:
# Get pose data of the tracker device
def get_tracker_pose(self):
    logger.info(f"Starting to obtain pose data of {self.target_device}...")
    while True:
        # Get pose data
        pose = self.sense.get_pose(self.target_device)
        if pose:
            # Extract position and rotation data for further processing
            position = pose.position  # [x, y, z]
            rotation = self.tools.quaternion_to_rpy(pose.rotation[0],pose.rotation[1],pose.rotation[2],pose.rotation[3])  # [x, y, z, w] quaternion

            self.x,self.y,self.z,   self.roll, self.pitch, self.yaw = self.adjustment(position[0],position[1],position[2],
                                                                                      rotation[0],rotation[1],rotation[2])                                                                           
        else:

            logger.warning(f"Failed to obtain pose data for {self.target_device}, retrying in the next cycle...")

        time.sleep(0.02)  # Obtain data every 0.02 seconds (50Hz)

This code retrieves the pose information of the tracker named “T20” every 0.02 seconds. There are two types of tracker device names: those starting with WM and those starting with T. When connecting trackers to the computer via a wired connection, the first connected tracker is named T20, the second T21, and so on. For wireless connections, the first connected tracker is named WM0, the second WM1, and so forth.

The acquired pose data requires further processing. The adjustment function is used to adjust the coordinates to match the coordinate system of the UR robotic arm’s end effector, achieving alignment between the two systems.

Coordinate System Alignment

The code is as follows:
# Coordinate transformation adjustment function
def adjustment(self,x,y,z,Rx,Ry,Rz):
    transform = self.tools.xyzrpy2Mat(x,y,z,   Rx, Ry, Rz)

    r_adj = self.tools.xyzrpy2Mat(self.pika_to_arm[0],self.pika_to_arm[1],self.pika_to_arm[2],
                                  self.pika_to_arm[3],self.pika_to_arm[4],self.pika_to_arm[5],)   # Adjust coordinate axis direction: Pika ---> Robotic Arm End Effector

    transform = np.dot(transform, r_adj)

    x_,y_,z_,Rx_,Ry_,Rz_ = self.tools.mat2xyzrpy(transform)

    return x_,y_,z_,Rx_,Ry_,Rz_

The function implements coordinate transformation and adjustment with the following steps:

  1. Convert the input pose (x,y,z,Rx,Ry,Rz) into a transformation matrix.
  2. Obtain the adjustment matrix for transforming the Pika coordinate system to the robotic arm’s end effector coordinate system.
  3. Combine the two transformations through matrix multiplication.
  4. Convert the final transformation matrix back to pose parameters and return the result.

The adjusted pose parameters matching the robotic arm’s coordinate system can be obtained through this function.
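
For reference, the transformation helpers used throughout (xyzrpy2Mat, mat2xyzrpy, rpy_to_rotvec) can be sketched with SciPy roughly as follows. This is not the PIKA SDK's implementation, and the exact Euler-angle convention should be checked against the SDK:

import numpy as np
from scipy.spatial.transform import Rotation as R

def xyzrpy2Mat(x, y, z, roll, pitch, yaw):
    # 4x4 homogeneous transform from a translation and RPY Euler angles.
    T = np.eye(4)
    T[:3, :3] = R.from_euler("xyz", [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T

def mat2xyzrpy(T):
    # Recover [x, y, z, roll, pitch, yaw] from a 4x4 homogeneous transform.
    roll, pitch, yaw = R.from_matrix(T[:3, :3]).as_euler("xyz")
    return [T[0, 3], T[1, 3], T[2, 3], roll, pitch, yaw]

def rpy_to_rotvec(roll, pitch, yaw):
    # RPY Euler angles -> axis-angle rotation vector (the format UR servoL expects).
    return R.from_euler("xyz", [roll, pitch, yaw]).as_rotvec()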

Incremental Control

In teleoperation, the pose data provided by Pika Sense is absolute. However, we do not want the robotic arm to jump directly to this absolute pose. Instead, we want the robotic arm to follow the relative movements of the operator starting from its current position. In simple terms, this involves converting the absolute pose changes of the control device into relative pose commands for the robotic arm.

The code is as follows:

# Incremental control
def calc_pose_incre(self,base_pose, pose_data):
    begin_matrix = self.tools.xyzrpy2Mat(base_pose[0], base_pose[1], base_pose[2],
                                                base_pose[3], base_pose[4], base_pose[5])
    zero_matrix = self.tools.xyzrpy2Mat(self.initial_pose_rpy[0],self.initial_pose_rpy[1],self.initial_pose_rpy[2],
                                        self.initial_pose_rpy[3],self.initial_pose_rpy[4],self.initial_pose_rpy[5])
    end_matrix = self.tools.xyzrpy2Mat(pose_data[0], pose_data[1], pose_data[2],
                                            pose_data[3], pose_data[4], pose_data[5])
    result_matrix = np.dot(zero_matrix, np.dot(np.linalg.inv(begin_matrix), end_matrix))
    xyzrpy = self.tools.mat2xyzrpy(result_matrix)
    return xyzrpy   

This function uses transformation matrix arithmetic to implement incremental control. Let’s break down the code step by step:

Input Parameters:

  • base_pose: The reference pose at the start of teleoperation. When teleoperation is triggered, the system records the current pose of the control device and stores it as self.base_pose. This pose serves as the “starting point” or “reference zero point” for calculating all subsequent increments.
  • pose_data: The real-time pose data received from the control device (Pika Sense) at the current moment.

Matrix Transformation:The function first converts three key poses (represented in [x, y, z, roll, pitch, yaw] format) into 4x4 homogeneous transformation matrices, typically implemented by the tools.xyzrpy2Mat function.

  • begin_matrix: Converted from base_pose, representing the pose matrix of the control device at the start of teleoperation (denoted as T_begin).
  • zero_matrix: Converted from self.initial_pose_rpy, representing the pose matrix of the robotic arm’s end effector at the start of teleoperation. This is the “starting point” for the robotic arm’s movement (denoted as T_zero).
  • end_matrix: Converted from pose_data, representing the pose matrix of the control device at the current moment (denoted as T_end).

Core Calculation:This is the critical line of code:

result_matrix = np.dot(zero_matrix, np.dot(np.linalg.inv(begin_matrix), end_matrix))

Let’s analyze it using matrix multiplication:The formula can be expressed as: Result = T_zero * (T_begin)⁻¹ * T_end

  • np.linalg.inv(begin_matrix): Calculates the inverse matrix of begin_matrix, i.e., (T_begin)⁻¹. In robotics, the inverse of a transformation matrix represents the reverse transformation.
  • np.dot(np.linalg.inv(begin_matrix), end_matrix): This calculates (T_begin)⁻¹ * T_end, which physically represents the transformation required to convert from the begin coordinate system to the end coordinate system. In other words, it accurately describes the relative pose change (increment) of the control device from the start of teleoperation to the current moment (denoted as ΔT).
  • np.dot(zero_matrix, ...): This calculates T_zero * ΔT, which physically applies the calculated relative pose change (ΔT) to the initial pose of the robotic arm (T_zero).

Result Conversion and Return:

  • xyzrpy = tools.mat2xyzrpy(result_matrix): Converts the calculated 4x4 target pose matrix result_matrix back to the [x, y, z, roll, pitch, yaw] format that the robot controller can interpret.
  • return xyzrpy: Returns the calculated target pose.

Teleoperation Triggering

There are various ways to trigger teleoperation:
  1. Voice Trigger: The operator can trigger teleoperation using a wake word.
  2. Server Request Trigger: Teleoperation is triggered via a server request.

However, both methods have usability limitations. Voice triggering requires an additional voice input module and may suffer from low wake word recognition accuracy—you might have to repeat the wake word multiple times before successful triggering, leaving you frustrated before even starting teleoperation. Server request triggering requires sending a request from the control computer, which works well with two-person collaboration but becomes cumbersome when operating alone.

Instead, we use Pika Sense’s state transition detection to trigger teleoperation. The operator simply holds the Pika Sense and double-clicks it to reverse the state, thereby initiating teleoperation. The code is as follows:

# Teleoperation trigger
def handle_trigger(self):
    current_value = self.sense.get_command_state()

    if self.last_value is None:
        self.last_value = current_value
    if current_value != self.last_value: # Detect state change
        self.bool_trigger = not self.bool_trigger # Reverse bool_trigger
        self.last_value =  current_value # Update last_value
        # Perform corresponding operations based on the new bool_trigger value
        if self.bool_trigger :
            self.base_pose = [self.x, self.y, self.z, self.roll, self.pitch, self.yaw]
            self.flag = True
            print("Teleoperation started")

        elif not self.bool_trigger :
            self.flag = False

            #-------------------------------------------------Option 1: Robotic arm stops at current pose after teleoperation ends; resumes from current pose in next teleoperation---------------------------------------------------

            self.initial_pose_rotvec = self.ur_control.get_tcp_pose()

            temp_rotvec = [self.initial_pose_rotvec[3], self.initial_pose_rotvec[4], self.initial_pose_rotvec[5]]

            #  Convert rotation vector to Euler angles
            roll, pitch, yaw = self.tools.rotvec_to_rpy(temp_rotvec)

            self.initial_pose_rpy = self.initial_pose_rotvec[:]
            self.initial_pose_rpy[3] = roll
            self.initial_pose_rpy[4] = pitch
            self.initial_pose_rpy[5] = yaw

            self.base_pose = self.initial_pose_rpy # Desired target pose data
            print("Teleoperation stopped")

            #-------------------------------------------------Option 2: Robotic arm returns to initial pose after teleoperation ends; starts from initial pose in next teleoperation---------------------------------------------------

            # # Get current pose of the robotic arm
            # current_pose = self.ur_control.get_tcp_pose()

            # # Define interpolation steps
            # num_steps = 100  # Adjust steps as needed; more steps result in smoother transition

            # for i in range(1, num_steps + 1):
            #     # Calculate interpolated pose at current step
            #     # Assume initial_pose_rotvec and current_pose are both in [x, y, z, Rx, Ry, Rz] format
            #     interpolated_pose = [
            #         current_pose[j] + (self.initial_pose_rotvec[j] - current_pose[j]) * i / num_steps
            #         for j in range(6)
            #     ]
            #     self.ur_control.sevol_l(interpolated_pose)
            #     time.sleep(0.01)  # Short delay between interpolations to control speed

            # # Ensure the robotic arm reaches the initial position
            # self.ur_control.sevol_l(self.initial_pose_rotvec)


            # self.base_pose = [self.x, self.y, self.z, self.roll, self.pitch, self.yaw]
            # print("Teleoperation stopped")

The code continuously retrieves the current state of Pika Sense using self.sense.get_command_state(), which outputs either 0 or 1. When the program starts, bool_trigger defaults to False. On the first state reversal, bool_trigger is set to True—the tracker’s pose is set as the zero point, self.flag is set to True, and control data is sent to the robotic arm for motion control.

To stop teleoperation, double-click the Pika Sense again to reverse the state. The robotic arm will then stop at its current pose and resume from this pose in the next teleoperation session (Option 1). Option 2 allows the robotic arm to return to its initial pose after teleoperation stops and start from there in subsequent sessions. You can choose the appropriate option based on your specific needs.

Mapping Pika Pose Data to the Robotic Arm

The code for this section is as follows:
def start(self):
    self.tracker_thread.start() # Start the thread        
    # Main thread continues with other tasks
    while self.running:
        self.handle_trigger()
        self.control_gripper()
        current_pose = [self.x, self.y, self.z, self.roll, self.pitch, self.yaw]
        increment_pose = self.calc_pose_incre(self.base_pose,current_pose)

        finally_pose  = self.tools.rpy_to_rotvec(increment_pose[3], increment_pose[4], increment_pose[5])

        increment_pose[3:6] = finally_pose

        # Send pose to robotic arm
        if self.flag:
            self.ur_control.sevol_l(increment_pose)

        time.sleep(0.02) # Update at 50Hz

This section of code converts the RPY rotation data of the calculated increment_pose into a rotation vector and sends it to the robotic arm (UR robots accept XYZ coordinates and rotation vectors for control). Control data is only sent to the robotic arm when self.flag is set to True.

Practical Operation

The teleoperation code is located at: `pika_remote_ur/teleop_ur.py`
  1. Power on the UR robotic arm and enable the joint motors. If the robotic arm’s end effector is equipped with a gripper or other actuators, enter the corresponding load parameters.

  2. Configure the robotic arm’s IP address on the tablet.

  3. Configure the Tool Coordinate System.

The end effector coordinate system must be set with the Z-axis pointing forward, X-axis pointing downward, and Y-axis pointing left. In the code, we rotate the Pika coordinate system 90° counterclockwise around the Y-axis, resulting in the Pika coordinate system having the Z-axis forward, X-axis downward, and Y-axis left. Therefore, the robotic arm’s end effector (tool) coordinate system must be aligned with this configuration; otherwise, the control will malfunction.

  4. For first-time use, set the speed to 20-30% and enable remote control of the robotic arm.

  5. Connect the tracker to the computer via a USB cable and calibrate the tracker and base station.

Navigate to the ~/pika_ros/scripts directory and run:

bash calibration.bash 

Once positioning calibration is complete, close the program.

  6. Connect Pika Sense and Pika Gripper to the computer using USB 3.0 cables. Note: Connect Pika Sense first (it should be assigned the port /dev/ttyUSB0), then connect the Pika Gripper (which requires 24V DC power supply, and should be assigned the port /dev/ttyUSB1).

  7. Run the code:

cd pika_remote_ur

python3 teleop_ur.py

The terminal will output numerous logs, with the most common one being:

teleop_ur - WARNING - Failed to obtain pose data for T20, retrying in the next cycle...

Wait until the above warning stops appearing and is replaced by:

pika.vive_tracker - INFO -  Detected new device update: T20

Then you can start teleoperation by double-clicking the Pika Sense.

1 post - 1 participant

Read full topic

by Agilex_Robotics on January 27, 2026 03:38 AM

January 26, 2026
Should feature-adding/deprecating changes to core repos define feature flags?

When a new feature or deprecation is added to a C++ repo, it would be useful to have an easy way of detecting whether this feature is available.

Currently, it’s possible to use has_include (if the feature added a whole new header file), or you’re left with try_compile in CMake. Or version checks, which get very quickly very complicated.

Take for example Add imu & mag support in `tf2_sensor_msgs` (#800) by roncapat · Pull Request #813 · ros2/geometry2 · GitHub which added support for transforming IMU messages. If my package uses this feature and has a fallback for the releases where it’s missing, I need a reliable way for detecting the presence of the feature. I went with try_compile and it works.

However, imagine that tf2_sensor_msgs::tf2_sensor_msgs target automatically adds a compile definition like `-DTF2_SENSOR_MSGS_HAS_IMU_SUPPORT`. It would be much easier for downstream packages.

As long as it’s feasible, I very much want to have single-branch ROS2 packages for all distros out there, and this kind of packages would benefit a lot.

Another example: ament_index_cpp added std::filesystem interface recently. For downstream packages that want to work with both the old and new interfaces, there are some ifdefs needed in the implementation. But it doesn’t make sense to me for each package using ament_index_cpp to do the try_compile check…

What do you think about adding such feature flags to packages? Would it be maintainable? Would there be any drawbacks?

1 post - 1 participant

Read full topic

by peci1 on January 26, 2026 04:15 PM

PlotJuggler 2026: it needs your help

Why PlotJuggler 2026

Soon it will be the 10th anniversary of my first commit in the PlotJuggler repo.

I built this to help people visualize their data faster, and almost everyone I know in the ROS community uses it (Bubble? False confirmation bias? We will never know).

What I do know is that in an era where we have impressive (and VC-backed) visualization tools, PlotJuggler is still used and loved by thousands of roboticists.

I believe the reason is that it is not just a “visualization” tool, but a debugging one; fast, nimble and effective, like vi… if you are into that.

I decided that PJ deserves better… my users do! And I have big plans to make that happen… with your help (mostly, your company’s help).

Crowdfunding: PlotJuggler’s future, shaped together

This is the reason why I am launching a crowdfunding campaign, targeted to companies, not individuals.

This worked for me quite well 3 years ago, when I partially financed the development of Groot2, the BehaviorTree.CPP editor. But this time is different: if I reach my goals, 100% of the result will be open source and truly free, forever.

This is my roadmap: PlotJuggler 2026 - Google Slides

  1. Extension Marketplace — discover and install plugins with one click.
  2. Connectors to data in the cloud — access your logs wherever they are.
  3. Connect to your robot from any computer, with or without ROS.
  4. New data transform editor — who needs Matlab?
  5. Efficient data storage for “big data”.
  6. Images and videos (at last?).
  7. PlotJuggler on the web? I want to believe. You want too.

Contact me at dfaconti@aurynrobotics.com if you want to know more.

Why you should join

  1. You or your team already uses PlotJuggler. Invest in the tool that saves you debugging hours every week.
  2. Shape the roadmap. Backers get a voice in prioritizing features that matter to your workflow.
  3. Public recognition. Your company logo in the documentation and release announcements.
  4. Be the change you want to see in the open-source world. We all like a good underdog story. Davide VS Goliath (pun intended), open-source vs closed-source (reference intended). Yes, you can make that happen.

FAQ

What if I want another feature?

Contact me and tell me more.

What if I am not interested in all these features, but only 1 or 2?

We will find a way and negotiate a contribution that is proportional to what you care about.

How much should a backer contribute?

I am not giving you an upper limit :wink: , but use €5,000 as the smallest “quantum”. This is the reason why this is targeted to companies, not individuals.

How will you use that money?

I plan to hire 1-3 full-time employees for 1 year. The more budget I can obtain, the more I can build.

“I think it is great, but I am not in charge of making this decision at my company”

Give me the email of the decision maker I need to bother, and I will do it for you!

2 posts - 2 participants

Read full topic

by facontidavide on January 26, 2026 10:57 AM

January 25, 2026
Ros2cli, meet fzf

ros2cli Interactive Selection: Fuzzy Finding

ros2cli just got a UX upgrade: fuzzy finding!

ros2cli_fzf

Tab completion is nice, but it still requires you to remember how the name starts. Now you can just type any part you remember and see everything that matches. Think “search” not “autocomplete”.

Tab Completion vs. Fuzzy Finding

Tab completion:


$ ros2 topic echo /rob<TAB>

# Shows: /robot/

$ ros2 topic echo /robot/<TAB><TAB><TAB>...

# Cycles through: base_controller, cmd_vel, diagnostics, joint_states...

# Was it under /robot? Or /robot/sensors? Or was it /sensors/robot?

# Start over: Ctrl+C

Fuzzy finding (new):


$ ros2 topic echo

# Type: "lidar"

# Instantly see ALL topics with "lidar" anywhere in the name:

/robot/sensors/lidar/scan

/front_lidar/points

/safety/lidar_monitor

# Pick with arrows, done!

What Works Now?

  • ros2 topic echo / info / hz / bw / type - Find topics by any part of their name

  • ros2 node info - Browse all nodes, filter as you type

  • ros2 param get - Pick node, then browse its parameters

  • ros2 run - Find packages/executables without remembering exact names

There are plenty more opportunities where we could integrate fzf, not only in more verbs of ros2cli (e.g. ros2 service) but also in other tools in the ROS ecosystem (e.g. colcon).

I’d love to see this practice propagate, but for this I need the help of the community!

Links

  • PR: #1151 (currently only available on rolling)

  • Powered by: fzf

5 posts - 3 participants

Read full topic

by tnajjar on January 25, 2026 08:02 PM

January 24, 2026
LinkForge 1.2.0: Centralized ros2_control Dashboard & Inertial Precision

Hi Everyone,

Following the initial announcement of LinkForge, I’m appreciative of the feedback. Today I’m releasing v1.2.0, focused on internal stability and better diagnostic visuals.

Key Technical Changes:

  • Centralized ros2_control Dashboard: We’ve consolidated all hardware interfaces and transmissions into a single dashboard. This makes managing complex actuators much faster and prevents property-hunting across panels.

  • Inertial Origins & CoM Editing: We’ve exposed the inertial origin in the UI and added a GPU-based overlay showing a persistent Center of Mass sphere. This allows for manual fine-tuning and immediate visual verification of your physics model directly in the 3D viewport.

  • Hexagonal Architecture: The core logic is now decoupled from the Blender API, making the codebase more testable (now with near-full core coverage) and future-proof.

We also fixed several bugs related to Xacro generation and mesh cloning for export robustness. Getting the physics right in the editor is the best way to prevent “exploding robots” in simulation.

:hammer_and_wrench: Download (Blender Extensions): LinkForge — Blender Extensions
:open_book: Documentation: https://linkforge.readthedocs.io/
:laptop: Source Code: GitHub - arounamounchili/linkforge: Build simulation-ready robots in Blender. Professional URDF/XACRO exporter with validation, sensors, and ros2_control support.

Feedback on the new dashboard workflow is very welcome!

1 post - 1 participant

Read full topic

by arounamounchili on January 24, 2026 10:34 PM

January 21, 2026
Native ROS 2 Jazzy Debian packages for Raspberry Pi OS / Debian Trixie (arm64)

After spending some time trying to get ROS 2 Jazzy working reliably on Raspberry Pi via Docker and Conda (and losing several rounds to OpenGL, Gazebo plugins, and cross-arch issues), I eventually concluded:

On Raspberry Pi, ROS really only behaves when it’s installed natively.

So I built the full ROS 2 Jazzy stack as native Debian packages for Raspberry Pi OS / Debian Trixie (arm64), using a reproducible build pipeline:

  • bloom → dpkg-buildpackage → sbuild → reprepro

  • signed packages

  • rosdep-compatible

The result:

  • Native ROS 2 Jazzy on Pi OS / Debian Trixie

  • Uses system Mesa / OpenGL

  • Gazebo plugins load correctly

  • Cameras, udev, and ros2_control behave

  • Installable via plain apt

Public APT repository

:backhand_index_pointing_right: GitHub - rospian/rospian-repo: ROS2 Jazzy on Raspberry OS Trixie debian repo

Build farm (if you want to reproduce or extend it)

:backhand_index_pointing_right: GitHub - rospian/rospian-buildfarm: ROS 2 Jazzy debs for Raspberry Pi OS Trixie with full Debian tooling

Includes the full mini build-farm pipeline.

This was motivated mainly by reliability on embedded systems and multi-machine setups (Gazebo on desktop, control on Pi).

Feedback, testing, or suggestions very welcome.

3 posts - 2 participants

Read full topic

by ebodev on January 21, 2026 03:58 PM

