May 12, 2026
How to Build a Robot Arm IK Solver in ROS2 | NERO Arm Parametric Inverse Kinematics

Complete Tutorial on NERO Arm-Angle Parametric IK

Reference paper (Tsinghua University): Inverse kinematic optimization for 7-DoF serial manipulators with joint limits


Part 1. Overview

This document provides a complete mathematical tutorial on parameterized inverse kinematics (IK) for the NERO 7-DoF robotic arm.

The content mainly corresponds to:

  • Tsinghua University paper: Inverse Kinematics Solution for 7-DoF Robotic Arms with Joint Limit Optimization
  • Implementation: ik_solver.py
  • ROS2 real-time runtime node: ik_joint_state_publisher.py

Part 2. Algorithmic Background and Core Concepts

2.1 Fundamental Characteristics of 7-DoF Redundant Robot Arms

A 7-DoF robotic arm with an S-R-S configuration (Spherical Shoulder – Revolute Elbow – Spherical Wrist) introduces one additional redundant degree of freedom compared with a conventional 6-DoF manipulator.

This means that:

  • When the end-effector pose is fixed, the joint configuration may still have infinitely many solutions, and the arm can still move internally while keeping the end-effector stationary.

This type of motion, where the end-effector remains fixed while the robot reconfigures itself, is referred to as null-space motion.

Redundancy provides several important advantages:

  1. Joint limit avoidance
  2. Obstacle avoidance
  3. Elbow posture optimization
  4. Smoother trajectory generation

2.2 Elbow Angle Parameterization (Core Contribution of the Paper)

The core idea of the paper is:

Use a single parameter to represent the entire redundant degree of freedom. This parameter is called the elbow angle \psi (theta0 in the code implementation).

Geometric Definition of the Elbow Angle

When the end-effector pose is fixed, both the shoulder point S and the wrist point W are fixed in space.

The elbow point E then traces a circle in 3D space.
The rotational angle within the plane of this circle is defined as the elbow angle \psi.

  • S: Shoulder center (intersection point of the first 3 joint axes)
  • E: Elbow center (location of Joint 4)
  • W: Wrist center (intersection point of the last 3 joint axes)
  • Points S–E–W form a triangle with fixed side lengths
  • The elbow angle \psi determines the position of point E on the circle.

In one sentence:

  • \psi → elbow posture changes → joint angles change → end-effector remains unchanged

2.3 Differences Between This Method and Traditional Numerical IK Solvers

| Comparison Aspect | Numerical Iterative Methods (Jacobian / Damped Least Squares) | Elbow-Angle Parameterized Analytical IK |
| --- | --- | --- |
| Solution Strategy | Iterative convergence, dependent on initialization | Geometric derivation with closed-form solution |
| Computational Speed | Slow (~1–10 ms) | Extremely fast (<0.1 ms) |
| Convergence | May fail to converge; susceptible to local minima | Globally optimal and divergence-free |
| Joint Limit Handling | Passive constraint handling; easy to violate limits | Active feasible-region control; never exceeds limits |
| Null-Space Control | Requires projection operators; prone to instability | Direct control through \psi; naturally stable |
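
The speed claim is easy to sanity-check. Here is a minimal timing sketch, assuming the ik_arm_angle entry point shown in the Quick Start (Part 6); actual numbers depend on the feasible-region sampling step and your machine:

import time

import numpy as np
from ik_solver import ik_arm_angle

T = np.eye(4)
T[:3, 3] = [0.5, 0.0, 0.5]

t0 = time.perf_counter()
for _ in range(100):
    q_best, feasible_set = ik_arm_angle(T)
avg_ms = (time.perf_counter() - t0) / 100 * 1e3
print(f"average solve time: {avg_ms:.3f} ms")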

Part 3. Complete Algorithm Workflow

The entire algorithm consists of four core stages:

  1. Extract S, W, and θ_4 from the target pose.
  2. Compute the elbow point E from the elbow angle ψ, and analytically solve q_1–q_3 and q_5–q_7
  3. Compute the feasible region of the elbow angle under all joint-limit constraints
  4. Optimize the elbow angle within the feasible region using a weighted quadratic objective function

The following sections correspond directly to the equations in the paper and the implementation in code.


3.1 Step 1: Solving for S, W, and \theta_4 from the Target Pose

Theory from the Paper

Given the end-effector pose T_{07}, we first solve for:

  • Shoulder point S
  • Wrist point W (obtained by offsetting the end-effector frame backward by d_6)
  • Elbow joint angle \theta_4 (uniquely determined from the S–E–W triangle using the law of cosines)

Law of Cosines

\cos\theta_4 = \frac{||SW||^2 - ||SE||^2 - ||EW||^2}{2\,||SE||\,||EW||}

Code Implementation: _compute_swe_from_target

# Imports shared by all ik_solver.py excerpts in this post
import math
from typing import List, Optional, Tuple

import numpy as np

def _compute_swe_from_target(T07: np.ndarray, p: NeroParams) -> Tuple[np.ndarray, np.ndarray, Optional[float], np.ndarray]:
    R = T07[:3, :3]
    p_target = T07[:3, 3]
    z7 = R[:, 2]
    d6 = float(p.d_i[6])
    d1 = float(p.d_i[0])

    # End-effector flange center
    O7 = p_target - p.post_transform_d8 * z7
    # Wrist center W: offset backward from the flange by d6
    W = O7 - d6 * z7
    # Shoulder center S: fixed at height d1 above the base
    S = np.array([0.0, 0.0, d1], dtype=float)

    # Solve the absolute value of θ4 using the law of cosines
    q4_abs = _solve_theta4_from_triangle(S, W, p)

    # Unit vector from shoulder to wrist
    v_sw = W - S
    n_sw = np.linalg.norm(v_sw)
    u_sw = v_sw / n_sw if n_sw > 1e-12 else np.array([0.0, 0.0, 1.0])

    return S, W, q4_abs, u_sw

Helper Function: _solve_theta4_from_triangle

def _solve_theta4_from_triangle(S: np.ndarray, W: np.ndarray, p: NeroParams) -> Optional[float]:
    l_sw = np.linalg.norm(W - S)
    l_se = abs(p.d_i[2])
    l_ew = abs(p.d_i[4])

    c4 = (l_sw**2 - l_se**2 - l_ew**2) / (2.0 * l_se * l_ew)
    c4 = np.clip(c4, -1.0, 1.0)

    return math.acos(c4)

Key Insight

The elbow joint angle θ_4 depends only on the geometric link lengths and is completely independent of the arm angle ψ.
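
A quick numeric check of the formula above (made-up link lengths, not the real NERO parameters): with ||SE|| = ||EW|| = 0.30 m and ||SW|| = 0.45 m, θ_4 comes out the same no matter which arm angle is later chosen:

import math

l_se, l_ew, l_sw = 0.30, 0.30, 0.45  # illustrative lengths only
c4 = (l_sw**2 - l_se**2 - l_ew**2) / (2.0 * l_se * l_ew)
print(math.degrees(math.acos(c4)))  # ~82.8 deg, for any choice of arm angle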

3.2 Step 2: Solving the Elbow Point E from the Arm Angle \psi (Core Geometry)

Theory from the Paper

The elbow point E lies on a circle whose chord is defined by the segment SW:

E = C + r(\cos\psi \, e_1 + \sin\psi \, e_2)

Where:

  • C: circle center
  • r: circle radius
  • e_1,e_2: orthonormal basis vectors spanning the circle plane

Code Implementation: _elbow_from_arm_angle

def _elbow_from_arm_angle(S: np.ndarray, W: np.ndarray, theta0: float, p: NeroParams) -> Optional[np.ndarray]:
    l_se = abs(p.d_i[2])
    l_ew = abs(p.d_i[4])

    sw = W - S
    l_sw = np.linalg.norm(sw)
    u_sw = sw / l_sw

    # Projection of circle center C onto line SW
    x = (l_se**2 - l_ew**2 + l_sw**2) / (2.0 * l_sw)

    r2 = l_se**2 - x**2
    r = math.sqrt(max(0.0, r2))

    C = S + x * u_sw

    # Construct circle-plane coordinate system e1, e2
    os_vec = S.copy()
    t = np.cross(os_vec, u_sw)

    e1 = t / np.linalg.norm(t)

    e2 = np.cross(u_sw, e1)
    e2 = e2 / np.linalg.norm(e2)

    # Compute elbow point E from arm angle theta0
    E = C + r * (math.cos(theta0) * e1 + math.sin(theta0) * e2)

    return E

This is the geometric core of the entire algorithm.
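
A useful sanity check on this construction: for any arm angle ψ, the computed E must keep ||E−S|| = l_se and ||E−W|| = l_ew. Here is a standalone sketch with illustrative numbers (it mirrors the geometry above rather than calling the solver):

import math
import numpy as np

S = np.array([0.0, 0.0, 0.266])
W = np.array([0.3, 0.1, 0.5])
l_se, l_ew = 0.30, 0.30  # illustrative lengths satisfying the triangle inequality

sw = W - S
l_sw = np.linalg.norm(sw)
u_sw = sw / l_sw
x = (l_se**2 - l_ew**2 + l_sw**2) / (2.0 * l_sw)  # center offset along SW
r = math.sqrt(l_se**2 - x**2)                     # circle radius
C = S + x * u_sw
e1 = np.cross(S, u_sw)
e1 /= np.linalg.norm(e1)
e2 = np.cross(u_sw, e1)

for psi in np.linspace(-math.pi, math.pi, 8):
    E = C + r * (math.cos(psi) * e1 + math.sin(psi) * e2)
    assert abs(np.linalg.norm(E - S) - l_se) < 1e-9
    assert abs(np.linalg.norm(E - W) - l_ew) < 1e-9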

3.3 Step 3: Analytically Solving All Joint Angles from S–E–W

3.3.1 Shoulder Joints: q1, q2, q3

The paper derives a direct closed-form solution using geometric projection:

  • q1 is obtained from the projection of point E onto the base plane
  • q2 is determined by the height of E
  • q3 is solved from the direction of the wrist relative to the elbow

Code: _solve_q123_from_swe

def _solve_q123_from_swe(E: np.ndarray, W: np.ndarray, q4: float, p: NeroParams) -> List[np.ndarray]:
    d0 = p.d_i[0]
    d2 = p.d_i[2]
    d4 = p.d_i[4]

    Ex, Ey, Ez = E

    # q2
    c2 = (Ez - d0) / d2
    c2 = np.clip(c2, -1.0, 1.0)

    s2_abs = math.sqrt(max(0.0, 1.0 - c2**2))

    s4 = math.sin(q4)
    c4 = math.cos(q4)

    sols = []

    # Traverse both positive and negative s2 configurations
    for s2 in (s2_abs, -s2_abs):

        # q1
        c1 = -Ex / (d2 * s2)
        s1 = -Ey / (d2 * s2)

        n1 = math.hypot(c1, s1)

        c1 /= n1
        s1 /= n1

        q1 = math.atan2(s1, c1)
        q2 = math.atan2(s2, c2)

        # q3
        v = W - E
        col2 = -v / d4

        u1, u2, u3 = col2

        b1 = (s2 * c1 * c4 - u1) / s4
        b2 = (u2 - s1 * s2 * c4) / s4

        s3 = s1 * b1 + c1 * b2
        c2c3 = -c1 * b1 + s1 * b2

        c3 = c2c3 / c2 if abs(c2) > 1e-8 else (u3 + c2 * c4) / (s2 * s4)

        n3 = math.hypot(s3, c3)

        s3 /= n3
        c3 /= n3

        q3 = math.atan2(s3, c3)

        sols.append(np.array([q1, q2, q3]))

    return sols

3.3.2 Wrist Joints: q5, q6, q7

The paper analytically extracts the wrist joint angles directly from the transformation matrix T_{47}:

  • cos \theta_6 = T_{47}[1,2]
  • \theta_5 and \theta_7 are computed from neighboring matrix element ratios

Code: _extract_567_from_T47_paper

def _extract_567_from_T47_paper(T47: np.ndarray) -> List[np.ndarray]:
    sols = []

    c6 = np.clip(T47[1, 2], -1.0, 1.0)

    for sgn in (1.0, -1.0):

        s6 = sgn * math.sqrt(max(0.0, 1.0 - c6**2))

        if abs(s6) < 1e-8:
            continue

        th6 = math.atan2(s6, c6)

        th5 = math.atan2(T47[2, 2] / s6, T47[0, 2] / s6)

        th7 = math.atan2(T47[1, 1] / s6, -T47[1, 0] / s6)

        sols.append(np.array([th5, th6, th7]))

    return sols

3.4 Step 4: Joint Limits → Feasible Region of the Arm Angle

Theory from the Paper

Each joint's limit interval [q_{min}, q_{max}] admits only certain arm angles; equivalently, it rules out an invalid region of the arm angle.

The intersection of all valid intervals yields the feasible arm-angle region \Psi_F.

Only arm angles within this feasible region guarantee that all joints remain inside their limits.

Code: _get_theta0_feasible_region

def _get_theta0_feasible_region(T07: np.ndarray, p: NeroParams, step: float = 0.01) -> List[float]:
    feasible = []

    for theta0 in np.arange(-math.pi, math.pi, step):

        if _ik_one_arm_angle(T07, theta0, p):
            feasible.append(float(theta0))

    return feasible

Internally, the function calls _ik_one_arm_angle, which performs the following steps:

  • Substitute the arm angle \psi
  • Solve the complete joint configuration
  • Check whether all joints satisfy their limits
  • If valid → add the arm angle to the feasible region
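
Because the feasible region is returned as a flat list of sampled angles, grouping contiguous samples back into intervals makes it easier to inspect. A small post-processing sketch (my own helper, not part of the solver itself):

def group_feasible_intervals(feasible: List[float], step: float = 0.01) -> List[List[float]]:
    # Merge consecutive samples (spaced by `step`) into [lo, hi] intervals.
    intervals: List[List[float]] = []
    for theta in feasible:
        if intervals and theta - intervals[-1][1] <= 1.5 * step:
            intervals[-1][1] = theta  # extend the current interval
        else:
            intervals.append([theta, theta])  # start a new interval
    return intervals

# e.g. group_feasible_intervals(_get_theta0_feasible_region(T07, p))
# might return [[-2.10, -0.63], [0.45, 1.88]] (illustrative numbers)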

3.5 Step 5: Optimal Arm-Angle Selection (Weighted Quadratic Objective Function)

Theory from the Paper

The objective function is defined as:
f(\psi) = \sum_i w_i \, (q_i(\psi) - q_{i,prev})^2

  • w_i: weight coefficient, which increases as the corresponding joint approaches its mechanical limit.
  • Objective: To minimize the overall joint motion while keeping all joints as far as possible from their limits.

Weighting Function (Equation 20 in the Paper)

  • w_i = \frac{bx}{e^{a(1-x)} - 1}, \quad x \ge 0
  • w_i = \frac{-bx}{e^{a(1+x)} - 1}, \quad x < 0

Where x is the joint position normalized to [-1, 1] within its limit range, and (matching the code implementation below):

  • a = 2.38
  • b = 2.28

Code: _weight_limits

def _weight_limits(q: float, q_min: float, q_max: float) -> float:
    span = q_max - q_min

    x = 2.0 * (q - (q_min + q_max) * 0.5) / span

    a = 2.38
    b = 2.28

    if x >= 0:
        den = math.exp(a * (1 - x)) - 1
        return b * x / den
    else:
        den = math.exp(a * (1 + x)) - 1
        return -b * x / den
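
To see how the weight behaves, evaluate it across a joint's range: it is near zero at mid-range and grows sharply toward either limit. A quick check using the function above (the limits here are arbitrary example values):

q_min, q_max = -2.0, 2.0  # example joint limits in radians
for q in (-1.9, -1.0, 0.0, 1.0, 1.9):
    print(f"q={q:+.1f}  w={_weight_limits(q, q_min, q_max):.3f}")
# w is 0 at the center, ~0.5 halfway out, and >17 near a limit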

Optimal Arm-Angle Search

def _optimal_theta0(feasible_theta0, T07, p, q_prev):
    best_cost = math.inf
    best_t = feasible_theta0[0]

    for t in feasible_theta0:
        sols = _ik_one_arm_angle(T07, t, p)
        for q_full in sols:
            q = q_full[:7]
            # Weighted quadratic cost: motion of joints that sit close to
            # their limits is penalized more heavily.
            cost = 0.0
            for i in range(7):
                lo, hi = p.joint_limits[i]
                w = _weight_limits(q[i], lo, hi)
                dq = abs(q[i] - q_prev[i])
                cost += w * dq * dq
            if cost < best_cost:
                best_cost = cost
                best_t = t

    return best_t

This is the optimal solution selection strategy proposed in the paper.

In essence, it transforms the problem into:

One-dimensional quadratic-function minimization → globally optimal solution → no iterative solving and no local minima.


Part 4. Null-Space Motion Principle (Naturally Embedded)

For a 7-DoF manipulator, the null space is directly controlled by the arm angle \psi.

The principle is straightforward:

  • The end-effector pose T_{07} remains unchanged
  • Only the arm angle ψ is varied
  • The robot joints automatically perform self-reconfiguration while keeping the end-effector fixed

This is known as null-space motion.

In the implementation, null-space motion can be generated simply by sweeping the arm angle:

for psi in np.linspace(-np.pi, np.pi, 100):
    q = _q_from_theta0(psi, T07, p)  # same end-effector pose, new internal posture

No Jacobian matrix is required,
no projection operator is needed,
and the motion remains smooth and stable without oscillation.


Part 5. Code Structure Overview (Clean Version)

Core functions in ik_solver.py (link):

  • _compute_swe_from_target: solve S, W, and θ_4 from the target pose
  • _solve_theta4_from_triangle: law-of-cosines helper for θ_4
  • _elbow_from_arm_angle: elbow point E from the arm angle ψ
  • _solve_q123_from_swe: shoulder joints q1–q3
  • _extract_567_from_T47_paper: wrist joints q5–q7
  • _ik_one_arm_angle: full solution for one arm angle, with joint-limit checks
  • _get_theta0_feasible_region: feasible arm-angle region
  • _weight_limits and _optimal_theta0: weighted cost and optimal arm-angle selection
  • ik_arm_angle: top-level entry point (see the Quick Start below)


Part 6. Quick Start Guide

import numpy as np
from ik_solver import ik_arm_angle, NeroParams

# Define target end-effector pose
T = np.eye(4)
T[:3, 3] = [0.5, 0.0, 0.5]

# Solve inverse kinematics
q_best, feasible_set = ik_arm_angle(T)

print("Optimal joint configuration:", q_best)
print("Number of feasible arm angles:", len(feasible_set))

Part 7. Summary
This method presents a closed-form inverse kinematics solver for a 7-DoF S–R–S robotic manipulator, combined with a 1D quadratic optimization over the arm-angle null space.

Key characteristics:

1. Pure geometric closed-form solution

  • No iterative optimization
  • No Jacobian-based numerical solving

2. Automatic joint limit compliance

  • Feasible region explicitly constrained

3. Optimality guaranteed via quadratic cost function

  • Efficient 1D optimization over the arm angle

4. Natural support for null-space motion

  • Arm angle acts as the redundancy parameter

5. Real-time performance

  • Extremely fast computation suitable for control loops and embodied systems

1 post - 1 participant

Read full topic

by Agilex_Robotics on May 12, 2026 06:51 AM

May 11, 2026
Introducing an Rviz Alternative that runs SUPER FAST

A new ROS2-native, SUPERFAST visualizer written in Rust — `fastviz`

Hi everyone,

I’ve been working on a project called **fastviz**: a Rust-based 3D visualizer that runs as a native ROS2 node, built on `wgpu` and `egui`. RViz has been the workhorse of the community for many years and isn’t going anywhere — fastviz is just an experiment to see how much smoothness and headroom we can get out of a pure-Rust + GPU-native pipeline, and I wanted to share where it’s at in case it’s useful to others.

It’s at a preliminary stage — only a handful of message types are wired up so far — but the core architecture is in place and it already renders things like TurtleBot 4 in Gazebo end-to-end.

**Repo:** https://github.com/ksatyaki/fastviz

---

## The bits I’m most excited about

### 1. It IS a ROS2 node

No bridge, no middleware, no separate process. fastviz subscribes directly to topics via `r2r`, so there’s nothing extra to wire up between your robot and the visualizer.

### 2. The render thread never touches ROS2

The `r2r` executor runs on a dedicated thread; the renderer talks to it through an `Arc<RwLock>` with brief, write-only handoffs. The UI never blocks on DDS — frames stay smooth even when a noisy topic is flooding the graph.

### 3. GPU-accelerated via `wgpu`

Vulkan on Linux, Metal on macOS, DX12 on Windows, and WebGPU is on the menu too. Same renderer everywhere.

### 4. Revision-cached render passes

A `revision()` counter on the scene graph drives pass-level caching, so an idle scene costs ~zero CPU. Walking away from the visualizer doesn’t pin a core.

### 5. GPU-side per-entity transforms for point clouds

The point-cloud pipeline is instanced, per-entity transforms happen on the GPU, and the prepare step is revision-cached with buffer reuse. PointCloud2 streams stay cheap.

### 6. TF tree reimplemented in Rust

No `tf2` C++ dependency — TF maintenance lives in pure Rust alongside the rest of the ingestion layer.

### 7. TOML config as the source of truth

Layouts are declared in a TOML file — diff-friendly, version-controllable, and easy to commit alongside your robot’s launch config.

### 8. Polled wildcard topic discovery

Drop `"*"` into a topic list and every matching message type in the ROS graph gets auto-subscribed within about a second. Handy when you’re exploring an unfamiliar bag or sim and don’t want to enumerate topics by hand.

### 9. Per-topic QoS overrides in config

`reliability`, `durability`, and `depth` are all settable per topic from the same TOML file.

### 10. URDF support with STL / OBJ / DAE meshes

URDF parsing via `urdf-rs`; mesh loading covers STL, OBJ, and Collada. `package://` URIs resolve through `AMENT_PREFIX_PATH`, and `JointState` drives the FK.

### 11. Dev container + release Docker image

The `.devcontainer/` ships an Ubuntu 24.04 + ROS2 Jazzy image with `r2r` build deps, the Vulkan loader, and NVIDIA passthrough already wired up. A root `Dockerfile` also builds a release image you can `docker run`.

---

## What’s supported today (early days!)

This is very preliminary — only a few message types are supported right now:

| Topic kind | Message |
| -------------- | -------------------------------- |
| `[map]` | `nav_msgs/OccupancyGrid` |
| `[poses]` | `geometry_msgs/PoseStamped` |
| `[pose_arrays]` | `geometry_msgs/PoseArray` |
| `[paths]` | `nav_msgs/Path` |
| `[scans]` | `sensor_msgs/LaserScan` |
| `[points]` | `sensor_msgs/PointCloud2` |
| `[tf]` | `tf2_msgs/TFMessage` |
| `[urdf]` | `std_msgs/String` + `JointState` |

`MarkerArray`, `Image`, `Imu`, `Odometry`, and friends are on the near-term roadmap. ROS2 Jazzy is the only distro currently tested.

---

## Try it

```sh
git clone https://github.com/ksatyaki/fastviz.git
cd fastviz
source /opt/ros/jazzy/setup.bash
cargo build --release
cargo run -p app -- --config configs/turtlebot4.toml
```

Or via the dev container — open the folder in VS Code / Cursor and pick “Reopen in Container”.

---

## Help wanted

If you give it a spin, I’d genuinely love to hear:

- which message types you’d want supported next,

- what kinds of bags would make good benchmarks,

- any architectural input on plugins, MCAP playback, or multi-window layouts.

Issues, PRs, and “this completely broke on my robot” reports are all very welcome.

Hopefully this can grow into something useful for the community. Thanks for taking a look!

**GitHub:** https://github.com/ksatyaki/fastviz

3 posts - 2 participants

Read full topic

by JackMcMurdo on May 11, 2026 07:47 PM

ROSCon Global 2026 Registration Now Open! Workshop and exhibitor info now available!

ROSCon Global 2026 Registration Now Open!

Workshop and exhibitor info now available


Hi Everyone,

I am happy to announce that registration for ROSCon Global in Toronto is now open! We highly encourage you to register as soon as possible as ROSCon often sells out and our most popular workshops fill up fast. Early bird ticket prices will be available until July 12th, 2026. Our early bird rates are quite generous and effectively make workshop registration free! Even if you don’t plan to attend a workshop, we recommend you join us for all three days of the event as there will be a number of other activities, like birds of a feather sessions, happening on the first day of ROSCon Global. Given what I’ve heard from other community members, there will likely be a number of other events happening immediately before and after the official ROSCon Global event (I might be cooking something up for the Friday after the conference :wink:).

If you are curious about what else there is to do in the Toronto area while visiting, our hosts have graciously put together a micro-site with dining, shopping, and tourist activities in Toronto.

ROSCon Workshops

Due to the incredible demand for ROSCon workshops last year we’ve expanded our workshop capacity for 2026! We’re excited to announce that this year we will be offering eight half-day workshops and two full-day workshops. ROSCon Global is now officially a three day event, and even if you choose not to attend workshops there will be Birds of a Feather sessions and other events during the first day of the event. We recommend that you plan to be in Toronto for the entire week as we have a couple other big announcements coming out in the next few weeks!

I’ve summarized our ROSCon Global workshops below, but a full list is available on the website.

  • [Half-day] From URDF to USD: A Complete Pipeline for High-Fidelity ROS 2 Simulation in NVIDIA Isaac Sim with Ji Yuan Feng and Ayush Ghosh. Build a complete robot simulation pipeline from raw URDF to ROS 2 integration using NVIDIA Isaac Sim.
  • [Half-day] Train and Deploy Contact-Rich Robot Manipulation Skills With Isaac Lab and Isaac ROS with Raffaello Bonghi, Rishabh Chadha, Ashwin Varghese Kuruttukulam, and Ayusman Saha. Develop and deploy contact-rich manipulation policies for tasks like gear assembly using Isaac Lab and Isaac ROS.
  • [Half-day] Train, Simulate, Deploy: Agentic AI from Cloud to Robot with Ken O’Brien, Graham Schelle, Sarunas Kalade, Mehdi Saeedi, and Adam Dąbrowski. Explore practical pathways for training and deploying embodied AI models across cloud environments and on-device NPUs.
  • [Half-day] Scaling ros2_control: From Async Hardware Drivers to RL Inference Engines with Sai Kishor Kothakota, Bence Magyar, Christoph Fröhlich, and Denis Štogl. Architect asynchronous hardware interfaces to prevent I/O bottlenecks and deploy reinforcement learning models efficiently.
  • [Full-day] Advanced Aerial Robotics with PX4 and ROS 2: Custom Flight Modes and Beyond with Ramon Hernan Roche Quintana, Beniamino Pozzan, and Patrik Dominik Pordi. Build custom flight modes and sequence complex autonomous behaviors for aerial robots entirely in ROS 2.
  • [Half-day] Motion Planning Fundamentals with MoveIt with Yara Shahin and Timotej Gaspar. Learn the core concepts of path planning and collision avoidance by configuring MoveIt 2 for a real robot from scratch.
  • [Half-day] Introduction to ROS and Building Robots with Open-Source Software with Geoff Biggs and Katherine Scott. Master the fundamentals of building robots using open-source software.
  • [Half-day] Declarative ROS workspaces with Pixi and RoboStack: A hands-on workshop for reproducible ROS development with Ruben Arts, Wolf Vollprecht, and Bas Zalmstra. Use Pixi and RoboStack to declare your entire ROS environment in a single file to guarantee completely reproducible development setups.
  • [Half-day] Mastering the Jazzy RMW: A Performance-Driven Framework for ROS 2 Middleware Selection and Tuning with Nathan Van Heyst, Tony Baltovski, Luis Camero, and Jose Mastrangelo. Optimize ROS 2 middleware performance through systematic tuning, benchmarking, and high-scale stress testing.
  • [Full-day] Navigation University with David Lu and Binit Shah. Configure a simulated mobile robot step-by-step to successfully navigate its environment using maps, localization, and planning.

ROSCon Global Sponsors

ROSCon Global wouldn’t be possible without our wonderful sponsors. Below you will find a list of the initial batch of ROSCon Global Sponsors. Make sure to check them out at our ROSCon Global Expo Hall. Many of our sponsors will be holding exclusive demonstrations during ROSCon Global that you’ll want to check out.

:1st_place_medal: Gold Sponsors

:2nd_place_medal: Silver Sponsors

:3rd_place_medal: Bronze Sponsors

:seedling: Startup Alley Sponsors

3 posts - 2 participants

Read full topic

by Katherine_Scott on May 11, 2026 11:11 AM

Built an Autonomous Mobile Robot (AMR) for warehouse automation - from CAD to code

Designed the chassis in Fusion 360, exported to URDF, and built the full stack using ROS 2.

Stack:

  • Nav2 for navigation & path planning
  • ArUco-based visual docking for precise alignment
  • Custom waypoint sequencing for multi-shelf tasks
  • Gazebo + RViz for simulation & visualization

Challenge:

LiDAR point cloud rotated with the robot in RViz, breaking the mapping and navigation.

Root cause:

odom/TF mismatch during turns.

Developed a GroundTruthOdom node using Gazebo pose data to publish a stable /odom and consistent TF, including handling ROS-Gazebo timestamp issues.

In the video: robot autonomously services requests for Shelf B and Shelf C and delivers them to the drop-off zone.

Happy to discuss the system or challenges!

#ros2 #robotics #AMR #nav2 #Gazebo #urdf #WarehouseAutomation #OpenRobotics #opencv #ComputerVision


1 post - 1 participant

Read full topic

by Sourav24 on May 11, 2026 04:58 AM

May 10, 2026
RoboInfra is the missing infrastructure layer for robotics development

Robot models defined in URDF act as the “source code” of robotic systems, but the tooling around them is fragmented, slow, and dependent on full ROS installations.

RoboInfra solves this by providing a unified API platform that enables developers to:

  1. Validate URDF files instantly with 9+ structural checks
  2. Analyze robot kinematics including degrees of freedom and chain structure
  3. Compare URDF versions semantically instead of raw XML diffs
  4. Convert URDF to simulation-ready formats like SDF (Gazebo) and MJCF (MuJoCo)
  5. Preview robots in 3D directly in the browser
  6. Integrate validation into CI/CD pipelines using a lightweight GitHub Action
  7. Automate workflows via a Python SDK

All of this works without installing ROS, reducing setup time from hours to seconds.

RoboInfra is built for:

  1. Robotics developers
  2. Simulation engineers
  3. Research labs (RL, control, autonomy)
  4. DevOps teams managing robot pipelines

Free URDF Validator RoboInfra

1 post - 1 participant

Read full topic

by Robotic on May 10, 2026 06:55 PM

May 09, 2026
ROS 2 Launch YAML/XML Schema + VS Code Integration (Validation, Completion, Substitutions)

Hi everyone,

I’ve been working on a project to improve the developer experience when writing ROS 2 Jazzy launch files.
It provides JSON Schema for ROS 2 launch YAML, XSD for launch XML, and VS Code integration including auto-completion, validation, hover docs, and substitution snippets.

:package: Repository
https://github.com/ok-tmhr/ros2_awesome


:rocket: Features

1. YAML Schema for ROS 2 Launch Files

Provides full validation and auto-completion for:

  • launch: structure

  • actions (node, group, arg, set_env, etc.)

  • parameters

  • substitutions

  • conditions (if, unless)

Schema URL:

https://ok-tmhr.github.io/ros2_awesome/schema/launch.yaml

You can enable it by adding this at the top of your .launch.yaml:

# yaml-language-server: $schema=https://ok-tmhr.github.io/ros2_awesome/schema/launch.yaml


2. XML Schema (XSD) for launch XML

https://ok-tmhr.github.io/ros2_awesome/schema/launch_ros.xsd

Usage:

<launch
  xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:noNamespaceSchemaLocation="https://ok-tmhr.github.io/ros2_awesome/schema/launch_ros.xsd">


3. Substitution Snippets for VS Code

The repo includes a snippet extension (launch_substitution.json).

It provides auto-completion for:

$(var ...)
$(env ...)
$(not ...)
$(eval ...)

Typing $ triggers suggestions.


4. Substitutions also work in parameter files

When a parameter YAML is loaded via a launch file, substitutions are evaluated:

my_node:
  ros__parameters:
    use_sim_time: $(var use_sim_time)
    robot_name: $(env ROBOT_NAME)

This is supported by the schema and by the snippet extension.


5. Sample directory included

The sample/ directory contains working examples for both YAML and XML launch files.

2 posts - 2 participants

Read full topic

by ok-tmhr on May 09, 2026 12:41 PM

May 08, 2026
ros2_lingua: A safe, dependency-aware grounding engine for LLMs

​Hi everyone,

​Like many of us, I’ve been experimenting with giving LLMs control over robot hardware. However, I quickly ran into the classic problems: LLMs hallucinate actions, assume prerequisites that haven’t been met (e.g., trying to drive a humanoid before stabilizing it), and most existing integrations are just tightly coupled, hardcoded scripts.

​To solve this, I built ros2_lingua — an open-source bridge that introduces a structured capability contract between ROS 2 nodes and LLMs.

​Instead of letting the LLM guess what topics or actions to call, ros2_lingua forces the LLM to output a plan based only on explicitly registered capabilities, and uses a backward-chaining planner to automatically inject missing prerequisite steps.

​How it works:

  1. ​Capability Advertisement: Any ROS 2 node can inherit from LinguaMixin to self-advertise its capabilities at boot. It defines its name, ROS action/service, parameters, preconditions, and postconditions.
  2. ​Backward-Chaining Planner: When a user gives a natural language instruction (e.g., “go to the table and pick up the bottle”), the Grounding Engine checks the robot’s current state against the capability schema. If the robot isn’t balanced, the planner automatically injects a stabilize_robot capability before the navigation step.
  3. ​Safe Dispatch: The DispatcherNode safely executes the validated plan over standard ROS 2 actions and services.
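
To make the planning idea concrete, here is a minimal, framework-agnostic sketch of backward chaining over a capability registry. All names here (Capability, plan, the example capabilities) are illustrative toys, not the actual ros2_lingua API:

from dataclasses import dataclass

@dataclass(frozen=True)
class Capability:
    name: str
    preconditions: frozenset
    postconditions: frozenset

def plan(goal: str, state: set, registry: list) -> list:
    """Backward-chain: pick a capability that achieves `goal`, then recursively
    plan for each unmet precondition first. (Toy version: no cycle handling.)"""
    if goal in state:
        return []
    cap = next(c for c in registry if goal in c.postconditions)
    steps = []
    for pre in cap.preconditions:
        if pre not in state:
            steps += plan(pre, state, registry)
            state = state | {pre}  # assume the sub-plan established it
    return steps + [cap.name]

registry = [
    Capability("stabilize_robot", frozenset(), frozenset({"balanced"})),
    Capability("navigate_to", frozenset({"balanced"}), frozenset({"at_goal"})),
    Capability("pick_object", frozenset({"at_goal"}), frozenset({"holding"})),
]
print(plan("holding", set(), registry))
# -> ['stabilize_robot', 'navigate_to', 'pick_object']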

Decoupled Architecture

​One of my main goals was to ensure the core logic was highly testable. The project is split into two layers:

ros2_lingua_core: A pure Python library containing the schema, registry, planner, and LLM backends (Ollama, OpenAI, Anthropic). It has zero ROS 2 dependencies, meaning the grounding engine can be unit-tested purely in Python.

ros2_lingua: The ROS 2 interface layer containing the GroundingNode, DispatcherNode, and mixins.

Links & Demo

​You can see a demo of the engine running with a local Ollama model and a mock humanoid setup, along with the full architecture documentation here:

​Documentation & Architecture: ros2_lingua — Documentation

GitHub Repository: https://github.com/purahan/ros2_lingua

​What’s Next & Feedback Request

​The project is currently a working prototype in Python. My immediate roadmap includes taking this to a release-ready state and building a C++ bridge so native controller nodes can easily advertise their capabilities.

​Since this is early development, I would love to get feedback from the community on the architecture—specifically on the schema design for the capability registry and how best to handle complex, long-running action pre-emptions within the Dispatcher.

​Thanks for your time, and I’d love to hear your thoughts!

1 post - 1 participant

Read full topic

by purahan on May 08, 2026 09:51 PM

Control Algorithm Dominance Survey

Hey guys, I’m doing a survey to gauge the dominance of different control engineering paradigms in industry: whether there has been a noticeable shift from classical control to more modern algorithms, or whether modern algorithms, while looking good on paper, remain stuck in research papers for the most part.
I would love everyone’s input, from students to seasoned researchers.
You’re still welcome to contribute if you don’t work directly in controls, or if your work is controls-adjacent, like SWE or mechanical design.

2 posts - 2 participants

Read full topic

by demi on May 08, 2026 01:57 PM

May 06, 2026
Where does latency in WebRTC video streaming come from? An analysis

We analyzed the glass-to-glass latency of streaming video from robots to the web using WebRTC. Typical total latency for remote streaming is 150-180 ms, but how does this break down?

Tl;dr:

  • The vast majority of latency actually comes from the camera itself and the USB bus (~100 ms).
  • H264 encoding and decoding add around 10 ms each (or less).
  • WebRTC only adds around 10 ms of latency for remote streaming (jitter buffers).
  • The rest is due to static network delay (“ping timing”, speed of light).

Read the full analysis here:

1 post - 1 participant

Read full topic

by chfritz on May 06, 2026 05:40 PM

ROS2 + Gazebo Harmonic on macOS 26

Hello,

It seems the ROS2 installation guide for macOS 26 is this: Installing ROS 2 on macOS — ROS 2 Documentation: Crystal documentation
Gazebo Harmonic on macOS 26 installation guide is this: Binary Installation on macOS — Gazebo harmonic documentation

I read online that there might be some incompatibility issues. I would like to understand the current picture of ROS2 + Gazebo Harmonic on macOS. Would those two guides work, and are there any expected issues?

1 post - 1 participant

Read full topic

by MT1234 on May 06, 2026 05:36 PM

Camera streaming made easy [NEW Release]

Greetings fellow roboticists,

you might know me from other projects such as ros_babel_fish, the QML ROS 2 module, or RQml.
Maybe as the dude who ends all their posts with an AI-generated image.

TLDR: ros_camera_server out now (link below). Efficient streaming to ROS, robust low-bandwidth to operator station over H.264/H.265 using rtp/srt/webrtc. Simple config, automatically optimized pipelines.

Camera streaming in ROS is kind of cumbersome, even though it’s always the same issue.
On the robot, you want raw (or compressed between PCs) ROS images for processing, and on the remote operator station, you want a low-latency, ideally low-bandwidth live video feed.
In academia, many setups I have encountered use compressed images for remote operator setups because it’s simple and, in good network conditions, works well enough.
Using usb_cam or gscam with compressed images is also what Opus 4.7 would suggest when asked.
While this does work, it’s quite resource- and bandwidth-intensive and is one of the roadblocks that need to be addressed to bring research solutions to the field, especially in rescue robotics, where bandwidth is a limited resource.

To save bandwidth, you can use something like gst_bridge to receive the ROS image, encode it in H.264, and send it to your remote operator station.
This will be approximately 1/10 to 1/30 of the bandwidth for comparable quality.

If your camera is a high-resolution USB camera, it will most likely stream jpeg encoded data for the high resolution, high fps options.
So your pipeline becomes:
Camera (jpeg) → usb_cam (decodes jpeg) → image_transport (re-encodes as jpeg for compressed) → gst_bridge → handwritten gstreamer pipeline (requires some technical knowledge to get right) → stream

Doesn’t take an expert to see that this is not optimal.
Here’s where my new ros_camera_server comes in.
You specify one yaml file with your cameras, each with one input and as many outputs as you want (and your compute can handle).
The outputs can differ in resolution and framerate.
Currently supported are ROS 2, RTP, SRT, and WebRTC.
The camera server will automatically create and optimize GStreamer pipelines based on your available hardware accelerations, which, in parallel, produce your ROS output and streams applying scale and framerate limiters as necessary.
Cutting the decode and re-encode overheads and significantly reducing latency and CPU usage.
JPEG camera input can also be published directly as a ROS-compressed image or forced to be decoded if needed.
Check the plots from my benchmark in the comments to see that the much easier configuration is not paid for with higher latency or overhead, and it beats the alternatives in both.

Here’s the repo:

If you can’t comply with the AGPL, you can contact me to see if we can find a suitable license for your use.

PS: The ros_camera_server preserves the image capture/header timestamp as a custom RTP header extension and can restore it from ros_camera_server H.264/H.265 streams.
So you can stream from the robot over RTP/SRT/WebRTC and restore it to ROS on the operator station or another robot, and the timestamp will be preserved in the ROS image output.

Benchmarks



I hope this helps groups without video streaming experts to create more robust remote control setups.
If you read this far and this was not of interest to you, I’m sorry, here’s your AI picture:

1 post - 1 participant

Read full topic

by StefanFabian on May 06, 2026 01:02 PM

ALERT: do NOT visit plotjuggler.com

Hi,

avoid the page “plotjuggler DOT com”

I did not create this page and I have no idea who did.

Please do not download any file from there, there is a high risk of Malware / Phishing.

I will try to have a solution to this ASAP.

Davide

3 posts - 2 participants

Read full topic

by facontidavide on May 06, 2026 12:56 PM

Rclnodejs 2.0.0 beta — ROS 2 Lyrical (beta) and Node.js 26 support

Hi all,

For those new to the project: rclnodejs is the Node.js client library for ROS 2, maintained under the Robot Web Tools umbrella. It lets you write ROS 2 nodes — publishers, subscribers, services, actions, parameters, lifecycle, etc. — in plain JavaScript or TypeScript, with a native binding to rcl/rmw so messages stay zero-copy where possible. It’s a good fit for web dashboards, Electron desktop apps, browser bridges, scripting, and rapid prototyping.

I’m happy to announce that rclnodejs 2.0.0-beta.0 is out — the first preview of the 2.x line, with first-class support for the upcoming ROS 2 Lyrical Luth release on Ubuntu 26.04 and the latest Node.js 26.

What’s new in 2.0.0-beta.0

  • ROS 2 Lyrical Luth (Ubuntu 26.04) is supported in addition to existing distros (Humble / Jazzy / Kilted / Rolling).

  • Node.js 26.x is supported, with Linux x64 and arm64 prebuilds so npm install works without a local toolchain.

New: rosocket — talk to ROS 2 from a browser, no JS library

This release ships a lightweight WebSocket bridge called rosocket plus an end-to-end demo. The point: a browser tab can subscribe and publish to topics (and call services) using only the built-in WebSocket and JSON APIs — no rosbridge or roslibjs needed.

URL convention:


ws://<host>:<port>/topic/<name>

ws://<host>:<port>/service/<name>

Browser-side pub/sub in a few lines:


const BRIDGE = 'ws://localhost:9000';

// Subscribe — every published message arrives as onmessage.

const sub = new WebSocket(`${BRIDGE}/topic/chatter`);

sub.onmessage = (ev) => console.log('recv:', ev.data); // {"data":"hi"}

// Publish — open, send JSON, close.

const pub = new WebSocket(`${BRIDGE}/topic/chatter`);

pub.onopen = () => {

pub.send(JSON.stringify({ data: 'hello from browser' }));

setTimeout(() => pub.close(), 200);

};

Live walkthrough:

rclnodejs rosocket demo

Code: demo/rosocket.
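
Since the bridge speaks plain WebSocket + JSON, the browser isn’t special; any language with a WebSocket client works. Here is a minimal Python counterpart of the example above (uses the third-party websockets package; bridge address and topic follow the demo):

# pip install websockets
import asyncio
import json

import websockets

BRIDGE = "ws://localhost:9000"

async def main():
    async with websockets.connect(f"{BRIDGE}/topic/chatter") as ws:
        await ws.send(json.dumps({"data": "hello from python"}))  # publish
        print("recv:", await ws.recv())  # next message published on the topic

asyncio.run(main())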

Electron desktop visualization demos

For richer UIs, the same library powers cross-platform desktop apps via Electron — HTML/CSS/Three.js/WebGL on the front end, with real ROS 2 nodes running in the Electron main process. The demos cover topics, a car controller, a manipulator, and a turtle TF2 visualizer.

rclnodejs electron manipulator demo

Code: demo/electron.

Cheers,

Minggang

1 post - 1 participant

Read full topic

by minggangw on May 06, 2026 10:06 AM

Is FastDDS (default config) totally unusable for simulation with rclpy?

I was fighting some performance issues in our simulation for a subject I teach.

It boiled down to one slight performance improvement in rclpy which I started discussing, but the larger surprise was how super-bad FastDDS is with distributing the 250 Hz /clock topic.

I did an experiment with our full autonomy stack (but on a Turtlebot with 2D lidar, so nothing heavy). Most nodes run rclpy with MultiThreadedExecutor (more on that later). There are in total 16 nodes that subscribe /clock topic and Gazebo runs at ~70% RTF, so the real frequency of /clock is more like 200 Hz.

Pink is FastDDS, blue is Zenoh.

Let me explain the plot:

The vertical axis shows the dt - delta time, i.e. the time difference between two consecutive /clock messages received by the node (measured by instrumenting the default clock callback of a node).

With FastDDS, some nodes sometimes do not receive any /clock message for more than 0.5 s!!! With Zenoh, the worst case is about 0.18 s.

Even more interesting is the best case - FastDDS achieves 0.004 dt (what is expected) for only a small fraction of the dataset, like 5%. Zenoh achieves this dt most of the time (the almost solid blue line on the bottom). (no, this blue line does not hide the pink dots, there are almost none underneath it).

This is the histogram of dt values (notice logarithmic axis, Zenoh left, FastDDS right):

I understand that the combination of rclpy and MultiThreadedExecutor is one of the worst things you can do in ROS 2, but still, this huge difference in usability between FastDDS and Zenoh hits me.
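
For anyone who wants to reproduce the measurement, here is a minimal rclpy probe in the spirit of the instrumented clock callback described above (the QoS choice and gap threshold are mine, not necessarily what was used here):

import time

import rclpy
from rclpy.node import Node
from rclpy.qos import qos_profile_sensor_data
from rosgraph_msgs.msg import Clock

class ClockDtProbe(Node):
    def __init__(self):
        super().__init__('clock_dt_probe')
        self._last = None
        # Best-effort QoS is compatible with both reliable and best-effort publishers.
        self.create_subscription(Clock, '/clock', self._on_clock, qos_profile_sensor_data)

    def _on_clock(self, _msg):
        now = time.monotonic()  # arrival time, independent of sim time
        if self._last is not None:
            dt = now - self._last
            if dt > 0.05:  # flag gaps far above the nominal 4-5 ms period
                self.get_logger().warn(f'/clock gap: {dt * 1000:.1f} ms')
        self._last = now

rclpy.init()
rclpy.spin(ClockDtProbe())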

12 posts - 6 participants

Read full topic

by peci1 on May 06, 2026 02:33 AM

May 05, 2026
[Release] emcon_gz_hardware_interface: A DDS-bypassing hardware interface for Gazebo Harmonic

Hi everyone,

When scaling up multi-robot simulations or running complex network-isolated state estimators in Gazebo Harmonic, relying on the standard gz_ros2_control shared memory architecture can become a bottleneck or present domain isolation challenges.

To solve this, I’ve open-sourced emcon_gz_hardware_interface.

It acts as a standard ros2_control SystemInterface, but instead of routing simulation traffic through ROS 2 DDS, it acts as a “data diode” and subscribes/publishes directly to native gz-transport topics.

Key Features:

  • Total DDS Bypass: Keeps your ROS_DOMAIN_ID completely isolated from high-frequency simulation traffic.

  • Strict Real-Time Safety: Uses realtime_tools::RealtimeBuffer to ensure the control loop never blocks.

  • Fully Parameterized: Configurable directly via URDF <hardware> tags.

Repository: https://github.com/yenode/emcon_gz_hardware_interface

I’ve already opened an RFC with the ros-controls team to see if this fits upstream, but the standalone package is fully CI-tested and ready to use for ROS 2 Jazzy! I’d love to hear feedback from anyone running massive fleets or strict network simulations.

4 posts - 3 participants

Read full topic

by Aditya_Pachauri on May 05, 2026 11:50 PM

"ROS 2 in a Nutshell: A Survey" is now published in ACM Computing Surveys

Hi everyone,

I have been a silent observer here since 2019, and this is my first post on Discourse. I hope you will excuse the sudden intrusion :grinning_face_with_smiling_eyes:

I am pleased to share that our survey paper, “ROS 2 in a Nutshell: A Survey,” has been published in ACM Computing Surveys.

Paper DOI: https://doi.org/10.1145/3815113

ACM Computing Surveys has an Impact Factor of 28.0 and is ranked #1 out of 147 journals in Computer Science, Theory & Methods.

The goal of this survey is to provide a broad and systematic overview of the ROS 2 ecosystem. In particular, the paper covers:

  • the evolution of ROS 2
  • the motivations behind ROS 2 and its architectural redesign
  • middleware and RMW evolution
  • a taxonomy of ROS 2 literature and research directions
  • frameworks, simulators, community packages, and the broader ROS 2 software ecosystem

The paper is organized around three main research questions:

  1. How does ROS 2 improve upon ROS 1, and what new limitations arise?
  2. What advances address redesign challenges and enable deployment?
  3. Which frameworks and tools shape the ROS 2 ecosystem, and where are the remaining gaps?

A major outcome of the work is an open-access companion database that organizes ROS-related literature, tools, and ecosystem resources:

ROS 2 survey database: ROS 2 in a Nutshell: A Survey

We also welcome community contributions to improve and extend the database:

Contribution guide: awesome-ros/CONTRIBUTING.md at main · asmbatati/awesome-ros · GitHub


I hope the survey and the companion database will be useful to researchers, developers, and students working with ROS 2. :folded_hands:

Feedback, corrections, and suggestions for additions to the database are very welcome.

1 post - 1 participant

Read full topic

by asmbatati on May 05, 2026 01:27 AM

May 04, 2026
Guidance on learning ROS2 focusing on motion planning and Perception.

Hello guys, I am a beginner at learning ROS2. I want guidance on learning it from scratch, right now I am stuck on where to start. Watching a couple of YouTube tutorials, but struggling to understand certain concepts. I am here to learn from people who have prior knowledge of ROS.

Thanks

1 post - 1 participant

Read full topic

by Hariharan_Karthikeya on May 04, 2026 04:13 PM

Ouster unveils the first native color lidar sensor

5 posts - 3 participants

Read full topic

by Samahu on May 04, 2026 02:50 PM

RVizSplat: Visualize 3D Gaussian splats in RViz!

Hey everyone!

We are happy to share Release 1.0.0 of RVizSplat!
In this release, we provide an RViz display plugin that renders 3D Gaussian splats alongside other conventional markers in the scene. Along with this, we also provide an interface to stream GSplats over ROS topics and the ability to read .ply files which are in the 3DGS INRIA format directly from the plugin.

In case you wish to render Gaussian splatted scenes in a resource constrained environment, we also provide OIT based implementations to bypass sorting the Gaussians at the cost of rendering quality.

On an Nvidia RTX 3060+, we render at 40-60 FPS on a large scene with > 6 million splats and about 20 FPS on a modern integrated GPU for the same scene. On scenes that are comparatively smaller (1-3 million splats), we are able to achieve 100+ FPS. We also provide the ability to sort on an Nvidia GPU (Radix sort) and on the CPU (PDQ Sort).
Additional sorting techniques can be implemented through a simple interface if they suit your needs better.

The package is currently well tested on ROS 2 Rolling and is experimental on Jazzy and Kilted.
Please try out our work, let us know what you think, and star the repository to support the project :grin:

Team members that have made this project possible: Videh Patel, Akash Chikhalikar, Aditya Mathur, and Suchetan Saravanan.


3 posts - 2 participants

Read full topic

by suchetanrs on May 04, 2026 04:43 AM

May 03, 2026
I tested URDF format conversion on NASA Valkyrie, UR5, and Franka Panda results and a free tool

I validated and converted URDFs from 5 popular robots to both Gazebo SDF and MuJoCo MJCF formats.

The tool is free to try (14-day Pro trial, no credit card):

  • Validate: POST /api/urdf/validate
  • Convert to SDF: POST /api/urdf/convert-format?target=sdf
  • Convert to MJCF: POST /api/urdf/convert-format?target=mjcf

Python SDK: pip install roboinfra-sdk
GitHub Action: roboinfra/validate-urdf-action@v1
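
For reference, a hedged sketch of calling the validate endpoint from Python with requests; the base URL, auth scheme, and upload format below are assumptions, so check the service docs for the real contract:

import requests

BASE = "https://api.roboinfra.example"  # hypothetical base URL
with open("ur5.urdf", "rb") as f:
    resp = requests.post(
        f"{BASE}/api/urdf/validate",
        files={"file": f},  # assumed multipart field name
        headers={"Authorization": "Bearer <API_KEY>"},  # assumed auth scheme
        timeout=30,
    )
print(resp.status_code, resp.json())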

1 post - 1 participant

Read full topic

by Robotic on May 03, 2026 12:02 PM

April 30, 2026
ROS Lyrical Luth Beta and Call for Testing

We’re in the “beta” phase of development for ROS 2 Lyrical Luth! We have binary packages available for Ubuntu Resolute and RHEL 10, and rosdistro is open for newly released packages for Lyrical.

Testing the Lyrical beta

We published Installation instructions for Lyrical here. However, binary packages for Lyrical are only available in the testing repository.

Follow the pre-release testing instructions to use the ros-testing repository so that you can install the ros-lyrical-* packages.

For those using the comprehensive archive for installation, download the archive from the artifacts of this pre-release tag. If you’re building from source, use the ros2.repos file at that release.

If you think you’ve discovered a bug, please:

  • check the open issues and PRs on the related repository, or
  • discuss the issue in this thread, or
  • open a new issue

We’ll triage the issue or PR and decide when and how it should be fixed in Lyrical.

Releasing your packages

If you are a package maintainer, please follow this guide to release your package in Lyrical.

Reminder that the tutorial party starts Thu, Apr 30, 2026 7:00 AM UTC. Find all the details in this post.

Thanks!

The ROS 2 Team

2 posts - 1 participant

Read full topic

by sloretz on April 30, 2026 09:36 PM

ROS 2 Lyrical Luth Release Illustration and Swag 🎸

Hi Everyone,

It is my pleasure to present you with the illustration for ROS 2 Lyrical Luth! This release illustration is the work of our illustrator Ryan Hungerford. Ryan is an illustrator based in the Bay Area and his AI (actual intelligence) makes for some wonderful illustrations.

Lyrical Swag Sale

We’re also happy to announce that the ROS 2 Lyrical Luth swag sale is now live. We’re now using Fourth Wall for all of our ROS swag sales as the platform supports both a wide array of items and allows us to produce merch on demand and ship it almost anywhere on earth. We’ve also created a permanent URL for ROS swag at store.openrobotics.org so it is easy to find. For this release we are offering eight different items for sale including:

  • :t_shirt: Men’s, women’s, and kids’ shirts (we’re big fans of the tri-blend shirts)
  • :baby: Baby onesies
  • :coat: Hoodies and long sleeve shirts
  • :bed: Throw pillows
  • :hot_beverage: Mugs
  • :framed_picture: Decorative prints

All profits from the Lyrical swag sale go directly to the Open Source Robotics Foundation and help support the ROS, Gazebo, ROS Control, and Open-RMF projects. If you order today you might just receive your swag by release day on May 22nd, 2026. If you would like to earn Lyrical swag by contributing to the project please consider contributing to the Lyrical Test and Tutorial party that is currently taking place. The top twenty test contributors will be sent a code to our swag store.

7 posts - 3 participants

Read full topic

by Katherine_Scott on April 30, 2026 05:06 PM

Lyrical Luth Test and Tutorial Party Instructions

Lyrical Luth Test and Tutorial Party Instructions

:tada: Update: Lyrical board has been updated and is live! Docs are live too. A recording of the kickoff meeting can be found here.

Hi Everyone,

As mentioned previously, we’re conducting a testing and tutorial party for the next ROS release, Lyrical Luth. If you happened to miss the kickoff of the Lyrical Luth Testing and Tutorial party this morning I have put together some written instructions that should let everyone participate, no matter their time zone. Here are the slides from the kickoff meeting.

TL;DR

We need your help to test the next ROS Distro before its release on Friday, May 22nd. We’re asking the community to pick a particular system setup, a combination of host operating system, CPU architecture, RMW vendor, and build type (source, debian, binary), and run through a set of ROS tutorials to make sure everything is working smoothly. Depending on the outcome of your tutorials, you can either close the ticket as completed or report the errors you found. If you can’t assign the ticket to yourself, leave a comment, and an admin will take care of it for you. Please do not sign up for more than one ticket at any given time. Everything you need to know about this process can be found in this Github repository.

As a thank you for your help, we’re planning to provide the top twenty contributors to the T&T party with their choice of either ROS Lyrical swag or OSRA membership. :warning: To be eligible to receive swag, you must register using this short Google Form so we can match email addresses to GitHub usernames and count the total tickets closed.:warning:

The testing and tutorial party will close on May 14, 2026, but we’re asking everyone to get started right away! We have 10,000 tickets to work through and with Lyrical’s transition to C++20, we fully anticipate that we’ll need to update a few tutorials and fix some broken source builds.

Full Instructions

We’re planning to release ROS 2 Lyrical Luth on May 22, 2026, and we need the community’s help to make sure that we’ve thoroughly tested the distro on a variety of platforms before we make the final release. What do we mean by testing? Well, lots of things, but in the context of the testing and tutorial party, we are talking about the package-level ROS unit tests and anything else you want to test. What do we mean by tutorials? We also want to make sure all our docs.ros.org tutorials are in working order before the release.

The difficulty in testing a ROS release is that people have lots of different ways they use ROS, and we can’t possibly test all of those combinations. For the testing and tutorial party we have created what we call, “a setup.” A setup is a combination of:

  • RMW vendor: FASTDDS, CYCLONEDDS, CONNEXTDDS or ZENOH
  • BuildType: binary, debian or source
  • OS: Ubuntu Resolute 26.04, Windows 11 and RHEL-10
  • Chipset: Amd64 or Arm64

If you already have a particular system setup that you work with, we suggest that you roll with that; otherwise, feel free to create a new system setup just for testing purposes. If you normally use Windows or RHEL (or binary compatible distributions to RHEL like Rocky Linux / Alma Linux) we would really appreciate your help as we don’t have a ton of internal resources to test these distributions.

Here are the steps for participating in the testing and tutorial party:

  • Before you begin please fill out the Google form so we have your contact information. We can’t send you swag if we don’t have both your email address and your Github user name.
  • First go to the Tutorial Party Github repo (bit.ly/LyricalBoard) and read the README.md.
  • Figure out your setup!
  • Once you’ve got your “setup” all figured out, you can use the bottom of the Lyrical Tutorial Party ReadMe file to filter by setup. There should be a set of tickets for your “setup”. Click on the links and review the available tickets. If you want to test something other than the available tickets, feel free to open a new ticket and describe exactly what you are testing.
  • Pick a single ticket for your setup and use the assignees option to assign it to yourself. If you can’t assign yourself, leave a comment and an admin will assign the ticket to you
  • Take a look at the ticket and do as it asks in the “Links” section. For example, in this ticket, its links section points you to this tutorial. You should use your new ROS Lyrical Luth setup to run through that tutorial.
    • :warning: Please note that we’re using the Rolling documentation. If you see instructions to install a rolling package you’ll need to modify those to point to lyrical.
  • Once you complete the links section things will either go smoothly or you will run into problems. Please report your results using the check boxes in the “Checks” section of your Github issue.
    1. If everything goes well, note as such in your ticket’s comment section. We ask that you attach your terminal’s output as a code block or as a gist file or include a screenshot. At this point feel free to close the ticket by clicking “close as completed.”
    2. If something went poorly please note it in your ticket’s comment section. Try to include a full stack trace or other debug output if possible. Please also run ros2 doctor --report and dump the output in your ticket.

The testing and tutorial party wraps up on May 14, 2026, but we’re asking everyone to get started early as we will need some lead time to address any bugs.

New for Lyrical: Pull Requests and Reviews

For 2026’s Test and Tutorial Party, we’re piloting a new feature: Lyrical Bug Fixes and PR Reviews. We’re looking for community members to help us out by lending us their eyes and expertise. For the T&T party we’re allowing participants to gain one extra point for each completed bug fix and PR review from the Lyrical board. We anticipate the majority of these issues will be documentation related so they should be fairly straightforward to fix.

For the T&T party we will provide you with one extra point if you do one of the following:

  • Create a pull request for a bug fix that addresses documented issues listed in the Lyrical issues board.
  • Review one of the pull requests or bug fixes listed in the Lyrical issues board.
  • We also have a limited number of general ROS pull request reviews that are also in scope for the T&T party. You can find those here (bit.ly/Lyrical-PR-Reviews)

To help us track and tabulate scores, you must fill out this short form every time you complete a review or a PR.

For Lyrical pull requests and bug fixes:

  • Ask to be assigned the issue in the Lyrical Tracking Board (bit.ly/LyricalTrackingBoard).
  • Write the relevant code or documentation. Remember to use the correct branch!
  • Build your solution and run the necessary tests and linters. This step is key to getting your PR approved.
  • Submit your PR. You must include a brief description of the issue and the issue number from the tracking board.
  • You must work with the reviewers to address all necessary feedback until the PR is accepted and merged.
  • If you use AI for your pull request you must report it in a manner consistent with OSRF policy.
  • Report your work using the form (bit.ly/LyricalPR).

For Lyrical reviews:

  • Request to be assigned to the pull request from the Lyrical Tracking Board (bit.ly/LyricalTrackingBoard). You can be assigned to one pull request at a time.
  • Once you are assigned to the pull request you must do the following:
    • Verify the fix by checking out the PR, building it, and replicating the bug conditions. For documentation this means checking out the PR and running make html.
    • Take one or more screenshots of the result.
  • Perform a realistic review of the pull request. There are two potential outcomes for your review:
    • You find no issues.
      • If that’s the case you must briefly list the steps you took to verify that the PR works and attach a screenshot.
    • You find an issue and request changes.
      • Changes should use the format: “Nit:” (minor change, usually a matter of preference, non-blocking), “Issue:” (major issue, blocking), “Suggestion:” (friendly suggestion, non-blocking), “Question:” (clarification, non-blocking), or “Chore:” (generally formatting issue, non-blocking)
      • For issues and chores the feedback in the pull request should include the following:
        • What specifically needs attention.
        • Why this change is necessary.
        • A suggestion on how to fix it.
      • You must follow up with the PR author to make sure their changes fix your issue. We suggest using the “suggest changes” feature liberally to expedite the process.
  • Generative AI should not be used for pull request reviews.
  • Report your work using the form (bit.ly/LyricalPR).

15 posts - 6 participants

Read full topic

by Katherine_Scott on April 30, 2026 04:37 PM

I'm done manually tuning DDS parameters!

Raise your hand if this sounds familiar:

  1. You just want better latency or higher throughput for your ROS2 app — but DDS throws hundreds of parameters at you and you have no idea where to start.
  2. So you end up spending hours (or days) manually tweaking, re-running benchmarks, and tweaking again… only to end up with “good enough” instead of actually good.

If that’s you, I have something that might help: https://github.com/qualcomm-qrb-ros/ROS2-DDSConfig-Optimizer

It’s an AI-driven tool that automatically tunes FastDDS configuration for your ROS 2 application. All you need to provide is:

  1. Your performance targets — latency, throughput, reliability, CPU/memory limits, whatever matters to you — in a simple XML file
  2. An initial DDS config as the baseline

That’s it. You’ll get back the best DDS configuration tailored to your application. :sparkles:

Would love to hear your feedback, bug reports, or feature ideas — issues and PRs are very welcome!

4 posts - 2 participants

Read full topic

by NaSong on April 30, 2026 06:50 AM

April 29, 2026
New ROS controller app

https://play.google.com/store/apps/details?id=com.jax.roscontroller

My app was finally approved on the play store. I have been using this app to control my quadruped running ROS2 on a Pi. This release has the fundamentals working. I will be adding additional features soon.

2 posts - 2 participants

Read full topic

by Tdp378 on April 29, 2026 05:12 PM


Powered by the awesome: Planet