April 13, 2026
Fast Lossless Image Compression: interested?

Hi,

I couldn’t shut up about this on my social media, so some of you may already be sick and tired of me, but I am sharing it here hoping to understand whether the robotics community may benefit from this work.

By pure chance, I started exploring the topic of Lossless Image Compression, in particular in terms of speed, thinking about real-time streaming and recording.

I got very interesting results that I think may benefit some use cases in robotics.

Before moving forward with releasing the code or more details about the algorithm (which is very much still a work in progress), I wanted to:

  • share the binaries with the community so that people with a healthy dose of skepticism can replicate the results on their own computers.

  • understand what the actual use cases are for lossless compression that is fast but still better than PNG.

These are my results: three codecs with three different tradeoffs (Griffin being the most balanced across the three dimensions).

I would love to hear the feedback of the community :grin:

LINK: GitHub - AurynRobotics/dvid3-codec

Also, if you think you have a practical application for this, please DM me to discuss it, either here or by contacting me at dfaconti@aurynrobotics.com

Davide

1 post - 1 participant

Read full topic

by facontidavide on April 13, 2026 10:32 AM

🚀 New "ROS Adopters" page is live - ADD YOUR PROJECT

Hi everyone :waving_hand:

We are excited to announce a new ROS Adopters page on the official ROS documentation site! This is a community-maintained, self-reported directory that showcases organizations and projects using ROS in any capacity - whether it’s a commercial product, a research platform, an educational tool, or anything in between.

:link: Browse the current adopters here: ROS 2 Adopters — ROS 2 Documentation: Rolling documentation

The page supports filtering by domain (e.g., Aerial/Drone, Manufacturing, Research, Consumer Robot, etc.) and by country, and includes a search function to help you find projects that interest you.

:thinking: Why add your project?

  • :globe_showing_europe_africa: Visibility - Let the world know your project runs on ROS.
  • :light_bulb: Inspire others - Seeing real-world deployments motivates new adopters and contributors.
  • :flexed_biceps: Strengthen the ecosystem - A healthy adopter list demonstrates the breadth and maturity of ROS to potential users, sponsors, and decision-makers.

:memo: How to add your project

We’ve made it as easy as possible. There’s an interactive form right on the documentation site:

:link: Add Your Project — ROS 2 Documentation: Rolling documentation

That’s it :white_check_mark: No special tooling required - you can do it entirely from your browser.

:robot: What counts as an “adopter”?

Anything that uses ROS :rocket: Commercial products, open-source projects, university research labs, hobby builds - if ROS is part of your stack, we’d love to see it listed. The directory is self-reported and accepted with minimal scrutiny, so don’t be shy :blush:

:open_book: Background / History

This feature was proposed in ros2/ros2_documentation#6248 and implemented in PR #6309.

Please consider adding your project, share this post with your colleagues, and let’s build a comprehensive picture of what the ROS ecosystem looks like in 2026! :tada:

Looking forward to seeing your PRs! :folded_hands:
Ping fujitatomoya@github once your PR is up; I am happy to review it!

Cheers,
Tomoya

1 post - 1 participant

Read full topic

by tomoyafujita on April 13, 2026 01:09 AM

April 09, 2026
International Conference on Humanoid Robotics, Innovation & Leadership

======================================================================

                       **CALL FOR PAPERS**
                         **HRFEST 2026**

International Conference on Humanoid Robotics, Innovation & Leadership

Date: November 05 - 07, 2026
Location: Universidad Nacional del Callao (UNAC) - Callao, Peru (Hybrid Event)
Website: https://hrfest.org

CONFERENCE HIGHLIGHTS & WHY SUBMIT

* High-Impact Indexing: All accepted and presented papers will be
submitted to the IEEE Xplore digital library, which is typically
indexed by Scopus and Ei Compendex.
* Hybrid Format: Offering both in-person and virtual presentation
options to accommodate global researchers and industry professionals.
* Global Networking: Hosted alongside the IEEE RAS Regional
Manufacturing Workshop, connecting LATAM researchers with global
industry leaders.

ABOUT THE CONFERENCE

The HRFEST 2026: International Conference on Humanoid Robotics, Innovation
& Leadership is the premier Latin American forum that bridges the gap
between advanced robotics research and industrial leadership. Hosted by
the Universidad Nacional del Callao (UNAC) as the official academic and
not-for-profit sponsor, with NFM Robotics acting as an industrial patron
and logistical facilitator, this conference gathers top researchers,
industry leaders, and innovators.

HRFEST 2026 is technically co-sponsored by IEEE. Accepted and presented
papers will be submitted for inclusion into the IEEE Xplore digital
library, subject to meeting IEEE Xplore’s scope and quality requirements.

TECHNICAL TRACKS & TOPICS OF INTEREST

We invite researchers, academics, and professionals to submit original,
unpublished technical papers. Topics of interest include, but are not
limited to:

* Track 1: Robotics & Adv. Manufacturing

  • Humanoid Robotics, Bipedalism & Legged Locomotion
  • Control Systems, Kinematics & Dynamics
  • Mechatronics, Soft Robotics & Smart Materials
  • Industrial Automation, Cobots & Swarm Robotics

* Track 2: AI & Data Science

  • Machine Learning & Deep Learning
  • Generative AI & LLMs
  • Computer Vision, Pattern Recognition & NLP
  • Ethical AI & Explainable AI (XAI)

* Track 3: Engineering Management

  • Tech, Innovation & R&D Management
  • Industry 4.0 & Digital Transformation
  • Agile Project Management
  • Tech Entrepreneurship & Startups

* Track 4: Applied Technologies

  • Internet of Things (IoT) & Smart Cities
  • Biomedical Eng. & Healthcare Systems
  • Financial Engineering & FinTech
  • Renewable Energy Systems

SUBMISSION GUIDELINES

* Review Process: HRFEST 2026 enforces a strict Double-Blind Peer Review.
* Submission Portal: All manuscripts must be submitted electronically
via EasyChair at: https://easychair.org/conferences/?conf=hrfest2026
* Format & Length: All manuscripts must follow the standard double-column
IEEE Conference template and should not exceed six (6) pages in PDF format.
* Originality: Submissions must be original work not currently under
review by any other conference or journal.
* Camera-Ready Submissions: Final versions of accepted papers must be
validated using IEEE PDF eXpress (Conference ID: 71784X). The PDF
eXpress validation site will open on September 15, 2026.

IMPORTANT DEADLINES

* Full Paper Submission Deadline: July 05, 2026
* Notification of Acceptance: September 15, 2026
* Final Camera-Ready Submission: October 15, 2026

For more information regarding submissions, registration, and the
IEEE RAS Regional Manufacturing Workshop, please visit our official
website: https://hrfest.org

We look forward to seeing you in Callao!

1 post - 1 participant

Read full topic

by RoboticsLab on April 09, 2026 11:13 PM

[Virtual Event] The Messy Reality of Field Autonomy: ROS 2 Architectures, Behavior Trees & Sim-to-Real

Hi everyone,

If you have ever lost a week of field data because of a typo in a custom ROS message, or watched a perfectly tuned simulation model immediately fail on physical hardware, this session is for you.

On May 1st, the Canadian Physical AI Institute (CPAI) is hosting a highly technical, virtual deep-dive into the architectural evolution of robotic autonomy and the gritty realities of physical deployment.

We are moving past the theoretical benchmarks to talk about what actually breaks in the wild and how to architect your software to handle it.

Here is what we are covering:

Part 1: Driving into the (Un)Known: Navigation for Field Robots

Alec Krawciw (PhD candidate, UofT Autonomous Space Robotics Lab & Vanier Scholar) will cover the logistical and systemic realities of field deployment, including:

  • Pre-Field Data Strategies: Why post-processing tools must be built before testing, and how simple data-logging errors (like ROS message naming typos) can ruin a deployment.

  • System Failure is Inevitable: The critical difference between fault prevention and fault recovery, and why strict deterministic approaches shatter off-road.

  • Maximizing Field Time: Practical workflows to reduce on-site engineering workload.

Part 2: Beyond Hard-Coded Control: Embodied AI & ROS 2 Architecture

Behnam Moradi (Senior Software Engineer in Robotic Autonomy) will break down the shift from classical state machines to modern autonomy stacks:

  • From Loops to Graphs: Making the architectural leap from linear execution loops to the distributed graph of nodes required in ROS 2 (“What data is available now?”).

  • Behavior Trees & Goal-Seeking: Moving beyond massive if-else chains to priority-driven agents that respect constraints and dynamically replan.

  • The True Role of Simulation: Why tools like PX4 and AirSim aren’t for testing if your software works, but for validating if your simulation was accurate in the first place.

Event Details

  • Date: Friday, May 1

  • Time: 6:00 PM - 8:00 PM EDT

  • Location: Google Meet

  • Host: Diana Gomez Galeano (former Director, McGill Robotics)

Whether you are migrating a stack to ROS 2, building out your first Behavior Trees, or gearing up for summer field trials, we would love to have you join the conversation. We will have dedicated time for Q&A to help troubleshoot your specific architecture roadblocks.

Registration & Tickets: We have 10 complimentary tickets for the ROS community to join us.

Looking forward to seeing some of you there!

Cheers,

Saeed Sarfarazi
Canadian Physical AI Institute (CPAI)

1 post - 1 participant

Read full topic

by Saeed on April 09, 2026 12:02 AM

April 08, 2026
FusionCore demo: GPS outlier rejection in a ROS 2 filter built to replace robot_localization

Quick demo of outlier rejection working in simulation.

I built a spike injector that publishes fake GPS fixes 500 meters from the robot’s actual position into a live, running FusionCore filter. The Mahalanobis distance hit 60,505 against a rejection threshold of 16. All three spikes were dropped instantly. Position didn’t move.
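For readers less familiar with this kind of gating, below is a minimal numpy sketch of a chi-square (Mahalanobis) gate. It illustrates the general technique, not FusionCore’s actual code; only the threshold of 16 comes from the post.

import numpy as np

def mahalanobis_gate(z, z_pred, S, threshold=16.0):
    """Return (accepted, d2): whether measurement z passes the chi-square gate.

    z         : measured position (e.g. a GPS fix)
    z_pred    : measurement predicted by the filter
    S         : innovation covariance
    threshold : squared-Mahalanobis rejection threshold
    """
    innovation = np.asarray(z, dtype=float) - np.asarray(z_pred, dtype=float)
    d2 = float(innovation @ np.linalg.solve(S, innovation))  # squared Mahalanobis distance
    return d2 <= threshold, d2

# Example: a 500 m spike against a ~1 m measurement std blows far past the gate.
accepted, d2 = mahalanobis_gate([500.0, 0.0], [0.0, 0.0], np.eye(2))
print(accepted, d2)  # False 250000.0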

The video is 30 seconds: robot driving in Gazebo, FusionCore GCS dashboard showing the Mahalanobis waveform, rejection log, and spike counter updating in real time.


For anyone who missed the original announcement: FusionCore is a ROS 2 Jazzy sensor fusion package replacing deprecated robot_localization. IMU, wheel encoders, and GPS fused via UKF at 100Hz. Apache 2.0.

GitHub: https://github.com/manankharwar/fusioncore

1 post - 1 participant

Read full topic

by manankharwar on April 08, 2026 04:23 PM

Delaying Lyrical RMW and Feature Freezes

Hi all,

In today’s ROS PMC meeting we decided to delay the RMW freeze and Feature freeze by 1 week each. The purpose of the delay is to give more time to upgrade and stabilize all Tier 1 RMW implementations. The ROS Lyrical Release date has not changed.

The new timelines are:

  • New RMW Freeze: Tue, Apr 14, 2026 6:59 AM UTC
  • New Feature freeze: Tue, Apr 21, 2026 6:59 AM UTC
  • New Branch from Rolling: Wed, Apr 22, 2026 6:59 AM UTC

Updates here: Delay Lyrical RMW Freeze; Feature Freeze; Branch by sloretz · Pull Request #6350 · ros2/ros2_documentation

1 post - 1 participant

Read full topic

by sloretz on April 08, 2026 12:37 AM

April 06, 2026
Multi-Robot Fleet Management System using ROS2, Nav2, and Gazebo

I am developing a multi-robot fleet management system in a simulated warehouse environment using ROS2 (Humble) and Gazebo. The system is designed to study scalable coordination and task allocation across multiple autonomous mobile robots operating in a structured environment.

The architecture follows a distributed approach where each robot is implemented as an independent agent node responsible for navigation, execution, and state reporting. A centralized fleet manager node handles global task allocation and coordination. Communication is implemented using ROS2 topics, services, and action interfaces to enable asynchronous and real-time interaction between components.

Navigation is implemented using the Nav2 stack, integrating localization, global and local path planning, and obstacle avoidance. LiDAR-based perception is used for environmental awareness and safe navigation within the simulated warehouse.

The system supports dynamic task allocation, where robots receive pick-and-deliver tasks, compute feasible paths, and execute them while continuously publishing execution status. A typical workflow involves a robot navigating to a shelf location, performing a simulated pickup, and delivering the item to a designated drop-off point.
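As a rough illustration of this dispatch pattern (a minimal sketch, not the project’s actual code), the fleet manager below sends a single navigation goal to one robot’s Nav2 stack via the NavigateToPose action; the robot namespace and goal coordinates are assumptions.

import rclpy
from rclpy.node import Node
from rclpy.action import ActionClient
from nav2_msgs.action import NavigateToPose
from geometry_msgs.msg import PoseStamped


class FleetManager(Node):
    """Dispatches a pick-and-deliver leg to one robot via its namespaced Nav2 action server."""

    def __init__(self):
        super().__init__('fleet_manager')
        # One action client per robot namespace, e.g. /robot_1/navigate_to_pose (assumed name).
        self.nav_client = ActionClient(self, NavigateToPose, '/robot_1/navigate_to_pose')

    def dispatch(self, x: float, y: float):
        goal = NavigateToPose.Goal()
        goal.pose = PoseStamped()
        goal.pose.header.frame_id = 'map'
        goal.pose.pose.position.x = x
        goal.pose.pose.position.y = y
        goal.pose.pose.orientation.w = 1.0
        self.nav_client.wait_for_server()
        return self.nav_client.send_goal_async(goal)


def main():
    rclpy.init()
    manager = FleetManager()
    manager.dispatch(3.5, 1.0)  # e.g. send robot_1 to a shelf location
    rclpy.spin(manager)


if __name__ == '__main__':
    main()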

This project focuses on understanding distributed robotic system design, inter-node communication, and multi-robot coordination challenges such as scalability and synchronization. Future work includes implementing conflict resolution strategies, fleet-level optimization, and extending the system toward real-world deployment.

4 posts - 4 participants

Read full topic

by Arjun_R on April 06, 2026 11:54 PM

Ros2_medkit + VDA 5050: bridging SOVD diagnostics with fleet management

Hey everyone,

Quick update on ros2_medkit. We’ve been exploring how medkit’s diagnostic data can serve VDA 5050 fleet integrations, and put together a working demo.

Context: VDA 5050 error reporting is intentionally minimal (errorType, errorLevel, errorDescription). That’s fine for fleet routing decisions, but when an engineer needs to debug a fault, there’s a gap. We wanted to see if medkit’s SOVD layer could fill it without breaking either standard.

What we did:

The new SOVD Service Interface plugin exposes medkit’s entity tree, faults, and capabilities via ROS 2 services (ListEntities, GetEntityFaults, GetCapabilities). This means any ROS 2 node can query diagnostic data (not just SOVD/REST clients).

We built a VDA 5050 agent as a separate process that:

  • Handles MQTT communication with a fleet manager (orders, state, instant actions)
  • Drives Nav2 for navigation
  • Queries medkit’s services to report faults as VDA 5050 errors

medkit stays completely unaware of VDA 5050. The agent is just another ROS 2 service consumer (same interface a BT.CPP node or PlotJuggler plugin would use).
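To make the “just another service consumer” idea concrete, here is a minimal rclpy sketch of such a client. The service name and type below are hypothetical placeholders (std_srvs/Trigger stands in for medkit’s actual GetEntityFaults definition); see the repository for the real interfaces.

import rclpy
from rclpy.node import Node
from std_srvs.srv import Trigger  # placeholder type; medkit's real srv definitions differ


class FaultQueryClient(Node):
    """Queries a diagnostics service the same way the VDA 5050 agent described above does."""

    def __init__(self):
        super().__init__('fault_query_client')
        # Service name is a hypothetical example, not taken from ros2_medkit.
        self.client = self.create_client(Trigger, 'medkit/get_entity_faults')

    def query(self):
        self.client.wait_for_service()
        future = self.client.call_async(Trigger.Request())
        rclpy.spin_until_future_complete(self, future)
        return future.result()


def main():
    rclpy.init()
    node = FaultQueryClient()
    node.get_logger().info(f'service responded: {node.query()}')
    rclpy.shutdown()


if __name__ == '__main__':
    main()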


Demo video:

  • ROSMASTER M3 Pro (Jetson Orin Nano),
  • mission dispatched from VDA 5050 Visualizer,
  • LiDAR fault injected mid-navigation,
  • fault propagated to fleet manager + full SOVD snapshot (freeze frames, extended data records, rosbag) in medkit’s web UI.

The service interface plugin is useful beyond VDA 5050 - anything that consumes ROS 2 services can now pull diagnostic data from medkit. Curious if anyone sees other use cases.

repo: GitHub - selfpatch/ros2_medkit (diagnostics gateway for ROS 2 robots: faults, live data, operations, scripts, locking, triggers, and OTA updates via REST API; no SSH, no custom tooling)

1 post - 1 participant

Read full topic

by Michal_Faferek on April 06, 2026 02:42 PM

How to find code “someone already wrote that”? (“WaypointFollow Metrics”, “Rotate Normal To Wall”)

I came to ROS many years ago thinking “someone has probably already coded every basic robotics challenge”. Indeed, I found lots to use, but still find myself writing basic nodes because I don’t know how to search the “ROS mine” for a particular basic node I need.

For example: I’m trying to improve the robustness and reliability of navigation of my TurtleBot4 robot in my home environment. Nav2 has a million parameters, and I have managed to get a param set for 10 waypoints around my home that succeeds in most tests. Two desirable waypoints cause a lot of recoveries and occasional goal failures.

I need a test node that collects recovery metrics and goal success/failure/skipped status during a 10-stop waypoint-following run, to compare robustness and reliability across parameter changes and waypoint tweaks. Other metrics such as navigation time, distance travelled, and the delta x, y, and heading between goal and result would be nice to have.

Surely someone has written a Nav2 test node I can use to optimize my Nav2 parameter set?
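(In the meantime, here is a minimal sketch of such a harness using nav2_simple_commander. It only records the overall task outcome, per-waypoint feedback, and elapsed time, not recovery counts, and the waypoint coordinates are placeholders, but it may be a starting point.)

import time
import rclpy
from geometry_msgs.msg import PoseStamped
from nav2_simple_commander.robot_navigator import BasicNavigator, TaskResult


def make_pose(nav, x, y):
    pose = PoseStamped()
    pose.header.frame_id = 'map'
    pose.header.stamp = nav.get_clock().now().to_msg()
    pose.pose.position.x = x
    pose.pose.position.y = y
    pose.pose.orientation.w = 1.0
    return pose


def main():
    rclpy.init()
    nav = BasicNavigator()
    nav.waitUntilNav2Active()

    # Replace with your 10 home waypoints.
    waypoints = [make_pose(nav, x, y) for x, y in [(1.0, 0.0), (2.0, 1.5)]]

    start = time.time()
    nav.followWaypoints(waypoints)
    while not nav.isTaskComplete():
        feedback = nav.getFeedback()
        if feedback:
            print(f'current waypoint: {feedback.current_waypoint}')
        time.sleep(1.0)

    result = nav.getResult()  # TaskResult.SUCCEEDED / CANCELED / FAILED
    print(f'result: {result.name}, elapsed: {time.time() - start:.1f} s')
    rclpy.shutdown()


if __name__ == '__main__':
    main()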

P.S. “Rotate normal to the closest wall using /scan” is another basic challenge I would guess was solved years ago.

1 post - 1 participant

Read full topic

by RobotDreams on April 06, 2026 01:54 PM

April 05, 2026
Introduction: QERRA-v2 — Hybrid Quantum-Ethical Safety Layer for Humanoid Robots
Hello everyone,

My name is Marussa Metocharaki (@marunigno).
I’m the solo founder of QERRA-v2 — a hybrid quantum-classical ethical decision engine for safer humanoid robots and high-stakes AI systems.

The project combines quantum-inspired exploration (I successfully ran a real 8-qubit W-state on IBM quantum hardware) with classical ethical vectors (SEMEV-12), toxicity detection, and a safety kernel. I already have a live public API with a working /analyze endpoint.

Right now the project is still in an early experimental stage — the classical safety layer works well, while the hybrid quantum part is a small prototype that I am actively improving.

I’m building this completely alone under significant personal constraints, and I would love to connect with people in the robotics community who care about ethical and safety layers for humanoid robots.

I just published the full Whitepaper and the code is open-source (AGPL-3.0).

Would be very grateful for any feedback, ideas, or potential collaboration.

GitHub: https://github.com/marunigno-ship-it/QERRA-v2
Whitepaper: https://github.com/marunigno-ship-it/QERRA-v2/blob/main/WHITEPAPER.md

Thank you and looking forward to learning from this community!

1 post - 1 participant

Read full topic

by marunigno-ship-it on April 05, 2026 11:24 PM

April 03, 2026
Rapid deployment of the OpenClaw and GraspGen grasping system

OpenClawPi: AgileX Robotics Skill Set Library


OpenClawPi is a modular skill set repository focused on the rapid integration and reuse of core robot functions. Covering key scenarios such as robotic arm control, grasping, visual perception, and voice interaction, it provides out-of-the-box skill components for secondary robot development and application deployment.

From Zero to AI Robot Grasping: OpenClaw + GraspGen Full Setup Guide (Step-by-Step)

I. Quick Start

OpenClaw Deployment

Visit the OpenClaw official website: https://openclaw.ai/

Execute the one-click installation command:

curl -fsSL https://openclaw.ai/install.sh | bash

Next, configure OpenClaw:

  1. Select ‘YES’

  2. Select ‘QuickStart’

  3. Select ‘Update values’


  4. Select your provider (recommended: free options like Qwen, OpenRouter, or Ollama)

  5. Select the company model you wish to use.

  6. Select a default model.

  7. Select the APP you will connect to OpenClaw.

  8. Select a web search provider.

  9. Select skills (not required for now).

  10. Check all Hook options.

  11. Select ‘restart’.

  12. Select ‘Web UI’.

1. Clone the Project

git clone https://github.com/vanstrong12138/OpenClawPi.git

2. Prompt the Agent to Learn Skills

Using the vision skill as an example:

User: Please learn vl_vision_skill

:package: Skill Modules Overview

  • agx-arm-codegen: robotic arm code generation tool; automatically generates trajectory planning and joint control code, with support for custom path templates. Core dependency: pyAgxArm
  • grab_skill: robot grasping skill, including gripper control, target pose calibration, and grasping strategies (single-point / adaptive). Core dependency: pyAgxArm
  • vl_vision_skill: visual perception skill, supporting object detection, visual positioning, and image segmentation. Core dependencies: SAM3, Qwen3-VL
  • voice_skill: voice interaction skill, supporting voice command recognition, voice feedback, and custom command set configuration. Core dependency: cosyvoice

II. GraspGen - Pose Generation and Grasping

This article demonstrates the identification, segmentation, pose generation, and grasping of arbitrary objects using SAM3 and pose generation tools.

Repositories

Hardware Requirements

  • x86 Desktop Platform
  • NVIDIA GPU with at least 16GB VRAM
  • Intel RealSense Camera

Project Deployment Environment

  • OS: Ubuntu 24.04
  • Middleware: ROS Jazzy
  • GPU: RTX 5090
  • NVIDIA Driver: Version 570.195.03
  • CUDA: Version 12.8
  1. Install NVIDIA Graphics Driver
sudo apt update
sudo apt upgrade
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-driver-570
# Restart
reboot
  2. Install CUDA Toolkit 12.8
wget https://developer.download.nvidia.com/compute/cuda/12.8.1/local_installers/cuda_12.8.1_570.124.06_linux.run
sudo sh cuda_12.8.1_570.124.06_linux.run
  • During installation, uncheck the first option (“driver”) since the driver was installed in the previous step.
  3. Add Environment Variables
echo 'export PATH=/usr/local/cuda-12.8/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
  4. Verify Installation
    Execute nvcc -V to check CUDA information.
nvcc -V
  5. Install cuDNN
  • Download the cuDNN tar file from the NVIDIA Official Website. After extracting, copy the files.

  • Execute the following commands to copy cuDNN to the CUDA directory:

sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp cuda/lib/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
  6. Install TensorRT
    Download the TensorRT tar file from the NVIDIA Official Website.
  • Extract and move TensorRT to the /usr/local directory:
# Extract (the tarball unpacks into the TensorRT-10.16.0.72/ directory)
tar -xvf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz

# Move the extracted directory to /usr/local
sudo mv TensorRT-10.16.0.72/ /usr/local/
  • Test TensorRT Installation:
# Enter MNIST sample directory
cd /usr/local/TensorRT-10.16.0.72/samples/sampleOnnxMNIST

# Compile
make

# Run the executable found in bin
cd /usr/local/TensorRT-10.16.0.72/bin
./sample_onnx_mnist

SAM3 Deployment

  • Python: 3.12 or higher
  • PyTorch: 2.7 or higher
  • CUDA: Compatible GPU with CUDA 12.6 or higher
  1. Create Conda Virtual Environment
conda create -n sam3 python=3.12
conda deactivate
conda activate sam3
  2. Install PyTorch and Dependencies
# For 50-series GPUs, CUDA 12.8 and Torch 2.8 are recommended
# Downgrade numpy to <1.23 if necessary
pip install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 --index-url https://download.pytorch.org/whl/cu128

cd sam3
pip install -e .
  3. Model Download
    1. Submit the form to gain download access on HuggingFace: https://huggingface.co/facebook/sam3
    2. Or search via local mirror sites.

Robotic Arm Driver Deployment

The project outputs target_pose (end-effector pose), which can be manually adapted for different robotic arms.

  1. Example: PiPER Robotic Arm
pip install python-can

git clone https://github.com/agilexrobotics/pyAgxArm.git

cd pyAgxArm
pip install .

Cloning

Clone this project to your local machine:

cd YOUR_PATH
git clone -b ros2_jazzy_version https://github.com/AgilexRobotics/GraspGen.git

Running the Project

  1. Grasping Node
python YOUR_PATH/sam3/realsense-sam.py --prompt "Target Object Name in English"
  2. Grasping Task Execution Controls
A = Zero-force mode (Master arm) | D = Normal mode + Record pose | S = Return to home
X = Replay pose | Q = Open gripper | E = Close gripper | P = Pointcloud/Grasp
T = Change prompt | G = Issue grasp command | Esc = Exit
  3. Automatic Grasping Task
python YOUR_PATH/sam3/realsense-sam.py --prompt "Target Object Name" --auto

1 post - 1 participant

Read full topic

by Agilex_Robotics on April 03, 2026 10:38 AM

April 02, 2026
Interactive GUI toolkit for robotics visualization - Python & C++, runs on desktop and web

Hi everyone,

I’d like to share Dear ImGui Bundle, an open-source framework for building interactive GUI applications in Python and C++. It wraps Dear ImGui with 23 integrated libraries (plotting, image inspection, node editors, 3D gizmos, etc.) and runs on desktop, mobile, and web.

I’m a solo developer and have been working hard on this for 4 years. I am new here, but I thought it might be useful for robotics developers.

It provides:

Real-time visualization

  • ImPlot and ImPlot3D for sensor data, trajectories, live plots at 60fps (or even 120fps)
  • ImmVision for camera feed inspection with zoom, pan, pixel values, and colormaps
  • All GPU-accelerated (OpenGL/Metal/Vulkan)

Interactive parameter tuning

  • Immediate mode means your UI code is just a few lines of Python or C++
  • Sliders, knobs, toggles, color pickers - all update in real time
  • No callbacks, no widget trees, no framework boilerplate

Cross-platform deployment

  • Same code runs on Linux, macOS, Windows
  • Python apps can run in the browser via Pyodide (useful for sharing dashboards without requiring install)
  • C++ apps compile to WebAssembly via Emscripten

Example: live camera + Laplacian filter with colormaps in 54 lines

import cv2
import numpy as np
from imgui_bundle import imgui, immvision, immapp


class AppState:
    def __init__(self):
        self.cap = cv2.VideoCapture(0)
        self.image = None
        self.filtered = None
        self.blur_sigma = 2.0
        # ImmVision params
        # For the camera image
        self.params_image = immvision.ImageParams()
        self.params_image.image_display_size = (400, 0)
        self.params_image.zoom_key = "cam"
        # For the filtered image (synced zoom via zoom_key)
        self.params_filter = immvision.ImageParams()
        self.params_filter.image_display_size = (400, 0)
        self.params_filter.zoom_key = "cam"
        self.params_filter.show_options_panel = True


def gui(s: AppState):
    # grab
    has_image, frame = s.cap.read()
    if has_image:
        s.image = cv2.resize(frame, (640, 480))
        gray = cv2.cvtColor(s.image, cv2.COLOR_BGR2GRAY)
        gray_f = gray.astype(np.float64) / 255.0
        blurred = cv2.GaussianBlur(gray_f, (0, 0), s.blur_sigma)
        s.filtered = cv2.Laplacian(blurred, cv2.CV_64F, ksize=5)

    # Refresh images only if needed
    s.params_image.refresh_image = has_image
    s.params_filter.refresh_image = has_image

    if s.image is not None:
        immvision.image("Camera", s.image, s.params_image)
        imgui.same_line()
        immvision.image("Filtered", s.filtered, s.params_filter)

    # Controls
    _, s.blur_sigma = imgui.slider_float("Blur", s.blur_sigma, 0.5, 10.0)


state = AppState()
immvision.use_bgr_color_order()
immapp.run(lambda: gui(state), window_size=(1200, 550), window_title="Camera Filter", fps_idle=0)

The filtered image is float64 - click “Options” to try different colormaps (Heat, Jet, Viridis…). Both views are zoom-linked: pan one, the other follows.

Try it:

Install: pip install imgui-bundle

Adoption:
The framework is used in several research projects, including CVPR 2024 papers (4K4D), Newton Physics, and moderngl. The Python bindings are auto-generated with litgen, so they stay in sync with upstream Dear ImGui.

Happy to answer any questions or discuss how it could fit into ROS workflows.

Best,
Pascal

2 posts - 2 participants

Read full topic

by pthom on April 02, 2026 06:10 PM

On message standardization (and a call for participation)

Hi folks!

I presume at least some of you are aware of the OSRA efforts towards better supporting Physical AI applications. Some of those efforts revolve around messaging and interfaces, and in that context, a few gaps in standard sensing messages have been identified. In a way, this is orthogonal to Physical AI, yet still we may as well seize the opportunity to improve the state of things.

To that end, the Standardized Interfaces & Messages Working Group will be hosting public sessions to discuss, review, and craft proposals to address those gaps, either through implementation or through recommendation if the community has already organically developed a solution. Academic researchers and industry practitioners are more than welcome to join. If you design or manufacture sensor hardware, even better.

Our friends at Ouster already took the lead and posted a proposal for a new 3D LiDAR message, so our focus during the first couple of sessions will likely be on LiDAR technology. Tactile is a close second. We’ve heard complaints about the IMU message structure too. Feel free to propose more (and challenge others too).

We’ll meet on Mondays, biweekly, starting Mon, Apr 6, 2026 3:00 PM UTC. Fill out this form to join the meetings. Hope to see you there!

1 post - 1 participant

Read full topic

by hidmic on April 02, 2026 03:17 PM

April 01, 2026
Announcing MoveIt Pro 9 with ROS 2 Jazzy Support

Hi ROS Community!

It’s been a while, but we’re excited to announce MoveIt Pro 9.0, the latest major release of PickNik’s manipulation developer platform built on ROS 2. MoveIt Pro includes comprehensive support for AI model training & execution, Behavior Trees, MuJoCo simulation, and all the classic capabilities you expect like motion planning, collision avoidance, inverse kinematics, and real-time control.

This release adds support for ROS 2 Jazzy LTS (while still supporting ROS Humble), along with significant improvements to teleoperation, motion planning, developer tooling, and robot application workflows. MoveIt Pro now includes new joint-space and Cartesian-space motion planners that outperform previous implementations, improving cycle time, robustness, and industry-required reliability. See the full benchmarking comparison for details.

MoveIt Pro is developed by the team behind MoveIt 2, and our goal is to make it easier for robotics teams to build and deploy real-world manipulation systems using ROS. Many organizations in manufacturing, aerospace, logistics, agriculture, industrial cleaning, and research use MoveIt Pro to accelerate development without needing to build large amounts of infrastructure from scratch.

What’s new

Improved real-time control and teleoperation with Joint Jog

MoveIt Pro now includes a new “Joint Jog” teleoperation mode for controlling robots directly from the web UI. This replaces the previous MoveIt Servo based teleoperation implementation and introduces continuous collision checking, configurable safety factors, and optional link padding for safer manual control during debugging or demonstrations.

Scan-and-plan workflows

New scan-and-plan capabilities allow robots to scan surfaces with a sensor and automatically generate tool paths for tasks like spraying, sanding, washing, or grinding. These workflows make it easier to build surface-processing applications.


New Python APIs for MoveIt Pro Core

New low-level Python APIs expose the core planners, solvers, and controllers directly, enabling developers to build custom applications outside of the Behavior Tree framework. These APIs provide fine-grained control over motion planning and kinematics, including advanced features like customizable nullspace optimization and path constraints.

Improved motion planning APIs

Several updates improve flexibility for motion generation, including improved path inverse kinematics, orientation tracking as a nullspace cost, customizable nullspace behavior, and tunable path deviation tolerances.

Developer productivity improvements

The MoveIt Pro UI and Behavior Tree tooling received a number of improvements to make debugging and application development faster, including a redesigned UI layout and improved editing workflows, Behavior Tree editor improvements such as search and node snapping, and better debugging tools including TF visualization and alert history.

Expanded Library of Reusable Manipulation Skills

MoveIt Pro also includes a large library of reusable robot capabilities implemented as thread-safe Behavior Tree nodes, allowing developers to compose complex manipulation applications from modular building blocks instead of writing large amounts of robotics infrastructure from scratch. See our Behaviors Hub to explore the 200+ available Behaviors.


Built for the ROS ecosystem

MoveIt Pro integrates with the broader ROS ecosystem, including standard ROS drivers and packages. PickNik has been deeply involved in the MoveIt project since its early development, and we continue investing heavily in open-source robotics such as developing many ROS drivers for major vendors.

Learn more

Full release notes:
https://docs.picknik.ai/release-notes/

We’d love feedback from the ROS community, and we’re excited to see what developers build with these new capabilities. Contact us to learn more.

4 posts - 3 participants

Read full topic

by davetcoleman on April 01, 2026 04:45 PM

[Policy Change] Detailed Standards for REP-2026-04 (Lyrical Enforcement)

Hi everyone,

Following up on the recent announcement regarding the Lyrical Luth release requirements, the PMC has finalized the automated enforcement protocols. To ensure our May release remains on schedule, we are providing expanded guidelines and examples for the new rhyme-lint and README.shanty checks.

Effective immediately, all pull requests targeting the rolling or lyrical branches must pass these poetic audits.

1. The rhyme-lint Mandatory CI Check

All pull requests will now trigger a rhyme-lint action. If your commit message lacks proper meter or rhyme, the build will fail with a 403: UNPOETIC_CONTRIBUTION error.

Accepted Commit Styles:

  • The Heroic Couplet (for Security/Bug Fixes):
fix: A buffer overflow was found in C,
We've locked the heap to keep the memory free.
  • Iambic Pentameter (for Feature Additions):
feat: The twenty-standard now we must embrace,
To bring C++20 speed to every space.
  • The Middleware Haiku (for RMW Updates):
Packets drift like leaves,
The middleware finds the path,
Silence in the logs.

2. The README.shanty Documentation Standard

Any new package added to the core must include a README.shanty file. This ensures our documentation can be easily memorized and sung during long deployment cycles or deep-sea robotics missions.

  • Note: Harmonies are optional but encouraged for Tier-1 platforms.

Example: README.shanty for rcl::Buffer

(To the tune of “The Wellerman”)

There once was a node that sent a frame,
Without a copy or a name,
The CPU was much to blame,
For latency so high! (HUH!)

Soon may the Zero-Copy come,
To bring us throughput, megabytes, and fun,
When the data transfer’s done,
We’ll take our leave and go!
We used the vendor’s memory backend,
A pointer sent to every friend,
The bandwidth limit met its end,
Beneath the Lyrical sky!


3. The Lyrical Luth Rhyming Dictionary

We recognize that many maintainers may find this transition challenging. To assist, the PMC has curated an initial dictionary of “Technical Rhymes” to help you pass CI.

  • Node: Code, Mode, Load, Road (“A lonely node / with heavy load.”)
  • DDS: Success, Progress, Finesse (“Tune the DDS / with pure finesse.”)
  • Topic: Myopic, Tropic, Microscopic (“A hidden topic / so microscopic.”)
  • RMW: Now, How, Allow, Brow (“The RMW / we fix it now.”)
  • Linter: Splinter, Winter, Printer (“The static linter / cold as winter.”)
  • Pointer: Anointer, Appointer (“The null pointer / a soul-disappointer.”)
  • Humble: Rumble, Stumble, Grumble (“Backported from Humble / without a stumble.”)

Compliance and “ROS-ffice Hours”

We understand this is a significant shift in our development workflow, but we believe it is necessary to harmonize our ecosystem. To help with the transition, our upcoming “ROS-ffice Hours” sessions will be dedicated to bardic troubleshooting.

Let’s make this May the most harmonious release in robotics history.

5 posts - 4 participants

Read full topic

by mjcarroll on April 01, 2026 01:00 PM

Custom Capabilities in Transitive Robotics | Cloud Robotics WG Meeting 2026-04-13

Please come and join us for this coming meeting from Mon, Apr 13, 2026 4:00 PM UTC to Mon, Apr 13, 2026 5:00 PM UTC, where we plan to continue our Transitive Robotics tryout with one of the more advanced features: writing and deploying a custom capability. This feature allows customers to write their own custom code and deploy it to their robots alongside the features available directly from Transitive Robotics.

Last session, we tried running Transitive Robotics on a Turtlebot. We managed to remotely operate the robot, and also set up Maps as a capability, which unfortunately didn’t work due to an incompatibility with ROS 2 Jazzy (support has since been added for Jazzy). If you’re interested in watching the meeting, it is available on YouTube.

The meeting link for the next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications, or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

1 post - 1 participant

Read full topic

by mikelikesrobots on April 01, 2026 08:52 AM

March 31, 2026
Upcoming RMW Feature Freeze - April 6th, 2026 - ROS Lyrical

Hi all,

On Tue, Apr 7, 2026 6:59 AM UTC, we will freeze all RMW-related packages to prepare for the upcoming Lyrical Luth release on Fri, May 22, 2026 7:00 AM UTC.

Once this freeze takes effect, we will not accept new features to the RMW packages until Lyrical branches from ROS Rolling. This restriction applies to the following packages and vendor packages:

We still welcome bug fixes after the freeze date.

Find more information on the Lyrical Luth release timeline here: ROS 2 Lyrical Luth (codename ‘lyrical’; May, 2026).

5 posts - 2 participants

Read full topic

by sloretz on March 31, 2026 03:29 PM

ROS2 Launch File Validation

Introducing an XML launch file schema

XSD schema for validating ROS2 XML launch files.
Catch syntax errors before runtime and get IDE support.

Why

For package.xml we have had a schema for years.

But we found our muscle memory often typing type= instead of exec=,
or $(find my_pkg) instead of $(find-pkg-share my_pkg).

And we could unit-test the node all we wanted; these errors only popped up in integration tests, or even on the robot itself.
Would it not be nice if your editor already warned you about it?

How

Embed in launch file

Start your launchfile like this:

<?xml version="1.0"?>
<?xml-model href="https://nobleo.github.io/ros2_launch_validation/ros2_launch.xsd" schematypens="http://www.w3.org/2001/XMLSchema"?>

<launch>

Command-line validation

Quickstart! Validate all your launch xml files in your workspace right now!

xmllint --noout --schema <(curl -s https://nobleo.github.io/ros2_launch_validation/ros2_launch.xsd) **/*.launch.xml

This was verified internally and on some larger public repositories like autoware. Even found an issue :slight_smile:
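If you prefer to run the same check from Python, for example as a test in CI, here is a minimal sketch using lxml (the schema URL is the one above; the file globbing and paths are assumptions):

from pathlib import Path
from urllib.request import urlopen

from lxml import etree

SCHEMA_URL = 'https://nobleo.github.io/ros2_launch_validation/ros2_launch.xsd'


def main():
    # Download and parse the XSD once, then validate every launch XML file in the tree.
    schema = etree.XMLSchema(etree.parse(urlopen(SCHEMA_URL)))
    for launch_file in Path('.').rglob('*.launch.xml'):
        doc = etree.parse(str(launch_file))
        status = 'OK' if schema.validate(doc) else schema.error_log
        print(f'{launch_file}: {status}')


if __name__ == '__main__':
    main()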

6 posts - 2 participants

Read full topic

by Timple on March 31, 2026 07:09 AM

March 30, 2026
RFC: Open standard for robot-to-human light signaling — looking for technical feedback from ROS 2 developers

Hi everyone,

I’m working on an open standard called LSEP (Luminae Signal Expression Protocol) — a state machine specification for how robots communicate intent, awareness, and safety states to humans through light signals.

The problem it solves:

Most robotic platforms implement ad-hoc LED patterns with no shared semantics. Robot A blinks blue for “idle,” Robot B blinks blue for “navigating.” There’s no interoperability, and no way for a human in a shared workspace to learn one signal language that transfers across platforms.

LSEP defines a modular 9-state architecture: 6 Core states (IDLE, AWARENESS, INTENT, CARE, CRITICAL, THREAT) and 3 Extended states (MED_CONF, LOW_CONF, INTEGRITY) with deterministic mappings from sensor inputs like Time-to-Collision (TTC) to signal outputs. The full spec is open: https://lsep.org

Where ROS 2 comes in:

We’ve designed LSEP to run as an isolated safety node — it reads from your perception pipeline (TTC, proximity, sensor health) and publishes signal commands. It doesn’t touch your navigation stack. The architecture pattern uses lifecycle nodes to keep the signaling guardrail separate from autonomy logic.
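To make the integration pattern concrete, here is a minimal rclpy sketch of such an isolated signaling node: it maps a time-to-collision input to a state output. The topic names, message types, and thresholds are illustrative assumptions and are not taken from the LSEP spec (a real integration would presumably use a lifecycle node and the spec’s own mappings).

import rclpy
from rclpy.node import Node
from std_msgs.msg import Float32, String


class LsepSignalNode(Node):
    """Isolated signaling node: perception in (TTC), signal state out."""

    def __init__(self):
        super().__init__('lsep_signal_node')
        self.pub = self.create_publisher(String, 'signal_state', 10)
        self.create_subscription(Float32, 'time_to_collision', self.on_ttc, 10)

    def on_ttc(self, msg: Float32):
        ttc = msg.data
        # Hypothetical deterministic mapping; real thresholds would come from the LSEP spec.
        if ttc < 1.0:
            state = 'CRITICAL'
        elif ttc < 3.0:
            state = 'INTENT'
        elif ttc < 6.0:
            state = 'AWARENESS'
        else:
            state = 'IDLE'
        self.pub.publish(String(data=state))


def main():
    rclpy.init()
    rclpy.spin(LsepSignalNode())


if __name__ == '__main__':
    main()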

What I’m looking for:

We’re running a free Beta program for 20 ROS 2 developers who want to stress-test the integration. No cost — your payment is brutal, unfiltered technical feedback and (optionally) a short write-up on how it fits into your stack.

The program covers:

- Translating TTC and proximity data to deterministic state machine outputs

- EU AI Act compliance layers (Art. 9 & 50) for high-risk physical AI transparency

- LSEP core & extended states: mechanics of the 9-state multimodal standard

- ROS 2 integration: isolating the LSEP safety node from your navigation stack

- Sensor fusion resilience: hysteresis and fallback patterns for sensor dropouts

Not looking for:

This isn’t a pitch. I’m not selling anything here. I’m looking for the people who actually build these systems to tell me where LSEP breaks, what’s missing, and what’s naive. The harshest feedback is the most useful.

Full spec: https://lsep.org

Beta registration: https://www.experiencedesigninstitute.ch

— Nemanja Galić

2 posts - 2 participants

Read full topic

by NemanjaGalic on March 30, 2026 04:10 PM

March 28, 2026
PLCnext ROS Bridge: Enabling Hardware Interoperability Between Industrial PLCs and ROS

For developers already working with ROS, the integration of industrial fieldbuses, I/Os, and functional safety into robotic applications often introduces unexpected challenges. ROS offers a flexible and modular software framework, although connecting it to industrial automation hardware typically requires additional integration layers and specialized knowledge.

This led to the idea of creating a solution that allows ROS developers to leverage a PLC where it excels, for example in deterministic control, industrial communication, and safety, while high performance computation and complex logic remain handled within ROS.

PLCnext Technology Architecture Overview

PLCnext Controls run PLCnext Linux, a real-time capable operating system that hosts the PLCnext Runtime. The Runtime manages deterministic process data and stores it in the Global Data Space (GDS).

Key architectural components:

  • PLCnext Linux: Yocto‑based embedded Linux
  • PLCnext Runtime (tasks, data handling, Axioline integration): Provides deterministic processing and the Global Data Space
  • Global Data Space (GDS): Central storage for process variables accessible from PLC programs and system apps
  • PLCnext Apps: Packaged software components that can be installed on the controller

PLCnext ROS Bridge

Concept

At its core, the PLCnext ROS Bridge is a custom ROS node with dedicated services running inside a Docker container, packaged as a PLCnext App. It provides a bidirectional communication gateway between the PLCnext Global Data Space (industrial side) and ROS topics (robotics side).

To illustrate this, consider a motor connected to the PLC via EtherCAT/FSoE or PROFINET/PROFIsafe. The motor, along with its associated safety functions, can be managed through simple PLC logic and represented by a set of variables. Depending on the implementation, these variables, such as setpoints, command velocities, etc., can be exposed to ROS. When the navigation stack publishes a command velocity, the ROS Bridge, as a subscriber to this topic, writes the received values to the corresponding variable on the PLC side. Likewise, information such as safety status or system state can be sent from the PLC to ROS and made available through a defined topic.
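As a rough illustration of that data flow (a minimal sketch, not the generated bridge code), the node below subscribes to cmd_vel and forwards the values to PLC-side variables through a placeholder write function that stands in for the gRPC call to the Global Data Space; the instance paths shown are examples only.

import rclpy
from rclpy.node import Node
from geometry_msgs.msg import Twist


def write_gds(instance_path: str, value: float):
    """Placeholder for the gRPC write to a GDS variable on the PLC."""
    print(f'write {instance_path} = {value:.3f}')


class CmdVelToPlc(Node):
    """Forwards commanded velocities from ROS to PLC-side setpoint variables."""

    def __init__(self):
        super().__init__('cmd_vel_to_plc')
        self.create_subscription(Twist, 'cmd_vel', self.on_cmd_vel, 10)

    def on_cmd_vel(self, msg: Twist):
        # Example instance paths; the real paths come from the Interface Description File.
        write_gds('Arp.Plc.Eclr/MainInstance.SetpointLinearX', msg.linear.x)
        write_gds('Arp.Plc.Eclr/MainInstance.SetpointAngularZ', msg.angular.z)


def main():
    rclpy.init()
    rclpy.spin(CmdVelToPlc())


if __name__ == '__main__':
    main()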

Commissioning Workflow

The ROS Bridge Node is generated through an automated code-generation process. This process is driven by the Interface Description File (IDF), which defines the PLC instance paths (variables) that should be exposed to ROS.

A typical build process performs the following steps:

  1. Build the ROS packages
    • Parse the IDF and generate the source code for the topics, publishers, and subscribers
    • Build the ROS node
  2. Place the resulting binaries and gRPC dependencies into a Docker image with a minimal ros-core installation.
  3. Package the Docker image, together with required metadata, into a read-only PLCnext App.

The resulting App can be deployed to a PLCnext Controller using the Web-Based Management (WBM) interface. While it is possible to build everything in a local environment, the project is designed to be built via CI/CD. An example pipeline can also be found in the GitHub repository.

Runtime Behaviour

After installation, the App starts the container defined via the compose file. Inside this container, the generated ROS Node connects to the Global Data Space using the built gRPC client and then exposes the selected PLC variables via ROS publishers and subscribers. This enables ROS developers to integrate automation components, such as sensors, actuators, I/O modules, and fieldbus devices, into a ROS-based architecture through the GDS. Moreover, the Bridge sets up a set of services that enable users to read and write information at runtime.

Further Reading

More Information about the PLCnext Technology:

by Vishnuprasad Prachandabhanu on March 28, 2026 05:00 AM

March 27, 2026
Questions on Zero-Copy for Variable-Size Messages (PointCloud2) with Iceoryx in ROS 2

Hi everyone,

I am currently working on optimizing high-bandwidth sensor data transmission (specifically LiDAR point clouds) using ROS 2 and Iceoryx for zero-copy communication.

I have successfully set up the Iceoryx environment and confirmed zero-copy works for fixed-size types. However, I am facing challenges when applying this to variable-size messages, such as sensor_msgs/msg/PointCloud2.

As I understand it, Iceoryx typically requires pre-allocated memory pools with fixed chunks. In the case of PointCloud2, the data size can vary depending on the number of points returned by the LiDAR (in my case, around 5.2 MB per message).

I have two specific questions:

1. Best practices for variable-size data like PointCloud2

How should we handle messages where the size is not strictly fixed at compile-time while still maintaining zero-copy benefits? Should we always pre-allocate the “worst-case” maximum size for the underlying buffers? If anyone has implemented this for sensor_msgs/msg/PointCloud2 or similar dynamic types, I would appreciate any advice or examples.

2. Tuning RouDi Configuration (size and count)

Regarding the roudi_config.toml (or the RouDi memory pool setup), what is the general rule of thumb for determining the optimal size and count?

For high-resolution LiDAR data:

  • How do you balance between the number of chunks (count) and the buffer size for each chunk to avoid memory exhaustion without being overly wasteful?

  • Are there any common pitfalls when setting these values for a system with multiple subscribers?

I’ve already got Iceoryx installed and basic IPC working, but I want to ensure my configuration is production-ready for large-scale sensor data.
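For reference, my current thinking is that a worst-case-sized mempool entry in roudi_config.toml would look roughly like the sketch below; the values are illustrative assumptions for ~5.2 MB payloads plus header margin, not a tested recommendation.

# Illustrative RouDi memory pool configuration (values are assumptions, not a recommendation):
# a pool of small chunks for low-bandwidth topics, plus a pool sized for the
# worst-case PointCloud2 payload (~5.2 MB) with some header margin.
[general]
version = 1

[[segment]]

[[segment.mempool]]
size = 16384        # small chunks for housekeeping topics
count = 256

[[segment.mempool]]
size = 6291456      # ~6 MiB chunks covering the 5.2 MB worst case
count = 16          # should cover publisher history plus all subscribers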

Thank you in advance for your insights!

4 posts - 3 participants

Read full topic

by seodayeon416 on March 27, 2026 04:14 PM

WEBINAR: Accelerating Robotics Development with Qt Robotics Framework

Join Qt Group Webinar

Accelerating Robotics Development with Qt Robotics Framework

Qt Robotics Framework (QRF) introduces a fast, reliable way to connect Qt‑based applications (QML and C++) with ROS2 middleware. By automatically generating strongly‑typed Qt/QML bindings from ROS2 interface definitions, QRF enables robotics teams to integrate control, visualization, and simulation capabilities with minimal boilerplate and maximum safety.

In this webinar, Qt Group’s engineers and industry experts demonstrate how QRF simplifies prototyping, reduces integration complexity, and helps teams move rapidly from concept to production.

Whether you’re building robot controllers, diagnostics dashboards, or simulation environments, Qt Robotics Framework reduces the development cycle and improves reliability across your robotics stack.

Speakers:

  • Michele Rossi, Director, Industry, Qt Group

  • Przemysław Nogaj, Head of HMI Technology, Spyrosoft

  • Tommi Mänttäri, Senior Manager, R&D, Qt Group

Accelerating Robotics Development with Qt Robotics Framework

1 post - 1 participant

Read full topic

by Matteo_Capelletti on March 27, 2026 04:12 PM

March 26, 2026
ROS2 Studio — GUI tool for performance monitoring, bag operations and system dashboard

Hi ROS community! :waving_hand:

I’d like to share a tool I built — ROS2 Studio, a single GUI that brings together the most common ROS2 monitoring and bag operations in one place.

What is ROS2 Studio?

ROS2 Studio is a PyQt5-based desktop GUI that runs as a native ROS2 CLI extension (ros2 studio). Instead of juggling multiple terminal windows, everything is accessible from one interface.

Features

  • :bar_chart: Performance Monitor — real-time CPU, memory, and frequency graphs for any topic or node
  • :red_circle: Bag Recorder — multi-topic selection with custom save location
  • :play_button: Bag Player — playback with adjustable rate (0.1x–10x) and loop controls
  • :counterclockwise_arrows_button: Bag to CSV Converter — full message deserialization via rosbag2_py to CSV
  • :control_knobs: System Dashboard — CPU, memory, disk, network stats, ROS2 entities, and process monitor

Installation

cd ~/ros2_ws/src
git clone https://github.com/Sourav0607/ROS2-STUDIO
cd ~/ros2_ws
colcon build --packages-select ros2_studio
source install/setup.bash
ros2 studio

Compatibility

Tested on ROS2 Humble and Jazzy on Ubuntu 22.04.

Links

Feedback, issues, and contributions are very welcome! I’m actively maintaining this and plan to add more features based on community input.

— Sourav

1 post - 1 participant

Read full topic

by Sourav24 on March 26, 2026 02:58 PM

Remote Control of Robotic Arms – Using a Standard Gamepad

Gamepad Control for PiPER Manipulator

1. Abstract

This document describes intuitive control of the PiPER robotic arm using a standard gamepad. With a common gamepad, you can operate the PiPER manipulator in a visualized environment, delivering a precise and intuitive control experience.

Tags

PiPER Manipulator, Gamepad Teleoperation, Joint Control, Pose Control, Gripper Control, Forward & Inverse Kinematics

2. Repositories

3. Function Demo


4. Environment Setup

  • OS: Ubuntu 20.04 or later
  • Python Environment: Python 3.9 or later. Anaconda or Miniconda is recommended

Clone the project and enter the root directory:

git clone https://github.com/kehuanjack/Gamepad_PiPER.git
cd Gamepad_PiPER

Install common dependencies and kinematics libraries (choose one option; pytracik is recommended):

Option 1: Based on pinocchio

(Python == 3.9; requires piper_ros and sourcing the ROS workspace, otherwise meshes will not be found)

conda create -n test_pinocchio python=3.9.* -y
conda activate test_pinocchio
pip3 install -r requirements_common.txt --upgrade
conda install pinocchio=3.6.0 -c conda-forge
pip install meshcat
pip install casadi

In main.py and main_virtual.py, select: from src.gamepad_pin import RoboticArmController

Option 2: Based on PyRoKi

(Python >= 3.10)

conda create -n test_pyroki python=3.10.* -y
conda activate test_pyroki
pip3 install -r requirements_common.txt --upgrade
pip3 install pyroki@git+https://github.com/chungmin99/pyroki.git@f234516

In main.py and main_virtual.py, select: from src.gamepad_limit import RoboticArmController or from src.gamepad_no_limit import RoboticArmController

Option 3: Based on cuRobo

(Python >= 3.8; CUDA 11.8 recommended)

conda create -n test_curobo python=3.10.* -y
conda activate test_curobo
pip3 install -r requirements_common.txt --upgrade
sudo apt install git-lfs && cd ../
git clone https://github.com/NVlabs/curobo.git && cd curobo
pip3 install "numpy<2.0" "torch==2.0.0" pytest lark
pip3 install -e . --no-build-isolation
python3 -m pytest .
cd ../Gamepad_PiPER

In main.py and main_virtual.py, select: from src.gamepad_curobo import RoboticArmController

Option 4: Based on pytracik

(Python >= 3.10)

conda create -n test_tracik python=3.10.* -y
conda activate test_tracik
pip3 install -r requirements_common.txt --upgrade
git clone https://github.com/chenhaox/pytracik.git
cd pytracik
pip install -r requirements.txt
sudo apt install g++ libboost-all-dev libeigen3-dev liborocos-kdl-dev libnlopt-dev libnlopt-cxx-dev
python setup_linux.py install --user

In main.py and main_virtual.py, select: from src.gamepad_trac_ik import RoboticArmController

5. Execution Steps

  1. Connect manipulator and activate CAN interface: sudo ip link set can0 up type can bitrate 1000000

  2. Connect gamepad: connect the gamepad to the PC via USB or Bluetooth.

  3. Launch control script: run python3 main.py or python3 main_virtual.py in the project directory. It is recommended to test with main_virtual.py first in simulation mode.

  4. Verify gamepad connection: check console output to confirm the gamepad is recognized.

  5. Web visualization: open a browser and go to http://localhost:8080 to view the manipulator status.

  6. Start control: operate the manipulator according to the gamepad mapping.

6. Gamepad Control Instructions

6.1 Button Mapping

  • HOME: short press: connect / disconnect manipulator; long press: none
  • START: short press: switch high-level control mode (Joint / Pose); long press: switch low-level control mode (Joint / Pose)
  • BACK: short press: switch low-level command mode (Position-Velocity 0x00 / Fast Response 0xAD); long press: none
  • Y: short press: go to home position; long press: none
  • A: short press: save current position; long press: clear current saved position
  • B: short press: restore previous saved position; long press: none
  • X: short press: switch playback order; long press: clear all saved positions
  • LB: short press: increase speed factor (high-level); long press: decrease speed factor (high-level)
  • RB: short press: increase movement speed (low-level); long press: decrease movement speed (low-level)

6.2 Joystick & Trigger Functions

  • Left Joystick. Joint mode: J1 (base rotation) left/right, J2 (shoulder) up/down. Pose mode: end-effector X / Y translation
  • Right Joystick. Joint mode: J3 (elbow) up/down, J6 (wrist rotation) left/right. Pose mode: end-effector Z translation and Z-axis rotation
  • D-Pad. Joint mode: J4 (wrist yaw) left/right, J5 (wrist pitch) up/down. Pose mode: end-effector X / Y-axis rotation
  • Left Trigger (LT): close gripper (both modes)
  • Right Trigger (RT): open gripper (both modes)

6.3 Special Functions

6.3.1 Gripper Control

  • Gripper opening range: 0–100%
  • Quick toggle: When fully open (100%) or fully closed (0%), a quick press and release of the trigger toggles the state.

6.3.2 Speed Control

  • Speed factor: 0.25x, 0.5x, 1.0x, 2.0x, 3.0x, 4.0x, 5.0x (adjust with LB)
  • Movement speed: 10%–100% (adjust with RB)

6.3.3 Position Memory

  • Supports saving multiple waypoints
  • Supports forward and reverse playback

Notes

  • You may run main_virtual.py first to test in simulation.
  • For first-time use, start with low speed and increase gradually after familiarization.
  • Keep a safe distance during operation. Do not approach the moving manipulator.
  • Numerical solutions may cause large joint jumps near singularities — maintain safe distance.
  • Fast response mode (0xAD) is dangerous. Use with extreme caution and keep clear.
  • If using pinocchio, source the ROS workspace of the manipulator in advance, otherwise meshes will not be detected.

1 post - 1 participant

Read full topic

by Agilex_Robotics on March 26, 2026 09:51 AM

March 24, 2026
FusionCore, which is a ROS 2 Jazzy sensor fusion package (robot_localization replacement)

Hey everyone,
I’ve been working on FusionCore for the last few months. It’s a ROS 2 Jazzy sensor fusion package that aims to bridge the gap left by the deprecation of robot_localization.

There wasn’t anything user-friendly available for ROS 2 Jazzy. It merges IMU, wheel encoders, and GPS/GNSS into a single, reliable position estimate at 100 Hz. No need for manual covariance matrices; just one YAML config file.

  • It uses an Unscented Kalman Filter (UKF) with a complete 3D state, and it’s not just a port of robot_localization.
  • It features native GNSS fusion in ECEF coordinates, so you won’t run into UTM zone issues.
  • It supports dual-antenna heading right out of the box.
  • It automatically estimates IMU gyroscope and accelerometer bias.
  • It includes HDOP/VDOP quality-aware noise scaling, which means bad GPS fixes are automatically down-weighted (see the sketch after this list).
  • It’s under the Apache 2.0 license, making it commercially safe.
  • And it’s built natively for ROS 2 Jazzy.
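The quality-aware down-weighting idea can be illustrated with a small numpy sketch; this is not FusionCore’s code, and the quadratic scaling rule below is an assumption chosen for illustration only.

import numpy as np


def scale_gps_covariance(base_cov: np.ndarray, hdop: float, vdop: float) -> np.ndarray:
    """Inflate a 3x3 GPS measurement covariance by DOP quality factors,
    so that poor fixes carry less weight in the filter update."""
    scaled = base_cov.copy()
    scaled[0, 0] *= max(1.0, hdop) ** 2  # east
    scaled[1, 1] *= max(1.0, hdop) ** 2  # north
    scaled[2, 2] *= max(1.0, vdop) ** 2  # up
    return scaled


# Example: a fix with HDOP 4.0 gets a 16x larger horizontal variance.
print(scale_gps_covariance(np.eye(3), hdop=4.0, vdop=6.0))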

GitHub: https://github.com/manankharwar/fusioncore

I respond to issues within 24 hours. If you’re working on a wheeled robot with GPS on ROS 2 Jazzy and hit problems, open an issue or reply here.

6 posts - 3 participants

Read full topic

by manankharwar on March 24, 2026 11:17 PM


Powered by the awesome: Planet