March 20, 2026
TechSolstice '26 (Annual Technical Fest of MIT Bengaluru)

Hello ROS Community,

TechSolstice '26 is the annual technology festival hosted by the Manipal Institute of Technology (MIT), Bengaluru, featuring a diverse lineup of competitive robotics events.

We invite students, robotics enthusiasts, and builders to participate in a series of hands-on challenges designed to test speed, control systems, autonomous navigation, and combat robotics.

Total Prize Pool: ₹2.6 Lakhs+

Robotics Events (further details can be found on the website)
• Robo Race
• Cosmo Clench
• Maze Runner
• Line Follower
• Robo Wars

Format & Timeline
Event Dates: 27 March – 29 March 2026

Participants will compete on-site across multiple rounds depending on the event format, with final winners determined through performance-based evaluation.

Participants are encouraged to utilize embedded systems, ROS-based architectures, simulation tools, and custom-built hardware where applicable.

Further details and registration:
https://techsolstice.mitblr.in

We look forward to participation from the robotics community.

1 post - 1 participant

Read full topic

by Atharva_Maik on March 20, 2026 04:38 PM

March 19, 2026
Transitive Robotics Tryout | Cloud Robotics WG Meeting 2026-03-23

Please come and join us for this coming meeting on Mon, Mar 23, 2026, from 4:00 PM to 5:00 PM UTC, where we plan to try out Transitive Robotics. Transitive Robotics is a service that allows users to deploy and manage robots, providing full-stack robot capabilities. Capabilities include data capture and storage, which makes Transitive Robotics a useful case study for our focus on Logging & Observability.

Last session, we continued our tryout of the Canonical Observability Stack (COS) from the previous meeting. We succeeded in hosting the full stack, viewing the public pages, and connecting a simulated robot to the stack, from which we could view logs and system statistics. If you’re interested in watching the recorded part of the meeting, it is available on YouTube.

The meeting link for the next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications, or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

1 post - 1 participant

Read full topic

by mikelikesrobots on March 19, 2026 04:35 PM

Mastering Nero – MoveIt2 Part II


In the previous session, we built a complete MoveIt2 package from a URDF model using the MoveIt Setup Assistant, and realized motion planning and visual control of the robotic arm.

In this session, we will explain how to set up a co-simulation environment for MoveIt2 and Isaac Sim. By configuring the ROS Bridge, adjusting hardware interface topics, and integrating the URDF model, we will achieve seamless connection between the simulator and motion planning, providing a complete practical solution for robot algorithm development and system integration.

Abstract

Co-simulation of MoveIt2 and Isaac Sim

Tags

ROS2, MoveIt2, robotic arm, Nero

Repositories

Operating Environment

System: Ubuntu 22.04
ROS Version: Humble
Isaac Sim Version: 5.1

Download USD Model

We use the Nero USD model provided by AgileX Robotics:

cd ~/nero_ws/src
git clone https://github.com/agilexrobotics/agx_arm_sim

If you haven’t installed Isaac Sim or want to import your own URDF model, refer to:

Isaac_Sim Import PiPER URDF

Launch Isaac Sim

Navigate to the Isaac Sim folder and run the selector script; enable the ROS Bridge extension, then click Start to launch Isaac Sim:

cd isaac-sim-standalone-5.1.0-linux-x86_64/
./isaac-sim.selector.sh

Then drag and drop the newly downloaded USD model into Isaac Sim to open it:

In the USD file, you need to add an ActionGraph for communication with the ROS side. The ActionGraph is as follows:

Configure ActionGraph

articulation_controller

Modify targetPrim according to actual conditions; targetPrim is generally /World/nero_description/base_link:

ros2_subscribe_joint_state

Modify topicName according to actual conditions; topicName must correspond to the URDF, here it is isaac_joint_commands:

ros2_publish_joint_state

Modify targetPrim and topicName according to actual conditions; targetPrim is generally /World/nero_description/base_link; topicName must correspond to the URDF, here it is isaac_joint_states:

After starting the simulation, run ros2 topic list in a terminal; the /isaac_joint_commands and /isaac_joint_states topics should now be visible.

Modify MoveIt Package

Open nero_description.ros2_control.xacro and add topic parameters:

gedit ~/nero_ws/src/nero_moveit2_config/config/nero_description.ros2_control.xacro

            <hardware>
                <!-- By default, set up controllers for simulation. This won't work on real hardware -->
                <!-- <plugin>mock_components/GenericSystem</plugin> -->
                <plugin>topic_based_ros2_control/TopicBasedSystem</plugin>
                <param name="joint_commands_topic">/isaac_joint_commands</param>
                <param name="joint_states_topic">/isaac_joint_states</param>
            </hardware>

Save the file, rebuild the workspace, and launch MoveIt2:

cd ~/nero_ws
colcon build
source install/setup.bash
ros2 launch nero_moveit2_config demo.launch.py

1 post - 1 participant

Read full topic

by Agilex_Robotics on March 19, 2026 08:36 AM

March 18, 2026
NWO Robotics API (`pip install nwo-robotics`) - Production Platform Built on Xiaomi-Robotics-0

My name is Ciprian Pater, and I’m reaching out on behalf of PUBLICAE (formerly a student firm at UiA Nyskaping Incubator) to introduce you to NWO Robotics Cloud (nworobotics.cloud) - a comprehensive production-grade API platform we’ve built that extends and enhances the capabilities of the groundbreaking Xiaomi-Robotics-0 model. While Xiaomi-Robotics-0 represents a remarkable achievement in Vision-Language-Action modeling, we’ve identified several critical gaps between a research-grade model and a production-ready robotics platform. Our API addresses these gaps while showcasing the full potential of VLA architecture.

(Attaching some screenshots below for UX reference).

Technical whitepaper at https://www.researchgate.net/publication/401902987_NWO_Robotics_API_WHITEPAPER

NWO Robotics CLI COMMAND GROUPS

Install instantly via pip and start in seconds:

pip install nwo-robotics

Quick Start: nwo auth login → Enter your API key from: nworobotics.cloud → nwo robot “pick up the box”

═══════════════════════════════

• nwo auth - Login/logout with API key

• nwo robot - Send commands, health checks, learn params

• nwo models - List models, preview routing decisions

• nwo swarm - Create swarms, add agents

• nwo iot - Send commands with sensor data

• nwo tasks - Task planning and progress tracking

• nwo learning - Access learning system

• nwo safety - Enable real-time safety monitoring

• nwo templates - Create reusable task templates

• nwo config - Manage CLI configuration, etc.

NWO ROBOTICS API v2.0 - BREAKTHROUGH CAPABILITIES

═══════════════════════════════════════

FEATURE          | TECHNICAL DESCRIPTION
-----------------|------------------------------------------
Model Router     | Semantic classification + 35% latency reduction through intelligent LM selection
Task Planner     | DAG decomposition with topological sorting + checkpoint recovery
Learning System  | Vector database + collaborative filtering for parameter optimization
IoT Fusion       | Kalman-filtered multi-modal sensor streams with sub-10cm accuracy
Enterprise API   | SHA-256 auth, JWT sessions, multi-tenant isolation
Edge Deployment  | 200+ locations, Anycast routing, <50ms latency, 99.99% SLA
Model Registry   | Real-time p50/p95/p99 metrics + A/B testing
Robot Control    | RESTful endpoints with collision detection + <10ms emergency stop

═════════════════

INTELLIGENT MODEL ROUTER (v2.0)

═════════════════

Our multi-model routing system analyzes natural language instructions in real time using semantic classification algorithms, automatically selecting the optimal language model for each specific task type. For OCR tasks, the router selects DeepSeek-OCR-2B with 97% accuracy; for manipulation tasks, it routes to Xiaomi-Robotics-0. This intelligent selection reduces inference latency by 35% while improving task success rates through model specialization.
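As a toy illustration of this kind of routing, a keyword-based classifier could look like the following (the model names are from this post; the routing logic itself is my own simplification, not NWO's semantic classifier):

```python
def route(instruction):
    """Pick a model via naive keyword classification (illustrative only)."""
    text = instruction.lower()
    # OCR-flavored instructions go to the OCR model...
    if any(word in text for word in ("read", "ocr", "text", "label")):
        return "DeepSeek-OCR-2B"
    # ...everything else is treated as a manipulation task.
    return "Xiaomi-Robotics-0"

print(route("read the label on the box"))  # DeepSeek-OCR-2B
print(route("pick up the box"))            # Xiaomi-Robotics-0
```

A production router would replace the keyword test with an embedding-based classifier, but the control flow is the same: classify first, then dispatch.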

═════════════════

TASK PLANNER (Layer 3 Architecture)

═════════════════

The Task Planner decomposes high-level natural language instructions into executable subtasks using dependency graph analysis and topological sorting. When a user requests “Clean the warehouse,” the system generates a directed acyclic graph of subtasks (navigate→identify→grasp→transport→place) with estimated durations and parallel execution paths. This hierarchical planning reduces complex mission failure rates by implementing checkpoint recovery at each subtask boundary.
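The decomposition step can be sketched with Python's standard graphlib; the subtask chain below is the one from the “Clean the warehouse” example, while the dependency encoding is my own illustration:

```python
from graphlib import TopologicalSorter

# Subtask -> set of subtasks it depends on (a directed acyclic graph).
subtasks = {
    "identify": {"navigate"},
    "grasp": {"identify"},
    "transport": {"grasp"},
    "place": {"transport"},
}

# Topological sorting yields an executable order; a planner would insert
# a checkpoint at each subtask boundary for recovery.
order = list(TopologicalSorter(subtasks).static_order())
print(order)  # ['navigate', 'identify', 'grasp', 'transport', 'place']
```

With a branching graph instead of a chain, `TopologicalSorter` also exposes `get_ready()` for the parallel execution paths mentioned above.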

═════════════════

LEARNING SYSTEM (Layer 4 - Continuous Improvement)

═════════════════

Our parameter optimization engine maintains a vector database of task execution outcomes, using collaborative filtering algorithms to recommend optimal grip forces, approach velocities, and grasp strategies based on historical performance data. For fragile object manipulation, the system has learned that a 0.28N grip force with a 12cm/s approach velocity yields 94% success rates across 127 similar tasks, automatically adjusting robot parameters without human intervention.
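A minimal sketch of such a parameter lookup, as a nearest-neighbor search over a tiny in-memory “vector database” (the embeddings and history entries are invented for illustration):

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# (task embedding, grip force in N, approach velocity in cm/s, success rate)
history = [
    ([0.9, 0.1, 0.0], 0.28, 12.0, 0.94),  # fragile-object grasps
    ([0.1, 0.9, 0.0], 1.50, 30.0, 0.88),  # rigid-object grasps
    ([0.0, 0.2, 0.8], 0.60, 18.0, 0.91),  # cloth handling
]

def recommend(query, k=1):
    """Return (grip force, velocity) of the k most similar past tasks."""
    ranked = sorted(history, key=lambda h: cosine(query, h[0]), reverse=True)
    return [(force, vel) for _, force, vel, _ in ranked[:k]]

print(recommend([0.85, 0.15, 0.0]))  # [(0.28, 12.0)]
```

A real system would weight neighbors by their success rates rather than taking the single nearest entry, but the retrieval pattern is the same.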

═════════════════

IOT SENSOR FUSION (Layer 2 - Environmental Context)

═════════════════

The API integrates multi-modal sensor streams (GPS coordinates, LiDAR point clouds, IMU orientation, temperature/humidity readings) into the inference pipeline through Kalman-filtered sensor fusion. This environmental awareness enables context-aware decision making - for example, automatically reducing grip force when temperature sensors detect a hot object, or adjusting navigation paths based on real-time LiDAR obstacle detection with sub-10cm accuracy.
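The fusion idea can be illustrated with a textbook scalar Kalman update (my own sketch, not NWO's pipeline; the measurements and variances below are invented):

```python
def kalman_update(x, p, z, r):
    """Fuse measurement z (variance r) into estimate x (variance p)."""
    k = p / (p + r)      # Kalman gain: trust z more when p >> r
    x = x + k * (z - x)  # corrected estimate
    p = (1 - k) * p      # uncertainty shrinks after each fusion step
    return x, p

# Fuse two range readings of the same obstacle from different sensors.
x, p = 2.00, 0.50  # prior estimate (meters) and its variance
for z, r in [(2.10, 0.10), (1.95, 0.05)]:
    x, p = kalman_update(x, p, z, r)

print(x, p)  # estimate stays near 2.0 m with much smaller variance
```

Multi-modal fusion generalizes this to vector states and per-sensor covariance matrices, but each incoming stream is folded in by the same gain-weighted correction.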

═════════════════

ENTERPRISE API INFRASTRUCTURE

═════════════════

We’ve implemented a complete enterprise API layer including X-API-Key authentication with SHA-256 hashing, JWT token-based session management, per-organization rate limiting with token bucket algorithms, and comprehensive audit logging. The system supports multi-tenant deployment with complete data isolation between organizations, enabling commercial deployment scenarios that raw model weights cannot address.

═════════════════

EDGE DEPLOYMENT (Global Low-Latency)

═════════════════

Our Cloudflare Worker deployment distributes inference across 200+ global edge locations using Anycast routing, achieving <50ms response times from anywhere in the world through intelligent geo-routing. The serverless architecture eliminates cold start latency entirely while providing automatic DDoS protection and a 99.99% uptime SLA - critical capabilities for production robotics deployments that require sub-100ms control loop response times.

═════════════════

MODEL REGISTRY & PERFORMANCE ANALYTICS

═════════════════

The Model Registry maintains real-time performance metrics including per-model success rates, p50/p95/p99 latency percentiles, and cost-per-inference calculations across different hardware configurations. This telemetry enables data-driven model selection and automatic A/B testing of model versions, ensuring optimal performance as your Xiaomi-Robotics-0 model evolves.

═════════════════

ROBOT CONTROL API

═════════════════

We provide RESTful endpoints for real-time robot state querying (joint angles, gripper position, battery telemetry) and action execution with safety interlocks. The action execution pipeline includes collision detection through bounding box overlap calculations, emergency stop capabilities with <10ms latency, and execution confirmation through sensor feedback loops - essential safety features absent from the base model inference API.

MULTI-AGENT COORDINATION

Enable multiple robots to collaborate on complex tasks. Master agents break down objectives and distribute work to worker agents with shared memory and handoff zones.

→ Swarm intelligence, task delegation, conflict resolution

FEW-SHOT LEARNING

Robots learn new tasks from just 3-5 demonstrations instead of programming. Skills adapt to user preferences and improve continuously from execution feedback.

→ Learn from demonstrations, skill composition, personalisation.

ADVANCED PERCEPTION

Multi-modal sensor fusion (camera, depth, LiDAR, thermal) with 6DOF pose estimation. Detect humans, recognize gestures, predict motion, and calculate optimal grasp points.

→ 3D scene understanding, human detection, gesture recognition

SAFETY LAYER

Continuous safety validation with 50ms checks. Force/torque limits, human proximity detection, collision prediction, configurable safety zones, and full audit logging for compliance.

→ Real-time monitoring, emergency stop, collision prediction

GESTURE CONTROL

Real-time hand gesture recognition for intuitive robot control. Wave to pause/stop, point to direct attention, draw paths for navigation. Works from 0.5-3 meters with 95%+ accuracy.

→ Wave to stop, point to indicate location

VOICE WAKE WORD

Always-listening voice activation with custom wake words. Natural language command parsing with intent extraction. Supports multiple languages and voice profiles for personalised interactions.

→ “Hey Robot, [command]”

PROGRESS UPDATES

Real-time task progress reporting with time estimation. Subscribable WebSocket streams for live updates. Milestone notifications when tasks reach defined checkpoints.

→ “Task 60% complete, 2 minutes remaining”

FAILURE RECOVERY

Intelligent error recovery with strategy adaptation. If a grasp fails, automatically try different angles, grip forces, or approaches. Escalates to a human operator only after exhausting recovery options.

→ Auto-retry with different angles/strategies

TASK TEMPLATES

Pre-configured task sequences for common workflows. Schedule-based activation with variable substitution. Templates can be nested, parameterized, and shared across robot fleets.

→ “Morning routine”, “Closing procedures”

PHYSICS-AWARE PLANNING

Motion planning with real-world physics simulation. Detects impossible trajectories, unstable grasps, and collision risks before execution. Integrates with MuJoCo and Isaac Sim.

→ Simulate before execute, avoid physics violations

REAL-TIME SAFETY

Runtime safety monitoring with microsecond latency. Dynamically adjusts robot speed based on proximity to humans. Emergency stop with guaranteed response time under 10ms.

→ Continuous monitoring, dynamic speed adjustment

SEMANTIC NAVIGATION

Navigate using natural language landmarks instead of coordinates. Understand spatial relationships (“next to the table”, “behind the sofa”). Dynamic path recalculation when obstacles appear.

Thank you in advance for your consideration and feedback.

Sincere Regards

Ciprian Pater

PUBLICAE / NWO Robotics

+4797521288

1 post - 1 participant

Read full topic

by Ciprian_Pater on March 18, 2026 03:47 PM

JdeRobot Google Summer of Code 2026

Hi folks,

We at the JdeRobot org are participating in Google Summer of Code 2026. All our proposed projects are on open-source robotics, and most of them (7/8) are in ROS 2 related software. They are all described in our ideas list for GSoC-2026, including summaries and illustrative videos.

  • Project #1: PerceptionMetrics: GUI extension and support for standard datasets and models
  • Project #2: Robotics Academy: extend C++ support for more exercises
  • Project #3: Robotics Academy: New power tower inspection using deep learning
  • Project #4: RoboticsAcademy: drone-cat-mouse chase exercise, two controlled robots at the same time
  • Project #5: Robotics Academy: using the Open3DEngine as robotics simulator
  • Project #6: VisualCircuit: Improving Functionality & Expanding the Block Library
  • Project #7: Robotics Academy: Exploring optimization strategies for RoboticsBackend container
  • Project #8: Robotics Academy: palletizing with an industrial robot exercise

Motivated candidates are welcome :slight_smile: Please check the Application Instructions, as we request a Technical Challenge and some interactions in our GitHub repositories before talking to our mentors and submitting your proposal.

Cheers,

JoseMaria

1 post - 1 participant

Read full topic

by jmplaza on March 18, 2026 03:33 PM

March 17, 2026
Introducing the Connext Robotics Toolkit for ROS 2

Hi ROS 2 Community,

I’m pleased to announce that RTI released enhanced support for ROS 2 and rmw_connextdds today. The new Connext Robotics Toolkit makes it much easier for ROS users to take advantage of Connext and DDS features to improve their development experience.

As many of you know, RTI has supported ROS 2 since the very beginning by providing our core DDS implementation at no charge for non-commercial use. The Connext Robotics Toolkit extends that support to our full Connext Professional product. This includes our broader platform around DDS – things like network tuning and debugging tools, system observability, and diverse network support, from shared memory to WAN.

In addition, we’re expanding our free license to include commercial prototyping. This means startups and other product teams building ROS-based systems can now take advantage of Connext at no charge. Starting with production-grade communication infrastructure will make it easier to scale from prototype to deployment.

The Connext Robotics Toolkit is currently available for Kilted Kaiju and will be available for Lyrical Luth upon its release. If you’re exploring ways to leverage ROS in commercial systems or looking at RMW options beyond the default, you can find more details and installation instructions here: Connext Robotics Toolkit for ROS | RTI

Happy to answer questions or discuss with anyone interested.

1 post - 1 participant

Read full topic

by rtidavid on March 17, 2026 07:44 PM

March 16, 2026
Per-robot economic settlement for industrial ROS2 fleets

As ROS2 fleets move into commercial deployments serving external clients, one infrastructure gap is shared economic verification between the fleet operator and their customer. The operator’s internal logs don’t give the client independent verification of what work was completed, leading to manual reconciliation and disputes as fleets scale.

I built a settlement layer that monitors ROS2 lifecycle events and generates verified, timestamped records per robot per completed task. Both operator and client can verify the records independently. Each robot builds a portable work history over time, useful for service billing, equipment valuation, and proving utilization to potential customers.
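For illustration, a tamper-evident per-task record of the kind described might be hashed like this (the field names here are hypothetical; FoundryNet's actual schema lives in the linked client):

```python
import hashlib
import json

def make_record(prev_hash, robot_id, task_id, stamp, outcome):
    """Build a work record chained to the previous one via its hash."""
    body = {
        "prev": prev_hash,     # hash of the previous record (chain link)
        "robot": robot_id,
        "task": task_id,
        "stamp": stamp,        # e.g. time of the lifecycle transition, epoch secs
        "outcome": outcome,    # e.g. "completed"
    }
    # Canonical JSON so operator and client compute identical digests.
    digest = hashlib.sha256(
        json.dumps(body, sort_keys=True).encode()
    ).hexdigest()
    return body, digest

# Operator and client each recompute the digest from the shared fields.
_, h1 = make_record("GENESIS", "amr-07", "pick-0042", 1789500000, "completed")
_, h2 = make_record("GENESIS", "amr-07", "pick-0042", 1789500000, "completed")
assert h1 == h2  # identical inputs verify to the same record hash
```

Chaining each record to the previous hash means neither side can silently alter history without the other's verification failing.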

Already compatible with standard ROS2 lifecycle management. Integration details here: https://github.com/FoundryNet/foundry_net_MINT/blob/main/FoundryNet%20API%20Client/foundry-client.py

Interested in feedback from anyone deploying ROS2 fleets commercially and dealing with the billing side of multi-client operations.

Cheers!

1 post - 1 participant

Read full topic

by FoundryNet on March 16, 2026 07:16 PM

[Show and Tell] ROS 2 Blueprint Studio: Visual Node Editor & Boilerplate Generator (Alpha)

Hi everyone!

Like many of us, I appreciate the power and flexibility of ROS 2, but I’ve always found the amount of manual boilerplate to be a bottleneck for rapid development. Keeping track of all the configuration details, making sure CMakeLists.txt and package.xml are perfectly synced, or manually wiring launch files and topic connections takes a significant amount of time. I wanted to automate this infrastructure setup so I could focus purely on writing the actual robotics logic.

To solve this, I started building ROS 2 Blueprint Studio, a visual node-based editor (inspired by Unreal Engine Blueprints) designed to take the routine work off your shoulders.

Under the Hood (Architecture)

I tried to avoid any “black magic” and stick entirely to standard ROS 2 practices:

1. Code Generation & Build System

The studio doesn’t compile the code itself; it acts as a smart templating engine. Creating a standard node generates a base C++ template. If you duplicate a node (from the palette or canvas), it creates an independent file with a new name and copied code. Modifying the copy doesn’t break the parent. For the actual build, it relies on standard colcon build under the hood.

2. File Watcher & Dependency Tree

To build the dependency tree, I wrote a custom FileWatcher. Before building, it scans the files to check for includes and node communication. For performance, it only parses files that have been modified. (I realize this might theoretically cause “phantom connections” on massive graphs, so I plan to add a forced full-rebuild mode in the future.)

3. Topic Routing (Two Approaches)

Node linking currently works in two modes:

  • Hardcoded (Bottom-Up): If publisher and subscriber topic names are explicitly hardcoded in your C++ or Python files, the UI detects this and automatically draws a visual “locked” wire between them.

  • Visual (Top-Down): You can define the topic name only on the publisher, drag a visual wire to a subscriber, and the FileWatcher will find a special placeholder in the subscriber’s code and automatically replace it with the publisher’s topic name. (Full disclosure: the visual routing is still a bit unstable and not recommended for huge projects yet, but I’m refining it).
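The “hardcoded” detection mode could be approximated by a regex scan over the source files (a minimal sketch of the idea, my own illustration rather than the project's actual FileWatcher code):

```python
import re

# Match the topic string passed to typical rclcpp/rclpy pub/sub calls.
PATTERN = re.compile(
    r'create_(?:publisher|subscription)\w*[<(][^;]*?"([^"]+)"'
)

publisher_src = 'pub_ = create_publisher<std_msgs::msg::String>("/chatter", 10);'
subscriber_src = 'sub_ = create_subscription<std_msgs::msg::String>("/chatter", 10, cb);'

pub_topics = set(PATTERN.findall(publisher_src))
sub_topics = set(PATTERN.findall(subscriber_src))
# Any topic name shared by both sides implies a "locked" wire in the editor.
print(pub_topics & sub_topics)  # {'/chatter'}
```

A real scanner would also have to handle topics built from variables or parameters, which is presumably where the placeholder-based visual mode takes over.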


4. Runtime Environment (Docker)

I chose Docker (osrf/ros:humble-desktop) as the execution environment. Why?

  • Setting up ROS 2 natively on Windows is a special kind of pain.

  • It provides painless deployment and saves you from dependency hell when migrating to future ROS versions.

  • You can send your project folder to someone who doesn’t even have ROS installed, and their system will build and run your entire architecture in just a few clicks.

The Ask: Roast My Architecture

The project is currently in early alpha. Honestly, my biggest doubts right now are around the core architecture and the automated build system (package and launch file generation).

I would be incredibly grateful if experienced ROS architects could take a look at the repo, point out my blind spots, and give me some harsh architectural critique. I’d much rather rebuild the foundation now than drag architectural flaws into a full release.

Source code here: GitHub - NeiroEvgen/ros2-blueprint-studio

Any feedback is highly appreciated!

1 post - 1 participant

Read full topic

by NeiroEvgen on March 16, 2026 11:25 AM

March 15, 2026
mcp-ros2-logs — let AI agents debug your ROS2 logs across nodes

mcp-ros2-logs is an open-source MCP server that merges ROS2 log files from multiple nodes into a unified timeline and exposes query tools for AI agents like Claude, GitHub Copilot, and Cursor.

The problem: ROS2 writes each node’s logs to a separate file. Debugging a cascading failure across sensor_driver -> collision_checker -> motion_planner means manually correlating timestamps across 3+ files.

What this does: Install it with pipx install mcp-ros2-logs, register it with your AI assistant, and ask natural language questions like:

  • “show me all errors with 5 messages of context around each”
  • “compare good_run vs bad_run — what changed?”
  • “detect anomalies in this run”
  • “correlate errors with bag topics — what was happening on /scan when the planner crashed?”

Features:

  • 12 MCP tools: query logs, node summaries, timelines, run comparison, anomaly detection, bag file parsing, log-to-bag topic correlation, live tailing
  • Parses ROS2 bag files (.db3/.mcap) without ROS2 installed — extracts topic metadata for correlation with log errors
  • Statistical anomaly detection: rate spikes, new error patterns, severity escalations, silence gaps, error bursts
  • Supports custom RCUTILS_CONSOLE_OUTPUT_FORMAT
  • Works with Claude Code, VS Code Copilot, Cursor, and any MCP-compatible client
  • No ROS2 installation required — it just reads files from disk

Example workflow: Point the agent at a run where a lidar USB connection dropped. It loads the logs, correlates the errors with bag topic data, and reconstructs the full causal chain: USB timeout → /scan messages stopped → collision_checker failed → motion_planner aborted. The whole analysis takes about 10 seconds.
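Under the hood, the core of such a unified timeline is just parsing each node's log file and sorting by stamp. A simplified sketch (assuming a default-style line format; the real tool handles custom RCUTILS formats, bags, and much more):

```python
import re

# Assumed line shape: [SEVERITY] [unix_stamp] [node_name]: message
LINE = re.compile(r"\[(\w+)\] \[([\d.]+)\] \[([\w/]+)\]: (.*)")

def merged_timeline(log_texts):
    """Merge several nodes' log texts into one stamp-sorted timeline."""
    events = []
    for text in log_texts:
        for line in text.splitlines():
            m = LINE.match(line)
            if m:
                sev, stamp, node, msg = m.groups()
                events.append((float(stamp), node, sev, msg))
    return sorted(events)  # tuples sort by stamp first

# Toy reconstruction of the cascading failure from the example above.
driver = "[ERROR] [100.50] [sensor_driver]: USB timeout\n"
checker = "[WARN] [100.70] [collision_checker]: no /scan data\n"
planner = "[FATAL] [100.90] [motion_planner]: aborting\n"

for stamp, node, sev, msg in merged_timeline([driver, checker, planner]):
    print(f"{stamp:.2f} {node} [{sev}] {msg}")
```

Once the events sit on one timeline, the causal chain across nodes becomes readable at a glance, which is what the MCP tools expose to the agent.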

GitHub: GitHub - spanchal001/mcp-ros2-logs: Give AI agents the ability to debug ROS2 logs across nodes — MCP server, no ROS2 install required
PyPI: pipx install mcp-ros2-logs

Would love feedback from anyone doing multi-node debugging or working with bag files.

1 post - 1 participant

Read full topic

by spanchal001 on March 15, 2026 11:26 PM

Rewire — stream ROS 2 topics to Rerun with zero ROS 2 build dependencies


Hi all,

I’ve been working on Rewire, a standalone bridge that streams live ROS 2 topics to Rerun for real-time visualization. I wanted to share it here and get feedback from the community.

The problem it solves

Setting up visualization tooling in ROS 2 often means pulling in dependencies,
building packages, and dealing with middleware configuration. I wanted something that just works — point it at a DDS/Zenoh network and start visualizing.

How it works

Rewire is a single Rust binary that speaks DDS and Zenoh wire protocols directly. It’s not a ROS 2 node — it doesn’t join the ROS graph or require any ROS 2 installation. It acts as a passive observer.

curl -fsSL https://rewire.run/install.sh | sh
rewire record -a    # subscribe to all topics

What’s supported

  • 53 type mappings across sensor_msgs, geometry_msgs, nav_msgs, tf2_msgs, vision_msgs, std_msgs, and rcl_interfaces — including Image, PointCloud2, LaserScan, TF, Odometry, Detection2D/3DArray, and more.
  • Custom message mappings — map any ROS 2 message type to Rerun archetypes via a JSON5 config file, no recompilation.
  • URDF visualization — loads from /robot_description, resolves meshes via AMENT_PREFIX_PATH.
  • Full TF tree — static + dynamic transforms with coordinate frame visualization
  • Per-topic diagnostics — Hz, bandwidth, drops, and latency rendered as Rerun Scalars.
  • Topic filtering — glob-based include/exclude patterns.

Platforms

Linux (x86_64, aarch64) and macOS (Intel + Apple Silicon).

Install options

  • Install script: curl -fsSL https://rewire.run/install.sh | sh
  • prefix.dev: pixi global install -c rewire rewire
  • APT repository for Debian/Ubuntu

I’d love to hear your thoughts — especially around which message types or workflows you’d want supported next. If you run into issues, feedback is very welcome.

Website: https://rewire.run

6 posts - 3 participants

Read full topic

by alvgaona on March 15, 2026 11:25 PM

A proposal for a LidarScan sensor message

Hello ROS community,

When working with Lidar data, users are usually directed to PointCloud2 messages, which represent the data as a list of 3D points with additional attributes. While this nicely mirrors the PCL representation and fits the majority of applications working with 3D point cloud data, it isn’t how modern Lidar sensors natively represent their data.

Problem Statement

Representing Lidar data this way has several drawbacks, highlighted in the following (non-comprehensive) list:

• With the rapid increase in Lidar resolution, PointCloud2 can be hefty to transport. To this day, many DDS implementations struggle to keep up with the actual sensor frame rate when transporting a high-resolution PointCloud2 on low- to medium-compute nodes.

• An option to reduce the bandwidth requirement would be to use dense point clouds, i.e., to transport only valid points. However, doing so loses the structured nature of Lidar data from devices that natively generate it in a structured 2D grid.

    • Many image processing operations benefit from the adjacency information, allowing quick lookup of neighboring pixels. For example, ground plane removal can be implemented more efficiently directly on 2D range data than on a 3D representation.

    • One could also directly employ existing 2D neural networks like YOLO on Lidar data in its 2D representation.

• A potential critique of this suggestion might be that we don’t need a new message, since sensor_msgs::Image can already fulfill this role for users who need it. In fact, the ouster_ros driver optionally publishes the range data and other byproducts of the sensor as sensor_msgs::Image on separate topics, and I am aware of many users who utilize these topics instead of the 3D point cloud data.

    • While this works fine if you are only interested in processing each channel individually, it breaks down if you need to access and use more than one channel in the same operation, which is often the case.

      • As a simple example, a user may want to filter certain returns (range data) based on reflectivity values and adjacency data simultaneously.

    • A common approach to this problem in ROS would be to use the `ApproximateTime` filter. Doing so, however, adds latency and CPU overhead to synchronize data channels that were originally already synchronized.

    • A LidarScan message acts here as a multi-spectral image whose channels are memory-aligned, with 100% data correlation ensured by the sensor and no software sync overhead.

The proposal

We are proposing the addition of a new ROS sensor message that mirrors the native format of the majority of Lidar sensors (whether spinning or solid-state). With this proposal we would like to invite other Lidar vendors to contribute, to make sure that the format encompasses the entire spectrum of Lidar sensors.

A quick draft of a LidarScan message could look like this:

std_msgs/Header header

# Dimensions of the scan (e.g., 128 channels x 2048 columns)
uint32 height
uint32 width

# --- Geometry Metadata ---
# Horizontal and Vertical FOV/Resolution info to allow projection to 3D 
# without needing a full PointCloud2 blob.
float32 vertical_fov_min
float32 vertical_fov_max
float32 horizontal_fov_min
float32 horizontal_fov_max

# --- Channel Data (The "Image" approach) ---
# Each channel (Range, Intensity, Reflectivity, etc.) is stored in this list.
# This mirrors the 'PointField' logic but at a 2d-grid level.
LidarChannel[] channels

# The actual raw buffer containing all interleaved or planar channel data.
# Using uint8[] allows for Zero-Copy compatibility.
uint8[] data

# --- Scaling and Metrics ---
# Different vendors use different units:
# Ouster (mm) vs. Velodyne (m) vs. Hesai (cm).
# Range = (raw_value * multiplier) + offset
float64 range_multiplier
float64 range_offset

And the definition of LidarChannel:

string name        # "range", "intensity", "reflectivity", "ambient", "near_ir"
uint32 offset      # Offset from start of data row
uint8  datatype    # uint8, uint16, uint32, float32, etc.
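To illustrate how a consumer would pull one channel out of the raw buffer, here is a sketch assuming planar uint16 channels and the draft fields above (datatype handling is simplified to a single case, and the datatype enum value is hypothetical):

```python
import struct

def read_channel(data, width, height, channels, name):
    """Extract one planar uint16 channel from the raw LidarScan buffer.

    `channels` is a list of (name, offset, datatype) tuples mirroring the
    draft LidarChannel fields; this sketch only handles uint16 ("<H").
    """
    offset = next(off for n, off, dt in channels if n == name)
    count = width * height
    return list(struct.unpack_from(f"<{count}H", data, offset))

# Toy 2x2 scan with a planar "range" channel followed by "reflectivity".
width, height = 2, 2
ranges = [1000, 1010, 990, 1005]  # raw range values, scaled by the
refl = [10, 20, 30, 40]           # range_multiplier/range_offset fields
data = struct.pack("<8H", *ranges, *refl)
channels = [("range", 0, 4), ("reflectivity", 8, 4)]  # 4 = uint16 (hypothetical enum)

print(read_channel(data, width, height, channels, "reflectivity"))  # [10, 20, 30, 40]
```

Because both channels live in one buffer with known offsets, a filter can read range and reflectivity for the same pixel without any topic synchronization, which is exactly the advantage argued above.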

While this works for sensors with a uniform distribution of laser beams, not all vendors follow that layout (including Ouster), making the Geometry Metadata section insufficient:

float32 vertical_fov_min
float32 vertical_fov_max
float32 horizontal_fov_min
float32 horizontal_fov_max

The vertical beams of Ouster spinning sensors don’t have a uniform distribution due to the calibration process, which means we need to extend the previous definition to include the beam angles in the LidarScan message body:

# --- Non-Uniform Geometry Metadata ---
# These arrays allow the receiver to project Range -> 3D.
# vertical_angles[height]: The elevation angle for each ring (in radians).
float32[] vertical_angles

# other attributes might also be needed
# horizontal_angles[width]: [optional] The azimuth angle for each column (in radians).
# int32[] beam_time_offset: [optional] To handle "staggered" firing patterns within a single column.

This solves the problem and allows users to project the range data into 3D, but adds overhead by increasing the message size. These arrays essentially define the intrinsics of the Lidar sensor; transporting them with every LidarScan message, however, reduces or eliminates most of the gains attained by transporting raw range data instead of projected xyz points. A better approach would be to split the beam information and the lidar data into two separate messages, where the sensor info is transported only once, earlier during the connection phase. This is not a new pattern in ROS: `sensor_msgs/CameraInfo` already describes the intrinsics of a camera in the same way (see the sensor_msgs/CameraInfo documentation).

By moving these intrinsic fields into a separate message we retain the same gains and keep the LidarScan message lean. The definition of a sensor_msgs::LidarInfo message would be something like:

std_msgs/Header header

float32[] vertical_angles
float32[] horizontal_angles
int32[] beam_time_offsets

# --- Scaling and Metrics ---
float64 range_multiplier
float64 range_offset

# Plus other static factory data (intrinsic/extrinsic)

And the revised LidarScan message becomes:

std_msgs/Header header
uint32 height
uint32 width
LidarChannel[] channels
uint8[] data
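To make the split concrete, here is a rough sketch (not part of the proposal) of how a subscriber could combine a cached LidarInfo with each incoming LidarScan to recover XYZ points. It assumes the `data` buffer holds a single planar uint16 range channel in row-major order, with angles in radians:

```python
import numpy as np

def project_to_xyz(data, height, width, vertical_angles, horizontal_angles,
                   range_multiplier=0.001, range_offset=0.0):
    """Project a raw range image into Cartesian points.

    Assumes `data` is a single planar uint16 "range" channel; the angle
    arrays and scaling fields mirror the draft LidarInfo message above.
    """
    raw = np.frombuffer(data, dtype=np.uint16).reshape(height, width)
    r = raw.astype(np.float64) * range_multiplier + range_offset  # metres
    elev = np.asarray(vertical_angles, dtype=np.float64).reshape(height, 1)
    azim = np.asarray(horizontal_angles, dtype=np.float64).reshape(1, width)
    x = r * np.cos(elev) * np.cos(azim)
    y = r * np.cos(elev) * np.sin(azim)
    z = r * np.sin(elev)
    return np.stack([x, y, z], axis=-1)  # shape (height, width, 3)
```

Because the angle arrays come from LidarInfo, they are read once per connection rather than deserialized with every scan.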

NOTES:

  • This format is more suited for filtering and perception stacks

  • It is important to note that this proposal does not suggest that Lidar vendors or users should stop using PointCloud2. It mainly suggests the addition of a new message type that mirrors the native format of the majority of Lidar sensors, reducing overhead and providing better synchrony.

  • The idea here is to come up with standard sensor_msgs::LidarScan and sensor_msgs::LidarInfo messages, and to fully abstract out the process that converts this native Lidar sensor format into a 3D point cloud for any sensor.

  • Once we get initial feedback from the community, the idea is for Ouster and others interested in this concept to build a PoC of the proposal and make sure we cover all the basic necessities for this to work before committing to the final interface.

  • I am also aware of the other proposals around Native Buffers (rcl::Buffer) that are already in flight, and we plan to support this from the get-go, as there is a large intersection between the motivation behind Native Buffers and the use of LidarScan for perception-type tasks and other workloads.

14 posts - 5 participants

Read full topic

by Samahu on March 15, 2026 06:35 PM

March 12, 2026
Call for Proposals: Global ROSCon 2026 in Toronto

ROSCon Global 2026 Call for Proposals Now Open!

The ROSCon call for proposals is now open! You can find full proposal details on the ROSCon 2026 website.

ROSCon Global 2026 will be held in Toronto, Canada, from September 22nd to September 24th, 2026. This year, we are officially adopting the “Global” moniker to reflect our growing international community and the many regional ROSCons happening worldwide.

Submission Deadlines

  • Workshops: Due by Sun, Apr 5, 2026 12:00 AM UTC. Submit via Google Form
  • Talk Proposals: Due by Sun, Apr 26, 2026 12:00 AM UTC. Submit via HotCRP
  • Birds of a Feather (BoF): Due by Fri, Jul 24, 2026 12:00 AM UTC, submissions opening soon.

Important Dates

  • Diversity Scholarship Deadline: Sun, Mar 22, 2026 12:00 AM UTC. Submit here
  • Workshop Acceptance Notification: Tue, May 12, 2026 12:00 AM UTC
  • Ticket Sales Begin: Mon, May 11, 2026 12:00 AM UTC
  • Presentation Acceptance Notification: Tue, Jun 9, 2026 12:00 AM UTC

Diversity Scholarship Program

If you require financial assistance to attend ROSCon Global and meet the qualifications, please apply for our Diversity Scholarship Program. Thanks to our sponsors, scholarships include complimentary registration, four nights of hotel accommodation, and a travel stipend.

The deadline for the scholarship is Sun, Mar 22, 2026 12:00 AM UTC, which is well before the CFP deadlines to allow for travel planning and visa processing.

What are we looking for?

The core of ROSCon is community-contributed content. We are looking for:

  • Workshops: Participatory, interactive experiences (Day 1).
  • Presentations: Technical talks (10-30 minutes) on new tools, libraries, or novel applications.
  • Birds of a Feather: Self-organized meetings for specific interest groups (e.g., medical robotics, space, or debugging).

We want to see your robots! Whether it is maritime robots, lunar landers, or industrial factory fleets, we want to hear the technical lessons you learned. We encourage original content, high-impact ideas, and, as always, a focus on open-source availability.

How to Prepare

If you are new to ROSCon we recommend reviewing the archive of previous talks. You are also welcome to use this Discourse thread to workshop your ideas and find collaborators.

Questions and concerns can be directed to the ROSCon Executive Committee (roscon-2026-ec@openrobotics.org) or posted in this thread. We look forward to seeing the community in Toronto!

5 posts - 4 participants

Read full topic

by Katherine_Scott on March 12, 2026 01:49 PM

March 11, 2026
PlotJuggler Bridge Released

I’m happy to introduce PlotJuggler Bridge, a lightweight server that exposes ROS 2 or DDS topics over WebSocket, allowing remote tools like PlotJuggler to access telemetry without directly participating in the middleware network.

In many robotics setups, accessing telemetry from another computer is harder than it should be. DDS discovery over WiFi can be unreliable, opening DDS networks outside the robot can create configuration issues, and installing a full ROS 2 environment on every machine used for debugging is often inconvenient.

PlotJuggler Bridge solves this by acting as a gateway between the middleware network and external clients.
It runs close to the robot, reads the topic data, and exposes it through a simple WebSocket endpoint that any client can connect to.

This approach keeps the ROS/DDS network local while making telemetry easily accessible from other machines.

The project is available here:


Why it is useful

This is especially helpful in scenarios such as:

  • monitoring a robot remotely over WiFi
  • accessing telemetry from Windows or macOS machines without ROS installed
  • avoiding DDS discovery and networking configuration issues
  • debugging systems without exposing the full middleware network
  • connecting tools without needing message definitions compiled locally

Because the bridge performs runtime schema discovery, clients can access topics even if they use custom ROS messages, without requiring those message packages to be installed on the client machine.

The bridge also aggregates and optionally compresses data, which helps reduce bandwidth usage and improves stability when streaming telemetry over wireless networks.


Main features

PlotJuggler Bridge includes several features designed for real-world robotics workflows:

  • WebSocket access through a single endpoint
  • automatic runtime discovery of topic schemas
  • support for custom ROS message types without client-side compilation
  • aggregation of messages for efficient streaming
  • optional ZSTD compression
  • support for multiple simultaneous clients
  • bandwidth-friendly handling of large messages by stripping large array fields while preserving useful metadata

How it works

ROS 2 / DDS -> PlotJuggler Bridge -> WebSocket -> PlotJuggler

The bridge subscribes to topics in the ROS/DDS network and exposes them through a WebSocket server.
External tools can connect and receive the streamed telemetry without joining the middleware network.


Quick to start

The bridge can typically be up and running in less than 5 minutes.

Setup instructions are available in the repository README:

You will need PlotJuggler 3.16 or newer, which includes the WebSocket client plugin:


Basic usage

Once the bridge is running, the workflow is straightforward:

  1. Start the bridge on the machine connected to the ROS/DDS network.
  2. Open PlotJuggler on any computer.
  3. Connect to the WebSocket Client using the bridge address.

The available topics will be discovered automatically and can be inspected immediately.


About the work

My name is Álvaro Valencia, and I am currently working on PlotJuggler as an intern while finishing the last months of my Robotics Software Engineering degree.

I collaborate closely with @facontidavide on this project. PlotJuggler clearly reflects years of work, effort and passion, and contributing to it is a great experience.

Together we are developing the components required to make this new Robot → PlotJuggler connection workflow simple and practical to use. The goal is to make remote telemetry access easier while keeping the system flexible for future extensions that will appear in upcoming PlotJuggler developments.


And stay tuned… more interesting things are coming soon for PlotJuggler.

18 posts - 6 participants

Read full topic

by AlvaroVM on March 11, 2026 01:24 PM

Control NERO’s 7-DoF Effortlessly with MoveIt 2 (Part I)

Let’s Explore Nero – Moveit2 Edition (Part I)

As a next-generation robot operating system, ROS2 provides powerful support for the intelligent and modular development of robotic arms. As the core motion planning framework in the ROS2 ecosystem, MoveIt2 not only inherits the mature functions of MoveIt but also achieves significant improvements in real-time performance, scalability, and industrial applicability.

Taking a 7-DoF robotic arm as an example, this document provides step-by-step instructions for configuring and generating a complete MoveIt2 package from a URDF model using the MoveIt Setup Assistant, enabling motion planning and visual control. This guide offers a clear, practical workflow for both beginners and developers looking to quickly integrate models into MoveIt2.

Abstract

Exporting MoveIt Package from URDF

Tags

ROS2, moveit2, Robotic Arm, nero

Repository

Environment

OS: Ubuntu 22.04
ROS Distro: Humble

Introduction to MoveIt2

MoveIt2 is the next-generation robotic arm motion planning and control framework developed based on the ROS2 architecture. It can be understood as a comprehensive upgrade of MoveIt in the ROS2 ecosystem. Inheriting the core capabilities of MoveIt, it has made significant improvements in real-time performance, modularity, and industrial application scenarios.

The main problems solved by MoveIt2 include:

  • Robotic Arm Motion Planning
  • Collision Checking
  • Inverse Kinematics (IK)
  • Trajectory Generation and Execution
  • RViz Visualization and Interaction

Installing MoveIt2

You can directly install using binary packages; use the following commands to install all components related to moveit:

sudo apt install "ros-humble-moveit*"

Downloading the URDF File

First, create a new workspace and download the URDF model:

mkdir -p ~/nero_ws/src
cd ~/nero_ws/src
git clone https://github.com/agilexrobotics/piper_ros.git -b humble_beta1
cd ..
colcon build 

After successful compilation, use the following command to view the model in rviz:

cd ~/nero_ws
source install/setup.bash
ros2 launch nero_description display_urdf.launch.py

Exporting the MoveIt Package Using Setup Assistant

Launch the moveit_setup_assistant:

ros2 launch moveit_setup_assistant setup_assistant.launch.py

Select Create New Moveit Configuration Package to create a new MoveIt package, then load the robotic arm.

Calculate the collision model; for a single arm, use the default parameters.

Skip selecting virtual joints and proceed to define planning groups. Here, we need to create two planning groups: the arm planning group and the gripper planning group. First, create the arm planning group; set Group Name to arm, use KDL for the kinematics solver, and select RRTstar for OMPL Planning.

Setting the Kinematic Chain

Add the control joints for the planning group, select joint1~joint7, click >, then save.

Planning group creation completed.

Setting the Robot Pose; you can pre-set some actions for the planning group here.

Skip End Effectors and Passive Joints, and add interfaces in the URDF.

Setting the controller, here we use position_controllers.

Simulation will generate a URDF file for use in Gazebo, which includes physical properties such as joint motor attributes.

After configuration, fill in your name and email.

Set the package name, then click Generate Package to output the function package.

Launching the MoveIt Package

cd ~/nero_ws
source install/setup.bash
ros2 launch nero_moveit2_config demo.launch.py

After successful launch, you can drag the marker to preset the arm position, then click Plan & Execute to control the robotic arm movement.

1 post - 1 participant

Read full topic

by Agilex_Robotics on March 11, 2026 08:15 AM

March 10, 2026
ros2_info — A fastfetch-like system info tool for ROS2

Hey Open Robotics folks :waving_hand:

So I built a small tool called ros2_info.
The idea was simple: what if fastfetch, but for your entire ROS2 environment?

One command → instant snapshot of everything happening in your ROS2 setup.

What it shows:
• ROS2 distro + whether it’s LTS or nearing EOL
• Live nodes, topics, services, and actions
• Auto-detects which DDS middleware you’re running
• All detected colcon workspaces + their build status
• Installed ROS2 packages grouped by category
• System stats (CPU, RAM, Disk)
• Pending ROS2-related apt updates
• A small web dashboard at localhost:8099

Basically the stuff I kept checking with 10 different commands… now in one place :sweat_smile:

Works across ROS2 distros: Foxy → Humble → Iron → Jazzy → Rolling

GitHub:
https://github.com/zang7777/ros2_info

Install

cd ~/ros2_ws/src
git clone https://github.com/zang7777/ros2_info.git
cd ~/ros2_ws && colcon build --symlink-install
source install/setup.bash
# interactive mode (recommended)
ros2 run ros2_info ros2_info --interactive
# or just
ros2 run ros2_info ros2_info

Always fun building little dev tools for the ecosystem :robot:

~"Created by roboticists, for roboticists "

3 posts - 2 participants

Read full topic

by zang7777 on March 10, 2026 04:36 PM

March 09, 2026
ROS 2 Rust Meeting: March 2026

The next ROS 2 Rust Meeting will be Mon, Mar 9, 2026 2:00 PM UTC

The meeting room will be at https://meet.google.com/rxr-pvcv-hmu

In the unlikely event that the room needs to change, we will update this thread with the new info!

2 posts - 1 participant

Read full topic

by maspe36 on March 09, 2026 01:16 PM

A Day at ROSCon Japan 2025 – What It’s Like to Attend as a Robotics Engineer

Hi everyone,

I recently had the chance to attend ROSCon Japan 2025, and it was an amazing experience meeting people from the ROS community, seeing robotics demos, and learning about the latest developments in ROS.

I made a short vlog to capture the atmosphere of the event. In the video, I shared some highlights including:

  • The overall environment and venue of ROSCon Japan

  • Robotics demos and technology showcased by different companies

  • Booths and exhibitions from robotics organizations

  • Moments from the talks and presentations

It was inspiring to see how the ROS ecosystem continues to grow and how many interesting robotics applications are being developed.

If you couldn’t attend the event or are curious about what ROSCon JP looks like, feel free to check out the video.

YouTube:

A Day at ROSCon JP 2025 | Robotics Engineer Vlog

Hope you enjoy it!

2 posts - 2 participants

Read full topic

by chanun3571 on March 09, 2026 03:14 AM

March 06, 2026
LSEP: Open protocol for standardized robot-to-human state communication (light + sound + motion)

Hello ROS community,

I’d like to introduce LSEP (Light Signal Expression Protocol) — an open standard I’ve been developing for how robots communicate their internal state to nearby humans using coordinated light signals, sound, and motion cues.

The problem LSEP solves:

Every robot manufacturer currently invents their own LED patterns and sound cues. There’s no shared vocabulary. A blinking blue light could mean “charging” on one platform and “human detected” on another. With the EU AI Act (Art. 50) now requiring transparency for human-facing AI systems, the industry needs a standardized approach.

What LSEP defines:

- 6 core states: IDLE, AWARENESS, INTENT, CARE, CRITICAL, THREAT

- 3 extended states: MED_CONF, LOW_CONF, INTEGRITY (for sensor uncertainty and self-diagnostics)

- Each state maps to specific light color + pulse pattern, optional sound, and motion modifier

- State transitions driven by Time-to-Contact (TTC) physics, not heuristics

- 1.5m proximity floor: any human within 1.5m triggers minimum AWARENESS
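As a sketch of the TTC-driven selection described above — only the 1.5 m proximity floor comes from the protocol description; the TTC thresholds and the two-argument interface are illustrative assumptions, not values from the spec:

```python
def lsep_state(distance_m: float, closing_speed_mps: float) -> str:
    """Illustrative LSEP state selection driven by Time-to-Contact.

    Only the 1.5 m proximity floor is taken from the protocol text;
    the TTC thresholds below are made-up placeholders.
    """
    # Proximity floor: any human within 1.5 m means at least AWARENESS.
    base = "AWARENESS" if distance_m <= 1.5 else "IDLE"
    if closing_speed_mps <= 0.0:
        return base  # not approaching: no TTC escalation
    ttc = distance_m / closing_speed_mps  # seconds to contact
    if ttc < 1.0:
        return "CRITICAL"
    if ttc < 3.0:
        return "INTENT"
    return base
```

A real integration would presumably publish the selected state (e.g. on a `/lsep_state` topic, as the author suggests below) rather than return a string.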

Technical details:

- RFC style specification (v2.0)

- Machine readable JSON signal definitions

- Unity prototype (HDRP) with 74 tests, including sensor noise simulation and tracking dropouts

- MIT licensed — use it however you want

Why I’m posting here:

ROS is where robot software gets built. If LSEP is going to be useful, it needs to work in your stacks — as a ROS node, a topic publisher, or a behavior tree integration. I’m looking for:

1. Feedback on the state model — Do 9 states cover the scenarios you encounter? What’s missing?

2. Integration ideas — How would you want to consume LSEP in a ROS 2 pipeline? As a `/lsep_state` topic? A lifecycle node?

3. Real-world edge cases — What breaks first when you imagine deploying this on your robot?

Links:

- Specification + demo: [lsep.org](https://lsep.org)

- GitHub: https://github.com/NemanjaGalic/LSEP — Open protocol for standardized human-robot communication: 9 states, 3 modalities, 1 grammar. Physics-based. EU AI Act ready.

Happy to answer questions and discuss. The goal is to make this the “USB-C of robot communication” — one standard, every platform.

7 posts - 3 participants

Read full topic

by NemanjaGalic on March 06, 2026 07:13 PM

March 05, 2026
Rover + LiDAR perception inside a Forest3D-generated world (Gazebo Harmonic)

Rover + LiDAR inside a Forest3D-generated world (Gazebo Harmonic)

A quick demonstration of spawning a robot and running LiDAR perception inside a Forest3D-generated environment with realistic visuals, making it a solid base for mapping and navigation tasks.

:play_button: Watch on YouTube

Performance can be improved by tuning the mesh decimation level depending on your use case.

Current work: integrating terramechanics for more realistic rover-terrain interaction.

Forest3D supports a variety of environments beyond forests, including lunar and other unstructured terrains. Feel free to reach out!

1 post - 1 participant

Read full topic

by khalidbourr on March 05, 2026 11:42 PM

Part 2: Canonical Observability Stack Tryout | Cloud Robotics WG Meeting 2026-03-09

Please come and join us for this coming meeting from Mon, Mar 9, 2026 4:00 PM UTC to Mon, Mar 9, 2026 5:00 PM UTC, where we plan to continue deploying an example Canonical Observability Stack (COS) instance based on information from the tutorials and documentation. This session will pick up where the last session left off: an AWS instance hosting the COS server side, and a VirtualBox VM hosting the robot side.

Last session, we started working through the documentation for setting up both a COS server instance and a robot instance. Unfortunately, the recording cut out shortly into the meeting due to lack of disk space. After this point, we switched to hosting in AWS and were able to host a COS instance, although it was misconfigured and the robot was unable to connect. If you’re interested to watch the recorded part of the meeting, it is available on YouTube.

The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

1 post - 1 participant

Read full topic

by mikelikesrobots on March 05, 2026 01:30 PM

Pixhawk - ardusub setup ( Roll hold)

Hi all,

We’re building an ROV using Pixhawk (ArduSub) with a Raspberry Pi companion computer (ROS2 + MAVROS). The vehicle needs to attach and operate along vertical surfaces, so maintaining controlled roll while maneuvering is a core requirement.

Stack

  • Pixhawk running ArduSub

  • Companion computer: Raspberry Pi (ROS2 + MAVROS)

  • Joystick control

  • No external XY positioning (no DVL / external localization)

Goal

We want joystick-based control similar to POSHOLD stability, but still allow roll control so the vehicle can move along the surface while attached.

Thanks in advance — happy to share more details about the vehicle config if helpful.

1 post - 1 participant

Read full topic

by Ferbin_FJ on March 05, 2026 04:47 AM

March 03, 2026
Is there a working group for maintaining ROS 2-based robots in industry? 🤖

Hi everyone,

We’re curious — does a dedicated working group (or similar community) already exist for maintaining and operating ROS 2-based robots in industrial environments? If not, maybe it’s time to build one.

At Siemens, our ROS 2 efforts are focused on four key challenges:

  • :rocket: Shipping — How do you bring ROS 2-based systems into real industrial deployments?
  • :bug: Debugging — How do you quickly find bugs when your machine runs not just ROS 2, but also PLCs, HMIs, network switches, safety sensors, and more?
  • :counterclockwise_arrows_button: Updates — How do you keep your software reliably up to date?
  • :magnifying_glass_tilted_left: Fleet health — How do you detect critical bugs locally and across your entire fleet?

We’d love to connect with the community and learn what’s already out there! :globe_showing_europe_africa:

We’re actively looking to engage with others working in this space — whether you’re building solutions, facing the same challenges, or have already found answers we haven’t discovered yet.

Here are some data points we’ve gathered so far:

Exciting tools that just dropped :hammer_and_wrench:

The community has been busy! A few noteworthy new tools:

Big shoutout to @doisyg for sharing impressive insights on how they manage upgrades across a large fleet of robots in the field! :clap:
And I am sure there is a vast range of further open-source tools out there that can help all of us.

What Siemens has shared so far (all talks in English)

We’ve been open about our own challenges and learnings:

:speech_balloon: Our concrete question to you:

Would you be interested in joining a regular working group to discuss these topics and align our open-source efforts?

Vote below — even a single click tells us a lot! :backhand_index_pointing_down:

  • ← click me, if you are interested.

Click to view the poll.

Let’s build in the open — together! :handshake:

We’re strong believers in open collaboration. Whether you’re a researcher, developer, or industry practitioner — let’s align our efforts and avoid reinventing the wheel.

A few things we’d especially love to hear about:

  • :megaphone: Open EU calls related to these topics — always happy to explore funding opportunities and collaborative projects
  • :graduation_cap: Bachelor & Master thesis requests from EU students — if you’re looking for a meaningful, real-world topic in this space, reach out! We’d love to support the next generation of robotics engineers

Cheers from Germany :clinking_beer_mugs:
Florian


Update as of Tue, Mar 3, 2026 11:00 PM UTC

Let’s try to ballpark when a potential virtual meeting could happen:
(Please also vote if the day does not fit; right now I am more interested in finding the right time of day)

  • Tue, Mar 10, 2026 7:30 AM UTC
  • Tue, Mar 10, 2026 10:00 AM UTC
  • Tue, Mar 10, 2026 1:00 PM UTC
  • Tue, Mar 10, 2026 4:00 PM UTC

Click to view the poll.

27 posts - 11 participants

Read full topic

by flo on March 03, 2026 06:29 PM

ROS Meetup Medellín Colombia - 29-30 Apr 2026

We are pleased to officially announce ROS Meetup Medellín 2026, a space designed to bring together the robotics, ROS, and autonomous systems community in Colombia.

:round_pushpin: April 29 – Universidad EIA (Poster Session)
:round_pushpin: April 30 – Parque Explora (Talk Session)

Medellín, recognized for its strong innovation and technology ecosystem, will be the perfect setting to connect academia, industry, and the open-source community around ROS and robotics.

:microphone: Call for Speakers open
:framed_picture: Call for Posters open
:busts_in_silhouette: Attendee registration available

If you are developing ROS-based projects, conducting robotics research, or building AI-driven and autonomous systems solutions, we invite you to share your work and actively participate in the event.

Find all the information and registration links here:
:link: https://linktr.ee/IEEE_RAS_Colombia

We look forward to having you join us in this edition and to continue strengthening the ROS community in Colombia.

See you in Medellín :robot:

1 post - 1 participant

Read full topic

by miguelgonrod on March 03, 2026 05:27 PM

MAHE Mobility Challenge 2026 (MIT Bengaluru)

Hello ROS Community,

MAHE Mobility Challenge 2026 is a national-level hybrid hackathon hosted by CEAM and the Department of ECE at Manipal Institute of Technology (MIT), Bengaluru.

This challenge is designed for B.Tech students passionate about autonomous and connected mobility systems, offering an opportunity to ideate, design, and build real working prototypes addressing next-generation mobility challenges.

Total Prize Pool: ₹3 Lakhs


Challenge Tracks

• AI in Mobility
Intelligent perception systems, predictive modeling, adaptive routing, autonomy stacks

• Robotics & Control
Embedded systems, actuator integration, simulation workflows, control architecture design

• Cybersecurity for Mobility
Secure V2X communication, threat modeling, safety-focused system hardening for connected vehicles


Format & Timeline

  • Registrations Close: 15 March 2026

  • Round 1: Online technical proposal submission (Deadline: 31 March 2026)

  • Round 2: Offline prototype demonstration at MIT Bengaluru (17 April 2026)

Shortlisted teams will build and demonstrate working prototypes during the final round.

Participants are encouraged to leverage open-source robotics frameworks (ROS), simulation environments, and modular autonomy architectures where relevant.

We welcome engagement from students and robotics enthusiasts interested in contributing to secure and intelligent mobility systems.

Further details and registration:
https://mahemobility.mitblr.org/

Looking forward to participation and discussion from the ROS community.

2 posts - 2 participants

Read full topic

by Achyuth on March 03, 2026 04:25 PM

SIPA: Quantifying Physical Integrity and the Sim-to-Real Gap in 7-DoF Trajectories

Introduction:

SIPA (Spatial Intelligence Physical Audit) is a trajectory-level physical consistency diagnostic. It does not require source code access or internal simulator states and directly audits 7-DoF CSV trajectories. By design, SIPA is compatible with any system that produces spatial motion data. Its principle is based on the Non-Associative Residual Hypothesis (NARH).

1. What SIPA Can Audit

SIPA operates on the final motion output, enabling post-hoc physical forensics for:

  • Physics Simulators: NVIDIA Isaac Sim, MuJoCo, PyBullet, Gazebo.

  • Neural World Models: World Labs Marble, OpenAI Sora, Runway Gen-3 (via pose extraction).

  • Robotic Foundation Models: Any system outputting 7-DoF trajectories.

  • Real-World Capture: OptiTrack, Vicon, or SLAM-based motion sequences.

Supported Data Pathways:

  • Tier 1 — Native Spatial Intelligence (Recommended): High-fidelity data from Isaac Sim, MuJoCo, or Robot Telemetry.

  • Tier 2 — Structured World Generators: Emerging models like World Labs Marble, where 3D states are programmable and exportable.

  • Tier 3 — Pixel Video Models (Experimental): Pure video generators (Sora, Kling). This requires an additional pose-lifting step (Video → Pose → SIPA) and is currently research-grade due to vision uncertainty.

2. The Logic: Non-Associative Residual Hypothesis (NARH)

NARH posits that physical inconsistency stems from discrete solver ordering rather than just algebraic error.

(1) Setting

Consider a rigid-body simulation system defined by:

  • State space S \subset \mathbb{R}^n

  • Associative update operator \Phi \Delta t : S \to S

  • Parallel constraint resolution composed of sub-operators \{\Psi_i\}_{i=1}^k

The simulator implements a discrete update:

s_{t+1} = \Psi_{\sigma(k)} \circ \cdots \circ \Psi_{\sigma(1)} (s_t)

where \sigma is an execution order induced by:

  • constraint partitioning

  • thread scheduling

  • contact batching

  • solver splitting

Each \Psi_i is individually well-defined, but their composition order may vary.

(2) Order Sensitivity

Although each operator \Psi_i belongs to an associative algebra (e.g., matrix multiplication, quaternion composition), the composition of numerically approximated operators may satisfy:

(\Psi_a \circ \Psi_b) \circ \Psi_c \neq \Psi_a \circ (\Psi_b \circ \Psi_c)

due to:

  • finite precision arithmetic

  • projection steps

  • iterative convergence truncation

  • asynchronous execution

Define the discrete associator:

A(a,b,c;s) = \bigl( (\Psi_a \circ \Psi_b) \circ \Psi_c \bigr)(s) - \bigl( \Psi_a \circ (\Psi_b \circ \Psi_c) \bigr)(s)

(3) Definition: Non-Associative Residual

We define the Non-Associative Residual (NAR) at state s_t as:

R_t = \lVert A(a,b,c; s_t) \rVert

for a chosen triple of sub-operators representative of contact or constraint updates.

This residual measures path-dependence induced by discrete solver ordering, not algebraic non-associativity of the state representation.
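A toy illustration of this residual (an assumption-laden sketch, not SIPA's implementation): model three sub-operators as float32 linear updates and compare the two composition orders on the same state. In exact arithmetic the associator is zero; in finite precision it typically is not.

```python
import numpy as np

rng = np.random.default_rng(0)

# Three sub-operators Psi_a, Psi_b, Psi_c modelled as float32 linear updates.
A, B, C = (rng.standard_normal((3, 3)).astype(np.float32) for _ in range(3))
s = rng.standard_normal(3).astype(np.float32)  # state s_t

# ((Psi_a ∘ Psi_b) ∘ Psi_c)(s) versus (Psi_a ∘ (Psi_b ∘ Psi_c))(s)
left = ((A @ B) @ C) @ s
right = (A @ (B @ C)) @ s

# Non-Associative Residual R_t at s_t: zero in real arithmetic,
# generally a small nonzero value under float32 rounding.
R_t = float(np.linalg.norm(left - right))
print(R_t)
```

NARH's claim, in these terms, is that in contact-rich regimes such residuals accumulate into a structured drift rather than averaging out.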

(4) Hypothesis (NARH)

In high-interaction-density regimes (e.g., contact-rich robotics, high-speed manipulation), the Non-Associative Residual R_t becomes non-negligible relative to scalar stability metrics, and accumulates over time as a structured drift term.

Formally, there exists a regime such that:

\sum_{t=0}^{T} R_t \not\approx 0

even when:

\Vert s_{t+1} - s_t \Vert remains bounded.

(5) Interpretation

This hypothesis does not claim:

  • that simulators are mathematically invalid,

  • that associative algebras are incorrect,

  • or that hardware tiling causes topological inconsistency.

Instead, it asserts:

Discrete parallel constraint resolution introduces a measurable order-dependent residual that is not explicitly encoded in the state space.

This residual may contribute to:

  • sim-to-real divergence,

  • policy brittleness,

  • instability under reordering of equivalent control inputs.

(6) Falsifiability

NARH is falsified if:

  1. R_t remains within numerical noise across interaction densities.

  2. Reordering constraint application yields statistically indistinguishable trajectories.

  3. Scalar metrics (e.g., kinetic energy norm, velocity norm) detect instability earlier or equally compared to any associator-derived signal.

(7) Research Implication

If validated, NARH suggests that:

  • Order sensitivity is a structural property of discrete solvers.

  • Additional diagnostic signals (e.g., associator magnitude) may serve as early-warning indicators.

  • Embodied AI training in simulation may implicitly depend on hidden order-stability assumptions.

If invalidated, the experiment establishes an empirically order-invariant regime — a valuable boundary characterization of solver behavior.

3. Physical Integrity Rating (PIR)

SIPA introduces the Physical Integrity Rating (PIR), a heuristic composite indicator designed to quantify the causal reliability of motion trajectories. PIR evaluates whether a world model is “physically solvent” or accumulating “kinetic debt.”

The Metric

PIR = Q_{\text{data}} \times (1 - D_{\text{phys}})

  • Q_{\text{data}} (Data Quality): Measures input integrity (SNR, normalization, temporal jitter).

  • D_{\text{phys}} (Physical Debt): Log-normalized residual derived from the Octonion Associator, testing the NARH limits.

  • PIR \in [0, 1]: Higher indicates higher physical fidelity.

:bar_chart: Credit Rating Scale

PIR Score | Rating | Label          | Operational Meaning
≥ 0.85    | A      | High Integrity | Reliable for industrial simulation and safety-critical AI.
≥ 0.70    | B      | Acceptable     | Generally consistent; minor numerical drift detected.
≥ 0.50    | C      | Speculative    | Visual plausibility maintained, but causal logic is shaky.
≥ 0.30    | D      | High Risk      | Elevated physical debt; prone to “hallucinations” under stress.
< 0.30    | F      | Critical       | Physical bankruptcy; trajectory violates fundamental causality.
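A minimal sketch of the metric and scale, assuming Q_data and D_phys are already normalized to [0, 1] (the repository's actual normalization may differ):

```python
def pir(q_data: float, d_phys: float) -> float:
    """PIR = Q_data * (1 - D_phys); both inputs assumed in [0, 1]."""
    return q_data * (1.0 - d_phys)

def rating(score: float) -> str:
    """Map a PIR score onto the credit-rating scale above."""
    for threshold, label in [(0.85, "A"), (0.70, "B"),
                             (0.50, "C"), (0.30, "D")]:
        if score >= threshold:
            return label
    return "F"

print(rating(pir(0.95, 0.10)))  # high quality, low debt  -> "A"
print(rating(pir(0.80, 0.60)))  # heavy physical debt     -> "D"
```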

Note on Early Adoption: Since its release, we’ve observed a striking anomaly: 120 institutional entities have cloned the repo via CLI with near-zero web UI traffic. This suggests that industry teams (Sim-to-Real engineers and technical due-diligence leads) may already be using NARH for internal audits. View Traffic Evidence

Call to Action

We invite the ROS community to stress-test their simulators and world models using SIPA. Any questions can be discussed under this topic!

GitHub Repository: https://github.com/ZC502/SIPA.git

1 post - 1 participant

Read full topic

by zc_Liu on March 03, 2026 04:25 PM

