October 07, 2025
ROSCon 2025 Exhibit Map Now Available

Hi Everyone,

We just posted the final exhibitor map for ROSCon 2025 in Singapore. An interactive version of the map is also available here.

Our sponsors and exhibitors help make ROSCon possible, and many of them have fantastic live demonstrations planned for the event! I’ve included a full list of sponsors and exhibitors below so you can start building your itinerary for ROSCon.

:1st_place_medal: Gold Sponsors

:2nd_place_medal: Silver Sponsors

:3rd_place_medal: Bronze Sponsors

:seedling: Startup Alley Sponsors

1 post - 1 participant

Read full topic

by Katherine_Scott on October 07, 2025 07:20 PM

What's your stack for debugging robots in the wild?

Hi there :waving_hand: ROS community,

My colleague and I are mapping out best practices for managing deployed robot fleets, and we’d love to learn from your real-world experience.

As robots move from the lab into the wild, the process for debugging and resolving issues gets complicated. We’re trying to move past our own ad-hoc methods and are curious about how your teams handle the entire lifecycle of an incident.

Specifically, we’re focused on these four areas:

  1. Incident & Resolution Tracking
    When a novel issue is solved, how do you capture that hard-won knowledge for the rest of the team? We’re curious about your process for creating a durable record of the diagnostic path and the fix, so the next engineer doesn’t have to solve the same problem from scratch six months from now.

  2. Hardware & Software Context
    How do you correlate a failure with the specific context of the robot? We’ve found it’s often crucial to know the exact firmware of a sensor, the driver version, the OS patch level, or even the manufacturing batch of a component. How do you capture and surface this data during an investigation?

  3. Remote vs. On-Site Debugging
    What is your decision tree for debugging? How much can you solve remotely with the data you have? What are the specific triggers that force you to accept defeat and send a person on-site? What’s the one piece of data you wish you had to avoid that trip?

  4. Fleet-Wide Failure Analysis
    How do you identify systemic issues across your fleet? For example, discovering that a specific component fails more often under certain circumstances. What does your data analysis pipeline look like for finding these patterns—the “what, why, when, and where” of recurring failures?

We’re hoping to get a good public discussion going in this thread about the tools and workflows you’re using today, whether that’s custom scripts, Telegraf, Prometheus, Grafana dashboards, or something else.

On a separate note, this problem space is our team’s entire focus at INSAION. If you’re wrestling with these challenges daily and find the current tooling inadequate, we’d be very interested to hear your perspective. Please feel free to send me a DM for an honest, engineer-to-engineer conversation.

Keep your robots healthy and running!

Sergi from INSAION

1 post - 1 participant

Read full topic

by insaion on October 07, 2025 04:06 PM

October 06, 2025
Invite: How to Make the Most of ROSCon (for Women in Robotics)

WomeninRobotics.org members are encouraged to check the Slack group for an invitation to the following:

Ahead of ROSCon 2025, we’re hosting a prep session, “How to Make the Most of ROSCon, as a Speaker, Regular, or First-timer”, on Wednesday Oct 8th / Thursday Oct 9th, depending on your timezone.

This will be a structured, facilitated session for anyone attending ROSCon (or still considering it!).

Know someone who’d be interested? Given the relatively small intersection of the robotics community, your help reaching interested attendees would be very appreciated! :folded_hands:

Till soon,
Deanna (2024 keynote)

1 post - 1 participant

Read full topic

by dhood on October 06, 2025 07:27 AM

October 05, 2025
ANNOUNCEMENT: October 9 7:00pm: Boston Robot Hackers Meetup

REGISTER: Eventbrite Link

Greetings! I am excited to announce the next meeting of the Boston Robot Hackers!

Date: Thursday October 9 at 7:00pm
Location: Artisans Asylum, 96 Holton Street, Boston (Allston)

REGISTER: Eventbrite Link

We’re excited that this month’s talk is by David Dorf on the topic “Affordable Biomimetic Robot Hands”. David will discuss new ways of building robot end-effectors (hands), sharing novel methods that combine 3D-printed flexible materials, biomimetic design, and interfacing with ROS 2.

1 post - 1 participant

Read full topic

by pitosalas on October 05, 2025 07:13 AM

October 03, 2025
Can we build MicroROS on ESP32 with Zephyr RTOS?

Came across a repo: GitHub - micro-ROS/micro_ros_setup: Support macros for building micro-ROS-based firmware.
In the table under the micro-ROS module configuration, it’s stated that USB and UART support is not yet done for ESP32. So I was wondering if I could build micro-ROS on ESP32 via Zephyr RTOS.
When I was learning Zephyr RTOS I built and flashed the MCUs. But maybe the micro-ROS setup does not support it yet?

1 post - 1 participant

Read full topic

by SUSMITA_PR on October 03, 2025 05:15 PM

October 02, 2025
Videos from ROSCon UK 2025 in Edinburgh 🇬🇧

Hi Everyone,

The entire program from our inaugural ROSCon UK in Edinburgh is now available :sparkles: ad free :sparkles: on the OSRF Vimeo account. You can find the full conference website here.


1 post - 1 participant

Read full topic

by Katherine_Scott on October 02, 2025 07:13 PM

DroidCam in ROS2

Hi to everyone! I’ve recently published a ROS 2 package for DroidCam to make it easy to use your Android/iPhone as a webcam in ROS 2.

1 post - 1 participant

Read full topic

by vdovetzi on October 02, 2025 04:13 PM

ROS2 URDF language reference?

The ROS1 wiki includes a complete reference for the URDF language. The ROS2 documentation contains a series of URDF tutorials but, as far as I can see, no equivalent language reference. Is the ROS1 wiki still the authoritative reference for URDF? If not, where can I find the latest reference?

1 post - 1 participant

Read full topic

by xperroni on October 02, 2025 01:36 PM

Simple composable and lifecycle node creation - Turtle Nest 1.2.0 update

When developing with ROS 2, I often have to create new nodes that are composable or lifecycle nodes. Setting them up from scratch can be surprisingly tedious, which is why I added a feature to Turtle Nest that allows you to create these nodes with a single click.

Even the CMakeLists.txt and other setup files are automatically updated, so you can run the template node immediately after creating it.
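
For context, the sketch below shows roughly the kind of lifecycle-node boilerplate such a template covers, written here in rclpy. It is purely illustrative rather than Turtle Nest’s actual generated code, and the class and node names are made up.

# Illustrative rclpy lifecycle-node skeleton; not Turtle Nest's generated output.
import rclpy
from rclpy.lifecycle import Node, State, TransitionCallbackReturn


class MyLifecycleNode(Node):
    def __init__(self):
        super().__init__('my_lifecycle_node')

    def on_configure(self, state: State) -> TransitionCallbackReturn:
        self.get_logger().info('Configuring...')
        return TransitionCallbackReturn.SUCCESS

    def on_activate(self, state: State) -> TransitionCallbackReturn:
        self.get_logger().info('Activating...')
        return super().on_activate(state)

    def on_deactivate(self, state: State) -> TransitionCallbackReturn:
        self.get_logger().info('Deactivating...')
        return super().on_deactivate(state)


def main():
    rclpy.init()
    node = MyLifecycleNode()
    rclpy.spin(node)
    rclpy.shutdown()


if __name__ == '__main__':
    main()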

Lifecycle and composable nodes are available in Turtle Nest as of the latest 1.2.0 release, which is now available for all active ROS 2 distributions via apt. Since the last announcement here on Discourse, it’s now also possible to create a Custom Message Interfaces package.

Hope you find these features as useful as they’ve been for my day-to-day development!

4 posts - 2 participants

Read full topic

by jak on October 02, 2025 12:05 PM

October 01, 2025
Update RMW Zenoh-Pico for Zenoh 1.4.0

At ROSConJP 2025 on 9/9, eSOL demonstrated robot operation using micro-ROS with Zenoh-Pico.
Fortunately, @Yadunund gave an excellent presentation on integrating ROS and Zenoh-Pico, and I think many Japanese developers learned about Zenoh-Pico through it.

Now that the team has gained solid working experience, eSOL would like to announce an update to the software we showed at ROSConJP 2025.

This update is an enhancement of the version previously posted in the following topic.

Major updates include:

  • Support Zenoh and Zenoh-Pico version 1.4.0
  • Support for several M5Stack (ESP32) dev kits in PlatformIO environments
    • Additional patches for several Zenoh-Pico
  • micro-ROS only, without ROS 2 or Zenohd
    • Confirmed that the M5Stack can communicate using both unicast and multicast

Here’s a video at the end of the post.
We haven’t been able to measure precisely, but it is able to send ROS messages over the ESP32’s Wi-Fi at around 20 ms intervals.

1 post - 1 participant

Read full topic

by k-yokoyama-esol on October 01, 2025 03:50 PM

[Announcement] Safe DDS 3.0 is ISO 26262 ASIL D certified — ROS 2 tutorial + field deployment

Safe DDS 3.0 is now ISO 26262 ASIL D certified (renewal after 2.0). It’s compatible with ROS 2. We’re sharing a hands-on tutorial and pointing to a field-deployed device using Safe DDS.


Why this might help ROS 2 teams

Many projects need deterministic communications and safety certification evidence on the path to production. Our goal with Safe DDS is to provide a certified DDS option that integrates with existing ROS 2 workflows while supporting real-world operational needs (TSN, redundancy, memory control, etc.).

Certification cadence: Safe DDS has maintained ASIL-D certification across major releases (2.0 → 3.0). For teams planning multi-year products, the ability to renew certification as versions evolve can simplify compliance roadmaps.


What’s new in Safe DDS 3.0 (highlights)

  • @optional & @external type support — optional members; external basic types; sequences/arrays of basic types; and strings.

  • Custom memory allocators — integrate your own allocators for tighter control.

  • Channel redundancy — listen on multiple channels simultaneously for fault tolerance.

  • Manual entity decommissioning — finer control over DDS entity lifecycle.

  • TSN compatibility for UDPv4 transport — operate the ASIL-D–certified UDPv4 transport within TSN setups.

  • Ethernet transport — native IEEE 802.1Q (TSN-compatible).

  • Docs & tutorials — expanded resources (ROS 2 integration, RTEMS getting-started, board packages for NXP, STMicroelectronics, Espressif, …).


Using Safe DDS with ROS 2

The tutorial below walks through the integration model and configuration patterns with ROS 2:

:backhand_index_pointing_right: Tutorial: https://safe-dds.docs.eprosima.com/main/intro/tutorial_ros2.html

For those evaluating real deployments, here’s a previously released ruggedized depth camera using Safe DDS:

:backhand_index_pointing_right: Field deployment (RealSense D555 PoE):
https://realsenseai.com/ruggedized-industrial-stereo-depth/d555-poe/?q=%2Fruggedized-industrial-stereo-depth%2Fd555-poe%2F&


Open to questions

Happy to discuss ROS 2 integration details (QoS, discovery, transports), TSN/802.1Q topologies, determinism/memory considerations, and migration paths (prototype on Fast DDS → production with Safe DDS).

1 post - 1 participant

Read full topic

by Jaime_Martin_Losa on October 01, 2025 10:24 AM

Rosbag2 composable record - splitting files

Hi

I have been using the rosbag2 to record topics as a composable node for a while now. Does anyone here know how I could make use of splitting the recording into several files during the recording process using the max_file_size parameter? Is this even possible in the composable node method?
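
For anyone else looking into this, here is a rough sketch of what loading the rosbag2 recorder as a composable node from a Python launch file might look like. The plugin name rosbag2_transport::Recorder and, in particular, the parameter keys (storage.uri, storage.max_bagfile_size, record.all_topics) are assumptions on my part; please verify them against the rosbag2_transport recorder parameters for your distro.

# Hypothetical launch sketch: rosbag2 recorder as a composable node with file splitting.
# The plugin name and parameter keys below are assumptions; verify against your
# rosbag2_transport version before relying on them.
from launch import LaunchDescription
from launch_ros.actions import ComposableNodeContainer
from launch_ros.descriptions import ComposableNode


def generate_launch_description():
    recorder = ComposableNode(
        package='rosbag2_transport',
        plugin='rosbag2_transport::Recorder',  # assumed component name
        name='rosbag2_recorder',
        parameters=[{
            'storage.uri': '/data/my_bag',            # assumed parameter key
            'storage.max_bagfile_size': 1073741824,   # assumed key; split at ~1 GiB
            'record.all_topics': True,                # assumed parameter key
        }],
    )
    container = ComposableNodeContainer(
        name='recorder_container',
        namespace='',
        package='rclcpp_components',
        executable='component_container',
        composable_node_descriptions=[recorder],
    )
    return LaunchDescription([container])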

3 posts - 2 participants

Read full topic

by jclinton830 on October 01, 2025 07:16 AM

What’s the #1 bottleneck in your robotics dev workflow? (help us prioritize SnappyTool)

Hi everyone,

I’ve been consulting in robotics on and off, and one pattern keeps coming up: our development tools are still too painful.

  • Setting up projects can take days, often requiring advanced expertise just to get an environment working.

  • Teams often say, “Framework X didn’t work for us, so we built our own library.” That may solve a narrow problem, but it slows down progress for the field as a whole.

We think there must be a better way.

That’s why we’re building SnappyTool, a browser-based drag-and-drop robotics design platform where you can:

  • Assemble robots visually

  • Auto-generate URDF / ROS code

  • Share designs and even buy/sell robot parts via a marketplace

  • Use it freely with a generous freemium model (not gatekeeping innovation!!)


The ask:

What’s the #1 bottleneck in your robotics workflow that, if solved, would significantly improve your productivity (enough that you or your team would pay for it)?

Examples could be:

  • Simulation setup

  • CAD → URDF conversion

  • Version control for robot models

  • Sourcing compatible hardware parts

  • Deployment and integration

We have a little runway and have assembled a small team to work full-time on this. We’d like to make sure we are solving real pains first, not imaginary ones.

Any input would be very much appreciated. Thank you!

1 post - 1 participant

Read full topic

by Alejo_Cain on October 01, 2025 05:38 AM

September 30, 2025
New Tools for Robotics: RQT Frame Editor and the pitasc Framework

As robotics continues to expand into industrial and collaborative environments, researchers and developers are working on tools that make robots easier to configure, teach, and reconfigure for real-world tasks. In a recent talk, Daniel Bargmann (Fraunhofer IPA) introduced two powerful software solutions designed for exactly this purpose: the RQT Frame Editor and the pitasc Framework.

RQT Frame Editor – Simplifying TF-Frame Management

The RQT Frame Editor is a ROS plugin that makes working with TF-frames more intuitive. Instead of editing configuration files manually, users can visually create, arrange, and adjust frames within the familiar RQT and RViz environments.

Key features include:

  • Interactive frame manipulation – Move, rotate, or manually set values for frames.

  • Copy and reuse poses – Copy positions or orientations from existing frames.

  • Mesh visualization – Attach meshes (including custom STL files) to frames and view them in RViz.

  • Frame grouping and pinning – Organize frames by groups or “pin” active frames for efficient workflow.

  • ROS service integration – Use frame editor functionality programmatically in your own applications.

These capabilities are especially valuable for developers working on multi-robot setups, simulation environments, or applications that require frequent TF-frame adjustments.

Documentation and source code are available on GitHub.

pitasc – A Skill-Based Framework for Force-Controlled Robotics

The second tool highlighted in the presentation is pitasc, a robot control framework designed for force-controlled assembly and disassembly tasks. Unlike traditional, vendor-specific robot programming approaches, pitasc uses a skill-based programming model.

In practice, this means developers do not write low-level motion code directly. Instead, they arrange and parameterize skills—reusable building blocks that range from simple movements (e.g., LIN or PTP) to advanced behaviors that combine position and force control across different dimensions.

Real-World Applications

pitasc has already been deployed across a wide variety of industrial use cases, including:

  • Assembly of plastic components

  • Riveting, screwing, and clipping tasks

  • Flexible robot cells with rapid reconfiguration

  • Dual-arm coordination, such as automated wiring of electrical cabinets

This flexibility allows pitasc to support both collaborative robots and industrial robots, bridging the gap between research and production environments.

Documentation and source code are available here.

pitasc at a glance

Live demo of rqt frame editor and pitasc

Watch the full talk by Daniel Bargmann on YouTube to see live demos of both the RQT Frame Editor and pitasc in action, including real-world examples of assembly and disassembly tasks.

by Yasmine Makkaoui on September 30, 2025 05:47 PM

AMP With Carter Schultz | Cloud Robotics WG Meeting 2025-10-08

The CRWG is pleased to welcome Carter Schultz of AMP to our coming meeting on Wed, Oct 8, 2025, from 4:00 PM to 5:00 PM UTC. AMP is working to modernise global recycling infrastructure with AI‑driven robotics. Carter will share the company’s vision and, in particular, the key challenges it faces when operating a large fleet of autonomous robots.

Please note that the meeting day has changed for the CRWG. Previous meetings were on Monday; they are now on Wednesday at the same time.

Last meeting, guest speakers Lei Fu and Sahar Slimpour, from the Zurich University of Applied Sciences and the University of Turku respectively, joined the CRWG to talk about their ROSBag MCP Server research (also shared in ROS Discourse). If you’re interested in watching the meeting, it is available on YouTube.

The meeting link for next meeting is here, and you can sign up to our calendar or our Google Group for meeting notifications or keep an eye on the Cloud Robotics Hub.

Hopefully we will see you there!

1 post - 1 participant

Read full topic

by mikelikesrobots on September 30, 2025 09:07 AM

【PIKA】Method for Teleoperating Any Robotic Arm via Pika

Hi everyone,

I’d like to share a universal method for teleoperating robotic arms using Pika Sense. This approach works with any ROS-enabled robotic arm (we’ve tested it with Piper, xArm, and UR robots) by leveraging high-precision 6D pose tracking (0.3mm accuracy) and incremental control algorithms. The system publishes standard geometry_msgs/PoseStamped messages on the pika/pose topic, making integration straightforward. Hope this helps anyone looking to implement teleoperation across different robot platforms!


Teleoperation

Teleoperation of robotic arms is achieved using Pika Sense. When used with external positioning base stations, Pika Sense can acquire 6D pose data with an accuracy of 0.3mm. After aligning the coordinate system of Pika Sense with that of the robotic arm’s end effector, incremental control is employed to map the 6D pose data to the end effector of the robotic arm, thereby realizing teleoperation.

In summary, the teleoperation principle consists of four key steps:

  1. Acquire 6D pose data
  2. Align coordinate systems
  3. Implement incremental control
  4. Map 6D pose data to the robotic arm

Below is a detailed breakdown and explanation of each step.

Acquiring 6D Pose Data

Positioning Principle of Pika Sense and Station

1. Positioning Mechanism of Base Stations

  • Each base station is equipped with an infrared LED array and two rotating laser transmitters (responsible for horizontal and vertical scanning, respectively):
    • The infrared LEDs flash globally at a frequency of 60Hz, providing synchronization signals for the entire space.
    • The laser transmitters, driven by motors to rotate, emit horizontal and vertical laser beams alternately, scanning the space in a cycle of 10ms (resulting in a complete cycle of 20ms).
  • A single base station can achieve a laser scanning coverage of 5×5 meters; with four base stations working collaboratively, the coverage can be expanded to 10×10 meters.

2. Positioning Implementation of Pika Sense

  • The upper sensor of Pika Sense is called the Tracker, which is densely equipped with more than 70 photosensors on its surface. Each sensor can receive infrared signals and laser scans.
  • Positioning calculation process:
    • The sensors record the time of arrival of the laser, and combined with the base station’s scanning cycle, calculate the horizontal and pitch angles of the sensors relative to the base station.
    • Through the spatial distribution and time difference data of multiple sensors (≥5), the precise position and orientation of the Tracker are solved.
  • Calculations are completed directly by the local processor without the need for image processing, resulting in a delay of only 20ms and a positioning accuracy of 0.3mm.

The 6D pose data is published as messages of the geometry_msgs/PoseStamped type to the pika/pose topic, which is compatible with end pose control of most robotic arms available on the market.

In addition to the ROS message type, if you need to access 6D pose data independent of ROS, please refer to our pika_sdk.
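
For anyone wanting to try this quickly, a minimal rclpy subscriber to the pika/pose topic could look like the sketch below. The topic name and message type come from the description above; the node and callback names are just illustrative.

# Minimal sketch: subscribing to the 6D pose published by Pika Sense.
# Topic name and message type come from the post; node/callback names are illustrative.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped


class PikaPoseListener(Node):
    def __init__(self):
        super().__init__('pika_pose_listener')
        self.subscription = self.create_subscription(
            PoseStamped, 'pika/pose', self.on_pose, 10)

    def on_pose(self, msg: PoseStamped):
        p = msg.pose.position
        q = msg.pose.orientation
        self.get_logger().info(
            f'pos=({p.x:.4f}, {p.y:.4f}, {p.z:.4f}) '
            f'quat=({q.x:.3f}, {q.y:.3f}, {q.z:.3f}, {q.w:.3f})')


def main():
    rclpy.init()
    node = PikaPoseListener()
    rclpy.spin(node)
    node.destroy_node()
    rclpy.shutdown()


if __name__ == '__main__':
    main()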

Coordinate System Alignment

In the first step [Acquiring 6D Pose Data], whether the 6D pose data is obtained by subscribing to the ROS topic or via the Pika SDK, the coordinate system of Pika Sense is centered at the gripper, with the x-axis facing forward, the y-axis facing left, and the z-axis facing upward, as shown in the figure below:

Different robotic arms have different coordinate systems for their end effectors. However, for most of them, the z-axis faces forward, while the orientations of the x-axis and y-axis depend on the initial rotation values of the robotic arm’s end effector. The method for checking the coordinate system of a robotic arm’s end effector varies by model; typically, it can be viewed through the host-software provided by the manufacturer or by loading the robotic arm model in ROS RViz.

After understanding the coordinate systems of both Pika Sense and the robotic arm’s end effector, the 6D pose data of Pika Sense is converted into a homogeneous transformation matrix. This matrix is then multiplied by an adjustment matrix to align the Pika Sense coordinate system with the robotic arm’s end effector coordinate system. This completes the coordinate system alignment process.
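
As a rough numerical illustration of this step (not Pika’s actual code), the sketch below builds a homogeneous transform from a pose and right-multiplies it by an adjustment matrix. The example adjustment, a +90° rotation about the y-axis that points the adjusted frame’s z-axis along Pika’s forward x-axis, is only a placeholder; the real matrix depends on your specific arm.

# Sketch of the coordinate-system alignment step. The adjustment rotation below
# is an illustrative guess; the real matrix depends on your arm's end-effector frame.
import numpy as np
from scipy.spatial.transform import Rotation as R


def pose_to_matrix(xyz, quat_xyzw):
    """Build a 4x4 homogeneous transform from position + quaternion."""
    T = np.eye(4)
    T[:3, :3] = R.from_quat(quat_xyzw).as_matrix()
    T[:3, 3] = xyz
    return T


# Example pose from Pika (position in meters, quaternion x, y, z, w).
T_pika = pose_to_matrix([0.30, 0.05, 0.20], [0.0, 0.0, 0.0, 1.0])

# Adjustment: rotate +90 deg about y so the adjusted frame's z-axis points along
# Pika's forward x-axis (illustrative only).
T_adjust = np.eye(4)
T_adjust[:3, :3] = R.from_euler('y', 90, degrees=True).as_matrix()

# Pose expressed in the end-effector convention.
T_aligned = T_pika @ T_adjust
print(T_aligned)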

Incremental Control

In the second step [Coordinate System Alignment], we align the coordinate system of Pika Sense with that of the robotic arm’s end effector (with the z-axis facing forward). However, a question arises: when holding Pika Sense and moving it forward, will the value of its z-axis necessarily increase positively?

Not necessarily. The pose value is relative to its base_link. If the z-axis of base_link happens to be exactly aligned with the z-axis of Pika Sense, then the z-axis value of Pika Sense will indeed increase. However, the base_link of Pika Sense is a coordinate system generated when Pika Sense is calibrated with the base station, with the x-axis facing forward, the y-axis facing left, and the z-axis facing upward at calibration time. In other words, where base_link ends up is effectively arbitrary.

So, how do we map the coordinates of Pika Sense to the robotic arm’s end effector? How can we ensure that when Pika Sense moves forward/left, the robotic arm’s end effector also moves forward/left accordingly?

The answer is: use incremental control.

In teleoperation, the pose provided by Pika Sense is an absolute pose. However, we do not want the robotic arm to jump directly to this absolute pose. Instead, we want the robotic arm to follow the relative movement of the operator, starting from its current position. Simply put, it involves converting the absolute pose change of the operating device (Pika Sense) into a relative pose command that the robotic arm needs to execute.

The core code for this functionality is as follows:

# Incremental control (uses numpy as np and the project's `tools` helpers)
def calc_pose_incre(self, base_pose, pose_data):
    # Pika Sense pose at the start of teleoperation (reference "zero" for increments)
    begin_matrix = tools.xyzrpy2Mat(base_pose[0], base_pose[1], base_pose[2],
                                    base_pose[3], base_pose[4], base_pose[5])
    # Robot end-effector pose at the start of teleoperation
    zero_matrix = tools.xyzrpy2Mat(self.initial_pose_rpy[0], self.initial_pose_rpy[1], self.initial_pose_rpy[2],
                                   self.initial_pose_rpy[3], self.initial_pose_rpy[4], self.initial_pose_rpy[5])
    # Current Pika Sense pose
    end_matrix = tools.xyzrpy2Mat(pose_data[0], pose_data[1], pose_data[2],
                                  pose_data[3], pose_data[4], pose_data[5])
    # Apply the relative motion (begin -> end) to the arm's initial pose
    result_matrix = np.dot(zero_matrix, np.dot(np.linalg.inv(begin_matrix), end_matrix))
    xyzrpy = tools.mat2xyzrpy(result_matrix)
    return xyzrpy

This function implements incremental control using the arithmetic rules of transformation matrices. Let’s break down the code step by step:

Input Parameters

  • base_pose: This is the reference pose at the start of teleoperation. When teleoperation is triggered, the system records the pose of the operating device (Pika Sense) at that moment and stores it as self.base_pose. This pose serves as the “starting point” or “reference zero” for calculating all subsequent increments.
  • pose_data: This is the real-time pose data of the operating device (Pika Sense) received at the current moment.

Matrix Conversion

The function first converts three key poses (expressed in the format [x, y, z, roll, pitch, yaw]) into 4×4 homogeneous transformation matrices. This conversion is typically performed by the tools.xyzrpy2Mat function.

  • begin_matrix: Converted from base_pose, it represents the pose matrix of the operating device at the start of teleoperation. We denote it as T_{begin}.
  • zero_matrix: Converted from self.initial_pose_rpy, it represents the pose matrix of the robotic arm’s end effector at the start of teleoperation. This is the “starting point” for the robotic arm’s movement. We denote it as T_{zero}.
  • end_matrix: Converted from pose_data, it represents the pose matrix of the operating device at the current moment. We denote it as T_{end}.

Core Calculation

This is the most critical line of code:

result_matrix = np.dot(zero_matrix, np.dot(np.linalg.inv(begin_matrix), end_matrix))

We analyze it using matrix multiplication:

The formula can be expressed as: Result = T_{zero} × (T_{begin})^{-1} × T_{end}

  • np.linalg.inv(begin_matrix): Calculates the inverse matrix of begin_matrix, i.e., (T_{begin})^{-1}. In robotics, the inverse matrix of a transformation matrix represents its reverse transformation.
  • np.dot(np.linalg.inv(begin_matrix), end_matrix): This step calculates (T_{begin})^{-1} × T_{end}. The physical meaning of this operation is the transformation required to switch from the begin coordinate system to the end coordinate system. In other words, it accurately describes the relative pose change (increment) of the operating device from the start of teleoperation to the current moment. We refer to this increment as ΔT.
  • np.dot(zero_matrix, ...): This step calculates T_{zero} × ΔT. Its physical meaning is applying the relative pose change (ΔT) just calculated to the initial pose (T_{zero}) of the robotic arm’s end effector.

Result Conversion and Return

  • xyzrpy = tools.mat2xyzrpy(result_matrix): Converts the calculated 4×4 target pose matrix result_matrix back to the [x, y, z, roll, pitch, yaw] format that the robotic arm controller can understand.
  • return xyzrpy: Returns the calculated target pose.
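
To make the math above easier to experiment with, here is a self-contained sketch of the same incremental-control calculation using NumPy and SciPy in place of the post’s internal tools helpers (which are not shown). It assumes an ‘xyz’ Euler convention for the rpy values, which may differ from the original implementation.

# Standalone version of the incremental-control math, using SciPy instead of
# the post's `tools` helpers. The 'xyz' Euler convention is an assumption.
import numpy as np
from scipy.spatial.transform import Rotation as R


def xyzrpy_to_mat(x, y, z, roll, pitch, yaw):
    T = np.eye(4)
    T[:3, :3] = R.from_euler('xyz', [roll, pitch, yaw]).as_matrix()
    T[:3, 3] = [x, y, z]
    return T


def mat_to_xyzrpy(T):
    rpy = R.from_matrix(T[:3, :3]).as_euler('xyz')
    return [*T[:3, 3], *rpy]


def calc_pose_incre(arm_initial_pose, base_pose, pose_data):
    """Apply the operator's relative motion (base -> current) to the arm's initial pose."""
    T_zero = xyzrpy_to_mat(*arm_initial_pose)   # arm end effector at teleop start
    T_begin = xyzrpy_to_mat(*base_pose)         # Pika pose at teleop start
    T_end = xyzrpy_to_mat(*pose_data)           # Pika pose now
    # Result = T_zero * inv(T_begin) * T_end
    T_result = T_zero @ np.linalg.inv(T_begin) @ T_end
    return mat_to_xyzrpy(T_result)


# Moving Pika 5 cm forward along x moves the arm target 5 cm along x as well.
arm_start = [0.4, 0.0, 0.3, 0.0, 0.0, 0.0]
pika_start = [1.2, 0.7, 1.0, 0.0, 0.0, 0.0]
pika_now = [1.25, 0.7, 1.0, 0.0, 0.0, 0.0]
print(calc_pose_incre(arm_start, pika_start, pika_now))  # x becomes ~0.45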

Mapping 6D Pose Data to the Robotic Arm

Through incremental control, we obtain the relative pose commands that the robotic arm needs to execute. However, the control commands vary among different robotic arms. This requires writing different control interfaces for each type of robotic arm. For example:

  • Robotic arms such as Piper and Xarm can directly accept commands in the form of xyzrpy or xyz + quaternion for control. The only difference is that Piper uses the rostopic method for message publishing, while Xarm uses the rosservice method for request sending.
  • UR robotic arms use the xyz and rotation vector format for command delivery.

In summary, to send the 6D pose data calculated via incremental control to the robotic arm for control, the final step is to adapt to the robotic arm’s control interface.
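
As a hypothetical example of this last adaptation step, the sketch below packs an xyzrpy target into a geometry_msgs/PoseStamped and publishes it. The command topic name /arm/target_pose is invented for illustration, and arms that expect a service call or a rotation-vector format (such as UR) would need a different adapter.

# Hypothetical adapter: publish an xyzrpy target as a PoseStamped command.
# The topic name '/arm/target_pose' is invented for illustration; real arms
# (Piper, xArm, UR) each expose their own command interface.
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseStamped
from scipy.spatial.transform import Rotation as R


class TeleopPoseAdapter(Node):
    def __init__(self):
        super().__init__('teleop_pose_adapter')
        self.pub = self.create_publisher(PoseStamped, '/arm/target_pose', 10)

    def send_xyzrpy(self, xyzrpy):
        msg = PoseStamped()
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.header.frame_id = 'base_link'
        msg.pose.position.x = float(xyzrpy[0])
        msg.pose.position.y = float(xyzrpy[1])
        msg.pose.position.z = float(xyzrpy[2])
        qx, qy, qz, qw = R.from_euler('xyz', xyzrpy[3:]).as_quat()
        msg.pose.orientation.x = float(qx)
        msg.pose.orientation.y = float(qy)
        msg.pose.orientation.z = float(qz)
        msg.pose.orientation.w = float(qw)
        self.pub.publish(msg)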


Summary

This article elaborates on the core technical principles of realizing robotic arm teleoperation based on Pika Sense. The entire process can be summarized into four key steps:

  1. Acquire 6D pose data: First, a system composed of Pika Sense and external positioning base stations is used to accurately capture the operator’s hand movements. The base stations scan the space using infrared synchronization signals and rotating lasers. The photosensors on Pika Sense receive these signals, solve its high-precision six-degree-of-freedom (6D) pose (position and orientation) in real time, and then publish this data via ROS topics or the SDK.

  2. Align coordinate systems: Since the coordinate system definitions of Pika Sense and the end effectors of different robotic arms are inconsistent, alignment is essential. By obtaining the respective coordinate system definitions of Pika Sense and the target robotic arm, a transformation matrix is calculated to convert the pose data of Pika Sense into the coordinate system matching the robotic arm’s end effector, ensuring the intuitiveness of subsequent control.

  3. Implement incremental control: To enable the robotic arm to smoothly follow the operator’s relative movement (rather than jumping abruptly to an absolute position), an incremental control strategy is adopted. This method takes the hand pose and robotic arm pose at the start of teleoperation as references, uses matrix operations to calculate, in real time, the relative pose change (increment) of the hand from the “starting point” to the “current point”, and then applies this increment to the initial pose of the robotic arm to obtain the current target pose of the robotic arm.

  4. Map to the robotic arm: The final step is to send the calculated target pose commands to the robotic arm for execution. Since robotic arms of different brands and models (e.g., Piper, xArm, UR) have distinct control interfaces and communication protocols (e.g., ROS topic, ROS service, specific format commands), corresponding adaptation code needs to be written to format the standard 6D pose data into commands that the specific robotic arm can recognize and execute, ultimately achieving precise teleoperation control.


That’s it—four steps to teleoperate any robotic arm with Pika! The magic is in the incremental control: your hand moves 5cm forward, the robot moves 5cm forward. Simple math, smooth motion. We’ve tested this on Piper, xArm, and UR arms, and the same approach should work for your robot too. Questions? Want to share your teleoperation adventures? Drop a comment below!

Cheers!

1 post - 1 participant

Read full topic

by Agilex_Robotics on September 30, 2025 03:28 AM

ROS2 "State of the Events Executors" - Benchmark comparison between rclcpp::experimental::EventsExecutor and cm_executors::EventsCBGExecutor

As part of the upcoming ROS2 Lyrical Luth release, the client library working group has been planning to mainstream an EventsExecutor implementation as the new default executor in rclcpp. The current experimental implementation is limited by its inability to properly handle simulation time, unlike the EventsCBGExecutor implemented by @JM_ROS over at Cellumation, which can properly handle sim time as well as offering a multithreaded mode. As a first step towards mainstreaming an EventsExecutor implementation, we ran an extensive set of benchmarks built on top of iRobot’s ros2-performance framework (Keep an eye out, as we are hoping to eventually open-source the full benchmark test suite!)

This post will serve as a deep dive into the performance characteristics of the two executors as well as a jumping off point for discussing the overall state of executors (and middleware implementations) in ROS2. (This is a cleaned-up rewrite of a github gist that I originally put all the benchmark info into)

Some notes about the benchmarks:

  • Benchmark environment:
    • Upstream ROS rolling docker container running on an x86 developer laptop under minimal load
    • rclcpp rolling: b14af74a4c9b8683e72b15d61d0ed9121d883973
    • cm_executors: 783a5e329ee8b04abfa3b3397532e979576a2b1f
    • ros2_performance: 4528f43410922379b8da501630d9d938046e48e8
  • This suite of benchmarks was run at least 3 times per implementation, to ensure consistent results. For brevity’s sake, we’ll stick to one graph each for this analysis, but the full set of results will be made available elsewhere.
  • ipc_on = running with intra-process mode
  • In the latency tests, max latency signifies the highest single latency measurement taken for that message size. We didn’t do any outlier filtering on this dataset (aside from the high latencies from the first few seconds), so this value is known to have more consistent variation.
  • In the process of producing these benchmarks, we discovered a bug in the generation of the clients/services single- and multi-process CPU usage graphs. Graphs were generated for each, but the underlying data represents just the single-process benchmark, so we’ll only cover single-process clients/services CPU usage.
  • There were a few tests we couldn’t run with the EventsCBGExecutor because of freezes or crashes, and so those tests were also omitted for the upstream EventsExecutor.
  • For a more 1:1 comparison between the two executors, the EventsCBGExecutor was fixed to use just one thread.

Takeaways, tl;dr:

  • Despite some initial concerns about marginally higher CPU usage for the EventsCBGExecutor compared to the experimental EventsExecutor, there doesn’t appear to be too much of a difference across all the characteristics we tested, with the following exceptions

    • EventsExecutor performed slightly better on the long running pub/sub CPU usage test.
    • EventsCBGExecutor performed slightly better on the long running actions CPU usage test.
  • Both executors demonstrate memory leaks in the longer running pub / sub and actions tests. After further investigation, the SingleThreadedExecutor and MultiThreadedExecutor also show a climb in memory for pub/sub, while actions remain stable for the SingleThreadedExecutor (except for rmw_zenoh).

  • As we step through the benchmarks, I’ll point out any differences between the executors as they appear.

CPU Usage - Pub/Sub - Single Process

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

We can see that the max y axis for the second graph is way higher due to CycloneDDS seemingly causing the test to consume way more CPU at higher message payloads, amidst otherwise highly comparable results. This difference in CPU for CycloneDDS specifically was consistent across all runs of the benchmark suite.

CPU Usage - Pub Sub - Multi Process

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

Interestingly, in multi-process mode the climb to ~4-5% of a core at larger payload sizes is now consistent for both executors when running CycloneDDS. Otherwise, both executors seem to put up similar results here.

CPU Usage - Services / Clients - Single Process

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

CPU Usage - Pub/Sub - Long Running Test (10m)

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

The usage pattern for both executors appears fairly similar, with the EventsExecutor averaging around 0.05 - 0.1% less CPU usage than EventsCBGExecutor in most runs.

CPU Usage - Services / Clients - Long Running Test (10m)

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

CPU Usage - Actions - Long Running Test (10m)

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

We again see a similar usage pattern between the two executors, but with the EventsCBGExecutor consistently maxing out at ~2% less CPU than the EventsExecutor and with a smoother looking graph.

Publisher Latency - Single Process

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

Subscriber Latency - Single Process

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

Huge differences in max latency aside, we see comparable results here between the two executor implementations for both pub and sub latency. The mean comparison demonstrates extremely similar results, including CycloneDDS’s extreme latency increases at higher payload sizes. Those latency increases appear to set in at slightly smaller payloads with EventsCBGExecutor than with EventsExecutor.

Publisher Latency - Multi Process

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

Subscriber Latency - Multi Process

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

Publisher Latency - Long Test (10m)

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

Subscriber Latency - Long Test (10m)

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

Memory Scaling Comparison

[Graphs: rclcpp::experimental::EventsExecutor vs. cm_executors::EventsCBGExecutor]

RAM Usage - Pub/Sub - Long Test (10m)

[Graphs: rclcpp::experimental::EventsExecutor, cm_executors::EventsCBGExecutor, rclcpp::SingleThreadedExecutor, rclcpp::MultiThreadedExecutor]

Not much difference between the two events executors. This appears to expose a slowly climbing memory leak on the client library side, either in both of these executor implementations or in some other part of the code. The leak appears consistent across all RMWs and across all runs of all four executors (single threaded, multi threaded, EventsExecutor, EventsCBGExecutor). Zenoh without intra-process shows a much sharper increase in the first few minutes.

RAM Usage - Services/Clients - Long Test (10m)

[Graphs: rclcpp::experimental::EventsExecutor, cm_executors::EventsCBGExecutor, rclcpp::SingleThreadedExecutor, rclcpp::MultiThreadedExecutor]

Not much difference across the executors, with the multi-threaded executor exhibiting much higher overall baselines in RAM usage. We again see RAM climbing for all four, but the rate of growth appears to level out about 5 or so minutes into the tests.

RAM Usage - Actions - Long Test (10m)

[Graphs: rclcpp::experimental::EventsExecutor, cm_executors::EventsCBGExecutor, rclcpp::SingleThreadedExecutor, rclcpp::MultiThreadedExecutor]

Both EventsExecutor implementations demonstrate significant memory leaks during the long running actions tests. The multi-threaded executor’s usage pattern looks similar to clients / services. In the SingleThreadedExecutor, rmw_zenoh appears to exhibit leaks unlike the other tested RMWs.

7 posts - 4 participants

Read full topic

by skye.galaxy on September 30, 2025 12:17 AM

September 29, 2025
⏳ Regular Priced ROSCon Registration Extended until October 5th!

Hi Everyone,

Great news regarding ROSCon 2025 in Singapore! :tada: We’ve extended the regular price ticket sales :tada:. The new deadline for purchasing tickets at the regular price is now Sunday, October 5th (Mon, Oct 6, 2025 6:59 AM UTC). This extension was made to accommodate our colleagues in Asia, especially those in India, as Singapore’s visa application window for India only opens one month prior to travel. We still recommend that you register as soon as possible as our fantastic ROSCon workshops are starting to sell out and about half of them have less than ten tickets remaining (see the list below).

ROSCon Workshop Status

  • Ros2_control: Fun with Robot Drivers – less than ten seats left
  • Scalable Multi-Robot Scene workflows using ROS Simulation Interfaces standard in Isaac Sim - SOLD OUT
  • Hands-On Aerial Robotics Using PX4 and ROS 2 – many seats available
  • ROS 2 Networking Redefined: Deep Dive into RMW Zenoh – less than ten seats left
  • ROS 2 & micro-ROS Dive In: Low-Cost Underwater Robotics – less than ten seats left
  • How to Implement a Full ROS 2 Application: a Tic-Tac-Toe Player Robot - many seats available
  • Introducing AI PCs for Embodied AI – many seats available
  • Reinforcement Learning for Deliberation in ROS 2 – less than twenty seats left
  • Introduction to ROS and Building Robots with Open-Source Software – many seats available

2 posts - 1 participant

Read full topic

by Katherine_Scott on September 29, 2025 06:52 PM

September 28, 2025
SDF to URDF conversion in 2025

Hi all,

it seems URDF, SDF and the conversion between them is a topic that keeps on giving. When I weekend-project created FusionSDF last year, I didn’t expect to actually still need URDFs anymore as SDFs can now be used for robot_description. Turns out I was too optimistic.

In either case, instead of porting 10+ year old ROS 1 code to ROS 2, I decided to leverage the more recent sdformat_urdf to convert SDF to URDF. Thanks to @sloretz, @quarkytale, @ahcorde and others for sdformat_urdf! My tool has the creative name sdf_to_urdf.

It consists of less than 50 lines of code, nearly all of it boilerplate. However, I didn’t find an already existing ROS 2 tool. It would be great to add the functionality directly to sdformat_urdf though (:wink:). Hence, here we are: sdf_to_urdf

Best,
Andreas

3 posts - 2 participants

Read full topic

by ahb on September 28, 2025 06:16 PM

September 27, 2025
Update to vscode_ros2_workspace

I’ve updated athackst/vscode_ros2_workspace so you can use the main branch as a single template across ROS distributions.

What changed

  • The default branch now supports any ROS version—just set the desired base image in .devcontainer/Dockerfile’s FROM line. The template currently defaults to osrf/ros:jazzy-desktop-full.

  • The repo includes guidance for GUI enablement (X11/Wayland, NVIDIA/WSL2 notes) and non-root user development (UID/GID hints). After building the devcontainer, you’ll see the ros user and can adjust UID/GID if needed.

Quick start

  1. Click “Use this template” on the repo and create your workspace. The README notes that the default branch works for any ROS by changing the FROM line in .devcontainer/Dockerfile.

  2. (Optional) Switch ROS versions by setting, e.g.:

    # .devcontainer/Dockerfile
    FROM osrf/ros:humble-desktop-full
    
    
  3. Open in VS Code – it will build the dev container for you; your terminal user will be ros. If you hit X11/Wayland auth or display issues, the README documents fixes (DISPLAY, WAYLAND variables, volumes, NVIDIA/WSL2 notes).

Extras included

  • Preconfigured linters/formatters, tasks, and launch configs.

  • CI workflow you can tailor to your project.

Why this helps

  • One template for all supported ROS 2 distros; simpler upgrades and onboarding.

  • Built-in GUI and non-root guidance improves day-to-day dev experience out of the box.

Feedback welcome
If you try this out—especially on different distros or GPU/WSL2 setups—please share what works and what doesn’t.

1 post - 1 participant

Read full topic

by athackst on September 27, 2025 07:14 PM

September 26, 2025
FULL TUTORIAL: Isaac SIM -> isaac_ros_foundationpose -> ManyMove

Hi everyone!

I just published a full isaac_ros_foundationpose pipeline tutorial with a custom fine-tuned YOLOv8 model.

Here you’ll find the YOUTUBE VIDEO!

KEY FEATURES

The pipeline includes:

  • NVIDIA Isaac SIM 5.0 to generate synthetic data for fine-tuning the YoloV8s model and for a digital twin of the scene to publish a virtual RealSense camera stream and robot data to ROS2
  • Ultralytics to fine-tune the YoloV8s model
  • isaac_ros_foundationpose to estimate the 6D pose of the object, using custom fine-tuned YoloV8s for object detection
  • ManyMove to handle the ROS2 logic and motion planning leveraging MoveIt2 and BehaviorTree.CPP

Hardware:

  • NVIDIA Jetson Orin Agx Developer Kit for isaac_ros_foundationpose and ManyMove on ROS2 Humble
  • Laptop with RTX card for Isaac SIM

HIGHLIGHTS

  • All assets provided to complete the pipeline, from .usd files for SDG and scene to .obj and mesh for FoundationPose
  • Example executable with ManyMove behavior tree nodes that rectify the FoundationPose output to allow grasping of symmetric objects and limit pose validity to a specific bounding box; these features stabilize the output and enhance reliability in bin-picking applications.
  • Full Isaac Sim scene with Ufactory Lite6 cobot and gripper, customized to provide a realistic pneumatic gripper simulation while keeping coherent ROS2 interaction and MoveIt2 path planning.

LINKS

1 post - 1 participant

Read full topic

by pastoriomarco on September 26, 2025 12:45 PM

September 25, 2025
Millie_bot is an open source robot with a big DREAM

Hey Open Robots Community!

I want to introduce Millie_bot and the Dream Cloud ecosystem I am building on Web3 under $DREAM / SOL.

Millie_bot is a 3D-printed, modular AI robot built entirely in ROS2 Jammy. The retail / commercial price will be $10-20K, with CAD files available online for remote building. Navigation runs on a Raspberry Pi, and a Flutter app runs the LLM, voice, and face of the robot.

I want to use the mobile robot to build innovative business strategies that leverage automation to work for local communities. Concepts include a robot drive-in restaurant called DREAM DINER, a fully automated general store called DREAM STORE, and a larger grocery store called DREAM MARKET. The revenue generated by these businesses will then go toward building affordable housing and funding UBI.

This is more than a robot project, but I am building everything myself. I am also live streaming everything on X.com/@nico_andretti so you can come and see for yourself. I already have communities that are invested in $DREAM COIN and want to see this project succeed.

If you want to join a project with a vision for supporting communities as automation replaces workers, this is it!

1 post - 1 participant

Read full topic

by Millie_bot on September 25, 2025 05:12 AM

September 24, 2025
PSA: Debian Bookworm Boost rosdep entries

This is to serve as a heads up to all Debian Bookworm users who rely on libboost-* rosdep entries. If you do not use Debian Bookworm, or don’t use libboost-* on Debian Bookworm, you can stop reading now.

The attached pull request adds entries that were missing from the libboost-* family of rosdep keys. As a side effect, it aligns all of the libboost versions to 1.74.0, which is the “default” on Debian Bookworm.

The following 4 packages will be “downgraded” from 1.81.0 to 1.74.0:

  • libboost-date-time
  • libboost-python
  • libboost-random
  • libboost-thread

Since Bookworm is currently a tier 3 platform, we aren’t providing binary packages for it, and very few packages in the core currently depend on libboost, so the PMC has determined that this is relatively low risk and has opted to proceed.

Let us know if you have any comments/concerns.

1 post - 1 participant

Read full topic

by mjcarroll on September 24, 2025 02:42 PM

September 22, 2025
ROS Meetup Bogotá Colombia - 7 Nov 2025

:loudspeaker: The second edition of ROS Meetup Bogotá is here!

The ROS community in Colombia gathers once again to share knowledge, connect academia and industry, and continue building the future of robotics in our country. :rocket:

When and where?
:spiral_calendar: Friday, November 7th, 2025
:stopwatch: 2:00 PM – 7:00 PM
:round_pushpin: Biblioteca Virgilio Barco, Bogotá
:office_building: On-site event with live streaming (link will be shared soon)

What to expect?

  • :microphone: Technical talks on ROS and ROS 2 with national and international experts.

  • :bar_chart: Poster and project fair, showcasing local innovations and research.

  • :handshake: Networking between academia, industry, and robotics enthusiasts.

  • :hot_beverage: Coffee break and informal discussions.

  • :wrapped_gift: Themed souvenirs and surprises.

:writing_hand: Register as an attendee or apply as a speaker here:
:backhand_index_pointing_right: Linktree – RAS Javeriana IEEE

:glowing_star: ROS Meetup Bogotá is a space to strengthen the community, foster collaborations, and accelerate the development of innovative robotics projects with ROS/ROS 2.

:handshake: Organized by:

  • IEEE RAS Javeriana St. Ch.

  • IEEE RAS Colombia

  • IEEE RAS Universidad de los Andes St. Ch.

  • IEEE RAS Universidad Escuela Tecnológica Instituto Técnico Central St. Ch.

  • IEEE RAS Universidad Distrital Francisco José de Caldas St. Ch.

  • Research group SinfonIA – Universidad de los Andes

We look forward to building the future of robotics in Colombia together! :robot:

1 post - 1 participant

Read full topic

by miguelgonrod on September 22, 2025 10:46 PM

ARIAC 2025 Registration Open - Industrial Robotics Competition Using ROS/Gazebo

Hi ROS Community,

The National Institute of Standards and Technology (NIST) has opened registration for the Agile Robotics for Industrial Automation Competition (ARIAC) 2025. This is an excellent opportunity for ROS developers to apply their skills to realistic industrial automation challenges.

What is ARIAC?

ARIAC is an annual simulation-based competition that tests robotic systems in dynamic manufacturing environments. The competition presents real-world scenarios where things go wrong - equipment malfunctions, part quality issues, and changing production priorities.

2025 Competition Scenario: EV Battery Production

The competition simulates an EV battery production factory.

Production Workflow:

  • Task 1: Inspection and Kit Building - Use LIDAR sensors to inspect battery cells for defects, test voltage levels, and assemble qualified cells into kits on AGV trays

  • Task 2: Module Construction - Take completed kits and construct full battery modules through precise assembly and welding operations

Technical Stack:

  • ROS 2 for system architecture and communication

  • Gazebo simulation environment

  • MoveIt for motion planning and robot control

  • C++/Python for control system development

Why Participate?

  • Practical ROS experience: Work with industrial-scale robotics applications

  • Real-world relevance: EV battery production is a rapidly growing manufacturing sector

  • Problem-solving: Address challenges that mirror actual manufacturing environments

  • Recognition: Prize money available for eligible teams (1st: $10,000, 2nd: $5,000, 3rd: $2,500) - check the website for eligibility requirements

  • Professional development: Experience with automated production systems

Who Should Participate?

  • ROS developers interested in manufacturing automation

  • Academic teams working on robotics research

  • Industry professionals developing automation solutions

  • Anyone wanting to test their ROS skills against realistic challenges

Links:

Timeline:

  • Registration: Open now

  • Smoke Test Submission Deadline: December 8th, 2025

  • Final Submission Deadline: January 2nd, 2026

  • Results announcement: February 2nd, 2026

Questions?

The NIST team is available to provide technical support through the GitHub issues page.

Good luck to all participating teams!

3 posts - 2 participants

Read full topic

by jaybrecht on September 22, 2025 09:00 PM


Powered by the awesome: Planet