If you are a ROS developer/user and you blog about it, ROS wants those contributions on this page! All you need for that to happen is:
have an RSS/Atom blog (no Twitter/Facebook/Google+ posts)
open a pull request on the planet.ros tracker indicating your name and your RSS/Atom feed URL. (You can just edit the file and click "Propose File Change" to open a pull request.)
tag your ROS-related posts with any of the following categories: "ROS", "R.O.S.", "ros", "r.o.s."
Warnings
For security reasons, HTML iframe, embed, object, and JavaScript elements will be stripped out. Only YouTube videos embedded via object or embed tags will be kept.
Guidelines
Planet ROS is one of the public faces of ROS and is read by users and potential contributors. The content remains the opinion of the bloggers but Planet ROS reserves the right to remove offensive posts.
Blogs should be related to ROS, but that does not mean they should be devoid of personal subjects and opinions: those are encouraged, since Planet ROS is a chance to know more about ROS developers.
Posts can be positive and promote ROS, or constructive and describe issues, but should not contain gratuitous flaming. We want to keep ROS welcoming :)
ROS covers a wide variety of people and cultures. Profanities, prejudice, lewd comments and content likely to offend are to be avoided. Do not make personal attacks or attacks against other projects on your blog.
Suggestions?
If you find any bug or have any suggestion, please file a bug on the planet.ros tracker.
I would like your recommendations on ROS2-compatible drones suitable for educational and research purposes. I’ve been through several options but haven’t found the ideal solution yet.
My Requirements:
ROS2 native support or well-maintained ROS2 integration
Onboard sensors capable of SLAM (3D LiDAR, RGBD camera, or stereo camera)
Ability to operate indoors without external positioning infrastructure
Budget: approximately $6,000 USD
What I’ve Tried/Considered:
I came across this helpful discussion: https://discourse.openrobotics.org/t/trying-to-find-pre-built-drones/44168, which recommends the Crazyflie platform. While Crazyflie is excellent for swarm research and basic control, it requires external infrastructure such as motion capture systems or marker-based localisation (e.g., Lighthouse or Loco Positioning), which isn’t practical for my use case.
Similarly, I’ve used DJI Tello drones, but they share the same limitation—reliance on external environmental setup for accurate localisation and mapping.
What I’m Currently Considering:
I’ve been looking at the ModalAI Starling 2 Max (https://www.modalai.com/products/starling-2-max?variant=48172375900484), which appears promising with its VOXL 2 flight computer, stereo cameras, and PX4/ROS2 support. However, I’d appreciate feedback from anyone who has hands-on experience with this platform, particularly regarding:
Ease of integration with ROS2
Reliability of onboard VIO/SLAM for indoor navigation
Suitability for student projects and coursework
Documentation quality and community support
Use Case:
The drones will be used for teaching autonomous navigation, path planning, and SLAM concepts to postgraduate students. Ideally, students should be able to develop and test algorithms in simulation (Gazebo/Webots/PyBullet) and deploy them on real hardware with minimal friction.
I’d greatly appreciate any recommendations, alternatives, or insights from those with experience in this area. If there are other platforms I should consider within this budget range, please do share.
I have created an awesome list of ROS2 packages on my GitHub. It covers a wide range of topics, such as motion planning, localization (SLAM algorithms), logging, monitoring, client libraries for different languages, useful tools for development, AI-based tools, etc.
It’s regularly updated with new interesting packages. I hope it will be useful for everyone in the community.
At ROSCon Spain 2025 we ran a hands-on workshop about ROS 2 testing as part of our work at Ekumen, covering everything from basic linters and unit tests to integration testing and CI. Several people asked if the materials would be shared publicly, so here they are in case they’re useful to others as well:
Everything is built around small C++ examples and simple exercises. Nothing fancy, just practical patterns we’ve found helpful when trying to make ROS 2 codebases more reliable and easier to maintain.
If you end up going through it or applying parts of it in your projects, we’re more than happy to get feedback, questions, or suggestions. Feel free to open issues or comment here in the thread.
Thanks to everyone who joined the workshop, and to the ROSConES organizers for a great event. I hope this can help more people working on testing in ROS 2.
Today, I am excited to introduce Genesys, a new framework I built, designed to make ROS 2 development faster, cleaner, and more intuitive for everyone.
We’ve all faced the boilerplate, complex build systems, and fragmented tooling that can slow down robotics projects. Genesys is our solution. It’s an opinionated framework that simplifies common workflows and provides a single, unified CLI (genesys) to manage your entire project lifecycle, from scaffolding to simulation.
What makes Genesys different?
Zero Boilerplate: Use elegant Python decorators (@node, @publisher, @subscriber, @service, @timer, etc.) and C++ macros (ROS_PUBLISHING_NODE, ROS_UNIVERSAL_NODE) to define your components without writing repetitive code; a rough sketch of the idea follows this list. The framework auto-generates your build and launch files for you.
Unified CLI: Say goodbye to juggling multiple commands. A single genesys entry point handles everything from "genesys new" for project setup to "genesys build" and "genesys run" for execution. The genesys build command also comes with a "--persist" flag that lets you build once and run in any terminal: it adds a command sourcing your workspace’s install/setup.bash file to your shell’s startup script. "genesys sim create" creates a new *_gazebo package in the sim/ directory, fully configured for a specific robot, and "genesys sim run" launches a Gazebo simulation from one of the *_gazebo packages.
100% ROS 2 Compatible: Genesys isn’t a replacement for ROS 2, it’s an enhancement. Every Genesys project is a valid ROS 2 project, meaning you can always fall back to the standard colcon and ros2 commands whenever you need to.
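To make the decorator idea concrete, here is a small self-contained sketch of the pattern Genesys describes, written directly against rclpy. This is not the Genesys API (its decorator signatures are not shown here); it only illustrates how decorators can strip the usual timer/publisher boilerplate out of a node class:

# Not the Genesys API: a plain-rclpy sketch of the decorator pattern described above.
import rclpy
from rclpy.node import Node
from std_msgs.msg import String


def timer(period_sec):
    # Mark a method as a timer callback with the given period.
    def mark(func):
        func._timer_period = period_sec
        return func
    return mark


class DecoratedNode(Node):
    # Base class that wires up any @timer-marked methods at construction time.
    def __init__(self, name):
        super().__init__(name)
        for attr in dir(self):
            if attr.startswith("_"):
                continue
            member = getattr(self, attr, None)
            period = getattr(member, "_timer_period", None)
            if callable(member) and period is not None:
                self.create_timer(period, member)


class Talker(DecoratedNode):
    def __init__(self):
        super().__init__("talker")
        self.pub = self.create_publisher(String, "chatter", 10)

    @timer(period_sec=1.0)
    def tick(self):
        msg = String()
        msg.data = "hello"
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(Talker())
    rclpy.shutdown()


if __name__ == "__main__":
    main()

A framework like Genesys presumably goes further by also generating the package, build, and launch files around such declarations, as described above.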
Genesys is about getting you back to what you love: building amazing robots. We’re on a mission to create a “happy path” for robotics development, and this is just the beginning.
Ready to streamline your workflow? Learn more about Genesys and get started today!
I recently got the following warning when loading a node inside an rclcpp_components component manager.
[rcl.logging_rosout]: Publisher already registered for node name: ‘my_manager’. If this is due to multiple nodes with the same name then all logs for the logger named ‘my_manager’ will go out over the existing publisher. As soon as any node with that name is destructed it will unregister the publisher, preventing any further logs for that name from being published on the rosout topic.
It also showed two nodes named “my_manager” in the node list, each with its own parameter-related services.
After digging, it appears that MoveIt’s RobotModelLoader calls moveit::getLogger, which itself creates a new node. However, because the component manager was started with -r __node:=foobar, all subsequent nodes created inside the same process would inherit the same name.
The same problem can appear without rclcpp_components, as in this example:
#include <moveit/robot_model_loader/robot_model_loader.hpp>
#include <rclcpp/rclcpp.hpp>
// Try to run it with and without --ros-args -r __node:=foobar
int main(int argc, char** argv)
{
rclcpp::init(argc, argv);
rclcpp::Node::SharedPtr myNode = rclcpp::Node::make_shared("my_node");
myNode->declare_parameter("my_param", 42.0);
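// RobotModelLoader internally calls moveit::getLogger(), which creates an additional node
// in this process; when run with --ros-args -r __node:=foobar, both nodes end up named
// "foobar", triggering the rosout "Publisher already registered" warning shown above.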
robot_model_loader::RobotModelLoader rml {myNode, "robot_description", false};
rclcpp::spin(myNode);
rclcpp::shutdown();
}
Now, this didn’t cause any problem as far as I could tell, but I don’t like warnings and I really don’t like multiple nodes sharing the same name.
I see multiple solutions:
Rename the node using its original name, e.g. -r my_node:__node:=foobar. This requires knowing the original node name, and it doesn’t work with launch_ros, namely ComposableNodeContainer, which crafts the renaming argument itself and needs to know the manager’s name to load the components inside it. Maybe launch_ros could be modified accordingly.
Ask MoveIt not to create a new node, or at least accept a custom logger argument for the RobotModelLoader
Modify rcl to only apply the __node renaming to the first created node.
Ignore the warning, maybe disable the parameters services from the node created by moveit::getLogger.
What do you think? Is there a guideline on how to use __node:= when creating multiple nodes in the same process?
(We have been working on this with the CNCF for a long time.) We recently added a new feature to KubeEdge called “Resource Upgrade Control at Edge”. I understand that this applies not only to ROS 2 but also to other IoT workloads controlled by Kubernetes and KubeEdge; still, it is one of the differentiating features developed for ROS 2 application workloads.
This feature lets you control when and how resources are upgraded on each edge node, giving full flexibility over deployment timing within a Kubernetes cluster. For edge AI, robots, drones, and EVs, you can now coordinate updates safely and precisely, with no more unintentional service interruptions or synchronization issues.
Each edge node can control its upgrade timing at the edge with Kubernetes, even under a rolling upgrade policy!
Please take a look at how this feature works, before and after:
We also work on other projects that bridge the gap between cloud and edge and are closely related to robots and robotics applications.
I will set aside some time to summarize our activity and share an update covering all of this development later on.
These packages provide features such as hardware acceleration, zero copy, and AI inference; most of them are supported on Ubuntu and can simply be installed with “apt install xxx“.
I’m working on SeekSense AI, a training-free semantic search layer for indoor mobile robots – basically letting robots handle “find-by-name” tasks (e.g. “find the missing trolley in aisle 3”, “locate pallet 18B”) on top of ROS2/Nav without per-site detectors or tons of waypoint scripts.
I’ve put together a quick 3–4 minute survey for people who deploy or plan to deploy mobile robots in warehouses, industrial sites, campuses or labs. It focuses on pain points like:
handling “find this asset/location” requests today,
retraining / retuning perception per site,
dealing with layout changes and manual recovery runs.
At the end there’s an optional field if you’d like to be considered for early alpha testing later on – no obligation, just permission to reach out when there’s something concrete.
If you’re working with AMRs / AGVs / research platforms indoors, your input would really help me shape this properly
A ROS 2 compatible Java library for publish-subscribe. I dare say it’s the easiest way possible to talk to other ROS 2 nodes; no local ROS 2 installation is required. All you need to do is depend on the library in your Java project (works best with Maven or Gradle).
Works on: Windows, Linux (including NVIDIA Jetson & RPi), and macOS! Android support is planned.
jros2 loosely follows rclcpp in API design and usage. It includes the following features:
Publish and subscribe to ROS 2 topics
Supports custom message types
Generate Java classes from ROS 2 .msg files
Fast-DDS backend
Minimal and fast implementation
Fully thread-safe
Async and allocation-free API
Full QoS configuration
(soon) ROS 2 services
(soon) ROS 2 actions
(soon) ROS 2 parameters
jros2 pairs well with javacpp-presets packages such as opencv, librealsense2, and cuda! Check out javacpp-presets; they make developing robotics & computer vision software very easy in Java!
This project was developed to fill a requirement at IHMC Robotics, where we write a lot of our robotics software in Java. If there’s something you’d like to see in jros2, please open an issue on GitHub!
From 27–30 October 2025, Singapore became the beating heart of the global ROS ecosystem.
Over three days, **ROSCon 2025 (27–29 Oct)** convened more than 1,000 participants from 52 countries: maintainers, developers, startups, MNCs, public agencies, and researchers, united by a shared belief in open source as the fastest path to real-world robotics at scale.
This was more than a conference week. It was a signal: open, interoperable robotics, anchored in Singapore and built with the world, is here to stay.
Where Code Meets Collaboration: Reflections from ROSCon 2025 Singapore
Hosted in Singapore for the first time, ROSCon 2025 brought the global ROS community to Marina Bay with three intense days of technical talks, tutorials, demos and hallway architecture debates.
The event was honoured by the presence of Prof Tan Chor Chuan, Chairman of A*STAR
In his remarks, Prof Tan highlighted how open-source innovation, collaborative standards, and talent development are becoming the cornerstones of Singapore’s advanced manufacturing and robotics strategy. He commended the Open Source Robotics Foundation (OSRF) and A*STAR’s Advanced Remanufacturing and Technology Centre (ARTC) for their leadership in cultivating an ecosystem that bridges research and industry, noting that:
“Open-source robotics represents not only technological advancement but also a new model of global cooperation. By enabling interoperability and collective innovation, we can accelerate deployment across sectors — from manufacturing to healthcare — while nurturing the next generation of deep-tech talent in Singapore.”
Prof Tan’s message set the tone for the conference — underscoring Singapore’s commitment to being a neutral and collaborative hub for open-source robotics, embodied AI, and digital transformation.
Beyond the energy on stage and in the expo hall, several milestones framed the week:
1. OSRF–ARTC Collaboration on Open-RMF
At ROSCon, the Open Source Robotics Foundation (OSRF) and A*STAR’s Advanced Remanufacturing and Technology Centre (ARTC) announced a strategic collaboration to:
Co-develop best practices, guidelines and testing plans for Open-RMF as a foundation for global robot interoperability.
Use Singapore’s new national sandbox at BCA Braddell Campus as a reference site for validation and certification of RMF-based deployments.
Strengthen community engagement so that Open-RMF continues to evolve as a truly open, production-grade standard.
This partnership cements Singapore’s role not just as a user of open-source robotics, but as a shaper of global interoperability standards.
2. National Standards & Testbeds for Interoperability
Announcements around SS 713 (data exchange between robots, lifts and automated doorways) and TR 130 (interoperability between robots and central command systems) showcased how regulation, infrastructure and open-source can move in lockstep to make multi-vendor robot fleets safe and scalable.
3. Singapore as Neutral, Open Hub
With delegates and contributors from across the US, Europe, China, India, and the broader Asia Pacific, ROSCon 2025 reinforced Singapore’s unique role as:
A neutral ground for collaboration amid a more fragmented geopolitical landscape.
A trusted environment to host shared infrastructure, reference implementations and standards for open-source robotics, embodied AI and Open-RMF-driven ecosystems.
As the curtains close on ROSCon 2025 in Singapore, we are deeply honoured and inspired to have hosted this extraordinary gathering of over a thousand innovators, engineers, and visionaries from across 52 countries. The energy, ideas, and partnerships sparked over these few days reaffirm the strength of the open-source robotics community — one that thrives on collaboration, inclusivity, and shared purpose.
At A*STAR’s Advanced Remanufacturing and Technology Centre (ARTC) and the ROS-Industrial Consortium Asia Pacific, we are excited to continue nurturing these collaborations, strengthening our ties with OSRF and the global ROS community, advancing Open-RMF, and building pathways that connect research to real-world adoption.
As we look ahead, we can’t wait to see how the community will come together again for ROSCon 2026 in Toronto — where new ideas will take flight, new contributors will emerge, and the open-source movement will reach even greater heights.
Hi everyone, I’ve created an AI coding agent specialized for ROS. I got tired of the current LLMs being useless/hallucinating and decided to train something that actually understands ROS conventions and workspaces. You can find it here at www.contouragent.com, I’d love your feedback.
Scan-N-Plan technologies provide tools for real-time robot trajectory planning based on 3D scan data, addressing the limitations of traditional industrial robot programming methods like teach-pendant programming or offline simulation. This approach is ideal for applications where:
High part variability makes manual programming impractical
CAD models are unavailable
Flexible or deformable parts prevent pre-programming
Part-to-part variability cannot be handled with static programming
Flexible or no fixturing is required
The ROS-Industrial Consortium has been advancing tools to support the development of innovative end-user applications and has made them accessible for broader use. The scan_n_plan_workshop offers a ROS 2-based software framework for perception-driven surface processing, providing all the foundational elements needed to understand and implement Scan-N-Plan solutions.
Recently, an updated documentation page was published to serve as a comprehensive resource for developers and learners interested in Scan-N-Plan or in using ROS 2 to build industrial applications. It outlines what is included, how to get started, and how ROS 2 can be leveraged effectively.
Key features of the documentation include:
A detailed architecture diagram
Step-by-step instructions to get started
Customization guidance using behavior tree plugins
Example deployments for reference
We’re excited to see how the community adopts and engages with these resources. If you have any questions or requests, don’t hesitate to reach out!
Hi all - wanted to share some work we’ve been doing internally to help with static recovery and visualization of our ROS graph. We did this by hijacking some other tools we’ve been using to standardize our package structure and simplify our launch files.
We ended up creating and open sourcing three packages:
cake is a concept that started as an attempt to simplify the boilerplate required to set up a C++ node by using a more functional approach to node initialization. For the purposes of static graph analysis, we extended it to include a declarative interface file (publishers, subscribers, etc.) which is consumed at build time to generate the ROS code required for these interfaces. A lot of inspiration was taken from PickNik Robotics' generate_parameter_library (in fact, we wrapped this package and included parameters in the node interface file). If you follow the suggested folder structure, cake also provides an automatic CMake macro, which uses ament_cmake_auto under the hood.
clingwrap is yet another Python launch wrapper. Arguments about reinventing the launch-wrapper wheel aside, it was convenient for us as we were already using it in all our launch files, so it provided a good way to instrument them to statically extract node launch details. clingwrap provides a LaunchBuilder object, which is a subclass of LaunchDescription. The idea is that LaunchBuilder is a mutable object which tracks all launch actions as it gets mutated, meaning you just have to return it at the end of generate_launch_description. This lets us add extra tracking logic inside the LaunchBuilder class and expose a get_static_information method on it: the user calls generate_launch_description and then get_static_information on the resulting object, which returns a dataclass of node information such as package and executable name, remappings, etc. (see the sketch below). We explored parsing the underlying actions that come out of the base launch-file system, but this got complicated quickly (especially recovering composable node information!), so we fell back to this simpler solution.
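As a rough sketch of that flow (the clingwrap import path is an assumption, and add_action is simply the method LaunchBuilder inherits from LaunchDescription; clingwrap may expose richer helpers not shown here):

# Sketch only: LaunchBuilder is described above as a LaunchDescription subclass,
# so the inherited add_action() is used here; the import path is assumed.
from clingwrap import LaunchBuilder
from launch_ros.actions import Node


def generate_launch_description():
    builder = LaunchBuilder()
    builder.add_action(
        Node(
            package="demo_nodes_cpp",
            executable="talker",
            name="my_talker",
            remappings=[("chatter", "conversation")],
        )
    )
    return builder


# A static-analysis tool (this is what breadcrumb builds on) can then introspect
# the same object without executing the launch:
#   info = generate_launch_description().get_static_information()
#   # -> node information such as package, executable, name, remappings, ...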
breadcrumb is a CLI tool that uses the static interfaces from cake and the launch information from clingwrap to generate the final runtime ROS graph, based on a provided launch file (without executing a ROS launch). It then spits out the graph as a JSON file or a Graphviz dot file.
All of these packages are still very fresh - we are rolling them out in our codebase currently, and expect to extend them where we find extra use cases / corner cases.
Whilst they are somewhat specific to our current system (i.e. you can’t use breadcrumb without rewriting all your launchfiles with clingwrap) I thought it was worth sharing what we’ve come up with for moving our ROS codebase towards being more declarative and statically analyzable.
(the statically generated ROS graph diagram from the breadcrumb example project)
NVIDIA Isaac ROS 4.0, an open-source platform of accelerated packages and reference applications for the ROS 2 ecosystem, is now generally available.
With support for Jetpack 7.0 and Isaac Sim 5.1, you can now unlock the power of Jetson AGX Thor with your Isaac ROS applications. This release includes a new Isaac for Manipulation reference application for deploying learned policies with motion planning for a gear insertion task. The new multi-object pick-and-place workflow using behavior tree orchestration showcases new packages for FoundationStereo and GroundingDINO. Finally, improvements to FoundationPose, NITROS performance, and visual mapping and localization, along with new Segment Anything 2 support, round out Isaac ROS with the power of Thor.
We would like to share the announcement for the IEEE MRS Summer School 2026, taking place July 29 – August 4, 2026 in Prague.
The event is open to anyone working in multi-robot systems, autonomous UAV/UGV control, distributed coordination, perception, planning, or ROS-based robotics. Over the years the summer school has welcomed more than 1000 participants from academic labs and industry teams worldwide.
The main goal is to bring together people working on similar MRS challenges and create space for collaboration, exchange of ideas, and hands-on experimentation.
What the program includes:
practical sessions with real multi-robot platforms
talks from leading researchers in MRS, autonomy, and swarm robotics
team assignments focusing on coordination, planning, and deployment
a free weekend for group trips around Prague to encourage networking and community building
Registration:
Early registration fees apply until December 31.
If anyone in your team needs a few extra days, the organizers can extend the reduced fee individually.
If you or your colleagues are working with multi-robot systems, this is a solid opportunity to join the global community, work with real hardware, and connect with people solving similar problems.
I’m running into a bit of a weird problem. Maybe it’s more of an observation. There are times when I struggle to find the documentation for ROS code on Google.
Take the following examples:
tf2::BufferCore::lookupVelocity: The search query I came up with on Google, “ros tf2 lookupvelocity”, returns no relevant results on the first page. “tf2 kilted lookupVelocity” also returns nothing. “tf2 buffercore lookupVelocity” lists the documentation from Jade and Foxy as the first two options, but neither version of those documentation pages has lookupVelocity. Even a query like tf2 “lookupvelocity” does not yield any results. From a git blame, lookupVelocity was
Launch Python documentation: “ros2 launch python includelaunchdescription” yields only examples and Stack Exchange questions, and no actual API documentation.
I noticed very clearly that Google never gives me ROS source code, and very often serves outdated API documentation pages. This is really frustrating, as it takes longer than I expect to get answers about very ordinary functions.
I’m really not sure what’s causing this. Is it just bad SEO? Is no one linking to ROS2 docs on the web, so Google doesn’t prioritize them? Am I just bad at googling? I’m curious if other people have noticed this. It’s making me feel a little crazy.
This is a repost from openrobotics.zulipchat.com that I made earlier, it was suggested I post here, so here goes…
So, over the past couple days we’ve been working on getting Depth Anything 3 (DA3 - the new monocular depth estimation model from ByteDance) running with ROS2. For those unfamiliar, Depth Anything 3 is basically a neural network that can estimate depth from a single camera image - no stereo rig or LiDAR needed. It’s pretty impressive compared to older methods like MiDaS.
Our setup:
PyTorch 2.8.0 (Jetson-optimized from nvidia-ai-lab)
Depth Anything 3 SMALL model (25M parameters)
Standard v4l2_camera for USB input
Current Performance (This is Where We Need Help)
Here’s what we’re seeing:
Inference Performance:
FPS: 6.35 (way slower than we hoped)
Inference time: 153ms per frame
GPU utilization: 35-69%
RAM usage: ~6 GB (out of 64 GB available)
Is PyTorch the problem? We’re running standard PyTorch with CUDA. Would TensorRT conversion give us a significant speedup? Has anyone done DA3 → TensorRT on Jetson?
Memory bandwidth? Could we be hitting memory bandwidth limits moving tensors around?
Is the model just too big for real-time? The SMALL model is 25M params. Maybe we need to quantize to FP16 or INT8?
FP16 precision - The Ampere GPU supports FP16 tensor cores. Depth estimation might not need FP32 precision.
Optimize the preprocessing - Right now we’re doing image normalization and resizing in Python/PyTorch. Could we push this to GPU kernels?
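To make the last two ideas concrete, here is a minimal sketch of FP16 inference with GPU-side preprocessing in PyTorch. The 518x518 input size and the ImageNet normalization constants are assumptions; substitute whatever the DA3 checkpoint actually expects, and pass in the loaded model:

# Minimal sketch: FP16 weights plus preprocessing done entirely on the GPU.
import numpy as np
import torch
import torch.nn.functional as F

device = torch.device("cuda")

# Normalization constants kept on the GPU so preprocessing never round-trips
# through the CPU (values are the usual ImageNet stats; an assumption for DA3).
MEAN = torch.tensor([0.485, 0.456, 0.406], device=device).view(1, 3, 1, 1).half()
STD = torch.tensor([0.229, 0.224, 0.225], device=device).view(1, 3, 1, 1).half()


def to_fp16(model):
    # Cast weights to FP16 and keep them resident on the GPU (Ampere has FP16 tensor cores).
    return model.to(device).eval().half()


@torch.inference_mode()
def infer(model, bgr_u8: np.ndarray):
    # bgr_u8: HxWx3 uint8 image straight from the camera driver.
    img = torch.from_numpy(bgr_u8).to(device, non_blocking=True)
    img = img.flip(-1)                                   # BGR -> RGB
    img = img.permute(2, 0, 1).unsqueeze(0).half() / 255.0
    # 518x518 is an assumed input size; use the resolution the checkpoint expects.
    img = F.interpolate(img, size=(518, 518), mode="bilinear", align_corners=False)
    img = (img - MEAN) / STD
    return model(img)

Whether this alone closes the gap to real time is unclear; TensorRT or INT8 quantization may still be needed on top.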
Has anyone done any of this successfully? Especially interested if anyone’s gotten DA3 or similar transformers running fast on Jetson.
The paper claims real-time performance but they’re probably testing on desktop GPUs. Getting this fast on embedded hardware is the challenge.
Still, we got it working, which is cool, but 6 FPS is pretty far from real-time for most robotics applications. We’re probably doing something obviously wrong or inefficient: this is our first attempt at deploying a transformer model on Jetson.
I’m a robotics engineer turned product builder. After years in R&D and recently interviewing around 30 robotics teams, I noticed a pattern that honestly surprised me:
Even teams with mature deployments are still relying on fragile, “temporary” setups for remote debugging.
You may recognize some of these:
The Classics: frp + rosbridge, usually fine… until the moment you really need it.
The Painful: SSH X11 forwarding for RViz (enjoy those multi-second frame drops).
The Network Hell: Strict customer firewalls, unpredictable industrial WiFi, or 4G/5G uplinks that work only when the robot is facing north on a Thursday.
The Modern Mesh: Tailscale / ZeroTier: great connectivity, but no robotics-specific QoS, telemetry semantics, or tooling.
Enterprise platforms (Formant, Freedom, etc.) are powerful but often expensive or too heavy for simple debugging needs. Meanwhile, open-source solutions feel fragmented.
My Hypothesis: We don’t need another heavy “platform.” We need a simple, reliable, UNIX-style pipe that just works.
I’m exploring a “stupidly simple” API focused purely on transport (low latency, resilient under packet loss). But before I commit to the architecture, I want to validate my assumptions with you.
What’s in it for you?
The Data: I’ll compile the responses into an open “2025 ROS Remote Access Landscape Report” and share it here.
The Access: I’m looking for beta testers — 10 random participants will receive lifetime free access to the managed API tier at launch.
PS: After you fill out the form, drop a quick reply below (e.g., “Done” or your biggest pain point). It helps keep this thread visible so we can get more community data!
Hi, I’ve been working on building our software stack using only release mode and not building any packages that are only test_depend. The problem I’m having is that colcon scoops up all the dependencies no matter how they’re marked in the package.xml. I do not use rosdep, as I don’t necessarily trust that every dev out there chose wisely when writing their package.xml, so I’m trying to do this in a more manual way. I don’t believe I should have to build something like ament_cmake_pep257 if I have no plan to build any tests. I also shouldn’t be installing *-dev Debian packages in release builds. E.g. a package I have depends on libglib2.0-dev for building, but only needs libglib2.0-0 at runtime, so the process I want is to build the package in release mode, then create a new image with only the release dependencies, and copy the install/ space over to that new image. Colcon, though, won’t let me separate out the packages that I don’t want to build, even though they are only <test_depend>. Does anyone else do this or have thoughts?
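For concreteness, the split I’m describing maps onto package.xml tags roughly like this (the rosdep key names are just illustrative; whether the tooling respects the split without extra filtering is exactly the open question):

<!-- Illustrative only: the dependency split described above, as package.xml tags. -->
<build_depend>libglib2.0-dev</build_depend>    <!-- headers, needed only at compile time -->
<exec_depend>libglib2.0-0</exec_depend>        <!-- shared library, needed at runtime -->
<test_depend>ament_cmake_pep257</test_depend>  <!-- needed only when building/running tests -->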