If you are a ROS developer/user and you blog about it, ROS wants those contributions on this page! All you need for that to happen is:
have an RSS/Atom blog (no Twitter/Facebook/Google+ posts)
open a pull request on planet.ros tracker indicating your name and your RSS feed/ATOM url. (You can just edit the file and click "Propose File Change" to open a pull request.)
tag your ROS-related posts with any of the following categories: "ROS", "R.O.S.", "ros", "r.o.s."
Warnings
For security reasons, HTML iframe, embed, object, and JavaScript content will be stripped out. Only YouTube videos in object and embed tags will be kept.
Guidelines
Planet ROS is one of the public faces of ROS and is read by users and potential contributors. The content remains the opinion of the bloggers but Planet ROS reserves the right to remove offensive posts.
Blogs should be related to ROS, but that does not mean they should be devoid of personal subjects and opinions: those are encouraged, since Planet ROS is a chance to learn more about ROS developers.
Posts can be positive and promote ROS, or constructive and describe issues but should not contain useless flaming opinions. We want to keep ROS welcoming :)
ROS covers a wide variety of people and cultures. Profanities, prejudice, lewd comments and content likely to offend are to be avoided. Do not make personal attacks or attacks against other projects on your blog.
Suggestions?
If you find any bug or have any suggestion, please file a bug on the planet.ros tracker.
It was a big week for ROS sensors. Our friends over at Fraunhofer released an updated ROS 2 driver for GenICam-based (GigEVision and USB3Vision) cameras. My intern, @Akashleena, is working on a summer project to provide guidance to sensor vendors on how to best support the ROS community. If you have a second please weigh in on what makes a good sensor package in this thread.
We’ve got three ROS and Gazebo events happening next week!
Please take a moment to help update the ROS documentation. If you learned something new this week please consider submitting a how-to guide to docs.ros.org.
I am Akashleena, an intern at Intrinsic working with @katherine_scott. This summer I am writing a ROS whitepaper that informs sensor vendors on the best practices for developing a ROS 2 package. We want to better understand all of the aspects of writing a good sensor SDK and ROS 2 package. Our goal is to help sensor vendors design their SDK and ROS 2 package such that it is performant, easy to use, and readily available as part of a ROS distro.
We’ve got a few questions that we would like to pose to the ROS community, but first we wanted to share some of our recent work. As part of this project we analyzed many of the sensor packages that are available for ROS Humble. Since searching ROS Index still isn’t great, we’ve compiled a list of all the sensor packages available as binaries for ROS Humble using ROS Distro. Our hope is that you can use this list as a starting point for selecting a sensor. To give you an idea of how we’re thinking about this problem, we also included a table that shows some of the criteria that we used to evaluate a couple of sensor packages.
Example notes from that evaluation table:
README has some diagnostics guidance but needs improvement (☒ needs work)
Has good lifecycle documentation but could do better on diagnostics
Sensor Packages Available for ROS 2 Humble
For your reference, here’s a list of ROS 2 drivers that are currently available as binary packages in Humble. Please note that this table was procedurally generated; if the package.xml did not include a description, then no description is listed below. We’re providing this list as an easy reference for hardware packages that can be quickly and easily installed as binary ROS packages.
List of Sensor Packages
Depth Cameras
depthai-core
Description: DepthAI core is a C++ library which comes with firmware and an API to interact with OAK Platform
This package combines the Roboception convenience layer for images with the
GenICam reference implementation and a GigE Vision transport layer. It is a
self contained package that permits configuration and image streaming of
GenICam / GigE Vision 2.0 compatible cameras like the Roboception rc_visard.
This package also provides some tools that can be called from the command line
for discovering cameras, changing their configuration and streaming images.
Although the tools are meant to be useful when working in a shell or in a
script, their main purpose is to serve as examples of how to use the API for
reading and setting parameters, and for streaming and synchronizing images.
See LICENSE.md for licensing terms of the different parts.
Description: ROS package for the LDS (HLS-LFCD2).
The LDS (Laser Distance Sensor) sends its data to the host for simultaneous localization and mapping (SLAM); obstacle-detection data can be sent to the host at the same time. HLDS (Hitachi-LG Data Storage) develops sensing technology for moving platforms such as robot vacuum cleaners, home robots, and robotic lawn mowers.
Description: Driver module between Aldebaran’s NAOqiOS and ROS 2. It publishes all sensor and actuator data as well as basic diagnostics for battery and temperature. It also subscribes to the RViz simple goal and to cmd_vel for teleop.
We would love to hear from the community what makes a great sensor package! We’ve come up with a list of questions and we would love it if the ROS community provided their feedback on these topics:
What do you look for in the architecture of a ROS package for a sensor? What makes for a “well-written” sensor package?
Sensor calibration and cross-calibration are still a bit of a “black art” in ROS. What tools are you using for sensor calibration? What features would you like to see in sensor packages to improve calibration and cross-calibration?
One thing we’ve noticed is that Gazebo support for sensors is often missing. Most sensors lack URDF or STL files that define the sensor’s hardware geometry. Moreover, most vendors don’t provide a Gazebo sensor plugin. What steps do you take to integrate simulated sensors into your robot model in Gazebo? What would you like to see sensor vendors provide?
How important is it for sensor vendors to provide support for Tier 2 and Tier 3 supported operating systems? Is anyone out there building robots on top of operating systems like RHEL and Debian? If you are using these host operating systems what would you like to see?
Are there any specific things you would like sensor vendors to provide in their documentation or tests? Are there any sensor packages that you particularly like?
ROS 1 supported nodelets, which were included in the sensors’ launch files and allowed multiple nodes to share the same process. How are you achieving similar intra-process communication in ROS 2? What would your recommendations be for similar features in ROS 2?
In ROS 1, the SubscriberStatusCallback was a key element of the image pipeline, coordinating the sequence of nodes from debayering to rectification. ROS 2 lacks this callback, requiring frequent polling to find active subscribers. How are you achieving similar results in ROS 2?
How are you using lifecycle nodes in ROS 2 to monitor the sensor states?
IMU stands for inertial measurement unit, which is composed of three single-axis accelerometers and three single-axis gyroscopes. The accelerometer detects the acceleration signal of the object in the carrier coordinate system in three independent axes, while the gyroscope detects the angular velocity signal of the carrier relative to the navigation coordinate system. After processing these signals, the attitude of the object can be calculated.
It is worth noting that an IMU provides relative positioning information: it measures the path traveled relative to a starting point, so on its own it cannot tell you your absolute location. Therefore, it is often used together with GPS. Where the GPS signal is weak, the IMU can take over, allowing the car to keep estimating its position so it does not get lost.
In fact, the mobile phones we use every day, the cars and airplanes we take, and even missiles and spacecraft all use IMU. However, the cost and accuracy vary.
According to different scenarios, IMU has different requirements for accuracy. High accuracy also means high cost.
Low-precision IMU: used in ordinary consumer electronic products. This low-precision IMU is very cheap and is commonly used in mobile phones and sports watches. It is often used to record the number of steps.
Medium-precision IMU: used in unmanned driving. The price ranges from a few hundred to tens of thousands of yuan, depending on the positioning accuracy requirements of the unmanned vehicle.
High-precision IMU: used in missiles or space shuttles. Take missiles as an example. From the launch of the missile to the hitting of the target, the aerospace-grade IMU can achieve extremely high-precision calculations, and the error can even be less than one meter.
In addition to accuracy and cost, an IMU has two very important characteristics. The first is a high update rate: it can run at more than 100 Hz. The second is good short-term accuracy: over short periods the accumulated error is small.
IMU message under ROS
The IMU message under ROS looks like:
std_msgs/Header header
  uint32 seq
  time stamp                                // timestamp
  string frame_id
geometry_msgs/Quaternion orientation        // orientation
  float64 x
  float64 y
  float64 z
  float64 w
float64[9] orientation_covariance           // orientation covariance
geometry_msgs/Vector3 angular_velocity      // angular velocity
  float64 x
  float64 y
  float64 z
float64[9] angular_velocity_covariance      // angular velocity covariance
geometry_msgs/Vector3 linear_acceleration   // linear acceleration
  float64 x
  float64 y
  float64 z
float64[9] linear_acceleration_covariance   // linear acceleration covariance
This message type provides IMU data, including orientation, angular velocity, and linear acceleration. Here’s a detailed explanation of each part:
Header
seq: Sequence number of the message.
stamp: Timestamp indicating when the message was generated.
frame_id: Identifier of the reference coordinate frame.
Orientation
x, y, z, w: Components of the quaternion representing the IMU’s current orientation.
orientation_covariance: A 9-element array representing the covariance matrix of the orientation, indicating the uncertainty of the orientation measurement.
Angular Velocity
x, y, z: Components of the angular velocity, corresponding to the three axes.
angular_velocity_covariance: A 9-element array representing the covariance matrix of the angular velocity, indicating the uncertainty of the angular velocity measurement.
Linear Acceleration
x, y, z: Components of the linear acceleration, corresponding to the three axes.
linear_acceleration_covariance: A 9-element array representing the covariance matrix of the linear acceleration, indicating the uncertainty of the linear acceleration measurement.
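To make the orientation field concrete, here is a minimal sketch (plain Python, no ROS required) of extracting the yaw angle from the orientation quaternion of an IMU message; the quaternion values used below are hypothetical:

```python
import math

def quaternion_to_yaw(x, y, z, w):
    """Extract yaw (rotation about Z) from a unit quaternion,
    as stored in the orientation field of an IMU message."""
    siny_cosp = 2.0 * (w * z + x * y)
    cosy_cosp = 1.0 - 2.0 * (y * y + z * z)
    return math.atan2(siny_cosp, cosy_cosp)

# A quaternion representing a 90-degree rotation about Z:
yaw = quaternion_to_yaw(0.0, 0.0, math.sqrt(0.5), math.sqrt(0.5))
print(round(math.degrees(yaw), 1))  # 90.0
```

The same conversion is what ROS tools such as tf perform internally when turning the quaternion into a heading angle.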
How to record IMU data in Limo
Open a new terminal. Run the following command:
roslaunch limo_bringup limo_start.launch
Record IMU data on the topics of interest (rosbag records the specific topic names you list):

rosbag record -O imu.bag /topic1_name /topic2_name

Press Ctrl+C to end the recording. The file is automatically saved, with the name imu.bag, in the directory where the command was run.
Play back the data at 0.5 times the speed:
rosbag play -r 0.5 imu.bag
The terminal will display the playback progress.
Visualize the data by rqt_plot
Replay the recorded IMU data, then open a new terminal and run:
rqt_plot
In the rqt_plot interface, choose an IMU topic and click ‘+’. You can then see the three angular velocity components of the IMU; use the controls on the right side of the interface to adjust the view, and you will see the IMU data changing. Close the LIMO driver when you are done.
Quiz
● Use ROS and rviz to visualize IMU sensor data.
Requirements:
Subscribe to the IMU data topic and parse the data.
Publish the parsed data to rviz.
Use the IMU plugin in rviz to visualize the data in the form of a 3D model.
● Tips:
You need to use an IMU data parsing library, such as the imu_filter_madgwick library that comes with ROS.
You can refer to the IMU display tutorial on the ROS Wiki.
● Use IMU sensors to implement robot posture control.
Requirements:
Subscribe to the IMU data topic and parse the data.
Calculate the robot’s posture based on the IMU data.
Implement robot posture control, such as keeping the robot stable, adjusting the robot’s posture, etc.
● Tips:
You can use the robot control library in ROS, such as ROS Control.
About Limo
Limo is a smart educational robot published by AgileX Robotics. For more details, please visit: https://global.agilex.ai/
If you are interested in Limo or have technical questions about it, feel free to join the AgileX Robotics community. Let’s talk about it!
In the field of process modeling, workflow diagrams (a.k.a. flowcharts) are an intuitive way to describe how a process evolves from its initial state to being complete. In the fields of discrete event systems and distributed systems, there is a well-studied state machine formulation of workflow diagrams called workflow nets which are a specialization of Petri nets. Workflow nets are particularly good at representing processes with a distinct beginning, distinct finish, and which may involve asynchronous events, cycles, simultaneous parallel actions, and synchronization between multiple independent agents.
While behavior trees are a popular way to express finite state machines in robotics, their tree structure encumbers them with limitations that make it difficult, and sometimes impossible, to express workflows that involve arbitrary process branching, arbitrary synchronization, or arbitrary cycles. While behavior trees can do all of these operations to a limited extent, each operation must always somehow fit within a rigid hierarchy. In contrast, workflows do not have that limitation and can support any structure of branching, syncing, and cycling. Put simply, every behavior tree can be converted into an equivalent workflow, but not every workflow can be represented as a behavior tree.
The Open-RMF project has a long history of developing systems where one program needs to juggle attention for many agents, and each agent has multiple sub-processes that need to be run simultaneously and synchronized safely. Developing and maintaining these systems has historically been a very taxing burden for the project since we couldn’t find a framework for process modeling and control that met all our requirements around flexibility, performance, safety, expressiveness, and openness.
To build the “next generation” of the Open-RMF project, we are rolling out bevy_impulse which implements arbitrary workflow building and execution on top of the Bevy game engine. This will serve as a crucial foundation for many aspects of next gen Open-RMF:
Synchronization between robot state updates, multi-agent planners, and robot platform APIs
Defining and executing complex tasks, especially multi-agent tasks
Defining and executing modular behaviors / skills for robots
Synchronization between multiple devices, e.g. defining custom workflows for how an AMR interacts with a door or elevator
At this session of the Interoperability Special Interest group, we will discuss the current capabilities of bevy_impulse as we get ready to fire off its first release. We’ll go over the key concepts that are driving its design, discuss its API, and show some usage examples.
Please come and join us for this coming meeting, from 2024-07-29 17:00 UTC to 18:00 UTC, where we will discuss robotics news and any progress that group members have made towards improving cloud robotics for the whole community.
Also, if you are willing and able to give a talk on cloud robotics, we would be happy to host you - please reply here, message me directly, or sign up using the Guest Speaker Signup Sheet.
This post is specifically intended to critique software design. I have nothing but appreciation for the people who work on open source robotics. I’m also relatively new to the community, so it’s perfectly possible that I’m just misunderstanding something.
While trying to understand the architecture of the ros2_control package, I was wondering why it doesn’t use the node and message passing system already provided by ROS 2 and DDS. Basic usage gave me the impression that the control library was over-abstracting when the same level of modularity and interoperability could be achieved via defining both controllers and hardware as regular nodes.
For example, a specific PID controller would subscribe to /measurement and /setpoint topics and publish to a /voltage topic, with user-configured message types. On the receiving end, a piece of hardware would subscribe to /voltage and publish /temperature. This seems more straightforward and benefits from relying on ROS infrastructure for message passing, logging, etc.
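To illustrate the idea (this is my sketch of the proposal, not ros2_control’s actual API), the controller node’s core would just be a PID update law run whenever a new measurement arrives, with the result published on the output topic. The gains and values below are hypothetical:

```python
class Pid:
    """Minimal PID update law a plain 'controller node' could run
    each time a new measurement message arrives. Gains are hypothetical."""
    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measurement, dt):
        error = setpoint - measurement
        self.integral += error * dt
        # No derivative on the very first sample.
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        # In the node sketch, this value would be published on e.g. /voltage.
        return self.kp * error + self.ki * self.integral + self.kd * derivative

pid = Pid(kp=2.0, ki=0.5, kd=0.0)
print(round(pid.update(setpoint=1.0, measurement=0.0, dt=0.1), 3))  # 2.05
```

Everything ROS-specific (subscriptions, timers, parameter handling) would wrap this small core, which is exactly why the question of whether a dedicated controller framework is needed seems worth asking.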
This past thread had an inconclusive discussion on the topic.
The main feature a native ROS controller implementation would not provide is requiring controllers to own their hardware resources. However, I question the necessity of that, since it seems like a lot of work to circumvent a specific potential issue.
Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:
Most of you here would be using Debian. I’m an Arch Linux user and I’m planning to make a distro that supports both ROS 1 and ROS 2; I’m going to make it declarative, so you can roll back to ROS 1 anytime you want. My friend who uses ROS said ROS 2 doesn’t have much documentation, so it may be challenging for me, but I’ll do it.
We recently released camera_aravis2, a ROS 2 driver for GigE cameras based on the Aravis library.
We developed the driver from scratch to provide a cleaner codebase for easy configuration.
The driver is released for Humble, Iron, and Jazzy and will be actively maintained. We provide many examples and extensive documentation to make the configuration of new cameras as simple as possible.
ROSDoc2 Now Available in Apt and PyPI! If you are a package maintainer please, please, please regularly set aside some time to update your documentation. The post above includes documentation on how to write good ROS documentation.
We are very excited to announce that MoveIt 2 Jazzy is finally here. The newest LTS release Jazzy 2.10 will take Humble’s place as the recommended MoveIt version. It can be installed using the ROS Debian binaries on Ubuntu Noble 24.04, or through a Linux source build. The same version has also been released for Rolling Ridley.
New features in the Jazzy LTS release compared to Humble include (ordered randomly):
A refactored version of MoveIt Servo
MoveIt Python bindings
Multi-dof trajectory execution support
Refactored planning pipeline
Improved logging (rosout, namespaces) and ROS parameter API
Improved Cartesian Interpolator
A new implementation of the STOMP motion planner
Support for parallel planning pipelines
Several improvements to trajectory processing (TOTG, Ruckig, butterworth filtering)
Refactored planning pipeline API to better represent request and response adapters
The full changelogs are provided in the changelog files in the MoveIt 2 repository; breaking changes are documented in the migration guide. We are still working on fixing and updating the tutorials to reflect the latest changes.
Jazzy development continues on the MoveIt main branch for the upcoming weeks. Once we are sure the MoveIt API can remain stable, we will branch off Jazzy to a stable branch for future releases, just like we did for past distributions. In the meantime, we will maintain source build support for Jazzy on the MoveIt main branch while Humble and Iron support are being phased out. The corresponding stable branches are not affected by this.
A big Thank You to all the great contributors whose work is featured in this release: Abhijeet Das Gupta, Abishalini Sivaraman, AdamPettinger, Alaa, Alex Moriarty, Alex Navarro, AlexWebb, AM4283, Amal Nanavati, AndyZe, Anthony Baker, Antoine Duplex, Ashton Larkin, azalutsky, Bhargav, Shirin Nalamati, cambel, Captain Yoshi, Carlo Rizzardo, Chance Cardona, Chris Lalancette, Chris Thrasher, Christian Henkel, CihatAltiparmak, Cory Crean, David V. Lu!!, Dongya Jiang, Erik Holum, Ezra Brooks, Filip Sund, Forrest Rogers-Marcovitz, Gaël Écorchard, hacker1024, Henning Kayser, Heramb Modugula, HX2003, Igor Medvedev, Ikko Eltociear Ashimine, Jafar, Jens Vanhooydonck, J. Javan Jochen Sprickerhof, Jonathan Grebe, Jorge Nicho, Jorge Pérez Ramos, Joseph Schornak, light-tech, Lucas Wendland, Marc Bestmann, Marco Magri, Mario Prats, Marq Rasmussen, Matej Vargovcik, Matthijs van der Burgh, methylDragon, Michael Ferguson, Michael Görner, Michael Marron, Michael Wiznitzer, Michael Wrock, Nacho Mellado, Nathan Brooks, Nils-Christian Iseke, Pablo Iñigo Blasco, Paul Gesel, Peter David Fagan, Rayene Messaoud, Robert Haschke, Rufus Wong, Sameer Gupta, Sami Alperen Akgün, Sarah Nix, Sarvajith Adyanthaya, Scott K Logan, Sebastian Castro, Sebastian Jahr, Shane Loretz, Shobuj Paul, Simon Schmeisser, Solomon Wiznitzer, Stephanie Eng, s-trinh, Surav Shrestha, tbastiaens-riwo, Tyler Weaver, Vatan Aksoy Tezer, V Mohammed Ibrahim, werner291, Will Yadu, Yang Lin
Please share your feedback and learnings on using MoveIt Jazzy on GitHub Discussions. Happy testing!
Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:
I am currently engaged in a ROS2 project aimed at optimizing our testing processes by enabling concurrent execution of tests within a single package using colcon test.
Initially, I attempted to utilize the --parallel-workers flag. However, this approach did not resolve the issue as intended. It facilitates parallel execution across multiple packages, but intra-package test execution remains sequential.
Each test is implemented using the launch_testing framework and integrated into colcon via the add_launch_test macro within the CMakeLists.txt file.
Key specifics include:
ROS2 version: Humble
Operating System: Ubuntu 22.04
Test execution command: colcon test
I am seeking guidance or practical examples that outline best practices for achieving concurrent test execution at the ROS2 package level.
In case what I am asking turns out not to be supported, I suggest adding it to the list of desired features to be implemented. Thank you in advance!
We’re happy to announce that 9 new packages and 92 updates are now available on Ubuntu Jammy on amd64 and, more importantly, that we’ve restored 114 packages on arm64 that disappeared after the Iron Irwini sync earlier this week.
Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:
Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:
rosdoc2, the utility used to generate ROS 2 package documentation via doc jobs, is now available in the ROS package repositories for Ubuntu and Debian, and on PyPI for other platforms*.
The ROS build farm currently installs the main branch of rosdoc2 rather than the current release, but we’ll likely switch to releases in the near future.
Autoware is the world’s first “all-in-one” open-source software for autonomous driving based on ROS, hosted under the Autoware Foundation. As one of the main contributors to Autoware, TIER IV is sponsoring a new challenge to encourage development of autonomous driving technology.
Through the challenge, participants will come up with their own idea that would improve Autoware and present their solution. At the end of the challenge, TIER IV will examine the presentation and choose a winner who came up with the best solution that improves Autoware functionality.
First Place: 15,000 USD
Second Place: 7,000 USD
Third Place: 3,500 USD
Timeline
Registration/Abstract Deadline: September 2nd, 2024
Proposal Submission Deadline: January 31st, 2025
Online Presentation: February 7th, 2025
Announcement of the Results: February 21st, 2025
For mobile robots, there are three basic questions: Where am I? Where am I going? How do I get there? The first question is the robot positioning problem, which can be stated in more detail as follows: the mobile robot determines its position and orientation in the world (global or local) frame in real time based on its own state and sensor information.
In this project, we will discuss the wheel odometer of a mobile base such as the Limo.
Introduction to the robot wheel odometer and calibration test
The main positioning solutions for Ackermann-steering driverless cars include: wheel odometry, visual odometry, laser odometry, inertial navigation (IMU+GPS), and multi-sensor fusion. Wheel odometry is the simplest and lowest-cost method. Like other positioning solutions, wheel odometry requires sensors to perceive external information, but the motor speed measurement module it uses is a very low-cost sensor. The speed module is shown in the figure below.
The pose model of a mobile robot is the state of the robot in the world coordinate system. The random variable Xt = (xt, yt, θt) is often used to describe the state of the robot in the world coordinate system at time t, referred to as pose. Among them, (xt, yt) represents the position of the robot in the world coordinate system at time t, and θt represents the direction of the robot. The positive X-axis of the world coordinate system is assumed to be the positive direction, and the counterclockwise rotation is the positive direction of rotation.
At the initial moment, the robot coordinate system and the world coordinate system coincide. The pose description of the robot at a certain time t is shown in the figure.
The rotational angular velocities of the two wheels can be obtained from the wheel speed odometer. The x displacement, y displacement, and heading computed by the odometer are therefore expressed in terms of these wheel angular velocities.
The quantities we need to calibrate are the wheel spacing and the wheel radius. The mathematical model expresses the angular velocity and linear velocity of the vehicle body in terms of the wheel spacing and wheel radius. The wheel spacing diagram is shown below.
For a differential-drive base with wheel radius r and wheel spacing d, the wheel angular velocities wL and wR give the wheel linear velocities vL = r * wL and vR = r * wR. Both wheels rotate about the same rotation center as the chassis, which introduces the wheel spacing d: the angular velocity w of the chassis center is

w = (vR - vL) / d = r * (wR - wL) / d

Solving the motion for the linear velocity v of the chassis center (the mean of the two wheel velocities) gives

v = (vL + vR) / 2 = r * (wL + wR) / 2
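These relations can be checked numerically; here is a minimal ROS-free Python sketch (the wheel radius and spacing values below are made up for illustration, not Limo’s actual parameters):

```python
def diff_drive_velocity(w_left, w_right, r, d):
    """Body linear velocity v and angular velocity w of a
    differential-drive base, from wheel angular velocities,
    wheel radius r, and wheel spacing d."""
    v_left = r * w_left    # vL = r * wL
    v_right = r * w_right  # vR = r * wR
    v = (v_left + v_right) / 2.0  # mean of the wheel velocities
    w = (v_right - v_left) / d    # differential velocity over the spacing
    return v, w

# Equal wheel speeds: straight-line motion, zero rotation.
v, w = diff_drive_velocity(10.0, 10.0, r=0.05, d=0.4)
print(v, w)  # 0.5 0.0
```

Spinning the wheels at different speeds makes w nonzero, which is the quantity the calibration of r and d directly affects.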
Odometry calculation refers to the cumulative calculation of the robot’s position and orientation in the world coordinate frame at any time, starting from the moment the robot is powered on (the robot’s heading angle at power-on defines the positive X direction of the world frame).
The usual method for calculating odometry is velocity integration. The speeds VL and VR of the robot’s left and right wheels are measured by the encoders of the left and right motors. Over a short interval Δt the robot is assumed to move at constant velocity; the increments along the world X and Y axes are computed from the robot’s heading angle at the previous instant and then accumulated, while the IMU’s yaw value is used for the heading angle θ. The robot’s odometry is then obtained as described above.
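The accumulation step described above can be sketched as follows (a minimal ROS-free Python illustration; the velocities and time step are made up):

```python
import math

def integrate_odometry(x, y, th, vx, vy, vth, dt):
    """One dead-reckoning update: rotate the body-frame velocities into
    the world frame using the previous heading, then accumulate."""
    x += (vx * math.cos(th) - vy * math.sin(th)) * dt
    y += (vx * math.sin(th) + vy * math.cos(th)) * dt
    th += vth * dt
    return x, y, th

# Driving straight along the world X axis for one second in 10 steps:
x = y = th = 0.0
for _ in range(10):
    x, y, th = integrate_odometry(x, y, th, vx=0.1, vy=0.0, vth=0.0, dt=0.1)
print(round(x, 3), round(y, 3), round(th, 3))  # 0.1 0.0 0.0
```

In a real system, θ would typically be replaced by the IMU’s yaw reading at each step rather than integrated from vth alone, as the text describes.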
Wheel odometer calibration
The three main sources of odometer system errors are “the deviation between the actual diameter of the left and right wheels and the nominal diameter”, “the deviation between the actual spacing between the left and right wheels and the nominal spacing” and “the actual average of the diameters of the two wheels is not equal to the nominal average”.
“The deviation between the actual diameter of the left and right wheels and the nominal diameter” will cause the distance error of linear motion. “The deviation between the actual spacing between the left and right wheels and the nominal spacing” will cause the direction error of rotational motion. “The actual average of the diameters of the two wheels is not equal to the nominal average” will affect both linear motion and rotational motion.
We usually assume that the actual position is linearly related to the wheel odometry. By recording the actual positions ourselves together with the x and y reported by the car’s odometry, we can use the least squares method to fit a linear equation y = ax + b. The fitted coefficients can then be applied when computing the odometry to correct it.
The code can be viewed in the driver package scout_base/src/scout_messenger.cpp of the robot.
First, data needs to be collected, that is, the actual distance moved by the car and the distance of the odometer of the car.
Running the code in Matlab, the results are as follows
p = [1.0482 -0.0778]
That is, a=1.0482 and b=-0.0778, which are the calibration parameters in the x direction. Similarly, the calibration parameters for the y direction and the yaw angle can be calculated. This calibration is applied at line 28 of the following code.
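For readers without Matlab, the same linear least-squares fit can be sketched in plain Python (the measurement values below are fabricated for illustration, not actual Limo data):

```python
def fit_line(xs, ys):
    """Ordinary least squares fit of y = a*x + b."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Slope: covariance of (x, y) divided by variance of x.
    a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
        / sum((x - mean_x) ** 2 for x in xs)
    b = mean_y - a * mean_x
    return a, b

# Odometry readings (x) vs. measured ground-truth distances (y), fabricated:
odom = [0.5, 1.0, 1.5, 2.0]
truth = [0.45, 0.97, 1.50, 2.02]
a, b = fit_line(odom, truth)
print(round(a, 3), round(b, 3))  # 1.048 -0.075
```

Feeding in your own recorded odometry/ground-truth pairs yields the a and b used in the correction step of the odometry code.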
Detailed explanation of the wheel odometer code released by ROS
Create package
catkin_create_pkg pub_odom roscpp tf nav_msgs
Create the pub_odom_node.cpp file in the src folder under the pub_odom function package and add the following code:
Here we will set some velocities that will cause the “base_link” frame to move in the “odom” frame at 0.1 m/s in the x direction, -0.1 m/s in the y direction, and 0.1 rad/s about the z axis (th). This will more or less cause our simulated robot to go in a circle.
ros::Rate r(1.0);
In this example, we publish the odometry at a rate of 1 Hz to keep the output easy to read; most systems publish odometry at a much higher rate.
//compute odometry in a typical way given the velocities of the robot
double dt = (current_time - last_time).toSec();
double delta_x = (vx * cos(th) - vy * sin(th)) * dt;
double delta_y = (vx * sin(th) + vy * cos(th)) * dt;
double delta_th = vth * dt;
x += delta_x;
x = 1.0482 * x - 0.0778; // apply the x calibration: x = a * x + b
y += delta_y;            // similarly y = m * y + n after calibrating y
th += delta_th;          // and th = q * th + p after calibrating th
Here we update our odometry based on the constant velocities we set. Of course, a real odometry system would use measured velocities in its calculations.
//since all odometry is 6DOF we'll need a quaternion created from yaw
geometry_msgs::Quaternion odom_quat = tf::createQuaternionMsgFromYaw(th);
We generally try to use 3D versions of all messages in our system, to allow 2D and 3D components to work together where appropriate and to keep the number of message types to a minimum. Therefore, we need to convert our yaw value to a quaternion. tf provides functions that allow quaternions to be easily created from yaw values, and yaw values to be easily obtained from quaternions.
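What tf::createQuaternionMsgFromYaw computes amounts to the following (a minimal ROS-free Python sketch of the same math):

```python
import math

def quaternion_from_yaw(yaw):
    """Quaternion (x, y, z, w) for a pure rotation about the Z axis,
    i.e. the conversion performed by tf::createQuaternionMsgFromYaw."""
    return (0.0, 0.0, math.sin(yaw / 2.0), math.cos(yaw / 2.0))

x, y, z, w = quaternion_from_yaw(math.pi / 2)  # 90-degree yaw
print(round(z, 4), round(w, 4))  # 0.7071 0.7071
```

Because the rotation is purely about Z, the x and y components are always zero; only z and w carry the heading.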
//first, we'll publish the transform over tf
geometry_msgs::TransformStamped odom_trans;
odom_trans.header.stamp = current_time;
odom_trans.header.frame_id = "odom";
odom_trans.child_frame_id = "base_link";
Here, we’ll create a TransformStamped message to send over tf. We want to publish the transform from the “odom” coordinate system to the “base_link” coordinate system at current_time. So, we’ll set the message header and child_frame_id accordingly, making sure to use “odom” as the parent coordinate system and “base_link” as the child coordinate system.
We then stuff our odometry data into the transform message and send the transform using the TransformBroadcaster.
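That step is not shown above; a sketch of it, assuming a tf::TransformBroadcaster named odom_broadcaster was constructed earlier in the node, would look like:

```cpp
// fill in the translation and rotation from our integrated pose
odom_trans.transform.translation.x = x;
odom_trans.transform.translation.y = y;
odom_trans.transform.translation.z = 0.0;
odom_trans.transform.rotation = odom_quat;

// broadcast the odom -> base_link transform
odom_broadcaster.sendTransform(odom_trans);
```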
//next, we'll publish the odometry message over ROS
nav_msgs::Odometry odom;
odom.header.stamp = current_time;
odom.header.frame_id = "odom";
We also need to publish a nav_msgs/Odometry message so that the navigation stack can obtain velocity information from it. We set the header of the message to current_time and the “odom” frame.
//set the position
odom.pose.pose.position.x = x;
odom.pose.pose.position.y = y;
odom.pose.pose.position.z = 0.0;
odom.pose.pose.orientation = odom_quat;
//set the velocity
odom.child_frame_id = "base_link";
odom.twist.twist.linear.x = vx;
odom.twist.twist.linear.y = vy;
odom.twist.twist.angular.z = vth;
This populates the message with our odometry data so it can be sent off. We set the child_frame_id of the message to “base_link”, since that is the frame in which the velocity (twist) information is expressed.
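Sending the message and closing out the loop iteration is not shown above; a sketch, assuming a ros::Publisher named odom_pub was advertised earlier (e.g. with nh.advertise of nav_msgs::Odometry on "odom"):

```cpp
// publish the odometry message
odom_pub.publish(odom);

// bookkeeping for the next iteration
last_time = current_time;
r.sleep();
```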
Add the following two lines to the package's CMakeLists.txt file so the node gets built:
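The two lines themselves are not reproduced here; assuming the package is named pub_odom with its source in src/pub_odom_node.cpp (as implied by the rosrun command below), they would typically be:

```cmake
add_executable(pub_odom_node src/pub_odom_node.cpp)
target_link_libraries(pub_odom_node ${catkin_LIBRARIES})
```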
Run the code
First, start roscore
Then run the code we wrote
rosrun pub_odom pub_odom_node
After the code runs successfully, use rostopic echo to view the published odom information
rostopic echo /odom
Test result
After-class Quiz
● In ROS, how can you use the robot’s wheel odometry data to estimate its pose? Write a ROS node that subscribes to the robot’s wheel odometry data, uses it to estimate the robot’s pose, and publishes the estimated pose information.
● How can you calibrate the robot’s wheel odometry? Write a ROS node that drives the robot along a specific trajectory, records the wheel odometry data and the ground-truth pose information, applies a calibration algorithm to the wheel odometry, and finally saves the calibration results to the ROS parameter server.
About Limo
If you are interested in Limo or have technical questions about it, feel free to reach out to AgileX Robotics. Let’s talk about it!
Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:
I’m pleased to announce the launch of the Open-RMF Project Management Committee (PMC) open sessions. These are sessions where the Open-RMF PMC will discuss matters pertaining to the upkeep, development, and direction of the project, with an opportunity for the public to observe as well as potentially share feedback with the committee as the agenda permits. The sessions will be run according to the project charter designated by the OSRA.
Sessions will take place on a two-week cadence, with the first session at 2024-07-16T01:00:00Z.
Unfortunately the timing is not favorable to folks in a European timezone, but all the current PMC members are in Asia or California. Folks in Europe who are interested in interoperability are encouraged to join the OSRF Interoperability Special Interest Group which takes place during a Euro-friendly timeslot.
Recently, our consortium conducted a comprehensive ROS2 Basics training session at the Mitsubishi Electric Nagoya Works Factory Automation Centre in Japan, from May 14th to 17th. This training brought together our consortium members from IHI, Mitsubishi Electric, Pepperl+Fuchs, and Panasonic, as well as the training team from RIC-AP (Glenn Tan, Adriel Ho, Sheila Suppiah), all eager to delve into the topics of ROS2 Basics.
The sessions covered essential topics including the publisher/subscriber model, service client model, launch files, and parameter tuning in ROS2. A significant focus was also placed on comparing ROS1 and ROS2, highlighting the advancements and improvements in the latter.
To ensure a solid foundation, the course began with an introduction to the Linux operating system and command line interface, which is essential for ROS2 development. The engagement from our consortium members was great, with active participation and thoughtful questions that contributed to a vibrant learning environment. Their enthusiasm underscores a shared commitment to advancing robotics through ROS based technologies.
As part of their final assessment, students applied their newfound knowledge in a practical application focusing on the usage of a TurtleBot3 Burger. This hands-on approach allowed them to demonstrate their understanding of ROS2 concepts in a real-world context, further solidifying their learning.
The feedback from participants was generally positive, with many expressing their interest to delve deeper into more advanced ROS topics in the future. We also had the opportunity to further enrich the learning experience, as we concluded the training with an insightful factory tour of Mitsubishi Electric.
This training session wouldn't have been possible without the continuous support and dedication of our consortium members. Their unwavering interest in ROS2 is paving the way for future advancements in robotics, demonstrating the power of collaboration in driving innovation forward.
We look forward to our next run of trainings in Japan! Do drop a comment or contact us if you are interested to participate in subsequent runs. #goROS
Our new, free ros-tool capability makes it trivial to interact with ROS from the web. It provides a React API for subscribing and publishing to topics and for placing service calls. It works with both ROS 1 and 2, and unlike rosbridge, it caches all data in the cloud, which means your UIs will work even when your robot is offline (just showing the latest data). Since all data is synced via the cloud, it is also much easier and more efficient to aggregate data from multiple robots, e.g., for showing your fleet on a map.
Example
Here is an example of how to use it on the web:
import { useContext, useEffect } from 'react';
import { CapabilityContext, CapabilityContextProvider } from '@transitive-sdk/utils-web';

const MyROSComponent = () => {
  // access the API exposed by the ros-tool capability
  const { ready, subscribe, deviceData, publish, call } = useContext(CapabilityContext);

  useEffect(() => {
    if (ready) {
      // subscribe to a ROS 1 topic on the robot
      subscribe(1, '/amcl_pose');
    }
  }, [ready, subscribe]);

  // get the pose from the reactively updating device data
  const pose = deviceData?.ros?.[1]?.messages?.amcl_pose?.pose?.pose;

  if (!pose) return <div>Connecting...</div>;

  // show pose x and y on the page as text
  return <div>
    x: {pose.position.x},
    y: {pose.position.y}
  </div>;
};

// `jwt` is assumed to be defined elsewhere
const MyPage = () => <CapabilityContextProvider jwt={jwt}>
  <MyROSComponent />
</CapabilityContextProvider>;
Getting Started
The easiest way to get started with this is to use our hosted solution on transitiverobotics.com where you can create a (free) account, add your robots, and install the ros-tool capability. Then you can use the playground UI to try it out without writing any code. The playground UI shows you the list of topics on the robot and lets you subscribe to them. For publishing messages and placing service calls it fetches the message/service schema and uses it to pre-populate an editor with a template where you can just edit the values and send it off.
And yes, Transitive is open-source, so if you prefer to self-host the service, you can.
Today, together with @Fmrico and @juanscelyg, I’m pleased to present our implementation to make the Unitree Go2 robot work in ROS 2, in this case over DDS.
We wrote this implementation because, once you connect to this robot, you can see topics but no running nodes, and those topics carry a lot of information in a form we are not used to seeing on a ROS robot. What we want instead are the standard interfaces: a /robot_description, the /joint_states of the motors, and the ability to move the robot easily with /cmd_vel.
In the repository you can find how to use our “driver”, fully implemented in C++, with all the steps to follow to change your robot’s mode, change settings, command velocities, etc. You can also find a small list of features we have already implemented or are currently working on.
Here are some images of what I mentioned:
We are currently developing SLAM and Nav2 support for this robot, so we hope to share updates within a couple of weeks ^ ^. In the future (hopefully soon) we also want to have a Gazebo simulation ready, so that anyone who does not have this robot can work with it as well.
This post is not only to show our work, but also to invite the entire community to contribute: helping find small bugs, developing new features, and so on. I invite everyone to contribute to this repository.
Thank you very much for reading the post and I hope it helps many of you.