July 12, 2018
New course on Udemy for beginners on Fundamental Concepts of Robot Operating System (ROS) (C++)

@Anis_Koubaa wrote:

Hi

I am pleased to announce my course, entitled
Fundamental Concepts of Robot Operating System (ROS) (C++)
on Udemy learning platform.

The course provides an introduction for beginner users, with tips and practical hints to speed up learning ROS with C++, which comes with several challenges. It is perfect for students who are starting with ROS and would like a solid understanding of the fundamental concepts, as well as for beginners who would like to improve their ROS skills. I also give a lot of practical advice to ROS learners.

I have been programming, teaching and developing robotics software with ROS in both academia and industry for 7 years, since the very early stages of ROS. I am also the editor of three books on ROS with Springer; Volume 3 has just appeared.

The course will be maintained, and more lectures will be added regularly, in addition to hands-on activities. I will also provide technical support for students who enroll in the course.

I am providing a limited number of coupons with an exceptional discount to the first students who enroll.

Looking forward to having you in the course …

Anis

Posts: 1

Participants: 1

Read full topic

by @Anis_Koubaa Anis Koubaa on July 12, 2018 10:52 PM

Prototyping for Autonomous Logistics with ROS at StreetScooter

@Tobias wrote:

StreetScooter gives an insight into the prototyping of autonomous vehicles for logistics applications at DPDHL:

Please keep the spelling mistakes :wink:

Posts: 1

Participants: 1

Read full topic

by @Tobias Tobias Augspurger on July 12, 2018 09:55 PM

European Robotics Week 2018 (#ERW2018) 16-25 November 2018

@ThiloZimmermann wrote:

Dear ROS-community,

The European Robotics Week (ERW) celebrates Europe and its Robotics technology development by offering the public one week of hundreds of interactive events. ERW was conceived in 2011 with the desire of the European Robotics community to bring robotics research and development closer to the public and to build the future Robotics Society. The European Robotics Week is organised under SPARC, the public-private partnership for robotics between euRobotics and the European Commission.

The European Robotics Week 2018 (#ERW2018) will take place on 16-25 November 2018. The central event will be hosted at the Augsburg Innovationspark, Germany.

I would be very happy to see even more than the 1,000 events held last year, hopefully with some of them featuring ROS in research, products, education and just for fun.

Kind regards,
Thilo

National Coordinator of the European Robotics Week in Germany
www.eu-robotics.net/robotics_week

Posts: 1

Participants: 1

Read full topic

by @ThiloZimmermann Thilo Zimmermann on July 12, 2018 09:40 PM

New packages for Melodic 2018-07-12

@clalancette wrote:

We’re happy to announce the next update for ROS Melodic. We have 45 new packages as well as 47 updated packages.

As always, thanks to all of the maintainers and contributors who help make these updates possible!

Full details are below.

Package Updates for melodic

Added Packages [45]:

Updated Packages [47]:

Removed Packages [0]:

Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:

  • Adolfo Rodriguez Tsouroukdissian
  • Atsushi Watanabe
  • AutonomouStuff Software Development Team
  • Bence Magyar
  • Benjamin Binder
  • Enrique Fernandez
  • Felix Endres
  • Felix Ruess
  • Geoff Viola
  • George Todoran
  • Ingo Luetkebohle
  • Jihoon Lee
  • Johannes Meyer
  • John Hsu
  • Jonathan Bohren
  • Jose Luis Rivero
  • Jose-Luis Blanco-Claraco
  • Kevin Hallenbeck
  • Koji Terada
  • Louise Poubel
  • Mark Moll
  • Markus Bader
  • Martin Guenther
  • Michael Ferguson
  • Monika Florek-Jasinska
  • Paul Bovbel
  • Raphael Hauk
  • Russel Howe
  • Sachin Chitta
  • Sammy Pfeiffer
  • Shengye Wang
  • Stefan Kohlbrecher
  • Tully Foote
  • Vincent Rabaud
  • Vincent Rousseau
  • Vladimir Ermakov

Posts: 1

Participants: 1

Read full topic

by @clalancette Chris on July 12, 2018 12:52 PM

July 11, 2018
ROSCon 2017: Vehicle and city simulation with Gazebo and ROS -- Ian Chen and Carlos Agüero (Open Robotics)

Looking forward to ROSCon 2018, we're highlighting presentations from last year. ROSCon 2018 registration is currently open.

Ian and Carlos started the second afternoon session talking about simulating vehicles and cities.

Video

Abstract

Autonomous driving is becoming a popular area of robotics, attracting interest from the research community and industry alike. Open Robotics has received increasing demand for resources to help build vehicle simulations in Gazebo. In this presentation, we will describe our recent efforts on vehicle and city simulation. We have produced a collection of components, including 3D vehicle models, materials and plugins, a Road Network Description File library, and a procedural city generation tool. We will showcase a demo with a ROS interface and rviz visualization, and describe how users can create their own vehicle simulations with these components.

Slides

View the slides here

by Tully Foote on July 11, 2018 06:54 PM

July 10, 2018
How TIAGo robot benefits from Deep Learning

The combination of robotics and Deep Learning can lead robots to be more intelligent than ever before. A flurry of research is helping robots understand their surroundings and make decisions on their own. The UPC student Sai Kishor Kothakota, who is doing his Master in Automatic Control and Robotics, is doing an internship at PAL Robotics and using Deep Learning with one of our robots, TIAGo, in which we have installed an NVIDIA Jetson – one of the robot’s optional features. Here’s our conversation with him:

In your own words, what is Deep Learning?

Deep Learning is a subfield of machine learning that deals with techniques for teaching computers to do tasks that humans do naturally. These methods are inspired by the structure and function of the brain, in the form of artificial neural networks. Deep Learning is already being used in various industries such as automated driving, aerospace, medical research, industrial automation and electronics, and it outperforms traditional machine learning approaches on many state-of-the-art problems.

Why is TIAGo suitable for doing research with Deep Learning?

TIAGo, being a mobile manipulator, is surely a versatile platform for this research application. With TIAGo's capabilities combined with those of deep learning, one can easily command it to perform a desired task. TIAGo has been a part of social robotics for many years, and this multifaceted application will change the way robots and humans interact, bringing more generalization to task performance.

tiago_robot-object_detection-deep-learning

TIAGo has even learned to recognize itself!

What is your primary focus in your project with TIAGo?

I am mainly focusing on object and speech recognition. With speech recognition, the user can command TIAGo in natural spoken words. Transcribing the audio into text or a machine-readable format helps the robot better interpret what the human is conveying. Later on, this can be used for TIAGo to build conversations, tell jokes, search for content or play a song according to the situation.

Using object recognition, TIAGo is able to detect the items around it. The robot can easily differentiate between similar-looking objects and know what the user wants. The process of building object recognition models is similar to how humans learn, right from childhood.

Could you tell us more about this?

For instance, a toddler learns about cats by pointing at different objects and saying the word “cat”. The parent says: “Yes, that is a cat,” or: “No, that is not a cat.” As the toddler continues to point at objects, he becomes more aware of the features that all cats possess. The brain’s level of abstraction becomes more complex, building up a hierarchy that clearly identifies the object – the same happens when using Deep Learning with robots.
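The yes/no feedback loop in this analogy is, at heart, supervised learning: predictions are corrected by labeled examples until the relevant features emerge. A minimal sketch, with toy features and a perceptron-style learner invented purely for illustration (this is not PAL Robotics' actual model):

```python
# Illustrative only: the toddler's "yes, that's a cat / no, it isn't"
# feedback, expressed as perceptron-style supervised learning.
# Features and data are invented for the example.

def train(samples, labels, epochs=20, lr=0.1):
    """Learn weights from (features, is_cat) pairs via error feedback."""
    w = [0.0] * len(samples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # the parent's correction signal
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# toy features: (has_whiskers, has_wheels)
samples = [(1, 0), (1, 0), (0, 1), (0, 0)]
labels = [1, 1, 0, 0]  # 1 = cat
w, b = train(samples, labels)
print(1 if sum(wi * xi for wi, xi in zip(w, (1, 0))) + b > 0 else 0)  # 1: whiskers => cat
```

Deep networks replace the hand-picked features with learned ones, stacked in the hierarchy of abstractions described above.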

In summary, do you believe robots and Deep Learning have a bright future together?

Yes indeed, with these kinds of abilities robots will be able to perform tasks in a more flexible manner. In simple words, deep learning makes the robot respond more intelligently to different scenarios. This is surely a good start for the future of intelligent species!

Thank you for sharing your research with us, Sai! If you would like to learn more about the possibility of integrating the NVIDIA Jetson into the TIAGo robot, drop us a line!

The post How TIAGo robot benefits from Deep Learning appeared first on PAL Robotics Blog.

by Judith Viladomat on July 10, 2018 03:21 PM

New ROS Online Course for Beginners

@Pyo wrote:

Hi Friends,

I am pleased to announce a new ROS online course. The course is a ROS robot programming guide based on the experience we accumulated from ROS projects such as TurtleBot3, OpenCR and OpenManipulator. We tried to make this a comprehensive guide that covers all aspects necessary for a beginner in ROS; topics such as embedded systems, mobile robots, and robot arms programmed with ROS are included. For those who are new to ROS, there are footnotes throughout the “ROS Robot Programming” handbook providing more information on the web. Through this course and book, we hope that more people will become aware of, and participate in, bringing forward the ever-accelerating collective knowledge of Robotics Engineering. Enjoy this summer studying ROS! :smile:

:heavy_check_mark: Free Online Course!
:heavy_check_mark: Learn ROS from the basics, through simulators, all the way to real robots!
:heavy_check_mark: We share a 500-page book for free!
:heavy_check_mark: All lecture materials and source code are open!

:black_small_square: What you will learn from this course

  • From the basic concept to practical robot application programming!
  • ROS Basic concept, instructions and tools
  • How to use sensor and actuator packages on ROS
  • Embedded board for ROS: OpenCR 1.0
  • SLAM & Navigation with TurtleBot3
  • How to program a delivery robot using ROS Java
  • OpenManipulator simulation using MoveIt! and Gazebo

:black_small_square: Youtube Playlist

:black_small_square: Lecture Materials

:black_small_square: Download the ‘ROS Robot Programming’ Book for Free!

  • Check out RobotSource for Download
  • This handbook is written for college and graduate students who want to learn robot programming based on ROS (Robot Operating System), and also for professional researchers and engineers who work on robot development or software programming.
    We have tried to offer the detailed information we learned while working on TurtleBot3 and OpenManipulator. We hope this book will be the complete handbook for beginners in ROS, and that more people will contribute to the ever-growing community of open robotics.
  • Chapter 01 Robot Software Platform
  • Chapter 02 Robot Operating System
  • Chapter 03 Configuring the ROS Development Environment
  • Chapter 04 Important Concepts of ROS
  • Chapter 05 ROS Commands
  • Chapter 06 ROS Tools
  • Chapter 07 Basic ROS Programming
  • Chapter 08 Robot Sensor Motor
  • Chapter 09 Embedded System
  • Chapter 10 Mobile Robots
  • Chapter 11 SLAM and Navigation
  • Chapter 12 Service Robot
  • Chapter 13 Manipulator

:black_small_square: Manuals for TurtleBot3, OpenCR, OpenManipulator

:black_small_square: Open Source: Tutorials, TurtleBot3, OpenCR, OpenManipulator

Posts: 1

Participants: 1

Read full topic

by @Pyo Yoonseok Pyo on July 10, 2018 08:30 AM

ROSCon 2017: How to select a 3D sensor technology -- Chris Osterwood (Carnegie Robotics)

Looking forward to ROSCon 2018, we're highlighting presentations from last year. ROSCon 2018 registration is currently open.

Finishing the first afternoon session, Chris Osterwood provided an overview of different 3D sensing technologies and how to evaluate them.

Video

Abstract

System developers are faced with a new challenge when designing robots - which 3D perception technology to use? There are a wide variety of sensors on the market, which employ modalities including stereo, ToF cameras, LIDAR, and monocular 3D technologies. This talk will include an overview of various 3D sensor modalities, their general capabilities and limitations, a review of our controlled environment and field testing processes, and some surprising characteristics and limitations we've uncovered through that testing. There is no perfect sensor, but there is always a sensor which best aligns with application requirements - you just need to find it.

Slides

View the slides here

by Tully Foote on July 10, 2018 04:28 AM

Optimization Motion Planning with Tesseract and TrajOpt for Industrial Applications

Summary

Southwest Research Institute launched an internal R&D project to integrate the existing motion planner TrajOpt (Trajectory Optimization for Motion Planning) into ROS. TrajOpt was created at UC Berkeley as a software framework for generating robot trajectories by local optimization. The integration of TrajOpt necessitated new capabilities that spawned the creation of several new packages: tesseract and trajopt_ros. The tesseract package contains a new lightweight motion planning environment designed for industrial application, while the trajopt_ros package contains packages specific to TrajOpt. We will demonstrate how these tools complement existing planners, like OMPL or Descartes, to solve complex problems quickly and robustly.

Description

The original implementation of TrajOpt used OpenRave for kinematics and Bullet for contact checking. The first step was to replace OpenRave with MoveIt!’s kinematics, and the second to replace the collision environment with MoveIt!’s collision environment. Early in the process, several limitations were found in both MoveIt!’s kinematics and collision environment.

TrajOpt requires the ability to calculate specific information about the robot that MoveIt! does not provide; however, KDL, one of the kinematics libraries used by MoveIt!, provides methods for obtaining the required information. This led to the development of a custom kinematics library built on KDL for both kinematic chains and non-kinematic chains. Secondly, TrajOpt leverages specific characteristics of convex-to-convex contact checking to obtain the minimum translation vector needed to move two objects out of collision. In the process of integrating with MoveIt!, it was determined that MoveIt! does not provide such detailed distance information. Further evaluation also showed that MoveIt! does not support convex-to-convex collision checking, and adding it would require significant API changes across multiple repositories. Since the IR&D project was time-sensitive, the decision was made not to use MoveIt! and to create a lightweight motion planning environment, Tesseract.
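To illustrate why that minimum translation vector matters to an optimizer, here is a toy signed-distance computation; circles stand in for arbitrary convex shapes, and this is not the Bullet or trajopt_ros API:

```python
# Illustrative sketch: for two convex bodies (circles, for simplicity) the
# signed distance and the minimum-translation direction are well defined,
# giving an optimizer both a penetration depth and a direction in which to
# push links out of collision.
import math

def signed_distance(c1, r1, c2, r2):
    """Signed distance between two circles, plus the unit direction that
    separates them fastest (the minimum-translation direction)."""
    dx, dy = c2[0] - c1[0], c2[1] - c1[1]
    d = math.hypot(dx, dy)
    sd = d - (r1 + r2)  # negative => bodies overlap by -sd
    n = (dx / d, dy / d) if d > 0 else (1.0, 0.0)
    return sd, n

sd, n = signed_distance((0, 0), 1.0, (1.5, 0), 1.0)
print(sd)  # -0.5: the circles overlap by 0.5, so move one by 0.5 along n
```

A collision cost built on this quantity is smooth near contact, which is what makes it usable inside a local optimization.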

The Tesseract environment was designed to support SwRI’s work in complex industrial motion planning applications where flexibility and modularity are key to adapting to new applications. Packages include:

  • tesseract_core – Contains platform-agnostic interfaces and data structures.
  • tesseract_ros – ROS implementation of the interfaces defined in the tesseract_core package; currently leverages Orocos/KDL libraries.
  • tesseract_collision – ROS implementation of a Bullet collision library. It includes both continuous and discrete collision checking for convex-convex and convex-concave shapes.
  • tesseract_msgs – ROS message types used by Tesseract.
  • tesseract_rviz – ROS visualization plugins for RViz, covering both the environment state and trajectories.
  • tesseract_monitoring – Provides tools for monitoring the active environment state and publishing contact information. This is useful if the robot is being controlled outside of ROS, but you want to make sure it does not collide with objects in the environment. Also includes the environment monitor, which is the main environment facilitating requests to add, remove, disable and enable collision objects, while publishing its current state to keep other ROS nodes updated.
  • tesseract_planning – Contains interface bridges between Tesseract Environment and motion planners OMPL and TrajOpt.

After the creation of Tesseract, all necessary capabilities were available to finish the integration of TrajOpt into ROS. The new motion planner was evaluated against the following use cases while minimizing joint velocity, acceleration and jerk costs along with a collision avoidance cost:

  • Fully Constrained Cartesian Path
  • Semi-Constrained Cartesian Path
  • Free Space Path
  • Semi-Constrained Free Space Path
  • Free Space + Constrained Cartesian Path
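The joint velocity, acceleration and jerk costs mentioned above can be sketched as squared finite differences over a discretized trajectory. This is an assumed textbook formulation for illustration, not the actual trajopt_ros cost API:

```python
# Illustrative smoothness cost: squared finite-difference velocity,
# acceleration, and jerk over a uniformly sampled joint trajectory.

def diff(seq):
    return [b - a for a, b in zip(seq, seq[1:])]

def smoothness_cost(traj):
    """traj: list of joint positions at uniform time steps (one joint)."""
    vel = diff(traj)    # first finite difference  ~ velocity
    acc = diff(vel)     # second finite difference ~ acceleration
    jerk = diff(acc)    # third finite difference  ~ jerk
    return (sum(v * v for v in vel)
            + sum(a * a for a in acc)
            + sum(j * j for j in jerk))

straight = [0.0, 0.25, 0.5, 0.75, 1.0]  # constant-velocity motion
kinked = [0.0, 0.0, 0.5, 1.0, 1.0]      # same endpoints, abrupt motion
print(smoothness_cost(straight) < smoothness_cost(kinked))  # True
```

This is why, once a trajectory is collision-free, the remaining optimizer iterations go into smoothing it.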

Only a few of the above cases will be discussed below. The first is a Semi-Constrained Free Space Path, where a KUKA iiwa needs to plan around a sphere while maintaining tool orientation, with rotation about the z-axis left free. Each step of the optimization is shown in Figure 1. Note that once the robot is out of collision, the remaining iterations are spent minimizing joint velocity, acceleration and jerk.


Figure 1 - KUKA iiwa (7 DOF) TrajOpt free space planning around a sphere

The next use case was a complex industrial application. It is an 8 DOF problem, where the robot picks up a seat off of a conveyor and loads the seat into a car. This is a very challenging task, given the amount of manipulation required to pass through the doorway and set the seat without colliding with the car structure. The final trajectory, found in 0.482 seconds using TrajOpt with continuous collision checking enabled, is shown in Figure 2.


Figure 2 - TrajOpt Car Seat Installation Example

One significant advantage of Tesseract’s implementation of the Bullet collision library is the ability to perform continuous collision checking. An example demonstrating the use of continuous collision checking with TrajOpt is shown in Figure 3. Each red box represents a state in the trajectory, with the green box being the collision object to avoid during planning. Under discrete collision checking, each state would be found to be collision-free even though the motion between states is not. With continuous collision checking enabled, a collision is detected when transitioning between state 2 and state 3, resulting in a collision-free trajectory.


Figure 3 - Planar Box (2 DOF) TrajOpt free space planning with continuous collision checking
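The distinction shown in Figure 3 can be reproduced in a 1D toy; this is illustrative only, not Tesseract's Bullet-based implementation. Every sampled state is collision-free, yet the swept motion between two of them passes through a thin obstacle:

```python
# Illustrative: discrete checking tests only the sampled states, while
# continuous checking tests the swept segment between consecutive states.

OBSTACLE = (0.9, 1.1)  # thin wall occupying x in [0.9, 1.1]

def in_collision(x):
    return OBSTACLE[0] <= x <= OBSTACLE[1]

def discrete_check(states):
    return any(in_collision(x) for x in states)

def continuous_check(states):
    # test the interval swept between consecutive states, not just endpoints
    for a, b in zip(states, states[1:]):
        lo, hi = min(a, b), max(a, b)
        if lo <= OBSTACLE[1] and hi >= OBSTACLE[0]:
            return True
    return False

states = [0.0, 0.8, 1.2, 2.0]  # no sampled state lands inside the wall
print(discrete_check(states))    # False: every state looks collision-free
print(continuous_check(states))  # True: the 0.8 -> 1.2 motion crosses it
```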

The remaining application to discuss is a complex Semi-Constrained Cartesian path with 437 poses, each with 5 degrees of freedom (DOF) fixed and the tool z-axis free to rotate. The application performs a deburring operation on a complex puzzle piece, shown in Figure 4. The problem also includes a 7 DOF robot and a 2 DOF positioner for the spindle, making it a non-fixed-base kinematic chain. This requires the use of Tesseract’s joint kinematic model, developed for this particular use case. The TrajOpt motion planning problem contains roughly 3,000 constraints and was solved in 4.4 seconds.


Figure 4 – Semi-Constrained Cartesian Planning Problem

TrajOpt has shown itself capable of solving very difficult problems but, as an optimization, it can be sensitive to its initial conditions. This presentation will explore several strategies for seeding the solver, including the integration of sampling planners like OMPL and Descartes. These planners can coarsely and quickly sample the space of a problem to generate candidate solutions that can be refined by TrajOpt. If that refinement fails, a new robot configuration is selected from the sampled problem space and the process is repeated. For some problems this results in the planner finding a true global optimum.
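The seeding strategy can be sketched abstractly as multi-start local optimization. Everything here (the cost landscape, the naive hill-descent refiner) is a hypothetical stand-in, not an OMPL, Descartes or TrajOpt call:

```python
# Illustrative multi-start scheme: local refinement is sensitive to its
# starting point, so coarse "sampled" seeds are refined in turn and the
# best result kept.

def local_optimize(x, cost, step=0.1, iters=200):
    """Naive hill descent, standing in for TrajOpt's local refinement."""
    for _ in range(iters):
        for dx in (-step, step):
            if cost(x + dx) < cost(x):
                x += dx
                break
    return x

def cost(x):
    # two basins: a poor local minimum near x=0 and the global one near x=5
    return min(x ** 2 + 2.0, (x - 5) ** 2)

seeds = [-1.0, 2.0, 6.0]  # coarse samples of the problem space
best = min((local_optimize(s, cost) for s in seeds), key=cost)
print(round(best))  # 5: one seed lands in the global basin
```

Seeds starting near the poor basin get stuck there; the seed near x=6 descends into the global minimum, which is why sampling-based seeding can rescue a local optimizer.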

References

  1. Tesseract Repository
  2. TrajOpt ROS Repository
  3. Examples (Tesseract and TrajOpt ROS)
  4. Videos

by Levi Armstrong on July 10, 2018 04:13 AM

July 09, 2018
The Autonomous Robot Challenge - Now Live

@Alessandro wrote:

Are you interested in AI? Are you a ROS developer? Why not put your knowledge to use by joining the Arm developer community and accepting the challenge: build a robot using ROS to address a real-world problem!

Follow the link to see what the prizes and rules are!

Posts: 1

Participants: 1

Read full topic

by @Alessandro Alessandro Grande on July 09, 2018 05:46 PM

New Packages for Kinetic 2018-07-09

@tfoote wrote:

We’re happy to announce 23 new packages and 73 updated packages for Kinetic.

Full details are below. Thank you to all the maintainers and contributors who make these packages available for everyone to use.

Package Updates for kinetic

Added Packages [23]:

Updated Packages [73]:

Removed Packages [0]:

Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:

  • Adolfo Rodriguez Tsouroukdissian
  • Alexander W. Winkler
  • Atsushi Watanabe
  • AutonomouStuff Software Development Team
  • Bence Magyar
  • Benjamin Binder
  • Christoph Rösmann
  • Devon Ash
  • Felix Endres
  • Felix Ruess
  • Franka Emika GmbH
  • George Todoran
  • Ingo Luetkebohle
  • Jose-Luis Blanco-Claraco
  • Kei Okada
  • Kevin Hallenbeck
  • Louise Poubel
  • Markus Bader
  • Masaya Kataoka
  • Mike Purvis
  • Monika Florek-Jasinska
  • Raphael Hauk
  • Rohan Agrawal
  • Russell Toris
  • Ryohei Ueda
  • Sachin Chitta
  • Santiago Carrion
  • Shashank Swaminathan
  • Steve Macenski
  • Tony Baltovski
  • Vincent Rousseau
  • Vladislav Tananaev
  • YoheiKakiuchi

Posts: 1

Participants: 1

Read full topic

by @tfoote Tully Foote on July 09, 2018 04:23 AM

New Packages for Indigo 2018-07-08

@tfoote wrote:

We’re happy to announce 11 new packages and 21 updated packages for Indigo.

Thank you to all the contributors and maintainers who helped make this possible. Full details are below.

Package Updates for indigo

Added Packages [11]:

Updated Packages [21]:

Removed Packages [0]:

Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:

  • Atsushi Watanabe
  • AutonomouStuff Software Development Team
  • Devon Ash
  • Felix Endres
  • Felix Ruess
  • Kei Okada
  • Kevin Hallenbeck
  • Louise Poubel
  • Mike Purvis
  • Monika Florek-Jasinska
  • Ryohei Ueda
  • YoheiKakiuchi

Posts: 1

Participants: 1

Read full topic

by @tfoote Tully Foote on July 09, 2018 04:21 AM

July 06, 2018
ROSCon 2017: Autonomous Racing Car for Formula Student Driverless -- Juraj Kabzan (ETH Zürich, AMZ)

Looking forward to ROSCon 2018, we're highlighting presentations from last year. ROSCon 2018 registration is currently open.

Continuing the afternoon session, Juraj Kabzan carried the theme of cars into Formula Student Driverless racing.

Video

Abstract

As AMZ Racing Driverless, we're competing in the first Formula Student Driverless competition with «flüela», an electric 4WD car with high wheel torque and a lightweight design (0-100km/h in 1.9s), developed by our team in 2015. To race autonomously, the car has been extended with a LiDAR, a self-developed stereo visual-inertial system, an IMU, a GPS and a velocity sensor. We chose to use ROS Indigo on our Master Slave computing system, as it provided a robust, flexible framework to interface the different components of our Autonomous System. Furthermore we made extensive use of its logging capabilities and powerful visualization and simulation tools.

Slides

View the slides here

by Tully Foote on July 06, 2018 08:10 PM

New Packages for Lunar 2018-07-06

@marguedas wrote:

We’re happy to announce the availability of 18 new packages and 21 updated packages for ROS Lunar.

As always thank you to all the maintainers who are making these releases as well as all the contributors who have helped contribute to these releases. Full details are below.

Package Updates for lunar

Added Packages [18]:

Updated Packages [21]:

Removed Packages [0]:

Thanks to all ROS maintainers who make packages available to the ROS community. The above list of packages was made possible by the work of the following maintainers:

  • Adolfo Rodriguez Tsouroukdissian
  • Atsushi Watanabe
  • Bence Magyar
  • Christoph Rösmann
  • Felix Endres
  • Felix Ruess
  • Johannes Meyer
  • Kevin Hallenbeck
  • Louise Poubel
  • Monika Florek-Jasinska
  • Sachin Chitta
  • Stefan Kohlbrecher
  • Vincent Rousseau

Posts: 1

Participants: 1

Read full topic

by @marguedas Mikael Arguedas on July 06, 2018 02:36 PM

Roslibjs time clock

@MerAARIZOU wrote:

Hello everyone,
I’m working on measuring the transmission time of tf2_web_republisher over the Internet (Algeria Telecom). I want to know how fast the websocket will transmit tf, and the relation between the amount of transmitted data and the delivery time. To do that, I think I should either use the wall clock and synchronize the client and server clocks (I don’t know if tf will work correctly if I don’t use the ROS runtime’s clock), or synchronize the client to the ROS runtime’s clock. In the latter case, I cannot find how to access the roslibjs client clock. Is there any ROS utility that can help me synchronize the roslibjs clock with the ROS clock? Thank you very much!
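For what it's worth, the usual workaround here is an NTP-style offset estimate from a round trip rather than a dedicated ROS utility: timestamp a request on the client, read the server's clock in the response, and assume the server time corresponds to the midpoint of the round trip. A minimal sketch with a simulated server clock (the 3.2 s offset and function names are invented for illustration):

```python
# Illustrative NTP-style clock offset estimation between a client and a
# server clock (here simulated rather than fetched over a websocket).
import time

ROS_OFFSET = 3.2  # pretend the ROS machine's clock runs 3.2 s ahead

def ros_clock():
    # stands in for a server-side timestamp, e.g. returned by a time service
    return time.time() + ROS_OFFSET

def estimate_offset():
    t0 = time.time()      # client send time
    server = ros_clock()  # server timestamp from the response
    t1 = time.time()      # client receive time
    # assume symmetric network delay: server time maps to the midpoint
    return server - (t0 + t1) / 2.0

offset = estimate_offset()
print(abs(offset - ROS_OFFSET) < 0.1)  # True: offset recovered
```

Once the offset is known, the client can correct its own timestamps before comparing them against message or tf stamps; the estimate degrades if the network delay is strongly asymmetric.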

Posts: 1

Participants: 1

Read full topic

by @MerAARIZOU Meriem Aarizou on July 06, 2018 09:59 AM

July 05, 2018
ROS for HRI: on-going project + quick survey

@severin-lemaignan wrote:

Dear colleagues,

We are currently working on a proposal for a set of conventions and interfaces that would make it easier in the future to share and integrate HRI-related software, using the ROS ecosystem.

To make sure we cover everyone’s needs and use cases, we would deeply appreciate it if you could take 5 minutes and fill in this short survey:

https://goo.gl/forms/lyVjp40tFPyxd71I2

At the end of the survey, you’ll be given the opportunity to leave your email address if you wish to get further involved with the design of this ‘ROS for HRI’ set of standards.

We hope to publish a white paper later this year with the proposal. We will make sure to share it on discourse.ros.org as well.

Feel free to ask further questions if you have any!

Posts: 1

Participants: 1

Read full topic

by @severin-lemaignan Séverin Lemaignan on July 05, 2018 05:49 PM

ROS Summer School in China 2018, July 21-28

@xinyu wrote:

Our ROS Summer Schools over the past three years in China were a great success. We provided a quick, in-depth and free learning opportunity for ROS beginners and advanced ROS developers. Over those three years, more than 1,000 participants attended our ROS summer schools. We also received excellent feedback from the participants, with positive comments on the lectures, speakers and organization.

Many robot companies in China have realized the importance of ROS and have begun developing their robot projects using ROS. However, learning ROS and its associated components involves a wide range of knowledge: it requires developers not only to have software programming skills, but also to be familiar with robotics theory and robot hardware, and even to understand the background of specific industrial applications.

Learning and using ROS can be a slow and painful process. In 2018, we will organize the 4th ROS summer school in China. This year, our ROS summer school will come to Shenzhen, the city recognized as “China’s Silicon Valley” or the “Silicon Valley of Hardware”. We will continue to organize keynote speeches and provide ROS lectures, as we have for the last three years.

Over the seven days, we are going to cover the following topics: ROS basics, teleoperation, the Gazebo simulator, computer vision, SLAM, navigation, industrial exhibitions and more. Many representatives from industry will share their experience of robot project development using ROS. At the end of the summer school, we will organize a robot competition, in which participants use their skills to fulfill given tasks with mobile robots.

Please visit our official website for the details of our ROS Summer School 2018.
http://www.roseducation.org
http://www.robotics.sei.ecnu.edu.cn/ros2018

For the past ROS summer schools, check out the following links
http://www.robotics.sei.ecnu.edu.cn/ros2017
http://www.robotics.sei.ecnu.edu.cn/ros2016
http://www.robotics.sei.ecnu.edu.cn/ros2015

Posts: 7

Participants: 4

Read full topic

by @xinyu Xinyu Zhang on July 05, 2018 02:26 AM

July 03, 2018
ROSCon 2017: Building a Computer Vision Research Vehicle with ROS -- Andreas Fregin

Looking forward to ROSCon 2018, we're highlighting presentations from last year. ROSCon 2018 registration is currently open.

After lunch, Andreas Fregin started the afternoon session with a talk about how Daimler has leveraged ROS for computer vision.

Video

Abstract

Daimler (Mercedes-Benz) has a long history of research and development on ADAS systems and autonomous driving. Today's increasingly complex requirements on sensors, algorithms and fusion put high demands on the underlying software framework. In this talk, the Pattern Recognition and Cameras group of Daimler Research and Development showcases their latest research vehicle. Additionally, a detailed look at an implemented multi-sensor synchronization system is given. Findings and lessons learned, as well as tool modifications and added functionality, will also be discussed. The audience will get insights into data handling in the context of high data throughput.

Slides

View the slides here

by Tully Foote on July 03, 2018 09:18 PM

ROS1 or 2 for a newbie?

@defied wrote:

As I am still in the infancy stage of learning ROS, and fully expect to migrate to ROS 2, should I just focus on ROS 2 training instead? It makes sense to me, but I wanted to collect other thoughts on this.

Thanks,
D

Posts: 6

Participants: 3

Read full topic

by @defied on July 03, 2018 08:54 PM

New package: ros_opencl for easy OpenCL integration

@gstavrinos wrote:

Hello everyone,

I am starting this thread to let you know that I have released the third stable version of the ros_opencl package. ros_opencl is a library that helps developers run routines on their GPU with ease. It basically wraps OpenCL and offers a plethora of functions to choose from.

Give it a try, and don’t forget to submit your (inevitable?) issues!

Happy kernel processing! :wink:

Posts: 1

Participants: 1

Read full topic

by @gstavrinos George Stavrinos on July 03, 2018 01:29 PM

ROS 2 Bouncy Bolson Released!

We're happy to announce the ROS 2 release Bouncy Bolson!

Check out our installation instructions and tutorials and give it a try! We're excited to hear your feedback and the applications that this release will enable!

To get an idea of what's in this release, be sure to read the Bouncy release page.

A few features and improvements we would like to highlight in this release:

Bouncy Bolson is the second non-beta ROS 2 release and will be supported with bug fixes and platform updates (particularly on rolling dependencies like Windows and MacOS) for one year with support ending in June 2019. While we do aim to keep the API as stable as possible, we can't guarantee 100% API compatibility between releases. Check the features page and ROS 2 roadmap to evaluate whether or not ROS 2 is ready to be used for your application or if you can switch from ROS 1 to ROS 2 as it will depend on the exact feature set and requirements of your use case.

As always, we invite you to try out the new software, give feedback, report bugs, and suggest features (and contribute code!): https://github.com/ros2/ros2/wiki/Contact We also invite you to release your ROS 2 packages in Bouncy! Here's a tutorial to do so.

We would also like to announce the name of the next ROS 2 release: Crystal Clemmys

Your friendly ROS 2 Team

P.S. There's still a few days left on the t-shirt campaign.

bouncy.gif

by Tully Foote on July 03, 2018 01:46 AM


June 29, 2018
ROSCon 2017: The rostune package: Monitoring systems of distributed ROS nodes -- Georgios Stavrinos and Stasinos Konstantopoulos (NCSR "Demokritos")

Looking forward to ROSCon 2018, we're highlighting presentations from last year. Registration for ROSCon 2018 is currently open, as is the Call for Proposals.

Finishing up the morning session, Georgios and Stasinos presented rostune, a tool that helps you better understand the state of your ROS system and use that insight to improve performance.

Video

Abstract

rostune is a tool that helps ROS developers distribute their nodes in the most effective way. It collects and visualizes statistics for topics and nodes, such as CPU usage and network usage. In this talk we present technical details about rostune and a characteristic use case from an ongoing project developing a home assistance robot, where processing can be distributed between the robot's on-board computer and computational units available in the home.
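rostune's own statistics collection is internal to the package, but the kind of per-process CPU sampling a monitor like this performs can be illustrated with a small, stdlib-only sketch. This is Linux-specific (it reads /proc), the field indices follow proc(5), and the function name is hypothetical:

```python
import os

def process_cpu_ticks(pid):
    """Read cumulative user+system CPU ticks for a process from /proc.

    Illustrates the kind of per-node sampling a monitor like rostune
    performs; field positions follow the proc(5) man page.
    """
    with open(f"/proc/{pid}/stat") as f:
        stat = f.read()
    # The process name (field 2) may contain spaces; skip past the
    # closing parenthesis before splitting the remaining fields.
    fields = stat[stat.rfind(")") + 2:].split()
    utime, stime = int(fields[11]), int(fields[12])  # fields 14 and 15 in proc(5)
    return utime + stime

if __name__ == "__main__":
    # Sampling the same PID at intervals and differencing the values
    # yields a CPU-usage rate per node.
    print(f"cumulative CPU ticks: {process_cpu_ticks(os.getpid())}")
```

Sampling each node's PID periodically and differencing consecutive readings gives the per-node CPU-usage curves that a tool like rostune plots.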

Slides

View the slides here

by Tully Foote on June 29, 2018 07:02 PM

June 27, 2018
Driving Intelligent Inspection Processes for NDE/NDI

Nondestructive evaluation (NDE) of parts and structures created by either forming or additive manufacturing processes has typically had to be performed manually. Recent advances in scanning technologies and intelligent path-planning tools, such as those available within ROS-Industrial, provide a platform capable of performing multiple NDE processes intelligently by leveraging the Scan-N-Plan framework.

The Sensors Systems & Nondestructive Evaluation team within Southwest Research Institute® (SwRI), along with ROS-Industrial developers, has conceived a Scan-N-Plan approach to Nondestructive Inspection (NDI), reducing inspection time by performing detailed surface or volumetric inspections only where mandated by an initial higher-level screening.

The Sensors Systems & Nondestructive Evaluation team has developed a unique technology within their portfolio that performs full volumetric inspection using a guided-wave technique with an omni-directional probe. The concept is to perform a macro-level scan with the MsT360 Guided Wave Sensor, analyze the output for indications, or areas of interest, and use those to drive low-level inspection. A sample output from this stationary sensor can be seen below. This output can then be used to generate process paths for follow-on inspections, such as Eddy Current (EC) or Ultrasonic Testing (UT), only in the areas of interest. This provides the full volumetric inspection output while reducing the time spent doing a 100 percent surface scan with higher-resolution traditional UT, improving solution velocity and reducing post-processing cleanup.

Based on experience with the MsT360, follow-on low-level inspection techniques can be selected based on the characterized potential defect. The system would then plan trajectories for each process based on this scan, including the stand-off or contact forces and trajectories the process mandates. A map could be visualized and colored to help the operator understand which regions will be inspected by which NDE process. In the example below, for instance, a process would be planned only for the area identified with the 35% wall indication, where welds 1 and 2 are located, as opposed to UT or EC of the entire volume or surface, respectively.

Scan output.JPG

Sample output from MsT360
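The screen-then-target logic described above can be sketched in a few lines. The region names, wall-loss values, and threshold below are purely illustrative, not the MsT360's actual output format:

```python
# Hypothetical sketch: choosing which macro-scanned regions get a
# detailed follow-on inspection (UT or EC). Region names and the
# wall-loss threshold are illustrative, not from the MsT360 output.

def regions_needing_inspection(regions, wall_loss_threshold=0.30):
    """Return the regions whose estimated wall loss meets or exceeds
    the threshold; only these get a detailed follow-on pass."""
    return [name for name, wall_loss in regions.items()
            if wall_loss >= wall_loss_threshold]

macro_scan = {
    "weld_1": 0.35,   # 35% wall indication -> plan a detailed UT/EC pass
    "weld_2": 0.35,
    "shell_a": 0.05,  # nominal -> skipped, avoiding a 100% surface scan
}

print(regions_needing_inspection(macro_scan))
# -> ['weld_1', 'weld_2']
```

The time savings of the approach come directly from how small the returned list is relative to the full part surface.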

A sample workflow for this type of operation would be:

• Assembly or piece within a robotic system is ready for NDE assessment
• Locate MsT360 Guided Wave Sensor (manually or robotically)
• Perform macro volumetric inspection
• Analyze output
• For areas of concern, plan trajectories for each process type (EC or UT) and execute inspection in the areas of interest
• Tool change
• Perform inspection
• Tool change if required
• Clean
• Generate report; either unload or launch a repair process

The flexibility inherent to this design, and the capability it includes, enable processing of piece-parts, such as forms, as well as fabricated structures. The macro-scan-plus-detailed-path-planning approach is more efficient when inspecting an entire volume or surface area does not inherently add value.

Planned Region.JPG

As can be seen in the image above, specific regions can be highlighted based on an output such as the one described earlier, or adjusted by the user through a GUI. These trajectories can be planned for all the processes of interest, in this case UT and EC, with their process-execution requirements included. Given the flexibility of the Scan-N-Plan framework, it is easy to see how NDE processes can either be included in a multi-process cell or simply automated in a way that applies them where required, or where detailed scrutiny is most beneficial. Furthermore, this can be augmented with visual imaging to allow assessment of human-readable details, or human-generated markings, to drive further process planning and inspection. Several visual NDE techniques for surface inspection, such as dye penetrant, magnetic particle, and pulsed thermography, could also be incorporated in the future.

Human Markings.JPG

Example of Identification and Tool Path Plans for a Human Generated Indication

As advances in both inspection technologies and automation move forward, there are opportunities to couple these processes more tightly and to optimize their effectiveness in the areas of greatest need. This enables more efficient utilization while optimizing cost and throughput, and reduces the non-value-added steps that currently accompany many NDE techniques. Moving forward, we hope to see more opportunities for integrated quality evaluation within automation applications, enabling further optimization of manufacturing value streams. For more information about SwRI's NDE solutions, please visit: https://www.swri.org/industries/sensors-systems-nondestructive-evaluation-nde

by Matthew Robinson on June 27, 2018 09:28 PM

ROSCon 2017: Secure ROS: Imposing secure communication in a ROS system -- Aravind Sundaresan and Leonard Gerard (SRI International)

Looking forward to ROSCon 2018, we're highlighting presentations from last year. Registration for ROSCon 2018 is currently open, as is the Call for Proposals.

Following the SROS presentation, Aravind and Leonard followed up with a summary of Secure ROS, another way to secure the ROS API.

Video

Abstract

Secure ROS is an update to ROS that allows secure communication while keeping the ROS public API intact, so user code can be reused without modification. Policies are provided at execution time via a YAML file specifying authorized subscribers and publishers for topics, getters and setters for parameters, and providers and requesters of services. Policies are specified at the IP address level and enforced by Secure ROS. Combined with IPsec for cryptography, Secure ROS provides secure, authenticated, and encrypted ROS communications. Modifications to the ROS code base are restricted to the ROS Master and client libraries (rospy and roscpp).
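To make the policy model concrete, a YAML policy of the kind the abstract describes might look roughly like the following. The key names, topic names, and addresses here are illustrative guesses, not the actual Secure ROS file format; consult the Secure ROS documentation for the real schema:

```yaml
# Hypothetical Secure ROS policy sketch. All key names, topics, and
# IP addresses are illustrative; the real format may differ.
topics:
  /cmd_vel:
    publishers: [10.0.0.2]       # only the planner host may publish
    subscribers: [10.0.0.3]      # only the base controller may subscribe
parameters:
  /max_speed:
    setters: [10.0.0.2]
    getters: [10.0.0.2, 10.0.0.3]
services:
  /reset:
    providers: [10.0.0.3]
    requesters: [10.0.0.2]
```

Because enforcement happens at the IP level, such a file pairs naturally with IPsec, which authenticates that packets really come from the listed addresses.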

Slides

View the slides here

by Tully Foote on June 27, 2018 08:38 PM


Powered by the awesome: Planet