May 26, 2016
ROSCon 2016: Call for Proposals

ROSCon 2016 is happening October 8-9 in Seoul, Korea: http://roscon.ros.org/2016/

Proposals for presentations on all topics related to ROS are invited: http://roscon.ros.org/2016/#call-for-proposals

The proposal submission deadline is July 8th, 2016: http://roscon.ros.org/2016/#important-dates

Women, members of minority groups, and members of other under-represented groups are encouraged to submit presentation proposals to ROSCon.

Proposals will be reviewed by the program committee, which will evaluate fit, impact, and balance.

We cannot offer sessions that are not proposed! If there is a topic on which you would like to present, please propose it. If you have an idea for an important topic that you do not want to present yourself, please post it to ros-users@lists.ros.org.

All ROS-related work is invited. Topics of interest include:

  • Best practices
  • Useful packages
  • Robot-specific development
  • ROS Enhancement Proposals (REPs)
  • Safety and security
  • ROS in embedded systems
  • Product development & commercialization
  • Research and education
  • Enterprise deployment
  • Community organization and direction
  • Testing, quality, and documentation
  • Robotics competitions and collaborations

To get an idea of the content and tone of ROSCon, check out the presentation slides and videos from previous years: http://roscon.ros.org/2016/#past-events

Submit your proposal here: http://roscon.ros.org/review/

We can't put on ROSCon without the support of our generous sponsors: http://roscon.ros.org/2016/#sponsors

We'd like to especially thank our Platinum and Gold Sponsors: Fetch Robotics and Intel!

If you're interested in supporting ROSCon, please contact us: roscon-2016-oc@osrfoundation.org.

by Tully Foote on May 26, 2016 07:07 PM

May 25, 2016
Legal aspects and best practices of open-source software

On April 19-20, Fraunhofer IPA hosted an event, organized in collaboration with euRobotics AISBL, on the best practices and legal aspects of Open-Source Software (OSS) in robotics and automation. The rationale behind the event was that while OSS is an established and accepted factor in "software-heavy" business domains like enterprise IT systems and smartphones, its inner workings are less well understood in industries where software is now shifting from a component with an ancillary role to one with high added value. With the changes poised to happen in industrial robotics and automation, driven by advances in robotics science on the one hand (just think of the progress in autonomous driving and its underpinning achievements in perception, planning, and control) and by government-mandated initiatives like Industrie 4.0 on the other, the ROS-Industrial team believes that OSS is a great opportunity to accelerate this process. However, to foster its adoption we also need to identify and clear non-technical obstacles, such as legal, economic, and regulatory concerns.

During the event the speakers described to the audience how OSS is already part of established business practices at large companies in the industrial domain; how digital economies are being shaped thanks also to OSS; which regulatory and legal aspects we need to take into account in terms of safety standards, licensing and compliance processes.

We want to highlight a few takeaway messages, as they are instrumental in dispelling unfounded but long-standing critiques of OSS for industrial robots and machinery:

  • the kinds of functionality that ROS is typically used for, which sit at the OS/middleware and application software levels, can be carried out by non-certified software, as they live in a "sandbox" protected by the underlying safety-compliant foundation, which includes the electrical/mechanical and safety device/PLC layers. As I like to say, we do not necessarily aim to replace the software in your robot's control box with ROS (although the software to do that is available), but rather to provide higher-level functionality, such as perception-driven, online trajectory generation that lets the robot operate in dynamic environments, something not possible (or very difficult) with the limited sets of preprogrammed motions typical of current automation
  • OSS has a long history of adoption in industrial automation; Linux (especially Linux RT) is a good example of this, and shows that using OSS in commercial products is not only possible, but also beneficial
  • having a compliance process in place (e.g. OpenChain) can ensure that licensing matters are properly dealt with

Given the interest and the feedback collected after the event, we plan to follow up on these topics at the ROS-Industrial Conference next fall; its program will be made available in the coming weeks.

by Mirko Bordignon on May 25, 2016 04:48 PM

Thank you for visiting PAL Robotics at ICRA 2016!

Last week the key stakeholders in robotics worldwide gathered in Stockholm (Sweden) to attend ICRA 2016. Five intense days of sessions and presentations disseminating the most advanced research in robotics were coupled with a very interactive exhibition. PAL Robotics’ humanoid REEM-C and mobile manipulator TIAGo were at the venue, available to anyone who wanted to interact with them.

REEM-C humanoid robot using Whole Body Control software

Humanoid REEM-C loved going for walks all around the ICRA 2016 exhibition area, mingling with conference delegates and visiting robotic colleagues from other stands. The demo that attracted the most attention used Whole Body Control software to make REEM-C accomplish a goal with its whole humanoid body – in this case, pointing at a marker with its index finger. This specific demo was developed for the socSMCs EU FET project.

Attendees could try moving TIAGo’s arm smoothly in gravity compensation mode, or feel its strength with a handshake. The mobile manipulator was also guided through the venue by pulling its arm, and used PAL Robotics’ Whole Body Control software to track a marker and keep pointing at it while avoiding its joint limits. TIAGo is suitable for research in both Ambient Assisted Living and Industry 4.0 environments, and the ICRA 2016 demos showed some of the possibilities the robot opens up.

Modular robotics solutions for the industry of the future

Business Manager Carlos Vivas introducing Factory in a Day at ICRA Industry Forum

The ICRA 2016 Industry Forum invited PAL Robotics as a speaker under the theme “Partnering for modular, open solutions to meet industry needs”, hosted by Combient. Our Business Manager Carlos Vivas explained how industry can be enhanced by modular robots: PAL Robotics develops platforms that are highly modular and suited to Industry 4.0, such as TIAGo or the PMB-2 mobile base. In fact, as Vivas pointed out, “TIAGo has the essence of what we learnt among the years: modularity is the way”.

Modularity – both in hardware and software – reduces complexity and makes it easier to integrate production lines for factories. Modular and open technologies are the basis for Factory in a Day, an EU 7th Framework project in which PAL Robotics is involved. Its goal is to reduce the system integration time of a supply chain, using modular robots to optimize production line changes. The project was also presented at the ICRA 2016 Industry Forum.

The post Thank you for visiting PAL Robotics at ICRA 2016! appeared first on PAL Robotics Blog.

by Judith Viladomat on May 25, 2016 10:23 AM

May 24, 2016
Things I Learned at OSCon 2016

Last week I had the pleasure of attending OSCon, held in Austin, TX. OSCon has been around since 1999 and is a great conference for all things open source. More information on the conference can be found here. Overall the conference was educational and extremely motivational. I intend to attend OSCon regularly, and I would recommend it to my colleagues in the open source robotics community. Below are some things I learned at OSCon.

  • A Historical Perspective
    Open source is not a new idea. According to Wikipedia, the idea of open source was hatched in the late 1990s. Before open source, there was free software, which originated in the 1970s and 80s. For many of us, this history is unknown. We just accept that open source has been adopted by industry, but lack an appreciation for what it took to get it there. Danese Cooper's talk provided a great perspective on this history. The early trailblazers in the open source and free software movements deserve our admiration and respect. This historical perspective is also reassuring to those of us in the industrial market. We are fighting some of the same battles that were fought early on in the IT market. While ROS-Industrial enjoys the support of the ROS-Industrial Consortium, there are still many industrial companies that remain unconvinced or unsupportive. The acceptance, and some might say dominance, of open source in the IT market illustrates what is possible when early adopters are relentless. It's also much easier when we can point to examples in the IT space where open source has had a tremendous impact. I can't imagine the hurdles open source encountered in the early days. Imagine convincing businesses, who valued software so greatly, that giving it away is better for the common good and the bottom line. A sincere thank you to those who blazed the trail before us.
  • Building an Open Source Community
    The "Optimizing your project for contribution" presentation by Josh Matthews was perhaps the single most important presentation for me. Josh outlined 5 steps to build your community and make it easier for developers to contribute. These five steps are:
    1. Prioritizing useful information - Document your software and the contribution process from the point of view of a "newbie".
    2. Reducing friction - Make it easy to contribute. Don't make people jump through hoops unnecessarily.
    3. Making expectations clear - Set the expectations for not only contributions, but the review process in general. Provide a timeline for acceptance.
    4. Responding appropriately - Acknowledge every contribution. Contributions take time, and we should consider this when critiquing or requesting changes.
    5. Following through - Follow your own process. Deadlines in particular are of utmost importance. Responding to contributions immediately significantly increases the likelihood of follow-on contributions (which ensures your community will grow).
      In the months to come we will be implementing these ideas in ROS-Industrial. Great things are coming...
  • Open Source Participation is Still Hard for Companies
    While use of open source software has largely been accepted by companies, participation is still difficult. Participation includes everything from financial support to actively committing source code and interacting with the community. Financial support, as we have found with ROS-Industrial, is probably the easiest form of support. While financial support is appreciated, and certainly needed, the greatest value of open source is only realized by participating. Participation has several hurdles, not the least of which are legal and IP related. Companies need processes in place to manage open source contributions. These processes need to protect the company while minimizing hurdles to contributing. How do companies create these processes? In the open, of course. The TODO group, which stands for "Talk openly, develop openly", was organized for companies to cooperatively develop practices for contributing to open source and to share experiences.
  • Community Leadership Summit
    This summit is held just before OSCon. One of the reasons I attended OSCon was to get ideas for how to lead and grow the ROS-Industrial community, and just about everyone I talked to recommended I attend the Community Leadership Summit. It brings together community leaders from across the open source world to discuss strategies for building communities. I won't miss it next year.
  • Thank You Lawyers
    I attended several presentations on the legal aspects of open source. We owe a debt of gratitude to lawyers at the OSI, Apache Foundation, and others for ensuring the open source software will remain open and protected from legal claims. They have ensured that the idea of open source and the true intent of developers is protected.

by Shaun Edwards on May 24, 2016 02:57 AM

May 23, 2016
ROS Kinetic Kame Released
Happy World Turtle Day!

I am pleased to announce that the 10th ROS distribution, Kinetic Kame, is now available on Ubuntu Xenial 16.04, Ubuntu Wily 15.10, and Debian Jessie. Packages for 32-bit ARM (armhf) are available on Xenial, and 64-bit ARM (aarch64) is supported on Debian Jessie.

kinetic.png

To install ROS Kinetic, refer to the Installation page on the Wiki.
Check out the Migration guide for a changelog of new features and API changes:

http://wiki.ros.org/kinetic/Migration

The initial release of Kinetic includes 524 packages from the ROS ecosystem, compared to 2149 currently in Indigo and 1016 in Jade. You can see the released packages on the status page for Kinetic:

http://repositories.ros.org/status_page/ros_kinetic_default.html

And you can compare the versions of packages in Indigo, Jade, and Kinetic here (thanks William for making changes to the new compare pages):

http://repositories.ros.org/status_page/compare_indigo_jade_kinetic.html

If there's a package missing in Kinetic that you'd like to see released, contact the maintainers to let them know. Even though we've made the initial Kinetic release, it's never too late to add packages to Kinetic (or Jade or Indigo) for upcoming syncs.

Kinetic T-shirts (and hoodies) should come through in the mail this week.

We'd also like to announce the name of the next ROS distribution, which you can look forward to downloading a year from now: Lunar Loggerhead!

Thank you to all of the maintainers and contributors who helped make this release possible. We couldn't do this without you.

- Jackie and the ROS Team

by Tully Foote on May 23, 2016 11:52 PM

May 20, 2016
Amit Moran (Intel): Introducing ROS-RealSense: 3D Empowered Robotics Innovation Platform

From OSRF

While Intel is best known for making computer processors, the company is also interested in how people interact with all of the computing devices that have Intel inside. In other words, Intel makes brains, but they need senses to enable those brains to understand the world around them. Intel has developed two very small and very cheap 3D cameras (one long range and one short range) called RealSense, with the initial intent of putting them into devices like laptops and tablets for applications such as facial recognition and gesture tracking.

Robots are also in dire need of capable and affordable 3D sensors for navigation and object recognition, and fortunately, Intel understands this, and they've created the RealSense Robotics Innovation Program to help drive innovation using their hardware. Intel itself isn't a robotics company, but as Amit explains in his ROSCon talk, they want to be a part of the robotics future, which is why they prioritized ROS integration for their RealSense cameras.

A RealSense ROS package has been available since 2015, and Intel has been listening to feedback from roboticists and steadily adding more features. The package provides access to the RealSense camera data (RGB, depth, IR, and point cloud), and will eventually include basic computer vision functions (including plane analysis and blob detection) as well as more advanced functions like skeleton tracking, object recognition, and localization and mapping tools.

Intel RealSense 3D camera developer kits are available now, and you can order one for as little as $99.

Next up: Michael Aeberhard, Thomas Kühbeck, Bernhard Seidl, et al. (BMW Group Research and Technology). Check out last week's post: The Descartes Planning Library for Semi-Constrained Cartesian Trajectories

by Tully Foote on May 20, 2016 05:59 PM

slack-ros-pkg: Let your robot chat with you!

From Joffrey Kriegel

I recently made a package that enables communication between ROS and Slack. Slack is a multi-platform messaging app for teams.

This package can connect to a Slack channel, listen to what you say in it, and publish it to a ROS topic. It can also write to the Slack channel via another ROS topic.

You can find the source code (in Python) and the (brief) documentation here: https://github.com/smart-robotics-team/slack-ros-pkg

I hope you will enjoy this package.

by Tully Foote on May 20, 2016 05:53 PM

Factory-in-a-Day Newsletter #4
Click on the image to view the full newsletter

by Paul Hvass on May 20, 2016 03:44 PM

May 19, 2016
ROS By Example Now Available in Chinese

ROS By Example, the first book published on ROS, is now available in Chinese, thanks to the translation efforts of Juan Rojas, Assistant Professor of Robotics at Sun Yat-sen University, and the sponsorship of Jenssen Chang, owner of Gaitech International Ltd., an innovative robotics solution provider based in Hong Kong, Seoul, Taipei, and Shanghai, and an active promoter of ROS education in Asia. The new Mandarin translation can be obtained in print from DangDang.com and JD.com. The translation was a team effort that included the following students: Liu ZhenDong, Li Ziran, Li JiaNeng, Liu Ke Shan, Peng Ye Yi, and Huang LingLing.

by Patrick Goebel on May 19, 2016 11:05 PM

Internship 2015 - Michal Staniaszek: ROS and Behaviour Trees
In July of last year, I joined Yujin Robot for a six-month internship with the innovation team. Since 2014, the team has been working on the newest addition to the robots at Yujin, GoCart, the second version of which was announced late in 2015. My goal for the duration of my internship was to help develop the intermediate layer of GoCart's software system: I worked on the layer between the lower-level modules, like navigation, and the UI from which the robot receives its instructions.

The innovation team uses ROS extensively, and while I had used ROS for several earlier projects, I hadn't yet made any contributions of my own. I was encouraged to do so. I enjoyed the experience, and hope to contribute more in the future. My first addition was a small change to the sound_play package that allows the volume of individual sounds to be set, something much requested by one of the team members, who was annoyed by the loud (and frequent) sounds coming from the robot while it was being tested.

The most significant contribution I made to ROS was a modification to the diagnostics package. We wanted to ensure that the diagnostic UI could be used by testers with little technical knowledge to see problems and report them to developers. I modified the diagnostic aggregator so that we could make sure only relevant information was displayed to them.

The package allows users to define analysers which listen on a topic for diagnostic messages. These messages are then aggregated into user defined groups, which are displayed in an ordered way in the diagnostic UI. As with many ROS systems, the GoCart software consists of a large number of different modules which are combined to drive the robot. Each module has its own diagnostics, which make up a group in the aggregator.

Previously, the diagnostic aggregator was configured when it was first run, by loading a yaml file in the node's launch file. The configuration file contains information about the analysers to be created and which messages they listen for. Once the node had started, it was not possible to modify the aggregator configuration without shutting the node down and changing the configuration file. This meant that diagnostic groups which weren't relevant to the currently running modules would still appear in the UI.

The updated diagnostic aggregator allows you to define diagnostics configurations for different modules, and load them into the aggregator when you need them. Diagnostics can be loaded by adding a diagnostic loading node to your launch file. The diagnostics are automatically unloaded when the node shuts down. You can find a tutorial on the new functionality here.

I also spent a good deal of time working with behaviour trees, a control structure which is an alternative to state machines. Originally developed in the games industry for controlling AI agents, behaviour trees provide a simple but powerful structure for defining which action is taken under certain conditions. It’s very easy to modify them at runtime, which provides additional flexibility.

Each node in a behaviour tree is a behaviour, which can be in one of four states: success, failure, running, or invalid. The tree operates using time steps. At each timestep, the tree 'ticks' its nodes, starting from the root, descending depth-first into the tree. When a node is ticked, it executes some code, and returns one of three states: success, failure, or running. Running means that a node has not finished whatever task it is supposed to do. For example, a behaviour that asks the controller to turn the robot 0.5 radians would return running until odometry determines that 0.5 radians has been turned, at which point it would return success. The behaviour might also listen to an obstacle detection topic, stopping the turn and returning failure if an obstacle was detected. The return value of a node can change the part of the tree that is ticked.

Beyond the simple behaviour, there are also composites. A composite is a behaviour whose return state depends on the return states of child behaviours. A composite's children can be simple behaviours or further composites. There are two basic types of composites: sequences and selectors. A sequence runs each of its children in order. The sequence returns success if all of its children also return success. If any child fails, the sequence returns failure. A selector runs its children until one returns success, at which point the selector also returns success. If all of the children fail, the selector also fails. While composites are running their children, they are also in the running state. Composites give you more control over which behaviours run, and when.
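
The tick semantics described above can be sketched in a few lines of Python. This is a toy illustration, not the actual package Daniel Stonier and I worked on: `Count` is a made-up leaf behaviour that stays in the running state for a fixed number of ticks before returning a final result.

```python
# States a behaviour can report when ticked.
SUCCESS, FAILURE, RUNNING = "success", "failure", "running"

class Count:
    """Made-up leaf behaviour: RUNNING for n-1 ticks, then a fixed final result."""
    def __init__(self, n, result=SUCCESS):
        self.n, self.result, self.ticks = n, result, 0
    def tick(self):
        self.ticks += 1
        return self.result if self.ticks >= self.n else RUNNING

class Sequence:
    """Succeeds only if every child succeeds; fails as soon as one child fails."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            state = child.tick()
            if state != SUCCESS:
                return state   # FAILURE or RUNNING propagates upward
        return SUCCESS

class Selector:
    """Succeeds as soon as one child succeeds; fails only if all children fail."""
    def __init__(self, *children):
        self.children = children
    def tick(self):
        for child in self.children:
            state = child.tick()
            if state != FAILURE:
                return state   # SUCCESS or RUNNING propagates upward
        return FAILURE

# A fallback pattern: try the main action; if it fails, run the recovery action.
tree = Selector(Sequence(Count(1, FAILURE)), Count(1, SUCCESS))
print(tree.tick())  # the sequence fails, so the selector falls back and succeeds
```

The selector-over-sequences shape above is the common "try this, else do that" idiom that makes behaviour trees easy to extend at runtime: adding another recovery option is just appending a child.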

In late 2015, Daniel Stonier did the groundwork for a behaviour tree package in Python. He and I then spent time improving it. We designed and implemented behaviour trees defining several scenarios for GoCart, and tested them extensively both in simulation and in real-world trials. Using the tf_tree viewer and rqt_bag as a base, I wrote an rqt package which can be used to view the tree as it changes in real time, and to replay recorded bag files.


Here is a screenshot of the rqt viewer in its current state. The colouring of the nodes shows their current state. Green is success, red is failure, and black is running. Grey nodes are those which have not been run yet. Ellipses denote simple behaviours if they are a leaf node, or selectors otherwise. Rectangular nodes are sequences. The timeline at the bottom shows when the tree changed, and can be used to navigate through all received trees. This tree is used by the GoCart when it is performing a parking behaviour. When parking, the GoCart returns to a docking station to charge, or to a predefined parking location where it will wait for the next task to arrive. The viewer is very useful for debugging behaviours, and seeing exactly what state the robot is in at a given time.

We plan to improve the documentation for the package, finalise the structure, improve the UI, and write more tests before we release it to the community sometime later this year. Hopefully we'll have something cool to show at ROSCon!

by Michal Staniaszek (noreply@blogger.com) on May 19, 2016 07:40 PM

May 17, 2016
All-rounder roboticist in Paris start-up

From Karsten Knese via ros-users@

EOS Innovation is a dynamic startup located in the south of Paris, with Parrot as a parent company. We are currently looking for a motivated roboticist to extend our team.

Job description:

The ideal candidate is a talented all-round roboticist with focused experience in control and indoor navigation. The candidate will be part of a small team of engineers, working mainly on stabilizing our current indoor navigation. The position also involves multiple R&D projects and hands-on hardware work.

Requirements:

  • fluent in C/C++
  • proficiency in Python
  • experience with ROS

Bonus points:

  • experience with real robot systems
  • experience with signal processing
  • good communication skills (direct client contact)

If you are interested, send your CV to contact@eos-innovation.fr. For more information, have a look at www.eos-innovation.eu.

by Tully Foote on May 17, 2016 07:08 PM

Upcoming ROS-I Events

Save the date for these upcoming events! For more details, refer to the events page.

  • 31 May, 10 AM Central, ROS-I Roadmapping (RIC Members): ROS-I Consortium members and/or ROS-I package administrators, please attend the upcoming ROS-I roadmapping event on Tuesday, May 31, 2016. The virtual meeting will use Anymeeting. Keep an eye out for the invitation.
    • Hosts: Paul Hvass (SwRI) and Ron Brown (EWI)
    • Agenda: We will share the current state of the ROS-I roadmap and will discuss ideas for new enhancement proposals.

  • 14 June, 9 AM Central, ROS-I Community Meeting (Public, Registration Required): Join us for the next series of presentations and discussion about ROS-Industrial. Here is the agenda:
    • Host: Paul Hvass, SwRI
    • Agenda:
    • Initiative to Create a PackML State Machine Library for ROS-I, Lex Tinker-Sackett, 3M
    • UT NRG Planned Code Release, Mitch Pryor, UT NRG
    • Multi-arm Control in MoveIt!, Dave Coleman, CU Correll Lab
    • Industrial CI, Isaac Saito, TORK
    • Open Discussion
  • 14-15 July, 8:30 AM SGT, ROS-I Asia-Pacific Workshop (Public, Registration Required):
    • Hosts: Nicholas Yeo (A*STAR ARTC), I-Ming Chen (NTU)
    • Agenda: The ROS-I Asia-Pacific Workshop will take place in Singapore on 14-15 July. We are excited to bring the ROS-I workshop to Asia for the first time.
  • 21 August, ARIAC Competition Kickoff at CASE Conference (Open to Conference Attendees):
    • Host: Craig Schlenoff, NIST
    • Agenda: We invite you to attend the Conference on Automation Science and Engineering (CASE), where we will have the official competition kickoff and workshop on Sunday, August 21.

by Paul Hvass on May 17, 2016 03:14 PM

May 15, 2016
RTAB-Map Saves the Kidnapped Robot

One of the more difficult challenges in robotics is the so-called “kidnapped robot problem.”  Imagine you are blindfolded and taken by car to the home of one of your friends but you don’t know which one.  When the blindfold is removed, your challenge is to recognize where you are.  Chances are you’ll be able to determine your location, although you might have to look around a bit to get your bearings.  How is it that you are able to recognize a familiar place so easily?

It’s not hard to imagine that your brain uses visual cues to recognize your surroundings.  For example, you might recognize a particular painting on the wall, the sofa in front of the TV, or simply the color of the walls.  What’s more, assuming you have some familiarity with the location, a few glances would generally be enough to conjure up a “mental map” of the entire house.  You would then know how to get from one room to another or where the bathrooms are located.

Over the past few years, Mathieu Labbé from the University of Sherbrooke in Québec has created a remarkable set of algorithms for automated place learning and SLAM (Simultaneous Localization and Mapping) that depend on visual cues similar to what might be used by humans and other animals.  He also employs a memory management scheme inspired by concepts from the field of Psychology called short term and long term memory.  His project is called RTAB-Map for “Real Time Appearance Based Mapping” and the results are very impressive.

Real Time Appearance Based Mapping (RTAB-Map)

The two images below are taken from a typical mapping session using Pi Robot and RTAB-Map:

[images: rtabmap-picture-1, rtabmap-picture-features-1]

The picture on the left is the color image seen through the camera.  In this case, Pi is using an Asus Xtion Pro depth camera set at a fairly low resolution of 320×240 pixels.  On the right is the same image where the key visual features are highlighted with overlapping yellow discs. The visual features used by RTAB-Map can be computed using a number of popular techniques from computer vision including SIFT, SURF, BRIEF, FAST, BRISK, ORB or FREAK.  Most of these algorithms look for large changes in intensity in different directions around a point in the image.  Notice therefore that there are no yellow discs centered on the homogeneous parts of the image such as the walls, ceiling or floor.  Instead, the discs overlap areas where there are abrupt changes in intensity such as the corners of the picture on the far wall.  Corner-like features tend to be stable properties of a given location and can be easily detected even under different lighting conditions or when the robot’s view is from a different angle or distance from an object.

RTAB-Map records these collections of visual features in memory as the robot roams about the area.  At the same time, a machine learning technique known as the “bag of words model” looks for patterns in the features that can then be used to classify the various images as belonging to one location or another.  For example, there may be a hundred different video frames like the one shown above but from slightly different viewpoints that all contain visual features similar enough to assign to the same location.  The following image shows two such frames side by side:

[image: rtabmap-image-match]

Here we see two different views from essentially the same location.  The pink discs indicate visual features that both images have in common and, as we would expect from these two views, there are quite a few shared features.  Based on the number of shared features and their geometric relations to one another, we can determine if the two views should be assigned to the same location or not.  In this way, only a subset of the visual features needs to be stored in long term memory while still being able to recognize a location from many different viewpoints.  As a result, RTAB-Map can map out large areas such as an entire building or an outdoor campus without requiring an excessive amount of memory storage or processing power to create or use the map.
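
The "bag of words" idea from the paragraphs above can be sketched with a toy example. Everything here is invented for illustration (a tiny random vocabulary and made-up 8-dimensional descriptors); real systems like RTAB-Map use binary descriptors and much larger vocabularies, but the quantize-then-compare-histograms logic is the same.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "vocabulary" of 4 visual words (cluster centres in an 8-D descriptor space).
vocabulary = rng.normal(size=(4, 8))

def bag_of_words(descriptors, vocab):
    """Quantize each descriptor to its nearest visual word and normalize the counts."""
    dists = np.linalg.norm(descriptors[:, None, :] - vocab[None, :, :], axis=2)
    words = dists.argmin(axis=1)                      # nearest word per descriptor
    hist = np.bincount(words, minlength=len(vocab)).astype(float)
    return hist / hist.sum()                          # normalized word histogram

# Two frames of the same place: similar descriptors plus a little viewpoint noise.
frame_a = vocabulary[[0, 0, 1, 2]] + 0.05 * rng.normal(size=(4, 8))
frame_b = vocabulary[[0, 1, 1, 2]] + 0.05 * rng.normal(size=(4, 8))
# A frame from somewhere else entirely.
frame_c = vocabulary[[3, 3, 3, 3]] + 0.05 * rng.normal(size=(4, 8))

h_a, h_b, h_c = (bag_of_words(f, vocabulary) for f in (frame_a, frame_b, frame_c))

def similarity(p, q):
    """Histogram intersection: 1.0 for identical histograms, 0.0 for disjoint ones."""
    return np.minimum(p, q).sum()

print(similarity(h_a, h_b), similarity(h_a, h_c))
```

Comparing word histograms instead of raw features is what keeps the memory and compute budget bounded: each stored location is a short vector of word counts, not a full set of descriptors.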

Note that even though RTAB-Map uses visual features to recognize a location, it is not storing representations of human-defined categories such as “painting”, “TV”, “sofa”, etc.  The features we are discussing here are more like the receptive field responses found in lower levels of the visual cortex in the brain.  Nonetheless, when enough of these features have been recorded from a particular view in the past, they can be matched with similar features in a slightly different view as shown above.

RTAB-Map can stitch together a 3-dimensional representation of the robot’s surroundings using these collections of visual features and their geometric relations.  The Youtube video below shows the resulting “mental map” of a few rooms in a house:

The next video demonstrates a live RTAB-Map session where Pi Robot has to localize himself after being set down in a random location.  Prior to making the video, Pi Robot was driven around a few rooms in a house while RTAB-Map created a 3D map based on the visual features detected.  Pi was then turned off (sorry dude!), moved to a random location within one of the rooms, then turned on again.  Initially, Pi does not know where he is.  So he drives around for a short distance gathering visual cues until, suddenly, the whole layout comes back to him and the full floor plan lights up.  At that point we can set navigation goals for Pi and he autonomously makes his way from one goal to another while avoiding obstacles.

by Patrick Goebel on May 15, 2016 03:48 PM

May 13, 2016
Shaun Edwards (SwRI): The Descartes Planning Library for Semi-Constrained Cartesian Trajectories

From OSRF

Descartes is a path planning library designed to solve the problem of planning with semi-constrained trajectories. Semi-constrained means that the path specifies fewer degrees of freedom than your robot has. In other words, when planning a path there are one or more "free" axes that the robot can move any which way without disrupting the path. Exploiting those axes creatively can open up the planning space, something traditional robots (especially in the industrial space) usually can't do. That limitation results in reduced workspaces and (most dangerous of all) increased reliance on human intuition during the planning process.
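
To make "free axis" concrete, here is a toy sketch in Python (not the actual C++ Descartes API): a process point fixes the tool position but leaves rotation about the tool axis unconstrained, so a planner can discretize that axis and pick whichever candidate keeps the motion smooth. The sample count and the nearest-angle cost are assumptions made for illustration only.

```python
import math

def sample_free_axis(n=8):
    """Discretize the unconstrained tool-axis rotation into n candidates."""
    return [2 * math.pi * k / n for k in range(n)]

def best_candidate(prev_angle, n=8):
    """Stand-in for the planner's cost search: prefer the candidate
    closest to the previous path point's angle (smooth joint motion)."""
    return min(sample_free_axis(n), key=lambda a: abs(a - prev_angle))

print(best_candidate(0.7))  # picks pi/4, the nearest of 8 samples
```

The real library evaluates such candidates against full inverse-kinematics solutions and searches a graph of them across the whole trajectory, but the core idea is the same: the free axis turns one pose into many options.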

Descartes was designed to generate common sense plans, exhibiting similar characteristics to paths planned by a human. It can solve easy problems quickly, and difficult problems eventually, integrating hybrid trajectories and dynamic replanning. It's easy to use, with a GUI that allows you to quickly set anchor points that the robot replans around, with visual confirmation of the new path. The second half of Shaun's ROSCon talk is an in-depth explanation of Descartes' interfaces and implementations intended for path planning fans (you know who you are).

As with many (if not most) of the projects being presented at ROSCon, Descartes is open source, and all of the development is public. If you'd like to try it out, the current stable release runs on ROS Hydro, and a tutorial is available on the ROS Wiki to help you get started.

Next up: Amit Moran & Gila Kamhi (Intel). Check out last week's post: Phobos -- Robot Model Development on Steroids

by Tully Foote on May 13, 2016 05:41 PM

Job opening: Software Engineer at Dynamic Legged Systems lab of IIT Genova

From Claudio Semini via ros-users@

The Dynamic Legged Systems Lab (DLS Lab) at Istituto Italiano di Tecnologia (IIT) http://www.iit.it/en/advr-labs/dynamic-legged-systems.html is looking for a full time

SOFTWARE ENGINEER (deadline 7th of May!)

with proven experience in programming (mostly C and C++).

The DLS Lab is known for cutting-edge research in the area of high-performance legged robots. The Lab's main research platform is the hydraulic robot HyQ (http://www.iit.it/hyq), one of the world's top performing quadruped robots. Its successor is the new HyQ2Max robot.

The successful candidate will be responsible for developing software in the area of embedded systems, communication and networking as well as higher level applications such as graphical interfaces to support the different projects within the DLS Lab.

Please visit the following page for a detailed list of requirements and other info: https://www.iit.it/careers/openings/opening/138-software-engineer-and-developer

The highly competitive salary will depend on qualifications and experience and will include additional health benefits.

To apply please send electronically your detailed CV, university transcripts and cover letter outlining motivation, experience and qualifications for the post to selezioni@iit.it by May 7th, 2016 stating "DLSLab SW 2016" in the subject of the e-mail.

by Tully Foote on May 13, 2016 03:37 AM

May 11, 2016
ICRA 2016: the meeting point of the international robotics community, with PAL Robotics

One of the biggest robotics conferences worldwide is around the corner: PAL Robotics is a sponsor at the International Conference on Robotics and Automation (ICRA) 2016, held in Stockholm, Sweden (May 16-21, 2016). REEM-C and TIAGo robots are ready to show what they are capable of with many live demonstrations. Meet the robots and find more surprises at PAL Robotics’ stand (No. 6)!

Humanoid REEM-C, a robotic platform for the SocSMCs FET EU project

REEM-C to be at ICRA 2016

ICRA 2016 is partially hosted by Kungliga Tekniska Hoegskolan (KTH), one of the partners of the SocSMCs consortium, an EU FET project in which PAL Robotics is also involved. The SocSMCs project studies the human cognitive system and social behaviour in order to improve interactions between people, and between people and robots. SocSMCs’ goal is to understand the biological processes at work when a person reacts naturally to a stimulus.

The project’s results will benefit multidisciplinary studies, from neural connections to social behaviour to robotics development. PAL Robotics’ humanoid REEM-C is used by SocSMCs to develop and test the project’s results in different scenarios. At ICRA, REEM-C will perform live demos developed within the SocSMCs framework, involving admittance control. REEM-C will also show demos using the Whole Body Control software developed by PAL Robotics.

Robots like TIAGo for the industry of the future, to be discussed at Industry Forum

The industrial sector stands to benefit from robotics in multiple ways. ICRA 2016 is bringing together experts in the field at the Industry Forum, which will take place on Thursday the 19th under the theme “Partnering for modular, open solutions to meet industry needs”. PAL Robotics is giving a talk at the Industry Forum about modular and open-source solutions like TIAGo for Industry 4.0 (SWCC Room 35/36, 10:30-11:50 am).

PAL Robotics developed TIAGo as a research platform with ideal features for cooperating with people: it can serve as an industrial collaborative robot and as a robot companion in daily life. TIAGo’s features combine navigation, perception and manipulation to perform a wide range of actions, useful in both domestic and industrial environments. Watch TIAGo doing manipulation tasks in this video, a small foretaste of what you can see at ICRA. The tasks were done via human tele-operation, using PAL Robotics’ Whole Body Control software suite with a simple gamepad:

The new generation of industrial robots, like TIAGo, is based on cobots that are useful and safe for the workers who share their workspace. PAL Robotics’ mobile platform has an open interface that can be configured in a simple and safe way, and it is 100% integrated with ROS. In addition, TIAGo’s hardware architecture is modular and customizable, with optional components such as the sensors or the end-effector, which can be a parallel gripper or a 5-fingered humanoid hand.

We are waiting for you at PAL Robotics stand no. 6 – ICRA 2016!

The post ICRA 2016: the meeting point of the international robotics community, with PAL Robotics appeared first on PAL Robotics Blog.

by Judith Viladomat on May 11, 2016 05:56 PM

May 10, 2016
Diagnostics package update: dynamic analysers

From Michal Staniaszek via ros-users@

Version 1.8.9 of the diagnostics package (Indigo and later) adds new functionality to the diagnostic aggregator: you can now change the aggregator at runtime by dynamically loading or unloading diagnostic analysers. This can be done by including a node in a launch file, or directly from code if you require more control. The intention of the change is to give the aggregator more flexibility, to allow individual packages to specify the analysers they need, and to reduce clutter in the diagnostic aggregator GUI.

Please see the tutorial for examples and more information.

by Tully Foote on May 10, 2016 10:09 PM

May 06, 2016
Kai von Szadkowski (University of Bremen): Phobos -- Robot Model Development on Steroids

From OSRF

To model a robot in rviz, you first need to create what's called a Unified Robot Description Format (URDF) file, which is an XML-formatted text file that represents the physical configuration of your robot. Fundamentally, it's not that hard to create a URDF file, but for complex robots, these files tend to be enormously complicated and very tedious to put together. At the University of Bremen, Kai von Szadkowski was tasked with developing a URDF model for a 60-degree-of-freedom robot called MANTIS (Multi-legged Manipulation and Locomotion System). Kai got a bit fed up with the process and developed a better way of doing it, called Phobos.

MANTIS: http://robotik.dfki-bremen.de/en/research/robot-systems/mantis.html

Phobos is an add-on for a piece of free and open-source 3D modeling and rendering software called Blender. Using Blender, you can create armatures, which are essentially kinematic skeletons that you can use to animate a 3D character. As it turns out, there are some convenient parallels between URDF models and 3D models in Blender: the links and joints in a URDF file equate to armatures and bones in Blender, and both use similar hierarchical structures to describe their models. Phobos adds a new toolbar to Blender that makes it easy to edit these models by adding links, motors, sensors, and collision geometries. You can also leverage Blender's Python scripting environment to automate as much of the process as you'd like. Additionally, Phobos comes with a sort of "robot dictionary" in Python that manages all of the exporting to URDF for you.
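
For readers who haven't seen one, a minimal hand-written URDF shows the link/joint hierarchy that Phobos maps onto Blender armatures and bones. This is a hypothetical one-joint arm; a robot like MANTIS repeats this structure across 60 joints:

```xml
<?xml version="1.0"?>
<robot name="toy_arm">
  <link name="base_link"/>
  <link name="upper_arm"/>
  <!-- one revolute joint connecting the two links -->
  <joint name="shoulder" type="revolute">
    <parent link="base_link"/>
    <child link="upper_arm"/>
    <axis xyz="0 0 1"/>
    <limit lower="-1.57" upper="1.57" effort="10" velocity="1.0"/>
  </joint>
</robot>
```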

Since the native URDF format can't capture all of the information that can be incorporated into your model in Blender, Kai proposes an extended version of URDF called SMURF (Supplemental Mostly Universal Robot Format) that adds YAML files to a URDF, supporting annotations for sensors, motors, and anything else you'd like to include.

If any of this sounds good to you, it's easy to try it out: Blender is available for free, and Phobos can be found on GitHub.

by Tully Foote on May 06, 2016 07:04 PM

May 04, 2016
Husqvarna Research Platform

From Stefan Grufman via ros-users@

We would like to announce ROS support in some of our products. We will be showing this at ICRA 2016 (in Stockholm) from 16/5 to 20/5.

Husqvarna Group has been manufacturing and selling robotic lawn mowers for more than 20 years. These robots are pretty basic when it comes to sensors and intelligence, but we are of course researching how these products will change in the future. We have spent some time doing internal research, but in order to work better with you (the real researchers!) we have now adapted our robot (Automower 330X) to ROS by exposing an interface and implementing a driver for it (the driver will be available as open source soon). We really like the trend in robotics research towards robustness and long-term autonomy. This is an area where we think we can help boost the research by making our hardware available to researchers.

The idea is that we have a very robust and safe robot that will operate 24/7 in all weather conditions (except Scandinavian winter). It has a safety system (collision, lift and the boundary loop around your area), and it will automatically return to the charging station when charging is needed. There is also plenty of space to include your own set of sensors as well as computational power, both inside the chassis and outside. We can provide mechanical drawings of mounts that you can print on an SLS/SLA machine.

So, our offer to you is access to this platform, which we call the Husqvarna Research Platform (HRP), for use as an outdoor mobile robotics platform in your research. If you like, the safety system can be used to run multiple battery cycles without any need to handle docking and charging yourself. This could, for example, be used when collecting data sets over long periods of time. The HRP also supports manual mode, in which you have full control of the motors (through the "/cmd_vel" topic) and can do whatever you need. You can mount extra computing power (we usually use an Odroid XU4) and/or sensors of your choice.
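
As a sketch of what driving over "/cmd_vel" means in practice, the snippet below shows the standard differential-drive mapping from a geometry_msgs/Twist-style command (linear.x forward speed, angular.z turn rate) to left/right wheel speeds. The wheel-base value is a made-up placeholder, and this is plain Python rather than an actual rospy node:

```python
def cmd_vel_to_wheels(linear_x, angular_z, wheel_base=0.4):
    """Map a Twist-style command to (left, right) wheel speeds for a
    differential-drive base. wheel_base (metres) is a hypothetical value."""
    left = linear_x - angular_z * wheel_base / 2.0
    right = linear_x + angular_z * wheel_base / 2.0
    return left, right

print(cmd_vel_to_wheels(0.5, 0.0))  # straight ahead: both wheels at 0.5
print(cmd_vel_to_wheels(0.0, 1.0))  # turn in place: wheels counter-rotate
```

In a real setup a rospy subscriber on "/cmd_vel" would feed commands like these to the mower's motor controllers via the open-source driver mentioned above.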

The platform will be presented and demoed by Husqvarna as well as one of our research partners, Örebro University (AASS), during ICRA 2016. We will have a booth at the ICRA expo and would like to invite you all to come and talk with us there. During ICRA 2016 we will also be collecting your research ideas, and we will hand out the mower shown at the demo for the best idea!

Husqvarna Group information can be found here: http://www.husqvarnagroup.com/en

Information on our robotic products can be found here: http://www.husqvarna.com/uk/products/robotic-lawn-mowers/

by Tully Foote on May 04, 2016 08:49 PM

May 03, 2016
5D Robotics hiring for new positions in Cambridge, MA

From David Rohr via ros-users@

5D is looking for candidates for both full-time positions and summer internships. Please check out the links below to see our RecruiterBox listings:

Full-Time Roboticist

Robotics Intern

by Tully Foote on May 03, 2016 10:16 PM

Modbot at Hannover Messe 2016

Submitted by Saroya Whatley and Shawn Schaerer, Modbot

At Modbot, we make science fiction a reality by creating modular robotic parts that snap together easily into a variety of robotic configurations serving many different applications. By developing a system of modular joints and links that can be mass produced, Modbot is able to deliver industrial-quality motion control at prices accessible to larger manufacturing firms as well as makers and startups. The Modbot platform also includes pendant software that can be accessed via the cloud or locally on a computer or tablet (Windows, OS X, iOS, Android). The software allows the user not only to program a Modbot robot at the touch of a button, but also to simulate various robotic configurations in the virtual robot builder and to build custom graphical user interfaces. The Modbot platform puts the power and precision of high-end machinery into an easy-to-assemble, simple-to-understand package.

Modbot Alpha Robot Demo at Hannover Messe 2016 (Photo: Shawn Schaerer)

The ROS-Industrial Consortium has been a valuable resource and feedback engine in the development of Modbot's modular robotics system. As a member, Modbot works closely with the Consortium to use, develop, and promote ROS-Industrial. Currently, Modbot is working with the Consortium to release the CAD to ROS URDF Editor application.


by Paul Hvass on May 03, 2016 10:03 PM

Simbe Robotics is hiring robotics/research engineers
From Brad Bogolea via ros-users@

Simbe Robotics is currently hiring for a number of robotics-focused engineering roles in the San Francisco Bay Area.

At Simbe, we are automating brick & mortar retail through the use of mobile robots, computer vision, and cloud-based software. Our first product, Tally, provides retailers unprecedented visibility and insights into the state of their stores.

Current open positions include:

Robotics Software Engineer: https://jobs.lever.co/simberobotics.com/e15c5b16-5f6f-4469-9a3e-c3be65b887b9

Computer Vision Software Engineer: https://jobs.lever.co/simberobotics.com/7f842efa-e9e0-4a91-a47e-ed5f9c544130

Robotics Research Intern: https://jobs.lever.co/simberobotics.com/4952daea-00f4-419d-a613-18a0308c6b83

Dev Ops Engineer: https://jobs.lever.co/simberobotics.com/be3f094c-ccce-41d2-a71e-82fb09d1ada7

Full Stack Web Software Engineer: https://jobs.lever.co/simberobotics.com/78ea9088-be51-47c7-834a-c909eaa21639

by Tully Foote on May 03, 2016 07:03 AM

April 29, 2016
Mirko Bordignon (Fraunhofer IPA) and Shaun Edwards (SwRI): Bringing ROS to the Factory Floor
From OSRF


The ROS Industrial Consortium was established four years ago as a partnership between Yaskawa Motoman Robotics, Southwest Research Institute (SwRI), Willow Garage, and Fraunhofer IPA. The idea was to provide a ROS-based open-source framework for robotics applications, designed to make it easy (or at least possible) to leverage advanced ROS capabilities (like perception and planning) in industrial environments. Basically, ROS-I adds models, libraries, drivers, and packages to ROS that are specifically designed for manufacturing automation, with a focus on code quality and end user reliability.

Mirko Bordignon from Fraunhofer IPA opened the final ROSCon 2016 keynote by pointing out that ROS is still heavily focused on research and service robotics. This isn't a bad thing, but with a little help, there's an enormous opportunity for ROS to transform industrial robotics as well. Over the past few years, the ROS Industrial Consortium has grown into two international consortia (one in America and one in Europe), comprising over thirty members that provide financial and managerial support to the ROS-I community.

To help companies get more comfortable with the idea of using ROS in their robots, ROS-I holds frequent training sessions and other outreach events. "People out there are realizing that at least they can't ignore ROS, and that they actually might benefit from it," Bordignon says. And companies are benefiting from it, with ROS starting to show up in a variety of different industries in the form of factory floor deployments as well as products.

Bordignon highlights a few of the most interesting projects that the ROS-I community is working on at the moment, including a CAD to ROS workbench, getting ROS to work on PLCs, and integrating the OPC data protocol, which is common to many industrial systems.

Before going into deeper detail on ROS-I's projects, Shaun Edwards from SwRI talks about how the fundamental idea for a ROS-I consortium goes back to one of their first demos. The demo was of a PR2 using 3D perception and intelligent path planning to pick up objects off of a table. "[Companies were] impressed by what they saw at Willow Garage, but they didn't make the connection: that they could leverage that work," Edwards explains. SwRI then partnered with Yaskawa to get the same software running on an industrial arm, "and this alone really sold industry on ROS being something to pay attention to," says Edwards.

Since 2014, ROS-I has been refining a general purpose Calibration Toolbox for industrial robots. The goal is to streamline an otherwise time-consuming (and annoying) calibration process. This toolbox covers robot-to-camera calibration (with both stationary and mobile cameras), as well as camera-to-camera calibration. Over the next few months, ROS-I will be releasing templates for common calibration use cases to make it as easy as possible.
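
The flavor of the underlying math can be shown with a toy 2D version of the problem (this is not the toolbox's API; the real problem is a 3D hand-eye calibration that also handles camera intrinsics): given calibration-target points observed in the camera frame and the same points known in the robot frame, recover the rigid transform between the two frames by a least-squares fit.

```python
import math

def fit_rigid_2d(cam_pts, robot_pts):
    """Toy 2D camera-to-robot calibration: find theta, tx, ty such that
    rotating cam_pts by theta and translating by (tx, ty) gives robot_pts."""
    n = len(cam_pts)
    cx = sum(p[0] for p in cam_pts) / n
    cy = sum(p[1] for p in cam_pts) / n
    rx = sum(p[0] for p in robot_pts) / n
    ry = sum(p[1] for p in robot_pts) / n
    # accumulate cross and dot products of the centred point pairs
    s_cross = s_dot = 0.0
    for (ax, ay), (bx, by) in zip(cam_pts, robot_pts):
        ax, ay, bx, by = ax - cx, ay - cy, bx - rx, by - ry
        s_cross += ax * by - ay * bx
        s_dot += ax * bx + ay * by
    theta = math.atan2(s_cross, s_dot)  # least-squares rotation (2D Kabsch)
    tx = rx - (cx * math.cos(theta) - cy * math.sin(theta))
    ty = ry - (cx * math.sin(theta) + cy * math.cos(theta))
    return theta, tx, ty
```

Feeding in a handful of matched points recovers the transform; the templates mentioned above wrap this kind of estimation for the common stationary-camera and wrist-mounted-camera cases.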

Path planning is another ongoing ROS-I project, as is ROS support for CANOpen devices (to enable IoT-type networking), and integrated motion planning for mobile manipulators. ROS-I actually paid the developers of the ROS mobile manipulation stack to help with this. "Leveraging the community this way, and even paying the community, is a really good thing, and I'd like to see more of it," Edwards says.

To close things out, Edwards briefly touches on the future of ROS-I, including the seamless fusion of 3D scanning, intelligent planning, and dynamic manipulation, which is already being sponsored by Boeing and Caterpillar. If you'd like to get involved in ROS-I, they'd love for you to join them, and even if you're not directly interested in industrial robotics, there are still plenty of opportunities to be part of a more inclusive and collaborative ROS ecosystem.

Next up: Kai von Szadkowski (University of Bremen). Check out last week's post: MoveIt! Strengths, Weaknesses, and Developer Insight

by Tully Foote on April 29, 2016 11:34 PM

Next ROS Summer School by FH Aachen
From Patrick Wiesen

As mentioned in the February news, we are offering the next ROS Summer School from the 15th to the 26th of August 2016 at the University of Applied Sciences in Aachen (FH Aachen), Germany. This year a special UAV ROS weekend (27/28th August) will complement the ROS Summer School. Over 60 participants are already registered, but there are still some hacking seats left. The registration deadline is this coming weekend: the 30th of April! After that we will keep a waiting list for further participants. Register now!

aachen_summer_school_group.png

The following subjects are covered: ROS Basics, Communication, Hardware Interfacing, Teleoperation, Transforms, Gazebo Simulation, Landmark Detection, Localization, Mapping, Navigation, and Control, as well as a ROS-Industrial exhibition. All of this can be experienced on real hardware using our mobile robots, the FH Aachen Rovers, after learning some theory.

In addition to the above, it is worth mentioning the big success of our recent ROS Summer School at the Tshwane University of Technology (TUT) in Pretoria, South Africa. Thanks to everyone there who joined and supported us. It was great fun and a nice learning atmosphere! We had more than 20 participants, and they learned ROS from scratch. After one week, five teams competed with their autonomous FH Aachen Rovers in a final challenge on a round track that included Mapping and Localization. After five Summer Schools it was the first time that no Rover hit a wall. Congratulations!

This is what our participants managed in just one week, so let's see in August what they can do in two weeks!

The group photo shows our participants at TUT in South Africa, our colleagues from the 3D-printing Goethelab in Aachen (who also held a Summer School at TUT), and us, surrounded by happy robot enthusiasts.

by Tully Foote on April 29, 2016 10:47 PM

Team Delft APC 2016 Progress

Team Delft has qualified as one of the 16 finalist teams for the Amazon Picking Challenge 2016. The team is a joint effort of the startup Delft Robotics (Kanter van Deurzen a.o.) and the TU Delft Robotics Institute (Carlos Hernandez Corbato a.o.), supported by the RoboValley initiative (www.robovalley.com).

The goal of the Amazon challenge is “to strengthen the ties between the industrial and academic robotic communities and promote shared and open solutions to some of the big problems in unstructured automation.” In order to spur the advancement of these fundamental technologies, there will be two parallel competitions: the Pick Task and the Stow Task. For the Pick Task, target items for an Amazon order have to be removed from a standard Amazon warehouse shelf and placed into a tote. The Stow Task requires the reverse: target items have to be taken from a tote and stowed into the bins of the shelf. These tasks involve challenges in object recognition, grasping, dexterous manipulation, and motion planning.

Since January, Team Delft has been developing an industrial-grade robotic system for the challenge. It involves a 7-degree-of-freedom Motoman robot mounted on a rail, courtesy of sponsor Yaskawa. Ensenso cameras from sponsor Imaging Development Systems will feed high-quality 3D images into a vision pipeline for object recognition and localization using Deep Learning techniques. The team is fully committed to the ROS-Industrial initiative: ROS and ROS-Industrial components for motion planning, robot control, grasping, and point cloud processing will be integrated into a fault-tolerant control architecture for the robot.

The application video with which Team Delft qualified as finalists for the Amazon Picking Challenge.

The Amazon Picking Challenge will be held in conjunction with RoboCup 2016 in Leipzig, Germany from June 30 to July 3, 2016.

Follow Team Delft at: @teamdelft_apc

For more information about the Amazon Picking Challenge, please visit http://amazonpickingchallenge.org/

by Paul Hvass on April 29, 2016 10:44 PM
