October 23, 2014
Software Engineering Positions at FYS Systems

From Michael Ferguson via ros-users@

FYS Systems is looking for software engineers with broad experience on real robot platforms and a deep background in one or more of the following areas: navigation, motion planning, robot perception, and human-robot interfaces. We have multiple openings for both junior- and senior-level engineers, with start dates both immediately and throughout early 2015.

Required Skills:

  • BS or MS in Computer Science, Robotics, or a related field.
  • 1+ year of software engineering experience, or extensive software engineering experience in an undergraduate or graduate program.
  • Experience with C++ and/or Python in a Linux environment.
  • Experience in robot navigation, motion planning, perception, or human-robot interfaces.
  • Experience with Robot Operating System (ROS).

Nice To Haves:

  • Experience with MoveIt, SBPL and/or OMPL.
  • Experience with OpenCV or PCL.
  • Experience with web development.
  • Experience with CMake.

To apply, please visit: https://fyssystems.has-jobs.com/SoftwareEngineerSunnyvale/32999/0

by Tully Foote on October 23, 2014 06:09 PM

October 22, 2014
Call for authors for a ROS Handbook

From Anis Koubaa via ros-users@

I am coordinating with Springer to edit a handbook on the Robot Operating System. There are only a few books on ROS (http://wiki.ros.org/Books), and they mainly provide a brief introduction to ROS and a few basic applications. This does not reflect the huge amount of work being done in the community, and I feel the need for a complete reference on the topic.

The prospective handbook will cover ROS from foundations and basics to advanced research work from both academia and industry. Both tutorials and research papers will be sought. The book should cover several robotics areas including, but not limited to, robot navigation, UAVs, arm manipulation, multi-robot communication protocols, Web and mobile interfaces using ROS, integration of new robotic platforms into ROS, computer vision applications, development of service robots using ROS, and development of new libraries and packages for ROS. Every book chapter should be accompanied by working code, to be placed later in a common repository for readers.

To express your interest in the handbook and your intention to propose a chapter, I would like to invite you to fill in the following form. The proposed chapters are considered an initial expression of interest and will be included in the handbook proposal; they do not represent any commitment on the author's part at this stage. An official call for chapters with instructions and deadlines will be announced soon.

Thank you; I look forward to receiving your feedback.


by Tully Foote on October 22, 2014 05:37 PM

October 21, 2014
NavVis announces ROS-based large-scale mapping technology

NavVis presents its new large-scale mapping technology: the impressive shipping exhibition at the Deutsches Museum can now be explored online and in 3D.

NavVis uses a trolley equipped with three laser scanners and six cameras. As a human operator wheels the trolley through the area to be mapped, the scanners record the horizontal and vertical dimensions of the hallway while the cameras record panoramas. The software framework is based on ROS, which allowed for a very modular design, according to Suat Gedikli, Chief Software Architect at NavVis. One of the main advantages of their mapping device is its efficiency: in contrast to Google's mapping trolley, the operator does not have to crouch to move out of the field of view of the panoramic camera. With the patented NavVis camera head, the six cameras are arranged so that the trolley operator is in their blind spot and therefore not visible. Hence, 360-degree panoramas can be recorded continuously while the trolley is moving.


The start-up recently teamed up with Deutsches Museum to digitize their impressive shipping exhibitions, which were mapped in less than one hour. The result is a 3D map of the exhibition, overlaid with photos of every square inch of all the surfaces.


Similar to Google Street View, their HTML5-based IndoorViewer allows people to explore the museum virtually online. Additionally, administrators can attach content such as text, images, and video to various points on the map, which users can then access. This interactive feature also lets users obtain measurements between different points in the building (important for architects and for construction-site monitoring applications).

NavVis, founded only in May of last year, is focused on public showcases and business-to-business applications. Felix Reinshagen, co-founder and managing director: "As the first step, our application is aimed at companies in the building management segment that are confronted with challenges such as documentation, inventory, path-finding and task management. We make our hardware, software and service available to them so that they can have their building and industrial plants digitized while handling the usage of their data according to their own requirements. An area the size of the Deutsches Museum is mapped and posted online within three working days for a four-figure amount."

by Tully Foote on October 21, 2014 09:01 PM

October 20, 2014
Industrial Calibration Library Update and Presentation

From Paul Hvass, via ros-users@

Robotics and automation systems are increasingly reliant on 2D and 3D imaging systems to provide perception and pose estimation. Calibration of these camera/robot systems is a necessary, time-consuming, and often poorly executed process for registering image data to the physical world. SwRI is continuing to develop the industrial calibration library to provide tools for state-of-the-art calibration, with the goal of providing reliably accurate results for non-expert users. Using the library, system designers can script a series of observations that ensure sufficient diversity of data to guarantee system accuracy. Interfaces to motion devices such as robots can often be included to fully automate the calibration procedure.

More information can be found on the ROS-I blog post.

by Ugo Cupcic on October 20, 2014 08:31 PM

Alten Mechatronics Applies Robotic Technology in FEI Transmission Electron Microscopes (TEM)

Submitted by Simon Jansen, Alten Mechatronics

FEI designs, produces and supports a wide variety of high performance microscope systems, which can visualize details up to the picometer scale.

In their small dual beam (SDB) systems, a moving stage platform must be positioned in eight degrees of freedom (DOF). Besides the stage, the microscope contains many more components, such as an electron column, an ion column, multiple detectors, and a gas injection system. These SDB systems are used not only as pure microscopes but also as nano-workshop systems: material can be added to or removed from a sample while the sample is being inspected.

Because these microscopy systems have such a wide variety of applications, the microscope chamber gets rather full of components and parts, making positioning and moving the eight-DOF stage more and more challenging.

In current systems, an in-house developed solution is used to plan the motion trajectories for the stage. The most important requirement is that the stage move collision-free between two configurations. With the current solution, each movement between stage configurations is programmed by hand. In some cases the planning problem is simply too complex and the in-house solution cannot find an answer; in other cases, the solution it finds takes too long to execute because the axes can only be moved sequentially, which is not feasible when moving eight axes.

FEI is investigating alternative solutions for planning the stage's motion. For these alternatives it is important that the stage move collision-free within a certain time, and that time should be minimized to obtain the highest possible throughput. One benefit of motion planning is the possibility of moving axes in parallel instead of sequentially.

Alten Mechatronics performed a proof-of-principle study in which a simulation model of the microscope was developed. Using the motion planner MoveIt!, collision-free paths were generated for different stage configurations. Three cases were selected and compared against the current solution: one to benchmark the new motion planner, and the other two to show its capabilities in extreme situations.

Alten showed that with MoveIt! it became possible to calculate stage trajectories up to five times faster than with the in-house developed solution. In the other cases, MoveIt! found trajectories that the in-house solution could not.

Using MoveIt!, it is possible to realize complex stage movements that are guaranteed to be collision-free, resulting in much higher throughput.

For the next phase, the results of the first ROS-Industrial Focused Technical Project (http://rosindustrial.org/ftp-status/) will be used to improve the planner's performance. In the first phase, MoveIt! sometimes generated sub-optimal paths, which had to be rejected. With the optimization work of the FTP, we will be able to guarantee near-optimal solutions and to predict a lower, reliable cycle time.

Mark Geelen & Simon Jansen Contact: rosindustrial(at-sign)alten.nl

by Paul Hvass on October 20, 2014 06:37 PM

October 14, 2014
3rd ROS-Industrial Training at Fraunhofer IPA: ROS for Industrial Applications

ROS-Industrial Technology Seminar: "ROS for Industrial Applications"

Friday, October 24, 2014
More information: Stuttgarter Produktionsakademie


The open source “Robot Operating System” (ROS) offers highly developed robotics software components that can be used in flexible industrial applications. In this practice-oriented tutorial, users will be introduced to the basic functionality of the ROS framework and the ROS-Industrial initiative. Participants will get an impression of the system's power and learn how to use it in their own applications.

Especially in dynamic environments with a variety of different workpieces, there is demand for highly flexible automation solutions supported by sensors and intelligent software components. The open source framework ROS is a cost-efficient, reusable, and powerful such solution. It offers a large number of intelligent algorithms, methods, and integrated libraries. One advantage is that software as well as hardware components can easily be exchanged thanks to a network-based communication layer and standardized interfaces, which makes software development faster and lowers overall development costs.
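The exchangeability argument above can be sketched with a toy publish/subscribe bus. This is a conceptual illustration only; the API below is hypothetical and greatly simplified (real ROS uses typed messages over TCPROS, not an in-process dict bus):

```python
from collections import defaultdict

# Toy, ROS-inspired publish/subscribe bus: subscribers depend only on a
# topic name and message contract, never on the publishing component,
# so either side can be swapped out without touching the other.
class Bus:
    def __init__(self):
        self.subs = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subs[topic].append(callback)

    def publish(self, topic, msg):
        for cb in self.subs[topic]:
            cb(msg)

bus = Bus()
log = []

# Any node publishing the agreed message shape on /scan_summary could
# replace this one; the subscriber is unaware of which driver ran.
bus.subscribe("/scan_summary", lambda msg: log.append(msg["min_range"]))
bus.publish("/scan_summary", {"min_range": 0.42})
print(log)  # [0.42]
```

The design point is the decoupling: the "standardized interface" is the topic name plus message shape, and exchanging hardware or software components means swapping publishers while subscribers stay untouched.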

In robotics research, ROS is a well-established standard. The next step is to bring this power to industrial applications, which is why the ROS-Industrial initiative was founded. This tutorial will introduce participants to the theoretical basics of ROS and teach them how to use it practically in their own industrial applications.

Workshop Topics:

* ROS – Introduction and Basics
* 3D-Perception using ROS
* Localization and Navigation using ROS
* Motion Planning with MoveIt!
* Application Development using ROS
* Introduction to ROS-Industrial Initiative

In small groups, attendees will have the chance to gain hands-on experience with these topics under the guidance of experts in the respective fields. The seminar is suitable for attendees with or without prior ROS experience and will be held bilingually (German and English).

Please feel free to forward this announcement to colleagues, project collaborators, or anyone else who may be interested in this seminar.

If you have questions please don’t hesitate to contact us!

Hope to see some of you in October (or for the 4th seminar on Thursday, March 5th, 2015)!

Contact Person:

Dipl.-Ing. Florian Weißhardt, Project Manager ROS-Industrial, Fraunhofer Institute for Manufacturing Engineering and Automation IPA, Stuttgart. Florian.Weisshardt@ipa.fraunhofer.de

by Julia Selje on October 14, 2014 04:19 PM

New Package: Advanced ROS Network Introspection (ARNI)
From Andreas Bihlmaier via ros-users@

Dear ROS community,

I'm pleased to announce http://wiki.ros.org/arni - a collection of tools for Advanced ROS Network Introspection.
From the wiki page:
"Advanced ROS Network Introspection (ARNI) extends the /statistics features introduced with Indigo and completes the collected data with measurements about the hosts and nodes participating in the network. These are gathered from an extra node that has to run on each host machine. All statistics or metadata can be compared against a set of reference values using the monitoring_node. The rated statistics allow optional countermeasures to be run when a deviation from the reference is detected, in order to remedy the fault or at least bring the system into a safe state."

No modification of existing nodes is required in order to use the monitoring features, so the barrier to entry is very low:
- See the arni tutorial
- git clone https://github.com/ROS-PSE/arni into your catkin_ws
- roslaunch arni_core init_params.launch
- Start all your other nodes
- rosrun rqt_gui rqt_gui
- Plugins -> Introspection -> Arni-Detail
  (click on an item (host, node, topic, or connection) in the tree view to get more details and graphs in the other widget)
- Enjoy out-of-the-box distributed metadata-based monitoring

If you want to use the more advanced features in your own ROS network, see the documentation on how to write "specifications" and "constraints".

The documentation, including tutorials, can be found in the wiki (http://wiki.ros.org/arni/Tutorials).

Please give feedback and report any bugs found.

Many thanks to my students who worked hard on this: Matthias Hadlich, Matthias Klatte, Sebastian Kneipp, Alex Weber, and Micha Wetzel.

by Tully Foote on October 14, 2014 12:58 AM

October 11, 2014
Groovy Galapagos EOL Complete
As we have now released Indigo and are looking forward to Jade, it is time to retire Groovy.

Groovy was first officially released at the end of 2012, but work toward the release had started in early 2012.[1] During its life cycle, Groovy almost doubled the number of released packages, reaching a maximum of 900.

Reviewing the history of the rosdistro repository, which contains the release metadata, reveals 2,912 commits from 127 contributors over the history of the Groovy release. This represents the maintainers making releases and does not count the many more contributors to the source code of the individual packages. There were commits on 612 different days out of the 794 days tracked in this repository, meaning that on average Groovy packages were released more than 5 days per week. For a quick visualization of the activity on the repository, we've put together a rendering of commits to the groovy subdirectory. (These statistics only count catkin-based releases, not the 178 rosbuild packages indexed separately.)

ROS Groovy Galapagos Rosdistro Git Activity from OSRF on Vimeo.

As you may have already noticed, last week we disabled all the Groovy jobs on the build farm. We have kept them there for reference but do not intend to re-enable them. Along the same lines, we can accept pull requests to keep source builds working on Groovy (such as when a repository is relocated to a new host), but cannot accept pull requests for new Groovy releases.

As always, we'd like to pay tribute to the hundreds of people who put in the time to make Groovy happen. It would not have happened without your efforts.

by Tully Foote on October 11, 2014 11:21 AM

October 08, 2014
Amazon Picking Challenge @ ICRA 2015
From Joe Romano via ros-users@

Amazon Picking Challenge @ ICRA 2015

Greetings colleagues! We are excited to announce a new manipulation contest to be held at ICRA in May 2015 (http://icra2015.org) in Seattle, WA, USA.

This may be of particular interest to the ROS community since we are encouraging researchers to push their new developments into the open-source domain (as a requirement to be eligible for the available travel support and contest prizes). We hope this leads to the contribution of new and interesting work within ROS.


Amazon is able to quickly package and ship millions of items to customers from a network of fulfillment centers all over the globe. This wouldn't be possible without leveraging cutting-edge advances in technology. Amazon's automated warehouses are successful at removing much of the walking and searching for items within a warehouse. However, commercially viable automated picking in unstructured environments still remains a difficult challenge. In order to spur the advancement of this fundamental technology we are excited to be organizing the first Amazon Picking Challenge at ICRA 2015. It is our goal to strengthen the ties between the industrial and academic robotic communities and promote shared and open solutions to some of the big problems in unstructured automation. To this end the contest will be awarding travel grants to ICRA 2015, practice equipment, and a large prize pool for the competition winners.

This competition will challenge entrants to build their own robot hardware and software that can attempt simplified versions of the general task of picking items from shelves. The robots will be presented with a stationary lightly populated inventory shelf and be asked to pick a subset of the products and put them on a table. The challenge combines object recognition, pose recognition, grasp planning, compliant manipulation, motion planning, task planning, task execution, and error detection and recovery. The robots will be scored by how many items are picked in a fixed amount of time, with $26,000 in prizes being awarded. Participants will be encouraged to share and disseminate their approach to improve future challenge results and industrial implementations.

Find out more and sign up for email updates at the challenge website:
Or email us at:

by Tully Foote on October 08, 2014 08:30 PM

ROSCon 2015 Location and Date Survey
We had a great time at ROSCon 2014! (If you missed it, we've posted videos of all the presentations online at http://roscon.ros.org/2014/program/.)

Although it's still a long way off, we need to look forward to when and where to hold the next instance. To help facilitate that process, we'd like the community's feedback on what times and locations would best fit their schedules. Please take a minute to let us know where you would be able to join us for our next event.

There is a place for your name and email, but it's not required. 

by Tully Foote on October 08, 2014 12:17 AM

October 06, 2014
Announcing package for the Schunk Servo-electric 5-Finger Gripping Hand SVH
From Georg Heppner via ros-users@

Hi everyone,

It is my pleasure to announce the schunk_svh_driver [1] package, which you can use to control the Servo-electric 5-Finger Gripping Hand SVH [2] produced by Schunk.

The SVH is the first five-finger hand produced in series; its 1:1 scale and anthropomorphic design enable a wide range of complex motions. It provides an easy interface for standalone use as well as for integration into your own project, comes with a detailed 3D model based on the original CAD data, and was tested extensively during several public demonstrations such as Automatica. Comprehensive documentation is already provided on the wiki and should allow you to easily use the package in your projects. At [3] you can see a YouTube video of the hand in combination with the LWA4P, for which an early version of this package was used.

The package is currently available via git [4] and will soon be available via the package manager. It was tested with Hydro and Indigo but should work under most circumstances.

Please let me know if you have any feedback, suggestions or any trouble using the package.

Best Regards
Georg Heppner

[1] http://wiki.ros.org/schunk_svh_driver
[2] http://mobile.schunk-microsite.com/en/produkte/produkte/servoelektrische-5-finger-greifhand-svh.html
[3] https://www.youtube.com/watch?v=hPtSbPzROrs
[4] https://github.com/fzi-forschungszentrum-informatik/schunk_svh_driver

by Tully Foote on October 06, 2014 07:00 PM

October 02, 2014
Bridging the Gap between ROS and JAUS
from Danny Kent, via ros-users@

Hello ROS Users!

One of the questions we've heard quite often lately is how to bridge the gap between ROS and JAUS. The Joint Architecture for Unmanned Systems (JAUS) is an SAE International standard for command and control of robots. A lot of people have built solutions based on ROS and need to quickly and reliably integrate those solutions using JAUS, or vice versa. Previously, the learning curve was steep, as there was a lack of software to solve the problem. To help, we decided to build a software bridge between ROS and JAUS. We call it jROS, and we've just finished our first cut at it!

jROS allows users to combine the power and flexibility of ROS with the maturity and robustness of the JAUS standard. The software consists of a set of ROS messages and services defined with respect to the JAUS message structure. This creates a 1-to-1 mapping of data between JAUS and ROS via our jROS package. Adding the jROS topics to an existing ROS system is trivial; once integrated, ROS data can be exchanged with the JAUS network in a well-defined way.
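Conceptually, such a 1-to-1 mapping is a field-for-field translation between message schemas. A hedged sketch in Python; the field names below are hypothetical illustrations, not the actual jROS or JAUS message definitions:

```python
# Hypothetical illustration of a 1-to-1 ROS/JAUS message mapping:
# each JAUS field maps to exactly one field of a geometry_msgs/Twist-style
# structure. Consult the jROS package for the real definitions.
def jaus_to_ros_velocity(jaus_msg):
    """Map a JAUS velocity-state-style record to a Twist-style dict."""
    return {
        "linear":  {"x": jaus_msg["velocity_x"],
                    "y": jaus_msg["velocity_y"],
                    "z": jaus_msg["velocity_z"]},
        "angular": {"x": jaus_msg["rate_roll"],
                    "y": jaus_msg["rate_pitch"],
                    "z": jaus_msg["rate_yaw"]},
    }

twist = jaus_to_ros_velocity({
    "velocity_x": 0.5, "velocity_y": 0.0, "velocity_z": 0.0,
    "rate_roll": 0.0, "rate_pitch": 0.0, "rate_yaw": 0.1,
})
print(twist["linear"]["x"], twist["angular"]["z"])  # 0.5 0.1
```

Because the mapping is bijective, the reverse direction (ROS to JAUS) is the same table read the other way, which is what makes round-tripping data through the bridge well-defined.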

Take a closer look at jROS here: http://www.openjaus.com/products/jros

~Danny Kent, PhD

by Ugo Cupcic on October 02, 2014 06:41 PM

October 01, 2014
ROS Development Survey
From Ryan Gariepy of Clearpath Robotics via ros-users@

Clearpath Robotics, an early adopter of ROS, is working with the Open
Source Robotics Foundation (OSRF) to determine how the worldwide ROS
development community can best be supported. This may be via support
services, resources, or tools offered by the OSRF or community
members. Now is your opportunity to let us know what you need and how
Clearpath and OSRF can work together to best support you.
Please take a moment to complete this short survey:


by Tully Foote on October 01, 2014 06:47 PM

Industrial Calibration Library Update and Presentation

By Dr. Chris Lewis, SwRI:

Robotics and automation systems are increasingly reliant on 2D and 3D imaging systems to provide perception and pose estimation. Calibration of these camera/robot systems is a necessary, time-consuming, and often poorly executed process for registering image data to the physical world. SwRI is continuing to develop the industrial calibration library to provide tools for state-of-the-art calibration, with the goal of providing reliably accurate results for non-expert users. Using the library, system designers can script a series of observations that ensure sufficient diversity of data to guarantee system accuracy. Interfaces to motion devices such as robots can often be included to fully automate the calibration procedure.

As a vision systems developer, one may ask the following questions with regard to both intrinsic and extrinsic camera calibration.

  1. How many images of the calibration target are needed?
  2. At what ranges?
  3. At what angles?
  4. How many near the center of the field of view vs at the edges?
  5. What accuracy is achievable?
  6. What accuracy was achieved?

With our framework, a user may rapidly explore these questions.

Our framework is built on Google's Ceres Solver, a state-of-the-art nonlinear optimization tool specifically designed to solve bundle adjustment problems efficiently. The framework consists of five main parts.

  1. The main script processing code which
    • Collects observations
    • Runs the optimization
    • Installs the results
  2. A library of Ceres compatible cost functions.
  3. The camera observer interface, which ties your cameras to the system, automatically triggers the cameras, and locates common calibration targets within specified regions of interest.
  4. The scene trigger interface, which provides interfaces to motion hardware such as robots. It may also communicate with users to specify how to configure each scene.
  5. Transform interfaces, which provide the means by which kinematic values may be fed into and out of the calibration system. Updates to these extrinsic kinematic parameters are immediate and persistent.
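At the heart of such a calibration is least-squares minimization of residuals between observed and predicted target points. A deliberately tiny sketch of that idea in pure Python, recovering only a planar translation; the real library uses Ceres cost functions over full 6-DOF poses and camera intrinsics, so everything below is an illustrative toy:

```python
# Toy extrinsic-calibration idea: find the translation (tx, ty) that best
# aligns observed target points with their known positions, by minimizing
# the sum of squared residuals. For a pure translation the least-squares
# answer is simply the mean offset between observed and known points.
known = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (0.1, 0.1)]  # target geometry (m)
true_t = (0.25, -0.05)                                    # simulated ground truth
observed = [(x + true_t[0], y + true_t[1]) for x, y in known]

n = len(known)
tx = sum(ox - kx for (ox, _), (kx, _) in zip(observed, known)) / n
ty = sum(oy - ky for (_, oy), (_, ky) in zip(observed, known)) / n

# Residual after calibration: near zero means the model explains the data.
residual = sum((ox - (kx + tx)) ** 2 + (oy - (ky + ty)) ** 2
               for (ox, oy), (kx, ky) in zip(observed, known))
print(round(tx, 3), round(ty, 3), round(residual, 6))
```

The questions listed above (how many observations, at what ranges and angles) amount to choosing data diverse enough that this minimization is well conditioned; the achieved residual answers "what accuracy was achieved?".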

Using this framework, we have demonstrated three distinctly different calibrations:

  • Extrinsic calibration of a camera mounted on the tool of a robot
  • Extrinsic calibration of a network of cameras
  • Extrinsic calibration of a static camera to a robot

In addition, the ROS-I team is currently developing an intrinsic calibration script whereby a robot moves the calibration target to create a repeatable set of calibration images. In the near future, we will be developing kinematic calibration procedures for robots using cameras to better estimate robotic joint parameters.

Additional links:

by Paul Hvass on October 01, 2014 04:09 PM

Multicultural teams are fantastic for cross-fertilisation of ideas, but it's not all roses. The fruits of two different teams bringing their work together on the last day can cause some head scratching until you unearth the cause....

> rostopic list

by Daniel Stonier (noreply@blogger.com) on October 01, 2014 01:30 PM

ROS Vagrant base boxes
From Mark Pitchless of Shadowrobot via ros-users@

Hello All,

I'm very pleased to announce that a set of Vagrant VirtualBox base boxes we have been working on at Shadow Robot is now publicly available in the cloud.


You need Vagrant 1.5+ to use these. On Trusty, bring up a new ROS machine like so:

sudo apt-get install vagrant
mkdir indigovm
cd indigovm
vagrant init shadowrobot/ros-indigo-desktop-trusty64
vagrant up

After a bit, a logged-in desktop will appear; just open a terminal, run roscore, and away you go.

Currently we have Hydro and Indigo machines in 32-bit and 64-bit variants.

These are built using Vagrant and Ansible as part of our build tools project (more on this next week). Feel free to log issues and ideas there, or post here.

Collaboration is welcome, especially in creating base boxes for other providers.

Have fun,

by Tully Foote on October 01, 2014 12:44 AM

September 30, 2014
Robotiq's New Force-Torque Sensor Support Added

Robotiq recently announced the release of its new force-torque sensor, and they've updated their ROS-I repository to include ROS support for it!

by Paul Hvass on September 30, 2014 02:14 PM

September 25, 2014
New Package: diff_drive_controller in ros_controllers
From Bence Magyar of PAL Robotics via ros-users@

Hi everyone,

PAL Robotics is pleased to announce the release of the diff_drive_controller, which became available in Hydro and Indigo in the first quarter of 2014.

For those who already know it, please add your robot(s), with a moderately sized image and a name, to the wiki page: http://wiki.ros.org/diff_drive_controller#Robots.

For those who are new to it, refer to the documentation:

As the name suggests, this controller moves a differential-drive wheel base.
  • The controller takes geometry_msgs::Twist messages as input.
  • Realtime-safe implementation.
  • Odometry computed and published, from either open-loop or closed-loop operation.
  • Task-space velocity and acceleration limits.
  • Automatic stop after command time-out.
The controller will soon support skid steer platforms as well. 
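For intuition, the odometry computation such a controller performs can be sketched as unicycle-model integration of a Twist-style command. This is a simplified Python illustration (the actual controller is a C++ ros_control plugin; function and parameter names here are ours):

```python
import math

# Integrate a differential-drive base's pose from a Twist-style command
# (linear velocity v, angular velocity w) over a time step dt.
def integrate_odometry(x, y, theta, v, w, dt):
    if abs(w) < 1e-9:            # straight-line motion
        x += v * dt * math.cos(theta)
        y += v * dt * math.sin(theta)
    else:                        # exact integration along a circular arc
        r = v / w
        x += r * (math.sin(theta + w * dt) - math.sin(theta))
        y -= r * (math.cos(theta + w * dt) - math.cos(theta))
        theta += w * dt
    return x, y, theta

# Drive straight 1 m, then turn in place through a quarter circle.
pose = (0.0, 0.0, 0.0)
pose = integrate_odometry(*pose, v=1.0, w=0.0, dt=1.0)
pose = integrate_odometry(*pose, v=0.0, w=math.pi / 2, dt=1.0)
print(round(pose[0], 3), round(pose[1], 3), round(pose[2], 3))
```

The "open-loop vs. closed-loop" distinction above is about where v and w come from: either echoed from the command or reconstructed from measured wheel velocities; the integration step itself is the same.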


by Tully Foote on September 25, 2014 05:28 PM

Community Meeting Video Posted

Thanks to everyone who participated in the ROS-Industrial Community meeting that was held in conjunction with ROSCon on Saturday, September 13! A special thanks to our presenters:

  • Paul Hvass (SwRI): Welcome and Update on RIC-Americas
  • Alexander Bubeck (Fraunhofer IPA): Update from RIC-EU 
  • Clay Flannigan (SwRI): ROS-I Roadmapping update
  • Preben Hjornet (Blue Workforce): Why ROS-I Community needs to adopt the Kinect 2 
  • Risto Kojcev (Italian Institute of Technology): Introducing the Cartesian path planner plug-in for MoveIt!
  • Ryan Gneiting (Deere and Co.): John Deere ROS-I demo cell

Participants included: ABB USA, Blue Workforce, CAT, Clearpath, Fraunhofer IPA, HDT Robotics, Innervycs, Intelligrated, Italian Institute of Technology, John Deere, Leica Biosystems, Max Planck Institute, MTC, Northwestern U., NRL, Omnico AGV, Open Source Robotics Kyokai, OSRF, Reiter Affiliated Companies, Rethink Robotics, Shadow Robot, SICK, Siemens, Spirit AeroSystems, SwRI, TU Delft, UIC, UT Austin, and Wiki Technium! Your insight and energy are key to our growing community.

ROS-I Community meetings occur 3 times per year and are open to the public.

by Paul Hvass on September 25, 2014 03:57 PM

September 23, 2014
SV-ROS's team Maxed-Out wins First Place at the IROS 2014 Microsoft Kinect Challenge
SV-ROS's team Maxed-Out earned the highest score at IROS 2014 in the first Microsoft Kinect Challenge.

The Microsoft Kinect Challenge is a showcase for BRIN (Benchmark Indoor Robot Navigation), the scoring software used to judge the competition. Each team had to create a mapping and autonomous navigation software solution that would run successfully on a provided Adept Pioneer 3DX robot.

The number of waypoints reached, time, and accuracy are combined to determine a contestant's score. Microsoft Research's Gershon Parent, the author of the BRIN scoring software, hopes to see BRIN become a universally accepted way of benchmarking autonomous robots' indoor navigation ability.

SV-ROS is a Silicon Valley ROS users group that meets on the second-to-last Wednesday of each month at the Hacker Dojo in Mountain View, CA. Team Maxed-Out is led by Greg Maxwell; key team members are Girts Linde, Ralph Gnauck, Steve Okay, and Patrick Goebel. The Maxed-Out effort began in May 2014 and successfully created a winning ROS mapping, localization, and navigation solution in just a few months, beating 5 other international teams.

Maxed-Out's winning solution was based on the ROS Hydro distribution, running on a powerful GPU-enabled laptop with Ubuntu 12.04 and NVIDIA CUDA 6.0 parallel processing software. The team was able to outscore all the other teams by incorporating Rtabmap, a mapping, localization, navigation, and point cloud solution library that is the work of Mathieu Labbe, a graduate student at the Université de Sherbrooke.

Team Maxed-Out's code is up at SV-ROS's Github repository and documented on this meetup page.

Pictures of the event are posted here.

by Tully Foote on September 23, 2014 05:38 PM

Towards Live Programming in ROS with PhaROS and LRP
In this tutorial we will show you how to program robot behaviour using Live Robot Programming (LRP) over PhaROS. Setup: follow steps 1 to 4 of this post. Create a ROS node that consumes /kompai2/pose and /kompai/scan and publishes to /command_velocity. To do this, just execute: LrpharosPackage uniqueInstance Create an instance of the… Continue reading

by Pablo Estefó on September 23, 2014 09:25 AM

September 22, 2014
New package: Augmented Reality System
From Hamdi Sahloul via ros-users@

Hi everyone!

I recently needed a reliable pose estimation system. ar_pose (http://wiki.ros.org/ar_pose) failed to satisfy my needs, as it depends on the old and very basic ARToolKit library. aruco_ros (http://wiki.ros.org/aruco_ros) looked like a good package to begin with, but it only supports a single marker or a pair of markers, and it has no visualization system.

So I made my own package. To avoid occlusion problems, I used marker boards (you can still use a 1x1 marker board), and it can now detect a virtually unlimited number of boards with very good accuracy. It can also handle many cameras at once, and it displays the result in rviz (http://wiki.ros.org/rviz).

I would love for you to explore it further yourself, so here is the link:

It only costs you a camera and a couple of sheets of paper to try, so please give it a try and let me know your impressions; feedback is highly appreciated!

by Tully Foote on September 22, 2014 06:05 PM

September 21, 2014
answers.ros.org: A Quick How-To
Below are 5 steps to getting the most out of ROS Answers, and hopefully giving the most back to the community in the process:

1. Don't be afraid to ask a question

The name of the site may be "ROS Answers", but there is no point in having answers if there are no questions. Often I find that people comment on old questions, or post answers to old questions, hoping to get help for a possibly related problem. Your comments will probably be missed by anyone who did not previously participate in that question/answer thread, and posting a question as an answer is just bad etiquette. If you have a question, open a new question!

2. But before you ask a question, check to see if someone has already asked and answered the exact same thing!

There are over 18,000* questions on ROS Answers. There is a good chance that if you have a common problem, it has already been asked, and probably answered. The average time between posting a question and getting an answer is probably several hours; however, if you spend just a few minutes searching the site, you might find your answer immediately.

3. If you ask a question -- try to make sure other people will be able to find it some day by adding appropriate tags.

That search thingy in #2 depends on questions being properly tagged. Adding a few (useful) tags will both help you get an answer faster and make sure that the next person with the same question can find your question and its answer. "ros" is probably not a useful tag; the name of the package, node, or command in question would make good tags. Including tags for the specific hardware you are using could also be useful (for instance "kinect" or "pr2").

4. Close button is evil. Karma is good.

This is probably the most misunderstood aspect of ROS Answers. People frequently post a comment saying "thanks, that works" and then click the "close" button on the question instead of selecting an answer. Please don't do this! Instead, click the checkmark next to the answer that solved your problem. You can only select the answer on questions that you asked; however, if an answer to someone else's question helps you, you can give a little Karma by clicking the "up arrow". The answer to the right here has been upvoted 26 times -- it must be pretty good.

The answers website really depends on Karma. New users have restrictions (unable to post links, images, etc). New users NEED Karma to become more effective users. Power users need Karma to be able to moderate the site, like retagging those questions where people didn't get the tags right.

5. Finally, make sure somebody can actually answer your question.

Good answers require good questions. If you get an error in the console, include that exact error in the question. A summary of the error, or "I got an error", is no substitute for the actual error or traceback. Tell us exactly what commands you ran. Other things you probably want to include: what ROS version, operating system, and robot you are running -- and if you aren't running from up-to-date debs from the OSRF apt repo, you probably want to point out how you installed ROS.
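As a concrete illustration of point 5, a small helper like the one below could gather the environment details worth pasting into a question. The ROS environment variables shown (ROS_DISTRO, ROS_MASTER_URI, ROS_PACKAGE_PATH) are standard, but this particular script and its selection of fields are just a suggestion, not an official tool.

```python
import os
import platform

def environment_report():
    """Collect the basics an answerer usually needs: OS, Python version,
    and the standard ROS environment variables (if set)."""
    info = {
        "os": platform.platform(),
        "python": platform.python_version(),
    }
    for var in ("ROS_DISTRO", "ROS_MASTER_URI", "ROS_PACKAGE_PATH"):
        # Report "<unset>" rather than crashing when ROS is not sourced.
        info[var] = os.environ.get(var, "<unset>")
    return info

if __name__ == "__main__":
    for key, value in environment_report().items():
        print("{}: {}".format(key, value))
```

Pasting this output (alongside the exact commands and the full traceback) saves a round of back-and-forth comments.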

* I started writing this post a few days after the 15000th question was posted. I finished writing it 3000 questions later....

by Michael Ferguson (noreply@blogger.com) on September 21, 2014 09:43 PM

September 17, 2014
Microsoft Kinect v2 Driver Released

Reposted from ROS.org/news

From Thiemo and Alexis via ros-users@

Dear ROS Community,

I am Thiemo from the Institute for Artificial Intelligence at the University of Bremen. I am currently a PhD Student under the supervision of Prof. Michael Beetz. I'm writing this together with Alexis Maldonado, another PhD Student at our lab, who has helped mainly with the hardware aspects.

To continue reading: http://www.ros.org/news/2014/09/microsoft-kinect-v2-driver-released.html

Note that the Kinect v2 was the topic of a presentation by Preben Hjornet from Blue Workforce during the recent ROS-Industrial Community Meeting, held at ROSCon on Sept. 13th. To listen to that presentation, go to time stamp 23:14 here: http://youtu.be/7gKnzVTEbVM

by Paul Hvass on September 17, 2014 06:48 PM

ROSCon 2014 comes to a close

Crossposted from www.osrfoundation.org

Thanks to everyone for another fantastic ROSCon! It was a fun event, filled with great presentations and discussions, plus many of those, "we've Internet-known each other for years, but are now meeting for the first time," moments. We'll post the videos and slides as soon as we can, linking them from the program page.

Here's the group at the end of the event (thanks to Chad Rockey for being our photographer):

And here's one way to break down the demographics of the attendees, based on their type of affiliation:

We'd like to thank our generous sponsors, especially: Qualcomm, Clearpath Robotics, Rethink Robotics, and Cruise Automation.

by Tully Foote on September 17, 2014 01:19 AM
