February 09, 2016
GeoDigital is recruiting software developers for the autonomous driving team
From Richard Pollock

GeoDigital's innovative Autonomous Driving team is reshaping how geospatial data
are acquired and interpreted, and the way that road vehicles use the interpretation
results. This involves LIDAR and image sensing, spatial databases, photogrammetry,
GNSS, inertial sensing, machine vision, machine learning, and embedded system development.

GeoDigital is recruiting a full-time senior-level and a full-time intermediate-level
software developer, both to work in our Lompoc, California, 93436 USA office.
The work for these positions will include participation in the development of the
following:

- software tools to increase the efficiency of GeoDigital's data interpretation activities

- embedded software systems for route feature data management

- embedded software systems for vehicle localization refinement

- techniques for updating route feature data and distributing updates to user vehicles

Development software systems used internally in this work include the Point Cloud Library (PCL),
the Robot Operating System (ROS), OpenCV, and CUDA.


Benefits of working for GeoDigital:

- Comprehensive medical, vision and dental coverage, with employer contribution to HSA.

- Company paid Life Insurance, ADD, Short Term Disability and Long Term Disability.

- Company contribution to 401k.

- Flexible scheduling.

- Collaborative, team-oriented working environment.


Senior-level software developer position qualifications:

- university degree in an engineering or science field with a computing emphasis.

- a minimum of 7 years of industrial software development experience with steadily increasing
  responsibilities, or a research-based graduate degree and a minimum of 5 years of industrial
  software development experience with steadily increasing responsibilities.

- expert-level C, C++, and Python programming skills.

- familiarity with development toolchains on Windows and Linux platforms.

- experience in the selection and application of techniques from one or more of the
  following fields: machine vision, point cloud processing, photogrammetry, computational
  geometry, machine learning.

- working knowledge of terrestrial coordinate systems.


Intermediate-level software developer position qualifications:

- university degree in an engineering or science field with a computing emphasis.

- a minimum of 5 years of industrial software development experience with steadily increasing
  responsibilities.

- expert-level C, C++, and Python programming skills.

- familiarity with development toolchains on Windows and Linux platforms.


For both positions, experience with one or more of ROS, PCL, OpenCV, or CUDA is desirable.


To apply for either position, please send your resume to gayle@nimbushrsolutions.com
or visit our website at www.geodigital.com/careers.

by Tully Foote on February 09, 2016 06:46 PM

Driverless Development Vehicle with ROS Interface
Choose either the Lincoln MKZ or Ford Fusion as a development vehicle.

Full control of
  • throttle
  • brakes
  • steering
  • shifting
  • turn signals
Read production sensor data such as
  • gyros
  • accelerometers
  • gps
  • wheel speeds
  • tire pressures

There are no visual indications that the production vehicle has been modified. All electronics and wiring are hidden.
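The control and sensor channels listed above map naturally onto typed messages. The sketch below is purely illustrative: the message and field names are hypothetical stand-ins, not those of any actual drive-by-wire kit's ROS interface.

```python
from dataclasses import dataclass

# Hypothetical message types sketching the kind of interface a ROS
# drive-by-wire kit might expose; real kits define their own messages.

@dataclass
class ThrottleCmd:
    pedal_percent: float        # 0.0 (released) to 100.0 (floored)
    enable: bool = True

@dataclass
class SteeringCmd:
    wheel_angle_rad: float      # positive = counter-clockwise, by convention
    enable: bool = True

@dataclass
class WheelSpeedReport:
    front_left: float           # all speeds in rad/s
    front_right: float
    rear_left: float
    rear_right: float

    def vehicle_speed(self, wheel_radius_m: float) -> float:
        """Average wheel angular speed converted to vehicle speed in m/s."""
        avg = (self.front_left + self.front_right +
               self.rear_left + self.rear_right) / 4.0
        return avg * wheel_radius_m

report = WheelSpeedReport(10.0, 10.0, 10.0, 10.0)
print(report.vehicle_speed(0.33))  # 10 rad/s on 0.33 m wheels = 3.3 m/s
```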




by Tully Foote on February 09, 2016 06:42 PM

Empowering your robotics research with TIAGo and The Construct

The purpose of PAL Robotics’ mobile manipulator TIAGo (standing for ‘Take It And Go’) is to help in robotics research while aiming to become a service robot in the near future. That’s why, in order to be accessible to everyone and easy to work with, the platform is open-source and 100% ROS compatible. TIAGo is ideal for any robotics research that requires perception abilities, navigation, object grasping or Human-Robot Interaction.

TIAGo’s simulation model for Gazebo is available on the ROS Wiki. You can simply download it and use it on your Gazebo desktop. Or, even easier, you can use the model at The Construct and skip the installation of ROS and Gazebo. With The Construct you can simulate TIAGo in Gazebo using only a web browser, on any type of computer (Linux, Mac, Windows) and without installing a thing.

What follows are all the steps to easily manage TIAGo’s simulation, which you can later reproduce with the real robot. Here we are showing the TIAGo simulation integrated in The Construct. This is just an example of how easy it is to simulate and implement actions and tasks with PAL Robotics’ TIAGo. Remember to share your TIAGo simulations with us – we want to see what you are able to make this robotic buddy do!

TIAGo’s mission: TIAGo comes off the shelf with autonomous navigation. In this simulation, TIAGo will autonomously navigate, locate itself and map the room to reach the table by the easiest path. All this autonomous navigation will rely on its lasers, sonars, IMU and RGB-D camera. Once there, TIAGo will detect a can, adapt its body by lifting its torso if necessary, and move its 7 DOF arm to grasp the object with its parallel gripper.

How to do it:

1. Choose where you are going to create a simulation for TIAGo. For the simulation, two options can be considered:

  • Use a desktop installation of Gazebo in your computer.
  • Simulate in the cloud with The Construct.

2. Download TIAGo’s simulation model for Gazebo (version 2.2), which is open-source. There are two places where you can find it.

  • GitHub: here you can also find the link to TIAGo’s ROS Wiki for further information.
  • The Construct’s “Robository”: this one downloads a single zip file which contains all the packages. The zip file can be uploaded as is into The Construct, and you can then start simulating with TIAGo right away.

3. Install the simulation files.

4. Start simulating! Program the robot to perform whatever action you want it to do: from picking up a can to following a path, or bringing an object from one place to another.

5. Implement the simulation on your real TIAGo robot after having simulated the actions in the digital world. Test your simulation and verify it in the real world!
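The mission described above can be sketched, in a minimal ROS-free toy form, as a task planner that decides when TIAGo must lift its torso before grasping. All names here are illustrative; on the real robot each step would be an actionlib goal (move_base for navigation, MoveIt! for torso and arm).

```python
# A minimal, ROS-free sketch of the pick task described in the steps above.
# All step names are illustrative placeholders, not real action names.

def plan_pick(can_height_m: float, torso_reach_m: float = 0.9) -> list:
    """Return the ordered actions for TIAGo's mission: navigate to the
    table, detect the can, lift the torso only if the can sits above the
    arm's comfortable reach, then grasp with the parallel gripper."""
    steps = ["navigate_to_table", "detect_can"]
    if can_height_m > torso_reach_m:
        steps.append("lift_torso")
    steps += ["move_arm_to_pregrasp", "close_gripper"]
    return steps

print(plan_pick(can_height_m=1.1))
```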

Any question?

If you have any problems when simulating with TIAGo, e-mail tiago-support@pal-robotics.com with your questions; we will be happy to help you.

The post Empowering your robotics research with TIAGo and The Construct appeared first on PAL Robotics Blog.

by Judith Viladomat on February 09, 2016 02:56 PM

February 05, 2016
RIC-Europe Event Recap: Tech Demo & 2016 Members Meeting

Thanks to all the participants who made the 2016 RIC-EU tech demo and members meeting a success! On Jan 28-29, Fraunhofer IPA, the managing organization of the ROS-Industrial Consortium Europe, welcomed more than 50 participants to the annual RIC-EU members meeting. ROS technology continues to mature and find its way into commercial products and industrial applications, which was shown during a technology demonstration session.

Martin Hägele welcomes the guests to the tech demo session


Participants had the chance to see for themselves what ROS technology can do in terms of easing robot programming; extending the applicability of commercial software platforms through standard interfaces; allowing for hardware-independent intuitive touch interfaces; and powering next-gen robot hardware.

The tech demos kindly provided by IT+Robotics srl, PPM AS, Fraunhofer IPA and Blue Workforce A/S

After introductory talks, the attendees enjoyed individual presentations, and were able to interact with the presenters and fellow attendees during an open-floor format. The day ended with a social event.

The members' meeting held Jan 29 is an annual gathering of members for an overview of activities during the previous year and current initiatives. Presentations about efforts similar to ROS-I targeting other domains were given. The SiLA initiative aims at similar standardization efforts, but for lab automation equipment; the Machinekit project, which is undergoing interesting development, could in the future make Machinekit+ROS a full stack covering all of your robotics-related needs, from bare metal up to the user interface. This "sister" project raised considerable interest for its potential, especially for hardware designers in need of a means to interface with ROS. More updates will be available on rosindustrial.org as integration efforts continue.

the RIC-EU members meeting, held on the second day of the event


Attendees enjoyed presentations from RIC-EU's scientific advisor, Martijn Wisse from TU Delft, and Mirko Bordignon from Fraunhofer IPA on the further development of ROS infrastructure for industrial use thanks to public funding. Ingo Luetkebohle from Bosch, which recently joined RIC-EU, provided an overview of ROS activities at his organization, while Paul Evans from the Southwest Research Institute briefed the attendees on the North American ROS-I Consortium.

The meeting ended with an open discussion, which provided inputs for the ongoing technology roadmapping activity. This will continue at the upcoming RIC-NA members meeting, and will set the schedule for the technical developments of ROS-I during 2016.

For your reference, the detailed agenda of the whole event can be found here.

by Mirko Bordignon on February 05, 2016 07:41 PM

Open-source release: REMODE: Probabilistic, Monocular Dense Reconstruction in Real Time
From David Scaramuzza via ros-users@

We are happy to release an open source implementation of our approach for real-time, monocular, dense depth estimation, called "REMODE".



The code is available at: https://github.com/uzh-rpg/rpg_open_remode

It implements a "REgularized, probabilistic, MOnocular Depth Estimation", as described in the paper:

M. Pizzoli, C. Forster, D. Scaramuzza
REMODE: Probabilistic, monocular dense reconstruction in real time
IEEE International Conference on Robotics and Automation (ICRA), pp. 2609-2616, 2014

The idea is to achieve real-time performance by combining Bayesian, per-pixel estimation with a fast regularization scheme that takes into account the measurement uncertainty to provide spatial regularity and mitigate the effect of noise.
Namely, a probabilistic depth measurement is carried out in real time for each pixel and the computed uncertainty is used to reject erroneous estimations and provide live feedback on the reconstruction progress.
The novelty of the regularization is that the estimated depth uncertainty from the per-pixel depth estimation is used to weight the smoothing.
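As a rough, non-authoritative illustration of the idea (a toy smoother, not the paper's actual solver), the snippet below averages each depth pixel with its 4-neighbourhood, weighting every contribution by the inverse of its estimated variance so that confident measurements dominate and noisy ones are pulled toward them:

```python
import numpy as np

def regularize_depth(depth, variance, iters=10):
    """Uncertainty-weighted smoothing: pixels with low variance (high
    confidence) pull their neighbours toward their value, while highly
    uncertain pixels contribute almost nothing."""
    w = 1.0 / (variance + 1e-6)   # confidence weights
    d = depth.copy()
    for _ in range(iters):
        # weighted average over the pixel itself and its 4-neighbourhood
        # (np.roll wraps at the borders, acceptable for a toy example)
        num, den = w * d, w.copy()
        for shift, axis in ((1, 0), (-1, 0), (1, 1), (-1, 1)):
            num += np.roll(w * d, shift, axis=axis)
            den += np.roll(w, shift, axis=axis)
        d = num / den
    return d

# A flat scene with one wildly wrong, high-variance depth sample:
depth = np.ones((5, 5)); depth[2, 2] = 5.0
var = np.full((5, 5), 0.01); var[2, 2] = 100.0
print(regularize_depth(depth, var)[2, 2])  # the outlier is pulled back toward 1.0
```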

Since it provides real-time, dense depth maps along with the corresponding confidence maps, REMODE is very suitable for robotic applications, such as environment interaction, motion planning, active vision and control, where both dense information and map uncertainty may be required.
More info here: http://rpg.ifi.uzh.ch/research_dense.html

The open source implementation requires a CUDA capable GPU and the NVIDIA CUDA Toolkit.
Instructions for building and running the code are available in the repository wiki.

by Tully Foote on February 05, 2016 07:28 PM

Mark Shuttleworth (Canonical): Commercial Models for the Robot Generation
Cross posted from the OSRF Blog


In 2004, Canonical released the first version of Ubuntu, a Debian-based open source Linux OS that provides one of the main operational foundations of ROS. Canonical's founder, Mark Shuttleworth, was CEO of the company until 2009, when he transitioned to a leadership role that lets him focus more on product design and partnerships. In 2002, Mark spent eight days aboard the International Space Station, but that was before the ISS was home to a ROS-powered robot. He currently lives on the Isle of Man with 18 ducks and an occasional sheep. Ubuntu was a platinum co-sponsor of ROSCon 2015, and Mark gave the opening keynote on developing a business in the robot age.

Changes in society and business are both driven by changes in technology, Mark says, encouraging those developing technologies to consider the larger consequences that their work will have, and how those consequences will result in more opportunities. Shuttleworth suggests that robotics developers really need two things at this point: a robust Internet of Things infrastructure, followed by the addition of dynamic mobility that robots represent. However, software is a much more realistic business proposition for a robotics startup, especially if you leverage open source to create a developer community around your product and let others innovate through what you've built.

To illustrate this principle, Mark shows a live demo of a hexapod called Erle-Spider, along with a robust, high-level 'meta' build and packaging tool called Snapcraft. Snapcraft makes it easy for users to install software and for developers to structure and distribute it without having to worry about conflicts or inter-app security. The immediate future promises opportunities for robotics in entertainment and education, Mark says, especially if hardware, ROS, and an app-like economy can come together to give developers easy, reliable ways to bring their creations to market.

ROSCon 2015 Hamburg: Day 1 - Mark Shuttleworth: Commercial models for the robot generation from OSRF on Vimeo.

Next up: Stefan Kohlbrecher of Technische Universitaet Darmstadt Check out last week's post: OSRF's Brian Gerkey

by Tully Foote on February 05, 2016 07:26 PM

February 03, 2016
Call for Chapters: Springer Book on Robot Operating System (Volume 2)
From Anis Koubaa via ros-users@

I am happy to announce the call for chapters for the Springer Book on Robot Operating System (ROS) Volume 2 is now open. 

The book will be published by Springer. 

We look forward to receiving your contributions to make this book successful and useful for the ROS community. 

In Volume 1, we accepted 27 chapters ranging from beginner level to advanced level, including tutorials, case studies and research papers. Volume 1 is expected to be released by Feb 2016.
After negotiation with Springer, the authors benefited from a discount of around 80% on hardcopies as an incentive for their contribution, in addition to publishing their work. 

The call for chapters website (see above) presents in detail the scope of the book, the different categories of chapters, topics of interest, and submission procedure. There are also Book Chapter Editing Guidelines that authors need to comply with. 

In this volume, we intend to place a special focus on unmanned aerial vehicles using ROS. Papers that present the design of a new drone and its integration with ROS, simulation environments for unmanned aerial vehicles with ROS and SITL, ground-station-to-drone communication protocols (e.g. MAVLink, MAVROS, etc.), control of unmanned aerial vehicles, best practices for working with drones, etc. are particularly sought.

In a nutshell, abstracts must be submitted by February 15, 2016 to register the chapters and to identify in advance any possible similarities of chapter contents. Full chapter submissions are due on April 20, 2016.
Submissions and the review process will be handled through EasyChair. A link will be provided soon.

Each chapter will be reviewed by at least three expert reviewers, at least one of whom should be a ROS user and/or developer. 

Want to be a reviewer for some chapters?
We are looking for the collaboration of ROS community users to provide reviews and feedback on proposals and chapters submitted for the book. If you are interested in participating in the review process, please consider filling in the following reviewer interest form

We look forward to receiving your contribution for a successful ROS reference!

by Tully Foote on February 03, 2016 08:08 PM

March 3 Public ROS-I Demos, March 4 Consortium Members Meeting
From Paul Hvass


The ROS-Industrial Consortium Americas Annual Meeting will be held March 3-4 at Southwest Research Institute headquarters in San Antonio, Texas. Demonstrations are open to the public on March 3 for registered attendees, and will include Scan-N-Plan robotic automation, a mobile manipulator for order fulfillment, and more. Come and learn more about the design of a four-story-tall laser coating removal mobile robot from Jeremy Zoss, the lead engineer behind the project, who will give the keynote address. On March 4, Consortium members will convene to hear updates from ROS-I community leaders in the US, Europe, and Asia. At lunch, Erik Nieves, the CEO of PlusOne Robotics, will present his keynote vision for the future of robotics. Then the Consortium will provide input to build a roadmap for 2016, and will learn more about the progress and plans for the latest focused technical projects.

Interested in being part of the open source industrial robotics community? 
To register online or to view the agenda, visit the rosindustrial.org events page.

by Tully Foote on February 03, 2016 07:35 AM

Work on driverless cars at Cruise Automation
From Richard Ni via ros-users@

Come work with a team of robotics experts on technically challenging problems, building products that improve lives and prevent car accidents. 

Our team is small, but we move quickly. Last year, we built prototype vehicles that have logged over 10,000 autonomous miles on California highways, and we're now working on some more exciting stuff.

In particular, we're looking for perception engineers to make sure our cars can accurately identify and track objects. Apply at https://jobs.lever.co/cruise/a2499312-3804-47d7-aad8-12c70228c4e2?lever-source=rw

For a complete list of our openings, see https://jobs.lever.co/cruise

by Tully Foote on February 03, 2016 07:33 AM

February 02, 2016
Marble Looking for Awesome Robotics Software Engineers
From Emily Spady via ros-users@

We're Marble - a scrappy early-stage robotics startup based in San Francisco that designs, builds, and operates robots for last mile logistics - and we're looking for one of our first core robotics software engineers.

You are joining very early and will have a huge amount of responsibility, impact, and room for growth. You must be able to move fast and get things done. Expect to be mostly in ROS writing C++, with a healthy amount of scripting in Python and/or Node. You should be versed in perception, navigation/path-planning, and state estimation of mobile robots. Experience with deployed outdoor robots is a huge bonus - expect to spend a fair bit of time in the streets with us (and the robot, of course).

If you think you're an awesome fit, apply here:
 https://jobs.lever.co/marble/e88cd13e-cb7a-4d6f-aab5-1a1215af45ce

by Tully Foote on February 02, 2016 08:23 AM

January 29, 2016
ROS Seattle User Group Meetup
From Lucas Walter via ros-users@

I'd like to set up a meetup in February for ROS users in the Seattle area, pending scheduling with attendees. It would likely be on a weeknight for two or three hours at a restaurant or bar, or a room with a screen if we can arrange that. Periodic meetings will follow if there is sufficient interest.

There is a LinkedIn ROS Seattle group: https://www.linkedin.com/groups/8457866.  If you are interested but don't use LinkedIn feel free to email me directly, and if LinkedIn proves unsuitable meetup.com or another invite system can be used.  Once the spam problem abates I'll make an entry on http://wiki.ros.org/Events, and get an announcement onto the ros.org blog with a time and date.

Currently there is a modest contingent of members from the University of Washington, and a handful who use ROS in industry or for personal projects. It would be great to start out with informal discussion of projects, and at later meetups have short presentations from scheduled speakers.

by Tully Foote on January 29, 2016 11:25 PM

ROSCon Program Video - Brian Gerkey

Cross posted from the OSRF Blog

ROSCon is an annual conference focused on ROS, the Robot Operating System. Every year, hundreds of ROS developers of all skill levels and backgrounds, from industry to academia, come together to teach, learn, and show off their latest projects. ROSCon 2015 was held in Hamburg, Germany. Beginning today and each week thereafter, we'll be highlighting one of the talks presented at ROSCon 2015.

Brian Gerkey (OSRF): Opening Remarks

Brian Gerkey is the CEO of the Open Source Robotics Foundation, which oversees core ROS development and helps to coordinate the efforts of the ROS community. Brian helped found OSRF in 2012, after directing open source development at Willow Garage.

Unless you'd like to re-live the ROSCon Logistics Experience, you can skip to 5:10 in Brian's opening remarks, where he provides an overview of ROSCon attendees and ROS user metrics that show how diverse the ROS community has become. Brian touches on what's happened with ROS over the last year, along with the future of ROS and OSRF, and what we have to look forward to in 2016. Brian also mentions DARPA's Robotics Fast Track program, which has a submission deadline of January 31, 2016.

ROSCon 2015 Hamburg: Day 1 - Opening Remarks from OSRF on Vimeo.

Next up, Mark Shuttleworth from Canonical.

by Tully Foote on January 29, 2016 06:42 PM

New Intrinsic Calibration Procedure

According to a ROS users survey that was conducted in 2014, the most popular hardware to integrate with ROS is a camera. Cameras are often used to perceive the environment or to localize robots and are a critical component of the sense-plan-act capability that ROS enables. Over the past two years, the ROS-I team has been working to create an industrial calibration library that supports both intrinsic and extrinsic calibration of vision sensors. What is novel about the library is that it can handle groups of heterogeneous sensors that may be static, or mounted to a robot, or some combination thereof. And it coordinates with MoveIt! to automate calibration procedures in which robot motion is required during calibration. While the extrinsic calibration routines are well in hand, the intrinsic calibration algorithm, which is based on a popular lens distortion model, resulted in higher parameter variance than was expected based on residual errors. This is particularly true for the focal length parameter, which is essential for correctly interpreting the size of objects in the scene. The ROS-I team has developed a novel camera intrinsic calibration technique that is both computationally faster and provides superior results compared to the methods commonly employed in machine vision.

The optimization procedure outlined by Zhang, and automated by both OpenCV/ROS and Matlab, orchestrates the collection of a set of images of a calibration target. Both the extrinsic pose of the camera and the intrinsic parameters themselves are determined by minimizing the re-projection error. Using these methods, the residual re-projection error is on the order of ¼ pixel/observation or less. However, the variances of focal length and optical center are much higher, typically 20 pixels and 5 pixels respectively. This is due to correlation between the parameters of the distortion model and the focal length parameter.

The new procedure developed by the ROS-I team reduces parameter variance to be on par with the residual error. It requires only 10 to 20 images, but each is taken a known distance apart with little or no skew (refer to images). The new procedure estimates the extrinsic pose for the first image, and constrains the optimization to use the known pose relationship for subsequent images. The focal length and optical center are significantly better constrained. Using the resulting intrinsic calibration parameters for a given camera yields significantly better extrinsic calibration or pose estimation accuracy. Try out the Intrinsic Camera Calibration (ICC) tutorial that is posted on the ROS-I wiki.
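To make the error metric concrete, the sketch below (our own illustration, with lens distortion omitted and all names our own) computes the pinhole re-projection error that these calibrations minimise, and shows how an error in focal length shows up directly as pixel error:

```python
import numpy as np

def project(points_3d, fx, fy, cx, cy):
    """Pinhole projection of Nx3 camera-frame points to pixel coordinates
    (lens distortion omitted for clarity)."""
    X, Y, Z = points_3d.T
    return np.stack([fx * X / Z + cx, fy * Y / Z + cy], axis=1)

def reprojection_rms(points_3d, observed_px, fx, fy, cx, cy):
    """RMS re-projection error: the residual that camera calibration
    minimises over the intrinsics and target poses."""
    err = project(points_3d, fx, fy, cx, cy) - observed_px
    return float(np.sqrt((err ** 2).mean()))

pts = np.array([[0.1, 0.2, 1.0], [-0.3, 0.1, 2.0]])     # points in metres
obs = project(pts, 800.0, 800.0, 320.0, 240.0)          # "observed" pixels
print(reprojection_rms(pts, obs, 800.0, 800.0, 320.0, 240.0))  # exact fit
print(reprojection_rms(pts, obs, 820.0, 800.0, 320.0, 240.0))  # fx off by 20
```

Because a focal-length change can be partially compensated by shifting the estimated target pose, the residual can stay small while fx drifts; constraining the images to known relative camera positions, as the new procedure does, removes that ambiguity.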

The new intrinsic calibration procedure requires one to move the camera to known positions along an axis that is approximately normal to the calibration target.


by Paul Hvass on January 29, 2016 03:38 PM

January 28, 2016
ROS Book and Tutorials - Learning ROS for Robotics Programming - 2nd Edition
From Enrique Fernández Perdomo via ros-users@

Dear ROS and robotics community,

I simply want to draw your attention to the 2nd edition of the 'Learning ROS for Robotics Programming' book that I finished last year, together with some colleagues.

You can find the book here:

And all the book (source code) tutorials here:

You can also read about the book contents on this post:

Feel free to file any PR or issue on the repository, if something doesn't work.
We'll try to solve them asap, either for ROS hydro, indigo or jade.

IMHO, the tutorials are easy to follow on their own, but if you have any problem, there's the book... or just ask us (create an issue on the repo).

I also want to say thanks to all the people who have ever contributed to the ROS wiki and ROS Answers, helping many of us to learn ROS and use it efficiently. With this book and code we just want to put our two cents back.

I hope you enjoy it and learn something!

by Tully Foote on January 28, 2016 06:35 PM

January 27, 2016
Apply for the euRobotics Technology Transfer Award at ERF 2016
From Mirko Bordignon via ros-users@

Individuals and teams from industry and academia are invited to submit an application for the upcoming euRobotics Technology Transfer Award, which will be a part of the "European Robotics Forum" to be held in Ljubljana 21-23 March 2016 (http://www.erf2016.eu/).

 

Detailed information on the application procedure is available at http://www.erf2016.eu/index.php/techtransfer-award/

In case of questions you can contact Martin Hägele at martin.haegele@ipa.fraunhofer.de

by Tully Foote on January 27, 2016 11:42 PM

Announcing roslaunch graph generator
From Brett Ponsler via ros-users@

I thought it would be useful to be able to generate a graph of the tree of files and nodes used by a particular launch file. After not being able to find anything capable of doing this, I wrote a quick Python script to do just that and thought I would share it with everyone.
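A minimal version of such a tool might use the standard library XML parser to pull `<include>` and `<node>` elements out of a launch file. This sketch is our own, not the announced script, and a real implementation would also resolve `$(find ...)` substitutions and recurse into each included file:

```python
import xml.etree.ElementTree as ET

def launch_edges(xml_text):
    """Extract the graph edges from one roslaunch XML document:
    the files it includes and the nodes it starts."""
    root = ET.fromstring(xml_text)
    includes = [inc.get("file") for inc in root.iter("include")]
    nodes = [(n.get("pkg"), n.get("type"), n.get("name"))
             for n in root.iter("node")]
    return includes, nodes

example = """\
<launch>
  <include file="$(find my_robot)/launch/base.launch"/>
  <node pkg="rviz" type="rviz" name="rviz"/>
</launch>
"""
print(launch_edges(example))
```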


It's fairly simple to use, but feel free to message me if you have any questions, issues, or suggestions for improvements.

by Tully Foote on January 27, 2016 11:16 PM

ROS Web Control Center
From Lars Berscheid

The ROS Control Center is a universal tool for controlling ROS robots. It runs in the browser using a websocket connection and roslibjs from RobotWebTools. In general, the ROS Control Center offers an easy way to
  • show nodes, topics and service names,
  • subscribe and publish messages,
  • call services,
  • show and change parameters.
Furthermore, it contains features like custom formatting for your own message and service types, a console output and a battery status view. A camera stream view based on the Web Video Server is implemented, and many standard message and service types (from common_msgs or std_srvs) work out of the box. It can save multiple robot configurations and has a built-in mode to hide unimportant topics and services. And even better, you can check it out online!
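Under the hood, roslibjs speaks the rosbridge JSON protocol over the websocket. As a sketch (built in Python for illustration; a browser client would do the same in JavaScript), the core operations such a control center relies on look like this:

```python
import json

# The rosbridge v2 operations behind "subscribe and publish messages"
# and "call services"; field names follow the rosbridge protocol.

def subscribe(topic, msg_type):
    return json.dumps({"op": "subscribe", "topic": topic, "type": msg_type})

def publish(topic, msg):
    return json.dumps({"op": "publish", "topic": topic, "msg": msg})

def call_service(service, args=None):
    return json.dumps({"op": "call_service", "service": service,
                       "args": args or []})

print(subscribe("/battery_state", "sensor_msgs/BatteryState"))
print(publish("/cmd_vel", {"linear": {"x": 0.1}, "angular": {"z": 0.0}}))
```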

Find more info at https://github.com/gaug-cns/ros-control-center. Feedback and contributions are really welcome!

by Tully Foote on January 27, 2016 02:20 AM

Job posting for Intelligrated
From Matt Lamping

I am a corporate HR recruiter with Intelligrated; we have posted robotics engineering positions with your newsgroup in the past. I'd like to post the following position that is open in our St. Louis, MO facility:

 

There is quite a lot of interesting development activity with robotics in the warehousing and logistics field. Intelligrated is a premier leader in this space and is looking for software engineers with robotics expertise to support our expanding robotics research and development group. If you are interested in pursuing an exciting career that combines computer vision, robotics, software engineering, and automation, then this R&D position is the opportunity for you!

 

Intelligrated offers a rewarding career path, comfortable work environment, competitive compensation, and excellent benefits.

 

You will collaborate with the members of the robotics development team on the integration of motion, vision and perception based robotics solutions. This position is located in St. Louis, MO and is an excellent opportunity for a motivated and creative software engineer to be a part of multiple exciting robotics-based development projects that include integration of robotic motion, vision and simulation to be used in material handling systems.

 

Responsibilities:

- Develop real-time motion planning algorithms and vision-based perception systems for use in robotic software solutions for material handling systems.

- Develop new functionality as well as maintain the current code.

- Follow rigorous design control methodology and write concise requirements specifications, architecture specifications, design descriptions, verification plans, and test cases.

- Develop software applications that work with the simulations to emulate actual production rates and prove system functionality.

- Work concurrently with robotic design engineers, controls engineers and other software engineers as designs are being developed and finalized; perform unit testing of software and assist in the verification and validation process.

- Manage schedules and meet development goals.

- Provide planning and status information to the project manager.

 

2-3 years of experience in software development in a real-time operating system environment in C/C++ preferred; entry-level candidates will be considered based on educational background. Practical and/or theoretical knowledge of any of the following: control of multi-degree-of-freedom robots, kinematics and dynamics of robotic manipulators, trajectory generation and path planning, or real-time operating systems.

 

Must have a strong working knowledge of programming and design relating to computer vision algorithms and machine learning.

Experience with ROS, QNX, Ubuntu, and multi-threaded and multi-process programming desired. Experience with TCP/IP networking desired. Experience developing test procedures and testing modules desired. Excellent communication and documentation skills.

Experience in industrial robotics or the material handling (logistics) industry a plus.

 

Master's or PhD in electrical engineering, computer science, or a related field.

 

EEO Employer F/M/Disabled/Vets

Intelligrated (www.intelligrated.com) is a leading North American-based, single-source provider of intelligent automated material handling solutions that drive distribution and fulfillment productivity for retailers, manufacturers and logistics providers around the world. Through a broad portfolio of automation equipment, software, service and support, Intelligrated solutions optimize processes, increase efficiency and give businesses a competitive edge. Intelligrated designs, manufactures, integrates and installs complete material handling automation solutions including conveyor systems, sortation systems, palletizers, robotics and order picking technologies - all managed by advanced machine controls and software. Solutions include industry-leading Intelligrated-manufactured Alvey®, RTS™ and IntelliSort® brand equipment and Knighted® warehouse management (WMS), warehouse control (WCS) and labor management software.

Every project is backed by Intelligrated's 24X7 multilingual technical support and access to lifecycle service through a network of national, regional and local service centers. From concept to integration to lifecycle support, Intelligrated automation delivers distribution and fulfillment success.

by Tully Foote on January 27, 2016 02:17 AM

January 22, 2016
Announcing the first release of RAPP Platform and RAPP API [v0.5.5]
From Manos Tsardoulias

Dear all,


We are happy to announce the first open-source versions of the RAPP Platform and RAPP API, aimed at providing an online platform that delivers ready-to-use generic cloud services to robots!


RAPP is a 3-year research project (2013-2016), funded by the European Commission through its FP7 programme, which provides an open source software platform to support the creation and delivery of robotic applications. Its technical objectives include developing an infrastructure for robotic application developers, so they can easily build machine learning and personalization techniques into their applications; creating a repository from which robots can download Robotic Applications (RApps) and upload useful monitoring information; and developing a methodology for knowledge representation and reasoning in robotics and automation. More information on RAPP can be found at http://rapp-project.eu/.


One of the most important parts of RAPP is the RAPP Platform, along with the RAPP API. The RAPP Platform is a collection of ROS nodes and back-end processes that aim to provide generic web services to robots; its main concept aligns with the cloud robotics approach. The RAPP Platform is divided into two main parts: the RAPP ROS nodes and the RAPP Web services.


  • The RAPP ROS nodes are back-end processes providing generic functionality such as image processing, audio processing, speech synthesis and automatic speech recognition (ASR), ontology and database operations, and machine learning algorithms.

  • The RAPP Web services are the front-end of the RAPP Platform. These expose specific RAPP Platform functionalities to the world, thus any robot can invoke specific algorithms, simplifying the work of developers. The developed web services utilize HOP, a language dedicated to programming reactive and dynamic applications on the web.


Finally, the RAPP API is the software means to invoke a RAPP Platform service via C++, Python or JavaScript from any computational system, either in-robot or standalone (such as PCs and laptops).
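
The client/platform invocation pattern described above can be sketched in a few lines of Python. This is a conceptual stand-in only: a local dispatcher takes the place of real HTTP calls, and the service name and payload below are hypothetical examples, not taken from the actual RAPP API.

```python
import json

# Hypothetical service registry standing in for the RAPP Platform
# front-end: each web service name maps to a back-end handler
# (a ROS node, in the real platform).
SERVICES = {
    "ontology_subclasses_of": lambda req: (
        {"results": ["Oven", "Fridge"]} if req["query"] == "Appliance"
        else {"results": []}
    ),
}

class PlatformClient:
    """Minimal stand-in for an API client object."""
    def invoke(self, service, payload):
        # A real client would POST JSON to the platform over the web;
        # here we round-trip through JSON and dispatch locally so the
        # sketch stays self-contained.
        request = json.loads(json.dumps(payload))
        return SERVICES[service](request)

client = PlatformClient()
reply = client.invoke("ontology_subclasses_of", {"query": "Appliance"})
```

The point of the pattern is that the robot-side code stays thin: it serializes a request, names a service, and lets the platform do the heavy computation.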


Links of interest:

by Tully Foote on January 22, 2016 10:49 PM

January 21, 2016
Announcing Visual Tools, ROS Control Boilerplate, Utilities
From Dave Coleman via ros-users@

I'm happy to announce the release of a bunch of packages I've worked on and found useful over the years. They have been used in my and others' research at the University of Colorado Boulder, as well as in the Amazon Picking Challenge and other external projects for companies. They all have decent documentation in their README.md files, example launch files, Travis CI integration, and releases in Indigo and Jade. As always, please help in making them even better!

Rviz Visual Tools
Ever wanted to make visualizing data in Rviz easier? rviz_visual_tools provides a ton of helper functions for visualizing different types of shapes and data in Rviz in an efficient way.

MoveIt! Visual Tools
Want to visualize multiple RobotStates while also showing different trajectories, trajectory lines, and grasp positions? moveit_visual_tools contains all the functionality of rviz_visual_tools while also providing visualization of many of the MoveIt! data types. It also makes it easy to add collision objects to your planning scene.

OMPL Visual Tools
Add to the functionality of the Rviz and MoveIt! visual tools with even more specialized features for OMPL data types and for introspecting your sampling-based geometric OMPL planners. ompl_visual_tools is especially good for working in 2D or 3D spaces.

ROS Control Boilerplate
Want to get started with ros_control for your next robot/robot upgrade? ros_control_boilerplate contains lots of working example code for the RRBot (as seen in Gazebo) as well as many helper utilities such as loading joint limits from rosparam and URDF, recording trajectories to file, playing back from file, etc.
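
The trajectory record/playback idea can be sketched outside ROS: a recorded trajectory is just a sequence of timestamped joint positions, here serialized to CSV. The two-joint layout and column names are illustrative assumptions, not the package's actual file format.

```python
import csv
import io

def record(rows):
    """Write (time, q1, q2) waypoints to CSV text with a header row."""
    buf = io.StringIO()
    writer = csv.writer(buf)
    writer.writerow(["time", "joint1", "joint2"])
    writer.writerows(rows)
    return buf.getvalue()

def playback(text):
    """Read waypoints back as (time, [positions]) tuples."""
    reader = csv.reader(io.StringIO(text))
    next(reader)  # skip the header row
    return [(float(t), [float(q) for q in qs]) for t, *qs in reader]

# Record three waypoints, then play them back.
csv_text = record([[0.0, 0.0, 0.0], [0.5, 0.2, -0.1], [1.0, 0.4, -0.2]])
waypoints = playback(csv_text)
```

In a real controller, the playback side would interpolate between waypoints and stream positions to the hardware interface at the control loop rate.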

MoveIt! Sim Controller
moveit_sim_controller is a pass-through, non-physics-based simulator for quickly testing your ros_control robot offline. It also allows you to load your robot's initial state from an SRDF (semantic robot description format) state instead of the default all-zeros state.

TF Keyboard Cal
Don't want to worry about more specialized calibration techniques for moving your /tf transforms to different locations? tf_keyboard_cal lets you use your computer keyboard to tweak the 6 dof transform of a frame quickly and intuitively, and even load/save the settings from file.
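
Under the hood, nudging a frame this way amounts to recomposing a homogeneous transform from a translation plus roll/pitch/yaw angles. Below is a minimal, ROS-free sketch of that math; the Rz(yaw)·Ry(pitch)·Rx(roll) rotation convention used here is an assumption for illustration.

```python
import math

def transform_from_xyz_rpy(x, y, z, roll, pitch, yaw):
    """Build a 4x4 homogeneous transform (row-major nested lists)
    from a translation and roll/pitch/yaw, with R = Rz * Ry * Rx."""
    cr, sr = math.cos(roll), math.sin(roll)
    cp, sp = math.cos(pitch), math.sin(pitch)
    cy, sy = math.cos(yaw), math.sin(yaw)
    return [
        [cy * cp, cy * sp * sr - sy * cr, cy * sp * cr + sy * sr, x],
        [sy * cp, sy * sp * sr + cy * cr, sy * sp * cr - cy * sr, y],
        [-sp,     cp * sr,                cp * cr,                z],
        [0.0,     0.0,                    0.0,                    1.0],
    ]

# A "keyboard tweak": bump yaw by one degree on a frame sitting
# 0.1 m forward and 0.5 m up.
T = transform_from_xyz_rpy(0.1, 0.0, 0.5, 0.0, 0.0, math.radians(1.0))
```

Each keypress in a tool like this just adjusts one of the six numbers and republishes the recomposed transform.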

ROS Param Shortcuts
The package rosparam_shortcuts provides lots of helper functions for easily loading all sorts of datatypes from the parameter server, with good user feedback if a parameter is missing. This package enforces the philosophy that there should be no default parameters - everything must be defined by the user in yaml files (or launch files, or wherever); otherwise your program should not run.
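
The no-defaults philosophy is easy to illustrate outside ROS with a loader that refuses to run when a parameter is absent. The function, namespace, and parameter names below are hypothetical stand-ins, not the actual rosparam_shortcuts API.

```python
def get_required(params, namespace, name):
    """Fetch a parameter or fail loudly -- no silent defaults."""
    key = "{}/{}".format(namespace, name)
    if key not in params:
        raise KeyError(
            "Missing required parameter '{}'; define it in your yaml "
            "or launch file.".format(key))
    return params[key]

# Parameters as they might arrive from yaml/launch files.
params = {"arm_controller/joint_velocity_limit": 1.5}

limit = get_required(params, "arm_controller", "joint_velocity_limit")
```

Failing at startup with a named parameter beats debugging a robot that quietly ran with a default you never chose.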

Two New Message Types
For those interested, I've created cartesian_msgs for commanding a robot's Cartesian end effector position (instead of by joint values), and I've also released graph_msgs for sending graphs of nodes and edges across ROS topics.

I hope these are helpful to your robotics projects!

by Tully Foote on January 21, 2016 06:00 PM

January 18, 2016
3D Camera Survey

The recent availability of affordable ROS-compatible 3D sensors has been one of the fortunate coincidences that have accelerated the spread of ROS. Since the popular ASUS Xtion Pro Live has been only intermittently stocked, check out this survey of ROS-compatible 3D sensors to review the contenders.

ASUS® XtionPro™ Live

Type: Structured light
Depth Range: 0.8 to 3.5 m
3D Resolution: 640 x 480
RGB Resolution: 1280 x 1024
Frame Rate: 30 fps
Latency: ~1.5 frames
FOV: 58° H, 45° V
Physical dims: ~180x40x25 mm (head)
Interface: USB 2.0
Link to ROS Driver
Notes: Similar internals to the Xbox Kinect 1.0. Intermittent availability for purchase.

Microsoft® Kinect™ 2.0

Type: Time of flight
Depth Range: 0.5 to 4.5 m
3D Resolution: 512 x 424
RGB Resolution: 1920 x 1080
Frame Rate: 30 fps
Latency: 20 ms minimum
FOV: 70° H, 60° V
Physical dims: ~250x70x45 mm (head)
Interface: USB 3.0
Link to ROS Driver
Notes: Latency with ROS is multiple frames.
Active cooling.

Intel RealSense R200

Type: Stereo with pattern projector
Depth Range: 0.6 – 3.5 m
3D Resolution: 640 x 480
RGB Resolution: 1920 x 1080
Frame Rate: 60 fps (3D), 30 fps (RGB)
Latency: 1 frame
FOV: 59° H, 46° V
Physical dims: 102x9.5x7 mm
Interface: USB 3.0
Link to ROS Driver
Notes: Outdoors capable.

IFM® Efector™ O3D303

Type: Time of flight
3D Resolution: 176 x 132
RGB Resolution: N/A
Depth Range: 0.3 to 8 m
Frame Rate: 25 fps
Latency: 1 frame
FOV: 60° V, 45° H
Physical Dims: 120x95x76 mm
Interface: Ethernet
Link to ROS Driver
Notes: Accuracy +/-4 mm. IP65/67 industrial enclosure.

Stereolabs® ZED™

Type: Embedded stereo
3D Resolution: 2208 x 1242 max
RGB: 2208 x 1242 max
Depth Range: 1.5 to 20 m
Frame Rate: 15 fps at max res., 120 fps at VGA res.
Latency: 1 frame
FOV: 96° H, 54° V
Physical Dims: 175x30x33 mm
Interface: USB 3.0
Link to ROS Driver
Notes: Latency not confirmed.

Carnegie Robotics® MultiSense™ S7

Type: Embedded stereo
3D Resolution: 2048 x 1088
RGB Resolution: 2048 x 1088 max (7.5 fps)
Depth Range: 0.4 m to infinity
Frame Rate: 15 fps at 2048 x 544
Latency: 1 frame
FOV: 80° H, 45° V
Physical Dims: 130x130x65 mm
Interface: Ethernet
Link to ROS Driver
Notes: IP68 enclosure.

Ensenso® N35-606-16-BL

Type: Structured light
3D Resolution: 1280 x 1024
RGB: 1280 x 1024
Frame Rate: 10 fps
Latency: 1 frame
FOV: 58° H, 52° V
Physical Dims: 175x50x52 mm
Interface: Ethernet
Link to PCL/ROS Driver
Notes: Many other resolutions and FOVs available. IP65/67 enclosure available.

SICK® 3vistor-T™

Type: Time of flight
3D Resolution: 144 x 176
RGB: N/A
Frame Rate: 30 fps
Latency: 66 msec
FOV: 69° H, 56° V
Physical Dims: 162x93x78 mm
Interface: Ethernet
Link to ROS Driver
Notes: IP67 enclosure
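
One rough way to compare the sensors above is per-pixel angular resolution: FOV divided by pixel count along an axis. The sketch below uses figures from the table; it is a first-order approximation that ignores lens distortion and per-pixel depth noise.

```python
def angular_res_deg(fov_deg, pixels):
    """Approximate degrees of field of view covered per pixel
    along one axis."""
    return fov_deg / pixels

# Horizontal angular resolution for two entries from the survey.
xtion = angular_res_deg(58.0, 640)    # ASUS Xtion Pro Live
kinect2 = angular_res_deg(70.0, 512)  # Microsoft Kinect 2.0
```

By this measure the Xtion's depth pixels are finer-grained than the Kinect 2.0's, even though the Kinect sees a wider field.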

by Paul Hvass on January 18, 2016 08:43 PM

January 14, 2016
UpDroid announces the UP1
From Kartik Tiwari

UpDroid's UP1 is a programmable ROS robot consisting of modular hardware and a software IDE that can be accessed over Wi-Fi. The IDE, called UpCom, is served by the robot over the network. The programming environment consists of tabs, with the option to choose between an editor and a console (the default tabs) or user-defined, application-specific tabs. The hardware specs for the robot are listed below:

- 1.4 GHz dual-core Intel Atom
- ATmega2560 microcontroller for low-level motor control
- 5-DOF arm
- 4-wheel drive
- 4 IR sensors for low-level obstacle avoidance
- Dual cameras for stereoscopic imaging (optionally replaceable with Intel's RealSense)
- Audio IN/OUT

The robot ships with low-level API calls that can be used to develop sophisticated behaviors. The APIs include wrappers for the MoveIt! move group, point cloud data, color detection, etc.

For more information visit our website and see the robot in action below.

by Tully Foote on January 14, 2016 06:57 PM

CAD to ROS Project Launch

A longstanding need, which will improve the efficiency of setting up a ROS project, is the ability to import model and process data from various CAD systems. In collaboration with TU Delft last year, we proposed a ROS-I Consortium Focused Technical Project to develop a CAD data importer for ROS that could automatically convert common CAD files to URDFs and SRDFs. While the initiative was met with great interest, the lengthy duration and fixed-price milestone contract arrangement were not sufficiently attractive to garner the investment required to launch the project.

However, a strong showing by the developer community led us to believe that much of the software development work would be contributed by collaborating consortium members if technical leadership was provided to guide and curate the development. Furthermore, we wished to expand the future vision of CAD to ROS by building it into a 3D workbench framework, which will provide a common user experience for importing and configuring sensors, Cartesian process plans, motion plans, and point cloud data.

To assure incremental progress, we restructured the CAD to ROS project into five milestones, each lasting approximately four months:

  1. URDF GUI Editor
  2. Process Planning
  3. Work Cell Planning
  4. Sensor Configuration and Calibration Setup
  5. 3D Point Cloud Importer

We also restructured the business model. There are now four types of contributors to the project:

  • Technical Overseer: TU Delft will set the software architecture design goals and will review pull requests from developers.
  • Sponsor: In December 2015, one of the ROS-I Consortium members pledged their support for TU Delft’s efforts.
  • Developers: In exchange for helping to write the code, Consortium members will have access to the private project repository.
  • Consortium Administrator: SwRI and Fraunhofer IPA will recruit developers from amongst the Consortium membership to help with coding.

For this particular Consortium project, we are implementing a business model in which only contributors from the Consortium have access to the URDF editor tool. This is an opportunity to see how productive the community can be when motivated by a little self-interest.

We need the help of ROS-I Consortium member software developers! Please contact paul.hvass@swri.org, or mirko.bordignon@ipa.fraunhofer.de to gain access to the project and to help complete Milestone 1!

CAD to ROS Milestone 1: URDF Editor. Thanks to Levi Armstrong from SwRI for contributing his QT GUI as a head start for the project!


by Paul Hvass on January 14, 2016 12:32 AM

January 13, 2016
New book: "Mastering ROS for Robotics Programming"
From Lentin Joseph

Here is a new book for mastering your skills in the Robot Operating System (ROS). The book is titled "Mastering ROS for Robotics Programming", and it is one of the most advanced books on ROS currently available.

This book discusses advanced concepts in robotics and how to implement them using ROS. It starts with a deep overview of the ROS framework, which can give you a clear idea of how ROS really works. Over the course of the book, you will learn how to build models of complex robots, and how to simulate and interface robots using MoveIt! and the ROS navigation stack.

After discussing robot manipulation and navigation, you will get to grips with interfacing I/O boards, sensors, and actuators with ROS.

Vision sensors are one of the essential ingredients of robots, and an entire chapter is dedicated to vision sensors and their interfacing in ROS.

You will also see the hardware interfacing and simulation of complex robots in ROS and ROS-Industrial.

Finally, you will get to know the best practices to follow while programming with ROS.

The book has 12 chapters and 481 pages. The main contents are given below:
  1. Introduction to ROS and its package management
  2. Working with 3D robot modeling in ROS
  3. Simulating robots using ROS and Gazebo
  4. Using the ROS MoveIt! and Navigation stack
  5. Working with Pluginlib, Nodelets, and Gazebo plugins
  6. Writing ROS controllers and visualization plugins
  7. Interfacing I/O boards, sensors, and actuators with ROS
  8. Programming vision sensors using ROS, OpenCV, and PCL
  9. Building and interfacing differential drive mobile robot hardware in ROS
  10. Exploring the advanced capabilities of ROS MoveIt!
  11. ROS for Industrial Robots
  12. Troubleshooting and best practices in ROS
This book is written by Lentin Joseph, the CEO/founder of a robotics startup called Qbotics Labs in India. He is also the author of "Learning Robotics using Python", which is likewise about ROS.

The book uses ROS Indigo installed on the latest Ubuntu LTS, 14.04.3. The code is also compatible with ROS Jade.


The book is designed so that even beginners can take up all topics. If you are a robotics enthusiast or researcher who wants to learn more about building robot applications using ROS, this book is for you. To learn from this book, you should have basic knowledge of ROS, GNU/Linux, and C++ programming concepts. The book will also be good for professionals who want to explore more features of ROS.

The book is published by PACKT, and here are the links to buy the book:



Complete information about the book is available on the book's website:


by Tully Foote on January 13, 2016 09:56 PM

Invitation to the first Danish ROS Meetup
From Karl Damkjær Hansen via ros-users@

The first Danish ROS Meetup:

Thursday February 18th 2016
Aalborg University
Section for Automation and Control
Fredrik Bajers Vej 7, C3-203
9220 Aalborg

At ROSCon 2015, a handful of Danish ROS experts met and decided that it was about time that we had a proper Danish ROS Meetup. These experts came from both industry and academia, and both the southern and the northern part of the country.
We want to invite all Danish ROS users to this meetup. Whether you are a user, aficionado or expert, we want you to come. We will share our experiences with ROS and present our different projects that use ROS and hopefully kick-start a useful Danish ROS network.

Participation is free, just send an email to kdh@es.aau.dk to let us know that you are coming.

CALL FOR PRESENTATIONS
Please consider presenting your work with ROS. This may be anything from a presentation of a project using ROS to a tutorial on a package that you maintain. Please send an email with your topic to kdh@es.aau.dk

PROGRAM
10:30-12:00 Morning session
12:00-13:00 Lunch
13:00-13:30 Tour of the Automation and Control laboratories
13:30-15:30 Afternoon session

We hope to see you in Aalborg in February.
Robotic regards,
Karl Damkjær Hansen

by Tully Foote on January 13, 2016 09:14 PM


Powered by the awesome: Planet