A New Website

Earlier this month I started blogging again. My blog was hosted on blogger.com, which was a pretty advanced platform when I started blogging back in 2008, but it is quite dated now. I had a hell of a time trying to put any code blocks in the blog, which for a software engineer is a real problem.

So this is the new blog. I've ported most of the old content over, except for a few posts that were completely irrelevant (for example, a post which consisted of nothing but a link to a Kickstarter project for Mech Warfare, both of which have since ended).

Porting over all of these posts was actually a somewhat fun exercise; I had forgotten just how much had been posted to this blog over the years. Here are a few highlights:

In setting up this site, I decided not to bother with a comments section, as the comments on the old site have generally been a wasteland for some time. If you come across an issue or are looking for more information, reach out to me either by filing a ticket against this site on GitHub or by messaging me on Twitter.

I've also updated the domain name to reflect that I might actually post some non-robotics things here in the future. In that same spirit, most of the existing posts are now tagged with the new robots tag. I've had this domain name for a while now, and I'm happy to finally put it to use.

Welcome to Robot & Chisel.

Robomagellan Update

This is day 6 of my National Robotics Week blog marathon. See the full set of posts here.

It is really starting to feel like spring here in New Hampshire, so I've been reviving my Robomagellan robot:

The current robot (not shown: Intel D435 sensor)

If you're not familiar with Robomagellan, here is the wikipedia description:
Robomagellan was created by the Seattle Robotics Society and is a small scale autonomous vehicle race in which robots navigate between predefined start and finish points. The start and finish points are usually represented as GPS coordinates and marked by orange traffic cones. In most versions of the competition there are also optional waypoints that the robot can navigate to in order to earn bonus points. The race is usually conducted on mixed pedestrian terrain which can include obstacles such as park benches, curbs, trees, bushes, hills, people, etc.
Unfortunately, there are not many Robomagellan contests happening anymore - but the platform is still a good testbed for me to work on outdoor navigation. I actually started building this robot in 2012, when Robomagellan was quite popular. I then worked on it briefly in 2014 and again in 2018. The GitHub contributions view seems to tell this story quite well:

Contribution timeline for Robomagellan

As with any robot that has been developed sporadically over close to a decade, it has gone through quite a bit of evolution. You can find some of that evolution in the posts tagged robomagellan, but here is a summary:

  • The computer was originally a Turtlebot laptop. It has since been swapped out for an Intel NUC. I've previously posted about how I power the NUC off a 12->19V step-up converter.
  • The original version of the Etherbotix was designed for this robot. The robot now uses the later Etherbotix design with a plug-in motor driver.
  • The robot now has an Adafruit Ultimate GPS v3. That may change in the near future, as I've been looking at setting up an RTK solution here on the farm.
  • The robot originally used a small chip-level IMU on the Etherbotix, but now uses a UM-7 for better results. That said, I never had any luck with the internal UM-7 EKF (even when trying to calibrate it), so there are probably plenty of cheaper options out there.
  • Originally, the main sensor was going to be a UTM-30 on a tilting servo. I've now simplified that to an Intel D435 depth sensor.
  • The robot is still using the original wheels, however I switched from 100:1 gearboxes to 50:1 to get more speed (the 100:1 gearboxes had far more torque than needed - the robot could literally climb a wall).
The robot, as you probably guessed, runs ROS. Specifically, I'm using the following packages:
  • etherbotix_python - these are my drivers for the Etherbotix board. In addition to controlling the motors and providing odometry, this board also acts as a serial->ethernet adapter for the GPS module. The drivers publish into ROS the raw NMEA sentences sent by the GPS.
  • um7 - this is the driver for the UM7 IMU.
  • nmea_navsat_driver - this is used to convert NMEA sentences into a sensor_msgs/NavSatFix message (a toy sketch of this conversion appears after this list).
  • imu_transformer - this is used to transform the IMU data into the base_link frame. My IMU is actually mounted "upside down" so this is super important.
  • imu_filter_madgwick - this is used to track the pose of the IMU. Importantly it fuses the magnetometer information, allowing the IMU to act like a compass for the global EKF.
  • robot_localization - I use two instances of the EKF filter. The first fuses the IMU with the wheel odometry in order to get a good local odometry frame. The second fuses the IMU, wheel odometry and GPS (processed by the navsat_transform_node) into a global odometry.
  • rviz_satellite - not used on the robot, but is an awesome plugin for RVIZ that can download map tiles and display them underneath the robot.
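To make the GPS part of that list a little more concrete, here is a rough sketch of what the NMEA-to-fix conversion boils down to. This is not the actual nmea_navsat_driver code - just a toy parser for a single GGA sentence (the function names are mine and the example sentence is for illustration only); the real driver handles many more sentence types, checksums, and covariance estimation.

# Toy illustration of the NMEA -> latitude/longitude conversion that
# nmea_navsat_driver performs (heavily simplified).

def dm_to_degrees(dm, hemisphere):
    """Convert NMEA ddmm.mmmm (or dddmm.mmmm) plus hemisphere into signed degrees."""
    point = dm.index('.')
    degrees = float(dm[:point - 2])      # everything before the minutes
    minutes = float(dm[point - 2:])      # mm.mmmm
    value = degrees + minutes / 60.0
    return -value if hemisphere in ('S', 'W') else value

def parse_gga(sentence):
    """Pull latitude, longitude, and altitude out of a $GPGGA sentence."""
    fields = sentence.split(',')
    lat = dm_to_degrees(fields[2], fields[3])
    lon = dm_to_degrees(fields[4], fields[5])
    alt = float(fields[9])               # altitude above mean sea level, meters
    return lat, lon, alt

# Example sentence for illustration only.
example = "$GPGGA,123519,4807.038,N,01131.000,E,1,08,0.9,545.4,M,46.9,M,,*47"
print(parse_gga(example))                # -> roughly (48.1173, 11.5167, 545.4)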
Global Localization
Setting up the global localization took me a little while to get working. To make this process easier, I set up my main launch file with an "offline_mode" argument that skips launching the drivers. I also have a separate launch file, used when recording bagfiles, that runs only the drivers. I can then change everything in my various pipelines when re-running the bagfiles locally. This has been quite useful as I've been tweaking the IMU processing pipeline in parallel with adding the global EKF.

Satellite Imagery in RVIZ
Visualization is always a powerful tool. While RVIZ doesn't offer much for outdoor robots out of the box, the rviz_satellite plugin makes it awesome.

rviz_satellite overlay with some odometry tracks

The one challenging part of rviz_satellite is setting the "Object URI". For an off-road robot, the default OpenStreetMap tiles don't show much. I ended up using MapBox satellite imagery - but getting the Object URI right took a bit of digging around. It turns out the correct URI is:
https://api.mapbox.com/styles/v1/mapbox/satellite-v9/tiles/256/{z}/{x}/{y}?access_token=XYZ
Also, free accounts with MapBox are limited to 200k tile requests per month. To avoid using these up, you might want to think about running a separate roscore so you can keep RVIZ running even when you restart the robot launch file. That said, I've only used 148 tile requests this month and have been restarting RVIZ quite a bit.
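For the curious, the {z}/{x}/{y} placeholders in that URI are the standard "slippy map" tile coordinates used by most tile servers. Here is a quick sketch (not rviz_satellite's actual code) of how a latitude/longitude and zoom level turn into a tile request - the coordinates below are just placeholder values:

import math

def latlon_to_tile(lat_deg, lon_deg, zoom):
    # Standard Web Mercator "slippy map" tiling used by {z}/{x}/{y} tile URIs.
    n = 2 ** zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(math.radians(lat_deg))) / math.pi) / 2.0 * n)
    return x, y

# Placeholder coordinates (roughly southern New Hampshire) at zoom level 19.
x, y = latlon_to_tile(42.9, -71.5, 19)
print("https://api.mapbox.com/styles/v1/mapbox/satellite-v9/tiles/256/19/%d/%d?access_token=XYZ" % (x, y))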

Next Steps
I just recently got the global localization working - I'm probably going to continue to tweak things. The D435 drivers are working pretty reliably now, so the next step is to mount the D435 on the robot, start integrating the data, and move on to some basic navigation. I also plan to clean up the IMU calibration code I created and get it merged into robot_calibration.

Robot Calibration for ROS

This is day 5 of my National Robotics Week blog marathon. See the full set of posts here.

Uncalibrated vs. Calibrated

Calibration is an essential step in most robotics applications. Robots have many things that need to be calibrated:

  • Camera intrinsics - basically determining the parameters for the pinhole camera model (a small sketch of that projection follows this list). On some RGBD (3d) cameras, this also involves estimating other parameters used in their projections. This is usually handled by exporting YAML files that are loaded by the drivers and broadcast on the device's camera_info topic.
  • Camera extrinsics - where the camera is located. This often involves updating the URDF to properly place the camera frame.
  • Joint offsets - small errors in the zero position of a joint can cause huge displacement in where the arm actually ends up (a 1 degree offset at the shoulder of a 1 meter arm shifts the end of the arm by about 17mm). This is usually handled by a "calibration_rising" flag in the URDF.
  • Wheel rollout - for good odometry, you need to know how far you really have travelled. If your wheels wear down over time, that has to be taken into account.
  • Track width - on differential-drive robots, the distance between your drive wheels is an essential value to know for good odometry when turning.
  • IMU - when fusing wheel-based odometry with a gyro, you want to make sure that the scale of the gyro values is correct. The gyro bias is usually estimated online by the drivers, rather than given a one-time calibration. Magnetometers in an IMU also need to be calibrated.
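As a concrete example of the first item above, the pinhole intrinsics are just a handful of numbers - the focal lengths and principal point (plus distortion coefficients, omitted here). A minimal sketch of the projection they describe, with made-up values:

# Minimal pinhole projection: the intrinsics being calibrated are the focal
# lengths (fx, fy) and principal point (cx, cy) used in this mapping.
# Distortion coefficients (also part of camera_info) are omitted for brevity.

def project(point_xyz, fx, fy, cx, cy):
    """Project a 3D point in the camera frame onto pixel coordinates (u, v)."""
    x, y, z = point_xyz
    u = fx * x / z + cx
    v = fy * y / z + cy
    return u, v

# Hypothetical intrinsics for a 640x480 sensor.
print(project((0.1, -0.05, 1.0), fx=525.0, fy=525.0, cx=319.5, cy=239.5))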
My robot_calibration package can do all of these except wheel rollout and magnetometer calibration (although the magnetometer calibration will be coming soon).
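On the magnetometer side, the most common correction is the hard-iron offset: raw readings traced out while rotating the sensor should sit on a sphere centered at the origin, and any constant bias shifts that sphere's center. This is not the code that will end up in robot_calibration - just a hedged sketch of one standard least-squares way to estimate the offset, using synthetic data:

import numpy as np

def hard_iron_offset(samples):
    """Estimate the hard-iron offset from raw magnetometer samples (N x 3).

    Fits the sphere x^2 + y^2 + z^2 = 2*a*x + 2*b*y + 2*c*z + d by linear
    least squares; the sphere center (a, b, c) is the offset to subtract.
    """
    A = np.hstack([2.0 * samples, np.ones((samples.shape[0], 1))])
    b = np.sum(samples ** 2, axis=1)
    solution, *_ = np.linalg.lstsq(A, b, rcond=None)
    return solution[:3]

# Synthetic test: readings on a sphere centered at a known offset.
rng = np.random.default_rng(0)
directions = rng.normal(size=(500, 3))
directions /= np.linalg.norm(directions, axis=1, keepdims=True)
true_offset = np.array([12.0, -7.5, 3.0])
readings = 45.0 * directions + true_offset   # ~45 uT field plus hard-iron bias
print(hard_iron_offset(readings))            # should be close to true_offset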

Evolution of ROS Calibration Packages
There are actually quite a few packages out there for calibrating ROS-based robots. The first package was probably pr2_calibration developed at Willow Garage. While the code inside this package isn't all that well documented, there is a paper describing the details of how it works: Calibrating a multi-arm multi-sensor robot: A Bundle Adjustment Approach.

In basic terms, pr2_calibration works by putting checkerboards in the robot's grippers, moving the arms and head to a large number of poses, and then estimating various offsets that minimize the reprojection errors through the two different chains (we can estimate where the checkerboard points are through the checkerboard's connection with the arm versus what the camera sees). Nearly all of the available calibration packages today rely on similar strategies.

One of the earliest robot-agnostic packages was calibration. One of my last projects at Willow Garage, before joining their hardware development team, was to make pr2_calibration generic; the result of that effort is the calibration package. The downside of both this package and pr2_calibration is that they are horribly slow. For the PR2, we needed many, many samples - getting all those samples often took 25 minutes or more. The optimizer that ran over the data was also slow, adding another 20 minutes. Sadly, even after 45 minutes, the calibration failed quite often. At the peak of Willow Garage, when we often had 20+ summer interns in addition to our large staff, typically only 2-3 of our 10 robots were calibrated well enough to actually use for grasping.

After Willow Garage, I tried a new approach using Google's Ceres solver to rebuild a new calibration system. The result was the open source robot_calibration package. This package is used today on the Fetch robot and others.

What robot_calibration Can Do
The robot_calibration package is really intended to be an all-inclusive calibration system. Currently, it mainly supports 3d sensors. It does take some time to set up for each robot since the system is so broad - I'm hoping to eventually create a wizard/GUI like the MoveIt Setup Assistant to handle this.

There are two basic steps to calibrating any robot with robot_calibration: first, we capture a set of data, which mainly includes point clouds from our sensors, joint_states data of our robot poses, and some TF data. Then we do the actual calibration step by running that data through our optimizer to generate corrections to our URDF, and possibly also to our camera intrinsics.
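To give a feel for what that optimization is doing, here is a very stripped-down sketch. It is not robot_calibration's actual code (which uses Ceres and real kinematic chains) - just a 2D toy in which points located through the "arm" chain are also observed by a camera with an unknown mounting offset, and we recover that offset by minimizing the disagreement between the two chains:

import numpy as np
from scipy.optimize import least_squares

# Toy version of the calibration idea: points whose position we "know" through
# the arm chain are also observed by a camera whose mounting offset (x, y, yaw)
# is wrong in the URDF. We solve for the offset that makes the two chains agree.

def camera_from_world(points, x, y, yaw):
    """Transform world-frame points into the camera frame for a given camera pose."""
    c, s = np.cos(yaw), np.sin(yaw)
    rot = np.array([[c, s], [-s, c]])            # world -> camera rotation
    return (points - np.array([x, y])) @ rot.T

true_pose = (0.20, 0.05, 0.10)                   # actual camera mount (m, m, rad)
arm_points = np.random.default_rng(1).uniform(0.5, 1.5, size=(30, 2))
observations = camera_from_world(arm_points, *true_pose)

def residuals(params):
    """Disagreement between what the camera saw and what the arm chain predicts."""
    predicted = camera_from_world(arm_points, *params)
    return (predicted - observations).ravel()

initial_guess = [0.0, 0.0, 0.0]                  # what the uncalibrated URDF claims
result = least_squares(residuals, initial_guess)
print(result.x)                                  # recovers roughly (0.20, 0.05, 0.10)

The real system does the same kind of thing with far more parameters at once - joint offsets, sensor poses, and camera intrinsics - and with 3D observations of the calibration target.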


One of the major benefits of the system is its reliability and speed. On the Fetch robot, we only needed to capture 100 poses of the arm/head to calibrate the system. This takes only 8 minutes, and the calibration step typically takes less than a minute. One of my interns, Niharika Arora, ran a series of experiments in which we reduced the number of poses to 25, meaning that capture and calibration took only three minutes - with a less than 1% failure rate. Niharika gave a talk on robot_calibration at ROSCon 2016 and you can see the video here. We also put together a paper (that was sadly not accepted to ICRA 2016) which contains more details on those experiments and how the system works [PDF].

In addition to the standard checkerboard targets, robot_calibration also works with LED-based calibration targets. The four LEDs in the gripper flash a pattern allowing the robot to automatically register the location of the gripper:

LED-based calibration target.

One of the coolest features of robot_calibration is that it is very accurate at determining joint zero angles. Because of this, we did not need fancy jigs or precision-machined endstops to set the zero positions of the arm. Technicians can just eyeball the zero angle and then let calibration do the rest.

There is quite a bit of documentation in the README for robot_calibration.

Alternatives
I fully realize that robot_calibration isn't for everyone. If you've got an industrial arm that requires no calibration and just want to align a single sensor to it, there are probably simpler options.