Some people collect classic cars. I tend to collect classic robots. For a long time I’ve
preserved many of my robots - or at least the head of the robot if I needed to reuse the
majority of the components. I also bought a PR2 head during my time
at Willow Garage. Recently I added the best item to this collection.
A couple of weeks ago, someone from the Homebrew Robotics Club
posted a link to a Craigslist ad for a UBR-1 robot in Tracy, CA. I immediately reached out to
the seller to find out more. A few days later the robot was on its way to NH in its bright
orange case.
Upon arrival in NH, I quickly unpacked the robot. The skins were quite dirty, and a number of
fasteners were missing, so I stripped all the skins off the robot to check on the insides.
Removing the batteries, I found they were at only 0.5V each. I had figured they would be garbage
since they were eight years old and they had been left plugged in (I’m not sure, but I suspect
there is always a small draw on the batteries when plugged into the main board). New batteries
were ordered and the cabling was moved over. I manually charged the batteries to balance them
and then installed them into the robot.
I unplugged all the computer and motor power cables coming off the mainboard before starting
the robot up. Once satisfied that the power rails were coming up nicely I plugged the computer
back in. And it booted. And gravity compensation on the arm started working as soon as I
released the runstop.
The joystick batteries were completely dead and wouldn't recharge, so I switched to keyboard
teleop to test the base and head. I tested the arm with an older controller test script that
was part of the UBR-1 preview repo.
The scanning laser was also missing on this robot (I vaguely recall we had a loaner laser that
had to go back when Unbounded shut down). Luckily I have some spare lasers here.
A few other things needed to be cleaned up before the skins were put back on. While removing the
scanning laser, I found that lots of tie straps were missing on the cabling near the right side drive
motor (see image below). Thermistors on the base motors had also come undone, but were
quickly fixed with some new Kapton tape.
I then backed up the contents of the hard drive and swapped it out for a new drive so I could
update from 14.04/Indigo. I set up the new drive to dual boot both 18.04 and 20.04 since I
wasn't sure how well Noetic was going to run. Surprisingly, it wasn't too bad. I got the
drivers all updated and ready to go before World MoveIt Day.
During World MoveIt Day I got MoveIt working on the robot and then moved on to testing
robot_calibration; both packages needed a few updates for changes in underlying dependencies
and the move to Python 3. By mid-day I had managed to calibrate the robot.
I’m using Ansible to deploy most of the robot setup.
I’ve still got some work to get grasping demos updated and running on the robot, then
I’m hoping to revive the chess playing demo
although that code hasn’t been run since Hydro.
I’ve been making steady progress on the robomagellan robot. In my previous post, I detailed
how I calibrated the magnetometer on the robot in preparation for using the robot_localization
package to fuse the wheel odometry, IMU, and GPS data. This post discusses the actual use of
robot_localization.
Coordinate Frames
Before diving into how the robomagellan robot is localizing, we should explore what the
coordinate frames look like. REP 105 specifies
the standard conventions for mobile robot coordinate frames in ROS.
For an indoor robot, the coordinate frames are pretty straightforward. The “center” of the robot
is base_link. Your odometry source (typically wheel encoders, often merged with an IMU) is used
to generate an odom frame. Then you have a map of the building and a program such as
AMCL can use laser scan data to align the robot with the map,
publishing a correction in the form of a transformation from odom to map frame. The map frame
is arbitrary, but fixed in reference to the map of the building and set at the time the map was
built. Goal poses are typically specified in the map frame, since it is consistent over time
and over reboots of the robot.
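As a concrete example, the robot's pose in the map frame is just the composition of the
map->odom and odom->base_link transforms, which tf2 chains for you. Here's a minimal rospy
sketch of looking that up (assuming AMCL or similar is publishing the map->odom correction):

#!/usr/bin/env python3
# Minimal sketch: look up the robot pose in the map frame with tf2.
# Assumes something (e.g. AMCL) is publishing the map->odom correction.
import rospy
import tf2_ros

rospy.init_node("where_am_i")
tf_buffer = tf2_ros.Buffer()
tf_listener = tf2_ros.TransformListener(tf_buffer)

rate = rospy.Rate(1.0)
while not rospy.is_shutdown():
    try:
        # tf2 chains map->odom and odom->base_link for us
        t = tf_buffer.lookup_transform("map", "base_link", rospy.Time(0))
        p = t.transform.translation
        rospy.loginfo("Robot at (%.2f, %.2f) in map frame", p.x, p.y)
    except (tf2_ros.LookupException, tf2_ros.ConnectivityException,
            tf2_ros.ExtrapolationException):
        pass  # transform not yet available
    rate.sleep()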
For an outdoor robot, it ends up being more complex. The base_link and odom frames are the
same as they were indoors. The map frame origin is less well defined. REP-105 states:
Map coordinate frames can either be referenced globally
or to an application specific position.
With robot_localization, the map frame origin is wherever the robot odometry started. That
is a bit different for those of us who mainly use indoor robots where the map frame is
consistent from reboot to reboot.
A pair of ROS services are offered which can convert between GPS coordinates and map coordinates.
Internally, these services track the location of the map frame in UTM coordinates. The
Universal Transverse Mercator (UTM)
system splits the world up into a series of coordinate grids. These grids are very large and
so you don’t often want to do your calculations in UTM coordinates, hence the local map frame
which is located wherever the robot started up. There is an option to publish the utm frame,
but it appears rarely used.
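In recent versions of robot_localization those services are fromLL and toLL, served by
navsat_transform_node. A minimal sketch of converting a GPS waypoint into the map frame
(the waypoint coordinates here are made up; double-check the service names against the
version you're running):

#!/usr/bin/env python3
# Minimal sketch: convert latitude/longitude into the local map frame
# via the fromLL service from robot_localization's navsat_transform_node.
import rospy
from geographic_msgs.msg import GeoPoint
from robot_localization.srv import FromLL

rospy.init_node("gps_to_map")
rospy.wait_for_service("fromLL")
from_ll = rospy.ServiceProxy("fromLL", FromLL)

waypoint = GeoPoint(latitude=42.9956, longitude=-71.4548, altitude=0.0)  # made-up point
resp = from_ll(ll_point=waypoint)
rospy.loginfo("Waypoint in map frame: (%.2f, %.2f)", resp.map_point.x, resp.map_point.y)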
Global Localization
“Global” localization is pretty easy with indoor mobile robots based on ROS since it really
just consists of finding the robot pose in the map frame. You simply merge an
odometry source with your laser scan and a map using AMCL. There are quite a few parameters,
but the defaults work pretty well out of the box. Things get a bit more complicated when you go
outdoors.
People often improve their odometry source by merging the wheel encoder odometry with an IMU.
The robot_localization package
offers an Extended Kalman Filter (EKF) that can do this for both indoor and outdoor robots.
While the EKF does not take GPS data directly, the robot_localization package also offers
the navsat_transform_node that can convert GPS data into an odometry stream.
The node subscribes to your gps/fix topic and then outputs an odometry source that encodes
the robot pose in the map frame. Internally, it tracks the transformation between the UTM
coordinates and the map frame.
Navsat Transform Node
The navsat_transform_node also subscribes to two other topics, your IMU and the
odometry output from the EKF. The node only needs this data until it gets a first GPS fix.
The IMU part is easy to understand - GPS does not contain heading information and so the
node needs to get that from the IMU in order to determine the transformation between UTM and
map.
The circular subscription (that the transform node is subscribing to the odometry output by
the EKF, and the EKF is subscribing to the odometry output by the transform node) is probably
the least understood aspect of robot_localization – probably half of all the related
questions on answers.ros.org are about this. The reasoning is as follows:
since the map frame is located where the robot started, we need to know how far we have
traveled from the start when we finally get a first GPS fix. If you don’t move the robot at
all before you get a valid fix, you really wouldn’t need this odometry source in
navsat_transform_node.
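A toy example of the arithmetic (made-up numbers, and ignoring rotation, which in reality
comes from the IMU heading):

# Toy arithmetic, made-up numbers, rotation ignored for simplicity.
# The map origin is wherever odometry started, so the UTM position of
# that origin is the first fix *minus* the distance already traveled.
utm_first_fix = (358004.0, 4780001.0)  # hypothetical easting/northing at first fix
odom_first_fix = (4.0, 1.0)            # robot had already driven 4m east, 1m north

utm_map_origin = (utm_first_fix[0] - odom_first_fix[0],
                  utm_first_fix[1] - odom_first_fix[1])
print(utm_map_origin)  # (358000.0, 4780000.0): the map frame's location in UTM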
In setting up the launch files for the EKF, I specifically annotated the somewhat confusing
subscription and publication topics. My
launch file
for the EKF pipeline basically looks like this:
<!-- Global frame localization -->
<node name="ekf_global_odom" pkg="robot_localization" type="ekf_localization_node">
  <rosparam command="load" file="$(find robomagellan)/config/ekf_global.yaml"/>
  <!-- Subscriptions (in yaml)
         odom0: odometry/gps
         odom1: base_controller/odom
         imu0: imu/data -->
  <!-- Publications -->
  <remap from="odometry/filtered" to="odometry/global"/>
</node>

<!-- Integrating GPS -->
<node name="navsat_transform_node" pkg="robot_localization" type="navsat_transform_node" output="screen">
  <!-- Parameters truncated - see full file on GitHub -->
  <!-- Subscriptions -->
  <remap from="imu/data" to="imu/data"/>
  <remap from="gps/fix" to="gps/fix"/>
  <remap from="odometry/filtered" to="odometry/global"/>
  <!-- Publications -->
  <remap from="gps/filtered" to="gps/filtered"/>
  <remap from="odometry/gps" to="odometry/gps"/>
</node>
Setting Up Robot Localization
Parts of the EKF setup are pretty straightforward. The odometry/gps source that comes from
the navsat_transform_node gives us an absolute estimate of x and y coordinates
derived from the GPS data. The base_controller/odom source gives us differential estimates of
x, y and yaw derived from the wheel encoders.
How to fuse the IMU data is less intuitive. Since the IMU is processed by the
imu_filter_madgwick node, we know it can give an absolute estimate of roll, pitch,
yaw. The absolute orientation is quite important, since no other sensor in our system can
give us this important information. The filter also removes bias from the gyro and accelerometers,
so we can use the IMU data for differential estimates of roll, pitch, and yaw as well as
accelerations of x, y, and z. I chose not to use the accelerations though, since leaving them
out seems to give better results.
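Put together, the fused dimensions look roughly like the following. This is only an
illustration written as Python lists; in the real ekf_global.yaml these are the
odomN_config/imu0_config boolean vectors, and the exact absolute-vs-velocity split may
differ from my actual file:

# Illustration only: which dimensions each source contributes to the EKF.
# robot_localization config vector order:
# [x, y, z, roll, pitch, yaw,
#  vx, vy, vz, vroll, vpitch, vyaw,
#  ax, ay, az]
T, F = True, False

odom0_config = [T, T, F, F, F, F,    # odometry/gps: absolute x, y
                F, F, F, F, F, F,
                F, F, F]

odom1_config = [F, F, F, F, F, F,    # base_controller/odom: x, y, yaw
                T, T, F, F, F, T,    # fused as velocities (differential)
                F, F, F]

imu0_config = [F, F, F, T, T, T,     # imu/data: absolute roll, pitch, yaw
               F, F, F, T, T, T,     # plus angular velocities
               F, F, F]              # accelerations intentionally off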
There are numerous notes and warnings that if you are merging two sources of rotation/orientation
data you have to make sure the covariances are well set. While I’m not 100% confident in the
covariance of the wheel encoder odometry source, it seems to be merging fine with the IMU.
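For reference, those covariances live directly on the messages being fused; here's a sketch
of stamping a yaw variance onto the wheel odometry (the values are placeholders, not my
tuned numbers):

# Sketch: setting pose covariance on a wheel odometry message.
# pose.covariance is a row-major 6x6 over (x, y, z, roll, pitch, yaw),
# so index 0 is x-x, 7 is y-y, and 35 is yaw-yaw.
from nav_msgs.msg import Odometry

odom = Odometry()
cov = [0.0] * 36
cov[0] = 0.01    # x variance (placeholder)
cov[7] = 0.01    # y variance (placeholder)
cov[35] = 0.05   # yaw variance; tune relative to the IMU's orientation covariance
odom.pose.covariance = cov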
The above image shows localization during a test run. The red dots are the raw GPS data. The blue
dots are the output of the EKF. I’m downsampling here to 1 meter spacing between dots.
It’s worth noting that the robot actually stayed on the gravel-colored paths, so the raw GPS is
pretty far off (especially the track on the left that swings way out into the grass). The
datasheet for the MTK3339-based GPS states that the positional accuracy is 2.5 meters (50% CEP),
which is quite a bit worse than the Garmin 18x referenced in the robot_localization paper.
At this point, I have decent global localization given how bad the GPS signal is.
The next step is going to be replacing the GPS module with one that supports Real Time Kinematics
(RTK). The cost of these modules has come down a great deal and so it makes total sense to upgrade
to this. There really aren’t any publicly accessible RTK base stations here in New Hampshire, so
I’ll also be setting up a base station.
Even while I wait for the new GPS modules to show up, I plan to make some progress on navigation
since the localization is still pretty decent within the map frame, and I can temporarily shift
the goal poses for testing.
I’ve made quite a bit of progress on the robomagellan robot recently. Over the past week
I’ve installed the Realsense D435 camera, set up a URDF, and made some serious
progress towards global localization.
Absolute Heading Reference
I’m using an EKF from the
robot_localization package
for outdoor localization. This package can merge the wheel odometry, IMU and GPS data together
into a cohesive global pose and odometry. I’ll post more details about how I’m using
robot_localization in a future post once I finish tuning things.
Since GPS data doesn’t include heading, robot_localization requires an IMU or odometry source
with absolute heading information. While some higher end IMUs may put this information out
directly, none of the IMUs I have offer a reliable absolute heading.
Today’s IMUs typically include a three-axis accelerometer, gyro and magnetometer.
Most robotics people are pretty familiar with gyros and accelerometers, but less so with
magnetometers, especially if they have primarily worked on indoor mobile robots. Magnetometers
measure the geomagnetic field. If there were no disturbances and our magnetometer was
accurately calibrated, then plotting the measured field vector while rotating the sensor around
would trace out a sphere with a radius equal to the local field strength. Projecting the field
vector onto the horizontal plane gives us the equivalent of a compass bearing.
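Assuming a level, calibrated sensor, that projection reduces to a two-argument arctangent.
A sketch (the sign convention depends on how the sensor is mounted, and true north
additionally needs the local magnetic declination):

import math

def compass_bearing(mx, my, declination=0.0):
    # Heading from a level, calibrated magnetometer (sketch).
    # Assumes a ROS-style body frame (x forward, y left, z up); flip
    # signs to match your mounting. Add the local magnetic declination
    # to get true rather than magnetic north.
    return math.atan2(-my, mx) + declination  # radians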
The imu_filter_madgwick ROS package is used to merge the accelerometer, gyro, and
magnetometer readings into an orientation estimate which we can feed into the EKF. Unfortunately,
the geomagnetic field is not very strong so calibration is quite important. This filter is
able to take in a magnetometer bias, which corresponds to what is often called the
hard iron offset. To get this bias vector I wrote a magnetometer_calibration
node in the robot_calibration package.
This calibration node rotates the robot around slowly while recording the magnetometer
values. It then tries to fit them to a sphere. It outputs the center of that sphere (which
corresponds to our bias or hard iron offset). It doesn’t currently do the more complex
soft iron calibration since there isn’t a good way to merge that into the pipeline anyway.
I’m planning to soon implement both the soft iron calibration and the extensions to the
filter node so it can use that calibration.
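The sphere fit itself reduces to linear least squares: expanding |p - c|^2 = r^2 gives
2 p·c + (r^2 - |c|^2) = |p|^2, which is linear in the center c. A small numpy sketch of
that idea (illustrative only, not the actual robot_calibration implementation):

import numpy as np

def fit_sphere(points):
    # Least-squares sphere fit (illustrative, not robot_calibration's code).
    # Expanding |p - c|^2 = r^2 gives 2*p.c + (r^2 - |c|^2) = |p|^2,
    # linear in the unknowns (cx, cy, cz, k) with k = r^2 - |c|^2.
    p = np.asarray(points)                      # N x 3 magnetometer samples
    A = np.hstack([2.0 * p, np.ones((len(p), 1))])
    b = np.sum(p * p, axis=1)
    sol, *_ = np.linalg.lstsq(A, b, rcond=None)
    center = sol[:3]                            # hard iron offset (mag_bias)
    radius = np.sqrt(sol[3] + center @ center)  # local field strength
    return center, radius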
The mag_bias parameters can then be passed to the imu_filter_madgwick node
as parameters (along with the use_mag parameter set to true). I still need to add
documentation on the new magnetometer calibration node, but the basics are:
Topics (remap these to your actual topics):
/imu/mag - the magnetometer data. Should be a sensor_msgs/MagneticField message.
/cmd_vel - the commands to the mobile base. geometry_msgs/Twist.
Parameters:
rotation_velocity - this is the angular velocity to command. Should be relatively
low, but has to be high enough that your robot actually rotates. Defaults to 0.2 rad/s.
rotation_duration - the robot has to make at least one full revolution in place. If
your robot is really slow, this might need to be longer. Defaults to 60 seconds.
The node also has some other modes, such as manual rotation (in case your IMU isn’t actually
mounted in a robot yet). Right now the best source of documentation is the code itself. I plan
to add documentation to the robot_calibration README later this week and then get a new release
pushed to debians. UPDATE: documentation is now in the
README.
Next Steps
I’m still working on tuning the EKF parameters. I’ve also started to write some GPS-waypoint
navigation code. Both of these topics will be the subject of upcoming posts.