21 Jun 2022
ubr1
robots
ros2
It has been a while since I’ve posted to the blog, but lately I’ve actually been working
on the UBR-1 again after a somewhat long hiatus. In case you missed the earlier posts in
this series, you can find them under the ubr1 tag.
ROS2 Humble
The latest ROS2 release came out just a few weeks ago. ROS2 Humble targets Ubuntu 22.04
and is also a long term support (LTS) release, meaning that both the underlying Ubuntu
operating system and the ROS2 release get a full 5 years of support.
Since installing operating systems on robots is often a pain, I only use the LTS releases
and so I had to migrate from the previous LTS, ROS2 Foxy (on Ubuntu 20.04). Overall, there
aren’t many changes to the low-level ROS2 APIs as things are getting more stable and mature.
For some higher level packages, such as MoveIt2 and Navigation2, the story is a bit different.
Visualization
One of the nice things about the ROS2 Foxy release was that it targeted the same operating
system as the final ROS1 release, Noetic. This allowed users to have both ROS1 and ROS2
installed side-by-side. If you’re still developing in ROS1, that means you probably don’t
want to upgrade all your computers quite yet. While my robot now runs Ubuntu 22.04, my
desktop is still running 18.04.
Therefore, I had to find a way to visualize ROS2 data on a computer that did not have the
latest ROS2 installed. Initially I tried Foxglove Studio, but didn’t have any luck
with things actually connecting using the native ROS2 interface (the rosbridge-based
interface did work). Foxglove is certainly interesting, but so far it’s not really
an RVIZ replacement - they appear to be more focused on offline data visualization.
I then moved on to running rviz2 inside a docker environment, which works
well when using the rocker tool:
sudo apt-get install python3-rocker
sudo rocker --net=host --x11 osrf/ros:humble-desktop rviz2
If you are using an NVIDIA card, you’ll need to add --nvidia along with --x11.
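For example, on a machine with an NVIDIA card the rocker invocation would look something like this:
sudo rocker --net=host --x11 --nvidia osrf/ros:humble-desktop rviz2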
To properly visualize and interact with my UBR-1 robot, I needed to add the
ubr1_description
package to my workspace to get the meshes and my rviz configurations.
To accomplish this, I created my own docker
image, largely based on the underlying ROS docker images:
ARG WORKSPACE=/opt/workspace
FROM osrf/ros:humble-desktop
# install build tools
RUN apt-get update && apt-get install -q -y --no-install-recommends \
python3-colcon-common-extensions \
git-core \
&& rm -rf /var/lib/apt/lists/*
# get ubr code
ARG WORKSPACE
WORKDIR $WORKSPACE/src
RUN git clone https://github.com/mikeferguson/ubr_reloaded.git \
&& touch ubr_reloaded/ubr1_bringup/COLCON_IGNORE \
&& touch ubr_reloaded/ubr1_calibration/COLCON_IGNORE \
&& touch ubr_reloaded/ubr1_gazebo/COLCON_IGNORE \
&& touch ubr_reloaded/ubr1_moveit/COLCON_IGNORE \
&& touch ubr_reloaded/ubr1_navigation/COLCON_IGNORE \
&& touch ubr_reloaded/ubr_msgs/COLCON_IGNORE \
&& touch ubr_reloaded/ubr_teleop/COLCON_IGNORE
# install dependencies
ARG WORKSPACE
WORKDIR $WORKSPACE
RUN . /opt/ros/$ROS_DISTRO/setup.sh \
&& apt-get update && rosdep install -q -y \
--from-paths src \
--ignore-src \
&& rm -rf /var/lib/apt/lists/*
# build ubr code
ARG WORKSPACE
WORKDIR $WORKSPACE
RUN . /opt/ros/$ROS_DISTRO/setup.sh \
&& colcon build
# setup entrypoint
COPY ./ros_entrypoint.sh /
ENTRYPOINT ["/ros_entrypoint.sh"]
CMD ["bash"]
The image derives from humble-desktop and then adds the build tools and clones
my repository. I then ignore the majority of packages, install dependencies and
then build the workspace. The ros_entrypoint.sh
script handles
sourcing the workspace configuration.
#!/bin/bash
set -e
# setup ros2 environment
source "/opt/workspace/install/setup.bash"
exec "$@"
I could then create the docker image and run rviz inside it:
docker build -t ubr:main .
sudo rocker --net=host --x11 ubr:main rviz2
The full source of these docker configs is in the
docker folder
of my ubr_reloaded
repository. NOTE: The updated code in the
repository also adds a late-breaking change to use CycloneDDS as I’ve had
numerous connectivity issues with FastDDS that I have not been able to debug.
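For anyone wanting to try the same swap without editing any launch files, the RMW layer can also be selected at runtime with an environment variable. This is a minimal sketch, assuming the CycloneDDS RMW package is installed from Debians:
# install the CycloneDDS RMW and select it for ROS2 processes in this shell
sudo apt install ros-humble-rmw-cyclonedds-cpp
export RMW_IMPLEMENTATION=rmw_cyclonedds_cpp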
Visualization on MacOSX
I also frequently want to be able to interact with my robot from my Macbook. While I previously
installed ROS2 Foxy on my Intel-based Macbook,
the situation has changed quite a bit now that MacOSX has been downgraded to Tier 3 support and the new
Apple M1 silicon (and Apple’s various other locking mechanisms) makes it harder and harder
to set up ROS2 directly on the Macbook.
As with the Linux desktop, I tried out Foxglove; however, it is a bit limited on Mac. The
MacOSX environment does not allow opening the required ports, so the direct ROS2 topic
streaming does not work and you have to use rosbridge. I found I was able to visualize
certain topics, but that switching between topics frequently broke.
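For reference, the rosbridge-based connection just needs the rosbridge server running on the robot side; a minimal sketch, assuming the rosbridge suite is installed from Debians:
# on the robot: install and start the websocket server that Foxglove connects to
sudo apt install ros-humble-rosbridge-suite
ros2 launch rosbridge_server rosbridge_websocket_launch.xml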
At this point, I was about to give up, until I noticed that Ubuntu 22.04 arm64 is a Tier 1
platform for ROS2 Humble. I proceeded to install the arm64 version of Ubuntu inside Parallels
(Note: I was cheap and initially tried to use the VMWare technology preview, but was unable
to get the installer to even boot). There are a few tricks here as there is no arm64 desktop
installer, so you have to install the server edition and then upgrade it to a desktop. There
is a detailed description of this workflow on askubuntu.com.
Installing ros-humble-desktop
from arm64 Debians was perfectly easy.
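The install itself is just the standard apt-based setup from the ROS2 documentation; roughly (repository key and URL as documented at docs.ros.org):
# add the ROS2 apt repository, then install the desktop variant
sudo curl -sSL https://raw.githubusercontent.com/ros/rosdistro/master/ros.key \
    -o /usr/share/keyrings/ros-archive-keyring.gpg
echo "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/ros-archive-keyring.gpg] http://packages.ros.org/ros2/ubuntu $(. /etc/os-release && echo $UBUNTU_CODENAME) main" \
    | sudo tee /etc/apt/sources.list.d/ros2.list
sudo apt update && sudo apt install ros-humble-desktop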
rviz2
runs relatively quickly inside the Parallels VM, but overall it was not
quite as quick or stable as using rocker
on Ubuntu. However, it is really nice
to be able to do some ROS2 development when traveling with only my Macbook.
Migration Notes
Note: each of the links in this section is to a commit or PR that implements the discussed
changes.
In the core ROS API, there are only a handful of changes - and most of them are actually
simply fixing potential bugs. The logging macros have been updated for security purposes
and require c-strings like the old ROS1 macros did. Additionally the macros are now
better at detecting invalid substitution strings. Ament has also gotten better at
detecting missing dependencies. The updates I made to
robot_controllers
show just how many bugs were caught by this stricter checking.
image_pipeline
has had some minor updates since Foxy, mainly to improve
consistency between plugins and so I needed to
update some topic remappings.
Navigation has the most updates. amcl
model type names have been
changed
since the models are now plugins. The API of costmap layers has changed significantly,
and so a number of
updates were required
just to get the system started. I then made a more detailed pass through the
documentation and
found a few more issues and improvements with my config,
especially around the behavior tree configuration.
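To make the amcl change concrete: the old model type strings become plugin names. A sketch of the relevant parameter (plugin name taken from the current Navigation2 docs; use the omni variant instead if your base is holonomic):
amcl:
  ros__parameters:
    # previously a plain string such as "differential"
    robot_model_type: "nav2_amcl::DifferentialMotionModel"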
I also decided to do a proper port of
graceful_controller to ROS2,
starting from the latest ROS1 code, since a number of improvements had been made in the
year since my original ROS2 port.
Next Steps
There are still a number of new features to explore with Navigation2, but my immediate
focus is going to shift towards getting MoveIt2 setup on the robot, since I can’t easily
swap between ROS1 and ROS2 anymore after upgrading the operating system.
31 Dec 2020
beer
chisels
robots
This year has been quite different than I think most people would have expected. I haven’t
traveled since February, which makes this year the least I’ve traveled in probably a decade.
This allowed me to make some progress on a number of projects, including restarting this
blog.
The UBR-1
Probably the most visible project for the year was buying a UBR-1 robot, restoring it, and
upgrading it to ROS2. There are quite a few posts you can find under the
ubr1
tag.
In 2021, I’m planning to finally make some progress on running MoveIt2 on the UBR-1.
Botfarm Rev. A
We refer to my farm in New Hampshire as “The Botfarm”, since I mainly grow robots here.
Since I wasn’t going to be traveling this year, it seemed like a great time to actually
start a garden as well.
We fenced off about 2500 square feet for this first garden, although we only ended up planting
about half that space. The fence is seven-foot-tall plastic mesh, which seems to have worked
pretty well since no deer got in (and they are everywhere here). The
fancy sign over the gate pretty much constitutes my only woodworking project of the year:
As it turned out, getting seeds for some things was really quite a challenge for the last-minute
garden project. In the end, we ended up growing:
- Zucchini - 117.25 kg total from a single 50 foot row of plants. Direct seeded.
- Squash - 29.7 kg from a half row. Direct seeded.
- Cucumbers - 12.8 kg. We planted several store-bought plants since I didn’t get the seeds into
the ground until quite late. The upside of the “late” plants was that by then I knew they had to
be trellised and so those plants really grew great.
- Potatoes - 20 kg harvested from 5 kg planted. Not a great yield - the soil ended up really packing
down hard around the plants after hilling.
- Tomatoes - 4 kg of Sweet 100, and 7.7 kg of Beefsteak from two plants of each kind.
- Broccoli - was planted way too late, didn’t really start to do anything until the fall. We got
several heads of broccoli.
- Pumpkins - 50 kg of pumpkins from 3 plants.
- Corn - about a dozen ears. This was another poor yield, the corn was basically planted in the
worst of the soil.
- Onions - a bunch of tiny ones - I completely misunderstood how to plant them and put them
way too close together…
Brewing
In July, I also started home brewing - something I’ve wanted to do for a while. Of course, I can’t just throw
some things in a kettle on the stove; I had to go all process-controlled and data-driven.
The first component of my setup is an Anvil Foundry electric kettle. This is an all-in-one that
is used as the mash tun and the boil kettle. So far, I’ve gotten lower-than-expected efficiency from the unit,
but over several batches I’ve been making improvements. At the very end of the year, I picked up a grain mill
(on a great discount) which I’m hoping will further improve efficiency.
Once the wort is made, I’ve got a Tilt Hydrometer to monitor the fermentation process.
This is an interesting little bluetooth device that wirelessly relays the temperature and specific gravity of the
beer it is floating in. While the absolute value is not entirely accurate, the trend-line is super useful to see when
the fermentation is done (or stuck). The data is recorded every 15 minutes and makes great plots - you can even get
an idea of how active the fermentation is by how noisy the data is at a given point in time:
I didn’t end up brewing much in the summer heat. I did a few batches of Hefeweizens, all of which could be
fermented in the cellar. I’ve been slowly increasing the temperature I want to ferment those at (as well as for
some Belgian styles), which is incompatible with the winter temperatures here in New Hampshire. This fall I
added an electric heat band and a temperature controller so I can keep the beer at the right temperature
during fermentation.
I’m currently working on 3D-printing an orbital shaker to propagate yeast starters - I’ll post on that next year
when it is done.
This fall I also did Virtual Beer School with Natalya Watson and then passed the
Cicerone Beer Server exam. I’m hoping to
complete the next level, Certified Cicerone, next year.
The Shop
While this blog is called “Robot & Chisel”, I did not end up doing any woodworking this year.
I did make some progress on the shop. One of the selling features of the Botfarm was a big barn on the property
that I have been slowly turning into a shop. There is now heat and insulation in the building and we are
making steady progress on finishing out the space. I’m hoping to have all the woodworking tools moved
in by summer next year.
Next Year
After this year, I’m not even going to try to make predictions of what I’ll work on next year. But I do hope to do some
more ROS2 stuff on the UBR-1 and brew more beers (especially some Saisons and some “fake barrel aged” stuff).
01 Sep 2020
ubr1
robots
ros2
With a map having been built and localization working,
it was time to get autonomous navigation working on the UBR-1.
Comparing with ROS1
While many of the ROS1 to ROS2 ports basically amount to a find-and-replace of the
various ROS interfaces and CMake directives, navigation got a fairly extensive
re-architecture compared to the ROS1 package that I’ve helped maintain over the past seven
years.
A number of the plugin interfaces in ROS1 have been replaced with action interfaces.
While the planners themselves are still plugins, they are each loaded into a server
node which exposes an action interface to access the planning functions.
One of the biggest changes in ROS2 is the use of behavior trees to structure
the recovery behaviors and connect the various action-based interfaces. This allows
quite a bit of interesting new functionality, such as using different recovery
behaviors for controller failures than are used for planning failures and allowing
quite a bit of control over when to plan. There are already dozens of
behavior tree nodes
and there is also a new tutorial on
writing custom behavior tree nodes.
In ROS1, the navigation stack contains two local “planners”: trajectory_rollout
and dwa
(the Dynamic Window Approach). ROS2 fixes this horrid naming
issue and properly calls these “controllers”, but only includes the updated
dwb
implementation of the Dynamic Window Approach. As far as
I can remember, I’ve only ever used trajectory rollout as I was never sold on DWA.
I’m still not sold on DWB.
Initial Launch Files
Setting up the navigation to run followed a pretty similar pattern to setting up
SLAM and localization: I copied over the example launch files from the
nav2_bringup
package and started modifying things. The real difference
was the magnitude of things to modify.
A note of caution: it is imperative that you use the files from the proper branch.
Some behavior tree modules have been added in the main
branch that
do not yet exist in the Foxy release. Similarly some parameters have been renamed or
added in new releases. Some of these will likely get backported, but the simplest
approach is to use the proper launch and configuration files from the start.
My initial setup involved just the base laser scanner. I configured both the local
and global costmaps to use the base laser. It is important to set the robot_radius
for your robot (or the footprint if you aren’t circular). The full configuration can
be found in the ubr1_navigation
package, but here is a snippet of
my local costmap configuration:
local_costmap:
  local_costmap:
    ros__parameters:
      global_frame: odom
      robot_base_frame: base_link
      rolling_window: true
      width: 4
      height: 4
      resolution: 0.05
      robot_radius: 0.2413
      plugins: ["voxel_layer", "inflation_layer"]
      inflation_layer:
        plugin: "nav2_costmap_2d::InflationLayer"
        cost_scaling_factor: 3.0
      voxel_layer:
        plugin: "nav2_costmap_2d::VoxelLayer"
        enabled: True
        publish_voxel_map: True
        origin_z: 0.0
        z_resolution: 0.05
        z_voxels: 16
        max_obstacle_height: 2.0
        mark_threshold: 0
        observation_sources: scan
        scan:
          topic: /base_scan
          max_obstacle_height: 2.0
          clearing: True
          marking: True
          data_type: "LaserScan"
A fairly late change to my configuration was to adjust the size of the local costmap. By default,
the turtlebot3 configuration uses a 3x3 meter costmap, which is pretty small. Depending on your
top speed and the simulation time used for the DWB controller, you will almost certainly
need a larger map if your robot is faster than a turtlebot3.
With this minimal configuration, I was able to get the robot rolling around autonomously!
Tilting Head Node
The UBR-1 has a depth camera in the head and, in ROS1, would tilt the camera up and
down to carve out a wider field of view when there was an active navigation goal.
The tilt_head.py
script also pointed the head in the direction of
the local plan. The first step in adding the head camera to the costmaps was porting the
tilt_head.py
script to ROS2.
One complication with a 3d sensor is the desire to use the floor plane for clearing
the costmap, but not marking. A common approach for this is to setup two observation
sources. The first source is setup to be the marking source and has a minimum obstacle
height high enough to ignore most noise. A second source is set to be a clearing
source and uses the full cloud. Since clearing sources are applied before marking
sources, this works fine and won’t accidentally over-clear:
observation_sources: base_scan tilting_cloud tilting_cloud_clearing
base_scan:
  topic: /base_scan
  max_obstacle_height: 2.0
  clearing: True
  marking: True
  data_type: "LaserScan"
tilting_cloud:
  topic: /head_camera/depth_downsample/points
  min_obstacle_height: 0.2
  max_obstacle_height: 2.0
  clearing: False
  marking: True
  data_type: "PointCloud2"
tilting_cloud_clearing:
  topic: /head_camera/depth_downsample/points
  min_obstacle_height: 0.0
  max_obstacle_height: 0.5
  clearing: True
  marking: False
  data_type: "PointCloud2"
In setting this up, I had to set the minimum obstacle height quite high (0.2 meters
is almost 8 inches). This is a product of the robot not being entirely well calibrated
and the timing accuracy of the sensor causing the points to sometimes rise out of the
plane. We’ll improve that below.
You’ll notice I am using a “depth_downsample/points” topic.
As inserting full VGA clouds into the costmap would be prohibitively costly,
I downsample the depth image to 160x120 and then turn that into a point cloud
(a common approach you’ll find on a number of ROS1 robots). This was added
to my head_camera.launch.py:
# Decimate cloud to 160x120
ComposableNode(
    package='image_proc',
    plugin='image_proc::CropDecimateNode',
    name='depth_downsample',
    namespace=LaunchConfiguration('namespace'),
    parameters=[{'decimation_x': 4, 'decimation_y': 4}],
    remappings=[('in/image_raw', 'depth_registered/image_rect'),
                ('in/camera_info', 'depth/camera_info'),
                ('out/image_raw', 'depth_downsample/image_raw'),
                ('out/camera_info', 'depth_downsample/camera_info')],
),
# Downsampled XYZ point cloud (mainly for navigation)
ComposableNode(
    package='depth_image_proc',
    plugin='depth_image_proc::PointCloudXyzNode',
    name='points_downsample',
    namespace=LaunchConfiguration('namespace'),
    remappings=[('image_rect', 'depth_downsample/image_raw'),
                ('camera_info', 'depth_downsample/camera_info'),
                ('points', 'depth_downsample/points')],
),
As with several of the other image_proc components I’ve worked with, the CropDecimateNode
needed some patches
to actually function.
With this in place, things almost worked. But I was getting a bunch of errors
about the sensor origin being off the map. This made no sense at first - the robot
is clearly on the map - I can see it right in RVIZ! I then started reviewing the
parameters:
z_resolution: 0.05
z_voxels: 16
max_obstacle_height: 2.0
At which point I realized that 0.05 * 16 = 0.8 meters. Which is shorter than my
robot. So, the sensor was “off the map” - in the Z direction. Pesky 3d.
I updated the voxel configuration so that my map was indeed two meters tall
and all my sensor data was now in the costmap.
z_resolution: 0.125
z_voxels: 16
max_obstacle_height: 2.0
Unfortunately, even with my 0.2 meter minimum obstacle height I was still getting
stray noisy pixels causing the robot to navigate somewhat poorly at times. In
particular, it decided to really come to a halt during a talk and demo to the Homebrew
Robotics Club last week.
A Custom Costmap Layer
Setting the minimum obstacle height super high is really not a great idea to begin with.
With the Fetch Mobile Manipulator we
implemented a custom costmap layer that would find the ground plane using
OpenCV and then split the cloud into clearing and marking pixels. This
largely avoids the timing and calibration issues, although the marking pixels
may be slightly off in their location in the costmap due to those timing and
calibration issues. On the Fetch, we were able to get the minimum obstacle height
of that moving sensor down to 0.06 meters. In addition, this layer subscribes to
the depth image, rather than a 3d point cloud, which allows us to do certain
pre-processing less expensively in 2d.
After the HBRC failures, I decided to port the FetchDepthlayer
to ROS2.
You can find it in the ubr1_navigation package.
The initial port
was pretty straightforward. The costmap_2d
package hasn’t gotten
too many updates, other than a nav2
prefix for the package and
namespaces.
One interesting find was that the sensor_msgs/PointCloud
message has been
deprecated and slated for removal after Foxy.
There are a number of places where the PointCloud interface was used as a simple way to publish debug
points (the message is simply an array of geometry_msgs/Point32, instead of the
much more complicated PointCloud2 message, which has a variable set of fields
and pretty much requires the use of a modifier and iterator to fill or read).
I decided to get rid of the deprecation notices and port to PointCloud2 for
the debugging topics -
you can see how much more complicated the code now looks.
Finally, as I started to test the code on the robot, I ran into a few further
issues. Apparently, ROS2 does not just have Lifecycle Nodes, there are also Lifecycle
Publishers. And nav2 uses them. And you need to call on_activate
on them before publishing to them. You can see my final fixes
in this commit.
A final improvement to the node was to remove the somewhat complicated (and I’m
guessing quite slow) code that found outliers. Previously this was done by finding
points for which fewer than seven neighbors were within 0.1 m; now I use
cv::medianBlur
on the depth image.
The image below shows the costmap filled in for a box that is shorter than my laser
scanner, but detected by camera. The red and green points are the marking and clearing
debug topics from the depth layer:
Test on Robots!
One of the more interesting moments occurred after I updated my sources for
navigation2. Suddenly, the robot was unable to complete goals - it would get to the
goal and then rotate back and forth and eventually give up. I ended up tracking down
that a major bug had been introduced during a refactor which meant that when comparing
the goal to the current pose they were not in the same frame! The goal would be in the
map frame, but the local controller was taking robot pose in the odom frame. The only
time a goal could succeed was if the origins of the map and odom frame were aligned
(which, coincidentally, probably happens a lot in simulation). My fix was
pretty simple and the bug
never made it into released Debians in Foxy, but it did exist for almost a month
on the main branch.
Tuning the Local Controller
As a side effect of the goal bug, I ended up spending quite a bit of time tuning
the local controller (thinking that it was responsible for the issues I was seeing).
Both the overall architecture and the parameters involved are somewhat different from ROS1.
Let’s first mention that the controller server implements a high pass filter on the odometry
topic to which it subscribes. This filter has three parameters:
min_x_velocity_threshold, min_y_velocity_threshold, and
min_theta_velocity_threshold. While debugging, I ended
up updating the descriptions of these parameters in the
navigation documentation
because I was at first trying to use them as the minimum velocities to control, since
the original description was simply “Minimum velocity to use”.
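For reference, these thresholds are parameters on the controller server; a sketch with the nav2 default values (they zero out small odometry values rather than limit the commanded velocity):
controller_server:
  ros__parameters:
    min_x_velocity_threshold: 0.001
    min_y_velocity_threshold: 0.5
    min_theta_velocity_threshold: 0.001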
The controller server still loads the controller as a plugin, but also has separate
plugins for the goal checker and progress checker. The SimpleProgressChecker
is pretty straightforward: it has two parameters and requires that the robot move at
least X distance in T time (default 0.5 meters in 10 seconds).
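In configuration terms that looks something like the following (plugin and parameter names as used in the nav2 bringup examples; the values shown are the defaults):
controller_server:
  ros__parameters:
    progress_checker_plugin: "progress_checker"
    progress_checker:
      plugin: "nav2_controller::SimpleProgressChecker"
      required_movement_radius: 0.5
      movement_time_allowance: 10.0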
The SimpleGoalChecker
implements the goal check that previously was part
of the controller itself. As in ROS1, it has three parameters:
- xy_goal_tolerance is how close the robot needs to get to the goal. By default, the xy
tolerance is set quite coarse. I tightened that tolerance up on the UBR-1.
- stateful is similar to “latching” in the ROS1 stack. Once the robot has met the
xy_goal_tolerance, it will stop moving and simply rotate in place.
- yaw_goal_tolerance is how close the robot must get to the goal heading in order to succeed.
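A minimal sketch of the corresponding goal checker configuration (the parameter naming has shifted slightly between releases, and the tolerance values here are purely illustrative, not my exact settings):
controller_server:
  ros__parameters:
    goal_checker_plugin: "goal_checker"
    goal_checker:
      plugin: "nav2_controller::SimpleGoalChecker"
      xy_goal_tolerance: 0.1
      yaw_goal_tolerance: 0.25
      stateful: True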
One of the enhancements of DWB over the DWA implementation is that it splits each of the
individual elements of trajectory scoring into a separate plugin. This makes it easier
to enable or disable individual elements of the scoring, or add custom ones. For instance,
you could entirely remove the PathAlign
element if it is causing issues and
you don’t care if your robot actually follows the path.
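For example, the critics are just a list parameter on the DWB plugin, so dropping PathAlign is a one-line configuration change. This is a sketch using the default critic names from the dwb_critics package, not my exact configuration:
controller_server:
  ros__parameters:
    controller_plugins: ["FollowPath"]
    FollowPath:
      plugin: "dwb_core::DWBLocalPlanner"
      # remove "PathAlign" from this list to stop scoring path alignment
      critics: ["RotateToGoal", "Oscillation", "BaseObstacle",
                "GoalAlign", "PathAlign", "PathDist", "GoalDist"]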
There are two major hurdles in tuning the DWB controller: balancing the path and goal
scores, and balancing smooth operation versus actually getting to the end of the
trajectory (as opposed to just stuttering towards the goal slowly). I think the
first one is well tuned on the UBR-1, but I’ve not yet fixed the stuttering to the
goal well enough to be happy with the controller. You can find
that several
others
have also struggled to get the performance they were seeking.
Next Steps
Now that I’ve got navigation mostly working, the next big hurdle is manipulation.
I have MoveIt2 compiled, but am still working through the requisite launch files
and other updates to make things work for my robot. And then onto the real goal
of every roboticist: having my robot fetch a beer.