Blast From the Past: UBR-1

This is day 4 of my 2020 National Robotics Week blog marathon - it’s halfway over!

In 2013, Unbounded Robotics was the last spin-off from Willow Garage. Our four-person team set out to build a new robotics platform for the research and development community, one that would cost a fraction of what the Willow Garage PR2 cost. We built three robots and demoed them at a number of events, but Unbounded eventually ran out of money and was unable to secure further funding. The company shut down in the summer of 2014.

I wasn’t really blogging during that time: I was too busy while things were going well, and then I didn’t much want to talk about it while things were going downhill. The whole affair is now quite a few years in the past, so here we go. First, a picture of our little robot friend:

UBR-1 by Unbounded Robotics

The robot had a differential drive base, since that is really the only cost-effective style of robot base out there. It used an interesting 10:1 worm-gear brushless motor, similar to what had been in the PlatformBot our team had previously designed at Willow Garage. The brushless motors were really quite efficient and the whole drive was super quiet, but the worm gear was terribly inefficient (more on that below). The base was about 20” in diameter - which made the robot much more maneuverable than the 26” square footprint of the PR2, even though PR2 had holonomic drive.

The 7-DOF arm had similar kinematics to the PR2 arm, but dropped the counterbalancing for simplicity, lower cost, and reduced weight. A parallel-jaw gripper replaced the very complex gripper used in the PR2. A torso lift motor allowed the robot to remain quite short during navigation but rise to interact with items at table height - it was a little short for typical countertop heights, but still had a decent workspace as long as items weren’t too far back on the countertop.

One of the first demos I set up on the UBR-1 was my chess-playing demo.

UBR-1 Software

As with everything that came out of Willow Garage, the UBR-1 used the Robot Operating System (ROS). You can still download the preview repository for the UBR-1, which includes simulation with navigation and grasping demos. As with the ROS legacy of Willow Garage, the open source software released by Unbounded Robotics lives on to this day. My simple_grasping package is an improved, robot-agnostic version of some of the grasping demos we created at Unbounded (which were themselves based on earlier grasping demos created for my Maxwell robot). A number of improvements and bug fixes for ROS Navigation and other packages also came out during this time, since I was a primary maintainer of those packages in the dark days following Willow’s demise.

UBR-1 in Gazebo Simulation
Power Usage and Battery Life

Power usage reductions and battery life increases were some of the biggest improvements in the UBR-1 versus the PR2. The PR2 was designed in 2010 and used two computers, each with 24GB of RAM and two quad-core Intel L5520 Nehalem processors. Nehalem was the first generation of Intel Core processors. The PR2 batteries were specified at 1.3kWh of stored energy, but gave only about two hours of runtime, regardless of whether the robot was even doing anything with the motors. There were other culprits besides the computers: in particular, the 33 motor controller boards each had two Ethernet PHYs, accounting for about 60W of power draw. But the computers were the main power draw. This was made worse by the computers being powered by isolated DC-DC converters that were only about 70% efficient.

The UBR-1 arm.

The UBR-1 used a 4th-generation Intel Core processor. The gains in just four years were quite impressive: the computer drew only 30-35W of power, yet it was able to run navigation and manipulation demos similar to those we ran on the PR2. Based on the Intel documentation, a large part of that was the 75% power reduction at the same performance from the first- to fourth-generation chips. A smaller contributor was improvements in the underlying algorithms and code base.

For dynamic movements, the UBR-1 arm was also significantly more power efficient than the PR2, since it weighed less than half as much. Gravity compensation of the 25 lb arm required only about 15W in the worst configurations - this could have been lowered with higher gearing, but would have had adverse effects on the efficiency when moving and might have jeopardized the back-drivability.

The base motors were still highly inefficient - UBR-1 needed about 250W to drive around on carpet. Hub motors have become pretty common in the years since and can improve the efficiency of the drivetrain from a measly 35% to upwards of 85% - at 85%, that same driving on carpet would have taken only about 100W.

Robot Evolution

Processors have gotten more efficient in the years since UBR-1 - but their prices have pretty much stopped dropping. Motors haven’t really gotten any cheaper since the days of the PR2, although the controls have gotten more sophisticated. Sensors also really haven’t gotten much cheaper or better since the introduction of the Microsoft Kinect. While there has been a lot of money flowing into robotics over the past decade, we haven’t seen the predicted massive uptake in robots. Why is that?

One of my theories around robotics is that we go through a repeated cycle of “hardware limited” and “software limited” phases. A hardware innovation allows an increased capability and it takes some time for the software to catch up. At some point the software innovation has increased to a point where the hardware is now the limiting factor. New hardware innovations come out to address this, and the process repeats.

Before ROS, we were definitely software-limited. There were a number of fairly mechanically sophisticated robots in research labs all over the planet, but there was no widely accepted common framework with which to share software. PhD students would re-invent the wheel for the first several years of their work before contributing something new, but then had no way to pass that new innovative code on. ROS changed this significantly.

On the hardware side, there were very few common platforms before the PR2. Even the Mobile Robots Pioneer wasn’t exactly a “common platform”, because everyone installed different sensors on them, so code wasn’t very portable. The PR2 going out to a number of top universities, combined with the Willow Garage intern program, really kickstarted the use of a common platform. The introduction of the Microsoft Kinect and the advent of low-cost depth sensors also triggered a huge leap forward in robot capability. I found it amusing at the time to see the several-thousand-dollar stereo camera suite on the PR2 pretty much replaced (and outperformed) by a single $150 Kinect.

For a few years there was a huge explosion in the software being passed around, and then we were hardware-limited again because the PR2 was too expensive for wide adoption. While the UBR-1 never got to fill that void, there are now a number of lower-cost platforms available with pretty solid capabilities. We’re back to software-limited.

So why are robots still software-limited? The world is a challenging environment. The open source robotics community has made great strides in motion planning, motor control, and navigation, but perception is still really hard. Today we’re mainly seeing commercially deployed robots making inroads in industries where the environment is pretty well defined - warehouses and office spaces, for instance. In these environments we can generally get away with just knowing that an “obstacle” is somewhere - our robot doesn’t really care what the obstacle is. We’ve got pretty good sensors - although they’re still a little pricey - but we generally lack the software to really leverage them. Even in the ROS ecosystem, there are huge amounts of hardware drivers and motion planning software (ROS Navigation, MoveIt, OMPL, SBPL, the list goes on), but very little perception code. Perception is still very dependent on the exact task you’re doing; there just isn’t a lot of “generic” perception out there.

There is a certain magic in finding applications where robots can offer enough value to the customer at the right price point. Today, those tend to be applications where the robot needs limited understanding of the environment. I look forward to what the applications of tomorrow might be.

Code Coverage for ROS

This is day 3 of my 2020 National Robotics Week blog marathon!

About two years ago I created a little package called code_coverage. This package is a bit of CMake which makes it easier to run coverage testing on your ROS packages. Initially it only supported C++, but recently it has been expanded to cover Python code as well.

What is Code Coverage?

Before I get into how to use the code_coverage package, let’s discuss what coverage testing is all about. We all know it is important to have tests for your code so that it does not break as you implement new features and inevitably refactor code. Coverage testing tells you what parts of your code your tests actually test. This can help you find branch paths or even entire modules of the code that are not properly tested. It can also help you know if new code is actually getting tested.

The output of a coverage test is generally some really nice webpages that show you, line by line, what code is getting executed during the test.

Using code_coverage for C++

We will start by discussing the usage of code_coverage with C++ code, because it is actually quite a bit simpler: C++ coverage can be done almost entirely in CMake.

First, update your package.xml to have a test_depend on the code_coverage package.
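In a format 2 package.xml, that is a single line alongside your other dependencies:

<test_depend>code_coverage</test_depend>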

Next, we need to update two places in the CMakeLists.txt file. The first change goes right after your call to catkin_package(). The second change goes where you define your test targets: you need to define a new target, which we will typically call {package_name}_coverage_report.

# After catkin_package()

if(CATKIN_ENABLE_TESTING AND ENABLE_COVERAGE_TESTING)
  find_package(code_coverage REQUIRED)
  # Add compiler flags for coverage instrumentation before defining any targets
  APPEND_COVERAGE_COMPILER_FLAGS()
endif()

# Add your targets here

if (CATKIN_ENABLE_TESTING)
  # Add your tests here

  # Create a target ${PROJECT_NAME}_coverage_report
  if(ENABLE_COVERAGE_TESTING)
    set(COVERAGE_EXCLUDES "*/${PROJECT_NAME}/test*" "*/${PROJECT_NAME}/other_dir_i_dont_care_about*")
    add_code_coverage(
      NAME ${PROJECT_NAME}_coverage_report
      DEPENDENCIES tests
    )
  endif()
endif()

That’s the configuration needed. Now we can compile the code (with coverage turned on) and run the coverage report (which in turn will run the tests):

catkin_make -DENABLE_COVERAGE_TESTING=ON -DCMAKE_BUILD_TYPE=Debug PACKAGE_NAME_coverage_report

You can find these same instructions (and how to use catkin tools) in the code_coverage README.
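If you use catkin tools rather than catkin_make, the equivalent is roughly the following (a sketch from memory - see the README for the authoritative version; PACKAGE_NAME is a placeholder for your package):

catkin config --cmake-args -DENABLE_COVERAGE_TESTING=ON -DCMAKE_BUILD_TYPE=Debug
catkin build
catkin build PACKAGE_NAME --no-deps --catkin-make-args PACKAGE_NAME_coverage_report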

Using code_coverage for Python

Python unit tests will automatically get coverage turned on with just the CMake configuration shown above, but Python-based rostests (those that are launched in a launch file) need some extra configuration.

First, we need to turn on coverage testing in each node using the launch-prefix. You can decide on a node-by-node basis which nodes should actually generate coverage information:

<launch>

    <!-- Add an argument to the launch file to turn on coverage -->
    <arg name="coverage" default="false"/>

    <!-- This fancy line forces nodes to generate coverage -->
    <arg name="pythontest_launch_prefix" value="$(eval 'python-coverage run -p' if arg('coverage') else '')"/>

    <!-- This node will NOT generate coverage information -->
    <node pkg="example_pkg" name="publisher_node" type="publisher_node.py" />

    <!-- But this node WILL generate coverage -->
    <node pkg="example_pkg" name="subscriber_node" type="subscriber_node.py"
          launch-prefix="$(arg pythontest_launch_prefix)" />

    <!-- The test can also generate coverage information if you include the launch-prefix -->
    <test time-limit="10" test-name="sample_rostest" pkg="example_pkg" type="sample_rostest.py"
          launch-prefix="$(arg pythontest_launch_prefix)" />

</launch>

Then we turn on coverage by adding the argument in our CMakeLists.txt:

# Expand the CMake variable so coverage is only enabled when the flag is set
add_rostest(example_rostest.test ARGS coverage:=${ENABLE_COVERAGE_TESTING})

You can find this full Python example from my co-worker Survy Vaish on GitHub.

Using codecov.io For Visualization

codecov.io is a cloud-based solution for visualizing the output of your coverage testing. It can combine the reports from individual packages, as well as the C++ and Python reports, into some nice graphs, and it tracks results over multiple commits:

codecov.io dashboard for robot_calibration
A Full Working Example

The robot_calibration package uses code_coverage, codecov.io, and Travis CI to run coverage testing on every pull request and every commit to the master branch. It uses the popular industrial_ci package as the baseline, with the following changes:

  • I set the CMAKE_ARGS in the travis.yml so that coverage is turned on and the build type is Debug.
  • I created a .coverage.sh script which runs as the AFTER_SCRIPT in industrial_ci. This script runs the coverage report target and then calls the codecov.io bash uploader.
  • Since industrial_ci runs in a Docker container, I introduced a .codecov.sh script which exports the required environment variables into the container. This uses the env script from codecov.io.
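Putting those pieces together, the setup looks roughly like the sketch below. This is a hypothetical reconstruction of what is described above, not the actual files from robot_calibration - the ROS distro, script paths, and workspace layout are assumptions:

# .travis.yml (sketch, not the actual robot_calibration file)
language: generic
services:
  - docker
env:
  global:
    - ROS_DISTRO=melodic          # assumption: pick your target distro
    - CMAKE_ARGS="-DENABLE_COVERAGE_TESTING=ON -DCMAKE_BUILD_TYPE=Debug"
    - AFTER_SCRIPT="./.coverage.sh"
install:
  - git clone --quiet --depth 1 https://github.com/ros-industrial/industrial_ci.git .industrial_ci
script:
  - .industrial_ci/travis.sh

And a minimal .coverage.sh along the lines described above, building the report target and handing the results to the codecov.io bash uploader:

#!/bin/bash
# Coverage flags were already set via CMAKE_ARGS, so just build the report
# target (name follows the ${PROJECT_NAME}_coverage_report convention above)
catkin_make robot_calibration_coverage_report
# Upload the results to codecov.io
bash <(curl -s https://codecov.io/bash)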

10 Years of ArbotiX

This is day 2 of my 2020 National Robotics Week blog marathon! I've added a new label for Blast From The Past posts, and this one pretty much qualifies.

I've recently been sorting through some older electronics and came across this:

The original ArbotiX (2009)

This is the original ArbotiX prototype. Back in 2009, I was a fairly active member of the Trossen Robotics forums. Andrew Alter had this new little event he called Mech Warfare that they were putting on at RoboGames.

I seem to recall that by the time my first-ever phone call with Andrew Alter was over, I had a) signed up to build a scoring system, and b) bought a Bioloid kit. I did not promise to show up with a Mech, because there was pretty limited time to work on one during the semester.

Then I showed up with a Mech. And won.

You can read all about how I built IssyDunnYet in the thread over at Trossen Robotics. But this post is about the evolution of the ArbotiX.

IssyDunnYet in a not-done status, sporting the ArbotiX prototype.

The original ArbotiX is really pretty simple: an Arduino-based board with hardware support for Dynamixel control and XBee communication. But the Python-based pose engine, and the open source nature of the project, really opened up what you could do with Dynamixel servos. ArbotiX boards have been used in so many places over the past decade+, everything from Giant Hexapods to Kinematic Sculptures.

The original boards were all through-hole parts because I was literally assembling these by hand while working my way through grad school. When the order quantities got high enough, I had the board fab guys assemble the parts in China, and then I would insert the expensive DIP chips stateside. Eventually I finished grad school and took a job at Willow Garage - at that point we handed all manufacturing over to Trossen Robotics and wound down Vanadium Labs. Trossen is still selling lots of these little boards today, although they've gone surface-mount with the newer ArbotiX-M.

The original ArbotiX and the newer, smaller ArbotiX-M

Also found in my pile of random old boards was the original hand-built prototype of the ArbotiX Commander. It looks quite a bit different from the current generation.

The original production versions of the ArbotiX Commander (left)
and the hand-built prototype (right).

I also came across the spiritual predecessor to the whole ArbotiX lineup, the AVRRA board (AVR Robotics API). This was used in XR-B3 (pretty much my last non-ROS bot).

AVRRA Board (2008)