RBE 3001: Unified Robotics III: Manipulation

Worcester Polytechnic Institute

2015

01 Background

RBE 3001: Unified Robotics III[1] was the third course in WPI's unified robotics sequence, building directly on RBE 2001 (Actuation) and RBE 2002 (Sensing) to tackle manipulation — getting a robot to perform useful physical work on its environment. The earlier courses gave us the muscles and the senses; RBE 3001 tied them together through kinematics, dynamics, trajectory planning, closed-loop feedback control, and real peripheral integration on a custom Atmel microcontroller platform[2].

The course also dropped the training wheels. Our prior courses had leaned on Arduino-based platforms with high-level libraries that abstracted away most of the chip. RBE 3001 ran on a custom PCB designed by our professor and the lab technicians, programmed in bare-metal C inside Eclipse with the AVR plugin, with Subversion handling version control across team members. Without Arduino's framework doing the busywork, we had to configure peripherals ourselves — SPI, UART, I2C, timer-input capture, ADC channels, GPIO — and learn what the silicon was actually doing at the register level. Recommended background included ECE 2049: Embedded Computing in Engineering Design[3].
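Without a framework, even the serial port starts at the datasheet. As a minimal sketch of that register-level arithmetic, here is the classic AVR baud-rate divisor for asynchronous normal mode, per the ATmega datasheets; the 16 MHz clock is an assumption for illustration, not necessarily the lab board's:

```c
#include <stdint.h>

/* AVR UART baud-rate divisor, asynchronous normal mode:
 * UBRR = F_CPU / (16 * baud) - 1 (per the ATmega datasheets).
 * The 16 MHz clock here is an assumed value. */
#define F_CPU_HZ 16000000UL

static uint16_t uart_ubrr(uint32_t baud)
{
    return (uint16_t)(F_CPU_HZ / (16UL * baud) - 1UL);
}

/* On an ATmega328-class part the result would then be split across
 * the two baud-rate registers:
 *   UBRR0H = (uint8_t)(ubrr >> 8);
 *   UBRR0L = (uint8_t)(ubrr & 0xFF);
 */
```

Getting this divisor wrong is a rite of passage: an off-by-one or a mismatched clock assumption shows up as garbage characters on the terminal, not a compile error.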

02 Custom Atmel PCB Platform

The lab board was a major step up from anything we had used before. Instead of an off-the-shelf Arduino with pre-written drivers for every sensor, we had a custom-designed PCB built around an Atmel AVR microcontroller, with the peripherals needed for the course broken out directly: H-bridge motor drivers for the arm joints, analog channels for current sensing, UART for debug and telemetry, I2C for auxiliary sensors, and SPI for the faster peripherals.

The protocol work compounded through the term. By the end of the course, the robot was simultaneously running PWM to every arm motor, counting quadrature edges on timer-input-capture channels for the encoders, sampling motor current on the ADC, listening to the conveyor's IR sensors on their digital outputs, and maintaining a UART telemetry link back to the host — all with timing constraints that required interrupt priorities to be thought through rather than assumed away.

The RBE 3001 lab board — a custom Atmel AVR PCB designed by the professor and lab technicians. Programmed bare-metal in C through Eclipse with the AVR plugin, with Subversion for team version control.

03 The 3D-Printed Robotic Arm

The manipulator was a 3D-printed multi-joint arm driven by DC motors through belt-and-pulley reductions, with a pincer-style gripper as the default end effector (the platform accepted other end effectors — including VEX-compatible mounts — when the lab called for it). Each joint's position was tracked by a quadrature encoder on the motor shaft, and the gear-and-belt reductions did double duty: they multiplied motor torque to handle payload, and they mechanically low-pass-filtered the motor's high-speed jitter so tip motion was smooth at the working end.

Tracking the end-effector position through all of that meant careful bookkeeping. Each joint's encoder counts had to be divided by the tooth ratio of its pulley reduction to get actual joint angle; the joint angle went into a per-link homogeneous transformation matrix; the matrices multiplied together to give the end-effector pose in the base frame. Get any one of the reduction ratios wrong and the tip drifts further off target the more joints it passes through. Teeth and mechanical-advantage accounting weren't optional details — they were load-bearing in the kinematics.

The 3D-printed robotic arm with an interchangeable end effector mounted — here holding VEX components for a prior lab exercise.

04 Kinematics & Trajectory Control

The first real control milestone was drawing simple geometry with the arm tip — triangles, squares, straight segments — starting from a list of waypoints in world coordinates. Each waypoint fed through an inverse-kinematics solver hand-derived for our arm's geometry (not a black-box numerical routine), which spat out joint angles. Between waypoints, the arm interpolated with velocity-limited trajectories so the tip traced continuous paths rather than teleporting between points.

This was where the control theory from earlier courses earned its keep. Each joint ran its own PID loop closing on its encoder reading; the outer loop fed those per-joint setpoints from the trajectory planner. Tuning gains that worked across the whole workspace — where the apparent joint inertia and the gravity loading both change with pose — took more iterations than we would have liked, and made every singular configuration near the edge of reach very interesting.
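A minimal per-joint PID step, closing on an encoder reading, might look like the following. The gains and output clamp are illustrative; the course's tuned values, and any integral anti-windup beyond simple output clamping, are not shown:

```c
/* One joint's PID state. Output limits stand in for PWM duty range. */
typedef struct {
    double kp, ki, kd;
    double integral, prev_err;
    double out_min, out_max;
} joint_pid;

/* One control-loop tick: error -> P + I + D -> clamped motor command. */
static double pid_step(joint_pid *p, double setpoint, double measured, double dt)
{
    double err = setpoint - measured;
    p->integral += err * dt;
    double deriv = (err - p->prev_err) / dt;
    p->prev_err = err;
    double u = p->kp * err + p->ki * p->integral + p->kd * deriv;
    if (u > p->out_max) u = p->out_max;       /* clamp to actuator range */
    else if (u < p->out_min) u = p->out_min;
    return u;
}
```

The pose-dependence mentioned above shows up here as the same gain set behaving very differently depending on how much inertia and gravity torque the joint currently sees, which is why a single tuning rarely covered the whole workspace.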

Drawing simple geometry with the arm tip — triangles first, then tighter shapes as the trajectory planner and PID gains settled in.

05 Assembly Line: Conveyor & IR Sensing

The final project integrated the arm with a small assembly line we built out of a belt-sander belt running on a pair of rubber wheels — a grippy, dimensionally stable surface that could carry objects at a range of speeds without slipping. A VEX motor drove the belt, with the lab board commanding speed through one of its H-bridge outputs.

A pair of Parallax IR distance sensors mounted over the belt reported the depth of each object on the line — close to the sensor, centered, or far across the track — along with the belt's speed as the object moved through their field of view. The depth reading fed directly into the horizontal translation targets for the arm: rather than reaching toward a guessed center-line, the inverse-kinematics solver used the measured depth to compute joint angles that placed the end effector directly above the object's actual position on the belt. The speed reading let us schedule the grab to the object's motion instead of to a fixed timer.
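The speed-based scheduling reduces to a small calculation: the object's transit time from the IR gate to the grab point, minus the arm's lead time to get into position. The distances and latency in the test are assumed values, not measurements from the project:

```c
/* When to start the grab, given belt speed measured at the IR gate.
 * gate_to_grab_m: fixed distance from gate to the arm's grab point.
 * arm_lead_time_s: how early the arm must start moving (assumed value).
 * Returns seconds to wait after the gate trips; never negative. */
static double grab_delay_s(double gate_to_grab_m, double belt_speed_mps,
                           double arm_lead_time_s)
{
    double transit = gate_to_grab_m / belt_speed_mps;
    double delay = transit - arm_lead_time_s;
    return delay > 0.0 ? delay : 0.0;
}
```

A fixed timer would only be correct at one belt speed; deriving the delay from the measured speed is what let the same code work across the range of speeds the belt could run.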

06 Pick, Weigh, Sort

With the belt and the arm integrated, the final challenge was pick-and-sort with weight classification. Objects of varying masses were placed at varying positions along the belt; the system had to detect each one via the IR gate, time the grab to the belt's velocity, pinch the object with the gripper, lift it to a known pose, infer its mass from the arm's motor current draw, and then deposit it in the correct sort bin for its weight class.

Weighing-by-current worked because a DC motor's steady-state current is proportional to the torque it's applying. With the arm held at a known reference pose — load hanging vertically from the gripper, specific joint angles — the static torque at each joint is a function only of the arm's own geometry and the payload mass. Subtract the known self-weight contribution, divide by the effective moment arm at that pose, and the motor's ADC reading becomes a number of grams. Calibration was tedious (every reference pose had to be characterized against known masses), but once characterized, the weight inference ran in milliseconds from a handful of ADC samples.
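That conversion reduces to a couple of lines once the calibration is done. The torque constant, self-weight current, and moment arm below stand in for the per-pose calibration values and are assumptions for illustration:

```c
/* Weigh-by-current at a fixed reference pose:
 * steady-state motor torque = Kt * I, payload torque = m * g * r,
 * so m = Kt * (I - I_self) / (g * r).
 * kt_nm_per_a, self_current_a, and moment_arm_m are calibration
 * values characterized against known masses; numbers here are assumed. */
static double payload_mass_kg(double current_a, double self_current_a,
                              double kt_nm_per_a, double moment_arm_m)
{
    const double g = 9.81;  /* m/s^2 */
    double payload_torque = kt_nm_per_a * (current_a - self_current_a);
    return payload_torque / (g * moment_arm_m);
}
```

All the tedium lives in characterizing `self_current_a` and `moment_arm_m` for each reference pose; once those are table entries, the runtime cost is a few ADC samples and one division.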

This was the first class where everything I had been learning — mechanical design, closed-loop motor control, peripheral bring-up, kinematics, trajectory planning, sensor integration — had to work at the same time, on the same platform, to do real work. Years later, after college, I bought my own desktop robotic arm to keep experimenting with these ideas at home.

07 References

[1] RBE 3001: Unified Robotics III — Worcester Polytechnic Institute
[2] RBE 3001 Syllabus (2015) — archived by Prof. Dmitry Berenson. The syllabus covering the term I took the course.
[3] ECE 2049: Embedded Computing in Engineering Design — Worcester Polytechnic Institute