
Getting a grip in zero-g

Autonomous robot manipulation in microgravity

Published on Nov 17, 2023

Project goals

We aim to develop an autonomous robot that solves a “capture-and-stow” task: the robot must autonomously identify, grab, and stow a free-floating object, mimicking real-world docking and servicing tasks. The project targets two firsts:

  1. It will be the first (to our knowledge) fully autonomous robot manipulation system in microgravity. Previous projects have demonstrated teleoperation [1] and semi-autonomous manipulation [2, 3], but we are not aware of any demonstrations in microgravity with onboard perception, decision making, and control.

  2. It will be the first demonstration of open-source, low-cost robotic manipulation hardware in microgravity, with a target cost ~500x less than previously-flown space robots like Robonaut [2] and ~10x less than commercial terrestrial robot arms.


Robot arms have been used extensively on Earth, the Moon, and Mars, but robot arms in low Earth orbit (LEO) have been mostly limited to non-autonomous “teleoperation” by astronauts or ground controllers [1]. While effective, the resource cost for teleoperation is significant, motivating the exploration of autonomous solutions. The ability to autonomously grasp and manipulate objects in microgravity would not only free up valuable crew time but also support applications in orbital servicing and manufacturing.

The constant motion of free-floating objects in microgravity means that an autonomous manipulator must be able to plan and move quickly in order to successfully grasp objects. Previously-deployed space manipulators have been semi-autonomous, lacking the on-board autonomy needed to quickly react to the dynamics of free-floating objects [2, 3, 4]. This project would integrate recent advances in computer vision and robust control to build an on-board autonomy stack for the capture-and-stow task. To address the safety concerns posed by full onboard autonomy, we will draw on recent research by members of the project team for certifying the safety of autonomous systems [5].
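In microgravity, a free-floating object drifts at nearly constant velocity between contacts, so even a minimal planner can extrapolate an intercept point from two position measurements. The sketch below is an illustrative assumption about how such a predictor could look, not the project's actual autonomy stack:

```python
import numpy as np

def predict_intercept(p0, p1, dt, t_ahead):
    """Predict where a free-floating object will be t_ahead seconds after
    the second measurement, assuming constant-velocity drift (a reasonable
    approximation in microgravity between contacts).

    p0, p1: object positions (m) measured dt seconds apart.
    """
    p0, p1 = np.asarray(p0, float), np.asarray(p1, float)
    v = (p1 - p0) / dt        # finite-difference velocity estimate
    return p1 + v * t_ahead   # constant-velocity extrapolation

# Object drifting at 2 cm/s along x:
goal = predict_intercept([0.00, 0.1, 0.2], [0.02, 0.1, 0.2], dt=1.0, t_ahead=0.5)
# goal ≈ [0.03, 0.1, 0.2]
```

A real system would filter noisy vision measurements (e.g. with a Kalman filter) rather than differencing two raw samples, but the constant-velocity model is the key simplification microgravity allows.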


The project team includes Charles Dawson (MIT AeroAstro, advised by Prof. Chuchu Fan) and Evan Palmer (Oregon State University, advised by Prof. Geoff Hollinger).

Update 1: Choosing a robot

Many factors go into choosing a robot manipulator, but for this mission the three main considerations are reach (how far the arm can extend), payload (how much mass it can carry), and cost.

Reach: Since we’ll need to deploy this arm on the zero-g flight, where space can be tight, we need a robot arm that is relatively compact. Ideally, we’d like to be able to fit within a 50x50 cm footprint; it’s OK if the robot can technically reach further than this, since we can constrain it using software to stay within the bounds of the experiment.

Payload: Luckily, we have the luxury of designing the manipulation task, so we can choose a relatively light object to grasp (e.g. < 100 g). Choosing a small payload helps us access lower-cost arms and removes any safety concerns from the robot swinging a heavy mass around.

Cost: The cost of a robot arm usually scales with the reach and payload, so it’s good that we’re targeting a small, low-payload arm for this experiment.

A robot that meets all of these requirements is the Trossen Robotics PincherX 100, which has a 100 g maximum payload and 300 mm reach, easily fitting within our deployment constraints. It’s also (relatively) low cost, especially compared to arms with greater reach and payload. One drawback of this arm is that it has only 4 degrees of freedom: it can reach any (x, z) coordinate in its workspace and can adjust the pitch of the gripper, but it cannot roll the gripper, and gripper yaw and the y coordinate are coupled (they cannot be chosen independently). This will require some additional mechanical design to add a y axis to the robot, but the extra complexity is justified by the significantly lower cost of this arm compared to a fully actuated one.
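One way to use an added y axis is to slide the arm so the target sits in its x-z plane, which keeps base yaw at zero and breaks the yaw/y coupling. The sketch below illustrates that decomposition; the frame conventions, stage travel limits, and function name are all assumptions for illustration:

```python
import math

def decompose_target(x, y, z, stage_min=0.0, stage_max=0.5):
    """Split a 3D grasp target into a linear-stage setpoint plus a planar
    arm target, for a 4-DOF arm (PincherX 100-style) riding a y-axis stage.

    Strategy (one of several possible): drive the stage so the target lies
    in the arm's x-z plane, so gripper yaw and y are no longer coupled.
    """
    stage_y = min(max(y, stage_min), stage_max)  # clamp to stage travel
    residual_y = y - stage_y                     # nonzero only if target is out of range
    yaw = math.atan2(residual_y, x)              # fall back on base yaw if needed
    reach = math.hypot(x, residual_y)            # planar distance for the arm to cover
    return stage_y, yaw, (reach, z)

# Target at (0.2, 0.3, 0.1) m: the stage absorbs all of y, the arm stays planar.
stage_y, yaw, (reach, z) = decompose_target(0.2, 0.3, 0.1)
```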

Update 2: Simulation

While hardware will be the gold standard for testing our system, it’s helpful to have a software simulation environment for quick tests during development. We’ve chosen to use Gazebo as our simulation environment, which will allow us to simulate the physics of the robot interacting with a free-floating object, as well as simulate any cameras the robot uses to track the target object.
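Gazebo reads gravity from the SDF world description, so microgravity can be approximated by simply zeroing it out, letting unanchored bodies free-float. A minimal world sketch (the included model name is a placeholder, not our actual asset):

```xml
<?xml version="1.0"?>
<sdf version="1.7">
  <world name="microgravity">
    <!-- Zero gravity: unanchored objects drift instead of falling -->
    <gravity>0 0 0</gravity>
    <include>
      <uri>model://target_object</uri>  <!-- placeholder model name -->
    </include>
  </world>
</sdf>
```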

As we prepare for our flight, we will develop this simulator into a useful prototyping tool (and an open-source contribution so others can experiment with microgravity manipulation strategies).

Update 3: Mechanical design

There are a few criteria guiding our mechanical design for this project:

  1. Our robot has only 4 degrees of freedom, so we’d like to add a linear (y-axis) degree of freedom to decouple gripper yaw from the y coordinate.

  2. We need a way to mount two cameras: one the robot will use to track the object, and one to record the experiment.

  3. We need some way of launching the target object towards the robot.

Here is our final mechanical design:

There are a few features to call out:

  1. To add a degree of linear motion to the robot, we added a linear stage driven by stepper motors. One stepper motor will be actively driven, with the other providing redundancy (and a convenient way to align the belt drive). The linear stage has 2 carbon steel shafts and 4 linear bearings carrying the robot platform.

  2. To support the cameras, we added an 80/20 aluminum frame. This will also provide protection for the robot and support a safety net that will keep the floating object confined to the experiment enclosure.

  3. We added a spring-loaded launching system powered by a rubber band to allow for consistent release of the target object towards the robot.
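For a belt-driven stage like this, the linear resolution falls out of the motor's step count and the pulley geometry. The numbers below (NEMA-17-class stepper, GT2 belt, 20-tooth pulley) are illustrative assumptions, not the actual flight hardware specs:

```python
def stage_resolution_mm(steps_per_rev=200, microsteps=16,
                        belt_pitch_mm=2.0, pulley_teeth=20):
    """Linear travel per microstep for a belt-driven stage.

    Default values are illustrative: a 200-step/rev stepper at 16x
    microstepping, driving a GT2 (2 mm pitch) belt via a 20-tooth pulley.
    """
    mm_per_rev = belt_pitch_mm * pulley_teeth        # 40 mm of belt per motor revolution
    return mm_per_rev / (steps_per_rev * microsteps) # mm per microstep

res = stage_resolution_mm()
# → 0.0125 mm per microstep with the default values
```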

Update 4: Prototyping!

To rapidly prototype and test out our design, we used a big piece of scrap acrylic for the base and replaced the 80/20 frame with a tripod. We then built the linear stage and mounted the robot, yielding our first prototype!

Notice that we’ve added fiducial markers (fancy QR codes) to both the target object and the linear stage. This helps the camera track the target object and tell the robot where to move (without relying on tedious camera calibration).
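The reason fiducials sidestep careful camera calibration: the camera reports each marker's pose in its own frame, and when we compute the target's pose relative to the stage marker, the camera's mounting pose cancels out. A sketch with homogeneous transforms (the toy poses are made up for illustration):

```python
import numpy as np

def relative_pose(T_cam_stage, T_cam_target):
    """Pose of the target in the stage/robot frame, given each fiducial's
    pose in the camera frame (4x4 homogeneous transforms). The camera's
    own pose cancels, so precise camera extrinsics aren't needed."""
    return np.linalg.inv(T_cam_stage) @ T_cam_target

# Toy example: stage marker at the camera origin, target 30 cm away along x.
T_cam_stage = np.eye(4)
T_cam_target = np.eye(4)
T_cam_target[:3, 3] = [0.3, 0.0, 0.0]
T_stage_target = relative_pose(T_cam_stage, T_cam_target)
# translation part: [0.3, 0.0, 0.0]
```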

We decided to use an off-the-shelf stepper driver (the Pololu T825) to control the stepper motor via USB, rather than adding a microcontroller to fill that role. This keeps complexity low.

Update 5: Coding and integration!

After some long days of coding, debugging, and testing, we have our first microgravity manipulation controller working! This uses simple linear feedback to track the floating object.
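A linear feedback tracking law can be as simple as commanding an end-effector velocity proportional to the position error, with the object's drift velocity fed forward. The sketch below is an illustrative stand-in for our controller; the gains and loop rate are made-up values, not the flight configuration:

```python
import numpy as np

def track_step(ee_pos, obj_pos, obj_vel, kp=2.0, dt=0.02):
    """One step of a simple linear feedback tracking law: drive the
    end-effector toward the object, feeding forward the object's drift
    velocity. Gain kp and timestep dt are illustrative, not flight values."""
    err = np.asarray(obj_pos) - np.asarray(ee_pos)
    cmd_vel = kp * err + np.asarray(obj_vel)   # P feedback + velocity feedforward
    return np.asarray(ee_pos) + cmd_vel * dt   # integrate the commanded velocity

# The end effector converges onto a drifting object:
ee = np.zeros(3)
obj = np.array([0.3, 0.0, 0.1])
v = np.array([0.02, 0.0, 0.0])   # 2 cm/s drift
for _ in range(500):
    ee = track_step(ee, obj, v)
    obj = obj + v * 0.02
```

With the feedforward term, the tracking error decays geometrically (by a factor of 1 − kp·dt per step) regardless of the object's drift speed.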

  • [1] Hambuchen et al. “A Review of NASA Human-Robot Interaction in Space.” Current Robotics Reports, 2021.

  • [2] Ahlstrom et al. “Robonaut 2 on the International Space Station: Status Update and Preparations for IVA Mobility.” AIAA Space 2013 Conference.

  • [3] Farrell et al. “Supervisory Control of a Humanoid Robot in Microgravity for Manipulation Tasks.” IEEE IROS, 2017.

  • [4] Hirzinger. “ROTEX — The first space robot technology experiment.” Experimental Robotics III, 2005.

  • [5] Dawson et al. “Safe Control With Learned Certificates: A Survey of Neural Lyapunov, Barrier, and Contraction Methods for Robotics and Control.” IEEE T-RO, 2023.
