Enabling Efficient Mobile Manipulation

Despite its importance in both industrial and service robotics, mobile manipulation remains a significant challenge, as it requires seamless integration of end-effector trajectory generation with navigation skills, as well as reasoning over long horizons. Existing methods struggle to control the large configuration space and to navigate dynamic and unknown environments. As a result, mobile manipulation is commonly reduced to sequential base navigation followed by static arm manipulation at the goal location. This simplification is restrictive, as many tasks such as door opening require the joint use of the arm and the base, and inefficient, as it precludes simultaneous movement and requires frequent repositioning.


Mobile manipulation tasks in unstructured environments typically require the simultaneous use of the robotic arm and the mobile base. While it is comparatively simple to find end-effector motions that complete a task (green), defining base motions (blue) that conform to both the robot's and the environment's constraints is highly challenging. We propose Neural Navigation for Mobile Manipulation (N2M2), an effective approach that learns feasible base motions for arbitrary end-effector motions. The resulting model is flexible, dynamic, and generalizes to unseen motions and tasks. We demonstrate these capabilities in both extensive simulation and real-world experiments on multiple mobile manipulators, across a wide range of tasks and environments.


N2M2: How Does It Work?

Figure: N2M2: We decompose mobile manipulation tasks into two components: an end-effector (EE) motion generator and a conditional RL agent that controls the base, the torso lift joint, and the velocity of the end-effector motion. The agent receives a local map of the environment (shown in grey) together with the robot state and the upcoming end-effector motions. Given the agent's actions, an inverse kinematics solver then solves for the remaining arm joints.


At its core, our approach decomposes the mobile manipulation problem into an end-effector motion and base velocities learned by a reinforcement learning agent, with the goal of ensuring that the simultaneous end-effector and base motions are kinematically feasible. We extend our previous formulation for learning kinematically feasible motions so that the agent also learns to avoid collisions with the environment, and further expand its control to the velocity of the end-effector motion and the height of the robot torso. We then generalize the objective and training scheme to learn policies that transfer to unseen environments while remaining applicable to a diverse set of mobile manipulation platforms. The resulting approach solves a very broad range of tasks in unstructured real-world environments.
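The decomposition above can be illustrated with a minimal control-loop sketch: a task-space planner proposes the next end-effector goal, a policy maps the local map, robot state, and upcoming EE motion to base velocities, a torso lift command, and an EE velocity scaling, and an IK step resolves the arm. All names and functions here (`ee_motion_generator`, `policy`, `ik_solver`) are hypothetical placeholders for illustration, not the authors' actual implementation.

```python
# Illustrative sketch of the described control loop; all components are
# hypothetical stand-ins, not the real N2M2 code.
import math

def ee_motion_generator(step, total_steps=10):
    """Hypothetical task-space plan: a straight-line EE motion."""
    t = step / total_steps
    return {"x": 0.5 + 0.3 * t, "y": 0.0, "z": 0.8}  # target EE position

def policy(local_map, robot_state, next_ee_goals):
    """Stand-in for the learned RL agent: outputs base velocities,
    a torso lift command, and a speed scaling for the EE motion."""
    dx = next_ee_goals[0]["x"] - robot_state["base_x"]
    return {
        "base_vx": max(-0.2, min(0.2, dx)),  # clipped forward velocity
        "base_wz": 0.0,                      # angular velocity
        "torso_lift": 0.1,                   # torso height adjustment
        "ee_vel_scale": 1.0,                 # slow the EE motion if needed
    }

def ik_solver(robot_state, ee_goal):
    """Stand-in for the IK step: only checks reachability of the goal
    relative to the (moved) base, within a nominal arm reach."""
    reach = math.hypot(ee_goal["x"] - robot_state["base_x"], ee_goal["z"] - 0.3)
    return reach < 1.0

def control_loop(steps=10, dt=0.1):
    robot_state = {"base_x": 0.0}
    feasible = []
    for step in range(steps):
        ee_goal = ee_motion_generator(step, steps)
        local_map = None  # occupancy patch around the robot (omitted here)
        action = policy(local_map, robot_state, [ee_goal])
        robot_state["base_x"] += action["base_vx"] * dt  # integrate base motion
        feasible.append(ik_solver(robot_state, ee_goal))
    return feasible
```

The key design point mirrored here is that the base command is conditioned on the upcoming end-effector motion, so the base keeps the goal within the arm's reachable workspace instead of being planned independently.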

Videos

Code and Models

A PyTorch-based software implementation of this project can be found in our GitHub repository and is released under the GPLv3 license for academic usage. For any commercial purpose, please contact the authors.

Publications

Daniel Honerkamp, Tim Welschehold and Abhinav Valada,
Learning Navigation for Arbitrary Mobile Manipulation Motions in Unseen and Dynamic Environments
arXiv preprint, 2022.

(Pdf) (Bibtex)


Daniel Honerkamp, Tim Welschehold and Abhinav Valada,
Learning Kinematic Feasibility for Mobile Manipulation through Deep Reinforcement Learning
IEEE Robotics and Automation Letters (RA-L), 2021.

(Pdf) (Bibtex)


Acknowledgements

This work was funded by the European Union’s Horizon 2020 research and innovation program under grant agreement No 871449-OpenDR.

People