Despite its importance in both industrial and service robotics, mobile manipulation remains a
significant challenge, as it requires the seamless integration of end-effector trajectory
generation with navigation skills as well as reasoning over long horizons. Existing methods
struggle to control the large configuration space and to navigate dynamic, unknown
environments.
As a result, mobile manipulation is commonly reduced to base navigation followed by static
arm manipulation at the goal location. This simplification is restrictive, as many tasks such
as door opening require the joint use of arm and base, and inefficient, as it precludes
simultaneous movement and requires frequent repositioning.
Fig. 1: Mobile manipulation tasks in unstructured environments typically require the
simultaneous use of the robotic arm and the mobile base. While it is comparably simple to
find end-effector motions that complete a task (green), defining base motions (blue) that
conform to both the robot's and the environment's constraints is highly challenging. We
propose Neural Navigation for Mobile Manipulation (N2M2), an effective approach that learns
feasible base motions for arbitrary end-effector motions. The resulting model is flexible and
dynamic, and generalizes to unseen motions and tasks. We demonstrate these capabilities in
both extensive simulation and real-world experiments on multiple mobile manipulators, across
a wide range of tasks and environments.
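To make the decoupled interface described above concrete, the following is a minimal, hypothetical Python sketch of how a learned base-motion policy could accompany an arbitrary end-effector motion: at each control step the policy observes the next end-effector waypoint expressed in the base frame together with a local view of the environment, and outputs a base velocity command. All names (base_policy, world_to_base, the toy gains, and the occupancy-grid input) are illustrative assumptions for exposition, not the paper's actual architecture or API.

```python
# Hypothetical sketch of the decoupled interface: an end-effector motion is given,
# and a learned policy proposes base velocities that keep that motion reachable.
# The toy policy below is a stand-in for a trained network.
import numpy as np

def base_policy(ee_target_in_base: np.ndarray, local_map: np.ndarray) -> np.ndarray:
    """Stub for a learned base-motion policy.

    ee_target_in_base: next end-effector waypoint [x, y, yaw] in the base frame.
    local_map:         local occupancy grid around the robot (unused by this stub).
    Returns a base command [v_x, v_y, omega]; a trained network would replace this.
    """
    v_xy = 0.3 * ee_target_in_base[:2]                    # move toward the waypoint
    omega = 0.3 * np.arctan2(ee_target_in_base[1],        # turn toward the waypoint
                             ee_target_in_base[0])
    return np.array([v_xy[0], v_xy[1], omega])

def world_to_base(pose_world: np.ndarray, base_pose: np.ndarray) -> np.ndarray:
    """Express a planar pose [x, y, yaw] given in the world frame in the base frame."""
    dx, dy = pose_world[:2] - base_pose[:2]
    c, s = np.cos(-base_pose[2]), np.sin(-base_pose[2])
    return np.array([c * dx - s * dy, s * dx + c * dy, pose_world[2] - base_pose[2]])

# Toy rollout: the arm tracks the waypoints (e.g. with an IK controller, not shown)
# while the base follows the policy's velocity commands.
base_pose = np.zeros(3)                                   # [x, y, yaw] in world frame
ee_waypoints = [np.array([1.0, 0.5, 0.0]), np.array([1.5, 0.5, 0.0])]
dt = 0.1
for wp in ee_waypoints:
    cmd = base_policy(world_to_base(wp, base_pose), local_map=np.zeros((64, 64)))
    # Integrate the planar velocity command in the world frame for one control step.
    c, s = np.cos(base_pose[2]), np.sin(base_pose[2])
    base_pose[:2] += dt * np.array([c * cmd[0] - s * cmd[1], s * cmd[0] + c * cmd[1]])
    base_pose[2] += dt * cmd[2]
```

The point of the sketch is only the division of labor: the end-effector motion is assumed to be given, and the base controller's sole job is to produce velocities from which that motion remains feasible in the current environment.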