Control of contact interactions for robots acting in the world (June 2015 - November 2020)
What are the algorithmic principles that would allow a robot to run across rocky terrain, lift a couch while reaching for an object that has rolled under it, or manipulate a screwdriver while balancing on top of a ladder? By answering these questions in CONT-ACT, we aim to understand the fundamental principles of robot locomotion and manipulation and to endow robots with the robustness and adaptability necessary to act efficiently and autonomously in unknown and changing environments. This is a necessary step towards a new technological age: ubiquitous robots capable of helping humans in countless tasks.
Dynamic interaction of a robot with its environment through the creation of intermittent physical contacts is central to any locomotion or manipulation task. Indeed, in order to walk or manipulate an object, a robot must constantly make and break physical contact with the environment and surrounding objects. Our approach to motion generation and control in CONT-ACT therefore gives contact interactions a central place. Our main hypothesis is that this contact-centric view will allow us to develop more adaptive and robust planning and control algorithms for locomotion and manipulation. The project is divided into three main objectives: 1) the development of a hierarchical receding horizon control architecture for multi-contact behaviors, 2) the development of algorithms to learn representations for motion generation from multi-modal sensing (e.g. force and touch sensing), and 3) the development of controllers based on multi-modal sensory information through optimal control and reinforcement learning.
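To illustrate the receding-horizon idea behind the first objective, the Python sketch below is a minimal toy example and not part of the CONT-ACT architecture: at every control step it solves a finite-horizon optimal control problem for an assumed double-integrator model (dynamics, cost weights, and horizon length are all illustrative choices) and applies only the first control before re-planning from the new state. The actual project targets hierarchical, multi-contact problems with far richer models; only the loop structure is shown here.

```python
# Minimal receding-horizon control sketch (hypothetical example):
# re-plan over a finite horizon at every step, apply only the first control.
import numpy as np

# Discrete double integrator: state x = [position, velocity], control u = acceleration.
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
Q = np.diag([10.0, 1.0])   # state cost (assumed weights)
R = np.array([[0.1]])      # control cost (assumed weight)
HORIZON = 20


def finite_horizon_gains(A, B, Q, R, N):
    """Backward Riccati recursion returning time-varying feedback gains."""
    P = Q.copy()
    gains = []
    for _ in range(N):
        K = np.linalg.solve(R + B.T @ P @ B, B.T @ P @ A)
        P = Q + A.T @ P @ (A - B @ K)
        gains.append(K)
    return gains[::-1]  # ordered from the first to the last step of the horizon


def receding_horizon_step(x):
    """Re-solve the horizon problem and return only the first control input."""
    gains = finite_horizon_gains(A, B, Q, R, HORIZON)
    return -gains[0] @ x


# Closed loop: drive the state toward the origin, re-planning at every step.
x = np.array([1.0, 0.0])
for t in range(50):
    u = receding_horizon_step(x)
    x = A @ x + B @ u
print("final state:", x)
```

For this linear time-invariant toy model the gains could of course be computed once; re-solving at every step is kept deliberately to show the receding-horizon pattern, which becomes essential once the model, contacts, or environment change between steps.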