Dynamic Locomotion for Humanoid Robots Via Deep Reinforcement Learning

Date
2022-09
Abstract
A longstanding goal in legged mobile robotics is to enable robots to learn control policies robust enough to remain competent and efficient on a priori unknown, uneven terrain. Traditional control techniques lack the generalizability legged robots need to locomote outside a controlled lab environment, on terrains with diverse, difficult-to-model uncertainties (e.g., friction coefficients, compliance/deformation, etc.). The approach presented in this thesis combines Deep Reinforcement Learning (Deep RL) with motion capture (MoCap) data to train a simulated bipedal agent (i.e., a humanoid robot) to perform a rich repertoire of skills (e.g., walking, crawling, running, climbing steps, etc.). The method begins by creating reference motion clips for the desired locomotion gaits that match the robot's morphology. This is achieved by applying motion retargeting techniques to adapt MoCap clips (recorded from humans) to the humanoid system. These retargeted reference clips are then used as inputs to a new Deep RL architecture, which uses the Proximal Policy Optimization (PPO) algorithm to train the simulated agent to perform the gait of interest on randomized domains that vary with each iteration. After training, the resulting control policy is transferred to a real robot for testing and fine-tuning. The benefits of this approach include i) reducing the training time typically required for multi-legged systems, ii) sparing the robot and the trainer (human operator) the tedious and time-consuming initial iterations of the learning process, where mistakes are most likely, and iii) generalizability: the training pipeline can be applied to virtually any available legged mobile robot. The proposed approach thus enables robots to locomote in real-world settings. The results are demonstrated in simulation and experimentally validated on a ROBOTIS THORMANG 3.0 humanoid robot.
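The per-iteration domain randomization mentioned above can be illustrated with a minimal sketch: each training episode draws fresh physical parameters so the policy cannot overfit to a single simulated world. The parameter names, ranges, and loop structure below are illustrative assumptions, not the thesis's actual configuration; the PPO update and simulator rollout are omitted.

```python
import random

def sample_domain():
    """Sample one randomized physics configuration (illustrative ranges)."""
    return {
        "friction": random.uniform(0.4, 1.2),    # ground friction coefficient
        "mass_scale": random.uniform(0.8, 1.2),  # per-link mass multiplier
        "latency_s": random.uniform(0.0, 0.04),  # actuation delay in seconds
    }

def train(num_episodes, seed=0):
    """Stub training loop: one freshly randomized domain per episode."""
    random.seed(seed)
    domains = []
    for _ in range(num_episodes):
        domain = sample_domain()
        domains.append(domain)
        # ... roll out the PPO policy in a simulator configured with `domain`,
        # score the rollout against the retargeted MoCap reference clip,
        # and apply a PPO update (omitted in this sketch).
    return domains

domains = train(3)
```

Because the simulator sees a different world each episode, the learned policy must succeed across the whole sampled range, which is what makes the later sim-to-real transfer plausible.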
Keywords
Dynamic Locomotion, Reinforcement Learning, Humanoids
Citation
Garza Bayardo, R. A. (2022). Dynamic locomotion for humanoid robots via deep reinforcement learning (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.