Supervisor: Ramirez-Serrano, Alejandro
Author: Garza Bayardo, Rodrigo Alberto
Date: 2022-09 (deposited 2022-09-28)
Citation: Garza Bayardo, R. A. (2022). Dynamic locomotion for humanoid robots via deep reinforcement learning (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca
URI: http://hdl.handle.net/1880/115311
DOI: https://dx.doi.org/10.11575/PRISM/40317

Abstract: A longstanding goal in legged mobile robotics is to enable robots to learn robust control policies that extend their locomotion abilities so they can move competently and efficiently on a priori unknown, uneven terrains. Traditional control techniques lack the generalizability required for legged robots to locomote outside a controlled lab environment on terrains with diverse, difficult-to-model uncertainties (e.g., friction coefficients, compliance/deformation). The approach presented in this thesis combines Deep Reinforcement Learning (Deep RL) with motion capture (MoCap) data to train a simulated biped agent (i.e., a humanoid robot) to perform a rich repertoire of diverse skills (e.g., walking, crawling, running, climbing steps). The method begins by creating reference motion clips matched to the robot's morphology for the desired locomotion gaits; this is achieved by applying motion retargeting techniques to adapt MoCap clips (recorded from humans) to the humanoid system. These adapted reference clips are then used as inputs to a new Deep RL architecture, which uses the Proximal Policy Optimization (PPO) algorithm to train the simulated agent to perform the gait of interest on randomized domains that vary with each iteration. After training, the resulting control policy is transferred to a real robot for testing and fine-tuning.
The benefits of this approach include i) reducing the training time typically required for multi-legged systems, ii) avoiding exposing the robot and the trainer (human operator) to the tedious and time-consuming initial iterations of the learning process, where mistakes are likely to happen, and iii) generalizability, i.e., the ability to employ the trained model on virtually any available legged mobile robot. The proposed approach thus enables robots to locomote in real-world settings. The results are demonstrated in simulation and tested experimentally on a Robotis THORMANG 3.0 humanoid robot.

Language: eng
Rights: University of Calgary graduate students retain copyright ownership and moral rights for their theses. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.
Subjects: Dynamic Locomotion; Reinforcement Learning; Humanoids; Education--Technology; Artificial Intelligence; Robotics
Title: Dynamic Locomotion for Humanoid Robots Via Deep Reinforcement Learning
Type: master thesis
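The abstract names two core ingredients of the training pipeline: PPO's clipped surrogate objective and physics parameters that are re-randomized each iteration, with an imitation reward for tracking the retargeted MoCap reference. The following is a minimal, self-contained Python sketch of those ideas only; the function names, parameter ranges, and reward form are illustrative assumptions, not values taken from the thesis.

```python
import math
import random

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate for one sample.
    ratio = pi_new(a|s) / pi_old(a|s); eps is the clip range."""
    clipped = max(1.0 - eps, min(ratio, 1.0 + eps))
    return min(ratio * advantage, clipped * advantage)

def sample_randomized_domain(rng):
    """Domain randomization: draw new physics parameters each
    training iteration. Ranges here are placeholders."""
    return {
        "friction": rng.uniform(0.5, 1.2),
        "mass_scale": rng.uniform(0.8, 1.2),
        "motor_strength": rng.uniform(0.9, 1.1),
    }

def imitation_reward(pose, ref_pose):
    """Reward for tracking the retargeted reference clip:
    exp(-squared joint-angle error), peaking at 1 for a perfect match."""
    err = sum((p - r) ** 2 for p, r in zip(pose, ref_pose))
    return math.exp(-err)
```

In a full training loop, each iteration would sample a new domain, roll out the current policy against the reference clip to collect `(ratio, advantage)` pairs, and ascend the averaged clipped objective; the clipping keeps policy updates conservative, which is what makes PPO stable enough for long humanoid-locomotion training runs.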