Dynamic Locomotion for Humanoid Robots Via Deep Reinforcement Learning

dc.contributor.advisor: Ramirez-Serrano, Alejandro
dc.contributor.author: Garza Bayardo, Rodrigo Alberto
dc.contributor.committeemember: Lee, Jihyun
dc.contributor.committeemember: Nittala, Aditya Shekhar
dc.date: 2022-11
dc.date.accessioned: 2022-09-28T15:19:25Z
dc.date.available: 2022-09-28T15:19:25Z
dc.date.issued: 2022-09
dc.description.abstract: A longstanding goal in legged mobile robotics is to enable robots to learn robust control policies that extend their locomotion abilities, making them competent and efficient on a priori unknown, uneven terrains. Traditional control techniques lack the generalizability legged robots need to locomote outside a controlled lab environment, on terrains with diverse, difficult-to-model uncertainties (e.g., friction coefficients, compliance/deformation). The approach presented in this thesis combines Deep Reinforcement Learning (Deep RL) with motion capture (MoCap) data to train a simulated biped agent (i.e., a humanoid robot) to perform a rich repertoire of skills (e.g., walking, crawling, running, climbing steps). The method begins by creating reference motion clips, matched to the robot's morphology, for the desired locomotion gaits; this is achieved by applying motion retargeting techniques to adapt MoCap clips (recorded from humans) to the humanoid system. These adapted reference clips are then used as inputs to a new Deep RL architecture, which uses the Proximal Policy Optimization (PPO) algorithm to train the simulated agent to perform the gait of interest on randomized domains that vary with each iteration (see the training sketch after this record). After training, the generated control policy is transferred to a real robot for testing and fine-tuning. The benefits of this approach include i) reducing the training time typically required for multi-legged systems, ii) sparing the robot and the trainer (human operator) the tedious and time-consuming initial iterations of the learning process, where mistakes are likely to happen, and iii) generalizability: the ability to apply the training model to virtually any available legged mobile robot. The proposed approach thus enables robots to locomote in real-world settings. The results are demonstrated in simulation and experimentally tested on a ROBOTIS THORMANG 3.0 humanoid robot.
dc.identifier.citation: Garza Bayardo, R. A. (2022). Dynamic locomotion for humanoid robots via deep reinforcement learning (Master's thesis, University of Calgary, Calgary, Canada). Retrieved from https://prism.ucalgary.ca.
dc.identifier.uri: http://hdl.handle.net/1880/115311
dc.identifier.uri: https://dx.doi.org/10.11575/PRISM/40317
dc.language.iso: eng
dc.publisher.faculty: Schulich School of Engineering
dc.publisher.institution: University of Calgary
dc.rights: University of Calgary graduate students retain copyright ownership and moral rights for their thesis. You may use this material in any way that is permitted by the Copyright Act or through licensing that has been assigned to the document. For uses that are not allowable under copyright legislation or licensing, you are required to seek permission.
dc.subject: Dynamic Locomotion
dc.subject: Reinforcement Learning
dc.subject: Humanoids
dc.subject.classification: Education--Technology
dc.subject.classification: Artificial Intelligence
dc.subject.classification: Robotics
dc.title: Dynamic Locomotion for Humanoid Robots Via Deep Reinforcement Learning
dc.type: master thesis
thesis.degree.discipline: Engineering – Mechanical & Manufacturing
thesis.degree.grantor: University of Calgary
thesis.degree.name: Master of Science (MSc)
ucalgary.item.requestcopy: true
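
The abstract describes a PPO training stage run on randomized domains that change each iteration. As a point of reference, the following is a minimal sketch of what such a stage could look like, assuming a MuJoCo humanoid task through Gymnasium and the Stable-Baselines3 PPO implementation; the environment name, friction-randomization range, and timestep budget are illustrative assumptions, not the thesis code.

```python
# Illustrative sketch only: PPO training with per-episode domain randomization,
# as outlined in the abstract. Assumes Gymnasium's MuJoCo Humanoid-v4 task and
# Stable-Baselines3; the thesis adds reward terms that track the retargeted
# MoCap reference clips, which are not reproduced here.
import gymnasium as gym
import numpy as np
from stable_baselines3 import PPO


class FrictionRandomizationWrapper(gym.Wrapper):
    """Re-samples ground friction at every reset so each episode is a new domain."""

    def __init__(self, env, friction_range=(0.5, 1.5)):  # range is an assumption
        super().__init__(env)
        self.friction_range = friction_range
        # Keep the nominal friction so scaling does not compound across episodes.
        self.base_friction = self.unwrapped.model.geom_friction[:, 0].copy()

    def reset(self, **kwargs):
        # Draw a fresh friction scale for this episode, then reset as usual.
        scale = np.random.uniform(*self.friction_range)
        self.unwrapped.model.geom_friction[:, 0] = self.base_friction * scale
        return self.env.reset(**kwargs)


env = FrictionRandomizationWrapper(gym.make("Humanoid-v4"))
model = PPO("MlpPolicy", env, verbose=1)
model.learn(total_timesteps=1_000_000)  # budget is illustrative
model.save("humanoid_ppo_policy")
```

In the thesis pipeline, more dynamics parameters than friction would presumably be randomized, and the reward would additionally follow the retargeted reference motion; this sketch only shows the structure of the randomize-then-train loop.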
Files

Original bundle
Name: ucalgary_2022_garza-bayardo_rodrigo.pdf
Size: 48.26 MB
Format: Adobe Portable Document Format

License bundle
Name: license.txt
Size: 2.62 KB
Format: Item-specific license agreed to upon submission