Learning Legged Locomotion
Written on October 8th, 2021 by Geoffrey Clark
Since I unfortunately do not have access to a high-quality legged robot, I chose to explore reinforcement learning on the MIT Mini Cheetah in simulation. Using reinforcement learning, I trained a locomotion model in which the robot learned to walk, run at speeds up to 10 m/s, and crouch to pass under low objects.

Without a physical robot for testing, I instead ported both the robot and the learned neural network into Unity for high-fidelity simulation. Porting the robot and controller into a new simulation environment poses many of the same challenges as porting them to a real-world robot: minor differences between environments mean the controller must be highly robust to remain effective. To accomplish this sim-to-sim transfer, I trained my control policy while varying critical simulation parameters such as friction, mass, and control timing, and while adding varying noise to each sensor.

Training took approximately 2 hours and simulated over 15 days of robot time. Ultimately, my training resulted in a robust and effective control policy that I can drive around the simulated world with a controller. I am currently adding jumping, flipping over, and additional sensing capabilities to the robot.
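The randomization scheme described above (varying friction, mass, and control timing, plus per-sensor noise) can be sketched roughly as follows. This is a minimal illustration, not the actual training code: the parameter names and numeric ranges here are hypothetical placeholders, since the post does not state the values used.

```python
import random

# Hypothetical randomization ranges -- the real values would be tuned
# to bracket the gap between the training simulator and Unity.
RANDOMIZATION_RANGES = {
    "friction":         (0.4, 1.2),     # ground friction coefficient
    "mass_scale":       (0.8, 1.2),     # multiplier on nominal link masses
    "control_dt":       (0.008, 0.012), # control-loop period (s), i.e. timing jitter
    "sensor_noise_std": (0.0, 0.05),    # std-dev of Gaussian noise per sensor
}

def sample_episode_params(rng=random):
    """Draw one set of simulation parameters at the start of each episode,
    so the policy never sees exactly the same physics twice."""
    return {name: rng.uniform(lo, hi)
            for name, (lo, hi) in RANDOMIZATION_RANGES.items()}

def noisy_observation(obs, noise_std, rng=random):
    """Corrupt each sensor reading with zero-mean Gaussian noise before
    it reaches the policy."""
    return [x + rng.gauss(0.0, noise_std) for x in obs]

# At episode start: apply sampled params to the simulator, then add noise
# to every observation the policy receives during the episode.
params = sample_episode_params()
obs = noisy_observation([0.0] * 12, params["sensor_noise_std"])
```

Because the policy only ever experiences randomized physics and noisy sensors during training, it cannot overfit to one simulator's quirks, which is what makes the transfer to Unity (or, in principle, to hardware) feasible.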
Further Information
For further information or to follow along with my progress, please consult my GitHub repository.