Generalized Animal Imitator:
Agile Locomotion with Versatile Motion Prior

1UC San Diego, 2CMU, 3USC
*Equal Contribution

Our robot performs diverse agile locomotion skills with a Single Instructable Motion Prior.

Abstract

The agility of animals, particularly in complex activities such as running, turning, jumping, and backflipping, stands as an exemplar for robotic system design. Transferring this suite of behaviors to legged robotic systems raises essential questions: How can a robot be trained to learn multiple locomotion behaviors simultaneously? How can the robot execute these tasks with smooth transitions? And what strategies allow for the integrated application of these skills? This paper introduces the Versatile Instructable Motion prior (VIM) – a reinforcement learning framework designed to incorporate a range of agile locomotion tasks suitable for advanced robotic applications. Our framework enables legged robots to learn diverse agile low-level skills by imitating animal motions and manually designed motions with a Functionality reward and a Stylization reward. While the Functionality reward guides the robot's acquisition of varied skills, the Stylization reward ensures that its performance aligns with the style of the reference motions. Our evaluations of the VIM framework span both simulation environments and real-world deployment. To our knowledge, this is the first work that allows a robot to concurrently learn diverse agile locomotion tasks using a single controller.

Video

Our system learns a Single Instructable Motion Prior from a diverse reference motion dataset.

We present the Versatile Instructable Motion prior (VIM), designed to acquire a wide range of agile locomotion skills concurrently from multiple reference motions. The development of our motion prior involves three stages: assembling a comprehensive dataset of reference motions from diverse sources, crafting a motion prior that processes varying reference motions and the robot's proprioceptive feedback to generate motor commands, and finally, utilizing an imitation-based reward mechanism to train this motion prior. Given this formulation, the robot learns diverse agile locomotion skills with our imitation reward and reward scheduling mechanism. The reward offers consistent guidance, ensuring the robot captures both the functionality and style inherent to the reference motion.
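As a concrete illustration, the sketch below shows one way such an imitation reward could be assembled. This is a minimal sketch under stated assumptions, not VIM's actual implementation: we assume the Functionality term tracks the reference's task-space (root) trajectory, the Stylization term tracks joint-level style, and the schedule linearly shifts emphasis from functionality toward style over training; the paper's exact terms, scales, and schedule may differ.

```python
import numpy as np

def functionality_reward(robot_root, ref_root, scale=2.0):
    """Reward for matching the reference motion's task-space (root) trajectory."""
    err = np.sum((np.asarray(robot_root) - np.asarray(ref_root)) ** 2)
    return np.exp(-scale * err)

def stylization_reward(robot_joints, ref_joints, scale=5.0):
    """Reward for matching the reference motion's joint-level style."""
    err = np.sum((np.asarray(robot_joints) - np.asarray(ref_joints)) ** 2)
    return np.exp(-scale * err)

def imitation_reward(robot_root, ref_root, robot_joints, ref_joints,
                     progress, w_func=0.5, w_style=0.5):
    """Combined imitation reward.

    `progress` in [0, 1] is a hypothetical training-progress schedule:
    early on the functionality term dominates (acquire the skill), later
    the stylization term dominates (polish how it looks).
    """
    w = np.clip(progress, 0.0, 1.0)
    return ((1.0 - w) * w_func * functionality_reward(robot_root, ref_root)
            + w * w_style * stylization_reward(robot_joints, ref_joints))
```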

Our motion prior consists of a reference motion encoder and a low-level policy. The reference motion encoder maps varying reference motions into a condensed latent skill space, and the low-level policy, trained with our imitation reward, reproduces the robot's motion given a latent command.
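A minimal PyTorch sketch of this two-module design follows. All dimensions, layer sizes, and module names (ReferenceMotionEncoder, LowLevelPolicy) are illustrative assumptions rather than the paper's exact architecture; the sketch only shows the data flow from reference motion to latent skill command to motor commands conditioned on proprioception.

```python
import torch
import torch.nn as nn

class ReferenceMotionEncoder(nn.Module):
    """Maps a window of reference motion frames to a latent skill command z."""
    def __init__(self, ref_dim=240, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(ref_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, latent_dim),
        )

    def forward(self, ref_motion):           # (B, ref_dim)
        return self.net(ref_motion)          # (B, latent_dim)

class LowLevelPolicy(nn.Module):
    """Produces motor commands (e.g. target joint positions) from the
    robot's proprioception and the latent skill command."""
    def __init__(self, proprio_dim=48, latent_dim=16, action_dim=12):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(proprio_dim + latent_dim, 256), nn.ELU(),
            nn.Linear(256, 128), nn.ELU(),
            nn.Linear(128, action_dim),
        )

    def forward(self, proprio, z):
        return self.net(torch.cat([proprio, z], dim=-1))

# Usage: encode a reference clip once, then condition the policy on it.
encoder, policy = ReferenceMotionEncoder(), LowLevelPolicy()
z = encoder(torch.randn(1, 240))             # latent command for one skill
action = policy(torch.randn(1, 48), z)       # 12 joint targets (quadruped)
```

Conditioning the policy on a compact latent command, rather than on the raw reference clip, is what lets a single policy switch between skills at deployment time by swapping the latent.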

Learned Low-Level Skills (With A Single Policy)

Each video pairs the reference motion with the robot's motion imitation under VIM. Skills shown: Backflipping, Jump Forward, Jump While Running, Canter, Jump Forward (Synthesized), Right Turn, Left Turn, Trot, Pace, Walk, Left Turn (Synthesized).