
Offline Imitation Learning upon Arbitrary Demonstrations by Pre-Training Dynamics Representations

1Harvard University, 2Georgia Institute of Technology
3Mitsubishi Electric Research Laboratories

Abstract

Limited data has become a major bottleneck in scaling up offline imitation learning (IL). In this paper, we propose improving IL performance under limited expert data by introducing a pre-training stage that learns dynamics representations derived from a factorization of the transition dynamics.
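
For concreteness, a standard factorization of this kind (written in our own notation \phi, \mu; the paper's exact parameterization may differ) expresses the transition kernel as an inner product of two feature maps:

T(s' \mid s, a) = \langle \phi(s, a), \mu(s') \rangle,
\qquad \phi : \mathcal{S} \times \mathcal{A} \to \mathbb{R}^d,
\quad \mu : \mathcal{S} \to \mathbb{R}^d,

where \phi(s, a) is the dynamics representation targeted by the pre-training stage.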

We first theoretically justify that the optimal decision variable of offline IL lies in the representation space, which significantly reduces the number of parameters to learn in downstream IL. Moreover, the dynamics representations can be learned from arbitrary data collected under the same dynamics, allowing the reuse of massive non-expert datasets and mitigating the limited-data issue.
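
One way to see why this shrinks the downstream search space (a sketch under the factorization above, additionally assuming a reward linear in \phi): value-type decision variables become linear in the pre-trained features,

Q^\pi(s, a) = \langle \phi(s, a), w_\pi \rangle, \qquad w_\pi \in \mathbb{R}^d,

since the Bellman backup \int V^\pi(s')\, T(s' \mid s, a)\, ds' = \langle \phi(s, a), \int V^\pi(s')\, \mu(s')\, ds' \rangle is linear in \phi(s, a) by construction. Downstream IL then only needs to fit a d-dimensional weight vector from the scarce expert data, rather than a full function over states and actions.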

We present a tractable loss function, inspired by noise contrastive estimation, for learning the dynamics representations during the pre-training stage. Experiments on MuJoCo demonstrate that the proposed algorithm can mimic expert policies from as little as a single expert trajectory. Experiments on a real quadruped show that we can leverage dynamics representations pre-trained on simulator data to learn to walk from a few real-world demonstrations.
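
The sketch below shows one way such an NCE-inspired objective can look in practice, assuming PyTorch and hypothetical encoder modules phi_net and mu_net; it is our illustrative reading of the contrastive idea, not the paper's exact loss.

import torch
import torch.nn.functional as F

def nce_dynamics_loss(phi_net, mu_net, s, a, s_next):
    # phi_net, mu_net: hypothetical encoders for phi(s, a) and mu(s').
    # In-batch NCE: the observed next state s'_i is the positive for
    # (s_i, a_i); the other next states in the batch act as negatives
    # drawn from the marginal (the "noise" distribution).
    phi = phi_net(s, a)        # (B, d) state-action features
    mu = mu_net(s_next)        # (B, d) next-state features
    logits = phi @ mu.T        # (B, B) pairwise similarities
    labels = torch.arange(s.shape[0], device=s.device)
    return F.cross_entropy(logits, labels)

Because the loss consumes only transition triples (s, a, s'), it can be evaluated on arbitrary non-expert data gathered under the same dynamics, which is what allows the pre-training stage to reuse massive non-expert datasets.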

Overview of the Imitation Learning Framework

[Figure: structure of the repr-IL framework]

Implementation on the Unitree Go2 Quadruped

BibTeX

@article{ma2024skilltransfer,
  author    = {Ma, Haitong and Dai, Bo and Ren, Zhaolin and Wang, Yebin and Li, Na},
  title     = {Offline Imitation Learning upon Arbitrary Demonstrations by Pre-Training Dynamics Representations},
  journal   = {technical report},
  year      = {2024},
}