Rofunc: The Full Process Python Package for Robot Learning from Demonstration and Robot Manipulation
Repository address: https://github.com/Skylark0924/Rofunc
The Rofunc package focuses on the robot Imitation Learning (IL) and Learning from Demonstration (LfD) fields and provides valuable and convenient Python functions for robotics, including demonstration collection, data pre-processing, LfD algorithms, planning, and control methods. We also provide an Isaac Gym-based robot simulator for evaluation. This package aims to advance the field by building a full-process toolkit and validation platform that simplifies and standardizes the pipeline of demonstration data collection, processing, learning, and deployment on robots.
The installation is very easy:

```
pip install rofunc
```

and, as you will find later, it is easy to use as well:

```python
import rofunc as rf
```

Thus, have fun in the robotics world!
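As a quick sanity check after installation, the snippet below is a minimal sketch (it assumes only the package name `rofunc`, nothing about its internal API) that verifies the package is importable before you start using it:

```python
import importlib.util

def rofunc_available() -> bool:
    """Return True if the rofunc package is importable in this environment."""
    return importlib.util.find_spec("rofunc") is not None

if rofunc_available():
    import rofunc as rf  # ready to use
else:
    print("rofunc is not installed; run `pip install rofunc` first")
```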
**Note:** Several requirements need to be installed before using the package. Please refer to the installation guide for more details.
```shell
git clone https://github.com/Skylark0924/Rofunc.git
cd Rofunc

# Create a conda environment
# Python 3.8 is strongly recommended
conda create -n rofunc python=3.8
conda activate rofunc

# Install the requirements and rofunc
pip install -r requirements.txt
pip install .
```
**Note:** If you want to use functions related to the ZED camera, you need to install the ZED SDK manually. (We tried to package it as a `.whl` file and add it to `requirements.txt`; unfortunately, the ZED SDK is not very friendly and does not support direct installation this way.)
**Note:** Currently, we provide a simple document; please refer to here. A comprehensive version, with both English and Chinese editions, is being built via readthedocs.

To give you a quick overview of the pipeline of `rofunc`, we provide a simple but interesting example: learning to play Taichi from human demonstration. You can find it in the Quick start
section of the documentation.
The available functions and plans are listed below.
**Note:** β: Achieved · π: Reformatting · β: TODO
| Data | | Learning | | P&C | | Tools | | Simulator | |
|---|---|---|---|---|---|---|---|---|---|
| xsens.record | β | DMP | β | LQT | β | Config | β | Franka | β |
| xsens.export | β | GMR | β | LQTBi | β | robolab.fk | β | CURI | β |
| xsens.visual | β | TPGMM | β | LQTFb | β | robolab.ik | β | CURIMini | π |
| opti.record | β | TPGMMBi | β | LQTCP | β | robolab.fd | β | CURISoftHand | π |
| opti.export | β | TPGMMBiCoordLQR | β | LQTCPDMP | β | robolab.id | β | Walker | β |
| opti.visual | β | TPGMR | β | LQR | β | robolab.tran | β | Gluon | π |
| zed.record | β | TPGMRBi | β | PoGLQRBi | β | visualab.dist | β | Baxter | π |
| zed.export | β | BCO | π | iLQR | π | visualab.ellip | β | Sawyer | π |
| zed.visual | β | STrans | β | iLQRBi | π | visualab.traj | β | | |
| emg.record | β | PPO(SKRL) | β | iLQRFb | π | | | | |
| emg.export | β | SAC(SKRL) | β | iLQRCP | π | | | | |
| emg.visual | β | TD3(SKRL) | β | iLQRDyna | π | | | | |
| mmodal.record | β | PPO(SB3) | β | iLQRObs | π | | | | |
| mmodal.export | β | SAC(SB3) | β | MPC | β | | | | |
| | | TD3(SB3) | β | CIO | β | | | | |
| | | PPO(RLlib) | β | | | | | | |
| | | SAC(RLlib) | β | | | | | | |
| | | TD3(RLlib) | β | | | | | | |
| | | PPO(ElegRL) | β | | | | | | |
| | | SAC(ElegRL) | β | | | | | | |
| | | TD3(ElegRL) | β | | | | | | |
| | | PPO(RofuncRL) | π | | | | | | |
| | | SAC(RofuncRL) | β | | | | | | |
| | | TD3(RofuncRL) | β | | | | | | |
| | | CQL(RofuncRL) | β | | | | | | |
If you use `rofunc` in a scientific publication, we would appreciate citations to the following paper:
```bibtex
@misc{Rofunc2022,
  author = {Liu, Junjia and Li, Zhihao and Li, Chenzui and Chen, Fei},
  title = {Rofunc: The full process python package for robot learning from demonstration},
  year = {2022},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/Skylark0924/Rofunc}},
}
```
Rofunc is developed and maintained by the CLOVER Lab (Collaborative and Versatile Robots Laboratory), CUHK.
We would like to acknowledge the following projects: