diff --git a/src/pages/daily/daily.md b/src/pages/daily/daily.md
index 5bdb5da..4aba425 100644
--- a/src/pages/daily/daily.md
+++ b/src/pages/daily/daily.md
@@ -1,3 +1,47 @@
+## 2024-04-09
+### Long-horizon Locomotion and Manipulation on a Quadrupedal Robot with Large Language Models
+
+- **Authors**: Yutao Ouyang, Jinhan Li, Yunfei Li, Zhongyu Li, Chao Yu, Koushil Sreenath, Yi Wu
+- **Main Affiliations**: Shanghai Qizhi Institute, Tsinghua University, University of California, Berkeley
+- **Tags**: `Large Language Models`
+
+#### Abstract
+
+We present a large language model (LLM) based system to empower quadrupedal robots with problem-solving abilities for long-horizon tasks beyond short-term motions. Long-horizon tasks for quadrupeds are challenging since they require both a high-level understanding of the semantics of the problem for task planning and a broad range of locomotion and manipulation skills to interact with the environment. Our system builds a high-level reasoning layer with large language models, which generates hybrid discrete-continuous plans as robot code from task descriptions. It comprises multiple LLM agents: a semantic planner for sketching a plan, a parameter calculator for predicting arguments in the plan, and a code generator to convert the plan into executable robot code. At the low level, we adopt reinforcement learning to train a set of motion planning and control skills to unleash the flexibility of quadrupeds for rich environment interactions. Our system is tested on long-horizon tasks that are infeasible to complete with one single skill. Simulation and real-world experiments show that it successfully figures out multi-step strategies and demonstrates non-trivial behaviors, including building tools or notifying a human for help.
+
+[Paper Link](https://arxiv.org/abs/2404.05291)
+
<div 
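
The abstract describes a pipeline of three LLM agents: a semantic planner sketches discrete steps, a parameter calculator fills in continuous arguments, and a code generator emits executable robot code. A minimal sketch of that control flow, where `query_llm` and the agent/robot-API names are hypothetical stand-ins (here returning canned strings) rather than the paper's actual implementation:

```python
# Hypothetical sketch of the three-agent pipeline from the abstract:
# semantic planner -> parameter calculator -> code generator.

def query_llm(role: str, prompt: str) -> str:
    """Stand-in for a real LLM API call; returns canned text for the demo."""
    canned = {
        "planner": "1. walk_to(box)\n2. push(box, target)",
        "calculator": "walk_to: speed=0.5; push: force=20.0",
        "coder": ("robot.walk_to('box', speed=0.5)\n"
                  "robot.push('box', 'target', force=20.0)"),
    }
    return canned[role]

def semantic_planner(task: str) -> str:
    """Sketch a discrete step-by-step plan from the task description."""
    return query_llm("planner", f"Plan the steps for: {task}")

def parameter_calculator(plan: str) -> str:
    """Predict continuous arguments (speeds, forces) for each planned step."""
    return query_llm("calculator", f"Choose parameters for:\n{plan}")

def code_generator(plan: str, params: str) -> str:
    """Convert the annotated plan into executable robot code."""
    return query_llm("coder", f"Write robot code for:\n{plan}\nusing:\n{params}")

def solve(task: str) -> str:
    """Chain the three agents: plan, parameterize, then generate code."""
    plan = semantic_planner(task)
    params = parameter_calculator(plan)
    return code_generator(plan, params)

print(solve("move the box to the target zone"))
```

The generated code would then call low-level skills (here `robot.walk_to`, `robot.push` are invented names) that the paper trains with reinforcement learning.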