
Awesome In-Context Learning

A curated list of in-context-learning papers, including classic and up-to-date works. This project will be continually updated and improved.

Keyword Explanation

: Classic papers in the field, recommended for readers who want a quick overview.

Contents

Papers

ICL in vision

  • MM-REACT: Prompting ChatGPT for Multimodal Reasoning and Action (2023.03.20) [pdf]

  • Visual ChatGPT: Talking, Drawing and Editing with Visual Foundation Models (2023.03.08) [pdf]

  • What Makes Good Examples for Visual In-Context Learning? (2023.01.31) [pdf]

  • Multimodal Chain-of-Thought Reasoning in Language Models (2023.01.17) [pdf]

  • Socratic Models: Composing Zero-Shot Multimodal Reasoning with Language (2022.04.01) [pdf]

  • Multimodal Few-Shot Learning with Frozen Language Models (2021.06.25) [pdf]

CoT in vision

  • Visual Chain of Thought: Bridging Logical Gaps with Multimodal Infillings (2023.05.03) [pdf]

  • Chain of Thought Prompt Tuning in Vision Language Models (2023.04.16) [pdf]

Theoretical analysis

  • What In-Context Learning "Learns" In-Context: Disentangling Task Recognition and Task Learning (2023.05.16) [pdf]

  • Symbol tuning improves in-context learning in language models (2023.05.15) [pdf]

  • Larger language models do in-context learning differently (2023.03.07) [pdf]

  • Meta Learning to Bridge Vision and Language Models for Multimodal Few-Shot Learning (2023.02.28) [pdf]

  • Transformers as Algorithms: Generalization and Stability in In-context Learning (2023.01.17) [pdf]

  • Why Can GPT Learn In-Context? Language Models Secretly Perform Gradient Descent as Meta-Optimizers (2022.12.20) [pdf]

  • Transformers learn in-context by gradient descent (2022.12.15) [pdf]

  • What learning algorithm is in-context learning? Investigations with linear models (2022.11.28) [pdf]

  • In-context Learning and Induction Heads (2022.09.24) [pdf]

  • Data Distributional Properties Drive Emergent In-Context Learning in Transformers (2022.04.22) [pdf]

  • An Explanation of In-context Learning as Implicit Bayesian Inference (2021.11.03) [pdf]

Chain of thoughts

  • Active Prompting with Chain-of-Thought for Large Language Models (2023.02.23) [pdf]

  • Faithful Chain-of-Thought Reasoning (2023.01.31) [pdf]

  • Automatic Chain of Thought Prompting in Large Language Models (2022.10.07) [pdf]

  • Large Language Models are Zero-Shot Reasoners (2022.05.24) [pdf]

  • Self-Consistency Improves Chain of Thought Reasoning in Language Models (2022.03.21) [pdf]

  • Chain-of-Thought Prompting Elicits Reasoning in Large Language Models (2022.01.28) [pdf]

Theoretical analysis of CoT

  • Large Language Models Can Be Easily Distracted by Irrelevant Context (2023.01.23) [pdf]

  • Towards Understanding Chain-of-Thought Prompting: An Empirical Study of What Matters (2022.12.20) [pdf]

  • Large Language Models are Better Reasoners with Self-Verification (2022.12.19) [pdf]

Contribution

If you know of other papers worth reading, useful resources, or anything else that belongs here, feel free to contribute!
