Many-Shot In-Context Learning
This paper explores many-shot in-context learning, in which large language models (LLMs) are provided with hundreds or thousands of examples at inference time in order to learn new tasks. The authors leverage the recently expanded context windows of LLMs such as Gemini 1.5 Pro to investigate performance gains from few-shot to many-shot learning across a wide range of tasks.