Long-context LLMs Struggle with Long In-context Learning: A Study on Extreme-label Classification
Abstract:
Large Language Models (LLMs) have made significant progress in handling long sequences exceeding 32K tokens. However, their performance has mostly been evaluated with metrics such as perplexity or on synthetic tasks, which may not fully capture their abilities in more nuanced, real-world scenarios. This study introduces LongICLBench, a specialized benchmark for long in-context learning in the realm of extreme-label classification. The benchmark requires LLMs to comprehend the entire input and recognize a massive label space in order to make correct predictions. Evaluating 13 long-context LLMs on LongICLBench, the study finds that while these models perform relatively well on the less challenging tasks with shorter demonstrations, they struggle on the most challenging task, Discovery, with 174 labels, where performance drops close to zero. This suggests a notable gap in current LLMs' ability to process and understand long, context-rich sequences.
Introduction:
Large Language Models (LLMs) have made significant strides in natural language processing tasks, including machine translation, question answering, and text generation. Their performance on long inputs, however, has primarily been evaluated with metrics such as perplexity or on synthetic tasks, which may not fully capture their abilities in more nuanced, real-world scenarios. To fill this gap, this study introduces LongICLBench, a benchmark for long in-context learning in the realm of extreme-label classification, where a model must comprehend the entire input and recognize a massive label space to make correct predictions.
The study evaluates 13 long-context LLMs on LongICLBench. While these models perform relatively well on the less challenging tasks with shorter demonstrations, all of them struggle on the most challenging task, Discovery, which has 174 labels, with performance falling close to zero. This points to a notable gap in current LLMs' capability to process and reason over long, context-rich sequences.
LongICLBench:
LongICLBench is a specialized benchmark for long in-context learning in the realm of extreme-label classification: an LLM must comprehend the entire input and recognize a massive label space to make correct predictions. The benchmark consists of six datasets whose label spaces span 28 to 174 classes, with input (few-shot demonstration) lengths ranging from 2K to 50K tokens.
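To make the extreme-label in-context learning setup concrete, here is a minimal prompt-assembly sketch. The `sentence:`/`label:` template and the sample demonstrations are assumptions for illustration, not the benchmark's actual format; the point is that every label in the space must appear among the demonstrations, so the model has to read the entire long input before predicting.

```python
def build_prompt(demos, query_text):
    """Assemble a long in-context prompt for extreme-label classification.

    demos: list of (text, label) pairs, with at least one demonstration
    per label, so the full label space is only visible by reading the
    whole input. Returns a single prompt string ending at the query's
    empty label slot.
    """
    blocks = [f"sentence: {text}\nlabel: {label}" for text, label in demos]
    blocks.append(f"sentence: {query_text}\nlabel:")
    return "\n\n".join(blocks)

# Illustrative demonstrations (hypothetical data, not from the benchmark).
demos = [
    ("The match ended in a dramatic penalty shootout.", "sports"),
    ("The central bank raised interest rates again.", "finance"),
    ("A new exoplanet was detected around a nearby star.", "science"),
]
prompt = build_prompt(demos, "Shares fell after the earnings report.")
```

With 174 labels and multiple rounds of demonstrations, this same template is what pushes the prompt into the tens of thousands of tokens.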
Experimental Setup:
The study evaluates 13 long-context LLMs on LongICLBench. The models are selected based on their ability to handle long sequences, and the evaluation is conducted on a machine with 128GB of RAM and an NVIDIA RTX 3090 GPU.
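An evaluation harness for such a setup can be sketched as follows. `model_predict` is a hypothetical stand-in for a call to any long-context LLM (here a trivial stub so the harness itself runs), and exact string match on the gold label is one common scoring choice for extreme-label classification; the paper's exact scoring may differ.

```python
def evaluate(model_predict, examples):
    """Return accuracy in [0, 1] over (prompt, gold_label) pairs.

    model_predict: callable taking a prompt string and returning the
    model's predicted label string.
    """
    if not examples:
        return 0.0
    correct = sum(
        1 for prompt, gold in examples
        if model_predict(prompt).strip() == gold
    )
    return correct / len(examples)

# Stub model that always answers "finance" (illustration only).
stub = lambda prompt: "finance"
acc = evaluate(stub, [("p1", "finance"), ("p2", "sports")])
# acc == 0.5
```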
Results:
The study finds that long-context LLMs perform relatively well on the less challenging tasks with shorter demonstrations. However, on the most challenging task, Discovery, with 174 labels, all of the LLMs struggle to grasp the task definition, and their performance falls close to zero. This suggests a notable gap in current LLM capabilities for processing and understanding long, context-rich sequences. Further analysis reveals a tendency among models to favor labels presented toward the end of the sequence, and their ability to reason over multiple pieces of information scattered across a long sequence remains limited.
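The recency tendency described above could be quantified, for example, by measuring how often predictions fall among the labels shown in the final demonstrations of the prompt. The function and the choice of window size below are illustrative assumptions, not the paper's actual analysis.

```python
def recency_rate(predictions, demo_labels, k=10):
    """Fraction of predictions whose label appears among the labels
    of the final k demonstrations shown in the prompt.

    A rate far above k / len(set(demo_labels)) would indicate a bias
    toward labels seen near the end of the sequence.
    """
    if not predictions:
        return 0.0
    tail = set(demo_labels[-k:])
    hits = sum(1 for p in predictions if p in tail)
    return hits / len(predictions)

# Hypothetical run: 100 labels in prompt order, 4 model predictions.
labels = [f"label_{i}" for i in range(100)]
preds = ["label_95", "label_3", "label_99", "label_97"]
rate = recency_rate(preds, labels, k=10)
# rate == 0.75: three of the four predictions come from the last 10 labels.
```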
Discussion:
The study shows that long-context understanding and reasoning remain challenging for existing LLMs, and the authors argue that LongICLBench can serve as a more realistic evaluation for future long-context models. The findings highlight the need for further research on long-context processing and understanding: future work could focus on developing models that reason effectively over multiple pieces of information in a long sequence rather than attending to only part of it.
Conclusion:
The study introduces LongICLBench, a specialized benchmark for long in-context learning in extreme-label classification, in which LLMs must comprehend the entire input and recognize a massive label space to make correct predictions. Across 13 long-context LLMs, performance is reasonable on the easier tasks with shorter demonstrations but falls close to zero on the most challenging task, Discovery, with 174 labels. These results expose a notable gap in current capabilities for processing and understanding long, context-rich sequences. The authors suggest that future work focus on models that can effectively reason over multiple pieces of information in a long sequence, and they believe LongICLBench can serve as a more realistic evaluation for future long-context LLMs.