Hyena Hierarchy: Towards Larger Convolutional Language Models

In this paper, we propose Hyena, a subquadratic drop-in replacement for attention constructed by interleaving implicitly parametrized long convolutions with data-controlled gating. On recall and reasoning tasks over sequences of thousands to hundreds of thousands of tokens, Hyena improves accuracy over operators based on state-space models and other implicit and explicit methods. We set a new state of the art for dense-attention-free architectures on language modeling on standard datasets (WikiText103 and The Pile), reaching Transformer quality with a 20% reduction in training compute at sequence length 2K. Hyena operators are twice as fast as highly optimized attention at sequence length 8K and 100 times faster at sequence length 64K. Our work questions the role of attention as the gold-standard operator for deep learning at scale.
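To make the two ingredients named above concrete, here is a minimal PyTorch sketch, not the official implementation, of an operator that interleaves an implicitly parametrized long convolution (the kernel is generated by a small MLP over positions and applied via FFT in O(L log L)) with data-controlled gating (elementwise multiplication by projections of the input). The class names, layer sizes, and the order-2 recurrence are illustrative assumptions, not values taken from the paper.

```python
import torch
import torch.nn as nn
import torch.fft


class ImplicitLongFilter(nn.Module):
    """Maps positions t in [0, 1) to filter values with a small MLP, so the
    convolution kernel spans the whole sequence without storing one explicit
    weight per position and channel."""

    def __init__(self, d_model: int, hidden: int = 32):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(1, hidden), nn.GELU(), nn.Linear(hidden, d_model)
        )

    def forward(self, seq_len: int) -> torch.Tensor:
        t = torch.linspace(0, 1, seq_len).unsqueeze(-1)   # (L, 1)
        return self.mlp(t).transpose(0, 1)                # (d_model, L)


def fft_long_conv(x: torch.Tensor, h: torch.Tensor) -> torch.Tensor:
    """Causal long convolution in O(L log L) via FFT.
    x: (batch, d_model, L), h: (d_model, L)."""
    L = x.shape[-1]
    H = torch.fft.rfft(h, n=2 * L)
    X = torch.fft.rfft(x, n=2 * L)
    return torch.fft.irfft(X * H, n=2 * L)[..., :L]


class HyenaLikeOperator(nn.Module):
    """Order-2 sketch: project the input into (v, x1, x2), then alternate
    data-controlled gating by x_i with an implicit long convolution."""

    def __init__(self, d_model: int, order: int = 2):
        super().__init__()
        self.order = order
        self.in_proj = nn.Linear(d_model, d_model * (order + 1))
        self.filters = nn.ModuleList(ImplicitLongFilter(d_model) for _ in range(order))
        self.out_proj = nn.Linear(d_model, d_model)

    def forward(self, u: torch.Tensor) -> torch.Tensor:   # u: (batch, L, d_model)
        L = u.shape[1]
        projections = self.in_proj(u).chunk(self.order + 1, dim=-1)
        v, gates = projections[0], projections[1:]
        y = v.transpose(1, 2)                              # (batch, d_model, L)
        for gate, filt in zip(gates, self.filters):
            y = gate.transpose(1, 2) * y                   # data-controlled gating
            y = fft_long_conv(y, filt(L))                  # implicit long convolution
        return self.out_proj(y.transpose(1, 2))


if __name__ == "__main__":
    op = HyenaLikeOperator(d_model=64)
    out = op(torch.randn(2, 1024, 64))    # batch of 2 sequences of length 1024
    print(out.shape)                      # torch.Size([2, 1024, 64])
```

Because the only sequence-mixing step is an FFT-based convolution, the cost of this sketch grows as O(L log L) rather than the O(L^2) of dense attention, which is what allows the reported speedups at 8K and 64K tokens.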