# MRA

## Overview

The MRA model was proposed in [Multi Resolution Analysis (MRA) for Approximate Self-Attention](https://arxiv.org/abs/2207.10284) by Zhanpeng Zeng, Sourav Pal, Jeffery Kline, Glenn M Fung, and Vikas Singh.

The abstract from the paper is the following:

*Transformers have emerged as a preferred model for many tasks in natural language processing and vision. Recent efforts on training and deploying Transformers more efficiently have identified many strategies to approximate the self-attention matrix, a key module in a Transformer architecture. Effective ideas include various prespecified sparsity patterns, low-rank basis expansions and combinations thereof. In this paper, we revisit classical Multiresolution Analysis (MRA) concepts such as Wavelets, whose potential value in this setting remains underexplored thus far. We show that simple approximations based on empirical feedback and design choices informed by modern hardware and implementation challenges, eventually yield a MRA-based approach for self-attention with an excellent performance profile across most criteria of interest. We undertake an extensive set of experiments and demonstrate that this multi-resolution scheme outperforms most efficient self-attention proposals and is favorable for both short and long sequences. Code is available at https://github.com/mlpen/mra-attention.*

This model was contributed by [novice03](https://huggingface.co/novice03). The original code can be found [here](https://github.com/mlpen/mra-attention).
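
The snippet below is a minimal usage sketch for extracting hidden states with the base model. The `uw-madison/mra-base-512-4` checkpoint name is an assumption; substitute any available MRA checkpoint on the Hub.

```python
import torch
from transformers import AutoTokenizer, MraModel

# Checkpoint name is an assumption; replace with any MRA checkpoint.
tokenizer = AutoTokenizer.from_pretrained("uw-madison/mra-base-512-4")
model = MraModel.from_pretrained("uw-madison/mra-base-512-4")

inputs = tokenizer(
    "MRA approximates self-attention with multiresolution analysis.",
    return_tensors="pt",
)
with torch.no_grad():
    outputs = model(**inputs)

# Last-layer hidden states, shape (batch_size, sequence_length, hidden_size)
print(outputs.last_hidden_state.shape)
```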

## MraConfig

[[autodoc]] MraConfig
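
As a quick illustration of the configuration API, the sketch below builds a randomly initialized `MraModel` from a default `MraConfig`; see the reference above for the exact attribute names and default values.

```python
from transformers import MraConfig, MraModel

# Initialize a configuration with the library's default hyperparameters.
config = MraConfig()

# Build a model with randomly initialized weights from that configuration.
model = MraModel(config)

# The configuration is stored on the model and can be inspected later.
print(model.config)
```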

## MraModel

[[autodoc]] MraModel
    - forward

## MraForMaskedLM

[[autodoc]] MraForMaskedLM
    - forward
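
The following is a hedged fill-mask sketch. The `uw-madison/mra-base-512-4` checkpoint name is an assumption, and the mask token is read from the tokenizer so the example does not depend on a particular vocabulary.

```python
import torch
from transformers import AutoTokenizer, MraForMaskedLM

# Checkpoint name is an assumption; any MRA masked-LM checkpoint works.
tokenizer = AutoTokenizer.from_pretrained("uw-madison/mra-base-512-4")
model = MraForMaskedLM.from_pretrained("uw-madison/mra-base-512-4")

text = f"The capital of France is {tokenizer.mask_token}."
inputs = tokenizer(text, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits

# Locate the mask position and decode the highest-scoring token.
mask_index = (inputs.input_ids == tokenizer.mask_token_id).nonzero(as_tuple=True)[1]
predicted_id = logits[0, mask_index].argmax(dim=-1)
print(tokenizer.decode(predicted_id))
```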

## MraForSequenceClassification

[[autodoc]] MraForSequenceClassification
    - forward

## MraForMultipleChoice

[[autodoc]] MraForMultipleChoice
    - forward

## MraForTokenClassification

[[autodoc]] MraForTokenClassification
    - forward

## MraForQuestionAnswering

[[autodoc]] MraForQuestionAnswering
    - forward