Step's New Attention Mechanism: KV Cache Reduced
This article explores Multi-matrix Factorization Attention (MFA) and MFA-Key-Reuse (MFA-KR), novel attention mechanisms that significantly reduce key-value (KV) cache usage in large language models (LLMs). MFA and MFA-KR match or exceed the performance of traditional multi-head attention (MHA) and multi-head latent attention (MLA) while consuming substantially less memory. The key innovations are enlarging the attention head dimension, applying low-rank factorization to the query projections, and sharing a single key-value head across all query heads; MFA-KR goes further by reusing the cached keys as values, so only keys need to be stored. Experimental results demonstrate significant memory savings and favorable scaling behavior, making MFA a promising approach for efficient LLM inference. A sketch of the core idea appears below.
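To make the memory argument concrete, here is a minimal PyTorch sketch of an MFA-style attention layer, not the authors' implementation: all sizes (d_model, n_heads, head_dim, q_rank) and the class name are hypothetical. It shows the three ingredients named above: a large head dimension, low-rank factorized query projections, and a single key/value head shared by every query head, so the KV cache holds only one head's worth of keys and values per token (an MFA-KR variant would additionally derive values from the cached keys, so only keys would be stored).

```python
# Minimal sketch of an MFA-style attention layer (illustrative only).
# Assumptions: hypothetical sizes; one shared key/value head; low-rank
# factorized per-head query projections; enlarged head dimension.
import math
import torch
import torch.nn as nn


class MFAAttentionSketch(nn.Module):
    def __init__(self, d_model=1024, n_heads=16, head_dim=256, q_rank=128):
        super().__init__()
        self.n_heads, self.head_dim = n_heads, head_dim
        # Low-rank query factorization: d_model -> q_rank -> n_heads * head_dim.
        self.q_down = nn.Linear(d_model, q_rank, bias=False)
        self.q_up = nn.Linear(q_rank, n_heads * head_dim, bias=False)
        # Single shared key/value head: only these activations enter the KV cache,
        # independent of how many query heads exist.
        self.k_proj = nn.Linear(d_model, head_dim, bias=False)
        self.v_proj = nn.Linear(d_model, head_dim, bias=False)
        self.out_proj = nn.Linear(n_heads * head_dim, d_model, bias=False)

    def forward(self, x):
        b, t, _ = x.shape
        # Per-head queries via the low-rank bottleneck.
        q = self.q_up(self.q_down(x)).view(b, t, self.n_heads, self.head_dim)
        q = q.transpose(1, 2)                    # (b, h, t, d_h)
        k = self.k_proj(x).unsqueeze(1)          # (b, 1, t, d_h), shared across heads
        v = self.v_proj(x).unsqueeze(1)          # (b, 1, t, d_h), shared across heads
        # Causal scaled dot-product attention; the single K/V head broadcasts
        # over all query heads.
        scores = torch.matmul(q, k.transpose(-2, -1)) / math.sqrt(self.head_dim)
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool, device=x.device), 1)
        scores = scores.masked_fill(causal, float("-inf"))
        out = torch.matmul(scores.softmax(dim=-1), v)   # (b, h, t, d_h)
        return self.out_proj(out.transpose(1, 2).reshape(b, t, -1))


x = torch.randn(2, 8, 1024)
print(MFAAttentionSketch()(x).shape)   # torch.Size([2, 8, 1024])
```

Under these assumed sizes, the cache per token is 2 * head_dim values (one key and one value vector) instead of 2 * n_heads * head_dim as in standard MHA, which is where the reported memory savings come from.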