[AI] KV Cache and Paged KV Cache

Scaled Dot-Product Attention

  1. Input tokens → Input matrix. (batch_size, token_size, emb_size)
  2. Input matrix → Q, K, V matrices. (batch_size, token_size, emb_size)
  3. Attention(Q, K, V) = Softmax(QK^T / sqrt(emb_size)) V. (batch_size, token_size, emb_size) A minimal sketch of these three steps follows this list.
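
To make the three steps concrete, here is a minimal PyTorch sketch. The projection weights w_q, w_k, w_v, the shapes, and the single-head setup are illustrative assumptions, not taken from any particular model.

```python
import math
import torch

def scaled_dot_product_attention(q, k, v):
    # q, k, v: (batch_size, token_size, emb_size)
    emb_size = q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(emb_size)  # (batch, token, token)
    weights = torch.softmax(scores, dim=-1)                 # each row sums to 1
    return weights @ v                                       # (batch, token, emb_size)

batch_size, token_size, emb_size = 2, 5, 64
x = torch.randn(batch_size, token_size, emb_size)            # step 1: input matrix
w_q, w_k, w_v = (torch.randn(emb_size, emb_size) for _ in range(3))
q, k, v = x @ w_q, x @ w_k, x @ w_v                          # step 2: Q, K, V matrices
out = scaled_dot_product_attention(q, k, v)                  # step 3: attention output
print(out.shape)                                             # torch.Size([2, 5, 64])
```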

Causal attention for decoder-only models

  1. Before computing the softmax in the attention layer, set all logits that violate the causal order to negative infinity, i.e. for each token, set the logit values for its successor tokens to negative infinity.
  2. Compute the softmax in the attention layer. For each token, the resulting attention weights are non-zero only for itself and the tokens before it, and exactly zero for the tokens after it. A sketch of the causal mask follows this list.
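
A minimal sketch of causal masking, under the same assumed shapes as above: the strict upper triangle of the score matrix (each token's successors) is filled with -inf before the softmax.

```python
import math
import torch

def causal_attention(q, k, v):
    # q, k, v: (batch_size, token_size, emb_size)
    token_size, emb_size = q.size(1), q.size(-1)
    scores = q @ k.transpose(-2, -1) / math.sqrt(emb_size)   # (batch, token, token)
    # Step 1: set logits for each token's successors (strict upper triangle) to -inf.
    mask = torch.triu(torch.ones(token_size, token_size, dtype=torch.bool), diagonal=1)
    scores = scores.masked_fill(mask, float("-inf"))
    # Step 2: softmax turns the -inf logits into exactly zero attention weights.
    weights = torch.softmax(scores, dim=-1)
    return weights @ v                                        # (batch, token, emb_size)
```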

Using the KV cache

  1. Suppose we have already computed and cached the K, V values for the first three tokens. Now a fourth token arrives.
  2. Use the Q, K, V weights to calculate QueryToken4, KeyToken4, ValueToken4. (batch_size, 1, emb_size)
  3. Concat KeyToken4 to the Key cache to get KeyMatrix. Each instance in the batch uses its own cache (corresponding to a different input sentence) to construct this tensor. (batch_size, 4, emb_size)
  4. Calculate softmax value. (batch_size, 1, 4)
  5. Concat ValueToken4 to Value cache, get ValueMatrix. (batch_size, 4, emb_size)
  6. Calculate the attention value. (batch_size, 1, emb_size) The sketch after this list walks through these shapes.
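
Continuing the example, here is a sketch of one decoding step with a KV cache. The cache tensors and weights are illustrative placeholders; a real model keeps a separate cache per layer and per attention head.

```python
import math
import torch

batch_size, emb_size = 2, 64
# The K/V cache already holds tokens 1-3 for every sequence in the batch.
k_cache = torch.randn(batch_size, 3, emb_size)
v_cache = torch.randn(batch_size, 3, emb_size)

x4 = torch.randn(batch_size, 1, emb_size)                      # hidden state of token 4
w_q, w_k, w_v = (torch.randn(emb_size, emb_size) for _ in range(3))

q4, k4, v4 = x4 @ w_q, x4 @ w_k, x4 @ w_v                      # step 2: (batch, 1, emb)
k_cache = torch.cat([k_cache, k4], dim=1)                      # step 3: KeyMatrix (batch, 4, emb)
v_cache = torch.cat([v_cache, v4], dim=1)                      # step 5: ValueMatrix (batch, 4, emb)

scores = q4 @ k_cache.transpose(-2, -1) / math.sqrt(emb_size)  # (batch, 1, 4)
weights = torch.softmax(scores, dim=-1)                        # step 4: (batch, 1, 4)
out = weights @ v_cache                                        # step 6: (batch, 1, emb)
print(out.shape)                                               # torch.Size([2, 1, 64])
```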

Paged KV

  1. Paged Attention: partition the KV cache into KV blocks. Each KV block contains a KV cache for a fixed number of tokens. 
  2. KV Cache Manager: maintains a block table that maps each sequence’s logical KV blocks to physical blocks in GPU memory.
  3. vLLM: an inference system that implements Paged Attention. During decoding, it hands each sequence logical blocks and allocates the backing physical GPU blocks on demand instead of reserving contiguous memory up front. It can share KV cache blocks between different decoding sequences and perform copy-on-write when those sequences diverge. A toy sketch of this block-table bookkeeping follows this list.
  4. A paged KV cache reduces memory waste:
    1. It reduces internal fragmentation: we don’t need to reserve memory for the maximum possible output length; a new KV block is requested only when it is needed.
    2. It reduces external fragmentation: all KV blocks have the same fixed size and are managed by a centralized manager, so no memory is wasted between different decoding sequences.
    3. It enables memory sharing between different sequences that have the same decoded prefix. 
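
Below is a toy, pure-Python sketch of the block-table bookkeeping described above. It is not vLLM’s real API; the class name, BLOCK_SIZE, and method names are invented purely to illustrate on-demand allocation, prefix sharing, and copy-on-write.

```python
BLOCK_SIZE = 16  # tokens per KV block (illustrative; vLLM's block size is configurable)

class KVCacheManager:
    """Toy block table: maps each sequence's logical blocks to physical blocks."""

    def __init__(self, num_physical_blocks):
        self.free_blocks = list(range(num_physical_blocks))   # centralized pool
        self.ref_count = [0] * num_physical_blocks            # sharers per physical block
        self.block_table = {}                                 # seq_id -> [physical block ids]
        self.num_tokens = {}                                  # seq_id -> tokens stored so far

    def append_token(self, seq_id):
        """Allocate a new physical block only when the current one is full."""
        table = self.block_table.setdefault(seq_id, [])
        n = self.num_tokens.get(seq_id, 0)
        if n % BLOCK_SIZE == 0:                               # last block full, or no block yet
            block = self.free_blocks.pop()
            self.ref_count[block] = 1
            table.append(block)
        self.num_tokens[seq_id] = n + 1
        return table[-1]                                      # physical block holding this token

    def fork(self, parent_id, child_id):
        """Share the parent's blocks with a new sequence (same prefix, zero copies)."""
        self.block_table[child_id] = list(self.block_table[parent_id])
        self.num_tokens[child_id] = self.num_tokens[parent_id]
        for block in self.block_table[child_id]:
            self.ref_count[block] += 1

    def copy_on_write(self, seq_id):
        """Give a sequence a private copy of its last block before it diverges."""
        last = self.block_table[seq_id][-1]
        if self.ref_count[last] > 1:
            new_block = self.free_blocks.pop()
            self.ref_count[last] -= 1
            self.ref_count[new_block] = 1
            self.block_table[seq_id][-1] = new_block          # a real system also copies the KV data
```

In this sketch, parallel sampling from one prompt would call fork() once per extra sequence and then copy_on_write() on a sequence just before it writes into a block that is still shared.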

Sources

  1. Mark Moyou: https://www.youtube.com/watch?v=z2M8gKGYws4&t=121s 
  2. Shashank Verma and Neal Vaidya Nvidia Blog: https://developer.nvidia.com/blog/mastering-llm-techniques-inference-optimization/
  3. Attention is all you need: https://arxiv.org/pdf/1706.03762 
  4. Paged Attention: https://arxiv.org/pdf/2309.06180
