Is Attention sink without Positional Encoding unavoidable? [D]
![Is Attention sink without Positional Encoding unavoidable? [D]](/_next/image?url=https%3A%2F%2Fpreview.redd.it%2Fz5127oeuoayg1.png%3Fwidth%3D640%26crop%3Dsmart%26auto%3Dwebp%26s%3Db0118d9db10abc13fb09485fb4e2b0aedff9101d&w=3840&q=75)
**TL;DR:** As soon as I remove positional encoding (PE) from self- or cross-attention, I start seeing vertical hot lines in the attention heatmaps. Is there any way to make a model produce query-conditioned attention without PE?

I've been trying to pre-train a couple of small, tinkering-level Transformer-based models: an encoder-decoder model, and a cross-attention memory-only model (basically removing the FFNs and using cross-attended vectors as memory banks instead). Every time I train cross-attention, I see vertical lines like those in the attached image, which I take to mean every query vector is attending to the same key tokens. This happens when I don't use RoPE or any other PE during cross-attention. When I add PE I start to see some diagonals, though I don't think it should be needed in cross-attention, since the queries and keys are representations of different data. The same pattern shows up in simple causal self-attention too, as soon as I remove PE.

My question is: how do I force the model to attend to key tokens dynamically, based on the query token? I've already tried a regularizer that spreads attention out, and it does make attention more spread out, but the result is still vertical lines: no diagonals or any other pattern.
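For anyone reproducing this, here is a minimal sketch (assuming PyTorch; `row_similarity` and `attention_entropy_penalty` are hypothetical helper names, not from the original post) of two diagnostics that match the symptoms described above: a row-similarity score that is near 1.0 when every query attends to the same keys (the vertical-line case), and an entropy-style penalty like the regularizer mentioned, which spreads attention out but cannot by itself make it query-conditioned, since identical spread-out rows still score as maximal entropy.

```python
import torch
import torch.nn.functional as F

def row_similarity(attn: torch.Tensor) -> torch.Tensor:
    """attn: (batch, heads, n_queries, n_keys) post-softmax attention weights.
    Mean pairwise cosine similarity between query rows; values near 1.0 mean
    every query attends to (roughly) the same keys, i.e. vertical lines."""
    rows = F.normalize(attn, dim=-1)                 # unit-norm each query's distribution
    sim = rows @ rows.transpose(-1, -2)              # (batch, heads, n_q, n_q)
    n_q = attn.shape[-2]
    off_diag = sim.sum(dim=(-1, -2)) - sim.diagonal(dim1=-2, dim2=-1).sum(-1)
    return (off_diag / (n_q * (n_q - 1))).mean()

def attention_entropy_penalty(attn: torch.Tensor, eps: float = 1e-9) -> torch.Tensor:
    """Negative mean entropy of the attention rows. Adding this (scaled) to the
    loss encourages spread-out attention, but spread-out rows can still all be
    identical, so it does not force query-conditioned attention."""
    ent = -(attn * (attn + eps).log()).sum(dim=-1)   # (batch, heads, n_queries)
    return -ent.mean()

# Usage sketch: compute both metrics on the attention weights of one layer.
if __name__ == "__main__":
    b, h, n_q, n_k, d = 2, 4, 16, 32, 64
    q = torch.randn(b, h, n_q, d)
    k = torch.randn(b, h, n_k, d)
    attn = torch.softmax(q @ k.transpose(-1, -2) / d**0.5, dim=-1)
    print("row similarity:", row_similarity(attn).item())
    print("entropy penalty:", attention_entropy_penalty(attn).item())
```

Logging the row-similarity score during training gives a single number to track instead of eyeballing heatmaps, and it separates "attention is collapsed onto a few sink keys" from "attention is merely uniform".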