
This is ok (could use some diagrams!), but I don't think anyone coming to this for the first time will be able to use it to really teach themselves the LLM attention mechanism. It's a hard topic and requires two or three book chapters at least if you really want to start grokking it!

For anyone serious about coming to grips with this stuff, I would strongly recommend Sebastian Raschka's excellent book Build a Large Language Model (From Scratch), which I just finished reading. It's approachable and also detailed.

As an aside, does anyone else find the whole "database lookup" motivation for QKV kind of confusing? (in the article, "Query (Q): What am I looking for? Key (K): What do I contain? Value (V): What information do I actually hold?"). I've never really got it and I just switched to thinking of QKV as a way to construct a fairly general series of linear algebra transformations on the input of a sequence of token embedding vectors x that is quadratic in x and ensures that every token can relate to every other token in the NxN attention matrix. After all, the actual contents and "meaning" of QKV are very opaque: the weights that are used to construct them are learned during training. Furthermore, there is a lot of symmetry between Q and K in the algebra, which gets broken only by the causal mask. Or do people find this motivation useful and meaningful in some deeper way? What am I missing?

[edit: on this last question, the article on "Attention is just Kernel Smoothing" that roadside_picnic posted below looks really interesting in terms of giving a clean generalized mathematical approach to this, and also affirms that I'm not completely off the mark by being a bit suspicious about the whole hand-wavy "database lookup" Queries/Keys/Values interpretation]
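
For what it's worth, the framing above fits in a few lines of PyTorch. This is just a sketch with made-up dimensions (no causal mask, no multi-head splitting), but it makes the "quadratic in x, every token against every other token in an NxN matrix" part concrete:

    import torch

    # Single-head scaled dot-product attention from raw matrices.
    # All dimensions are made up; no causal mask, no multi-head split.
    N, d_model, d_head = 5, 16, 8          # sequence length, embedding dim, head dim

    x = torch.randn(N, d_model)            # token embedding vectors
    W_q = torch.randn(d_model, d_head)     # learned projections (random here)
    W_k = torch.randn(d_model, d_head)
    W_v = torch.randn(d_model, d_head)

    Q, K, V = x @ W_q, x @ W_k, x @ W_v    # each (N, d_head)

    scores = Q @ K.T / d_head ** 0.5       # (N, N): every token against every other token
    weights = torch.softmax(scores, dim=-1)  # each row sums to 1
    out = weights @ V                      # (N, d_head): weighted mix of value vectors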





> I've never really got it and I just switched to thinking of QKV as a way to construct a fairly general series of linear algebra transformations on the input of a sequence of token embedding vectors x that is quadratic in x and ensures that every token can relate to every other token in the NxN attention matrix.

That's because what you say here is the correct understanding. The lookup thing is nonsense.

The terms "Query" and "Value" are largely arbitrary and meaningless in practice. Look at how to implement this in PyTorch and you'll see these are just weight matrices that implement a projection of sorts, and the call is always just attention(x, x, x) for self-attention, or attention(x, y, y) in some cases (e.g. cross-attention), where x and y are outputs from previous layers.

Plus with different forms of attention, e.g. merged attention, and the research into why / how attention mechanisms might actually be working, the whole "they are motivated by key-value stores" thing starts to look really bogus. Really it is that the attention layer allows for modeling correlations/similarities and/or multiplicative interactions among a dimension-reduced representation. EDIT: Or, as you say, it can be regarded as kernel smoothing.
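
To illustrate the PyTorch point, a minimal sketch (sizes made up): the Q/K/V names never appear at the call site, the module just learns three projection matrices internally.

    import torch
    import torch.nn as nn

    attn = nn.MultiheadAttention(embed_dim=16, num_heads=2, batch_first=True)

    x = torch.randn(1, 5, 16)              # (batch, sequence, embedding) from some layer
    y = torch.randn(1, 7, 16)              # output of some other layer

    self_out, self_w = attn(x, x, x)       # self-attention
    cross_out, cross_w = attn(x, y, y)     # cross-attention: x queries against y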


Thanks! Good to know I’m not missing something here. And yeah, it’s always just seemed to me better to frame it as: let’s find a mathematical structure to relate every embedding vector in a sequence to every other vector, and let’s throw in a bunch of linear projections so that there are lots of parameters to learn during training to make the relationship structure model things from language, concepts, code, whatever.

I’ll have to read up on merged attention, I haven’t got that far yet!


The main takeaway is that "attention" is a much broader concept generally, so worrying too much about the "scaled dot-product attention" of transformers deeply limits your understanding of what kinds of things really matter in general.

A paper I found particularly useful on this was generalizing even farther to note the importance of multiplicative interactions more generally in deep learning (https://openreview.net/pdf?id=rylnK6VtDH).

EDIT: Also, this paper I was looking for dramatically generalizes the notion of attention in a way I found to be quite helpful: https://arxiv.org/pdf/2111.07624


I'm not a fan of the database lookup analogy either.

The analogy I prefer when teaching attention is celestial mechanics. Tokens are like planets in (latent) space. The attention mechanism is like a kind of "gravity" in which the tokens influence each other, pushing and pulling each other around in (latent) space to refine their meaning. But instead of "distance" and "mass", this gravity is proportional to semantic inter-relatedness, and instead of physical space it occurs in a latent space.

https://www.youtube.com/watch?v=ZuiJjkbX0Og&t=3569s


Then I think you’ll like our project which aims to find the missing link between transformers and swarm simulations:

https://github.com/danielvarga/transformer-as-swarm

Basically a boid simulation where a swarm of birds can collectively solve MNIST. The goal is not some new SOTA architecture, it is to find the right trade-off where the system already exhibits complex emergent behavior while the swarming rules are still simple.

It is currently abandoned due to a serious lack of free time (*), but I would consider collaborating with anyone willing to put in some effort.

(*) In my defense, I’m not slacking meanwhile: https://arxiv.org/abs/2510.26543 https://arxiv.org/abs/2510.16522 https://www.youtube.com/watch?v=U5p3VEOWza8


This is an excellent analogy! Thank you!

The way I think about the QKV projections: Q defines the sensitivity of token i's features when computing the similarity of this token to all other tokens. K defines the visibility of token j's features when it is selected by other tokens. V defines which features matter when taking the weighted sum over all tokens.

Don't get caught up in interpreting QKV; it's a waste of time, since completely different attention formulations (e.g. merged attention [1]) still give you the similarities / multiplicative interactions, but may even work better [2]. EDIT: Oh, and attention is much broader than scaled dot-product attention [3].

[1] https://www.emergentmind.com/topics/merged-attention

[2] https://blog.google/innovation-and-ai/technology/developers-...

[3] https://arxiv.org/abs/2111.07624


I glanced at these links and it seems that all these attention variants still use QKV projections.

Do you see any issues with my interpretation of them?


Read the third link / review paper; it is not at all the case that all attention is based on QKV projections.

Your terms "sensitivity", "visibility", and "important" are too vague and lack any clear mathematical meaning, so IMO add nothing to any understanding. "Important" also seems factually wrong, given these layers are stacked, so later weights and operations can in fact inflate / reverse things. Deriving e.g. feature importances from self-attention layers remains a highly disputed area (e.g. [1] vs [2], for just the tip of the iceberg).

You are also assuming that the importance of attention is the highly-specific QKV structure and projection, but there is very little reason to believe that based on the third review link I shared. Or, if you'd like another example of why not to focus so much on scaled dot-product attention, see that it is just a subset of a broader category of multiplicative interactions (https://openreview.net/pdf?id=rylnK6VtDH).

[1] Attention is not Explanation - https://arxiv.org/abs/1902.10186

[2] Attention is not not Explanation - https://arxiv.org/abs/1908.04626


1. The two papers you linked are about the importance of attention weights, not the QKV projections. This is orthogonal to our discussion.

2. I don't see how the transformations done in one attention block can be reversed in the next block (or in the FFN network immediately after the first block): can you please explain?

3. All state of the art open source LLMs (DeepSeek, Qwen, Kimi, etc) still use all three QKV projections, and largely the same original attention algorithm with some efficiency tweaks (grouped query, MLA, etc) which are done strictly to make the models faster/lighter, not smarter.

4. When GPT2 came out, I myself tried to remove various ops from attention blocks, and evaluated the impact. Among other things I tried removing individual projections (using unmodified input vectors instead), and in all three cases I observed quality degradation (when training from scratch).

5. The terms "sensitivity", "visibility", and "important" all attempt to describe feature importance when performing pattern matching. I use these terms in the same sense as importance of features matched by convolutional layer kernels, which scan the input image and match patterns.


> and in all three cases I observed quality degradation (when training from scratch).

At the same model size and training FLOPS?


No. Each projection is ~5% of total FLOPs/params. Not enough model capacity change to care. From what I remember, removing one of them was worse than other two, I think it was Q. But in all three cases, degradation (in both loss and perplexity) was significant.

1. I do not think it is orthogonal, but, regardless, there is plenty of research trying to get explainability out of all aspects of scaled dot-product attention layers (weights, QKV projections, activations, other aspects), and trying to explain deep models generally via sort of bottom-up mechanistic approaches. I think it can be clearly argued this does not give us much and is probably a waste of time (see e.g. https://ai-frontiers.org/articles/the-misguided-quest-for-me...). I think this is especially clear when you have evidence (in research, at least) that other mechanisms and layers can produce highly similar results.

2. I didn't say the transformations can be reversed; I said that if you interpret anything as an importance (e.g. a magnitude), it can be inflated or reversed by whatever weights are learned by later layers. Negative values and/or weights make this even more annoying / complicated.

3. Not sure how this is relevant, but, yes, any reasons for caring about the specifics of QKV and scaled dot-product attention are mostly related to performance and/or the current popular leading models. But there is nothing fundamentally important about scaled dot-product attention; it most likely just happens to be something that was settled on prematurely because it works quite well and is easy to parallelize. Or, if you like the kernel smoothing explanation also mentioned in this thread, scaled dot-product self-attention implements something very similar to a particularly simple and nice form of kernel smoothing.

4. Yup, removing ops from scaled dot-product attention blocks is going to dramatically reduce expressivity, because there really aren't many ops there to remove. But there is enough work on low-rank, linear, and sparse attention showing that you can remove a lot of expressivity and still do quite well. And, of course, the many other helpful types of attention I linked before give gains in some cases too. You should be skeptical of any really simple or clear story about what is going on here. In particular, there is no clear reason why a small hypernetwork couldn't be used to approximate something more general than scaled dot-product attention, except that this is obviously going to be more expensive, and in practice you can probably get the same approximate flexibility by stacking simpler attention layers.

5. I still find that doesn't give me any clear mathematical meaning.

I suspect our learning goals are at odds. If you want to focus solely on the very specific kind of attention used in the popular transformer models today, perhaps because you are interested in optimizations or distillation or something, then by all means try to come up with special intuitions about Q, K, and V, if you think that will help here. But those intuitions will likely not translate well to future and existing modifications and improvements to attention layers, in transformers or otherwise. You will be better served learning about attention broadly and developing intuitions based on that.

Others have mentioned the kernel smoothing interpretation, and I think multiplicative interactions are the clearer, deeper generalization of what is really important and valuable here. Also, the useful intuitions in DL have been less about e.g. "feature importances" and "sensitivity" and more about linear algebra and calculus: things like matrix conditioning, regularization / smoothing, Lipschitz constants, and the like. In particular, the softmax in self-attention is probably not doing what people typically say it does (https://arxiv.org/html/2410.18613v1), and the real point is that all these attention layers are trained end-to-end, with all layers interdependent on each other to varying, complicated degrees. Focusing on very specific interpretations ("Q is this, K is that"), especially where these interpretations are vaguely metaphorical, like yours, is not likely to result in much deep understanding, in my opinion.
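
Since the kernel smoothing view keeps coming up, here is a small sketch of what it means concretely (my own illustration, not code from the linked papers): scaled dot-product attention is exactly Nadaraya-Watson smoothing over the value vectors with an (unnormalized) exponential kernel K(q, k) = exp(q . k / sqrt(d)).

    import torch

    def kernel_smooth(Q, K, V):
        d = Q.shape[-1]
        kern = torch.exp(Q @ K.T / d ** 0.5)                 # (N, N) kernel evaluations
        return (kern @ V) / kern.sum(dim=-1, keepdim=True)   # normalized weighted average of values

    N, d = 5, 8
    Q, K, V = torch.randn(N, d), torch.randn(N, d), torch.randn(N, d)

    standard = torch.softmax(Q @ K.T / d ** 0.5, dim=-1) @ V
    print(torch.allclose(kernel_smooth(Q, K, V), standard))  # True: same computation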


Per your point 4, some current hyped work is pushing hard in this direction [1, 2, 3]. The basic idea is to think of attention as a way of implementing an associative memory. Variants like SDPA or gated linear attention can then be derived as methods for optimizing this memory online such that a particular query will return a particular value. Different attention variants correspond to different ways of defining how the memory produces a value in response to a query, and how we measure how well the produced value matches the desired value.

Some of the attention-like ops proposed in this new work are most simply described as implementing the associative memory with a hypernetwork that maps keys to values with weights that are optimized at test time to minimize value retrieval error. Like you suggest, designing these hypernetworks to permit efficient implementations is tricky.

It's a more constrained interpretation of attention than you're advocating for, since it follows the "attention as associative memory" perspective, but the general idea of test-time optimization could be applied to other mechanisms for letting information interact non-linearly across arbitrary nodes in the compute graph.

[1] https://arxiv.org/abs/2501.00663

[2] https://arxiv.org/abs/2504.13173

[3] https://arxiv.org/abs/2505.23735
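
To make the "attention as associative memory" framing concrete, a toy sketch of the simplest (linear, Hebbian) version; the papers above replace this naive write rule with test-time optimization of the memory to minimize retrieval error, but the read/write structure is the same idea (my illustration, not code from those papers):

    import torch

    # Toy associative memory: store value vectors against key vectors as a sum
    # of outer products, then read with a query. Linear-attention variants can
    # be read as online updates of a memory like this.
    d = 8
    keys = torch.randn(5, d)
    values = torch.randn(5, d)

    M = torch.zeros(d, d)
    for k, v in zip(keys, values):
        M = M + torch.outer(v, k)      # write: M += v k^T

    q = keys[2]                        # query with one of the stored keys
    read = M @ q                       # ~ values[2] * ||keys[2]||^2 plus crosstalk from the other keys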


> perhaps because you are interested in optimizations or distillation or something

Yes, my job is model compression: quantization, pruning, factorization, ops fusion/approximation/caching, in the context of hw/sw codesign.

In general, I agree with you that simple intuitions often break down in DL - I've observed it many times. I also agree that we don't have a good understanding of how these systems work. Hopefully this situation is more like pre-Newtonian physics, and Newtons are coming.


IIRC, isn't the symmetry between Q and K also broken by the direction of the softmax? I mean, row-wise vs. column-wise application yields a different interpretation.

Yes, but in practice: if you compute K = X wk, Q = X wq and then K Q^T, you do three matrix multiplications. Wouldn't it be faster to compute W = wk wq^T beforehand and then just do X W X^T, which is only two matrix multiplications? Is there something I am missing?

Most models have a per-head dimension much smaller than the input dimension, so it's faster to multiply by the small wq and wk individually than to multiply by the large matrix W. Also, if you use rotary positional embeddings, the RoPE matrices need to be sandwiched in the middle, and they're different for every token, so you could no longer premultiply just once.
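
For what it's worth, the algebraic equivalence and the cost argument are easy to check in a few lines (sizes made up; this is just a sketch of the FLOP counting, ignoring RoPE and multi-head details):

    import torch

    N, D, d = 128, 768, 64                 # sequence length, model dim, per-head dim
    X = torch.randn(N, D, dtype=torch.float64)
    Wq = torch.randn(D, d, dtype=torch.float64)
    Wk = torch.randn(D, d, dtype=torch.float64)

    split = (X @ Wq) @ (X @ Wk).T          # two N*D*d matmuls, then an N*N*d one
    W = Wq @ Wk.T                          # premultiplied D x D matrix (one-time cost)
    merged = X @ W @ X.T                   # an N*D*D matmul, then an N*N*D one

    print(torch.allclose(split, merged))   # True: same scores either way
    # Rough FLOPs: 2*N*D*d + N*N*d vs N*D*D + N*N*D -- the merged form is much
    # more expensive whenever the per-head dim d is much smaller than D.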

Oh yes! That's probably more important, in fact.

Well, I think this is also an answer to your question about the intuition.

If the asymmetry of K and Q stems from the direction of the softmax application, it must also be the reason for the names of the matrices :)

And if you think about it, it makes sense that for each Query, the weights over all of the Keys sum to 1 and not vice versa.

So this is my only intuition for the K and Q names.

(It may or may not be similar to the whole "db lookup thing"... I just don't use that one.)
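
A tiny sketch of the softmax-direction point (toy numbers, single head, no mask):

    import torch

    scores = torch.randn(4, 4)                 # a toy Q K^T / sqrt(d) for 4 tokens

    per_query = torch.softmax(scores, dim=-1)  # standard: each query's weights over the keys sum to 1
    per_key = torch.softmax(scores, dim=-2)    # flipped: each key's weights over the queries sum to 1

    print(per_query.sum(dim=-1))               # all ones
    print(per_key.sum(dim=-2))                 # all ones
    print(torch.allclose(per_query, per_key))  # False in general: this is where the Q/K symmetry breaks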


I find it really confusing as well. The analogy implies we have something like Q[K] = V

For one, I have no idea how this relates to the mathematical operations of calculating the attention scores, applying softmax, and then doing the dot product with the V matrix.

Second, just conceptually, I don't understand how this relates to "a word looks up how relevant it is to another word". So if you have "The cat eats his soup", "his" queries how important it is to "cat". So is V just the numerical result of the significance, like 0.99?

I don't think I'm very stupid, but after seeing dozens of these, I am starting to wonder if anyone actually understands this conceptually.


Not sure how helpful it is, but: words or concepts are represented as high-dimensional vectors. At a high level, you could say each dimension is another concept, like "dog"-ness or "complexity" or "color"-ness. The "a word looks up how relevant it is to another word" part is basically just relevance = similarity = vector dot product, and the dot product can be distorted ("some directions are more important") for one purpose or another; that distortion is what the Q/K/V matrices do. Softmax is just a form of normalization (everything sums to 1, i.e. a proper probability distribution). The whole shebang works only because all the pieces can be learned by gradient descent; otherwise it would be impossible to implement.
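
A tiny worked example of that description (the numbers and the two-dimensional "embeddings" are made up; real models learn Q/K/V projections on top of the embeddings, which this skips):

    import torch

    # Made-up 2-d embeddings for three words. Relevance is a dot product;
    # softmax normalizes it into weights that sum to 1.
    emb = {"cat":  torch.tensor([1.0, 0.2]),
           "eats": torch.tensor([0.1, 1.0]),
           "his":  torch.tensor([0.9, 0.4])}

    q = emb["his"]                                  # "his" asks: who is relevant to me?
    keys = torch.stack([emb["cat"], emb["eats"], emb["his"]])

    scores = keys @ q                               # raw relevances (dot products)
    weights = torch.softmax(scores, dim=0)          # normalized to sum to 1
    print(weights)                                  # more weight on "cat" than on "eats"

    # V is not a single significance number like 0.99: it is another set of
    # vectors, and the output for "his" is a weighted blend of them.
    values = keys                                   # identity "projection" for simplicity
    new_his = weights @ values                      # a new vector for "his", pulled toward "cat"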

Does that book require some sort of technical prerequisite to understand?

It helps if you have some basic linear algebra, for sure - matrices, vectors, etc. That's probably the most important thing. You don't need to know pytorch, which is introduced in the book as needed and in an appendix. If you want to really understand the chapters on pre-training and fine-tuning you'll need to know a bit of machine learning (like a basic grasp of loss functions and gradient descent and backpropagation - it's sort of explained in the book but I don't think I'd have understood it much without having trained basic neural networks before), but that is not required so much for the earlier chapters on the architecture, e.g. how the attention mechanism works with Q, K, V as discussed in this article.

The best part about it is seeing the code built up for the GPT-2 architecture in basic pytorch, and then loading in the real GPT-2 weights and they actually work! So it's great for learning but also quite realistic. It's LLM architecture from a few years ago (to keep it approachable), but Sebastian has some great more advanced material on modern LLM architectures (which aren't that different) on his website and in the github repo: e.g. he has a whole article on implementing the Qwen3 architecture from scratch.


> modern LLM architectures (which aren't that different) on his website and in the github repo: e.g. he has a whole article on implementing the Qwen3 architecture from scratch.

This might be underselling it a little bit. The difference between GPT-2 and Qwen3 is maybe, I don't know, ~20 lines of code if you write it well? The biggest difference is probably RoPE (which can be tricky to wrap your head around); the rest is pretty minor.
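
Since RoPE is the main conceptual jump, a minimal sketch of the idea (this is the original interleaved-pairs form, applied per head to Q and K before the dot product; many real codebases use a "rotate half" layout instead):

    import torch

    def rope(x):
        # Rotate each (even, odd) feature pair by a position-dependent angle,
        # so that q . k after rotation depends on the tokens' relative position.
        seq_len, d = x.shape                                                      # d must be even
        pos = torch.arange(seq_len, dtype=torch.float32)[:, None]                 # (seq_len, 1)
        inv_freq = 10000.0 ** (-torch.arange(0, d, 2, dtype=torch.float32) / d)   # (d/2,)
        angles = pos * inv_freq                                                   # (seq_len, d/2)
        cos, sin = angles.cos(), angles.sin()
        x1, x2 = x[:, 0::2], x[:, 1::2]                                           # the feature pairs
        return torch.stack([x1 * cos - x2 * sin,
                            x1 * sin + x2 * cos], dim=-1).reshape(seq_len, d)

    q = torch.randn(10, 16)
    q_rot = rope(q)                        # same shape, rotated per position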


There’s Grouped Query Attention as well, a different activation function, and a bunch of not very interesting norms stuff. But yeah, you’re right - still very similar overall.
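
For what grouped-query attention amounts to in code, a toy sketch (sizes made up; it's just the "share K/V heads across groups of query heads" trick, which shrinks the KV cache while leaving the attention math unchanged):

    import torch

    # 8 query heads share 2 key/value heads, so the KV cache shrinks 4x.
    B, N, n_q_heads, n_kv_heads, d_head = 1, 5, 8, 2, 16

    q = torch.randn(B, n_q_heads, N, d_head)
    k = torch.randn(B, n_kv_heads, N, d_head)
    v = torch.randn(B, n_kv_heads, N, d_head)

    group = n_q_heads // n_kv_heads
    k = k.repeat_interleave(group, dim=1)          # (B, n_q_heads, N, d_head)
    v = v.repeat_interleave(group, dim=1)

    scores = q @ k.transpose(-2, -1) / d_head ** 0.5
    out = torch.softmax(scores, dim=-1) @ v        # (B, n_q_heads, N, d_head)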

Thank you! Might get the book to see what I can learn from it, and see what gaps I have to research and learn more. Appreciate the detailed response.

Sure! I don't think the linear algebra prerequisite is that hard if you do need to learn it; there's tons of material online to practice on, and it's really just basic "apply this matrix to this vector" stuff. Most of what would be in even an undergrad intro to linear algebra course (inverting a matrix, determinants, whatever) is totally unnecessary.


