Issue
I am confused by these two structures. In theory, the outputs of both are connected to all of their inputs. What makes the 'self-attention mechanism' more powerful than a fully connected layer?
Solution
Ignoring details like normalization, biases, and so on, a fully connected layer applies fixed weights:
f(x) = Wx
where W is learned during training and fixed at inference.
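Here is a minimal sketch of that fixed-weight behaviour in PyTorch (the layer size and inputs are made up purely for illustration):

import torch
import torch.nn as nn

# A fully connected layer: y = W x, where W is a parameter matrix
# learned during training and then held fixed at inference.
fc = nn.Linear(in_features=64, out_features=64, bias=False)

x = torch.randn(10, 64)   # 10 input vectors of dimension 64
y = fc(x)                 # every input is multiplied by the same fixed W
print(y.shape)            # torch.Size([10, 64])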
Self-attention layers are dynamic, computing the weights from the input itself:
attn(x) = Wx
f(x) = attn(x) * x
Again, this ignores a lot of details, and there are many different implementations for different applications, so you should really check a paper for the specifics.
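For contrast, here is a minimal single-head self-attention sketch in PyTorch. It only illustrates the idea above; the dimensions, the single head, and the lack of masking are simplifications of mine, not any particular paper's exact formulation:

import math
import torch
import torch.nn as nn

class TinySelfAttention(nn.Module):
    # The projections W_q, W_k, W_v are fixed after training, but the
    # mixing weights computed in forward() depend on the input itself,
    # so they change for every input -- the dynamic "attn(x)" part.
    def __init__(self, dim=64):
        super().__init__()
        self.q = nn.Linear(dim, dim, bias=False)
        self.k = nn.Linear(dim, dim, bias=False)
        self.v = nn.Linear(dim, dim, bias=False)
        self.dim = dim

    def forward(self, x):                        # x: (seq_len, dim)
        q, k, v = self.q(x), self.k(x), self.v(x)
        scores = q @ k.T / math.sqrt(self.dim)   # (seq_len, seq_len)
        weights = torch.softmax(scores, dim=-1)  # input-dependent weights
        return weights @ v                       # weighted mix of the inputs

x = torch.randn(10, 64)
print(TinySelfAttention(64)(x).shape)   # torch.Size([10, 64])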
Answered By - hkchengrex