This repository has been archived by the owner on Apr 9, 2024. It is now read-only.

Slot attention vs Transformer #19

Open
MLDeS opened this issue Nov 13, 2023 · 0 comments

Comments

@MLDeS

MLDeS commented Nov 13, 2023

Question edited for clarity:

What are the conceptual and technical differences between a Slot Attention module and a Transformer attention module, particularly regarding architectural components such as the GRU cell used in Slot Attention, which Transformers lack? How might these differences affect the results and their interpretation in tasks such as object-centric learning? Would substituting a Transformer module for Slot Attention in a given architecture yield comparable performance, or are there theoretical considerations that would require adjustments to preserve functionality?
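
For context, here is a minimal sketch (in PyTorch, with illustrative class and variable names not taken from this repo) contrasting one Slot Attention iteration with a plain Transformer-style cross-attention update. It highlights the two differences the question touches on: the axis over which the attention softmax is taken (in Slot Attention, slots compete for input features), and the recurrent GRU update of the slots, which a standard Transformer block replaces with a residual connection. The residual MLP that follows the GRU in the original Slot Attention paper is omitted for brevity.

```python
import torch
import torch.nn as nn

class SlotAttentionStep(nn.Module):
    """One simplified iteration of Slot Attention (Locatello et al., 2020)."""
    def __init__(self, dim):
        super().__init__()
        self.norm_inputs = nn.LayerNorm(dim)
        self.norm_slots = nn.LayerNorm(dim)
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.gru = nn.GRUCell(dim, dim)  # recurrent update, absent in Transformers
        self.scale = dim ** -0.5

    def forward(self, slots, inputs):
        # slots: (B, K, D), inputs: (B, N, D)
        inputs = self.norm_inputs(inputs)
        q = self.to_q(self.norm_slots(slots))
        k, v = self.to_k(inputs), self.to_v(inputs)
        logits = torch.einsum('bkd,bnd->bkn', q, k) * self.scale
        # Key difference 1: softmax over the *slot* axis, so slots compete
        # for each input feature instead of each query attending independently.
        attn = logits.softmax(dim=1) + 1e-8
        attn = attn / attn.sum(dim=-1, keepdim=True)  # weighted mean over inputs
        updates = torch.einsum('bkn,bnd->bkd', attn, v)
        # Key difference 2: GRU-based recurrent update of the slot states.
        B, K, D = slots.shape
        slots = self.gru(updates.reshape(B * K, D), slots.reshape(B * K, D))
        return slots.reshape(B, K, D)

class CrossAttentionStep(nn.Module):
    """A plain Transformer-style cross-attention update, for contrast."""
    def __init__(self, dim):
        super().__init__()
        self.to_q = nn.Linear(dim, dim, bias=False)
        self.to_k = nn.Linear(dim, dim, bias=False)
        self.to_v = nn.Linear(dim, dim, bias=False)
        self.scale = dim ** -0.5

    def forward(self, queries, inputs):
        q, k, v = self.to_q(queries), self.to_k(inputs), self.to_v(inputs)
        logits = torch.einsum('bkd,bnd->bkn', q, k) * self.scale
        attn = logits.softmax(dim=-1)  # softmax over inputs, no competition
        # Residual connection instead of a GRU; no recurrence across iterations.
        return queries + torch.einsum('bkn,bnd->bkd', attn, v)

# Hypothetical usage on random tensors:
B, K, N, D = 2, 4, 16, 64
slots, feats = torch.randn(B, K, D), torch.randn(B, N, D)
slots = SlotAttentionStep(D)(slots, feats)
```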
