The link between transformers and human long-term working memory is an interesting one. Using temporal context as a shared mechanism makes sense, especially when you consider how both systems build meaning across sequences.
The idea that STDP supports temporary working memory instead of just long-term memory is also compelling. It helps explain how we hold and reshape thoughts over minutes or hours without committing them to permanent storage.
Curious whether future AI systems could benefit from something like spontaneous internal activity, like a background loop that keeps refining or connecting ideas even after the prompt ends. That might be a step closer to what we experience as “thinking.”
Excellent article, very informative.