Building, Reusing, and Generalizing Abstract Representations from Concrete Sequences

Part of International Conference on Learning Representations 2025 (ICLR 2025)


Authors

Shuchen Wu, Mirko Thalmann, Peter Dayan, Zeynep Akata, Eric Schulz

Abstract

Humans excel at learning abstract patterns across different sequences, filtering out irrelevant details, and transferring these generalized concepts to new sequences. In contrast, many sequence learning models lack the ability to abstract, which leads to memory inefficiency and poor transfer. We introduce a non-parametric hierarchical variable learning model (HVM) that learns chunks from sequences and abstracts contextually similar chunks as variables. HVM efficiently organizes memory while uncovering abstractions, leading to compact sequence representations. When learning on language datasets such as babyLM, HVM learns a more efficient dictionary than standard compression algorithms such as Lempel-Ziv. In a sequence recall task requiring the acquisition and transfer of variables embedded in sequences, we demonstrate that HVM's sequence likelihood correlates with human recall times. In contrast, large language models (LLMs) struggle to transfer abstract variables as effectively as humans. From HVM's adjustable layer of abstraction, we demonstrate that the model realizes a precise trade-off between compression and generalization. Our work offers a cognitive model that captures the learning and transfer of abstract representations in human cognition and differentiates itself from LLMs.
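
The abstract describes two operations: learning chunks from sequences and abstracting contextually similar chunks into variables. The sketch below is a minimal, hypothetical illustration of those two ideas, not the authors' HVM implementation; the greedy pair-merging heuristic, the single-step (left, right) context definition, and all function names are assumptions made for illustration only.

```python
# Hypothetical sketch of the two ideas in the abstract (not the authors' HVM):
# (1) learn chunks by repeatedly merging the most frequent adjacent pair of units;
# (2) treat chunks that occur in the same (left, right) context as one variable.
from collections import Counter, defaultdict


def learn_chunks(seq, n_merges=2):
    """Greedily merge the most frequent adjacent pair of units into a chunk."""
    seq = [(tok,) for tok in seq]  # each unit is a tuple of primitive tokens
    for _ in range(n_merges):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:  # stop once no pair recurs
            break
        merged, i = [], 0
        while i < len(seq):
            if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
                merged.append(a + b)  # concatenate the two chunks
                i += 2
            else:
                merged.append(seq[i])
                i += 1
        seq = merged
    return seq


def abstract_variables(chunked_seq):
    """Group chunks that appear in the same (left, right) context into a variable."""
    slot_fillers = defaultdict(set)
    for i, chunk in enumerate(chunked_seq):
        left = chunked_seq[i - 1] if i > 0 else None
        right = chunked_seq[i + 1] if i + 1 < len(chunked_seq) else None
        slot_fillers[(left, right)].add(chunk)
    # a variable is a context slot that admits more than one interchangeable chunk
    return {ctx: fillers for ctx, fillers in slot_fillers.items() if len(fillers) > 1}


if __name__ == "__main__":
    sequence = list("abcxabcyabcx")
    chunks = learn_chunks(sequence)
    print("chunked sequence:", chunks)
    print("variables:", abstract_variables(chunks))
```

On the toy sequence "abcxabcyabcx", the merges yield the chunk "abc", and the tokens "x" and "y" both appear between two "abc" chunks, so they are grouped as interchangeable fillers of one variable slot; this mirrors, in miniature, the abstraction of contextually similar chunks that the paper attributes to HVM.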