Authors & Affiliations
Tankut Can, Weishun Zhong, Mikhail Katkov, Ilya Shnayderman, Antonis Georgiou, Misha Tsodyks
Abstract
Compression is a fundamental process required for storing and subsequently retrieving long and meaningful sequences, such as narratives, from memory. To understand how compression may arise in human memory, we develop a statistical model of memory retrieval for meaningful material, in which semantic structures are abstracted into a hierarchy of keypoints, each representing a compressed version of the underlying clauses. Our model is motivated by a small set of basic principles, inspired by behavioral and neuroscience experiments on human processing of naturalistic stimuli. It predicts compression ratios at multiple scales, which we compare against large-scale experiments on human memory for narratives. In particular, we find that the overall compression of a narrative obeys a predictable law as a function of the total number of clauses retained in memory. Furthermore, the distribution of compression ratios in human recall of narratives, as measured by the level of summarization in each reported clause, agrees well with our analytical predictions.
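To make the measured quantities concrete, the following Python sketch (not part of the paper; the alignment data and names are hypothetical) computes the overall compression ratio and the per-clause compression ratios, assuming each reported clause has already been matched to the set of original narrative clauses it summarizes.

```python
# Illustrative sketch, not the authors' code: compression ratios from a
# hypothetical clause-level alignment between a narrative and its recall.
from collections import Counter

# Hypothetical alignment: reported clause index -> indices of the original
# narrative clauses it summarizes (12 original clauses, 5 reported).
alignment = {
    0: [0],            # near-verbatim recall of a single clause
    1: [1, 2],         # mild summarization
    2: [3, 4, 5, 6],   # strong summarization
    3: [7],
    4: [8, 9, 10, 11],
}

n_original = sum(len(v) for v in alignment.values())  # original clauses covered
n_reported = len(alignment)                           # clauses in the recall

# Overall compression ratio: original clauses per reported clause.
print(f"overall compression ratio: {n_original / n_reported:.2f}")  # 2.40

# Per-clause compression ratios (level of summarization of each reported
# clause) and their empirical distribution.
ratios = [len(v) for v in alignment.values()]
print("distribution of ratios:", Counter(ratios))     # Counter({1: 2, 4: 2, 2: 1})
```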