Encoding and accessing linguistic representations in a dynamically structured holographic memory system

Dan Parker, William and Mary
Daniel Lantz, William and Mary

Abstract

This paper presents a computational model that integrates a dynamically structured holographic memory system into the ACT-R cognitive architecture to explain how linguistic representations are encoded and accessed in memory. ACT-R currently serves as the most precise expression of the moment-by-moment working memory retrievals that support sentence comprehension. The ACT-R model of sentence comprehension captures a wide range of linguistic phenomena, but in some cases it makes incorrect predictions, such as over-predicting retrieval interference effects. Here, we investigate one such case involving the processing of sentences with negative polarity items (NPIs) and consider how a dynamically structured holographic memory system might provide a cognitively plausible and principled explanation of some previously unexplained effects. Specifically, we show that replacing ACT-R's declarative memory with a dynamically structured memory allows the model to explain a wider range of behavioral data involving reading times and grammaticality judgments. The integrated model provides a better fit to human error rates and response latencies than the original ACT-R model. These results offer a proof of concept for unifying two independent computational cognitive frameworks.
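To give a concrete sense of the kind of encoding the abstract refers to, the sketch below illustrates Holographic Reduced Representation (HRR)-style binding, a standard way holographic memories combine role and filler vectors via circular convolution. It is a generic illustration under our own assumptions (the vector dimensionality, the role and filler names, and the example licensor phrase are hypothetical), not the paper's actual implementation or its integration with ACT-R.

import numpy as np

def bind(a, b):
    # Bind two vectors via circular convolution, computed with FFTs
    return np.real(np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)))

def approx_inverse(a):
    # Involution: approximate inverse of a vector under circular convolution
    return np.concatenate(([a[0]], a[:0:-1]))

dim = 1024
rng = np.random.default_rng(0)
rand_vec = lambda: rng.normal(0.0, 1.0 / np.sqrt(dim), dim)

# Hypothetical role and filler vectors for a chunk encoding the NPI licensor phrase "no student"
role_det, role_noun = rand_vec(), rand_vec()
filler_no, filler_student = rand_vec(), rand_vec()

# A structured memory trace is a superposition of role-filler bindings
chunk = bind(role_det, filler_no) + bind(role_noun, filler_student)

# Cue-based access: unbind with a role cue, then compare the result to candidate fillers
probe = bind(chunk, approx_inverse(role_noun))
cos = lambda x, y: np.dot(x, y) / (np.linalg.norm(x) * np.linalg.norm(y))
print("similarity to 'student':", cos(probe, filler_student))  # high: matching filler
print("similarity to 'no':", cos(probe, filler_no))            # near zero: mismatching filler

The noisy, similarity-based readout from superimposed traces is what makes this style of memory a natural candidate for modeling graded interference effects; how the paper's model structures and dynamically updates such traces is not shown here.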