Network 2008;19(3):161-82. doi:10.1080/09548980802412638
Computational models can replicate the capacity of human recognition memory.
Abstract:
The capacity of human recognition memory was investigated by Standing, who presented several groups of participants with different numbers of pictures (from 20 to 10 000) and subsequently tested their ability to distinguish previously presented pictures from novel ones. When the estimated number of pictures retained in recognition memory by each group was plotted against the number of pictures presented on logarithmic axes, the data formed a straight line, indicating a power-law relationship. Here, we investigate whether published models of familiarity discrimination can replicate Standing's results. We first consider the simplifying assumption that visual stimuli are represented by uncorrelated patterns of firing of the visual neurons providing input to the familiarity discrimination network. We show that in this case three models (familiarity discrimination based on energy (FamE), Anti-Hebbian, and Info-max) can reproduce the observed power-law relationship when their synaptic weights are appropriately initialized. Under more realistic assumptions about the neural representation of stimuli, the FamE model is no longer able to reproduce the power-law relationship in simulations, whereas the Anti-Hebbian and Info-max models still can. Nevertheless, the slopes of the power-law relationships produced by the models in all simulations differ from the slope observed by Standing. We discuss possible reasons for this difference, including separate contributions of the familiarity and recollection processes, and describe experimentally testable predictions based on our analysis.
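
As a rough illustration of the energy-based familiarity discrimination idea referred to above, the sketch below builds a Hebbian weight matrix from a set of uncorrelated binary patterns (standing in for the studied pictures) and judges a test pattern as familiar when its Hopfield-style energy falls below a threshold. It is written in Python with NumPy; the network size, number of patterns, and threshold rule are illustrative assumptions, not parameters taken from the paper or from the original FamE model.

import numpy as np

rng = np.random.default_rng(0)

N = 200   # number of model neurons (illustrative assumption)
P = 50    # number of studied patterns (illustrative assumption)

# Uncorrelated +/-1 patterns standing in for the presented pictures.
studied = rng.choice([-1.0, 1.0], size=(P, N))
novel = rng.choice([-1.0, 1.0], size=(P, N))

# Hebbian weight matrix built from the studied patterns, with self-connections removed.
W = studied.T @ studied / N
np.fill_diagonal(W, 0.0)

def energy(x, W):
    # Hopfield-style energy; patterns stored in W tend to have lower energy.
    return -0.5 * x @ W @ x

e_studied = np.array([energy(x, W) for x in studied])
e_novel = np.array([energy(x, W) for x in novel])

# Judge a pattern familiar if its energy lies below a threshold placed
# between the means of the two energy distributions (an illustrative rule).
threshold = 0.5 * (e_studied.mean() + e_novel.mean())
hit_rate = (e_studied < threshold).mean()
correct_rejection_rate = (e_novel >= threshold).mean()
print(f"hit rate: {hit_rate:.2f}, correct rejections: {correct_rejection_rate:.2f}")

Because the Hebbian term lowers the energy of each stored pattern, the energy distributions of studied and novel patterns separate when the number of neurons is large relative to the number of stored patterns; capacity estimates in such models follow from how this separation degrades as more patterns are stored.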