University of Helsinki Department of Computer Science
 


 

Computational Complexity Problems in Neural Associative Memories

Patrik Floréen: Computational Complexity Problems in Neural Associative Memories. PhD Thesis, Report A-1992-5, Department of Computer Science, University of Helsinki, 1992. 128+8 pages. <http://www.cs.helsinki.fi/TR/A-1992/5>


Abstract

Neural associative memories are memory constructs consisting of a weighted network with a large number of simple nodes, whose computations collectively result in retrieving stored information when partial or distorted information is given as input. We study two types of neural associative memories: the Hopfield and the Hamming memories.

For the Hopfield memory, we first study worst-case bounds on the convergence times under both asynchronous and synchronous updating. The main theme in our study of Hopfield networks, however, is the computational complexity of assessing the memory quality of a given network. We show that (assuming P ≠ NP) it is hard to count the number of stable vectors, to compute the sizes of the attraction domains (under synchronous updating), and to compute the attraction radii (under synchronous updating). Unless P = NP, the attraction radii are hard even to approximate.
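
The notions above (synchronous updating, stable vectors) can be illustrated with a minimal sketch. This is not the thesis's formal model; the Hebbian weight rule and the sign activation used here are standard textbook assumptions:

```python
import numpy as np

def hopfield_step_sync(W, x):
    """One synchronous update: every node recomputes its state at once
    from the weighted sum of all node states."""
    return np.where(W @ x >= 0, 1, -1)

def is_stable(W, x):
    """A vector is stable if a synchronous update leaves it unchanged."""
    return np.array_equal(hopfield_step_sync(W, x), x)

def hebb_weights(P):
    """Standard Hebbian storage of the rows of P (entries in {-1, +1});
    the diagonal (self-connections) is zeroed, as is customary."""
    n = P.shape[1]
    W = P.T @ P / n
    np.fill_diagonal(W, 0.0)
    return W

# Two illustrative patterns of length 6
P = np.array([[1, -1, 1, -1, 1, -1],
              [1,  1, 1, -1, -1, -1]])
W = hebb_weights(P)
print(is_stable(W, P[0]))  # prints True: the stored pattern is a fixed point
```

Counting how many vectors in {-1, +1}^n pass the `is_stable` test is exactly the counting problem shown hard in the thesis; the brute-force check takes exponential time in n.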

For the Hamming memory, we study the conditions the network parameters must satisfy to ensure convergence, and we obtain a tight bound on the convergence time. With an optimal choice of parameter values, the worst-case convergence time is Θ̃(p log(pn)), where p is the memory capacity and n is the vector length. Restricted to instances with a unique nearest stored vector to the input vector, the worst-case convergence time is Θ̃(p log n). For random vectors, the probability of such instances approaches 1 as n grows, provided the number of stored vectors grows subexponentially in n. Dynamically changing the connection weights and taking powers of the activity levels speeds up convergence considerably. Finally, we present a new memory model based on the Hamming memory, but with a feed-forward network structure made possible by more complicated node functions. Simulations show that the new memory model behaves as expected.
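
The iterative dynamics whose convergence time is bounded above can be sketched roughly as follows. The sketch uses the classical Hamming-network formulation (a similarity layer followed by MAXNET-style lateral inhibition); the inhibition strength `eps = 1/p` is a common textbook choice, not necessarily the optimal parameterization analyzed in the thesis:

```python
import numpy as np

def hamming_recall(stored, probe, eps=None, max_steps=1000):
    """Sketch of a Hamming memory: a similarity layer followed by an
    iterative winner-take-all (MAXNET) layer with lateral inhibition.
    stored: p x n matrix of {-1, +1} patterns; probe: length-n vector."""
    p, n = stored.shape
    # Activity of unit k = number of positions where stored[k] agrees with probe
    a = (stored @ probe + n) / 2.0
    if eps is None:
        eps = 1.0 / p  # common textbook choice of inhibition strength
    steps = 0
    # Each unit suppresses the others until only the best match stays active;
    # exact ties never resolve, hence the max_steps guard.
    while np.count_nonzero(a) > 1 and steps < max_steps:
        a = np.maximum(0.0, a * (1 + eps) - eps * a.sum())
        steps += 1
    return stored[int(np.argmax(a))], steps

stored = np.array([[1, -1, 1, -1, 1, -1],
                   [1,  1, 1, -1, -1, -1]])
probe = np.array([1, -1, 1, -1, 1, 1])  # one bit away from the first pattern
recalled, steps = hamming_recall(stored, probe)
```

Here the probe has a unique nearest stored vector, the favorable case for which the thesis gives the tighter Θ̃(p log n) bound; the number of loop iterations in `steps` is what the convergence-time bounds measure.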

Throughout the study, we also explore the consequences of allowing "don't know" elements in both the input and the output. Experiments show that "don't know" elements are especially effective in the input.

Index Terms

Categories and Subject Descriptors:
B.3.2, C.1.3, F.1.1, F.1.3

General Terms: Algorithms, Theory

Additional Key Words and Phrases: associative memory, neural networks, computational complexity

