The Connectionism reference article from the English Wikipedia on 24-Jul-2004
(provided by Fixed Reference: snapshots of Wikipedia)


Connectionism today generally refers to an approach in psychology that models mental or behavioral phenomena with neural networks, and is associated with a set of arguments for this approach (among them, that connectionist models are more biologically plausible than other models).

Connectionism is considered an alternative to computationalism in psychology and cognitive science. The fundamental differences between the two are: (a) connectionists believe that, for the most part, the mind does not utilize algorithmic or symbolic processes at any level of description, whereas computationalists believe that at some level of description mental activity consists wholly of compositional, symbolic computation (see Language of thought); (b) connectionists engage in "low-level" modelling, trying to ensure that their models resemble the anatomy of the human brain, whereas computationalists construct "high-level" models that do not resemble neurological structure at all; (c) connectionists focus heavily on questions of learning, whereas computationalists generally focus on other issues, such as the compositional structure of mental tokens.

The dominant form of connectionism today is known as Parallel Distributed Processing (PDP). PDP became popular in the 1980s with the release of Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volume 1: Foundations and Volume 2: Psychological and Biological Models, by James L. McClelland, David E. Rumelhart, and the PDP Research Group. PDP has its roots in the perceptron theories of the 1950s.

Another form of connectionism is the Relational Network framework developed by the linguist Sydney Lamb in the 1960s. Relational networks have so far been used only by linguists.

All modern connectionist models adhere to two major principles regarding the mind. 1) Any given mental state can be described as an n-dimensional vector of numeric activation values over the neural units in a network. 2) Memory is created by modifying the strength or the architecture of the connections between neural units. The connection strengths, or "weights", in PDP models are generally represented as an n×n matrix.
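The two principles above can be sketched in a few lines of code. The network size, activation values, and the Hebbian-style update rule below are illustrative assumptions, not taken from any particular PDP model.

```python
n = 3  # number of neural units (illustrative)

# Principle 1: a mental state as an n-dimensional activation vector.
activations = [0.9, 0.1, 0.5]

# Principle 2: connection strengths as an n x n weight matrix;
# weights[i][j] is the strength of the connection from unit j to unit i.
weights = [[0.0] * n for _ in range(n)]

def hebbian_update(weights, activations, rate=0.1):
    """Strengthen connections between co-active units -- a simple
    Hebbian rule, one illustrative way memory could be 'created' by
    modifying connection strengths."""
    for i in range(n):
        for j in range(n):
            weights[i][j] += rate * activations[i] * activations[j]
    return weights

hebbian_update(weights, activations)
print(round(weights[0][0], 3))  # → 0.081
```

After one update, the connection between the two most active units has grown the most, which is the sense in which the pattern has been "stored" in the weights.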

An earlier and rather different connectionism was held by Edward Thorndike, a turn-of-the-century psychologist who studied learning; his most famous contributions were his work on how cats escaped from puzzle boxes and his formulation of the Law of Effect. His analysis (and its descendants) is peppered with references to associations between stimuli and responses. Though the S-R aspect has today been abandoned by radical behaviorists and cognitive psychologists (including connectionists), it is easy to impose the notion of association, and of modification of association strength, on connectionist models.

Connectionists have developed many sophisticated learning procedures for neural networks. Learning always involves modifying the connection weights; the procedures generally use mathematical formulae to determine the change in the weights given training data consisting of activation vectors for some subset of the neural units. By formalizing learning in this way, connectionists gain many mathematical tools. A very common tactic in connectionist learning methods is to incorporate gradient descent over an error surface in the space defined by the weight matrix: each weight is changed in proportion to the partial derivative of the error with respect to that weight. Backpropagation, first made popular in the 1980s, is probably the best-known connectionist gradient descent algorithm today.
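A minimal sketch of gradient descent over an error surface, assuming a single linear unit with squared error (the delta rule); the training patterns and learning rate are illustrative assumptions, not taken from any particular connectionist model.

```python
def predict(weights, inputs):
    """Output of a single linear unit: the weighted sum of its inputs."""
    return sum(w * x for w, x in zip(weights, inputs))

def gradient_step(weights, inputs, target, rate=0.1):
    """Change each weight by (minus) the partial derivative of the
    error E = 0.5 * (target - output)**2 with respect to that weight:
    dE/dw_j = -(target - output) * x_j."""
    error = target - predict(weights, inputs)
    return [w + rate * error * x for w, x in zip(weights, inputs)]

weights = [0.0, 0.0]
for _ in range(100):
    weights = gradient_step(weights, [1.0, 0.0], 1.0)  # pattern A -> target 1
    weights = gradient_step(weights, [0.0, 1.0], 0.0)  # pattern B -> target 0

print(round(predict(weights, [1.0, 0.0]), 2))  # → 1.0
```

Each step moves the weight vector downhill on the error surface, so repeated presentations of the training patterns drive the outputs toward their targets.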

Connectionists generally agree that recurrent networks (networks whose connections can form a directed cycle) resemble the human brain more closely than feedforward networks (networks with no directed cycles) do. Many recurrent connectionist models also incorporate dynamical systems theory. Many researchers, such as the connectionist Paul Smolensky (one of the authors of the original PDP books), have argued that connectionist models will move towards fully continuous, high-dimensional, non-linear dynamical systems approaches.

See also