gestalt effect
We perceive whole patterns without any conscious processing of their constituent parts.
content addressability
More or less any small part of a pattern allows us to recognise the whole immediately.
neural networks, artificial neural networks, neurocomputing or connectionism
The study of computational models of nervous interaction.
synapses
Special junctions through which neurons communicate with one another electrically.
nervous system
All vertebrate animals have a nervous system responsible for muscular activity, internal monitoring of the body, handling input from the senses and initiating actions.
central nervous system (CNS);
The brain and spinal cord.
peripheral nervous system (PNS)
Consists of all other nerves.
axon
a long, slender projection of a nerve cell, or neuron, that conducts electrical impulses away from the neuron's cell body or soma.
glial cells
Cells that provide nutrition and support to neurons.
localised potentials
Such currents are very weak and can only travel tiny distances.
action potential
The main way neurons carry electricity over longer distances, through a quite different mechanism from localised potentials.
ions
An ion is an atom that has either gained one or more extra electrons, becoming negatively charged (e.g. Cl-), or lost one or more electrons, making it positively charged (e.g. K+).
resting potential
An equilibrium state results, in which there is a stable difference of electric potential across the membrane.
depolarisation
The inside of the membrane rapidly starts to acquire a positive charge. Within a few milliseconds the potential across the membrane completely reverses.
hyperpolarisation
The restoration process in which the membrane potential returns to, and briefly overshoots, its resting value.
axon terminals
The end points of the axon's branches (the axonal arborisation).
spike or fired
The pulse of electric charge moves along the axon, spreading throughout the axonal arborisation all the way to the end points.
refractory period
A short interval after firing during which the neuron is unable to fire again.
spike trains
Neurons generally emit whole sequences of spikes.
neurotransmitters
Special chemicals stored in tiny sacs, or vesicles, at the axon terminals.
pre-synaptic
The neuron on the sending side of a synapse; its axon terminal delivers the action potential to the junction.
post-synaptic
The axon terminal of the pre-synaptic cell does not actually make contact with the membrane of the other neuron: there is a tiny gap, about 20 nanometres (nm, millionths of a millimetre) wide, between it and the membrane of the post-synaptic cell (see Figure 1.16). An action potential in the pre-synaptic neuron, arriving at this terminal, causes a cascade of events.
excitatory; inhibitory
Synapses that cause the post-synaptic membrane to depolarise are called excitatory; those that hyperpolarise it are called inhibitory.
maps
Neurons become organised into specialised groups, sometimes hundreds of thousands or millions of cells strong.
somatosensory strip
Sense data coming in from the surface of the body (information about the various touches, tickles and pressures the body is experiencing) are handled by a region of the brain known as the somatosensory strip.
motor strip
Movements of the muscles are controlled by another area of the brain, the motor strip.
homunculi
The distorted scale model of a human, drawn or sculpted to reflect the relative space human body parts occupy on the somatosensory cortex (sensory homunculus).
neural Darwinism
A large-scale theory of brain function proposed by Gerald Edelman.
discrete time
A neuron has a certain fixed time period to accumulate excitation; at the end of this time it either fires or it does not fire. Then a new time period starts afresh, with all previous activity forgotten.
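A minimal sketch of this discrete-time model in Python (the weights, inputs and threshold value are illustrative assumptions):

    def discrete_time_step(inputs, weights, threshold=1.0):
        # Accumulate excitation over one fixed time period.
        excitation = sum(w * x for w, x in zip(weights, inputs))
        # At the end of the period the unit either fires (1) or does not (0);
        # nothing is carried over into the next period.
        return 1 if excitation >= threshold else 0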
grandmother cell
The idea of a single neuron encoding a specific concept; something as complicated as a specific human memory could hardly be the property of any one neuron.
Perceptron
One of the earliest computer-modelled neural networks, devised by Frank Rosenblatt.
plasticity
This initial arrangement of connections and connection strengths will change, through mechanisms such as LTP, as a result of the experiences each individual organism undergoes.
self-organising map (SOM)
A self-organising map (SOM) or self-organising feature map (SOFM) is a type of artificial neural network that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretised representation of the input space of the training samples, called a map. Self-organising maps are different from other artificial neural networks in the sense that they use a neighbourhood function to preserve the topological properties of the input space.
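A minimal Python/NumPy sketch of a single SOM training step, assuming a two-dimensional grid of weight vectors and a Gaussian neighbourhood function; the learning rate and neighbourhood width are illustrative:

    import numpy as np

    def som_step(grid, x, lr=0.1, sigma=1.0):
        # grid: (rows, cols, dim) array of unit weight vectors; x: input vector.
        dists = np.linalg.norm(grid - x, axis=2)
        bmu = np.unravel_index(np.argmin(dists), dists.shape)  # best-matching unit
        rows, cols = np.indices(dists.shape)
        sq_grid_dist = (rows - bmu[0]) ** 2 + (cols - bmu[1]) ** 2
        # The neighbourhood function moves units near the winner on the grid
        # the most; this is what preserves the topology of the input space.
        h = np.exp(-sq_grid_dist / (2 * sigma ** 2))
        grid += lr * h[..., None] * (x - grid)
        return grid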
unsupervised learning
The problem of trying to find hidden structure in unlabelled data.
learning vector quantisation
LVQ can be understood as a special case of an artificial neural network; more precisely, it applies a winner-take-all Hebbian learning-based approach. It is a precursor to self-organising maps (SOM).
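A minimal sketch of the winner-take-all LVQ update (the prototype vectors, labels and learning rate are illustrative assumptions):

    import numpy as np

    def lvq_step(prototypes, labels, x, y, lr=0.05):
        # Winner-take-all: only the prototype closest to the input is updated.
        winner = np.argmin(np.linalg.norm(prototypes - x, axis=1))
        if labels[winner] == y:
            prototypes[winner] += lr * (x - prototypes[winner])  # pull towards x
        else:
            prototypes[winner] -= lr * (x - prototypes[winner])  # push away from x
        return prototypes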
recurrence
The axons of neurons are often observed to form cycles; that is, they send activity back, directly or indirectly, to neurons that have already stimulated them (see Figure 1.18). You can see that this is just a classic example of feedback.
Ising spin glass
A physical phenomenon analogous to one kind of recurrent network; Hopfield provided a deep theoretical analysis of that network's properties and noted its analogies with certain other types of physical phenomena, such as the Ising spin glass.
units or processing elements (PEs)
Computer scientists tend to prefer the term unit to neuron, because they want to avoid the suggestion that they are attempting an accurate model of the biological neuron.
weights
Together with units, one of the two basic building blocks to worry about in any neural network model.
activation function
The net input is passed to it; its task is to calculate the activation of the unit.
sigmoids
Because of the S-shaped profile of their graphs, such functions as these are also sometimes known as sigmoids.
output function
A few neural models put the activation through another function, the output function, to calculate the proportion of the unit’s activity that will be passed on.
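As a minimal Python sketch, the stages for a single unit might look like this (the logistic function stands in for any sigmoid, and the identity output function is an illustrative default):

    import math

    def net_input(inputs, weights):
        # Weighted sum of the incoming activity.
        return sum(w * x for w, x in zip(weights, inputs))

    def logistic(net):
        # An S-shaped (sigmoid) activation function.
        return 1.0 / (1.0 + math.exp(-net))

    def output(activation):
        # Most models pass the activation on unchanged; only a few
        # put it through a separate output function.
        return activation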
topologies
patterns of connection
hidden layer
A layer that has no contact with the world around the system; it is purely internal, and is thus generally called the hidden layer.
feedforward networks
one-way flow of activation through the network
Lattice networks
They generally comprise a one-, two- or three-dimensional grid of units fed by a set of dummy input units (like the input layer in a feedforward network).
lateral connections
Connections between units within the same layer or grid; these kinds of lateral connections are found in the brains of humans and other mammals.
Recurrent networks
A recurrent network, then, has at least one feedback connection. However, it’s more usual to find that such networks are fully recurrent, with every unit feeding back to every other unit, except itself.
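A minimal sketch of one synchronous update of a fully recurrent network of binary threshold units (the zero diagonal of the weight matrix encodes "every unit feeds back to every other unit, except itself"):

    import numpy as np

    def recurrent_step(state, W):
        # W[i, j] is the weight from unit j to unit i; W's diagonal is zero,
        # so no unit feeds activity back to itself.
        net = W @ state
        return (net >= 0).astype(float)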
clamp
To force or clamp the activations of certain units to fixed values.
bias
It is possible to provide a perceptron with an adjustable threshold and still work with the weights alone, by altering the topology of the network slightly and incorporating a bias.
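A minimal Python sketch of the bias trick: an extra input permanently clamped at 1, whose weight plays the role of the (negated) adjustable threshold, so that learning only ever has to adjust weights:

    def perceptron_output(inputs, weights, bias_weight):
        # Equivalent to testing the weighted sum against a threshold of
        # -bias_weight, but expressed purely in terms of weights.
        net = bias_weight + sum(w * x for w, x in zip(weights, inputs))
        return 1 if net >= 0 else 0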
learning rule
An automated procedure by means of which the system can learn by adjusting its weights itself.
Hebb rule, or Hebbian learning
The earliest example of a learning rule in artificial neural networks. If two units at either end of a weighted link are activated simultaneously, then the value of the weight is increased by some amount. If two units on either side of a weighted link are activated at different times, then the value of the weight is decreased by a certain amount.
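One discrete-time reading of these two clauses, as a minimal Python sketch (the adjustment amount c is an illustrative constant):

    def hebb_update(w, a_pre, a_post, c=0.1):
        if a_pre == 1 and a_post == 1:
            return w + c   # both units active together: strengthen the link
        if a_pre != a_post:
            return w - c   # active at different times: weaken the link
        return w           # neither unit active: leave the weight unchanged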
Supervised learning
Presupposes a bank of training data, usually a set of exemplars. Each exemplar will be a pair, consisting of an input object and a ‘correct’ or desired output object. Learning then proceeds through an external teaching process presenting each exemplar to the system in turn, usually many times, correcting the system in some way when it outputs a ‘wrong’ answer (as compared to the ‘correct’ answer in the training set), and generally doing nothing when the system comes up with the ‘correct’ answer.
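As a concrete illustration, a minimal Python sketch of supervised learning using the classic perceptron learning rule (the rule itself, the learning rate and the epoch count are assumptions not spelled out above):

    def train_perceptron(weights, bias, exemplars, lr=0.1, epochs=20):
        # exemplars: list of (input vector, desired output in {0, 1}) pairs,
        # presented to the system in turn, many times over.
        for _ in range(epochs):
            for x, target in exemplars:
                net = bias + sum(w * xi for w, xi in zip(weights, x))
                answer = 1 if net >= 0 else 0
                error = target - answer
                if error != 0:  # correct the system only on a 'wrong' answer
                    weights = [w + lr * error * xi for w, xi in zip(weights, x)]
                    bias += lr * error
        return weights, bias

    # e.g. learning AND:
    # train_perceptron([0.0, 0.0], 0.0,
    #                  [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)])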