
morphogenesis
Living systems self-assemble, growing from a single cell to massive complexity.
amorphous computing
Amorphous computing attempts to exploit the fact that very tiny microprocessors can today be produced in their millions at very little cost. If, instead of building a system around a single chip, one could instead use thousands of minute, cheap microprocessors, entirely new kinds of computing system become possible.
Systems that somehow grow and evolve in ways similar to living organisms are broadly known as POEtic machines.
field-programmable gate arrays (FPGAs).
FPGAs are made up of clusters of interconnected logic units that are fully reconfigurable.
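Real FPGAs are configured with hardware description languages and vendor toolchains, but the core idea of a "fully reconfigurable logic unit" can be sketched in a few lines: each cell is essentially a small look-up table (LUT) whose configuration bits determine which Boolean function it computes. The class and bit patterns below are illustrative, not any vendor's actual cell design.

```python
class LUT2:
    """Sketch of a 2-input FPGA logic cell: 4 configuration bits
    select which of the 16 possible Boolean functions it computes."""

    def __init__(self, config_bits):
        assert len(config_bits) == 4
        self.table = list(config_bits)

    def __call__(self, a, b):
        # The inputs simply index into the configuration table.
        return self.table[(a << 1) | b]

# "Reconfiguring" the same hardware yields different gates.
and_gate = LUT2([0, 0, 0, 1])   # 1 only when a=1, b=1
xor_gate = LUT2([0, 1, 1, 0])   # 1 when the inputs differ

print(and_gate(1, 1))  # 1
print(xor_gate(1, 0))  # 1
```

The point of the sketch is that nothing about the cell itself is AND-like or XOR-like; the function lives entirely in the configuration bits, which is what makes the fabric reconfigurable.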
immune systems


innate immune system (inbuilt)
adaptive immune system (learnt)
Immune system computing
Research is underway in a number of laboratories around the world. Most of this is into AIS-like software systems, but there is some very early work investigating the feasibility of auto-immune hardware, a fledgling field known as immunotronics.
molecular electronics
Why not build computers out of biological materials? Why not create a whole new field of molecular electronics? What about living computers? Certain inconvenient facts stand in the way of this vision. Biological materials are generally too fragile and too difficult to work with to be the basis of systems that are reliable enough for everyday purposes. It is also very hard to interface them with electronic components. Nevertheless the first steps are now being taken towards living computers.

Protein memory

A number of prototype data storage devices using bacteriorhodopsin (bR) have been produced. One of these uses bR molecules scattered densely through a thin film as an associative memory that works on the principle of the hologram.
DNA computers
Although still in their infancy, DNA computers can potentially store billions of times more data than a conventional computer, and solve the kinds of difficult combinatorial problems that overwhelm conventional machines.
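The best-known demonstration of this is Adleman's 1994 experiment, which encoded the candidate paths of a small graph as DNA strands, generated them all simultaneously in a test tube, then chemically filtered out the non-solutions. The same generate-and-filter logic can be sketched sequentially in ordinary Python (the toy graph below is an illustrative assumption, not Adleman's exact instance):

```python
from itertools import permutations

# A small directed graph: which vertex pairs are joined by an edge.
edges = {(0, 1), (1, 2), (2, 3), (0, 2), (1, 3)}
n = 4

def hamiltonian_paths():
    # "Generate": in DNA every candidate vertex sequence forms at once;
    # here we enumerate all orderings of the vertices.
    # "Filter": keep only sequences whose consecutive vertices are edges.
    return [p for p in permutations(range(n))
            if all((p[i], p[i + 1]) in edges for i in range(n - 1))]

print(hamiltonian_paths())  # [(0, 1, 2, 3)]
```

The DNA computer's advantage is that the "generate" step happens in massive chemical parallelism rather than one candidate at a time, which is why it scales to search spaces that defeat a sequential machine.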
artificial life (or A-Life)
complements the traditional biological sciences concerned with the analysis of living organisms by attempting to synthesize life-like behaviours within computers and other artificial media. By extending the empirical foundations upon which biology is based beyond the carbon-chain life that has evolved on Earth, artificial life can contribute to theoretical biology by locating life-as-we-know-it within the larger picture of life-as-it-could-be.
a new stage in human development
sub-symbolic systems.
Systems in which the tokens that are manipulated are not themselves symbols are generally referred to as sub-symbolic systems.
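The contrast can be made concrete with a single artificial neuron, a minimal sketch and not any particular system's implementation: in a symbolic system the tokens ('dog', 'AND') are individually interpretable, whereas here the tokens are bare numbers whose meaning, if any, is distributed across the whole configuration.

```python
def neuron(inputs, weights, bias):
    """A single artificial neuron: weighted sum passed through a threshold."""
    s = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if s > 0 else 0

# These particular weights happen to make the unit behave like logical AND,
# but no individual token (0.6, -1.0) 'stands for' AND in the way a symbol
# would; the behaviour emerges from the numbers acting together.
print(neuron([1, 1], [0.6, 0.6], -1.0))  # 1
print(neuron([1, 0], [0.6, 0.6], -1.0))  # 0
```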
A simulation of a system is a model that captures the functional connections between the inputs and outputs of the system.
A replication of a system is a model that captures these connections and is based on processes that are the same as, or similar to, those of the system being modelled.
Strong and weak AI
Weak AI was defined as a practical programme that aims to build computer systems with intelligent behaviour, but which are not necessarily based on human mental processes, nor claimed to be intelligent in any strong sense. Strong AI was defined as the imitation of human mental processes, with the aim of building computer systems that are truly intelligent in exactly the same way that humans are intelligent.
The idea that the mind is a program is a version of a theory known as functionalism. Functionalists believe that minds, including their intentionality, can be entirely explained in terms of computation.
computationalism, or the computational model of the mind.
It is the view that computation is both necessary and sufficient for cognition (thinking, understanding, etc.).
Searle and the Chinese Room
A machine (the room) running a program (the book of rules for correlating one set of squiggles with another) can pass the Turing Test by producing perfectly convincing answers (the third batch of squiggles) to the Chinese questions (the second batch of squiggles) about the Chinese stories it is given (the first batch of squiggles), simply by applying the rules to the incoming squiggles and producing more squiggles in response. Yet to the machine, the squiggles have no meaning whatsoever; they are 'purely formal', 'uninterpreted'. The machine doesn't understand anything about the squiggles: it just blindly applies the rules.
Intentionality is the property of certain mental states – thoughts, beliefs, desires, etc. – of being about objects or states of affairs in the world.
Grounding refers to the way in which meaning is established in a system. Two types of grounding appear in the AI literature – symbol grounding and physical grounding. If an artificial agent makes use of symbolic representations, and if that agent is to be called intelligent, then such representations must be grounded: they have to be given meaning in some way.
symbol grounding problem
How can the semantic interpretation of a formal symbol system be made intrinsic to the system, rather than just parasitic on the meanings in our heads? How can the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes, be grounded in anything but other meaningless symbols?
symbol grounding
The symbol ‘dog’ carries the meaning dog because of the relation between the symbol and the presence of real dogs in the external world. The meaning of ‘Rover’, a symbol representing a particular dog, gets some of its meaning from its relation to the ground symbol ‘dog’.
physical grounding
Physical grounding is the idea that a system is grounded if it is situated within, and intimately connected to, the physical world.
The system reply
Although the person in the room does not understand Chinese, the entire system (the person, rule book, bits of paper, the room itself) of which the person is merely one component does, in fact, understand it.
The Robot Reply
The Robot Reply embodies the idea of symbol grounding: for the system as a whole to understand, there needs to be the right kind of relations between the symbols in the program and the things they refer to in the external world.
externalist semantics / internalist semantics
I thought that the dispute arises from radically different ideas about the nature of meaning. For Searle’s critics, meaning lies in the coupling between an agent and its world. For Searle and his supporters, it lies in the agent’s head, or more precisely, in its mind. Technically, Harnad’s position is based on externalist semantics, and Searle’s on internalist semantics. Searle maintains that it is not symbol grounding that is important but symbol meaning, and this is not generated through the coupling of symbols to a sensor-actuator interface; it comes from intentionality, which is necessarily tied to consciousness.
Chinese Gym Argument
Imagine that instead of a Chinese Room, I have a Chinese Gym: a hall containing many monolingual English-speaking men. These men would carry out the same operations as the nodes and synapses [i.e. units and connections] in a connectionist architecture ... [The] outcome would be the same as having one man manipulate symbols according to a rule book. No one in the gym speaks a word of Chinese, and there is no way for the system as a whole to learn the meaning of any Chinese words. Yet with appropriate adjustments, the system would give correct answers to Chinese questions.
first-person phenomenon

third-person phenomenon
Third-person phenomena are ones that can, in principle, be observed by anyone: ‘it’s raining’, ‘there is a giraffe over there’, etc. First-person phenomena can only be observed by me: ‘I’m hungry’, ‘my big toe itches’, etc.

The situated perspective on intelligent human behaviour sees it as taking place not only within a physical environment, but also in countless social settings. It is not an abstract, detached, general-purpose process of logical or formal reasoning. One central insight is that the physical setting can greatly reduce the load on a cognitive agent by providing external cues about what to do next and when goals are accomplished.
situatedness
or socialisation, which is a social–cultural phenomenon having to do with shared ways of life among the members of communities of embodied individuals. It is the capacity of humans to attain social fluency in one or more cultures.

So long as computers don’t have bodies they won’t be able to do what we do; so long as computers represent the world in discrete lumps they won’t be able to respond to the world in our way; we are always in a situation, whereas a computer only ‘knows’ its situation from a set of necessary and sufficient features. Source: Collins (1996)

the hard problem