Architecture of the Mind
By Belkıs Uluoglu

(Non) Representation of Design Knowledge

Discussions of thinking are usually based on the belief that the mind works separately from its surrounding domain, whatever name is given to that domain - environment, problem space, search space, or others. Either the mind is taken to be a closed-circuit process, or the conditions are assumed to hold everything else equal. Once the mind stands separate from the world, knowledge must be kept in our minds in some form (symbolization) for the mind to react intelligently under given conditions. Yet there are those who reject the idea that our mind works with representations, and still others who believe that we think by direct apprehension through the senses.

Classical AI researchers believed that thinking happens via a symbolic structure and rules that relate those symbols. Their basic assumption is that the mind should be regarded as a computer: there is assumed to be a central system that processes information, perceptual modules that deliver a symbolic description of the world, and action modules that produce outputs from symbolic descriptions of the actions resulting from the internal processes. A. Newell and H. A. Simon’s IPS (Information Processing System) is one basic model of this symbolic structure.

A. Newell & H. A. Simon (1972) Human Problem Solving; Englewood Cliffs, NJ: Prentice-Hall.
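
To make the idea concrete, the sketch below shows, in Python, a toy problem solver of the kind this view describes: the state of the world is a set of symbols, the operators are rules that rewrite those symbols, and "thinking" is a search through the states the rules generate. It is only an illustration; the rule names and the toy problem are invented here and are not Newell and Simon's own formulation.

```python
# A minimal sketch of a symbolic problem solver: symbolic states, rewrite
# rules, and search.  (Illustrative only; not Newell and Simon's IPS.)
from collections import deque

# Operators: (name, preconditions, symbols added, symbols removed)
OPERATORS = [
    ("grind-beans", {"beans"}, {"ground-coffee"}, {"beans"}),
    ("boil-water",  {"water"}, {"hot-water"},     {"water"}),
    ("brew", {"ground-coffee", "hot-water"}, {"coffee"}, {"ground-coffee", "hot-water"}),
]

def solve(initial, goal):
    """Breadth-first search over symbolic states; returns an operator sequence."""
    frontier = deque([(frozenset(initial), [])])
    seen = {frozenset(initial)}
    while frontier:
        state, plan = frontier.popleft()
        if goal <= state:                    # all goal symbols are present
            return plan
        for name, pre, add, rem in OPERATORS:
            if pre <= state:                 # rule matches the current symbols
                nxt = frozenset((state - rem) | add)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append((nxt, plan + [name]))
    return None

print(solve({"beans", "water"}, {"coffee"}))
# -> ['grind-beans', 'boil-water', 'brew']
```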

Ö. Akın has proposed that representations fall into two categories, analog (naïve representations) and symbolic (physical representations), and that “some representations are better suited to solve certain problems”. In the first case, physics problems are represented in terms of pulleys and ropes, rollers and inclined planes; in the latter, they are represented in terms of forces, energy, and the like.

Ö. Akın (6/2001) “Simon Der ki”: Tasarım Temsildir (“Simon Says”: Design is Representation), Arredamento-Mimarlık, pp. 82-85.
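
As a hypothetical illustration of this distinction (the data structures and names below are mine, not Akın's), the same toy problem - two masses hanging from a rope over a pulley - can be written down both ways: once as objects and spatial relations, and once as the forces and equations that actually yield an answer.

```python
# Analog ("naive") representation: the problem as objects and spatial relations.
analog = {
    "pulley": {"fixed": True},
    "rope":   {"over": "pulley", "ends": ["mass_a", "mass_b"]},
    "mass_a": {"kg": 3.0, "hangs_from": "rope"},
    "mass_b": {"kg": 1.0, "hangs_from": "rope"},
}

# Symbolic ("physical") representation: the same situation as forces and an
# equation of motion, the form best suited to computing the answer.
def atwood_acceleration(m1: float, m2: float, g: float = 9.81) -> float:
    """a = (m1 - m2) * g / (m1 + m2), from Newton's second law on both masses."""
    return (m1 - m2) * g / (m1 + m2)

m1 = analog["mass_a"]["kg"]
m2 = analog["mass_b"]["kg"]
print(f"acceleration = {atwood_acceleration(m1, m2):.2f} m/s^2")  # ~4.91 m/s^2
```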

J. Haugeland has said that, “These systems are good at processing syntactical patterns of the sort that are characteristic of logical formulae, ordinary sentences, and many inferences”. In short, they are good at manipulating systems that work in a somewhat serial fashion and that can be built on a symbolic structure (e.g., language or numbers).

J. Haugeland, ed. (1997) Mind Design II: Philosophy, Psychology, Artificial Intelligence; Cambridge, MA: MIT Press.
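
A small, invented example of the kind of serial, syntactic processing Haugeland has in mind: repeatedly applying modus ponens (from “P” and “if P then Q”, derive “Q”) to strings, without any reference to what the symbols mean.

```python
# Forward chaining over propositional rules: a purely serial, purely
# syntactic loop.  (The facts and rules are invented for illustration.)
facts = {"it-rains"}
rules = [("it-rains", "ground-wet"), ("ground-wet", "shoes-wet")]  # (P, Q) meaning P -> Q

changed = True
while changed:
    changed = False
    for p, q in rules:
        if p in facts and q not in facts:
            facts.add(q)          # derive Q from P and P -> Q
            changed = True

print(sorted(facts))              # ['ground-wet', 'it-rains', 'shoes-wet']
```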

While the computer-inspired models of the classical AI researchers have emphasized the functional level, we know that the architecture is also very important. Neurally inspired models have proposed a connectionist approach, in which a set of processing units replaces the central processor and knowledge is stored in the connections themselves. These systems are good at pattern processing; in other words, they are good at parallel processing. This has opened a new discussion concerning representation.

D. E. Rumelhart (1989) The Architecture of Mind: A Connectionist Approach. In: Foundations of Cognitive Science, M. I. Posner, ed.; Cambridge, MA: Bradford/MIT Press.
P. Smolensky (1989) Connectionist Modeling: Neural Computation / Mental Connections. In: Neural Connections, Mental Computation, L. Nadel, L. A. Cooper, P. Culicover, R. M. Harnish, eds.; Cambridge, MA: Bradford/MIT Press.
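
The contrast can be illustrated with a minimal connectionist sketch, a Hebbian pattern associator, in which all “knowledge” resides in a matrix of connection weights and recall is a single parallel operation rather than a serial program. The patterns and sizes below are arbitrary; this illustrates the principle, not any particular model of Rumelhart's or Smolensky's.

```python
import numpy as np

# Two orthogonal input patterns and the output patterns they should evoke.
inputs  = np.array([[ 1,  1, -1, -1],
                    [ 1, -1,  1, -1]], dtype=float)
outputs = np.array([[ 1, -1,  1],
                    [-1, -1,  1]], dtype=float)

# Hebbian learning: strengthen each connection by (pre-activity * post-activity).
# The weight matrix W is where the "knowledge" lives.
W = sum(np.outer(o, i) for i, o in zip(inputs, outputs))

def recall(x):
    """Every output unit updates at once from the weighted sum of its inputs."""
    return np.sign(W @ x)

print(recall(inputs[0]))                   # [ 1. -1.  1.]  -> first stored output
print(recall(inputs[0] * [1, 1, 1, 0]))    # graceful response to a degraded input
```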

Both the classical AI approaches and the connectionists deal with an internal mechanism; what is missing in both is an active environment, which is in fact what turns the system on. Some recent research has developed alternative approaches that consider the environment and the agent as a whole. R. A. Brooks’ research on such a model is based on a holistic approach. The assumption is that “human level intelligence is too complex and too little understood to be correctly decomposed into the right sub-pieces at the moment, and that even if we knew the sub-pieces we still wouldn’t know the right interfaces between them”. The proposal is that intelligent systems should be built incrementally, with complete systems at each step, just as in the evolution of biological intelligent systems.

R. A. Brooks (1991) Intelligence without Representation, Artificial Intelligence 47; pp. 139-160.
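
The flavour of this approach can be suggested with a small sketch of behaviour-based control: each layer is a complete sense-act competence coupled directly to the environment, new layers are added on top of already-working ones, and there is no central world model or symbolic plan. The toy world and layer names are invented here; Brooks’ own robots are of course far richer than this.

```python
import random

def avoid(sensed):
    """Lowest layer: a complete competence on its own - steer away from obstacles."""
    if sensed["obstacle_ahead"]:
        return "turn-left"
    return None          # no opinion; let another layer act

def wander(sensed):
    """A layer added incrementally on top of an already-working 'avoid' system."""
    return random.choice(["forward", "turn-right"])

LAYERS = [avoid, wander]   # tried in priority order; each is a full sense-act loop

def step(sensed):
    for behaviour in LAYERS:
        action = behaviour(sensed)   # the first layer with an opinion acts
        if action is not None:
            return action

print(step({"obstacle_ahead": True}))    # always 'turn-left'
print(step({"obstacle_ahead": False}))   # 'forward' or 'turn-right'
```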


 

This article was written by Belkıs Uluoglu* for Designophy in 2006.

*Belkıs Uluoglu is an Associate Professor in the Department of Architecture at Istanbul Technical University.