Information Theory and Evolution


The output of a TLU is found by comparing its activation a with the threshold θ:

y = 1 if a ≥ θ
y = 0 if a < θ          (8.2)

The decisions taken by a TLU can be given a geometrical interpretation: The input signals can be thought of as forming the components of a vector, x = {x1, x2, ..., xN}, in an N-dimensional space called pattern space. The weights also form a vector, w = {w1, w2, ..., wN}, in the same space. If we write an equation setting the scalar product of these two vectors equal to some constant,


w · x = Σ_{j=1}^{N} w_j x_j = θ          (8.3)



then this equation defines a hyperplane in pattern space, called the decision hyperplane. The decision hyperplane divides pattern space into two parts - (1) input pulse patterns which will produce firing of the TLU, and (2) patterns which will not cause firing. The position and orientation of the decision hyperplane can be changed by altering the weight vector w and/or the threshold θ. Therefore it is convenient to put the threshold and the weights on the same footing by introducing an augmented weight vector,

W = {w1, w2, ..., wN, θ}          (8.4)


and an augmented input pattern vector,

X = {x1, x2, ..., xN, -1}          (8.5)


In the (N+1)-dimensional augmented pattern space, the decision hyperplane now passes through the origin, and equation (8.3) can be rewritten in the




form N+l

W X = J2 wiX3 = 0



Those input patterns for which the scalar product W · X is positive or zero will cause the unit to fire, but if the scalar product is negative, there will be no response. If we wish to "teach" a TLU to fire when presented with a particular pattern vector X, we can evaluate its scalar product with the current augmented weight vector W. If this scalar product is negative, the TLU will not fire, and therefore we know that the weight vector needs to be changed. If we replace the weight vector by

W' = W + γX          (8.7)


where γ is a small positive number, then the new augmented weight vector W' will point in a direction more nearly the same as the direction of X. This change will be a small step in the direction of making the scalar product positive, i.e. a small step in the right direction. Why not take a large step instead of a small one? A small step is best because there may be a whole class of input patterns to which we would like the TLU to respond by firing. If we make a large change in weights to help a particular input pattern, it may undo previous learning with respect to other patterns. It is also possible to teach a TLU to remain silent when presented with a particular input pattern vector. To do so we evaluate the augmented scalar product W · X as before, but now, when we desire silence rather than firing, we wish the scalar product to be negative, and if it is positive, we know that the weight vector must be changed. In changing the weight vector, we can again make use of equation (8.7), but now γ must be a small negative number rather than a small positive one. Two sets of input patterns, A and B, are said to be linearly separable if they can be separated by some decision hyperplane in pattern space. Now suppose that the four sets, A, B, C, and D, can be separated by two decision hyperplanes. We can then construct a two-layer network which will identify the class of an input signal belonging to any one of the sets, as is illustrated in Figure 8.2. The first layer consists of two TLU's. The first TLU in this layer is taught to fire if the input pattern belongs to A or B, and to be silent if the input belongs to C or D. The second TLU is taught to fire if the input pattern belongs to A or D, and to be silent if it belongs to B or C. The second layer of the network consists of four output units which are not



taught, but which are assigned a fixed Boolean functionality. The first output unit fires if the signals from the first layer are given by the vector y = {0,0} (class A); the second fires if y = {0,1} (class B), the third if y = {1,0} (class C), and the fourth if y = {1,1} (class D). Thus the simple two-layer network shown in Figure 8.2 functions as a classifier. The output units in the second layer are analogous to the "grandmother's face cells" whose existence in the visual cortex is postulated by neurophysiologists. These cells will fire if and only if the retina is stimulated with a particular class of patterns. This very brief glance at artificial neural networks does not do justice to the high degree of sophistication which network architecture and training algorithms have achieved during the last two decades. However, the suggestions for further reading at the end of this chapter may help to give the reader an impression of the wide range of problems to which these networks are now being applied. Besides being useful for computations requiring pattern recognition, learning, generalization, intuition, and robustness in the face of noisy data, artificial neural networks are important because of the light which they throw on the mechanism of brain function. For example, one can compare the classifier network shown in Figure 8.2 with the discoveries of Kuffler, Hubel and Wiesel concerning pattern abstraction in the mammalian retina and visual cortex (Chapter 5).
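The training rule of equation (8.7) and the two-layer classifier described above can be sketched in a few lines of Python. This is a minimal illustration, not a historical implementation: the quadrant classes, the step size γ = 0.1, the epoch count, and the assignment of first-layer codes to class labels are all hypothetical choices (the code assigns class A to y = (1,1), following the training targets rather than the output-unit numbering of Figure 8.2).

```python
import numpy as np

def train_tlu(patterns, targets, gamma=0.1, epochs=100):
    """Teach one TLU with the augmented-vector rule: when the unit
    misfires, step the augmented weight vector W a small distance
    gamma toward X (to encourage firing) or away from X (to
    encourage silence)."""
    # Append a constant -1 component, so the threshold theta becomes
    # the last entry of the augmented weight vector.
    X = np.array([list(p) + [-1.0] for p in patterns])
    W = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x, t in zip(X, targets):
            fires = 1 if W @ x >= 0 else 0
            if fires != t:
                W += gamma * x if t == 1 else -gamma * x
    return W

def fire(W, p):
    return 1 if W @ np.array(list(p) + [-1.0]) >= 0 else 0

# Hypothetical example: four classes placed in the quadrants of the
# unit square, separable by the two decision lines x1 = 0.5, x2 = 0.5.
A, B, C, D = (0.2, 0.8), (0.2, 0.2), (0.8, 0.2), (0.8, 0.8)
patterns = [A, B, C, D]
tlu1 = train_tlu(patterns, [1, 1, 0, 0])   # fires for A or B
tlu2 = train_tlu(patterns, [1, 0, 0, 1])   # fires for A or D

# Second layer: four fixed Boolean output units, one per first-layer
# pattern y = (y1, y2).
labels = {(1, 1): "A", (1, 0): "B", (0, 0): "C", (0, 1): "D"}
print([labels[(fire(tlu1, p), fire(tlu2, p))] for p in patterns])
# -> ['A', 'B', 'C', 'D']
```

Because both target splits are linearly separable, the perceptron convergence theorem guarantees that the small-step rule eventually finds suitable decision hyperplanes.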

Genetic algorithms

Genetic algorithms represent a second approach to machine learning and to computational problems involving optimization. Like neural network computation, this alternative approach has been inspired by biology - in particular, by the Darwinian concept of natural selection. In a genetic algorithm, the hardware is that of a conventional computer, but the software creates a population and allows it to evolve in a manner closely analogous to biological evolution. One of the most important pioneers of genetic algorithms was John Henry Holland (1929- ). After attending MIT, where he was influenced by Norbert Wiener, Holland worked for IBM, helping to develop the 701. He then continued his studies at the University of Michigan, obtaining the first Ph.D. in computer science ever granted in America. Between 1962 and 1965, Holland taught a graduate course at Michigan called "Theory of Adaptive Systems". His pioneering course became almost a cult, and together with his enthusiastic students he applied the genetic algorithm approach to a great variety of computational problems. One of Holland's




students, David Goldberg, even applied a genetic algorithm program to the problem of allocating natural gas resources. The programs developed by Holland and his students were modelled after the natural biological processes of reproduction, mutation, selection and evolution. In biology, the information passed between generations is contained in chromosomes - long strands of DNA where the genetic message is written in a four-letter language, the letters being adenine, thymine, guanine and cytosine. Analogously, in a genetic algorithm, the information is coded in a long string, but instead of a four-letter language, the code is binary: The chromosome-analogue is a long string of 0's and 1's, i.e., a long binary string. One starts with a population that has sufficient diversity so that natural selection can act. The genotypes are then translated into phenotypes. In other words, the information contained in the long binary string (analogous to the genotype of each individual) corresponds to an entity, the phenotype, whose fitness for survival can be evaluated. The mapping from genotype to phenotype must be such that very small changes in the binary string will not produce radically different phenotypes. From the initial population, the most promising individuals are selected to be the parents of the next generation, and of these, the fittest are allowed to produce the largest number of offspring. Before reproduction takes place, however, random mutations and chromosome crossing can occur. For example, in chromosome crossing, the chromosomes of two individuals are broken after the nth binary digit, and two new chromosomes are formed, one with the head of the first old chromosome and the tail of the second, and another with the head of the second and the tail of the first.
This process is analogous to the biological crossings which allowed Thomas Hunt Morgan and his "fly squad" to map the positions of genes on the chromosomes of fruit flies, while the mutations are analogous to those studied by Hugo de Vries and Hermann J. Muller. After the new generation has been produced, the genetic algorithm advances the time parameter by a step, and the whole process is repeated: The phenotypes of the new generation are evaluated and the fittest selected to be parents of the next generation; mutation and crossings occur; and then fitness-proportional reproduction. Like neural networks, genetic algorithms are the subject of intensive research, and evolutionary computation is a rapidly growing field. Evolutionary methods have been applied not only to software, but also to hardware. Some of the circuits designed in this way defy analysis using conventional techniques - and yet they work astonishingly well.
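The generational cycle just described - evaluation of phenotypes, fitness-proportional selection, chromosome crossing, and mutation - can be sketched in a few lines. Everything here (the "OneMax" fitness function, the population size, the mutation rate) is an illustrative assumption, not a reconstruction of Holland's programs.

```python
import random

def genetic_algorithm(fitness, length=20, pop_size=30,
                      generations=60, mutation_rate=0.01, seed=1):
    """Minimal binary-string genetic algorithm: fitness-proportional
    selection of parents, single-point chromosome crossing, and
    random mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(length)] for _ in range(pop_size)]
    for _ in range(generations):
        scores = [fitness(c) for c in pop]
        total = sum(scores) or 1

        def pick():  # roulette-wheel (fitness-proportional) selection
            r = rng.uniform(0, total)
            for chromo, s in zip(pop, scores):
                r -= s
                if r <= 0:
                    return chromo
            return pop[-1]

        next_pop = []
        while len(next_pop) < pop_size:
            a, b = pick(), pick()
            n = rng.randrange(1, length)           # break after the nth digit
            child = a[:n] + b[n:]                  # head of a, tail of b
            child = [bit ^ (rng.random() < mutation_rate)  # random mutations
                     for bit in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# "OneMax" toy problem: the fittest chromosome is the string of all 1's.
best = genetic_algorithm(fitness=sum)
print(sum(best), "ones out of 20")
```

After a few dozen generations the population drifts toward the all-1's optimum, even though no individual step does more than recombine and slightly mutate existing chromosomes.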



Artificial life

As Aristotle pointed out, it is difficult to define the precise border between life and nonlife. It is equally difficult to give a precise definition of artificial life. Of course the term means "life produced by humans rather than by nature", but what is life? Is self-replication the only criterion? The phrase "produced by humans" also presents difficulties. Humans have played a role in creating domestic species of animals and plants. Can cows, dogs, and high-yield wheat varieties be called "artificial life"? In one sense, they can. These species and varieties certainly would not have existed without human intervention. We come nearer to what most people might call "artificial life" when we take parts of existing organisms and recombine them in novel ways, using the techniques of biotechnology. For example, Steen Willadsen7, working at the Animal Research Station, Cambridge, England, was able to construct chimeras by operating under a microscope on embryos at the eight-cell stage. The zona pellucida is a transparent shell that surrounds the cells of the embryo. Willadsen was able to cut open the zona pellucida, to remove the cells inside, and to insert a cell from a sheep embryo together with one from a goat embryo. The chimeras which he made in this way were able to grow to be adults, and when examined, their cells proved to be a mosaic, some cells carrying the sheep genome while others carried the genome of a goat. By the way, Willadsen did not create his chimeras in order to produce better animals for agriculture. He was interested in the scientifically exciting problem of morphogenesis: How is the information of the genome translated into the morphology of the growing embryo? Human genes are now routinely introduced into embryos of farm animals, such as pigs or sheep. The genes are introduced into regulatory sequences which cause expression in mammary tissues, and the adult animals produce milk containing human proteins.
Many medically valuable proteins are made in this way. Examples include human blood-clotting factors, interleukin-2 (a protein which stimulates T-lymphocytes), collagen and fibrinogen (used to treat burns), human fertility hormones, human hemoglobin, and human serum albumin. Transgenic plants and animals in which the genes of two or more species are inherited in a stable Mendelian way have become commonplace in modern laboratory environments, and, for better or for worse, they are also becoming increasingly common in the external global environment. These new species might, with some justification, be called "artificial life". In discussing the origin of life in Chapter 3, we mentioned that a long

7 Willadsen is famous for having made the first verified and reproducible clone of a mammal. In 1984 he made two genetically identical lambs from early sheep embryo cells.




period of molecular evolution probably preceded the evolution of cells. In the early 1970's, S. Spiegelman performed a series of experiments in which he demonstrated that artificial molecular evolution can be made to take place in vitro. Spiegelman prepared a large number of test tubes in which RNA replication could take place. The aqueous solution in each of the test tubes consisted of RNA replicase, ATP, UTP (uridine triphosphate), GTP (guanosine triphosphate), CTP (cytidine triphosphate) and buffer. He then introduced RNA from a bacteriophage into the first test tube. After a predetermined interval of time, during which replication took place, Spiegelman transferred a drop of solution from the first test tube to a new tube, uncontaminated with RNA. Once again, replication began and after an interval a drop was transferred to a third test tube. Spiegelman repeated this procedure several hundred times, and at the end he was able to demonstrate that the RNA in the final tube differed from the initial sample, and that it replicated faster than the initial sample. The RNA had evolved by the classical Darwinian mechanisms of mutation and natural selection. Mistakes in copying had produced mutant RNA strands which competed for the supply of energy-rich precursor molecules (ATP, UTP, GTP and CTP). The most rapidly-reproducing mutants survived. Was Spiegelman's experiment merely a simulation of an early stage of biological evolution? Or was evolution of an extremely primitive life-form actually taking place in his test tubes? G.F. Joyce, D.P. Bartel and others have performed experiments in which strands of RNA with specific catalytic activity (ribozymes) have been made to evolve artificially from randomly coded starting populations of RNA. In these experiments, starting populations of 10^13 to 10^15 randomly coded RNA molecules are tested for the desired catalytic activity, and the most successful molecules are then chosen as parents for the next generation.
The selected molecules are replicated many times, but errors (mutations) sometimes occur in the replication. The new population is once again tested for catalytic activity, and the process is repeated. The fact that artificial evolution of ribozymes is possible can perhaps be interpreted as supporting the "RNA world" hypothesis, i.e. the hypothesis that RNA preceded DNA and proteins in the early history of terrestrial life. In Chapter 4 we mentioned that John von Neumann speculated on the possibility of constructing artificial self-reproducing automata. In the early 1940's, a period when there was much discussion of the Universal Turing Machine, he became interested in constructing a mathematical model of the requirements for self-reproduction. Besides the Turing machine, another source of his inspiration was the paper by Warren McCulloch and Walter Pitts entitled A logical calculus of the ideas immanent in nervous activity, which von Neumann read in 1943. In his first attempt (the kinematic




model), he imagined an extremely large and complex automaton, floating on a lake which contained its component parts. Von Neumann's imaginary self-reproducing automaton consisted of four units, A, B, C and D. Unit A was a sort of factory, which gathered component parts from the surrounding lake and assembled them according to instructions which it received from other units. Unit B was a copying unit, which reproduced sets of instructions. Unit C was a control apparatus, similar to a computer. Finally D was a long string of instructions, analogous to the "tape" in the Turing machine described in Chapter 7. In von Neumann's kinematic automaton, the instructions were coded as a long binary number. The presence of what he called a "girder" at a given position corresponded to 1, while its absence corresponded to 0. In von Neumann's model, the automaton completed the assembly of its offspring by injecting its progeny with the duplicated instruction tape, thus making the new automaton both functional and fertile. In presenting his kinematic model at the Hixon Symposium (organized by Linus Pauling in the late 1940's), von Neumann remarked that "[it] is clear that the instruction [tape] is roughly effecting the function of a gene. It is also clear that the copying mechanism B performs the fundamental act of reproduction, the duplication of the genetic material, which is clearly the fundamental operation in the multiplication of living cells. It is also easy to see how arbitrary alterations of the system...can exhibit certain traits which appear in connection with mutation, lethality as a rule, but with a possibility of continuing reproduction with a modification of traits." It is very much to von Neumann's credit that his kinematic model (which he invented several years before Crick and Watson published their DNA structure) was organized in much the same way that we now know the reproductive apparatus of a cell to be organized.
Nevertheless he was dissatisfied with the model because his automaton contained too many "black boxes". There were too many parts which were supposed to have certain functions, but for which it seemed very difficult to propose detailed mechanisms by which the functions could be carried out. His kinematic model seemed very far from anything which could actually be built8. Von Neumann discussed these problems with his close friend, the Polish-American mathematician Stanislaw Ulam, who had for a long time been

8 Von Neumann's kinematic automaton was taken seriously by the Mission IV Group, part of a ten-week program sponsored by NASA in 1980 to study the possible use of advanced automation and robotic devices in space exploration. The group, headed by Richard Laing, proposed plans for self-reproducing factories, designed to function on the surface of the moon or the surfaces of other planets. Like von Neumann's kinematic automaton, to which they owed much, these plans seemed very far from anything that could actually be constructed.




interested in the concept of self-replicating automata. When presented with the black box difficulty, Ulam suggested that the whole picture of an automaton floating on a lake containing its parts should be discarded. He proposed instead a model which later came to be known as the Cellular Automaton Model. In Ulam's model, the self-reproducing automaton lives in a very special space. For example, the space might resemble an infinite checkerboard, each square of which would constitute a multi-state cell. The state of each cell in a particular time interval is governed by the states of its near neighbors in the preceding time interval according to relatively simple laws. The automaton would then consist of a special configuration of cell states, and its reproduction would correspond to production of a similar configuration of cell states in a neighboring region of the cell lattice. Von Neumann liked Ulam's idea, and he began to work in that direction. However, he wished his self-replicating automaton to be able to function as a universal Turing machine, and therefore the plans which he produced were excessively complicated. In fact, von Neumann believed complexity to be a necessary requirement for self-reproduction. In his model, the cells in the lattice were able to have 29 different states, and the automaton consisted of a configuration involving hundreds of thousands of cells. Von Neumann's manuscript on the subject became longer and longer, and he did not complete it before his early death from prostate cancer in 1957. The name "cellular automaton" was coined by Arthur Burks, who edited von Neumann's posthumous papers on the theory of automata. Arthur Burks had written a Ph.D. thesis in philosophy on the work of the nineteenth century thinker Charles Sanders Peirce, who is today considered to be one of the founders of semiotics9.
He then studied electrical engineering at the Moore School in Philadelphia, where he participated in the construction of ENIAC, one of the first general-purpose electronic digital computers, and where he also met John von Neumann. He worked with von Neumann on the construction of a new computer, and later Burks became the leader of the Logic of Computers Group at the University of Michigan. One of Burks' students at Michigan was John Holland, the pioneer of genetic algorithms. Another student of Burks, E.F. Codd, was able to design a self-replicating automaton of the von Neumann type using a cellular automaton system with only 8 states (as compared with von Neumann's 29). For many years, enthusiastic graduate students at the Michigan group continued to do important research on the relationships between information, logic, complexity and biology. Meanwhile, in 1968, the mathematician John Horton Conway, working in England at Cambridge University, invented a simple game which greatly increased the popularity of the cellular automaton concept.

9 Semiotics is defined as the study of signs (see Appendix 2).

Conway's game, which he called "Life", was played on an infinite checkerboard-like lattice of cells, each cell having only two states, "alive" or "dead". The rules which Conway proposed are as follows: "If a cell on the checkerboard is alive, it will survive in the next time step (generation) if there are either two or three neighbors also alive. It will die of overcrowding if there are more than three live neighbors, and it will die of exposure if there are fewer than two. If a cell on the checkerboard is dead, it will remain dead in the next generation unless exactly three of its eight neighbors are alive. In that case, the cell will be 'born' in the next generation". Originally Conway's Life game was played by himself and by his colleagues at Cambridge University's mathematics department in their common room: At first the game was played on table tops at tea time. Later it spilled over from the tables to the floor, and tea time began to extend far into the afternoons. Finally, wishing to convert a wider audience to his game, Conway submitted it to Martin Gardner, who wrote a popular column on "Mathematical Games" for the Scientific American. In this way Life spread to MIT's Artificial Intelligence Laboratory, where it created such interest that the MIT group designed a small computer specifically dedicated to rapidly implementing Life's rules. The reason for the excitement about Conway's Life game was that it seemed capable of generating extremely complex patterns, starting from relatively simple configurations and using only its simple rules. Ed Fredkin, the director of MIT's Artificial Intelligence Laboratory, became enthusiastic about cellular automata because they seemed to offer a model for the way in which complex phenomena can emerge from the laws of nature, which are after all very simple.
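Conway's rules translate directly into a few lines of code. In this minimal sketch the board is represented simply as a set of live-cell coordinates, which is one convenient choice among many:

```python
from collections import Counter

def life_step(live):
    """One generation of Conway's Life on an unbounded board, using
    the rules quoted above. `live` is the set of (x, y) coordinates
    of live cells."""
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # A cell is alive in the next generation if it has exactly three
    # live neighbors, or if it is alive now and has exactly two.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" - three live cells in a row - oscillates with period 2.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(life_step(blinker)))                # [(1, 0), (1, 1), (1, 2)]
print(life_step(life_step(blinker)) == blinker)  # True
```

Counting each live cell's contribution to its eight neighbors, rather than scanning a fixed grid, lets the pattern grow without any predeclared board size.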
In 1982, Fredkin (who was independently wealthy because of a successful computer company which he had founded) organized a conference on cellular automata on his private island in the Caribbean. The conference is notable because one of the participants was a young mathematical genius named Stephen Wolfram, who was destined to refine the concept of cellular automata and to become one of the leading theoreticians in the field10. One of Wolfram's important contributions was to explore exhaustively the possibilities of 1-dimensional cellular automata. No one before him had looked at 1-dimensional CA's, but in fact they had two great advantages: The first of these advantages was simplicity, which allowed Wolfram to explore and classify the possible rule sets. Wolfram classified the rule sets into 4 categories, according to the degree of complexity which they generated. The second advantage was that the configurations of the system

10 As many readers probably know, Stephen Wolfram was also destined to become a millionaire by inventing the elegant symbol-manipulating program system, Mathematica.




in successive generations could be placed under one another to form an easily-surveyed 2-dimensional visual display. Some of the patterns generated in this way were strongly similar to the patterns of pigmentation on the shells of certain molluscs. The strong resemblance seemed to suggest that Wolfram's 1-dimensional cellular automata might yield insights into the mechanism by which the pigment patterns are generated. In general, cellular automata seemed to be promising models for gaining insight into the fascinating and highly important biological problem of morphogenesis: How does the fertilized egg translate the information on the genome into the morphology of the growing embryo, ending finally with the enormously complex morphology of a fully developed and fully differentiated multicellular animal? Our understanding of this amazing process is as yet very limited, but there is evidence that as the embryo of a multicellular animal develops, cells change their state in response to the states of neighboring cells. In the growing embryo, the "state" of a cell means the way in which it is differentiated, i.e., which genes are turned on and which off - which information on the genome is available for reading, and which segments are blocked. Neighboring cells signal to each other by means of chemical messengers11. Clearly there is a close analogy between the way complex patterns develop in a cellular automaton, as neighboring cells influence each other and change their states according to relatively simple rules, and the way in which the complex morphology of a multicellular animal develops in the growing embryo. Conway's Life game attracted another very important worker to the field of cellular automata: In 1971, Christopher Langton was working as a computer programmer in the Stanley Cobb Laboratory for Psychiatric Research at Massachusetts General Hospital. 
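Wolfram's 1-dimensional cellular automata, and the stacked-generations display described above, are simple enough to sketch directly. The width, the number of rows, and the periodic boundary conditions are arbitrary choices here; the rule numbering is Wolfram's standard encoding of the eight neighborhood patterns.

```python
def step(cells, rule):
    """One generation of an elementary (1-dimensional, two-state,
    nearest-neighbor) cellular automaton. `rule` is the rule number
    (0-255); bit k of the rule gives the new state for the
    neighborhood whose three cells spell out k in binary."""
    n = len(cells)
    return [(rule >> (4 * cells[(i - 1) % n]
                      + 2 * cells[i]
                      + cells[(i + 1) % n])) & 1
            for i in range(n)]

# Stack successive generations under one another to form the
# 2-dimensional display described in the text. Rule 90, started from
# a single live cell, draws the Sierpinski-triangle pattern often
# compared with mollusc-shell pigmentation.
width, rows = 31, 16
cells = [0] * width
cells[width // 2] = 1
for _ in range(rows):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells, 90)
```

Rule 90 happens to be the XOR of the two outer neighbors, which is why the single seed spreads into a self-similar triangle.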
When colleagues from MIT brought to the laboratory a program for executing Life, Langton was immediately interested. He recalls "It was the first hint that there was a distinction between the hardware and the behavior which it would support... You had the feeling that there was something very deep here in this little artificial universe and its evolution through time. [At the lab] we had a lot of discussions about whether the program could be open ended - could you have a universe in which life could evolve?" Later, at the University of Arizona, Langton read a book describing von Neumann's theoretical work on automata. He contacted Arthur Burks, von Neumann's editor, who told him that no self-replicating automaton had actually been implemented, although E.F. Codd had proposed a simplified plan with only 8 states instead of 29. Burks suggested to Langton that he should start by reading Codd's book.

11 We can recall the case of slime mold cells which signal to each other by means of the chemical messenger, cyclic AMP (Chapter 3).



When Langton studied Codd's work, he realized that part of the problem was that both von Neumann and Codd had demanded that the self-reproducing automaton should be able to function as a universal Turing machine, i.e., as a universal computer. When Langton dropped this demand (which he considered to be more related to mathematics than to biology) he was able to construct a relatively simple self-reproducing configuration in an 8-state 2-dimensional lattice of CA cells. As they reproduced themselves, Langton's loop-like cellular automata filled the lattice of cells in a manner reminiscent of a growing coral reef, with actively reproducing loops on the surface of the filled area, and "dead" (non-reproducing) loops in the center. Langton continued to work with cellular automata as a graduate student at Arthur Burks' Logic of Computers Group at Michigan. His second important contribution to the field was an extension of Wolfram's classification of rule sets for cellular automata. Langton introduced a parameter λ to characterize various sets of rules according to the type of behavior which they generated. Rule sets with a value near to the optimum (λ = 0.273) generated complexity similar to that found in biological systems. This value of Langton's λ parameter corresponded to a borderline region between periodicity and chaos. After obtaining a Ph.D. from Burks' Michigan group, Christopher Langton moved to the Center for Nonlinear Studies at Los Alamos, New Mexico, where in 1987 he organized an "Interdisciplinary Workshop on the Synthesis and Simulation of Living Systems" - the first conference on artificial life ever held. Among the participants were Richard Dawkins, Aristid Lindenmayer, John Holland, and Richard Laing. The noted Oxford biologist and author Richard Dawkins was interested in the field because he had written a computer program for simulating and teaching evolution.
Aristid Lindenmayer and his coworkers in Holland had written programs capable of simulating the morphogenesis of plants in an astonishingly realistic way. As was mentioned above, John Holland pioneered the development of genetic algorithms, while Richard Laing was the leader of NASA's study to determine whether self-reproducing factories might be feasible. Langton's announcement for the conference, which appeared in the Scientific American, stated that "Artificial life is the study of artificial systems that exhibit behavior characteristic of natural living systems...The ultimate goal is to extract the logical form of living systems. Microelectronic technology and genetic engineering will soon give us the capability to create new life in silico as well as in vitro. This capacity will present humanity with the most far-reaching technical, theoretical, and ethical challenges it has ever confronted. The time seems appropriate for a gathering of those involved in attempts to simulate or synthesize aspects of living systems."
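Langton's λ parameter, mentioned above, is simple to compute for any finite rule table: it is the fraction of neighborhood-to-state transitions whose output is a non-quiescent state. The sketch below illustrates this on the small rule table of elementary rule 90; note that the critical value λ = 0.273 quoted above refers to automata with many more cell states, so the example is purely illustrative.

```python
from itertools import product

def langton_lambda(rule_table, quiescent=0):
    """Langton's lambda for a cellular-automaton rule table: the
    fraction of transitions whose output is a non-quiescent state."""
    non_quiescent = sum(1 for out in rule_table.values() if out != quiescent)
    return non_quiescent / len(rule_table)

# Elementary rule 90 (new state = left XOR right) maps 4 of its 8
# neighborhoods to the live state.
rule90 = {(l, c, r): l ^ r for l, c, r in product((0, 1), repeat=3)}
print(langton_lambda(rule90))   # 0.5
```

Sweeping λ from 0 (everything dies) toward 1 (everything stays active) is how Langton located the "edge of chaos" region between periodic and chaotic behavior.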




In the 1987 workshop on artificial life, a set of ideas which had gradually emerged during the previous decades of work on automata and simulations of living systems became formalized and crystallized: All of the participants agreed that something more than reductionism was needed to understand the phenomenon of life. This belief was not a revival of vitalism; it was instead a conviction that the abstractions of molecular biology are not in themselves sufficient. The type of abstraction found in Darwin's theory of natural selection was felt to be nearer to what was needed. The viewpoints of thermodynamics and statistical mechanics were also helpful. What was needed, it was felt, were insights into the flow of information in complex systems; and computer simulations could give us this insight. The fact that the simulations might take place in silico did not detract from their validity. The logic and laws governing complex systems and living systems were felt to be independent of the medium. As Langton put it, "The ultimate goal of artificial life would be to create 'life' in some other medium, ideally a virtual medium where the essence of life has been abstracted from the details of its implementation in any particular model. We would like to build models that are so lifelike that they cease to be models of life and become examples of life themselves." Most of the participants at the first conference on artificial life had until then been working independently, not aware that many other researchers shared their viewpoint. Their conviction that the logic of a system is largely independent of the medium echoes the viewpoint of the Macy Conferences on cybernetics in the 1940's, where the logic of feedback loops and control systems was studied in a wide variety of contexts, ranging from biology and anthropology to computer systems.
A similar viewpoint can also be found in biosemiotics (Appendix 2), where, in the words of the Danish biologist Jesper Hoffmeyer, "the sign, rather than the molecule" is considered to be the starting point for studying life. In other words, the essential ingredient of life is information; and information can be expressed in many ways. The medium is less important than the message. The conferences on artificial life have been repeated each year since 1987, and European conferences devoted to the new and rapidly growing field have also been organized. Langton himself moved to the Santa Fe Institute, where he became director of the institute's artificial life program and editor of a new journal, Artificial Life. The first three issues of the journal have been published as a book by the MIT Press, and the book presents an excellent introduction to the field. Among the scientists who were attracted to the artificial life conferences was the biologist Thomas Ray, a graduate of Florida State University and Harvard, and an expert in the ecology of tropical rain forests. In the late



1970's, while he was working on his Harvard Ph.D., Ray happened to have a conversation with a computer expert from the MIT Artificial Intelligence Lab, who mentioned to him that computer programs can replicate. To Ray's question "How?", the AI man answered "Oh, it's trivial." Ray continued to study tropical ecologies, but the chance conversation from his Cambridge days stuck in his mind. By 1989 he had acquired an academic post at the University of Delaware, and by that time he had also become proficient in computer programming. He had followed with interest the history of computer viruses. Were these malicious creations in some sense alive? Could it be possible to make self-replicating computer programs which underwent evolution by natural selection? Ray considered John Holland's genetic algorithms to be analogous to the type of selection imposed by plant and animal breeders in agriculture. He wanted to see what would happen to populations of digital organisms that found their own criteria for natural selection - not humanly imposed goals, but self-generated and open-ended criteria growing naturally out of the requirements for survival. Although he had a grant to study tropical ecologies, Ray neglected the project and spent most of his time at the computer, hoping to generate populations of computer organisms that would evolve in an open-ended and uncontrolled way. Luckily, before starting his work in earnest, Thomas Ray consulted Christopher Langton and his colleague James Farmer at the Center for Nonlinear Studies in New Mexico. Langton and Farmer realized that Ray's project could be a very dangerous one, capable of producing computer viruses or worms far more malignant and difficult to eradicate than any the world had yet seen. They advised Ray to make use of Turing's concept of a virtual computer. Digital organisms created in such a virtual computer would be unable to live outside it.
Ray adopted this plan, and began to program a virtual world in which his freely evolving digital organisms could live. He later named the system "Tierra". Ray's Tierra was not the first computer system to aim at open-ended evolution. Steen Rasmussen, working at the Technical University of Denmark, had previously produced a system called "VENUS" (Virtual Evolution in a Nonstochastic Universe Simulator) which simulated the very early stages of the evolution of life on earth. However, Ray's aim was not to understand the origin of life, but instead to produce digitally something analogous to the evolutionary explosion of diversity that occurred on earth at the start of the Cambrian era. He programmed an 80-byte self-reproducing digital organism which he called "Ancestor", and placed it in Tierra, his virtual Garden of Eden. Ray had programmed a mechanism for mutation into his system, but he doubted that he would be able to achieve an evolving population with




his first attempt. As it turned out, Ray never had to program another organism. His 80-byte Ancestor reproduced and populated his virtual earth, changing under the action of mutation and natural selection in a way that astonished and delighted him. In his freely evolving virtual zoo, Ray found parasites, and even hyperparasites, but he also found instances of altruism and symbiosis. Most astonishingly of all, when he turned off the mutations in his Eden, his organisms invented sex (using mechanisms which Ray had introduced to allow for parasitism). They had never been told about sex by their creator, but they seemed to find their own way to the Tree of Knowledge. Thomas Ray expresses the aims of his artificial life research as follows:12 "Everything we know about life is based on one example: Life on Earth. Everything we know about intelligence is based on one example: Human intelligence. This limited experience burdens us with preconceptions, and limits our imaginations... How can we go beyond our conceptual limits, find the natural form of intelligent processes in the digital medium, and work with the medium to bring it to its full potential, rather than just imposing the world we know upon it by forcing it to run a simulation of our physics, chemistry and biology?..." "In the carbon medium it was evolution that explored the possibilities inherent in the medium, and created the human mind. Evolution listens to the medium it is embedded in. It has the advantage of being mindless, and therefore devoid of preconceptions, and not limited by imagination." "I propose the creation of a digital nature - a system of wildlife reserves in cyberspace in the interstices between human colonizations, feeding off unused CPU-cycles and permitted a share of our bandwidth. 
This would be a place where evolution can spontaneously generate complex information processes, free from the demands of human engineers and market analysts telling it what the target applications are - a place for a digital Cambrian explosion of diversity and complexity..." "It is possible that out of this digital nature, there might emerge a digital intelligence, truly rooted in the nature of the medium, rather than brutishly copied from organic nature. It would be a fundamentally alien intelligence, but one that would complement rather than duplicate our talents and abilities." Have Thomas Ray and other "a-lifers"13 created artificial living organisms? Or have they only produced simulations that mimic certain aspects of life? Obviously the answer to this question depends on the definition of life, and there is no commonly agreed-upon definition. Does life have to involve carbon chemistry? The a-lifers call such an assertion "carbon chauvinism".

12 T. Ray, ray/pubs/pubs.html
13 In this terminology, ordinary biologists are "b-lifers".



They point out that elsewhere in the universe there may exist forms of life based on other media, and their program is to find medium-independent characteristics which all forms of life must have. In the present book, especially in Chapter 4, we have looked at the phenomenon of life from the standpoint of thermodynamics, statistical mechanics and information theory. Seen from this viewpoint, a living organism is a complex system produced by an input of thermodynamic information in the form of Gibbs free energy. This incoming information keeps the system very far away from thermodynamic equilibrium, and allows it to achieve a statistically unlikely and complex configuration. The information content of any complex (living) system is a measure of how unlikely it would be to arise by chance. With the passage of time, the entropy of the universe increases, and the almost unimaginably improbable initial configuration of the universe is converted into complex free-energy-using systems that could never have arisen by pure chance. Life maintains itself and evolves by feeding on Gibbs free energy, that is to say, by feeding on the enormous improbability of the initial conditions of the universe. All of the forms of artificial life that we have discussed derive their complexity from the consumption of free energy. For example, Spiegelman's evolving RNA molecules feed on the Gibbs free energy of the phosphate bonds of their precursors, ATP, GTP, UTP, and CTP. This free energy is the driving force behind the artificial evolution which Spiegelman observed. In his experiment, thermodynamic information in the form of high-energy phosphate bonds is converted into cybernetic information. Similarly, in the polymerase chain reaction, discussed in Chapter 3, the Gibbs free energy of the phosphate bonds in the precursor molecules ATP, TTP, GTP and CTP drives the reaction.
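The claim that information content measures how unlikely a configuration would be to arise by chance can be made quantitative with Shannon's formula, I = -log2(p). The following back-of-the-envelope sketch uses illustrative numbers not taken from the text (a 100-monomer chain over a 4-letter alphabet):

```python
import math

def information_bits(probability):
    """Shannon information of an event: I = -log2(p)."""
    return -math.log2(probability)

# A specific sequence of 100 monomers drawn from a 4-letter alphabet
# (e.g. a short nucleotide chain) has probability 4**-100 of arising
# in a single random draw, so it carries 100*log2(4) = 200 bits.
p_sequence = 4.0 ** -100
print(information_bits(p_sequence))  # 200.0
```

The same arithmetic scales to any improbable configuration: the smaller the probability of arising by chance, the larger its information content in bits.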
With the aid of the enzyme DNA polymerase, the soup of precursors is converted into a highly improbable configuration consisting of identical copies of the original sequence. Despite the high improbability of the resulting configuration, the entropy of the universe has increased in the copying process. The improbability of the set of copies is less than the improbability of the high energy phosphate bonds of the precursors. The polymerase chain reaction reflects, on a small scale, what happens on a much larger scale in all living organisms. Their complexity is such that they never could have originated by chance, but although their improbability is extremely great, it is less than the still greater improbability of the configurations of matter and energy from which they arose. As complex systems are produced, the entropy of the universe continually increases, i.e., the universe moves from a less probable configuration to a more probable one. In Thomas Ray's experiments, the source of thermodynamic information




is the electrical power needed to run the computer. In an important sense one might say that the digital organisms in Ray's Tierra system are living. This type of experimentation is in its infancy, but since it combines the great power of computers with the even greater power of natural selection, it is hard to see where it might end.
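The mutation-selection loop that drives this kind of digital evolution can be sketched in a toy model. This is a hypothetical illustration, not Ray's actual Tierra code: the genomes are bit strings rather than machine instructions, and the bit-counting fitness function is an arbitrary stand-in for replication efficiency.

```python
import random

random.seed(1)

GENOME_LEN = 16

def mutate(genome, rate=0.05):
    """Flip each bit independently with probability `rate`."""
    return [b ^ 1 if random.random() < rate else b for b in genome]

def fitness(genome):
    """Toy fitness: number of 1-bits (stands in for replication efficiency)."""
    return sum(genome)

def evolve(pop_size=50, generations=100):
    # Start from a single 'Ancestor' genome of all zeros.
    population = [[0] * GENOME_LEN for _ in range(pop_size)]
    for _ in range(generations):
        # Reproduction with copying errors.
        offspring = [mutate(g) for g in population]
        # Selection: keep only the fittest half of parents plus offspring,
        # playing the role of Tierra's 'reaper' that kills off the rest.
        merged = sorted(population + offspring, key=fitness, reverse=True)
        population = merged[:pop_size]
    return population

final = evolve()
print(max(fitness(g) for g in final))
```

Unlike this sketch, Tierra imposed no explicit fitness function at all: an organism's "fitness" was simply its success at copying itself within the virtual machine, which is what allowed parasites and other unplanned strategies to emerge.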

Suggestions for further reading

(1) P. Friedland and L.H. Kedes, Discovering the secrets of DNA, Comm. of the ACM, 28, 1164-1185 (1985). (2) E.F. Meyer, The first years of the protein data bank, Protein Science 6, 1591-7, July (1997). (3) C. Kulikowski, Artificial intelligence in medicine: History, evolution and prospects, in Handbook of Biomedical Engineering, J. Bronzino, editor, 181.1-181.18, CRC and IEEE Press, Boca Raton Fla., (2000). (4) C. Gibas and P. Jambeck, Developing Bioinformatics Computer Skills, O'Reilly, (2001). (5) F.L. Carter, The molecular device computer: point of departure for large-scale cellular automata, Physica D, 10, 175-194 (1984). (6) K.E. Drexler, Molecular engineering: an approach to the development of general capabilities for molecular manipulation, Proc. Natl. Acad. Sci USA, 78, 5275-5278 (1981). (7) K.E. Drexler, Engines of Creation, Anchor Press, Garden City, New York, (1986). (8) D.M. Eigler and E.K. Schweizer, Positioning single atoms with a scanning tunnelling microscope, Nature, 344, 524-526 (1990). (9) E.D. Gilbert, editor, Miniaturization, Reinhold, New York, (1961). (10) R.C. Haddon and A.A. Lamola, The molecular electronic devices and the biochip computer: present status, Proc. Natl. Acad. Sci. USA, 82, 1874-1878 (1985). (11) H.M. Hastings and S. Waner, Low dissipation computing in biological systems, BioSystems, 17, 241-244 (1985). (12) J.J. Hopfield, J.N. Onuchic and D.N. Beratan, A molecular shift register based on electron transfer, Science, 241, 817-820 (1988). (13) L. Keszthelyi, Bacteriorhodopsin, in Bioenergetics, P. Gräber and G. Milazzo (editors), Birkhäuser Verlag, Basel, Switzerland, (1997). (14) F.T. Hong, The bacteriorhodopsin model membrane as a prototype molecular computing element, BioSystems, 19, 223-236 (1986). (15) L.E. Kay, Life as technology: Representing, intervening and molecularizing, Rivista di Storia della Scienza, II, vol.1, 85-103 (1993). (16) A.P.
Alivisatos et al., Organization of 'nanocrystal molecules' using



DNA, Nature, 382, 609-611, (1996). (17) T. Bjørnholm et al., Self-assembly of regioregular, amphiphilic polythiophenes into highly ordered pi-stacked conjugated thin films and nanocircuits, J. Am. Chem. Soc. 120, 7643 (1998). (18) L.J. Fogel, A.J. Owens, and M.J. Walsh, Artificial Intelligence Through Simulated Evolution, John Wiley, New York, (1966). (19) L.J. Fogel, A retrospective view and outlook on evolutionary algorithms, in Computational Intelligence: Theory and Applications, 5th Fuzzy Days, B. Reusch, editor, Springer-Verlag, Berlin, (1997). (20) P.J. Angeline, Multiple interacting programs: A representation for evolving complex behaviors, Cybernetics and Systems, 29 (8), 779-806 (1998). (21) X. Yao and D.B. Fogel, editors, Proceedings of the 2000 IEEE Symposium on Combinations of Evolutionary Programming and Neural Networks, IEEE Press, Piscataway, NJ, (2001). (22) R.M. Brady, Optimization strategies gleaned from biological evolution, Nature 317, 804-806 (1985). (23) K. Dejong, Adaptive system design - a genetic approach, IEEE Syst. M. 10, 566-574 (1980). (24) W.B. Dress, Darwinian optimization of synthetic neural systems, IEEE Proc. ICNN 4, 769-776 (1987). (25) J.H. Holland, A mathematical framework for studying learning in classifier systems, Physica 22 D, 307-313 (1986). (26) R.F. Albrecht, C.R. Reeves, and N.C. Steele (editors), Artificial Neural Nets and Genetic Algorithms, Springer-Verlag, (1993). (27) L. Davis, editor, Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, (1991). (28) Z. Michalewicz, Genetic Algorithms + Data Structures = Evolution Programs, Springer-Verlag, New York, (1992), second edition, (1994). (29) K.I. Diamantaras and S.Y. Kung, Principal Component Neural Networks: Theory and Applications, John Wiley and Sons, New York, (1996). (30) A. Garliauskas and A. Soliunas, Learning and recognition of visual patterns by human subjects and artificial intelligence systems, Informatica, 9 (4), (1998). (31) A.
Garliauskas, Numerical simulation of dynamic synapse-dendrite-soma neuronal processes, Informatica, 9 (2), 141-160, (1998). (32) U. Seifert and B. Michaelis, Growing multi-dimensional self-organizing maps, International Journal of Knowledge-Based Intelligent Engineering Systems, 2 (1), 42-48, (1998). (33) S. Mitra, S.K. Pal, and M.K. Kundu, Fingerprint classification using fuzzy multi-layer perceptron, Neural Computing and Applications, 2,




227-233 (1994). (34) M. Verleysen (editor), European Symposium on Artificial Neural Networks, D-Facto, (1999). (35) R.M. Golden, Mathematical Methods for Neural Network Analysis and Design, MIT Press, Cambridge MA, (1996). (36) S. Haykin, Neural Networks: A Comprehensive Foundation, MacMillan, New York, (1994). (37) M.A. Gronroos, Evolutionary Design of Neural Networks, Thesis, Computer Science, Department of Mathematical Sciences, University of Turku, Finland, (1998). (38) D.E. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning, Addison-Wesley, (1989). (39) M. Mitchell, An Introduction to Genetic Algorithms, MIT Press, Cambridge MA, (1996). (40) L. Davis (editor), Handbook of Genetic Algorithms, Van Nostrand Reinhold, New York, (1991). (41) J.H. Holland, Adaptation in Natural and Artificial Systems, MIT Press, Cambridge MA, (1992). (42) J.H. Holland, Hidden Order: How Adaptation Builds Complexity, Addison Wesley, (1995). (43) W. Banzhaf, P. Nordin, R.E. Keller and F. Francone, Genetic Programming: An Introduction; On the Automatic Evolution of Computer Programs and its Applications, Morgan Kaufmann, San Francisco CA, (1998). (44) W. Banzhaf et al. (editors), GECCO-99: Proceedings of the Genetic and Evolutionary Computation Conference, Morgan Kaufmann, San Francisco CA, (2000). (45) W. Banzhaf, Editorial Introduction, Genetic Programming and Evolvable Machines, 1, 5-6, (2000). (46) W. Banzhaf, The artificial evolution of computer code, IEEE Intelligent Systems, 15, 74-76, (2000). (47) J.J. Grefenstette (editor), Proceedings of the Second International Conference on Genetic Algorithms and their Applications, Lawrence Erlbaum Associates, Hillsdale New Jersey, (1987). (48) J. Koza, Genetic Programming: On the Programming of Computers by means of Natural Selection, MIT Press, Cambridge MA, (1992). (49) J. Koza et al., editors, Genetic Programming 1997: Proceedings of the Second Annual Conference, Morgan Kaufmann, San Francisco, (1997).
(50) W.B. Langdon, Genetic Programming and Data Structures, Kluwer, (1998). (51) D. Lundh, B. Olsson, and A. Narayanan, editors, Bio-Computing and



Emergent Computation 1997, World Scientific, Singapore, (1997). (52) P. Angeline and K. Kinnear, editors, Advances in Genetic Programming: Volume 2, MIT Press, (1997). (53) J.H. Holland, Adaptation in Natural and Artificial Systems, The University of Michigan Press, Ann Arbor, (1975). (54) David B. Fogel and Wirt Atmar (editors), Proceedings of the First Annual Conference on Evolutionary Programming, Evolutionary Programming Society, La Jolla California, (1992). (55) M. Sipper et al., A phylogenetic, ontogenetic, and epigenetic view of bioinspired hardware systems, IEEE Transactions in Evolutionary Computation 1, 1 (1997). (56) E. Sanchez and M. Tomassini, editors, Towards Evolvable Hardware, Lecture Notes in Computer Science, 1062, Springer-Verlag, (1996). (57) J. Markoff, A Darwinian creation of software, New York Times, Section C, p.6, February 28, (1990). (58) A. Thompson, Hardware Evolution: Automatic design of electronic circuits in reconfigurable hardware by artificial evolution, Distinguished dissertation series, Springer-Verlag, (1998). (59) W. McCulloch and W. Pitts, A Logical Calculus of the Ideas Immanent in Nervous Activity, Bulletin of Mathematical Biophysics, 7, 115-133, (1943). (60) F. Rosenblatt, Principles of Neurodynamics, Spartan Books, (1962). (61) C. von der Malsburg, Self-Organization of Orientation Sensitive Cells in the Striate Cortex, Kybernetik, 14, 85-100, (1973). (62) S. Grossberg, Adaptive Pattern Classification and Universal Recoding: 1. Parallel Development and Coding of Neural Feature Detectors, Biological Cybernetics, 23, 121-134, (1976). (63) J.J. Hopfield and D.W. Tank, Computing with Neural Circuits: A Model, Science, 233, 625-633, (1986). (64) R.D. Beer, Intelligence as Adaptive Behavior: An Experiment in Computational Neuroethology, Academic Press, New York, (1990). (65) S. Haykin, Neural Networks: A Comprehensive Foundation, IEEE Press and Macmillan, (1994). (66) S.V. 
Kartalopoulos, Understanding Neural Networks and Fuzzy Logic: Concepts and Applications, IEEE Press, (1996). (67) D. Fogel, Evolutionary Computation: The Fossil Record, IEEE Press, (1998). (68) D. Fogel, Evolutionary Computation: Toward a New Philosophy of Machine Intelligence, IEEE Press, Piscataway NJ, (1995). (69) J.M. Zurada, R.J. Marks II, and C.J. Robinson, editors, Computational Intelligence: Imitating Life, IEEE Press, (1994). (70) J. Bezdek and S.K. Pal, editors, Fuzzy Models for Pattern Recognition: Methods that Search for Structure in Data, IEEE Press, (1992). (71) M.M. Gupta and G.K. Knopf, editors, Neuro-Vision Systems: Principles and Applications, IEEE Press, (1994). (72) C. Lau, editor, Neural Networks: Theoretical Foundations and Analysis, IEEE Press, (1992). (73) T. Bäck, D.B. Fogel and Z. Michalewicz, editors, Handbook of Evolutionary Computation, Oxford University Press, (1997). (74) D.E. Rumelhart and J.L. McClelland, Parallel Distributed Processing: Explorations in the Microstructure of Cognition, Volumes I and II, MIT Press, (1986). (75) J. Hertz, A. Krogh and R.G. Palmer, Introduction to the Theory of Neural Computation, Addison Wesley, (1991). (76) J.A. Anderson and E. Rosenfeld, Neurocomputing: Foundations of Research, MIT Press, (1988). (77) R.C. Eberhart and R.W. Dobbins, Early neural network development history: The age of Camelot, IEEE Engineering in Medicine and Biology 9, 15-18 (1990). (78) T. Kohonen, Self-Organization and Associative Memory, Springer-Verlag, Berlin, (1984). (79) T. Kohonen, Self-Organizing Maps, Springer-Verlag, Berlin, (1997). (80) G.E. Hinton, How neural networks learn from experience, Scientific American 267, 144-151 (1992). (81) K. Swingler, Applying Neural Networks: A Practical Guide, Academic Press, New York, (1996). (82) B.K. Wong, T.A. Bodnovich and Y. Selvi, Bibliography of neural network business applications research: 1988-September 1994, Expert Systems 12, 253-262 (1995). (83) I. Kaastra and M. Boyd, Designing neural networks for forecasting financial and economic time series, Neurocomputing 10, 251-273 (1996). (84) T. Poddig and H. Rehkugler, A world model of integrated financial markets using artificial neural networks, Neurocomputing 10, 2251273 (1996). (85) J.A. Burns and G.M. Whitesides, Feed-forward neural networks in chemistry: Mathematical systems for classification and pattern recognition, Chem. Rev. 93, 2583-2601, (1993). (86) M.L. Astion and P.W.
Wilding, The application of backpropagation neural networks to problems in pathology and laboratory medicine, Arch. Pathol. Lab. Med. 116, 995-1001 (1992). (87) D.J. Maddalena, Applications of artificial neural networks to problems in quantitative structure activity relationships, Exp. Opin. Ther. Patents 6, 239-251 (1996).



(88) W.G. Baxt, Application of artificial neural networks to clinical medicine, [Review], Lancet 346, 1135-8 (1995). (89) A. Chablo, Potential applications of artificial intelligence in telecommunications, Technovation 14, 431-435 (1994). (90) D. Horwitz and M. El-Sibaie, Applying neural nets to railway engineering, AI Expert, 36-41, January (1995). (91) J. Plummer, Tighter process control with neural networks, 49-55, October (1993). (92) T. Higuchi et al., Proceedings of the First International Conference on Evolvable Systems: From Biology to Hardware (ICES96), Lecture Notes in Computer Science, Springer-Verlag, (1997). (93) S.A. Kauffman, Antichaos and adaptation, Scientific American, 265, 78-84, (1991). (94) S.A. Kauffman, The Origins of Order, Oxford University Press, (1993). (95) M.M. Waldrop, Complexity: The Emerging Science at the Edge of Order and Chaos, Simon and Schuster, New York, (1992). (96) H.A. Simon, The Sciences of the Artificial, 3rd Edition, MIT Press, (1996). (97) M.L. Hooper, Embryonic Stem Cells: Introducing Planned Changes into the Animal Germline, Harwood Academic Publishers, Philadelphia, (1992). (98) F. Grosveld, (editor), Transgenic Animals, Academic Press, New York, (1992). (99) G. Köhler and C. Milstein, Continuous cultures of fused cells secreting antibody of predefined specificity, Nature, 256, 495-497 (1975). (100) S. Spiegelman, An approach to the experimental analysis of precellular evolution, Quarterly Reviews of Biophysics, 4, 213-253 (1971). (101) M. Eigen, Self-organization of matter and the evolution of biological macromolecules, Naturwissenschaften, 58, 465-523 (1971). (102) M. Eigen and W. Gardiner, Evolutionary molecular engineering based on RNA replication, Pure and Applied Chemistry, 56, 967-978 (1984). (103) G.F. Joyce, Directed molecular evolution, Scientific American 267 (6), 48-55 (1992). (104) N. Lehman and G.F. Joyce, Evolution in vitro of an RNA enzyme with altered metal dependence, Nature, 361, 182-185 (1993).
(105) E. Culotta, Forcing the evolution of an RNA enzyme in the test tube, Science, 257, 31 July, (1992). (106) S.A. Kauffman, Applied molecular evolution, Journal of Theoretical Biology, 157, 1-7 (1992). (107) H. Fenniri, Combinatorial Chemistry. A Practical Approach, Oxford




University Press, (2000). (108) P. Seneci, Solid-Phase Synthesis and Combinatorial Technologies, John Wiley & Sons, New York, (2001). (109) G.B. Fields, J.P. Tam, and G. Barany, Peptides for the New Millennium, Kluwer Academic Publishers, (2000). (110) Y.C. Martin, Diverse viewpoints on computational aspects of molecular diversity, Journal of Combinatorial Chemistry, 3, 231-250, (2001). (111) C.G. Langton et al., editors, Artificial Life II: Proceedings of the Workshop on Artificial Life Held in Santa Fe, New Mexico, Addison-Wesley, Reading MA, (1992). (112) W. Aspray and A. Burks, eds., Papers of John von Neumann on Computers and Computer Theory, MIT Press, (1967). (113) M. Conrad and H.H. Pattee, Evolution experiments with an artificial ecosystem, J. Theoret. Biol., 28, (1970). (114) C. Emmeche, Life as an Abstract Phenomenon: Is Artificial Life Possible?, in Toward a Practice of Artificial Systems: Proceedings of the First European Conference on Artificial Life, MIT Press, Cambridge MA, (1992). (115) C. Emmeche, The Garden in the Machine: The Emerging Science of Artificial Life, Princeton University Press, Princeton NJ, (1994). (116) S. Levy, Artificial Life: The Quest for New Creation, Pantheon, New York, (1992). (117) K. Lindgren and M.G. Nordahl, Cooperation and Community Structure in Artificial Ecosystems, Artificial Life, 1, 15-38 (1994). (118) P. Husbands and I. Harvey (editors), Proceedings of the 4th Conference on Artificial Life (ECAL '97), MIT Press, (1997). (119) C.G. Langton, (editor), Artificial Life: An Overview, MIT Press, Cambridge MA, (1997). (120) C.G. Langton, ed., Artificial Life, Addison-Wesley, (1987). (121) A.A. Beaudry and G.F. Joyce, Directed evolution of an RNA enzyme, Science, 257, 635-641 (1992). (122) D.P. Bartel and J.W. Szostak, Isolation of new ribozymes from a large pool of random sequences, Science, 261, 1411-1418 (1993). (123) K. Kelly, Out of Control, (2002). (124) K. Kelly, The Third Culture, Science, February 13, (1998).
(125) S. Blakeslee, Computer life-form "mutates" in an evolution experiment, natural selection is found at work in a digital world, New York Times, November 25, (1997). (126) M. Ward, It's life, but not as we know it, New Scientist, July 4, (1998). (127) P. Guinnessy, "Life" crawls out of the digital soup, New Scientist,



April 13, (1996). (128) L. Hurst and R. Dawkins, Life in a test tube, Nature, May 21, (1992). (129) J. Maynard Smith, Byte-sized evolution, Nature, February 27, (1992). (130) W.D. Hillis, Intelligence as an Emergent Behavior, in Artificial Intelligence, S. Graubard, ed., MIT Press, (1988). (131) T.S. Ray, Evolution and optimization of digital organisms, in Scientific Excellence in Supercomputing: The IBM 1990 Contest Prize Papers, K.R. Billingsly, E. Derohanes, and H. Brown, III, editors, The Baldwin Press, University of Georgia, Athens GA 30602, (1991). (132) S. Lloyd, The calculus of intricacy, The Sciences, October, (1990). (133) M. Minsky, The Society of Mind, Simon and Schuster, (1985). (134) D. Pines, ed., Emerging Synthesis in Science, Addison-Wesley, (1988). (135) P. Prusinkiewicz and A. Lindenmayer, The Algorithmic Beauty of Plants, Springer-Verlag, (1990). (136) T. Toffoli and N. Margolus, Cellular Automata Machines: A New Environment for Modeling, MIT Press, (1987). (137) M.M. Waldrop, Complexity: The Emerging Science at the Edge of Order and Chaos, Simon and Schuster, (1992). (138) T.S. Ray, Artificial Life, in From Atoms to Mind, W. Gilbert and T.V. Glauco, eds., Istituto della Enciclopedia Italiana Treccani, (Rome) (in press). (139) T.S. Ray et al., Kurzweil's Turing Fallacy, in Are We Spiritual Machines?: Ray Kurzweil vs. the Critics of Strong AI, J. Richards, ed., Viking, (2002). (140) T.S. Ray, Aesthetically Evolved Virtual Pets, in Artificial Life 7 Workshop Proceedings, C.C. Maley and E. Boudreau, eds., (2000). (141) T.S. Ray and J.F. Hart, Evolution of Differentiation in Digital Organisms, in Artificial Life VII, Proceedings of the Seventh International Conference on Artificial Life, M.A. Bedau, J.S. McCaskill, N.H. Packard, and S. Rasmussen, eds., MIT Press, (2000). (142) T.S. Ray, Artificial Life, in Frontiers of Life, Vol. 1: The Origins of Life, R. Dulbecco et al., eds., Academic Press, (2001). (143) T.S.
Ray, Selecting naturally for differentiation: Preliminary evolutionary results, Complexity, 3 (5), John Wiley and Sons, (1998). (144) K. Sims, Artificial Evolution for Computer Graphics, Computer Graphics, 25 (4), 319-328 (1991). (145) K. Sims, Galapagos, (1997).

Chapter 9

LOOKING TOWARDS THE FUTURE

Tensions created by the rapidity of technological change

In human cultural evolution, information transfer and storage through the language of molecular complementarity is supplemented by new forms of biological information flow and conservation - spoken language, writing, printing, and more recently electronic communication. The result has been a shift into a much higher evolutionary gear. Because of new, self-reinforcing mechanisms of information flow and accumulation, the rate of evolutionary change has increased enormously: It took 3 billion years for the first autocatalytic systems to develop into multicellular organisms. Five hundred million years were required for multicellular organisms to rise from the level of sponges and slime molds to the degree of complexity and organization that characterizes primates and other mammals; but when a branch of the primate family developed a tool-using culture, spoken language, and an enlarged brain, only 40,000 years were required for our ancestors to change from animal-like hunter-gatherers into engineers, poets and astronomers. During the initial stages of human cultural evolution, the rate of change was slow enough for genetic adaptation to keep pace. The co-evolution of speech, tool use, and an enlarged brain in hominids took place over a period of several million years, and there was ample time for genetic adaptation. The prolonged childhood which characterizes our species, and the behavior patterns of familial and tribal solidarity, were built into the genomes of our ancestors during the era of slow change, when cultural and genetic evolution moved together in equilibrium. However, as the pace of cultural information accumulation quickened, genetic change could no longer keep up. Genetically we are almost identical with our neolithic ancestors; but their world has been replaced by a world of quantum theory, relativity,




supercomputers, antibiotics, genetic engineering and space telescopes - unfortunately also a world of nuclear weapons and nerve gas. Because of the slowness of genetic evolution in comparison to the rapid and constantly-accelerating rate of cultural change, our bodies and minds are not perfectly adapted to our new way of life. They reflect more accurately the way of life of our hunter-gatherer ancestors. In addition to the contrast between the slow pace of genetic evolution when compared with the rapid and constantly-accelerating rate of cultural evolution, we can also notice a contrast between rapidly- and slowly-moving aspects of cultural change: Social institutions and structures seem to change slowly when compared with the lightning-like pace of scientific and technological innovation. Thus, tensions and instability characterize information-driven society, not only because science and technology change so much more rapidly than institutions, laws, and attitudes, but also because human nature is not completely appropriate to our present way of life. In particular, human nature seems to contain an element of what might be called "tribalism", because our emotions evolved during an era when our ancestors lived in small, mutually hostile tribes, competing with one another for territory on the grasslands of Africa. Looking towards the future, what can we predict? Detailed predictions are very difficult, but it seems likely that information technology and biotechnology will for some time continue to be the most rapidly-developing branches of science, and that these two fields will merge. We can guess with reasonable certainty that much progress will be made in understanding the mechanism of the brain, and in duplicating its functions artificially. Scientists of the future will undoubtedly achieve greatly increased control over the process of evolution.
Thus it seems probable that the rapidity of scientific and technological change will produce ethical dilemmas and social tensions even more acute than those which we experience today. It is likely that the fate of our species (and the fate of the biosphere) will be made precarious by the astonishing speed of scientific and technological change unless this progress is matched by the achievement of far greater ethical and political maturity than we have yet attained. Science has proved to be double-edged - capable of great good, but also of great harm. Information-driven human cultural evolution is a spectacular success - but can it become stable? Terrestrial life can look back on almost four billion years of unbroken evolutionary progress. Can we say with confidence that an equal period stretches ahead of us?



Can information-driven society achieve stability?

"We are living in a very special time", Murray Gell-Mann(1) remarked in a recent interview. "Historians hate to hear this, because they have heard it so many times before, but we are living in a very special time. One symptom of this is the fact that human population has for a long time been increasing according to a hyperbolic curve - a constant divided by 2020 minus the year." The graph of global human population as a function of time, to which Gell-Mann refers in this quotation, is shown in Figure 6.1. Estimates of population are indicated by dots on the graph, while the smooth curve shows the hyperbola P = C/(2020 - y), P being the population, y the year, and C a constant.

The form of the smooth curve, which matches the dots with reasonable accuracy, is at first surprising. One might have expected it to be an exponential, if the rate of increase were proportional to the population already present. The fact that the curve is instead a hyperbola can be understood in terms of the accumulation of cultural information. New techniques (for example the initial invention of agriculture, the importation of potatoes to Europe, or the introduction of high-yield wheat and rice varieties) make population growth possible. In the absence of new techniques, population is usually held in check by the painful Malthusian forces - famine, disease, and war. The curve in Figure 6.1 shows an explosive growth of human population, driven by an equally explosive growth of stored cultural information - especially agricultural and medical information, and the information needed for opening new land to agriculture. As Gell-Mann remarks, population cannot continue to increase in this way, because we are rapidly approaching the limits of the earth's carrying capacity. Will human numbers overshoot these limits and afterwards crash disastrously? There is certainly a danger that this will happen.
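The distinction between exponential and hyperbolic growth can be stated compactly: for an exponential, the growth rate dP/dy is proportional to P, while for Gell-Mann's hyperbola it is proportional to P squared. A minimal sketch follows; the value of the constant C below is an illustrative assumption chosen only to give populations of roughly plausible size, not a figure taken from the text:

```python
# Sketch of the hyperbolic population law P = C/(2020 - y).
# The constant C is a hypothetical value, chosen for illustration only.
C = 1.8e11

def population(year):
    """Hyperbolic fit P = C / (2020 - y), valid for year < 2020."""
    return C / (2020.0 - year)

def growth_rate(year):
    """dP/dy = C/(2020 - y)^2, which equals P^2 / C."""
    return C / (2020.0 - year) ** 2

# The hallmark of the hyperbola: the growth rate is proportional to P
# squared, so each doubling of P quadruples the rate of increase.
for y in (1000.0, 1900.0, 2000.0):
    p = population(y)
    print(f"{int(y)}: P = {p:.2e}, dP/dy = {growth_rate(y):.2e}, P^2/C = {p * p / C:.2e}")
```

The identity dP/dy = P^2/C follows directly from differentiating P = C/(2020 - y); it is this super-exponential feedback (stored cultural information enabling population growth, and vice versa) that the chapter attributes to the accumulation of cultural information.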
Besides the challenge of stabilizing global population, the information-driven human society of the future will face another daunting task: Because of the enormously destructive weapons that have already been produced through the misuse of science, and because of the even worse weapons that may be invented in the future, the long-term survival of civilization can only be ensured if society is able to eliminate the institution of war.

This task will be made more difficult by the fact that human nature seems to contain an element of tribalism. Humans tend to show great kindness towards close relatives and members of their own group, and are even willing to sacrifice their lives in battle in defense of their own family, tribe or nation. This tribal altruism is often accompanied by inter-tribal aggression - great cruelty towards the "enemy", i.e. towards members of a foreign group which is perceived to be threatening one's own. The fact that human nature seems to contain a genetically-programmed tendency towards tribalism is the reason why we find football matches entertaining, and the reason why Arthur Koestler once remarked: "We can control the movements of a space-craft orbiting about a distant planet, but we cannot control the situation in Northern Ireland."

How could evolutionary forces have acted to make the pattern of tribal altruism and inter-tribal aggression a part of human nature? To put the same question differently, how could our ancestors have increased the chances for survival of their own genes by dying in battle? The statistician R.A. Fisher and the evolutionary biologist J.B.S. Haldane considered this question in the 1920's(2). Their solution was the concept of population genetics, in which the genetically homogeneous group as a whole - now sometimes called the "deme" - is taken to be the unit upon which evolutionary forces act. Haldane and Fisher postulated that the small tribes in which our ancestors lived were genetically homogeneous, since marriage within the tribe was more probable than marriage outside it. This being the case, a patriotic individual who died for the tribe, killing many members of a competing tribe in the process, increased the chance of survival for his or her own genes, which were carried into the future by the surviving members of the hero's group. The tribe as a whole either lived or died; and those with the best "team spirit" survived most frequently. Because of the extraordinarily bitter and cruel conflicts between ethnic groups which can be found in both ancient and modern history, it is necessary to take the ideas of Haldane and Fisher seriously.

(1) Gell-Mann is an American physicist who was awarded a Nobel Prize in 1969 for his contributions to the theory of elementary particles.
This does not mean that the elimination of the institution of war is impossible, but it means that the task will require the full resources and full cooperation of the world's educational systems, religions, and mass media. It will be necessary to educate children throughout the world in such a way that they will think of humanity as a single group - a large family to which all humans belong, and to which they owe their ultimate loyalty. In addition to educational reform, and reform of the images presented by the mass media, the elimination of war will require the construction of a democratic, just, and humane system of international governance, whose laws will act on individuals rather than on states. The problems involved are very difficult, but they must be solved if the information-driven society of the future is to achieve stability.

(2) More recently the evolution of tribal altruism and inter-tribal aggression has also been discussed by W.D. Hamilton and Richard Dawkins.

Respect for natural evolution

The avalanche of new techniques in biotechnology and information technology will soon give scientists so much power over evolution that evolutionary ethical problems will become much more acute than they are today. It is already possible to produce chimeras, i.e. transgenic animals and plants incorporating genetic information from two or more species. Will we soon produce hybrids which are partly machines and partly living organisms? What about artificial life? Will humans make themselves obsolete by allowing far more intelligent beings to evolve in cyberspace, as Thomas Ray proposes? What about modification and improvement of our own species? Is there a limit beyond which we ought not to go in constructing new organisms to suit human purposes?

Perhaps one answer to these questions can be found by thinking of the way in which evolution has operated to produce the biosphere. Driven by the flood of Gibbs free energy which the earth receives from the sun, living organisms are generated and tested by life. New generations are randomly modified by the genetic lottery, sometimes for the worse, and sometimes for the better; and the instances of improvement are kept. It would be hard to overestimate the value of this mechanism of design by random modification and empirical testing, with the preservation of what works. The organisms which are living today are all champions! They are distillations of vast quantities of experience, end products of four billion years of solar energy income. The beautiful and complex living organisms of our planet are exquisitely adapted to survive, to live with each other, and to form harmonious ecological systems. Whatever we do in biotechnology ought to be guided by caution and by profound respect for what evolution has already achieved. We need a sense of evolutionary responsibility, and a non-anthropocentric component in our system of ethics.

Construction versus destruction

It is often said that ethical principles cannot be derived from science - that they must come from somewhere else. Nevertheless, when nature is viewed through the eyes of modern science, we obtain some insights which seem almost ethical in character. Biology at the molecular level has shown us the complexity and beauty of even the most humble living organisms, and the interrelatedness of all life on earth. Looking through the eyes of contemporary biochemistry, we can see that even the single cell of an amoeba is a structure of miraculous complexity and precision, worthy of our respect and wonder.

Knowledge of the second law of thermodynamics - the statistical law favoring disorder over order - reminds us that life is always balanced like a tight-rope walker over an abyss of chaos and destruction. Living organisms distill their order and complexity from the flood of thermodynamic information which reaches the earth from the sun. In this way, they create local order; but life remains a fugitive from the second law of thermodynamics. Disorder, chaos, and destruction remain statistically favored over order, construction, and complexity. It is easier to burn down a house than to build one, easier to kill a human than to raise and educate one, easier to force a species into extinction than to replace it once it is gone, easier to burn the Great Library of Alexandria than to accumulate the knowledge that once filled it, and easier to destroy a civilization in a thermonuclear war than to rebuild it from the radioactive ashes. Knowing this, scientists can form an almost ethical insight: To be on the side of order, construction, and complexity, is to be on the side of life. To be on the side of destruction, disorder, chaos and war is to be against life, a traitor to life, an ally of death. Knowing the precariousness of life - knowing the statistical laws that favor disorder and chaos - we should resolve to be loyal to the principle of long-continued construction upon which life depends.

Suggestions for further reading

(1) D.F. Noble, Forces of Production: A Social History of Industrial Automation, Knopf, New York, (1984).
(2) E. Morgan, The Scars of Evolution, Oxford University Press, (1990).
(3) W.D. Hamilton, The genetical theory of social behavior. I and II, J. Theor. Biol. 7, 1-52 (1964).
(4) R.W. Sussman, The Biological Basis of Human Behavior, Prentice Hall, Englewood Cliffs, (1997).
(5) H. von Foerster, KybernEthik, Merve Verlag, Berlin, (1993).
(6) L. Westra, An Environmental Proposal for Ethics: The Principle of Integrity, Rowman and Littlefield, Lanham MD, (1994).
(7) M. Murphy and L. O'Neill, editors, What is Life? The Next Fifty Years: Speculations on the Future of Biology, Cambridge University Press, (1997).
(8) Konrad Lorenz, On Aggression, Bantam Books, New York (1977).



(9) Irenaus Eibl-Eibesfeldt, The Biology of Peace and War, Thames and Hudson, New York (1979).
(10) R.A. Hinde, Biological Bases of Human Social Behavior, McGraw-Hill, New York (1977).
(11) R.A. Hinde, Towards Understanding Relationships, Academic Press, London (1979).
(12) Albert Szent-Gyorgyi, The Crazy Ape, Philosophical Library, New York (1970).
(13) E.O. Wilson, Sociobiology, Harvard University Press (1975).
(14) C. Zahn-Waxler, Altruism and Aggression: Biological and Social Origins, Cambridge University Press (1986).
(15) R. Axelrod, The Evolution of Cooperation, Basic Books, New York (1984).
(16) B. Mazlish, The Fourth Discontinuity: The Coevolution of Humans and Machines, Yale University Press, (1993).

Appendix A


In Chapter 4, we mentioned that Boltzmann was able to establish a relationship between entropy and missing information. In this appendix, we will look in detail at his reasoning. The reader will remember that Boltzmann's statistical mechanics (seen from a modern point of view) deals with an ensemble of N weakly-interacting identical systems which may be in one or another of a set of discrete states, i = 1, 2, 3, ..., with energies \epsilon_i, the number of systems which occupy a particular state being denoted by n_i:

State number        1           2           3           ...  i           ...
Energy              \epsilon_1  \epsilon_2  \epsilon_3  ...  \epsilon_i  ...
Occupation number   n_1         n_2         n_3         ...  n_i         ...

A "macrostate" of the N identical systems can be specified by writing down the energy levels and their occupation numbers. This macrostate can be constructed in many ways, and each of these ways is called a "microstate". From combinatorial analysis it is possible to show that the number of microstates corresponding to a given macrostate is given by:

W = \frac{N!}{n_1! \, n_2! \, n_3! \cdots n_i! \cdots}    (A.2)
Boltzmann assumed that for very large values of N, the most probable macrostate predominates over all others. He also assumed that the amount of energy which is shared by the N identical systems has a constant value, E, so that

\sum_i n_i \epsilon_i - E = 0    (A.3)

He knew, in addition, that the sum of the occupation numbers must be equal to the number of weakly-interacting identical systems:

\sum_i n_i - N = 0    (A.4)

It is logical to assume that all microstates which fulfill these two conditions are equally probable, since the N systems are identical. It then follows that the probability of a particular macrostate is proportional to the number of microstates from which it can be constructed, i.e. proportional to W, so that if we wish to find the most probable macrostate, we need to maximize W subject to the constraints (A.3) and (A.4). It turns out to be more convenient to maximize ln W subject to these two constraints, but maximizing ln W will of course also maximize W. Using the method of undetermined Lagrange multipliers, we look for an absolute maximum of the function

\ln W - \lambda \left( \sum_i n_i - N \right) - \beta \left( \sum_i n_i \epsilon_i - E \right)    (A.5)


Having found this maximum, we can use the conditions (A.3) and (A.4) to determine the values of the Lagrangian multipliers \lambda and \beta. For the function shown in equation (A.5) to be a maximum, it is necessary that its partial derivative with respect to each of the occupation numbers shall vanish. This gives us the set of equations

\frac{\partial}{\partial n_i} \left[ \ln N! - \sum_j \ln(n_j!) \right] - \lambda - \beta \epsilon_i = 0    (A.6)

which must hold for all values of i. For very large values of N and n_i, Stirling's approximation,

\ln(n_i!) \approx n_i (\ln n_i - 1)    (A.7)

can be used to simplify the calculation. With the help of Stirling's approximation and the identity

\frac{d}{dn_i} \left[ n_i (\ln n_i - 1) \right] = \ln n_i    (A.8)

we obtain the relationship

-\ln n_i - \lambda - \beta \epsilon_i = 0    (A.9)

which can be rewritten in the form

n_i = e^{-\lambda - \beta \epsilon_i}    (A.10)




and for the most probable macrostate, this relationship must hold for all values of i. Substituting (A.10) into (A.4), we obtain:

N = \sum_i n_i = e^{-\lambda} \sum_i e^{-\beta \epsilon_i}    (A.11)

so that

e^{-\lambda} = \frac{N}{Z}    (A.12)

where

Z = \sum_i e^{-\beta \epsilon_i}    (A.13)

The sum Z is called the "partition function" (or in German, Zustandssumme) of a system, and it plays a very central role in statistical mechanics. All of the thermodynamic functions of a system can be derived from it. The factor e^{-\beta \epsilon_i} is called the "Boltzmann factor". Looking at equation (A.14), we can see that because of the Boltzmann factor, the probability

P_i = \frac{n_i}{N} = \frac{e^{-\beta \epsilon_i}}{Z}    (A.14)

that a particular system will be in a state i is smaller for the states of high energy than it is for those of lower energy. We mentioned above that the constraints (A.3) and (A.4) can be used to find the values of the Lagrangian multipliers \lambda and \beta. The condition

E = N \sum_i P_i \epsilon_i    (A.15)



can be used to determine \beta. By applying his statistical methods to a monatomic gas at low pressure, Boltzmann found that

\beta = \frac{1}{kT}    (A.16)

where T is the absolute temperature and k is the constant which appears in the empirical law relating the pressure, volume and temperature of a perfect gas:

PV = NkT    (A.17)

From experiments on monatomic gases at low pressures, one finds that the "Boltzmann constant" k is given by

k = 1.38062 \times 10^{-23} \; \frac{\text{joule}}{\text{Kelvin}}    (A.18)

We mentioned that Boltzmann's equation relating entropy to disorder is carved on his tombstone. With one minor difference, this equation is

S_N = k \ln W    (A.19)


(The minor difference is that on the tombstone, the S lacks a subscript.) How did Boltzmann identify k ln W with the entropy of Clausius, dS = dq/T? In answering this question we will continue to use the modern picture of a system with a set of discrete states i, whose energies are \epsilon_i. Making use of Stirling's approximation, equation (A.7), and remembering the definition of W, (A.2), we can rewrite (A.19) as

S_N = k \ln \left[ \frac{N!}{n_1! \, n_2! \, n_3! \cdots n_i! \cdots} \right]
    = k \left[ \ln(N!) - \sum_i \ln(n_i!) \right]
    \approx -kN \sum_i \frac{n_i}{N} \ln \frac{n_i}{N}    (A.20)

Equation (A.20) gives us the entropy of the entire collection of N identical weakly-interacting systems. The entropy of a single system is just this quantity divided by N:

S = \frac{S_N}{N} = -k \sum_i P_i \ln P_i = -k \langle \ln P \rangle    (A.21)

where P_i = n_i/N, defined by equation (A.14), is the probability that the system is in state i. According to equation (A.14), this probability is just equal to the Boltzmann factor, e^{-\beta \epsilon_i}, divided by the partition function, Z, so that

S = -k \sum_i \frac{e^{-\beta \epsilon_i}}{Z} \ln \left( \frac{e^{-\beta \epsilon_i}}{Z} \right)
  = -\frac{k}{Z} \sum_i e^{-\beta \epsilon_i} \left( -\beta \epsilon_i - \ln Z \right)    (A.22)

or

S = k \ln Z + \frac{U}{T}    (A.23)

where

U = \sum_i \epsilon_i P_i    (A.24)

The quantity U defined in equation (A.24) is called the "internal energy" of a system. Let us now imagine that a very small change in U is induced by an arbitrary process, which may involve interactions between the system and the outside world. We can express the fact that this infinitesimal alteration in internal energy may be due either to slight changes in the energy levels \epsilon_i or to slight changes in the probabilities P_i by writing:

dU = \sum_i P_i \, d\epsilon_i + \sum_i \epsilon_i \, dP_i    (A.25)

To the first term on the right-hand side of equation (A.25) we give the name "dw":

dw = \sum_i P_i \, d\epsilon_i    (A.26)

while the other term is named "dq":

dq = dU - dw = \sum_i \epsilon_i \, dP_i    (A.27)
What is the physical interpretation of these two terms? The first term, dw, involves changes in the energy levels of the system, and this can only happen if we change the parameters defining the system in some way. For example, if the system is a cylinder filled with gas particles and equipped with a piston, we can push on the piston and decrease the volume available to the gas particles. This action will raise the energy levels, and when we perform it we do work on the system - work in the sense defined by Carnot, force times distance, the force which we apply to the piston multiplied by the distance through which we push it. Thus dw can be interpreted as a small amount of work performed on the system by someone or something on the outside. Another way to change the internal energy of the system is to transfer heat to it; and when a small amount of heat is transferred, the energy levels do not change, but the probabilities P_i must change slightly, as can be seen from equations (A.13), (A.14) and (A.16). Thus the quantity dq in equation (A.27) can be interpreted as an infinitesimal amount of heat transferred to the system. We have in fact anticipated this interpretation by giving it the same name as the dq of equations (4.2) and (4.3).

If the probabilities P_i are changed very slightly, then from equation (A.21) it follows that the resulting small change in entropy is

dS = -k \sum_i \left[ \ln P_i \, dP_i + dP_i \right]    (A.28)

From equations (A.13) and (A.14) it follows that

\sum_i P_i = 1    (A.29)

as we would expect from the fact that P_i is interpreted as the probability that the system is in a particular state i. Therefore

\sum_i dP_i = d \sum_i P_i = 0    (A.30)

and as a consequence, the second term on the right-hand side of equation (A.28) vanishes. Making use of equation (A.14) to rewrite ln P_i, we then have:

dS = -k \sum_i \left[ \left( -\beta \epsilon_i - \ln Z \right) dP_i \right]    (A.31)

or, with the help of equations (A.16), (A.27) and (A.30),

dS = k \beta \sum_i \epsilon_i \, dP_i = \frac{dq}{T}    (A.32)

The somewhat complicated discussion which we have just gone through is a simplified paraphrase of Boltzmann's argument showing that if he defined entropy to be proportional to ln W (the equation engraved on his tombstone) then the function which he defined in this way must be identical with the entropy of Clausius. (We can perhaps sympathize with Ostwald and Mach, who failed to understand Boltzmann!)
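The chain of identities above can be checked numerically. The sketch below assumes an arbitrary toy set of three energy levels (the level spacings are illustrative, not taken from the text) and verifies that the probabilities (A.14) sum to one and that the two entropy expressions, (A.21) and (A.23), agree:

```python
import math

k = 1.38062e-23  # Boltzmann's constant (A.18), joule/Kelvin

def boltzmann_entropy_check(energies, T):
    """Return (P, S_direct, S_via_Z) for a toy system with the given levels."""
    beta = 1.0 / (k * T)                                # (A.16)
    Z = sum(math.exp(-beta * e) for e in energies)      # (A.13)
    P = [math.exp(-beta * e) / Z for e in energies]     # (A.14)
    U = sum(e * p for e, p in zip(energies, P))         # (A.24)
    S_direct = -k * sum(p * math.log(p) for p in P)     # (A.21)
    S_via_Z = k * math.log(Z) + U / T                   # (A.23)
    return P, S_direct, S_via_Z

# three illustrative energy levels (in joules) at T = 300 K
P, S1, S2 = boltzmann_entropy_check([0.0, 1.0e-21, 2.0e-21], 300.0)
print(f"sum of probabilities = {sum(P):.12f}")
print(f"S from (A.21) = {S1:.6e},  S from (A.23) = {S2:.6e}")
```

The check also illustrates the qualitative content of the Boltzmann factor noted under equation (A.14): the computed probabilities decrease as the energy of the state increases.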

Appendix B


The Oxford Dictionary of Biochemistry and Molecular Biology (Oxford University Press, 1997) defines biosemiotics as "the study of signs, of communication, and of information in living organisms". The biologists Claus Emmeche and K. Kull offer another definition of biosemiotics: "biology that interprets living systems as sign systems".

The American philosopher Charles Sanders Peirce (1839-1914) is considered to be one of the founders of semiotics (and hence also of biosemiotics). Peirce studied philosophy and chemistry at Harvard, where his father was a professor of mathematics and astronomy. He wrote extensively on philosophical subjects, and developed a theory of signs and meaning which anticipated many of the principles of modern semiotics. Peirce built his theory on a triad: (1) the sign, which represents (2) something to (3) somebody. For example, the sign might be a broken stick, which represents a trail to a hunter; it might be the arched back of a cat, which represents an aggressive attitude to another cat; it might be the waggle-dance of a honey bee, which represents the coordinates of a source of food to her hive-mates; or it might be a molecule of trans-10-cis-12-hexadecadienol, which represents irresistible sexual temptation to a male moth of the species Bombyx mori. The sign might be a sequence of nucleotide bases which represents an amino acid to the ribosome-transfer-RNA system, or it might be a cell-surface antigen which represents self or non-self to the immune system. In information technology, the sign might be the presence or absence of a pulse of voltage, which represents a binary digit to a computer.

Semiotics draws our attention to the sign and to its function, and places much less emphasis on the physical object which forms the sign. This characteristic of the semiotic viewpoint has been expressed by the Danish biologist Jesper Hoffmeyer in the following words: "The sign, rather than the molecule, is the basic unit for studying life."
A second important founder of biosemiotics was Jakob von Uexküll




(1864-1944). He was born in Estonia, and studied zoology at the University of Tartu. After graduation, he worked at the Institute of Physiology at the University of Heidelberg, and later at the Zoological Station in Naples. In 1907, he was given an honorary doctorate by Heidelberg for his studies of the physiology of muscles. Among his discoveries in this field was the first recognized instance of negative feedback in an organism.

Von Uexküll's later work was concerned with the way in which animals experience the world around them. To describe the animal's subjective perception of its environment he introduced the word Umwelt; and in 1926 he founded the Institut für Umweltforschung at the University of Hamburg. Von Uexküll visualized an animal - for example a mouse - as being surrounded by a world of its own - the world conveyed by its own special sense organs, and processed by its own interpretative systems. Obviously, the Umwelt will differ greatly depending on the organism. For example, bees are able to see polarized light and ultraviolet light; electric eels are able to sense their environment through their electric organs; many insects are extraordinarily sensitive to pheromones; and a dog's Umwelt is far richer in smells than that of most other animals. The Umwelt of a jellyfish is very simple, but nevertheless it exists(3). Von Uexküll's Umwelt concept can even extend to one-celled organisms, which receive chemical and tactile signals from their environment, and which are often sensitive to light.

The ideas and research of Jakob von Uexküll inspired the later work of the Nobel Laureate ethologist Konrad Lorenz, and thus von Uexküll can be thought of as one of the founders of ethology as well as of biosemiotics. Indeed, ethology and biosemiotics are closely related. Biosemiotics also values the ideas of the American anthropologist Gregory Bateson (1904-1980), who was mentioned in Chapter 7 in connection with cybernetics and with the Macy Conferences.
He was married to another celebrated anthropologist, Margaret Mead, and together they applied Norbert Wiener's insights concerning feedback mechanisms to sociology, psychology and anthropology. Bateson was the originator of a famous epigrammatic definition of information: "a difference which makes a difference". This definition occurs in Chapter 3 of Bateson's book, Mind and Nature: A Necessary Unity, Bantam, (1980), and its context is as follows: "To produce news of a difference, i.e. information", Bateson wrote, "there must be two entities... such that news of their difference can be represented as a difference inside some information-processing entity, such as a brain or, perhaps, a computer. There is a profound and unanswerable question about

(3) It is interesting to ask to what extent the concept of Umwelt can be equated to that of consciousness. To the extent that these two concepts can be equated, von Uexküll's Umweltforschung offers us the opportunity to explore the phylogenetic evolution of the phenomenon of consciousness.



the nature of these two entities that between them generate the difference which becomes information by making a difference. Clearly each alone is - for the mind and perception - a non-entity, a non-being... the sound of one hand clapping. The stuff of sensation, then, is a pair of values of some variable, presented over time to a sense organ, whose response depends on the ratio between the members of the pair."

Suggestions for further reading

(1) J. Hoffmeyer, Some semiotic aspects of the psycho-physical relation: the endo-exosemiotic boundary, in Biosemiotics. The Semiotic Web, T.A. Sebeok and J. Umiker-Sebeok, editors, Mouton de Gruyter, Berlin/New York, (1991).
(2) J. Hoffmeyer, The swarming cyberspace of the body, Cybernetics and Human Knowing, 3(1), 1-10 (1995).
(3) J. Hoffmeyer, Signs of Meaning in the Universe, Indiana University Press, Bloomington IN, (1996).
(4) J. Hoffmeyer, Biosemiotics: Towards a new synthesis in biology, European J. Semiotic Stud. 9(2), 355-376 (1997).
(5) J. Hoffmeyer and C. Emmeche, Code-duality and the semiotics of nature, in On Semiotic Modeling, M. Anderson and F. Merrell, editors, Mouton de Gruyter, New York, (1991).
(6) C. Emmeche and J. Hoffmeyer, From language to nature - The semiotic metaphor in biology, Semiotica, 84, 1-42 (1991).
(7) C. Emmeche, The biosemiotics of emergent properties in a pluralist ontology, in Semiosis, Evolution, Energy: Towards a Reconceptualization of the Sign, E. Taborsky, editor, Shaker Verlag, Aachen, (1999).
(8) S. Brier, Information and consciousness: A critique of the mechanistic concept of information, Cybernetics and Human Knowing, 1(2/3), 71-94 (1992).
(9) S. Brier, Cyber-Semiotics: Second-order cybernetics and the semiotics of C.S. Peirce, Proceedings from the Second European Congress on System Science, Prague, October 5-8, 1993, AFCET, (1993).
(10) S. Brier, A cybernetic and semiotic view on a Galilean theory of psychology, Cybernetics and Human Knowing, 2(2), 31-46 (1993).
(11) S. Brier, Cybersemiotics: A suggestion for a transdisciplinary framework for description of observing, anticipatory, and meaning producing systems, in D.M. Dubois, editor, Computing Anticipatory Systems, CASYS - First International Conference, Liege, Belgium 1997, AIP Conference Proceedings no. 437, (1997).
(12) S. Oyama, The Ontogeny of Information, Cambridge University Press,



(1985).
(13) J.L. Casti and A. Karlqvist, editors, Complexity, Language, and Life: Mathematical Approaches, Springer, Berlin, (1985).
(14) H. Maturana and F. Varela, Autopoiesis and Cognition: The Realization of the Living, Reidel, London, (1980).
(15) J. Mingers, Self-Producing Systems: Implications and Application of Autopoiesis, Plenum Press, New York, (1995).
(16) J. Buchler, editor, Philosophical Writings of Peirce: Selected and Edited with an Introduction by Justus Buchler, Dover Publications, New York, (1955).
(17) T.L. Short, Peirce's semiotic theory of the self, Semiotica, 91(1/2), 109-131 (1992).
(18) J. von Uexküll, Umwelt und Innenwelt der Tiere, 2. verm. und verb. Aufl., Springer, Berlin, (1921).
(19) J. von Uexküll, The theory of meaning, Semiotica, 42(1), 25-87 (1982 [1940]).
(20) T. von Uexküll, Introduction: Meaning and science in Jakob von Uexküll's concept of biology, Semiotica, 42, 1-24 (1982).
(21) T. von Uexküll, Medicine and semiotics, Semiotica, 61, 201-217 (1986).
(22) G. Bateson, Form, substance, and difference, Nineteenth Annual Korzybski Memorial Lecture, (1970). Reprinted in G. Bateson, Steps to an Ecology of Mind, Ballantine Books, New York, (1972), pp. 448-464.
(23) G. Bateson, Mind and Nature: A Necessary Unity, Bantam Books, New York, (1980).
(24) G. Bateson, Sacred Unity: Further Steps to an Ecology of Mind, Harper Collins, New York, (1991).
(25) J. Ruesch and G. Bateson, Communication, Norton, New York, (1987).
(26) E.F. Yates, Semiotics as a bridge between information (biology) and dynamics (physics), Recherches Semiotiques/Semiotic Inquiry 5, 347-360 (1985).
(27) T.A. Sebeok, Communication in animals and men, Language, 39, 448-466 (1963).
(28) T.A. Sebeok, The Sign and its Masters, University of Texas Press, (1979).
(29) P. Bouissac, Ecology of semiotic space: Competition, exploitation, and the evolution of arbitrary signs, Am. J. Semiotics, 10, 145-166 (1972).
(30) F. Varela, Autopoiesis: A Theory of Living Organization, North Holland, New York, (1986).
(31) R. Posner, K. Robins and T.A. Sebeok, editors, Semiotics: A Handbook of the Sign-Theoretic Foundations of Nature and Culture, Walter



de Gruyter, Berlin, (1992).
(32) R. Paton, The ecologies of hereditary information, Cybernetics and Human Knowing, 5(4), 31-44 (1998).
(33) T. Stonier, Information and the Internal Structure of the Universe, Springer, Berlin, (1990).
(34) T. Stonier, Information and Meaning: An Evolutionary Perspective, Springer, Berlin, (1997).


Alien intelligence, 173 Allowed energy bands, 139 Alpha-proteobacteria, 65 Alphabets, 117 Altman, Robert, 64 Altman, Sydney, 57 Altruism, 6, 67, 173 Amino acid sequences, 58, 151 Amino acids, 40, 54, 99, 197 Ammonia, 53 Amoebae, 67 Anaerobic niches, 66 Analogue computers, 137 Analysis of sequence information, 151 Analytical Machine, 134 Anatomy, 2 Ancestral adaptations, 26 Andes mountains, 18 Aniline dyes, 96 Animal Behavior, 31 Animal languages, 104 Animals, similarity to humans, 6 Anthropology, 138, 198 Antibiotic resistance, 48 Antibiotic-resistant pathogens, 98 Antigens, 197 Apes, 29 Archaea, 58 Archaebacteria, 58, 66, 69, 155 Argentine Pampas, 17 Aristotle, 1, 2, 24, 90, 164 ARPANET, 143

Absolute temperature, 74, 193 Abstraction, 103, 104 Academy, 1 Accelerated rate of change, 144 Accelerating accumulation of information, 118 Accounts, 117 Acetylcholine, 101 Acquired characteristics, 11 ACTH, 49 Action potential, 158 Activation, 160 Activation energy, 99 Active site, 40, 41, 99 Adaptation to cold, 18 Adaptive systems, theory of, 162 Adaptor molecule, 42 Adenine, 39 Adenosine triphosphate, 53, 56, 103 Advanced Research Products Agency, 143 Affinity, 96 Africa, 29 Agassiz, Louis, 28 Age of the earth, 4, 16, 18 Aggression, 107 Agricultural revolution, 116 Agriculture, 49 Agrobacterium tumefaciens, 49 Aiken, Howard, 135, 138 Alaska, 114 Algae, 62 203



Arrays of receptors, 104 Artificial evolution, 174 Artificial evolution of electronic circuits, 163 Artificial intelligence, 133, 151, 184 Artificial life, 49, 164, 170, 171, 174, 187 Artificial life conference, 170 Artificial molecular evolution, 165 Artificial neural networks, 156, 160 ASCC, 136 ATP, 53, 103 Augmented input pattern vector, 160 Augmented scalar product, 161 Augmented weight vector, 160, 161 Autoassembly, 151, 152, 155 Autocatalysts, ix, 89 Autocatalytic systems, viii, 56, 127, 183 Automata, 165, 167 Automatic Sequence Controlled Calculator, 136 Automatons, 89 Averroes, 2 Avery, O.T., 38 Avogadro's number, 82 Axon, 103, 157 Axons, 101 Babbage, Charles, 133, 134 Bacteria, 58 Bacterial rhodopsin, 69 Bacterial spores, 90 Bacteriophage, 46, 47 Bacteriorhodopsin, 155, 156 Band structure of crystals, 139 Baran, Paul, 143 Bardeen, John, 139 Barley, 116 Barnacles, 23 Bartel, D.P., 165 Base pairs, 39 Base sequences in DNA and RNA, 58 Bateson, Gregory, 138, 198 Beadle, George, 41 Beagle, H.M.S., 15, 21, 28

Beetles, 14 Behavior, 29 Bell Telephone Laboratories, 136, 138, 139 Benda, A., 64 Beneden, Edouard van, 37 Berg, Paul, 47 Bering Strait, 114 Bernal, J.D., 40 Bilayer membranes, 152 Binary digits, 78, 79, 197 Binary numbers, 136, 137, 141, 163 Binnig, Gerd, 155 Bio-information technology, 151 Bioenergetics, 88 Bioinformatics, 151 Biological neural networks, 156, 158 Biology, 1 Biosemiotics, 171, 197 Biosphere, 90, 187 Biotechnology, 151, 164, 184, 187 Bits, 78, 79, 83, 106, 141 Bjørnholm, Thomas, 154 Boltzmann factor, 193 Boltzmann's constant, 77, 78, 193 Boltzmann, Ludwig, ix, 76, 191 Bombykol, 105 Bombyx mori, 105, 197 Bonnet, Charles, 16 Books, 119, 120 Boolean functionality, 162 Bottom-up synthesis, 154 Boyer, Herbert, 48 Brain, 25 Brain case, 26 Brain mechanism, 151, 162, 184 Brain size, 112 Brain structure, 157 Brattain, Walter, 139 Brazil, 17 Brenner, Sidney, 46 British Association, 28 Broom, Robert, 110 Bryan, William Jennings, 28 Buddhism, 120 Buffon, Comte de, 4


Bukht-Yishu family, 123 Bumble bees, 25, 30 Burks, Arthur, 167, 169, 170 Bush, Vannevar, 137 Bushbabies, 106 Bytes, 141 Byzantium, 123 Calvin, Melvin, viii, 52 Cambrian explosion of diversity, 172 Cambridge University, 14, 39, 40, 134 Carbohydrates, 99 Carbon chauvinism, 174 Carbon-dioxide fixation, 61 Carnot, Sadi, 73, 195 Carrying capacity, 185 Catalysis, 41 Catastrophists, 16 Cats, 26 Cech, Thomas R., 57 Cell body, 157 Cell division, 37, 97 Cell membrane, 58, 127 Cell nucleus, 39, 42, 58, 66 Cell walls, 66 Cell-surface antigens, 98, 197 Cellular automata, 167-169 Center for Nonlinear Studies, 170, 172 Central processing unit, 139, 157 Centrosomes, 64 Cerf, Vinton, 143 Channels, 142 Chaos, 170, 188 Chargaff's rules, 39 Chargaff, Erwin, 39 Charge distributions, 99 Charges, 96 Chemical Darwinism, viii Chemical evolution, 51, 56, 57 Chemical messenger, 101 Chemical signals, 67, 68, 105, 198 Chemical structure of genes, 38 Chemical trails, 105 Chen, J., 156 Childhood, prolonged in humans, 6, 183


Chimeras, 48, 49, 68, 164, 187 China, 121 Chinese characters, 118, 122 Chips, 140 Chloroplasts, 64, 67 Chromatography, 40 Chromophore, 156 Chromosome maps, 37 Chromosomes, 37, 64, 66, 163 Church-Turing hypothesis, 136 Citric acid cycle, 61 Clark, W., 143 Classes, 8 Classical civilizations, 126 Classical cultures, 124 Classification, 1, 8, 25, 58, 161 Classifier networks, 162 Clausius, Rudolf, 73, 82, 194 Cloning, 48, 50, 164 Closed system, 74 Clotting factors, 49 Clover, 25 Cocoons, 30 Codd, E.F., 167, 169 Codes, 95 Codons, 46 Coffman, K.G., 141 Cohen, Stanley, 48 Cohesive ends, 47 Collective human consciousness, 145 Colossal extinct animals, 17, 22 Colossus, 136 Comb-making instinct, 30 Combinatorial analysis, 191 Communication, 100, 138, 197 Communications networks, 143 Complementarity, 96, 98-100, 151 Complementary surface contours, 153 Complex systems, 57, 171 Complexity, vii, 66, 69, 73, 90, 95, 167, 168, 173, 174, 183, 188 Composite organisms, 62 Computer memories, 141 Computer networks, 141 Computer virus, 90, 172 Computers, 133



Condorcet, Marquis de, vii, 5, 127 Conduction bands, 139 Conductor, 139 Conjugal bridge, 98 Consciousness, 198 Constraints, 193 Construction versus destruction, 187 Contrast, 104 Control and communication involving machines, 138 Conway's Life game, 167, 169 Conway, John Horton, 167 Cooperative behavior of cells, 99 Copernicus, 126 Coptic, 118 Copying, 39 Coral atolls, 21 Corpuscular theory of matter, 3 Crabs, 23 Creationists, 28 Crick, Sir Francis, 39, 42, 46, 57, 88 Crossing, 37, 163 Crown gall, 49 Crystallization, 152, 153 Crystallography, 40 Crystals, 139 Cultural complexity, 95 Cultural evolution, vii, x, 8, 11, 109, 114, 118, 120, 127, 144, 183 Culture, ix Culture and language, 114 Cuneiform script, 117 Cuvier, Baron, 8, 11 Cyanobacteria, 65, 67 Cybernetic information, 90, 95, 174 Cybernetics, 137, 138, 171 Cyclic AMP, 67, 99, 169 Cytochrome C, 58 Cytoplasmic membrane, 155 Cytosine, 39 Cytoskeleton, 66 Dale, Sir Henry, 101 Dalton, John, 76 Dart, Raymond, 110 Darwin's finches, 19

Darwin, Charles, vii, 1, 4, 5, 11, 13, 35, 46, 51, 106, 109 Darwin, Erasmus, vii, 9, 13, 24 Davis, Ron, 47 Dawkins, Richard, 170, 186 De Duve, Christian, viii, 61 De Vries, Hugo, viii, 36, 163 Decision hyperplane, 160 Definition of life, 174 Dehydration reactions, 56 Delbrück, Max, 88 Deme, 186 Demotic script, 118 Dendrites, 101, 157 Dendritic membrane potential, 158 Deoxyribonucleic acid, 38 Dependency, 6 Depolarization, 158 Descent of Man, 27, 29 Diamond Sutra, 121 Dickerson, R.A., 58 Differentiation, 68, 100, 169 Diffraction effects, 154 Digital computers, 144 Digital organisms, 172 Disease, 7 Disorder, viii, 76, 89, 188, 194 Distributed communications networks, 143 DNA, 38, 39 DNA ligase, 47, 48 DNA polymerase, 50 DNA sequencing, 58 DNA technology, vii Domestic species, 164 Domestication of animals, 116 Dominant genes, 35 Doolittle, 65 Dopamine, 101 Doping, 139, 140 Down, 21 Dubois, Eugene, 110 Dwarf peas, 35 E. coli, 47 Eckert, J.P., 136


Eco RI, 47 Ecological niche, 25 Ecological systems, 187 Ecology, 25 Ecosystems, 95 Edinburgh University, 13 Education, 6 Educational equality, 7 Educational reform, 187 Effector, 101 Egg cells, 37 Egypt, 120 Egyptian hieroglyphs, 118 Ehrlich, Paul, 95, 152 Eigen, Manfred, viii Elamite writing, 117 Electric organs, 198 Electrochemical gradient, 156 Electromechanical calculators, 136 Electromechanical computers, 135 Electron beams, 154 Electron microscopy, 40 Electron spin resonance, 40 Electron transfer chain, 58 Electronic circuits, artificial evolution of, 163 Electronic communication, 183 Electronic digital computers, 137 Electronic mail, 143 Electronic valves, 139 Electrophoresis, 40, 51 Electrostatic forces, 40, 96, 99 Embryo-derived stem cells, 49 Embryology, 2 Embryos, 26, 164, 169 Emmeche, Claus, 197 Emotions, 29, 184 Endoplasmic reticulum, 66 Endosymbionts, 62 Endothermic reactions, 85 Energy and information, 69 Energy sources, 57 Energy-rich molecules, viii, 56, 69, 89 Engineering, 138 England, 39 ENIAC, 136, 167


Enthalpy, 84 Entropy, vii, ix, 74, 76, 81, 85, 88, 138, 174 Entropy and disorder, 194 Entropy of the universe, 85, 153 Environmental component of learning, 31 Enzymes, 38, 40, 43, 47, 87, 99 Equilibrium, thermodynamic, 82 Esquisse, 5 Ester lipids, 58 Ether lipids, 58 Ethics, 6, 184, 188 Ethics, non-anthropocentric component, 187 Ethnic conflicts, 7 Ethology, 25, 29, 198 Eubacteria, 58, 62 Eukaryotes, 58, 62, 66 Evolution, 1, 2, 4-7, 11, 13, 23, 49, 90, 127, 162, 163, 169, 175 Evolution of electronic circuits, 163 Evolution of vision, 69, 156 Evolution, control over, 184 Evolutionary computation, 163 Evolutionary ethical problems, 187 Evolutionary genetics, 151 Evolutionary responsibility, 187 Excess charge, 96, 99, 152, 153 Exothermic reactions, 85 Expression of Emotion, 29 Extinction, 4, 11, 188 F-factors, 47 Falkland Islands, 17 Family structure, 6, 112 Family trees in evolution, vii, 58 Farmer, James, 172 Feed-back, 138, 198 Feral animals, 25 Ferrous iron, 65 Fertilization of flowers, 64 Fiber optics, 142 Fire, use of, 112 Firing of a neuron, 158 Fisher, R.A., 79, 138, 186



Fitness-proportional reproduction, 163 FitzRoy, Captain Robert (later Admiral), 14, 28 Flemming, Walther, 37 Flightless birds, 17 Floating-point operations, 141 Flops, 141 Foerster, Heinz von, 138 Food, 89 Food molecules, 57 Food supply, 56 Forbidden energy bands, 139 Form and function, 11 Formaldehyde, 53 Formic acid, 53 Fossil animals, 17, 19, 22 Fossils, 3, 4 Fox, Sydney, viii, 54 FOXP2 gene, 113 Frank, Albert Bernhard, 62 Franklin, Rosalind, 39 Fredkin, Edward, 168 Free energy, 57, 73, 85, 89, 174 French Revolution, 5 Frisch, Karl von, 30, 106 Fruit flies, 37 Fruiting body, 67 Fungi, 58 Fungus, 62 Galapagos Islands, 19 Galactic Network, 143 Galagos, 106 Games, Theory of, 138 Gametes, 98 Gamma-amino-butyric acid, 101 Ganglions, 103 Gardner, Martin, 168 Garrod's hypothesis, 41 Garrod, Archibald, 41 Gell-Mann, Murray, 185 Gene promoting speech, 113 Gene-splicing, 48 Genera, 8 Generalization, 157, 162

Genes, 35, 88 Genesis, Book of, 8, 16 Genetic adaptation, 183 Genetic algorithms, 151, 162, 163, 167, 172 Genetic code, 43, 46 Genetic engineering, 68 Genetic evolution, x, 109, 118 Genetic fusion, viii, 11 Genetic information, 37, 42, 48, 68, 97, 98, 187 Genetic linkages, 37 Genetic lottery, 37, 187 Genetic predisposition, 31 Genetic predisposition to learn languages, 113 Genetics, 35 Genomes, 151, 169, 183 Genomic DNA, 51 Genotypes, 163 Geological record, 61 Geology, 3, 4, 14, 16, 18, 20 Germanium, 139 Giant axon of the squid, 102 Gibbs free energy, vii, ix, 84, 85, 95, 103, 152, 153, 174, 187 Gibbs, Josiah Willard, vii, ix, 76, 84 Gigaflop 11, 141 Gilbert, Walter, 48 Glass fibers, 142 Global disorder, ix Globular proteins, 40 Glossopetrae, 3 Glucose, 86, 100 Glutamate, 101 Goldberg, David, 163 Golgi apparatus, 66 Gondasapur, 123 Gracile bones, 112 Grandmother's face cells, 161 Grant, R.E., 13 Greek alphabet, 118 Grey, Michael, 65 Guanine, 39 Gutenberg, Johannes, 125


Haeckel, Ernst, 28, 58 Haemophilus influenzae, 47 Haldane, J.B.S., 186 Halobacterium salinarum, 68, 155 Hamilton, W.D., 186 Hardware, 151 Harvard University, 135, 138 HCN, 56 Heat, 82, 90, 95 Heat content, 74, 84 Heat transfer, 195 Helix, 39 Hellenistic era, 120 Helmholtz, Hermann von, 84 Hemoglobin, 40, 41 Henslow, John Stevens, 14, 16, 20 Hereditary component of learning, 31 Hereditary disease, 41 Heredity, 39 Herring gulls, 30 Heterotrophs, 61, 65, 66 Hieroglyphic writing, 117 High-energy phosphate bond, 156 Histology, 157 Hixon symposium, 166 Hodgkin, Alan Lloyd, 102 Hodgkin, Dorothy, 40 Hoffmeyer, Jesper, 171, 197 Holland, John, 162, 167, 170, 172 Hollerith, Hermann, 135 Homeostasis, 100, 138 Hominids, 110, 183 Homo erectus, 110, 112 Homo habilis, 110 Homo sapiens neand., 110, 112 Homo sapiens sapiens, 110 Homologies, 11, 26 Honey-bees, 30, 105 Hooke, Robert, 4 Hooker, Sir Joseph, 22, 23, 51 Hormones, 100 Hubel, David H., 104, 162 Human cultural evolution, 109 Human growth factor, 49 Human language, 107 Human nature, 184, 185


Human proteins from animal milk, 164 Humboldt, Alexander von, 14, 15, 17 Hunter-gatherers, 112, 116, 184 Hutton, James, 4 Huxley, Andrew Fielding, 102 Huxley, Thomas Henry, 27, 64, 102, 109 Hybrids, 35 Hydra, 64 Hydrogen bonds, 39, 152, 156 Hydrogen cyanide, 56 Hydrophilic head, 152 Hydrophilic residues, 40, 99 Hydrophobic residues, 40, 99 Hydrophobic tail, 152 Hydrothermal vents, viii, 57, 58 Hyperplane in pattern space, 160 Hyperthermophils, 58, 61 Hyrax Hill, 116 IBM Corporation, 135 Ideal gas, 77 Identical systems, 191 Immunity, 96 Improbability, 90 Impurities, 139, 140 Immune system, 197 India ink, 121 Industrial revolution, 126 Inequality, economic, 7 Information, vii, 42, 141, 171, 174 Information accumulation, 109, 118, 143, 183 Information and entropy, 138 Information and Maxwell's demon, 76 Information conservation, 118 Information content of free energy, 57, 69 Information density, 145 Information destruction, 104 Information explosion, 122, 126, 145, 185 Information flow, 98, 183 Information Flow in Large Communication Nets, 142



Information in living organisms, 197 Information on the genome, 169 Information technology, 151, 184, 187 Information theory, ix, 73, 78 Information transfer, 97, 98, 183 Information transmission, ix, 114, 118 Information, Bateson's definition, 199 Information, thermodynamic, 82, 85, 89 Information-containing molecules, 88 Information-driven cultural evolution, 118, 184 Initial conditions, 174 Ink, 120 Input channel, 157 Input signals, 160 Instincts, 25, 29, 30 Institutions, 184 Insulator, 139 Insulin, 41, 49, 100 Insurance, 7 Integrated circuits, 139, 151, 153, 155 Intelligence, 173 Interactive calculations, 141 Interbreeding, 35 Interferon, 49 Internal energy, 195 Internet, 142, 143 Internet traffic, 145 Internuncial nervous system, 101 Interrelatedness of life, 188 Intuition, 157, 162 Invention of writing, 118 Invertebrate zoology, 10, 13 Ion pump, 103, 155 Islamic civilization, 123 Jackson, David, 47 Jacquard, Joseph Marie, 133, 135 JANET, 143 Japan Unix Network, 143 Jellyfish, 1, 101, 198 Jericho, 116 Joyce, G.F., 165 Kahn, Robert E., 143

Kaiser, Dale, 47 Kauffman model, 57 Kauffman, Stuart, viii, 57 Kelvin, Lord, 74 Kendrew, John C., 40 Keszthelyi, Lajos, 155 Khorana, H. Gobind, 46 Kinematic model, 165 King's College, London, 39 Kleinrock, Leonard, 142 Knowledge, diffusion and accumulation of, 126 Koch, Robert, 96 Koestler, Arthur, 186 Kornberg, Arthur, 43 Kuffler, Stephen W., 103, 162 Kull, K., 197 Laetoli footprints, 111 Lagrange multipliers, 192 Laing, Richard, 166, 170 Lake Rudolf, 110 Lamarck, Chevalier de, vii, 10, 13, 23, 24 Langton's λ parameter, 170 Langton's loops, 170 Langton, Christopher, 169-172 Language, ix, 7, 31, 95, 112, 183 Language and culture, 114 Language of ants, 105 Language of humans, 107 Language of molecular complementarity, 95, 151, 152, 183 Languages of animals, 104 Lapps, 114 Leakey, Louis, 110 Leakey, Mary, 110 Leakey, Richard, 110 Learning, 31, 157, 162 Learning by artificial neural networks, 161 Learning of language, 113 Lederberg, Joshua, 47 Lehn, J.M., 154 Leibniz, G.W., 133 Leonardo da Vinci, 3, 126


Luteinizing hormone, 49 Lewin, Kurt, 138 Lewis, G.H., 26 Library at Alexandria, 120, 188 Lichens, 62 Licklider, J.C.R., 143 Life, 168 Life, prolongation of, 7 Light-receptor cells, 103 Lightning, 52 Lindenmayer, Aristid, 170 Linear separability, 161 Links, 142 Linnaeus, Carolus, vii, 8, 109 Linnean Society, 23 Lipid bilayer, 153 Lipids, 56, 58, 155 Lithoautotrophs, 61 Lobban, Peter, 47 Local order, ix Lock and key fitting, 96, 152 Loewi, Otto, 101 Logic density, 140 Logic of Computers Group, 167, 170 London, 40 Lorenz, Konrad, 30, 107, 198 Lovelace, Augusta Ada, Lady, 135 Lyceum, 1 Lyell's hypothesis, 16 Lyell, Sir Charles, vii, 4, 16, 18, 21-23 Lysozyme, 40 Mach, 196 Macrostate, 77, 81, 191 Macy Conferences, 138, 171, 198 Magdalenian culture, 115 Magnetic disk storage, 141 Magnetite, 65 Malthus, T.R., 8, 22, 23 Malthusian forces, 185 Mammalian eye, 103 Mammalian retina, 162 Mammals, 2, 20, 25 Man's Place in Nature, 29, 109 Man-made forms of life, 49 Mapping of genes, 48


Marsupials, 20 Massachusetts Institute of Technology, 137, 142, 162 Maternal behavior, 105 Mating, 105 Matthaei, Heinrich, 46 Mauchly, J.W., 136 Maxwell's demon, 75 Maxwell, James Clerk, ix, 75 Mayan glyphs, 118 McCulloch, Warren, 138, 157, 165 Mead, Margaret, 138, 198 Mechanical calculators, 134 Mechanical computer, 135 Mechanisms of the brain, 151, 162, 184 Miescher, Friedrich, 38 Membrane potential, 158 Membrane-bound proteins, 100, 155 Membranes, 152 Memory, 157 Memory density, 141, 153 Mendel's laws, 36 Mendel, Gregor, 35 Mendelian genetics, 49, 68 Mertz, Janet, 47 Mesopotamia, 116 Mesopotamian cuneiform, 118 Messenger RNA, 42, 98 Metabolism, 61, 65, 90 Metallo-porphyrins, 57 Metallographic printing, 125 Meteoric impacts, 52 Methane, 52, 53 Microelectronics, 138, 139, 153, 170 Microprinting, 154 Microprocessors, 140 Microscope, 140 Microstates, 77, 81, 191 Migrations, 114 Miller, Stanley, viii, 53 Miller-Urey experiment, 53 Miniaturization, 139, 140, 151 Minicomputer, 140 Missing information, 79, 81, 191 Mission IV Group, 166



MIT Artificial Intelligence Lab, 168, 172 Mitochondria, 64, 66, 67, 101 Mitotic cell division, 66 Molecular biology, vii, 39, 40, 188 Molecular complementarity, 95, 152 Molecular Darwinism, 56 Molecular evolution, viii, 165 Molecular information transfer, 127 Molecular oxygen, viii, 65 Molecular switches, 151, 155 Molluscs, 23 Moore's law, 140, 145, 151 Moore, Gordon E., 140 Moral improvement, 7 Morgan, Thomas Hunt, 37, 163 Morphogenesis, 164, 168-170 Morphology, vii, 26, 29 Movable type, 121, 125 Muller, Hermann J., viii, 38, 88, 163 Mullis, Kary, 49 Multi-state cell, 167 Multicellular animals, 169 Multicellular organisms, 61, 66, 67, 98, 100, 101, 127, 183 Muscular contraction, 101 Mutant sheep, 36 Mutant strains of mold, 41 Mutants, 46 Mutations, viii, 36, 38, 41, 88, 163, 165, 166, 173 Mycorrhizal fungi, 64 Myoglobin, 40 Nanoscale circuits, 154, 155 Nanoscience, 151 Nanotechnology, 154, 155 Nathans, Daniel, 47 Natural selection, viii, 2, 4, 11, 22-24, 27, 56, 89, 162, 163, 165, 171, 173 Neanderthal man, 109 Negative entropy, 88 Negative feedback, 198 Neolithic agricultural revolution, 118 Nervous systems, 100 Nest scent, 105

Nestorians, 123 Networks, 141 Neumann, John von, 81, 89, 137, 138, 157, 165-167, 169 Neural networks, 156, 157, 162, 163 Neurons, 100, 101, 157 Neurophysiology, 138, 151, 157 Neurotransmitter, 101, 158 Newman, M.H.A., 136 Nirenberg, Marshall, 46 Nitrogen-fixing bacteria, 64 Nodes, 142 Noisy data, 157 Nonverbal signs, 107 Noradrenalin, 101 Norepinephrine, 101 Novick, Richard, 48 NSFNET, 143 Nucleic acids, 98 Nucleoli, 64 Nucleotide bases, 41 Nucleotide sequence, 151, 197 Nucleotides, 50, 57, 98 Occupation numbers, 191 Ochoa, Severo, 43 Octopus eye, 104 Odlyzko, A.M., 141 Olduvai Gorge, 110 Oligonucleotide primers, 51 Ontogeny, 58 Oparin, A.I., viii, 52 Open-ended artificial evolution, 172 Optical computer memories, 156 Optical storage devices, 141 Optical switching, 156 Optimization, 162 Order, viii, ix, 57, 88, 188 Orders, 8 Organization, 138 Orgel, Leslie, viii, 57 Origin of life, vii, 51, 69, 172 Origin of Species, 22, 24, 27 Oesterhelt, D., 155 Ostwald, 196 Output channel, 157


Overshoot and crash, 185 Oxford debate, 28 Oxygen, 52, 61, 65 Pacific islands, 20 Pack leader, 31 Packet switching systems, 142, 143 Palade, George, 42 Palaeolithic cultures, 114 Paleontology, 11 Paper, 120, 125, 126 Papermaking, 122, 123 Papyrus, 119, 120 Parallel arrays, 157 Parallel-processing, 141 Paramecia, 64 Parasites, 98 Parasitism, 62, 173 Parchment, 120, 125 Paris, 2 Partition function, 193, 194 Pascal, Blaise, 133 Pattern abstraction, 104, 162 Pattern recognition, 103, 157, 162 Pattern space, 160 Pauling, Linus, 39, 41 PCR technique, 50 Peacock, George, 14 Pebble tools, 112 Peking man, 111, 112 Penicillin, 40 Pennsylvania, University of, 136 Pensions, 7 Peptides, 41, 56 Perfect gas, 193 Perfectibility, 5 Pergamum, 120 Periodicity, 170 Permeability, 103 Peroxysomes, 66 Perrin, J.B., 76 Perutz, Max, 40 Phagocytes, 96 Phagocytosis, 67 Phase space, 77 Phenotypes, 163


Phenylalanine, 46 Pheromones, 104, 198 Phillips, D.C., 40 Phoenicians, 117 Phoenician alphabet, 118 Phonetic scripts, 117, 121, 122 Phosphate bonds, 174 Phospholipid bilayer, 152 Photoautotrophs, 61 Photolithography, 154 Photons, ix, 83 Photoresist, 140, 154 Photosynthesis, 69, 127, 155 Photosynthetic bacteria, 61 Photosystems I and II, 65 Phyla, 8 Phylogenetic tree, 58 Physiology of muscles, 198 Pictographs, 118, 121 Peirce, Charles Sanders, 167, 197 Piezoelectric crystal, 155 Pithecanthropus erectus, 110 Pitts, Walter, 138, 157, 165 Planarian worms, 64 Plasmids, 11, 47, 97 Pneumococci, 38 Polar groups, 99 Polarizable groups, 152 Polarization, 158 Polarized light, 198 Pollination, 35 Polymerase, 50 Polymerase chain reaction, 50, 57 Polynomials, 134 Polynucleotides, 42, 57 Polypeptides, 41, 54, 56, 57 Polyphenylalanine, 46 Ponnamperuma, Cyril, 53 Population, 22, 23, 118, 185 Population explosion, 144, 185 Population genetics, 186 Porphyrins, 56 Post-synaptic cleft, 158 Potential barriers, 87 Power, hereditary, 7 Precursors, 56



Pressure, 193 Primate hand, 112 Primer, 50 Primitive atmosphere, 52 Printing, 120, 123, 125, 126, 144, 183 Printing press, 125 Printing with movable type, 121 Probability, 193, 195 Probability theory, 5 Programmable computer, 135 Programs of the brain, 105 Progress, 5, 7, 90, 184 Prokaryotes, 47, 66, 127 Prolonged childhood, 6, 183 Protein structure, 40, 99 Protein synthesis, 68 Proteins, 38, 51 Protists, 61, 67 Proton pump, 156 Psychology, 25, 198 Public education, 7 Punched cards, 135 Purple membrane, 155 Pyrite formation, 60 Quantum dot technology, 140 Quantum effects, 155 Quantum mechanical tunneling, 155 Quantum theory, 77, 139, 144 Queen substance, 105 Quorum sensing, 67 R-factors, 48, 97 R-type, 38 Radiation damage, 38 Radioactive decay, 52 Radioactive tracers, 40, 42 Rapidity of technological change, 183 Rasmussen, Steen, 172 Rate of change, 183 Rational thought, 5 Ray, Thomas, 172, 173, 175 Reaction pathways, 87 Reaction rates, 87 Receptors, 100, 158 Recessive genes, 35

Recombinant DNA, 47 Reducing agent, 61 Reductionism, 171 Reflexive catalysis, 57 Refractive index, 142 Reproduction, 105, 163, 166 Respect for life, 187 Respiratory metabolism, 58, 66 Resting potential, 103 Restriction enzymes, 47 Restriction map, 47 Retina, 103, 162 Reversible ATPase, 156 Rhodopsin, 69 Ribosomes, 41, 42, 58, 98, 197 Ribozymes, 165 Rice, 116 RNA, 41 RNA replicase, 165 RNA replication, 165 RNA sequencing, 58 RNA world hypothesis, 57, 165 Robespierre, 7 Robustness, 157, 162 Rockefeller Institute, 38, 42 Rohrer, Heinrich, 155 Rosenblueth, Arturo, 138 Rosetta stone, 118 Round dance, 106 Rousseau, 6 Royal Institution, 40 Royal Society, 134 S-type, 38 Sagan, Carl, viii, 53 Salamanders, 26 Saliva, 106 Sanger, Frederick, 41, 48 Santa Fe Institute, 57, 171 Scalar product, 161 Scanning tunneling microscope, 155 Scent glands, 106 Schimper, Andreas, 64 Schneider, Albert, 64 Schrödinger, Erwin, viii, 87, 138 Scientific articles, 145


Scientific progress, x Scientific revolution, 126 Scopes, John T., 28 Scribes, 120 Sea anemone, 64 Second law of thermodynamics, vii, viii, 74, 75, 85, 188 Sedgwick, Adam, 14, 16, 20 Sedimentary rocks, 4 Selection, 163 Selective breeding, 9, 49 Self replication, 164 Self-assembly, 151-154 Self-organization, 152 Self-replicating automaton, 165, 167 Self-replicating programs, 172 Semiconductors, 139 Semiotic information, 95 Semiotics, 167, 197 Sense organs, 198 Sensor, 101 Sequencing of DNA, 48 Sequencing of macromolecules, vii Sequencing of proteins, 41 Sequencing techniques, 58 Serial homologies, 26 Serotonin, 101 Stetter, K.O., 61 Sex, 173 Sex, analogous to, 97 Sexual reproduction, 37, 98 Shamanism, 114 Shannon entropy, 81 Shannon's formula, 78 Shannon, Claude, viii, ix, 78, 138 Shapiro, J.A., 68 Shark's teeth, 3 Sheep-dogs, 31 Shockley, William, 139 Shrimps, 23 Siberia, 114 Sickle-cell anemia, 41 Side chains, 96 Sign stimulus, 31 Sign systems, 197 Signal molecules, 104, 169


Signals, 95 Signs, 95, 107, 197 Silicon, 139 Sinanthropus pekinensis, 111 Slavery, 7 Sleeping sickness, 96 Slime molds, 67, 169, 183 Smith, Hamilton, 47 Social communication, 138 Social institutions, 184 Social interactions, 143 Social sciences, 5 Social tensions, 184 Societal information exchange, 145 Sociology, 138, 198 Software, 151 Solar energy income, 187 Solutrean culture, 115 Soma, 157, 158 Space exploration, 139, 166 Spain, 2 Species, 1, 8, 19, 20, 22 Specific catalytic activity, 165 Specificity, 96 Speech, 112, 113, 144, 183 Speed of computers, 141, 153 Speed of light, 139 Sperm, 37 Spiegelman, S., 165, 174 Sponges, 1, 64, 67, 183 Spontaneous chemical reactions, 86, 87, 153 Spontaneous process, 84, 85 Spores, 67, 90 St. Hilaire, Etienne Geoffroy, 11, 24 St. Jago, 16 Stability, 187 Staining, 37, 96 Stanford University, 41, 47 Start primer, 50 Statistical mechanics, ix, 73, 76, 84, 174, 191 Steam engines, 73 Stem cells, 49 Steno (Niels Stensen), 3 Steno's Law of Strata, 3



Steric complementarity, 99 Stirling's approximation, 80, 192, 194 Stibitz, George, 136 Sticky ends, 47 Stock breeding, 116 Stoeckenius, Walter, 155 Stop primer, 50 Stromatolites, 61, 65 Sub-species, 25 Subcellular particles, 42 Substrate molecules, 99 Sugar-phosphate backbone, 39 Sumerian civilization, 117 Sunlight, 61, 90 Super-normal sign stimulus, 31 Supramolecular chemistry, 154 Supramolecular structures, 151, 153 Susa, 117 Symbiosis, viii, 11, 62, 64, 66, 173 Synapses, 101, 158 Synchrotron radiation, 154 Syphilis, 96 Syriac, 123 Szent-Györgyi, Albert, 88 Szilard, Leo, 78, 81, 138 T'ang dynasty, 121 Tactile signals, 198 Tadpoles, 26 Target cells, 100 Target segment, 50 Tatum, Edward, 41 Teaching neural networks, 161 Technological change, 183 Technological change, rapidity of, 183 Temperature, 85, 193 Template, 39, 42, 154, 155 Tepe Yahya, 117 Terabytes, 141 Terminal transferase, 47 Terror, 7 Tertiary conformation, 98 Tertiary structure of proteins, 40 Theory of games, 138 Thermal reservoir, 85 Thermodynamic functions, 193

Thermodynamic information, ix, 82, 85, 87, 89, 90, 95, 174, 188 Thermodynamics, vii, 73, 84, 171, 174 Thermus aquaticus, 50 Thin films, 140 Threshold Logic Unit, 157, 161 Threshold value, 158, 160 Thymine, 39 Ti plasmids, 49 Tierra, 172, 175 Tierra del Fuego, 17, 18 Tigris and Euphrates rivers, 117 Tinbergen, Nikolaas, 30 TLU, 157, 160 Tobacco mosaic virus, 90 Tokens, 117 Toledo, 124 Tool-using culture, 183 Tools, 112 Total internal reflection, 142 Trade, 117 Training algorithms, 162 Transfer RNA, 42, 197 Transgenic animals and plants, 187 Transgenic organisms, 49, 68 Transgenic species, 164 Transistors, 139, 140, 144, 151 Translation into Arabic, 123, 126 Translation into Latin, 126 Translation into Syriac, 126 Transmitter, 101 Transmitter molecules, 99 Tribal social structure, 113 Tribalism, x, 185 Tropical forests, 14, 17 Turing machine, 136, 165-167, 170, 172 Turing, A.M., 136 Typography, 125 Uexküll, Jakob von, 197 Ulam, Stanislaw, 167 Ultraminiaturization, 153 Ultraviolet light, 198 Ultraviolet radiation, 52 Umwelt, 198



Uncontrolled evolution, 173 Undersea hydrothermal vents, 58 Uniformitarian principles, 4 Universal phylogenetic tree, 58 Universal Turing machine, 136, 165 University of Chicago, 53 Upright locomotion, 110, 111 Uracil, 42 Urey, Harold, viii, 53 Urine bath, 106 Vaccines, 49 Vacuum tubes, 136, 139 Valence bands, 139 Van der Waals forces, 99, 153 Variability, 98 Variation, 24 Variation under domestication, 22, 24, 49 Variations, 35 Varieties, 23 Vectors in pattern space, 160 VENUS, 172 Vertebrae, 26 Vertebrates, 11 Vestigial organs, 4, 11, 25 Vice, 7 Virtual computer, 172 Virus, 90 Visual cortex, 103, 161, 162 Visual displays, 106 Vitamin B12, 40 Volcanic islands, 24 Volcanism, 18, 52 Volume, 193 Wächtershäuser, Günter, 60 Wafers, 140 Waggle dance of bees, 106, 197 Wallace, Alfred Russell, 2, 23 War, 7, 185, 188 Water-air interfaces, 154 Watson, James, 39 Watson-Crick model of DNA, 39 Watt, James, 73 Weapons, 112

Wedgwood, Emma, 21 Wedgwood, Josiah, 15, 21 Weight vector, 161 Weighted sum, 160 Weights, 160 Wiesel, Torsten N., 104, 162 Whales, 25 What is Life?, 88 Wheat, 116 Wiener, Norbert, 162, 198 Wilberforce, Bishop Samuel, 28 Wildlife reserves in cyberspace, 173 Wilkins, Maurice, 39 Willadsen, Steen, 164 Woese, Carl, 57, 58, 65 Wolfram, Stephen, 168, 170 Wolves, 31 Woodblock printing, 121, 125 Work, 73, 195 World War II, 136 World Wide Web, 142, 143 Writing, 114, 116, 117, 122, 144, 183 Würm glacial period, 114 X-ray diffraction, 39, 40 X-ray sources, 154 X-rays, 38 Young, J.Z., 104 Zinjanthropus boisei, 110 Zona pellucida, 164 Zoonomia, 9, 13 Zuse, Konrad, 136 Zustandssumme, 193