MINDS, MACHINES AND
EVOLUTION

If a "machine" is any kind of system created by man, and "think" means everything we normally mean when we use the word, will a machine ever be able to think?

This question is asked a lot these days. It actually implies two different questions, and much of the confusion on this subject results from a failure to distinguish between them. The first asks if the suggestion is possible in principle, and might be rephrased: Given that a human brain is a system which thinks, is there any reason to suppose that no man-made system, as opposed to one that happens to have evolved "naturally," can be capable of doing likewise? The second asks if it will ever be achieved in practice. My experience from sitting on many artificial intelligence panels at science-fiction conventions has been that writers tend to answer the first question, and researchers in the field, the second—which usually leads to two separate dialogues between groups who aren't talking about the same thing. Since there is no question to be asked about the practicability of something that's impossible in principle, the first question is the one to start with in a discussion of the subject. It's also the more intriguing philosophically.

The question of whether man-made intelligence is possible in principle amounts to asking if "mind" can be adequately accounted for by the principles of physics, and nothing else. If it can, then there's no compelling reason to suppose that a man-made system—which would operate by the same principles—shouldn't be able to emulate it. If it can't, then we must conclude that the phenomenon of self-aware consciousness possesses something "extra"—some qualitative difference that sets it above being explainable by the same laws that explain everything else in the universe, and that therefore it will forever be beyond our ability to duplicate.

The first reaction of many people is to insist that something as intricate as "mind" could never arise solely from arrangements of molecules, neural circuits, and the other things we find inside the human brain. It violates their subjective notions of what makes sense. But this kind of intuition is dangerous and has littered the trail of human discovery with the wreckage of all kinds of "proofs" that something or other was impossible. It pays to be open-minded. Ever since primitive man found himself hard put to account for the sun, the moon, the winds, the tides, and so forth, dismissing unexplainable phenomena as "supernatural" has provided many with a quick and easy alternative to expanding their powers of explanation. The "vitalists" of the nineteenth century were similarly convinced that the laws of physics were insufficient to account for living matter, and proposed the existence of a "life force," which set it apart from the inanimate world. Today, life processes can be satisfactorily accounted for in terms of molecular chemistry. The vitalist argument has not gone away as a consequence, however, but has simply shifted levels and reappeared in a new disguise: instead of between the living and the nonliving, it now searches for some fundamental difference between the thinking and the nonthinking, between mind and body.

A basic principle of science asserts that the safest hypothesis to adopt is the simplest one that explains all the facts. In the present context, this means asking whether the physical processes we can already observe could, at least in principle, account for mind. If they can, then the simplest explanation will suffice and there will be no need or justification for introducing additional influences.

There is effectively no limit to the number of different books that could be written. We might describe our impression of one that we've read as "inspiring," "passionate," "entertaining and witty," and so on. Similarly a symphony might strike us as "majestic" or "somber." Well, where in such creations do properties like these exist?

At its elementary level every book consists of letters drawn from the same, very limited, alphabet. Clearly it would be ridiculous to look for such qualities as "inspiration" or "passion" at that kind of level. A letter of the alphabet can be one of only twenty-six possibilities, and hence the amount of information it can convey is very limited. But the act of stringing letters together to form words can convey enough different concepts to fill all the dictionaries of all the world's languages, plus form all the strings of letters that might have been words, but which, as it happens, aren't. Just this simple raising to a higher level of organization brings about an increase in the richness and variety of possible expression that is staggering. And beyond that, words can be arranged into sentences, sentences into paragraphs, until at higher levels it becomes possible to express every shade of thought and meaning from Alice in Wonderland to Kant's Critique of Pure Reason.
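For readers who like to see the numbers, a toy calculation shows how fast the possibilities multiply from one level to the next. The sketch below is in Python; the vocabulary size and word lengths are arbitrary figures of my own choosing, picked only to illustrate the scale.

    # Rough, illustrative counts of what each level of organization can express.
    # The dictionary size and sentence length are assumptions, not measurements.

    ALPHABET = 26

    # Level 1: a single letter can be only one of 26 things.
    single_letters = ALPHABET

    # Level 2: strings of one to eight letters -- candidate "words".
    candidate_words = sum(ALPHABET ** n for n in range(1, 9))

    # Level 3: ten-word sentences drawn from an assumed 100,000-word vocabulary.
    sentences = 100_000 ** 10

    print(f"{single_letters:,}")      # 26
    print(f"{candidate_words:,}")     # over 217 billion
    print(f"{sentences:.0e}")         # about 1e+50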

Language is organized as a hierarchy of increasing complexity, in which the variety of possible expression increases by a stupendous degree over even a few levels. New orders of meaning and relationship come into existence that cannot be expressed as properties of the elements that form the building blocks at a given level, but which arise as emergent properties of the way the elements are put together. In the same kind of way, every musical composition is built ultimately from the same set of notes, and every chemical substance from the same three subatomic particles. We see nothing remarkable in any of this, and while we might be astonished by the diversity that can arise from combining simple elements in different ways, we feel no need to invoke supernatural agencies to explain it.

The same applies to an even more striking degree in the process of biological evolution, in which more complex systems of organization emerge from simpler ones under the influence of selection. Originally, simple inorganic compounds gave rise to more complicated substances, which in the course of time evolved self-replication and progressed through single-celled organisms to the advanced, multicellular life forms of today. Again we see a hierarchy of progressively increasing organizational complexity, and at each successive level new properties become manifest which exist only in the context of that level, and which can't be described in terms of the subsidiary components. Thus, a single molecule does not possess any attribute of "elephantness"; a sufficiently large number of them, however, when put together in the right way, do.

Now, if the laws of physics, plus selection, a lot of time, and nothing more are sufficient to produce physical forms as sophisticated as the nose of a bloodhound or the airframe of a hummingbird, isn't it to be expected that the same processes should result in similarly sophisticated patterns of behavior? After all, survival is what matters in evolution, and behavior—how an organism interacts with its environment—is just as important to its survival as its physical attributes, and frequently more so. Having big teeth isn't much good without the capacity to recognize a threat and the motivation to defend yourself. Interacting with an environment consists of acquiring information from that environment, evaluating it, and responding in some way, all of which is performed by the nervous system. It follows that improvements to the nervous system as part of the general evolutionary process would confer significant survival benefits. It's interesting that when we trace the sequence such improvements are believed to have followed, we see emerging the qualities most people would consider essential to characterizing what we call "mind."

Primitive life-forms such as sponges evolved special cells that reacted to stimuli in the environment to trigger responses that, for example, increased the chances of capturing food. In later organisms like jellyfish these cells developed into simple neural networks capable of coordinating the movements of the entire animal to make possible such revolutionary strategies as directed mobility, with all the attendant advantages. In higher forms still, these networks developed concentrations of neural tissue which eventually acquired the structure and organization of the modern mammalian brain, and with it the ability to apply steadily more sophisticated processing techniques to the information gathered by the senses.

Here, then, is another example of a hierarchy of increasing complexity taking shape. As was the case with the structure of language, the evolving nervous system forms a hierarchy of increasing information-processing complexity, with different units of information operating at different levels. With language, the concepts dealt with become more abstract at higher levels of organization—farther removed from the "mechanical" low-level world of alphabet and syntax. Similarly, the units of information being processed at higher organizational levels in the nervous system become more distant from the raw-data world of the stimuli that impinge on the senses. Thus, we have the beginnings of a mechanism for assembling sensory data into higher-level symbols, and manipulating aggregates of symbols into a model of the world outside—a model inhabited not by wavelengths and energy quanta, but by objects, attractions, aversions, goals, and all the other factors that affect a higher-level entity interacting with a higher-level perceived environment.

Models that reflected the real world more accurately would enhance an organism's survival chances and hence be favored selectively for further improvement. In creating progressively more elaborate world models, the evolving brain would learn to synthesize a representation of the three-dimensional space in which it moved, the other objects inhabiting that space, and the interactions taking place between them. A crucial need in a survival-dominated environment would be the ability to distinguish the "self," whose survival is at stake, from the rest of the world around it. Assigning a special status to the focal zone of sensory impressions mapped into the world model gives rise to a self-model, which makes possible the emergence of directed action toward the goal of self-preservation, superseding purely automatic reflexes.

Given the ability of the brain to manipulate conceptual symbols that mimic the world that actually exists, it doesn't seem such a gigantic step to go on to manipulating the same symbols into representations of worlds that could exist. This would enable, for example, scenarios of a potential danger to be constructed from previously accumulated experiences and played through in advance, before it became a reality, allowing timely action to avoid it—or in a word, the faculty of anticipation. And once we're in a position to play with models of worlds and situations that don't exist, surely we're well on the way to displaying imagination and creativity.

This is all very well, but it won't do very much for our evolving organism's survival prospects if it gets so wrapped up in its internal fantasizings that it loses track of reality and fails to notice the tiger coming at it down the hill. Hence, this variety of complex activity going on inside the brain requires some kind of overseeing function to monitor its own processes, evaluate their relative importance, and decide which should take priority over which from moment to moment. This implies a degree of awareness. Being aware of the images being manipulated in the mind, and aware of the preferences that arise from evaluating their implications, adds up, does it not, to experiencing feelings (emotions, if you will) and exercising judgment. And when coupled to the self-model that we already have, it yields self-awareness.

Do we really mean any more than this when we talk about mind and consciousness? I'm not at all convinced that we do. The brain's ability to think requires no supernatural ingredient, but arises purely as an emergent property of its organizational complexity. What led to the phenomenon we call mind was the fact of an adaptive system being operated on by selection. The selection happened to be "natural," and the adaptive system happened to be biological, but those weren't the significant factors in yielding an intelligent, self-aware end product. Therefore there's no reason to suppose that other systems of comparable complexity shouldn't be capable of doing likewise. Hence man-made intelligence ought, in principle, to be possible.

If that is so, the second question we asked becomes worth pursuing: what, then, is the likelihood in practice? These days the question is usually asked with reference to computers.

The appearance of human intelligence enabled selection to be guided by choice instead of by the unconscious processes that had operated previously; speech and written language transmitted new information through populations virtually instantaneously compared to genetic encoding. This has enabled the development and spread of human culture at the staggering rate that history has recorded. But we have merely accelerated the evolutionary process, not altered it in essence. Whether we're producing a better political system, a Boeing 747, or a bigger and tastier tomato, we apply the same basic method that nature used to turn jelly into vertebrates and vertebrates into us: We experiment with variations of the themes we've got, forget the ones that don't work so well, and try further variations of the ones we decide are worth keeping. That's evolution—by artificial selection. And we are applying it vigorously to systems that are designed to do just what nervous systems evolved to do, namely process information and vary their behavior in response: computers. This is what sets computers apart from other, earlier technologies that have been offered as models of the brain. Perhaps, too, our familiarity with computers has helped make "mind" less mysterious than it used to be.

Of course, I'm not trying to suggest that what goes on inside even the most powerful of today's computers constitutes "thinking," or even comes close. But they do seem to be off to the right start, and only decades after their inception are mimicking in intriguing ways the reflexive, yet sometimes surprisingly elaborate, behavior of primitive nervous systems. I find it hard to believe that a jellyfish can think either; but obviously there was nothing to prevent it—or at least, something akin to it—from evolving into something that could.

For a start, computers possess a comparable hierarchical organization, in which the units of information being processed take on progressively more abstract meaning as we ascend through higher levels. The lowest level is that of the physical hardware, where the circuit chips lead a somewhat monotonous life shuffling binary digits through registers and combining them according to totally mechanical rules. At the higher levels of software activity which this traffic supports, the "bits" combine into codes that represent numbers, characters, instructions, and command strings to convey meaning at the more symbolic level in which programmers, rather than hardware engineers, think. And at higher levels still, these entities in turn are subsumed into programs, files, display formats, and so forth, which have lost all connection with electronics, and relate instead to things like bank accounts, airline flights, Adventure games, and the rest of the world of human affairs.
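For what it's worth, the way meaning appears level by level can be mimicked in a few lines of code. The sketch below is my own, with the bank-account details invented, but it shows the same raw bits taking on richer meaning as they are grouped and reinterpreted at higher levels.

    # The same information viewed at rising levels of abstraction.

    raw_bits = "01001000 01001001"            # lowest level: nothing but binary digits

    # One level up: the bits read as character codes.
    characters = "".join(chr(int(group, 2)) for group in raw_bits.split())
    print(characters)                          # "HI"

    # Higher still: characters and numbers grouped into a record that is
    # meaningful only in the world of human affairs -- a fictional bank account.
    account = {"owner": "A. Customer", "balance_cents": 123_456}
    print(f"{account['owner']}: ${account['balance_cents'] / 100:.2f}")   # A. Customer: $1234.56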

A common objection to the suggestion that this could ever lead to intelligence is that a computer, however elaborate, is still by nature a "machine," operating according to rigid, mechanical rules that will always cause it to respond to the same inputs in the same, predictable manner. Whatever tricks might be built into it to give an illusion to the contrary (such as deriving some input from internal random-number generators), it can still only do what it's programmed to do. Nothing that qualifies as "thinking," which ought to exhibit some element of free choice, or even capriciousness, could ever result from it.

It is true that at its elementary level a computer system is constructed from components that function mechanically and repetitively. But the same could also be said about us. The DNA, RNA, enzymes, and other constituents of the cells that make up our bodies function in ways that are quite mechanical and repetitive. The neural hardware that supports our mental "software" consists of bewildering interconnections of an enormous number of neurons, each of which behaves predictably. If the signals applied to a neuron add up in such a way as to exceed its activation threshold, it will fire; if they don't, it won't. The neuron doesn't go through agonies of indecision trying to make up some microscopic mind about what to do. At its level there isn't any property of "mind" to make up. The decision is made according to fixed rules, just like the decision of a computer logic circuit to generate an output.
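The rule a single neuron follows can in fact be written down in a few lines. The sketch below is a caricature, with weights and a threshold I have simply made up, but it makes the point: fixed rule in, fixed decision out.

    def neuron_fires(inputs, weights, threshold=1.0):
        """Fire if the weighted sum of inputs exceeds the threshold; otherwise don't."""
        total = sum(signal * weight for signal, weight in zip(inputs, weights))
        return total > threshold

    weights = [0.6, 0.6, -0.4]                # two excitatory inputs, one inhibitory

    print(neuron_fires([1, 1, 0], weights))   # True  -- enough excitation, it fires
    print(neuron_fires([1, 0, 0], weights))   # False -- below threshold, it doesn't
    print(neuron_fires([1, 1, 1], weights))   # False -- inhibition tips the balance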

The earth's atmosphere consists of a vast number of interacting elements, each of which is very simple in itself and behaves completely mechanically. At the microscopic "hardware" level, each molecule responds to a combination of forces exerted by its neighbors in a way that can be calculated precisely. But at the macroscopic level, totally new emergent properties manifest themselves as storm centers, cloud banks, rainfall, and other phenomena that cannot be expressed in terms applicable to molecules. Instead, we describe them in macroscopically meaningful terms, such as temperature and pressure—statistical measures of the composite effects of huge numbers of molecules whose individual motions can never be known with certainty. In the process we define a qualitatively new set of concepts which lose the precision and predictability that characterize the lower-level activity, and which acquire instead an increasing degree of uncertain, "whimsical" behavior.
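The emergence of a statistical, macroscopic measure from purely mechanical micro-events is easy to mimic. In the sketch below, "temperature" is just the mean squared speed of a hundred thousand simulated particles; the numbers are arbitrary and the physics deliberately crude, but no individual particle possesses the property that the ensemble does.

    import random

    random.seed(0)
    speeds = [random.gauss(500.0, 100.0) for _ in range(100_000)]   # mechanical micro-level

    # The macroscopic measure exists only for the ensemble, not for any one particle.
    mean_square_speed = sum(v * v for v in speeds) / len(speeds)
    print(f"ensemble 'temperature' proxy: {mean_square_speed:,.0f} (arbitrary units)")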

The fallacy with the objection is that it compares the activities taking place at the brain's highest, most abstract level with those at a computer's lowest, most mechanical level. It's a bit like saying that tree shrews could never evolve into humans because humans can build cities and write symphonies, whereas a tree shrew is just a collection of nucleic acids and proteins that are obviously incapable of such feats.

It's interesting to note, however, that the qualities of unpredictability and "whimsy" that many people insist on as indispensable prerequisites for intelligence are in fact beginning to appear in computer systems, too. A large "real-time" system, for example—perhaps for controlling an industrial plant or a communications network—typically contains hundreds or even thousands of different programs for carrying out various tasks that need to be performed at different times and in different circumstances. It also contains a list of priorities, specifying which task is the most important at any given time and should therefore run if it is ready to, which task is second priority and should run if the first is held up, and so on. Through thousands of input signals coming in from sensors around the plant, or from the network, the system constantly monitors and reacts to the changing conditions, perhaps by suspending the operation of one task to allow a higher-priority response to a critical situation somewhere, or by activating lower-priority fill-in jobs when there's nothing more pressing to attend to. The result can be a bewildering activity pattern of different programs being started, interrupted, waiting to execute, of interrupting programs themselves being interrupted by higher priorities still, all interlaced with the operations of supervisor programs to keep track of what's going on and orchestrate the other programs. Since this activity is all being driven by unpredictable events unfolding in the outside world, it's impossible to say in advance what state such a system will be in or what, precisely, it will be doing at any particular time (unlike a "batch" system, where it's always possible to say, for instance, that payroll is run on Thursday mornings).
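A toy version of such a priority-driven system takes only a dozen lines. The task names and priorities below are invented, but the behavior is the essential thing: which program runs next depends on what the outside world happens to throw at it, not on any fixed schedule.

    import heapq

    ready = []   # (priority, task name); a lower number means more urgent

    def event(priority, name):
        """An external signal makes a task ready to run."""
        heapq.heappush(ready, (priority, name))

    def run_next():
        """Dispatch (here, just report) the highest-priority task that is ready."""
        if ready:
            priority, name = heapq.heappop(ready)
            print(f"running: {name} (priority {priority})")

    event(5, "log routine readings")
    event(1, "respond to pressure alarm")     # arrives unpredictably and jumps the queue
    event(3, "update operator display")

    for _ in range(3):
        run_next()
    # -> the alarm response runs first, then the display update, then the routine logging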

Hence it's not really true, even today, to say that a computer system will always respond in the same way to the same set of inputs. Its response will depend not only on the information coming in through its sensors from the outside world, but also on its own internal "state" at the time, and this in turn will depend on its earlier history, i.e., its "experiences." What's programmed in is the potential to react to various external stimuli in different ways, without any specific large-scale behavior being predefined—just as is true of sponges, jellyfish, tree shrews, and, with much broader ranges of variability, people.
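Here is a minimal sketch of the idea, with a deliberately contrived example of my own: the same stimulus produces different responses because the system's internal state carries its history with it.

    class Creature:
        def __init__(self):
            self.times_burned = 0                 # internal state built up from "experience"

        def touch_flame(self):
            self.times_burned += 1
            return "reaches out" if self.times_burned == 1 else "recoils"

    c = Creature()
    print(c.touch_flame())   # "reaches out" -- first encounter with the stimulus
    print(c.touch_flame())   # "recoils"     -- same input, different state, different response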

Because a human brain is far more complex than these systems, the number of different internal states that it can assume is vastly greater. So if it's true that even with today's large computer systems the same external inputs do not elicit the same responses, it will be much more true of the brain. In fact, the brain can never revert exactly to any state that it was once in previously; however close it gets, the very fact of having once been in the earlier state will have left impressions—not necessarily conscious—that weren't there the first time, and hence the state that exists later must be different by at least that much. I find this a far more plausible basis for the variability of human behavior than attempts to derive it from random quantum mechanical effects at the molecular level. Variability of behavior implies that our responses are correlated, to some degree, with the macroscopic realm we perceive, and that is a different thing from randomness. As Schrödinger conjectured, the reason we evolved to be so much bigger than atoms could be precisely that, with objects at the macroscopic level, the uncertainties that dominate the quantum realm are swamped out. In other words, only at higher scales of magnitude does a predictable and repeatable world in which rational intelligence can evolve become possible. Linking our mental activities to the quantum fluctuations of neural atoms would appear to put us back where we started—literally.

The highest-level activity of our brains, our experience of awareness, doesn't extend down to the operations taking place at the lowermost neural hardware level. We think and communicate in terms of persons, places, ideas, and things, with no innate knowledge of the streams of electrical impulses swirling around in our heads, or the chemical codes by which various cells and organs in our bodies exchange messages. They, at their own levels, communicate in their own languages; we, at our level as conscious totalities, communicate in ours.

It's not difficult to see why consciousness should have become shut off in this way from lower-level processes. The simple act of raising an arm involves the coordinated action of something like forty muscles, each of which needs a discrete neural signal to tell it to contract by the right amount at the right time. Such muscular sequences are controlled by fixed "microprograms" hardwired into the brain, which are triggered by high-level commands that we initiate voluntarily. If we had to monitor every step of such sequences consciously, our brains would be constantly saturated with mundane detail. Leaving such routine chores to a subconscious realm frees up our voluntary and conscious abilities for more valuable problem solving.

Again, the beginnings of the same kind of thing are evident in today's computer systems. At the basement level, the machine's elementary operations are controlled by microcode embedded in the hardware. Microcode is the language of circuit chips and hardware designers. At the first software level, a single "machine instruction," typically written in alphanumerics—to execute an ADD operation, for example—triggers a whole sequence of microcode functions, and such instructions form the units in which "machine-language" programs are written. The machine-language programmer does not have to understand microcode to write a program, or even have to be aware that it exists. However, a machine-language program does reflect the architecture of the machine it's designed for, hence its name. A step further removed from the hardware are "high-level" languages, consisting of commands that initiate sequences of machine instructions, which make it possible, for example, for researchers to write programs directly in scientific and mathematical terms, without having to learn machine language. And at higher levels still we find "system" and "user" commands which control the operation of entire programs and are meaningful in real-world terms, without the user having to know or care if what exists behind the buttons is electronics, clockwork, or black magic. Indeed, it's difficult to see how it could be very much different. If every user had to understand microcode to check a bank balance or play Adventure, the computer industry wouldn't have gotten very far.
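To make the notion of levels concrete, here is the same small computation written twice, once the way a "high-level" programmer would state it and once spelled out step by step. Both versions are in Python, which sits far above real machine language and microcode, but the relationship between the levels is of the same kind.

    readings = [3.25, 4.5, 1.0, 7.75]

    # "High-level": the programmer states what is wanted.
    total_high = sum(readings)

    # "Lower-level": the same result spelled out as the step-by-step sequence
    # that lies closer to what the machine actually performs.
    total_low = 0.0
    for value in readings:
        total_low = total_low + value

    assert total_high == total_low
    print(total_high)                 # 16.5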

Designing any kind of system involves tradeoffs. Some of the requirements that the system has to meet will always conflict with others, and improving the design in one direction invariably extracts penalties in others. Thus a toaster is great for making toast but not much good as a blender; F-15s wouldn't be the right buy for Pan Am, and so on. "General purpose" systems offer a compromise by fulfilling a number of roles moderately well without excelling at any of them, for example, the family car or a home computer. You could say that these trade off excellence for versatility. The human nervous system is probably the best example of versatility that we know. It can do practically anything to a degree, but its performance in any given area is limited. So we supplement it with all kinds of specialized accessories such as microscopes, high-speed calculators, and long-range communications equipment, each of which outperforms it by orders of magnitude in its own field, but is useless for anything else.

Conceivably, if we ever did produce a system of comparable versatility to the brain, we might find that one of nature's basic trade-offs is that thinking wide and thinking narrow are mutually exclusive. In return for versatility, we could find that we have to sacrifice many of the features that we associate with the highly specialized machines of today. In the same way that our consciousness operates without any awareness of what its neurons are doing, or even that it has any, a man-made electronic (or photonic, or biosynthetic, or whatever) intelligence might find itself shut off from the substrate levels at which its fast and mathematically precise activities were taking place. Perhaps, therefore, it wouldn't be able to perform astronomic calculations in seconds, or recall word for word a conversation that it had a week ago, or make a decision without wrestling with all kinds of imponderables. So what would it do if it wanted to know pi to a few thousand decimal places? Well, I suppose it would have to either build itself a computer or buy one. And that's a good reason to suppose that long before then we'd have started calling it something else.

It's funny how the right people have a knack of popping up at just the right time. I became interested in writing a book about machine intelligence at about the time I moved to the U.S.A. with my second wife, Lyn, in late 1977. I'd developed a few thoughts and ideas, but before getting serious, I felt I needed to bounce them off somebody who knew a lot more about the subject. One Saturday morning over breakfast I said, "Who do we know who's an Artificial Intelligence expert?"

She replied, "But we've only just arrived in this country. We hardly know anyone yet"—which was about all that could be said on that.

And then, the very next morning, Sunday, the phone rang and a voice said, "Hi, my name is Marvin Minsky. We haven't met, but I'm director of the AI department at MIT. Nobody has written a good book about AI yet. I read Inherit the Stars and liked it, and I think maybe you could. How would you like to come along and take a look at what we're doing here, and talk about ideas for fiction?"

I got to know Marvin and his family, and the outcome of our talks about ideas for fiction was The Two Faces of Tomorrow, which was published in the summer of 1979.

Perhaps the sign of when artificial systems have become smarter than we are will be when they start making up ethnic jokes about people:

"How many humans does it take to change a light bulb?"

"How many?"

"One hundred thousand and one."

"How come?"

"One to change the light bulb. The rest as biological ancestors to produce him. How inefficient can you get?"
