
CHAPTER EIGHT




The next morning Dyer and Richter met as arranged, boarded an autocab and specified the Department of Communications and Information Management Headquarters, Washington, D.C., as their destination. The cab navigated Manhattan and was pipelined across to New Jersey along one of a battery of ten monorails suspended in a single span four hundred feet above the Hudson. There it turned south and merged with forty-nine other cabs to form a train which accelerated as one unit into the New York-Washington tube, through which it hurtled in vacuum, riding on magnetic suspension at speeds touching 800 mph for most of the way. At the far end the train broke up to become independent cabs once again, which dispersed into the Washington local system. Twenty minutes after leaving New York, Dyer and Richter were in an elevator ascending from the autocab terminal located below the CIM HQ building.

The meeting was chaired by Dr. Irwin Schroder, the U.S. Secretary for CIM. Attendees included Fritz Muller, vice chairman of the Advisory Committee on Information Technology to the Supreme Council of World Governments, which jointly managed the global operation of TITAN as one of its functions. A group of Muller’s Committee advisers representing several national interests, a selection of individuals from various academic and commercial institutions, and a small delegation from CIM completed the gathering.

Proceedings began with introductions and a résumé of the incident at Maskelyne. After that, Schroder took some time to summarize the main question of concern that the meeting had been called to consider. If TITAN had developed the ability to act independently to this degree after only one year, what might it do later and what should be done about it? One choice, of course, was to downgrade TITAN by de-HESPERing it; then it wouldn’t be able to do anything. Opposition to this move came mainly from the academics, who favored the policy that Richter had already advocated privately to Dyer—pushing ahead to replace HESPER with a perfected development of FISE as quickly as possible. They pointed to the vast improvements in living standards and general affluence that had been realized within the last fifty years and attributed most of this progress to the effects of TITAN. Even this was just a beginning, they declared, and would pale into insignificance compared to what the next century held in store. It would be tragic if all that were to be thrown away because of nothing more definite than a few what-ifs. To defend their case they presented elaborate contrasts between the evolutionary processes governing the growth of organic and inorganic systems, and argued that it was ludicrous to suggest that a machine would parallel human motivations and ambitions. In this they were echoing the orthodox line that Dyer had given his visitors from Princeton two days before. They finished by reiterating Richter’s insistence that problems were nothing new in life and were there to be solved, not evaded. If problems did develop to an intolerable degree for some reason, the option to pull the plug would always be there as a last resort.

Dyer then summarized his progress with FISE and endorsed the view that though a supercomplex the size of TITAN, equipped with processors that were superior to FISE, might well evolve behavior that would have to be classed as intelligent, the supposition that it might come to think and feel like a human being was too farfetched to be worth considering. A mumbled chorus of assent from the academic fraternity greeted his words. By his side Richter began breathing more easily.

At that point Schroder sat forward to sum up his interpretation of what Dyer had said. “I take it then, Dr. Dyer, that you are adding your support to the recommendation that we heard earlier. We should press ahead with FISE with the aim of upgrading the net at the earliest opportunity. Whatever risks are entailed by living with HESPER in the meantime would not justify the cost of going backward to EARTHCOM.”

“We’ve got ’em hooked,” Richter whispered jubilantly. “They’re coming around.”

“Yes and no, Mr. Chairman,” Dyer responded. “FISE research has to be pursued vigorously. There can be no question about that. But I think the question of upgrading the net should be thought of as something that belongs in the indefinite future . . . if we ever do it at all.”

Mutterings of surprise broke out around the room. Richter brought his hands up to cover his face.

“Don’t say any more,” he said from the corner of his mouth. “You’ll blow the whole damn thing. Just get ’em to give the okay on FISE now. We can leave the arguing about it going into the net until some other time.” But it was too late; Schroder wanted to take the point further.

“I thought the problem with HESPER was that it’s only half smart,” Schroder said, looking surprised. “If FISE would fix that and there’s not likely to be any problem of it trying to take over the world, why shouldn’t we use it?”

“I only agreed that it wouldn’t think like a human being,” Dyer replied. “I didn’t say it mightn’t act like one.”

Jan Van der Waarde from Cape Town University shook his head perplexedly. “I don’t understand. What is the difference? Why should it act like it doesn’t think?”

“The problem is that an intelligence that is totally alien but totally rational could emulate certain types of human behavioral traits, but for completely different reasons,” Dyer replied. “It could act in ways that we would see as amounting to rivalry with Man, but not for the same reasons as a human would act that way. In fact the very concepts of rivalry and Man would almost certainly mean nothing to it.”

“You mean it could wind up setting itself against us without even knowing it.” Paul Fierney, a technical adviser with CIM, looked dubious. “How would it do that?”

“You’d agree that the things that would worry us would be if it ever began to exhibit tendencies which in human terms we’d describe as anger, resentment, aggression, feelings of superiority or any of that kind of thing,” Dyer said.

“Okay,” Fierney agreed. “But I thought we’d all agreed that a machine wouldn’t feel things like that.”

“You’re right,” Dyer said, nodding. “But when you say that a person feels any of those things, how do you know? How do you know what he feels inside his own head?” He gave them a few seconds to reflect on this and then supplied his own answer. “Obviously you can’t. All you can know is what you see him do and hear him say—in other words by his observable behavior. What I’m saying is that different causes can result in identical effects. If some other cause were to result in the kinds of behavior that go with the emotions I’ve just listed, as far as we would be concerned there wouldn’t be any difference. If somebody comes at you with an axe it doesn’t make any difference if he’s doing it because he hates your guts or because he’s quite rational but thinks you’re a monster from Venus. The result’s the same.”

“I think maybe we go away from the point.” The speaker was Emilio Gerasa from Spain, one of Fritz Muller’s contingent. “Isn’t the problem with HESPER that of incompetence, not all these other things? Why do we speak of these other things, like the anger and so on?”

“FISE would solve the competence problem,” Dyer assured them. “I don’t have any worries in that direction. I’m more worried that it might end up being too competent.” A few mystified glances were exchanged in parts of the room.

“The emotional traits that we’ve mentioned, along with pretty well all the rest, can be traced back to one root—survival!” Dyer told them. “If an enhanced TITAN ever evolved the motivational drive to preserve its own existence, the very fact that it’s a rational system would enable it to devise very effective ways of going about it. Also, since it’s an extremely powerful learning machine that operates at computer speeds, once it started to do something, it would do it very fast! If the machine interpreted agencies in the universe around it as constituting real or imagined threats to its existence, then the rational thing for it to do would be to experiment until it identified measures that were effective in neutralizing those agencies.” Dyer shrugged. “If one of them turned out to be us or our vital interests, we could have real problems.”

Schroder leaned across to confer with Muller for a few seconds; Muller nodded, then shook his head and gestured in Dyer’s direction. Schroder looked up again.

“Maybe I’ve missed the point,” he said. “But I thought you agreed a little while ago that a machine wouldn’t possess a human survival drive because it hadn’t come from the same origins as humans. Now you seem to be saying that it will. Could you clarify that, please?”

“He is talking in circles,” Van der Waarde muttered.

“And why should it feel threatened and act against us when it doesn’t share any of our survival-based emotions?” Frank Wescott, who was present to represent CIT, challenged. Richter was by this time sitting back glumly, resigned to hearing whatever Dyer was going to say.

“Because it wouldn’t even know it was doing so,” Dyer answered. “That kind of question still presumes that it would think in human terms. I’m talking about a totally rational entity that simply modifies its reactions to an environment around it. It hasn’t had the evolutionary conditioning that we’ve had to understand the concept of rivalry or even that beings other than itself exist. All it’s aware of is itself and influences impinging on it that are external to itself. Now do you see what I’m getting at? It wouldn’t consciously or deliberately take on Man as an opponent because in all probability it would have no concept of Man per se.”

“Very well, Dr. Dyer.” Fritz Muller held up a hand. “We take your point. But tell me, what kinds of circumstances do you envisage occurring that might equate to a clash of interest between us and it? Let us not worry for now about whether or not the two parties look upon the situation in the same way.”

Dyer paused to consult the notes that he had prepared beforehand. The CIM people and the advisers from the World Council committee were watching him intently while the academics were looking unhappy and muttering among themselves. Richter was glowering up at him over folded arms.

“Consider the following scenario,” Dyer resumed. “The system has evolved some compulsive trait that reflects the reasons for its having been built in the first place—a counterpart to the survival drive of organic systems. The other day, somebody I was talking to suggested that it might become insatiably curious, so let’s take that as its overriding compulsion. It doesn’t know why it wants to be that way any more than we know why we want to survive. It’s just made that way. To discover more about the universe, it requires resources—energy, instruments, vehicles to carry the instruments to places, and, of course, a large share of its own capacity. Moreover, the system finds that it has access to vast amounts of such resources—a whole world full of them. So it follows its inclinations and begins diverting more of those resources toward its own ends and away from the things that they were intended for. As far as we’d be concerned, it would be manifesting indifference. Our goals would cease to figure in its equations and we’d face the prospect of being reduced to second-class citizens on our own planet.”

“Only if we just sat there and allowed it to help itself,” a professor from Hamburg interjected. “I can’t see that we would. Why should we?”

“Which brings us to a second scenario,” Dyer carried on. “We take active steps to deny it access to the resources it wants. The system retaliates in kind by denying us the resources that we want, say by progressively shutting down energy plants, grounding air traffic, blacking out cities . . . all kinds of things.” He raised his hands to stifle the objections that appeared written across several faces. “Don’t forget, I am not postulating that the system has any concept of Man or sees its behavior in the same terms as we would. But this is a powerful learning machine! All it knows is that certain events in the environment around it are capable of obstructing its goals, and that certain coordinated actions on its part have the effect of stopping those events from happening. It’s like a dog scratching. It just feels uncomfortable and learns that doing certain things makes it feel better. The dog doesn’t have to be an entomologist or know that it’s fleas that are causing the discomfort.”

“But how could it possibly know that cutting off power to cities or anything like that would help?” Gary Corbertson, Director of Software Engineering from Datatrex Corporation, shook his head in disbelief. “I thought you said it wouldn’t know anything about people. How could it figure out how to blackmail them if it didn’t even know about them? That doesn’t make sense.”

“It wouldn’t have to figure out why it worked,” Dyer replied. “All it would need to know is that it did. Suppose it decided that it wanted a Jupiter probe all to itself, but we tried to take the probe away and it responded by shutting down cities at the rate of ten per night. Suppose also that we knew why it was doing it. What do you think we’d do?” He nodded slowly around the room. “It’d get its Jupiter probe pretty soon, wouldn’t it?”

“Mmm . . . I think I see the point,” Schroder said slowly. “All a baby has to know about the world is that when it screams loud enough it gets what it wants.”

“Good analogy,” Dyer agreed. “I’m not suggesting the system would do anything as sophisticated as that to start with, but like a baby it would experiment, observe, connect and hypothesize. Pretty soon it would have a fair grasp of what actions resulted in what effects.

“And now take our supposition one step further,” he went on. “What if the coordinated actions that it learned amounted not merely to blackmail but to overt aggression? As far as the machine’s concerned there’d be no difference—certain actions simply make the discomfort go away or the comfort increase. That’s where the fact that it doesn’t possess any human values or concepts at all becomes really worrying. Another scenario—it discovers that it gets far faster and more positive results when it doesn’t stop at threats; it carries them out. Now it’s exhibiting open hostility as far as we’re concerned, but it doesn’t know it.

“So, without invoking any human attributes at all, we’ve just taken it through the whole spectrum from indifference to hostility—a perfectly plausible simulation of behavior that we thought we wouldn’t have to worry about because it couldn’t evolve the emotions that normally accompany it. But now we see that it wouldn’t have to evolve any such emotions.”

As Dyer sat down, Richter, now looking less disgruntled, leaned toward him.

“Christ, Ray,” he said over the hubbub of voices that broke out on every side. “Is FISE really capable of going all the way to that extent?”

“Not one of them in a lab,” Dyer told him. “But what happens when you connect thousands of them up together? Would you want to put money on it?” Richter sat back, shaking his head slowly and frowning to himself. The meeting subsided to silence again as Campbell Roberts, Muller’s representative from Australia/New Zealand, began to speak.

“I still think we’re exaggerating the whole thing,” he declared loudly. “So there are risks. Nobody ever said there weren’t. All through history men have taken risks where the benefits they stood to gain justified them. But as we said earlier on, if the system starts doing things we don’t like, we can always pull the plug on it. If we have to, we can always take the bloody thing to bits again. Why in hell’s name are we getting so hung up about some lousy machine developing a mind of its own? We’ve got minds too, dammit, and we’ve been around a lot longer. If it wants to play survival games I reckon we could teach it a thing or two. I say put FISE in and make damn sure it never forgets who’s boss. Homo sapiens have had plenty of practice at that!”

“Maybe it won’t let you pull the plug,” Fierney pointed out.

“That’s bloody ridiculous!”

“I’m not so sure it is,” Muller commented. “Even now TITAN controls its own power sources and the manufacture of most of its own parts. If current forecasts are anything to go by, it will soon control everything related to its own perpetuation—from surveying sources of raw material to installing extensions to itself and carrying out one-hundred-percent self-repair. On top of that it controls other machinery of every description. It might not reach the point of becoming incapable of being switched off, but it could conceivably make the job of switching it off an extremely difficult and possibly costly undertaking.”

“But why should it want to do that in the first place if it doesn’t have a survival instinct?” Roberts objected.

“What have you got to say to that, Dr. Dyer?” Schroder invited.

“The same thing applies as before,” Dyer said without hesitation. “If the system evolved some overriding purpose that its programming compelled it to strive to achieve, it wouldn’t take it long to figure out that its own continued existence was an essential prerequisite to being sure of achieving it. Its own observations would tell it that its existence could not be guaranteed as things stood, so its immediate response would be to experiment in order to find out what it could do to remedy the situation. The rest follows logically from there. In other words, here we have a mechanism via which something tantamount to a survival instinct could emerge without the need for any survival-dominated origins at all. And as I said before, once you’ve got a survival instinct established, all the emotions that go with it will follow in the course of time.”

Dyer paused to allow his words time to sink in and then summarized his view on the things that had been said.

“If the system started to exhibit any of the traits we’ve been talking about, that in itself wouldn’t add up to an insurmountable problem because, as Campbell says, we can always pull the plug. As long as that’s true, the benefits outweigh the risks; and if that was all there were to it, my vote would be to upgrade the net. But that isn’t all there is to it. If the system were to evolve a survival drive, logically we would expect it to attempt to make its plug unpullable. Even that, in itself, wouldn’t be a problem if it didn’t succeed. After all, it wouldn’t matter much what the system wanted as long as it was incapable of doing much about it. If we could guarantee that, I’d still say upgrade the net. But we can’t.

“It all boils down to two questions. One: Could the system evolve a survival instinct? Two: If it did, what could it do about it? The second is really the key. Until we can find some way of answering that with confidence, I can’t see our way clear to taking things further.”

A long silence followed Dyer’s words. Then Schroder took up the debate.

“I’m inclined to agree that we can’t recommend putting FISE into the net at this stage. As to the question of continuing with FISE research, that’s a funding issue that doesn’t concern this meeting. But something else bothers me. Everything that has been said this morning has assumed that we’ve been talking about a supercomplex that includes FISE machines. But the business at Maskelyne happened with the system as it is now. Even with just HESPER, TITAN showed itself to be capable of integrating its activities to a degree that nobody thought possible.” He gestured vaguely toward the door. “Out there is a world that’s being run by a supercomplex of HESPERs. What guarantee do we have that the kinds of behavior you’ve described can’t happen even today with the system we’ve got?”

Dyer had been expecting the question. He held Schroder’s eye and replied simply. “None.”

Schroder considered the answer for a long time. At last he sighed and stretched his arms forward across the table in front of him.

“The objective of the meeting was to agree what to do about HESPER,” he reminded them all. “We have three choices: Allow TITAN to grow further, freeze it where it is now, or downgrade it by taking HESPER out. We can’t allow it to grow further until we have some way of obtaining guaranteed answers to Dr. Dyer’s two key questions. If we leave it as it is, we risk a repetition of the Maskelyne kind of accident but maybe on a catastrophic scale, which would clearly be totally unacceptable. Therefore, as I see it, the only choice open to us is the third. Does anybody here disagree?”

“Except you’re only going to have to cross the bridge sometime later anyway,” Richter threw in. “It doesn’t matter what kind of machines you develop in labs after you downgrade the net, the only way you’ll ever know how a planetwide complex of them will perform is by building it. In the end you’d still wind up in the same position.”

“But we don’t know what’s on the other side of this particular bridge,” Schroder pointed out. “It may lead to somewhere we don’t want to go, and if it does it will only be one-way. The key question is: Could the system make itself invulnerable? The only way we could answer that for certain would be if it did. That is obviously unacceptable because once it had reached that point it could do anything it pleased afterward. As Dr. Dyer says, we can’t proceed further until we know the answer. On the other hand, the only way we can find the answer is by getting there. It’s a vicious circle. The only alternative is not to try getting there at all but stay where we are. But the present position is unstable because of HESPER and Maskelyne, so the only way open is back.”

“So civilization levels out on a plateau,” Richter objected. “And from that line of argument there’s no way past it. What happens when we find we need something heftier than TITAN?”

“Let’s try some positive thinking,” Fritz Muller suggested. “Everybody is saying we can’t go further because we don’t know the answers to Dr. Dyer’s two questions. That’s negative. Let us say instead that we could go further if we knew the answers. Positive.”

“I like it,” Richter said immediately. He looked at Schroder. “That’s what you should put in your recommendation! Recommend to CIM and to the World Council that we do something to get some answers instead of backing off.”

The meeting concurred and Schroder duly entered the point in his notes.

“Exactly what are we recommending them to do?” he inquired, looking up at Richter as he finished writing. Richter hadn’t really thought about it. He blinked, frowned to himself, rubbed the tip of his nose, and at last looked back at Schroder.

“I don’t know,” he confessed simply.

Schroder shifted his eyes inquiringly toward Dyer.

“I don’t know,” Dyer told him.

The meeting went on to examine the problem from a dozen different angles, but at the end of the session it was obvious that nobody else knew either.



