R.I.P., MOC

by Terry Burlison


We're all familiar with the heroism of astronauts and how they risk, and sometimes sacrifice, their lives to explore space. Stories have recently emerged about NASA's flight controllers: the men and women of Mission Control who kept those spacecraft flying and ensured their safe return to Earth. But laboring tirelessly below them all, figuratively and literally, was the Mission Operations Computer—the machine that for four decades ran Mission Control. Here is the story of the MOC, and how a machine with far less computing power than a cheap cell phone once enabled us to walk on the moon.


On a blistering Houston summer day in 2002, a legend of the manned space program retired. The ceremony took place in the Mission Control Center, where the retiree had labored tirelessly for nearly four decades. Few people attended. No cake or ice cream was served, no gold watch or plaque presented. Indeed, most NASA employees took no notice at all, because the retiree wasn’t a person.

It was a machine.

For nearly forty years, the Mission Operations Computer was the electronic heart of the MCC. The MOC, or “mock” as it was lovingly (and sometimes not-so-lovingly) called, drove the console displays and indicators, calculated orbits and trajectories, and enabled men and women to fly in space, build space stations, and land on the moon. It was alternately the object of admiration and annoyance, of fondness and frustration, but without it America could not have had a manned space program.


The Birth: Project Gemini

The MOC was born in the early 1960s, as NASA prepared to fly the Gemini program. Prior to this, flight controllers monitored the Mercury flights from ships and ground stations scattered around the planet. Tracking data was sent back to the main computing complex at Goddard Space Flight Center in Maryland, and forwarded to the primary control center (“Mercury Control”) at Cape Canaveral, Florida.

Deploying flight control teams around the world created nightmarish problems of logistics, training, and communication. Medical care sometimes bordered on barbaric. Travel or living arrangements could get fouled up, leaving controllers scrambling for a room—or a ride home. Communication was so primitive that flight controllers wrote spacecraft data down on paper and then sent it back to Mission Control by teletype.

This ad-hoc situation, designed in the heat of the Cold War battle to put men in space, worked for the simple Mercury flights, but for upcoming Project Gemini, NASA needed a better system.

Gemini was a much more ambitious project, its goal to teach NASA how to work in space—indeed, how to perform nearly all the functions required to land on the moon. This required a centralized location for the flight control teams, with vastly more computational power at their disposal. Thus, in 1962 ground was broken outside Houston, Texas for the Manned Spacecraft Center (later renamed the Johnson Space Center).

The nerve center of the MSC was Mission Control, a massive three-story windowless concrete box known as Building 30. In the middle of the top two floors sat identical Mission Operations Control Rooms, or MOCRs (pronounced moh-kurs). These are the control rooms television made famous during Gemini and Apollo—dark, cold rooms with a dozen or so crew-cut flight controllers huddled over their consoles while astronauts soared through space. Dual MOCRs enabled NASA to control two vehicles at once, a task required for Apollo lunar missions.


Figure 1: The first computers arrive at the MSC in 1963 (Photo courtesy of NASA)


Small “back rooms” surrounded the twin MOCRs and housed the support teams for the front room controllers.

And on the first floor dwelt the MOC.

The MOC was the central component of the Real-Time Computer Complex (RTCC), a system of computers for designing missions, testing software, running simulations, and, of course, controlling the space flights.


Figure 2: The IBM 7094—Yes, everything in the photo! (Photo courtesy of Paul Pierce)


When Mission Control “went live” for its first mission, Gemini IV in 1965, the RTCC housed five state-of-the-art IBM 7094 computers, each composed of several cabinets the size and bulk of large refrigerators and filling the cavernous computing room. Two of the machines were reserved for software development, where programmers tested new code. Another ran simulations, enabling flight controllers to practice missions. The fourth was the Dynamic Standby Computer, a back-up to be used in an emergency.

The fifth was the Mission Operations Computer. The MOC handled all computing requirements during space flights, such as displaying telemetry—information sent down from the spacecraft that essentially tells ground controllers, “Here’s how things are going.” The MOC also enabled controllers to send commands to the vehicle, such as turning on equipment. Most challenging, the MOC computed the spacecraft orbits and maneuvers, tasks that required tremendous capability because of the complex math involved. To accomplish these tasks, the MOC drove some 40 displays and 5,500 event lights on consoles throughout the MCC.


Figure 3: The RTCC in 1966 (Photo courtesy of NASA)


The vast majority of the MOC’s code (ultimately, some 17 million lines) was written in IBM Assembler Language, an arcane computer language one step removed from binary machine code and best understood by human/machine cyborgs. (These programmers sometimes refer to themselves as “assembly lizards.”) The rest, primarily the trajectory code for launch, orbit/rendezvous, and entry, was coded in FORTRAN due to the intense mathematics required. (The complexity of this code would haunt future upgraders and keep the MOC online years after it was scheduled for the scrap heap.)

The MOC’s original IBM 7094 was the fastest, most powerful computer of its day. It sported a whopping 64K of main core storage, the area of a computer where programs actually run. To get a sense of scale, consider that a modern laptop has about the same memory capacity as 100,000 MOCs. (Playing my copy of Angry Birds would require the memory of 500 IBM 7094s. And while fast for its day, the machine would require several months of 40-hour weeks to play a single minute of the game!)
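That scale comparison is easy to sanity-check. Here is a back-of-the-envelope sketch, with illustrative assumptions only: the 7094’s “64K” treated as 64 KiB, and the modern laptop assumed to carry 8 GiB of RAM.

```python
# Back-of-the-envelope check of the "100,000 MOCs" comparison.
# Assumptions (for illustration only): the 7094's "64K" of core
# treated as 64 KiB, and a laptop with 8 GiB of RAM.
moc_core = 64 * 1024           # bytes of MOC main core storage (assumed)
laptop_ram = 8 * 1024**3       # bytes in an assumed 8 GiB laptop

print(laptop_ram // moc_core)  # -> 131072: on the order of 100,000 MOCs
```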

Since this paltry memory was not remotely enough to hold those millions of lines of code, any of which a flight controller might need at a moment’s notice, a system had to be devised to preload the functions each mission phase was expected to require.

Hard drives were neither reliable nor capable enough at this time, so the blocks of code were stored on tape drives: half-inch-wide magnetic tape spooled onto reels the size of dinner plates. Each tape, nearly a half-mile in length, held only a few megabytes of data—roughly a millionth the capacity of a modern hard drive. Since a flight controller could grow old, retire, and die waiting for information to load from the slowly spinning tapes, IBM added something called Large Core Storage (LCS), a form of magnetic-core RAM. This acted like additional core storage, enabling programs to be pre-loaded from the glacially slow tapes to the LCS, where they could be quickly loaded into the MOC when requested.

This required ballet-like choreography between the flight controllers and the keepers of the MOC. A space mission is broken into phases: Ops 1 for launch/insertion, Ops 2 for on-orbit/rendezvous, and Ops 3 for entry/landing. As a mission transitioned between phases, computing personnel (sometimes called “tape apes”) mounted the appropriate tapes and transferred the needed functions to the LCS, where they waited to be fed to the MOC.

This led to problems, as I discovered shortly after coming to work as a Flight Dynamics Officer (Fido), one of the front-room positions. It was July 10, 1979—aka The Day I Crashed the MOC.

Skylab, America’s abandoned space station, was clawing its way through the upper atmosphere on its final orbits before plummeting to Earth. NASA was carefully tracking the station, using Mission Control to monitor its demise. I had been at JSC for only a month. I’d sat behind the Fidos and Trajectory officers at many simulations, but since this was the “real thing,” I was not allowed in the front room. So I found an empty console in one of the back rooms, opened my Fido Console Handbook, and began monitoring Skylab on my own.

After a couple hours of watching it “bore holes in the sky,” I got bored myself and started punching up different mission phases on my console’s Display Request Keyboard: Launch. Orbit. Entry. Rendezvous.

Rendezvous sounded interesting, so I requested a display. Instantly, all the data froze on my screen. Moments later, the call came over my headset, “We’ve lost the MOC.” During simulations, that meant the MOC had crashed and it would take a half-hour or so to resuscitate it. Essentially, the entire Mission Control Center was now offline.

I unplugged my headset and returned to my office, thinking what an odd coincidence that the MOC crashed the moment I requested the display. Casually, I asked my office mate, an Apollo/Skylab veteran, about the rendezvous displays.

“Don’t request one of those,” he warned. “You’ll crash the MOC!”

“Well,” I replied, “I’m glad I asked!” I never again requested a display unless I knew exactly what the hell it did!


The Glory Years: Apollo

During the stand-down between Gemini and Apollo (which got extended 18 months after the tragic Apollo 1 fire), the computing complex was upgraded to new IBM 360 computers. Apollo would push the boundaries of real-time computation far beyond Gemini, and required faster, more robust hardware. The 360s had a million bytes of core memory, but the code had also grown in size and complexity. Tapes were still used (and would be well into the Shuttle era) for “checkpoints”: backup snapshots of the system that could be loaded into the MOC to restart a simulation or to recover from crashes. Flight functions, such as computing orbits or solving rendezvous problems, were still pre-loaded onto the LCS. At times the workload became too much for the solitary MOC, laboring as fast as its data buses would allow, and the Dynamic Standby Computer would be called into service to offload computations for some mission phases.

Despite the fact that the IBM machines were never designed to support real-time operations—and had less memory and slower processors than some modern wristwatches—the MOC performed brilliantly for both Gemini and Apollo. But it wouldn’t be able to handle what came next.


The Space Shuttle

During the six-year hiatus between Apollo and Shuttle, the computer complex was upgraded yet again. New IBM 370 computers boasted 8MB of storage. The Large Core Storage machine disappeared; hard drives now spun in Building 30, though due to their limited capacity they were still loaded from tape drives. The consoles hadn’t changed: flight controllers still risked their eyesight staring at grainy, low-resolution monochrome screens, old-fashioned Nixie tubes still counted down mission events, and hardcopies were still printed on greasy thermal paper and sent in metal cylinders via pneumatic tubes to the various stations (along with the occasional banana, bagel, and other items best left unmentioned). But the machine driving it all was once more state-of-the-art.


Figure 4: The Mission Operations Control Room. Note the P-tube canisters in foreground. (Photo courtesy of NASA)


The shuttle was the most complex machine ever devised. Unlike Gemini and Apollo, which were built for specific objectives, the shuttle was designed for flexibility, with missions ranging from simple orbital test flights to multiple-rendezvous and satellite deployments to space station construction. Missions might be scientific or military, and could involve delivering satellites or retrieving and servicing them.

Consequently, the mission operations computing requirements were ambitious not only in scale but also in scope, and demanded flexibility never before seen in real-time operations. Further, once the shuttle flew and NASA gained flight experience, the trajectory code often changed. This put more and more pressure on the MOC and its personnel as all functions grew in complexity: real-time operations, simulation/training, and mission planning and flight design. The MOC was once more bulging at its digital seams.

Then tragedy struck. In 1986, Challenger exploded and NASA faced another stand-down. A five-year plan to gradually upgrade the MOC got compressed into the thirty-two-month hiatus after the disaster. Programmers and engineers labored long hours to upgrade the Mission Control computers (including the MOC) again, this time to IBM 3083s.

The task was complete by the next flight, STS-26 in 1988, but the relief was once more temporary. A few years later the MOC was upgraded a final time, to IBM ESA 9000s. Memory was no longer a problem. Tape drives disappeared. MOC crashes became the stuff of legend, stories told by graybeard flight controllers to scare new-hires.

However, by this time the world was moving away from mainframe computers and into distributed, networked architectures. Rather than a single machine driving many terminals, individual workstations—often much more powerful than the mainframes they replaced—were now being used by businesses, the military, and the government. To support the International Space Station, a new annex was added to Building 30. Sexy new Flight Control Rooms housed networked UNIX workstations boasting high-resolution color displays and laser printers instead of Ford-Philco consoles with grainy black-and-white screens and p-tubes. The time had come to replace the venerable MOC.

But it didn’t quite work out that way.


Figure 5: The new Flight Control Room (Photo courtesy of NASA)


The MOC’s Finale

The MOC performed several key functions. Telemetry handling showed downlinked data from the vehicle such as cabin temperature, acceleration levels, switch settings, and thousands of other items. Transferring this function to workstations was fairly straightforward since it primarily meant taking a piece of information and displaying it.

The MOC also provided Command capability—the ability for flight controllers to uplink orders to the spacecraft computers, such as resetting software flags, turning on or off specific functions, etc. Again, this transition was relatively easy.

So why did moving the trajectory functions prove so nightmarish?

To figure out the shuttle’s location, and guide it to its destination, the MOC had to combine radar, telemetry, and its own estimation data about the spacecraft’s position, velocity, and acceleration, and then employ sophisticated gravity, atmosphere, and vehicle models to predict where the shuttle was heading and what actions needed to be taken. These included launch trajectory and abort calculations, orbital mechanics and rendezvous computations, and deorbit maneuvers and atmospheric flight predictions.

Consider for a moment the scope of this problem. During ascent, a launch vehicle burns thousands of pounds of fuel each second, meaning it is constantly changing mass which in turn changes how it responds to any force acting on it. Its acceleration continuously changes as it plows through layers of atmosphere with winds coming from different directions and at different speeds. Engine nozzles swivel; valves change fuel flow; control surfaces like elevons and rudder are flexing, changing the ship’s aerodynamics. Gravity weakens as it rises. Air is vented overboard from inside the payload bay and crew cabin. Engines may not perform as expected or might fail completely, triggering an abort. Now the guidance software must quickly recompute a new, safe trajectory: a different orbit, a path to an emergency landing site, or even an immediate turnaround and return to the launch site.
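The changing-mass point alone can be made concrete with a minimal sketch. These are made-up round numbers, nothing like actual Shuttle values: constant thrust and a fixed propellant flow rate are assumed, so acceleration climbs simply because the vehicle gets lighter.

```python
# Illustrative numbers only (assumed, not actual Shuttle values):
# constant thrust and a fixed propellant flow rate, so the vehicle's
# acceleration grows as its mass drops during ascent.
thrust = 30_000_000.0   # N, assumed total thrust
mass0 = 2_000_000.0     # kg, assumed liftoff mass
mdot = 9_000.0          # kg/s, assumed propellant flow rate
g = 9.81                # m/s^2

for t in (0, 60, 120):
    m = mass0 - mdot * t
    accel = thrust / m - g   # net vertical accel, ignoring drag and gravity turn
    print(f"t={t:3d}s  mass={m/1000:5.0f} t  accel={accel:5.2f} m/s^2")
```

Even in this toy model, net acceleration more than quadruples in the first two minutes, and any real guidance scheme must track that continuously.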

Once on orbit, things are hardly simpler. The shuttle is constantly maneuvering or venting gases and liquids, all of which change its mass and orbit. The tendrils of the upper atmosphere claw at it with unpredictable force which varies with latitude, sunspot activity, time of year, even time of day. The earth’s very shape is complex and its crust varies in density, defying efforts to accurately model gravity. Now add another vehicle, like the International Space Station, also flexing, venting, maneuvering. The moon’s location—even the pressure of sunlight—changes their orbits. And despite all these math models and decades of experience, the estimate of the ISS or shuttle’s position could still easily be off by several miles in just a few hours. Now, try to predict where the two craft will be days in advance and bring them safely together, at 18,000 miles per hour, with limited fuel onboard.
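The miles-of-drift figure is easy to reproduce in miniature. This sketch is nothing like the MOC’s actual models (all values assumed: a point mass, a perfectly spherical Earth, no drag or venting), but it shows how a fingertip-sized velocity error grows to kilometers of position error in just a few hours.

```python
import math

# Assumed values for a toy two-body propagation; NOT the MOC's models.
MU = 3.986004418e14   # m^3/s^2, Earth's gravitational parameter
R0 = 6_778_000.0      # m, radius of an assumed ~400 km circular orbit

def propagate(dv, t_end, dt=1.0):
    """Semi-implicit Euler propagation of a point mass around a
    spherical Earth; dv is an along-track velocity error at t = 0."""
    x, y = R0, 0.0
    vx, vy = 0.0, math.sqrt(MU / R0) + dv
    for _ in range(int(t_end / dt)):
        r3 = (x * x + y * y) ** 1.5
        vx -= MU * x / r3 * dt   # inverse-square gravity
        vy -= MU * y / r3 * dt
        x += vx * dt
        y += vy * dt
    return x, y

# A 0.1 m/s velocity error becomes kilometers of separation in 3 hours.
x1, y1 = propagate(0.0, 3 * 3600)
x2, y2 = propagate(0.1, 3 * 3600)
print(f"{math.hypot(x2 - x1, y2 - y1) / 1000:.1f} km apart")
```

And this toy model ignores every effect the paragraph above describes: drag, venting, maneuvers, a lumpy gravity field. Each one adds its own uncertainty on top.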

And finally, at the end of the mission, comes entry: an airliner-sized vehicle plummeting at twenty times the speed of a bullet through an atmosphere that changes density, temperature, winds, and barometric pressure with each passing second. The shuttle might enter the atmosphere at night over a southern hemisphere locked in winter and land in Florida in the middle of summer—in less than an hour! All while performing sophisticated maneuvers to bleed off excess speed (while not overheating the tiles), and then arriving at the landing site at exactly the right altitude and velocity.

Orbital mechanics, vehicle dynamics and mass properties, thermal models, atmosphere models, gravity models . . . the list of problems to be solved seems endless, and entire books have been written about each. None of these things, or hundreds of others, is perfectly understood or predictable, yet they all must be mathematically modeled—accurately—for flight controllers to do their jobs. And a mistake in any of them could destroy the vehicle and kill the crew.

This was the MOC’s job.

Engineers made several efforts to move these sophisticated, highly critical, and time-sensitive functions to the workstations. The requirement for speed and absolute accuracy is greatest during these highly dynamic mission phases. If the MOC doesn’t display a cabin fan setting, there’s plenty of time to realize and correct it. Not so if the MOC fouls up the abort computations during the 500 seconds of explosive mayhem known as launch and ascent.

“If it ain’t broke, don’t fix it” held true for years after the telemetry and command functions had been transferred. Hundreds of thousands of lines of trajectory code had to be rewritten into C and C++ for the new workstations, then exhaustively tested, simulated, and verified before it was finally deemed right. Eventually, the changeover was approved and the new Flight Control Room began performing the trajectory computations, but even then only as backup: The MOC—some of its software now older than the people running it—still acted as primary.

Eventually, however, youth won out, as in most things. Once the new distributed system had performed flawlessly (as backup) during several flights, it was finally brought off the bench for STS-110 in April 2002, with the MOC now riding the pine as backup.

The STS-110 mission was an outstanding success. The new Flight Control Room with its distributed workstations performed flawlessly through all mission phases. The end of an era had arrived.

On August 12th, 2002, a handful of NASA employees—mostly engineers, managers, and programmers who had worked on the handover—gathered in the depths of Building 30, in a room that helped fly Neil Armstrong to the moon, return Apollo 13 safely to Earth, build the ISS and repair Hubble. With no fanfare, but with a few tears, the Emergency Power Off switch was thrown.

And the MOC fell silent for the last time.


Legacy

While the MOC enabled America to accomplish legendary feats, its legacy extends far beyond the walls of Building 30. To support NASA’s real-time mission control demands, IBM pioneered new techniques of configuration control and program management as well as revolutionary new hardware and software, including multi-tasking processors, asynchronous communications, multi-channel processing, and real-time operations support—developments that flow through the computing industry even today.

So, the next time you power up your PC, play a game on your cell phone, or just glance at your digital watch, take a moment to appreciate one of the true, yet unheralded, heroes of the Space Age:

Mission Control’s MOC.



Copyright © 2013 by Terry Burlison


Terry Burlison graduated from Purdue University with a degree in Aeronautical and Astronautical Engineering: the same school/degree as Neil Armstrong and Gene Cernan, the first and last men to walk on the moon. He then worked for NASA's Johnson Space Center as a Trajectory Officer for the first space shuttle missions. After leaving NASA, Terry spent ten years at Boeing, supporting numerous civilian and defense space projects. Until recently, he was a private consultant for many of the new commercial space ventures. Terry is now a full-time writer, or maybe just unemployed. His web site is www.terryburlison.com.