September 21, 2012. Mounted atop its 747 Shuttle Carrier Aircraft (SCA), space shuttle Endeavour launched from Edwards Air Force Base, Calif., today, to complete the final leg of its ferry flight to the California Science Center in Los Angeles. Lockheed Martin photographers captured this image as the aircraft taxied past an F-35A conventional takeoff and landing variant.
And by the way, the Shuttle: now there is a plane with lots of software code to deal with.
The space shuttle’s five general purpose computers, or GPCs, are slow and have little memory compared to modern home computers. On the other hand, no one straps the latest-and-greatest desktop computer inside a machine that vibrates like an old truck on a washboard road while requiring it to get a spacecraft into orbit and back safely.
In other words, when it comes to flying the shuttle, reliability means far more than performance.
“The environment of space is very harsh and unfriendly and not just space, but getting into space,” said Roscoe Ferguson, a space shuttle flight software operating system engineer for the United Space Alliance. “Something like a desktop might not even survive all the vibration. Then once you get into space you have the radiation.”
Even after a major computer upgrade in 1991, the primary flight system has a storage capacity of one megabyte and runs at a speed of 1.4 million instructions per second. While this was more memory and much faster computing speed than could be achieved with the original 1970s-era Shuttle flight computers, it doesn’t compare to today’s desktop computers.
“The GPCs serve as the brains of the space shuttle,” Ferguson said. “It’s really the heart of the control system.”
The GPCs connect to 24 input/output links that collect the signals from the shuttle’s myriad sensors and relay them to the computers. The computers plug the readings from the sensors into elaborate mathematical algorithms to determine, for example, when to swivel the three main engines during launch, how much to move the elevons on the wings for landing and which thrusters to fire in space to set up a rendezvous with the International Space Station. That process is completed about 25 times every second.
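What that describes is a classic fixed-rate control loop: read the sensors, run the control math, command the effectors, then do it all again on the next frame. The C sketch below is only a schematic of that pattern, with made-up names and stubbed-out math; the real flight software is written in HAL/S and is vastly more involved.

```c
/* Minimal sketch of a fixed-rate flight control loop, roughly 25 Hz.
 * Names, types and the stub math are illustrative assumptions; the real
 * shuttle software is written in HAL/S and is far more elaborate. */
#include <stdio.h>

#define LINKS     24      /* one reading per input/output link       */
#define FRAME_MS  40      /* 25 cycles per second -> 40 ms per frame */

typedef struct { double readings[LINKS]; } SensorFrame;
typedef struct { double engine_gimbal[3]; double elevon_deflection; } Commands;

/* Stub: in reality these readings arrive over the 24 I/O links. */
static SensorFrame read_sensor_links(void)
{
    SensorFrame s = {0};
    return s;
}

/* Stub: stands in for the guidance and control algorithms that decide
 * how to swivel the main engines or move the elevons. */
static Commands compute_commands(const SensorFrame *s)
{
    Commands c = { { s->readings[0], s->readings[1], s->readings[2] },
                   s->readings[3] };
    return c;
}

int main(void)
{
    for (int frame = 0; frame < 3; frame++) {          /* a few frames for illustration */
        SensorFrame s = read_sensor_links();           /* 1. collect sensor signals     */
        Commands    c = compute_commands(&s);          /* 2. run the control math       */
        printf("frame %d: gimbal %.2f, elevon %.2f\n", /* 3. here: print instead of     */
               frame, c.engine_gimbal[0],              /*    driving real effectors     */
               c.elevon_deflection);
        /* 4. in flight software, the loop would wait out the rest of the
         *    40 ms frame and repeat about 25 times every second.         */
    }
    return 0;
}
```

At 25 cycles per second, each pass through the loop has roughly 40 milliseconds to finish its sensor reads, calculations and commands.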
The shuttle’s computer-driven flight control system was a first for a production spacecraft. The fly-by-wire design, tested on modified research aircraft, does not have any mechanical links from the pilot to the control surfaces and thrusters. Instead, the pilot moves the control stick in the cockpit and the computers transmit signals to the control mechanisms to make them move.
The shuttle system is so dependent on computers that a fraction of a second without them could be catastrophic during the critical parts of flight.
“We have a range where if you can’t control the vehicle for 120 milliseconds, you could lose the vehicle,” said Andrew Klausman, the United Space Alliance technical manager for the backup flight system and multifunction electronic display subsystem. He’s been working with the shuttle computers since 1986.
That’s why engineers put so much time into testing and improving the system. A software change typically goes through about nine months of in-house simulator testing and then another six months of testing in a unique NASA lab before it is accepted for flight. The results of the strenuous testing regimen? Well, it has been 24 years since the last time a software problem required an on-orbit fix during a mission. In the last 12 years, only three software errors have appeared during a flight. But perhaps the most meaningful statistic is that a software error has never endangered the crew, shuttle or a mission’s success.
“The current quality of this software system is really almost unimaginable,” said USA’s Jim Orr, who has been working with the shuttle’s computer systems and software in different positions since 1978. “It’s that good.”
The networked computers are set up so that four are operational and one is a backup that could fly the launch and entry if the others failed. The computers receive their information from a host of sensors and actuators throughout the orbiter, external fuel tank and solid rocket boosters.
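One way to picture why four operational machines are worth flying rather than one is output voting: each computer produces its own command, the results are compared, and an outlier is outvoted rather than obeyed. The sketch below is a generic majority-vote illustration in C; the tolerance, threshold and fallback message are assumptions for illustration, not a description of the shuttle's actual redundancy-management design.

```c
/* Sketch of majority voting across four redundant command outputs.
 * Thresholds and structure are illustrative assumptions only. */
#include <math.h>
#include <stdbool.h>
#include <stdio.h>

#define NUM_PRIMARY 4        /* four operational GPCs, per the article */
#define TOLERANCE   0.01     /* assumed agreement tolerance            */

/* Return true and set *agreed if at least 3 of the 4 commands agree
 * within TOLERANCE of one another. */
static bool majority_vote(const double cmd[NUM_PRIMARY], double *agreed)
{
    for (int i = 0; i < NUM_PRIMARY; i++) {
        int votes = 0;
        for (int j = 0; j < NUM_PRIMARY; j++)
            if (fabs(cmd[i] - cmd[j]) <= TOLERANCE)
                votes++;
        if (votes >= 3) {        /* simple majority of the redundant set */
            *agreed = cmd[i];
            return true;
        }
    }
    return false;                /* no majority: flag the disagreement */
}

int main(void)
{
    /* Three computers agree; one has drifted (simulated fault). */
    double commands[NUM_PRIMARY] = { 1.000, 1.001, 0.999, 5.000 };
    double out;

    if (majority_vote(commands, &out))
        printf("voted command: %.3f (outlier ignored)\n", out);
    else
        printf("no majority -- fall back to the backup flight system\n");
    return 0;
}
```

With four machines voting, three healthy computers can always outvote a single failed one, which is the basic appeal of flying a redundant set.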
It sounds like a lot of work for any electronic device, let alone ones that are running on far less memory than a cell phone. And keep in mind that the first few dozen shuttle missions used the first-generation GPCs, which boasted memory capacities of 416 kilobytes and were a third as fast. They also weighed twice as much and it took two boxes to do the job of one of today’s GPCs.
That’s where the software comes in.
Just like the computers themselves, the software code involved is much smaller than modern commercial counterparts. The shuttle’s primary flight software contains about 400,000 lines of code. For comparison, a Windows operating system package includes millions of lines of source code.
“From a complexity point of view, Microsoft Windows is probably more complex because it has to do so very, very, very much,” Orr said.
Shuttle programmers, on the other hand, focus solely on what the software must do for a mission to succeed. The machines simply don’t have the room to support programming for other things.
“There are a lot of things that have to happen very precisely,” Orr said.
Plus, shuttle software is written to adjust to failures, such as when one main engine shut down early during the launch of the STS-51F mission in 1985. The software steered the shuttle safely into a lower-than-planned orbit, and the Spacelab research mission was still successful. The computers also operated the shuttle safely during the launch of Columbia’s STS-93 mission in 1999, when an electrical short in a main engine controller and a pinhole leak in a main engine occurred during ascent.
A single shuttle flight requires a series of software sets to operate at different times on the computers. There are overlays for pre-launch, launch, in-orbit operations, in-orbit checkout and entry.
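Because the memory is too small to hold every phase at once, the software set for the current phase is loaded in and the previous phase's code is displaced. The C sketch below illustrates that overlay idea using the phase names from the article; the file names and loader are hypothetical stand-ins, not the shuttle's actual mass-memory mechanism.

```c
/* Sketch of loading phase-specific software "overlays" because main memory
 * cannot hold every mission phase at once. Phase names follow the article;
 * the loader and file names are illustrative assumptions. */
#include <stdio.h>

typedef enum {
    PHASE_PRELAUNCH,
    PHASE_LAUNCH,
    PHASE_ON_ORBIT,
    PHASE_ON_ORBIT_CHECKOUT,
    PHASE_ENTRY,
    PHASE_COUNT
} FlightPhase;

static const char *overlay_for_phase[PHASE_COUNT] = {
    [PHASE_PRELAUNCH]         = "prelaunch.ovl",
    [PHASE_LAUNCH]            = "launch.ovl",
    [PHASE_ON_ORBIT]          = "orbit_ops.ovl",
    [PHASE_ON_ORBIT_CHECKOUT] = "orbit_checkout.ovl",
    [PHASE_ENTRY]             = "entry.ovl",
};

/* Stub loader: a real system would copy the overlay from mass memory
 * into the limited main memory, replacing the previous phase's code. */
static void load_overlay(FlightPhase phase)
{
    printf("loading overlay %s\n", overlay_for_phase[phase]);
}

int main(void)
{
    /* Walk through the phases in mission order for illustration. */
    for (int p = PHASE_PRELAUNCH; p < PHASE_COUNT; p++)
        load_overlay((FlightPhase)p);
    return 0;
}
```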
“Ascent is certainly the most challenging,” Orr said. “There is some really critical timing at main engine cutoff to close the propellant valves at just the right times to manage the engine shutdown and if some of those valve closures don’t occur at the right time, you could get a catastrophic failure.”
Although shuttle designers anticipated the importance of computers to the spacecraft, the GPC memory size limitations were a major hurdle before the first mission. After all, that was the first time anyone tried to program a system that could accurately guide the largest manned spacecraft ever built into orbit and back safely.
“Getting to STS-1 was just this huge, huge challenge with a large amount of code,” Orr said. “You had the constraints of the CPU and memory, you had a lot of new technology. You had to integrate that into the vehicles and make all that stuff work together.”
“The flight software that was done back in the 70s was very complex,” Ferguson said. “They went and analyzed the concepts and the algorithms and everything that was required to fly the vehicle, the physics and things related to that. And once that was taken up, you had the developers come in and implement those in the actual programming language.”
After the shuttle began flying, software adjustments were difficult to make without going over the memory limit.
Before the GPCs were upgraded in 1991, “You literally had to remove something or code something more efficiently in order to add anything,” Orr said.
The shuttle computers went through a modernization effort that increased the capacity to the current 1 megabyte and let designers include more features. Later on, a modern “glass cockpit” replaced the original mechanical dials and readouts with electronic screens that astronauts could dial through for the information they needed at the moment.
But even after the upgrade there was no room for extras, and programmers still work within strict limits.
“If (the Shuttle) had come along later, it would have had a lot more memory that we would have tried to fill,” Klausman said. “It actually turns out to be the right amount of memory to fly the shuttle with all the necessary capability.”
Although the GPCs run the spacecraft during a mission, astronauts take a number of relatively modern computers with them into orbit in the form of laptops. Crews carry modified IBM ThinkPad A31p computers into space with them and use them for rendezvous assistance, entry and landing simulations and e-mailing Earth.
The laptops are also much faster than the GPCs and connect with devices not available to the GPCs. The ThinkPads use one of these connections to relay photos of the external tank falling away after launch to mission control at NASA’s Johnson Space Center in Houston.
But that modernity has a trade-off: the laptops are not nearly as reliable as the GPCs due to radiation effects and use of less critical commercial off-the-shelf software, Klausman said.
The laptops, however, don’t work on life-support or high-criticality systems that require the reliability found in the GPCs.
“For critical operations, I can’t come anywhere close to that reliability with the laptops,” Klausman said. “They are wonderful items, but they are susceptible to radiation particles, they are susceptible to badly written software. I could put five laptops on board and all five would suffer radiation upsets within the first day.”
With a ThinkPad 760XD laptop, two to three memory changes due to radiation occur during a shuttle flight to the Station, Klausman said. That number balloons up to 30 for a mission to NASA’s Hubble Space Telescope. The reason is that Hubble orbits about 150 miles higher than the station, where the radiation protection from Earth’s magnetic field is not as strong.
Designers also found that the laptops would crash when the shuttle passed through the “South Atlantic Anomaly,” an area where Earth’s magnetic field dips in closer to the planet, again offering less radiation filtering for spacecraft flying through it.
The GPCs don’t crash from radiation upsets because the GPC hardware includes a memory scrubber that prevents the system from reading radiation-changed memory.
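The idea behind scrubbing is simple: continuously sweep memory, check each word against an error-detecting code, and repair any flipped bit before the flight software ever reads it as good data. The shuttle's scrubber lives in the GPC hardware; the parity-based C sketch below is only an assumption-laden software illustration of the concept, not that design.

```c
/* Software illustration of memory scrubbing: periodically walk memory,
 * check each word against an error-detecting code, and rewrite corrected
 * data before a bit flip is ever read as good data.
 * The shuttle's scrubber is implemented in GPC hardware; this parity-based
 * sketch with an assumed redundant copy is a stand-in, not that design. */
#include <stdint.h>
#include <stdio.h>

#define WORDS 8                       /* tiny memory for illustration */

static uint32_t memory[WORDS];
static uint32_t shadow[WORDS];        /* assumed redundant copy used for repair */

/* Even-parity check over a 32-bit word (detects single-bit flips only). */
static int parity_ok(uint32_t word, int expected_parity)
{
    int p = 0;
    for (int b = 0; b < 32; b++)
        p ^= (word >> b) & 1u;
    return p == expected_parity;
}

static void scrub_pass(const int parity[WORDS])
{
    for (int i = 0; i < WORDS; i++) {
        if (!parity_ok(memory[i], parity[i])) {
            memory[i] = shadow[i];    /* repair before anyone reads the bad word */
            printf("word %d scrubbed (bit flip corrected)\n", i);
        }
    }
}

int main(void)
{
    int parity[WORDS] = {0};          /* all-zero memory has even parity       */

    memory[3] ^= 1u << 7;             /* simulate a radiation-induced bit flip */
    scrub_pass(parity);               /* a real scrubber runs continuously     */
    return 0;
}
```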
While the GPCs are well-regarded for handling navigation and control duties, they are not set up for performance-intensive work such as complex graphical displays and word processing. That’s why the astronauts started carrying fold-up computers originally made by GRiD into space.
“Back in the GRiD days, the idea was to include something that the little payloads could use,” Klausman said. Since then, astronauts have outlined new needs for the computers, and NASA began using more powerful ThinkPads and developing modifications and custom software.
For example, the laptops run a program that shows the crew where they are in space to help them navigate to the space station and dock. “They get a graphical display of where they are and where their orbit will take them if they do nothing,” Klausman said.
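The “if they do nothing” part of that display boils down to propagating the current position and velocity forward under gravity alone. The C sketch below does that with a bare-bones two-body model and a simple integrator; the orbit values and step size are illustrative assumptions, and this is not the software the crews actually fly with.

```c
/* Minimal sketch of a "do nothing" orbit prediction: propagate the current
 * position and velocity forward under Earth's gravity alone (two-body model,
 * simple integrator). Purely illustrative; not the actual rendezvous tool. */
#include <math.h>
#include <stdio.h>

#define MU 3.986004418e14     /* Earth's gravitational parameter, m^3/s^2 */

typedef struct { double x, y, z; } Vec3;

static void step(Vec3 *r, Vec3 *v, double dt)
{
    double d = sqrt(r->x * r->x + r->y * r->y + r->z * r->z);
    double a = -MU / (d * d * d);          /* acceleration = -mu * r / |r|^3  */
    v->x += a * r->x * dt;                 /* semi-implicit Euler: update v,  */
    v->y += a * r->y * dt;                 /* then advance r with the new v   */
    v->z += a * r->z * dt;
    r->x += v->x * dt;
    r->y += v->y * dt;
    r->z += v->z * dt;
}

int main(void)
{
    /* Rough circular low-Earth orbit: ~400 km altitude, ~7.67 km/s. */
    Vec3 r = { 6778137.0, 0.0, 0.0 };
    Vec3 v = { 0.0, 7668.6, 0.0 };

    for (int t = 0; t < 5400; t++)         /* predict ahead ~90 minutes */
        step(&r, &v, 1.0);                 /* 1-second steps            */

    printf("predicted position after 90 min: %.0f %.0f %.0f m\n",
           r.x, r.y, r.z);
    return 0;
}
```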
A day or two before landing, the shuttle commander uses a laptop and a custom controller to run a landing simulation program.
Klausman points to the first launch of a Thinkpad in December 1993 as a highlight of his career. The laptops were aboard Endeavour for the first repair mission to NASA’s Hubble Space Telescope.
“The STS-61 launch where we had worked really hard for a couple years to get the ThinkPads ready for flight and to actually be there and see them go was . . . wow.”
The designers also continue to experiment with different ways to incorporate the proven shuttle flight instructions into modern equipment. For example, Ferguson said engineers were able to load all of the shuttle’s GPC software onto a computer chip weighing only a couple ounces and found out the software still worked.
Such innovations are expected to play a large role in any future spacecraft, so software engineers continue to make adjustments to shuttle programs with an eye on seeing them incorporated in coming designs.
Steven Siceloff
NASA’s John F. Kennedy Space Center