By Richard E. Hawley, General (Ret), USAF
General Hawley served as Commander, Air Combat Command, and Commander, US Air Forces in Europe. Additionally, he was the Principal Deputy Assistant Secretary of the Air Force for Acquisition.
***
PART TWO
This is not an easy question to answer. There are three broad categories of cost: development, procurement, and sustainment, each driven by a different set of factors. While development and procurement costs attract the most attention from the media and from Congress, they account for just 40% of total weapons system cost. Operating these systems over their 20 to 40+ years of service accounts for the other 60% of life cycle cost, and can best be constrained through investments in reliability and maintainability during development. Unfortunately, our intense focus on the cost to develop and procure weapons systems often discourages such investments.
Development Cost Drivers
The operational environment, anticipated threats, and concept of employment are the primary drivers of development cost. Designing systems to operate in demanding environments, e.g., at sea, in mountainous terrain, in desert conditions, or in extreme heat or cold, adds complexity and hence cost. Our forecast of the adversary capabilities against which a weapon system will operate at the mid-point of its life cycle likewise determines the required complexity, and hence cost. Systems intended for operation within secure airspace, such as Predator and Reaper, or against modest adversary defenses are far less complex than those meant to operate within a high threat environment, e.g., F-35 or N-UCAS. Finally, concepts of employment are important drivers of development cost. Systems meant to launch standoff weapons will cost less than those designed to penetrate advanced air defenses, as will those that can depend on off-board sensors rather than conduct autonomous operations. Of course, standoff platforms require relatively costly standoff weapons, and off-board sensors don't come free either. That's why it is important to consider the cost to accomplish a mission, not just the cost of a weapon or weapons system.
Acquisition strategy is another important driver of development cost, and the most important variable those strategies must control is time in development. Developing a major weapon system involves the creation of an army of engineers, and the longer those armies exist the more they will cost. Time really is money, and that's why controlling schedule should have as high a priority as satisfying operational requirements. The F-15, F-16 and F-18 were all developed in less than seven years with minimal cost overruns, but only because they relied on mature technologies and the operational users accepted tradeoffs in performance to maintain schedule. Schedule risk is multiplied when advanced technologies must be developed within a major acquisition program rather than through separate technology demonstrations or prototypes. Fly-by-wire technologies first deployed in the F-16 were developed in the lab and demonstrated in an F-4 technology demonstration project. Unfortunately, it is often hard to find support for such projects within DoD or the Congress.
Testing new weapons systems is a very expensive process and often amounts to 20% or more of total development cost. Unfortunately, there is no incentive for the government’s independent test organizations to control those costs. In fact, in an era when new weapon programs and major modifications are few in number, the incentive is to prolong testing so the test infrastructure and engineering talent can be sustained. It is the programs that pay these costs.
A fourth driver of development cost is funding stability. Program plans usually assume a funding profile that resembles the classic bell curve: modest funding in the early years, ramping up rapidly as the army of engineers is assembled, then tapering off as subsystem developments are completed. When DoD or the Congress cut funding to pay other bills, program managers must re-plan and reflow a very complex and interdependent set of activities. That process consumes engineering man-hours that could have been spent keeping the program on schedule. Experience shows that an arbitrary cut of $100 million adds about $400 million to development costs because that army of engineers must be sustained for several months longer than planned.
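The 4:1 penalty cited above can be sanity-checked with back-of-the-envelope arithmetic. The staffing and labor figures below are purely illustrative assumptions, not numbers from the article; the sketch simply shows how many extra months of engineering "burn" a $400 million penalty represents.

```python
# Illustrative only: staff size and loaded labor cost are assumptions,
# not figures from the article.
engineers = 2_000                    # assumed size of the development "army"
loaded_cost_per_engineer = 250_000   # assumed fully loaded annual cost, USD

# Monthly cost of keeping that army on the payroll.
burn_per_month = engineers * loaded_cost_per_engineer / 12  # ~$41.7M/month

# The article's quoted penalty for an arbitrary $100M funding cut.
added_cost = 400_000_000

# Schedule slip implied by the penalty: how many extra months of burn
# does $400M buy? On these assumptions, roughly 9 to 10 months.
extra_months = added_cost / burn_per_month
```

Under these assumptions the claimed $400 million penalty corresponds to sustaining the engineering workforce roughly nine to ten months longer than planned, which is consistent with the article's "several months longer" explanation.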
Finally, requirements creep is the enemy of cost control. If program managers cannot freeze the requirements baseline, or fail to do so when empowered, program plans suffer constant disruption. The result is similar to unstable funding, with schedules slipping to the right and costs increasing. The key to cost control is strong program management: well trained and experienced program managers operating in an environment in which they are empowered to make all key programmatic decisions and are held accountable for the results they produce (see graphic below).
Production Cost Drivers
The cost to produce weapons systems is a function of size, the density of advanced technology content, the number of systems that DoD buys, and the rate at which those systems are produced. It is often said that we buy aircraft by the pound, and within any class of weapons system, weight remains the most reliable predictor of production cost. The first production unit of a 50,000-pound fighter aircraft will cost roughly twice that of a 25,000-pound fighter. The same holds true for ships, submarines and tanks.
Across classes of weapons systems, the primary driver of cost per pound is the density of technological content. Satellites contain the most advanced technology per pound of any class of weapons system yet developed, so the cost of a two-ton satellite often rivals that of a 900-ton submarine.
Industry can sell laptop computers at a profit for less than $1,000 because they sell millions of them. Unfortunately, weapons systems will never benefit from those economies of scale. Nevertheless, the number of systems produced does have a significant effect on average unit production cost. The tenth unit of any weapons system will cost substantially less than the first due to a phenomenon called the “learning curve”. As workers gain experience and processes are refined, production costs decline. Suppliers who expect to sell hundreds or thousands of a component can accept a much smaller profit margin than if they sell a handful or a few dozen. Efficiencies possible from automated production processes often don’t yield a return on investment unless production runs are quite large. There are many other ways in which production costs are driven down as quantities increase.
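The learning-curve effect described above is conventionally modeled as a power law (the Crawford unit-cost formulation): each doubling of cumulative quantity reduces unit cost to a fixed percentage of its previous value. The 80% slope and the first-unit cost below are illustrative assumptions, not figures from the article; real programs fit the slope empirically from early production lots.

```python
import math

def unit_cost(first_unit_cost: float, n: int, slope: float = 0.80) -> float:
    """Crawford learning-curve model: cost of the n-th production unit.

    slope=0.80 means each doubling of cumulative quantity cuts unit
    cost to 80% of its previous value (an assumed, illustrative slope).
    """
    b = math.log(slope) / math.log(2)  # learning exponent (negative)
    return first_unit_cost * n ** b

# On an illustrative 80% curve, with a first unit costing 100 (any unit
# of currency), the 2nd unit costs 80 and the 10th costs under half
# of the first.
tenth_unit = unit_cost(100.0, 10)
```

On this model the tenth unit costs roughly 48% of the first, consistent with the claim above that it "will cost substantially less than the first."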
Production rates are the other important driver of unit production costs. Very low rates make it impractical to invest in many automated processes, so there is a threshold below which costs may increase rapidly. Industry builds tooling and facilities to support the government’s planned rate of production, so when those plans are changed costs will rise. Lower rates leave capacity idle, while higher rates can introduce inefficiencies in the production process. Either way, costs rise. Unlike quantity, where the more we buy the lower the cost, there is an optimum rate of production.
As with development, erratic production funding will drive production costs up. It generates workforce turbulence and results in sub-optimum use of tooling and facilities. Labor agreements can even generate a ripple effect, where production cuts in one program can generate cost increases in another. How can this happen? Labor agreements typically give production workers “bumping rights” so a worker displaced from a program where rates have been reduced can “bump” a worker with less tenure from another program. The result is learning curve regression and an increase in the penalty paid by the government for its inaccurate production forecast. Multi-year contracts are a tried and true way to stabilize funding. They enable contractors to buy materials in economic quantities, schedule the workforce and facilities efficiently, and reduce costs associated with preparing annual proposals. Savings over annual procurements are typically in the range of 5-10%.
Finally, small investments in engineering for producibility during development can pay significant dividends during production. Seemingly small design issues, like leaving inadequate clearance for a mechanic to torque a nut, can drive significant cost increases.
Sustainment Cost Drivers
Sustainment is where the real money is: typically 60% of total life cycle costs. The drivers are manpower, modernization to keep the system viable, component repair and replacement, and consumables. The best, if not the only, way to control these costs is to invest in cost-of-ownership reduction during development.
The Navy’s DD(X) destroyer program made investments during development to reduce the crew complement by 70% from earlier destroyers. If realized, the savings would exceed $18B over the life of a 32 ship fleet. (GAO-03-520 June 9, 2003)
In the mid-1970s the Air Force opted for the single-engine F-16 over a two-engine alternative primarily to reduce life cycle costs associated with engine replacement and fuel consumption.
The F-22 program was one of the first to establish hard requirements for reliability and maintainability, and substantial investments were made during development to drive maintenance man hours per flying hour below 12.
Three different approaches on three different programs; but all illustrate good ways to control the cost to operate and maintain our weapon systems. Each required an upfront investment in an environment where every dollar of development cost is subjected to continuous and intense scrutiny, and in which little credit is given for promised future savings.
———-
***Posted January 5th, 2010