Grace Murray Hopper, USNR, “David and Goliath,” Selected Computer Articles 78, Department of Defense Computer Institute
“People are not ‘well-behaved’ mathematical functions.”…Grace Murray Hopper, “David and Goliath”
“It is insufficient to plan on the past alone; the plan must be examined in the light of ‘all possible future developments.'”…Grace Murray Hopper, “David and Goliath”
I have a book given to me by Admiral Grace Murray Hopper. She was a Captain when I met her. During the 1979-80 academic year I escorted her to a speech she gave at the small college in NC where I taught Computer Science. I had the great fortune and honor of over an hour of one-on-one conversation with her.
I have not decided what to do with the book. I could find only one reference to it on the internet; there is a copy in the Jimmy Carter Library.
Even though Hopper is often quoted from this article, I could find no copy of it on the internet. Duty called, so I typed in the article, and it is now available to read below.
David and Goliath
By Captain Grace Murray Hopper, USNR
From “Computers in the Navy” 1976.
Included in “Selected Computer Articles 78”, Department of Defense Computer Institute
“Captain Grace Murray Hopper, USNR, graduated from Vassar College and received her M.A. and Ph.D. from Yale University. She entered the Naval Reserve in 1943 under the V-9 program. Completing Midshipman School, she was commissioned a lieutenant (junior grade) and reported to the Bureau of Ordnance Computation Project at Harvard, which was then operating the Mark I computer and designing the Mark II. Released to inactive duty in 1946, she joined Sperry UNIVAC as a senior mathematician, advanced to staff scientist, and was retired, in absentia, in 1971. Capt. Hopper was placed on the Naval Reserve Retired List on 31 December 1966 with the rank of commander and recalled to active duty 1 August 1967. She is presently serving in the Information Management Division (Op-91) of OPNAV. Capt. Hopper is a member of Phi Beta Kappa and of Sigma Xi, a Fellow of the Institute of Electrical and Electronics Engineers and the American Association for the Advancement of Science, a Distinguished Fellow of the British Computer Society and a member of the National Academy of Engineering. She has received many awards for work in computer software, including the Harry Goode Memorial Award from the American Federation of Information Processing Societies and the Wilbur Lucius Cross Medal from Yale. She has received honorary doctorates in science from Long Island University, in engineering from the New Jersey College of Engineering and in laws from the University of Pennsylvania. At the request of the Department of the Navy, she was promoted by an Act of Congress in 1973 to Captain on the Naval Reserve Retired List.”
Early in the 1950s, a Naval War College correspondence course included a task which, in part and paraphrased, read:
1. Make a plan to take an island.
2. Review the plan in the light of all possible enemy actions.
3. Examine the cost of failure to execute the plan.
This technique must be applied to all plans for the use of computer equipment. It is insufficient to plan on the past alone; the plan must be examined in the light of “all possible future developments.” Further, the lost-opportunity cost of omitting the plan or any part of the plan must be evaluated against the cost of implementing the plan. In a practical sense, we examine feasible alternatives and future developments that we can foresee.
The examination of future possibilities is dictated by factors endemic in the computer world: the acceleration of change in hardware developments, the exponential increase in the complexity of application systems, the changing ratio of software against hardware costs and the steadily increasing demand for information (processed data) for management decision-making. All of these are pressured by an increasingly complex and interdependent world with its growing population, greater demand for supplies of food and goods, recurring shortages and unpredictable economic and political events. More rapid decisions require fast acquisition of more data and more timely interrelation and reporting of the derived information.
One of Parkinson’s laws says in effect that the growth of a system increases its complexity and that this increase of complexity leads ultimately to confusion and chaos. Even before this final stage is reached, however, facilities can be so overloaded that a small breakdown anywhere in the system produces a close to catastrophic result.
The proper preventative measure is to divide the system into subsystems, each module being as nearly independent as possible. When an enterprise created and directed by a single man grows beyond his ability to manage alone, he will divide it into divisions and sections and appoint vice-presidents to manage them. No engineer would attempt to design a missile alone; rather he will identify sections–nosecone with payload, guidance system, fuel section, motor–and the interfaces between them. Each section is then contracted to a specialist in that category. To divide the work properly, he must have a clear understanding of the how and why of each subdivision’s contribution to the whole.
The systems approach has been successful in scientific and technical applications and to some extent in business situations. It meets difficulty when it is applied in social and political situations largely because people are not “well-behaved” mathematical functions, but can only be represented by statistical approximations, and all of the extremes can and do occur.
Entire books have been and will be written on the application of systems concepts to the development, transmission and use of information by computers. It is clear that as the quantity of information grows, so also does the complexity of its structure, the difficulty of selecting the information pertinent to a particular decision, and the amount of “noise” infiltrating the information. A flow is smooth and clear so long as the quantity of flow matches the size of the conduit. If the flow is increased or the conduit roughened, turbulence and confusion or “noise” appear.
Yet, information in large quantities must flow smoothly to decision makers if large systems are to be managed efficiently. Computers can process, control and direct this flow of information, but the speed and capacity of a single computer are limited by physical factors such as the velocity of electronic and optical circuits and the ability to dissipate heat, by cost of hardware and software, and by the ability of human beings to construct error-free, monolithic systems. Only by paralleling processors can the physical limits be overcome. The division of systems into subsystems also provides an answer to the complexity and cost of software as well as reducing error potentials. Fortunately, the rapid reduction in the cost of hardware, occurring simultaneously with an increase in the power of hardware, makes modular systems possible in the present–and practical in the near future.
Paralleling of peripheral operations with central processor operations using multiprogramming techniques is common practice. Computer systems encompassing co-equal multiprocessors are also available. But, both types of systems are monolithic. The first is controlled by the single central processor and the second by a single executive (operating system). Hence, to cope with more information, processor and executive alike can only grow larger, faster, more complex, more subject to the increase of turbulence and noise and more demanding of controls and housekeeping to maintain the information flow. The system begins to resemble a dinosaur with a large, unwieldy load on his back. At some point an added requirement, like the proverbial last straw, will cause a collapse.
Thus at some point in the life of any system it becomes too large to be sustained by a single control. Tasks must be divided, subsystems defined and responsibilities delegated. Two elements were necessary before the concept of “systems of computers” could become a reality–switching systems and minicomputers.
Consider a system of computers controlling, for example, an inventory system. At a remote site, A, a transaction is entered on a console called Alpha. The console contains a minicomputer which checks the message and reduces it to a minimal format for transmission, using a “telephone number” or an “address” to dial the switching center, Beta, for the inventory minicomputers. Unless all are “busy,” in which case Alpha must try again after a delay, computer Gamma accepts Alpha’s message. Gamma determines that the message indicates the receipt of a particular shipment, B, at a certain warehouse, C; Gamma “dials” Delta, the computer controlling the necessary master-file entry. Delta locates and transmits the master entry to Gamma, which processes the transaction and returns the master entry to Delta for storage. Gamma probably also dials Epsilon with a message to increase the value of the inventory at warehouse C by the amount indicated. Epsilon is a minicomputer controlling a mini-database. Each element of the system contains, and is controlled by, its own minicomputer. Thus, input is processed before it is transmitted to the working-processing computers. The librarian minicomputer, Delta, will control the storage, withdrawal and updating of programs and will deliver a program to other minicomputers in the system upon request.
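The message flow above can be sketched in modern code. This is a minimal illustration, not anything from the article itself: the class and function names follow the article’s Greek-letter roles, but the message format, the busy-signal handling and the data structures are simplified assumptions.

```python
# Sketch of the inventory "system of computers": Gamma accepts a validated
# message, dials Delta (the master-file controller), then dials Epsilon
# (the mini-database). Names follow the article; details are assumptions.

class FileController:
    """Delta: guards the master file; serves one caller at a time."""
    def __init__(self):
        self.master = {}          # shipment id -> master entry
        self.busy = False

    def fetch(self, part):
        if self.busy:
            return None           # "busy signal" -- caller must retry later
        self.busy = True
        return self.master.setdefault(part, {"on_hand": 0})

    def store(self, part, entry):
        self.master[part] = entry
        self.busy = False         # release the line


class MiniDatabase:
    """Epsilon: running inventory value per warehouse."""
    def __init__(self):
        self.value_by_warehouse = {}

    def add(self, warehouse, amount):
        self.value_by_warehouse[warehouse] = (
            self.value_by_warehouse.get(warehouse, 0) + amount)


def gamma_process(message, delta, epsilon):
    """Gamma: process one receipt transaction against both stores."""
    part, qty, warehouse = message["part"], message["qty"], message["warehouse"]
    entry = delta.fetch(part)
    if entry is None:
        return "busy"             # Alpha would try again after a delay
    entry["on_hand"] += qty
    delta.store(part, entry)      # return the master entry for storage
    epsilon.add(warehouse, qty)   # update the mini-database
    return "ok"
```

Each object stands in for an independent minicomputer; in the real system they would run asynchronously and communicate over the switching center rather than by direct calls.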
All of the computers in such a system operate asynchronously and can be replaced at will–with a few spares on hand, the system cannot really go completely down. The system can shrink or grow since units can be added or interchanged. Functions of an operating system are either eliminated or distributed. “Busy signals” limit access to a particular file controller to one mini at a time. Data security can be ensured by the file controllers, which can reject requests or transmissions unless properly identified.
Savings in software costs can be considerable. Programs are short, modular and easily debuggable. Compilers break up into input compilers, processing compilers, computer compilers, editing compilers and output compilers. The difficulties encountered in debugging large, complex, interactive software systems are reduced to the lesser difficulties of debugging the component subsystems. Very large computational problems might require an assembly line of minicomputers, manufacturing results just as automobiles are manufactured.
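The “assembly line” idea can be made concrete with a short sketch: each minicomputer runs one small, separately debuggable stage and hands its result to the next. The stage names and record format here are illustrative assumptions, not from the article.

```python
# Sketch of an assembly line of minicomputers: each stage is short,
# modular and easily debuggable on its own. Stage names are assumptions.

def validate(record):
    """Input computer: reject malformed records before processing."""
    assert "amount" in record and "item" in record
    return record

def compute(record):
    """Processing computer: derive the result for this record."""
    record["total"] = record["amount"] * record.get("qty", 1)
    return record

def format_output(record):
    """Output computer: produce one report line."""
    return f"{record['item']}: {record['total']}"

PIPELINE = [validate, compute, format_output]

def run_line(record):
    """Pass one record down the assembly line, stage by stage."""
    for stage in PIPELINE:
        record = stage(record)
    return record
```

Because each stage has a narrow interface, a bug is confined to one short function rather than hiding somewhere in a large, interactive software system.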
Having cut back on overhead by breaking up the hardware and software into subassemblies and linking them by communications, we must now consider the effect on management information. Since computers were first installed to handle basic record-keeping, most systems operate by first updating basic records. Later, reports are accumulated from the basic record files. Taking a simple example, when a life insurance policy is sold, a record of all the facts about the insured is transmitted to a master file, including name, address, beneficiary, social security number, type of policy, amount, salesman, selling office and region. The insurance company marketing manager is not concerned with the facts about the beneficiary, but only the type of policy, amount and where sold. Hence, a mini-data-base can be maintained on-line for the local manager’s use, in which are stored current totals, such as total sales by salesman, by type and by area for this week or month, as well as last month, last year or any selected comparison bases. Another mini-data-base can be maintained for national headquarters. However, here the totals would be by office or by region rather than by individual salesman. For any basic record file, those quantities which can be totaled are collected. Management will be concerned with such totals and their comparisons and relationships. The mini-data-bases at each level will hold but a fraction of the raw data stored in the basic record files. Alternatively, a system of minicomputers can collect management totals as soon as a transaction enters a system, possibly even on-line, and update the basic files later, possibly in batches.
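The insurance example above reduces to maintaining running totals keyed differently at each management level. A minimal sketch, assuming simplified policy records (field names are my own, not from the article): one mini-data-base keyed by salesman for the local manager, another keyed by region for headquarters.

```python
# Sketch of two mini-data-bases updated as each transaction enters the
# system; the full policy record would go to the basic master file
# separately (possibly later, in batches). Field names are assumptions.

local_totals = {}    # salesman -> total amount sold (local manager's view)
hq_totals = {}       # region   -> total amount sold (headquarters' view)

def record_sale(policy):
    """Update both mini-data-bases from one sale transaction."""
    local_totals[policy["salesman"]] = (
        local_totals.get(policy["salesman"], 0) + policy["amount"])
    hq_totals[policy["region"]] = (
        hq_totals.get(policy["region"], 0) + policy["amount"])

record_sale({"salesman": "Jones", "region": "East", "amount": 10000})
record_sale({"salesman": "Smith", "region": "East", "amount": 5000})
```

Each mini-data-base holds only the totals a given manager needs–a small fraction of the raw data in the basic record files.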
New concepts such as “systems of computers” and “mini-data-bases” will have to be employed to meet the challenging problems of the future. If such concepts are so obviously needed, why do the dinosaurs continue to proliferate? Three factors tend to retard the development of the new methods I have described: human allergy to change, economic arguments against disposing of existing hardware and software instantly, and the dearth of systems analysts trained in the new systems architecture.
An examination of coming developments should impel the creation of the dispersed systems. But visions of the future collide with the reluctance to alter old ways. Even in a world of accelerating change, it is still difficult to convince people that new ways of doing things can be better and cheaper.
Introduction of the new systems must proceed gradually–maybe helped along by a little persuasion, a catch phrase or two (“don’t get a bigger computer, get another computer”). If an existing system is nearing saturation, its load can be eased by a front-end computer for validating and editing (which will later move to the data source). A back-end computer can guard and control the data base (and later become the data manager providing access to a system of computers serving the files and segments of files). Wholesale replacement is not required; rather the change may be made step by step as the potential of the new system becomes obvious.
In areas where hardware plays a larger role, such as industrial operations, the microcomputers will appear within the sensors on the equipment. They will communicate with concentrators making local decisions. Concentrators will forward processed and condensed information to local directing computers for action. Messages forwarded to and from the management data base will report production and receive instructions for alteration of a mix to be forwarded to controlling minicomputers. Only through such a system of computers operating in parallel will it be possible to provide the speed essential to the control of critical operations.
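The sensor-to-concentrator step of that hierarchy can be sketched briefly. The summary fields are illustrative assumptions; a real concentrator would also apply local control rules before forwarding anything upward.

```python
# Sketch of the lowest tiers of the hierarchy: sensor microcomputers
# validate raw readings; the concentrator condenses them into one short
# summary message for the local directing computer. Fields are assumptions.

def read_sensors(raw_readings):
    """Sensor microcomputers: discard readings that failed to arrive."""
    return [r for r in raw_readings if r is not None]

def concentrate(readings):
    """Concentrator: forward processed, condensed information upward."""
    return {
        "count": len(readings),
        "min": min(readings),
        "max": max(readings),
        "mean": sum(readings) / len(readings),
    }

summary = concentrate(read_sensors([98.2, None, 101.5, 99.7]))
```

Only the condensed summary travels up the line, which is what makes the parallel system fast enough to control critical operations.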
Thus, there is a challenge, a challenge to bring down the myth of the monolithic, expensive, powerful computer and replace it with a more powerful, more economical, more reliable and, above all, more manageable system of computers. A world concerned with more complex problems will require a quantum jump in information processing to meet management requirements. The computers can assist, but only insofar as they are recognized as sophisticated tools and as they are reformed and organized to meet specific needs for processed data.
The need for standards will become clear as communications among the components of such a system grow. Data elements, communications protocols, high level languages and more–all must be defined, standardized and conformance ensured if large and flexible systems are to prove viable and costs are to be held to acceptable levels.
One concept must always govern planning–that it consider the future. Ignoring the future results in inadequate, outmoded systems continually in need of costly change and updating, and never quite in tune with the work requirements. Concomitantly, no innovation or standard should be rejected as too costly without careful evaluation of the “cost of not doing it.”
More on Admiral Grace Murray Hopper: