[Ceruzzi has written two books on the history of the computer. This is the first, covering the 'prehistory' from 1935 to 1945. The second book covers the period from 1945 to 2001. The American Paul Ceruzzi is curator of the Aerospace Electronics and Computing department at the Smithsonian's National Air and Space Museum in Washington, D.C. He also serves on the advisory board of the Computer History Museum.]
[They are well-written books, with great attention to historical fact. And they are not one-sidedly or prejudicially American. This book, for example, is the only one I know so far that tells the story of the German engineer Zuse and his computers at length. Ceruzzi wrote this as early as 1983, which makes it almost incomprehensible that various American authors do not even mention Zuse.]
[In 1983, incidentally, hardly anything was known yet about the historical developments in the UK and France. In the UK, everything about Bletchley Park, the Colossus, and other computers was kept secret for a very long time in the aftermath of the Second World War. There was much Ceruzzi could not yet know. That explains why three of the four case studies in this book still go to computers in the US.]
[Ceruzzi's account is so lucid that I mainly reproduce quotations.]
Ceruzzi puts matters in a clarifying perspective here. He considers the history of computers important because it gives a better picture of a society in which computers play such a large role.
"That the computer is having a profound effect on modern life is hardly at issue. Just how and why such a profound change in our society is happening because of computers can better be understood with a grasp of how this technology emerged."(2)
He makes clear that the large computers of the early period arose, for example, from a great need for fast numerical calculation, driven in part by the Second World War. The large rooms full of mathematically trained women who did the calculations in shifts, and who as late as the 1930s were themselves called 'computers', were replaced by machines because wartime conditions demanded that this work be done faster.
"In the 1930's the word computer meant a human being who calculated with the aid of a calculating machine. After 1945 the word meant a machine which did that."(1)
"Taken together, the person, the calculator, the pencil and 'scratch' paper, and the list of instructions formed a system which could solve a wide range of numerical problems, as long as the solutions to those problems could be specified as a series of elementary steps. That human and mechanical system was precisely what the first digital computers replaced."(5)
Another impetus for the invention of computers was, of course, that there were people who, out of sheer curiosity or laziness, looked for ways to let machines do all the calculating, using the means of their time (mechanical devices, electromechanical relays, electronic vacuum tubes, transistors, etc.). That is very clearly true of Zuse and Aiken, for example.
And finally, computers were invented to solve practical problems that could not be solved otherwise, as in the case of Bell Labs, where computers helped with the calculations needed to solve the problems of long-distance telephone connections.
The book devotes four chapters to four different computer projects in which all three of these motivations come to the fore.
"I have chosen four projects from that era that best illustrate how the computer was invented. These are by no means all that happened, but they are representative of the kinds of activities going on.
The first is the set of electromechanical computers built in Germany by Konrad Zuse, who because of the war had no knowledge of similar activities in America and England. His independent line of work makes for an interesting and self-contained case study of just how one goes about building a computer from scratch.
The second is the Harvard Mark I, built by Professor Howard Aiken and first shown to the public in 1944. This machine was one of the first truly large-scale projects, and because it was well publicized it served notice to the world that the computer age had dawned.
The third project is the series of relay computers built by George Stibitz of the Bell Telephone Laboratories between 1939 and 1946. These machines represented the best that could be done with electromechanical devices (telephone relays), and as such mark the end of that phase of invention and the beginning of another.
The final project is the ENIAC, the world's first working electronic numerical computer, using vacuum tubes for its computing elements, and operating at the speed of light."(1)
But first a more general sketch of the background before those four projects are discussed. These large calculating machines first became known to the general public through the attention the media paid to them. Only after the Second World War did the theoretical insight emerge that computers could do far more than calculate: they could also manipulate symbols, and could in fact be used for almost anything.
[The quotations show how nuanced Ceruzzi is. He is never chauvinistically one-sided, he acknowledges the role of Europeans in the whole process, and he has no tendency to join in the kind of hype about computer technology found in AI. In this way computers keep their human dimension; they remain tools for people.]
"Aiken's 'Mark I' was the first large computing device to be made public (in 1944), and as such its name is appropriate - it marks the beginning of the computer age, despite the fact that it used mechanical components and a design that soon would become obsolete."(6)
"The public dedication of the ENIAC in 1946 marked the dawn of the electronic computer age; actually it was more like the herald of the dawn. The ten years from 1935 to 1945 saw the convergence of various traditions to make the computer; the ten years following that saw both a continuation of the projects begun during the war, and an intensive study of the theory of computing itself - not so much how to build a computer as how one ought to build a computer. This activity was made visible as conferences, reports, memorandums, lectures, and short courses in computing that were held throughout America and Europe. John von Neumann was one central figure; others who contributed to this phase of activity were D. R. Hartree, Alan Turing, and Maurice Wilkes in England; Howard Aiken and George Stibitz in America; and Konrad Zuse, Eduard Stiefel, and Alwin Walther in continental Europe, to mention only a few.
What they accomplished can be summed up in a few words: the computer, as before, was seen as a device that did sequences of calculations automatically, but more than that, it was seen as not being restricted to numerical operations. Problems such as sorting and retrieving non-numeric information would be just as appropriate and in fact, from a theoretical standpoint, even more fundamental to computing than numerical problems.
"Second, they realized that the instructions that told a computer what to do at each step of a computation should be kept internally in its memory alongside the data for that computation. Both would be kept in a memory giving access to any data or instructions at as high a speed as possible. This criterion allowed the execution of steps at speeds that matched those of the arithmetic unit of the machine, but it also allowed for more than that. The data and the instructions were stored alongside one another because they were not really different entities, and it would be artificial to keep them separate. An understanding of that startling fact, when implemented, made the computer not just a machine that 'computed' but one which also 'reckoned' - it made decisions and learned from its previous experiences. It became a machine that could think, at least in a way that many human beings had defined thinking."(7)
"We frequently hear that these machines are smarter than we are, and sooner or later they will 'take over' (whatever that means). Can a computer really think? Indeed, does it even make sense to ask a question like that?"(8)
"Today we often hear the command that we must learn about computers if we want to keep up with the pace of modern society. We hear further that computers are bringing us a technological Utopia (at last!), but if we do not learn about them, all we can do is forlornly press our noses against the window looking in; we may never enter. I have always felt uncomfortable with that scenario - I do not like to be coerced into doing something I otherwise might never have thought of doing. Nor do I feel that learning about computers is absolutely necessary to manage in the world today. Humans can get by without them, just as many live comfortable lives without telephones or automobiles. Why not learn about computers because they are inherently interesting, and because it is fun to see what makes them tick? They are, after all, 'only' creations of ordinary human beings. And learning about them can tell us something about how we tick, as well. And that should not threaten or intimidate anyone."(8)
Zuse was partly self-taught. Perhaps that was precisely why he looked at his problems and possible solutions differently. For, like so many people with original ideas, he was often told that everything he wanted was impossible. Zuse often chose elegant simplicity, partly out of necessity - lack of money - and partly out of conviction. Hence, for example, he immediately chose the binary number system and left the decimal system for what it was.
"Zuse's ignorance of calculating machine construction may have been to his benefit; as it turned out, his approach was fresh, original, and not hindered by what experts said could not be done. His ignorance of advanced mathematics may have been a hindrance at first, but it did not remain so long.
The first step he took to implement his ideas turned out to be the most important. That was to abandon the use of mechanical devices which stored numbers by assuming one of ten physical positions, in favor of a simpler system which could assume not ten but only two positions in total. He was thinking of something like a switch that was either on or off, or a lever that was either forward or back.
Thus from the start Zuse was committed to using the binary system of enumeration for his machine. (He never seems to have considered using the decimal or any other base.) He came to the idea not as a mathematician but as a mechanical engineer concerned with keeping the physical apparatus as simple as possible."(17)
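Zuse's engineering reasoning is easy to make concrete. A decimal storage element must reliably assume one of ten positions; a binary element needs only two, like a switch, at the price of needing roughly 3.3 times as many elements per number. A minimal modern sketch of that trade-off (purely illustrative, not Zuse's mechanism):

```python
# Illustrative sketch: the same number as decimal digits (each needing
# a ten-position element) versus binary digits (each needing only an
# on/off element such as a switch or a lever).

def to_decimal_digits(n: int) -> list[int]:
    """Decompose n into decimal digits, most significant first."""
    digits = []
    while n:
        digits.append(n % 10)  # each digit: one ten-state element
        n //= 10
    return digits[::-1] or [0]

def to_binary_digits(n: int) -> list[int]:
    """Decompose n into binary digits, most significant first."""
    digits = []
    while n:
        digits.append(n % 2)   # each digit: one two-state element
        n //= 2
    return digits[::-1] or [0]

n = 1941
print(to_decimal_digits(n))  # [1, 9, 4, 1] -> 4 ten-state elements
print(to_binary_digits(n))   # 11 two-state elements: 11110010101
```

More elements are needed, but each element is mechanically trivial - exactly the simplification Zuse was after.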
No influence from Leibniz. No influence either from Babbage, who never thought in binary and had designed his system entirely in decimal. There was influence from a friend, Helmut Schreyer, but oddly enough not, of all things, with regard to Schreyer's far-sighted idea of working electronically with vacuum tubes.
"Throughout those early days in Zuse's home workshop, he could always rely on the help of Helmut Schreyer, a college mate, a student of electrical engineering, and a handsome amateur actor of considerable charm. Schreyer was one of those who helped cut out all the metal plates for the Z1. While a student Schreyer had worked as a film projectionist - he and Zuse were especially fond of the sensational American film King Kong, then just released in Germany. He remembered that in a movie projector, the film advances through a gate where each frame is stopped for a moment so that it can be projected on the screen.
That was precisely the kind of motion the programming unit of Zuse's computer needed - a quick reading of the calculating plan, one command at a time. From then on Zuse designed his computers to have their programs supplied by perforated movie film instead of paper tape. (Discarded 35mm film was cheaper than commercial paper tape anyway.)
From Schreyer's training as an electrical engineer came a conviction that electrical computing elements could be made to work much better than Zuse's mechanical elements. Schreyer had a confidence and familiarity with telephone relays that Zuse did not have. Telephone relays were not that expensive - a few dollars each - but remember, thousands would be required. Nevertheless he convinced Zuse to adopt them. They managed to find very cheap second-hand telephone relays and they rebuilt them to work in a computer.
Zuse had already been using the relay notation as a design aid, so the transition was not hard to make. In 1938 and 1939 Zuse pressed on with that approach: a mechanical memory, an arithmetic and control unit made of relays, and program control by perforated 35mm movie film.
Schreyer went one step further. Why not build the binary functions out of vacuum tube circuits? Up until that time tubes were used to amplify continuous signals, but he knew he could build a tube circuit that would have the same on-or-off properties that relays and mechanical linkages have. Such a 'tube relay,' as he called it, would switch by moving streams of electrons and would be many times faster than any electromechanical relay.
By that time Zuse was convinced that his chances of success with telephone relays were too good to pass up. There was a lot of developmental work needed to perfect the 'tube relay' circuit, which he was not willing to undertake. Schreyer in turn suggested to Professor Stablein at the Berlin Technical College that an investigation of the electronic switching circuit be the subject of his doctoral thesis. Stablein agreed, and in 1941 Schreyer was awarded a doctoral degree for his thesis, 'The Tube Relay and the Techniques of its Switching.'"(26)
"While Konrad Zuse was going ahead with a calculating unit that used telephone relays, Schreyer proposed to build one out of vacuum tubes, using, of course, Zuse's overall design and the binary scale. He managed to obtain special tubes well suited for his circuits from the Telefunken Company. But when he submitted his proposal for a full-scale computer to the German Army Command (OKH) in 1942, he was turned down. At that time, the German authorities thought the war would be over within the two or three years Schreyer knew it would take to complete the machine. And they did not want to fund anything that would not directly benefit the waging of the war. The proposed machine was to have about 1,500 tubes, and as many glow-lamps.
The story of Schreyer's attempt to build an electronic computer in Germany during the war is one of the more interesting 'what-ifs' of that entire period. By 1942 he was working independently of Zuse, and had his proposal been accepted, Germany might have had a working electronic computer before either the Americans or the British. It was true that his circuits were not as fast, nor was his proposed machine as large as the American ENIAC, but with his use of Zuse's elegant overall design, and especially with his use of the binary system, he might have come up with a very powerful computer.
So for a moment, then, Schreyer cracked open the door to an awesome and strange new world, but that door slammed shut before he could pass through. Only after the war did a now-divided Germany enter the electronic computer age, but as a follower, not a leader.
All that Schreyer built was a test model containing about 150 tubes that converted three-digit decimal numbers to and from binary. (It was financed not by the army but by the Aerodynamics Research Institute.) It was damaged in a bombing attack on Berlin in late 1943, and with Schreyer's evacuation from Berlin in 1945, Germany's first steps toward electronic computing came to an end. After the war Schreyer left Germany and abandoned work on computers. (Konrad Zuse was not a member of the Nazi Party, but Helmut Schreyer was: he had joined in 1933. After the war he emigrated to Rio de Janeiro, where he worked for Brazil's telecommunications network, and where he still lives.)"(27)
At a certain point Zuse did succeed in finding a backer with a problem to be solved: the Aerodynamics Research Institute (DVL in the German abbreviation).
"He still relied mostly on his own funds and still worked in his home workshop, but with the encouragement of the DVL, he proceeded at once to build a relay computer that functioned reliably in every respect. It was completed by December, 1941, and was the first fully operational program-controlled computing machine in the world. It is now known as the Z3 computer.(29)"
"Such is the story of the Z3. It lived for about four years and never performed routine useful work, but it stood at the gateway to the computer age nonetheless. How significant was it? The modern computer owes far more to the American and British wartime machines than it does to the Z3. Indeed, its very existence was hardly known to historians or computer scientists until the 1960's.
None of that detracts from what the Z3 was: possibly the first machine to be programmable in a general way, to use the binary system, to use floating-point arithmetic, and to have an elegant logical design.
Actually the Z3 was only one part of the German contribution to computing; besides the prototype electronic circuits that Helmut Schreyer built during the war, there were several other relay machines that Zuse constructed which are also part of the story."(38)
The Z3 was a test model with a still limited memory. After that he built the Z4 as the real production model.
"The Z4 was finally moved out of Berlin in early 1945, first to Gottingen (where it performed test computations), then to an underground fortification in the Harz Mountains, then to a village in the Bavarian Alps. There it stayed during the chaotic postwar years, until it was finally refurbished and put to work at the Federal Technical Institute in Zurich in 1950. For a few years it was the only functional computer, electromechanical or otherwise, in continental Europe, and it initiated much of Europe's postwar activity in computing thereafter.
So despite incredible hardships and hard luck, the work of Konrad Zuse, Helmut Schreyer, and their co-workers was not totally lost. The Z4's influence was modest, but the machine worked well and was heavily used at Zurich, even after the computing world had turned from relay machines to the faster electronic technology. During his stay in Hinterstein, the Alpine village where the Z4 rested from 1945 to 1949, Konrad Zuse did not abandon work on computing. Work on the Z4 itself was impossible: even getting enough to eat was difficult in those years, and Zuse lacked the tools and manpower to do much more than just try and save what he had. He turned his attention to the problem of programming computers - pencil-and-paper work that few at the time thought was crucial to making use of a computer's power. Today the cost of writing programs for computers far exceeds the cost of the machinery itself. Now that the computing world faces this 'software crisis,' it should be remembered that here, too, Konrad Zuse made original contributions. His work on a 'Plan calculus,' begun in 1945, was one of the first attempts to design a programming language for a computer.
After the Z4 was installed in Zurich in 1950, Zuse founded a private firm that made and sold relay machines to industrial and educational customers. For the Leitz Optical works he built a relay machine that he called the Z5, and after that he built a series of other relay computers, the last being the Z11, built in the mid-1950's. His company also built electronic computers, and for a while it prospered. In the 1960's it had trouble raising capital and was absorbed into the Siemens company. That left Konrad Zuse free once again to pursue the theoretical side of computing. He is still active, a remarkable feat in a field that is so volatile and fast-developing.
So what do we have in the story of the first computers in Germany? They are not the ancestors of the modern computer; that honor goes elsewhere. But their overall design and their use of binary numbers and floating-point arithmetic make them resemble the modern computer far more than an ancestor like the ENIAC does. The German story is a fascinating one, and it has a lesson for us as well. The key concepts of computing were discovered independently in different places by different persons. That the present computer world bears the imprint of the American projects was not inevitable; things could have turned out very differently."(39-40)
"In the summer of 1944, the American press reported the waging of the war: grim news of battles in France, casualties, sacrifices and shortages at home. Among the stories on August 7 was another report that probably did not arouse much attention. On that day a large electromechanical computing machine was publicly unveiled at Harvard University, in a ceremony attended by the presidents of both Harvard and the IBM Corporation. The machine they dedicated that day was called the ASCC, short for Automatic Sequence Controlled Calculator; later it would be known as the Harvard Mark I as other similar devices appeared there. It was not really the first automatic computer. Zuse's Z3 was already in use by then, and there were other top-secret projects in America and Britain that also could have vied for that claim. But nonetheless its dedication is as good a moment as any to mark the beginning of the computer age, for it was on that summer day that the existence of the computer became public knowledge."(43)
"Like Konrad Zuse's relay computers, the Mark I was the product of the vision and hard work of one man, although its enormous size and scale required the cooperation of a large team of skilled collaborators. That man was Howard H. Aiken, Commander in the U.S. Naval Reserve and Professor of Applied Mathematics at Harvard."(44)
"The Mark I was completed only a few years after he proposed building it in 1937. Whatever else may be said about its design, it worked. It succeeded where Babbage had failed, and its dedication in 1944 served as an existence theorem for the large-scale digital computer. (By 1944 there was at least one other large-scale computing machine that had been built: the Differential Analyzer, built under the direction of Vannevar Bush at MIT. But it was an analog, not a digital computer, and so its existence did not resolve the question of whether or not Babbage's century-old dream of a general-purpose large-scale digital computer would ever be more than just that - a dream.)"(48)
"The rift between Aiken and IBM that surfaced before the dedication ceremony grew after the war. It was not just the dispute over whether the ASCC was to be covered or not. Thomas Watson felt that Aiken was unwilling to share any credit for the success of the project. In 1944 IBM was not the industrial giant it is today. Aiken felt that the computer was his idea; whoever happened to build it was of little import. The Harvard Lab continued building computers under Aiken's direction-but not with IBM's help. (The Harvard machines took on the names 'Mark I,' 'Mark II,' and so on.)
Aiken was never convinced that vacuum tubes would make good computer components, and so his machines continued to use relays long after much faster electronic computers were completed elsewhere. He did use tubes in the Mark IV, but only for a few units where speed was absolutely needed; everywhere else he used relays. But by 1950 other pioneers were arguing that computers should use only vacuum tubes, handle numbers in binary not decimal form, and store their programs internally. On all those features Aiken swam against the stream, and he did not prevail."(68)
"Between 1937 and 1946 engineers and scientists at Bell Telephone Laboratories built a number of digital relay computers, among the first working programmable machines anywhere. Their experience with the technology of switching - that second aspect of telephony - was the basis for Bell's entry into digital computing. But the first aspect - the transmission of analog voice signals - played a role too, as we shall see. The invention of the computer at Bell Laboratories, like its invention elsewhere, resulted from a convergence of technical skill, social need, and talent. Those preconditions were there by the mid-1930's. It remained for one of Bell's employees, Dr. George Stibitz, to serve as the catalyst to bring them together."(73)
"So the success of the Bell System depended on its ability to transmit and amplify telephone messages over long distances. That success in turn came from recognizing the potential value of the 'audion' vacuum tube (invented by Lee DeForest in 1906) and improving it to the point where it would function as a reliable amplifier. DeForest had not really understood the fundamentals of his own invention; a strong research effort by Bell Telephone revealed the nature of the tube's function - and turned it into a practical device.
It was for that purpose that a laboratory for basic research was established in 1911, as a branch of Western Electric, Bell's manufacturing company. In 1925 the lab was incorporated as the Bell Telephone Laboratories, and given charge of basic research for the Bell System, which by then had become a powerful monopoly under the leadership of Theodore N. Vail. Thus began an institution that has become synonymous with fundamental and exciting research at the frontiers of physics, chemistry, and other branches of modern science. Research on the physical problems of long-distance telephony was what started Bell Laboratories, but by the 1930's it was busy with that other aspect of telephony, switching, as well."(74-75)
"George Stibitz saw that the switching circuits the relay engineers were working on in one part of the Labs could compute complex products for the voice circuit engineers in another part of the Labs."(78)
"Sometime in the fall of 1937 Stibitz drew up plans for a complex number computer using the above coding scheme. Bell Labs approved, and construction began in November, 1938, under the direction of Samuel B. Williams. Stibitz's role was one of developing the logical flow of numbers and operations that the machine was to carry out, and of translating those expressions into designs for relay circuits. Williams in turn applied his knowledge of relay technology to decide how best to construct the circuits that Stibitz had designed. The two men's skills complemented each other. But their roles were not so specialized that each did not help out with the other's work."(84)
"Most of Stibitz's work at that time consisted of writing out the logical expressions for binary arithmetic, then simplifying them until they were in a form suitable for relay implementation.
He did not know that in Berlin Konrad Zuse was doing almost the exact same thing, only for a pure binary, not a decimal, computer. Stibitz did know that Claude Shannon also had studied the correspondence of statements of symbolic logic with binary relay circuits while a graduate student at MIT. Shannon wrote his graduate thesis (published in 1938) on that subject, and then went to Bell Labs, where he and Stibitz learned of each other's work. But Shannon was not actively involved in the design of the Complex Number Computer. Clearly the idea of using relays to implement binary logic was common in the late 1930's. (Also in the 1930's there was a similar discovery in Japan, but that, too, was unknown in both Germany and America.)"(84)
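The kind of "logical expressions for binary arithmetic" that Stibitz was simplifying can be illustrated with the textbook one-bit full adder: each AND, OR, and XOR corresponds to a series or parallel arrangement of relay contacts. A sketch in modern Python of the standard formulas (not Stibitz's actual circuits):

```python
# A one-bit full adder written as pure boolean expressions - the kind
# of formula Stibitz simplified into relay (series/parallel) circuits.

def full_adder(a: int, b: int, carry_in: int) -> tuple[int, int]:
    """Return (sum_bit, carry_out) for three input bits."""
    s = a ^ b ^ carry_in                               # XOR of the inputs
    carry = (a & b) | (a & carry_in) | (b & carry_in)  # majority function
    return s, carry

def add_binary(x: list[int], y: list[int]) -> list[int]:
    """Ripple-carry addition of two equal-length bit lists, LSB first."""
    result, carry = [], 0
    for a, b in zip(x, y):
        s, carry = full_adder(a, b, carry)
        result.append(s)
    result.append(carry)  # final carry becomes the high-order bit
    return result

# 5 (101) + 3 (011), written least significant bit first:
print(add_binary([1, 0, 1], [1, 1, 0]))  # [0, 0, 0, 1] = 8
```

Chaining one such adder per bit position is all a binary arithmetic unit needs, which is why the design problem reduces to simplifying boolean formulas.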
"The Complex Computer was not really a computer by the definition I have been using throughout this book; it was not programmable. Nonetheless it rightly belongs among the early computing devices because of other things that made up for that deficiency. First of all, it marked the beginning of Bell Labs in the computing field; later on they would build relay devices that did have flexible programming facilities, including conditional branching and subroutines, as we shall see. Second, the 'Model I' just described introduced the idea of automatic digital computing to the mathematics community at an early date, well before the Z3 or the Harvard Mark I were operational. That was in the summer of 1940, at a meeting of the American Mathematical Society at Dartmouth College in Hanover, New Hampshire. This public demonstration was a pivotal moment in the history of computing - it not only was a demonstration of a working digital calculating machine; it was also the first example of remote access to a computer, something that would not be repeated for another ten years. It marks a beginning of 'telecommunications' - the use of telephone lines to carry not voice messages, but coded computer data. The computer was finished by January, 1940."(92)
"Bell Labs took only a moderate interest in Stibitz's work. They were interested in machines that could solve specific problems, but they did not see building general-purpose computers as a worthwhile goal in itself. As the Complex Number Computer was nearing completion, Stibitz was designing machines with more advanced features such as floating point and suppression of leading and trailing zeros. Above all he recognized that it would not be difficult to program future relay computers by paper tape and so not restrict them to only one specific problem."(92)
"His proposal to build a successor to the Model I, a relay computer that would be programmable by paper tape, was at first rejected. Then the United States entered World War Two in December, 1941. Bell Labs found itself facing a new set of priorities. Domestic phone service was not among them, but projects involving computing were.(...)
In this and other computing projects the government was willing to underwrite development costs for new and untried technology. Stibitz moved to the National Defense Research Committee (NDRC), under the supervision of Warren Weaver, but he still kept close ties with Bell. For the duration of the war his main job concerned the design and use of programmable digital calculating machines.
Bell Labs paralleled IBM's building of the Mark I in its first steps into the large-scale computer world: both considered the computer as being far outside their normal sphere of business, but both entered that field and built computers that contributed to the war effort. But the projects themselves were funded mostly by the United States government."(93-94)
"Today's telephone system uses a variety of special-purpose digital computers for things like switching, while for other jobs like billing it uses ordinary commercial computers. AT&T is one of IBM's best customers; it also buys a lot of computers from the other manufacturers as well. Bell Labs designs and builds computer chips for their own use, and for those chips they have developed powerful and sophisticated programming tools (for example, the 'C' programming language and the Unix operating system). Whatever else may be said about Bell Labs' position in the modern computing world, they were involved in computing from the start of the age, they still are involved, and there is little question they will continue to be involved."(95)
"In conclusion, the story of Bell Laboratories' role in the invention of the computer is one of significant contributions in many areas. These are
1. The coding of information as a subject of theoretical interest in itself. From Stibitz and Claude Shannon came a notion of information as a quantity that can be treated like any other abstract quantity, as it is processed, transmitted, and coded in machinery.
2. The binary-coded-decimal system of coding numbers, still widely used where information is input or output frequently by a machine.
3. The design philosophy of incorporating redundancy in a computer code to ensure error-free operation, thus relieving the user of the burden of testing the integrity of the system's hardware.
At the same time it is clear that Bell Labs stands outside of the direct ancestral line that leads to the modern computer. In part that is because they were a regulated monopoly, in part because the relay technology they knew best was seen as unsuitable for computers after 1950. But they were, and are, pioneers just the same."(100-101)
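Two of the contributions Ceruzzi lists can be made concrete with a short sketch. The following modern Python illustration (my own, not Bell Labs' actual circuits or codes) shows binary-coded decimal, where each decimal digit keeps its own 4-bit group, and uses a simple per-digit parity bit as one possible form of the redundancy he describes: a corrupted digit is detected by the code itself, without re-testing the hardware.

```python
# Binary-coded decimal with an even-parity bit per digit - an illustrative
# redundancy scheme, not the one the Bell Labs relay machines actually used.

def to_bcd(n: int) -> list[str]:
    """Encode each decimal digit of n as a 4-bit binary group."""
    return [format(int(d), "04b") for d in str(n)]

def with_parity(groups: list[str]) -> list[str]:
    """Append an even-parity bit to each 4-bit group."""
    return [g + str(g.count("1") % 2) for g in groups]

def check(groups: list[str]) -> bool:
    """A valid code word has an even number of 1s in every group."""
    return all(g.count("1") % 2 == 0 for g in groups)

digits = to_bcd(1945)        # ['0001', '1001', '0100', '0101']
coded = with_parity(digits)  # ['00011', '10010', '01001', '01010']
assert check(coded)

# Flip one bit: the error is caught without redoing any arithmetic.
corrupted = coded[:]
corrupted[0] = "10011"
assert not check(corrupted)
```

The Bell machines in fact used richer self-checking digit codes, but the principle is the same: the extra bits carry no new information, only the means to detect when the stored information has gone wrong.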
"People often speak of computers as belonging to one of several 'generations' beginning not with the machines discussed in this book but rather with the first commercial computers. That term implies a linear evolution of machines and technology. But it is a misleading analogy: the computer did not descend from one or two ancestors. The first ones incorporated bits and pieces from this and that machine: punched tape from telegraphs, relays from telephones, punched cards from accounting machines, vacuum tubes from radios. And so it goes today. Living creatures do not evolve that way.(...) "(104)
"But if there is a 'granddaddy' of all the generations of digital computers, it would have to be the ENIAC, the first working electronic numeric digital computer. It was not a 'first generation' computer, since it was not used commercially. Perhaps it belongs to the 'zero' generation, since nearly every computer built thereafter owes something to it. What they owe is the subject of this chapter.
The ENIAC was an electronic computer: it was the only early computer except for the British Colossus that computed at electronic speeds. But like all the machines of its day it was a transitional device. It had some relay circuits. It was programmed like an IBM accounting machine. It was a computer, a powerful and fast computer at that, but its design and programming did not become the basis for any computers thereafter. Nothing quite like it was ever built again. Like the Harvard Mark I, the most important thing about the ENIAC was that it existed: it proved to the world that computing at electronic speeds was indeed possible, just as Howard Aiken had proved that Babbage's dream was more than just a dream. In 1945, when the ENIAC ran its first programs, that was important.
What is so important about computing at electronic speeds? The ENIAC computed about 500 times faster than any of the electromechanical computers, a difference of scale that made it an entirely different type of machine. With a relay machine it was always possible to measure computing power in terms of the number of human beings it could replace - that was the measure used in the published accounts of the Bell Labs machines and the Mark I. But the ENIAC was built precisely to tackle a job that by nature was beyond the capabilities of human - or electromechanical - computers. (...) (Incidentally, although little information about the British Colossus has been made public, it is clear that its designers also recognized the absolute importance of electronic speeds for the job of breaking the German codes the British had intercepted. No electromechanical machine could have done it.)"(104-105)
"The British physicist D. R. Hartree used the ENIAC between April and July, 1946, for a problem involving the flow of a compressible fluid over a surface, such as air over the surface of a wing travelling faster than the speed of sound. Hartree had already used the Differential Analyzer for that problem. He found the ENIAC's memory too limited for its solution by an iterative method, but he tried a different approach more suitable for the ENIAC's structure, and it worked well. He divided the area to be computed into 250 intervals, and the machine computed and punched answers to seven-place decimal accuracy.
Hartree's work is especially interesting because it was a full-scale problem that taxed the ENIAC's full capabilities, but it was not a ballistics calculation. Although it involved the numerical solution of an ordinary differential equation, its set-up was very different from the kinds of set-ups Eckert and Mauchly had envisioned for computing firing tables. So from the very beginning, the ENIAC showed that it had a broad range of applicability.34 Another consequence of Hartree's work was that he published an account of it in a series of notes in the British journal Nature, in 1946, as well as in a longer paper in the Philosophical Transactions of the Royal Society in 1948. Those articles helped publicize the speed and power of the new invention. In general, the ENIAC never strayed far from numerical integration of differential equations. That was what it did best. But at least it could do other things, easily enough for some persons to make the attempt. After it was installed at Aberdeen, it finally got a chance to compute firing tables, which it did reliably and well for many years."(128)
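The general method behind both Hartree's problem and the firing tables was stepwise numerical integration: divide the range into many small intervals and update the solution interval by interval. A minimal modern sketch (not Hartree's actual equations, and far simpler than what the ENIAC ran) using Euler's method, the most elementary such scheme:

```python
# Numerical solution of dy/dx = f(x, y) by stepping through equal intervals.
# Illustrative only; Hartree's compressible-flow problem was far more complex.

def euler(f, y0: float, x0: float, x1: float, steps: int) -> float:
    """Step from x0 to x1 in `steps` equal intervals, updating y each step."""
    h = (x1 - x0) / steps
    x, y = x0, y0
    for _ in range(steps):
        y += h * f(x, y)
        x += h
    return y

# dy/dx = y with y(0) = 1 has the exact solution e^x, so y(1) = 2.71828...;
# with 250 intervals (the count Hartree happened to use for his own problem)
# even this crude scheme lands close.
print(euler(lambda x, y: y, 1.0, 0.0, 1.0, 250))  # ≈ 2.713
```

Each interval requires only additions and multiplications, which is exactly why this class of problem mapped so directly onto the ENIAC's arithmetic units.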
"With the lifting of wartime secrecy the development of the digital computer entered a new phase. Communication barriers were not lifted entirely or immediately, but the published descriptions of the various projects meant that subsequent computer projects would proceed more along common lines. The 'pre-historic' age ended, and there followed a more direct line of descent to the present day. From 1945 to 1950 a consensus emerged: the computer's future lay in machines that were electronic, not electromechanical, and digital, not analog. They would compute in the binary scale, and they would have as much high-speed read-write memory as possible. The memory would be restricted to storage and retrieval of data; all arithmetic would be concentrated in a separate unit specially built for that purpose. Most important, future computers would store not only numerical data but also their own programs in that same internal high-speed memory. How those concepts, especially the last one, emerged, will be discussed next."(128)
"Finally there was the ENIAC: an electronic, programmable digital computer. In some ways it was like the other computers. It cost as much as the Mark I; it was as complex as the Differential Analyzer; it computed with electronic circuits like the IBM 603 Multiplier. But its combination of features made it unique. And in Britain there was a corresponding electronic computer, the Colossus (actually, ten copies were built by 1946). It did not do arithmetic, but used vacuum tube circuits to help break the codes of intercepted German radio messages. (Only a handful of persons knew of its existence; not until 1970 was it first publicly mentioned in a paper by I. J. Good.) Like the ENIAC the Colossus was to be an influential machine, but for most in Britain the first news of an electronic digital computer was Hartree's note on the ENIAC in Nature in 1946."(131)
"In 1936, a decade before the first real computing machines were built, the British mathematician Alan Turing wrote a paper concerning the concept of computing, in which he established the equivalence of data and instructions.5 Alan Turing's contribution to the history of computing is difficult to measure. In the postwar conferences on the design of computers he was almost never cited; few persons attending them even knew who he was. Even in the development of British computers there is little agreement as to what he contributed. He was a member of the team at Bletchley Park that designed and built the Colossus. But just what he did there is still classified information, although it is known that he made absolutely crucial contributions to their cracking of the coded messages sent by the Germans' 'Enigma' and Geheimschreiber ('secret writer') machines. After the war Turing was involved with two important British computing projects, the Automatic Computing Engine (ACE) at the National Physical Laboratory in Teddington, and later the computing projects at the University of Manchester. But his contribution to those projects is by no means as clear as, say, Eckert's to the ENIAC or Stibitz's to the Complex Number Computer. (He died in 1954 before it occurred to anyone to ask him those questions. So we may never know.)"(135)
"Compiler programs did not appear until the 1950s. The immediate impact of the stored program principle was to increase the range of problems a computer could solve. Computers before 1945 were best suited to numerical problems coded as long sequences of arithmetic operations. Conditional branches were rare, if they appeared at all. The early machines like the Bell Labs Model V, which had conditional branch facilities in the form of separate loops of tapes, constrained the programmer to use it only in cases that were well known in advance. He or she had to know just where in the execution of a problem a branch to a subroutine tape would occur, roughly how often that would occur, and so on.
But with a stored-program computer the conditional branch facility becomes an integral part of the programming process. The programmer may specify loops be performed over and over until some limit is reached, as before, but his program need only sketch out the general flow of instructions. He need not know the particulars of just where and when those loops enter into a program - the computer itself can keep track of that. By having the computer modify the program, a short piece of code can express a much longer sequence of machine instructions. To a computer, a terse string of a few program statements is like a sonnet: it evokes a much more complex mix of machine actions and states. That in turn reduces the length of the code that has to be stored internally, reducing the computer's memory requirements to the point where they become technically feasible."(144-145)
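The point about a terse stored program evoking a long sequence of machine actions can be shown with a toy machine (my own illustration, not any historical instruction set): program and data share one memory, and a single conditional-branch instruction lets four stored instructions drive a loop of arbitrary length, the machine itself keeping track of where the loop re-enters.

```python
# A toy stored-program machine: instructions and data live in the same memory.
# Four instructions, executed dozens of times, sum the integers 1..10.

def run(memory: list) -> list:
    pc = 0  # program counter: index of the cell being executed
    while memory[pc][0] != "HALT":
        cell = memory[pc]
        if cell[0] == "ADD":            # ("ADD", a, b): mem[a] += mem[b]
            _, a, b = cell
            memory[a] += memory[b]
            pc += 1
        elif cell[0] == "BRLE":         # ("BRLE", a, b, t): jump to t
            _, a, b, t = cell           #   while mem[a] <= mem[b]
            pc = t if memory[a] <= memory[b] else pc + 1
    return memory

memory = [
    ("ADD", 5, 6),      # 0: total += i
    ("ADD", 6, 7),      # 1: i += 1
    ("BRLE", 6, 8, 0),  # 2: if i <= limit, branch back to cell 0
    ("HALT",),          # 3: stop
    None,               # 4: (unused)
    0,                  # 5: total
    1,                  # 6: i
    1,                  # 7: step
    10,                 # 8: limit
]
print(run(memory)[5])  # → 55
```

On a tape-programmed machine like the Model V, each pass of such a loop would have had to be anticipated on a physical tape; here the branch target is just another number in memory, which is the essence of the stored program principle.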
"All that activity, and with it the debate over machine vs. human intelligence, is the consequence of the stored program principle, coupled with the high switching speeds of electronic circuits. The more immediate consequence was the rapid development of programming languages, letting persons from all walks of life avail themselves of the new invention."(147)
"From the writings of the computer pioneers in the late 1930's one sees a modest vision: a few computers in the universities, some in government, modest-sized computers for industrial use, but not much more. They hardly foresaw the giant scale of the first computers, nor did they see the impact of their creation on everyday life. In America today, large and complex computer systems are an important part of industrial production, payroll and accounting, taxation, telephone and television service, transportation, and military defense."(149)
"What they did not foresee was that the computer would soon be doing work of a very different kind, work that no human beings could do. With the adoption of the stored program principle, it became possible to program computers to sort and otherwise keep track of vast quantities of information, a task that human beings simply cannot do above a small threshold, and for which manual aids like file cards are likewise inadequate. The other unforeseen development was the adoption of high-level languages in lieu of arcane binary codes to get a computer to do its work. That innovation, also a consequence of the stored program principle, meant that a trained mathematician does not have to accompany every computer to its every job. And finally there has been the revolution in electronics technology that reduced the size and electric power requirements for computers - who could have foreseen that?"(150)
"But has there really been a computer 'revolution' after all? Some have argued that its invention has not fundamentally altered social or economic patterns of American life; if anything, the computer has made it easier for the patterns existing before 1945 to persist. But 'revolution' is an appropriate word aside from the question of the computer's impact on society. Its invention has triggered a revolution in thinking, in our understanding of our place in the universe, much as Copernicus's On the Revolution of the Heavenly Orbs did after A.D. 1600.
For one thing, it changed the whole concept of a 'machine.' By the classical definition, a machine is an assemblage of stiff elements, so arranged as to perform one specific motion. Any other motions are defined as 'play,' of which a good machine has little. By that definition a computer is hardly a machine, since it is certainly not restricted to one specific 'motion.' A computer is a universal machine; it can do 'anything' - that is, whatever we can program it to do. And we continue to be startled at the ingenuity of human beings in dreaming up new applications for the computer. Once programmed it begins to look more like a classical machine, doing one specific thing, but whatever that thing is can neither be specified nor foreseen by the computer manufacturer as it is sent out the factory door. That quality is a natural consequence of the fact that even when programmed, it is not possible to determine the actions of a computer in advance."(151)
"Computer pioneers used a biological analogy in sketching out their thoughts on building their machinery - for example, they called the storage unit a 'memory,' the programming code a 'language.' Von Neumann even based his description of the EDVAC on a neural notation, referring to the computer's various 'organs.' Those terms have probably caused more confusion than anything else. But they have won out over more prosaic terms like store or programming plan.
But the analogy works the other way, too: the structure and function of the computer provides human beings with a powerful model to aid our understanding of human thought. Mechanical analogies are certainly nothing new (Thomas Hobbes's introduction to his Leviathan being a famous example) but the analogy of the computer to the brain is much more powerful precisely because it is not a simple machine in the classical sense. The rich and complex behavior of a computer, set in motion by its executing an internally stored program of modest length, is an appropriate model for biological systems, in which an organism's characteristics represent an 'expression' of information coded in its DNA. Modern computers contain read-only memories (ROMs) which, like an organism's DNA, cannot be altered. They also contain random-access memories (RAMs) whose contents can be changed at will; these information stores also have their biological counterparts in those parts of the brain that can remember experiences and alter their future behavior based on what they have learned.
All that reasoning by analogy has its limits, of course: the brain is not a computer, nor is human or animal behavior the expression of 'programs.' But there is enough in common in that both represent complex systems. Mathematical theories of such systems have proven useful to our understanding of both."(151-152)