No one can say positively when this most unscientific word became common coin. Perhaps the reason is that electronic computers were not invented on one particular day, like the electric light bulb. They have been developing over the years. What year shall we credit with the birth of the electronic computer? 1944? That was the year when the first all-automatic calculating machine started work in America. It was called "ASCC," the "Automatic Sequence Controlled Calculator," and its intellectual father was Howard H. Aiken, director of the Mathematical Institute of Harvard University.
Or perhaps 1938, when the plans for the "ASCC" were drawn up? We might take 1934 too, the year Konrad Zuse began experimenting with automatic calculators in Germany. Zuse has never stopped experimenting since then. The "Z 22" at Saarbrücken, which we were talking about at the beginning of this book, came from his factory.
1934, 1938 or 1944 - the dream of building an electronic computer is much older. For once, however, it was not the ancient Romans, but the German theologian and mathematician, Wilhelm Schickard, who in 1630 first envisaged a mechanical calculating machine. It could add, subtract, multiply and divide. Only 15 years later, Blaise Pascal, the famous French philosopher and mathematician, working independently, built a similar machine.
The idea of building a really big automatic computer did not strike anyone, as far as we know, until 200 years later, when it occurred to the British mathematician, Charles Babbage. In the years between 1835 and 1860 he designed, with government support, a mechanical calculating machine which was so ingenious that the inventor's drawings are still regarded as models of their kind. Unfortunately the construction had a slight drawback: it was so complicated that no contemporary mechanic could make a copy of it.
Disheartenment prevailed until about 1930. That was the year when plans for building a fully automatic super-calculating machine began to occupy the scientists again. A British professor, Alan Turing, tried out the possibilities of using electrical impulses instead of mechanically operated levers. On paper he worked out the idea of an electronic calculating machine. His fundamental principle was to convert numbers into electronic signals and then to have vacuum tubes add them up. His ideas, together with the knowledge gained by Professor Wiener and his colleagues in their experiments with artificial animals, form the foundation of all modern methods of electronic calculating.
After Turing had given thought to the matter, however, almost another 15 years were to pass before the first calculator - the "ASCC" already mentioned - really started work. Afterwards it was given the proud name of "Mark I." And it was in fact a milestone. For the first time it became possible to calculate without levers, cogwheels and other relatively crude mechanical means of power transmission, and to use instead vacuum tubes, relays and other electrical contacts. The technicians had acquired enough experience with these in the building of radio apparatus to be able to construct calculating devices of any desired size and speed.
So they imagined.
It was a hard blow to them when they discovered that even electronic computers had their limits. In the "ENIAC" ("Electronic Numerical Integrator And Calculator"), completed by the University of Pennsylvania in 1946, this fact became painfully obvious. "ENIAC" contained no fewer than 18,000 electronic tubes - in effect, radio vacuum tubes - and they would certainly have been enough for it to master very extensive calculations, if only the machine had not kept breaking down. Every day one or more tubes burned out. Even without the aid of an electronic computer, it was easy to work out on one's fingers that if still larger computers were built, tube failures would accumulate to such an extent that not a single calculation could be carried through to a successful conclusion.