

Computers and Transistors

Two great inventions of the twentieth century are locked together: computers and transistors.

In the late 1930s, Howard Aiken, a brilliant Harvard graduate student, talked IBM's president Thomas Watson into bankrolling his Automatic Sequence Controlled Calculator, the Harvard Mark I, a machine that could solve complex differential equations. It was a highly specialized machine, but one that military planners wanted.

In 1943, the first programmable electronic calculating machine, the Colossus, was built at Bletchley Park by engineer Tommy Flowers to speed the codebreaking work of Alan Turing and his colleagues. The room-sized machine had almost no memory and could not store programs (they had to be set up each day by an army of technicians who programmed it by manipulating hundreds of plugs and switches), but it could finish in two hours calculations that would have taken humans eight weeks. It went on to crack the German Lorenz cipher.

Around the same time, John Mauchly and J. Presper Eckert Jr. were building ENIAC, the Electronic Numerical Integrator and Computer. Its primary purpose was to calculate trajectory tables for U.S. Army artillery crews. While the Colossus had 1,500 vacuum tubes, the ENIAC had 18,000.

It was the vacuum tubes that gave computer builders their biggest headaches. They were expensive (some cost up to $1,000), and they generated heat, which caused frequent burnouts. Enter William Shockley, Walter Brattain, and John Bardeen, researchers at Bell Labs, who created the transistor in 1947. Transistors can either transfer or resist electricity, hence the name (a blend of "transfer" and "resistor").

Beginning in 1958, the transistor was further miniaturized when Texas Instruments engineer Jack Kilby and Fairchild Semiconductor scientist Robert Noyce independently combined transistors, capacitors, and resistors on a single chip to form a complete circuit: the integrated circuit (IC).

Atoms and Bits

Two laws that grew out of computers and transistors drive the new economy.

In 1965, Gordon Moore observed that an IC that had cost $1,000 in 1959 had fallen in price to $10, and Moore's Law was born: the number of transistors on a chip would double every 12 to 18 months, reducing the price of each new generation of computers while increasing their speed. Moore joined forces with Noyce in 1968 to found Intel. Three years later, Intel engineer Ted Hoff added memory and programmability to the IC, creating the microprocessor.

Moore's Law has operated with remarkable accuracy for at least thirty years and should continue to do so for at least the next five or six generations of processors. Thus, everything having to do with digital technology gets faster, smaller, and cheaper, and as it does, it is put to use in newer technologies. For example, in 1980 a gigabyte of storage cost several hundred thousand dollars and took up an entire room. Today, the same amount of storage fits on a credit card and costs less than $200.
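
To make the arithmetic concrete, here is a minimal Python sketch of that compounding. The 18-month doubling period and the starting point of roughly 2,300 transistors on a 1971-era chip are assumptions chosen for illustration, not figures from this article:

# A rough sketch of Moore's Law as compound doubling.
# Assumptions (illustrative only): a 1971 starting point of about
# 2,300 transistors per chip and a doubling period of 18 months.

def transistors_per_chip(year, start_year=1971, start_count=2_300,
                         doubling_months=18):
    """Estimate the transistor count of a chip in a given year."""
    months_elapsed = (year - start_year) * 12
    doublings = months_elapsed / doubling_months
    return start_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001):
    print(f"{year}: about {transistors_per_chip(year):,.0f} transistors per chip")

The same compounding that multiplies transistor counts also divides the price of a given amount of computing or storage, which is the pattern behind the gigabyte example above.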

Robert Metcalfe, founder of 3Com Corporation and designer of Ethernet, observed that new technologies are valuable only if many people use them, and theorized that the usefulness of a network equals the square of the number of its users. Thus, the more people who use a piece of software, a network, a standard, or a book, the more valuable it becomes and the more new users it attracts. For example, if only a few people own telephones, those telephones are of little use.
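
A minimal sketch of that square relationship, using arbitrary user counts chosen only for illustration:

# A minimal sketch of Metcalfe's Law: a network's value grows
# roughly with the square of its number of users.

def network_value(users):
    """Relative value of a network under Metcalfe's Law.

    Strictly, the number of possible pairwise connections is
    users * (users - 1) / 2, which grows at the same squared rate.
    """
    return users ** 2

# Doubling the user base roughly quadruples the relative value.
for users in (10, 100, 1_000, 10_000):
    print(f"{users:>6} users -> relative value {network_value(users):>12,}")

Doubling the user base roughly quadruples the value, which is why a network that reaches critical mass tends to pull further and further ahead of its rivals.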

In Unleashing the Killer App, Larry Downes and Chunka Mui write that these two laws combine to form the Law of Disruption: social, political, and economic systems change incrementally, but technology changes exponentially. Thus, while human systems follow a slow and steady growth curve, technology accelerates and creates a chasm between the two rates of change. For example, television redefines the family, cloning challenges character and personhood, and electronic commerce catches governments off guard.
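
The chasm the authors describe can be illustrated with a small comparison of incremental versus exponential change; the 10-percent yearly step and the yearly doubling below are arbitrary assumptions, chosen only to show how quickly the two curves diverge:

# An illustration of the Law of Disruption: incremental (linear) change
# versus exponential change, starting from the same baseline.

incremental = 1.0   # grows by a fixed step each year
exponential = 1.0   # doubles each year

for year in range(1, 11):
    incremental += 0.1
    exponential *= 2
    gap = exponential - incremental
    print(f"year {year:2d}: incremental {incremental:5.1f}   "
          f"exponential {exponential:7.1f}   gap {gap:7.1f}")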
