A History of Computing
Module 1 Objectives

  a. Describe the machine, designed over a hundred years ago, that is considered to be the first computer

  b. Discuss historical currents that led to the development of modern computers

  c. Describe the computer generations (one to four, or five?)

  d. Provide you with an opportunity to develop your surfing skills using a browser program and the World Wide Web

I. The Historic Highlights:

      1. Abacus (ancient times) – The abacus is a mechanical aid used for counting. Addition, subtraction, multiplication, and division can all be performed on a standard abacus.

     2. Charles Babbage (Pages 2, 6, 7, 8) – A British mathematician and inventor who designed the Difference Engine (a machine to solve polynomials) in 1822 and later the Analytical Engine (a general-purpose computer).

    (1) The first device that might be considered to be a computer in the modern sense of the word was conceived by the eccentric British mathematician and inventor Charles Babbage.

     (2) In 1822, Babbage proposed building a machine called the Difference Engine to automatically calculate mathematical tables. The Difference Engine was only partially completed when Babbage conceived the idea of another, more sophisticated machine called the Analytical Engine.
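
To see how such a machine could tabulate a polynomial using nothing but addition, here is a minimal modern sketch in Python of the method of finite differences the Difference Engine mechanized. The function name difference_table and the sample polynomial are mine, purely for illustration; this is not Babbage's design, only the arithmetic idea behind it.

    # A sketch of the method of finite differences: once the first
    # column of differences is seeded, every further table entry is
    # produced by additions alone -- no multiplication is needed.

    def difference_table(coeffs, start, count):
        """Tabulate the polynomial with the given coefficients (lowest
        power first) at start, start+1, ..., by repeated addition."""
        degree = len(coeffs) - 1

        def evaluate(x):
            return sum(c * x ** k for k, c in enumerate(coeffs))

        # Seed the engine: the first degree+1 values, computed directly.
        row = [evaluate(start + i) for i in range(degree + 1)]

        # Initial differences -- one per "wheel column" of the engine.
        diffs = []
        for _ in range(degree + 1):
            diffs.append(row[0])
            row = [b - a for a, b in zip(row, row[1:])]

        # Turn the crank: each step adds each difference into the one above.
        table = []
        for _ in range(count):
            table.append(diffs[0])
            for i in range(degree):
                diffs[i] += diffs[i + 1]
        return table

    # Example: tabulate x**2 + x + 41 for x = 0..5.
    print(difference_table([41, 1, 1], 0, 6))   # [41, 43, 47, 53, 61, 71]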

     3. Howard Aiken and the Harvard Mark I

  [Figures: the Difference Engine (© IBM) and the Harvard Mark I (© IBM)]

In 1936, Howard Aiken, a young professor at Harvard, read Lady Lovelace's notes on Charles Babbage's work and began thinking about designing and building a programmable analytical engine. Aiken prepared a proposal for its development and asked IBM for financial support. IBM invested one million dollars and, as a result, the Harvard Mark I was born in 1944. The Mark I was 8 feet high and 55 feet long, and was made of streamlined steel and glass. It created major publicity and stimulated interest in research and discovery in this field; this computer was the main reason for IBM's commitment to the development of computer technology.

II. The Historical Perspective of the Computer

Computers are not a new invention, but early computing devices were mechanical. The first known automatic calculating machine was invented in France in 1642 by Blaise Pascal. In recognition of Pascal's contribution to the computing field, a computer programming language has been named for him. This language, Pascal, is now used to teach computer programming to education majors at Ball State University.

The next significant improvement in calculating devices was made in 1673 by Gottfried Wilhelm von Leibniz. Leibniz is best known for his work in developing the branch of mathematics known as calculus. He invented a calculator that could add, subtract, multiply, and divide accurately.

As inventors worked to improve mechanical calculators, they needed a better way to input data than setting clockwork dials. In 1801, a French weaver named Joseph Jacquard developed a loom (Page 5) that could be programmed by using holes punched in cards.

In 1822, the British mathematician Charles Babbage created the first modern computer design. Later, Babbage turned to developing a new device, called the Analytical Engine, which was designed to use a form of punched cards similar to Jacquard's for data input. This device would have been a full-fledged modern computer with a recognizable IPOS cycle (input, processing, output, and storage).
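
To make the IPOS cycle concrete, here is a minimal modern illustration (mine, not part of the historical record) in which a short Python program exercises all four stages; the file name result.txt is an arbitrary choice for the example.

    # Input: read a number typed by the user.
    number = float(input("Enter a number: "))

    # Processing: compute its square.
    square = number ** 2

    # Output: display the result.
    print("The square of", number, "is", square)

    # Storage: save the result to a file for later use.
    with open("result.txt", "w") as f:
        f.write(str(square))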

The next major figure in the history of computing was Dr. Herman Hollerith, an American statistician. In 1887, Dr. Hollerith devised a punched-card system for tabulating the results of the United States census. His innovations enabled the 1890 census to be completed in six weeks, a big improvement over the 1880 census, which had taken several years to tabulate by hand.

By the 1930s, technology had begun moving toward the modern computer. In 1973, a U.S. court declared John Atanasoff, a professor at Iowa State University, to be the “inventor of the electronic computer,” based on an electronic calculator Atanasoff had built in the 1930s. World War II created a need for the American military to calculate missile trajectories quickly, and the military asked Dr. John Mauchly at the University of Pennsylvania to develop a machine for this purpose.

Dr. Mauchly and his collaborator J. Presper Eckert built a huge device called ENIAC (Page 14). Historians agree that ENIAC was the first large-scale electronic digital computer, and it led directly to the world's first commercial computer system. ENIAC used 17,480 vacuum tubes, and it is said that the lights would dim in Philadelphia whenever ENIAC was turned on. ENIAC was 10 feet high, 3 feet wide, and 100 feet long, and weighed 30 tons. It was a true programmable digital computer rather than an electronic calculator. One thousand times faster than any existing calculator, ENIAC gripped the public's imagination after newspaper reports described it as an “Electronic Brain.” ENIAC took only 30 seconds to compute trajectories that would have required 40 hours of hand calculation (40 hours is 144,000 seconds, so this works out to a speedup of 4,800 to 1 over hand work).

III. The Computer Generations (Module 1.4, Page 15)

1. The First Generation (1951 to 1959) – The Vacuum Tube

The period from 1951 to 1959 is called the first modern computer generation. The characteristics of the first-generation computer were vacuum tubes and magnetic drums. First-generation computers were large and slow, and they produced a lot of heat. The vacuum tubes failed frequently, so first-generation computers were down much of the time.

In 1953, IBM announced its first commercial computer, the IBM 701. IBM made a total of 19 of these computers; at the time, industry leaders felt that 19 computers should be sufficient to take care of the computing needs of American business! Large, slow, and expensive, these first computers required special facilities and highly trained personnel. In 1957, magnetic tape was introduced as a faster and more convenient secondary storage medium. A single tape could hold the contents of approximately 1,100 punched cards.

2. The Second Generation (1959 to 1963) – The Transistor

The second modern computer generation occurred during 1959 to 1963. The characteristic of the second-generation computer was the transistor. The transistor was at work in the computer by 1956. Coupled with early advances in magnetic-core memory, transistors led to second-generation computers that were smaller, faster, more reliable, and more energy-efficient than their predecessors. The first large-scale machines to take advantage of transistor technology were early supercomputers, such as those built by IBM.

Second-generation computers replaced machine language with assembly language, allowing abbreviated programming codes to replace long, difficult binary codes. Throughout the early 1960s, there were a number of commercially successful second-generation computers used in business, universities, and government, from companies such as Burroughs, Control Data, IBM, Sperry-Rand, and others. These computers were of solid-state design and contained transistors in place of vacuum tubes. They also contained all the components we associate with the modern-day computer: printers, tape storage, disk storage, memory, operating systems, and stored programs.

By 1965, most large businesses routinely processed financial information using second-generation computers. A second-generation computer could print customer invoices and, minutes later, design products or calculate paychecks. More sophisticated high-level languages such as COBOL (Common Business-Oriented Language) and FORTRAN (Formula Translator) came into common use during this time, and their use has continued to the present day. New types of careers (programmer, analyst, and computer systems expert) and the entire software industry began with second-generation computers.

3. The Third Generation (1963 to 1975) – The Integrated Circuit

The third modern computer generation occurred during 1963 to 1975. The characteristic of the third-generation computer was the integrated circuit. Though transistors were clearly an improvement over the vacuum tube, they still generated a great deal of heat, which damaged the computer's sensitive internal parts. Jack Kilby, an engineer with Texas Instruments, developed the integrated circuit (IC) in 1958. The IC combined three electronic components onto a small disc of semiconductor material. Scientists later managed to fit ever more components on a single semiconductor chip, and as a result, computers became ever smaller as more components were squeezed onto the chip.

4. The Fourth Generation (1975 to Today) – The PC

The characteristic of the fourth-generation computer was the microprocessor chip. In the early 1970s, an Intel Corporation engineer, Dr. Ted Hoff, was given the task of designing integrated circuits to power a calculator. Hoff decided that he could avoid costly redesigns by creating a tiny computer on a chip. The result was the Intel 4004, the world's first microprocessor. A microprocessor holds the entire control unit and arithmetic-logic unit of a computer on a single chip. In 1976, Steve Jobs and Steve Wozniak started their own business in Jobs's parents' garage in California to assemble microcomputers. By 1982, their company, Apple Computer, Inc., was listed among the 500 largest companies in the United States.

In 1981, IBM introduced its personal computer (PC) for use in the home, office, and schools. The 1980s saw an expansion of computer use in all three arenas as clones of the IBM PC made the personal computer even more affordable. The number of personal computers in use more than doubled, from 2 million in 1981 to 5.5 million in 1982. Ten years later, 65 million PCs were in use.

5. A Fifth Generation? – AI and Natural Languages

If there is a fifth generation, according to Baber and Meyer (1999), it has been slow in coming. For years, experts have forecast that the trademark of the next generation will be artificial intelligence, in which computers exhibit some of the characteristics of human intelligence. LaMorte and Lilly (2000) indicated that, using recent engineering advances, computers are able to accept spoken-word instruction (voice recognition) and imitate human reasoning. The ability to translate a foreign language is also moderately possible with fifth-generation computers.