Evolution Of The Computer - A Brief History




History

Computing in the mechanical era

  The concept of calculating machines evolved long before the invention of electrical and electronic devices. The first mechanical calculating apparatus was the abacus, invented around 500 BC in Babylon. It was used extensively, without improvement, until 1642, when Blaise Pascal designed a calculator that employed gears and wheels. But it was not until the early 1800s that a practical, geared, mechanical calculator became available. Such a machine could perform calculations but could not follow a program to compute results automatically.

In 1823, Charles Babbage, aided by Augusta Ada Byron, the Countess of Lovelace, started an ambitious project to produce a programmable calculating machine for the Royal Navy of Great Britain. Input to this mechanical machine, named the Analytical Engine, was given through punched cards. The engine was to store 1,000 20-digit decimal numbers and a modifiable program that could vary the operation of the machine so that it could execute different computing jobs. But even after several years of effort, the machine, which had more than 50,000 mechanical parts, could not operate reliably because the parts could not be machined to the required precision.


Computing in the electrical era

With the availability of electric motors in the late 1800s, a host of motor-operated calculating machines based on Pascal's calculator were developed. A mechanical machine driven by a single electric motor was developed in 1889 by Herman Hollerith to count, sort, and collate data stored on punched cards. Hollerith formed the Tabulating Machine Company in 1896. This company later merged into what became International Business Machines (IBM), and the mechanical computing machine business thrived.

In 1941, Konrad Zuse completed the Z3, an electromechanical, relay-based calculating computer that was used in Germany during World War II. The Colossus, designed by Tommy Flowers and completed in 1943 at Bletchley Park in Britain, is often credited as the first electronic computer; it was a fixed-program machine and was not general-purpose programmable. J.W. Mauchly and J.P. Eckert of the University of Pennsylvania completed the first general-purpose electronic digital computer in 1946. Called the ENIAC (Electronic Numerical Integrator and Computer), it used about 17,000 vacuum tubes and over 500 miles of wire, weighed 30 tons, and could perform around 5,000 additions per second. The IAS computer, completed in 1952 by John von Neumann and others at the Institute for Advanced Study in Princeton, laid the foundation of the general structure of subsequent general-purpose computers.

In the early 1950s, Sperry-Rand Corporation launched the UNIVAC I, UNIVAC II, and UNIVAC 1103 series, while IBM brought out the Mark I and the 701 series. All these machines used vacuum tubes. The transistor was invented at Bell Labs in 1947. In 1958, IBM, International Computers Limited (ICL), Digital Equipment Corporation (DEC), and others brought out general-purpose transistorized computers that were faster, smaller, lighter, less power-hungry, and more reliable. Meanwhile, at Texas Instruments, Jack Kilby invented the integrated circuit in 1958, which led to the development of digital integrated circuits in the 1960s and, in turn, to machines such as the IBM 360/370, the PDP-8, and the HP 9810 in the mid-to-late 1960s. These computers used medium- and small-scale integrated circuits (MSI and SSI). In 1971, Intel Corporation announced the 4004, a single-chip microprocessor built as a large-scale integrated circuit. In 1972, the 8-bit 8008 microprocessor was introduced, followed in 1974 by the improved 8-bit 8080 and Motorola MC 6800. The last of Intel's 8-bit microprocessor family, the 8085, was introduced as a general-purpose processor in 1976. The 16-bit 8086 was released in 1978 and the 8088 in 1979. Though desktop computers had been available from 1975 onwards, none gained as much popularity as the IBM PC.


In 1981, IBM used the 8088 microprocessor in its personal computer. The 16-bit 80286 microprocessor came in 1982 as an updated version of the 8086. The 32-bit 80386 arrived in 1985 and the 80486 in 1989. With the introduction of the Pentium in 1993, a highly improved personal computer became available at an affordable price. With the development of desktop computers, in the form of personal computers, and of networking, the whole scenario of computing has undergone a sea change. Portable computers such as laptops and palmtops are now available, which can execute programs, store data, and output information at speeds higher than were possible with any of the earlier computers. Efforts are now being made to integrate the palmtop computer with the mobile phone.

Along with the development of computer hardware, programming languages were devised and perfected. In the 1950s, assembly language was developed for the UNIVAC computers. In 1957, IBM developed the FORTRAN language. In the years that followed came programming languages such as ALGOL, COBOL, BASIC, Pascal, C/C++, Ada, and Java. Further, with the creation of the operating system (OS), a supervisor program that manages computer resources and controls the CPU to perform various jobs, the computer's operational capability reached a new dimension. A variety of operating systems exist today; among those that gained popularity are UNIX for large and mini-computers and MS-DOS and MS Windows for personal computers. With the availability of Linux, however, a trend of changing over to this operating system is under way.

 





