History of Database Systems




Information processing drives the growth of computers, as it has from the earliest
days of commercial computers. In fact, automation of data processing tasks
predates computers. Punched cards, invented by Herman Hollerith, were used
at the very beginning of the twentieth century to record U.S. census data, and
mechanical systems were used to process the cards and tabulate results. Punched
cards were later widely used as a means of entering data into computers.


Techniques for data storage and processing have evolved over the years:


• 1950s and early 1960s: Magnetic tapes were developed for data storage. Data
processing tasks such as payroll were automated, with data stored on tapes.
Processing of data consisted of reading data from one or more tapes and
writing data to a new tape. Data could also be input from punched card
decks, and output to printers. For example, salary raises were processed by
entering the raises on punched cards and reading the punched card deck in
synchronization with a tape containing the master salary details. The records
had to be in the same sorted order. The salary raises would be added to the
salary read from the master tape, and written to a new tape; the new tape
would become the new master tape.


Tapes (and card decks) could be read only sequentially, and data sizes were
much larger than main memory; thus, data processing programs were forced
to process data in a particular order, by reading and merging data from tapes
and card decks, as sketched below.
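To make this merge-style processing concrete, here is a minimal Python sketch of the master-file update just described. The record layout, identifiers, and figures are illustrative assumptions, not details from the original account; the point is that both inputs are consumed strictly in sorted order, as a tape drive would require.

```python
# Minimal sketch of a sequential master-file update, assuming both "tapes"
# are lists of records sorted by employee id. All names and figures are
# illustrative.

def apply_raises(master_tape, raise_deck):
    """Read the sorted master tape and the sorted raise deck in step,
    writing an updated master tape; inputs are read strictly sequentially."""
    new_master = []
    raises = iter(raise_deck)
    pending = next(raises, None)
    for emp_id, salary in master_tape:
        # Skip raise records for ids not present on the master tape.
        while pending is not None and pending[0] < emp_id:
            pending = next(raises, None)
        if pending is not None and pending[0] == emp_id:
            salary += pending[1]
            pending = next(raises, None)
        new_master.append((emp_id, salary))
    return new_master

old_master = [(101, 50000), (102, 62000), (105, 48000)]  # (id, salary)
raise_deck = [(102, 3000), (105, 1500)]                  # (id, raise)
print(apply_raises(old_master, raise_deck))
# [(101, 50000), (102, 65000), (105, 49500)]
```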


• Late 1960s and 1970s: Widespread use of hard disks in the late 1960s changed
the scenario for data processing greatly, since hard disks allowed direct access
to data. The position of data on disk was immaterial, since any location on
disk could be accessed in just tens of milliseconds. Data were thus freed from
the tyranny of sequentiality. With disks, network and hierarchical databases
could be created that allowed data structures such as lists and trees to be
stored on disk. Programmers could construct and manipulate these data
structures.


A landmark paper by Codd [1970] defined the relational model and
nonprocedural ways of querying data in the relational model, and relational
databases were born. The simplicity of the relational model and the possibility
of hiding implementation details completely from the programmer were
enticing indeed (a brief example appears below). Codd later won the prestigious
Association for Computing Machinery Turing Award for his work.
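As a hedged illustration of what "nonprocedural" means here, the sketch below poses a declarative SQL query through Python's built-in sqlite3 module. The table and data are hypothetical; the point is only that the query states what is wanted while the engine chooses how to retrieve it.

```python
# Illustration of a declarative (nonprocedural) query: we state the
# condition; the engine decides the access path. Table and data are invented.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE instructor (id INTEGER, name TEXT, salary REAL)")
conn.executemany(
    "INSERT INTO instructor VALUES (?, ?, ?)",
    [(1, "Srinivasan", 65000), (2, "Wu", 90000), (3, "Mozart", 40000)],
)

# No pointer chasing and no traversal order, unlike the procedural code
# that network and hierarchical systems demanded.
for (name,) in conn.execute("SELECT name FROM instructor WHERE salary > 50000"):
    print(name)  # Srinivasan, Wu
```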


• 1980s: Although academically interesting, the relational model was not used
in practice initially, because of its perceived performance disadvantages; relational
databases could not match the performance of existing network and hierarchical
databases. That changed with System R, a groundbreaking project
at IBM Research that developed techniques for the construction of an efficient
relational database system. Excellent overviews of System R are provided by
Astrahan et al. [1976] and Chamberlin et al. [1981]. The fully functional System
R prototype led to IBM’s first relational database product, SQL/DS. At
the same time, the Ingres system was being developed at the University of
California at Berkeley. It led to a commercial product of the same name. Initial
commercial relational database systems, such as IBM DB2, Oracle, Ingres,
and DEC Rdb, played a major role in advancing techniques for efficient processing
of declarative queries. By the early 1980s, relational databases had
become competitive with network and hierarchical database systems even in
the area of performance. Relational databases were so easy to use that they
eventually replaced network and hierarchical databases; programmers using
those older systems had been forced to deal with many low-level implementation
details and to code their queries in a procedural fashion. Most importantly,
they had to keep efficiency in mind when designing their programs, which
involved a lot of effort. In contrast, in a relational database, almost all these
low-level tasks are carried out automatically by the database, leaving the
programmer free to work at a logical level. Since attaining dominance in the
1980s, the relational model has reigned supreme among data models.


The 1980s also saw much research on parallel and distributed databases,
as well as initial work on object-oriented databases.


• Early 1990s: The SQL language was designed primarily for decision support
applications, which are query-intensive, yet the mainstay of databases in the
1980s was transaction-processing applications, which are update-intensive.
Decision support and querying re-emerged as a major application area for
databases. Tools for analyzing large amounts of data saw rapid growth in
usage.


Many database vendors introduced parallel database products in this
period. Database vendors also began to add object-relational support to their
databases.


• 1990s: The major event of the 1990s was the explosive growth of the World
Wide Web. Databases were deployed much more extensively than ever before.
Database systems now had to support very high transaction-processing rates,
as well as very high reliability and 24 × 7 availability (availability 24 hours
a day, 7 days a week, meaning no downtime for scheduled maintenance
activities). Database systems also had to supportWeb interfaces to data.


• 2000s: The first half of the 2000s saw the emergence of XML and the associated
query language XQuery as a new database technology. Although XML is
widely used for data exchange, as well as for storing certain complex data
types, relational databases still form the core of a vast majority of large-scale
database applications. This period also witnessed growing use of
“autonomic-computing/auto-admin” techniques for minimizing system
administration effort, as well as significant growth in the use of
open-source database systems, particularly PostgreSQL and MySQL.


The latter part of the decade has seen growth in specialized databases for
data analysis, in particular column-stores, which in effect store each column
of a table as a separate array (see the sketch following this list), and highly
parallel database systems designed
for analysis of very large data sets. Several novel distributed data-storage
systems have been built to handle the data management requirements of very
large Web sites such as Amazon, Facebook, Google, Microsoft and Yahoo!,
and some of these are now offered as Web services that can be used by
application developers. There has also been substantial work on management
and analysis of streaming data, such as stock-market ticker data or computer
network monitoring data. Data-mining techniques are now widely deployed;
example applications include Web-based product-recommendation systems
and automatic placement of relevant advertisements on Web pages.
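As a toy illustration of the column-store idea mentioned above, the following Python sketch contrasts a row layout with a columnar layout. The table, column names, and figures are invented for the example.

```python
# Toy contrast between a row layout and a column layout; the table and
# its contents are invented for illustration.

rows = [("widget", "east", 120), ("gadget", "west", 340), ("widget", "west", 95)]

# Column store: each column of the table kept as a separate array.
columns = {
    "product": [r[0] for r in rows],
    "region":  [r[1] for r in rows],
    "sales":   [r[2] for r in rows],
}

# An analytic aggregate touches only the one column it needs, scanning a
# single contiguous array rather than skipping through every full row.
print(sum(columns["sales"]))  # 555
```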






