What is Concurrency Control?

Concurrency Control

We move next to the issue of concurrency control. In this section, we show how certain of the concurrency-control schemes discussed in Chapter 6 can be modified for use in a distributed environment. The transaction manager of a distributed database system manages the execution of those transactions (or subtransactions) that access data stored in a local site.

Each such transaction may be either a local transaction (that is, a transaction that executes only at that site) or part of a global transaction (that is, a transaction that executes at several sites). Each transaction manager is responsible for maintaining a log for recovery purposes and for participating in an appropriate concurrency-control scheme to coordinate the concurrent execution of the transactions executing at that site. As we shall see, the concurrency schemes described in Chapter 6 need to be modified to accommodate the distribution of transactions.

Locking Protocols

The two-phase locking protocols described in Chapter 6 can be used in a distributed environment. The only change needed is in the way the lock manager is implemented. Here, we present several possible schemes. The first deals with the case where no data replication is allowed. The others apply to the more general case where data can be replicated in several sites. As in Chapter 6, we assume the existence of the shared and exclusive lock modes.

Nonreplicated Scheme

 If no data are replicated in the system, then the locking schemes described in Section 6.9 can be applied as follows: Each site maintains a local lock manager whose function is to administer the lock and unlock requests for those data items stored in that site.

When a transaction wishes to lock data item Q at site Si, it simply sends a message to the lock manager at site Si requesting a lock (in a particular lock mode). If data item Q is locked in an incompatible mode, then the request is delayed until it can be granted. Once it has been determined that the lock request can be granted, the lock manager sends a message back to the initiator indicating that the lock request has been granted.

This scheme has the advantage of simple implementation. It requires two message transfers for handling lock requests and one message transfer for handling unlock requests. However, deadlock handling is more complex.
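As an illustration, the per-site lock manager on which this scheme rests can be sketched in Python. The class and method names (`LocalLockManager`, `request`, `release`) and the "S"/"X" mode strings are assumptions for the example; a real implementation would queue delayed requests and exchange messages rather than simply return False.

```python
from collections import defaultdict

class LocalLockManager:
    """Sketch of a per-site lock manager for the nonreplicated scheme:
    each site administers lock and unlock requests only for the data
    items stored at that site."""

    def __init__(self):
        self.shared = defaultdict(set)  # item -> transactions holding S locks
        self.exclusive = {}             # item -> transaction holding the X lock

    def request(self, txn, item, mode):
        """Return True if the lock is granted; False stands in for
        delaying the request until it can be granted."""
        if mode == "S":
            if item in self.exclusive and self.exclusive[item] != txn:
                return False            # incompatible with an existing X lock
            self.shared[item].add(txn)
            return True
        # mode == "X": compatible only if no other transaction holds any lock
        holders = set(self.shared[item])
        if item in self.exclusive:
            holders.add(self.exclusive[item])
        if holders - {txn}:
            return False
        self.exclusive[item] = txn
        return True

    def release(self, txn, item):
        self.shared[item].discard(txn)
        if self.exclusive.get(item) == txn:
            del self.exclusive[item]
```

Two transactions may share a read lock on Q, but an exclusive request is refused until both release it, mirroring the compatibility rules of Chapter 6.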

Since the lock and unlock requests are no longer made at a single site, the various deadlock-handling algorithms discussed in Chapter 7 must be modified; these modifications are discussed in Section 18.5.

 Single-Coordinator Approach

Several concurrency-control schemes can be used in systems that allow data replication. Under the single-coordinator approach, the system maintains a single lock manager that resides in a single chosen site, say Si. All lock and unlock requests are made at site Si. When a transaction needs to lock a data item, it sends a lock request to Si. The lock manager determines whether the lock can be granted immediately. If so, it sends a message to that effect to the site at which the lock request was initiated. Otherwise, the request is delayed until it can be granted; and at that time, a message is sent to the site at which the lock request was initiated.

The transaction can read the data item from any one of the sites at which a replica of the data item resides. In the case of a write operation, all the sites where a replica of the data item resides must be involved in the writing. The scheme has the following advantages:
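The read-from-any-replica, write-to-all-replicas behavior can be sketched as follows. The `SingleCoordinator` class, the in-memory `storage` and `replicas` dictionaries, and the restriction to exclusive locks are simplifying assumptions standing in for message exchanges with the coordinator site.

```python
class SingleCoordinator:
    """Sketch of the single-coordinator approach: one lock manager at a
    chosen site grants all locks; reads go to any one replica, writes
    must involve every replica. Only exclusive locks are modeled."""

    def __init__(self, storage, replicas):
        self.storage = storage      # site -> {item: value}, stand-in for the sites
        self.replicas = replicas    # item -> set of sites holding a replica
        self.lock_holder = {}       # item -> transaction holding the lock

    def lock(self, txn, item):
        if self.lock_holder.get(item, txn) != txn:
            return False            # stands in for delaying the request
        self.lock_holder[item] = txn
        return True

    def unlock(self, txn, item):
        if self.lock_holder.get(item) == txn:
            del self.lock_holder[item]

    def read(self, txn, item):
        site = next(iter(self.replicas[item]))   # any one replica suffices
        return self.storage[site][item]

    def write(self, txn, item, value):
        for site in self.replicas[item]:         # all replicas participate
            self.storage[site][item] = value
```

A write under the lock updates every replica, so a later read may be served by any site that holds one.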

• Simple implementation. This scheme requires two messages for handling lock requests and one message for handling unlock requests.

• Simple deadlock handling. Since all lock and unlock requests are made at one site, the deadlock-handling algorithms discussed in Chapter 7 can be applied directly to this environment.

The disadvantages of the scheme include the following:

• Bottleneck. The site Si becomes a bottleneck, since all requests must be processed there.

• Vulnerability. If the site Si fails, the concurrency controller is lost. Either processing must stop or a recovery scheme must be used.

A compromise between these advantages and disadvantages can be achieved through a multiple-coordinator approach, in which the lock-manager function is distributed over several sites. Each lock manager administers the lock and unlock requests for a subset of the data items, and the lock managers reside in different sites. This distribution reduces the degree to which the coordinator is a bottleneck, but it complicates deadlock handling, since the lock and unlock requests are not made at a single site.

Majority Protocol

The majority protocol is a modification of the nonreplicated data scheme presented earlier. The system maintains a lock manager at each site. Each manager controls the locks for all the data or replicas of data stored at that site. When a transaction wishes to lock a data item Q that is replicated in n different sites, it must send a lock request to more than one-half of the n sites in which Q is stored.

Each lock manager determines whether the lock can be granted immediately (as far as it is concerned). As before, the response is delayed until the request can be granted. The transaction does not operate on Q until it has successfully obtained a lock on a majority of the replicas of Q. This scheme deals with replicated data in a decentralized manner, thus avoiding the drawbacks of central control. However, it suffers from its own disadvantages:

 • Implementation. The majority protocol is more complicated to implement than the previous schemes. It requires 2(n/2 + 1) messages for handling lock requests and (n/2 + 1) messages for handling unlock requests.

• Deadlock handling. Since the lock and unlock requests are not made at one site, the deadlock-handling algorithms must be modified (Section 18.5). In addition, a deadlock can occur even if only one data item is being locked. To illustrate, consider a system with four sites and full replication. Suppose that transactions T1 and T2 wish to lock data item Q in exclusive mode. Transaction T1 may succeed in locking Q at sites S1 and S3, while transaction T2 may succeed in locking Q at sites S2 and S4. Each then must wait to acquire the third lock, and hence a deadlock has occurred.
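The four-site deadlock just described can be reproduced in a small sketch. The `grant` and `majority_lock` functions and the in-memory `locks` table are hypothetical stand-ins for the per-site lock managers and their messages.

```python
locks = {}   # (site, item) -> transaction currently holding the lock there

def grant(site, txn, item):
    # a site grants the lock iff the item is free there (or txn already holds it)
    holder = locks.setdefault((site, item), txn)
    return holder == txn

def majority_lock(txn, item, sites):
    """Send the request to every replica site; the lock is held only once
    strictly more than half of the n sites have granted it."""
    granted = sum(1 for s in sites if grant(s, txn, item))
    return granted > len(sites) // 2

# Four sites, full replication: T1 reaches S1 and S3 first, T2 reaches S2 and S4.
sites = ["S1", "S2", "S3", "S4"]
for s in ("S1", "S3"):
    grant(s, "T1", "Q")
for s in ("S2", "S4"):
    grant(s, "T2", "Q")
# Each now holds two of the four replicas; neither can assemble a
# majority of three, so both wait forever: a deadlock on a single item.
```

Running `majority_lock` for either transaction at this point fails, which is exactly the wait-forever situation the text describes.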

 Biased Protocol

 The biased protocol is similar to the majority protocol. The difference is that requests for shared locks are given more favorable treatment than are requests for exclusive locks. The system maintains a lock manager at each site. Each manager manages the locks for all the data items stored at that site. Shared and exclusive locks are handled differently.

• Shared locks. When a transaction needs to lock data item Q, it simply requests a lock on Q from the lock manager at one site containing a replica of Q.

• Exclusive locks. When a transaction needs to lock data item Q, it requests a lock on Q from the lock manager at each site containing a replica of Q. As before, the response to the request is delayed until the request can be granted.

The scheme has the advantage of imposing less overhead on read operations than does the majority protocol.

This advantage is especially significant in common cases in which the frequency of reads is much greater than the frequency of writes. However, the additional overhead on writes is a disadvantage. Furthermore, the biased protocol shares the majority protocol's disadvantage of complexity in handling deadlock.
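The asymmetry between the two lock modes can be sketched as follows; the function names and the `grant` callback signature are illustrative assumptions, not part of the protocol's specification.

```python
def shared_lock(txn, item, sites, grant):
    # biased protocol, shared lock: a single replica site suffices
    return grant(sites[0], txn, item, "S")

def exclusive_lock(txn, item, sites, grant):
    # biased protocol, exclusive lock: every site holding a replica must grant
    return all(grant(s, txn, item, "X") for s in sites)

# Count the messages each mode costs against n = 3 replicas.
messages = []
def always_grant(site, txn, item, mode):
    messages.append((site, mode))
    return True

sites = ["S1", "S2", "S3"]
shared_lock("T1", "Q", sites, always_grant)
exclusive_lock("T2", "Q", sites, always_grant)
```

With n replicas, a shared lock costs one request while an exclusive lock costs n, which is why the scheme favors read-heavy workloads.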

 Primary Copy

Yet another alternative is to choose one of the replicas as the primary copy. Thus, for each data item Q, the primary copy of Q must reside in precisely one site, which we call the primary site of Q. When a transaction needs to lock a data item Q, it requests a lock at the primary site of Q. As before, the response to the request is delayed until the request can be granted. This scheme enables us to handle concurrency control for replicated data in much the same way as for unreplicated data. Implementation of the method is simple. However, if the primary site of Q fails, Q is inaccessible even though other sites containing a replica may be accessible.

Timestamping

The principal idea behind the timestamping scheme discussed in Section 6.9 is that each transaction is given a unique timestamp, which is used to decide the serialization order. Our first task, then, in generalizing the centralized scheme to a distributed scheme is to develop a method for generating unique timestamps. Our previous protocols can then be applied directly to the nonreplicated environment.

Generation of Unique Timestamps

Two primary methods are used to generate unique timestamps; one is centralized, and one is distributed. In the centralized scheme, a single site is chosen for distributing the timestamps. The site can use a logical counter or its own local clock for this purpose. In the distributed scheme, each site generates a local unique timestamp using either a logical counter or the local clock. The global unique timestamp is obtained by concatenation of the local unique timestamp with the site identifier, which must be unique (Figure 18.2).

The order of concatenation is important! We use the site identifier in the least significant position to ensure that the global timestamps generated in one site are not always greater than those generated in another site. Compare this technique for generating unique timestamps with the one we presented in Section 18.1.2 for generating unique names.
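One way to realize this ordering is to represent a global timestamp as a pair compared lexicographically, with the local counter in the most significant position and the site identifier in the least significant one. The `TimestampGenerator` class below is an illustrative sketch, not a prescribed interface.

```python
class TimestampGenerator:
    """Distributed generation of globally unique timestamps: the local
    logical counter occupies the most significant position and the site
    identifier the least significant, so that one site's timestamps are
    not always greater than another's."""

    def __init__(self, site_id):
        self.site_id = site_id
        self.counter = 0

    def next(self):
        self.counter += 1
        # Python compares tuples lexicographically, which models the
        # concatenation: counter first, then site id as the tiebreaker.
        return (self.counter, self.site_id)
```

Two sites whose counters agree produce distinct timestamps ordered by site id, while a higher counter dominates regardless of which site produced it.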

We may still have a problem if one site generates local timestamps at a faster rate than do other sites. In such a case, the fast site's logical counter will be larger than those of other sites. Therefore, all timestamps generated by the fast site will be larger than those generated by other sites. A mechanism is needed to ensure that local timestamps are generated fairly across the system. To accomplish the fair generation of timestamps, we define within each site Si a logical clock (LCi), which generates the local unique timestamp (see Section 18.1.2).

To ensure that the various logical clocks are synchronized, we require that a site Si advance its logical clock whenever a transaction Ti with timestamp x visits that site and x is greater than the current value of LCi. In this case, site Si advances its logical clock to the value x + 1. If the system clock is used to generate timestamps, then timestamps are assigned fairly, provided that no site has a system clock that runs fast or slow. Since clocks may not be perfectly accurate, a technique similar to that used for logical clocks must be used to ensure that no clock gets far ahead of or far behind another clock.
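The clock-advancement rule above amounts to a one-line update; `advance_clock` is a hypothetical name for it.

```python
def advance_clock(lc, x):
    """Return the new value of a site's logical clock LC after a
    transaction with timestamp x visits: if x exceeds LC, the clock
    jumps to x + 1; otherwise it is left unchanged."""
    return x + 1 if x > lc else lc
```

For example, a site whose clock reads 5 advances to 10 when visited by a transaction timestamped 9, but a visitor timestamped 3 leaves it at 5.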

Timestamp-Ordering Scheme

The basic timestamp scheme introduced in Section 6.9 can be extended in a straightforward manner to a distributed system. As in the centralized case, cascading rollbacks may result if no mechanism is used to prevent a transaction from reading a data item value that is not yet committed. To eliminate cascading rollbacks, we can combine the basic timestamp scheme of Section 6.9 with the 2PC protocol of Section 18.3 to obtain a protocol that ensures serializability with no cascading rollbacks. We leave the development of such an algorithm to you.

The basic timestamp scheme just described suffers from the undesirable property that conflicts between transactions are resolved through rollbacks, rather than through waits. To alleviate this problem, we can buffer the various read and write operations (that is, delay them) until a time when we are assured that these operations can take place without causing aborts. A read(x) operation by Ti must be delayed if there exists a transaction Tj that will perform a write(x) operation but has not yet done so and TS(Tj) < TS(Ti). Similarly, a write(x) operation by Ti must be delayed if there exists a transaction Tj that will perform either a read(x) or a write(x) operation and TS(Tj) < TS(Ti). Various methods are available for ensuring this property.
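The two delay conditions can be written as predicates over the sets of outstanding operations on x; the function names and the representation of timestamps as plain integers are assumptions for illustration.

```python
def must_delay_read(ts_reader, pending_write_ts):
    """read(x) by Ti is delayed while some Tj with TS(Tj) < TS(Ti)
    still has an unperformed write(x)."""
    return any(ts < ts_reader for ts in pending_write_ts)

def must_delay_write(ts_writer, pending_read_ts, pending_write_ts):
    """write(x) by Ti is delayed while some Tj with TS(Tj) < TS(Ti)
    still has an unperformed read(x) or write(x)."""
    return any(ts < ts_writer
               for ts in list(pending_read_ts) + list(pending_write_ts))
```

A read timestamped 10 is delayed by an outstanding write timestamped 5 but not by one timestamped 15, since only older conflicting operations force the wait.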

 One such method, called the conservative timestamp-ordering scheme, requires each site to maintain a read queue and a write queue consisting of all the read and write requests that are to be executed at the site and that must be delayed to preserve the above property. We shall not present the scheme here. Again, we leave the development of the algorithm to you.

