Concurrency control in database systems governs how transactions share the available processors: each transaction is given its own flow of control. In most modern operating systems, tasks or threads run in an interleaved manner, even on machines with multiple processors; that is, the operating system decides on its own when to suspend one task and give some runtime to another. When simultaneous or concurrent tasks operate on the same database, this interleaving means that the physical reads and writes issued by different tasks or applications can occur in any order and sequence.
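The following is a minimal sketch, not an example from the text, of why such interleaving needs concurrency control: two threads read and write the same shared value, and without a lock the operating system may suspend one thread between its read and its write, so one update overwrites the other (a lost update). The variable names and iteration counts are illustrative assumptions.

import threading

balance = 0                    # shared value, standing in for a database row
lock = threading.Lock()

def deposit_unsafe(times):
    global balance
    for _ in range(times):
        current = balance      # read
        balance = current + 1  # write; another thread may have written in between

def deposit_safe(times):
    global balance
    for _ in range(times):
        with lock:             # simplest form of concurrency control: mutual exclusion
            balance += 1

for worker in (deposit_unsafe, deposit_safe):
    balance = 0
    threads = [threading.Thread(target=worker, args=(100_000,)) for _ in range(2)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # The unsafe version may print less than 200000 because of lost updates.
    print(worker.__name__, balance)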
A data warehouse, which handles mostly fetches (SELECTs) and relatively few DML statements (UPDATEs, INSERTs, and DELETEs), is usually a physically separate data store from the databases that serve operational applications. Because of this separation, a data warehouse does not require transaction processing, concurrency control mechanisms, or recovery in the same way. Instead, it needs only two kinds of operations for data access: the initial loading of data and access to the data itself. In particular, if the data is stored in main memory and the workload consists of transactions that can be executed without delays caused by users, parallel execution of transactions is not needed to fully utilize a single CPU (Wilton & Colby 2005). Instead, each transaction can be executed to completion before the next transaction is processed.
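A minimal sketch of that serial execution model, assuming an in-memory store and transactions that never wait on user input; the InMemoryStore and Transaction names are illustrative, not from the text. Each transaction runs to completion before the next one starts, so no locks or concurrency control are needed.

from typing import Callable, Dict, List

class InMemoryStore:
    def __init__(self) -> None:
        self.rows: Dict[str, dict] = {}

Transaction = Callable[[InMemoryStore], None]

def run_serially(store: InMemoryStore, queue: List[Transaction]) -> None:
    # Serial execution: one transaction at a time, in arrival order.
    for txn in queue:
        txn(store)   # executed fully before the next transaction is taken

# Example usage: an initial bulk load followed by a read-only query.
store = InMemoryStore()

def initial_load(s: InMemoryStore) -> None:
    s.rows["42"] = {"region": "EU", "sales": 1000}

def report(s: InMemoryStore) -> None:
    print(sum(r["sales"] for r in s.rows.values()))

run_serially(store, [initial_load, report])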
Optimistic Concurrency
In optimistic concurrency, locks are established and held only while the database is actually being accessed. These locks prevent other users from updating the same records at that instant; the data remain available at all times except during the precise moment when an update takes place. When an update is attempted, the original version of the modified row is compared with the row currently in the database. If the two differ, the update fails because another user has changed the row since it was read, and a concurrency violation is reported.
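A minimal sketch of that check, using sqlite3 and a hypothetical products table with a version column; the table, column names, and versioning scheme are assumptions, not from the text. The read takes no lock; the update succeeds only if the row still carries the version that was read.

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE products (id INTEGER PRIMARY KEY, price REAL, version INTEGER)")
conn.execute("INSERT INTO products VALUES (1, 9.99, 1)")
conn.commit()

def read_row(conn, product_id):
    # No lock is taken here; we just remember the version we saw.
    return conn.execute(
        "SELECT id, price, version FROM products WHERE id = ?", (product_id,)
    ).fetchone()

def optimistic_update(conn, product_id, new_price, version_read):
    # The UPDATE matches only if the row still has the version read earlier.
    cur = conn.execute(
        "UPDATE products SET price = ?, version = version + 1 "
        "WHERE id = ? AND version = ?",
        (new_price, product_id, version_read),
    )
    conn.commit()
    if cur.rowcount == 0:
        raise RuntimeError("Concurrency violation: row changed since it was read")

row = read_row(conn, 1)
optimistic_update(conn, 1, 12.50, row[2])       # succeeds; version becomes 2
try:
    optimistic_update(conn, 1, 15.00, row[2])   # stale version -> fails
except RuntimeError as e:
    print(e)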