Manage database concurrency
- 10/9/2017
In this sample chapter from Exam Ref 70-762 Developing SQL Databases, explore skills related to the goal of managing database concurrency, including how to implement transactions, manage isolation levels, and more.
In a typical environment, a database receives multiple requests to perform an operation and often these requests can occur concurrently. As an administrator, you must understand how SQL Server handles these requests by default and the available options for changing this default behavior. Your overarching goal is to prevent unexpected results, while enabling as many processes as possible.
The 70-762 exam tests your skills related to this goal of managing database concurrency. Here in Skill 3.1, we review the basic properties and behaviors of transactions in SQL Server and the role of transactions in high-concurrency databases. Skill 3.2 addresses the available options for managing concurrency in SQL Server by using isolation levels and explores in detail the differences between isolation levels as well as the effect each isolation level has on concurrent transactions, system resources, and overall performance. Then in Skill 3.3 we explore the tools at your disposal to better understand locking behavior in SQL Server and the steps you can take to remediate deadlocks. Skill 3.4 introduces memory-optimized tables as another option for improving concurrency by explaining the use cases for which this approach is best, how to optimize performance when tables are held in memory instead of on disk, and considerations for using and analyzing performance of natively compiled stored procedures.
Skill 3.1: Implement transactions
SQL Server protects data integrity by using transactions to control how, when, or even whether data changes in a database. A transaction is a unit of work consisting of one or more read and write commands that SQL Server executes completely or not at all. In the exam, you must be able to recognize scenarios in which transactions can complete successfully or not, and know how to use T-SQL statements to manage transaction behavior. You must also understand potential problems with transactions executing concurrently and how SQL Server uses locks to mitigate these problems.
Identify DML statement results based on transaction behavior
The results of a DML statement depend on transaction behavior. If the transaction succeeds, then the inserts, the updates, or the deletes that SQL Server executes as part of that transaction are committed to the database and permanently change the data in the affected tables. If the transaction fails for any reason, you can cancel or roll back the transaction to reverse any changes made to the database by the transaction prior to the failure. SQL Server has various methods for managing transaction behavior, but you also have options for changing this behavior when writing code to execute transactions.
In this section, we explore the ways that SQL Server supports the following set of properties collectively known in database theory as ACID to ensure data is protected in case of system or hardware failure:
Atomicity An atomic transaction is a set of events that cannot be separated from one another and must be handled as a single unit of work. A common example is a bank transaction in which you transfer money from your checking account to your savings account. A successful atomic transaction not only correctly deducts the amount of the transfer from one account, but also adds it to the other account. If the transaction cannot complete all of its steps successfully, it must fail, and the database is unchanged.
Consistency When a transaction is consistent, any changes that it makes to the data conform to the rules defined in the database by constraints, cascades, and triggers and thereby leave the database in a valid state. To continue the previous example, the amount removed from your checking account must be the same amount added to your savings account when the transaction is consistent.
Isolation An isolated transaction behaves as if it were the only transaction interacting with the database for its duration. Isolation ensures that the effect on the database is the same whether two transactions run at the same time or one after the other. Similarly, your transfer to the savings account has the same net effect on your overall bank balances whether you were the only customer performing a banking transaction at that time, or there were many other customers withdrawing, depositing, or transferring funds simultaneously.
Durability A durable transaction is one that permanently changes the database and persists even if the database is shut down unexpectedly. Therefore, if you receive a confirmation that your transfer is complete, your bank balances remain correct even if your bank experienced a power outage immediately after the transaction completed.
Before we start exploring transaction behavior, let’s set up a new database, add some tables, and insert some data to establish a test environment as shown in Listing 3-1.
LISTING 3-1 Create a test environment for exploring transaction behavior
CREATE DATABASE ExamBook762Ch3;
GO
USE ExamBook762Ch3;
GO
CREATE SCHEMA Examples;
GO
CREATE TABLE Examples.TestParent (
    ParentId int NOT NULL
        CONSTRAINT PKTestParent PRIMARY KEY,
    ParentName varchar(100) NULL
);
CREATE TABLE Examples.TestChild (
    ChildId int NOT NULL
        CONSTRAINT PKTestChild PRIMARY KEY,
    ParentId int NOT NULL,
    ChildName varchar(100) NULL
);
ALTER TABLE Examples.TestChild
    ADD CONSTRAINT FKTestChild_Ref_TestParent
        FOREIGN KEY (ParentId) REFERENCES Examples.TestParent(ParentId);
INSERT INTO Examples.TestParent(ParentId, ParentName)
VALUES (1, 'Dean'), (2, 'Michael'), (3, 'Robert');
INSERT INTO Examples.TestChild (ChildId, ParentId, ChildName)
VALUES (1, 1, 'Daniel'), (2, 1, 'Alex'), (3, 2, 'Matthew'), (4, 3, 'Jason');
Even a single statement to change data in a table is a transaction (as is each individual INSERT statement in Listing 3-1). Consider this example:
UPDATE Examples.TestParent
SET ParentName = 'Bob'
WHERE ParentName = 'Robert';
When you execute this statement, if the system doesn’t crash before SQL Server lets you know that the statement completed successfully, the new value is committed. That is, the change to the data resulting from the UPDATE statement is permanently stored in the database. You can confirm the successful change by running the following SELECT statement.
SELECT ParentId, ParentName FROM Examples.TestParent;
As you can see in the SELECT statement results, the UPDATE statement completed properly.
ParentId    ParentName
----------- ------------
1           Dean
2           Michael
3           Bob
Atomicity
The execution of one statement at a time as a transaction does not clearly demonstrate the SQL Server support for the other ACID properties. Instead, you need a transaction with multiple statements. To do this, use the BEGIN TRANSACTION (or BEGIN TRAN) and COMMIT TRANSACTION (or COMMIT TRAN) statements (unless you implement implicit transactions as we describe in the next section).
You can test atomicity by attempting to update two different tables in the same transaction like this:
BEGIN TRANSACTION;
    UPDATE Examples.TestParent
    SET ParentName = 'Mike'
    WHERE ParentName = 'Michael';
    UPDATE Examples.TestChild
    SET ChildName = 'Matt'
    WHERE ChildName = 'Matthew';
COMMIT TRANSACTION;
When the transaction commits, the changes to both tables become permanent. Check the results with this query:
SELECT TestParent.ParentId, ParentName, ChildId, ChildName
FROM Examples.TestParent
FULL OUTER JOIN Examples.TestChild
    ON TestParent.ParentId = TestChild.ParentId;
The transaction updated both tables as you can see in the query results:
ParentId    ParentName    ChildId    ChildName
----------- ------------- ---------- -----------
1           Dean          1          Daniel
1           Dean          2          Alex
2           Mike          3          Matt
3           Bob           4          Jason
On the other hand, if any one of the statements in a transaction fails, the behavior depends on the way in which you construct the transaction statements and whether you change the SQL Server default settings. A common misconception is that using BEGIN TRANSACTION and COMMIT TRANSACTION is sufficient for ensuring the atomicity of a transaction. You can test the SQL Server default behavior by adding or changing data in one statement and then trying to delete a row having a foreign key constraint in another statement like this:
BEGIN TRANSACTION;
    INSERT INTO Examples.TestParent(ParentId, ParentName)
    VALUES (4, 'Linda');
    DELETE Examples.TestParent
    WHERE ParentName = 'Bob';
COMMIT TRANSACTION;
In this case, the deletion fails, but the insertion succeeds as you can see by the messages that SQL Server returns.
(1 row(s) affected)
Msg 547, Level 16, State 0, Line 24
The DELETE statement conflicted with the REFERENCE constraint
"FKTestChild_Ref_TestParent". The conflict occurred in database
"ExamBook762Ch3", table "Examples.TestChild", column 'ParentId'.
The statement has been terminated.
When you check the data again, you see a total of four rows in the Examples.TestParent table:
ParentId    ParentName
----------- ------------
1           Dean
2           Mike
3           Bob
4           Linda
If you want SQL Server to roll back the entire transaction and thereby guarantee atomicity, one option is to set the XACT_ABORT option to ON prior to executing the transaction like this:
SET XACT_ABORT ON;
BEGIN TRANSACTION;
    INSERT INTO Examples.TestParent(ParentId, ParentName)
    VALUES (5, 'Isabelle');
    DELETE Examples.TestParent
    WHERE ParentName = 'Bob';
COMMIT TRANSACTION;
In this case, SQL Server rolls back all successfully completed statements in the transaction and returns the database to its state at the start of the transaction, in which only four rows exist in the Examples.TestParent table as shown in the previous example. The XACT_ABORT option is set to OFF by default; therefore, you must enable the option when you want to ensure that SQL Server rolls back a failed transaction.
What if the error raised is not a constraint violation, but a syntax error? Execute the following code that first disables the SET XACT_ABORT option (to prove the rollback works correctly with the default SQL Server setting) and then attempts an INSERT and a DELETE containing a deliberate syntax error:
SET XACT_ABORT OFF;
BEGIN TRANSACTION;
    INSERT INTO Examples.TestParent(ParentId, ParentName)
    VALUES (5, 'Isabelle');
    DELETE Examples.TestParent
    WHEN ParentName = 'Bob';
COMMIT TRANSACTION;
Although the INSERT is successful and would commit if the subsequent error were a constraint violation, SQL Server does not commit the insertion, and the database remains in its original state when it encounters a syntax error in a transaction.
Another option to consider is to explicitly include a roll back instruction in your transaction by enclosing it in a TRY block and adding a ROLLBACK TRANSACTION (or ROLLBACK TRAN) statement in a CATCH block:
BEGIN TRY
    BEGIN TRANSACTION;
        INSERT INTO Examples.TestParent(ParentId, ParentName)
        VALUES (5, 'Isabelle');
        DELETE Examples.TestParent
        WHERE ParentName = 'Bob';
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
END CATCH
Because the transaction includes a DELETE statement that fails due to a constraint violation, the CATCH block is invoked and the transaction rolls back. Therefore, the Examples.TestParent table still contains only four rows.
Notice also in the previous example that the execution of the ROLLBACK TRANSACTION requires the current status of the transaction (obtained by the @@TRANCOUNT variable) to be greater than 0, which means that a transaction is active. We explore the use of this variable in more detail in the section covering implicit and explicit transactions.
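A closely related pattern replaces @@TRANCOUNT with the XACT_STATE() function, which reports not only whether a transaction is active, but also whether it is still committable. The following is a minimal sketch of this variation, not one of the chapter's listings; XACT_STATE() returns 1 for an active, committable transaction, -1 for an active but uncommittable transaction, and 0 when no transaction is active:

BEGIN TRY
    BEGIN TRANSACTION;
        INSERT INTO Examples.TestParent(ParentId, ParentName)
        VALUES (5, 'Isabelle');
        DELETE Examples.TestParent
        WHERE ParentName = 'Bob';
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    -- 1 = active and committable, -1 = active but uncommittable;
    -- either way an error occurred, so roll back.
    IF XACT_STATE() <> 0
        ROLLBACK TRANSACTION;
END CATCH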
Consistency
These last two examples not only demonstrate atomicity compliance in SQL Server, but also consistency. Another commonly used term for consistency is data integrity. To preserve data integrity in a database, you cannot remove a row from a table when there is an existing dependency on that row. Similarly, you cannot add a row to a table having foreign key constraints without providing a valid foreign key value in the new row. Any rule that you add to the database as we described in Chapter 2, “Implement programmability objects,” is enforced by SQL Server to guarantee consistency.
Isolation
Now let’s take a look at how SQL Server handles isolation by default. We explore your options for managing isolation in detail in Skill 3.2, but for Skill 3.1 you must understand what happens if you rely on the behavior of READ COMMITTED, the SQL Server default isolation level. To observe this behavior, set up two separate sessions in SQL Server Management Studio.
In one session, execute the following statement:
BEGIN TRANSACTION;
    INSERT INTO Examples.TestParent(ParentId, ParentName)
    VALUES (5, 'Isabelle');
The omission of the COMMIT statement in this example is deliberate. At this point, the transaction is still active, but it is not yet committed. Furthermore, the uncommitted transaction continues to hold a lock on the table, preventing any other access to it as long as the transaction remains uncommitted.
In the second session, execute the following statement:
SELECT ParentId, ParentName FROM Examples.TestParent;
When you attempt to read rows from the locked table, the query continues to execute indefinitely because it is waiting for the transaction in the first session to complete. This behavior is an example of a write operation blocking a read operation. By default, SQL Server uses the READ COMMITTED isolation level to protect the transaction by preventing other operations from returning potentially incorrect results as a result of reading uncommitted inserts that could later be rolled back. It also insulates the transaction from premature changes to the values of those inserts by another transaction’s update operation.
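If you want to observe this blocking directly, you can query the sys.dm_tran_locks dynamic management view from a third session while the second session waits; this is a quick diagnostic sketch (we cover the locking tools in depth in Skill 3.3). The blocked SELECT appears with a request_status of WAIT:

SELECT request_session_id, resource_type, request_mode, request_status
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID('ExamBook762Ch3');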
In the first session, end the transaction like this:
COMMIT TRANSACTION;
As soon as you commit the transaction, the query in the second session returns five rows and includes the newly inserted row:
ParentId    ParentName
----------- ------------
1           Dean
2           Mike
3           Bob
4           Linda
5           Isabelle
Durability
SQL Server guarantees full transaction durability by default. If the system crashes for some reason after SQL Server confirms a successful commit, the changes made by the transaction are visible after the system returns to an operable status even if the transaction operations had not been written to disk prior to the system failure.
To make this possible, SQL Server uses write-ahead logging to first hold data changes in a log buffer and then write the changes to the transaction log on disk when the transaction commits or the log buffer becomes full. The transaction log contains not only changes to data, but also page allocations and de-allocations, and changes to indexes. Each log record includes a unique log sequence number (LSN) so that every record change belonging to the same transaction can be rolled back if necessary.
Once the transaction commits, SQL Server flushes the log buffer to the transaction log on disk; the modifications themselves are written first to the data cache and later permanently to the database on disk. A change is never made to the database without confirming that it already exists in the transaction log. At that point, SQL Server reports a successful commit and the transaction cannot be rolled back.
What if a failure occurs after the change is written to the transaction log, but before SQL Server writes the change to the database? In this case, the data changes are uncommitted. Nonetheless, the transaction is still durable because you can recreate the change from the transaction log if necessary.
SQL Server also supports delayed durable transactions, also known as lazy commits. By using this approach, SQL Server can process more concurrent transactions with less contention for log IO, thereby increasing throughput. Once the transaction is written to the log buffer, SQL Server reports a successful transaction and any changes that it made are visible to other transactions. However, the log records remain in the log buffer until the buffer is full or a buffer flush event occurs, at which point the transaction is written to disk and becomes durable. A buffer flush occurs when a fully durable transaction in the same database commits or a manual request to execute sp_flush_log is successful.
Delayed durability is useful when you are willing to trade potential data loss for reduced latency in transaction log writes and reduced contention between transactions. Such a trade-off is acceptable in a data warehouse workload that runs batches frequently enough to pick up rows lost in a previous batch. The eventual resolution of data loss is an acceptable alternative to durability only because the data warehouse is not the system of record. Delayed durability is rarely acceptable in an online transaction processing (OLTP) system.
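As a sketch of the mechanics (the inserted values here are arbitrary and not part of the chapter's test data), you allow delayed durability at the database level and can then request it per transaction:

ALTER DATABASE ExamBook762Ch3 SET DELAYED_DURABILITY = ALLOWED;
GO
BEGIN TRANSACTION;
    INSERT INTO Examples.TestParent(ParentId, ParentName)
    VALUES (99, 'Sam');
-- The commit returns before the log buffer is flushed to disk.
COMMIT TRANSACTION WITH (DELAYED_DURABILITY = ON);

Setting the database option to FORCED makes every transaction delayed durable, while DISABLED (the default) causes SQL Server to ignore the COMMIT option.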
Recognize differences between and identify usage of explicit and implicit transactions
An important aspect of transaction management is knowing which commands are in scope. That is, you must know which commands are grouped together for execution as a single transaction. SQL Server supports the following methods for transaction control:
Auto-commit Any single statement that changes data and executes by itself is automatically an atomic transaction. Whether the change affects one row or thousands of rows, it must complete successfully for each row to be committed. You cannot manually roll back an auto-commit transaction, although SQL Server performs a rollback if a system failure occurs before the transaction completes.
Implicit An implicit transaction automatically starts when you execute certain DML statements and ends only when you use COMMIT TRANSACTION or ROLLBACK TRANSACTION. However, you must first configure a session to run in implicit transaction mode by executing the SET IMPLICIT_TRANSACTIONS ON statement. After you do this, any of the following statements begin a new transaction: ALTER TABLE, BEGIN TRANSACTION, CREATE, DELETE, DROP, FETCH, GRANT, INSERT, OPEN, REVOKE, SELECT (only if selecting from a table), TRUNCATE TABLE, and UPDATE.
Explicit An explicit transaction has a specific structure that you define by using a BEGIN TRANSACTION statement at the beginning of the transaction and a COMMIT TRANSACTION or ROLLBACK TRANSACTION statement at the end of the transaction.
Implicit transactions
Let’s examine the behavior of implicit transactions by executing a series of statements incrementally. First, enable the implicit transaction mode like this:
SET IMPLICIT_TRANSACTIONS ON;
Next, execute an INSERT statement and then check the status of open transactions:
INSERT INTO Examples.TestParent(ParentId, ParentName)
VALUES (6, 'Lukas');
SELECT @@TRANCOUNT;
The SELECT statement returns a 1 because SQL Server starts a new transaction when implicit transactions are enabled and the INSERT statement is executed. At this point, the transaction remains uncommitted and blocks any readers of the Examples.TestParent table.
Now you can end the transaction, check the status of open transactions, and check the change to the table by executing the following statements:
COMMIT TRANSACTION;
SELECT @@TRANCOUNT;
SELECT ParentId, ParentName
FROM Examples.TestParent;
The results of the SELECT statements show that the COMMIT statement both ended the transaction and decremented the @@TRANCOUNT variable and that a new row appears in the Examples.TestParent table:
(No column name)
-----------------
0

ParentId    ParentName
----------- ------------
1           Dean
2           Mike
3           Bob
4           Linda
5           Isabelle
6           Lukas
Now disable the implicit transaction mode:
SET IMPLICIT_TRANSACTIONS OFF;
Just as you can see in many of the transaction examples in the previous section, an implicit transaction can contain one or more statements and ends with an explicit execution of a COMMIT TRANSACTION or ROLLBACK TRANSACTION statement. Apart from the absence of a BEGIN TRANSACTION statement, an implicit transaction resembles an explicit transaction and behaves in the same way as well.
You might use implicit transactions when migrating an application from a different database platform or when you need to run your application across multiple database platforms because fewer code changes are required. In most cases, however, best practice dictates avoiding the use of implicit transactions. When you rely on auto-commit or explicit transactions instead, changes are committed as quickly as possible and performance is less likely to be adversely affected.
Explicit transactions
When you want complete control over transaction behavior, use an explicit transaction. You have nothing to configure at the server or database level to enable explicit transactions. Simply enclose your transaction statements in the BEGIN TRANSACTION and COMMIT TRANSACTION statements. Furthermore, you should include logic to handle errors, such as a TRY/CATCH block, as shown in an example in the “Atomicity” section, or an IF/ELSE construct like this:
BEGIN TRANSACTION;
    INSERT INTO Examples.TestParent(ParentId, ParentName)
    VALUES (7, 'Mary');
    DELETE Examples.TestParent
    WHERE ParentName = 'Bob';
    IF @@ERROR != 0
    BEGIN
        ROLLBACK TRANSACTION;
        RETURN;
    END
COMMIT TRANSACTION;
The following commands cannot be used in an explicit transaction:
ALTER DATABASE
ALTER FULLTEXT CATALOG
ALTER FULLTEXT INDEX
BACKUP
CREATE DATABASE
CREATE FULLTEXT CATALOG
CREATE FULLTEXT INDEX
DROP DATABASE
DROP FULLTEXT CATALOG
DROP FULLTEXT INDEX
RECONFIGURE
RESTORE
You can nest explicit transactions, although this capability is not ANSI-standard transaction behavior. As one example, consider a situation in which you have a set of statements in a transaction and one of the statements calls a stored procedure that starts its own transaction. Remember that each BEGIN TRANSACTION increments the @@TRANCOUNT variable and each COMMIT TRANSACTION decrements it. The ROLLBACK TRANSACTION resets the variable to zero and rolls back every statement to the beginning of the first transaction, but does not abort the stored procedure. When @@TRANCOUNT is zero, SQL Server writes to the transaction log. If the session ends before @@TRANCOUNT returns to zero, SQL Server automatically rolls back the transaction.
Let’s test this behavior by creating a stored procedure and calling it in a transaction as shown in Listing 3-2.
LISTING 3-2 Create and execute a stored procedure to test an explicit transaction
CREATE PROCEDURE Examples.DeleteParent
    @ParentId INT
AS
    BEGIN TRANSACTION;
        DELETE Examples.TestParent
        WHERE ParentId = @ParentId;
        IF @@ERROR != 0
        BEGIN
            ROLLBACK TRANSACTION;
            RETURN;
        END
    COMMIT TRANSACTION;
GO
BEGIN TRANSACTION;
    INSERT INTO Examples.TestParent(ParentId, ParentName)
    VALUES (7, 'Mary');
    EXEC Examples.DeleteParent @ParentId = 3;
    IF @@ERROR != 0
    BEGIN
        ROLLBACK TRANSACTION;
        RETURN;
    END
COMMIT TRANSACTION;
GO
When you execute these statements, several error messages display:
(1 row(s) affected)
Msg 547, Level 16, State 0, Procedure DeleteParent, Line 6 [Batch Start Line 16]
The DELETE statement conflicted with the REFERENCE constraint
"FKTestChild_Ref_TestParent". The conflict occurred in database
"ExamBook762Ch3", table "Examples.TestChild", column 'ParentId'.
The statement has been terminated.
Msg 266, Level 16, State 2, Procedure DeleteParent, Line 0 [Batch Start Line 16]
Transaction count after EXECUTE indicates a mismatching number of BEGIN
and COMMIT statements. Previous count = 1, current count = 0.
Msg 3903, Level 16, State 1, Line 25
The ROLLBACK TRANSACTION request has no corresponding BEGIN TRANSACTION.
The first transaction begins with an INSERT statement at which point @@TRANCOUNT is 1. Then the call to the stored procedure results in the start of a second transaction and increments @@TRANCOUNT to 2. The constraint violation causes an error that then calls the ROLLBACK TRANSACTION statement, which in turn resets @@TRANCOUNT to 0 and rolls back the INSERT. The error message regarding the mismatching transaction count occurs because the @@TRANCOUNT value when the stored procedure ends no longer matches its value when the stored procedure started. That error leads to the ROLLBACK TRANSACTION statement in the first transaction. However, because @@TRANCOUNT is still 0, effectively there is no open transaction and therefore the message about no corresponding BEGIN TRANSACTION displays.
This situation highlights a potential problem with nested transactions in stored procedures. If you want each stored procedure to roll back only its own work if it encounters an error, you should test for an existing transaction, skip the step to begin a new transaction if one exists, and use a savepoint to roll back to the start of the current transaction if an error occurs in the stored procedure. (We discuss savepoints in more detail in the next section.) Furthermore, the COMMIT statement in the stored procedure should execute only if the stored procedure starts its own transaction. By storing the @@TRANCOUNT value in a variable before you execute the remaining stored procedure's statements, you can later test whether a transaction existed at the start. If it did not, the variable's value is 0 and you can then safely commit the transaction that the stored procedure started. If a transaction did exist, no further action is required in the stored procedure.
We can revise the previous example to avoid nesting transactions as shown in Listing 3-3.
LISTING 3-3 Create a stored procedure that avoids a nested transaction
CREATE PROCEDURE Examples.DeleteParentNoNest
    @ParentId INT
AS
    DECLARE @CurrentTranCount INT;
    SELECT @CurrentTranCount = @@TRANCOUNT;
    IF (@CurrentTranCount = 0)
        BEGIN TRANSACTION DeleteTran;
    ELSE
        SAVE TRANSACTION DeleteTran;
    DELETE Examples.TestParent
    WHERE ParentId = @ParentId;
    IF @@ERROR != 0
    BEGIN
        ROLLBACK TRANSACTION DeleteTran;
        RETURN;
    END
    IF (@CurrentTranCount = 0)
        COMMIT TRANSACTION;
GO
BEGIN TRANSACTION;
    INSERT INTO Examples.TestParent(ParentId, ParentName)
    VALUES (7, 'Mary');
    EXEC Examples.DeleteParentNoNest @ParentId = 3;
    IF @@ERROR != 0
    BEGIN
        ROLLBACK TRANSACTION;
        RETURN;
    END
COMMIT TRANSACTION;
GO
When you execute the statements in Listing 3-3 and then check the table, you find that the new row is committed in the table and the row with the ParentId value of 3 remains in the table because the foreign key constraint caused SQL Server to roll back that transaction.
ParentId    ParentName
----------- ------------
1           Dean
2           Mike
3           Bob
4           Linda
5           Isabelle
6           Lukas
7           Mary
The explicit transactions described to this point are all local transactions. Another option is to execute a distributed transaction when you need to execute statements on more than one server. To do this, start the transaction with the BEGIN DISTRIBUTED TRANSACTION and then end it with either COMMIT TRANSACTION or ROLLBACK TRANSACTION statements. The server on which you execute the distributed transaction controls the completion of the transaction.
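As a minimal sketch, and assuming a linked server named RemoteServer is already configured and the Microsoft Distributed Transaction Coordinator (MSDTC) service is running, a distributed transaction looks like this:

BEGIN DISTRIBUTED TRANSACTION;
    -- Local statement.
    UPDATE Examples.TestParent
    SET ParentName = 'Bob'
    WHERE ParentId = 3;
    -- Remote statement; RemoteServer is a hypothetical linked server.
    UPDATE RemoteServer.ExamBook762Ch3.Examples.TestParent
    SET ParentName = 'Bob'
    WHERE ParentId = 3;
COMMIT TRANSACTION;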
Implement savepoints within transactions
A savepoint is a named location from which a transaction can restart if part of it is conditionally canceled. That means you can roll back a transaction to a specific savepoint if a statement does not complete successfully, as shown in the previous example.
When you assign a savepoint name, you should use 32 characters or fewer. SQL Server allows you to assign a longer name, but the statement uses only the first 32 characters. Bear in mind that the savepoint name is case-sensitive even if SQL Server is not configured for case sensitivity. Another option is to supply the savepoint name in a variable in the SAVE TRANSACTION statement, but the variable's data type must be char, varchar, nchar, or nvarchar. If you use the same savepoint name multiple times in the same transaction, the ROLLBACK TRANSACTION statement rolls back to the most recent savepoint.
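For example, this brief sketch (not one of the chapter's listings) supplies the savepoint name in a variable and rolls back only the work performed after the savepoint:

DECLARE @SavePointName varchar(32) = 'BeforeDelete';
BEGIN TRANSACTION;
    SAVE TRANSACTION @SavePointName;
    DELETE Examples.TestParent
    WHERE ParentId = 7;
    -- Undo the DELETE only; the outer transaction remains active.
    ROLLBACK TRANSACTION @SavePointName;
COMMIT TRANSACTION;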
Normally, a ROLLBACK TRANSACTION resets the value of @@TRANCOUNT to 0. However, when a transaction rolls back to a savepoint, @@TRANCOUNT is not reset. The SAVE TRANSACTION statement also has no effect on @@TRANCOUNT.
In Listing 3-4, the transaction has multiple savepoints and SELECT statements illustrate the effect of modifying data, and then rolling back to a specific savepoint.
LISTING 3-4 Create a transaction with multiple savepoints
BEGIN TRANSACTION;
    INSERT INTO Examples.TestParent(ParentId, ParentName)
    VALUES (8, 'Ed');
    SAVE TRANSACTION StartTran;

    SELECT 'StartTran' AS Status, ParentId, ParentName
    FROM Examples.TestParent;

    DELETE Examples.TestParent
    WHERE ParentId = 7;
    SAVE TRANSACTION DeleteTran;

    SELECT 'Delete 1' AS Status, ParentId, ParentName
    FROM Examples.TestParent;

    DELETE Examples.TestParent
    WHERE ParentId = 6;

    SELECT 'Delete 2' AS Status, ParentId, ParentName
    FROM Examples.TestParent;

    ROLLBACK TRANSACTION DeleteTran;

    SELECT 'RollbackDelete2' AS Status, ParentId, ParentName
    FROM Examples.TestParent;

    ROLLBACK TRANSACTION StartTran;

    SELECT @@TRANCOUNT AS 'TranCount';

    SELECT 'RollbackStart' AS Status, ParentId, ParentName
    FROM Examples.TestParent;
COMMIT TRANSACTION;
GO
The queries interspersed throughout this transaction give us visibility into the behavior of the savepoint and roll back operations:
Status           ParentId    ParentName
---------------- ----------- ------------
StartTran        1           Dean
StartTran        2           Mike
StartTran        3           Bob
StartTran        4           Linda
StartTran        5           Isabelle
StartTran        6           Lukas
StartTran        7           Mary
StartTran        8           Ed

Status           ParentId    ParentName
---------------- ----------- ------------
Delete 1         1           Dean
Delete 1         2           Mike
Delete 1         3           Bob
Delete 1         4           Linda
Delete 1         5           Isabelle
Delete 1         6           Lukas
Delete 1         8           Ed

Status           ParentId    ParentName
---------------- ----------- ------------
Delete 2         1           Dean
Delete 2         2           Mike
Delete 2         3           Bob
Delete 2         4           Linda
Delete 2         5           Isabelle
Delete 2         8           Ed

Status           ParentId    ParentName
---------------- ----------- ------------
RollbackDelete2  1           Dean
RollbackDelete2  2           Mike
RollbackDelete2  3           Bob
RollbackDelete2  4           Linda
RollbackDelete2  5           Isabelle
RollbackDelete2  6           Lukas
RollbackDelete2  8           Ed

TranCount
-----------
1

Status           ParentId    ParentName
---------------- ----------- ------------
RollbackStart    1           Dean
RollbackStart    2           Mike
RollbackStart    3           Bob
RollbackStart    4           Linda
RollbackStart    5           Isabelle
RollbackStart    6           Lukas
RollbackStart    7           Mary
RollbackStart    8           Ed
The eight rows in the query with status StartTran show the condition of the table after the INSERT operation and reflect the state of the data at the StartTran savepoint. Next, the seven rows in the query with status Delete 1 include one less row due to the DELETE operation. The DeleteTran savepoint includes this version of the table. After another DELETE operation executes, the query with status Delete 2 returns six rows. The first ROLLBACK TRANSACTION statement restores the version of data for the DeleteTran savepoint, and the query with status RollbackDelete2 correctly shows the seven rows prior to the second DELETE operation. Next, we can see that the @@TRANCOUNT variable at this point is still 1 because the ROLLBACK TRANSACTION statement did not reset it to 0. Last, another ROLLBACK TRANSACTION returns the table to its earlier state, which is committed at the end of the transaction.
Determine the role of transactions in high-concurrency databases
A high concurrency database should support a high number of simultaneous processes that do not interfere with one another while preserving the consistency of the data affected by those processes. Processes modifying data can potentially adversely affect processes trying to read or change the same data at the same time. To prevent simultaneous attempts to change the same data, SQL Server acquires locks for the current transaction, thereby blocking all other transactions.
Potential problems with concurrent processes
A failure to control concurrency in a database can result in a variety of side effects. Typically, you want to design applications that avoid these problems, although in some cases your business requirements might allow one of these behaviors. For now, let's focus on the potential problems that might arise. In Skill 3.2, we explain how to use isolation levels to manage the behavior of concurrent transactions.
Dirty reads
A dirty read, also known as an uncommitted dependency, can occur when an uncommitted transaction updates a row at the same time that another transaction reads that row with its new value. Because the writing transaction is not committed, the row could revert to its original state and consequently the reading transaction has data that is not valid.
SQL Server does not allow dirty reads by default. However, by controlling the isolation level of the reading transaction, you can specify whether it reads both uncommitted and committed data or committed data only.
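To see a dirty read for yourself, you could run a sketch like the following in two sessions; the NOLOCK table hint shown here makes the read behave like the READ UNCOMMITTED isolation level described in Skill 3.2:

-- Session 1: change a row, but do not commit yet.
BEGIN TRANSACTION;
    UPDATE Examples.TestParent
    SET ParentName = 'Robert'
    WHERE ParentName = 'Bob';

-- Session 2: reads the uncommitted value 'Robert'.
SELECT ParentId, ParentName
FROM Examples.TestParent WITH (NOLOCK);

-- Session 1: revert the change; session 2 read data that never committed.
ROLLBACK TRANSACTION;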
Non-repeatable reads
A non-repeatable read can occur when data is read more than once within the same transaction while another transaction updates the same data between read operations. Let’s say that a transaction reads the current in-stock quantity of a widget from an inventory table as 5 and continues to perform other operations, which leaves the transaction in an uncommitted state. During this time, another transaction changes the in-stock quantity of the widget to 3. Then the first transaction reads the in-stock quantity of the widget again, which is now inconsistent with the initial value read.
Phantom reads
Closely related to a non-repeatable read is a phantom read. This potential problem can occur when one transaction reads the same data multiple times while another transaction inserts or updates a row between read operations. As an example, consider a transaction in which a SELECT statement reads rows having in-stock quantities less than 5 from the inventory table and remains uncommitted while a second transaction inserts a row with an in-stock quantity of 1. When the first transaction reads the inventory table again, the number of rows increases by one. In this case, the additional row is considered to be a phantom row. This situation occurs only when the query uses a predicate.
Lost updates
Another potential problem can occur when two processes read the same row and then update that data with different values. This might happen if a transaction first reads a value into a variable and then uses the variable in an update statement in a later step. When this update executes, another transaction updates the same data. Whichever of these transactions is committed first becomes a lost update because it was replaced by the update in the other transaction. You cannot use isolation levels to change this behavior, but you can write an application that specifically allows lost updates.
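The following sketch illustrates the read-then-update pattern that makes a lost update possible; the Examples.Inventory table and its columns are hypothetical and not part of this chapter's test environment:

-- Two sessions each run this pattern concurrently.
DECLARE @InStock int;
SELECT @InStock = InStockQty
FROM Examples.Inventory
WHERE ProductId = 1;
-- If the other session updates the row at this point, its change
-- is overwritten by the UPDATE below and is lost.
UPDATE Examples.Inventory
SET InStockQty = @InStock - 1
WHERE ProductId = 1;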
Resource locks
SQL Server locks the minimum number of resources required to complete a transaction. It uses different types of locks to support as much concurrency as possible while maintaining data consistency and transaction isolation. The SQL Server Lock Manager chooses the lock mode and resources to lock based on the operation to be performed, the amount of data to be affected by the operation, and the isolation level type (described in Skill 3.2). It also manages the compatibility of locks on the same resources, resolves deadlocks when possible, and escalates locks when necessary (as described in Skill 3.3).
SQL Server takes locks on resources at several levels to provide the necessary protection for a transaction. This group of locks at varying levels of granularity is known as a lock hierarchy and consists of one or more of the following lock modes:
Shared (S) This lock mode, also known as a read lock, is used for SELECT, INSERT, UPDATE, and DELETE operations and is released as soon as data has been read from the locked resource. While the resource is locked, other transactions cannot change its data. However, in theory, an unlimited number of shared (S) locks can exist on a resource simultaneously. You can force SQL Server to hold the lock for the duration of the transaction by adding the HOLDLOCK table hint like this:
BEGIN TRANSACTION;
    SELECT ParentId, ParentName
    FROM Examples.TestParent
    WITH (HOLDLOCK);
    WAITFOR DELAY '00:00:15';
ROLLBACK TRANSACTION;
Another way to change the lock's duration is to set the REPEATABLE READ or SERIALIZABLE transaction isolation level, which we explain in more detail in Skill 3.2.
Update (U) SQL Server takes this lock on a resource that might be updated in order to prevent a common type of deadlocking, which we describe further in Skill 3.3. Only one update (U) lock can exist on a resource at a time. When a transaction modifies the resource, SQL Server converts the update (U) lock to an exclusive (X) lock.
Exclusive (X) This lock mode protects a resource during INSERT, UPDATE, or DELETE operations to prevent multiple concurrent changes to that resource. While the lock is held, no other transaction can read or modify the data, unless a statement uses the NOLOCK hint or a transaction runs under the read uncommitted isolation level as we describe in Skill 3.2.
Intent An intent lock establishes a lock hierarchy to protect a resource at a lower level from getting a shared (S) lock or exclusive (X) lock. Technically speaking, intent locks are not true locks, but rather serve as an indicator that actual locks exist at a lower level. That way, another transaction cannot try to acquire a lock at the higher level that is incompatible with the existing lock at the lower level. There are six types of intent locks:
Intent shared (IS) With this lock mode, SQL Server protects requested or acquired shared (S) locks on some resources lower in the lock hierarchy.
Intent exclusive (IX) This lock mode is a superset of intent shared (IS) locks that not only protects locks on resources lower in the hierarchy, but also protects requested or acquired exclusive (X) locks on some resources lower in the hierarchy.
Shared with intent exclusive (SIX) This lock mode protects requested or acquired shared (S) locks on all resources lower in the hierarchy and intent exclusive (IX) locks on some resources lower in the hierarchy. Only one shared with intent exclusive (SIX) lock can exist at a time for a resource to prevent other transactions from modifying it. However, lower level resources can have intent shared (IS) locks and can be read by other transactions.
Intent update (IU) SQL Server uses this lock mode on page resources only to protect requested or acquired update (U) locks on all lower-level resources and converts it to an intent exclusive (IX) lock if a transaction performs an update operation.
Shared intent update (SIU) This lock mode is a combination of shared (S) and intent update (IU) locks and occurs when a transaction acquires each lock separately but holds them at the same time.
Update intent exclusive (UIX) This lock mode results from a combination of update (U) and intent exclusive (IX) locks that a transaction acquires separately but holds at the same time.
Schema SQL Server acquires this lock when an operation depends on the table's schema. There are two types of schema locks:
Schema modification (Sch-M) This lock mode prevents other transactions from reading from or writing to a table during a Data Definition Language (DDL) operation, such as removing a column. Some Data Manipulation Language (DML) operations, such as truncating a table, also require a schema modification (Sch-M) lock.
Schema stability (Sch-S) SQL Server uses this lock mode during query compilation and execution to block concurrent DDL operations and concurrent DML operations requiring a schema modification (Sch-M) lock from accessing a table.
Bulk Update (BU) This lock mode is used for bulk copy operations to allow multiple threads to bulk load data into the same table at the same time and to prevent other transactions that are not bulk loading data from accessing the table. SQL Server acquires it when the table lock on bulk load table option is set by using sp_tableoption or when you use a TABLOCK hint like this:
INSERT INTO Examples.TestParent WITH (TABLOCK)
SELECT <columns>
FROM <table>;
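Setting the table option might look like this sketch, which uses the chapter's test table purely for illustration; in practice you apply it to a table that is the target of bulk load operations:

EXEC sp_tableoption
    @TableNamePattern = N'Examples.TestParent',
    @OptionName = 'table lock on bulk load',
    @OptionValue = 'ON';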
Key-range A key-range lock is applied to a range of rows that is read by a query with the SERIALIZABLE isolation level to prevent other transactions from inserting rows that would be returned in the serializable transaction if the same query executes again. In other words, this lock mode prevents phantom reads within the set of rows that the transaction reads.
RangeS-S This lock mode is a shared range, shared resource lock used for a serializable range scan.
RangeS-U This lock mode is a shared range, update resource lock used for a serializable update scan.
RangeI-N This lock mode is an insert range, null resource lock that SQL Server acquires to test a range before inserting a new key into an index.
RangeX-X This lock mode is an exclusive range, exclusive resource lock used when updating a key in a range.
While many locks are compatible with each other, some locks prevent other transactions from acquiring locks on the same resource, as shown in Table 3-1. Let’s consider a situation in which one transaction has a shared (S) lock on a row and another transaction is requesting an exclusive (X) lock. In this case, the request is blocked until the first transaction releases its lock.
TABLE 3-1 Lock compatibility for commonly encountered lock modes
                  Existing granted mode
Requested mode    S     U     X     IS    IX    SIX
--------------    ---   ---   ---   ---   ---   ---
S                 Yes   Yes   No    Yes   No    No
U                 Yes   No    No    Yes   No    No
X                 No    No    No    No    No    No
IS                Yes   Yes   No    Yes   Yes   Yes
IX                No    No    No    Yes   Yes   No
SIX               No    No    No    Yes   No    No
SQL Server can acquire a lock on any of the following resources to ensure that the user of that resource has a consistent view of the data throughout a transaction:
RID A row identifier that locks a single row within a heap; SQL Server acquires it when possible to provide the highest possible concurrency.
KEY A key or range of keys in an index can be locked in one of two ways, depending on the isolation level. If a transaction runs in the READ COMMITTED or REPEATABLE READ isolation level, the index keys of the accessed rows are locked. If the table has a clustered index, SQL Server acquires key locks instead of row locks because the data rows are the leaf level of the index. If a transaction runs in the SERIALIZABLE isolation level, SQL Server acquires key-range locks to prevent phantom reads.
PAGE An 8-kilobyte (KB) data or index page gets locked when a transaction reads all rows on a page or when page-level maintenance, such as updating page pointers after a page-split, is performed.
EXTENT A contiguous block of eight data or index pages gets a shared (S) or exclusive (X) lock, typically during space allocation and de-allocation.
HoBT A heap or B-tree lock covers either an entire index or all data pages of a heap.
Table An entire table, including both data and indexes, can be locked for SELECT, UPDATE, or DELETE operations.
File A database file can be locked individually.
Application A resource defined by your application can be locked by using sp_getapplock so that you can lock any resource you want with a specified lock mode (see the sketch following this list).
Metadata Any system metadata can be locked to protect system catalog information.
Allocation unit A database allocation unit used for storage of data can be locked.
Database An entire database gets a shared (S) lock to indicate it is currently in use so that another process cannot drop it, take it offline, or restore it.
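As a brief sketch of an application lock (the resource name AppResource1 is arbitrary and illustrative), the following acquires an exclusive lock owned by the transaction; transaction-owned application locks are released automatically at commit or rollback:

BEGIN TRANSACTION;
    EXEC sp_getapplock
        @Resource = 'AppResource1',
        @LockMode = 'Exclusive',
        @LockOwner = 'Transaction';
    -- ...perform work protected by the application lock...
COMMIT TRANSACTION;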
To increase concurrency, SQL Server uses dynamic lock management. That is, in a large table for which many row locks are required (as determined by the query optimizer), SQL Server might instead take a page or table lock at the beginning of a transaction. SQL Server can also escalate lock modes dynamically during a transaction. For example, if the transaction initially has a set of row locks, and later requests more row locks, SQL Server releases the row locks and takes a table lock. This behavior simplifies lock management, but reduces concurrency.
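Although we cover escalation in more detail in Skill 3.3, note that you can influence this behavior per table. As a hedged sketch, the LOCK_ESCALATION table option accepts TABLE (the default), AUTO, or DISABLE, and the current setting is visible in sys.tables:

ALTER TABLE Examples.TestParent SET (LOCK_ESCALATION = AUTO);

-- Confirm the current setting.
SELECT name, lock_escalation_desc
FROM sys.tables
WHERE name = 'TestParent';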